DevOps success factors: Culture, APIs, and security

In a swiftly changing technology landscape, organisations must maintain their competitive advantage through relentless experimentation and continuous delivery of improvements.
Written by Tas Bindi, Contributor

As little as a decade ago, software was shipped on a CD-ROM to a storefront, purchased, and likely abandoned after the user's initial installation. Today, code is shipped via the internet, meaning that continuous software updates are not only achievable but expected, whether for desktop, mobile, or browser-based applications.

In an age where competitive advantage requires fast time to market, high service levels, and relentless experimentation, enterprises that cannot continuously deliver improvements risk losing in the marketplace.

The problem has been that development and IT operations have competing goals.

Development is concerned with responding rapidly, while operations is perceived to prefer stability, reliability, and security -- goals seen as best served by not introducing new features or changes.

Enter DevOps, a portmanteau of 'development' and 'operations': a production philosophy that tries to bring together the best of both worlds -- experimentation and iteration with a measure of control.

It's only in the last five years that DevOps has picked up momentum, alongside software-defined everything.

DevOps as culture

Franco Ucci, senior director, Fusion Middleware at Oracle, told ZDNet that it was around five years ago the company began to use DevOps principles and practices to reshape itself. Reflecting on his experience, Ucci noted that DevOps is not just a technology undertaking, it's also a cultural shift.

As such, DevOps needs to be treated as a positive change management exercise, Ucci said.

"The first thing I'd suggest doing is just socialising within your organisation, showing everyone that there's an interest in self-improvement within the organisation. Let everyone know that there is an opportunity to participate in this and encourage people to talk about what it means for them. Try and treat it as a positive change management exercise, not as a negative one," he said.

Ucci added that it's important that everyone is on the same side because in the past, development and IT operations teams often played a game of 'blame ping-pong' when things went wrong, rather than working on the shared goal of fixing the product.

Franco Ucci, senior director, Fusion Middleware at Oracle

(Image: Oracle)

The key, Ucci said, is fostering a culture where product development and process improvement are treated as experiments, after which learnings are shared.

"One of the capabilities that you want to build into a DevOps culture and framework is the idea of sprints. Those sprints could be about cutting a bit of code. It could be a deployment of a capability from one type of technology to another type of technology," Ucci said.

"The idea is if you're doing regular sprints as a team, you're having learnings that are shared across the team. That's actually part of a DevOps implementation."

Ryan Eames, chief architect and principal consultant for digital transformation at Verizon Enterprise Solutions, also noted the importance of establishing the right culture before implementing a collaborative approach to product development.

"I cannot emphasise enough the importance of instilling the right culture and the need for management teams to be unfaltering in their commitment to the process. Understanding that it's all about the team -- not egos, conducting blameless post-mortems, acknowledging that things break even if it's nobody's fault, and understanding that it's okay to ask for help," Eames told ZDNet.

He added that understanding the differences between agile and waterfall approaches, and the fact that a minimum viable product is not a finished product, will go a long way to reducing friction and mismanaged expectations.

But once a mature organisation has cultivated support from the affected business unit, there are still significant technical challenges ahead that require the right skills behind them, Eames said.

"These include integrating disparate legacy systems, onerous regulatory compliance requirements, fire drills on existing applications, and just keeping business-as-usual alive. It's much harder for established enterprise than it is for cloud natives to attract the right skills that can translate DevOps successes into legacy management results," he said.

Michael O'Dea, director of development operations at AI security company Cylance, said one of the mistakes organisations make when trying to establish a DevOps team is they tend to force DevOps onto existing departments.

Michael O'Dea, director of development operations at Cylance

(Image: Supplied)

"In my experience, DevOps is something that needs to be a bottom-up type of endeavour. You're best off placing DevOps engineers into the development teams themselves and then working with a centralised DevOps team to help develop tooling and establish monitoring," O'Dea said.

"That's one of the overriding philosophies we've been trying to push in Cylance -- especially as we grow and look into other product space areas. We're trying to establish DevOps liaisons. The teams that have one or two engineers who have been familiar with either a DevOps type of role or full stack development, they are more amenable and they're better at working with the DevOps team and the tooling that's available to make that project a success and avoid the siloing and the in-fighting that can happen when try to force [DevOps] down onto the engineers."

By having DevOps liaisons, developers can rapidly build and enhance their products throughout the development and release cycle because most of it is in their hands, O'Dea said.

"It only becomes a matter of needing the DevOps team to step in by the time they get to a production release. From the moment that they begin iterations to the time that they pass a software release, the entire development process is in the Cylance development team's hands so that they are not held up by external forces making it more difficult for them to do their jobs," he added.

O'Dea noted that as Cylance reached a certain level of maturity, checks and balances needed to be put into place.

"You've got QA, you've got security compliance, and that's when we basically began to introduce the Cylance DevOps team that became responsible for final push to release," O'Dea said.

"At that point, we can make sure that all the Is are dotted and all the Ts are crossed, that everybody has signed off on this release, and that we've chosen a proper time for the release to have minimal impact on our customers."

O'Dea admitted that it's not an easy endeavour to find talent that can wear multiple hats. In fact, combining the operations skillset with the development skillset is a skill in itself, he said.

"You do have to look for the diamond in the rough," he said. "One thing that I have discovered in my position in leadership in the Cylance DevOps team is that one of the best, most valuable traits of [DevOps] engineers -- the ones that I've hired -- has been a fairly eclectic background. If you look at [me], I've served as a software engineer, I've served as tech support, I've served as IT administrator."

"For some other managers, some of these resumés may seem too eclectic -- these people have been all over the place. But DevOps is a role that is really all over the place and that experience really does add up to being able to be multifunctional, which every DevOps engineer has to be."

Eames and Brian Smith, VP of technical operations at Tableau, both said that a good starting exercise for companies who have no experience with DevOps is selecting one particular service that's well-contained and doesn't have too many integration points, and piloting a DevOps structure.

"Have the development and operations teams work together to deliver the next version. Take time up-front to agree on the process, tools and desired outcome. When you decide on the methodology, provide joint training for the team," Smith told ZDNet.

"While agile development has been a common practice for quite a few years, it may be new to the operations team. At the same time, production operations may be a new concept for some development organisations. Working closely together the two teams can learn from each other and create a feedback loop that can improve the overall service delivery process," he added.

Optimising DevOps through APIs

Brad Drysdale, from Mulesoft's APAC office of the CTO, cautioned that DevOps is not a silver bullet.

"[DevOps is] not the panacea to solving all of your problems," he told ZDNet. "What we're seeing is, because projects can be delivered easier and faster, organisations are repeating a lot of the same effort in every project."

In a DevOps world, the period of time between when a company has an idea for a feature and when customers are using it and providing feedback must be as short as possible, Drysdale noted. However, when projects are built in isolation -- even when DevOps principles are being used -- the path to production isn't fast enough, he added.

Brad Drysdale, CTO, Mulesoft Asia Pacific

(Image: Supplied)

According to Drysdale, combining DevOps with an API-led connectivity approach allows companies to deliver new capabilities, launch new products, and pivot rapidly.

"Let's say Project One had a requirement to get some customer data out of an old legacy system, then through the development work and the DevOps initiative, [the company is] able to take that customer information from a mainframe out to a mobile device. Project Two might have a similar requirement where that customer data is taken from the mainframe so that it can be displayed on a website or an Apple Watch. But if they build that project in isolation to the first one, then a problem starts to emerge," he explained.

"There is a lot of repetitive work that's done across projects and we think that's what's slowing the IT delivery component down."

Now that continuous deployment is becoming normal, the tooling space has exploded to help developers embrace more automation and more efficient product cycles, Drysdale said.

"Every time you build something in a project -- for example, getting that data from the mainframe to another project or to a device -- you do that using modern APIs. You do it in such a way that once you produce that data from the mainframe through a modern API, you allow it to be discovered and consumed inside of the organisation, so when Project Two and Project Three come along, [developers] realise this asset already exists and can reuse it and deliver [product updates] far more quickly," Drysdale said.

"This way, central IT is not on the critical path to delivering 100 percent of the capabilities -- and that's what speeds things up."

Eames communicated a similar sentiment, saying APIs allow companies to "leverage automation and orchestration, get to market quicker, avoid building things that have been built better elsewhere, and fundamentally allow your software to scale better."

When applying this combined DevOps and API-led connectivity approach, Drysdale said it's best to start from the top down. For example, if a mature insurance company undergoing digital transformation decides to create a mobile application for its customers, it would need to first establish what that app would require to function -- such as access to customer data.

"It needs to provide a single view of the customer -- their personal information, the current status of their policy, the status of outstanding claims. That information comes from a whole bunch of systems from across the organisation," Drysdale said.

"Instead of going straight to central IT and asking for access to the data across three systems, the business asks 'Does access to this data already exist in a well-defined, well-governed, well-secured state through modern APIs that we can discover in an internal app store?' If they exist, the [developers] can very rapidly assemble those lego blocks and put them together to deliver that single view of the customer.

"If it's a new organisation, the APIs probably don't exist, so they have to build them with that digital audience in mind and design it with governance and security built in."

Drysdale added that the advantage of building top-down is that all the APIs built as part of the initial project have a business justification to exist.

"The first time you're building the API, you're building it as part of the application that you're building to service your digital audience. The application is probably how you retain customers, service new customers, let customers self-service, which drives costs down in the business," Drysdale said.

"Then when Project Two comes around, and you need access to the policy data and the customer data, and you want to augment it with some risk, fire, and weather information, two of those already exist.

"Central IT is happy because it's through the same front door, it's same entry point for that data, and they are comfortable with the security and governance aspects of it because they baked them in the first time around."

O'Dea said Cylance has been using an API-led DevOps approach from its inception four-and-a-half years ago, helped by the fact that many tools were already available when it started.

Today, all of Cylance's software releases go through an automated process from the point when new code is checked in to the point it's released, O'Dea said.

"From the very first locations that we launched, we were determined to make sure [everything we did was] reproducible and that we could always go back and create another web server or another application server or another build server," O'Dea said.

"[It was] very important that we had the flexibility to provision large numbers of machines to be able to do the massive amount of data crunching. Fifteen years ago, ten years ago, even seven years ago, that would have been very difficult to do because we would have needed to provision all of that iron hardware."

Eames said that it's "almost impossible" for Verizon to keep up with customer demand for new features and services using traditional methods of software development, which is why it uses different applications and tools to boost the volume of release cycles and bring additional capabilities to the market faster.

In doing so, Verizon has been able to reduce the time taken to provision managed services from six weeks to just a couple of hours, Eames said. In addition, DevOps has reduced Verizon's operational costs by more than 35 percent and increased Tier 1 problem resolution by 83 percent.

Smith also said that applying DevOps has allowed the company to automate a significant amount of workload, "reducing risk, increasing velocity, and improving reliability."

"One of the major tenets of DevOps is automation. The more manual a process, the longer it takes and you increase the likelihood of making a mistake. Automation is critical to the success of a DevOps team. In fact, your goal should be to automate everything in the delivery pipeline from code check-in to deployment," Smith said.

"To accomplish this, DevOps teams implement configuration and automation tools that leverage APIs to streamline their processes, reduce errors, and ultimately create a more consistent and reliable customer experience."

DevOps and security are not at odds with each other

Drysdale said security is one of the key reasons companies do not want to do things differently.

"If you're taking customer data out of the mainframe and the mainframe has traditionally been the domain of central IT -- a small group of people who can be very untrusting -- then it's very hard to ask them to liberate that data and allow lots of people and lots of projects in the business to attach to that data and use it for business purposes, because they fear that they lose control," Drysdale said.

"Whereas by leveraging modern APIs, central IT is able to liberate access to the data that's contained in something like a mainframe ... When done properly, with security baked into the design, [developers] are able to reuse that data and build new projects on top of that.

"It's actually inherently more secure if you provide one governed entry point to that data, rather than six projects coming to the central IT team and IT having to provide six different mechanisms by which that data is accessed from the mainframe. Doing it once is great, doing it six times is probably manageable, doing it 100 times becomes untenable."

Ucci said DevOps is about tying in iterative development with stability and security testing.

"What a DevOps framework can provide is that, right from the word 'go' when you're cutting your bit of code to participate in an overall application, you could also set up the security capabilities that you want to make sure are preserved and adhered to along the way," he said.

"It could be something like segregation of duties. It could also be making sure particular personas can only do particular things, so you've actually got those capabilities automatically built in before the applications are actually deployed. So you've got those tests being automatically set up and applied."

Continuous integration -- a core aspect of the DevOps process -- involves building the code, testing it against a prepared set of unit and integration test suites, and creating reports that include the resulting artifacts, O'Dea said.

"When you're working with cloud providers or you're working with API-driven solutions, you're actually able to create source-code-controllable or source-code-capable expressions of what your infrastructure should look like," O'Dea said. "What we discovered at Cylance is that by doing that, you also then have a baseline for what things should look like by including things like firewall rules and routing operations and security groups and the like in our infrastructure as code."

"Then when we actually want to find out if everything is operating correctly, we have a baseline. For the purposes of doing an audit, you know from the infrastructure as code exactly what your environment should look like, so you can simply compare what your environment looks like now to the infrastructure code that established it and you can identify anomalies very, very quickly."

O'Dea said Cylance uses base operating system images -- created by its DevOps team -- for all of its applications.

"What Cylance has benefited from that is we know the patch circumstances of that particular host because it's based on an image that we have provisioned. We know the firewall configurations, the firewall systems that are available on it, we know the other security solutions that are available on it," O'Dea said.

"We use configuration management software to ensure that every 15 minutes, these hosts dial in, and if anybody has made a change to the firewall or to the local system security configuration, we're able to reset it and we're also able to get an alert that it had to be reset so we know to take a look at that host and [understand] why did that happen, what user made that change.

"Infrastructure as a code at Cylance has been a huge boon for being able to verify our operational security on a day-to-day basis."

O'Dea also pointed out the importance of documentation from the moment DevOps becomes a part of a company's development efforts.

"In the earliest days, you may only have three or four engineers who really do know every last aspect of your solution ... If for any reason you were to lose an engineer, that can cause a major dearth in your organisation," O'Dea said.

"It's important to have documentation, despite the fact that you've got large quantities of automation. I've heard a lot of DevOps engineers outside of the Cylance organisation say that code is self-documenting and that's not necessarily true. As you reach a large point where you have 40 or 50 subsystems that you're managing and 10 or 12 people doing it, people do not have the time to go read the code. You really do need to have a strong effort of documentation."
