
10 Key Attributes of Cloud Native Applications

What is cloud native computing? What are the tools and platforms needed for running workloads entirely in the cloud? Find the answers here.
Jul 19th, 2018 9:00am
Feature image via Pixabay.

Cloud native is a term used to describe container-based environments. Cloud native technologies are used to develop applications built with services packaged in containers, deployed as microservices and managed on elastic infrastructure through agile DevOps processes and continuous delivery workflows.

Where operations teams would manually manage infrastructure resource allocations for traditional applications, cloud native applications are deployed on infrastructure that abstracts the underlying compute, storage and networking primitives. Developers and operators dealing with this new breed of applications don’t directly interact with the application programming interfaces (APIs) exposed by infrastructure providers. Instead, the orchestrator handles resource allocation automatically, according to policies set by DevOps teams. The controller and scheduler, which are essential components of the orchestration engine, handle resource allocation and the life cycle of applications.
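To make the idea concrete, here is a toy illustration of policy-driven placement. This is not Kubernetes’ actual scheduling algorithm; the node names, pod names and millicore figures are invented. It only shows the principle: workloads declare resource requests, and the scheduler, not a human, decides where they run.

```python
# Toy scheduler: place each pod on the first node with enough free CPU.
# Requests and capacities are in CPU millicores (1000m = 1 core).
def schedule(pods, nodes):
    """Return a {pod: node} mapping; None means the pod is unschedulable."""
    placements = {}
    free = dict(nodes)  # copy, so node capacities are not mutated
    for pod, request in pods.items():
        for node, available in free.items():
            if available >= request:
                placements[pod] = node
                free[node] -= request
                break
        else:
            placements[pod] = None  # no node has enough spare capacity
    return placements

nodes = {"node-a": 1000, "node-b": 500}
pods = {"web": 600, "api": 300, "worker": 400}
print(schedule(pods, nodes))
# → {'web': 'node-a', 'api': 'node-a', 'worker': 'node-b'}
```

Real schedulers weigh many more policies (affinity, taints, quotas), but the division of labor is the same: declarative requests in, placement decisions out.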

Cloud native platforms, like Kubernetes, expose a flat network that is overlaid on existing networking topologies and primitives of cloud providers. Similarly, the native storage layer is often abstracted to expose logical volumes that are integrated with containers. Operators can allocate storage quotas and network policies that are accessed by developers and resource administrators. The infrastructure abstraction not only addresses the need for portability across cloud environments, but also lets developers take advantage of emerging patterns to build and deploy applications. Orchestration managers become the deployment target, irrespective of the underlying infrastructure that may be based on physical servers or virtual machines, private clouds or public clouds.

Kubernetes is an ideal platform for running contemporary workloads designed as cloud native applications. It’s become the de facto operating system for the cloud, in much the same way Linux is the operating system for the underlying machines. As long as developers follow best practices of designing and developing software as a set of microservices that comprise cloud native applications, DevOps teams will be able to package and deploy them in Kubernetes. Here are the 10 key attributes developers should keep in mind when designing cloud native applications.


10 Key Attributes of Cloud Native Applications

  1. Packaged as lightweight containers: Cloud native applications are a collection of independent and autonomous services that are packaged as lightweight containers. Unlike virtual machines, containers can scale out and scale in rapidly. Since the unit of scaling shifts to containers, infrastructure utilization is optimized.
  2. Developed with best-of-breed languages and frameworks: Each service of a cloud native application is developed using the language and framework best suited for the functionality. Cloud native applications are polyglot; services use a variety of languages, runtimes and frameworks. For example, developers may build a real-time streaming service based on WebSockets, developed in Node.js, while choosing Python and Flask for exposing the API. The fine-grained approach to developing microservices lets them choose the best language and framework for a specific job.
  3. Designed as loosely coupled microservices: Services that belong to the same application discover each other through the application runtime. They exist independent of other services. Elastic infrastructure and application architectures, when integrated correctly, can be scaled out with efficiency and high performance.

Loosely coupled services allow developers to treat each service independently of the others. With this decoupling, a developer can focus on the core functionality of each service to deliver fine-grained functionality. This approach leads to efficient lifecycle management of the overall application, because each service is maintained independently and with clear ownership.
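A loosely coupled, stateless service can be sketched with nothing but the standard library. The service name, endpoint path and port handling below are illustrative, not drawn from any real deployment; the point is that the handler computes every response purely from the request, so an orchestrator can kill, restart or clone replicas at will, and sibling services interact with it only through its API.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Response derived purely from the request: no session, no local
        # disk, so any replica of this service can answer any request.
        body = json.dumps({"service": "greeting", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port; in a real cluster the platform,
# not the application code, decides where the service listens.
server = ThreadingHTTPServer(("127.0.0.1", 0), GreetingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A sibling service discovers the endpoint via the runtime (here, the port)
# and talks to it only through its API, never through shared internals.
url = f"http://127.0.0.1:{server.server_address[1]}/hello"
reply = json.loads(urllib.request.urlopen(url).read())
server.shutdown()
print(reply)
# → {'service': 'greeting', 'path': '/hello'}
```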

  4. Centered around APIs for interaction and collaboration: Cloud native services use lightweight APIs that are based on protocols such as representational state transfer (REST), Google’s open source remote procedure call (gRPC) or NATS. REST is used as the lowest common denominator to expose APIs over hypertext transfer protocol (HTTP). For performance, gRPC is typically used for internal communication among services. NATS has publish-subscribe features which enable asynchronous communication within the application.
  5. Architected with a clean separation of stateless and stateful services: Services that are persistent and durable follow a different pattern that assures higher availability and resiliency. Stateless services exist independent of stateful services. This bears on how storage plays into container usage: persistence increasingly has to be viewed in the context of state, statelessness and, some would argue, micro-storage environments.
  6. Isolated from server and operating system dependencies: Cloud native applications don’t have an affinity for any particular operating system or individual machine. They operate at a higher abstraction level. The only exception is when a microservice needs certain capabilities, including solid-state drives (SSDs) and graphics processing units (GPUs), that may be exclusively offered by a subset of machines.
  7. Deployed on self-service, elastic, cloud infrastructure: Cloud native applications are deployed on virtual, shared and elastic infrastructure. They may align with the underlying infrastructure to dynamically grow and shrink, adjusting themselves to varying load.
  8. Managed through agile DevOps processes: Each service of a cloud native application goes through an independent life cycle, which is managed through an agile DevOps process. Multiple continuous integration/continuous delivery (CI/CD) pipelines may work in tandem to deploy and manage a cloud native application.
  9. Automated capabilities: Cloud native applications can be highly automated. They play well with the concept of infrastructure as code. Indeed, a certain level of automation is required simply to manage these large and complex applications.
  10. Defined, policy-driven resource allocation: Finally, cloud native applications align with the governance model defined through a set of policies. They adhere to policies such as central processing unit (CPU) and storage quotas, and network policies that allocate resources to services. For example, in an enterprise scenario, central IT can define policies to allocate resources for each department. Developers and DevOps teams in each department have complete access to, and ownership of, their share of resources.
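The last attribute, policy-driven resource allocation, can be sketched as a simple quota check. This is a hypothetical model, not how Kubernetes implements ResourceQuota; the team names and limits are invented. It shows the governance pattern: central IT declares per-team quotas, and every request is validated against the remaining allowance before anything is granted.

```python
# Quotas declared centrally (e.g., by central IT); usage tracked per team.
# CPU is in millicores, storage in gigabytes. All figures are illustrative.
QUOTAS = {"payments":  {"cpu": 4000, "storage_gb": 500},
          "analytics": {"cpu": 8000, "storage_gb": 2000}}

USAGE = {"payments":  {"cpu": 3500, "storage_gb": 100},
         "analytics": {"cpu": 1000, "storage_gb": 1900}}

def request_resources(team, cpu=0, storage_gb=0):
    """Grant the request only if it fits within the team's remaining quota."""
    quota, used = QUOTAS[team], USAGE[team]
    if used["cpu"] + cpu > quota["cpu"]:
        return False
    if used["storage_gb"] + storage_gb > quota["storage_gb"]:
        return False
    used["cpu"] += cpu          # commit the allocation
    used["storage_gb"] += storage_gb
    return True

print(request_resources("payments", cpu=400))        # 3500+400 fits in 4000
# → True
print(request_resources("analytics", storage_gb=200))  # 1900+200 exceeds 2000
# → False
```

Teams operate self-service within their share; the policy layer, not ad hoc negotiation, decides when a request is denied.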
TNS owner Insight Partners is an investor in: Kubernetes.