Cloud-Native Glossary

This is a community-contributed living glossary of terms relevant to the Cloud Foundry ecosystem and cloud-native community at large. For key terms related to specific Cloud Foundry projects, including BOSH, CFAR and CFCR, please visit Why Cloud Foundry.

If you would like to add to or amend these definitions, please email

12-factor apps

Defines a methodology to create applications that can be delivered on top of cloud infrastructures. There is a website (12factor.net) that explicitly defines the 12 factors and why they matter. The overall goal is to write applications that run on all clouds, can support continuous integration and deployment, and can scale without significant changes.
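
As one concrete illustration, factor III ("Config") says configuration should live in the environment rather than in code. A minimal sketch in Python, using illustrative variable names such as DATABASE_URL and PORT (these names are examples, not mandated by any platform):

```python
import os

# Factor III: store config in the environment, not in code.
# DATABASE_URL and PORT are illustrative names chosen for this sketch.
def load_config():
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(os.environ.get("PORT", "8080")),
    }

config = load_config()
```

Because nothing is hard-coded, the same build artifact can be promoted from staging to production simply by changing its environment.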

Blue/Green Deployment

The practice of running two identical production environments with the goal of minimizing downtime and risk. Only one environment (say, the “Blue” environment) serves production traffic at any one time, while the other (say, the “Green” environment) sits idle. Changes can be made to the idle (Green) environment, then production loads switched over to it. This minimizes downtime. Should problems occur in the new (Green) environment, production loads can be immediately switched back to the Blue environment, thus minimizing risk.
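
The cutover itself amounts to repointing a router from one environment to the other. A toy Python simulation, with the Router class and environment names invented for illustration (a real platform would remap routes or load-balancer targets instead):

```python
# Illustrative blue/green cutover: a router holds a pointer to the live
# environment and can switch or roll back in one atomic step.
class Router:
    def __init__(self, live, idle):
        self.live, self.idle = live, idle

    def switch(self):
        # Promote the idle environment; the old live one becomes
        # the rollback target.
        self.live, self.idle = self.idle, self.live

    def route(self, request):
        return f"{self.live} handled {request}"

router = Router(live="blue", idle="green")
router.switch()   # deploy verified on green, then cut over
router.switch()   # problem found: instant rollback to blue
```

The switch is cheap precisely because both environments are fully deployed at all times; only the pointer moves.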

Buildpacks

This term is used within the Cloud Foundry community to describe pieces of software that provide framework and runtime support for applications written in specific languages. There are binary and staticfile (HTML, CSS, JavaScript, and Nginx) buildpacks, as well as buildpacks for Java, .NET, Go, Node.js, PHP, Python, and Ruby.

Containers-as-a-Service (CaaS)

Containers-as-a-Service emerged with the rise of Docker, and is now used to describe the delivery of web apps written to any container technology. It is often compared to Platform-as-a-Service (PaaS), although the two terms are not necessarily discrete. (For example, Docker is often considered to be the platform for apps written to it, and the Cloud Foundry platform has its own containers as part of the platform.)

Cloud Application Platform

This term refers to software that provides a Platform-as-a-Service (PaaS) for cloud computing. A PaaS is often thought of as something that can be used to deploy software to cloud computing infrastructure. It can also be used for continuous delivery of software throughout the entire development, testing, and deployment cycle. Cloud Foundry Application Runtime is a PaaS. It is available in an open source software version (known as OSS CF). It is also available in customized distributions (or “Certified Platforms”) that have been certified by the Cloud Foundry Foundation.

Cloud-Native

Cloud-native development is concerned not with where a new app or service is hosted, but with how it is created. Cloud-native principles state that apps and services should be hosted in distributed system environments capable of scaling to tens of thousands of self-healing, multi-tenant nodes. They must be packaged in containers, managed dynamically, and oriented around microservices.

Cloud Provider Interface

The API through which BOSH interacts with and configures individual cloud providers.

Containers

This term refers to discrete pieces of an operating system that can be used to deliver software services. Containers can be thought of as the virtualization of an operating system, and are popular with the Linux operating system. Docker containers have become popular recently, although containers are also available within Cloud Foundry technologies in the form of Cloud Foundry Container Runtime.

Continuous Integration/Continuous Delivery (CI/CD)

These terms are used in a roughly equivalent manner. They can be contrasted with traditional “waterfall” methods of software development, in which projects move from step to step over a defined period of time. With the CI/CD approach, releases are not necessarily scheduled every six or 12 months, for example, but the software continuously evolves. Version numbers and major upgrades are often still announced, but the overall goal is to refine the software on a continuous basis.

DevOps

This is an approach rather than a product, and is essential to the successful development and deployment of applications to the cloud. The term comes from combining “Development” and “Operations.” It brings together two groups within enterprise IT that often consider themselves opposed to one another. Boiled down to its essence: developers focus on software and value features and benefits, while operators focus on how software runs on hardware and value deadlines and performance. Developers live in a world of unlimited possibilities, while operators will say they live in the “real world” of hard truths. To date, the largest group of Cloud Foundry Runtime users still comes from the dev side, according to recent research by the Cloud Foundry Foundation.

Digital Transformation

Companies that wish to keep up with the times, avoid being disrupted by known and unknown competitors, and continue to gain efficiencies in their development and operations are turning to this idea. It is a general-use term, often cited as being prone to hype by technology vendors. Even though digital transformation is not a product one can buy, it represents a vision and long-term thinking that can allow companies to achieve the most they can imagine from their enterprise IT. Companies such as Uber and Airbnb are often cited as examples of companies born digitally transformed, with an emphasis on software, data flows, and user experiences rather than tangible IT assets. Companies with a longer history are also pursuing their own initiatives, often using technologies and approaches such as cloud computing, the Internet of Things, blockchain, and modern AI techniques to achieve their goals.

Distributed System

This term has been in use for decades, and is a fair way to describe the mission of the Internet from its beginnings. Today it is used as a general way to describe cloud computing, in which computing resources (processing, memory, and storage) are networked across distances vast and small to accommodate a large, diverse group of users with differing needs. The term can be contrasted with centralized, mainframe computing and even the use of personal computers when work is being done offline.

Functions-as-a-Service (FaaS)

Abbreviated as FaaS, this is a relatively new term associated with emerging serverless infrastructure. Just as serverless infrastructure is typically deployed to handle small, fast, event-driven demands, such as those found in IoT (Internet of Things) deployments, FaaS runs individual functions on demand in response to events rather than routing requests through a continuously running application process. Response times are expected to be very fast (in the milliseconds).
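
A FaaS workload is simply a function the platform invokes once per event. A hedged Python sketch, with the event shape and the handle() signature invented for illustration (each real provider defines its own conventions):

```python
# Hypothetical function-as-a-service handler: the platform calls it once
# per incoming event and may tear it down afterwards; there is no
# long-running application process. Field names are invented.
def handle(event: dict) -> dict:
    reading = event.get("temperature_c")
    return {
        "device": event.get("device_id"),
        "alert": reading is not None and reading > 30,
    }

# The platform would invoke handle() per event, e.g. an IoT sensor reading:
result = handle({"device_id": "sensor-7", "temperature_c": 34.5})
```

The function holds no state between invocations, which is what lets the platform scale it from zero to thousands of concurrent executions.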

Hybrid Cloud

This term refers to a combination of private and public cloud resources. The hybrid infrastructure is determined by the enterprise using these respective clouds.

Infrastructure-as-a-Service (IaaS)

This term refers to delivery of the actual computing resources in a cloud infrastructure. It can be within a private cloud, with virtualized or containerized resources, or from a third-party public cloud provider. IaaS is typically delivered as an “instance” that describes the amount of CPU power and memory provided. Separate storage instances are also available. Users are responsible for managing these resources, either directly or through a platform service.

Microservices

This term refers to the services that are often delivered within containers. They should be independently deployable, and are often loosely coupled with one another within an application or architecture. The term “micro” can be misleading, as the services delivered within containers can be quite complex and sophisticated.
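
Loose coupling is the defining property: each service owns its own data and exposes only a narrow contract to others. An illustrative Python sketch (service names and methods are invented, and a real deployment would communicate over the network rather than in-process):

```python
# Each service owns its data and exposes a narrow interface; neither
# reaches into the other's internals, so each can be deployed
# independently. All names here are invented for this sketch.
class InventoryService:
    def __init__(self):
        self._stock = {"widget": 3}   # data owned by this service only

    def reserve(self, item: str) -> bool:
        if self._stock.get(item, 0) > 0:
            self._stock[item] -= 1
            return True
        return False

class OrderService:
    # Depends only on the reserve() contract, not on InventoryService's
    # internals or storage.
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, item: str) -> str:
        return "confirmed" if self._inventory.reserve(item) else "rejected"

orders = OrderService(InventoryService())
```

Because OrderService sees only the contract, InventoryService can be rewritten, rescaled, or redeployed without touching the caller.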

Multi-Cloud

This term refers to an infrastructure that encompasses more than a single cloud, whether all private, all public or in a hybrid. Multi-cloud strategies might involve the use of an on-premises or colocated private cloud infrastructure along with Amazon Web Services (AWS), Google Compute Engine (GCE), Microsoft Azure, or other public clouds. There is no strict definition to be followed, as companies each determine their best mix of cloud technologies. A multi-cloud strategy can mitigate the potential of vendor lock-in, while also fine-tuning an enterprise’s need to balance budgeting and costs, performance metrics, the roles of compute and storage, and geographic requirements.

Open Source

This term refers to software that is available free of charge, supported by a community, and open to any changes users wish to make. The Linux operating system is the classic example, although open source software is also delivered as languages, frameworks, platforms, and applications. It is common for companies to offer paid support and modifications of open-source software; popular examples include Red Hat Linux and the commercial Cloud Foundry distributions. A key advantage of open source software, according to its proponents, is that development by a community rather than a single company facilitates more diverse innovation, avoids organizational groupthink, brings fresh approaches to problems, and keeps communications about particular features and issues transparent.

Platform-as-a-Service (PaaS)

This term refers to the software that sits between applications and infrastructure in the cloud. Platforms are designed to work with private and public clouds, as well as the various hybrid approaches that are becoming popular with enterprises. The platform is used to deploy and manage applications on whatever infrastructure has been chosen, removing much of the burden of managing the setup and operational details of the infrastructure.

Private Cloud

This term refers to computing resources and applications that are being virtualized and delivered as cloud services to a single enterprise. The computing resources are located either on-site (also known as on-premises or “on-prem”) or in a colocation facility managed by a third party.

Public Cloud

This term refers to computing resources provided and managed by a third party, such as Amazon Web Services.

Serverless

This latest term in the world of cloud computing refers to services from cloud service providers or software platforms that are measured in seconds rather than hours, and do not reveal specific server instances to the user. They appear “serverless” to users, even though there are still actual servers running somewhere in the background. Serverless infrastructure is thought to be a promising approach for IoT deployments, which are often event driven, characterized by large bursts of small data packets, and require significant flexibility to accommodate widely varying resource requirements over any given period.

Software-as-a-Service (SaaS)

This term refers to software that is delivered over an Internet connection, rather than through a CD, thumb drive, or other shrink-wrapped method. Users typically pay a recurring subscription fee rather than a one-time purchase price. The SaaS provider is responsible for the computing resources needed to support the applications. Salesforce is the classic SaaS for enterprise IT, although all the major providers now deliver some sort of SaaS. The numerous “apps” for mobile devices are also considered to be SaaS.

Staticfile App

An app or content that requires no backend code other than a web server. Examples of staticfile apps are front-end JavaScript apps, static HTML content, and HTML/JavaScript forms.

Stemcell

This term refers to a versioned operating system image within Cloud Foundry that is wrapped with packaging specific to the underlying infrastructure. It typically contains a bare minimum operating system “skeleton” with a few common pre-installed utilities.

Virtual Machine

The idea of virtualization within enterprise IT is an old one, often used to describe memory management in an earlier age of Unix-based systems. Today, a “virtual machine” is an instance within cloud computing infrastructure that appears to the user to be a real system with specific resources. However, this system is created by the provider out of resources from several systems, to maximize the use of resources within a datacenter facility while maintaining the features and performance the user expects.

Virtualization

This term has been in use within IT for several decades to describe any computing resource that can be separated, or “abstracted,” from its hardware. For cloud computing, it means that server resources can be separated from their original systems and pooled into virtual machines. This approach allows a higher percentage of each individual server to be put to use and provides a way for users to scale up and scale down access to server resources as needed.