
Red Hat OpenShift — The Basics

1. What is OpenShift

OpenShift is a powerful, enterprise-ready containerization platform by Red Hat. It simplifies the deployment, management, and scaling of containerized applications, offering developers and operations teams a comprehensive set of tools and features built on top of Kubernetes. Think of it as Kubernetes made easier and more feature-rich for business needs.

Internal Development Platform

An Internal Development Platform (IDP) like OpenShift provides developers with a ready-to-use environment to build, deploy, and manage applications securely and efficiently inside a company, offering built-in tools for CI/CD, container orchestration, monitoring, security, and self-service — all based on Kubernetes but fully integrated.

CNCF Platforms White Paper

History

OpenShift originally came from Red Hat’s acquisition of Makara, a company marketing a platform as a service (PaaS) based on Linux containers, in November 2010. OpenShift was announced in May 2011 as proprietary technology and did not become open source until May 2012. Until v3, released in June 2015, OpenShift relied on custom-developed container and orchestration technologies; more on these in the “OpenShift Platform services” part.

2. OpenShift vs Kubernetes

While OpenShift is built on top of Kubernetes, it is an enterprise-ready product covering Day 0 through Day 2/3 operations, with security, elasticity, integration, and developer tooling out of the box. OpenShift is an entire “Internal Development Platform” built, tested, and supported by Red Hat.

Kubernetes is the engine; OpenShift gives you the car, complete with dashboard, security system, and service plan.

For all that Kubernetes can do to orchestrate containers, users still need to integrate other components like networking, ingress, load balancing, storage, monitoring, logging, multi-cluster management, continuous integration and continuous delivery (CI/CD), and more to accelerate the development and delivery of containerized applications at scale. Red Hat OpenShift offers these components with Kubernetes at its core because, by itself, Kubernetes is not enough.

Kubernetes and OpenShift Relationship

  • Kubernetes is the orchestration layer for managing containers, while OpenShift is an enterprise distribution of Kubernetes, integrating additional services and features.
  • OpenShift enhances Kubernetes by providing built-in services for monitoring, logging, security, and governance.
The difference between OpenShift and Kubernetes

Key Features of Kubernetes

  • Kubernetes facilitates service discovery, load balancing, and self-healing capabilities, ensuring high availability of applications.
  • It manages container storage and secrets, allowing for the secure handling of sensitive information.
  • Kubernetes does not handle CI/CD processes directly but provides APIs for integration with CI/CD tools.

OpenShift Enhancements

  • OpenShift integrates additional features such as a registry for container images, monitoring and logging, advanced routing capabilities, pipelines, CI/CD, and more.
  • It supports both Linux and Windows containers, allowing for diverse application workloads.

Kubernetes from Day 0 to Enterprise-ready

Install: Templating/Provisioning; Validation; OS setup.

Deploy: Identity and security; App monitoring and alerts; Storage and persistence; Egress, Ingress, and integration; Host container images; Build and Deploy; Scale and sizing.

Harden: Platform monitoring and alerts; Metering and chargeback; Platform security hardening; Image hardening; Security certifications; Network policy; Disaster recovery; Resource segmentation.

Operate: OS patch and upgrade; Platform patch and upgrade; Image patch and upgrade; App patch and upgrade; Security patches; Continuous security scanning; Multi-environment rollout; Enterprise container registry; Cluster and app elasticity; Monitor, alert, remediate; Log aggregation.

3. OpenShift official products

The official products under the Red Hat OpenShift umbrella, as of April 2025:

OKD

The open-source project that powers OpenShift is called OKD — Origin Community Distribution.

Self-Managed Services

  • Red Hat OpenShift Virtualization Engine: Although OpenShift is mostly associated with being a “containerization platform”, this edition is dedicated solely to deploying, managing, and scaling virtual machines (VMs).
Red Hat OpenShift Virtualization Engine
  • Red Hat OpenShift Kubernetes Engine (formerly Red Hat OpenShift Container Engine): delivers the foundational, security-focused capabilities of enterprise Kubernetes on Red Hat Enterprise Linux CoreOS to run containers in hybrid cloud environments.
  • OpenShift Container Platform (OCP, formerly known as OpenShift Enterprise): This is an enterprise-grade Kubernetes platform that you can self-manage on your chosen infrastructure (on-premises, public cloud, or virtualized environments). It provides a comprehensive set of features for building, deploying, and managing containerized applications. Red Hat OpenShift Container Platform adds a full set of operations and developer services and tools, including Serverless, Service Mesh, and Pipelines. With OpenShift Container Platform, organizations can adopt a hybrid cloud strategy and start building cloud-native applications. The proven platform includes a complete set of services that empower developers to code with speed and agility for applications while providing more flexibility and efficiency for IT operations teams.
  • Red Hat OpenShift Platform Plus: builds on the capabilities of OpenShift Container Platform with advanced multicluster security features, Day-2 management capabilities, integrated data management, and a global container registry. With OpenShift Platform Plus, organizations can more consistently protect and manage applications with increased security across open hybrid cloud environments and application life cycles.
Red Hat OpenShift Container Platform

Managed Services

These are fully managed OpenShift offerings where Red Hat or a cloud provider handles the underlying infrastructure and management of the OpenShift cluster.

  • Red Hat OpenShift Dedicated: A fully managed OpenShift service offered directly by Red Hat, running on either Amazon Web Services (AWS) or Google Cloud Platform (GCP).
  • Azure Red Hat OpenShift (ARO): A jointly engineered, fully managed OpenShift service on Microsoft Azure, operated and supported by both Microsoft and Red Hat.
  • Red Hat OpenShift Service on AWS (ROSA): A jointly managed OpenShift service on AWS, operated and supported by both Red Hat and AWS.
  • Red Hat OpenShift on IBM Cloud: A fully managed OpenShift service on IBM Cloud, operated and supported by both Red Hat and IBM.

Developer-Focused Offerings

  • Red Hat OpenShift Local (formerly CodeReady Containers): A lightweight, single-node OpenShift environment for local development on your personal workstation (Linux, macOS, Windows).
  • Red Hat OpenShift Playground: A free, temporary, online OpenShift environment for developers to experiment and learn.

4. OpenShift architecture

In v3, Docker was adopted as the container technology, and Kubernetes as the container orchestration technology.

The v4 product introduced several architectural changes:

  • Container runtime: CRI-O
  • Interacting with pods and containers: Podman
  • Container build tool: Buildah, thus breaking the exclusive dependency on Docker
Red Hat OpenShift architecture

OpenShift container platform Architecture (green ones are modified or new architecture components by Red Hat).

OpenShift Nodes

Like vanilla Kubernetes, OpenShift differentiates between two types of nodes: control plane (master) nodes and worker nodes.

OCP architecture services

Core infrastructure and Kubernetes services (on all nodes)

  • RHEL CoreOS: The base operating system is Red Hat Enterprise Linux CoreOS, a lightweight RHEL variant that provides essential OS features and combines the ease of over-the-air updates from Container Linux with the Red Hat Enterprise Linux kernel for container hosts.
  • etcd (Cluster Data Store, state of everything): A key-value store that holds all cluster data, configurations, and state information, ensuring consistency across the cluster.
  • OpenShift CLI (oc): A powerful command-line tool that provides comprehensive control over OpenShift resources.
  • Network Operator (SDN): Manages the software-defined networking within the OpenShift cluster, providing network isolation and connectivity between pods and services. OpenShift supports different network plugins, such as OpenShift SDN (legacy) and OVN-Kubernetes (the current default).
  • Routes: Provide external access to applications running within the OpenShift cluster by exposing services at a specific hostname.
  • Storage Management: OpenShift integrates with various storage solutions (e.g., persistent volumes) to provide persistent storage for applications.
  • Kubelet: runs on each node to manage pod lifecycles and communicate with the API server.
  • CoreDNS: provides internal DNS so pods and services can resolve each other by name.
  • Monitoring and Logging: OpenShift often includes integrated monitoring (using Prometheus and Grafana) and logging (using Vector and LokiStack or newer alternatives) solutions for cluster and application health.
  • Operators: OpenShift’s preferred method of managing services on the OpenShift control plane. Operators integrate with Kubernetes APIs and CLI tools, performing health checks, managing updates, and ensuring that the service/application remains in a specified state.
  • Platform Operators: Special Operators managed through the web console that install, configure, and update core platform components (like storage, networking, or monitoring) across the cluster.
  • Application Operators: Automate the deployment, management, and lifecycle of user applications or services, making complex app operations like upgrades, backups, and scaling easier and more reliable.
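To make the Routes item above concrete, here is a minimal sketch of a Route manifest that exposes a Service at an external hostname; the names `my-app` and `my-app.apps.example.com` are hypothetical placeholders:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                      # hypothetical route name
spec:
  host: my-app.apps.example.com     # external hostname served by the router
  to:
    kind: Service
    name: my-app                    # the Service this route exposes
  port:
    targetPort: 8080
  tls:
    termination: edge               # terminate TLS at the router
```

If `spec.host` is omitted, OpenShift generates a hostname under the cluster’s configured apps domain.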

NOTE: As of version 4.18, Fluentd is deprecated and is planned to be removed in a future release. As an alternative, you can use Vector. The preferred logging stack now combines Vector for log collection with LokiStack for log storage and visualization. According to Red Hat, this combination offers improved performance, resource efficiency, and scalability compared to the legacy EFK (Elasticsearch, Fluentd, Kibana) stack.

Master Nodes (also called Control Plane Nodes)

These nodes manage the OpenShift cluster and are responsible for the overall cluster state and decisions like scheduling, monitoring, and managing the lifecycle of containers.

Kubernetes services:

  • Kube API server
  • Scheduler: Assigns work (pods) to available worker nodes based on resource availability and other factors.
  • Cluster/Controller manager: Monitors the state of the cluster and makes necessary adjustments, ensuring the cluster matches the desired state.

OpenShift services:

  • OpenShift API Server: The front-end for the Kubernetes/OpenShift control plane that handles client requests and manages communication between components. Extends the Kubernetes API with OpenShift-specific objects and functionalities (e.g., Builds, Deployments, Routes, Projects).
  • OpenShift Web Console: A user-friendly graphical interface for managing and monitoring OpenShift clusters, applications, and resources.
  • Operator Lifecycle Manager (OLM): Manages the installation, upgrade, and lifecycle of Kubernetes Operators on a cluster.
    It automates tasks like deploying Operators, handling updates, resolving dependencies, and ensuring that Operators stay running and healthy.
  • OAuth Server: Provides authentication and authorization for accessing the OpenShift API and web console.
  • Cluster Version Operator (for upgrades and management).
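As a sketch of how the Operator Lifecycle Manager is typically driven, an Operator can be installed declaratively with a Subscription manifest like the one below; the package name `my-operator` is hypothetical, while `redhat-operators` is one of the default catalog sources:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator                 # hypothetical subscription name
  namespace: openshift-operators    # default namespace for cluster-wide Operators
spec:
  channel: stable                   # update channel to follow
  name: my-operator                 # package name in the catalog (hypothetical)
  source: redhat-operators          # catalog source
  sourceNamespace: openshift-marketplace
```

OLM resolves the package from the catalog, installs the Operator, and keeps it updated along the chosen channel.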

Worker Nodes (also called Compute Nodes)

These nodes are responsible for running the application workloads (pods) in the cluster.

The true minimum core services are Kubelet, Kube-Proxy, CRI-O, and SDN components.
Others like monitoring, logging, and tuning are important but can vary a little depending on your OpenShift setup.

  • Kubelet: Watches the API server for pod assignments, manages the lifecycle of pods/containers on the node.
  • Kube-Proxy: Maintains network rules and load-balances traffic between services and pods.
  • CRI-O (Container Runtime Interface — Open): A lightweight container runtime for running Kubernetes containers (instead of Docker). It implements the Kubernetes Container Runtime Interface, enabling the use of Open Container Initiative (OCI)-compatible runtimes, and supports OCI container images from any container registry.
  • SDN/OVN-Kubernetes: Provides the pod-to-pod networking. Workers usually run parts of the cluster networking like Open vSwitch (OVS) agents.
  • Node Tuning Operator (optional but common): Optimizes node-level performance settings like kernel parameters.
  • Monitoring Agents: Things like Prometheus Node Exporter run to monitor node health and resource usage.
  • Logging Agents (optional, depending on config): Agents such as Fluentd or Vector that ship logs to centralized storage.

Container management and automation:

  • OpenShift Registry: A built-in, integrated container image registry that stores and manages container images within the cluster, supporting internal builds, deployments, and secure image distribution.
  • Builds: A system for creating container images from source code, Dockerfiles, or other inputs. OpenShift provides various strategies for building images.
  • Deployments: Manage the rollout and lifecycle of application replicas. OpenShift offers enhanced deployment strategies like rolling updates and canary deployments.
  • Image Streams: Provide an abstraction layer over container image tags: they track changes to container images and can trigger automatic rebuilds or redeployments when new images become available.
  • Source-to-Image (S2I): An OpenShift tool that builds reproducible container images directly from source code by injecting it into a builder image, simplifying application builds and deployments.
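The build concepts above can be sketched as a single BuildConfig using the Source (S2I) strategy; the repository URL and resource names below are hypothetical, and `nodejs:latest` stands in for any builder image stream:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app                        # hypothetical build name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/my-app.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest           # S2I builder image from an image stream
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest             # push the result to the internal registry
  triggers:
  - type: ImageChange                 # rebuild when the builder image updates
```

The ImageChange trigger is what ties Builds and Image Streams together: a new builder image automatically produces a fresh application image.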

Governance and Development

  • Projects (Namespaces with Enhancements): Custom resources that group Kubernetes resources and provide isolation and collaboration features for teams working on different applications within the same cluster. Access can be granted to users based on these groupings, and projects can receive quotas limiting the available resources, number of pods, volumes, and so on, allowing a team to organize and manage its workload in isolation from other teams.
  • Templates: Allow you to define and deploy complex applications consisting of multiple resources in a repeatable way.
  • Service Catalog (Optional, may be deprecated in newer versions): Allowed developers to discover and consume services offered within the cluster or externally.
  • Broker (Optional, related to Service Catalog): Managed the lifecycle of services provisioned through the Service Catalog.
  • Runtimes and xPaaS: Ready-to-use base container images and templates for developers: a set of base images for JBoss middleware products such as JBoss EAP and ActiveMQ, as well as for common languages and databases (Java, Node.js, PHP, MongoDB, MySQL, etc.).
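As an illustration of project quotas, a standard Kubernetes ResourceQuota can be applied to a project (namespace); the project name `team-a` and the limits below are hypothetical:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a            # hypothetical project name
spec:
  hard:
    pods: "20"                 # max number of pods in the project
    requests.cpu: "4"          # total CPU requested across all pods
    requests.memory: 8Gi       # total memory requested across all pods
    persistentvolumeclaims: "10"
```

Once applied, any workload that would exceed these limits is rejected at admission time, keeping one team from starving the cluster.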

OpenShift specialized services

  • OpenShift Data Science: A platform for developing, training, and deploying machine learning models on OpenShift.
  • OpenShift GitOps: Built on Argo CD, it automates application delivery and cluster management by applying Git-based declarative configurations to the cluster, integrating with CI/CD and Kubernetes.
  • OpenShift Pipelines (Tekton): A cloud-native CI/CD solution that runs on OpenShift.
  • OpenShift Service Mesh (Istio): A platform for connecting, securing, and managing microservices on OpenShift.
  • OpenShift Serverless (Knative): A platform for building and running event-driven, serverless applications on OpenShift.
  • Advanced Cluster Management for Kubernetes: While not strictly named “OpenShift,” this product is tightly integrated and designed to manage multiple OpenShift clusters (and other Kubernetes clusters) at scale.
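To illustrate the GitOps workflow mentioned above, here is a minimal sketch of an Argo CD Application as OpenShift GitOps would consume it; the repository URL, path, and names are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: openshift-gitops     # default namespace for OpenShift GitOps
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git  # hypothetical repo
    targetRevision: main
    path: overlays/prod           # hypothetical path within the repo
  destination:
    server: https://kubernetes.default.svc   # in-cluster API server
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                 # delete resources removed from Git
      selfHeal: true              # revert manual drift back to the Git state
```

With `automated` sync enabled, the cluster continuously converges toward whatever is declared in the Git repository.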

5. OpenShift Provisioning (IPI vs UPI)

Installer-Provisioned Infrastructure (IPI)

This is the recommended and most automated method. The OpenShift installer takes responsibility for provisioning the underlying infrastructure (compute, network, storage) on the chosen platform.

Supported Platforms:

  • Public Clouds: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), IBM Cloud
  • Private Clouds and Virtualized Environments: VMware vSphere, Red Hat OpenStack Platform (RHOSP), Nutanix, Bare Metal (with specific network and load balancer requirements)
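For IPI installs, the installer is driven by an install-config.yaml. A minimal sketch for AWS might look like this; the domain, cluster name, and region are hypothetical placeholders, and the pull secret must come from the Red Hat console:

```yaml
apiVersion: v1
baseDomain: example.com            # hypothetical base DNS domain
metadata:
  name: demo-cluster               # hypothetical cluster name
platform:
  aws:
    region: eu-west-1              # hypothetical target region
controlPlane:
  name: master
  replicas: 3                      # three control plane nodes
compute:
- name: worker
  replicas: 3                      # three worker nodes
pullSecret: '<your-pull-secret>'   # placeholder; obtained from the Red Hat console
sshKey: '<your-public-ssh-key>'    # placeholder; for node access
```

The installer consumes this file and then provisions the VPC, load balancers, DNS records, and machines itself, which is exactly the automation UPI leaves to you.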

User-Provisioned Infrastructure (UPI)

In this method, you are responsible for setting up and managing the underlying infrastructure before deploying OpenShift. This provides more control but requires more manual configuration.

Supported Platforms:

  • Public Clouds: AWS, Azure, GCP (often used for highly customized setups)
  • Private Clouds and Virtualized Environments: VMware vSphere, RHOSP, Nutanix, Bare Metal (common for environments with existing infrastructure or specific requirements)