Demystifying OpenShift Sizing

In my last post I discussed the technical capabilities of OpenShift Container Platform (OCP) and the value it brings to cloud-native application developers and administrators. In this post, I'll discuss OpenShift subscription models and the sizing guidelines recommended by Red Hat. This should answer questions such as: How is the OCP platform priced? How many applications can I run under my OpenShift subscription? Is there any limit on CPU and memory usage in my OpenShift subscription?

Before getting into those questions, let us first understand the different forms of OpenShift subscription available to users. As of now, the following six subscription models are available:

The following diagram from OpenShift will help you understand the above models. For more details, please go through the documentation here.

Fig: OCP and OKE

Now let us try to understand the subscription model. Here I'll discuss it in the context of OCP, because that is the complete OpenShift environment that customers implement and manage themselves.

Subscription: The following table shows what is included in an OCP subscription:

As a full platform, OCP also includes:

- Fully Automated Installers

- Over the Air Smart Upgrades

- Enterprise Secured Kubernetes

- Kubectl and oc automated command line

- Operator Lifecycle Manager (OLM)

- Administrator Web console

- OpenShift Virtualization (CNV)

- User Workload Monitoring

- Metering and Cost Management SaaS Service

- Platform Logging

- Developer Web Console

- Developer Application Catalog

- Source to Image and Builder Automation (Tekton)

- OpenShift Service Mesh (Kiali, Jaeger, and OpenTracing)

- OpenShift Serverless (Knative)

- OpenShift Pipelines (Jenkins and Tekton)

- Embedded Component of IBM Cloud Pak and RHT MW Bundles

Subscription Types:

There is one base subscription type of OpenShift: Red Hat OpenShift Container Platform Premium (2 cores or 4 vCPUs). It is important to know that subscriptions are based only on the hosts; there is no cost per application and no cost for memory footprint. Based on the environment sizing, any number of workloads can be run. Where bigger hosts/VMs are required to run workloads for specific use cases, customers can stack these subscriptions.

It is also important to understand that OpenShift subscriptions come in 2-core units. There is no way to utilize just 1 core from a subscription. For example, if there is a requirement for 3 cores, then two subscriptions of 2-core units each, i.e. 4 cores in total, will be consumed.
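The 2-core rounding rule above can be sketched in a few lines. This is purely illustrative (the helper name is my own, not a Red Hat tool):

```python
import math

def subscriptions_needed(cores_required: int) -> int:
    """Subscriptions are sold in 2-core units, so partial units round up."""
    return math.ceil(cores_required / 2)

# A 3-core requirement consumes two subscriptions, i.e. 4 cores in total.
subs = subscriptions_needed(3)
print(subs)        # 2 subscriptions
print(subs * 2)    # 4 cores actually consumed
```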

Before we get into the details of sizing, let us briefly understand the core/vCPU relationship.

Core/vCPU: In simple terms, virtual machines use virtual CPUs (vCPUs) for computing. Any virtualised system can be hyper-threaded or non-hyper-threaded; our discussion here relates to hyper-threaded systems only. So what is hyper-threading? Much like multithreading, where an application process runs as multiple simultaneous threads, in the virtualisation world some x86 processors make a single physical core appear as two logical cores to the operating system, allowing two threads to run simultaneously. Therefore, in a hyper-threaded system, 1 core = 2 vCPUs.

For OpenShift subscriptions, the default conversion rule used by Red Hat is 2 cores = 4 vCPUs, i.e. each core is considered to provide 2 vCPUs of VM processing power. For example, if we have an 8-vCPU virtual machine, then according to this default conversion rule the VM has 4 cores, i.e. it needs two 2-core OpenShift subscriptions.
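Putting the two rules together (1 core = 2 vCPUs, one subscription = 2 cores), the VM-level conversion can be sketched as a small helper. The constants mirror the defaults stated in the text; the function name is my own:

```python
import math

VCPUS_PER_CORE = 2          # hyper-threaded: 1 core = 2 vCPUs
CORES_PER_SUBSCRIPTION = 2  # one subscription covers 2 cores (4 vCPUs)

def subscriptions_for_vm(vcpus: int) -> int:
    """Convert a VM's vCPU count into the number of 2-core subscriptions."""
    cores = math.ceil(vcpus / VCPUS_PER_CORE)
    return math.ceil(cores / CORES_PER_SUBSCRIPTION)

# An 8-vCPU VM counts as 4 cores, so it needs two 2-core subscriptions.
print(subscriptions_for_vm(8))  # 2
```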

For more on Linux virtualisation, check here. To learn more about hyper-threading, refer here.

With the above understanding, let us now discuss OpenShift sizing.

OpenShift Sizing:

To understand sizing, we first need to understand the application deployment model in OpenShift. Application deployment in OpenShift works as follows:

1. Any application is deployed as a container image.

2. Containers are grouped into Pods, the basic deployment unit. In most cases a Pod will have one container; however, there are situations where additional containers (such as sidecars) are packaged in a Pod alongside the application container.

3. These Pods run on nodes, which are called worker nodes in Kubernetes terminology.

4. Worker nodes are managed by master nodes. Master nodes, or masters, are considered part of the control plane in Kubernetes terminology.

5. Masters are part of the OpenShift subscription.

6. Apart from masters and worker nodes, an OpenShift cluster can have infrastructure nodes. Setting up these nodes is a "Day 2" task. Infrastructure nodes host the registry, router, log aggregation, metric collection, OpenShift Container Storage, Quay and Red Hat Advanced Cluster Management.

A bare-minimum OpenShift environment is a 3-node cluster, also known as a Compact Cluster, where master and worker run on the same nodes. For more details, please see here.

Now, to start with a ballpark sizing for an OpenShift environment, we need the following requirements from the customer:

1. Number of clusters required: each cluster has its own master and infrastructure nodes and is a distinct OpenShift environment. Customers may need separate clusters for Dev, Test and Prod environments. Different clusters may also have varied security requirements, e.g. the Dev cluster may be less secured than the Test and/or Prod cluster. Per Red Hat's recommendation, any highly available (HA) cluster (typically required to run production workloads) needs 3 masters, and a highly available infrastructure needs at least 3 infrastructure nodes.

2. Number of application instances: as mentioned above, applications run inside Pods. Therefore, to get the total number of Pods, we should count the total number of development and production application instances.

3. Resource consumption: collect memory usage and CPU consumption data for the applications. Settings such as VM settings and Java heap size should be considered. Applications requiring more CPU will need more cores, which convert into more vCPUs as explained above.

4. Overhead: allow some buffer, as there may be existing monitoring, security or other agents running on the nodes. OpenShift itself consumes around 0.5 GB of resources per node, which is not available to applications. Add up the overheads from the other agents to get the total overhead.

5. VM or bare metal: the customer may set up the OpenShift environment on VMs or on standard physical servers. Apply the memory and CPU conventions described before, and remember that one OpenShift subscription covers 2 cores, or 4 vCPUs when hyper-threading is enabled (explained earlier).

Now, with the above parameters, let us consider the following example scenario for a customer:

With the above requirements:

Number of Worker Nodes = 2400/(64-1) = 38 nodes

Number of vCPUs = 38 * 4 = 152 vCPUs

Number of Cores = 152/2 = 76 [since 1 core = 2 vCPUs]

Number of Subscriptions = 76/2 = 38
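The arithmetic above can be reproduced in a short script. Since the customer requirements table is not shown here, the input values (2400 units of total capacity, 64-unit worker nodes with 1 unit of overhead, 4 vCPUs per node) are inferred from the calculation itself rather than confirmed, and the node count follows the example's own rounding:

```python
# Inputs inferred from the worked example above; not confirmed figures.
total_capacity = 2400    # total capacity required across all workloads
node_capacity = 64       # capacity per worker node
node_overhead = 1        # capacity reserved per node for overhead
vcpus_per_node = 4       # vCPUs per worker node

# The example rounds the node count down; a conservative sizing would round up.
workers = total_capacity // (node_capacity - node_overhead)  # 38 nodes
vcpus = workers * vcpus_per_node                             # 152 vCPUs
cores = vcpus // 2                                           # 76 cores (1 core = 2 vCPUs)
subscriptions = cores // 2                                   # 38 subscriptions (2 cores each)

print(workers, vcpus, cores, subscriptions)
```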

Conclusion:

The above example should give you a fair idea of how to start sizing a real-life OpenShift environment. There are also the following predefined sizing packages for quickly starting an OpenShift environment:

- Red Hat OpenShift Container Platform (OCP) Premium with 2 cores

- Red Hat OpenShift Container Storage, Premium (2 Core)

- Red Hat JBoss EAP for OCP, Premium (2 Core)

Distinguished Chief Technologist with 24 years of experience in areas of OOP, SOA, Cloud, DevOps and Banking Transformation.
