Oracle Kubernetes Engine (OKE): A Complete Overview
In today’s cloud-native world, container orchestration has become a critical part of modern application deployment. Oracle Kubernetes Engine (OKE) is Oracle’s fully managed Kubernetes service on Oracle Cloud Infrastructure (OCI), designed to simplify the deployment, scaling, and management of containerized applications. This article provides a complete overview of OKE: its architecture, networking, storage, and the benefits of running Kubernetes on OCI.

What is Oracle Kubernetes Engine (OKE)?
Oracle Kubernetes Engine (also known as OCI Container Engine for Kubernetes) is a fully managed, highly available, and scalable Kubernetes service provided by Oracle Cloud Infrastructure.
OKE allows developers to deploy and manage containerized applications in the cloud using Kubernetes, the open-source platform for automating container deployment, scaling, and operations across clusters of hosts. Oracle manages the Kubernetes control plane, ensuring high availability, security, and reliability, while customers retain control over the worker nodes that run their applications.
Ways to Launch Kubernetes on Oracle Cloud
OCI provides three primary approaches to run Kubernetes:
1. Roll-Your-Own Kubernetes
A do-it-yourself approach where users manually provision OCI compute instances and install Kubernetes and a container runtime (such as Docker) themselves.
2. Quickstart Experience
An automated setup using Terraform templates provided by Oracle (available on GitHub) to quickly build Kubernetes infrastructure.
3. Oracle Kubernetes Engine (OKE)
A fully managed service where Oracle handles control plane operations, allowing users to deploy Kubernetes clusters in just a few steps.
Among these three approaches, OKE is the recommended and most efficient choice for production workloads.
Kubernetes Clusters in OKE
OKE supports flexible cluster configurations:
- Up to 3 clusters under the Monthly Flex pricing model
- 1 cluster under the Pay-As-You-Go pricing model
- Up to 1000 worker nodes per cluster
- Up to 110 pods per node
Clusters can be managed using:
- OCI Console
- OCI CLI
- REST APIs
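As a sketch, a typical OCI CLI workflow for an existing cluster might look like the following (the OCIDs are placeholders, and command options should be verified against the current CLI reference):

```shell
# List OKE clusters in a compartment (placeholder OCID)
oci ce cluster list --compartment-id ocid1.compartment.oc1..example

# Generate a kubeconfig entry so kubectl can reach the cluster
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..example \
  --file $HOME/.kube/config

# Verify that the worker nodes are reachable
kubectl get nodes
```

The same operations are available through the OCI Console and the REST APIs; the CLI is often the most convenient option for scripting and automation.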
Deployment Architecture in OCI
OCI infrastructure is organized into:
- Regions – Geographical locations
- Availability Domains (ADs) – Isolated data centers within a region
- Fault Domains – Logical groupings of hardware within an availability domain, isolating resources from localized hardware failures
OKE clusters can be deployed in:
- Single-AD regions
- Multi-AD regions for higher availability
This design ensures resilience, scalability, and minimal downtime.
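To take advantage of a multi-AD deployment, workloads can be spread across availability domains with a standard topology spread constraint, since Kubernetes nodes carry the well-known `topology.kubernetes.io/zone` label. A minimal sketch (the app name, labels, and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend                             # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                               # replica counts per AD differ by at most 1
          topologyKey: topology.kubernetes.io/zone # one zone per availability domain
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web-frontend
      containers:
        - name: web
          image: nginx:stable                      # placeholder image
```

With three replicas and three ADs, the scheduler favors placing one replica per AD, so the loss of a single data center leaves the application running.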
Networking in OKE
Networking in OKE is powered by Kubernetes Services and OCI Load Balancers.
Load Balancer in OKE
The Kubernetes LoadBalancer service automatically provisions an OCI Load Balancer, which:
- Routes traffic to pods running on worker nodes
- Performs health checks
- Distributes traffic based on load-balancing policies
This enables seamless exposure of applications to the internet or internal networks.
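For example, a Service manifest along the following lines prompts OKE to provision an OCI Load Balancer in front of the matching pods. The annotation names follow OCI's documented conventions, but the shape values, names, and labels here are illustrative assumptions to check against the current OCI documentation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend          # illustrative name
  annotations:
    # OCI-specific annotations selecting a flexible load balancer shape
    # (assumed values; verify against the OCI documentation)
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "100"
spec:
  type: LoadBalancer          # triggers OCI Load Balancer provisioning
  selector:
    app: web-frontend         # must match the labels on your pods
  ports:
    - port: 80                # load balancer listener port
      targetPort: 8080        # container port inside the pods
```

After `kubectl apply -f service.yaml`, the external IP shown by `kubectl get svc web-frontend` is the public address of the provisioned OCI Load Balancer.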
Controllers in Kubernetes (OKE)
Kubernetes controllers continuously monitor the cluster’s state and ensure it matches the desired configuration.
In OKE, two commonly used ingress controllers are:
- NGINX Ingress Controller
- F5 BIG-IP Controller
Both controllers manage incoming traffic and routing rules, enabling advanced application delivery and security.
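With the NGINX Ingress Controller installed in the cluster, routing rules are declared as standard Ingress resources. A minimal sketch, where the hostname and backend service name are placeholder assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                  # illustrative name
spec:
  ingressClassName: nginx            # handled by the NGINX Ingress Controller
  rules:
    - host: app.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-frontend   # an existing Service in the cluster
                port:
                  number: 80
```

The controller watches Ingress resources like this one and reconfigures its proxy to route matching requests to the named backend Service.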
Reference link:
https://k21academy.com/kubernetes/oracle-kubernetes-kubernetes-concepts-benefits-uses/#3