by Lior Nabat
Over the last several years, the adoption of Kubernetes has increased tremendously. In fact, according to a Cloud Native Computing Foundation (CNCF) survey, 78% of respondents in late 2019 were using Kubernetes in production. Kubernetes lets organizations create a management layer that commoditizes the clouds themselves, building cross-cloud or hybrid-cloud deployments that hide provider-specific implementation details from the rest of the team.
One crucial part of the Kubernetes ecosystem is Operators, a pattern introduced by CoreOS in 2016 that uses the Kubernetes APIs themselves to deploy and manage the state of applications. Operators are a critical part of deploying and operating applications in a cross-cloud or hybrid environment. They can help manage and maintain state across a federated Kubernetes deployment (multiple Kubernetes clusters running together) or even across clouds.
But what exactly are Operators, and how do they help manage these stateful applications? Let’s take a look at Operators in detail, how they work within Kubernetes, and how the KubeMQ messaging platform uses Operators to help you build complex and scalable messaging services with minimal coding and overhead.
At a high level, Operators allow you to automate tasks beyond what Kubernetes natively provides. Operators are software extensions that hook into the Kubernetes APIs and the control plane to manage a custom resource (CR), which is itself an extension of the Kubernetes API. The CR describes the desired state of the application, and the control-plane component (the Operator itself) monitors the CR to ensure that the application is running as expected.
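The control loop described above can be sketched, independent of any Kubernetes libraries, as a function that compares the desired state declared in the CR against the state the Operator actually observes in the cluster. This is a minimal conceptual sketch, not a real Operator; all types and field names are hypothetical:

```go
package main

import "fmt"

// DesiredState mirrors what a custom resource's spec might declare.
type DesiredState struct {
	Replicas int
}

// ObservedState is what the Operator actually finds in the cluster.
type ObservedState struct {
	Replicas int
}

// Reconcile returns the action needed to converge the observed state
// toward the desired state declared in the CR.
func Reconcile(desired DesiredState, observed ObservedState) string {
	switch {
	case observed.Replicas < desired.Replicas:
		return fmt.Sprintf("scale up by %d", desired.Replicas-observed.Replicas)
	case observed.Replicas > desired.Replicas:
		return fmt.Sprintf("scale down by %d", observed.Replicas-desired.Replicas)
	default:
		return "no action"
	}
}

func main() {
	// The CR asks for 3 replicas; only 1 is running.
	fmt.Println(Reconcile(DesiredState{Replicas: 3}, ObservedState{Replicas: 1})) // prints: scale up by 2
}
```

A real Operator runs this comparison continuously in response to cluster events, which is what lets it catch failures and drift automatically rather than waiting for a human to notice.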
For example, an Operator might deploy and scale a pod, take and restore a backup, manage network services and ingresses, or manage a persistent data store.
Since Operators hook into native Kubernetes tools like kubectl, they become a common language for managing complex deployments where state is involved. A Helm chart is excellent for deploying and managing a stateless application like a web server, but for stateful systems like etcd, PostgreSQL, and KubeMQ, Operators are the key to success.
Before we dive into understanding why Operators are an essential ingredient to KubeMQ’s success, let’s first examine the benefits of KubeMQ.
At a high level, KubeMQ isn't just a message broker or queue but a messaging platform that can be used to build a message-based architecture spanning multi-cloud, hybrid-cloud, and edge-computing environments. KubeMQ allows services in these environments to communicate with each other in any messaging pattern: pub/sub, streams, queues, and so on. KubeMQ is Kubernetes-native and can be deployed in less than a minute.
The KubeMQ messaging platform comprises four core components — server, bridges, sources, and targets.
Combining these components allows you to create cross-cluster messaging and a no-code, message-based microservices architecture on Kubernetes.
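To make the roles of the four components concrete, here is a purely illustrative Go sketch of how a message might flow through them: a source ingests an event into one cluster's server, a bridge replicates it to a second cluster, and a target delivers it to an external consumer. All names are hypothetical; none of this is KubeMQ's actual API.

```go
package main

import "fmt"

// Server stands in for a KubeMQ server: a queue that holds messages.
type Server struct{ queue []string }

func (s *Server) Publish(msg string) { s.queue = append(s.queue, msg) }

// Drain returns and clears all queued messages.
func (s *Server) Drain() []string { q := s.queue; s.queue = nil; return q }

// Source ingests events from an external system into a server.
func Source(events []string, srv *Server) {
	for _, e := range events {
		srv.Publish(e)
	}
}

// Bridge replicates messages from one cluster's server to another's.
func Bridge(from, to *Server) {
	for _, m := range from.Drain() {
		to.Publish(m)
	}
}

// Target delivers messages from a server to an external consumer.
func Target(srv *Server, deliver func(string)) {
	for _, m := range srv.Drain() {
		deliver(m)
	}
}

func main() {
	edge, cloud := &Server{}, &Server{}
	Source([]string{"sensor-reading-1", "sensor-reading-2"}, edge)
	Bridge(edge, cloud) // cross-cluster replication
	Target(cloud, func(m string) { fmt.Println("delivered:", m) })
}
```

The point of the sketch is the separation of concerns: services only talk to their local server, while bridges handle the cross-cluster plumbing, which is what keeps the architecture "no-code" from the application's perspective.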
So how does KubeMQ work in the world of Operators?
First, KubeMQ deploys as an Operator to ensure that it can operate at a native Kubernetes level. One of KubeMQ's design tenets is that it is better to create many small clusters and bind them together than to create one massive cluster.
This allows for better performance, scalability, and resilience, and Operators are the key to making the approach work as the deployment and management tool for KubeMQ. The Operator deploys the clusters and ensures that the various KubeMQ bridges, sources, and targets are configured correctly for each one. It also helps that KubeMQ is written in Go: this makes KubeMQ fast and lets it hook into native Kubernetes data models, events, and APIs, which simplifies managing the state of the clusters and validating their configuration.
Deploying as an Operator also helps KubeMQ keep overhead to a minimum. For example, a large financial company with high volumes of real-time messages for price quotes, transactions, and client funding leverages KubeMQ to decrease the number of servers previously required to fulfill its needs. It has also allowed the company to redirect operational effort to higher-value tasks rather than monitoring and maintaining messaging infrastructure. Similarly, the company has leveraged the KubeMQ Operator to elastically scale its infrastructure based on load. For example, when markets close, demand drops, and the clusters can scale down accordingly.
The KubeMQ Operator also helps track state, a key reason for leveraging KubeMQ for reliable cross-cloud and hybrid-cloud deployments. This state lets the Operator validate that the desired capacity and configuration are in place for each cluster. By comparing the desired state in the CR against the existing state in Kubernetes, the Operator ensures that failures are caught and addressed, capacity is added as required, and the various bridges, sources, and targets are configured correctly.
One KubeMQ customer in the agricultural vertical heavily leverages this feature of KubeMQ to run its messaging platform across edge computing systems alongside cloud deployments. This provides better performance and reliability and allows the company to grow its business with new services without interruptions or downtime: it simply creates new clusters and configures them as required. Knowing that the KubeMQ Operator validates the CR definition helps prevent deploying clusters with faulty configurations.
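The validation step referenced above can be sketched as a simple check of the CR spec before any cluster is created. This is a hedged illustration only; the field names and rules below are hypothetical, not KubeMQ's actual CR schema:

```go
package main

import (
	"errors"
	"fmt"
)

// ClusterSpec models a few fields a KubeMQ cluster CR might declare.
// These names are hypothetical, not KubeMQ's actual schema.
type ClusterSpec struct {
	Name     string
	Replicas int
	Bridges  []string // names of peer clusters to bridge to
}

// Validate rejects specs that would produce a faulty deployment.
func Validate(spec ClusterSpec) error {
	if spec.Name == "" {
		return errors.New("cluster name must not be empty")
	}
	if spec.Replicas < 1 {
		return errors.New("replicas must be at least 1")
	}
	for _, b := range spec.Bridges {
		if b == spec.Name {
			return errors.New("a cluster cannot bridge to itself")
		}
	}
	return nil
}

func main() {
	err := Validate(ClusterSpec{Name: "edge-1", Replicas: 0})
	fmt.Println(err) // prints: replicas must be at least 1
}
```

Because the Operator applies checks like these before acting on the CR, a bad configuration is rejected up front instead of producing a half-deployed cluster that has to be debugged in production.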
Operators are critical for managing and deploying complex Kubernetes systems.
KubeMQ heavily leverages the Operator model to help customers succeed in building out complex and scalable messaging platforms with minimal coding and overhead. This delivers greater business value by allowing organizations to solve problems quickly and efficiently without spending time and resources on managing and maintaining messaging infrastructure.