Install VMware RabbitMQ in Kubernetes using Cluster Operator

Deploying VMware RabbitMQ in Kubernetes with Cluster Operator

Posted by Alfus Jaganathan on Monday, March 25, 2024

Background

If you’re embarking on the installation of VMware RabbitMQ for Kubernetes, this article is designed to simplify the process for you. It introduces best practices, high-level instructions, and steps to ensure a smooth setup. For in-depth guidance, the official documentation remains your go-to resource.

Note: This guide utilizes the Cluster Operator for the RabbitMQ installation.

VMware RabbitMQ for Kubernetes, previously known as VMware Tanzu RabbitMQ, is a commercial version of RabbitMQ enhanced by VMware. It introduces unique features absent in the open-source version, such as Warm Standby Replication and Intra-cluster Compression. For more information, refer to the official VMware RabbitMQ documentation.

Now, let’s dive into the detailed installation steps for VMware RabbitMQ.

Note (1): The insights shared here pertain to VMware RabbitMQ for Kubernetes version 1.5, the latest version at the time of writing.

Note (2): For air-gapped environments, adhere to the Important notes in the official documentation.

Prerequisites

Note: Refer to the "Prerequisites before you Install VMware RabbitMQ for Kubernetes" section in the official documentation for more detailed information.

Getting Started

The Kubernetes resources required for the installation are summarized below and explored in detail in the next section.

  1. CertManager
  2. Namespaces
  3. Secrets, SecretImports and SecretExports
  4. ServiceAccounts, ClusterRoles and ClusterRoleBindings
  5. PackageRepository
  6. PackageInstall
  7. RabbitMQCluster

Detailed Steps

Let’s delve into each of the components one-by-one in detail.

CertManager: Install cert-manager, if it is not already present, by executing the following command (update the version as needed).

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.5.3/cert-manager.yaml
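
Optionally, before moving on, confirm that the cert-manager components are up and available (the cert-manager namespace below is the default created by the manifest above):

kubectl get pods -n cert-manager
kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=120s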

Namespaces: Run the following block to create the three namespaces used throughout this installation: one for shared secrets, one for the installers, and one for the RabbitMQ clusters.

kapp deploy -a tanzu-rabbitmq-namespaces -y -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: generic-secrets
  labels:
    name: generic-secrets
---
apiVersion: v1
kind: Namespace
metadata:
  name: rabbitmq-installers
  labels:
    name: rabbitmq-installers
---
apiVersion: v1
kind: Namespace
metadata:
  name: rabbitmq-clusters
  labels:
    name: rabbitmq-clusters
EOF
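
A quick check that all three namespaces exist:

kubectl get namespaces generic-secrets rabbitmq-installers rabbitmq-clusters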

Secrets, SecretImports and SecretExports: This step involves creating a secret for Tanzu registry credentials, exporting this secret to targeted namespaces, and ensuring it can be imported where required. This is crucial for the installation and operation of RabbitMQ. Here, the secret tanzu-registry-credentials-secret is exported to rabbitmq-installers and rabbitmq-clusters namespaces.

Note (1): Replace pivnet username and pivnet password with your VMware Tanzu Network (Pivnet) credentials.

Note (2): If the package is hosted in a different registry, update the registry URL (currently registry.tanzu.vmware.com) and the credentials above accordingly.

kapp deploy -a tanzu-rabbitmq-secrets -y -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: tanzu-registry-credentials-secret
  namespace: generic-secrets
  labels:
    name: tanzu-registry-credentials-secret
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "registry.tanzu.vmware.com": {
          "username": "pivnet username", 
          "password": "pivnet password",
          "auth": ""
        }
      }
    }    
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretExport
metadata:
  name: tanzu-registry-credentials-secret
  namespace: generic-secrets
spec:
  toNamespaces:
  - "*"
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretImport
metadata:
  name: tanzu-registry-credentials-secret
  namespace: rabbitmq-installers
spec:
  fromNamespace: generic-secrets
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretImport
metadata:
  name: tanzu-registry-credentials-secret
  namespace: rabbitmq-clusters
spec:
  fromNamespace: generic-secrets
EOF
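
Assuming the Carvel secretgen-controller is installed (a prerequisite for SecretExport and SecretImport to work), the SecretImport resources above cause a copy of the secret to be created in each target namespace. You can confirm this with:

kubectl get secret tanzu-registry-credentials-secret -n rabbitmq-installers
kubectl get secret tanzu-registry-credentials-secret -n rabbitmq-clusters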

ServiceAccounts, ClusterRoles and ClusterRoleBindings: This step establishes the required permissions for the service account, which will be utilized by the package installer and the RabbitMQ operators.

Note: The service account must be created in the rabbitmq-installers namespace.

kapp deploy -a tanzu-rabbitmq-serviceaccounts -y -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tanzu-rabbitmq-install-cluster-role
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  - mutatingwebhookconfigurations
  verbs:
  - "*"
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - "*"
- apiGroups:
  - cert-manager.io
  resources:
  - certificates
  - issuers
  verbs:
  - "*"
- apiGroups:
  - ""
  resources:
  - configmaps
  - namespaces
  - secrets
  - serviceaccounts
  - services
  verbs:
  - "*"
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - rolebindings
  - roles
  verbs:
  - "*"
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - "*"
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - get
  - patch
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  verbs:
  - create
  - get
  - list
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - pods/exec
  verbs:
  - create
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - update
  - watch
- apiGroups:
  - rabbitmq.com
  - rabbitmq.tanzu.vmware.com
  resources:
  - "*"
  verbs:
  - "*"
- apiGroups: 
  - policy
  resources: 
  - podsecuritypolicies
  verbs: 
  - use
  resourceNames:
  - vmware-system-privileged
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tanzu-rabbitmq
  namespace: rabbitmq-installers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tanzu-rabbitmq-install-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tanzu-rabbitmq-install-cluster-role
subjects:
- kind: ServiceAccount
  name: tanzu-rabbitmq
  namespace: rabbitmq-installers
- kind: Group
  name: system:serviceaccounts
EOF
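
To sanity-check the role binding, you can ask the API server whether the tanzu-rabbitmq service account is permitted to perform one of the actions granted above, for example:

kubectl auth can-i create deployments --as=system:serviceaccount:rabbitmq-installers:tanzu-rabbitmq -n rabbitmq-installers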

PackageRepository: To create the tanzu-rabbitmq-package-repository, execute the following code block in a command shell. This step establishes the required package repository, enabling the retrieval of necessary packages from PivNet.

Note (1): Before proceeding, you may need to accept the End User License Agreement (EULA) for the package p-rabbitmq-for-kubernetes/tanzu-rabbitmq-package-repo if you haven’t already done so.

Note (2): After running the command, wait a few seconds and then verify that the required package is loaded by executing kubectl get packages -A | grep " rabbitmq.tanzu.vmware.com.1.5.3"

kapp deploy -a tanzu-rabbitmq-package-repository -y -f - <<EOF
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: tanzu-rabbitmq-repo
  namespace: rabbitmq-installers
spec:
  fetch:
    imgpkgBundle:
      image: registry.tanzu.vmware.com/p-rabbitmq-for-kubernetes/tanzu-rabbitmq-package-repo:1.5.3
EOF
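
In addition to the package check above, you can watch the repository object itself until its description reports a successful reconcile:

kubectl get packagerepository tanzu-rabbitmq-repo -n rabbitmq-installers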

PackageInstall: To deploy all required RabbitMQ operators—including the Cluster Operator, Messaging Topology Operator and Standby Replication Operator—run the following code block in a command shell. This action will create the tanzu-rabbitmq-package-install app, which orchestrates the installation of the operators.

Note: All of the operators mentioned above are installed into the rabbitmq-system namespace.

kapp deploy -a tanzu-rabbitmq-package-install -y -f - <<EOF
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: tanzu-rabbitmq
  namespace: rabbitmq-installers
spec:
  serviceAccountName: tanzu-rabbitmq
  packageRef:
    refName: rabbitmq.tanzu.vmware.com
    namespace: rabbitmq-installers
    versionSelection:
      constraints: 1.5.3
EOF
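
To confirm the installation finished, check that the PackageInstall reconciled and that the operator pods are running (the deployment names under rabbitmq-system may vary slightly between versions):

kubectl get packageinstall tanzu-rabbitmq -n rabbitmq-installers
kubectl get pods -n rabbitmq-system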

With the Cluster Operator installed, we can now define the RabbitMQ cluster configuration. The operator uses this specification to create and manage the cluster, so this is where you tailor the deployment to your specific needs and operational requirements.

RabbitMQCluster: To initiate the RabbitMQ Server Cluster, execute the following code block in a command shell. This will create the tanzu-rabbitmq-cluster app, effectively launching your RabbitMQ Server Cluster.

Ensure you update the persistence:storageClassName to the appropriate value for your environment and adjust the persistence:storage size as necessary to meet your requirements.

An override is included to append Prometheus-based annotations. This addition facilitates scraping of metrics, ensuring your monitoring setup can effectively gather the data it needs from your RabbitMQ cluster.

kapp deploy -a tanzu-rabbitmq-cluster -y -f - <<EOF
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: rabbitmq-clusters
spec:
  replicas: 3
  imagePullSecrets:
  - name: tanzu-registry-credentials-secret
  service:
    type: LoadBalancer
  resources:
    limits:
      cpu: "1.5"
      memory: 2Gi
    requests:
      cpu: "1"
      memory: 2Gi
  rabbitmq:
    additionalPlugins:
      - rabbitmq_stream
      - rabbitmq_stream_management
    additionalConfig: |
      cluster_partition_handling = pause_minority
      cluster_formation.node_cleanup.interval = 10
      vm_memory_high_watermark_paging_ratio = 0.65
      vm_memory_high_watermark.relative = 0.7
      disk_free_limit.relative = 1.0
      collect_statistics_interval = 30000
      queue_leader_locator = balanced
      log.console.level = info      
    advancedConfig: ""
  persistence:
    storageClassName: "storage class name"
    storage: 20Gi
  override: 
    statefulSet:
      spec:
        template:
          metadata:
            annotations:
              prometheus.io/port: "15692"
              prometheus.io/scrape: "true"
EOF
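
Once applied, the Cluster Operator creates the StatefulSet, services, and a default user secret for the cluster. You can watch it come up with:

kubectl get rabbitmqcluster rabbitmq -n rabbitmq-clusters
kubectl get pods -n rabbitmq-clusters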

Production Recommendation: For production environments, it’s highly recommended to enhance your deployment with the following configurations to ensure optimal operation and reliability.

  • Pod Anti-Affinity: Implementing pod anti-affinity rules ensures that pods are distributed across different nodes. This practice enhances resilience and availability by reducing the likelihood that a single node failure will disrupt the service.

  • Priority Class: Assigning a priority class indicates the importance of the pods. By marking RabbitMQ pods as system critical, you prioritize their scheduling and resource allocation over lower-priority workloads. This helps maintain RabbitMQ’s performance and availability under resource contention.

Note: Ensure the system-cluster-critical priority class exists in your Kubernetes cluster. If it doesn’t, create it or choose an appropriate alternative and update the configuration accordingly. This setup is pivotal for maintaining robust service levels in a production environment.

Incorporate these settings into your RabbitMQCluster configuration like so:

spec:
  override: 
    statefulSet:
      spec:
        template: 
          spec: 
            priorityClassName: system-cluster-critical
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - rabbitmq
          topologyKey: kubernetes.io/hostname
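
As a quick check for the note above, verify that the priority class exists in your cluster before applying the override (system-cluster-critical is built into standard Kubernetes distributions):

kubectl get priorityclass system-cluster-critical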

Once your RabbitMQ cluster is successfully deployed, it will be ready to use within a few moments. Following this, you’ll want to obtain the external IP address and the credentials for accessing the RabbitMQ management interface or for connecting your applications to the RabbitMQ service. Here’s how you can retrieve this important information:

Obtaining the IP Address

To find out the external IP address assigned to your RabbitMQ cluster, use the following command. This IP address is crucial for accessing the RabbitMQ management UI or for application connections:

kubectl get svc rabbitmq -n rabbitmq-clusters -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

This command reads the rabbitmq service in the rabbitmq-clusters namespace (or your own namespace, if different) and prints the external IP address assigned by the load balancer.

Obtaining the Default Username and Password

RabbitMQ provides default credentials upon installation, which you can retrieve as follows:

  • Username: Execute the command below to get the default username:

    kubectl -n rabbitmq-clusters get secret rabbitmq-default-user -o jsonpath="{.data.username}" | base64 --decode
    
  • Password: Similarly, retrieve the default password using:

    kubectl -n rabbitmq-clusters get secret rabbitmq-default-user -o jsonpath="{.data.password}" | base64 --decode
    

With the IP address and credentials at hand, you can now access the RabbitMQ management UI by navigating to http://<IP Address>:15672 in your web browser, using the obtained username and password for login.

Your applications can connect to the RabbitMQ service using the retrieved IP address on port 5672 for messaging operations. This setup should now be fully operational and ready to integrate with your applications.

To perform a quick connectivity test with your RabbitMQ cluster using the RabbitMQ PerfTest tool, you can use the Docker image provided by the RabbitMQ team. This test helps ensure that your RabbitMQ deployment is functioning correctly and can accept connections. Before running the test, ensure you have Docker installed on your machine, and replace <username>, <password>, and <IP Address> with the actual values you obtained from your RabbitMQ setup:

docker run -it --rm pivotalrabbitmq/perf-test:latest --uri amqp://<username>:<password>@<IP Address>:5672 --id "connectivity test 1"

This command runs the PerfTest tool within a Docker container, attempting to connect to your RabbitMQ server using the specified credentials and IP address. The --id "connectivity test 1" option is optional and simply labels the test run for easier identification.

The output from this command will indicate whether the tool was able to successfully connect and interact with your RabbitMQ server. Successful connectivity suggests your RabbitMQ cluster is correctly configured and ready for further use and integration with your applications.

To streamline the management of the Cluster Operator directly from the command line, VMware provides a Kubectl plugin specifically designed for this purpose. This plugin extends kubectl with additional commands for RabbitMQ, making it easier to deploy and manage RabbitMQ clusters within your Kubernetes environment.

Advanced Authentication Configuration for RabbitMQ

To enhance the security and management of your RabbitMQ cluster, you might consider integrating advanced authentication mechanisms such as OAuth 2.0 or LDAP. These methods provide robust frameworks for managing access and permissions, offering a secure and scalable way to handle authentication and authorization.

OAuth 2.0 Integration

OAuth 2.0 offers a powerful standard for secure and flexible authentication. If you’re looking to integrate OAuth 2.0 with your RabbitMQ cluster, it can provide a seamless way to manage access tokens, granting different levels of access based on scopes and roles.

For a step-by-step tutorial on setting up OAuth 2.0 with RabbitMQ, check out Configure RabbitMQ with Oauth2.0. This article covers everything from setting up the OAuth 2.0 server to configuring RabbitMQ for OAuth 2.0 authentication.

LDAP Integration

LDAP is a well-established directory service protocol that allows for a centralized authentication mechanism. Integrating LDAP with RabbitMQ enables you to leverage your existing directory services for user authentication and authorization, streamlining user management and security.

For comprehensive instructions on integrating LDAP with your RabbitMQ cluster, visit Configure RabbitMQ with LDAP. This article provides detailed steps for configuring RabbitMQ to authenticate and authorize users based on LDAP directories.
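
As a rough sketch only (the linked article is the authoritative guide), LDAP authentication is typically enabled through the cluster's additionalPlugins and additionalConfig; the server address and DN pattern below are hypothetical placeholders:

spec:
  rabbitmq:
    additionalPlugins:
      - rabbitmq_auth_backend_ldap
    additionalConfig: |
      # hypothetical LDAP server and DN pattern - adjust for your directory
      auth_backends.1 = ldap
      auth_backends.2 = internal
      auth_ldap.servers.1 = ldap.example.com
      auth_ldap.user_dn_pattern = cn=${username},ou=users,dc=example,dc=com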

Installing the Kubectl RabbitMQ Plugin

To install the plugin, follow the instructions provided in the VMware documentation. Here is a brief overview of the steps involved:

  • Download the Plugin: The plugin can be downloaded from the VMware website or the specific release page associated with the RabbitMQ Operator. Ensure you select the version compatible with your Cluster Operator.

  • Install the Plugin: Once downloaded, make the binary executable and move it to a directory included in your system’s PATH. For Linux and macOS, this might look like:

    chmod +x ./kubectl-rabbitmq
    sudo mv ./kubectl-rabbitmq /usr/local/bin
    
  • Verify Installation: To ensure the plugin is correctly installed and accessible, you can run:

    kubectl rabbitmq --help
    

This command should display help information for the RabbitMQ plugin, confirming its successful installation.

Using the Kubectl RabbitMQ Plugin

With the plugin installed, you can now manage RabbitMQ clusters more intuitively. For example, creating a new cluster is as simple as:

kubectl rabbitmq create <cluster-name>

Likewise, you can list all RabbitMQ clusters, get details about a specific cluster, and even open the management UI directly from the command line. Check the plugin’s documentation for a comprehensive list of available commands and usage examples.
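
For instance, a few commonly documented subcommands look like the following; exact command names and flags can differ between plugin versions, so verify against kubectl rabbitmq --help:

kubectl rabbitmq list
kubectl rabbitmq get <cluster-name>
kubectl rabbitmq manage <cluster-name>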

This approach significantly simplifies the management of RabbitMQ clusters, allowing you to leverage Kubernetes-native commands and resources. For the most current and detailed instructions, always refer to the official VMware documentation.

Uninstall

To uninstall your RabbitMQ cluster from the Kubernetes environment, use the kapp tool that was previously used for the deployment. This tool provides a straightforward way to manage and delete applications deployed in your cluster. The commands below remove the RabbitMQ cluster and its associated components systematically; you can run them together as shown:

kapp delete -y -a tanzu-rabbitmq-cluster
kapp delete -y -a tanzu-rabbitmq-package-install
kapp delete -y -a tanzu-rabbitmq-package-repository
kapp delete -y -a tanzu-rabbitmq-serviceaccounts
kapp delete -y -a tanzu-rabbitmq-secrets
kapp delete -y -a tanzu-rabbitmq-namespaces

These commands initiate the deletion of all resources associated with the RabbitMQ cluster in the reverse order of their creation. It’s important to wait for each command to complete before moving on to the next to ensure all resources are properly cleaned up.

Remember, executing these commands will permanently remove the RabbitMQ cluster and associated data. Ensure you have backed up any important data or configurations before proceeding with the uninstallation process.

Github Repository

For those interested in deploying VMware RabbitMQ for Kubernetes with a streamlined approach, leveraging a repository like Github:vmware-rabbitmq-install-basic can significantly simplify the process. This repository contains templated code that utilizes ytt, a tool for generating YAML configurations from declared values, enabling customizable and repeatable deployments.

Advantages of Using the Repository

Simplicity: The repository abstracts away the complexity of manually creating inline YAML for RabbitMQ deployment.

Customization: With ytt, you can easily customize the deployment to meet your needs without editing YAML files directly.

Repeatability: Deployments can be repeated across different environments with minimal changes, ensuring consistency and saving time. By following the instructions and utilizing the templated configurations provided in the repository, you can efficiently deploy VMware RabbitMQ for Kubernetes, tailored to your specific requirements.
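
In broad strokes, a ytt-based flow renders the templates with your values and hands the result to kapp; the directory and file names below are hypothetical, as the actual layout is defined by the repository:

# Render templates with your own data values, then deploy the result with kapp
ytt -f templates/ --data-values-file my-values.yaml | kapp deploy -a tanzu-rabbitmq -y -f -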

Hope you had fun coding!

