In a GKE cluster, a backend service defines how Google Cloud HTTP(S) Load Balancing distributes incoming traffic. By default, new connections are distributed using a hash calculated from five pieces of information:
- The client’s IP address.
- The source port.
- The load balancer’s internal forwarding rule IP address.
- The destination port.
- The protocol.
You can modify the traffic distribution method for HTTP(S) traffic by specifying a session affinity option.
Let’s start with a couple of questions about console networking in a Workload Automation (WA) deployment on a GKE cluster:
- When hundreds of users are logged into the web console, how are inbound traffic requests handled?
- How is the inbound traffic from the console clients redirected to the multiple console instances installed as pods in the cluster?
With the Kubernetes proxy model, WA traffic bound for the service’s IP:port is proxied to an appropriate backend without the clients needing any knowledge of Kubernetes, Services, or pods.
If you want to be sure that all connections from a particular WA console client are always routed to the same WA console pod, you can set session affinity based on the client IP address by exposing the LoadBalancer_SessionAffinity service type in the configuration file of your WA deployment.
Continue reading this blog to discover exactly how to do that!
For more information about where to download the WA containers and the related Helm chart, see the appropriate README file:
- HCL customers:
https://github.com/WorkloadAutomation/hcl-workload-automation-chart/blob/master/README.md
- IBM customers:
https://github.com/WorkloadAutomation/ibm-workload-automation-chart/blob/master/README.md
To deploy the Workload Automation console and enable session affinity, you simply expose the LoadBalancer_SessionAffinity service type. This can be done by editing the values.yaml file, as in the following example.
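The snippet below is a minimal sketch of that edit. The section and key names (console, exposeServiceType) are assumptions based on a typical Workload Automation chart layout; check the README linked above for the exact parameters used by your chart version.

```yaml
# values.yaml (excerpt) -- illustrative sketch, not the definitive chart layout.
# The "console" section and "exposeServiceType" key are assumptions; verify the
# exact parameter names in the chart README for your release.
console:
  # Expose the console service through a load balancer with client-IP session
  # affinity, so each client always reaches the same console pod.
  exposeServiceType: LoadBalancer_SessionAffinity
```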
With session affinity enabled, you can be sure that you are always connected to the same console pod, keeping your session always active. In this way, you can continue to automate your workload without interruption.
Embrace the power of Google Cloud native services such as Google Cloud SQL. Workload Automation supports installing the server and console on Cloud SQL for SQL Server, so you can take advantage of the flexibility of a managed Google Cloud database. Check this out!
From the Google Cloud Platform Console, search for the “cloud sql” resource and create a new SQL Server instance.
Once your database instance is up and running, you can customize the values.yaml file with the details of your new database.
To install the Dynamic Workload Console on Cloud SQL, configure your deployment as follows:
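The excerpt below is an illustrative sketch of that configuration. The key names under the console database section, the placeholder host address, and the database and user names are assumptions; refer to the chart README for the exact parameters. Port 1433 is the SQL Server default.

```yaml
# values.yaml (excerpt) -- illustrative sketch for the console on Cloud SQL
# for SQL Server. Section and key names are assumptions; check the chart README.
console:
  db:
    type: MSSQL                        # SQL Server database type
    hostname: <CLOUD_SQL_INSTANCE_IP>  # IP address of your Cloud SQL instance
    port: 1433                         # default SQL Server port
    name: TWSDB                        # database created on the Cloud SQL instance
    user: sqlserver                    # user defined on the Cloud SQL instance
```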
To also install the server component on Cloud SQL, configure your deployment as follows:
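The server component follows the same pattern, again with illustrative key names pointing at the same Cloud SQL instance:

```yaml
# values.yaml (excerpt) -- illustrative sketch for the server on the same
# Cloud SQL instance. Section and key names are assumptions; check the chart README.
server:
  db:
    type: MSSQL
    hostname: <CLOUD_SQL_INSTANCE_IP>
    port: 1433
    name: TWSDB
    user: sqlserver
```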
Configure the WA server with an internal or public load balancer
If you need to manage traffic across multiple servers in your GCP cluster, you can opt for an internal or public load balancer.
- To deploy the Workload Automation server with a public load balancer, specify LoadBalancer as the service type.
- To deploy the Workload Automation server with an internal load balancer, specify LoadBalancer as the service type and add the internal load balancer annotation for GKE, as shown in the example below.
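The sketch below illustrates the second option. The exposeServiceType and exposeServiceAnnotation keys are assumptions about the chart layout; the annotation itself is the standard GKE annotation for an internal load balancer. Omit the annotation to get a public load balancer instead.

```yaml
# values.yaml (excerpt) -- illustrative sketch for exposing the server through
# an internal load balancer on GKE. The exposeServiceType/exposeServiceAnnotation
# keys are assumptions; the annotation is the standard GKE internal LB annotation.
server:
  exposeServiceType: LoadBalancer
  exposeServiceAnnotation:
    networking.gke.io/load-balancer-type: "Internal"
```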
Deploy your Workload Automation configuration for the console and server
After you have finished customizing the values in your values.yaml file, including the values explained earlier in this blog, you are ready to deploy your Workload Automation environment, including the console, on your GKE cluster.
For more information about how to deploy, read the following README files:
- HCL customers: https://github.com/WorkloadAutomation/hcl-workload-automation-chart
- IBM customers: https://github.com/WorkloadAutomation/ibm-workload-automation-chart
We hope you enjoyed this article and that you will take the time to try out a configuration of this kind. You won’t regret it. Send us your feedback and comments; they help us provide you with useful content!
Do not hesitate to reach out to us for any questions or doubts!
Authors
Federico Yusteenappar, Workload Automation Junior Software Developer, HCL Technologies
Federico joined HCL in September 2019 as Junior Software Developer working as a Cloud Developer for the IBM Workload Automation product suite. His main focus has been the extension of the Workload Automation product from a Kubernetes native environment to the OpenShift Container Platform. He has a Master’s degree in Computer Engineering.
LinkedIn: https://www.linkedin.com/in/federicoyusteenappar/
Pasquale Peluso, Workload Automation Software Engineer, HCL Technologies
Pasquale joined HCL in September 2019 as a member of the Verification Test team. He works as a verification tester for the Workload Automation product suite on distributed and cloud-native environments. He has a Master’s degree in Automation Engineering.
Davide Malpassini, Workload Automation Technical Lead, HCL Technologies
Davide joined HCL in September 2019 as a Technical Lead working on the IBM Workload Automation product suite. He has 14 years of experience in software development, and he was responsible for the extension of the Workload Automation product from a Kubernetes native environment to the OpenShift Container Platform and for the REST APIs of the Workload Automation engine. He has a Master’s degree in Computer Engineering.
Filippo Sorino, Software Developer, HCL Technologies
Filippo joined HCL in September 2019 as a Junior Software Developer and works as a Verification engineer for the IBM Workload Automation product suite. He has a Bachelor’s degree in Computer Engineering.
Serena Girardini, Verification Test manager, HCL Technologies
Serena is the Verification Test Manager for the Workload Automation product in distributed environments. She joined IBM in 2000 as a Tivoli Workload Scheduler developer, and she was involved in the product relocation from the San Jose Lab to the Rome Lab during a short-term assignment in San Jose, CA. For 14 years, Serena gained experience in the Tivoli Workload Scheduler distributed product suite as a developer, customer support engineer, tester, and information developer. For many years, she held the role of L3 fix pack release Test Team Leader and, during this period, she was a facilitator during critical situations and upgrade scenarios at customer sites. In her last 4 years at IBM, she became the IBM Cloud Resiliency and Chaos Engineering Test Team Leader. She joined HCL in April 2019 as an expert tester and was recognized as Test Leader for the product porting to the most important Cloud offerings in the market. She has a Bachelor’s degree in Mathematics.