Run MATLAB Parallel Server on Kubernetes
Deploy MATLAB® Parallel Server™ and MATLAB Job Scheduler on a Kubernetes® cluster using a Helm® chart.
Requirements
To use this reference architecture, you need:
A running Kubernetes cluster that meets the following conditions:
Uses Kubernetes version 1.21.1 or later.
Meets the system requirements for running MATLAB Job Scheduler. For details, see MATLAB Parallel Server Product Requirements.
Is configured to create external load balancers that allow traffic into the cluster.
kubectl command-line tool installed on your computer and configured to access your Kubernetes cluster. For help with installing kubectl, see Install Tools on the Kubernetes website.
Helm package manager version 3.8.0 or later installed on your computer. For help with installing Helm, see Quickstart Guide on the Helm website.
Network access to the MathWorks® container registry, containers.mathworks.com, and the GitHub® Container registry, ghcr.io.
A MATLAB Parallel Server license. For an overview of MATLAB Parallel Server licensing, see Determining License Size for MATLAB Parallel Server.
You can use either:
A network license manager for MATLAB hosting sufficient MATLAB Parallel Server licenses for your cluster. The license manager must be accessible from the Kubernetes cluster. You can install or use an existing network license manager running on-premises or on AWS®. To install a network license manager on-premises, see Install License Manager on License Server. To deploy a network license manager reference architecture on AWS, select a MATLAB release from Network License Manager for MATLAB on AWS (GitHub).
A MATLAB Parallel Server license configured to use online licensing. To view and modify the license manager type, see Select Licensing Configuration Option.
By default, the MATLAB Parallel Server installation provided as part of the MathWorks reference architecture for Kubernetes uses a network license manager. This license manager might differ from your current license configuration.
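To check that your local tools meet the minimum versions listed above, you can compare the versions that kubectl and Helm report against the documented minimums. The following is a minimal sketch, not part of the reference architecture: the `ver_ge` helper is a hypothetical convenience for comparing dotted version strings, and it assumes a `sort` that supports the `-V` (version sort) option, such as GNU coreutils.

```shell
# Hypothetical helper: succeeds when version $1 >= version $2 in dotted order.
# Relies on "sort -V" (GNU coreutils version sort).
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the versions your tools report (for example, from "helm version
# --short" and "kubectl version") against the documented minimums.
ver_ge "3.12.3" "3.8.0"  && echo "helm version OK"        # minimum 3.8.0
ver_ge "1.27.4" "1.21.1" && echo "Kubernetes version OK"  # minimum 1.21.1
```

Substitute the versions your own installation reports for the example values shown.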
Deploy from GitHub
To deploy MATLAB Parallel Server in Kubernetes, use the Helm chart and deployment instructions provided in the following GitHub repository:
Architecture and Resources for MATLAB Parallel Server in Kubernetes
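The repository documents the authoritative chart name, repository URL, and configuration values. As a hypothetical sketch of the typical Helm workflow only, a deployment generally follows this shape; every angle-bracketed placeholder must come from the repository instructions.

```shell
# Hypothetical sketch of a typical Helm deployment workflow. The real chart
# name, repository URL, and required values are documented in the GitHub
# repository linked above.
helm repo add <repo-name> <repository-url>   # register the chart repository
helm repo update

# Install the chart into its own namespace, supplying your cluster settings
# (MATLAB release, number of workers, license details, and so on) in a values
# file created by following the repository instructions.
helm install mjs <repo-name>/<chart-name> \
    --namespace mjs --create-namespace \
    --values values.yaml
```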
Deploying MATLAB Parallel Server and MATLAB Job Scheduler onto a Kubernetes cluster creates several pods and a load balancer in your Kubernetes cluster. This table summarizes the deployed resources.
| Resource Name | Description |
|---|---|
| | The MATLAB Job Scheduler job manager, the central management point for submitting, queuing, and distributing MATLAB jobs across the available workers in the cluster. |
| mjs-controller | A controller process that monitors MATLAB Job Scheduler and creates and destroys MATLAB workers based on the job manager's requirements. |
| | MATLAB workers, which are independent processes that perform computations. Each worker has its own pod. |
| Load Balancer | A load balancer service that exposes MATLAB Job Scheduler to MATLAB clients running outside of the Kubernetes cluster. |
| | A proxy process that routes traffic from the load balancer to the MATLAB Job Scheduler and worker pods. |
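After the deployment completes, you can list the resources the chart created. A sketch, assuming the chart was installed into a namespace named mjs (substitute the namespace you chose):

```shell
# List the pods the chart created: job manager, controller, workers, proxy.
kubectl get pods --namespace mjs

# Show the services, including the load balancer and its external address,
# which MATLAB clients outside the cluster use to reach MATLAB Job Scheduler.
kubectl get services --namespace mjs
```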
By default, connections between MATLAB clients and the pods in Kubernetes are verified using mutual TLS (mTLS). This graphic shows the connections between the client and the pods.
If your MATLAB Job Scheduler cluster is configured to support multiple MATLAB releases, and any of them are earlier than R2026a, the Helm chart also deploys these additional resources.
| Resource Name | Description |
|---|---|
| mjs-pool-proxy-n | A proxy process for parallel pools of MATLAB workers for releases earlier than R2026a. Each worker is associated with a specific proxy. |
| mjs-parallel-server-proxy | A proxy process that routes traffic from MATLAB clients running R2026a or later to the MATLAB Job Scheduler and worker pods. |
Before R2026a: When you deploy MATLAB Parallel Server and MATLAB Job Scheduler on a Kubernetes cluster, the deployment also creates mjs-pool-proxy-n pods that proxy parallel pool traffic for the MATLAB workers. Each MATLAB worker in the cluster uses a dedicated pool proxy pod, and the mjs-controller creates and destroys these pool proxies based on the number of workers, placing each proxy in its own pod.
This graphic shows the architecture that the deployment creates in R2025b and
earlier.