Scaling
This page discusses horizontal scaling to meet increasing demands.
The page does not cover scaling supporting services such as Kafka, ZooKeeper, PostgreSQL, and metrics.
Horizontal Scaling of ESS
Horizontal scaling of ESS involves adding more worker nodes (VMs) to run additional instances of ESS services. For instance, instead of a single LDP service instance running on one server, you can run 2 instances of the LDP service, each on a separate server.
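As a quick, ad hoc illustration, assuming the LDP service runs as a Kubernetes Deployment named ess-ldp (the actual name depends on your installation), you could add an instance with kubectl and confirm the new replica count:

# Scale the assumed ess-ldp Deployment to 2 instances.
kubectl scale deployment ess-ldp --replicas=2

# Confirm the new replica count.
kubectl get deployment ess-ldp

Note that changes made with kubectl scale are not recorded in your Kustomize overlays; the sections below show how to scale declaratively.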
Scale Services Independently
With ESS, you can scale each service independently of the others. For example, you can run 3 instances of the LDP service and 1 instance of the Solid OpenID Connect service.
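As a sketch of independent scaling, assuming the LDP service runs as a Deployment named ess-ldp (an assumed name; check your deployment), a Kustomize patch can change the replica count of that Deployment only, leaving every other service at its existing count:

# kustomization.yaml (sketch; ess-ldp is an assumed Deployment name)
...
patches:
  # Scale only the LDP service; other Deployments, such as the Solid OpenID
  # Connect service, keep their existing replica count.
  - target:
      kind: Deployment
      name: ess-ldp
    patch: |-
      - op: add
        path: /spec/replicas
        value: 3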
Stateless Services
All user-facing ESS services (LDP, etc.) are stateless. Because the services are stateless, a user’s requests to a given service do not need to route to the same instance of that service, which makes horizontal scaling straightforward.
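Because no request affinity is required, a Kubernetes Service in front of the instances can load-balance every request across all pods without sticky sessions. The following is an illustrative sketch only; the names, selector, and ports are assumptions, not the actual ESS resources:

# ess-ldp-service.yaml (illustrative sketch; names and ports are assumed)
apiVersion: v1
kind: Service
metadata:
  name: ess-ldp
spec:
  selector:
    app: ess-ldp
  # No session affinity: any instance can serve any request.
  sessionAffinity: None
  ports:
    - port: 80
      targetPort: 8080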
Scale a Deployment
You can use Kustomize Overlays to scale your deployment.
Example: Scale a Deployment Manually
Create an overlay structure as described in Customize ESS.
Add the customization overlay:
# kustomization.yaml
...
patches:
  - target:
      kind: Deployment
      name: ess-signup-static
    patch: |-
      - op: add
        path: /spec/replicas
        value: 3
By adding more Signup page pods, this customization also improves resiliency.
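To roll out the change, apply your overlay and verify the new replica count. The overlay path below is an example; use your own directory layout:

# Apply the overlay (example path).
kubectl apply -k overlays/example

# Verify that the Signup Deployment now reports 3 replicas.
kubectl get deployment ess-signup-static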
Example: Scale Using a Horizontal Pod Autoscaler
Create an overlay structure as described in Customize ESS.
Add the customization overlay:
# kustomization.yaml
...
resources:
  - ess-ldp-autoscaler.yaml
# ess-ldp-autoscaler.yaml
# ESS Autoscaling
#
# This needs a Metrics Server to be installed. See:
# https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
# The command to install a metrics server is:
# `kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml`
#
# When the average CPU load is below 40 percent, the autoscaler tries to reduce
# the number of instances in the deployment, to a minimum of nine, which means
# it uses more than one machine.
# When the load is greater than 40 percent, the autoscaler tries to increase
# the number of instances in the deployment, up to a maximum of twenty.
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ess-ldp
spec:
  maxReplicas: 20
  minReplicas: 9
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ess-ldp
  # Average CPU usage across all LDP instances in the cluster.
  targetCPUUtilizationPercentage: 40
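After applying the overlay, you can check the autoscaler's status, including the observed CPU utilization and the current replica count:

# Show current and target CPU utilization and replica counts.
kubectl get hpa ess-ldp

# Show scaling events and conditions in detail.
kubectl describe hpa ess-ldp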