This is a beta (i.e. in progress) version of the manual. Content/features are subject to change.

Update

Periodically, updates to the Inrupt Enterprise Solid Server (ESS) components are released. Follow the instructions below to update.

Step 1: Obtain ESS Reference Deployment Files

  1. Back up the minikube directory and .zip file (i.e., the existing configuration):

    cd ~/ESS
    mv minikube minikube-backup
    mv ess-minikube-deployment.zip ess-minikube-deployment-backup.zip
    
  2. Download the latest ESS Deployment .zip file:

    curl -O 'https://download.software.inrupt.com/<TOKEN>/release/raw/names/ESS-Minikube-Deployment/versions/<VERSION>/configuration-reference-minikube.zip'
    
  3. Once downloaded, unzip the file:

    unzip configuration-reference-minikube.zip
    
  4. Go into the minikube directory:

    cd minikube/
    

An upgrade may require only a version update in an ESS service’s Kubernetes deployment file (<service>-deployment.yaml), or it may require more comprehensive changes to the configuration files.

To determine which, consult the release notes. You can also contact Inrupt support.

If the upgrade only requires an update to the image version in <service>-deployment.yaml, set the new version:

...
   containers:
     - name: ess-<service>
       image: docker.software.inrupt.com/<docker image name>:<new version>
       imagePullPolicy: IfNotPresent
...
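
For example, if only the image tag changed, the update reduces to editing that one image: line and re-applying the file. The 1.1.2 tag below is hypothetical; take the actual image name and version from the release notes:

       image: docker.software.inrupt.com/<docker image name>:1.1.2

kubectl apply -f <service>-deployment.yaml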

If more comprehensive changes are required, the latest reference deployment files can be obtained as follows:

  1. Assuming the AWS Installation guide for ESS was followed, the Kubernetes configuration files are typically located at ${RELEASE_DIR}/deployment/kubernetes/aws.

    Back up all of the subdirectories in this directory in case they are needed for reference in the future (for example, to diff against the new files, as shown after these steps).

  2. Download the latest .zip file of the AWS reference deployment:

    rm -rvf /tmp/update-ess
    mkdir -pv /tmp/update-ess
    cd /tmp/update-ess
    curl -O 'https://download.software.inrupt.com/<TOKEN>/release/raw/names/ESS-AWS-Deployment/versions/<VERSION>/configuration-reference-aws.zip'
    
  3. Unzip the file and copy its contents into the kubernetes/aws directory:

    unzip configuration-reference-aws.zip
    cp -rvf deployment/kubernetes/aws/* ${RELEASE_DIR}/deployment/kubernetes/aws
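
Before applying anything, it can help to diff the new reference files against the backup taken in step 1, to see exactly what changed in this release. A sketch, assuming the old files were backed up to ~/ess-aws-backup (adjust to wherever you copied them):

diff -ru ~/ess-aws-backup ${RELEASE_DIR}/deployment/kubernetes/aws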
    

Step 2: Deploy New Kafka Cluster

Launch the latest strimzi.io-based Kafka service in the Kubernetes cluster:

kubectl apply -f 'https://strimzi.io/install/latest?namespace=ess'
kubectl apply -f kafka/
kubectl wait kafka/ess-event --for=condition=Ready --timeout=300s

The kubectl wait ... command waits until Kafka is fully deployed and ready. When ready, the output should resemble the following:

kafka.kafka.strimzi.io/ess-event condition met
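
If the wait command times out instead, the resource’s status conditions and the Strimzi operator logs usually show why:

kubectl describe kafka/ess-event
kubectl logs deployment/strimzi-cluster-operator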

Verify that the Kafka components in the namespace have spun up correctly:

kubectl get all

The output should resemble the following:

NAME                                             READY   STATUS    RESTARTS   AGE
pod/ess-event-entity-operator-75f7864d9d-cncll   3/3     Running   0          56s
pod/ess-event-kafka-0                            2/2     Running   0          101s
pod/ess-event-zookeeper-0                        2/2     Running   0          2m15s
pod/strimzi-cluster-operator-6c8d574d49-6pkd9    1/1     Running   0          3m13s
...

The AWS Installation guide for ESS creates a Managed Streaming for Apache Kafka (MSK) backend service instance. Generally, it should not be necessary to remove or replace the instance during an upgrade. However, should you need to remove or replace the instance, consult the official AWS documentation on version upgrades and high availability for MSK.

Step 3: Update Domain Name in Reference Configuration Files

Reconfigure the newly downloaded YAML files with the domain for your specific environment.

Important

You must escape all dots (.) in the domain.

Linux:

cd ~/ESS/minikube/
sed -i 's/local\-ess\.inrupt\.com/<your domain>/g' **/*.yaml **/*.sh

macOS:

cd ~/ESS/minikube/
sed -i '' 's/local\-ess\.inrupt\.com/<your domain>/g' **/*.yaml **/*.sh

Windows:

Go to the minikube directory:

cd %ESS_RELEASE_DIR%\minikube

Using a text editor or other tool, update the .yaml files with your specific domain.
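
Note that the **/*.yaml and **/*.sh globs above expand recursively only in shells with recursive globbing enabled (e.g., bash with shopt -s globstar, or zsh). If your shell does not expand them, a find-based equivalent works in any POSIX shell (on macOS, use sed -i '' as above):

find . -type f \( -name '*.yaml' -o -name '*.sh' \) \
    -exec sed -i 's/local\-ess\.inrupt\.com/<your domain>/g' {} +

Afterwards, a quick grep confirms that no occurrences of the old domain remain:

grep -r 'local-ess.inrupt.com' . || echo 'no occurrences left'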

Note

If you only updated the software version on each deployment, and the reference deployment files were not re-installed (leaving the old configuration in place), you can skip this step.

Follow the instructions in Step 6: Kubernetes of the installation guide to edit the new Kubernetes configuration files.

Step 4: Apply Latest ESS to Local Kubernetes Environment

  1. Apply the ESS configuration to your Kubernetes cluster:

    kubectl apply -f config/ -f ess/
    

    The output should resemble the following:

    configmap/identity-service-config unchanged
    configmap/ldp-service-config configured
    configmap/registrar-profile-config unchanged
    secret/inrupt-ess-secret unchanged
    deployment.apps/ess-ldp unchanged
    deployment.apps/ess-websocket configured
    deployment.apps/grafana unchanged
    deployment.apps/ess-identity unchanged
    
  2. Verify that the ESS components are running in the cluster:

    kubectl get all
    

    The output should resemble the following:

    NAME                                    READY   STATUS    RESTARTS   AGE
    pod/ess-identity-54bb74cb96-phn7j       1/1     Running   0          44m
    pod/ess-ldp-7d5bc75bcd-hpwnv            1/1     Running   2          44m
    pod/ess-websocket-7656ff466f-6c7kw      1/1     Running   0          44m
    pod/grafana-66b9fff47c-wvt87            1/1     Running   0          44m
    pod/postgres-6698d9f79b-hxbzf           1/1     Running   0          44m
    pod/postgres-metrics-5d44c54746-4pvkm   1/1     Running   0          44m
    pod/prometheus-7b747d45-g8qhm           1/1     Running   0          44m
    pod/webserver-8588dcf69-fgv89           1/1     Running   0          44m
    ...
    
  3. Restart the LDP and WebSocket services:

    kubectl rollout restart deployment.apps/ess-ldp deployment.apps/ess-websocket
    
    kubectl rollout status deployment.apps/ess-ldp deployment.apps/ess-websocket
    

    When the output resembles the following, the upgrade is complete:

    deployment "ess-websocket" successfully rolled out
    deployment "ess-ldp" successfully rolled out
    

Generally, only the deployment files and possibly the environment_config configuration files change for a new release. For example:

  • If only the software version in the 05_deployments/ files has changed, the update can be accomplished with a kubectl apply command:

    kubectl apply -f 05_deployments/
    

    The output should resemble the following:

    ...
    deployment.apps/ess-ldp configured
    ...
    
  • If multiple directories in the reference deployment have changed, they should be applied in the same order they appear in the installation guide, e.g.:

    kubectl apply -f 01_namespace/
    kubectl apply -f 02_services/
    kubectl apply -f 03_config/
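
In either case, kubectl diff can preview what would change before anything is applied (it exits non-zero when differences are found, so it is safe to run speculatively):

kubectl diff -f 05_deployments/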
    

When the changes to a pod are applied, Kubernetes creates new pods alongside the old ones, e.g.:

NAME                                READY   STATUS     RESTARTS   AGE   IP             NODE                                         NOMINATED NODE   READINESS GATES
efs-provisioner-79d8bccb6-s9qrt     1/1     Running    0          32m   10.1.100.119   ip-10-1-100-188.eu-west-2.compute.internal   <none>           <none>
ess-identity-756b9f5cbf-kmbr2       1/1     Running    0          32m   10.1.100.113   ip-10-1-100-131.eu-west-2.compute.internal   <none>           <none>
ess-ldp-69b9ddd6c7-9vwvz            0/1     Init:0/1   0          1s    <none>         ip-10-1-100-131.eu-west-2.compute.internal   <none>           <none>
ess-ldp-8f8f6674c-5798p             1/1     Running    0          32m   10.1.150.19    ip-10-1-150-159.eu-west-2.compute.internal   <none>           <none>
ess-websocket-57565d7c87-xcvv5      1/1     Running    0          32m   10.1.100.165   ip-10-1-100-131.eu-west-2.compute.internal   <none>           <none>
proxy-deployment-7cd9564454-d9pwx   1/1     Running    0          32m   10.1.150.205   ip-10-1-150-159.eu-west-2.compute.internal   <none>           <none>

At this point, there are two LDP pods: the old one, which is still running, and the new one (with a STATUS of Init:0/1), which is running its init container.

For a short period of time, both pods will be live and serving traffic. During this time, the ess-ldp service directs traffic to any live pod. This means the new service version must be compatible with the old one; as long as this condition holds, no downtime is required for any service.

Once enough new pods are running to fulfill the deployment’s replica count, Kubernetes tears down the old pods, leaving only the new ones:

NAME                                READY   STATUS    RESTARTS   AGE   IP             NODE                                         NOMINATED NODE   READINESS GATES
efs-provisioner-79d8bccb6-s9qrt     1/1     Running   0          33m   10.1.100.119   ip-10-1-100-188.eu-west-2.compute.internal   <none>           <none>
ess-identity-756b9f5cbf-kmbr2       1/1     Running   0          33m   10.1.100.113   ip-10-1-100-131.eu-west-2.compute.internal   <none>           <none>
ess-ldp-69b9ddd6c7-9vwvz            1/1     Running   0          51s   10.1.100.199   ip-10-1-100-131.eu-west-2.compute.internal   <none>           <none>
ess-websocket-57565d7c87-xcvv5      1/1     Running   0          33m   10.1.100.165   ip-10-1-100-131.eu-west-2.compute.internal   <none>           <none>
proxy-deployment-7cd9564454-d9pwx   1/1     Running   0          33m   10.1.150.205   ip-10-1-150-159.eu-west-2.compute.internal   <none>           <none>
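
The surge-and-teardown behavior above is governed by the Deployment’s rolling-update strategy. A sketch of the relevant fields with illustrative values (the reference deployment files may set these differently):

spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the replica count during a rollout
      maxUnavailable: 0    # never stop an old pod before its replacement is Ready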

Note

If there have only been configuration changes to the ConfigMaps, the ESS services need to be restarted to pick up those changes:

kubectl rollout restart deployment.apps/ess-ldp deployment.apps/ess-websocket

kubectl rollout status deployment.apps/ess-ldp deployment.apps/ess-websocket

The output should resemble the following:

deployment "ess-websocket" successfully rolled out
deployment "ess-ldp" successfully rolled out