Install on AWS

Overview

The purpose of this document is to walk you through the process of installing Inrupt’s Enterprise Solid Server (ESS) onto the Elastic Kubernetes Service (EKS) in Amazon Web Services (AWS).

Note

This guide has been certified on client workstations with the following configurations: macOS 10.15.4 and zsh 5.7.1. There are no known issues using this guide with another Linux-based operating system or interactive shell; however, subtle differences may cause unexpected results. If in doubt, contact Inrupt support to discuss.

Architecture

This installation guide creates a high availability deployment in AWS.

Inrupt ESS AWS Deployment Architecture Diagram

Key Architectural Points

  • All access to the Inrupt services passes through an Elastic Load Balancer (ELB). These are the only resources in the public subnets, and are created by EKS processes. They are configured to only allow traffic on ports 80 and 443 (HTTP and HTTPS), and only allow that traffic to pass through to the “proxy” containers in EKS.

  • The proxy containers are nginx containers that serve two purposes:

    • Route application traffic to one of the public-facing Inrupt service containers (identity and LDP). By using kube-dns, these containers can be scaled out without any changes to the client or proxy logic.

    • Serve a basic Resource Description Framework (RDF) document to internal HTTP requests. This is the “registrar agent” and will be covered later in this document.

  • EKS capacity is managed by different Auto Scaling Groups (ASGs). This allows the EKS capacity to scale up or down depending on needs:

    • “ess” services: The services providing the Inrupt public Application Programming Interfaces (APIs) (identity, LDP), as well as the proxy containers, are placed on EKS nodes marked for “ess” services.

    • “backend” services: Any remaining services are placed on EKS nodes marked for “backend” services. For this installation guide, the “backend” services include the monitoring and logging services, such as Prometheus, Grafana, Postgres Metrics and Kibana.

    • This separation allows the two sets of services to scale up/down independently of each other. They will not use the same underlying EC2 worker nodes.

  • The EKS worker nodes will be provisioned with an instance role. That role will allow the containers access to Key Management Service (KMS) keys and Systems Manager (SSM) Parameter Store secrets. This allows the containers to pull sensitive information into their processes without exposing the secrets via the Kubernetes API or storing them in code repositories or other insecure locations.

  • For simplicity, this installation guide will show Kubernetes pulling images directly from Inrupt’s software repositories. However, it is recommended that customers set up their own Docker image repository, such as Elastic Container Registry (ECR), to avoid an explicit reliance on an outside resource. Depending on the license contract, customers may also be rate or bandwidth limited in their access to Inrupt’s repositories. Pulling images from an ECR repository in the same AWS region as the containers will speed up the time it takes for EKS to launch containers and limit risk to components the customer directly manages.

  • EKS nodes will also have access to an Elastic File System (EFS) volume for some local storage shared between service containers.

  • Several backend services are required outside of the Kubernetes cluster. This installation guide will walk through creating these using AWS managed services, but customers may choose to provide their own (either via a managed service or manually managed instances):

    • Kafka/Zookeeper - this will be provided via a Managed Streaming for Apache Kafka (MSK) cluster in this guide. This provides asynchronous event messaging between ESS components.

    • PostgreSQL relational database - this will be provided via a managed Relational Database Service (RDS) instance in this guide. This provides the RDF and binary data storage for the ESS system.

  • Customers will most likely wish to also install monitoring systems such as Grafana and Prometheus. This guide will not walk through the installation and configuration of these systems, but for additional information users can refer to the appropriate installation guide.

  • The ESS system consists of several microservices, each one available as a self-contained Docker image.

    Docker aims to make containers entirely platform independent so that they run on any Docker or Kubernetes host system. However, differences in the virtualization stack (e.g., Hyper-V on Windows) can cause unexpected differences.

    Note

    Inrupt certifies that all of its containers have been tested on and will run on recent versions of Amazon Linux 2. There are no known issues running Kubernetes worker nodes with another RHEL based Linux distribution, but these have not yet been certified by Inrupt.

Prerequisites

Site Planning

ESS by design requires users to access the system via known domain names. Obtain a domain (e.g., mycompany.com) from a domain registrar, and be familiar with adding CNAME and TXT based records to your Domain Name System (DNS) provider. In addition to the root level domain, you will need to reserve and secure subdomains for each public facing API service.
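
For the reference deployment in this guide, which exposes the identity and LDP services publicly, the hostnames would look similar to the following (shown for the example domain mycompany.com; your actual subdomain names may differ):

identity.mycompany.com
ldp.mycompany.com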

Repository Account

The ESS Docker images and install configuration files are hosted on Inrupt’s CloudSmith Repository. To download, you need to obtain the following details from Inrupt:

  • An entitlement token, and

  • The version number of the software that you can download with the token.

To obtain the token and associated version number, contact Inrupt (see Obtaining Docker Images later in this guide for the request process).

Terraform

Inrupt strongly recommends following the Infrastructure-As-Code process to manage your AWS inventory and the promotion of changes between environments. This guide will leverage Terraform for this purpose. The latest version of Terraform is recommended, but version 0.12 or later is required (0.12 introduced breaking changes to the syntax over 0.11). Terraform can be downloaded and installed from Terraform’s official site.

To check the Terraform installation and version, run:

terraform --version

The operation returns the version information, e.g.:

Terraform v0.12.26

Command Line Utilities

Several command line utilities are required to perform the ESS installation. These are all free and open source.

curl

curl is a command line application used for a variety of data transfer operations, especially with HTTP based services. The installation guide uses curl to test some services via HTTP.

curl is installed by default with recent Ubuntu and macOS installations, and is available for almost all *nix package managers, or can be compiled from source. For more information, see curl’s official download page.

To check the curl installation and version, run:

curl --version

The operation returns the version information, e.g.:

curl 7.64.1 (x86_64-apple-darwin19.0) libcurl/7.64.1 (SecureTransport) LibreSSL/2.8.3 zlib/1.2.11 nghttp2/1.39.2
Release-Date: 2019-03-27
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS GSS-API HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL UnixSockets

jq

jq is a command line application for processing JSON strings. The install guide uses jq to parse responses from calls to web based services like the AWS API.

To install, refer to the jq official download page that includes several options including single-file binary downloads.

To check the jq installation and version, run:

jq --version

The operation returns the version information, e.g.:

jq-1.6
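
As an illustration of the kind of parsing used later in this guide, jq can extract a single field from an AWS CLI JSON response. This sketch assumes the ${ENVIRONMENT} variable and AWS profile that are configured in later steps; the .Account field is just an example:

aws --profile ${ENVIRONMENT} sts get-caller-identity | jq -r '.Account'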

kubectl

kubectl is a command line utility to interface with Kubernetes clusters. This installation guide uses kubectl to launch and manage ESS containers in an AWS EKS cluster.

To install, refer to the kubectl installation instructions.

To check the kubectl installation and version, run:

kubectl version

The operation returns the version information, e.g.:

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server kubernetes.docker.internal:6443 was refused - did you specify the right host or port?

Note

The version request response should include the “Client Version” information.

The response may also include an error message like the one above if you do not have a Kubernetes server running locally. You can ignore the message as long as the version information is returned.

git

git will be used to pull some third party utilities used to create TLS certificates.

To install, follow the instructions at https://git-scm.com/downloads.

To check the git installation and version, run:

git --version

The operation returns the version information, e.g.:

git version 2.19.0

Amazon Web Services

AWS Account

For the creation and organization of AWS accounts, refer to your organization’s policies.

This installation guide assumes a single AWS account and uses an IAM user with administrator access in that account.

To create the IAM user:

  1. Log in to the AWS Console.

  2. Navigate to the IAM service.

  3. Create a new IAM user.

  4. Assign the new user the AdministratorAccess policy and request programmatic access.

Important

Make note of the access key ID and secret access key when creating the IAM user. These are your credentials for accessing the AWS APIs and should be secured as highly sensitive information, as they can do almost anything in your AWS account. They will be used later to configure your AWS CLI and Terraform interactions.

AWS CLI

If possible, all interactions with AWS should be done through Terraform. When that is not possible, users can use the AWS Command Line Interface (CLI) to interact with AWS. Additionally, Terraform uses the CLI configuration to obtain a profile and access/secret keys.

Version 2 of the CLI will be required for some segments of this install guide. Full installation instructions are available here.

To check the CLI installation and version:

aws --version

The operation returns the version information, e.g.:

aws-cli/2.0.17 Python/3.7.4 Darwin/19.4.0 botocore/2.0.0dev21

Installation

The steps for deploying a full installation of ESS to AWS are described below.

Step 1: Environment Variables

Throughout this guide, environment variables will be used to store repeated values to allow copy-paste of commands. These environment variables are:

Variable

Description

AWS_REGION

Name of the AWS region (e.g., us-east-1, eu-west-2) into which ESS will be deployed. To set:

export AWS_REGION=eu-west-2

ENVIRONMENT

Name of the environment (e.g., dev, stage, or prod) to which ESS will be deployed. To set:

export ENVIRONMENT=dev

ROOT_DIR

Root directory for the ESS deployment. To set and make the directory:

export ROOT_DIR=/tmp/installation-guide
mkdir -pv ${ROOT_DIR}

More environment variables will be set during the course of the installation.

Step 2: AWS Configuration

AWS Profile

Various points in this installation guide will refer to the “AWS Profile”. Interaction with the AWS CLI can be done with profiles, usually defined in ~/.aws/credentials. To create the ~/.aws/credentials file:

  1. Create and edit the ~/.aws/credentials file:

    mkdir -p ~/.aws
    vi ~/.aws/credentials
    
  2. Update the file with content similar to the following, substituting your AWS access key id and secret:

    [default]
    output=json
    region=us-east-1
    
    [dev]
    aws_access_key_id = LZVCODJKQWS
    aws_secret_access_key = LZVCODJKQWS
    region = eu-west-2
    
    [prod]
    aws_access_key_id = XATLTWAUPZK
    aws_secret_access_key = NCCALCCYTNV
    region = us-west-1
    

The above example defines a default profile. This profile will be used if another profile is not explicitly specified, but as the default profile has no keys, you will not be able to do anything with it. It also defines dev and prod profiles with different sets of keys and different regions. Additional details, including how to manage AWS credentials other than with the ~/.aws/credentials file, can be found at https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html.
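
As an alternative to editing ~/.aws/credentials by hand, the aws configure command can populate the same profile entries interactively (it prompts for the access key ID, secret access key, region, and output format). For example, to create or update the dev profile:

aws configure --profile dev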

To verify that your profile and credentials are configured correctly, run:

aws --profile ${ENVIRONMENT} sts get-caller-identity

The operation returns:

{
  "UserId": "LMNOPQRSTUV",
  "Account": "123456789012",
  "Arn": "arn:aws:iam::123456789012:user/yourname"
}

Important

Make note of the profile name configured here. This will be referred to as the “AWS Profile” later in this guide.

SSH Keys

Secure Shell (SSH) keys will generally not be needed in the ESS setup. Services should either be running in Kubernetes, where kubectl can grant access to necessary systems, or in managed services (e.g., RDS, MSK, etc.) where SSH access is not possible. However, this tutorial provisions EC2 instances (e.g., the Kubernetes worker nodes) with SSH keys so that, in emergencies or for debugging in pre-production environments, operators can access the instances via SSH.

SSH keys cannot practically be generated from Terraform. There are some work-arounds that make it possible, but this is generally discouraged because it does not fit the Infrastructure-As-Code pattern well, and it can leave the key material in plain text in Terraform state files. Instead, generate the key using the AWS CLI.

The example below assumes you are generating a key named ssh-${ENVIRONMENT}:

mkdir -p ${ROOT_DIR}/ssh
cd ${ROOT_DIR}/ssh
aws --profile ${ENVIRONMENT} --region ${AWS_REGION} ec2 create-key-pair --key-name ssh-${ENVIRONMENT} --query 'KeyMaterial' --output text > ssh-${ENVIRONMENT}.pem
chmod 0600 ssh-${ENVIRONMENT}.pem

You can view the content of the file:

cat ssh-${ENVIRONMENT}.pem

The file contains the private key:

-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----

Note

SSH keys should be treated as highly sensitive data, like any other security keys in the customer environment. Save the ssh-${ENVIRONMENT}.pem file somewhere safe, such as 1Password or another secrets manager. You will need the key name (ssh-${ENVIRONMENT}) for the Terraform configuration later, and you will need the .pem key file for SSH access.
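
For reference, a sketch of how the key would be used in an emergency. It assumes the worker nodes run Amazon Linux 2 (default user ec2-user), that your network path and security groups allow SSH to the node, and that the placeholder below is replaced with a real worker node private IP:

ssh -i ${ROOT_DIR}/ssh/ssh-${ENVIRONMENT}.pem ec2-user@<worker-node-private-ip>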

Step 3: Download ESS Configuration Files

This section will walk you through the process of obtaining the ESS configuration files.

  1. Create a directory inside of ROOT_DIR that this guide will refer to as RELEASE_DIR:

    cd ${ROOT_DIR}
    mkdir release
    cd release
    export RELEASE_DIR=$(pwd)
    
  2. Download the configuration files from Inrupt using the <TOKEN> and <VERSION> registry account values provided by Inrupt:

    cd ${RELEASE_DIR}
    curl -1 -o 'configuration.zip' 'https://download.software.inrupt.com/<TOKEN>/release/raw/names/ESS-AWS-Deployment/versions/<VERSION>/configuration-reference-aws.zip'
    unzip configuration.zip
    

The configuration package contains a number of resources to help install, configure and maintain an Enterprise Solid Server system in various ecosystems (e.g., local install, shared server, AWS cloud installation, etc.). This Installation Guide will be leveraging files in deployment/infrastructure/aws and deployment/kubernetes/aws.

Step 4: Create Infrastructure

Following the Infrastructure-As-Code concept, infrastructure assets will be created using Terraform as the management tooling. The Terraform configure.tf file contains all the variables set to their default values. To modify the values, you can edit the configure.tf file and/or use a variable definition file.

  1. Download and unzip the Terraform files using the <TOKEN> and <VERSION> registry account values provided by Inrupt:

    cd ${RELEASE_DIR}
    curl -1 -o 'infrastructure.zip' 'https://download.software.inrupt.com/<TOKEN>/release/raw/names/ESS-AWS-Infrastructure/versions/<VERSION>/infrastructure-reference-aws.zip'
    unzip infrastructure.zip
    
  2. Create a variable definition file. The following creates a file .variables/<ENV>.tfvars, substituting your environment value set in Step 1: Environment Variables :

    cd ${RELEASE_DIR}/deployment/infrastructure/aws
    mkdir -p .variables
    vi .variables/<ENV>.tfvars
    
  3. In the .variables/<ENV>.tfvars file, specify values for the following properties:

    Note

    These are the only variables that can be included in the variable definition file. For other variables, use the configure.tf file.

    • environment_name set to the value from Step 1: Environment Variables.

    • ssh_key_name set to the value from SSH Keys (i.e., "ssh-${ENVIRONMENT}" ).

    • domain set to the domain under which you wish to host ESS resources (e.g., “mycompany.example.com”)

    • aws_region set to the value set in Step 1: Environment Variables.

    • aws_profile_name set to the AWS Profile name configured in Step 2: AWS Configuration (e.g., “dev”).

    environment_name = "dev"
    ssh_key_name = "ssh-dev"
    domain = "mycompany.example.com"
    aws_region   = "us-west-2"
    aws_profile_name = "dev"
    

All other variables are optional and have cost and performance implications (e.g., running 2-4 worker nodes instead of only 1 in EKS, or running db.t3.small instances for RDS instead of micro or medium). To modify these other variables, edit the values in the configure.tf file.

Terraform Init

Run terraform init to initialize the working directory and download the required provider plugins (e.g., the AWS provider):

cd ${RELEASE_DIR}/deployment/infrastructure/aws
terraform init

The operation returns its progress:

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "http" (hashicorp/http) 1.2.0...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.64.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.64"
* provider.http: version = "~> 1.2"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform Workspaces

While not strictly necessary, it is recommended that users leverage Terraform workspaces, as they compartmentalize state for more complicated deployments with multiple environments, different sets of AWS credentials, separate state files, etc.

Note

If you decide to not create a workspace, your state files will be in the infrastructure/aws directory as terraform.tfstate and terraform.tfstate.backup rather than the values shown later in this document of terraform.tfstate.d/${ENVIRONMENT}/terraform.tfstate and terraform.tfstate.d/${ENVIRONMENT}/terraform.tfstate.backup.

To create the new Terraform ${ENVIRONMENT} workspace:

cd ${RELEASE_DIR}/deployment/infrastructure/aws
terraform workspace new ${ENVIRONMENT}

Note

If you are working with an existing set of Terraform state files, you can use terraform workspace select ${ENVIRONMENT} instead of new.
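
To check which workspaces exist and which one is currently selected (the current workspace is marked with an asterisk in the list output):

cd ${RELEASE_DIR}/deployment/infrastructure/aws
terraform workspace list
terraform workspace show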

Terraform Plan

To view (but not apply) the proposed changes to your infrastructure, run terraform plan with the -var-file option set to your variable definition file. No changes will be applied.

cd ${RELEASE_DIR}/deployment/infrastructure/aws
terraform plan -var-file .variables/<ENV>.tfvars

The operation returns the proposed changes information:

Refreshing Terraform state in-memory prior to plan...
... lots of Terraform output ...
Plan: ## to add, 0 to change, 0 to destroy.

Terraform Apply

To apply the proposed changes to your infrastructure, run terraform apply with the -var-file option set to your variable definition file. The same output as terraform plan will be shown, and you will be prompted for confirmation before the changes are applied.

  1. Run terraform apply:

    cd ${RELEASE_DIR}/deployment/infrastructure/aws
    terraform apply  -var-file .variables/<ENV>.tfvars
    

    The operation returns the proposed changes to apply, followed by the confirmation prompt:

    ...
    Plan: ## to add, 0 to change, 0 to destroy.
    
    Do you want to perform these actions in workspace "dev"?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value:
    

    At this point, no changes have been made; this is only another view of the proposed changes. Only an answer of yes will allow you to continue. Any other response will exit Terraform.

    Note

    Applying the changes will take approximately 15-20 minutes to run. The EKS cluster creation alone will take 10-15 minutes.

  2. Enter yes to continue.

    The operation returns various status information similar to the following:

    ...
    Apply complete! Resources: ## added, 0 changed, 0 destroyed.
    
    Outputs:
    
    output-notes =
    ########## Important Variables to Note ##########
    
    For Kubernetes deployment:
      AWS_REGION:        eu-west-2
      POSTGRES_URL:      ess-dev-postgres-ldp.123a4567b890.eu-west-2.rds.amazonaws.com:5432
      POSTGRES_PASSWORD: ABigLongStringFORTheInstallationGuide.ldp
      TLS_KAFKA:         b-2.ess-dev-kafka.....c4.kafka.eu-west-2.amazonaws.com:9094,b-1.ess-dev-kafka.....c4.kafka.eu-west-2.amazonaws.com:9094
      EFS_ID:            fs-a123bc4d
    
    Other useful information:
      AWS profile:            dev
      EKS Cluster Name:       ess-dev-cluster
      Instance Role ARN:      arn:aws:iam::123456789012:role/ess-dev-eks-worker-nodes
      Current user IP:        12.345.678.901
      LDP Postgres Cert ID:   rds-ca-2019
      MSK Zookeeper Connect:  z-3.ess-dev-kafka.....c4.kafka.eu-west-2.amazonaws.com:2181,z-2.ess-dev-kafka.....c4.kafka.eu-west-2.amazonaws.com:2181,z-1.ess-dev-kafka.....c4.kafka.eu-west-2.amazonaws.com:2181
      MSK Plaintext Boostrap: b-2.ess-dev-kafka.....c4.kafka.eu-west-2.amazonaws.com:9092,b-1.ess-dev-kafka.....c4.kafka.eu-west-2.amazonaws.com:9092
    
    ################################################
    

Note

The actual outputs will be different and specific for your environment.

Make note of all the outputs displayed at the end of this process above (i.e., everything after Outputs:), as these will be used in subsequent steps. If you lose this output and need to see it again, terraform apply will show it again, even if no infrastructure changes need to be applied.
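
Alternatively, you can redisplay just the outputs from the saved state without proposing any changes, assuming the ${ENVIRONMENT} workspace is still selected:

cd ${RELEASE_DIR}/deployment/infrastructure/aws
terraform output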

Step 5: Manage Secrets

The Inrupt microservices use a number of environment variables to configure functionality. For example, the Inrupt LDP service image will use QUARKUS_DATASOURCE_PASSWORD and QUARKUS_DATASOURCE_USERNAME to set the credentials to connect to the Postgres database. In a development environment, these values can simply be set as environment variables on the containers. However, plain environment variables are stored and passed in clear text, so they should not be used in production environments or in any environment with sensitive data.

(Recommended) For AWS deployment, the AWS SSM Parameter Store can be used to store the variables.

Terraform creates all of the ESS secrets in the AWS SSM Parameter Store. Most of the ESS secrets have valid default values, whereas some have a default value of to_be_defined. For those with the to_be_defined value, you must manually set them to valid values.

  1. Log into the AWS web console.

  2. Switch to the region in which you are deploying (e.g., eu-west-2).

  3. Open the AWS Systems Manager console.

  4. Select Parameter Store from the left navigation.

  5. Update the following parameters:

    • /ess-<ENVIRONMENT>/identity-service-keystore

    • /ess-<ENVIRONMENT>/proxy-service-server.crt

    • /ess-<ENVIRONMENT>/proxy-service-server.key

    Note

    • Replace the <ENVIRONMENT> in the parameter names with the value of the ${ENVIRONMENT} environment variable set previously.

    • Encrypt each parameter as a SecureString (rather than the default String), and use the KMS key alias/ess-<ENVIRONMENT>-eks-container-key that was created by Terraform rather than the default AWS owned key. (If you prefer the AWS CLI to the console, a put-parameter sketch follows this list.)

  6. Create the following parameters:

    • /ess-<ENVIRONMENT>/ess-jwks.key

    • /ess-<ENVIRONMENT>/ess-jwks.crt

    • /ess-<ENVIRONMENT>/fluentd-auditor.key

    • /ess-<ENVIRONMENT>/fluentd-auditor.crt
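
If you prefer the AWS CLI to the web console, the same parameters can be created or updated with aws ssm put-parameter. A sketch for one parameter, assuming the KMS key alias created by Terraform; /path/to/server.crt.base64 is a placeholder for wherever you stored the Base64 encoded value:

aws --profile ${ENVIRONMENT} --region ${AWS_REGION} ssm put-parameter \
  --name "/ess-${ENVIRONMENT}/proxy-service-server.crt" \
  --type SecureString \
  --key-id alias/ess-${ENVIRONMENT}-eks-container-key \
  --value "$(cat /path/to/server.crt.base64)" \
  --overwrite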

The following table gives a description of the parameters:

Parameter Name

Description

/ess-<ENVIRONMENT>/identity-service-keystore

JSON Web Key Store (JWKS) which can be generated using a number of methods (see Appendix B). The parameter value should be a multi-line string similar to:

{
  keys: [
    {
      e: 'A..B',
      n: 'c..D',
      d: 'E..F',
      p: 'G..H',
      q: 'i..j',
      dp: 'K..L',
      dq: 'm..n',
      qi: 'o..P',
      kty: 'RSA',
      kid: 'Q..R',
      alg: 'RS256',
      use: 'sig'
    }
  ]
}

Default: to_be_defined

/ess-<ENVIRONMENT>/postgres-db

Reference to the backend storage system.

Default: Set to the JDBC URL to the Postgres that was created by Terraform.

/ess-<ENVIRONMENT>/postgres-username

Username of the Postgres user.

Default: Set to ess.

/ess-<ENVIRONMENT>/postgres-password

Password of the Postgres user.

Default: Set to the password for the Postgres DB that was created by Terraform.

/ess-<ENVIRONMENT>/proxy-service-server.crt

Base64 encoded representation of the signed certificate.

Note

All external access to the ESS services passes through the proxy containers in EKS. These are exposed to the internet via Elastic Load Balancers in the public subnets.

These proxy services need to be secured with signed TLS certificates. Inrupt recommends using Let’s Encrypt for the purposes of this reference deployment. In a production environment, it is up to the customer to determine a suitable TLS certificate provider.

For information on creating Certificate Authority (CA) signed certificates, see Create External Certificate/Key.

For information on generating Base64 encoded representation of the certificates, refer to Appendix D.

Default: to_be_defined

/ess-<ENVIRONMENT>/proxy-service-server.key

Base64 encoded representation of the signed key.

Note

All external access to the ESS services passes through the proxy containers in EKS. These are exposed to the internet via Elastic Load Balancers in the public subnets.

These proxy services need to be secured with signed TLS certificates. Inrupt recommends using Let’s Encrypt for the purposes of this reference deployment. In a production environment, it is up to the customer to determine a suitable TLS certificate provider.

For information on creating Certificate Authority (CA) signed certificates, see Create External Certificate/Key.

For information on generating Base64 encoded representation of the certificates, refer to Appendix D.

Default: to_be_defined

/ess-<ENVIRONMENT>/fluentd-auditor.key

Base64 encoded representation of a self-signed key. For the corresponding certificate property, see /ess-<ENVIRONMENT>/fluentd-auditor.crt.

  1. First, create the key and certificate:

    openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 \
       -keyout /tmp/auditor-tls.key \
       -out /tmp/auditor-tls.crt \
       -subj /CN=fluentd-auditor
    
  2. Create the /ess-<ENVIRONMENT>/fluentd-auditor.key parameter and set to the Base64 encoded value of the created key. To get the Base64 encoded representation of the key:

    cat /tmp/auditor-tls.key | base64
    rm /tmp/auditor-tls.key
    
  3. Create the /ess-<ENVIRONMENT>/fluentd-auditor.crt parameter and set to the Base64 encoded value of the created certificate. To get the Base64 encoded representation of the certificate:

    cat /tmp/auditor-tls.crt | base64
    rm /tmp/auditor-tls.crt
    

/ess-<ENVIRONMENT>/fluentd-auditor.crt

Base64 encoded representation of the self-signed certificate for auditing.

See the /ess-<ENVIRONMENT>/fluentd-auditor.key parameter description for steps to create the certificate and the corresponding key and get the Base64 encoded values.

/ess-<ENVIRONMENT>/ess-jwks.key

Base64 encoded representation of a self-signed key. For the corresponding certificate, see /ess-<ENVIRONMENT>/ess-jwks.crt.

  1. First, create the key and certificate:

    openssl req -x509 -sha256 -nodes -days 365 \
      -newkey rsa:4096 -keyout /tmp/ess-jwks.key \
      -out /tmp/ess-jwks.crt \
      -subj /CN=ess-jwks.ess.svc.cluster.local
    
  2. Create the /ess-<ENVIRONMENT>/ess-jwks.key parameter and set it to the Base64 encoded value of the created key, and create the /ess-<ENVIRONMENT>/ess-jwks.crt parameter and set it to the Base64 encoded value of the created certificate. To retrieve the Base64 encoded values, run the following commands:

    cat /tmp/ess-jwks.key | base64 -w0
    cat /tmp/ess-jwks.crt | base64 -w0
    
  3. Remove the temporary files:

    rm /tmp/ess-jwks.key /tmp/ess-jwks.crt

(needs to be created)

/ess-<ENVIRONMENT>/ess-jwks.crt

Base64 encoded representation of the self-signed certificate. For the corresponding key, see /ess-<ENVIRONMENT>/ess-jwks.key.

(needs to be created)

For more information regarding Kubernetes secrets, refer to Appendix A.

Step 6: Kubernetes

For this installation guide, the Inrupt Enterprise Solid Server services will be provided as Docker images and launched into a Kubernetes environment provided by AWS EKS. The EKS infrastructure (e.g., control plane, worker nodes, networking, etc.) was set up in the Terraform sections above. This section covers:

  1. Creating a config file for kubectl to communicate with the EKS cluster.

  2. Provisioning Role-Based Access Control (RBAC) to create resources in the EKS cluster.

  3. Launching ESS containers into the EKS cluster.

Kubernetes abstracts quite a bit of operational complexity away from end-users. For example, the AWS scaling groups provide dynamic capacity to the cluster, so cluster operators generally will not need to know how many physical instances are available, how they are created/provisioned, or how to connect to them.

Even with that abstraction, this reference deployment imposes some additional conventions:

  • Everything will be launched into an ess namespace - this allows you to separate all of your resources, and if needed fully delete everything with a single command (kubectl delete namespace ess).

  • You will create “services” before containers; services abstract the containers behind them. Services allow you to create and destroy containers, possibly changing IP addresses, ports, etc., but anything referencing the service will not need to change to accommodate this. Services may also provide additional functionality such as load balancing across multiple containers.

  • It is not recommended you launch databases or other systems requiring heavy write persistence into Kubernetes.

Create kubectl Config File

Communication with the EKS control plane is done through AWS API calls using the same credentials profile used with Terraform above.

  1. Begin by creating a new kubeconfig file with the AWS API. The kubectl config file will be written to ${ROOT_DIR}/eks/kube.config.

    mkdir -pv ${ROOT_DIR}/eks
    cd ${ROOT_DIR}/eks
    aws --profile ${ENVIRONMENT} --region ${AWS_REGION} eks update-kubeconfig --name ess-${ENVIRONMENT}-cluster --kubeconfig ${ROOT_DIR}/eks/kube.config
    

    The operation returns its status information:

    Added new context arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster to ${ROOT_DIR}/eks/kube.config
    
  2. Set the environment variable KUBECONFIG to the kube.config file path.

    export KUBECONFIG=${ROOT_DIR}/eks/kube.config
    

    Note

    Although you can specify the kube.config file path on every kubectl command with the --kubeconfig argument, setting the KUBECONFIG environment variable applies it to all kubectl commands.

The output kube.config file will look similar to the following:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS......o=
    server: https://......eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster
    user: arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster
  name: arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster
current-context: arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-2
      - eks
      - get-token
      - --cluster-name
      - ess-dev-cluster
      command: aws
      env:
      - name: AWS_PROFILE
        value: dev

This essentially directs kubectl to use the selected profile to obtain a token with which it can communicate with the EKS cluster.
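
If you want to confirm that the profile can obtain a token outside of kubectl, you can run the same call that the exec block performs; the output is a JSON document containing a short-lived token:

aws --profile ${ENVIRONMENT} --region ${AWS_REGION} eks get-token --cluster-name ess-${ENVIRONMENT}-cluster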

Provisioning Access

Kubernetes access is controlled by Role-Based Access Control (RBAC). In AWS, EKS can map roles to IAM users. This means that if you have multiple IAM users in your AWS account, you can grant each of them a different level of access inside the EKS cluster. Full documentation of Kubernetes RBAC is outside the scope of this document. This example will only add a single user that has full control of everything inside the EKS cluster.

  1. Determine the ARN for your user:

    aws --profile ${ENVIRONMENT} sts get-caller-identity
    

    The operation returns information similar to the following:

    {
        "UserId": "....",
        "Account": "123456789012",
        "Arn": "arn:aws:iam::123456789012:user/your.username"
    }
    
  2. Create a new file named eks.roles with the information returned:

    cd ${ROOT_DIR}/eks
    vi eks.roles
    

    Edit the eks.roles file with the following content:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: _INSTANCE_ROLE_
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
      mapUsers: |
        - userarn: _YOUR_USER_ARN_
          username: _YOUR_USERNAME_
          groups:
            - system:masters
    

    Replace the variables above as follows:

    _INSTANCE_ROLE_

    The value of Instance Role ARN from the output of the terraform apply step.

    _YOUR_USER_ARN_

    The Arn from the get-caller-identity step.

    _YOUR_USERNAME_

    This does not need to match your username from the get-caller-identity step (the end of the Arn after :user/), but it is recommended that it does.

  3. Finally, update the EKS cluster with these permissions:

    kubectl apply -f ${ROOT_DIR}/eks/eks.roles
    

    Upon success, the operation returns the following message:

    configmap/aws-auth created
    

You can now test communications with the EKS cluster.

  • To see information on the cluster’s nodes:

    kubectl get nodes
    

    The operation returns the information on cluster nodes:

    NAME                                         STATUS     ROLES    AGE   VERSION
    ip-10-1-100-178.eu-west-2.compute.internal   NotReady   <none>   15s   v1.14.7-eks-1861c5
    ip-10-1-100-221.eu-west-2.compute.internal   NotReady   <none>   15s   v1.14.7-eks-1861c5
    ip-10-1-150-41.eu-west-2.compute.internal    NotReady   <none>   17s   v1.14.7-eks-1861c5
    ip-10-1-150-60.eu-west-2.compute.internal    NotReady   <none>   15s   v1.14.7-eks-1861c5
    

    Note

    The worker nodes may report a status of “NotReady” for some time after applying the EKS roles auth file. Give the cluster a few minutes and retry kubectl get nodes and they should report as Ready.

  • To see information on the cluster’s namespaces:

    kubectl get namespaces
    

    The operation returns the information on namespaces:

    NAME              STATUS   AGE
    default           Active   175m
    kube-node-lease   Active   175m
    kube-public       Active   175m
    kube-system       Active   175m
    

Obtaining Docker Images

The Inrupt Enterprise Solid Server services are provided to customers as Docker images. These will generally have URLs similar to the following:

Note

x.x.x will be the ESS version (e.g., 1.0.0).

docker.software.inrupt.com/inrupt-identity-service:x.x.x
docker.software.inrupt.com/inrupt-ldp-jdbc-service:x.x.x
...

Access to these repository URLs will be provided by Inrupt outside of this installation guide. You will need to contact Inrupt to obtain the information required to download Inrupt ESS from the Inrupt repositories. Send an email to requests@inrupt.com and attach your public key (either RSA or PGP). You will be contacted to verify the fingerprint of your public key, and you will then receive an email with an encrypted and signed entitlement grant and instructions on how to download the product.

This installation guide will show the Docker images being pulled directly from the Inrupt repositories into the EKS cluster; however, this is not recommended for production usage. Customers should instead obtain their own image repository, transfer the images there, and update the deployment files later in this installation guide to point to the new URLs. Additional details are available in Appendix E.
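
A minimal sketch of mirroring one image into ECR, assuming Docker is already logged in to docker.software.inrupt.com with your entitlement credentials, an ECR repository named inrupt-ldp-jdbc-service already exists, and the account ID (123456789012 below) and x.x.x version are replaced with your own values:

aws --profile ${ENVIRONMENT} ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin 123456789012.dkr.ecr.${AWS_REGION}.amazonaws.com
docker pull docker.software.inrupt.com/inrupt-ldp-jdbc-service:x.x.x
docker tag docker.software.inrupt.com/inrupt-ldp-jdbc-service:x.x.x 123456789012.dkr.ecr.${AWS_REGION}.amazonaws.com/inrupt-ldp-jdbc-service:x.x.x
docker push 123456789012.dkr.ecr.${AWS_REGION}.amazonaws.com/inrupt-ldp-jdbc-service:x.x.x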

Users in pre-production environments may skip this step and continue with the installation guide.

Step 7: Docker Credential String

In order to pull Docker images from Inrupt’s repository, Kubernetes generally needs credentials for that repository. Kubernetes currently requires these to be provided in a Kubernetes secret.

  1. Create a Kubernetes secret named registrykey. The TOKEN is the same one that was used to download the zip of configuration files above.

    kubectl create secret docker-registry registrykey --docker-server=docker.software.inrupt.com --docker-username=inrupt/release --docker-password='<TOKEN>' --docker-email=your@email.com
    
  2. Retrieve the docker credentials.

    kubectl get secret registrykey --output=yaml | grep .dockerconfigjson
    

    The command returns the credential information similar to the following:

    .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuc29mdHdhcmUuaW5ydXB0LmNvbSI6eyJ1c2VybmFtZSI6Im....
    type: kubernetes.io/dockerconfigjson
    

    The returned .dockerconfigjson value (e.g., eyJhdXRocyI6...) replaces _REPLACE_ME_DOCKER_CREDENTIALS_ in a later step. (A jsonpath alternative that avoids the grep is sketched after this list.)

  3. Delete the secret registrykey.

    kubectl delete secret registrykey
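
If you prefer to avoid the grep in step 2, a jsonpath query can print the encoded value directly (run it before deleting the secret in step 3):

kubectl get secret registrykey --output=jsonpath='{.data.\.dockerconfigjson}'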
    

Note

If you are using AWS ECR for your repository images, you can leverage IAM roles for your worker node instances to grant access to the repositories without any secrets. This is more secure and requires less configuration work, but will work ONLY with AWS ECR repositories.

Step 8: Launch EKS Containers

The configuration package downloaded in the steps above contains Kubernetes configuration files that can be used to launch the ESS system into EKS, but some configuration is required.

The launch order for the Kubernetes configuration files is important, so the directories contain numbered prefixes as follows:

01_namespaces

The /01_namespaces directory contains a single YAML configuration file that will create namespaces. Currently this only includes a single ess namespace that will hold all Inrupt resources. However, this may be extended to hold additional namespaces (e.g., for a Kubernetes Dashboard if desired).

Create the ess namespace:

cd ${RELEASE_DIR}/deployment/kubernetes/aws
kubectl apply -f 01_namespace

To verify, display the namespaces:

kubectl get namespaces

The operation should return ess among the namespaces:

NAME                   STATUS   AGE
default                Active   6h17m
ess                    Active   22s
kube-node-lease        Active   6h17m
kube-public            Active   6h17m
kube-system            Active   6h17m

Once the ess namespace has been created, set the namespace for the current context to ess so that new resources will be created in the ess namespace by default.

kubectl config set-context --current --namespace=ess

To verify, display the context information:

kubectl config get-contexts

The operation should list ess in the NAMESPACE for the current context:

CURRENT   NAME                                                         CLUSTER                                                      AUTHINFO                                                     NAMESPACE
*         arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster   arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster   arn:aws:eks:eu-west-2:123456789012:cluster/ess-dev-cluster   ess

02_services

The /02_services directory contains YAML files that define services. Services sit in front of groups of containers and abstract them to a single interface. That interface can be a simple ClusterIP (allowing the containers to be accessed by other containers in the cluster), or more complicated services like a LoadBalancer (the implementation will depend on the provider, but for EKS, this will launch an Elastic Load Balancer).

  1. Create the services.

    cd ${RELEASE_DIR}/deployment/kubernetes/aws
    kubectl apply -f 02_services
    

    The output reports on the services created:

    service/ess-identity created
    service/ess-ldp created
    service/proxy created
    
  2. View information on the services.

    kubectl get services -o wide
    

    The operation returns information similar to the following:

    Note

    While all of the services are important, pay special attention to the proxy service. This will create an ELB, which will be the only public access into the ESS system.

    NAME            TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                      AGE   SELECTOR
    ess-identity    ClusterIP      172.20.181.145   <none>                                                                    10001/TCP,10000/TCP          43s   app=ess-identity
    ess-ldp         ClusterIP      172.20.170.93    <none>                                                                    10101/TCP,10100/TCP          42s   app=ess-ldp
    proxy           LoadBalancer   172.20.111.159   1234abcd5ef67ghi89j0123k45l6m7n8-9012345678.eu-west-2.elb.amazonaws.com   80:30535/TCP,443:30915/TCP   42s   app=proxy
    

    Note

    Take note of the EXTERNAL-IP of the proxy service, as you will need to add DNS CNAME entries that all point to this address (see DNS Records below).

DNS Records

Add DNS CNAME entries that point to the EXTERNAL-IP (ELB address) of the proxy service. For example, assuming that your DNS controls the domain mycompany.com, you will add entries for identity.mycompany.com and ldp.mycompany.com.

The exact mechanism for adding DNS records is highly dependent on your domain/DNS supplier. Refer to your supplier’s documentation for instructions on how to add the CNAME entries.

NAME        TYPE       TTL        Value
identity    CNAME      10 min     <proxy service EXTERNAL-IP (ELB address)>
ldp         CNAME      10 min     <proxy service EXTERNAL-IP (ELB address)>
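
Once the records have been added (and after any DNS propagation delay), you can verify them with dig; this assumes the example domain mycompany.com used above:

dig +short identity.mycompany.com CNAME
dig +short ldp.mycompany.com CNAME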


03_config

Most of the Kubernetes layer configuration has been abstracted out of the other configuration files and placed into the /03_config directory. This should allow the bulk of the configuration to remain static when moving between multiple deployments/environments. Although most of the configuration is in /03_config, a few settings remain in other directories. The steps below find all files that need to be updated.

  1. Find all files with variables that need to be replaced. This can be done with a simple string search for _REPLACE_ME_:

    cd ${RELEASE_DIR}/deployment/kubernetes/aws
    grep -r "_REPLACE_ME_" *
    

    The operation returns a list of files:

    03_config/ess-config.yaml:  REGISTRAR_AGENT: "https://identity._REPLACE_ME_DOMAIN_/registrar-agent.ttl"
    03_config/ess-config.yaml:  IDENTITY_URL: "https://identity._REPLACE_ME_DOMAIN_"
    
    ...
    
    04_storage/efs-provisioner.yaml:            server: _REPLACE_ME_EFS_ID_.efs._REPLACE_ME_AWS_REGION_.amazonaws.com
    06_autoscale/cluster-autoscaler-autodiscover.yaml:            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/_REPLACE_ME_EKS_CLUSTER_NAME_
    
  2. Go through each listed file and replace the _REPLACE_ME_ strings as described:

    String to Replace

    Description

    _REPLACE_ME_DOMAIN_

    Domain you are hosting the ESS system on.

    _REPLACE_ME_COMPANY_COUNTRY_

    Your company’s 2-letter country code for TLS certificate generation (e.g., US).

    _REPLACE_ME_COMPANY_STATE_

    Your company’s state for TLS certificate generation (e.g., Massachusetts).

    _REPLACE_ME_COMPANY_CITY_

    Your company’s city for TLS certificate generation (e.g., Boston).

    _REPLACE_ME_COMPANY_ORG_NAME_

    Your company’s organization name for TLS certificate generation (e.g., Acme, Inc).

    _REPLACE_ME_COMPANY_ORG_UNIT_

    Your company’s organizational unit for TLS certificate generation (e.g., Engineering).

    _REPLACE_ME_ENVIRONMENT_

    Environment name (also used as the AWS profile name in this guide), stored in the ${ENVIRONMENT} environment variable.

    _REPLACE_ME_AWS_REGION_

    AWS deployment region, stored in the ${AWS_REGION} environment variable.

    _REPLACE_ME_POSTGRES_URL_

    In terraform output as POSTGRES_URL.

    _REPLACE_ME_POSTGRES_HOST_

    In terraform output as POSTGRES_HOST.

    _REPLACE_ME_POSTGRES_PORT_

    In terraform output as POSTGRES_PORT.

    _REPLACE_ME_TLS_KAFKA_

    In terraform output as TLS_KAFKA.

    _REPLACE_ME_EFS_ID_

    In terraform output as EFS_ID.

    _REPLACE_ME_EKS_CLUSTER_NAME_

    In terraform output as EKS Cluster Name.

    _REPLACE_ME_DOCKER_CREDENTIALS_

    Docker credentials string from Step 7: Docker Credential String.

    _REPLACE_ME_ELASTICSEARCH_URL_

    In terraform output as ELASTICSEARCH_URL.

    To edit the files, you can use a text editor like vi to make the string replacements. Alternatively, you can use a stream editor, such as sed, to make these replacements (a verification check is sketched at the end of this section). For example:

    1. Before modifying, make a backup of the aws directory:

      cd ${RELEASE_DIR}/deployment/kubernetes
      cp -rf aws aws-bak
      
    2. Once you have a backup of the directory, go to the aws directory:

      cd aws
      
    3. Set environment variables to your substitution values:

      export DOMAIN=[Domain you are hosting the ESS system on]
      export COUNTRY=[Your company's 2-letter country code for TLS certificate generation (e.g., US)]
      export STATE=[Your company's state for TLS certificate generation (e.g., Massachusetts)]
      export CITY=[Your company's city for TLS certificate generation (e.g., Boston)]
      export ORG_NAME=[Your company's organization name for TLS certificate generation (e.g., Acme, Inc)]
      export ORG_UNIT=[Your company's organizational unit for TLS certificate generation (e.g., Engineering)]
      export POSTGRES_URL=[In terraform output as POSTGRES_URL]
      export POSTGRES_HOST=[In terraform output as POSTGRES_HOST]
      export POSTGRES_PORT=[In terraform output as POSTGRES_PORT]
      export TLS_KAFKA=[In terraform output as TLS_KAFKA]
      export EFS_ID=[In terraform output as EFS_ID]
      export DOCKER_CREDENTIALS=[Docker credentials string]
      export EKS_CLUSTER_NAME=[In terraform output as EKS Cluster Name]
      export ELASTICSEARCH_URL=[In terraform output as ELASTICSEARCH_URL]
      
    4. Use sed to make the substitution:

      Note

      The below sed operations use the ? character as the delimiter. If your environment variable includes the ? character in its value, you must escape the character in the value.

      find ./ -type f -exec sed -i 's?_REPLACE_ME_DOMAIN_?'${DOMAIN}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_COMPANY_COUNTRY_?'${COUNTRY}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_COMPANY_STATE_?'${STATE}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_COMPANY_CITY_?'${CITY}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_COMPANY_ORG_NAME_?'${ORG_NAME}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_COMPANY_ORG_UNIT_?'${ORG_UNIT}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_AWS_REGION_?'${AWS_REGION}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_ENVIRONMENT_?'${ENVIRONMENT}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_POSTGRES_URL_?'${POSTGRES_URL}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_POSTGRES_HOST_?'${POSTGRES_HOST}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_POSTGRES_PORT_?'${POSTGRES_PORT}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_TLS_KAFKA_?'${TLS_KAFKA}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_EFS_ID_?'${EFS_ID}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_DOCKER_CREDENTIALS_?'${DOCKER_CREDENTIALS}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_EKS_CLUSTER_NAME_?'${EKS_CLUSTER_NAME}'?g' {} \;
      find ./ -type f -exec sed -i 's?_REPLACE_ME_ELASTICSEARCH_URL_?'${ELASTICSEARCH_URL}'?g' {} \;
      

      On macOS, the bundled BSD sed requires an (empty) argument to the -i option; use the following form instead. These commands also use the ? character as the delimiter, so escape any ? characters in your variable values.

      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_DOMAIN_?'${DOMAIN}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_COMPANY_COUNTRY_?'${COUNTRY}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_COMPANY_STATE_?'${STATE}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_COMPANY_CITY_?'${CITY}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_COMPANY_ORG_NAME_?'${ORG_NAME}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_COMPANY_ORG_UNIT_?'${ORG_UNIT}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_AWS_REGION_?'${AWS_REGION}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_ENVIRONMENT_?'${ENVIRONMENT}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_POSTGRES_URL_?'${POSTGRES_URL}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_POSTGRES_HOST_?'${POSTGRES_HOST}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_POSTGRES_PORT_?'${POSTGRES_PORT}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_TLS_KAFKA_?'${TLS_KAFKA}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_EFS_ID_?'${EFS_ID}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_DOCKER_CREDENTIALS_?'${DOCKER_CREDENTIALS}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_EKS_CLUSTER_NAME_?'${EKS_CLUSTER_NAME}'?g' {} \;
      find ./ -type f -exec sed -i '' -e 's?_REPLACE_ME_ELASTICSEARCH_URL_?'${ELASTICSEARCH_URL}'?g' {} \;
      
  3. Once you have finished the string replacements, create the configuration entries in Kubernetes:

    cd ${RELEASE_DIR}/deployment/kubernetes/aws
    kubectl apply -f 03_config
    

    The operation reports on the entry creations:

    secret/docker-repo created
    configmap/ess-config created
    configmap/proxy-conf created
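
To confirm that no placeholders were missed, you can re-run the search from step 1; it should return no matches:

cd ${RELEASE_DIR}/deployment/kubernetes/aws
grep -r "_REPLACE_ME_" *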
    

04_storage

The ESS services require some local storage shared between service containers. In EKS, this is provided using EFS (created by Terraform above).

  1. In the environment specific configuration above, you should have replaced the _REPLACE_ME_* strings in 04_storage/efs-provisioner.yaml. If not, do so now using the EFS ID from the terraform apply output earlier.

  2. Create the Kubernetes storage entities:

    cd ${RELEASE_DIR}/deployment/kubernetes/aws
    kubectl apply -f 04_storage
    

    The operation reports on the storage entities created:

    persistentvolumeclaim/efs-identity-db created
    serviceaccount/efs-provisioner created
    clusterrole.rbac.authorization.k8s.io/efs-provisioner-runner created
    clusterrolebinding.rbac.authorization.k8s.io/run-efs-provisioner created
    role.rbac.authorization.k8s.io/leader-locking-efs-provisioner created
    rolebinding.rbac.authorization.k8s.io/leader-locking-efs-provisioner created
    deployment.apps/efs-provisioner created
    storageclass.storage.k8s.io/aws-efs created
    

05_deployments

This section deploys the actual ESS containers into the EKS cluster as Kubernetes Deployments, since some containers may run multiple replicas or have other higher-level requirements beyond an individual container.

Create the ESS deployment:

cd ${RELEASE_DIR}/deployment/kubernetes/aws
kubectl apply -f 05_deployments

The operation reports on the deployments created:

deployment.apps/ess-identity created
deployment.apps/ess-ldp created
deployment.apps/proxy-deployment created

You can view the created services:

kubectl get all -o wide -n ess

The operation returns various resource information, including the newly created deployments:

NAME                                   READY   STATUS             RESTARTS   AGE   IP             NODE                                        NOMINATED NODE   READINESS GATES
pod/efs-provisioner-69df5944f6-79t48   0/1     Pending            0          55m   10.1.150.182   ip-10-1-150-60.eu-west-2.compute.internal   <none>           <none>
pod/ess-identity-c7ff59dd8-nbvhs       0/1     Pending            0          43m   <none>         <none>                                      <none>           <none>
pod/ess-ldp-65bd8546c8-lzdtd           0/1     Pending            0          43m   <none>         <none>                                      <none>           <none>
pod/proxy-deployment-9b69c6fff-fcm7f   0/1     Pending            0          43m   10.1.150.250   ip-10-1-150-41.eu-west-2.compute.internal   <none>           <none>

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                      AGE   SELECTOR
service/ess-identity    ClusterIP      172.20.181.145   <none>                                                                    10001/TCP,10000/TCP          19h   app=ess-identity
service/ess-ldp         ClusterIP      172.20.170.93    <none>                                                                    10101/TCP,10100/TCP          19h   app=ess-ldp
service/proxy           LoadBalancer   172.20.111.159   ................................-...........eu-west-2.elb.amazonaws.com   80:30535/TCP,443:30915/TCP   19h   app=proxy

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS        IMAGES                                                      SELECTOR
deployment.apps/efs-provisioner    0/1     1            0           55m   efs-provisioner   quay.io/external_storage/efs-provisioner:latest             app=efs-provisioner
deployment.apps/ess-identity       0/1     1            0           43m   ess-identity      docker.software.inrupt.com/inrupt-identity-service:x.x.x    app=ess-identity
deployment.apps/ess-ldp            0/1     1            0           43m   ess-ldp           docker.software.inrupt.com/inrupt-ldp-jdbc-service:x.x.x    app=ess-ldp
deployment.apps/proxy-deployment   0/1     1            0           43m   nginx             docker.software.inrupt.com/proxy:x.x.x                      app=proxy

NAME                                         DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES                                                      SELECTOR
replicaset.apps/efs-provisioner-69df5944f6   1         1         0       55m   efs-provisioner   quay.io/external_storage/efs-provisioner:latest             app=efs-provisioner,pod-template-hash=69df5944f6
replicaset.apps/ess-identity-c7ff59dd8       1         1         0       43m   ess-identity      docker.software.inrupt.com/inrupt-identity-service:x.x.x    app=ess-identity,pod-template-hash=c7ff59dd8
replicaset.apps/ess-ldp-65bd8546c8           1         1         0       43m   ess-ldp           docker.software.inrupt.com/inrupt-ldp-jdbc-service:x.x.x    app=ess-ldp,pod-template-hash=65bd8546c8
replicaset.apps/proxy-deployment-9b69c6fff   1         1         0       43m   nginx             docker.software.inrupt.com/proxy:x.x.x                      app=proxy,pod-template-hash=9b69c6fff

06_autoscale (Optional)

Optionally, ESS can scale to handle varying amounts of traffic throughout the day. When it detects high traffic volumes it scales up, and when volumes are low it scales back down. It does this by increasing the number of deployment replicas and distributing the load among them, and by increasing the number of underlying EC2 nodes running the cluster and distributing the containers among them. This keeps infrastructure costs to a minimum while still allowing ESS to handle large amounts of network traffic.

This feature is optional in that it doesn’t have to be deployed for ESS to work, but it is recommended in a performance or production environment to handle high load.

  1. To enable this optional feature, apply the autoscale module into the cluster:

    kubectl apply -f 06_autoscale/
    

    The operation outputs the following information:

    serviceaccount/cluster-autoscaler created
    clusterrole.rbac.authorization.k8s.io/cluster-autoscaler created
    role.rbac.authorization.k8s.io/cluster-autoscaler created
    clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
    rolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
    deployment.apps/cluster-autoscaler created
    horizontalpodautoscaler.autoscaling/ess-ldp created
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
    serviceaccount/metrics-server created
    deployment.apps/metrics-server created
    service/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    
  2. Check the status of the Autoscaler:

    kubectl get horizontalpodautoscalers.autoscaling
    

    You can also run kubectl get hpa for short.

    The operation should return information that resembles the following:

    NAME      REFERENCE            TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
    ess-ldp   Deployment/ess-ldp   0%/40%          2         30        2          57s
    

Tip

To get more information about how the Autoscaler is running, execute: kubectl describe hpa.
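
For reference, the Horizontal Pod Autoscaler shown above targets 40% utilization (presumably CPU, given the metrics-server deployment) across 2 to 30 replicas. A roughly equivalent imperative command, shown here only as an illustrative sketch (the module itself ships declarative manifests, and the ess namespace is assumed), would be:

kubectl autoscale deployment ess-ldp -n ess --cpu-percent=40 --min=2 --max=30

In practice, manage the autoscaler through the 06_autoscale manifests rather than imperatively, so that the configuration stays in version control.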

07_monitoring (Optional)

To help monitor its health and performance characteristics, ESS comes with a reference monitoring module. This module consists mainly of Grafana and Prometheus deployments that pull metrics from the internal ESS services and display them in a UI of graphs and tables showing how ESS is performing.

Note

  • This feature is optional and does not have to be deployed for ESS to work, but deploying it will spin up additional ESS reference components.

  • This is just a reference monitoring setup; ESS can integrate with other modern monitoring tools.

  1. To enable this optional feature, apply the monitoring module into the cluster:

    kubectl apply -f 07_monitoring/
    

    The operation outputs the following information:

    service/grafana created
    persistentvolumeclaim/efs-grafana-ssl created
    deployment.apps/grafana created
    configmap/grafana-ini-config created
    service/postgres-metrics created
    deployment.apps/postgres-metrics created
    service/prometheus created
    persistentvolumeclaim/efs-prometheus-ssl created
    persistentvolumeclaim/prometheus-persistent-storage created
    deployment.apps/prometheus created
    configmap/prometheus-cm created
    clusterrole.rbac.authorization.k8s.io/prometheus created
    serviceaccount/prometheus created
    clusterrolebinding.rbac.authorization.k8s.io/prometheus created
    
  2. To verify that the monitoring components are working, use port forwarding to connect to Grafana in the Kubernetes cluster:

    kubectl port-forward service/grafana 3000:3000
    

    The operation returns the following information:

    Forwarding from 127.0.0.1:3000 -> 3000
    Forwarding from [::1]:3000 -> 3000
    
  3. Open https://localhost:3000/ in a web browser.

  4. On the Home page, click Dashboards -> Manage -> 1. ESS LDP Deployment Metrics.

    This opens the main LDP Deployment dashboard.

Note

The first time you connect to the Grafana dashboard, you will have to accept the self-signed certs.
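
Prometheus can be verified in the same way. The following is a hedged example; it assumes the Prometheus service listens on its default port 9090:

kubectl port-forward service/prometheus 9090:9090

Open the forwarded port in a browser to confirm that Prometheus is up and scraping the internal ESS services.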

08_logging (Optional)

To help with debugging, ESS comes with a reference logging module that centralizes the logs of all the components running inside the Kubernetes cluster. This module consists of an Elasticsearch, Fluentd, and Kibana (EFK) logging stack and uses the managed Elasticsearch cluster created by the Terraform scripts.

Note

  • This feature is optional and does not have to be deployed for ESS to work.

  • This is just a reference logging setup; ESS can integrate with other modern centralized logging tools. For more information, see Centralized Logging to a Backend Logging System.

  1. To enable this optional feature, apply the logging module into the cluster:

    kubectl apply -f 08_logging/
    

    The operation outputs the following information:

    serviceaccount/fluentd created
    clusterrole.rbac.authorization.k8s.io/fluentd created
    clusterrolebinding.rbac.authorization.k8s.io/fluentd created
    daemonset.apps/fluentd created
    service/kibana created
    deployment.apps/kibana created
    
  2. To verify that the logging components are working, use port forwarding to connect to Kibana in the Kubernetes cluster:

    kubectl -n kube-logging port-forward service/kibana 5601:5601
    

    The operation returns the following information:

    Forwarding from 127.0.0.1:5601 -> 5601
    Forwarding from [::1]:5601 -> 5601
    
  3. Open https://localhost:5601/ in a web browser.

    Kibana can take some time to start up and will display Kibana server is not ready yet until it finishes loading.
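
You can also confirm that the log collector itself is running. The following check assumes the Fluentd DaemonSet was created in the same kube-logging namespace as Kibana:

kubectl -n kube-logging get daemonset,pods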

09_auditing (Optional)

The ESS service outputs auditing information about changes that have occurred to the server or the Pods it manages.

Note

  • The auditing feature is optional and does not have to be deployed for ESS to work.

  • The auditing deployment used as part of this tutorial is just a reference auditing implementation (i.e., for example purposes only).

    Do not use it in production: audit messages are stored unencrypted and are accessible in Elasticsearch.

  1. To enable this optional feature, apply the auditing module into the cluster:

    kubectl apply -f 09_auditing/
    
  2. Access audit logs.

    The same Kibana instance used for system logging can be used to view the audit logs. The audit logs are persisted to the same Elasticsearch instance under the ess-audit-* index.

Verification

Note

Replace <DOMAIN> with the domain name for your deployment.

Access via Application

You can validate the installation process by accessing the deployed ESS from an application:

  1. Open a web browser to https://podbrowser.inrupt.com.

  2. In the login dialog that appears, enter https://broker.<DOMAIN>.

  3. Click the Log In button.
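
As an additional command-line smoke test, you can check that the broker publishes its OpenID Connect discovery document. This assumes the broker follows the standard /.well-known/openid-configuration convention:

curl -i https://broker.<DOMAIN>/.well-known/openid-configuration

A successful JSON response indicates the broker is reachable through the load balancer and proxy.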

VPN Access

VPN access is not needed under normal circumstances. Access to EKS containers can be obtained through kubectl commands, and access to backend data stores can be achieved by proxying through a container in EKS. However, it is occasionally useful to have direct VPN access. Appendix C shows how to set up a VPN connection to the ESS infrastructure and services.
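
As an example of proxying through a container instead of using a VPN, a throwaway pod can be used to reach a backend data store such as the Postgres database. This is only a hedged sketch; the image, RDS endpoint, and database user are placeholders:

kubectl -n ess run -it --rm psql-client --image=postgres --restart=Never -- \
  psql -h <RDS_ENDPOINT> -U <DB_USER>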

Tear Down

To tear down the ESS deployment environment built above, follow these steps:

Step 1: Delete all EKS resources

cd ${RELEASE_DIR}/deployment/kubernetes/aws
kubectl delete -f 09_auditing
kubectl delete -f 08_logging
kubectl delete -f 07_monitoring
kubectl delete -f 06_autoscale
kubectl delete -f 05_deployments
kubectl delete -f 04_storage
kubectl delete -f 03_config
kubectl delete -f 02_services
kubectl delete -f 01_namespace

Skip the delete commands for any optional modules (06_autoscale through 09_auditing) that you did not deploy. If this step isn’t done first, it is likely that later steps will fail: Terraform does not know about, nor have permissions to delete, resources created by EKS (e.g., ELBs).
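
Before moving on to the Terraform step, you can optionally confirm that the EKS resources (and therefore the EKS-created load balancers) have been removed:

kubectl get all -n ess

The command should report that no resources were found, or that the ess namespace no longer exists.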

Step 2: Delete Terraform Infrastructure

cd ${RELEASE_DIR}/deployment/infrastructure/aws
terraform destroy

You will be shown a plan of what is to be deleted, and you will be asked to confirm.

After successful tear-down completion (which can take about 10 minutes), you should see a message like this:

Destroy complete! Resources: XXX destroyed.

If the terraform destroy command fails for any reason (e.g., it is possible for some operations to timeout), you may see an error similar to the following:

:
aws_subnet.public_2: Destruction complete after 0s
aws_security_group.eks-cluster: Destruction complete after 0s
aws_iam_role.eks-worker-nodes: Destruction complete after 1s
aws_iam_role.eks-cluster: Destruction complete after 1s

Error: Error waiting for Client VPN endpoint to disassociate with target network:
Unexpected state 'associated', wanted target 'disassociated'. last error: %!s(<nil>)

It is safe to re-run the terraform destroy command: the state file still contains the existing resources, and Terraform will retry the delete operations.

Step 3: Manually Delete Prerequisites

After confirming that terraform destroy completed successfully, you can manually delete the prerequisites that were created at the beginning of this installation guide.

Note

You will not actually be able to hard delete the certificate authority. Amazon requires that you soft delete the authority, and that it continues to live in a disabled state for at least 7 days.

Log into the AWS web console and delete:

  1. SSH Key pair:

    1. Display the AWS EC2 console.

    2. Select ‘Key Pairs’ from the main Resources section of the EC2 console.

    3. Select the ess-* key pair created as part of the install.

    4. Select the Delete option from the Actions drop-down menu.

  2. Certificates:

    1. Display the AWS Certificate Manager.

    2. From the list of certificates, select the ess-vpn-* certificates created as part of the install.

    3. Select the Delete option from the Actions dropdown list.

  3. Private root certificate authority:

    1. Display the AWS Certificate Manager.

    2. Select Private CAs from the left hand navigation.

    3. Select the CA you wish to delete.

    4. Click the Actions drop down and disable the CA.

    5. Click the Actions drop down and delete the CA.

Tip

If you encounter errors saying that certificates are still in use, then it could be because the VPN is still using them, in which case the terraform destroy operation most likely did not complete correctly.

Step 4: Delete the DNS CNAME Records Manually

The final tear-down step is to manually remove the DNS CNAME records that you set up above.

Appendix A: Kubernetes Secrets

Environment variables control much of the configuration and operation of containers in a Kubernetes deployment: they allow the same underlying Docker image to provide a range of services depending on how the containers are configured.

Sensitive data such as database passwords, service API tokens, TLS certificates/keys, etc. also need to make their way onto a container for secure operation. These variables can be set directly in the Kubernetes configuration files on the container specs. For example:

containers:
  - name: ldp
    env:
      - name: QUARKUS_DATASOURCE_PASSWORD
        value: "password"
      - name: QUARKUS_DATASOURCE_USERNAME
        value: "ess"

While this works, it has some issues around the security of the secrets. Specifically, they are stored in plain text and are available to anyone with access to the Kubernetes API, the underlying worker nodes, or the Kubernetes configuration YAML files.

Therefore, Inrupt strongly recommends that these secrets be secured by integrating with utilities such as HashiCorp Vault, or AWS Key Management Service (KMS) and Systems Manager (SSM) Parameter Store.

The Docker images provided by Inrupt include an integration with the AWS KMS key (created via Terraform above) and with AWS SSM Parameter Store values. This keeps the secret values encrypted everywhere (except in-memory for the running microservice processes).

These secrets can be set using several environment variables on the Kubernetes containers. For example:

containers:
  - name: ldp
    env:
      - name: AWS_REGION
        value: "eu-west-2"
      - name: AWS_SSM_KEYS
        value: "/ess-dev/postgres-password,QUARKUS_DATASOURCE_PASSWORD:/ess-dev/postgres-username,QUARKUS_DATASOURCE_USERNAME"
      - name: AWS_SSM_FILES
        value: "/ess-dev/ldp-ssl-crt,/opt/inrupt/server.crt:/ess-dev/ldp-ssl-key,/opt/inrupt/server.key"

  • AWS_REGION tells Kubernetes which region the secrets are stored in. This also triggers the integration with KMS/SSM.

  • AWS_SSM_KEYS tells Kubernetes to pull the parameter /ess-dev/postgres-password and inject its value into the microservice process environment as QUARKUS_DATASOURCE_PASSWORD (and, likewise, /ess-dev/postgres-username as QUARKUS_DATASOURCE_USERNAME).

  • AWS_SSM_FILES tells Kubernetes to pull the parameter /ess-dev/ldp-ssl-crt, Base64 decode the value, and save it as /opt/inrupt/server.crt on the container file system (and likewise for the key). This allows even binary files to be stored in and retrieved from the Parameter Store if needed.
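
For completeness, the following is a hedged sketch of how such a parameter might be created with the AWS CLI. The parameter path follows the /ess-dev/ examples above, and the KMS key ID and secret value are placeholders:

aws --profile ${ENVIRONMENT} --region ${AWS_REGION} ssm put-parameter \
  --name /ess-dev/postgres-password \
  --type SecureString \
  --key-id <KMS_KEY_ID> \
  --value '<POSTGRES_PASSWORD>'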

With the example infrastructure created above, the EKS worker nodes have an IAM instance profile whose role includes the following policy:

policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessToESSParameterValues",
            "Action": [
                "ssm:GetParameters"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:ssm:${local.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/${local.resource_prefix}/*"
        },
        {
            "Sid": "AllowDecryptionOfParameterValues",
            "Action": [
                "kms:Decrypt",
                "kms:DescribeKey"
            ],
            "Effect": "Allow",
            "Resource": "${aws_kms_key.eks_container_key.arn}"
        }
    ]
}
EOF

This means that the processes running on the EKS worker nodes will be able to access and decrypt stored secrets that start with the path ${local.resource_prefix}/. This keeps secrets out of the Kubernetes configuration files and out of the container configuration output from the Kubernetes API.
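
To confirm the access works as expected, a hedged check can be run from a shell that has the worker-node instance role (the parameter name again follows the /ess-dev/ example above):

aws --region ${AWS_REGION} ssm get-parameters \
  --names /ess-dev/postgres-password \
  --with-decryption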

Warning

Fully compromising a node or container in the Kubernetes cluster still allows access to SSM and KMS via the AWS APIs, so this approach does not eliminate all attack vectors on these secrets. However, it is significantly better than having secrets available in plain text. Further securing these containers is up to the enterprise IT teams installing the system.

Appendix B: JSON Web Key Set

The Identity Service requires a JSON Web Key Set (JWKS), which needs to be placed into AWS SSM Parameter store under the /ess-prod/identity-service-keystore key.

There are a variety of ways this can be done. For a pre-production environment, any of the following options should be acceptable.

Option 1: Using mkjwk website

  1. Browse to the web site: https://mkjwk.org/.

  2. For Key Size, select 2048.

  3. For Key Use, select Signature.

  4. For Algorithm, select RS256.

  5. Click the Generate button.

  6. Copy to the clipboard the value from the Public and Private Keypair Set.

Option 2: connect2id application

The connect2id command line application can be downloaded and used to generate the JSON Web Key Set.

Option 3: Using a local TypeScript application

The following code can be used to generate a JSON Web Key Set.

  1. Create a temporary directory, and place the following TypeScript inside it as generate-jwks.ts:

    import { JWKS } from '@panva/jose'
    const keystore = new JWKS.KeyStore()
    keystore.generateSync('RSA', 2048, {
        alg: 'RS256',
        use: 'sig',
    })
    console.log(keystore.toJWKS(true))
    
  2. Create a package.json file in the same directory:

    {
      "name": "JWKSGenerator",
      "version": "1.0.0",
      "description": "Generates simple JWKS",
      "main": "index.js",
      "scripts": {
        "build": "tsc -p tsconfig.json",
        "start": "node dist/generate-jwks.js"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "@panva/jose": "^1.9.3",
        "jose": "^1.23.0"
      },
      "devDependencies": {
        "@types/supertest": "^2.0.8",
        "typescript": "^3.7.2"
      }
    }
    
  3. Create a tsconfig.json file in the same directory:

    {
      "env": "node",
      "compilerOptions": {
        "target": "es6",
        "module": "commonjs",
        "moduleResolution": "node",
        "noImplicitAny": true,
        "removeComments": true,
        "preserveConstEnums": true,
        "sourceMap": true,
        "outDir": "dist",
        "declaration": true,
        "esModuleInterop": true,
        "types": [
          "node"
        ]
      },
      "typeRoots": [
        "./src/types"
      ],
      "exclude": [
        "test",
        "node_modules",
        "dist"
      ]
    }
    
  4. In the directory, run the following commands:

    npm install
    npm run-script build
    npm start
    

    The output should look similar to the following:

    {
      keys: [
        {
          e: 'A..B',
          n: 'c..D',
          d: 'E..F',
          p: 'G..H',
          q: 'i..j',
          dp: 'K..L',
          dq: 'm..n',
          qi: 'o..P',
          kty: 'RSA',
          kid: 'Q..R',
          alg: 'RS256',
          use: 'sig'
        }
      ]
    }
    

Appendix C: How to Set Up a VPN Endpoint

To set up a VPN endpoint, follow this guide:

ACM Certificates for VPN

This guide will create a VPN connection via the AWS Client VPN managed service. This can in some cases make debugging issues easier (e.g., direct SSH access to Kubernetes worker nodes, SQL tooling access to RDS services, etc).

In order to configure a Client VPN, you will need to create certificates for the server and client. This is not currently practical using purely Terraform, so these will be created using the AWS CLI and some basic command line scripting.

Download Easy RSA

Clone the repo for Easy RSA.

mkdir -p ${ROOT_DIR}/vpn
cd ${ROOT_DIR}/vpn
git clone https://github.com/OpenVPN/easy-rsa.git

The operation outputs its progress:

Cloning into 'easy-rsa'...
remote: Enumerating objects: 49, done.
remote: Counting objects: 100% (49/49), done.
remote: Compressing objects: 100% (36/36), done.
remote: Total 2002 (delta 19), reused 39 (delta 13), pack-reused 1953
Receiving objects: 100% (2002/2002), 5.76 MiB | 12.60 MiB/s, done.
Resolving deltas: 100% (871/871), done.

Set Your Organization Details

Update the Easy RSA vars file with your Organization’s details.

  1. Open the vars file in an editor:

    cd easy-rsa/easyrsa3
    vi ./vars
    
  2. Edit the file to replace the _YOUR_*_ variables with values appropriate for your organization.

    set_var EASYRSA         "$PWD"
    set_var EASYRSA_OPENSSL "openssl"
    set_var EASYRSA_PKI     "\$EASYRSA/pki"
    set_var EASYRSA_DN      "org"
    
    set_var EASYRSA_REQ_COUNTRY  "_YOUR_COUNTRY_"
    set_var EASYRSA_REQ_PROVINCE "_YOUR_PROVINCE_"
    set_var EASYRSA_REQ_CITY     "_YOUR_CITY_"
    set_var EASYRSA_REQ_ORG      "_YOUR_ORGANIZATION_"
    set_var EASYRSA_REQ_EMAIL    "_YOUR_EMAIL_"
    set_var EASYRSA_REQ_OU       "_YOUR_ORG_UNIT_"
    
    set_var EASYRSA_KEY_SIZE    2048
    set_var EASYRSA_ALGO        rsa
    set_var EASYRSA_CA_EXPIRE   3650
    set_var EASYRSA_CERT_EXPIRE 365
    set_var EASYRSA_CRL_DAYS    180
    set_var EASYRSA_TEMP_FILE "\$EASYRSA_PKI/extensions.temp"
    set_var EASYRSA_BATCH     "true"
    

    For example,

    set_var EASYRSA_REQ_COUNTRY  "US"
    set_var EASYRSA_REQ_PROVINCE "Massachusetts"
    set_var EASYRSA_REQ_CITY     "Boston"
    set_var EASYRSA_REQ_ORG      "Inrupt, Inc"
    set_var EASYRSA_REQ_EMAIL    "admin@example.com"
    set_var EASYRSA_REQ_OU       "Engineering"
    

Generate the Certificates

  1. Create a new PKI and create a Certificate Authority (CA):

    ./easyrsa init-pki
    ./easyrsa build-ca nopass
    

    The operation outputs its progress:

    Using SSL: openssl LibreSSL 2.8.3
    Generating RSA private key, 2048 bit long modulus
    ......................................................................................+++
    ..+++
    e is 65537 (0x10001)
    
  2. Generate the server certificate and key:

    ./easyrsa build-server-full vpn-${ENVIRONMENT}-server nopass
    

    The operation outputs its progress:

    Using SSL: openssl LibreSSL 2.8.3
    Generating a 2048 bit RSA private key
    ...
    Certificate is to be certified until ... GMT (365 days)
    
    Write out database with 1 new entries
    Data Base Updated
    
  3. Generate the client certificate and key:

    ./easyrsa build-client-full vpn-${ENVIRONMENT}.client nopass
    

    The operation outputs its progress:

    Using SSL: openssl LibreSSL 2.8.3
    Generating a 2048 bit RSA private key
    ...
    Certificate is to be certified until ... GMT (365 days)
    
    Write out database with 1 new entries
    Data Base Updated
    

Gather Important Files

Gather the important files created above into a single folder:

cd pki
cp ca.crt ${ROOT_DIR}/vpn/
cp issued/vpn-${ENVIRONMENT}-server.crt ${ROOT_DIR}/vpn/
cp private/vpn-${ENVIRONMENT}-server.key ${ROOT_DIR}/vpn/
cp issued/vpn-${ENVIRONMENT}.client.crt ${ROOT_DIR}/vpn/
cp private/vpn-${ENVIRONMENT}.client.key ${ROOT_DIR}/vpn/

Important

Save these files to a safe location such as 1Password or another secrets/password manager. These files will be needed to generate the OpenVPN profile later.

Note

AWS will reject certificates if the start date is in the future. Because system clocks can vary very slightly between your workstation and the AWS servers, pause for just a minute to make sure the creation time of the certificate is in the past according to the AWS servers. If you still see timing issues, you may want to make sure that your own workstation system clock is up to date using a network time server (e.g., https://tf.nist.gov/tf-cgi/servers.cgi).
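
If you want to check the validity window of the server certificate before uploading it, openssl can print it (the path assumes the files gathered above):

openssl x509 -noout -startdate -enddate -in ${ROOT_DIR}/vpn/vpn-${ENVIRONMENT}-server.crt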

Upload the Client and Server Certificates to AWS

Upload the client and server certificates to AWS.

  1. Upload the server certificate:

    cd ${ROOT_DIR}/vpn
    
    aws --profile ${ENVIRONMENT} --region ${AWS_REGION} acm import-certificate --certificate fileb://vpn-${ENVIRONMENT}-server.crt --private-key fileb://vpn-${ENVIRONMENT}-server.key --certificate-chain fileb://ca.crt
    

    The operation returns the CertificateArn value for the certificate, similar to the following:

    {
        "CertificateArn": "arn:aws:acm:eu-west-2:123456789012:certificate/12345a67-8b9c-0123-def4-567801123456"
    }
    

    Important

    Make note of your CertificateArn values for the uploaded server certificate. You will need this later for the Terraform configuration files.

  2. Optional. Add tags to the certificate for a more human-readable name when viewing it in AWS. In the following command, replace the ARN with the CertificateArn obtained previously.

    aws --profile ${ENVIRONMENT} --region ${AWS_REGION} acm add-tags-to-certificate --certificate-arn arn:aws:acm:eu-west-2:123456789012:certificate/12345a67-8b9c-0123-def4-567801123456 --tags Key=Name,Value=ess-vpn-server-${ENVIRONMENT}
    
  3. Upload the client certificate:

    aws --profile ${ENVIRONMENT} --region ${AWS_REGION} acm import-certificate --certificate fileb://vpn-${ENVIRONMENT}.client.crt --private-key fileb://vpn-${ENVIRONMENT}.client.key --certificate-chain fileb://ca.crt
    

    The operation returns the CertificateArn value for the certificate, similar to the following:

    {
        "CertificateArn": "arn:aws:acm:eu-west-2:123456789012:certificate/09876a54-3b2c-1098-def7-654321098765"
    }
    

    Important

    Make note of your CertificateArn value for the uploaded client certificate. You will need this later for the Terraform configuration files.

  4. Optional. Add tags to the certificate for a more human-readable name when viewing it in AWS. In the following command, replace the ARN with the CertificateArn obtained previously.

    aws --profile ${ENVIRONMENT} --region ${AWS_REGION} acm add-tags-to-certificate --certificate-arn arn:aws:acm:eu-west-2:123456789012:certificate/09876a54-3b2c-1098-def7-654321098765 --tags Key=Name,Value=ess-vpn-client-${ENVIRONMENT}
    

Create VPN Endpoint in AWS

  1. Rename the vpn.tf-off file to vpn.tf.

    The VPN Endpoint is not added by default. Terraform will ignore any files not ending in .tf. Changing this file to a .tf will get its configuration applied the next time terraform apply is run.

  2. Add the server and client certificate ARNs to the vpn_server_cert_arn and vpn_client_cert_arn properties inside the configure.tf file:

     # VPN Connection
     vpn_server_cert_arn = ""
     vpn_client_cert_arn = ""
    
  3. Run: terraform plan

    Details about the VPN connection will be displayed. Review this to verify it meets your requirements.

  4. Run terraform apply

    This will apply these changes into AWS. When this is complete it will print the following details about the VPN connection:

     ...
    
     output-notes-vpn =
     ########## VPN Connection Details ##########
    
     VPN connection setup:
     vpn-endpoint-id : cvpn-endpoint-0c9dcb93f5d593ffd
    
     ################################################
    

Create OVPN file

  1. Go to the vpn directory that contains the client and server certificate files created above.

    cd ${ROOT_DIR}/vpn
    
  2. Download the OVPN profile from AWS and create the .ovpn file. Replace the parameter <ENDPOINT> with the value found in the vpn-endpoint-id output from the terraform apply command.

    aws --profile ${ENVIRONMENT} --region ${AWS_REGION} ec2 export-client-vpn-client-configuration --client-vpn-endpoint-id <ENDPOINT> --output text > ${ENVIRONMENT}.ovpn
    

    The resulting .ovpn file still needs to be manually edited to add the certificate files.

  3. Open the file in vi or a similar text editor.

  4. Locate the line that simply reads <ca> and add the following text before it:

    # Redirect all traffic through VPN when connected
    redirect-gateway def1
    # The second ip address in the CIDR is reserved by AWS for a DNS server
    dhcp-option DNS 10.1.0.2
    ca [inline]
    cert [inline]
    key [inline]
    #   ca ca.crt
    
  5. Locate the closing </ca> line and add the following text after it:

    #   cert vpn-<ENVIRONMENT>.client.crt
    <cert>
    __CERT_FILE__
    </cert>
    #   key vpn-<ENVIRONMENT>.client.key
    <key>
    __KEY_FILE__
    </key>
    

    Replace the __CERT_FILE__ and __KEY_FILE__ placeholder strings with the contents of local files as follows:

    __CERT_FILE__

    Replace with the contents of ${ROOT_DIR}/vpn/vpn-${ENVIRONMENT}.client.crt.

    __KEY_FILE__

    Replace with the contents of ${ROOT_DIR}/vpn/vpn-${ENVIRONMENT}.client.key.

The final .ovpn file should look something like the following:

client
dev tun
proto udp
remote cvpn-endpoint-.....prod.clientvpn.eu-west-2.amazonaws.com 443
remote-random-hostname
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
verb 3

# Redirect all traffic through VPN when connected
redirect-gateway def1
# The second ip address in the CIDR is reserved by AWS for a DNS server
dhcp-option DNS 10.1.0.2
ca [inline]
cert [inline]
key [inline]
#   ca ca.crt

<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

</ca>

#   cert vpn-dev.client.crt
<cert>
Certificate:
    Data:
...
-----END CERTIFICATE-----
</cert>
#   key vpn-dev.client.key
<key>
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
</key>

reneg-sec 0

This file can be imported into any OVPN client (e.g., Tunnelblick) to provide access to the private subnets.
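
If you prefer a command-line client over a GUI such as Tunnelblick, the profile can also be used directly with the OpenVPN client, assuming openvpn is installed on your workstation:

sudo openvpn --config ${ENVIRONMENT}.ovpn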

Important

Due to current limitations with Terraform and the AWS APIs, you cannot script the addition of the routes/authorizations to allow public internet access while connected to the VPN. As a result, while connected you will have access to the private subnets, but nothing else.

It is possible to add public internet routes/authorizations to the VPN endpoint manually, but this should not generally be necessary, and in fact may be considered a security risk. Consult AWS documentation on the VPN Client Endpoint for more information if you wish to enable it anyway.

Appendix D: Create Externally Signed TLS Certificates Using Let’s Encrypt

Traffic inbound to the Enterprise Solid Server system should be encrypted in transit, and should be trustable by the end client making the requests. This means generating a TLS certificate that is signed by an external Certificate Authority (CA).

Ultimately, the creation and management of certificates is up to the reader. However, Let’s Encrypt provides free self-service certificates that can be used for this purpose. Creating these certificates can be an involved task, and it is recommended that the reader become familiar with the certbot command and how to properly request and renew Let’s Encrypt certificates. This document does not cover generating Let’s Encrypt certificates without the certbot/certbot Docker image, nor does it cover more advanced usage of the certbot command (the primary interface to Let’s Encrypt) such as wildcard certificates, HTTP validation, and auto-renewing certificates. For more information on these and other topics, consult the official Let’s Encrypt documentation. This document does, however, provide a quick introduction sufficient to get the ESS system running in a Kubernetes cluster.

Only the proxy service needs a certificate signed by an external CA. All other traffic should be internal to the ESS cluster, and can use a self-signed certificate.
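
For that internal traffic, a self-signed certificate can be generated with a single openssl command. This is only a hedged sketch; the output file names and the common name are placeholders and should match whatever your internal service configuration expects:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=ess-internal" \
  -keyout server.key -out server.crt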

Create External Certificate/Key

Let’s Encrypt provides free certificates to anyone who can validate that they own the domain/server they are securing. To simplify the process, you can use the certbot/certbot Docker image, so you will need Docker installed and running on your workstation.

This example assumes you own the domain mycompany.com and will secure mycompany.com along with the subdomain identity.mycompany.com.

  1. Run a container with the certbot/certbot image.

    mkdir -p ${ROOT_DIR}/ssl
    cd ${ROOT_DIR}/ssl
    mkdir etc
    mkdir var
    export UUID=`date '+%s'`
    docker run -it --rm --name certbot-${UUID} \
      -v ${ROOT_DIR}"/ssl/etc:/etc/letsencrypt" \
      -v ${ROOT_DIR}"/ssl/var:/var/lib/letsencrypt" \
      certbot/certbot --manual --preferred-challenges dns certonly \
      -d identity.mycompany.com \
      -d mycompany.com
    
  2. If you are installing the OIDC Broker Service, you can add the Broker URL here (Recommended):

    docker run -it --rm --name certbot-${UUID} \
      -v ${ROOT_DIR}"/ssl/etc:/etc/letsencrypt" \
      -v ${ROOT_DIR}"/ssl/var:/var/lib/letsencrypt" \
      certbot/certbot --manual --preferred-challenges dns certonly \
      -d identity.mycompany.com \
      -d mycompany.com \
      -d broker.mycompany.com
    
  3. The operation prompts for an email:

    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator manual, Installer None
    Enter email address (used for urgent renewal and security notices)
     (Enter 'c' to cancel):
    

    Enter your email address at this prompt. The email will be used by Let’s Encrypt to remind you when your certificate is expiring.

  4. The operation continues and prompts for Terms of Service agreement.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Please read the Terms of Service at
    https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
    agree in order to register with the ACME server at
    https://acme-v02.api.letsencrypt.org/directory
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    (A)gree/(C)ancel:
    

    Enter A to agree to the terms.

  5. The operation prompts for permission to share email address with Electronic Frontier Foundation (EFF):

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Would you be willing to share your email address with the Electronic Frontier
    Foundation, a founding partner of the Let's Encrypt project and the non-profit
    organization that develops Certbot? We'd like to send you email about our work
    encrypting the web, EFF news, campaigns, and ways to support digital freedom.
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    (Y)es/(N)o:
    

    This submission is optional. You can reply Y or N.

  6. The operation prompts for the logging of the IP:

    Obtaining a new certificate
    Performing the following challenges:
    dns-01 challenge for identity.mycompany.com
    dns-01 challenge for mycompany.com
    
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    NOTE: The IP of this machine will be publicly logged as having requested this
    certificate. If you're running certbot in manual mode on a machine that is not
    your server, please ensure you're okay with that.
    
    Are you OK with your IP being logged?
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    (Y)es/(N)o:
    

    Enter Y to continue.

  7. The operation prompts you to add a DNS TXT record for each of the requested domains.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Please deploy a DNS TXT record under the name
    _acme-challenge.identity.mycompany.com with the following value:
    
    AYXzo....zEQ
    
    Before continuing, verify the record is deployed.
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Press Enter to Continue
    
    1. Before pressing Enter to continue, add each TXT record to your DNS provider. If Let’s Encrypt tries to verify a record before it is deployed, the process will fail and you will need to start over again.

    2. After you have added the TXT records to your DNS provider, press Enter to continue. The process returns output similar to the following:

    Waiting for verification...
    Cleaning up challenges
    
    IMPORTANT NOTES:
     - Congratulations! Your certificate and chain have been saved at:
       /etc/letsencrypt/live/identity.mycompany.com/fullchain.pem
       Your key file has been saved at:
       /etc/letsencrypt/live/identity.mycompany.com/privkey.pem
       Your cert will expire on 2020-09-01. To obtain a new or tweaked
       version of this certificate in the future, simply run certbot
       again. To non-interactively renew *all* of your certificates, run
       "certbot renew"
     - Your account credentials have been saved in your Certbot
       configuration directory at /etc/letsencrypt. You should make a
       secure backup of this folder now. This configuration directory will
       also contain certificates and private keys obtained by Certbot so
       making regular backups of this folder is ideal.
     - If you like Certbot, please consider supporting our work by:
    
       Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
       Donating to EFF:                    https://eff.org/donate-le
    

At this point, the Docker container will stop and exit. It leaves several files in the etc folder you created:

  • ${ROOT_DIR}/ssl/etc/live/identity.mycompany.com/fullchain.pem

  • ${ROOT_DIR}/ssl/etc/live/identity.mycompany.com/privkey.pem

These two files make up the server certificate and key that need to be installed on the proxy service.

Base64 Encode

In order to use the generated certificate and key with ESS, they must be Base64 encoded and placed into AWS Systems Manager (SSM) Parameter Store secrets.

Tip

You may need to run these commands with root privilege to access the *.pem files.

  1. Encode fullchain.pem:

    cd ${ROOT_DIR}/ssl
    cat etc/live/identity.mycompany.com/fullchain.pem | base64
    

    The operation returns the Base64-encoded string, for example:

    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZzRENDQkppZ0F3SUJBZ0lTQkxpck9iZ3Zl
    ...
    MFFnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    
  2. Encode privkey.pem:

    cat etc/live/identity.mycompany.com/privkey.pem | base64
    

    The operation returns the Base64-encoded string, for example:

    LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZB
    ...
    alpYZXgrR0dEQ2lxRUJ1c0orODV2Zz09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
    

The steps above for creating an external certificate and key should also be used when manually renewing the certificate and key.
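
Once encoded, the values can be stored as SecureString parameters as described in Appendix A. The following is a hedged sketch; the parameter name must match the AWS_SSM_FILES configuration of your proxy deployment, and <KMS_KEY_ID> is the key created by Terraform:

cd ${ROOT_DIR}/ssl
CRT_B64=$(base64 < etc/live/identity.mycompany.com/fullchain.pem | tr -d '\n')
aws --profile ${ENVIRONMENT} --region ${AWS_REGION} ssm put-parameter \
  --name /<RESOURCE_PREFIX>/proxy-ssl-crt \
  --type SecureString \
  --key-id <KMS_KEY_ID> \
  --value "${CRT_B64}"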

Appendix E: Docker Image Repositories

This installation guide pulls Docker images directly from Inrupt-hosted repositories. Because Docker images are much larger than many other release artifacts, Inrupt may choose to rate or bandwidth limit access to these images in the future. Additionally, in a production environment it is generally not a good idea to rely on an external repository being available 24x7.

Thus while the Inrupt release repositories work well for demonstration purposes in this installation guide, it is recommended that clients create their own container repository, and transfer images there before using those URLs in Kubernetes configuration files.

There are many container repository offerings available, such as AWS Elastic Container Registry (ECR).

Whatever platform/provider is chosen, it is incumbent on the client to properly secure images that have been licensed to the client. These should only be available to Kubernetes clusters specifically licensed to use them.

Image Transfer - AWS Example

Images can normally be transferred to another docker repository using simple docker command line calls. For example, transferring from Inrupt’s repository to your own ECR repository would look similar to the code below:

  1. Log in to the Inrupt release repository using the username inrupt/release and the password provided by Inrupt.

    docker login docker.software.inrupt.com
    
  2. Pull image from Inrupt to the local workstation.

    docker pull docker.software.inrupt.com/inrupt-identity-service:x.x.x
    
  3. Tag the local image with the URL for your ECR repository. This example uses a single ECR repository for all images, with a different tag name for each image. Replace _YOUR_ECR_URL_ with your ECR URL in the command.

    docker tag docker.software.inrupt.com/inrupt-identity-service:x.x.x _YOUR_ECR_URL_:inrupt-identity-service-x.x.x
    
  4. Log in to your ECR repository. Replace _YOUR_ECR_REGION_ with your ECR region in the command.

    eval $(aws --profile ${ENVIRONMENT} --region _YOUR_ECR_REGION_ ecr get-login --no-include-email | sed 's/ -e none//')
    
  5. Push image to your ECR repository. Replace _YOUR_ECR_URL_ with your ECR URL in the command.

    docker push _YOUR_ECR_URL_:inrupt-identity-service-x.x.x
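
Once the image is in ECR, reference the new image URL wherever the Kubernetes configuration files currently point at docker.software.inrupt.com. As a hedged illustration only, an existing deployment can also be repointed imperatively; the deployment and container names below follow the earlier kubectl get all output:

kubectl -n ess set image deployment/ess-identity \
  ess-identity=_YOUR_ECR_URL_:inrupt-identity-service-x.x.x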