Pomerium – How to install on GKE, from zero to hero

Hello!

I have been using (and loving) Pomerium Enterprise for the past few months. Do you have an application that doesn’t have adequate access control or logging? Then Pomerium is the tool for you. It is an incredibly powerful and versatile zero-trust proxy (and no, that’s not a buzzword in this case).

This guide will show you how to install the FREE, OPEN SOURCE version on a GKE cluster on GCP. I use Google as the IDP but you can use anything else. This is more like a complement to the official guides, made for people that don’t have GKE / K8S / HELM experience.

The official guides are here; I highly suggest you check them out:

https://www.pomerium.io/docs/quick-start/helm.html
https://www.pomerium.io/docs/identity-providers/google.html

The main benefits of this tutorial are the architecture explanation and the script/values.yaml file, which differ slightly from those in the documentation (I had some problems with those; the ones below should work on the first try).

I skip over a lot else because it is already covered in detail in the official documentation, or because I assume you already have some previous experience with GCP.

Architecture – Basic Explanation

This is the basic Pomerium architecture:

As you can see, instead of a user accessing the application directly, they access it through Pomerium, which is basically an identity-aware reverse proxy.

The user authenticates using their IDP account (GCP, Azure, Okta, whatever) and then Pomerium uses the user’s attributes to allow or deny access to specific pages using rules you create. All actions are logged and can be sent to a SIEM for analysis.

For example, you can create a rule where only users in the financial group are able to access the financial reports page of an application that, otherwise, would open these reports to everyone. You can also create an alert in your SIEM every time such a report is generated and/or downloaded.
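In Pomerium, a rule like that is expressed as a route policy. A hypothetical version of the "financial group" rule might look like the sketch below (the hostnames, backend address and group name are all placeholders, not from this guide's setup):

```yaml
# Hypothetical policy: only members of the financial group
# may reach the reports page; everyone else is denied.
- from: https://reports.pomerium.domain.com
  to: http://reports-app.internal:80
  allowed_groups:
    - financial@domain.com
```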

Pomerium is hosted in its own domain (in this case, pomerium.domain.com) and each application has a specific subdomain. This way, you can send your users a list of applications as links, such as:

app1.pomerium.domain.com
app2.pomerium.domain.com
payroll.pomerium.domain.com
etc!

Important note: By now you may be thinking, “Ok that’s cool and all, but how do I make my users actually use Pomerium instead of just accessing the application directly?”. That’s a great question!

You should use a firewall to block any access to your application that doesn’t come from Pomerium’s IP addresses (more on that below), so you “force” your users to go through Pomerium.

If it is a SaaS/3rd party application, they usually have an IP whitelisting feature that does the same thing. If they don’t, that sucks.
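If the application itself runs on GCP, one way to do this is a pair of VPC firewall rules: allow traffic only from Pomerium’s node IPs, and deny everyone else at a lower priority. The sketch below is illustrative only; the rule names, target tag and source IPs are placeholders you would replace with your own:

```
# Hypothetical example: allow HTTPS to the app only from Pomerium's node IPs
gcloud compute firewall-rules create allow-pomerium-only \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=203.0.113.10/32,203.0.113.11/32 \
    --target-tags=my-app

# Lower-priority rule denying HTTPS from everyone else
gcloud compute firewall-rules create deny-other-https \
    --direction=INGRESS \
    --action=DENY \
    --rules=tcp:443 \
    --priority=65000 \
    --target-tags=my-app
```

Remember that (as noted below) the node IPs are ephemeral by default, so you would need to update the allow rule if they change.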

Technical Details

I decided to use K8s because it gives you more elasticity, automatic failure recovery and eaaasy scalability. It’s a pain to set up, but the operation is so much easier once it’s done.

A K8s cluster is a cluster of machines that communicate with each other and host the containers needed for the application to run.

I created a cluster with three machines. You can see them on your instances page:

Important note: See the IP addresses marked in yellow? That’s the IP address Pomerium will use as the source address when connecting to applications. You need to allow them on the application’s Firewall/WAF if you have any.

Importanter note: Those IPs are ephemeral, so they will change from time to time. If you want a static IP, you have to reserve one for each machine (in the public IPs page) and manually assign it to each node in the cluster.
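The gcloud flow for reserving and attaching a static IP to a node looks roughly like this (a sketch, not a tested recipe; the node name and reserved IP are placeholders, and the zone matches the one used later in this guide):

```
# Reserve a static external IP in the node's region
gcloud compute addresses create pomerium-node-ip --region southamerica-east1

# Remove the node's current ephemeral access config...
gcloud compute instances delete-access-config <node-name> \
    --zone southamerica-east1-b --access-config-name "external-nat"

# ...and re-add it with the reserved address
gcloud compute instances add-access-config <node-name> \
    --zone southamerica-east1-b --address <reserved-ip>
```

Keep in mind that if a node is recreated (autoscaling, upgrades), you will have to reassign the address.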

A GKE cluster contains Services (the stuff that executes: a bunch of pods) and an Ingress (the stuff that communicates with the outside world):

Your Ingress contains the external IP address that your users will access. This is the IP you will need to configure in your DNS server.

You can also see that by default Pomerium creates 4 pods (ignore the Enterprise one; that’s not in the free version).

They are:

  • Pomerium-authenticate: Responsible for user authentication. Communicates with your IDP.
  • Pomerium-authorize: Responsible for authorization when accessing each page. If you want access logs to see what users are doing, they are all here.
  • Pomerium-cache: Caches data.
  • Pomerium-proxy: Proxies the connection. There are also access logs here, but they’re redundant and less complete than those on Pomerium-authorize.
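For example, to follow the access logs on the authorize service, you can point kubectl at its deployment (the deployment name below assumes the Helm release is named pomerium, as in this guide; check with kubectl get deploy if yours differs):

```
kubectl logs deploy/pomerium-authorize --since=5m -f
```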

So, just to make it clear:
The source addresses Pomerium will use when connecting to stuff are the instances’ addresses.

The destination address your users will resolve and connect to when accessing Pomerium or any application through it is the Ingress’ IP.

Right. So you see, a K8s cluster is a collection of nodes (VMs). These nodes run pods (groups of one or more containers). To manage these pods, you need to install and use kubectl. You should run kubectl commands using the Cloud Shell:

Use the following command to specify to kubectl which cluster it should use:

gcloud container clusters get-credentials <cluster name> --region <region name>

Useful Kubectl commands:

To show all pods:

kubectl get pods

To show logs from a specific pod:

kubectl logs <pod> 
kubectl logs <pod> --since=5m   # logs from the last 5 minutes

To show ingresses available on the cluster:

kubectl get ingress 

Execute a command inside a Pod:

kubectl exec <pod> -- <command>


Note: Pomerium's pods are extremely lean and don't include many basic commands, such as "ls".

About HELM

So you’ve seen there are about 5 layers of abstraction until you get to the running code. Well, let’s add another one!

[Image: Russian nesting dolls]
Modern Kubernetes deployment. Each doll is filled with volatile explosives.

HELM is used to install the entire K8s environment that Pomerium needs to run. It’s like apt-get, but for entire applications built on containers.

HELM installs all pods, services, ingresses and other resources that the application needs. This entire “package” is called a chart. The chart is available via a name (for Pomerium it is pomerium/pomerium). It’s something like a git repository: you give it a name and it finds it. HELM also makes it easy to update existing applications to the newest version or reinstall them.

A HELM chart usually comes with many options or variables for you to tune or configure. You should always check the chart’s README to know which options can/should be configured:

helm show readme [CHART]

So how do you define these options?
There are two ways:

1 – Passing the values when installing or upgrading:

With this option, you simply pass the variables you need in the command line:

helm install \
    pomerium \
    pomerium/pomerium \
    --set authenticate.service.type=NodePort \
    --set proxy.service.type=NodePort \
    --set ingress.secret.name="pomerium-tls" \
    --set config.sharedSecret=$(head -c32 /dev/urandom | base64) \
    --set config.cookieSecret=$(head -c32 /dev/urandom | base64) \
    --set ingress.secret.cert=$(base64 -w 0 -i "cert.crt") \
    --set ingress.secret.key=$(base64 -w 0 -i "cert.key") \
    --values values.yaml

It’s easy, but the downside is you have to pass the same flags every time you update the service.
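A quick note on those $( ) snippets: sharedSecret and cookieSecret must be random 256-bit values, which is exactly what reading 32 bytes from /dev/urandom and base64-encoding them produces. You can run the snippet on its own to see what such a secret looks like:

```shell
# Generate a random 256-bit secret, base64-encoded
# (same command used for sharedSecret/cookieSecret above)
SECRET=$(head -c32 /dev/urandom | base64)
echo "$SECRET"

# Decoding it gives back exactly 32 bytes (256 bits)
printf '%s' "$SECRET" | base64 -d | wc -c
```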

2 – Passing values using a values.yaml file

You saw that the last line in the previous command mentions a values.yaml file. This file contains any other values you don’t want to pass with "--set" flags.

VERY IMPORTANT:

There is a difference in format between setting configurations via "--set" and via "values.yaml".

For example, the configuration “proxy.service.type” shown above would look like this in YAML format, with each dot being replaced by an indented newline:

proxy:
  service:
    type: 'something'

All possible Pomerium configurations are here:

https://www.pomerium.io/reference/#shared-settings

You can set any option or setting that is not exposed by HELM (that is, not in the chart’s README) using the config.extraOpts flag, quoted so the shell passes the JSON through untouched:

--set config.extraOpts='{"option1":"value","option2":"value"}'

Getting Started!

First things first, you should create a project on GCP to host Pomerium and install kubectl and HELM on your CloudShell.

You also need the TLS certificate and private key for the domain you want to install Pomerium on. Upload them to your CloudShell.

Now save the file below as values.yaml and upload it to your CloudShell. Also notice that I used “domain.com” as my root domain. Replace it with your root domain; you can leave all the subdomains the same if you want to.

Note: You should read Pomerium’s official guide on how to configure your IDP. The example below is a template for GSuite. You should also choose the domain you’re going to use for Pomerium.

authenticate:
  idp:
    provider: "google"
    clientID: <something>
    clientSecret: <something>
    # Required for group data
    # https://www.pomerium.com/configuration/#identity-provider-service-account
    serviceAccount: <something>
  service:
    annotations:
      cloud.google.com/app-protocols: '{"https":"HTTPS"}'

proxy:
  service:
    annotations:
      cloud.google.com/app-protocols: '{"https":"HTTPS"}'

config:

  extraOpts: {'dns_lookup_family':'V4-ONLY'}
  rootDomain: pomerium.domain.com
  policy:
    - from: https://hello.pomerium.domain.com
      to: http://nginx.default.svc.cluster.local:80
      allowed_domains:
        - domain.com


ingress:
  annotations:
    kubernetes.io/ingress.allow-http: "false"
  hosts:
    - "*.pomerium.domain.com"
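Later, when you want to publish more applications through Pomerium, you just append more entries to the config.policy list in this file and run helm upgrade. A hypothetical second route (the hostnames and backend are placeholders) would look like:

```yaml
config:
  policy:
    - from: https://app1.pomerium.domain.com
      to: http://app1.default.svc.cluster.local:8080
      allowed_domains:
        - domain.com
```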

Now upload the script below to your CloudShell as helm_gke.sh along with your TLS cert/key and execute it:

IMPORTANT: The script below will create the cluster in the southamerica-east1-b zone and with only one node. Change it if you need to.

#!/bin/bash
# PRE-REQ: Install Helm : You should verify the content of this script before running.
# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
# NOTE! This will create real resources on Google's cloud. Make sure you clean up any unused
# resources to avoid being billed. For reference, this tutorial cost me <10 cents for a couple of hours.
# NOTE! You must change the identity provider client secret setting, and service account setting!
# NOTE! If you are using gsuite, you should also set `authenticate.idp.serviceAccount`, see docs !

echo "=> [GCE] creating cluster"
gcloud container clusters create pomerium --zone southamerica-east1-b --num-nodes 1

echo "=> [GCE] get cluster credentials so we can use kubectl locally"
gcloud container clusters get-credentials pomerium --zone southamerica-east1-b

echo "=> add pomerium's helm repo"
helm repo add pomerium https://helm.pomerium.io

echo "=> update helm"
helm repo update

echo "=> add bitnami's helm repo"
helm repo add bitnami https://charts.bitnami.com/bitnami

echo "=> install nginx as a sample hello world app"
helm upgrade --install nginx bitnami/nginx --set service.type=ClusterIP

echo "=> install pomerium with helm"
helm install \
    pomerium \
    pomerium/pomerium \
    --set authenticate.service.type=NodePort \
    --set proxy.service.type=NodePort \
    --set config.sharedSecret=$(head -c32 /dev/urandom | base64) \
    --set config.cookieSecret=$(head -c32 /dev/urandom | base64) \
    --set ingress.secret.name="pomerium-tls" \
    --set ingress.secret.cert=$(base64 -w 0 -i "domainCert.crt") \
    --set ingress.secret.key=$(base64 -w 0 -i "domainCert.key") \
    --values values.yaml


Now that everything is up and running, all you need to do is get the Ingress IP address and add it to your DNS server.
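You can grab that IP from the command line with a jsonpath query (the ingress name below assumes the default from this setup; confirm it with kubectl get ingress):

```
kubectl get ingress pomerium \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```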

That’s it!