This guide will help you configure a private container registry running on your Kubernetes cluster, backed by S3-compatible object storage.
What you will need:
- Basic working knowledge of Kubernetes
- A running Kubernetes cluster: We will be using Kubernetes resources, such as Load Balancers, that require cloud provider support.
- Basic working knowledge of Helm
- A valid domain
We will be using Vultr as our cloud provider for this guide, but all of the instructions can be adapted to your cloud provider of choice with minor changes.
If you also wish to use Vultr, there is an open-source Terraform module from Vultr called Condor, which bootstraps a working cluster in a few minutes. To find out more, visit https://github.com/vultr/condor
There are a few steps required to set up our private container registry backed by an S3 backend, and they are as follows:
- Deploy and secure the Registry.
- Configure our ingress route so we have a public way to connect to the registry.
- Configure our local environment and Kubernetes to interact with the registry.
Container Registry
Getting the registry deployed is fairly straightforward with a simple Helm chart. However, before we deploy the Helm chart there are a few steps required.
Setting up Object Storage
You will need to create an S3 bucket and make sure you have the following information:
- Bucket name
- Region Endpoint
- Region
- Access Key
- Secret Key
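If your provider exposes an S3-compatible API, you can create the bucket from the command line. Here is a minimal sketch using the AWS CLI against Vultr Object Storage; the bucket name and endpoint below are examples, so substitute your own:
# Run `aws configure` first with your Object Storage access key and secret key
aws s3api create-bucket \
  --bucket my-registry-bucket \
  --endpoint-url https://ewr1.vultrobjects.com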
Registry Authentication
If you want to secure your registry so that only authenticated users can push/pull images from it, we will need to set up basic authentication. We will use htpasswd to generate our username and password.
htpasswd -cB auth ddymko
This will create a file called auth with my username ddymko and my password hashed with bcrypt. Note that the Docker registry only accepts bcrypt-hashed htpasswd entries, which is why the -B flag is required.
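If you prefer a non-interactive one-liner, htpasswd can also take the password on the command line (-b) and print the entry to stdout (-n) instead of prompting; the username and password here are examples:
htpasswd -Bbn ddymko {PASSWORD} > auth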
Registry Helm chart
The Helm install command for the registry should look like the following:
helm install registry stable/docker-registry \
--set s3.region={REGION} \
--set s3.regionEndpoint={REGION_ENDPOINT} \
--set s3.secure=true \
--set s3.bucket={BUCKET_NAME} \
--set secrets.s3.accessKey={ACCESS_KEY} \
--set secrets.s3.secretKey={SECRET_KEY} \
--set secrets.htpasswd={CONTENTS_OF_AUTH_FILE} \
--set storage=s3
One thing to note if you are using Vultr: the region will have to be us-east-1.
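If you would rather keep these settings in a values file than a long list of --set flags, the same configuration can be expressed like this. This is just a sketch mirroring the flags above; the filename is illustrative, and the quoted heredoc keeps any $ characters in the htpasswd hash from being expanded by the shell:
cat <<'EOF' > registry-values.yaml
storage: s3
s3:
  region: {REGION}
  regionEndpoint: {REGION_ENDPOINT}
  secure: true
  bucket: {BUCKET_NAME}
secrets:
  htpasswd: {CONTENTS_OF_AUTH_FILE}
  s3:
    accessKey: {ACCESS_KEY}
    secretKey: {SECRET_KEY}
EOF
helm install registry stable/docker-registry -f registry-values.yaml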
By running kubectl get po && kubectl get svc you will see a pod and a service running with a prefix of registry-docker-registry.
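Before putting an ingress in front of it, you can sanity-check the registry from your workstation with a port-forward. This sketch assumes the chart's default service port of 5000 and the htpasswd user created earlier:
kubectl port-forward svc/registry-docker-registry 5000:5000
# In a second terminal; an empty repository list means the registry is up
curl -u ddymko http://localhost:5000/v2/_catalog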
Ingress
Ingress Helm Chart
We will be going with ingress-nginx for our ingress controller. This will deploy an external Load Balancer on the cloud provider we are currently running on, which will be our public-facing IP into the cluster.
The Helm configuration is as follows:
Note: If you are running on a cloud provider other than Vultr, you will need to look up which load balancer annotations your provider's ingress supports.
helm install lb-ingress stable/nginx-ingress \
--namespace=default \
--set controller.publishService.enabled=true \
--set controller.service.annotations."service\.beta\.kubernetes\.io/vultr-loadbalancer-protocol"="http" \
--set controller.service.annotations."service\.beta\.kubernetes\.io/vultr-loadbalancer-https-ports"="443" \
--set controller.service.annotations."service\.beta\.kubernetes\.io/vultr-loadbalancer-ssl-pass-through"="true"
This will install 2 pods and 2 services.
➜ ~ kubectl get po
NAME READY STATUS RESTARTS AGE
lb-ingress-nginx-ingress-controller-55c66f5798-w48vt 1/1 Running 0 2m50s
lb-ingress-nginx-ingress-default-backend-b76cb8b5b-bnw66 1/1 Running 0 2m50s
➜ ~ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6m49s
lb-ingress-nginx-ingress-controller LoadBalancer 10.98.183.8 8.9.6.172 80:30853/TCP,443:31396/TCP 2m52s
lb-ingress-nginx-ingress-default-backend ClusterIP 10.98.86.158 <none> 80/TCP 2m52s
Attaching Domain to Load Balancer
You may notice that the lb-ingress-nginx-ingress-controller service may have an EXTERNAL-IP of <pending>. This is because the Load Balancer is being deployed, and it may take a minute or two to get its IP address.
Once the EXTERNAL-IP has an IP address, you will need to attach your domain to the LB IP. With Vultr you can navigate to https://my.vultr.com/dns/ and add a new domain with the LB IP, which in our case is 8.9.6.172.
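Once the DNS record has propagated, a quick lookup confirms the domain points at the Load Balancer (the domain below is a placeholder):
dig +short registry.domain.com
# Expected output: the Load Balancer IP, e.g. 8.9.6.172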
SSL
We want to secure our registry with SSL. I will be using Lego to create my Let's Encrypt certs. If you want to use another method to create your SSL certificate, feel free to do so.
VULTR_HTTP_TIMEOUT=300 \
lego --dns vultr \
--domains \*.domain.com \
--domains domain.com \
--email [email protected] \
--path="certs" \
--accept-tos run
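One assumption worth calling out: lego's Vultr DNS provider reads your Vultr API key from an environment variable, so export it before running the command above:
export VULTR_API_KEY={YOUR_API_KEY}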
Running this will create a certs folder in the working directory, where we can grab our cert and key to create a TLS secret in Kubernetes (lego replaces the * in wildcard domains with _ in the file names).
kubectl create secret tls ssl --cert=certs/certificates/_.domain.com.crt --key=certs/certificates/_.domain.com.key
With all of that done, the YAML for your ingress route should look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "30720m"
  name: registry
  namespace: default
spec:
  rules:
  - host: registry.domain.com
    http:
      paths:
      - backend:
          serviceName: registry-docker-registry
          servicePort: 5000
        path: /
  tls:
  - hosts:
    - registry.domain.com
    secretName: ssl
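Save the manifest to a file and apply it (the filename here is illustrative):
kubectl apply -f registry-ingress.yaml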
After deploying the YAML, the container registry should now be running and accessible at https://registry.domain.com/v2. You should be greeted with a prompt for your username and password to access the registry.
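You can also verify from the command line; an authenticated GET against /v2/ should return a 200 with an empty JSON body:
curl -u ddymko https://registry.domain.com/v2/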
Pushing/Pulling images
Now that our registry is running and accessible, we need to set up our local machine, along with Kubernetes, to know how to push and pull images.
Locally
Getting Docker set up locally with our private registry is fairly straightforward.
docker login registry.domain.com
Now that we are logged in, we are able to pull and push images to our registry.
➜ ~ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:9a839e63dad54c3a6d1834e29692c8492d93f90c59c978c1ed79109ea4fb9a54
Status: Image is up to date for alpine:latest
docker.io/library/alpine:latest
➜ ~ docker tag alpine registry.domain.com/alpine:v1.0.0
➜ ~ docker push registry.domain.com/alpine:v1.0.0
The push refers to repository [registry.domain.com/alpine]
3e207b409db3: Pushed
v1.0.0: digest: sha256:39eda93d15866957feaee28f8fc5adb545276a64147445c64992ef69804dbf01 size: 528
Kubernetes
Now, in order for Kubernetes to be able to pull images from our registry, we need to create registry credentials. This is done by taking our local Docker authentication, which is located at ~/.docker/config.json, and giving it to Kubernetes.
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
To inspect the credentials you can run the following:
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
Note: Your Docker credentials may be stored in a credential store, in which case they are not written to config.json. You will need to set "credsStore": "" in the config.json and run docker login registry.domain.com again. More information can be found in the Kubernetes documentation on pulling images from a private registry.
Deploying a container
With your registry accessible through your domain, docker login working, and your Docker credentials deployed to Kubernetes, you are ready to deploy a container from your registry.
Below is a sample YAML that will pull the alpine image we pushed to our registry. It also defines imagePullSecrets which has our regcred secret so that Kubernetes is able to authenticate with our registry.
apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - name: alpine
    image: registry.domain.com/alpine:v1.0.0
  restartPolicy: OnFailure
  imagePullSecrets:
  - name: regcred
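Save the manifest and apply it (again, the filename is illustrative):
kubectl apply -f alpine-pod.yaml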
You should see the pod get deployed successfully, and if you run kubectl describe pod alpine you will see Kubernetes pulling the image from the registry in the Events section.
Wrapping up
Now, with a private registry in place, you can take things a step further and set up CI/CD to automate image building, deployments, and more.