The Oracle Cloud free tier is surprisingly generous compared to other providers. For instance, it offers two VM.Standard.E2.1.Micro compute instances with 1 GB of memory each, whereas GCP’s free tier offers a single f1-micro instance with 0.6 GB of memory. What’s also quite attractive is that the VMs come with a free public IP, which is very useful when you want multiple A records for DNS round-robin.

That being said, 1 GB of memory is still below the recommended specs for a full-blown Kubernetes (k8s) cluster, so k3s is the more reasonable choice and sufficient for our purposes. The installation is also extremely painless.

Oracle set-up

First we create two VM instances on Oracle Cloud and choose to assign a public IP address. We chose the Oracle Autonomous Linux base image for its convenience and security promises, although any of the other provided Linux images would do. Autonomous Linux is based on Oracle Linux, which is Red Hat-compatible, so the command line interface should be familiar. We also added a public key during setup to facilitate SSH’ing into the VMs in later steps.

While the VMs are being prepared, we can set up the ingress policies for our virtual network (Networking→Virtual Cloud Networks→<network name>→Security List Details). At a minimum we want to open 6443/TCP for the API server; we may also need 8472/UDP (Flannel) and 10250/TCP (metrics-server) according to the k3s documentation. Allowing access from the private network is enough, although 6443/TCP will need to be publicly accessible if you want to reach the API server from outside the cluster.
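
For those who prefer the CLI over the console, the same rules can be expressed roughly like this (a sketch: the security list OCID is a placeholder, the 10.0.0.0/16 VCN CIDR is an assumption, and note that update replaces the existing ingress rules wholesale):

# the three k3s ports, scoped to the private network
cat > ingress.json <<'EOF'
[
  {"protocol": "6", "source": "10.0.0.0/16", "isStateless": false,
   "tcpOptions": {"destinationPortRange": {"min": 6443, "max": 6443}}},
  {"protocol": "17", "source": "10.0.0.0/16", "isStateless": false,
   "udpOptions": {"destinationPortRange": {"min": 8472, "max": 8472}}},
  {"protocol": "6", "source": "10.0.0.0/16", "isStateless": false,
   "tcpOptions": {"destinationPortRange": {"min": 10250, "max": 10250}}}
]
EOF

oci network security-list update --security-list-id <security list OCID> \
  --ingress-security-rules file://ingress.json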

Master node set-up

By now the VMs should be ready, so we’ll SSH into the master node and set up k3s. The default user for Oracle Cloud VMs is opc.
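For example (using the key we added during setup):

ssh -i ~/.ssh/<key> opc@<master node public IP>

Once connected, run the installer: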

curl -sfL https://get.k3s.io | sh - 

Or, if you want to use kubectl from outside the cluster,

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san <master node public IP>" sh - 

The config file for the cluster is located at /etc/rancher/k3s/k3s.yaml and is owned by root with mode 600 by default. Also, the root user doesn’t have /usr/local/bin, where kubectl is installed, in its PATH. These are sensible defaults and we won’t touch them.
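
If you do want to poke at the cluster from the node itself, the kubectl bundled into the k3s binary works regardless of PATH, e.g.:

sudo /usr/local/bin/k3s kubectl get nodes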

Using kubectl from outside the cluster should be as simple as copying the contents of /etc/rancher/k3s/k3s.yaml to a remote machine and changing the clusters[].cluster.server value to use the public IP address of the master node instead of the localhost address.
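
For example, from the remote machine (a sketch; sed’s in-place flag varies by platform):

ssh opc@<master node public IP> sudo cat /etc/rancher/k3s/k3s.yaml > k3s.yaml
sed -i 's/127.0.0.1/<master node public IP>/' k3s.yaml
kubectl --kubeconfig k3s.yaml get nodes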

While we’re inside the master node VM, we’ll also want to set up some firewall rules, as the security policies we set up on the OCI interface are not sufficient on their own. At first I tried setting things up like this:

firewall-cmd --add-port=6443/tcp --zone=public --permanent

# if needed
firewall-cmd --add-port=8472/udp --zone=public --permanent
firewall-cmd --add-port=10250/tcp --zone=public --permanent

# apply
firewall-cmd --reload

However, possibly due to the way Autonomous Linux is configured, those rules end up appended after the REJECT rules, so the nodes were not able to talk to each other. What I eventually went with was manually editing the iptables rules like so (placed above the REJECT rules, of course):

# allow traffic to and from the private network; the positions (15, 16,
# 18, 19) depend on the existing chains, so check where the REJECT rules
# sit with iptables -L --line-numbers first
iptables -I FORWARD 15 -s 10.0.0.0/8 -j ACCEPT
iptables -I FORWARD 16 -d 10.0.0.0/8 -j ACCEPT

iptables -I INPUT 18 -s 10.0.0.0/8 -j ACCEPT
iptables -I INPUT 19 -d 10.0.0.0/8 -j ACCEPT

Filtering outside access can be handled by the security policies on OCI’s side, so we don’t need to worry too much about it on the VM side.

Lastly for the master node, we’ll grab the token for connecting our worker node(s).

cat /var/lib/rancher/k3s/server/node-token

Worker node set-up

The free tier gives us up to two free VMs, so our setup has only one worker node, although these steps can be repeated for any number of worker nodes.

First, add the same firewall rules as above:

iptables -I FORWARD 15 -s 10.0.0.0/8 -j ACCEPT
iptables -I FORWARD 16 -d 10.0.0.0/8 -j ACCEPT

iptables -I INPUT 18 -s 10.0.0.0/8 -j ACCEPT
iptables -I INPUT 19 -d 10.0.0.0/8 -j ACCEPT

Then install k3s:

curl -sfL https://get.k3s.io | K3S_URL=https://<master node private IP>:6443 K3S_TOKEN=<master node token> sh -

Done. From the master node, or any machine with kubectl access to our cluster, we can now see all nodes with kubectl get nodes and start using the cluster.
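
The output should look roughly like this (names, ages, and versions are illustrative):

NAME       STATUS   ROLES    AGE   VERSION
master     Ready    master   10m   v1.17.4+k3s1
worker-1   Ready    <none>   2m    v1.17.4+k3s1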

Customizing k3s

We found that the bundled Traefik didn’t really suit our needs, and we already had a bunch of kustomization files from another cluster using ingress-nginx, so we ended up removing it.

First, edit the systemd k3s.service to append --no-deploy traefik to the startup command. Then remove Traefik by simply referencing the original definition at /var/lib/rancher/k3s/server/manifests/traefik.yaml with kubectl delete -f. You can then install another ingress controller of your choice, ours being ingress-nginx; simply follow the instructions for the given controller.
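
Concretely, the steps look something like this (a sketch; the full ExecStart line will carry more flags than shown):

# open the unit file and append --no-deploy traefik to ExecStart,
# e.g. ExecStart=/usr/local/bin/k3s server --no-deploy traefik
sudo systemctl edit --full k3s.service

sudo systemctl daemon-reload
sudo systemctl restart k3s
kubectl delete -f /var/lib/rancher/k3s/server/manifests/traefik.yaml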

At this point our nodes are speaking to each other, we have an ingress controller, and we’ve started adding our deployments. However, our apps weren’t accessible to the outside world despite the public IPs given by OCI. The final piece of the puzzle was MetalLB. This is where we got lazy and simply installed it directly using kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml (we might add ArgoCD at some later time for painless upgrades anyway). Then it was as simple as adding the required ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - <node IPs>
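
With that in place, MetalLB assigns the pool addresses to any Service of type LoadBalancer, such as the one typically created by ingress-nginx. A minimal sketch (the name and selector are made up):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF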

Then we added the nodes’ public IPs to our DNS A records and magic happened. Once we’re done, it’s also good practice to disable SSH access from the OCI GUI, as we only really need kubectl at this point. It can always be re-enabled later if necessary.

Afterword

And here comes the freaky part: Oracle and free already being such a strange combination, using OCI was actually quite a pleasant experience. At the moment Oracle probably has the most generous free tier we’ve seen, providing plenty of resources for playing around with bare-bones Kubernetes.