Sailing through the Serverless Ocean with Spotinst & OpenFaaS Cloud

Learn how to deploy OpenFaaS Cloud to Spotinst Ocean, the new elastic Kubernetes engine from Spotinst to bring managed Serverless to your team



In this tutorial Alex Ellis, the founder of OpenFaaS, and Pavel Klushin, Solutions Architect Team Leader at Spotinst, will demonstrate how you can deploy your own self-hosted OpenFaaS Cloud environment to Spotinst Ocean. With Ocean managing your Kubernetes cluster you’ll no longer have to worry about capacity planning or whether you’ve got the most cost-effective nodes in your cluster. That means, in true serverless fashion, there are no virtual machines to manage and no clusters to operate. Spotinst Ocean automatically picks specific instance types for you and optimizes cluster utilization, all while leveraging Spot Instances at up to an 80% discount from On-Demand prices.

Empower DevOps teams and developers to build, run, and scale applications and functions with ease, without worrying about infrastructure and costs. OpenFaaS Cloud brings a managed serverless experience to your applications on Kubernetes.

In this post we will connect our GitHub repos or organizations to OpenFaaS Cloud, so that when our users perform a “git push” they get new HTTPS endpoints within seconds. OpenFaaS Cloud also supports self-hosted GitLab. For more see:

Experience level/target audience: DevOps practitioner or engineer with intermediate Kubernetes knowledge.

Configure Spotinst Ocean

If you’re already a Spotinst customer, log in to your dashboard; otherwise sign up for a free trial at the Spotinst website.

Create your EKS cluster with Ocean

Spotinst Ocean will provision, manage and automatically scale the nodes for your cluster based on pod requirements, but you will need to create your Kubernetes master node.

We can create a new Amazon EKS cluster through the Ocean dashboard using CloudFormation. At the time of writing, each Amazon EKS cluster you create has a cost of $0.20 per hour (around 150 USD per month) for its managed master nodes. Ocean also supports other clouds where master nodes are priced differently, such as Google Cloud or Azure AKS.

Note: We will use EKS in this tutorial for the convenience of having managed master nodes, but you can save money if you want to manage your own cluster on AWS by using the open-source kops tool.

Before creating an EKS cluster make sure you have installed the following tools as prerequisites:
  1. kubectl (Amazon EKS-vended)
  2. awscli 1.16.18+
  3. aws-iam-authenticator

In order to create an EKS cluster using Ocean, please use the Ocean creation wizard or use a custom script that provisions an EKS & Ocean Cluster.

To get started, click on Cloud Clusters under the Ocean section of the Spotinst Console followed by Create Cluster and choose Create a new Cluster.

You will need to have at least one EC2 key-pair in your region.

Click on “Generate Token”

Fill in the “Cluster Name”, “Region” and “Key Pair” and click on “Launch CloudFormation Stack”

In the CloudFormation console you can review the configuration and click on “Create Stack”. It will take about 15 minutes to create the EKS cluster.

Going back to the Spotinst console, follow Step 4 and run the commands from your CLI:

  • aws eks update-kubeconfig --name <cluster name>
  • Check the connectivity to your EKS cluster by running “kubectl get svc”

Install the Spotinst controller by running the controller installation script:

#!/usr/bin/env bash
curl -fsSL | \

Note: the Token, Account ID and Cluster name will be filled in automatically by the Spotinst console.

Follow Step 5 and update AWS Authentication Config-Map

Download the AWS authenticator configuration map

curl -O

In the aws-auth-cm.yaml file, replace the

 <ARN of instance role (not instance profile)> 

snippet with the NodeInstanceRole value from the EKS cluster CloudFormation Stack.

Apply the updated aws-auth-cm.yaml to the cluster

kubectl apply -f aws-auth-cm.yaml

Prepare for OpenFaaS Cloud

You can install OpenFaaS in 60 seconds on any Kubernetes cluster – whether that is your laptop, on-premises or using a managed service. Guides are available in the documentation at:

In this tutorial we will be installing OpenFaaS Cloud which brings a multi-user serverless experience with integrated git-workflow and CI/CD.

OpenFaaS Cloud is a shrink-wrapped platform consisting of:

  • OpenFaaS installed with helm
  • Nginx as an IngressController
  • SealedSecrets from Bitnami to allow secrets to be encrypted in your git repo
  • cert-manager to provision HTTPS certificates with LetsEncrypt
  • Docker’s buildkit project for building immutable Docker images for each function
  • Authentication/authorization through OAuth2 using GitHub/GitLab
  • Deep integration into GitHub/GitLab commit statuses
  • A personalized dashboard for each user

You can install OpenFaaS Cloud manually by following the developer-guide and customize any components you need. The easiest installation method is to automate everything with the ofc-bootstrap tool which uses recommended tools, projects and defaults to reduce the time taken to about 1.5 minutes on a new cluster.

Install prerequisite command-line tools
Prepare GitHub

You will create a GitHub App used to connect git push events in your managed repos and organisations to OpenFaaS Cloud, securely, through pub-sub. You will create a GitHub OAuth2 App to enable login to your OpenFaaS Cloud Dashboard with your GitHub identity.

  • Take note of your domain name

OpenFaaS Cloud uses a naming convention with your chosen domain to provide system, authentication and user endpoints with their own TLS certificates. Your URLs should be as per the documentation and will look like:

  • *.domain.tld – each user has a sub-domain for their own GitHub username or organization
  • system.domain.tld – used to serve the dashboard and GitHub webhooks
  • auth.system.domain.tld – used for OAuth, JWT and cookies

Carefully follow the documentation at the link below, and take note of the required IDs, secrets, keys and URLs.

  • Create the GitHub App
  • Create the OAuth App

We can control who can deploy functions to our OpenFaaS Cloud (OFC) by setting up an Access Control List (ACL) populated with GitHub usernames. The ACL prevents unauthorized users from deploying functions to your OFC instance.

Create a new public GitHub repo. This repository should not contain any code or functions; it just stores the ACL.

Create a plain text file named CUSTOMERS and add each GitHub username login on a new line like this:
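For example, a CUSTOMERS file admitting the two authors of this post would look like this (substitute the GitHub usernames of your own team):

```
alexellis
pavelklushin
```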



Deploy OpenFaaS Cloud

First, clone the ofc-bootstrap tool, then download the binary from the releases page to this folder on your local computer.

Clone the repository:

mkdir -p ~/dev/ofc/
cd ~/dev/ofc/
git clone
cd ~/dev/ofc/ofc-bootstrap

You can now download and run the ofc-bootstrap tool from the GitHub releases page.

If you are running on Linux pick “ofc-bootstrap”; if you’re running on MacOS pick “ofc-bootstrap-darwin”; and for Windows pick “ofc-bootstrap.exe”.

Note: make sure to download the file to the cloned directory of ofc-bootstrap repo.

Customize your init.yaml

We will now enter all our secrets into a file named init.yaml which is used by ofc-bootstrap to configure your cluster with OpenFaaS Cloud. This takes around 100 seconds to execute.

Create a new init.yaml file with this command:

cp example.init.yaml init.yaml

Fill out init.yaml and create named secrets as per the comments in the file, here are some of the things you need to do or update:

  • Look for the `### User-input` section of the file
  • Set the path for your GitHub App private key file (downloaded earlier)
  • Create a new user with permissions to manage Route53. This will be used by cert-manager to provision your TLS certificates using the DNS01 challenge. The cert-manager tool runs inside your cluster as a Kubernetes Pod.
  • Create a plaintext file ~/Downloads/route53-secret-access-key and insert the secret key value of the new user
  • Edit the “access_key_id” section at the bottom of the file and insert the access key of the user
  • Fill out the root_domain, for instance: domain.tld
  • Enter the GitHub App ID (from earlier)
  • Fill out the client_id and client_secret from (GitHub OAuth page)
  • Turn on TLS by setting “tls: true” and uncomment the AWS Route 53 section.
  • Pick production TLS certificates from LetsEncrypt by setting issuer_type: prod
  • Fill out your email in this section
  • Turn on OAuth by setting enable_oauth: true
  • Open the Docker Desktop settings and make sure you have “Store my credentials in a key-chain” set to false, now run “docker login” and login with your Docker Hub account
  • Edit the “registry:” field and set your own Docker Hub username, for example if your username is ofc-summit then use that as the registry prefix
  • Edit the line that says “customers_url” and enter the GitHub Raw URL to your CUSTOMERS ACL file
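After working through the list, the user-input section of your init.yaml contains values along these lines. The field names and nesting below are illustrative only; the comments in example.init.yaml are the authoritative reference for your release of ofc-bootstrap:

```yaml
## Illustrative excerpt only - consult example.init.yaml for the exact schema
root_domain: domain.tld
registry: docker.io/dockerhub_username/
customers_url: https://raw.githubusercontent.com/your-org/your-acl-repo/master/CUSTOMERS
tls: true
issuer_type: prod
email: you@domain.tld
enable_oauth: true
```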

Now run the tool like this:


You may receive errors if you have failed to install any of the pre-requisites such as kubectl or faas-cli. You may also receive errors if you have failed to create any of the required secret files.

Check the output for any errors. If you get stuck you can sign up for the OpenFaaS Slack community and ask for help.

Populate the generated webhook secret in your GitHub app

Navigate to your GitHub App and enter the webhook secret as found with this command:

echo $(kubectl get secret -n openfaas-fn github-webhook-secret -o jsonpath="{.data.github-webhook-secret}" | base64 --decode; echo)
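Kubernetes stores Secret values base64-encoded, which is why the command above pipes through `base64 --decode`. The round-trip can be illustrated with a throwaway value (not a real secret):

```shell
# Encode a sample value the same way Kubernetes stores Secret data,
# then decode it back - mirroring the --decode step in the command above.
encoded=$(echo -n "s3cr3t" | base64)
echo "$encoded"                          # prints the encoded form: czNjcjN0
echo "$encoded" | base64 --decode; echo  # prints the original value: s3cr3t
```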

Update your LoadBalancer’s incoming traffic security policy (optional)

If you have any policies applied to your AWS account to control or limit incoming connections then edit the LoadBalancer’s configuration to allow incoming TCP traffic from GitHub to the LoadBalancer on port 443

Go to your Load Balancers in AWS and choose the Load Balancer that was created by ofc-bootstrap.

Click on the Security group and edit the inbound rules for port 443.

Set-up AWS Route53 for your domain name

For OpenFaaS Cloud we will need a domain name managed by AWS Route53, GCP or DigitalOcean. Why? Because we will create a certificate for the wildcard domain and this requires a DNS01 challenge instead of the usual HTTP01 challenge.

In this example we will use Route53. This key is used by cert-manager for DNS01 challenges.

Prepare your domain

Either transfer an existing domain by setting up the correct nameservers for AWS or register a new domain and put it under the management of AWS Route53.

Create a service account that you can use for managing DNS via the AWS API. This may be your existing user account, but we recommend setting up a separate identity for this.

Now that we have deployed OpenFaaS Cloud, we need to create our DNS records in AWS Route 53.


We will set these records to point at the address of the LoadBalancer created by the ofc-bootstrap tool. Run the following command to find the IP address or DNS hostname of the LoadBalancer for the Nginx IngressController deployed as part of ofc-bootstrap.

kubectl get svc/nginxingress-nginx-ingress-controller -o wide

Use the AWS Console to navigate to your domain and update the IP addresses to point to the “Public address” found in the previous command. Alternatively you can use the following commands to create these records:

Copy the shell script below, which updates your AWS Route53 hosted zone to include the OpenFaaS Cloud records. Note: you need to specify your domain in the domain variable at the top of the script.

#!/usr/bin/env bash

# Set your domain here, e.g. domain.tld
export domain=""

# The hostname of the Nginx IngressController's LoadBalancer
export ip=$(kubectl get svc/nginxingress-nginx-ingress-controller -o json | jq -r '.status.loadBalancer.ingress[0].hostname')

# Write a change-batch with CNAME records for *.domain, auth.domain and system.auth.domain
echo '{"Comment":"update domain to point to nginx ingress controller","Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"*.${domain}","Type":"CNAME","TTL":60,"ResourceRecords":[{"Value":"${ip}"}]}},{"Action":"UPSERT","ResourceRecordSet":{"Name":"auth.${domain}","Type":"CNAME","TTL":60,"ResourceRecords":[{"Value":"${ip}"}]}},{"Action":"UPSERT","ResourceRecordSet":{"Name":"system.auth.${domain}","Type":"CNAME","TTL":60,"ResourceRecords":[{"Value":"${ip}"}]}}]}' > record.json

# Substitute the real values for the ${ip} and ${domain} placeholders
resource_config=$(sed -e 's@${ip}@'"$ip"'@g' -e 's@${domain}@'"$domain"'@g' record.json)

# Apply the change-batch to the hosted zone for the domain
zone_id=$(aws route53 list-hosted-zones-by-name --dns-name ${domain} | jq -r '.HostedZones[0].Id')
aws route53 change-resource-record-sets --hosted-zone-id ${zone_id} --change-batch="${resource_config}"
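The jq expression used to extract the LoadBalancer hostname can be tried standalone against a sample of the JSON shape that kubectl returns for a LoadBalancer Service (the hostname below is a made-up placeholder):

```shell
# Abridged shape of `kubectl get svc ... -o json` for a LoadBalancer Service
json='{"status":{"loadBalancer":{"ingress":[{"hostname":"abc123.eu-west-1.elb.amazonaws.com"}]}}}'
echo "$json" | jq -r '.status.loadBalancer.ingress[0].hostname'
# prints: abc123.eu-west-1.elb.amazonaws.com
```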

Run the shell script

$ chmod +x ./

Test it out

You can now test out the experience of OpenFaaS Cloud by creating a GitHub repo and pushing a function there. You can call the repo “openfaas-cloud-test” for instance.

Now log into GitHub -> Your profile settings -> Developer Settings -> GitHub Apps -> {your application} -> click the logo for that application.

On the next page find the “Install” button for your GitHub App and select the test repository you have just created.

Install it on your new GitHub repo so that OpenFaaS Cloud will be sent events for each time you run “git push”.

Build a function

Let’s build a function named timezone-shift in JavaScript which takes a JSON body with the time of a meeting in your local time and shifts it by a number of hours. This would be useful, for instance, if you lived in London and met with colleagues on the West Coast of the USA.
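The shift itself is plain time arithmetic. As a sanity check of the result we expect later, the 8-hour offset from 17:00 can be reproduced in the shell (GNU date assumed, as on most Linux machines):

```shell
# 5pm (17:00 UTC) minus 8 hours should give 9am on the same day.
epoch=$(date -u -d '2019-02-18T17:00:00Z' +%s)   # parse the meeting time to epoch seconds
date -u -d @"$((epoch - 8*3600))" +%Y-%m-%dT%H:%M:%S
# prints: 2019-02-18T09:00:00
```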

Note – replace the two username variables with your respective usernames.

mkdir -p ~/dev/
cd ~/dev/
git clone
cd openfaas-cloud-test
faas-cli template store pull node10-express
faas-cli new --lang node10-express timezone-shift --prefix=dockerhub_username

Now install the moment.js package

cd timezone-shift
npm install --save moment

Edit handler.js:

"use strict"

const moment = require('moment');

module.exports = (event, context) => {
    let meeting = moment.utc(event.body.meeting)
    let adjusted = meeting.clone().utc().add(-8, 'hours');

        .succeed({ meeting: meeting.format(), adjusted: adjusted.format() });

We need to rename our function’s YAML file to stack.yml, so that it can be picked up by the OpenFaaS Cloud CI/CD pipeline.

mv timezone-shift.yml stack.yml

If this is the first time you have used git then you may need to set your username and email.

git config --global "My Full Name"
git config --global "my@email.domain"

Now return to the root directory and push your changes to Git.

cd ~/dev/openfaas-cloud-test
git add .
git commit --signoff

git push origin master

Open the “Commits” page of your GitHub repo and look for the commit status for your code. You will see a dot appear showing the build in progress followed by a Tick or an X. Clicking on this shows additional details about how your function was built such as the unit test results or installation of your npm packages.

Open your dashboard, you will see all of your functions appear as below:

Click on the function to open its details page, then copy the endpoint for the next step.

Invoke the function and find out what time your meeting at 5pm in London will be over in San Francisco, substituting in the endpoint you copied from the dashboard:

curl <your function endpoint> -H "Content-type: application/json" -d '{ "meeting": "2019-02-18T17:00:00"}'; echo
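The response should look something like the following (the exact formatting comes from moment.js defaults, so treat this as illustrative):

```json
{ "meeting": "2019-02-18T17:00:00Z", "adjusted": "2019-02-18T09:00:00Z" }
```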
Configure a SealedSecret for your application

It is very likely that most of your functions will need to use some sort of confidential data such as an API key or a sensitive piece of data. The SealedSecrets project from Bitnami is integrated into OpenFaaS Cloud to enable secrets to be encrypted at rest in your git repository. When you do a “git push” the CI/CD pipeline will attach them to your functions.

Learn how to add a SealedSecret to your function in the OpenFaaS docs here:

Invite your team to your OpenFaaS Cloud

You are the first user for your installation, now you can invite your team, colleagues and friends, or keep the environment for yourself. You can enroll new users by adding their GitHub usernames to your CUSTOMERS file at any time.

Testing the waters with Ocean

In this section of the tutorial we will test out our OpenFaaS Cloud deployment by creating load, monitoring what happens in the Ocean dashboard and seeing how Ocean manages scaling and headroom automatically.

faas-cli login (to get power-user access)

Use the OpenFaaS docs to retrieve your admin login password for the OpenFaaS REST API.

Once you have the password run the following command to gain access to the internal OpenFaaS API, which is hidden when deployed as OpenFaaS Cloud. This command gives us a secret tunnel to access it.

kubectl port-forward -n openfaas deploy/gateway 8080:8080 &

You can now access the OpenFaaS UI and API on http://127.0.0.1:8080

Login to the API:

echo -n $PASSWORD | faas-cli login --gateway http://127.0.0.1:8080 \
 --username admin \
 --password-stdin

Type in `faas-cli list --verbose` to see your deployed functions.

Monitor OpenFaaS Cloud in the Ocean dashboard

Log into the Ocean dashboard and find the openfaas-fn namespace. In the different views here, you can see how much CPU and memory is currently allocated, on which nodes they are running and how much that is costing you.

Simulate scale from zero

Scale the timezone-shift function to zero replicas using:

kubectl scale deploy/username-timezone-shift -n openfaas-fn --replicas=0

Now invoke the function as above. On a cluster that already has the headroom available to schedule the new Pod, you’ll see a short pause of around 1s while the function scales back up from zero.

Simulate auto-scaling of Pods with the `hey` load-testing tool

Both OpenFaaS and Ocean support intelligent auto-scaling. OpenFaaS scales within the capacity of your cluster by creating new Pods (horizontally), while Ocean scales both horizontally and vertically by provisioning new worker nodes or exchanging instance types for ones with more or fewer resources, based on the Pods’ requirements. Auto-scaling of nodes in a Kubernetes cluster is not trivial and Ocean must account for constraints such as PodDisruptionBudgets and affinity/anti-affinity rules specified in Pod specifications.

Install Golang from this URL:

Now install the hey load-testing tool:

go get -u

Let’s simulate 2 concurrent users, each making 10 requests per second over 5 minutes:

hey -c 2 -q 10 -z 5m

You can monitor the auto-scaling of Pods or OpenFaaS function replicas in your OpenFaaS Cloud Dashboard.

Open the Ocean Dashboard in a separate tab and you should be able to see new Pods and nodes being provisioned as the load test runs.

When you have a static number of nodes in your cluster, you will see the Pods scaling, but at some point you may hit your capacity limit. Several of the managed Kubernetes offerings include “cluster auto-scaling”, which adds additional nodes as necessary.

From the screenshots below you can see the nodes scale from 2 to 3 and the Pods scale from 1 to the upper limit configured in the cluster. The default configuration for OpenFaaS Cloud is 1/4 min/max replicas, but we set this to 1/32 min/max replicas for the test.
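The per-function replica range is controlled through OpenFaaS scaling labels on the function. A sketch of a stack.yml excerpt giving the 1/32 range used for this test would look like this (image name is a placeholder; the label names are documented in the OpenFaaS autoscaling docs):

```yaml
functions:
  timezone-shift:
    lang: node10-express
    handler: ./timezone-shift
    image: dockerhub_username/timezone-shift
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "32"
```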

Before scaling – 2 running nodes and 1 running pod with the timezone function

The autoscaler receives events about insufficient capacity in the cluster for the new Pods (functions) and scales up more nodes based on the Pod requirements.

Additional pods are now pending:

Ocean provisions 1 additional node to cope with the demand generated by the test, and the instance count scales to 3 instances.

Here the Pods have completed scaling-up to the upper-limit for the function.

More advanced tracking of the running Pods over time is available in the “Namespaces” view, which shows the correlation between the running Pods and the scaling nodes.

Both the number of nodes and Pods in the cluster will be scaled down when the test is completed and traffic returns to normal levels.

Manual Headroom configuration – optional

For faster scaling you can configure Headroom, a buffer of spare capacity (in terms of both memory and CPU) which ensures that when you want to scale up more tasks you don’t have to wait for new instances to launch, while also preventing instances from being over-utilized.

Open the Spotinst Ocean Console and go to Actions -> Customize Scaling

Specify “Manual” Headroom of 2 pods with 1 vCPU & 20MiB of RAM and click on Update.

Tear-down (optional)

You can keep your OpenFaaS Cloud up and running and share it with your friends. Or if you aren’t quite ready to keep it running then use the script to remove the components before deleting your AWS CloudFormation templates and the Ocean cluster in the Spotinst dashboard.

cd ~/dev/ofc/ofc-bootstrap

Note: AWS may take several hours to clean up all the resources which were created. We recommend you browse your AWS Console to check if anything got left behind and remove that manually if needed.

Summing up

EKS with worker nodes managed by Ocean is probably the easiest and cheapest way to run Kubernetes on AWS today whilst retaining a managed experience. There is a running cost of around 150 USD per month per master, but Ocean supports other clouds, including GCP and Azure, where the pricing structure for master nodes is different.

You can view the costs of your functions, broken down by Namespace and Deployment, under the “Cost” tab in the Ocean Dashboard.

OpenFaaS Cloud is the foundation for Serverless applications – whether microservices or functions using a wide range of templates. The skill-set for Serverless is reduced to just “git push” with rich and timely feedback. If you’d like to learn how to build a complete application read this post next: Single Page App with Serverless and Postgres DBaaS.

When you combine both Ocean and OpenFaaS, you get a highly scalable Serverless experience where you no longer have to worry about managing infrastructure, but can enjoy the benefits of a portable, open-source Serverless framework for Kubernetes in a cost-effective way.

Let us know what you think via Twitter and Slack:

@openfaas / OpenFaaS Slack   / Spotinst Slack