Spot Ocean Expands with ‘Run Workloads’

Today we are proud to announce a new addition to Spot Ocean, our serverless compute engine that abstracts Kubernetes Pods from the underlying infrastructure. Spot Ocean relieves engineering teams of the overhead of managing their Kubernetes clusters by dynamically provisioning, scaling, and managing k8s worker nodes.

As part of our never-ending mission to ease Kubernetes operations and provide a more robust platform that can manage container workloads, we are happy to introduce a new capability called ‘Run Workloads’.

‘Run Workloads’ allows you to create, edit, and manage your Kubernetes Pods from the Spot Ocean console, giving you a more robust interface for managing your containerized workloads.

Run Workloads Overview

‘Run Workloads’ enables you to create container deployments directly from the Spot Ocean console and generate YAML files. The best practice is to store these files in version control, so you can track changes to your deployment configuration over time.

To access the ‘Run Workloads’ section, simply click “Actions” and then “Run Workloads”.

Using ‘Run Workloads’, you can create the following Kubernetes resource types directly from the Spot Ocean console:

  • Deployment
  • Pod
  • DaemonSet

In order to run a specific workload, Spot supports two methods:

  1. Importing a YAML deployment file
  2. Using a dedicated form in the Spot Ocean UI, where you can define several workload variables (see the sketch after this list), such as:
    • Container image and startup command
    • Key values
    • Resource requests
    • Node selectors
    • Pod affinity
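
For reference, these form fields map onto standard fields of a Kubernetes manifest. The following is a minimal, illustrative sketch of the kind of Deployment such a form could produce; the example-workload name, nginx image, disktype node selector, and EXAMPLE_KEY variable are placeholders rather than values from the Spot documentation, and the mapping of “Key values” to container environment variables is an assumption:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload            # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      nodeSelector:                 # "Node selectors" form field
        disktype: ssd               # placeholder label key/value
      affinity:                     # "Pod affinity" form field
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: example-workload
              topologyKey: kubernetes.io/hostname
      containers:
      - name: example-workload
        image: nginx:1.25           # "Container image" form field (placeholder image)
        command: ["nginx", "-g", "daemon off;"]   # "startup command" form field
        env:                        # assuming "Key values" maps to environment variables
        - name: EXAMPLE_KEY
          value: example-value
        resources:
          requests:                 # "Resource requests" form field
            cpu: 100m
            memory: 128Mi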

Use Case: Metrics Server Deployment 

There are numerous use cases for ‘Run Workloads’, and it is especially useful for single deployments such as metrics servers, cluster-monitoring agents, and third-party tools.

In the following example, the deployment workload we will run is a Metrics Server, which collects the Pods’ resource-consumption metrics from the Kubernetes cluster.

We will run this workload from a YAML file, taken from GitHub:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server-vpa-ocean
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.3
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.3
  template:
    metadata:
      name: metrics-server-vpa-ocean
      labels:
        k8s-app: metrics-server
        version: v0.3.3
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        command:
        - /metrics-server
        - --metric-resolution=30s
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.5
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
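          # NOTE: the {{ }} placeholders below come from the source template and
          # must be replaced with concrete values before deploying.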
          - --cpu={{ base_metrics_server_cpu }}
          - --extra-cpu=0.5m
          - --memory={{ base_metrics_server_memory }}
          - --extra-memory={{ metrics_server_memory_per_node }}Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.3
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          # Specifies the smallest cluster (defined in number of nodes)
          # resources will be scaled to.
          - --minClusterSize={{ metrics_server_min_cluster_size }}
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
To deploy it via ‘Run Workloads’:

  1. Paste the YAML content into the YAML section of ‘Run Workloads’.
  2. Validate all of the environment variables of your cluster, including the {{ }} template placeholders in the manifest above.
  3. Click “Deploy” and wait a few moments until a confirmation pop-up is displayed.
  4. After the deployment has completed successfully, browse to the “Pods” dashboard in the Spot Ocean UI, where you can see that the Pod was scheduled successfully and is already running as part of your cluster.

Resizing Deployments

The ability to resize deployments was added a few months ago and represented the first phase of this feature. Resizing allows you to determine how many replicas of a given deployment run in your cluster, and you can easily modify that number from the Spot Ocean console.
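
In Kubernetes terms, the replica count of a Deployment lives in its spec.replicas field, so resizing from the console is equivalent to applying a change such as the following patch fragment (shown against the hypothetical example-workload Deployment sketched earlier):

# Patch fragment: the resize control effectively updates this one field
# of the target Deployment (here, the hypothetical example-workload).
spec:
  replicas: 3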

Get Started Now!

The Spot Ocean ‘Run Workloads’ feature is now available for all Ocean clusters!

Get started today, and ease your Kubernetes operations.

More information and a step by step tutorial about ‘Run Workloads’ can be found here.

Coming Soon:

In the next phases of this feature, we will add additional capabilities, such as creating Services, running Pods on a schedule, and running Jobs. Stay tuned!