
Spot Ocean Expands with ‘Run Workloads’


Today we are proud to announce a new addition to Spot Ocean, our serverless compute-engine solution that abstracts Kubernetes Pods from the underlying infrastructure. Spot Ocean relieves engineering teams of the overhead of managing their Kubernetes clusters by dynamically provisioning, scaling, and managing k8s worker nodes.

As part of our never-ending mission to ease Kubernetes operations and provide a more robust platform that can manage container workloads, we are happy to introduce a new capability called ‘Run Workloads’.

‘Run Workloads’ allows you to create, edit, and manage your Kubernetes Pods from the Spot Ocean console, giving you a more robust interface for managing your containerized workloads.

Run Workloads Overview

‘Run Workloads’ enables you to create container deployments directly from the Spot Ocean console and generate YAML files. The best practice is to store these files in version control, so you can track changes to your deployment configuration over time.

To access the ‘Run Workloads’ section, click on “Actions” and then “Run Workloads”.

By using ‘Run Workloads’ you will be able to create the following Kubernetes resource types directly from the Spot Ocean console:

  • Deployment
  • Pod
  • DaemonSet
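
Each of these follows the standard Kubernetes manifest schema. As a minimal sketch (the name, namespace, and image below are illustrative, not part of the Spot Ocean example), a Deployment accepted by ‘Run Workloads’ could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app          # illustrative name
  namespace: default
spec:
  replicas: 2             # how many Pod copies to run
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.21 # illustrative image
```

A Pod or DaemonSet manifest differs mainly in its `kind` and in dropping or changing the replica semantics.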

In order to run a specific workload, Spot supports two methods:

  1. Importing YAML deployment file
  2. Using a dedicated form in the Spot Ocean UI where you can define several workload variables, such as:
    • Container Image and startup command
    • Key-value pairs (environment variables)
    • Resource requests
    • Node selectors
    • Pod affinity
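
The form fields above map onto standard fields of the Pod spec. As a hedged sketch (all values below are illustrative, not taken from the Spot Ocean example), a spec combining them might look like:

```yaml
spec:
  nodeSelector:            # node selectors
    disktype: ssd
  affinity:                # pod affinity
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache
        topologyKey: kubernetes.io/hostname
  containers:
  - name: demo-app
    image: nginx:1.21      # container image
    command: ["nginx", "-g", "daemon off;"]  # startup command
    env:
    - name: LOG_LEVEL      # key-value pair
      value: debug
    resources:
      requests:            # resource requests
        cpu: 250m
        memory: 256Mi
```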

Use Case: Metrics Server Deployment 

There are numerous deployments that can be executed using ‘Run Workloads’, and it is especially useful for single deployments such as metrics servers, cluster monitoring agents, and third-party tools.

In the following example, the workload we will run is a Metrics Server deployment, which collects the Pods’ resource-consumption metrics from the Kubernetes cluster.

We will run this workload from a YAML file, taken from GitHub:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server-vpa-ocean
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.3
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.3
  template:
    metadata:
      name: metrics-server-vpa-ocean
      labels:
        k8s-app: metrics-server
        version: v0.3.3
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        command:
        - /metrics-server
        - --metric-resolution=30s
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.5
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --cpu={{ base_metrics_server_cpu }}
          - --extra-cpu=0.5m
          - --memory={{ base_metrics_server_memory }}
          - --extra-memory={{ metrics_server_memory_per_node }}Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.3
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          # Specifies the smallest cluster (defined in number of nodes)
          # resources will be scaled to.
          - --minClusterSize={{ metrics_server_min_cluster_size }}
      volumes:
      - name: metrics-server-config-volume
        configMap:
          name: metrics-server-config
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
  1. Paste the YAML content into the YAML section of ‘Run Workloads’.
  2. Validate all of the environment variables of your cluster.
  3. Click on “Deploy” and wait a few moments until a confirmation pop-up is displayed.
  4. After the deployment has completed successfully, browse to the “Pods” dashboard in the Spot Ocean UI. As we can see, the Pod was scheduled successfully and is already running as part of our cluster.

Resizing Deployments

The ability to resize deployments was added a few months ago, as the first phase of this feature.
The resizing capability lets you determine how many replicas of a given deployment exist in your cluster, and you can easily modify that number from the Spot Ocean console.
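
In Kubernetes terms, resizing corresponds to changing the Deployment’s `spec.replicas` field, as in this illustrative fragment:

```yaml
spec:
  replicas: 3   # number of Pod copies the Deployment keeps running
```

The same change could also be made outside the console, for example with `kubectl scale deployment <name> --replicas=3`.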

Get Started Now!

The Spot Ocean ‘Run Workloads’ feature is now available for all Ocean clusters!

Get started today, and ease your Kubernetes operations.

More information and a step-by-step tutorial about ‘Run Workloads’ can be found here.

Coming Soon:

In the next phases of this feature, we will add additional capabilities, such as creating Services, running Pods on a schedule, and running Jobs. Stay tuned!