Automate infrastructure provisioning, management and scaling for big data cloud environments

Wave simplifies and automates cloud infrastructure for Spark, with Kubernetes as the scheduler. From setup and configuration to resource provisioning, management and teardown, Wave uses Spot’s AI-based engine to continuously optimize Spark clusters, choosing the best infrastructure for an application based on its real-time requirements. Built on top of Spot Ocean, Wave makes it possible to reliably run Spark applications on spot, reserved and on-demand instances, delivering up to 90% cost savings.

Wave Dashboard

Execute Spark jobs without worrying about infrastructure

Run applications reliably on an optimal mix of spot, on-demand and reserved instances to minimize cluster footprint and reduce cloud costs.
Track Spark jobs and optimize their parameters at runtime to more efficiently utilize resources and operate at high performance.
Advanced automation removes operational barriers for provisioning, scaling and monitoring cloud compute.
Robust cost metrics and analysis uncover the true cost of data applications and pipelines in the cloud.

Automation yields continuous optimization

fyber
Wave builds on the capabilities that we love in Ocean, focusing on the specific needs of big data applications. It will be very powerful to be able to plug Spark applications into Wave. The solution also has the amazing value of executing jobs with existing tools and potentially spin up the right infrastructure to power intensive ML applications.
Gal Aviv, CTO

Cloud-native big data with Wave

Bring infrastructure management under the same roof with Kubernetes as your big data cluster orchestrator.

Unified management

Deploy multiple workloads on the same Spark cluster.

Isolate Spark jobs

Reduce dependency management when moving workloads to different environments.

Efficient infrastructure utilization

Maximize node utilization and cluster efficiency.

Key features

Optimization engine
Leveraging advanced AI algorithms, Wave automatically chooses the best infrastructure to run an application at the highest performance, matching CPU, RAM and other resources in real time to application specifications.
Spark application right-sizing
Compare compute and memory configurations with actual usage to right-size applications, reduce overprovisioning, and avoid CPU throttling and out-of-memory conditions.
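The idea behind right-sizing can be illustrated with a small sketch. This is not Wave's actual logic; the function, its parameters, and the numbers are hypothetical, showing only the general principle of shrinking a request toward observed peak usage plus a safety buffer.

```python
# Illustrative right-sizing check (hypothetical logic, not Wave's engine):
# compare what an executor requested with its observed peak usage and
# suggest a smaller request when there is sustained slack.

def suggest_request(requested: float, peak_usage: float, buffer: float = 0.2) -> float:
    """Return a right-sized request: peak usage plus a safety buffer,
    never exceeding the current request."""
    suggested = peak_usage * (1 + buffer)
    return min(requested, suggested)

# Example: an executor requests 8 cores but peaks at 3,
# so roughly 3.6 cores (peak + 20% buffer) would suffice.
print(suggest_request(requested=8.0, peak_usage=3.0))
```

Keeping a buffer above the observed peak guards against the very throttling and out-of-memory failures that overly aggressive downsizing would cause.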
Container bin packing
Optimize resource allocations via bin packing algorithms that recognize when multiple containers should be placed on the same instance, or when they should be spread across a group.
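A classic heuristic for this kind of placement is first-fit decreasing, sketched below. This is an illustrative stand-in only; Wave's actual placement algorithm is not described here, and the container sizes and capacity are made up.

```python
# Minimal first-fit-decreasing bin packing sketch (illustrative only):
# place container sizes onto as few fixed-capacity instances as possible.

def pack(containers, capacity):
    """Return a list of bins, each a list of container sizes."""
    bins = []  # each bin is [remaining_capacity, [sizes]]
    for size in sorted(containers, reverse=True):
        for b in bins:
            if b[0] >= size:        # first instance with room wins
                b[0] -= size
                b[1].append(size)
                break
        else:
            bins.append([capacity - size, [size]])  # open a new instance
    return [b[1] for b in bins]

# Six containers packed onto two 8-unit instances instead of one each.
print(pack([5, 3, 4, 2, 1, 1], capacity=8))
```

Sorting largest-first before placing is what lets the heuristic fill instances densely rather than stranding capacity, which is the "maximize node utilization" goal described above.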
Warm startup
Maintain automatic headroom so Spark applications can run instantaneously without waiting for infrastructure to provision new capacity.
Meets you where you are
Pre-built integrations with JupyterHub, Airflow, Spark History Server, and spark-submit. Configure Jupyter or Zeppelin notebooks locally while executing Spark applications on Kubernetes remotely. With spark-submit support built in, there’s no need to learn new workflows.

Connect with us to learn more about Wave

Book a demo to see what Wave can do for your big data applications.

Request a Demo