Welcome, Multai Load Balancer.
As AWS and other cloud vendors become the new normal for companies today, new approaches are needed to ensure your applications are delivered to your clients safely and efficiently. Most load balancers on the market today are either very basic or costly services, or were built from legacy technology and adapted to fit the cloud. We at Spotinst understand that your applications are constantly evolving and that you need support wherever your applications are hosted.
Introducing the Spotinst Multai Load Balancer (MLB): a Layer 7 application delivery load balancer designed from the ground up to provide the same load balancing experience regardless of the hardware or cloud vendor your applications run on, combined with the analytics you need to ensure you are delivering the experience your customers require. MLB supports hybrid deployments, meaning you can have MLB load balancers located on-prem, in AWS, GCP, and Azure, all at the same time. Let’s dive into some details below.
Groundbreaking Features
- Intelligent traffic routing to lower data transfer costs (if an MLB instance is in the same AZ, traffic is routed to it)
- HTTP translation layer: browser requests can use HTTP/2 while your application speaks an older version (supports HTTP/2 and HTTP/1.1)
- WebSocket support and advanced routing with multiple ports on the same host
- Automatic scaling: MLB scales up to meet demand as needed
- Application level health check via URL endpoint rules
- Advanced logging provides the ability to record all requests and store them for analysis
- Robust SSL support including decryption, certificate storage, and end to end encryption to back end servers
- Circuit breaker to ensure data integrity and resiliency: in case of a network error, MLB retries the request by spilling it over to a different node (see the sketch after this list)
- The “Roshemet” service watches for changes in the back end and automatically applies a new configuration as needed, further simplifying your container infrastructure
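To make the circuit-breaker spill-over idea concrete, here is a minimal Go sketch of the pattern. It is not MLB's actual implementation: the backend addresses, timeout, and function names are illustrative assumptions, and a real proxy would also stream request bodies and track per-node failure state.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// backends is an illustrative list of upstream Targets; the addresses here
// are placeholders, not real MLB configuration.
var backends = []string{
	"http://172.31.10.13:5000",
	"http://172.31.10.14:5000",
}

// forwardWithSpillover tries each backend in turn. If one fails with a
// network error or a 5xx response, the request spills over to the next
// node instead of immediately returning an error to the client.
func forwardWithSpillover(path string) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	var lastErr error
	for _, b := range backends {
		resp, err := client.Get(b + path)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // healthy response, stop here
		}
		if err == nil {
			resp.Body.Close()
			err = fmt.Errorf("backend %s returned %d", b, resp.StatusCode)
		}
		lastErr = err // remember the failure and try the next node
	}
	return nil, fmt.Errorf("all backends failed, last error: %w", lastErr)
}

func main() {
	if resp, err := forwardWithSpillover("/healthz"); err != nil {
		fmt.Println("request failed:", err)
	} else {
		fmt.Println("served by a healthy backend:", resp.Status)
		resp.Body.Close()
	}
}
```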
Real World Data Analytics
When we first started developing the idea for a new load balancer, we knew the bare minimum would not be sufficient. Even if we could deliver a new product at a lower cost, that alone would not be compelling enough for most users to switch to MLB. This is why we integrated sophisticated analytics directly into our product and delivered them to our clients in a slick dashboard. With MLB you get amazing data for your applications, such as:
- Real-time application latency and response-time metrics
- Ability to identify latency per host, per target group, or per cloud provider
- Application HTTP response code reports per host and per cloud
- Real-time geo analysis – where the traffic originates
- Distribution of operating system, user device and browser
- Tracking user fingerprint (unique users)
- Latency Distribution graph, which is a clearer, aggregated view of the percentile graph, so you can understand how your application really behaves (see the sketch after this list)
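As a rough illustration of what feeds a Latency Distribution graph, the self-contained Go sketch below (not MLB code) aggregates raw request latencies into the p50/p95/p99 values that such a percentile view is built from; the sample data is made up.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at percentile p (0-100) from a sorted slice
// of latency samples, using nearest-rank selection.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted)-1) * p / 100.0)
	return sorted[idx]
}

func main() {
	// Illustrative latency samples; a real dashboard would aggregate these
	// per host, per target group, or per cloud provider.
	samples := []time.Duration{
		12 * time.Millisecond, 15 * time.Millisecond, 9 * time.Millisecond,
		110 * time.Millisecond, 14 * time.Millisecond, 17 * time.Millisecond,
		16 * time.Millisecond, 300 * time.Millisecond, 13 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })

	for _, p := range []float64{50, 95, 99} {
		fmt.Printf("p%.0f latency: %v\n", p, percentile(samples, p))
	}
}
```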
MLB Terms
- Runtime Deployment: A set of virtual or physical machines running the MLB software, called “Runtime Nodes”.
- Runtime Node: A node (VM or server) that runs the MLB Go software and acts as a proxy between your incoming traffic and your application servers. Runtime Nodes can be launched either on premises or in your cloud provider. Once your hosts are launched, you can start creating Load Balancers using the UI console or via the API.
- Load Balancer: A load balancer is an entity that acts as a reverse proxy and distributes network or application traffic across a number of servers.
- Listener: A process that checks for incoming connections to the Load Balancer on a specific protocol and port.
- Routing Rule: A rule that matches an incoming request and forwards it to a Target Group for Target selection. Matching can be based on an HTTP header or on an expression that matches the request, e.g. Path("/v1/path").
- Target Group: A logical collection of Targets that receive traffic from a Routing Rule.
- Target: The final destination of the incoming request. Each Target is defined by a URL of the form <protocol>://<host>:<port>, e.g. http://172.31.10.13:5000, and a weight (see the sketch after this list for how these terms fit together).
- DNS: Route53 is the default; however, you can choose any other DNS provider.
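To see how these terms fit together, the sketch below models the hierarchy (Listener → Routing Rule → Target Group → weighted Targets) as plain Go structs. The field names and the path-prefix matching are illustrative assumptions, not MLB's actual API or configuration schema.

```go
package main

import "fmt"

// Target is the final destination of a request: <protocol>://<host>:<port>
// plus a relative weight.
type Target struct {
	URL    string
	Weight int
}

// TargetGroup is a logical collection of Targets that receive traffic
// from a Routing Rule.
type TargetGroup struct {
	Name    string
	Targets []Target
}

// RoutingRule matches an incoming request (here by path prefix) and
// forwards it to a Target Group.
type RoutingRule struct {
	PathPrefix  string
	TargetGroup TargetGroup
}

// Listener accepts connections on a protocol and port and evaluates
// its Routing Rules in order.
type Listener struct {
	Protocol string
	Port     int
	Rules    []RoutingRule
}

func main() {
	// A single HTTP listener whose rule routes /v1/path to one weighted Target.
	lb := Listener{
		Protocol: "HTTP",
		Port:     80,
		Rules: []RoutingRule{{
			PathPrefix: "/v1/path",
			TargetGroup: TargetGroup{
				Name: "aws-us-east-1",
				Targets: []Target{
					{URL: "http://172.31.10.13:5000", Weight: 1},
				},
			},
		}},
	}
	fmt.Printf("%+v\n", lb)
}
```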
How does it work?
We provide you with an installation script that can be deployed in any cloud provider or even in your own datacenter. Servers launched with this script act as Runtime Nodes. MLB is intelligent enough to ensure that Runtime Nodes with more compute resources receive more traffic than Runtime Nodes with fewer compute resources.
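As an illustration of capacity-weighted distribution (a sketch of the general idea, not MLB's actual scheduler), the following Go snippet picks a Runtime Node with probability proportional to an assumed weight, so a node with twice the weight receives roughly twice the traffic.

```go
package main

import (
	"fmt"
	"math/rand"
)

// RuntimeNode pairs a node address with a capacity weight; a node with
// weight 4 should receive roughly twice the traffic of a node with weight 2.
type RuntimeNode struct {
	Addr   string
	Weight int
}

// pickNode selects a node with probability proportional to its weight.
func pickNode(nodes []RuntimeNode) RuntimeNode {
	total := 0
	for _, n := range nodes {
		total += n.Weight
	}
	r := rand.Intn(total)
	for _, n := range nodes {
		if r < n.Weight {
			return n
		}
		r -= n.Weight
	}
	return nodes[len(nodes)-1] // unreachable when all weights are positive
}

func main() {
	nodes := []RuntimeNode{
		{Addr: "10.0.1.10", Weight: 4}, // larger instance (illustrative address)
		{Addr: "10.0.1.11", Weight: 2}, // smaller instance (illustrative address)
	}
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pickNode(nodes).Addr]++
	}
	fmt.Println(counts) // roughly a 2:1 split
}
```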
Once your Runtime Nodes are up and running, you can start creating Load Balancers for your applications. Each Load Balancer will have its own ID, CNAME, and analytics that you can view in a powerful dashboard. You can view your request counts, average latency, the number of Targets hosting your application, the health of your Targets, HTTP 200 requests, HTTP 500 requests, and load balancer 500 requests.
View metrics for each application in the dashboard.
Drill down to individual metrics for more detail. You can click and drag a portion of the graph for more granular information.
Filter by TargetGroup or Drill Down to a single Host!
MLB can load balance your application targets regardless of provider or geographical location. As you can see below, application servers are hosted in Azure, in AWS, and in an on-prem data center. Each grouping of application targets is referred to as a “TargetGroup” of your application. You can see your application metrics based on each TargetGroup and even drill down to individual servers!
Application metrics are filtered by the Azure TargetGroup above.
All the metrics available to you can easily be filtered all the way down to a single application server (this is not possible with ELB!). You can easily determine which application servers are behaving unexpectedly based on latency and 400/500 request counts.
Advanced Analytics
You can browse additional analytics by tab. Below, the Overview tab displays a graph of latencies by IP range, request distribution by endpoint, 500/400 errors by /path, and individual endpoints. With these advanced analytics, you can very quickly see which instances or application paths are having problems.
Launching a new application and wondering where your end users are located? The Geographic tab shows you exactly how many requests are being made by country.
Find out where your users are by country in this simple global map.
View Metrics by Path (This is a game changer!)
Traffic Source shows you the number of requests for each path within your application. Quickly find out what paths are most popular and find metrics like min/max/avg latency per path.
Quickly find your most popular Referrer URLs via Traffic Source.
You can quickly modify the polling period from minutes to days. This allows you to easily analyze your data within a time range to understand how your application is performing.
Conclusion
We’re very excited about our new MLB product and we think you’re going to love these new features.
Interested in seeing it in action? Sign up today!