Scaling to Zero to Reduce Your Cloud Spend

Sometimes you need to take a vacation, save money, and scale down to zero.

Franz Knupfer

Published:

Jul 02, 2024

4 minute read

We live in a world of hyperscalers, maximum growth, peak events, and charts that always move up and to the right. We’ve experienced that personally at Hydrolix with our company’s 10x year-over-year growth. And our customers are seeing it too, in the form of massive increases in log volume that drive up costs and create new challenges for use cases like observability.

Hydrolix is designed for scale, whether that’s ingesting tens of millions of rows of log data per second, delivering sub-second needle-in-a-haystack queries on trillion-row datasets, or compressing and storing petabytes of log data using object storage.

Where other platforms often struggle to handle major peak events, Hydrolix’s decoupled and stateless Kubernetes infrastructure can handle the load. Over the course of a few hours during the biggest American football game of the year, a major broadcaster ingested and analyzed 43 terabytes of data with less than ten seconds of ingest latency.

Hydrolix achieves this performance with Kubernetes infrastructure built for massive parallelism, elastic scaling, and workload isolation. Ahead of a big event, you can scale up ingest and query resources to safely meet anticipated load. And if traffic is greater than expected, you can easily scale infrastructure up further.

But what about when the big event is over and you no longer need all those resources? For many systems, provisioning for peak events is a one-way street. There is no easy way to scale down and prioritize cost-effectiveness. Not so with Hydrolix. The same combination of decoupled compute and Kubernetes makes it easy to reduce compute resources all the way to zero.

Both our customers and our engineers (who use the product internally) can use cron jobs and DevOps expertise to scale up and down based on work hours, weekends, or other times of expected periodic traffic. No more inefficient overprovisioning. Instead, operators can fine-tune cloud spend by scaling up and down based on daily traffic patterns. And when an unexpected traffic spike occurs, they can readily scale up.
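As a rough illustration of how that scheduled scaling can work, here is a minimal sketch using the official Kubernetes Python client, as it might be run from a CronJob or a crontab entry. The namespace, Deployment names, and replica counts below are hypothetical placeholders, not actual Hydrolix resource names; check your own cluster for the real ones.

```python
# Minimal sketch: scale a set of Deployments down for off-hours (hypothetical names).
from kubernetes import client, config

# Hypothetical off-hours replica targets; a matching "work hours" map would scale back up.
OFF_HOURS = {
    "example-ingest-pool": 2,   # keep a small ingest footprint running
    "example-query-pool": 0,    # scale query workers all the way to zero
}

def apply_scale(namespace: str, targets: dict) -> None:
    config.load_kube_config()   # use load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()
    for name, replicas in targets.items():
        apps.patch_namespaced_deployment_scale(
            name, namespace, {"spec": {"replicas": replicas}}
        )
        print(f"scaled {name} to {replicas} replicas")

if __name__ == "__main__":
    apply_scale("example-namespace", OFF_HOURS)
```

Run the reverse mapping from another scheduled job before traffic ramps back up, and you get the daily rhythm described above without anyone touching kubectl by hand.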

Peak Performance—or Scale to Zero—for Holidays

This week (in the U.S., anyway), everyone from business leaders to engineers can be forgiven for thinking more about barbecues, chili cook-offs, fireworks, and (hopefully) a long, relaxing weekend. The on-call engineers working through the holiday will be looking ahead to their own long weekend, too, once their shifts are over.

When you leave town for the weekend, you probably don’t leave all the lights in the house on. But there’s often that moment, after you’ve already hit the road, when someone in the car wonders whether they turned off the stove, and it’s too late to turn back and check. The same thing can happen with cloud spend. There’s nothing relaxing about waking up in a cold sweat because you might have left too many Kubernetes pods running, only to remember that you left your work computer at home for a reason and there’s no VPN nearby for a quick check and a few kubectl commands. Nor is there anything relaxing about coming back from vacation to find that your manager has scheduled a call to discuss a surprise cloud bill.

So here’s a little holiday reminder: don’t forget to scale down your Kubernetes resources if you won’t be using them. If you’re using a data platform that doesn’t allow you to scale each part of the system independently, it might be time to consider another platform like Hydrolix. When you can’t scale each part of the system separately, you typically end up overprovisioning and wasting resources.

If you’re using a SaaS solution, well… you don’t have to worry about scaling your services, but you’re still paying for all the technical debt and wasted resources you can’t see—you just don’t have any control over that waste. That typically leads to prohibitively high costs for data at terabyte scale.

Scale Any Part of Hydrolix to Zero

With Hydrolix, each part of the system is independently scalable (see the sketch after this list):

  • Ingest: Uses Kubernetes infrastructure and can scale to zero. In practice you usually won’t take ingest all the way down, but you can run fewer intake heads to reduce costs while still ingesting data in a timely manner.
  • Query: Uses Kubernetes infrastructure and can scale to zero. You can even create separate query pools, each with their own independently scalable resources.
  • Storage: S3-compatible object storage is horizontally scalable and decoupled from compute (both ingest and query).
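Because each component maps to its own Kubernetes workload, you can set each one’s size separately. The following minimal sketch, again using the Kubernetes Python client, parks one query pool at zero while keeping a second pool warm and leaving a small ingest footprint in place. All names and replica counts are hypothetical placeholders for illustration, not Hydrolix’s actual resource names.

```python
# Minimal sketch: set each component's size independently (hypothetical names).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
NAMESPACE = "example-namespace"   # hypothetical namespace

targets = {
    "example-ingest-heads": 2,      # keep ingesting, just with fewer intake heads
    "example-query-batch-pool": 0,  # park the batch query pool at zero
    "example-query-adhoc-pool": 1,  # keep one warm worker for ad hoc queries
}

for name, replicas in targets.items():
    current = apps.read_namespaced_deployment_scale(name, NAMESPACE).spec.replicas
    if current != replicas:
        apps.patch_namespaced_deployment_scale(
            name, NAMESPACE, {"spec": {"replicas": replicas}}
        )
```

Storage doesn’t need an entry here: object storage isn’t a Kubernetes workload, so it scales on its own and simply keeps your data (cheaply) while compute is off.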

Scale down to zero when you need to. Or simply fine-tune resource usage to find that perfect balance between cost and performance.

Sometimes you’ll need maximum performance, such as for peak events like holiday sales or live sporting events.

And sometimes it’s better to prioritize cost-effectiveness, like in the middle of the night when you’re showing reruns or typically don’t have much traffic. Or those sleepy summer weekends when everyone is out barbecuing, including your engineering and data science teams.

Next Steps

Give Hydrolix a try or get in touch with us to learn more.
