As we look towards 2025, the relationship between engineering and FinOps teams is increasingly pivotal. For FinOps to truly succeed, it must engage with all its stakeholders effectively—particularly engineers who play a central role in generating cloud costs. Unfortunately, FinOps professionals often encounter communication barriers with engineers, including differences in terminology, information, and expectations.
It’s essential to bridge this gap by addressing engineers’ key concerns about FinOps: the “elephants in the room” that often receive too little attention from FinOps teams. Acknowledging these concerns, and asking the right questions about them in engineering terminology, can foster trust and significantly improve the impact of FinOps efforts.
Balancing reduced cost and maximized value
There’s a common misconception among engineers that FinOps is solely about cutting cloud costs, a goal they see as conflicting with their own: maximizing code throughput and application performance. In reality, FinOps is about optimizing the usage and pricing of cloud services to achieve the best performance at the lowest cost, which aligns with engineers’ goal of maximizing productivity.
Questions to engage your engineers:
- What do you think FinOps’ role is?
- How do you determine the necessary resource capacity to maximize performance and throughput?
- Do you feel that resource or cost optimization activities help you improve the infrastructure’s performance and dev productivity, or do they undermine it?
The misconception of FinOps as merely dashboards
Many engineers perceive FinOps as solely providing visibility through dashboards, or as an external entity that imposes cost reduction tasks to meet quarterly targets. In reality, FinOps is a process encompassing visibility and cost optimization that should be integrated into all DevOps processes.
The endgame for FinOps, as described in the maturity model’s “run” phase, is to inject automation into cloud consumption from the get-go. Hence, DevOps teams play a crucial role in “shifting FinOps left” and should be equipped with tools to automate optimization tasks, thus reducing frustration and increasing impact.
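One concrete way to shift FinOps left is to add automated guardrails to the provisioning pipeline, so cost policy is enforced before resources are created rather than reported on afterwards. The sketch below is a minimal, hypothetical illustration in Python: the environment names, instance types, and allow-list structure are illustrative assumptions, not any specific tool’s API.

```python
# Hypothetical per-environment allow-list of instance types.
# In practice this would live alongside the IaC code and be
# checked in CI, before any machine is provisioned.
ALLOWED_INSTANCE_TYPES = {
    "dev": {"t3.small", "t3.medium"},
    "prod": {"m5.large", "m5.xlarge"},
}

def check_instance_request(env: str, instance_type: str) -> bool:
    """Return True if the requested instance type is allowed in env.

    A CI step can call this for every instance declared in an IaC
    plan and fail the build on the first violation, moving the cost
    conversation to before deployment instead of after the bill.
    """
    return instance_type in ALLOWED_INSTANCE_TYPES.get(env, set())
```

A guardrail like this turns a quarterly cost-reduction task into an always-on policy that engineers see in their normal workflow.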
Questions to engage your engineers:
- Can you describe your machine provisioning process? Is the logic manually configured or automated by infrastructure as code (IaC)? Where do the values and parameters come from?
- What manual resource or cost optimization tasks do you perform? How much time do you spend on them weekly? Are you aware of the cost savings that these actions achieved last month?
The challenge of scaling down without breaking stuff
A DevOps team’s primary mission is to give developers as much compute power as they need, within budget constraints. They therefore need an infrastructure autoscaling solution that scales up rapidly to keep pace with user demand, but that also scales down proactively and precisely, avoiding waste without causing downtime. In practice, these tools often fall short when scaling down: it is a delicate task, often requiring machine learning (ML) to right-size machines accurately, and getting it wrong can cause costly downtime. As cloud operations grow, the need for precise downscaling that avoids both waste and outages becomes ever more pressing.
Questions to engage your engineers:
- When upscaling is needed, how does the system decide what form it takes? (e.g. choosing between HPA and VPA, different instance types and sizes, or whether to use on-demand capacity, existing RIs/SPs, or spot instances)
- What scenarios trigger downscaling rules and policies today? Do they only send alerts, or do they also remediate automatically?
- What actions are taken to reduce idle or underutilized resources?
- What needs or requirements does your autoscaler fail to satisfy?
The disconnect in utilizing commitments
Optimizing cloud pricing involves two main strategies: purchasing discounted commitments (such as Reserved Instances, Savings Plans, or Committed Use Discounts) and using preemptible (spot) instances. However, the purchase of commitments is often driven by the FinOps team without collaboration with the DevOps team, leading to waste.
A coordinated approach with the DevOps team, in which workloads are proactively assigned to the more frugal option of spot instances or existing RIs/SPs, can significantly increase cost savings.
Questions to engage your engineers:
- What percentage of our workloads are fault-tolerant and suitable for spot instances? Does that include self-contained stateful workloads?
- Do you use spot instances in production? If not, why not?
- How do you decide between using spot instances and existing commitments? How frequently are these decisions reviewed?
Conclusion: Knowledge is power, communication is key
In 2025, more than ever, manual tasks and short-term fixes are not enough to optimize cloud cost and usage. Therefore, FinOps’ effective communication with DevOps engineers, in their language, is crucial for the future of many businesses.
Finding the right FinOps technology with DevOps needs in mind is just as important. Spot by NetApp offers an integrated portfolio of cloud optimization products that harmonize FinOps and DevOps efforts across cost and infrastructure operations.
Discover how to unify your cloud cost and infrastructure optimization for your business.
Ready to get started today? Schedule your personalized demo to move towards a more integrated and efficient future for your FinOps practice.