Optimisation

From optimisation candidate to approved decision

This walkthrough follows one important motion: how CloudKnife surfaces optimisation opportunities, ranks them for review, and carries evidence through to an approval or hold. It complements the platform story with concrete queue behaviour, what reviewers see in product, and what happens after someone says yes.

Queue · Ranked candidates
Review · Evidence in one surface
Action · Policy after approval
What you get
You get a ranked optimisation backlog where each row exposes impact, risk, and rationale, then opens into the full decision view when a reviewer is ready.
Reviewer workflow

Inside one optimisation decision

After you scan the list, this is the surface a reviewer uses: assumptions, resources, confidence, and status in one place before approve, reject, or hold.

13e55095…98409b7
Priority: High · Safety: Very safe · Confidence: 98%
Expected yearly impact
€2,444.92
Review recommendation
Evidence summary
Confidence backed by signals and headroom
Observed usage · Tail stability · Policy boundaries

CPU stays below the safety buffer, and tail behaviour remains stable in the decision window after review.

Affected resources
prod-web-vm-03 · prod-db-01

Ownership and policy boundaries stay visible for audit and review.

Rationale
  • Utilisation signals: Low steady-state usage with stable tail windows.
  • Policy alignment: Recommendation respects governance boundaries.
  • Risk framing: Safety headroom is shown before any decision.
Current vs recommended
  • Current: Standard_D4s_v3 · €3,156 / yr · 4 vCPU · 16 GB RAM
  • Recommended: Standard_B2as_v2 · €711 / yr · 2 vCPU · 8 GB RAM
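As a quick sanity check, the expected yearly impact above is roughly the price difference between the two SKUs. A minimal sketch, with figures taken from the card (the product's €2,444.92 likely reflects proration or usage weighting, so the simple delta differs slightly):

```python
# Rough savings check for the rightsizing card above.
current_cost = 3156.0      # Standard_D4s_v3, EUR / yr (from the card)
recommended_cost = 711.0   # Standard_B2as_v2, EUR / yr (from the card)

savings = current_cost - recommended_cost
print(f"Expected yearly impact: EUR {savings:,.2f}")  # EUR 2,445.00
```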
Review checklist
  • Evidence and assumptions
  • Impact and affected resources
  • Safety headroom and risk context
Review status · Awaiting owner · Ready for review

Optimisation-specific fields stay together so approvers see savings, safety, and ownership without switching tabs.

One motion: ranked candidates, one review surface, then action only where policy allows.

Opportunity catalogue

Optimisation types you will see in the queue

Six recurring candidate shapes. Each uses the same review row pattern; grouping only changes how you scan the catalogue.

Ranking weighs savings you can challenge, environment and tier, and how much review each item needs, so the top of the list matches how FinOps and engineering prioritise together.
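A hypothetical sketch of that ranking: each candidate carries savings, a risk factor, and a review-effort estimate, and the queue sorts by a combined score. Field names, weights, and the scoring formula are illustrative assumptions, not CloudKnife's actual model:

```python
# Illustrative ranking: savings help, risk and review effort push items down.
candidates = [
    {"id": "rightsize-vm-03", "savings": 2445, "risk": 0.1, "review_effort": 0.1},
    {"id": "idle-disk-7",     "savings": 120,  "risk": 0.0, "review_effort": 0.05},
    {"id": "reserve-db-01",   "savings": 5200, "risk": 0.5, "review_effort": 0.4},
]

def score(c):
    # Discount the raw savings by risk and by how heavy the review is.
    return c["savings"] * (1 - c["risk"]) * (1 - c["review_effort"])

queue = sorted(candidates, key=score, reverse=True)
print([c["id"] for c in queue])
# ['rightsize-vm-03', 'reserve-db-01', 'idle-disk-7']
```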

Capacity and runtime

What runs, when it runs, and how you pay for capacity.

  • Rightsizing

    Surfaces SKUs or tiers that overshoot observed load. Review checks headroom and production tier before any change.

  • Runtime alignment

    Surfaces off cycles or scale windows for steady patterns. Review pairs impact with change windows and blast radius.

  • Reservations and commitments

    Surfaces lock-in options with break-even against on-demand. Review brings finance and the owning service into one line item.
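For the reservations item, the break-even framing can be sketched as the upfront commitment divided by the monthly saving versus on-demand. All figures below are illustrative assumptions, not product pricing:

```python
# Illustrative break-even for a reservation vs staying on-demand.
upfront = 1500.0            # EUR, one-off commitment payment (assumed)
on_demand_monthly = 263.0   # EUR / month, pay-as-you-go (assumed)
reserved_monthly = 40.0     # EUR / month, residual cost under the reservation (assumed)

monthly_saving = on_demand_monthly - reserved_monthly
break_even_months = upfront / monthly_saving
print(f"Break-even after {break_even_months:.1f} months")  # 6.7 months
```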

Hygiene and fit

Cleanup, better-fitting services, and placement under your rules.

  • Idle and orphaned resources

    Surfaces spend with weak or missing ownership. Review confirms intent and routing before cleanup work.

  • Service and architecture fit

    Surfaces simpler or cheaper services when constraints allow. Review matches workload needs to the suggestion.

  • Region and placement

    Surfaces placement differences under governance and latency expectations. Review keeps policy visible next to savings.

Approval quality

Why an optimisation is safe to approve

Governance is not a separate screen. It sits on the candidate so reviewers see savings next to blast radius, policy, and owners.

Evidence in product
Evidence
Usage profile · Observed
prod-web-vm-03 · East US
Avg CPU (30d): 6.2%
P95 CPU: 11%
Tail window: 30 days
Assumptions
20% headroom buffer · Stable traffic pattern · EU region
Confidence: High
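The evidence card above implies a simple safety check: project the observed utilisation onto the smaller SKU and confirm it stays under the headroom buffer. The linear scaling model and the pass/fail threshold are assumptions; the observed figures come from the card:

```python
# Sketch of the headroom check implied by the evidence card.
# Assumption: halving vCPUs (4 -> 2) roughly doubles CPU utilisation.
p95_cpu = 0.11          # observed P95 on Standard_D4s_v3 (from the card)
scale = 4 / 2           # current vCPUs / recommended vCPUs
headroom_buffer = 0.20  # safety buffer stated in the assumptions

projected_p95 = p95_cpu * scale             # worst-case on the smaller SKU
safe = projected_p95 <= 1.0 - headroom_buffer
print(f"{projected_p95:.2f} {safe}")        # 0.22 True
```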

Rationale and expected impact, built for decision-making.

How it stays safe

Optimisation starts as observation and queued candidates. Nothing executes until your review rules allow it.

Each item states why this change is reasonable here, what evidence supports it, and what could go wrong if the assumptions fail.

Prod versus non-prod, critical tiers, windows, and blast radius stay visible in the same item as the optimisation, so approvers see policy and savings together.

Routing and accountability stay with the candidate so the right service team approves before work reaches production.

Approvers check evidence, assumptions, and optimisation impact before execution is even offered.
End to end

From optimisation candidate to governed action

Detect, rank, review, execute only where your rules allow.

1
Observe

Usage, configuration, and ownership context feed optimisation detection, starting from read-only access where that applies.

2
Prioritise

Candidates sort by impact, risk, and how much review they need, so the queue matches how FinOps and engineering actually prioritise work.

3
Review

Open an item to see rationale, resources, confidence, and policy context together before approve, reject, or hold.

4
Act when policy allows

Approved optimisations move to governed automation or tracked manual change, depending on rules you set, not a silent default.
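The four steps above can be sketched as a small state machine in which execution is only reachable through approval. State names and transitions are illustrative assumptions, not a product API:

```python
# Illustrative candidate lifecycle: observe -> prioritise -> review -> act.
# Execution is only reachable from an approved state.
ALLOWED = {
    "observed":    {"prioritised"},
    "prioritised": {"in_review"},
    "in_review":   {"approved", "rejected", "on_hold"},
    "approved":    {"executed"},      # only where policy allows
    "on_hold":     {"in_review"},     # holds can return to review
}

def advance(state, target):
    if target not in ALLOWED.get(state, set()):
        raise ValueError(f"{state} -> {target} not allowed")
    return target

state = "observed"
for step in ("prioritised", "in_review", "approved", "executed"):
    state = advance(state, step)
print(state)  # executed
```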

Next step
Bring this optimisation workflow into your cloud environment

Start read-only, run the queue with your teams, then widen approvals and automation where policy supports it.