At EquippedAI, now part of Belasko UK, we were brought into a platform where the Azure bill was around Rs40 lakh a month. The real problem was not the bill itself — it was the architecture behind it.

The product was fundamentally the same across clients, yet the delivery model had drifted into per-client infrastructure, per-client code branches, and duplicated service layers. The team had made those choices for understandable reasons. There were strict client data boundaries, complex client-specific requirements, and a belief that the product logic had become too customized to live in a cleaner shared codebase.

That combination made the bill expensive and the platform hard to evolve.

Over a 3-year engagement, CoEdify deployed 4 developers into that environment. We stabilized the product suite, integrated AI workflows into Minerva, and rebuilt the platform so it could run more efficiently. By the end, monthly infrastructure cost had come down to around Rs10 lakh while the product itself was more stable and easier to operate.

That 75% reduction did not come from negotiating Azure harder.

It came from correcting the architecture decisions that were inflating the bill.


Where the cost was really coming from

The highest cost was not one dramatic mistake. It was the accumulation of several expensive patterns:

  • too many SQL Server instances, including instances that were barely touched
  • separate App Service deployments for clients running essentially the same product
  • separate code branches and client-specific deployment paths for what should have been one product core
  • duplicated supporting services, including repeated Azure Functions patterns
  • high graph database spend for a workload that was not a strong graph fit

Each decision made sense in isolation.

Together, they created a system where the platform paid repeatedly for the same product shape.


The tenancy constraint was real

This is the part many case studies skip.

There was a genuine constraint in this system:

each client's database had to remain accessible only to that client.

So this was never a case of "just merge all tenants into one shared database and the bill goes away."

That would have been the wrong answer.

The real task was more disciplined:

  • preserve the client-level database boundary
  • reduce the duplicated application and service layers around it
  • stop treating client variation as a reason to fork the whole product

That distinction matters.

The savings came from simplifying the platform without violating the isolation model.


What we inherited

The original environment had drifted into a costly operating pattern.

1. SQL Server sprawl

There were many SQL Server instances, and some were so stale that they were touched only rarely, sometimes just once a month.

This is a classic enterprise cost trap.

Once database instances exist for every edge case, archive path, or historical client setup, they often remain provisioned long after their real usage pattern changes.

The bill keeps charging for them as active infrastructure even after the workload stops justifying it.

2. App Service per client

The product was being deployed through a separate App Service per client, because the application logic was considered too customized to live inside a cleaner shared code model.

That assumption is expensive.

Per-client App Service deployments multiply:

  • hosting cost
  • deployment complexity
  • config drift
  • operational debugging
  • release coordination effort

The platform was effectively paying a tax every time a new client-specific deployment path was preserved.
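For illustration, the consolidated shape can be sketched in a few lines. Everything here is hypothetical — the tenant names, the `TenantConfig` type, the hostname-based lookup — but it captures the idea: one deployment, per-client settings resolved at request time, and the database boundary preserved through per-client connection strings.

```python
# Sketch: one shared deployment resolving per-client settings at request
# time, instead of one App Service per client. All names (TenantConfig,
# TENANTS, resolve_tenant) are illustrative, not from the original system.
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantConfig:
    name: str
    db_connection: str   # each client keeps its own isolated database
    features: frozenset  # controlled per-client variation, not a code fork

# One registry replaces N separate deployments; the data boundary survives
# because the connection string still points at a client-only database.
TENANTS = {
    "acme.example.com": TenantConfig("acme", "Server=sql-acme", frozenset({"exports"})),
    "globex.example.com": TenantConfig("globex", "Server=sql-globex", frozenset()),
}

def resolve_tenant(host: str) -> TenantConfig:
    """Map the request hostname to its tenant; fail closed on unknown hosts."""
    try:
        return TENANTS[host.lower()]
    except KeyError:
        raise PermissionError(f"unknown tenant host: {host}")
```

Unknown hosts fail closed, which matters when the isolation model is a compliance requirement rather than a convenience.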

3. Codebase and branch fragmentation

The product itself was largely the same, but client-specific features had pushed the system toward separate code branches and separate deployment paths.

That created a second cost problem beyond infrastructure.

Engineering effort was being spent to maintain divergence that should have been handled through a stronger product core, cleaner configuration boundaries, and controlled feature variation.

Infrastructure cost and software maintenance cost were reinforcing each other.
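To make "controlled feature variation" concrete, here is a minimal sketch of flags checked inside one product core. The flag names and the report example are invented for illustration; the point is that a client difference becomes a configuration entry, not a branch.

```python
# Sketch: client-specific behavior expressed as configuration checked in one
# product core, rather than as a long-lived branch per client. Flag names
# and the report-building example are hypothetical.

CLIENT_FLAGS = {
    "acme": {"detailed_audit": True, "legacy_export": False},
    "globex": {"detailed_audit": False, "legacy_export": True},
}

def build_report(client: str, rows: list) -> dict:
    """One code path for all clients; variation lives in flags, not branches."""
    flags = CLIENT_FLAGS.get(client, {})
    report = {"rows": len(rows)}
    if flags.get("detailed_audit"):
        report["audit"] = [r.get("id") for r in rows]
    if flags.get("legacy_export"):
        report["format"] = "csv-v1"  # kept for one client, isolated to a flag
    return report
```

Every flag is visible in one place, so the cost of a client-specific behavior is a line of configuration rather than a divergent deployment path.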

4. Service duplication

Supporting services were being replicated alongside the main application shape. Azure Functions patterns were repeated across deployments instead of being rationalized into shared platform responsibilities where possible.

This kind of duplication does not always look dramatic on day one.

Over time, it compounds into a platform that is expensive to run and harder to reason about.

5. Graph database overuse

One of the most expensive parts of the environment was the graph database.

The issue was not that graph databases are bad.

The issue was workload fit.

Graph traversal was becoming expensive, and to keep response time acceptable, capacity kept getting pushed upward. In effect, the infrastructure was paying for the mismatch between the chosen data model and the actual product behavior.

That is not a pricing problem. It is a design problem.


What actually changed

We did not start by hunting discounts.

We started by understanding which parts of the platform needed true isolation and which parts were duplicated by habit.

1. Preserve database isolation, simplify everything around it

The client database boundary stayed intact.

That was a real requirement, and we treated it as one.

But the application and service layers around those databases did not need to stay equally fragmented.

This was the critical shift:

keep the boundary where compliance and trust require it, but stop duplicating the entire platform around that boundary.

2. Reduce per-client deployment sprawl

We moved away from the idea that every client variation justified a separate application shape.

That meant reducing unnecessary App Service duplication, rationalizing deployment paths, and pulling logic back toward a more maintainable product core.

The goal was not to erase client-specific behavior.

The goal was to stop expressing every client-specific behavior as infrastructure.

That alone changes the cost curve significantly.

3. Cut stale and low-value SQL footprint

Once the environment was visible, it became much easier to identify SQL Server resources that were provisioned like primary systems but behaving like occasional workloads.

Those are expensive mistakes because they hide in plain sight. The instance exists, so teams normalize it. The business keeps paying for readiness it barely uses.

Reducing that footprint was one of the most direct cost wins.
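The triage itself is simple once usage data is visible. Here is a sketch of the kind of logic involved, with invented names and costs — in practice the last-access signal would come from monitoring data collected over time, not a hardcoded list.

```python
# Sketch of triage logic for spotting databases provisioned like primary
# systems but used like archives. Names, costs, and dates are illustrative.
from datetime import date, timedelta

databases = [
    {"name": "client-a-core", "monthly_cost": 120_000, "last_query": date(2024, 6, 1)},
    {"name": "client-b-archive-2019", "monthly_cost": 60_000, "last_query": date(2024, 1, 3)},
]

def flag_stale(dbs, today, idle_days=30):
    """Flag anything not queried within the idle window, costliest first."""
    cutoff = today - timedelta(days=idle_days)
    stale = [d for d in dbs if d["last_query"] < cutoff]
    return sorted(stale, key=lambda d: d["monthly_cost"], reverse=True)

for db in flag_stale(databases, today=date(2024, 6, 10)):
    print(f"{db['name']}: Rs{db['monthly_cost']}/month, last query {db['last_query']}")
```

The hard part is rarely the query; it is deciding to collect the usage signal in the first place and acting on what it shows.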

4. Reduce dependence on the graph database

The graph database was not removed simply because it was expensive.

It was reduced because it was not the right center of gravity for the workload.

We partitioned the problem better and reduced reliance on graph traversal where the product did not need graph-heavy behavior. Once the system depended less on expensive traversals, the pressure to keep scaling graph capacity to protect response times eased as well.
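As a sketch of the shape of that change: where relationships are shallow and stable, a precomputed reachability index can answer the same questions without live traversal. The edges and names below are invented, and the real system's model was different, but the trade is the same — pay once to build the index, not on every query.

```python
# Sketch: replacing repeated live graph traversals with a precomputed index
# for a workload that only needs stable relationship lookups. Edge data and
# function names are illustrative.
from collections import defaultdict, deque

EDGES = [("fund_a", "entity_1"), ("entity_1", "account_x"), ("fund_b", "entity_1")]

def build_reachability(edges):
    """One upfront BFS per node; later lookups are O(1) set membership."""
    adj = defaultdict(set)
    for src, dst in edges:
        adj[src].add(dst)
    nodes = set(adj) | {d for _, d in edges}
    reach = {}
    for node in nodes:
        seen, queue = set(), deque([node])
        while queue:
            for nxt in adj[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        reach[node] = seen
    return reach

REACH = build_reachability(EDGES)

def is_linked(src: str, dst: str) -> bool:
    return dst in REACH.get(src, set())
```

The index is rebuilt when the underlying data changes, which is a good fit only when writes are rare relative to reads — exactly the kind of workload-fit question that should drive the storage choice.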

This is an important lesson for infrastructure leaders:

bad workload fit is one of the fastest ways to turn a database decision into a cost problem.

5. Standardize repeated services

Repeated function-level and service-level patterns were simplified where possible so the platform stopped replicating the same operational responsibilities across multiple client-shaped deployments.

This improved more than the invoice.

It improved release confidence, debugging, and day-to-day maintainability.
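The pattern, in miniature: instead of a near-identical function copied per client, one parameterized handler keyed by client and job. The job names and steps here are hypothetical.

```python
# Sketch: collapsing N near-identical per-client functions into one
# parameterized handler. Job names and steps are invented for illustration.

def run_nightly_job(client: str, job: str) -> str:
    """One handler for all clients; each step would dispatch into the
    shared product core with the client's own configuration."""
    steps = {
        "reconcile": ["load", "match", "report"],
        "export": ["load", "serialize", "upload"],
    }
    if job not in steps:
        raise ValueError(f"unknown job: {job}")
    return f"{client}:" + "->".join(steps[job])
```

One handler means one place to fix a bug, one release to coordinate, and one operational surface to monitor — per job, not per client.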


What actually produced the 75% reduction

The reduction from roughly Rs40 lakh to Rs10 lakh per month came from a specific architectural shift:

  • fewer stale SQL Server resources
  • less duplicated App Service footprint
  • fewer client-shaped infrastructure replicas
  • less duplicated supporting-service logic
  • reduced reliance on an expensive graph pattern that was not earning its keep

This is why the result is interesting.

It was not a story about one clever optimization.

It was a story about removing structural waste from a system that had started expressing product complexity as infrastructure complexity.


The practical lesson for CTOs

There are three mistakes technical teams often make in systems like this.

Mistake 1: treating client isolation as a reason to duplicate the whole platform

Sometimes isolation is real and non-negotiable.

That does not mean every layer above the data boundary must also fragment.

The discipline is knowing where isolation must remain strict and where standardization should return.

Mistake 2: treating customization as a reason to fork the product

When each client branch becomes its own application path, cost does not only increase in Azure.

It increases in:

  • release effort
  • test surface area
  • debugging time
  • deployment risk
  • architectural drift

The invoice is only the most visible symptom.

Mistake 3: scaling the wrong storage model

If response time depends on raising graph capacity for a workload that does not truly justify graph-heavy modeling, the database is no longer just a technical choice. It becomes an ongoing economic drag on the platform.

This is why storage-model fit matters so much.


What this changed for us

This engagement reinforced a principle we still use at CoEdify:

cloud cost is often the result of product architecture decisions, not just infrastructure settings.

When teams encode customization, isolation, and workflow complexity in the wrong layer, the bill rises as a side effect.

When those boundaries are redrawn more carefully, cost drops and the system gets easier to operate at the same time.

That was true in this Azure environment.

It is the same lens we now apply when we look at AI platforms, agent systems, and enterprise software modernization work.


The real takeaway

We reduced monthly infrastructure cost from around Rs40 lakh to around Rs10 lakh at EquippedAI, now part of Belasko UK, while helping stabilize the platform and integrate AI workflows into Minerva.

The meaningful lesson was not "cut your cloud bill."

It was this:

if your platform is paying for client-specific infrastructure, duplicated services, stale databases, and the wrong data model, the biggest cost win is usually architectural simplification, not billing optimization.

That is what actually worked here.


At CoEdify, we help teams reduce infrastructure waste by fixing the architectural decisions behind the bill. The goal is not a cheaper system on paper. It is a cleaner system that costs less because it is designed better. [coedify.com]