Every Monday morning, in some conference room in Brazil, a CEO looks at two reports and asks the same question: "Why are these numbers different?" The sales team shows 12% growth. The finance team presents 9%. The marketing department swears the right number is 11.3%. And the next forty minutes of the meeting are wasted not discussing strategy, but trying to figure out which spreadsheet is correct.

This scenario is more common than it seems. In companies that already serve thousands of customers, have dozens of systems and produce terabytes of data per month, metric inconsistency has become a silent epidemic. It doesn't show up on the balance sheet, it has no budget line, but it systematically erodes leadership's decision-making capacity. Over 20 years working with companies like BTG, B3, Bradesco and XP, I've seen this problem destroy strategic meetings, delay critical decisions and, in some cases, lead companies to commit to strategic moves based on data that was simply wrong.

The solution has a name: single source of truth. But getting there requires more than technology. It requires governance, culture and an architecture designed to serve the business, not the IT department.

Why data doesn't match — and why this is an architecture problem, not a people problem

The first instinct of many leaders when numbers diverge is to blame someone. The analyst who "pulled it wrong," the system that "updated outside business hours," the team that "used a different metric." But the truth is that data inconsistency is rarely an isolated human failure. It is a symptom of an architecture that grew without planning.

Imagine a mid-sized company with an ERP for finance, a CRM for sales, an e-commerce platform, a logistics system and three or four BI tools scattered across departments. Each system has its own definition of "active customer," its own criteria for "realized revenue" and its own update cycle. When someone consolidates this data into a spreadsheet — often manually — the discrepancies are a mathematical certainty. There is no way the numbers can match, because they were never designed to talk to each other.

According to Gartner research, organizations lose an average of $12.9 million per year due to poor quality data. In Brazil, where tax complexity adds extra layers of accounting reconciliation, this cost tends to be proportionally higher. Data unification is not an IT project. It is a strategic decision with measurable financial returns.

What it actually means to have a single source of truth

The concept of a single source of truth — or SSOT — is simple in theory: a centralized layer where all relevant business metrics are calculated in the same way, with the same criteria, updated at the same frequency and accessible to all decision-makers.

In practice, this doesn't mean throwing out all existing systems and starting from scratch. It means creating a data layer that feeds from these diverse sources, standardizes definitions and serves as the authoritative reference for the business. When the sales team talks about "monthly recurring revenue," they are using the same definition that the finance team uses. When the CEO asks for the number of active customers, they receive the same value that the CTO would see when consulting the product dashboard.

This seems obvious, but it implies resolving issues that go far beyond technology:

  • Who defines what an "active customer" is? Is it someone who purchased in the last 30 days? The last 90? Someone with an active contract even without a recent transaction?
  • When is revenue recognized? At the time of the order, delivery, payment or financial settlement?
  • What defines churn? A formal cancellation, inactivity for a given period, dropping below a minimum ticket size?

These questions need to be answered by the business, not by IT. Technology executes what the business units decide. Without this conceptual clarity, any data project will reproduce the same babel of metrics on a more expensive infrastructure.
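
To make one of these answers concrete, here is a minimal sketch of what codifying a definition can look like. The 90-day window, the field names and the rule itself are illustrative assumptions, not definitions from any real company:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical rule, encoded once: "active customer" means an active contract
# OR a purchase in the last 90 days. The window is an assumption the business
# would have to ratify, not a definition taken from any real company.
ACTIVE_WINDOW_DAYS = 90

@dataclass
class Customer:
    last_purchase: Optional[date]   # None if the customer never purchased
    has_active_contract: bool

def is_active_customer(customer: Customer, as_of: date) -> bool:
    """The single, authoritative definition consumed by every report."""
    if customer.has_active_contract:
        return True
    if customer.last_purchase is None:
        return False
    return (as_of - customer.last_purchase) <= timedelta(days=ACTIVE_WINDOW_DAYS)
```

The language and tool matter far less than the fact that the rule lives in exactly one place, under version control, where sales, finance and product cannot silently drift apart.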

Data governance: the foundation nobody wants to build

The hardest part of the journey toward a single source of truth is not technical. It is data governance. And that is precisely why most initiatives fail before delivering value.

Data governance is the set of policies, processes, responsibilities and standards that determine how data is created, stored, accessed and used within the organization. Without it, you can have the best data warehouse on the market and still have five teams using different definitions of "gross margin."

In practice, a minimum governance structure for companies that take data seriously needs to include:

  • Data owners per domain: each area has a formal owner responsible for the definitions and quality of the data it produces
  • Centralized data dictionary: a living catalog with the official definition of each metric relevant to the business
  • Controlled change process: any change to a critical metric goes through formal approval and structured communication
  • Quality monitoring: automatic alerts when data arrives outside the expected pattern, later than the agreed latency or with statistically anomalous values (a minimal sketch of such a check follows this list)
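
To ground that last item, here is a minimal sketch of such a check, assuming a pandas DataFrame freshly loaded by a pipeline. The column names, thresholds and the three-sigma rule are illustrative assumptions:

```python
import pandas as pd

# Illustrative thresholds; in practice they come from the SLA agreed with
# the data owner of the domain.
MAX_STALENESS = pd.Timedelta(hours=6)
MAX_NULL_RATE = 0.01

def check_revenue_feed(df: pd.DataFrame) -> list[str]:
    """Return alert messages; an empty list means the feed looks healthy."""
    alerts = []

    # Latency: did the data arrive within the agreed window?
    staleness = pd.Timestamp.now() - df["loaded_at"].max()
    if staleness > MAX_STALENESS:
        alerts.append(f"Feed is stale by {staleness}")

    # Completeness: are critical fields populated?
    null_rate = df["net_amount"].isna().mean()
    if null_rate > MAX_NULL_RATE:
        alerts.append(f"net_amount null rate of {null_rate:.1%} exceeds threshold")

    # Anomaly: naive three-sigma check on daily totals.
    daily = df.groupby(df["payment_date"].dt.date)["net_amount"].sum()
    latest, mean, std = daily.iloc[-1], daily.mean(), daily.std()
    if std > 0 and abs(latest - mean) > 3 * std:
        alerts.append(f"Latest daily total {latest:,.0f} is a 3-sigma outlier")

    return alerts
```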

At a Brazilian fintech I worked with, the absence of data ownership was the main culprit. Each product squad defined its own engagement metrics, with no coordination with the data team. The result was a product dashboard with twelve different "retention" charts — each using a different time window. The solution didn't come from a new tool. It came from creating a metrics committee with representatives from product, technology and business, which in three months produced a glossary with 47 official definitions and eliminated more than 60 redundant metrics.

Modern architecture for data unification: from data warehouse to data mesh

From a technical standpoint, the market has evolved considerably over the last two decades. Approaches to building a data unification architecture today range from traditional centralized solutions to modern distributed models.

The classic centralized data warehouse model — a single repository where all company data is consolidated and transformed — still works well for small and mid-sized companies or for businesses with relatively homogeneous data domains. Tools like Amazon Redshift, Google BigQuery and Snowflake have made this approach accessible and highly scalable.
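
As an illustration of how small this can be with modern tooling, here is a sketch using the BigQuery Python client. The dataset, tables and recognition rule are invented for the example:

```python
# Sketch: publishing one authoritative view in BigQuery so that every
# dashboard reads the same definition of revenue. Dataset, table and
# column names are invented for the example.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
    CREATE OR REPLACE VIEW analytics.official_monthly_revenue AS
    SELECT
      DATE_TRUNC(payment_date, MONTH) AS month,
      SUM(net_amount) AS recognized_revenue  -- recognized at settlement
    FROM finance.settled_payments
    GROUP BY month
""").result()
```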

For larger companies, with multiple business units, distributed data teams and quite distinct domains — such as a financial conglomerate with a bank, insurer and brokerage — total centralization can create a dangerous bottleneck. It is in this context that the concept of data mesh gained relevance.

The data mesh proposes an organized decentralization: each business domain is responsible for its own data as if it were a product — with guaranteed quality, documentation, defined SLA and a standardized interface for consumption by other areas. The single source of truth, in this model, is not a central server, but a layer of contracts and standards that ensures interoperability between autonomous domains.
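
What "contract" means here becomes clearer in code. A minimal sketch, assuming each domain publishes a schema its consumers can validate against; the fields and SLA are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative contract for a domain-owned data product. In a real data mesh
# it would live in a shared registry, with the owning domain accountable for
# honoring it; field names and the SLA are assumptions.
@dataclass(frozen=True)
class CustomerEventsContract:
    owner: str = "crm-domain"
    freshness_sla_hours: int = 4
    schema: tuple = (
        ("customer_id", str),
        ("event_type", str),
        ("occurred_at", datetime),
    )

def validate(record: dict, contract: CustomerEventsContract) -> None:
    """Consumers enforce the contract at the boundary between domains."""
    for field, expected_type in contract.schema:
        if field not in record:
            raise ValueError(f"Field required by contract is missing: {field}")
        if not isinstance(record[field], expected_type):
            raise TypeError(f"{field} must be {expected_type.__name__}")
```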

The choice between a centralized data warehouse and a data mesh should not be ideological. It should be pragmatic: which model solves your real problem, with your current team, within your available time horizon?

In most Brazilian companies I work with, the problem is not the absence of sophisticated technology. It is the absence of fundamentals: reliable pipelines, documented transformations and a data model that reflects how the business actually works. Before talking about data mesh or lakehouse, it is necessary to ensure that basic data arrives clean, on time and with clear meaning.

How to put this into practice: a three-phase roadmap

Data projects that try to solve everything at once generally solve nothing. The most effective approach I've seen work at companies like B3 and Livelo was a structured journey in phases, with value delivered at each stage.

Phase 1 — Diagnosis and prioritization (4 to 8 weeks): mapping existing data sources, identifying the most critical metrics for executive decisions, surveying the most frequent and costly inconsistencies, defining data owners by domain. The goal of this phase is not to solve anything yet. It is to understand exactly what is broken and how much it costs.

Phase 2 — Building the truth layer for priority metrics (3 to 6 months): instead of trying to unify 100% of company data at once, the recommendation is to focus on the 10 to 15 metrics that appear in every board meeting. Build the pipelines, transformations and dictionary for these metrics first. Deliver an executive dashboard that all leadership recognizes as the official reference. That moment of cultural adoption is just as important as the technical delivery.
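
One way to make the "official reference" tangible is a small metrics registry that every dashboard consults, in the spirit of the data dictionary described above. A sketch with invented names and definitions:

```python
# Sketch: a registry for the board-level metrics, so every dashboard resolves
# a metric through one layer instead of writing its own query. Metric names,
# sources and definitions are illustrative.
PRIORITY_METRICS = {
    "monthly_recurring_revenue": {
        "owner": "finance",
        "source": "analytics.official_monthly_revenue",
        "definition": "Sum of active subscription fees, recognized at settlement",
    },
    "active_customers": {
        "owner": "sales",
        "source": "analytics.official_active_customers",
        "definition": "Active contract OR purchase in the last 90 days",
    },
}

def metric_source(name: str) -> str:
    """Dashboards ask the registry where the official number lives."""
    if name not in PRIORITY_METRICS:
        raise KeyError(f"'{name}' is not an official metric; "
                       "propose it through the governance process")
    return PRIORITY_METRICS[name]["source"]
```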

Phase 3 — Expansion and maturity (ongoing): with the foundation working and leadership's trust established, expanding to other domains and metrics becomes simpler. At this stage, quality monitoring mechanisms, the corporate data catalog and formal governance processes are also implemented.

A metric I use to evaluate the success of this type of initiative is simple: how long would it take the company to answer, with confidence, an unplanned strategic question? If today the answer is "we need two days to consolidate the data," the goal is to reach "we have the number now, with context." Companies that get there make faster decisions, with fewer alignment meetings and far less rework.

The role of executive leadership: why this project cannot belong to IT alone

As much as data unification is enabled by technology, it fails when treated as an IT project. The reason is simple: the most important decisions in this process are organizational, not technical.

Defining that finance has authority over the official definition of revenue — and that the product team needs to adapt its reports to that definition — is a corporate governance decision. Approving a budget for a dedicated data engineering team is a decision for the CFO and CEO. Creating the habit of consulting the centralized dashboard instead of asking the analyst to "pull a quick number" is a cultural change that only happens if leadership sets the example.

The data projects that generated the most impact at the companies I've followed had one thing in common: a senior executive — usually the CTO or CDO — acted as an active sponsor, not just a budget approver. They participated in metric definitions, demanded consistency in meetings and vetoed initiatives that created parallel data silos.

Real data-driven decisions begin before the dashboard. They begin when leadership decides that reliable data is a strategic priority — and acts accordingly.

Data that matches creates companies that move forward

When a company finally builds its single source of truth, something interesting happens in executive meetings: the time spent discussing "which number is right" drops to near zero. And that time goes where it should have gone from the start — to discussing what to do with what the data shows.

In one implementation I followed at a financial sector company, the average duration of a results meeting dropped from ninety to forty minutes after unifying the main metrics. Not because people started speaking faster, but because they eliminated the manual reconciliation step that consumed half of every meeting. Multiplied by fifty weeks and ten executive meetings per week, the result is hundreds of hours of senior leadership redirected toward strategic decision-making.

That is the real return on a well-executed data project. It is not as glamorous as artificial intelligence or cutting-edge architectures. But it is the foundation without which none of these technologies will truly work in your company.

If your company's data doesn't match, the problem won't resolve itself over time — it will grow as systems multiply and teams expand. If you want to understand how to build this data foundation in your specific reality, get in touch. This is exactly the type of challenge I solve with executives who have already realized that metric inconsistency is costing more than it appears.