Every week an executive asks me some variation of the same question: "Daniel, I want to use generative AI, but I'm afraid of leaking my customers' data." It's a legitimate concern. And, at the same time, it's the kind of fear that, if not properly managed, will cause your company to miss the most important competitive window of the last decade.
The good news is that using generative AI with compliance is not a contradiction. It's architecture. It's governance. It's a conscious choice of where and how you implement each tool. Over the past few years, working with institutions such as BTG, B3 and Bradesco, I've learned that the companies that get this right are not the ones that avoid AI out of fear — they are the ones that build the right adoption model before hitting the scale button.
This article is a straightforward guide for decision-makers who need to understand the real risks, separate the noise from what matters, and build an enterprise AI governance strategy that neither paralyzes the business nor exposes sensitive data.
The Real Risk Is Not AI. It's the Lack of Governance.
First and foremost, we need to be precise about the problem. Generative AI itself is not the risk vector. The risk lies in how companies adopt it — typically without policies, without controls, and without visibility into what is being sent to which model.
A 2023 incident at Samsung became a classic example: engineers pasted proprietary source code into ChatGPT for debugging. The result was the inadvertent leakage of critical intellectual property. It was not a sophisticated attack. It was institutional carelessness: the absence of a clear policy on what can or cannot be entered into external AI tools.
In Brazil, the situation is even more delicate because of how the LGPD applies to artificial intelligence. The General Data Protection Law imposes clear responsibilities on the processing of personal data, and sending customer information to third-party AI models without a proper contract, without a DPA (Data Processing Agreement), and without a legal basis may constitute a violation. The ANPD has already signaled that it is monitoring AI practices, and the regulatory framework is evolving rapidly.
The right question is not "should I use generative AI or not?" The right question is: "Where does my data go when I use these tools, and who is responsible for what happens to it?"
Map Before You Implement: The Sensitive Data Inventory
No data protection strategy for AI works without a basic prior step: knowing what data you have and how it flows through your organization. It sounds obvious. It is not what most companies do.
In practice, what I recommend to my clients is a three-layer mapping exercise:
- Regulated data: personal customer information (taxpayer ID, address, financial data, health data), which falls directly within the scope of the LGPD and sector-specific regulations such as those from the Central Bank and SUSEP.
- Business-sensitive data: source code, commercial strategies, pricing data, information about mergers and acquisitions, sales pipelines.
- Low-risk operational data: templates, public documents, generic knowledge bases, already published information.
Only after this mapping can you define which generative AI use cases can use data from the third layer without restriction, which require anonymization or pseudonymization before any processing, and which simply should not leave your controlled environment.
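To make the routing concrete, here is a minimal sketch in Python of how those three outcomes can be enforced in code. The tier labels, field names, and the pseudonymization helper are hypothetical illustrations, not a prescribed schema; the one deliberate design choice worth copying is the default, where anything the inventory has not classified fails closed.

```python
import hashlib
from enum import Enum

class DataTier(Enum):
    REGULATED = "regulated"            # LGPD-scoped personal data
    BUSINESS_SENSITIVE = "sensitive"   # source code, pricing, M&A
    LOW_RISK = "low_risk"              # templates, public documents

# Hypothetical field-to-tier inventory produced by the mapping exercise.
FIELD_TIERS = {
    "cpf": DataTier.REGULATED,
    "customer_address": DataTier.REGULATED,
    "pricing_table": DataTier.BUSINESS_SENSITIVE,
    "faq_template": DataTier.LOW_RISK,
}

def pseudonymize(value: str) -> str:
    """Replace a value with a stable, non-identifying token."""
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:12]

def route_to_ai(field: str, value: str) -> str:
    """Decide how a field may reach an external generative AI tool."""
    # Unknown fields default to the most restrictive tier: fail closed.
    tier = FIELD_TIERS.get(field, DataTier.REGULATED)
    if tier is DataTier.LOW_RISK:
        return value                # may be sent to approved tools as-is
    if tier is DataTier.BUSINESS_SENSITIVE:
        return pseudonymize(value)  # mask before any external processing
    raise PermissionError(f"'{field}' must not leave the controlled environment")
```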
The mapping itself can be completed in two to three days with the right team, and it prevents 80% of the compliance problems I see at companies that adopt AI in a disorganized manner.
Architecture Matters: External Models vs. Models in Your Environment
One of the most important decisions in adopting generative AI in enterprises is where the model runs. This choice has direct implications for security, compliance, and cost.
There are basically three architectures:
- Public SaaS (ChatGPT, Gemini, Claude via public API): data sent to third-party servers, outside your direct control. Suitable only for low-risk data, with clear acceptable use policies.
- Enterprise APIs with contractual agreements: Azure OpenAI, Amazon Bedrock, Google Vertex AI. In these cases, the provider signs data processing contracts, the models do not use your information for training by default, and you have contractual guarantees. This is the model I recommend for most corporate use cases involving sensitive data.
- Models hosted in your own environment (on-premise or private VPC): maximum control, maximum responsibility. Suitable for critical use cases in highly regulated sectors, such as healthcare and finance. The operational cost is higher, but data sovereignty is total.
In my work with financial-sector clients, one architecture that has proven effective is Amazon Bedrock behind a private VPC endpoint: traffic never traverses the public internet, the model is consumed via API with encryption in transit and at rest, and every interaction is logged for auditing. This meets both the Central Bank's requirements and internal information security standards.
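For teams that want to picture the invocation and audit-logging side, a minimal sketch follows, assuming Python with boto3 and the Anthropic Claude message format on Bedrock. The VPC endpoint URL, region, model ID, and logger name are illustrative placeholders, not values from any real deployment.

```python
import json
import logging

import boto3

# Hypothetical VPC interface endpoint (AWS PrivateLink), so requests to
# Bedrock travel over the private network rather than the public internet.
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="sa-east-1",
    endpoint_url="https://vpce-0abc123-example.bedrock-runtime.sa-east-1.vpce.amazonaws.com",
)

audit_log = logging.getLogger("ai_audit")

def summarize(document: str, user_id: str) -> str:
    """Call the model and keep an audit trail of the interaction."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": f"Summarize:\n{document}"}],
    })
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model
        body=body,
    )
    result = json.loads(response["body"].read())
    # Record who sent what to which model, and when, for later audits.
    audit_log.info("user=%s model=claude-3-haiku chars_in=%d",
                   user_id, len(document))
    return result["content"][0]["text"]
```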
The message is simple: the right architecture depends on data sensitivity. There is no single answer — there is the right answer for each use case.
LGPD and Generative AI: What Your Company Needs to Ensure
The intersection between LGPD and artificial intelligence is still being built from a regulatory standpoint, but some principles are already clear and applicable today.
The first is the legal basis for processing. If your generative AI processes personal data from customers, you need a valid legal basis — consent, legitimate interest, contract performance. "The AI needed the data" is not a legal basis.
The second is purpose limitation and data minimization. The AI should only process the data necessary for the stated purpose. If you are using AI to summarize contracts, it does not need the customer's banking data. Design your prompts and data pipelines with this principle in mind.
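One pragmatic way to apply minimization in a pipeline is to whitelist exactly the fields each use case needs before any prompt is assembled. The sketch below illustrates the idea for the contract-summarization example; the field names are hypothetical.

```python
# Fields the contract-summarization use case actually needs. Anything not
# listed here (banking data, CPF, address) never reaches the prompt.
ALLOWED_FIELDS = {"contract_text", "contract_date", "contract_type"}

def build_prompt(record: dict) -> str:
    """Assemble a prompt from a customer record, applying data minimization."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    dropped = sorted(set(record) - ALLOWED_FIELDS)
    if dropped:
        print(f"minimization: withheld fields {dropped}")  # auditable trace
    return (
        "Summarize the following contract:\n"
        f"Type: {minimized.get('contract_type', 'n/a')}\n"
        f"Date: {minimized.get('contract_date', 'n/a')}\n"
        f"{minimized.get('contract_text', '')}"
    )
```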
The third is transparency and explainability. Decisions that affect customers — credit approval, pricing, differentiated service — need to be explainable. The use of generative AI in these decisions must be documented and auditable. Customers have the right to know that AI was used and to contest automated decisions.
The fourth, and often overlooked, is the contract with sub-processors. Any company that processes personal data on your behalf — including AI providers — must have an adequate contract (DPA) that establishes responsibilities, security measures, and procedures in case of an incident. Before integrating any AI API into your production environment, verify that this contract exists and is up to date.
The LGPD does not prohibit the use of AI. It requires that its use be responsible, documented, and auditable. Companies that build this framework today are creating competitive advantage, not merely avoiding fines.
Governance in Practice: What to Implement Now
Enterprise AI governance does not need to be bureaucratic to be effective. What works in practice is a minimal set of controls that can be implemented in weeks, not months.
The first control is an acceptable AI use policy. A clear, one-to-two-page document that states which tools are approved, for which purposes, and which categories of data should never be entered into external tools. Simple, direct, communicated to the entire organization.
The second is a new use case evaluation process. Before any team launches an AI application into production, a checklist of five to ten questions: What data is processed? Where does the model run? Is there a contract with the provider? How are interactions logged? Who approves? This does not need to take more than 48 hours for straightforward cases.
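One lightweight way to operationalize that checklist is as a structured intake form that a review tool or workflow can consume. The sketch below is an illustration of the idea, not a prescribed schema; the fields mirror the questions above.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseReview:
    """Minimal intake form for a new generative AI use case."""
    name: str
    data_categories: list = field(default_factory=list)  # from the inventory
    model_location: str = ""        # public SaaS, enterprise API, self-hosted
    dpa_in_place: bool = False      # is there a contract with the provider?
    logging_enabled: bool = False   # are interactions logged for audit?
    approver: str = ""              # who signs off

    def is_approvable(self) -> bool:
        """Fail closed: regulated data demands a DPA plus audit logging."""
        if "regulated" in self.data_categories:
            return self.dpa_in_place and self.logging_enabled and bool(self.approver)
        return bool(self.approver)
```

A form this small is what makes the 48-hour turnaround realistic: straightforward cases answer six fields and collect one sign-off.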
The third is continuous monitoring and auditing. Usage logs, alerts for anomalous behavior, periodic review of approved use cases. AI evolves quickly. What was safe six months ago may have changed — both in the model and in the regulatory context.
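On the monitoring side, even a deliberately simple volume rule catches a surprising amount. The sketch below is only an illustration: the daily limit is an arbitrary placeholder, and a real deployment would ship these records to whatever SIEM or observability stack you already run instead of printing them.

```python
from collections import Counter
from datetime import datetime, timezone

usage = Counter()   # requests per user in the current window
DAILY_LIMIT = 200   # arbitrary placeholder; tune per team and use case

def record_interaction(user_id: str, model: str, prompt_chars: int) -> None:
    """Append an audit record and flag anomalous usage volume."""
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp} user={user_id} model={model} chars={prompt_chars}")
    usage[user_id] += 1
    if usage[user_id] > DAILY_LIMIT:
        alert(f"user {user_id} exceeded {DAILY_LIMIT} AI requests today")

def alert(message: str) -> None:
    """Placeholder for an alerting integration (Slack, PagerDuty, SIEM)."""
    print(f"ALERT: {message}")
```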
The fourth is human training. Technology solves part of the problem. The other part is behavioral. Teams that understand why policies exist — not just what to do, but the reasoning behind it — make better decisions in situations that no policy can fully anticipate.
A healthcare company I worked with implemented this model in six weeks. The result was internal approval of twelve generative AI use cases that had been stalled due to widespread fear, with appropriate controls for each one. Governance did not block innovation — it unlocked it.
The Most Costly Mistake: Treating Compliance as a Barrier
I want to close with a point that goes against the most common narrative about regulation and technology.
Executives who treat compliance as a barrier to generative AI are making a double strategic mistake. First, they are losing time and opportunity while competitors who have solved the governance problem move forward. Second, when an incident occurs — and without governance, it is only a matter of time — the cost will be far greater than the investment required to get it right from the start.
The companies leading in AI adoption in the Brazilian financial sector are not the ones that ignored compliance. They are the ones that built the governance framework first and then scaled with speed and confidence. There is a direct correlation between maturity in sensitive data security and the speed of AI adoption — because internal and regulatory trust allows you to move faster, not slower.
The Brazilian market is at an inflection point. The window to build real competitive advantage with generative AI is open right now. In two years' time, companies that invested in proper governance will have capabilities, data, and institutional trust that will be very difficult to replicate. Those that waited will be playing catch-up.
The question is not whether your company will use generative AI. It is whether it will use it well — with the right controls, at the right time, with the right data. That is what separates real digital transformation from unnecessary risk.
If you are building your company's AI strategy and want to ensure that governance and innovation go hand in hand, get in touch. This is exactly the kind of problem I solve with CEOs, CTOs, and founders who cannot afford to get this decision wrong.