Confidential AI: Unlocking Sensitive Data for Trusted Enterprise AI

In an era where data is treated as a genuine treasure, organizations are racing to adopt AI to drive their businesses forward. Many, however, hit a critical roadblock: data security and privacy, especially when handling highly sensitive information.
In the panel discussion “Confidential AI: Unlocking Sensitive Data for Trusted Enterprise AI,” Rishabh Poddar, CTO of Opaque Systems, presented a promising approach called “Confidential AI.”
This approach is poised to become a new, indispensable standard for enterprises that want to fully leverage AI while still protecting sensitive data and maintaining trust.
When Data Demand Clashes with Privacy
Opaque Systems was born from research at UC Berkeley around 2015–2016, right in the middle of a growing tension between two worlds:
- The business world: Needs increasingly diverse and granular data to train more powerful machine learning models.
- The legal/regulatory world: Continues to tighten global privacy regulations, making it harder to freely use sensitive data.
This led to a major question: “How can we unlock the value of data without breaking strict security and privacy rules?”
That question became the foundation of Opaque Systems’ mission: to build technology that keeps data confidential—even while it is being processed and used.
Runtime Encryption: The Missing Piece of the Puzzle
Rishabh pointed out that traditional data security methods are no longer sufficient.
- In the past: We were familiar with encrypting data “at rest” (on hard drives) and “in transit” (over networks).
- Today: The critical blind spot is the moment when data is pulled for processing. At that point, it’s often decrypted into plain text, which creates significant risk.
Confidential AI closes this gap by keeping data encrypted “in use,” at runtime. Data is processed inside a protected, isolated environment called an enclave, where even cloud providers and system administrators cannot inspect it.
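The idea can be illustrated with a toy sketch: plaintext exists only inside the "enclave," while everything the outside world handles stays encrypted. The `ToyEnclave` class and the XOR "cipher" below are purely illustrative stand-ins, not real TEE or cryptographic code; real confidential computing relies on hardware features such as Intel SGX or AMD SEV.

```python
# Conceptual sketch only: a toy stand-in for a hardware enclave (TEE).
# The XOR "cipher" is a placeholder for real encryption.

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Symmetric XOR: applying it twice with the same key round-trips the data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyEnclave:
    """Plaintext exists only inside run(); the caller only ever sees ciphertext."""
    def __init__(self, key: bytes):
        self._key = key  # in a real TEE, keys never leave the hardware

    def run(self, ciphertext: bytes, fn):
        plaintext = toy_encrypt(ciphertext, self._key)  # decrypt inside the enclave
        result = fn(plaintext)                          # compute on the plaintext
        return toy_encrypt(result, self._key)           # re-encrypt before returning

key = b"secret"
enclave = ToyEnclave(key)
ct = toy_encrypt(b"salary: 90000", key)        # data "at rest" stays encrypted
out_ct = enclave.run(ct, lambda p: p.upper())  # processed without exposing plaintext
print(toy_encrypt(out_ct, key))                # b'SALARY: 90000'
```

The key property this models is that the caller, the storage layer, and the network only ever see `ct` and `out_ct`; the decrypted bytes live solely within `run()`.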
The Trust Cycle: Verifiable Security in 3 Stages
Opaque’s system does more than protect data: it builds end-to-end trust through a full Trust Cycle:
- Before Processing: Teams can verify which agents, models, or policies will be applied to the data. If anything fails verification, the data will not be touched.
- During Processing: This is the core highlight. Data remains encrypted even in memory while the AI is running. This means that not even cloud or infrastructure providers can secretly access the data.
- After Processing: There are detailed audit logs that prove the data was only used in approved ways. This supports legal and regulatory compliance and makes the whole process transparent and trustworthy.
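The three stages above can be sketched as a small control flow: verify the code before it touches the data, process only if verification passes, and record every decision in an audit log. All names here (`verify_attestation`, `APPROVED_MODELS`, and so on) are hypothetical illustrations, not Opaque's actual API; real attestation is done by the hardware and verified cryptographically.

```python
# Hypothetical sketch of the three-stage trust cycle (not Opaque's API).
import hashlib

# Stage 1 setup: fingerprints of code/models that policy has approved.
APPROVED_MODELS = {hashlib.sha256(b"approved-model-v1").hexdigest()}

def verify_attestation(model_blob: bytes) -> bool:
    """Before processing: check the identity of the code that will touch the data."""
    return hashlib.sha256(model_blob).hexdigest() in APPROVED_MODELS

audit_log = []  # Stage 3: a tamper-evident log in a real system

def process(model_blob: bytes, data: str):
    fingerprint = hashlib.sha256(model_blob).hexdigest()
    if not verify_attestation(model_blob):
        # Verification failed: the data is never touched.
        audit_log.append(("rejected", fingerprint))
        return None
    # During processing: a real system runs this inside an enclave with
    # memory encryption; here we just compute a stand-in result.
    result = data.upper()
    # After processing: record what ran, for compliance review.
    audit_log.append(("processed", fingerprint))
    return result

print(process(b"approved-model-v1", "quota"))  # QUOTA
print(process(b"tampered-model", "quota"))     # None (data untouched)
```

The audit log is what makes the cycle verifiable after the fact: every entry ties an outcome to the exact fingerprint of the code that requested access.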
Generative AI and Real-World Enterprise Application
The arrival of Generative AI has made Confidential AI more essential than ever. Because GenAI requires massive amounts of raw data and often operates in a non-deterministic manner, the risk of sensitive data leakage is significantly higher.
Rishabh highlighted ServiceNow as a prime example. They utilized Confidential AI to build an internal RAG (Retrieval-Augmented Generation) system to answer sensitive employee inquiries regarding topics like "commissions" or "sales quotas." The system securely pulls real data from HR and Sales departments to process answers without any data leakage, while strictly adhering to company policies.
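A policy-gated retrieval flow of the kind described can be sketched as follows. The document store, the per-user policy table, and the answer step are all hypothetical stand-ins; a real deployment would run the retrieval and the LLM generation inside the enclave.

```python
# Hypothetical sketch of a policy-gated RAG retrieval step.

DOCS = {
    "hr":    "Commission rate for Q3 is 8% of closed revenue.",
    "sales": "Quota for enterprise reps is $1.2M per year.",
}

# Which data sources each user's role permits (illustrative policy).
POLICY = {"alice": {"hr", "sales"}, "bob": {"hr"}}

def retrieve(user: str, query: str):
    """Retrieval step: only pull from sources the policy allows this user."""
    allowed = POLICY.get(user, set())
    words = query.lower().split()
    return [doc for src, doc in DOCS.items()
            if src in allowed and any(w in doc.lower() for w in words)]

def answer(user: str, query: str) -> str:
    context = retrieve(user, query)
    if not context:
        return "No accessible documents answer this question."
    # Generation step: a real system would pass `context` to an LLM running
    # inside the enclave; here we simply return the retrieved evidence.
    return " ".join(context)

print(answer("alice", "quota"))  # sees the Sales document
print(answer("bob", "quota"))    # policy blocks the Sales source
```

The important design point is that the policy check happens at retrieval time, before any sensitive text reaches the model, so a user can never leak data their role does not permit them to see.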
Furthermore, this technology opens the door for Multi-party Collaboration. For instance, multiple banks can cooperate to detect money laundering, or hospitals can share data for disease research, all without exposing their raw customer or patient data to one another (Data Sharing without Data Showing).
The New Standard for the Future
Rishabh concluded with a compelling analogy: In the past, websites transmitted data openly until TLS (HTTPS) became the basic security standard.
"Confidential AI is the same. In the future, all AI should be confidential and verifiable."
For enterprise leaders, waiting until the organization feels fully ready may be too late. The advice: identify high-impact use cases now and start laying the foundation for security that is built in rather than bolted on later.
Watch the full session replay here: https://www.youtube.com/watch?v=fvjIbvB3uwk&list=PLJCrobWNqQvuxoXJNq5fS8p8KGb5xmc-t&index=25
