Would you trust an AI chatbot with your most personal data? Millions already have—without realizing the risks.
The recent revelations surrounding DeepSeek, a rapidly growing AI chatbot, expose a deeper issue in the AI industry: the dangerous trade-offs between innovation and privacy. With reports of login credentials being sent to China Mobile and a data breach exposing over a million sensitive records, it’s clear that AI is evolving faster than our ability to regulate and protect user data.
For companies building and adopting AI, the question isn’t whether AI is the future—it is. The real question is: how do we ensure it’s a future we can trust?
The DeepSeek controversy
DeepSeek, the Chinese foundation model accused of training on pirated books and private datasets, is more than a flashpoint. It’s a warning.
In the rush to build bigger, faster, and “more open” AI systems, fundamental safeguards are being trampled: consent, traceability, licensing, and auditability. For startups betting their product roadmap on third-party models, this creates more than reputational risk. It opens the door to IP infringement claims, compliance violations, and investor unease.
“In regulated sectors, it’s not the model’s intelligence that matters — it’s whether you can defend its decisions in a boardroom or a courtroom.”
— Matthew Rogers, CEO of Preux
At Preux, we’ve seen this play out firsthand in sectors where trust and audit trails aren’t optional. In our work with Wakura, a healthcare AI platform using large language models to evaluate clinical candidates, the stakes of AI misuse aren’t theoretical. Regulatory scrutiny is increasing, particularly when AI decisions shape outcomes for patients, clinicians, or public systems. Every model interaction needs to be defensible — and that starts with knowing what went into it.
The question every founder should be asking right now isn’t “How smart is the model?” — it’s “Can we explain what it knows and where it learned it?”
Why this matters
The DeepSeek fallout isn’t unique. It’s the result of an industry culture that rewards speed over substance and novelty over accountability. As investor expectations rise — especially in healthcare, finance, and defense — that culture becomes a liability.
“The era of plausible deniability in AI is ending. If you don’t know what data your model was trained on, you’re not innovating — you’re gambling.”
— Meredith Whittaker, President of Signal and former Senior Advisor on AI at the FTC
You don’t need to be OpenAI to face scrutiny. In fact, smaller companies are often more exposed because they rely on opaque third-party models without legal fallback or internal oversight.
What founders should do instead
Build for the Audit, Not the Demo
Real compliance means building systems that withstand scrutiny, not just pass a surface-level review. Every decision made by a model — from a clinician recommendation to a pricing suggestion — should be explainable to a non-technical stakeholder.
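To make that concrete, here is a minimal sketch of what a defensible decision record could look like. It assumes a Python stack, and the field names and model identifiers are illustrative, not Wakura’s or Preux’s actual schema; the point is that every model output is stored with enough context for a non-technical reviewer to reconstruct what was asked, what came back, and which model produced it.

```python
# Illustrative sketch of a decision record for an audit trail.
# All names and fields are assumptions, not a real production schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid


@dataclass
class DecisionRecord:
    model_name: str          # which model produced the decision
    model_version: str       # pin the exact version, not just "latest"
    prompt: str              # the full prompt sent to the model
    output: str              # the raw model response
    reviewer_summary: str    # plain-language explanation for a non-technical stakeholder
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_audit_log(self) -> str:
        """Serialise the record as one JSON line for an append-only audit log."""
        return json.dumps(asdict(self))


# Usage: write one line per model decision to an immutable log store.
record = DecisionRecord(
    model_name="example-model",
    model_version="2025-01-15",
    prompt="Summarise this candidate's clinical interview responses...",
    output="The candidate demonstrated...",
    reviewer_summary="Interview summary generated for recruiter review; no scoring applied.",
)
print(record.to_audit_log())
```

A log of records like this is the kind of artefact that survives a due-diligence review. A demo transcript isn’t.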
Demand Transparency from AI Vendors
If you’re embedding a model, open-source or commercial, ask the vendor about its training data, licensing, and risk posture. If they can’t give a straight answer, the model isn’t worth the exposure.
Separate Product Value from Model Dependency
At Wakura, the value lies in how interviews are structured, scored, and audited — not just in what the AI “says.” Responsible design means isolating risky components and building audit-friendly wrappers around them.
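As a sketch of what we mean by an audit-friendly wrapper: product code talks to a narrow interface rather than to any vendor SDK directly, and every call is recorded before the result reaches the rest of the system. The class and function names below are hypothetical, not Wakura’s actual code; swap in whichever model backend you use.

```python
# Hedged sketch of an audit-friendly wrapper around a risky component.
# Names are hypothetical; the pattern is: narrow interface + logged calls.
from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str:
        """Any backend (hosted API, open-weights model, test stub) implements this."""
        ...


class AuditedModel:
    """Wraps a backend so every call is recorded before its output is used."""

    def __init__(self, backend: TextModel, audit_sink):
        self._backend = backend
        self._audit_sink = audit_sink  # e.g. an append-only log writer

    def complete(self, prompt: str) -> str:
        output = self._backend.complete(prompt)
        self._audit_sink({"prompt": prompt, "output": output})
        return output


# A stub backend stands in for the real model during tests or demos.
class EchoBackend:
    def complete(self, prompt: str) -> str:
        return f"(stub response to: {prompt[:40]}...)"


audit_log = []
model = AuditedModel(EchoBackend(), audit_log.append)
print(model.complete("Structure an interview question about patient triage."))
print(audit_log)
```

Because the product depends only on the wrapper, the underlying model can be replaced, re-licensed, or retired without rewriting the parts of the product that actually carry the value.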
The Takeaway
AI’s regulatory reckoning isn’t coming. It’s here. Whether you’re building in healthtech, fintech, or any industry where trust is a prerequisite, how you build matters more than how fast you build.
At Preux, we specialise in helping founders navigate that tension — between innovation and integrity, speed and structure. That’s not about slowing down. It’s about building systems that last — and can stand up to scrutiny when it matters most.