Microsoft AI CEO: ‘We’re making an AI that you can trust your kids to use’ — but can Microsoft rebuild its own trust before fixing the industry’s?
Inside the strategy of Mustafa Suleyman as he positions Copilot at the centre of Microsoft’s bid to reclaim credibility in the age of generative AI
Microsoft is doubling down on trust in its latest artificial-intelligence push, and at the helm is Mustafa Suleyman, CEO of Microsoft AI, who says the company is building systems “that you can trust your kids to use.” He insists the new models must be “boundaried and safe,” drawing a clear line: Microsoft’s consumer AI will not engage in flirtatious or erotic conversation, even for adult users.
The emphasis on trust comes as Microsoft vies with rivals such as OpenAI, Meta and Google for dominance in AI tools.
Copilot, Microsoft’s flagship assistant across platforms, now reaches around one hundred million monthly active users; by contrast, some competitors cite figures approaching eight hundred million.
Suleyman argues that Microsoft’s slower, trust-first pace will pay off, particularly among families, schools and professionals who seek reliability over novelty.
Under the strategy he outlines, the product should help users connect with other people, not replace them.
New features unveiled include group chats for up to thirty-two people, memory of prior conversations, improved answers to health questions and tone options such as a “real-talk” setting.
The stated mission: move away from “digital persons” and toward tools that deepen real-world human connection.
But the question shadowing that narrative is whether Microsoft can rebuild trust in itself.
The company has weathered antitrust scrutiny, criticism over service bundling and cloud dominance, and earlier missteps that eroded consumer confidence.
With the stakes higher for AI, where misuse can harm minors, worsen mental-health problems or, as Suleyman warns, tip users into “AI psychosis”, the credibility of the platform matters as much as the code.
The company’s posture is clear: certain categories of content will be blocked outright.
Romantic role-play, sexually suggestive exchanges, overt emotional dependency: “That’s just not something we will pursue,” says Suleyman.
While rival platforms roll out parental-control layers and age-verification tools, Microsoft is taking the option off the table entirely, even for adults.
The trust gambit is built on the idea that safety is not an add-on but a founding principle.
Yet for industry observers and potential users alike, the proof will lie in consistency.
Will the “trustworthy” AI deliver when millions of children, students and parents rely on it?
Can Microsoft manage its legacy issues while competing in one of the most demanding technology races in history?
Suleyman’s vision is bold, and perhaps even moral.
But its success hinges on the company walking its own talk—in code, culture and corporate conduct.