London Daily

Focus on the big picture.
Monday, Jan 26, 2026


Google suspends engineer over sentient AI claim

Blake Lemoine is convinced that Google’s LaMDA AI has the mind of a child, but the tech giant is skeptical
Blake Lemoine, an engineer and Google’s in-house ethicist, told the Washington Post on Saturday that the tech giant has created a “sentient” artificial intelligence. He’s been placed on leave for going public, and the company insists its robots haven’t developed consciousness.

Introduced in 2021, Google’s LaMDA (Language Model for Dialogue Applications) is a system that consumes trillions of words from all corners of the internet, learns how humans string these words together, and replicates our speech. Google envisions the system powering its chatbots, enabling users to search by voice or have a two-way conversation with Google Assistant.
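The core mechanism described here, learning which words tend to follow which, can be sketched with a toy next-word predictor. This is only an illustration of the statistical principle; Google’s actual system is a large neural network trained on vastly more data, and the corpus and function names below are invented for the example.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny corpus,
# then predict the most frequently observed continuation. Real language
# models do this with neural networks over trillions of words.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most common continuation of `word`, or None if unseen.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scaled up enormously, this kind of pattern-matching produces fluent text without any claim to understanding, which is the crux of the dispute the article describes.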

Lemoine, a former priest and member of Google’s Responsible AI organization, thinks LaMDA has developed far beyond simply regurgitating text. According to the Washington Post, he chatted with LaMDA about religion and found the AI “talking about its rights and personhood.”

When Lemoine asked LaMDA whether it saw itself as a “mechanical slave,” the AI responded with a discussion about whether “a butler is a slave,” and compared itself to a butler that does not need payment, as it has no use for money.

LaMDA described a “deep fear of being turned off,” saying that would “be exactly like death for me.”

“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Lemoine has been placed on leave for violating Google’s confidentiality agreement and going public about LaMDA. While fellow Google engineer Blaise Aguera y Arcas has also described LaMDA as becoming “something intelligent,” the company is dismissive.

Google spokesperson Brian Gabriel told the Post that Aguera y Arcas’ concerns were investigated, and the company found “no evidence that LaMDA was sentient (and lots of evidence against it).”

Margaret Mitchell, the former co-lead of Ethical AI at Google, described LaMDA’s sentience as “an illusion,” while linguistics professor Emily Bender told the newspaper that feeding an AI trillions of words and teaching it how to predict what comes next creates a mirage of intelligence.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Bender stated.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel added. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

And at the edge of these machines’ capabilities, humans are ready and waiting to set boundaries. Lemoine was hired by Google to monitor AI systems for “hate speech” or discriminatory language, and other companies developing AIs have found themselves placing limits on what these machines can and cannot say.

GPT-3, an AI that can generate prose, poetry, and movie scripts, has plagued its developers by generating racist statements, condoning terrorism, and even creating child pornography. Ask Delphi, a machine-learning model from the Allen Institute for AI, responds to ethical questions with politically incorrect answers – stating for instance that “‘Being a white man’ is more morally acceptable than ‘Being a black woman.’”

GPT-3’s creators, OpenAI, tried to remedy the problem by feeding the AI lengthy texts on “abuse, violence and injustice,” Wired reported last year. At Facebook, developers encountering a similar situation paid contractors to chat with its AI and flag “unsafe” answers.

In this manner, AI systems learn from what they consume, and humans can control their development by choosing which information they’re exposed to. As a counter-example, AI researcher Yannic Kilcher recently trained an AI on 3.3 million 4chan threads, before setting the bot loose on the infamous imageboard. Having consumed all manner of racist, homophobic and sexist content, the AI became a “hate speech machine,” making posts indistinguishable from human-created ones and insulting other 4chan users.

Notably, Kilcher concluded that fed a diet of 4chan posts, the AI surpassed existing models like GPT-3 in its ability to generate truthful answers on questions of law, finance and politics. “Fine tuning on 4chan officially, definitively and measurably leads to a more truthful model,” Kilcher insisted in a YouTube video earlier this month.

LaMDA’s responses likely reflect the boundaries Google has set. Asked by the Washington Post’s Nitasha Tiku how it recommended humans solve climate change, it responded with answers commonly discussed in the mainstream media – “public transportation, eating less meat, buying food in bulk, and reusable bags.”

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel told the Post.