Slingshot Withdraws Therapy Chatbot Ash from UK Market Amid Regulatory Uncertainty
Mental health technology firm pauses UK availability of its AI chatbot as oversight concerns and compliance questions intensify
The digital health company Slingshot has withdrawn its artificial intelligence therapy chatbot, known as Ash, from the United Kingdom, citing mounting regulatory uncertainty and concerns about compliance with emerging oversight frameworks.
The decision follows increased scrutiny from UK authorities over the use of generative artificial intelligence in sensitive health-related applications, particularly tools offering mental health support without direct human supervision.
Ash, which was marketed as an on-demand conversational companion designed to help users manage anxiety, stress and emotional wellbeing, had attracted attention for its rapid growth and novel use of large language models.
However, regulators and professional bodies raised questions about safety standards, data protection, clinical accountability and the risk of users relying on automated systems in place of qualified medical professionals.
Slingshot said it chose to pause its UK offering rather than risk operating in an environment where regulatory expectations remain in flux.
In a statement, the company said it supports responsible regulation of artificial intelligence in healthcare and intends to work with policymakers to clarify the rules governing digital mental health tools.
Slingshot emphasised that the withdrawal does not reflect a lack of confidence in the technology itself, but rather a desire to ensure that future deployments fully align with legal, ethical and clinical requirements.
The chatbot will continue to operate in other markets while the company reassesses its UK strategy.
The move comes as the UK accelerates efforts to regulate artificial intelligence, including stricter enforcement under health, consumer protection and data laws.
Officials have signalled that AI systems providing therapeutic or quasi-medical guidance may be subject to standards similar to those applied to regulated medical devices, particularly where vulnerable users are involved.
Industry analysts say this approach reflects broader international concern about the unchecked deployment of generative AI in high-risk domains.
Slingshot’s decision highlights the growing tension between rapid innovation in artificial intelligence and the slower pace at which regulatory clarity emerges, especially in mental health care.
As governments move to establish clearer guardrails, companies operating at the intersection of AI and healthcare are increasingly forced to adapt their business models or temporarily retreat from certain markets until compliance pathways are more clearly defined.