Prime Minister Keir Starmer has signalled that the UK government may extend the Online Safety Act to cover AI chatbots, warning that conversational AI systems pose distinct risks to children that existing social media regulations were not designed to address.
In a recent public update, Starmer warned that rapidly evolving AI systems are reshaping childhood in ways lawmakers can no longer afford to overlook. While online harms debates have traditionally centred on social media platforms, he argued that regulation must now catch up with AI chatbots capable of generating real-time, human-like interactions.
“Protections for young people must evolve as technology evolves,” Starmer wrote, framing chatbot regulation as a natural extension of the UK’s broader online safety reforms.
Expanding the scope of online safety laws
The UK already has one of the world’s most comprehensive digital frameworks in the form of the Online Safety Act 2023, which places new duties of care on tech platforms. However, policymakers are now assessing whether the legislation sufficiently captures AI-driven services that function differently from traditional social media feeds.
Under potential reforms, the Act’s duties could explicitly extend to conversational systems embedded in apps, search engines, gaming platforms, and standalone AI services. These tools, often powered by large language models, engage users in personalised dialogue that can mimic empathy, authority, or companionship.
For children and teenagers, Starmer cautioned, such systems may blur the boundaries between neutral information and persuasive influence, making regulation a pressing matter.
Unique risks posed by AI chatbots
Unlike static content platforms, AI chatbots generate responses dynamically. This real-time generation makes moderation significantly more complex. Harmful or inappropriate responses cannot always be pre-screened in the same way as uploaded content.
Starmer pointed to concerns ranging from exposure to explicit material and misleading advice to emotional dependency. Child development experts have warned that young users may form attachments to conversational AI systems, particularly when interactions feel private and personalised.
“Children deserve the space to grow without being shaped by opaque algorithms,” Starmer said, underscoring why regulation must address both content risks and design features that encourage prolonged engagement.
Digital safety advocates have echoed that sentiment. Andy Burrows, CEO of the Molly Rose Foundation, has previously argued that AI tools “must not repeat the mistakes made with social media,” adding that proactive regulation could prevent harm before it becomes systemic.
Parliamentary powers and faster enforcement
One of the core challenges facing regulators is the speed at which AI capabilities are advancing. Chatbots today can simulate emotional support, provide educational guidance, or even role-play complex scenarios. Tomorrow’s iterations may be even more immersive.
Starmer suggested that Parliament may require enhanced regulatory flexibility to respond swiftly as these technologies develop. This could involve empowering the communications regulator Ofcom with expanded authority to oversee AI-driven services under the Online Safety Act.
Ofcom already holds enforcement powers over major platforms, including the ability to levy substantial fines for non-compliance. Bringing AI systems into its remit would mark a significant expansion of the UK’s regulatory reach.
A spokesperson for Ofcom previously stated that the regulator is “closely monitoring the development of generative AI services” and stands ready to implement parliamentary directives if legislation evolves.
Balancing innovation and protection
Starmer was careful to stress that the government does not intend to stifle innovation. The UK has positioned itself as a global hub for artificial intelligence research and development, hosting major AI safety discussions and courting private-sector investment.
However, he insisted that innovation must not come at the expense of child safety. In his view, AI Chatbot Regulation is about ensuring responsible design and deployment, not blocking technological progress.
“Tech companies must take greater responsibility for how their tools are built and used,” Starmer said, signalling that voluntary safeguards may no longer suffice.
This stance aligns with broader international trends. The European Union’s AI Act and ongoing discussions in the United States reflect growing consensus that generative AI requires clearer guardrails—particularly when minors are involved.
Public consultation and next steps
The UK government is expected to launch a public consultation to gather evidence on how minors use AI-driven services and where regulatory gaps exist. The findings will inform the next phase of regulation, potentially leading to legislative amendments.
Stakeholders likely to participate include child welfare organisations, educators, AI developers, and digital rights advocates. The consultation will examine technical feasibility, enforcement mechanisms, and proportionality, the key pillars of any effective regime.
Legal analysts note that extending existing law may be more efficient than drafting entirely new statutes. By adapting the Online Safety Act framework, lawmakers could bring AI chatbots within scope without creating regulatory overlap or confusion.
A defining test for digital governance
As AI tools become embedded in everyday life, from homework assistance to mental health advice, the stakes are rising. The UK’s approach to regulating AI chatbots may serve as a model for other democracies grappling with similar questions.
For Starmer, the issue is ultimately about safeguarding childhood in a digital age. He framed the initiative as part of a broader mission to “give children the space to grow” free from manipulation or unchecked algorithmic influence.
With consultation on the horizon and political momentum building, chatbot regulation is poised to become a central pillar of the UK’s evolving tech policy. Whether lawmakers can craft rules that protect young users while preserving innovation will determine not just the future of AI oversight, but the digital environment shaping the next generation.