Italy’s antitrust authority launches a probe into Chinese AI startup DeepSeek over inadequate warnings about AI-generated false information, highlighting growing regulatory scrutiny of AI transparency and consumer protection.
Italy Launches Regulatory Probe into Chinese AI Firm DeepSeek Over False Information Risks
Italy has intensified its regulatory scrutiny of Chinese artificial intelligence startup DeepSeek, initiating a formal investigation into the company’s failure to adequately warn users about the potential for its AI chatbot to produce false or misleading information. This move highlights growing concerns within Europe about transparency, consumer protection, and the ethical deployment of AI technologies.
Background and Regulatory Context
The Italian Competition and Market Authority (AGCM), responsible for overseeing antitrust matters and consumer rights, announced the probe in June 2025. The investigation centers on allegations that DeepSeek did not provide users with sufficiently clear, immediate, and intelligible warnings regarding the risk of “hallucinations” in its AI-generated content. These hallucinations refer to instances where the AI confidently generates outputs that are inaccurate, misleading, or entirely fabricated in response to user inputs.
The AGCM’s statement emphasized that DeepSeek’s platform lacked transparency about these risks, potentially exposing users to false information without proper cautionary guidance. This lack of disclosure raises significant consumer protection issues, particularly as AI chatbots become more integrated into everyday digital interactions.
Previous Regulatory Actions
This inquiry follows earlier regulatory interventions by Italian authorities. In February 2025, Italy’s data protection authority ordered DeepSeek to block access to its chatbot after the company failed to adequately address concerns related to its privacy policies. The data watchdog’s action underscored the broader regulatory challenges DeepSeek faces in Italy, spanning both data privacy and consumer protection domains.
Despite multiple requests for comment, DeepSeek has not publicly responded to the allegations or the ongoing investigations.
The Significance of AI Hallucinations
AI hallucinations represent a critical challenge in the deployment of generative AI models. They occur when an AI system produces output that appears plausible but is factually incorrect or fabricated: a chatbot might invent historical events, fabricate statistics, or deliver wrong answers with high confidence, misleading users who rely on the information.
Such risks have prompted regulators worldwide to demand greater transparency and safeguards from AI developers. Italy’s probe into DeepSeek reflects a proactive regulatory stance aimed at ensuring AI companies disclose these limitations clearly to protect consumers from misinformation.
Broader Implications for AI Regulation in Europe
Italy’s actions against DeepSeek are part of a broader European trend toward stringent oversight of AI technologies. Regulators are increasingly focused on the ethical implications of AI, including accuracy, transparency, user consent, and data privacy. The European Union’s AI Act, which entered into force in August 2024 and regulates high-risk AI applications, underscores the continent’s commitment to responsible AI governance.
By targeting DeepSeek, Italy signals that companies operating AI platforms within its jurisdiction must comply with strict standards for user warnings and data protection. Failure to do so may result in legal penalties, restrictions, or bans.
What Lies Ahead for DeepSeek
As the AGCM investigation unfolds, DeepSeek faces significant pressure to enhance its transparency measures and address the risks associated with AI hallucinations. The outcome of this probe could set important precedents for how AI startups disclose the limitations of their technologies and protect consumers from misinformation.
Given the rapid expansion of AI applications, regulators worldwide will likely monitor this case closely to inform their own policies and enforcement strategies.