
A viral wave swept across global social media after a post from Dave Jones, an electronics designer and YouTuber, claimed that Google was quietly using Gmail messages and attachments to train its Gemini AI. His warning, shared on X, quickly surpassed 6.5 million views and reached tens of thousands of followers. He advised users to disable Smart Features in Gmail's settings to prevent their messages from being used for AI training, fueling concern and confusion among the many people who rely on Google's services every day.
Google responded quickly and firmly, stating that the reports were inaccurate. According to company spokespeople who spoke with international outlets, Google did not change any user settings and does not use Gmail content to train the Gemini model. The company noted that Smart Features have been part of Gmail for years, designed to help users with tasks such as email summaries and automated scheduling; they are not a hidden mechanism for feeding email data into the company's main AI models. This reassurance directly countered the escalating narrative and restored clarity to an increasingly heated discussion.
Dave Jones originally shared screenshots of Gmail's settings menu that mentioned the use of data to personalize the user experience, and some users interpreted that text as consent for AI training. Google clarified that this feature is entirely separate from Gemini training and only processes information within a user's own Workspace environment. The company emphasized that any future policy change would be announced openly, not implemented silently in the background. This distinction helped ground the conversation and dispel the fear that user data was being secretly mined.
Mashable, which was skeptical of the rumors from the beginning, pointed out that Smart Features are neither new nor hidden. They work alongside Gemini to analyze information only for the account's owner, powering things like spam detection and email composition assistance, without contributing to external model training. Malwarebytes, which had published a similar warning, issued a correction and apologized after confirming that unclear wording in Gmail's updated settings menu from January had caused widespread misunderstanding. It also noted that Gmail has long scanned emails to deliver basic functionality, a practice unrelated to any new Gemini training.
Google’s own Gemini for Workspace page reiterates that user data stays within the Workspace ecosystem and is not used to train models outside the service unless explicit permission is given. The confusion largely arose from users assuming that enabling Smart Features automatically meant allowing AI training, when in reality the two functions operate independently and have no direct link. This clarification helped many regain confidence in how their information is handled.
Although the rumors have settled, the episode highlights how aware users must now be of data privacy in major tech services. Anyone still concerned can turn off Smart Features in Gmail's settings by navigating to the data privacy options and disabling the relevant toggles, which appear in two places. This ensures that personal information is processed only for essential features, even though Google maintains that none of it affects the training of its core AI models. The event serves as a reminder that clear communication and mindful settings are key to maintaining trust in today's digital landscape.
THIS IS OUR SAY
Viral rumors spread faster than facts, but clear and honest communication from companies is what keeps users confident, and this moment shows just how crucial transparency is in today’s AI-driven world.
Origin: Mashable