Google did not train its web search chatbot Bard on text from private Gmail messages, a spokesperson confirmed to The Register.
An AI researcher quizzed Bard on where its training data came from, and was surprised when it mentioned internal data from Gmail. Former Google employee Blake Lemoine – who was fired after leaking company secrets and declaring that its large language model (LLM) LaMDA was sentient – claimed Bard had indeed been trained on text from Gmail, including private emails.
The Register asked Google for comment, and a representative told us in a statement: “Like all LLMs, Bard can sometimes generate responses that contain inaccurate or misleading information while presenting it confidently and convincingly. This is an example of that. We do not use personal data from your Gmail or other private apps and services to improve Bard.”
Google launched Bard this week, and invited netizens in the US and UK to join the waitlist to talk to the chatbot. So far, Bard doesn't seem to generate text as erratic and unhinged as Microsoft's Bing did in its early tests – but it can still be prompted to reply to inappropriate requests, and is prone to making up false information. These shortcomings may diminish as the model is trained on more data and user feedback.