Alphabet Inc, Google's parent company, is cautioning employees about how they use chatbots, including its own Bard, even as it markets the program around the world, according to people familiar with the matter.
The company has instructed employees not to enter confidential material into AI chatbots, Alphabet confirmed, citing its long-standing policy on safeguarding information.
Bard and ChatGPT are human-sounding programs that use generative artificial intelligence to hold conversations with users and answer their prompts. Human reviewers may read those chats, and researchers have found that similar AI models can reproduce the data they absorbed during training, creating a risk of leaks.
Alphabet has also warned its engineers to avoid direct use of computer code that chatbots generate, some of the people said.
When approached for comment, the company acknowledged that Bard could suggest undesirable code but emphasized its usefulness to programmers. Google also expressed its commitment to transparency regarding the limitations of its technology.
The concerns show how Google hopes to avoid business harm from software it launched in competition with ChatGPT. At stake in the race among Google, ChatGPT creator OpenAI, and OpenAI's backer Microsoft Corp are significant investments as well as potential advertising and cloud revenue from new AI programs.
Google's caution also reflects what is becoming a security standard for corporations: warning personnel about using publicly available chat programs.
A growing number of companies worldwide, including Samsung, Amazon.com, and Deutsche Bank, have set up guardrails on AI chatbots, the companies confirmed. Apple, which did not return requests for comment, reportedly has similar precautions in place.
A survey by the networking site Fishbowl found that 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses.
In February, before Bard's launch, Google told staff testing the chatbot not to give it internal information, Insider reported. Google is now rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to the code suggestions it makes.
Google said it has had detailed conversations with Ireland's Data Protection Commission and is addressing regulators' questions, following a Politico report on Tuesday that the company was postponing Bard's launch in the European Union pending more information about the chatbot's impact on privacy.
Such technology can draft emails, documents, and even software, but that output can also include misinformation, sensitive data, or copyrighted material, such as passages from the "Harry Potter" novels.
Google’s updated privacy notice, effective from June 1, also cautions against including confidential or sensitive information in conversations with Bard.
To address such concerns, some companies have developed software. Cloudflare, for example, which sells cybersecurity and cloud services, offers a capability that lets businesses tag certain data and block it from being shared externally.
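Cloudflare has not publicly detailed how that feature works, but the general data-loss-prevention pattern is to screen outbound text against tagged or pattern-matched secrets before it ever reaches a chatbot. The sketch below is a minimal, hypothetical illustration of that idea, not Cloudflare's product or API; the pattern list and the `screen_prompt` function are invented for the example.

```python
import re

# Hypothetical patterns a company might tag as confidential.
# Real DLP products use far richer detection (classifiers,
# document fingerprints, exact-match dictionaries).
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bINTERNAL[- ]ONLY\b", re.IGNORECASE),  # document tag
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # U.S. SSN-like number
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                 # AWS-style access key ID
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matches); block the prompt if any tagged pattern appears."""
    matches = [p.pattern for p in CONFIDENTIAL_PATTERNS if p.search(prompt)]
    return (len(matches) == 0, matches)

if __name__ == "__main__":
    ok, hits = screen_prompt("Summarize this INTERNAL-ONLY roadmap for Q3.")
    print("allowed" if ok else f"blocked: matched {hits}")
```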
Google and Microsoft also offer conversational tools to business customers at a higher price point that refrain from absorbing data into public AI models. By default, Bard and ChatGPT save users' conversation history, which users can opt to delete.
Yusuf Mehdi, Microsoft's consumer chief marketing officer, said it is understandable that companies would not want their staff to use public chatbots for work. He explained that the policies governing Microsoft's enterprise software are much stricter than those for its free Bing chatbot.
Microsoft declined to say whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a different executive there told Reuters he personally restricted his use.
Cloudflare CEO Matthew Prince likened typing confidential matters into chatbots to turning a group of Ph.D. students loose in all of one's private records.