
Google Warns Employees on What Not to Do With the Bard Chatbot

By Jerry Walters | 5 Mins Read
A colorful Google logo sign hanging on the wall
Featured | Unsplash

Google has issued a stern warning to its employees about how they use AI chatbots, including its own Bard. Alphabet Inc., Google's parent company, cautioned staff against entering confidential material into AI tools such as ChatGPT and Bard, fearing that employees could unknowingly leak sensitive information while conversing with them.

According to four sources cited by Reuters, the concern centers on how Google employees use these AI applications. While AI tools have many legitimate uses, such as content generation, they also carry security risks, chiefly cyber threats. If confidential information from Google employees leaked through these chatbots, it could cause serious damage to the company, so Google is taking preventive measures to keep such data out of them.


Alphabet Inc. notes that human reviewers or programmers may read chatbot conversations at the other end, and could therefore see sensitive information that was never meant to reach the public.

Although chatbots are artificial intelligence, humans still build them and curate the information they provide, so engineers at the back end could come across confidential data entered by a Google employee.

There is also a concern about data leaking through training. AI chatbots rely heavily on past interactions to improve through machine learning, and Alphabet worries that confidential information entered by its employees could be absorbed when those conversations are used to retrain the models.

Such risks are not hypothetical: a Samsung employee leaked sensitive source code while interacting with ChatGPT, after which the electronics company banned its employees from using such AI tools. When news outlets reached out to Google for comment, the company declined.

Artificial intelligence robot figure in programming language background
Unsplash

More Companies Banning the Use of AI Chatbots

Leaks of sensitive information through AI chatbots have prompted several tech giants to restrict their use at work. In January, an Amazon lawyer told employees not to share code or other confidential information with any AI chatbot, especially ChatGPT.

He instructed Amazon workers not to share "any Amazon confidential information (including Amazon code you are working on)" with ChatGPT. The warning was delivered over Slack, in messages whose screenshots were reviewed by Insider.

Apple took a similar step, banning its employees from using AI tools such as ChatGPT and GitHub Copilot. Internal documents confirming the move were reported by The Wall Street Journal.

Sources speaking to The Wall Street Journal have indicated that Apple is focused on developing its own AI solution. In 2020, the tech giant acquired two AI startups for $200 million and $50 million, respectively. It's worth noting that other tech giants are also pursuing the development of their own AI solutions, not just Google.

In March, Google introduced Bard as a competitor to the well-known ChatGPT. The chatbot is built on Google's in-house AI engine, the Language Model for Dialogue Applications (LaMDA). Even as these tech companies race to develop their own AI tools, their shared priority is ensuring that employees do not disclose sensitive information through AI chatbots.
