Concerned about leaks, Samsung forbids staff from using ChatGPT.
The order was issued over concerns that the AI chatbot was absorbing sensitive internal data. Samsung informed staff on Tuesday that ChatGPT use on company networks is no longer permitted.
According to Bloomberg, Samsung banned the popular chatbot because it was worried that staff were sending it sensitive company information. Samsung noted that data submitted to AI platforms such as ChatGPT, Google Bard, and Bing is stored on external servers, may be difficult to retrieve or erase, and could end up disclosed to other users.
Samsung warned that disregarding its security guidelines could lead to a breach or compromise of corporate data, which could result in disciplinary action up to and including termination of employment.
Like other technology companies, Samsung is vigilant about protecting its intellectual property, including hardware and software designs and the schedules for new product releases.
Although it’s uncertain whether Samsung-specific information could be recovered from the large language models behind generative AI tools, even abstracted information could be useful to competitors.
Following the release of GPT-4 in March, OpenAI’s ChatGPT swept the internet. A few days after the model went live, several well-known computer scientists, researchers, and public figures, including Elon Musk, Steve Wozniak, and Andrew Yang, signed an open letter urging a pause on training AI systems more powerful than GPT-4.
Restricting ChatGPT Access for Employees
Because of growing security concerns, Samsung has prohibited its workers from using ChatGPT, the AI language model developed by OpenAI. The decision was driven by growing awareness of the potential data breaches and information security risks that come with AI-powered platforms.
Samsung’s action highlights the company’s commitment to securing private information and preventing unauthorized access to it. Given that ChatGPT is widely used to generate text-based content, this cautious move aims to mitigate any weaknesses that could compromise Samsung’s internal security protocols.
Although ChatGPT provides useful features for content creation and natural language processing, Samsung’s move to limit access is indicative of the changing cybersecurity risks that businesses around the globe must deal with. Samsung intends to strengthen its defenses against potential cyber threats and preserve the integrity of its private information by imposing stringent limitations on employee access to external AI platforms.
This proactive approach is consistent with Samsung’s security-first reputation and emphasizes how critical it is to put strong security measures in place to guard against new threats in an increasingly digital world. Businesses need to stay alert as technology develops and implement thorough security procedures to guard critical information from outside attacks and minimize potential weaknesses.
What Risk to Security Does ChatGPT Present?
ChatGPT retains every detail entered into it. Samsung staff learned this last week after mistakenly exposing sensitive data to the AI-powered chatbot.
ChatGPT can help with tasks such as data analysis, scheduling meetings, and drafting emails.
User inputs are retained, however, to train the AI that powers the bot, which poses a security concern. If someone asks ChatGPT to generate a piece of code similar to one written by a previous user, the model may reproduce components of that user’s code and share them with someone else.
OpenAI, ChatGPT’s developer, expressly cautions users against sharing sensitive information because it is “not able to delete specific prompts” unless users opt out.
Samsung discovered three cases in which sensitive information was compromised, according to Economist Korea. Two employees entered confidential equipment data into the chatbot, and a third shared an excerpt from a meeting.
Both Walmart and Amazon.com have issued internal alerts about the use of AI.
Amazon reportedly issued its warning in January after identifying language created by ChatGPT that “closely” matched confidential corporate information, according to internal Slack discussions made available to Business Insider.
According to a different Business Insider report, Walmart issued a late-February email cautioning employees to “not input any information about Walmart’s business, including business process, policy, or strategy, into these tools.”
According to Walmart, general AI technologies “can enhance efficiency and innovation,” but they must be used “appropriately.”
A rising number of companies, including CitiGroup, Goldman Sachs, Verizon, and Wells Fargo, have banned their employees from using ChatGPT in response to growing concerns about their ability to control how such technology is used.
A recent study by the cybersecurity company Cyberhaven found that 3.1% of workers using AI tools had at some point entered confidential company data into them.
According to Cyberhaven, ChatGPT has a variety of legitimate uses in the workplace, and companies that can use it to increase productivity without endangering their sensitive data stand to benefit.
What Is ChatGPT’s Stance on Data Privacy?
When using ChatGPT, you can ask about its data privacy policies if you wish to go one step further. When we did, it responded with the following suggestions:
Be cautious with what you say, and avoid giving personal information.
Consider utilizing a false identity or an alias when interacting with ChatGPT.
In Summary
Remember that anything you enter into ChatGPT may be retained and could surface for other users. Please don’t enter any sensitive, private, or confidential information into the chat box, including personally identifiable information (PII). This holds for any instructions and questions you type.
A reliable internet service provider can also make a difference here: a well-secured connection helps protect your data in transit and reduces the risk of leaks.
To prevent the tool from being trained on your data, avoid pasting extracts from contracts, software code, or other private documents.
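One practical way to follow this advice is to scrub obvious sensitive patterns from text before it ever leaves your machine. Below is a minimal, hypothetical sketch in Python: the helper name `redact` and the regex patterns are illustrative assumptions, not an exhaustive or production-grade filter (real organizations typically rely on vetted data-loss-prevention tools).

```python
import re

# Illustrative patterns only -- a real deployment would cover far more
# cases (names, addresses, internal project codenames, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3,4}[-.\s]\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched pattern with a [LABEL] placeholder
    before the prompt is sent to any external AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, token sk-abc123def456ghi789"))
# → Contact [EMAIL], token [API_KEY]
```

A scrubber like this cannot catch everything, which is why the safest policy remains the one above: simply do not paste confidential material into the chat box at all.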