
Samsung Bans Employee Use of ChatGPT and Other Generative AI Tools Over Security Risks

Samsung Electronics Co. has banned employee use of popular generative AI tools, including ChatGPT, after an accidental exposure incident involving a large language model (LLM). Sensitive company source code was uploaded to the platform, leaking proprietary data outside the organization. LLMs are trained on large data sets, which can include the prompts and conversations users submit to the service.


This can include sensitive information that may be inadvertently leaked, posing a risk to company security. Threat actors regularly comb LLMs to find information about companies that can be used to launch targeted attacks.


Matt Fulmer, Cyber Intelligence Engineering Manager at Deep Instinct, has warned that companies need to understand how these platforms work on the back end to avoid such incidents. "[Companies can] follow Samsung's lead and implement a security lockdown to prevent members of the organization from using LLMs to try and 'simplify' their jobs. In addition, create the necessary security policies to outline an acceptable use policy (AUP) and clearly define the penalties for violating it. This technology can be a boon to society, but right now it's a burden on anyone within security, given the real and persistent threat it has become," Fulmer said.


Samsung’s new policy is a setback for the spread of generative AI tools in the workplace, and its concerns are not unique to the company. In a survey conducted last month, 65% of Samsung employees said that such services pose a security risk. Other companies, including Wall Street banks JPMorgan Chase & Co., Bank of America Corp., and Citigroup Inc., have either banned or restricted the use of generative AI tools like ChatGPT over security concerns.


The ban covers the use of generative AI systems on company-owned computers, tablets, and phones, as well as on internal networks. It does not apply to the devices Samsung sells to consumers, such as Android smartphones and Windows laptops. Employees who use ChatGPT and other tools on personal devices have been asked not to submit any company-related information or personal data that could reveal the company’s intellectual property. Violating the new policy could result in disciplinary action up to and including termination of employment.


Samsung is also developing its own internal AI tools for translation, document summarization, and software development, and is working on ways to block the upload of sensitive company information to external services. Meanwhile, ChatGPT has added an “incognito” mode that lets users prevent their chats from being used for AI model training.
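Blocking controls of the kind Samsung is reportedly building typically inspect outbound requests before they reach an external AI service. As a minimal sketch only (the patterns below, including the example email domain, are hypothetical placeholders, not Samsung's actual rules), a proxy-side filter might scan prompts for markers of sensitive content:

```python
import re

# Illustrative patterns only: a real deployment would use a maintained
# DLP ruleset (classifiers, fingerprinted source files, etc.).
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),   # leaked keys
    re.compile(r"(?i)\bconfidential\b|\binternal use only\b"), # doc markings
    re.compile(r"\b[A-Za-z0-9._%+-]+@example\.com\b"),         # internal emails (hypothetical domain)
]

def is_blocked(prompt: str) -> bool:
    """Return True if an outbound prompt matches any sensitive pattern."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

# A proxy would reject the first request before it leaves the network.
print(is_blocked("Review this key: -----BEGIN PRIVATE KEY-----"))  # True
print(is_blocked("What is the weather today?"))                    # False
```

Pattern matching like this catches only obvious markers; it cannot recognize proprietary source code on its own, which is one reason outright bans are a simpler control.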


Generative AI technology has the potential to enhance productivity and efficiency in the workplace, but the risk of inadvertent leaks of sensitive company data requires caution. Companies need to take a proactive approach to secure their networks and implement policies to mitigate these risks. The recent bans and restrictions by major corporations are a reminder of the need for robust security measures when implementing generative AI tools in the workplace.


###
