
Privacy and Security Risks Loom as Large Language Models Transform Conversations

In the era of pervasive access to Large Language Models (LLMs) like ChatGPT, the convenience of conversational AI is accompanied by pressing data security concerns. While these interactions enhance our daily lives, the prompts that users and organizations feed into LLMs can reveal detailed insights into personalities, behaviors, and preferences, raising significant privacy and security risks. NCC Group recently published a research blog on the potential data security challenges associated with LLM prompts, demonstrating how seemingly innocuous prompts can enable user profiling. While the researchers use ChatGPT as an example, the same principles apply to all LLMs and their associated prompt histories.

LLM prompt histories are stored by default, even if chat history is disabled. This data can potentially be accessed in several ways: gaining unauthorized access to a user's device, viewing prompt histories online or on the LLM provider's servers, or intercepting prompts in transit. ChatGPT also allows users to export their conversations, further highlighting how accessible prompt data is.

NCC conducted an experiment using 50 prompts to explore whether ChatGPT could profile a user based on their prompt history. The results were surprisingly accurate, revealing the user's probable profession, interests, and potentially even location. This highlights the risks of user profiling and its implications, such as tailored phishing attacks, coercion, or deanonymization.
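To make the profiling risk concrete, the sketch below shows how an exported prompt history could be fed back to a model and turned into a profile. This is a hypothetical illustration, not NCC Group's methodology; it assumes the OpenAI Python SDK (v1.x), an API key in the OPENAI_API_KEY environment variable, and an arbitrary model name.

```python
# Hypothetical sketch of prompt-history profiling; not NCC Group's actual method.
# Assumes the OpenAI Python SDK (v1.x) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# A user's exported prompt history (e.g. from ChatGPT's conversation export).
prompt_history = [
    "How do I rotate AWS IAM access keys with boto3?",
    "Best running routes near Canary Wharf?",
    "Explain CVE scoring to a non-technical manager.",
]

# Ask the model to infer a profile from nothing but the prompts themselves.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system",
         "content": "Infer the likely profession, interests, and location "
                    "of the person who wrote these prompts."},
        {"role": "user", "content": "\n".join(prompt_history)},
    ],
)

print(response.choices[0].message.content)
```

Anyone who obtains a prompt history, whether via a compromised account, a stolen device, or an export file, could run something equivalent.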

These security concerns necessitate a proactive approach to LLM usage:

  1. User Identification: User data, including chat prompts, can be used to identify and profile individuals, even when nominally anonymized.

  2. Sensitive Information Exposure: Employees may inadvertently expose sensitive information in conversations with LLMs.

  3. Data Retention Policies: Organizations must establish clear policies on prompt data retention and deletion.

  4. Data Access Controls: Access to chat logs should be controlled and audited, limited to necessary personnel.

  5. Encryption: Data should be encrypted at rest and in transit.

  6. Data Leak Prevention: Employees should be trained to recognize and avoid sharing sensitive information in chat prompts, with DLP solutions as an additional safeguard (see the redaction sketch after this list).

  7. Third-Party Vendor Security: Assess the security protocols of third-party vendors providing LLM services.

  8. Data Sovereignty: Understand and comply with the data regulations that apply wherever prompt histories are stored.

  9. Incident Response Plan: Develop a clear plan for responding to prompt data breaches or incidents.
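
On the data leak prevention point above, a lightweight pre-submission filter illustrates the idea: scan each prompt for obvious secrets before it leaves the organization. The patterns and the redact helper below are assumptions for illustration, not a replacement for a vetted DLP product.

```python
# Minimal pre-submission redaction sketch (illustrative, not a full DLP solution).
import re

# Example patterns for obvious sensitive tokens; real deployments would use
# a vetted DLP rule set rather than this hypothetical list.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@example.com the report; key AKIAABCDEFGHIJKLMNOP."
    print(redact(raw))  # sensitive values are masked before the prompt is sent
```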

Organizations using external LLMs should consider disabling features like "Chat history & training" to mitigate exposure. For internal LLMs, security measures should protect against malicious insiders and external threats. Strong user account security, such as two-factor authentication, is essential to prevent unauthorized access and prompt history exports.
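
For internally hosted LLMs, encrypting stored prompt histories at rest (item 5 above) is one concrete measure against both insiders and external attackers who reach the log store. Below is a minimal sketch, assuming Python's cryptography package and its Fernet symmetric scheme; key management is deliberately simplified and would sit behind a KMS in practice.

```python
# Sketch: encrypting prompt logs at rest with Fernet (symmetric encryption).
# Assumes the 'cryptography' package; key handling is deliberately simplified.
from cryptography.fernet import Fernet

# In production the key would come from a KMS or HSM, not be generated inline.
key = Fernet.generate_key()
fernet = Fernet(key)

prompt_log = b"2024-05-01T10:02Z user=alice prompt='summarise Q1 incident report'"

# Encrypt before writing to disk so a stolen log file is unreadable without the key.
with open("prompt_log.enc", "wb") as f:
    f.write(fernet.encrypt(prompt_log))

# Authorized readers decrypt with the same key.
with open("prompt_log.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
assert restored == prompt_log
```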

As LLMs continue to shape our digital interactions, understanding and addressing these data security challenges is paramount to safeguarding user privacy and organizational integrity.
