55% of generative AI inputs comprised personally identifiable data
Menlo Security released a report on generative AI's impact on security posture, analyzing how employees use generative AI and the risks that come with it. Within the last 30 days, 55% of data loss prevention (DLP) events involved users attempting to input personally identifiable information into generative AI sites. Confidential documents were the next most common category, accounting for 40% of attempts.
The report revealed security risks associated with the rapidly evolving generative AI landscape. Copy-and-paste inputs decreased by 6%, but such attempts still occur frequently, and attempted file uploads increased by 80%. These uses of generative AI pose a cybersecurity risk because of how quickly and easily sensitive information, such as customer lists or personally identifiable information, can be entered.
Many organizations are aware of these risks and are working to secure against data leakage. In the past six months, security policies for generative AI have increased by 26% among organizations. Many of these policies are established application by application rather than applied across all generative AI applications. By setting policies application by application, organizations risk leaving cracks in their defenses.
Key findings in the report suggest the need for group-level security measures. Attempted file inputs are 70% higher when generative AI is treated as a single category rather than as individual applications, indicating that per-application security policies are unreliable: a rule that blocks uploads to one AI tool does nothing to stop the same upload to a newer tool the policy has not yet named.
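To make the distinction concrete, here is a minimal Python sketch of category-level enforcement, the approach the report's findings point toward. Everything in it is an illustrative assumption: the domain names, the PII pattern, and the policy choices are hypothetical and are not drawn from the Menlo Security report or any specific DLP product.

```python
# Sketch of category-level policy enforcement for generative AI traffic,
# as opposed to per-application rules. All domains, patterns, and policy
# decisions below are illustrative assumptions, not from the report.

import re

# Hypothetical category: any destination in this set is treated as
# "generative AI", so one rule covers new apps without a new policy entry.
GENAI_DOMAINS = {
    "chat.example-ai.com",
    "assistant.example-llm.net",
    "copilot.example.io",
}

# Naive PII pattern (illustrative only): US SSN-like strings.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def evaluate(destination: str, action: str, payload: str) -> str:
    """Return 'allow' or 'block' for an outbound event.

    action is 'paste' or 'file_upload', the two input paths the
    report tracks for generative AI data loss.
    """
    if destination not in GENAI_DOMAINS:
        return "allow"   # outside the category, this policy does not apply
    if action == "file_upload":
        return "block"   # one rule covers every app in the category
    if SSN_RE.search(payload):
        return "block"   # PII detected in a copy/paste input
    return "allow"

if __name__ == "__main__":
    print(evaluate("chat.example-ai.com", "file_upload", ""))       # block
    print(evaluate("chat.example-ai.com", "paste", "123-45-6789"))  # block
    print(evaluate("chat.example-ai.com", "paste", "hello"))        # allow
```

The design point is the single membership check: a per-application policy would need one allow/block rule per tool and would silently miss any tool not yet listed, while the category check applies the same data-handling rules to every destination classified as generative AI.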
For more information, read the full report here.
https://www.securitymagazine.com/articles/100400-55-of-generative-ai-inputs-comprised-personally-identifiable-data