AI-enabled data collection and the regulatory landscape
As the world embarks on the Fourth Industrial Revolution, one major contributor is Artificial Intelligence (AI), specifically its ability to collect, process and analyze massive quantities of data at incredible speed. Already, innovative enterprises are using AI tools and methodologies such as web crawling, social media monitoring and digital analytics to anticipate emerging trends, deliver highly personalized customer service and fine-tune business operations for greater productivity. While it is undeniable that AI has elevated the value of data, the unprecedented scale and sophistication of these collection activities have given rise to a new and growing regulatory landscape.
The risks of using personally identifiable information
AI can analyze and utilize customers’ personally identifiable information (PII) in ways that could infringe on an individual’s privacy. A 2023 Pew Research Center poll on Americans and data privacy reveals that consumers are wary of the implications of AI-driven data collection. In particular, 81% of respondents who expressed an awareness of AI said that companies using AI to collect data would use it in ways that make them uncomfortable, and 80% feared companies would use the collected data contrary to its original intent. Nevertheless, 61% admitted that personal data collected through AI could make their lives easier.
Concerns about AI and PII fall into two categories: input concerns and output concerns. The former involves training or running AI on large datasets and tools that contain PII; if the proper guardrails are not in place, sensitive information fed into a system risks resurfacing in AI-generated outputs. And as data becomes more valuable to enterprises, it also becomes more attractive to cyber criminals who profit from stealing it. The latter concern deals with AI using PII to arrive at automated conclusions, such as denying someone a loan based on a biased algorithm. Although this concern does not touch on data privacy per se, it does involve the misuse of PII.
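To make the guardrail idea concrete, below is a minimal sketch of an output-side filter: a regex-based redaction step that masks obvious PII patterns (emails, U.S. phone numbers, Social Security numbers) in AI-generated text before it is returned to a user. The patterns and function names are illustrative assumptions, not a production-grade detector; real deployments typically pair pattern matching with trained PII classifiers.

import re

# Illustrative patterns only; real systems need broader detection
# (named-entity models, validation, locale handling).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Mask common PII patterns in model output before it leaves the system."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    raw_output = "Contact Jane at jane.doe@example.com or 555-867-5309."
    print(redact_pii(raw_output))
    # Contact Jane at [REDACTED EMAIL] or [REDACTED US_PHONE].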
Prominent data privacy regulations in the European Union and United States
Unsurprisingly, these concerns about AI and enhanced data collection practices have increased regulatory scrutiny and compliance pressure across the globe. In the European Union, legislation is already in place in the form of the General Data Protection Regulation (GDPR) as well as the AI Act, which seeks to establish a common regulatory and legal framework for AI. Primarily, the GDPR focuses on protecting personal data, defining it as “any information relating to an identified or identifiable natural person.” Although the GDPR does not address AI explicitly, it contains many provisions relevant to AI.
In the United States, the National Institute of Standards and Technology issued the AI Risk Management Framework (AI RMF) last year, offering guidance to companies and other entities on designing, deploying and using AI systems. This framework is voluntary, and there are no penalties for noncompliance. However, there are privacy laws at the state level that carry penalties, including the Colorado Privacy Act, the Texas Data Privacy and Security Act and the California Consumer Privacy Act (CCPA). The CCPA, in particular, was later amended by the California Privacy Rights Act to speak directly to AI with rules about “automated decision-making.”
Regulations for protected health information
The main data organizations will gather through AI is PII, but they could inadvertently collect more sensitive data categories like protected health information (PHI) and biometric data. PHI encompasses health-related information linked to an individual or disclosed during healthcare services. For example, health data is collected via period tracking apps, connected devices and genetics websites. AI-enabled collection tools could also gather health data by analyzing customers’ social media posts, noting if they feel sick, depressed or happy. Likewise, AI tools like chatbots and facial recognition platforms collect and store PHI and biometric data.
As with PII, there are consequences for accidentally leaking PHI or failing to protect it from bad actors. As such, the collection of PHI carries even stricter regulations, notably the United States Health Insurance Portability and Accountability Act (HIPAA). Regulations such as the Illinois Biometric Information Privacy Act (BIPA) aim to curb the indiscriminate collection of biometric data, including biological attributes like fingerprints or facial recognition patterns.
How companies can comply with these laws
Companies engaging in AI-powered data collection methods face a complex and ever-evolving regulatory landscape, where even unintentional noncompliance could result in serious fines and reputational damage. To effectively navigate these sensitivities, enterprises must build a comprehensive data governance framework, establish transparent and concise privacy policies and invest in thorough data mapping and inventory tools.
Businesses should ask for explicit consent to use sensitive data and set up rigorous data processing and storage protocols, both to comply with regulators and to help ease customer concerns. Organizations must also anonymize data where possible and restrict unwarranted data collection. Moreover, companies need to keep a close watch on the changing regulatory environment, noting the data protection laws relevant to their regions of operation.
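To illustrate the anonymization point, here is a minimal sketch of keyed pseudonymization, one common approach in which direct identifiers are replaced with salted HMAC digests before records reach analytics or AI pipelines. The field names and key handling are assumptions made for the example; a real deployment would pull the key from a secrets manager, and under the GDPR pseudonymized data generally still counts as personal data.

import hmac
import hashlib

# Hypothetical secret; in practice this comes from a key-management service.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

# Fields treated as direct identifiers in this illustrative schema.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with stable HMAC-SHA256 tokens.

    The same input always maps to the same token, so records can still be
    joined or deduplicated without exposing the raw identifier.
    """
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS and value is not None:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # truncated token for readability
        else:
            out[field] = value
    return out

if __name__ == "__main__":
    customer = {"name": "Jane Doe", "email": "jane@example.com", "plan": "premium"}
    print(pseudonymize(customer))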
Using AI to improve data privacy
Amid efforts to protect customers’ privacy, enterprises can find an unlikely ally in AI. Beyond harvesting and extracting insights from mountains of data, businesses can use AI algorithms to monitor their own data handling practices. Likewise, they can equip customers with AI-enabled tools that make it easy to request access to or erasure of their data. AI-powered detection systems can also spot potential cybersecurity threats early enough for proactive prevention. However businesses opt to use AI, maintaining compliance with the diverse and evolving regulatory landscape will be indispensable.
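As a loose illustration of monitoring data handling, the sketch below flags accounts whose daily volume of record accesses spikes far above their own historical baseline. It is a simple statistical stand-in for the AI-powered detection systems described above; the log format, field names and threshold are assumptions made for the example.

from statistics import mean, stdev

def flag_unusual_access(access_counts: dict[str, list[int]], z_threshold: float = 3.0) -> list[str]:
    """Flag users whose latest daily record-access count is a statistical outlier.

    access_counts maps a user ID to daily access counts, oldest first;
    the final entry is treated as "today".
    """
    flagged = []
    for user, counts in access_counts.items():
        history, today = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(history), stdev(history)
        sigma = sigma or 1.0  # avoid division by zero on flat baselines
        if (today - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

if __name__ == "__main__":
    logs = {
        "analyst_a": [12, 9, 11, 10, 13, 240],  # sudden spike in records touched
        "analyst_b": [30, 28, 33, 31, 29, 32],
    }
    print(flag_unusual_access(logs))  # ['analyst_a']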
https://www.securitymagazine.com/articles/100628-ai-enabled-data-collection-and-the-regulatory-landscape