With generative artificial intelligence tools upending the media and marketing industry, Adweek reporters felt it prudent to ask one of them, ChatGPT, whether it was compliant with GDPR, the 5-year-old data protection and privacy law in the European Union.
The response: “As an AI language model, I do not collect or process personal data, nor do I have access to any data storage. Therefore, I do not fall under the scope of the General Data Protection Regulation (GDPR) and do not have to comply with its regulations. However, the companies or organizations that use my services to process personal data are responsible for ensuring GDPR compliance and should implement appropriate measures to protect the personal data they collect and process.”
A caveat: AI chatbots can be volatile and tend to respond differently to the same question, making it nearly impossible to confirm the veracity of their answers. Still, the onus is on the companies using these tools to be compliant. And here, there are many gray areas.
Beyond GDPR, futurists and industry experts flag potential regulatory concerns around misinformation, antitrust and copyright, partly because of AI’s open-source nature and how the tools are trained.
“AI has become an open source and many companies are building models on top of other models,” said Michael Liu, head of innovation for media company Carat Interactive. “But it’s going to be hard to regulate that.”
As more brands and agencies tinker with generative AI, 70% of enterprise CMOs will identify accountability for ethical AI in marketing as their top concern by 2025, per Gartner. Meanwhile, regulators are getting more muscular about companies’ data governance and anti-competitive practices.
“Questions [are rising] about what data OpenAI is storing versus what data is it spitting out,” said Gary Kibel, a privacy and data security lawyer at Davis+Gilbert. “If it’s storing any information, that creates bigger issues.”
Attribution and trademark infringement
Brands are already using ChatGPT for creative idea generation in numerous cases.
“I could envision a world where two competitive brands are using OpenAI to write taglines for their new products,” said Stephanie Bunnell, svp of marketing at Aki Technologies, an Inmar Intelligence company. “I can’t personally verify that those ChatGPT outputs wouldn’t be exactly the same.”
Similarly, brand agency Tanj created an AI naming assistant, Chat Namer, using ChatGPT’s language model to churn out brand names. But the tool can’t yet screen those names for trademark conflicts.
“Based on what ChatGPT knows, it’s spitting out the names of brands that already exist in the world,” said Scott Milano, the agency’s managing director. This could give rise to trademark infringement lawsuits.
Meanwhile, Carat Interactive is one agency paying attention to ChatGPT’s lack of content attribution.
“If you were to ask ChatGPT something, you can’t know its source data…there’s a gazillion source data,” said Liu.
Disinformation researchers have flagged concerns that ChatGPT will be able to easily and quickly spread false information. For marketers who rely on this AI tool for copy output or to build a consumer-facing tool based on its API, the threat of disinformation can have damaging reputational impacts on their brand.
Additionally, Bunnell said that if a brand uses an open-source chatbot versus its own closed, verified chatscript, it may increase the odds of a brand-unsafe dialogue.
Publishers’ antitrust crusade
To date, OpenAI has fed ChatGPT around 300 billion words scraped from across the internet, including articles, books, websites and posts.
Much has been said about how AI-enhanced search engines can either scrape—and not compensate—information from various publishers, or outperform them to the point where there is less need for certain intent-based media brands.
“There’s going to be some regulation that’s going to attempt to address this,” said Liu, adding that ChatGPT’s emergence could be seen as anticompetitive to some publishers.
GDPR data minimization at odds with data scraping
European regulators are considering placing ChatGPT in a high-risk category under the EU’s proposed AI Act, which has yet to become law. That designation reflects concerns about the tool’s safety. As a result, high-risk AI systems would be subject to strict obligations, such as risk assessments, before they enter the market.
As for GDPR, it’s unclear whether large language models like ChatGPT can be developed in compliance with the law, whose principles include data minimization and the right to be forgotten.
“In theory, with a precise enough dataset, you could meet the law’s requirements,” said Robert Bateman, head of content at GRC World Forums, a company that runs events on governance, risk, and compliance. “But it’s hard to see how this would work when scraping data at such a large scale.”
Still, if someone asks OpenAI to delete their personal data and is unhappy with the response, that could trigger an investigation that draws attention to wider compliance issues.
“It’s all speculation, but there are data protection authorities across Europe who are not averse to making highly consequential decisions,” said Bateman.
Source: Adweek, “Misinformation, Antitrust and Copyright Loom as Regulatory Hurdles for AI and ChatGPT”