Gen AI complements the validation process by comparing audience engagement with previous, similar campaign assets and recommending the ones likely to resonate better. It’s essential to establish guidelines that safeguard user data, ensuring that gen AI inputs and outputs are stored on the organization’s private cloud and not used to train publicly available solutions like ChatGPT or DALL-E. —Stephen Noble, business development director of ad tech, Star
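A minimal sketch of the kind of guardrail Noble describes, assuming a privately hosted, OpenAI-compatible endpoint; the base URL, model name and storage path are hypothetical placeholders, not any contributor's actual setup:

```python
from pathlib import Path
import datetime
import json

from openai import OpenAI  # pip install openai

# Route all requests to the organization's private deployment rather than a
# public endpoint; PRIVATE_BASE_URL and the model name are placeholders.
PRIVATE_BASE_URL = "https://genai.internal.example.com/v1"
AUDIT_DIR = Path("/secure/genai-audit")  # mount on the org's private cloud

client = OpenAI(base_url=PRIVATE_BASE_URL)  # assumes API key in environment

def generate_with_audit(prompt: str, model: str = "internal-campaign-model") -> str:
    """Call the private model and persist the input/output pair internally."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    # Store the record on the organization's own infrastructure so it never
    # feeds a public provider's training data.
    AUDIT_DIR.mkdir(parents=True, exist_ok=True)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "completion": text,
    }
    with open(AUDIT_DIR / "log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return text
```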
Social listening
Traditional first-party data collection is plagued by low response rates and fake responses from bots and survey farms. Gen AI is helping marketers write questions that are more likely to engage people and verify that responses are authentic. Many have built “wrappers,” or synthetic personas, around the technology to give surveys a more conversational user experience. And that’s just the front end: Marketers are also using gen AI to parse responses in record time, especially the open-ended questions, since LLMs can find common themes in consumers’ qualitative data and highlight those findings.
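As a minimal sketch of that open-ended analysis step, the snippet below asks an LLM to surface recurring themes in free-text survey answers; the model name, prompt wording and example answers are assumptions for illustration, not any specific vendor’s method:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_themes(open_ended_answers: list[str], max_themes: int = 5) -> str:
    """Ask an LLM to surface recurring themes in qualitative survey data."""
    numbered = "\n".join(f"{i + 1}. {a}" for i, a in enumerate(open_ended_answers))
    prompt = (
        f"Below are verbatim answers to an open-ended survey question.\n"
        f"Identify up to {max_themes} recurring themes, and for each theme "
        f"list the answer numbers that support it.\n\n{numbered}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

answers = [
    "I want to own a home someday, but saving a deposit feels impossible.",
    "My landlord raised rent twice this year; I don't know my rights.",
    "Hopeful that credit counseling will help me get back on track.",
]
print(extract_themes(answers))
```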
Most data sets LLMs are built on—publicly available internet chatter, for example—reflect the views of the relatively small number of people who discuss brands on social media, and they especially underrepresent people in marginalized groups. Gen AI can help find people in specific demographic groups for first-party data collection: For example, the National Foundation for Credit Counseling reached 2,000 low- and middle-income renters to learn about their experiences with housing insecurity and eviction. Only about a third of respondents felt they fully understood their rights and opportunities, but analysis of the open-ended questions revealed a strong undercurrent of hope and a commitment to achieving homeownership, particularly among communities of color. —Neil Dixit, CEO, and Adam Bai, chief strategy officer, Glimpse
Human impact
Measuring outcomes in AI often goes beyond technical metrics, particularly when we bring democratization, empathy and compassion into the conversation. In this context, it’s essential to assess human impact, such as how well AI applications are received and used by diverse sets of users. Tools like user satisfaction surveys and open channels for feedback play a critical role in understanding this dimension.
One potential solution is to build this framework on core ethical principles such as fairness, inclusivity, transparency and accountability. Those principles become the pillars of regular audits that evaluate the impact of AI systems. As part of our own audit, we assess whether our systems, especially those used in design or content generation, promote diverse perspectives and don’t inadvertently perpetuate stereotypes. We look at the output and judge whether it reflects the tapestry of experiences and ideas we aim to represent. —Joy Fennell, founder and CEO, The Future in Black
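One way to operationalize such an audit is a simple scoring rubric over sampled outputs. The sketch below is a generic illustration built around the four pillars named above, not Fennell’s actual process; the threshold, field names and example scores are all assumptions:

```python
from dataclasses import dataclass
from statistics import mean

# The four pillars named above; reviewers score each from 1 (poor) to 5 (strong).
PRINCIPLES = ("fairness", "inclusivity", "transparency", "accountability")
FLAG_THRESHOLD = 3.0  # hypothetical cutoff for escalating an output to review

@dataclass
class AuditRecord:
    output_id: str            # ID of the generated asset being audited
    scores: dict[str, float]  # reviewer scores per principle
    notes: str = ""           # e.g., observed stereotypes or representation gaps

    def average(self) -> float:
        return mean(self.scores[p] for p in PRINCIPLES)

    def flagged(self) -> bool:
        """Flag if any single principle, or the average, falls below threshold."""
        return (
            self.average() < FLAG_THRESHOLD
            or any(self.scores[p] < FLAG_THRESHOLD for p in PRINCIPLES)
        )

def audit_report(records: list[AuditRecord]) -> None:
    for r in records:
        status = "FLAG FOR REVIEW" if r.flagged() else "ok"
        print(f"{r.output_id}: avg={r.average():.1f} [{status}] {r.notes}")

audit_report([
    AuditRecord("campaign-hero-01",
                {"fairness": 4, "inclusivity": 5, "transparency": 4, "accountability": 4}),
    AuditRecord("campaign-hero-02",
                {"fairness": 2, "inclusivity": 2, "transparency": 4, "accountability": 3},
                notes="single demographic represented across all variants"),
])
```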