Navigating the Future of Trust, Risk, and Opportunity in the Age of AI
For marketers, AI promises a leap in productivity—but only when guided by a clear strategy and human oversight. The question isn’t whether to use AI, but how to use it responsibly and effectively.

That question was front and center at the Digital Trust Summit 2025, an event founded and produced by Global Data Innovation and its CEO, Dominique Shelton Leipzig. Tech innovators, CEOs, government leaders, and regulators gathered in Washington, DC to tackle urgent challenges around AI governance, cybersecurity, regulation, and digital risk. As organizations race to adopt AI, the complexities of digital transformation and data protection are only growing.

Leipzig reminded the audience, “But trust will not happen all by itself. We must program trust into AI now.” With that urgency in mind, here are some key highlights from the event.

Building trust in AI requires leadership, culture, and curiosity

(L-R) CBS News’ Lesley Stahl and Georgetown University Law Center’s Neel Sukhatme

Trust in AI doesn’t happen by default—it must be deliberately designed into systems from the start. That means embedding fairness, transparency, and ethical considerations throughout development. Clear frameworks are essential for data governance, privacy, and intellectual property protection.

Equally important is fostering a culture where curiosity is encouraged and failure is seen as a learning opportunity. Leaders must distinguish between flawed AI outputs and underlying process issues while empowering teams to ask hard questions and think differently. Trust, after all, isn’t just a technical problem—it’s a human one. The organizations that thrive will be those that identify the AI initiatives that energize them and explore their broader implications with diverse input.

Technology governance requires responsible leadership and vigilance

Strong governance in the digital era begins with aligning both human and AI decision-making to a company’s core values. Some organizations are adopting triage-style systems to categorize risks and gauge the health of decisions—from low-risk “green” zones to critical “red” alerts.

As emerging technologies like AI and blockchain reshape the landscape, some organizations are struggling to balance innovation with safety. The learning curve is steep, and many aren’t fully equipped to keep up—but with the right mindset and follow-through, meaningful progress is still possible.

AI governance requires both strong oversight and a proactive approach to risk

(L-R) Chief Executive Group’s Dan Bigman, ADWEEK’s Will Lee, and Emovid’s Victor Cho

Brands don’t need to be staffed with AI experts, but they do need to ask the right questions. Establishing clear frameworks for AI usage—particularly around privacy, security, and ethics—is critical. The risks are real, from transparency lapses to outright misuse.

The EU’s AI Act, which bans predictive policing and social scoring, offers one potential roadmap. Proactive risk assessments, transparency, and human oversight can build public trust and minimize regulatory and reputational fallout. Oversight shouldn’t stifle innovation; it should enable it safely.

The future of AI and digital trust

As AI continues to evolve, the path forward requires ongoing collaboration, adaptation, and a commitment to strong governance. The challenges presented by its growth will test the resilience of organizations, from boardrooms to management teams. Companies must prioritize leadership, structure, and strategy to navigate the complexities ahead.

As Leipzig put it, “This moment isn’t just a test of leadership. It’s a test of legacy.”

Those who invest in continuous learning, ethical practices, and proactive planning will not only keep pace with AI but also help define a future worth trusting.