What’s AI Actually Good for Right Now?

‘Explainability’ 

Explainable AI is the laudable aim of creating models that humans can interpret easily. Without explainability, oversight of models is challenging and ethical governance is almost impossible.

Consider a situation where understanding the rationale behind a decision may change the decision itself. To take an extreme example, if you fed a model the details of every law in the U.K. alongside every past case and its associated ruling, it could act as a more consistent and efficient judge in court rulings. But without a clear understanding of the rationale behind them, those rulings are not acceptable. Ultimately, explainability is a core tenet of accountability: If you can’t explain why a decision is right, how can you be held accountable for that decision? 

It’s important to consider how much performance degradation you are willing to accept to make a model explainable. You only know a model works by checking its results against pre-agreed criteria, which requires a sufficient volume of assets to sense-check the output and can be time-consuming to the point that it negates the efficiency savings.

We can learn from areas where AI has been successfully implemented for automation over time, like the autonomous driving industry, which uses clearly categorized levels of automation: broadly, Levels 0-5, from “no automation” to “steering wheel optional.” If we think about gen AI in marketing along a similar maturity curve, we can start to shift tasks without undue risk—James Addlestone, chief strategy officer, Journey Further 
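
To make the maturity-curve idea concrete, here is a minimal sketch of how marketing tasks might be gated by an autonomy level modeled loosely on the driving industry’s Levels 0-5. The task names, level assignments and review workflows are illustrative assumptions, not the framework Addlestone describes.

```python
# Illustrative sketch: gating gen AI marketing tasks by an autonomy level,
# loosely modeled on the 0-5 scale used for autonomous driving
# ("no automation" through "steering wheel optional").
# Task names and level assignments are hypothetical.
from dataclasses import dataclass


@dataclass
class MarketingTask:
    name: str
    autonomy_level: int  # 0 = human only ... 5 = fully automated


TASKS = [
    MarketingTask("keyword brainstorming", 5),
    MarketingTask("ad copy variations for A/B tests", 3),
    MarketingTask("in-product messaging to existing customers", 1),
    MarketingTask("regulated claims (legal, medical, financial)", 0),
]


def workflow(task: MarketingTask) -> str:
    """Map an autonomy level to the review workflow it implies."""
    if task.autonomy_level >= 4:
        return "auto-publish, spot-check after the fact"
    if task.autonomy_level >= 1:
        return "AI drafts, human approves before publishing"
    return "human-authored only"


if __name__ == "__main__":
    for task in TASKS:
        print(f"{task.name}: {workflow(task)}")
```

The point is not the specific thresholds but the shape of the approach: tasks only move up the scale as the model’s explainability and track record justify it.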

Automated testing 

Game-changers like ChatGPT can reduce the content production effort for new campaigns or variations by suggesting engagement strategies, refining drafts or even authoring entire campaigns from outlines. However, it is one thing to hand the keys over to an AI to optimize ads that render on a third-party website; it is quite another to relinquish content control for surfaces inside your own product or in messages to existing customers. This will remain true until gen AI is provably able to respect brand safety, maintain brand voice and stop hallucinating (a trait that is excellent for creative inspiration but a liability if left unsupervised). 

Efforts to address this challenge are underway, including WPP’s collaboration on a hybrid engine that harnesses Nvidia’s Omniverse technology to render 3D models of real products within settings and backgrounds generated automatically with gen AI. We also recently launched a QA tool that aims to streamline new content production for marketers, using prompt engineering on top of OpenAI’s GPT-4 model to check for incorrect grammar, inappropriate tone, offensive language or accidental content in both human-written and gen AI-generated copy. For a global audience, it also flags cultural insensitivity and unintended religious connotations. —Bill Magnuson, CEO and co-founder, Braze 
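
For readers curious what “prompt engineering on top of GPT-4” for content QA can look like in practice, here is a minimal, hedged sketch using the OpenAI Python SDK. The prompt wording, model choice and pass/fail output convention are assumptions for illustration; Braze’s actual tool is not public and will differ.

```python
# Minimal sketch of a prompt-engineered QA pass over marketing copy.
# The system prompt, model choice and output convention are assumptions,
# not Braze's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QA_SYSTEM_PROMPT = (
    "You are a marketing QA reviewer. Check the message for incorrect grammar, "
    "inappropriate tone, offensive language, cultural insensitivity and "
    "unintended religious connotations. Reply with 'PASS' if the message is "
    "clean, otherwise a short bulleted list of issues."
)


def qa_check(message: str, brand_voice: str = "friendly and concise") -> str:
    """Run a single QA pass and return the reviewer model's verdict."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep the review as repeatable as possible
        messages=[
            {"role": "system", "content": QA_SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"Brand voice: {brand_voice}\n\nMessage:\n{message}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(qa_check("Dont miss out! Are sale ends tonight!!!"))
```

The same pattern applies whether the copy was written by a human or drafted by gen AI: the check runs before the message reaches a send pipeline, which is the control point the quote above describes.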
