Will AI Be A Benevolent Helper Or An Evil Overlord for Scripps?


For those who haven’t kept up with technology and still think the earth is flat, artificial intelligence is defined as “the ability of machines to exhibit intelligence, especially computer systems. AI can automate tasks, analyze data, and make decisions.”

In November, Scripps named several newsers to head up a company AI team in an effort to get ahead of the emerging technology.

One of those hires, Christina Hartman, was named vice president of emerging technology operations, while her counterpart Kerry Oslund was named vice president of AI strategy. Hartman came from the news side. Both report to the company’s chief transformation officer, Laura Tomlin.

The move signaled the company’s shift in attention toward AI and what it may hold for local stations.

“Our goal is to quickly and responsibly transform our organization into a nimble environment that fosters innovation at all levels, inspiring revenue growth, efficient workflows and new product development,” Tomlin said in a press release on the move. “AI will play a critical role in reshaping our operating systems and company culture.”

This week, I got the chance to hang out with Hartman on Zoom for a bit and ask about what Scripps meant by the press release on its AI strategy. Will it be friend or foe? Below is the transcript of the conversation.

TVSPY: I wanted to get a little bit of background from you since you came from news. How did you go from news to AI?

Hartman: My prior role was overseeing standards. So, I issued our guidance around language choices and overall ethics policies with regard to newsgathering. And so, when ChatGPT hit the scene, I was looking at it from a standards perspective, like the team needs guidance here.

But it felt a lot bigger than the sort of guidance that we typically deal with solely within standards. And so, at the time, I reached out to our chief ethics officer and said, I really think we need to be thinking about this much more deeply. And so I proposed governance for the organization and an overall framework, thinking about trust implications, but also data privacy, security and bias.

So we pulled in representatives and specialists in those areas from across the organization. But I also felt that in order to lead governance on AI, I needed to become a practitioner so that I could understand the factual accuracy, and I became a practitioner.

And so it sort of became a little bit of an assistant to me in thinking about strategy. If I were looking at a script, I’d pop it in as like a, “what do you think of this?” or basic copy editing. I became seen as the AI person. But I am no expert.
