
Ex-Google Safety Lead: News Orgs Using AI Have to Own the Hallucinations


In a few short months, the idea of convincing news articles written entirely by computers has evolved from perceived absurdity into a reality that’s already confusing some readers. Now, writers, editors, and policymakers are scrambling to develop standards to maintain trust in a world where AI-generated text will increasingly appear scattered through news feeds.

Major tech publications like CNET have already been caught with their hand in the generative AI cookie jar and have had to issue corrections to articles written by ChatGPT-style chatbots, which are prone to factual errors. Other mainstream institutions, like Insider, are exploring the use of AI in news articles with notably more restraint, for now at least. On the more dystopian end of the spectrum, low-quality content farms are already using chatbots to churn out news stories, some of which contain potentially dangerous factual falsehoods. These efforts are admittedly crude, but that could quickly change as the technology matures.

Issues around AI transparency and accountability are among the most difficult challenges occupying the mind of Arjun Narayan, the Head of Trust and Safety for SmartNews, a news discovery app available in more than 150 countries that uses a tailored recommendation algorithm with a stated goal of “delivering the world’s quality information to the people who need it.” Prior to SmartNews, Narayan worked as a Trust and Safety Lead at ByteDance and Google. In some ways, the seemingly sudden challenges posed by AI news generators today result from a gradual buildup of recommendation algorithms and other AI products Narayan has helped oversee for more than twenty years. Narayan spoke with Gizmodo about the complexity of the current moment, how news organizations should approach AI content in ways that can build and nurture readers’ trust, and what to expect in the uncertain near future of generative AI.

This interview has been edited for length and clarity.

What do you see as some of the biggest unforeseen challenges posed by generative AI from a trust and safety perspective?

There are a couple of risks. The first one is around making sure that AI systems are trained correctly and trained with the right ground truth. It’s harder for us to work backward and try to understand why certain decisions came out the way they did. It’s extremely important to carefully calibrate and curate whatever data point is going in to train the AI system.


When an AI makes a decision you can attribute some logic to it but in most cases it is a bit of a black box. It’s important to recognize that AI can come up with things and make up things that aren’t true or don’t even exist. The industry term is “hallucination.” The right thing to do is say, “hey, I don’t have enough data, I don’t know.”

Then there are the implications for society. As generative AI gets deployed in more industry sectors there will be disruption. We have to be asking ourselves if we have the right social and economic order to meet that kind of technological disruption. What happens to people who are displaced and have no jobs? What might once have taken 30 or 40 years to go mainstream now takes five or ten. That doesn’t give governments or regulators much time to prepare, or policymakers much time to put guardrails in place. These are things governments and civil society all need to think through.

What are some of the dangers or challenges you see with recent efforts by news organizations to generate content using AI?

It’s important to understand that it can be hard to detect which stories are written fully by AI and which aren’t. That distinction is fading. If I train an AI model to learn how Mack writes his editorials, maybe the next one the AI generates is very much in Mack’s style. I don’t think we are there yet but it might very well be the future. So then there is a question about journalistic ethics. Is that fair? Who has that copyright, who owns that IP?

We need to have some sort of first principles. I personally believe there is nothing wrong with AI generating an article but it is important to be transparent to the user that this content was generated by AI. It’s important for us to indicate either in a byline or in a disclosure that content was either partially or fully generated by AI. As long as it meets your quality standard or editorial standard, why not?

Another first principle: there are plenty of times when AI hallucinates or when content coming out may have factual inaccuracies. I think it is important for media and publications or even news aggregators to understand that you need an editorial team or a standards team, whatever you want to call it, that is proofreading whatever comes out of that AI system. Check it for accuracy, check it for political slants. It still needs human oversight. It needs checking and curation for editorial standards and values. As long as these first principles are being met I think we have a way forward.

What do you do, though, when an AI generates a story and injects some opinion or analysis? How would a reader discern where that opinion is coming from if you can’t trace the information back to a dataset?

Typically if you are the human author and an AI is writing the story, the human is still considered the author. Think of it like an assembly line. So there is a Toyota assembly line where robots are assembling a car. If the final product has a defective airbag or has a faulty steering wheel, Toyota still takes ownership of that irrespective of the fact that a robot made that airbag. When it comes to the final output, it is the news publication that’s responsible. You are putting your name on it. So when it comes to authorship or political slant, whatever opinion that AI model gives you, you are still rubber stamping it.

We’re still early on here but there are already reports of content farms using AI models, often very lazily, to churn out low-quality or even misleading content to generate ad revenue. Even if some publications agree to be transparent, is there a risk that actions like these could inevitably reduce trust in news overall?

As AI advances there are certain ways we could perhaps detect if something was AI written or not but it’s still very fledgling. It’s not highly accurate and it’s not very effective. This is where the trust and safety industry needs to catch up on how we detect synthetic media versus non-synthetic media. For videos, there are some ways to detect deepfakes but the degrees of accuracy differ. I think detection technology will probably catch up as AI advances but this is an area that requires more investment and more exploration.


Do you think the acceleration of AI could encourage social media companies to rely even more on AI for content moderation? Will there always be a role for the human content moderator in the future?

For each issue, such as hate speech, misinformation, or harassment, we usually have models that work hand in glove with human moderators. There is a high order of accuracy for some of the more mature issue areas: hate speech in text, for example. To a fair degree, AI is able to catch that as it gets published or as somebody is typing it.

That degree of accuracy is not the same for all issue areas though. So we might have a fairly mature model for hate speech since it has been in existence for 100 years but maybe for health misinformation or Covid misinformation, there may need to be more AI training. For now, I can safely say we will still need a lot of human context. The models are not there yet. It will still be humans in the loop and it will still be a human-machine learning continuum in the trust and safety space. Technology is always playing catch up to threat actors.

What do you make of the major tech companies that have laid off significant portions of their trust and safety teams in recent months under the justification that they were dispensable?

It concerns me. Not just trust and safety but also AI ethics teams. I feel like tech companies are concentric circles. Engineering is the innermost circle, whereas HR, recruiting, AI ethics, and trust and safety are the outer circles that get let go. As we disinvest, are we waiting for shit to hit the fan? Would it then be too late to reinvest or course correct?

I’m happy to be proven wrong but I’m generally concerned. We need more people who are thinking through these steps and giving it the dedicated headspace to mitigate risks. Otherwise, society as we know it, the free world as we know it, is going to be at considerable risk. I think there needs to be more investment in trust and safety honestly.

Geoffrey Hinton, who some have called the Godfather of AI, recently came out and publicly said he regrets his work on AI and fears we could be rapidly approaching a period where it’s difficult to discern what is true on the internet. What do you think of his comments?

He [Hinton] is a legend in this space. If anyone would know, it’s him. And what he’s saying rings true.

What are some of the most promising use cases for the technology that you are excited about?

I lost my dad recently to Parkinson’s. He fought it for 13 years. When I look at Parkinson’s and Alzheimer’s, a lot of these diseases are not new, but there isn’t enough research and investment going into them. Imagine if you had AI doing that research in place of a human researcher, or if AI could help advance some of our thinking. Wouldn’t that be fantastic? I feel like that’s where technology can make a huge difference in uplifting our lives.

A few years back there was a universal declaration that we will not clone human organs even though the technology is there. There’s a reason for that. If that technology were to come forward it would raise all kinds of ethical concerns. You would have people in third-world countries harvested for human organs. So I think it is extremely important for policymakers to think about how this tech can be used, what sectors should deploy it, and what sectors should be out of reach. It’s not for private companies to decide. This is where governments should do the thinking.


On the balance of optimistic or pessimistic, how do you feel about the current AI landscape?

I’m a glass-half-full person. I’m feeling optimistic but let me tell you this. I have a seven-year-old daughter and I often ask myself what sort of jobs she will be doing. In 20 years, jobs, as we know them today, will change fundamentally. We’re entering an unknown territory. I’m also excited and cautiously optimistic.


Source: Gizmodo

