
Australian journalism industry grapples with AI amid consumer distrust

Audiences find it difficult to trust AI-generated news, fearing it lacks human insight and the ability to weigh ethics.

When News Corp journalist Brittany Carlson is trying to understand a complex topic, such as a budget surplus, she turns to artificial intelligence (AI) to help explain it, so she can be sure she communicates it correctly to her audience.

“You put it into a program like ChatGPT or equivalent, and you say… ‘explain budget surplus to me like I’m ten years old’. It would give you a really great explanation, actually, of these kind of mathematical equations,” she tells upstart.

“Which is great for me if I don’t understand something, and trying to relay it to an audience who also probably doesn’t understand much about it.”

Carlson would then check the information with a colleague to ensure what she was publishing was correct and easy for her readers to understand, before moving on to the next task.

AI’s potential to increase the efficiency and volume of news output is the primary motivation behind its integration into news media organisations. Generative AI can quickly and accurately transcribe interviews, suggest headlines and even generate entire news stories. An Associated Press survey of 300 journalists found that 70 percent had used AI in some way.

However, using AI comes with an array of advantages and challenges. And for news consumers, the data suggests much of the uncertainty stems from not knowing where and how the technology is currently being used by media organisations.

How is AI already being used in news media?

News Corp is one of many news and media organisations beginning to use artificial intelligence to enhance their workflows. Carlson says there was some hesitancy when AI was first introduced to the company, but it is now part of day-to-day work.

“We were sent emails and we had lots of meetings about what this was going to look like going forward, and the company was developing a policy on the boundaries of where it was to be used, where it was not to be used,” she says.

Beyond its use by individual journalists, News Corp said last year that it was publishing 3,000 articles a week using generative AI. These stories included weather, traffic and finance reports.

AI is still limited in the kind of news it can produce. Dr Rob Nicholls, a researcher in the regulation of technology, media and communications at the University of Sydney, says that while it can be used to write short weather stories and sports reports, it would lack the analysis, context and emotions required for most forms of journalism.

Human perspectives are also still needed to preserve the traditional watchdog function of the press, he says.

“Public interest journalism relies on there being journalists to hold politicians [and] decision-makers to account, and that’s important,” he tells upstart.

“No large language model is going to be able to get these nuances, even if it’s fine-tuned for journalism.”

A large language model (LLM) is a form of AI that has been trained on a diverse set of data so it can generate text. LLMs can write in a convincingly human way. Like humans, they can also hold inherent biases, fabricate answers and generate factually incorrect information.

“Most large language models were trained on the internet, specifically Reddit and Wikipedia… so there is a risk that the material on which the large language model was trained will contain misinformation,” Nicholls says.
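In practice, “using” an LLM usually means sending it a prompt through a chat interface or an API. The snippet below is a minimal sketch only, assuming the official OpenAI Python client and an API key set in the environment; the model name and prompt are illustrative, echoing Carlson’s “explain it like I’m ten” approach, not any outlet’s actual workflow.

```python
# Minimal sketch of prompting a large language model, assuming the
# official OpenAI Python client (pip install openai) and an API key
# stored in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works here
    messages=[
        {
            "role": "user",
            "content": "Explain a budget surplus to me like I'm ten years old.",
        }
    ],
)

# The output is fluent text, but as Nicholls notes it can carry bias or
# factual errors, so a journalist would still verify it before publishing.
print(response.choices[0].message.content)
```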

How do people feel about the integration of AI into news media?

This year’s Digital News Report found that Australians are less trusting of AI-generated news than people in other parts of the world.

The type of language AI uses can also give it away or arouse suspicion in readers. TJ Thomson, senior lecturer in visual journalism at RMIT, says people find it difficult to “pinpoint” where they’ve seen AI in journalism, but they do know something is “off”.

“People have said, ‘I can’t really recall a particular example or a particular outlet where I’ve seen it done, but I’m suspicious’,” he tells upstart. “’The words I’m reading, they seem clunky or they seem too perfect or they seem generic or they seem not contextually accurate for whatever reason’.”

Trust in the media is already on the decline. Carlson says that AI has just added to that.

“It’s something that’s already shrouded in a lot of distrust and misinformation or fact checking, and so when you mix in AI with that, I think it’s very, very easy for people to be [wary of it],” she says.

The Digital News Report findings did suggest Australians would be more accepting of AI being used to produce certain kinds of news. Sport, lifestyle and weather coverage, for example, would be more acceptable than politics, crime and other sensitive topics that could affect human lives.

Thomson says that audiences will ultimately be the ones to decide on the ethics of AI use in journalism. As it stands, he says there is not enough information about “how comfortable or uncomfortable audiences are with different applications of AI”.

The issue of AI images

Another application is, of course, AI-generated images. Thomson says people regard AI-generated images, in particular, as “soulless”, lacking creativity and looking “similar, shiny and plasticky”.

This technology has already been put to use in Australia. Sydney’s Daily Telegraph is using non-realistic generated images in its opinion section, a space usually occupied by cartoons, and attributes the images to “ChatGPT”. Even established figures such as Melbourne Lord Mayor Nicholas Reece have released images created by AI. In January, an image of Victorian MP Georgie Purcell, published as part of a Nine News story, was edited using Photoshop’s generative AI features, which altered her body and made her dress more revealing. Nine News claimed it was an “automation by Photoshop”. However, Adobe pointed out that the edits required “human intervention and approval”.

AI-generated or edited images become more dangerous when they are realistic. Nicholls says the release of images and news stories created and altered with AI is a challenge faced by news outlets.

“These issues are quite problematic now in a news context,” he says. “Images, video, they become the sort of obvious ones but words are also capable of having that level of mis and disinformation.”

How can AI use be regulated?  

Experts say regulations need to be established to preserve the integrity of the media industry and the faith that audiences and industry professionals place in it.

The Media, Entertainment and Arts Alliance (MEAA), the union for journalists and other creative professionals, recently signed a deal with the publishing arm of Nine that included a commitment to develop standards for AI use in the workplace. News Corp also has its own set of guidelines. However, Thomson says current internal policies at news outlets are typically “generic”.

“Saying we want to use AI in ways that are transparent or that are respectful or ethical, don’t violate privacy, that kind of thing,” he says. “But they don’t get down to the weeds.”

Thomson says that news outlets need to be transparent when they use AI. He suggests that allowing people to hover over an image, video or article for more information on how it was created would be a simple solution. However, when it comes to government regulation of AI, Thomson believes Australia is at the “back of the pack”.

“If you look at the European Union, they’ve been working on AI protocols and guidelines and regulations for five or so years already,” he says. “Australia’s trying to catch up.”

Beyond the eight voluntary “AI ethics principles”, there is no significant regulation of AI in Australia, although proposed mandatory guardrails have been tabled by the federal government.

The MEAA is campaigning for a federal “AI act”, similar to regulations in Europe and the US, that would force companies to disclose what sources they have used to train AI. It also wants to ensure that workers have more control over how AI is used, and that they are fairly compensated when their work is used to train models.

How worried are journalists?

A survey conducted earlier this year by the MEAA found 74 percent of members surveyed are concerned about AI’s ability to “spread misinformation either deliberately or inadvertently”.

“Journalists are concerned that the publication or broadcast of AI-generated content will undermine trust and be a breach of faith with audiences who expect journalism to be a robust and responsible search for the truth,” an MEAA spokesperson tells upstart.

The survey found 72 percent of members are worried about the theft of intellectual or creative work, while 59 percent are concerned about job losses. However, Carlson doesn’t believe AI will cause job losses in journalism, because it lacks the human touch that only a journalist can offer.

“I think at its core, it really is a new developing tool and at this point, I don’t think it can ever really replace a human completely,” she says.

“It doesn’t have a heart. It doesn’t have a mind. Well, it doesn’t have a human mind. So those two things, I think, are really important in journalism.”



Xavier Rodrigues is a third-year Bachelor of Media and Communication (Journalism/Sports Media) student at La Trobe University. You can follow him on Twitter @xt_rod

Ella Zammit is a second-year Bachelor of Media and Communications (Journalism) student at La Trobe University. You can follow her on Twitter @EllaJZammit

Tia Clarkson-Pascoe is a second-year Bachelor of Media and Communications (Journalism) student at La Trobe University. You can follow her on Twitter @Tia_pascoe

 

Photo by Sanket Mishra, used under a Creative Commons Licence.
