James Naughton and Liam Currie, 'What Happens When Artificial Intelligence Defames? The Australian Position' [2023] PrecedentAULA 46; (2023) 177 Precedent 36


What happens when artificial intelligence defames? The Australian position

By James Naughton and Liam Currie

Artificial intelligence has exploded in popularity and functionality in the past year. AI is now capable of increasingly impressive tasks, and it seems to foreshadow immense changes in the way humans live. Most institutions, including legal institutions, have struggled to match AI's pace.

In May 2007, Brian Hood became a whistleblower.[1] Mr Hood alerted authorities to the conduct of other officers at his workplace, Note Printing Australia (NPA), a subsidiary of the Reserve Bank of Australia.

Other officials at NPA and another subsidiary, Securency, had been paying bribes to foreign officials to secure lucrative polymer banknote contracts. Mr Hood raised the alarm, and several of those involved went to prison.[2] When one of the wrongdoers was sentenced, Mr Hood and fellow whistleblower James Shelton were praised by the Supreme Court of Victoria for their moral courage:

‘Mr Hood and Mr Shelton both showed tremendous courage in raising their concerns about the foreign bribery activities with appropriate people ... the various foreign bribery court proceedings have lasted for many years longer than anyone might have anticipated, without there having been any public acknowledgement of the very important role played by Brian Hood and James Shelton in exposing what happened within Securency and NPA.’[3]

Mr Hood was, predictably, shocked when a friend informed him that the popular online AI chatbot ChatGPT, owned and operated by OpenAI, had falsely stated that he had been involved in the very offences he had brought to the attention of the public. Among other things, ChatGPT said that Mr Hood had been personally responsible for bribing a Malaysian arms dealer to secure the contracts, that he had pleaded guilty to this offence and that he had been sentenced to 30 months in prison. None of this was true. Beyond drawing Mr Hood's ire by alleging that he had committed the offences he had risked his professional reputation to expose, ChatGPT's assertions also threatened his current standing in the community, not only as the trusted and well-respected mayor of the Hepburn Shire Council, but also as the person responsible for running the local Bendigo Bank branch in Trentham, north-west of Melbourne. ChatGPT's assertions painted Mr Hood as a person who lacked trustworthiness, financial probity and ethics.

We acted for Mr Hood and sent a concerns notice on his behalf seeking to have ChatGPT's offending content removed. It took a series of interventions by Gordon Legal before OpenAI eventually responded. In our view, it was only after Mr Hood's story started gaining media attention, including international media attention, that OpenAI engaged with his claim. This strategy is not going to work for everyone: better pathways and complaint mechanisms need to be put in place to allow people who have been defamed by AI programs to correct the record in a more efficient way, without having to resort to concerns notices or litigation.

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial Intelligence (AI) is commonly defined as:

‘Technology with the ability to perform tasks that would otherwise require human intelligence and which, usually, [has] the capacity to learn or adapt to new experiences or stimuli, including machine learning, speech and natural language processing, robotics and autonomous systems.’[4]

Some AI products, such as ChatGPT, offer an interface that allows humans to interact with the technology in a 'conversational way'.[5] ChatGPT 3.5 is free to use with an account that anybody can sign up for, while ChatGPT 4.0 is available to paid subscribers only. ChatGPT is a conversational AI, meaning that it produces answers to users' questions in natural language and can opine on most topics. Hypothetically, no two conversations between a user and ChatGPT will be the same.

It is known that AI sometimes reaches conclusions that are wrong, or forms conclusions based on assumptions. ChatGPT itself includes a disclaimer that its responses are not always accurate and should not be relied upon.

The idea of 'defamation by algorithm' was considered at the 2023 Missouri Law Review Symposium. The panel considered whether using ChatGPT to write a news story was like 'commissioning a piece from a freelance writer', and discussed whether 'ChatGPT, a language model that continues to make headlines for its ability to convincingly reproduce human language, can produce defamatory speech'.[6] The panel did not ultimately form a view, but said that the 'onus [is] on news organisations and social media companies to avoid the pitfalls of tools that might be as risky as they are promising'.[7]

An article published in the American Federal Communications Law Journal considered whether artificial intelligence could generate defamatory content that harms people. The article focused on the use of AI tools and algorithms to assist journalism. The author ultimately concluded that 'fully autonomous journalism risks propagating false and damaging statements about individuals.'[8]

The James E Rogers College of Law at the University of Arizona released a discussion paper considering the potential liability of ChatGPT software under US defamation laws, and specifically its intersection with s230 of the Communications Decency Act 1996 (US): 'No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.'

There is a clear legal difference between a newspaper 'importing' the false findings of a conversational AI and a conversational AI functioning in its natural form by producing a response to an individual user. In our opinion, if a news outlet publishes something defamatory, reliance on the fact that an AI initially wrote it will be viewed extremely dimly by any Australian court.

It could have been argued that defamation by a conversational AI was unlikely to occur, because it was unlikely that a person would be famous enough for a conversational AI to know who they are, yet not so famous that their reputation could not be harmed by an AI making defamatory statements about them.

However, Mr Hood’s circumstances demonstrate that there may be people who fall into this category: private citizens with reputations to protect, about whom there is enough information available online for AI to form conclusions, including incorrect conclusions, or false assumptions.

DEFAMATION IN THE ORDINARY COURSE

A defamation claim is a personal cause of action that provides aggrieved persons (plaintiffs) with a means to address harmful imputations against their character and, hopefully, to clear their name. Defamation law provides plaintiffs with the right to seek damages for non-economic 'hurt and humiliation', in addition to direct financial loss resulting from the defamation.[9]

To make out the cause of action of defamation, a plaintiff must establish:

• communication or publication;

• to any third party;

• of a defamatory matter;

• about, concerning, or identifying a person;

• without lawful excuse.[10]

Publications are defamatory if they ‘tend to lower the plaintiff in the estimation of right-thinking members of society generally’.[11]

The plaintiff needs to be identified by the publication, by reference to what an ordinary reasonable person, with knowledge of the facts surrounding the imputation, would have reasonably understood about the person to whom the publication is referring.[12] In practical terms, this means that innuendo or insinuations of ‘you-know-who’ may not save a person or a publication that seeks to defame.[13]

The element most likely to be relevant to whether an AI defames is 'communication or publication'. In Australia, publication is a bilateral act, meaning that two parties are required: one to communicate and one to comprehend the communication.[14] Like a tree falling in the woods, publication does not occur unless there is someone around to hear it.

One critical question is whether an AI operated by an American company or other offshore entity can be liable under Australian defamation law. The law is quite clear on the point that the tort occurs where the publication happens, by reference to the bilateral act discussed above. Publication is complete when a person understands the communication.[15]

In notable contrast to the United States and other jurisdictions, which require proof of the publisher's intent to defame the plaintiff, defamation in Australia is a strict liability tort and the subjective intention of the publisher is irrelevant. A publisher can be any person who plays a material role in the communication of the imputations.[16] A publisher can potentially be liable in defamation in Australia even if they did not know that what they were publishing was defamatory.[17] It appears reasonable to accept that when an AI defames, it does not intend to do so. If it did, however, it is difficult to see what forensic advantage that might offer a plaintiff, if any.

Finally, the publication needs to cause serious harm to the person's reputation. This is a new element of defamation, introduced in July 2021. There is still limited case law on the meaning of serious harm, but what precedent there is indicates that both the seriousness of the imputation and the number of people to whom it was published are relevant in establishing serious harm.

If a plaintiff is successful in establishing the above, there may still be strong defences to the claim, including (for example) innocent dissemination. Although it is not yet clear how these issues will be resolved, it is conceivable that an AI manufacturer may be able to make out a defence on the basis that the defamatory imputation was innocent, because it was the AI that formed the conclusion, not its creator. However, one need only think this argument through to see some of its weaknesses, particularly that were it not for the AI developer, the AI itself would not exist.

On the basis of the above, it appears that an AI's output is capable of satisfying the elements of defamation. It is very likely that the first defamation case concerning an AI will be brought soon, and the precise issues will flow from that litigation.

One issue is considered further below: is an AI’s output a ‘publication’?

PUBLICATION AND THE INTERNET

Melbourne lawyer George Defteros sued Google for directing users, via its search results, to a newspaper article that he said contained defamatory imputations. The article and its content had been the subject of a settlement several years prior, but the settlement had not involved the article being taken down.

Mr Defteros enjoyed early success in the Supreme Court and Court of Appeal, with both finding that Google’s search engine had played the role of an over-enthusiastic librarian who, when prompted, collects a book for a customer and points to a page containing defamatory material.[18]

The High Court thought differently, however, holding that hyperlinks generated by the Google search engine were merely tools used to go from one page to another. Hyperlinks themselves do not contain defamatory information.

Like Google, conversational AI generates unique responses to prompts by individual users. There is no guarantee that any two users will ask the same questions or receive the same responses.

However, when a conversational AI 'says' something defamatory to a user, it appears to act more like a person in conversation than like Google. A Google user searches for a topic they are interested in and is presented with a list of links; intrigued by a link, they click through to a website that may have the information they are interested in. By contrast, beyond asking the question, the user of a conversational AI can have no confidence in, nor any control over, the AI's response. They are merely the recipient of an answer that may be defamatory, even though their question was not, and did not intend to elicit, a defamatory response.

WHAT NEXT FOR BRIAN?

In our opinion, Brian Hood was able to substantively rectify the damage to his reputation by seeking to clear his name in the media. However, this option may not be available to the next person affected by a defamatory AI statement. It is likely that, as AI continues to grow and more and more people use it, there will be more stories like Brian’s in coming months. What that will mean for the treatment of plaintiffs in Australian defamation settings remains to be seen.

James Naughton is a partner in commercial law at Gordon Legal. PHONE 03 9603 3018 EMAIL jnaughton@gordonlegal.com.au.

Liam Currie is a lawyer in commercial law at Gordon Legal. PHONE 03 9603 3037 EMAIL lcurrie@gordonlegal.com.au.

Both James and Liam are specialists in defamation law.


[1] N Bonyhady, ‘Australian whistleblower to test whether ChatGPT can be sued for lying,’ The Age (5 April 2023) <https://www.theage.com.au/technology/australian-whistleblower-to-test-whether-chatgpt-can-be-sued-for-lying-20230405-p5cy9b.html>.

[2] S Letts, 'How the RBA scandal unfolded,' ABC News (28 November 2018) <https://www.abc.net.au/news/2018-11-28/reserve-bank-note-printing-scadal-timeline/10561826>.

[3] Director of Public Prosecutions (Cth) v Boillot [2018] VSC 739 (Hollingworth J) [16]–[17].

[4] ‘Artificial intelligence definition’, Glossary, LexisNexis (2023) <https://www.lexisnexis.co.uk/legal/glossary/artificial-intelligence>.

[5] 'Introducing ChatGPT', OpenAI (30 November 2022) <https://openai.com/blog/chatgpt>.

[6] A Fitzgerald, ‘Symposium at Reynolds Journalism Institute asks: What happens when AI creates defamatory content?’ Reynolds Journalism Institute, University of Missouri School of Journalism (17 March 2023) <https://journalism.missouri.edu/2023/03/symposium-at-reynolds-journalism-institute-asks-what-happens-when-ai-creates-defamatory-content/>.

[7] Ibid.

[8] D Albright, 'Do androids defame with actual malice? Libel in the world of automated journalism', Federal Communications Law Journal, Vol 75, No 1, 2021–2022, 123.

[9] Defamation Act 2005 (Vic) s35.

[10] Favell v Queensland Newspapers Pty Ltd (2005) 221 ALR 186.

[11] Radio 2UE Sydney Pty Ltd v Chesterton [2009] HCA 16; (2009) 238 CLR 460, [3]–[5].

[12] Mirror Newspapers Ltd v World Hosts Pty Ltd (1979) 23 ALR 167 (Aickin J).

[13] See, for example Steele v Mirror Newspapers Ltd [1974] 2 NSWLR 348.

[14] Google LLC v Defteros [2022] HCA 27 (Google) [21].

[15] Dow Jones and Co Inc v Gutnick [2002] HCA 56.

[16] Google, above note 14.

[17] Ibid.

[18] Ibid [17], [50].

