Georgia Smith MBCS spoke to Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey and author of The Reasonable Robot: Artificial Intelligence and the Law, and globally renowned AI policy, ethics and governance expert Merve Hickok, founder of AIethicist.org and President and Policy Director at the Center for AI & Digital Policy (CAIDP), to explore the relationship between AI and authorship.
As we step irrevocably into the age of science fiction, one of many complexities rearing their heads is the relationship between AI and authorship.
‘Authorship can mean different things in different contexts, but legally, what qualifies someone as a legal author is the generation and dissemination of creative works’, explains Ryan Abbott, considering whether AI generated material can be given copyright. ‘Copyright, at least in the US, was designed to encourage that, and my work has argued that this supports the granting of copyright to AI-generated works because it will encourage people to make and use creative AI, which will [be socially beneficial]. The US… [already] allows a corporation or other non-natural persons to be authors, so we also have a tradition of non-human authorship.
‘An AI can contribute in a minor way to a work’s creation, like helping to correct spelling and grammar, or it can effectively automate the entirety of the creative process… Some people think AI works are not genuinely creative, largely based on beliefs about human exceptionalism. That is an interesting philosophical position, but it really should not have anything to do with legal authorship, which is a utilitarian concept’, he explains. ‘My thinking is that the law should care more about behaviour than the nature of the actor, because the law generally cares about changing behaviour to generate more social benefits. If we allow useful AI output to receive protection, we will have more useful AI output. If we discourage infringing AI output, we will have less infringement.’
For Merve, who sounds a more cautious note, what jumps out is the slippery matter of liability. ‘Attributing legal authorship to a technological artefact or a commercial product is the first step towards attributing legal personhood’, she explains. ‘It can be used to shift liability by the companies benefiting from these products. If a product creates a safety issue or fails due to its design, you need those involved to be held accountable. If a plane or a car crashes due to design, you don’t say it was the plane’s responsibility. It would be the manufacturer’s.’
Is generative AI the new ghostwriting?
Ghostwriting is not new, but it has never escaped accusations of being deceptive, exploitative or lacking integrity, and many academic and journalistic organisations expressly forbid it. It is now possible, though, for someone to train an AI model on their own work and voice, and prompt it to write an authentic-sounding article based on their latest research. That creates a real conceptual tangle of authorship: on the one hand, they didn't write it. On the other, it's written in their voice, drawing on their research, with their explicit consent and instruction. Is AI authorship essentially ghostwriting for the modern age, and does it raise similar issues?
‘It does raise similar issues’, Ryan answers. ‘I see nothing wrong generally with having AI help you write, or even having AI entirely write something, but I do have concerns about people passing AI work off as their own — especially if this violates some principled rule. In the case of students, for example, it is generally a violation of academic integrity to have another person write your paper, so it should similarly be a violation to have AI write your paper... Similarly for researchers, passing off AI-generated work as human-generated involves taking credit for work not done.’
Who bears responsibility for AI output?
Moral responsibility and accountability are not currently assignable to AI models — which are, perhaps for the first time in history, genuinely ‘just following orders’. I’m curious how Ryan’s belief in legal neutrality towards AI as an actor informs his opinion on where moral responsibility for AI output lies.
‘As a legal matter, I’m not sure that authorship is about responsibility and accountability…’, he responds thoughtfully. ‘But regardless, there’s no reason AI legal neutrality must come at the expense of accountability or transparency. For example, I’ve argued that AI-generated works should be disclosed as such, rather than mislabelled as some specific person’s work. Similarly, if one does engage in infringing activities, either related to AI training or output, a legal person should still generally be liable.’
Also on the topic of moral liability, I ask Merve about her characterisation of AI hallucinations as deception in the CAIDP’s 2023 letter to the Federal Trade Commission, which argued that OpenAI broke guidelines by shifting responsibility for mistakes onto the consumer. Would granting AI legal authorship over its output be a reasonable solution, making it impossible to shift blame?
‘No, granting legal authorship to AI would be a serious mistake’, Merve responds emphatically. ‘AI systems do not fall from the sky. They are designed with intentional choices of the AI companies — including how they mimic human behaviour and styles. Companies who create these commercial products are fully to blame when AI recommends suicide to a child, for example. But instead of being honest about the limitations of AI systems, they continue to hype them up as being capable of advanced reasoning and of understanding the consequences of their recommendations.’
Authorship and transparency
When discussing the extent to which a human can use generative AI tools to write and still claim authorship, and at what point they should disclose that usage, both experts find the answer is context-dependent.
‘Whether a person can call themselves an author depends on their own judgement of their contribution’, says Merve. ‘However, the more just approach is where publishers draw the line. Publishers need to be crystal clear about AI use guidelines and enforce them in a robust way.’ Moving on to context, Merve expands: ‘Synthetic content is increasingly indistinguishable from real photos or sounds. When used in contexts which can manipulate or impact a person’s behaviour or choices, AI should definitely be disclosed. For example, if AI was used to generate customer reviews for a product, it is not the experience of a real customer and can be deceptive and misleading for others. If AI is used for robocalls by companies or governments, it can lead to mis- or disinformation. If AI provides health advice to individuals, it does not have the professional knowledge or liability of a doctor.’
For Ryan, the answer is similarly context-dependent. ‘In the academic context, for example, we care a lot about transparency and attribution’, he explains. ‘Journals require detailed disclosures about who contributed to a manuscript and how, and they increasingly require detailed disclosures about the use of AI — and I think disclosure solves a lot of the related concerns. In other contexts, short bits of commercial text for instance, we care a lot less about transparency and attribution. Though these are fundamentally difficult lines to draw.’