Artificial intelligence and the law

April 2017

On 12 January, MEPs voted for a set of regulations to be drafted to govern the use and creation of robots and artificial intelligence, hot on the heels of the UK Government setting up a commission to look at the issues surrounding artificial intelligence.

Across continents, the law in this area is unclear, varies between jurisdictions and is likely to evolve. Charlotte Walker-Osborn, Head of Technology, Media and Telecoms Sector, with input from Christopher Chan, Intellectual Property Law Partner, both at global law firm Eversheds Sutherland, give us a brief perspective on the current status.

Artificial intelligence is the simulation of human intelligence processes by computer systems and other machines. These processes include machine learning (essentially the acquisition of data and of rules for using that data), reasoning (using the rules to reach conclusions) and an element of self-correction.

In late 2016, in the UK, the Commons’ Science and Technology Committee published a report on robotics and artificial intelligence (AI). The report recommended that a standing Commission on Artificial Intelligence be established to examine the social, ethical and legal implications of recent and potential developments in AI.

On 12 January, MEPs from the parliament’s Legal Affairs Committee passed Mady Delvaux’s report on robotics and AI. As a result, the European Parliament will vote on draft proposals in February for the creation of specific regulation around the use of robots and AI.

Machine generated ideas: who owns the intellectual property?

So, pending further regulation, where are we in relation to intellectual property and AI currently in the UK and beyond? Given the differing legal systems, this article touches upon the position in just three key countries of interest - the UK, the US and Japan - as well as offering some discussion of the European position.

Copyright, AI and the law

Starting with the position in the UK, the Copyright, Designs and Patents Act 1988 sets out that: ‘In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken’, and that ‘computer-generated’ means ‘the work is generated by computer in circumstances such that there is no human author of the work.’

There is currently little guidance (whether in case law or otherwise) as to what these necessary arrangements are, so ownership is not clear-cut. It is arguable that the organisation which set up the rules for the system has made the arrangements necessary for the creation (and is therefore the owner), but this is not a definitive conclusion. Many other countries have similar provisions and are similarly struggling to determine what this means in terms of ownership.

In the US, copyright law does not envisage ownership of work generated by a machine, but the law has recently addressed whether such work is eligible for copyright. The US Copyright Office rules state that it ‘will register an original work of authorship, provided that the work was created by a human being.’ Generally, absent a written agreement, the author of a work owns the work.

In 2016, in response to a US court ruling in a copyright infringement case involving a monkey who had taken a selfie using a camera that a British photographer had set up, the US Copyright Office updated its rules to clarify that ‘copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind”’. The rules listed specific examples of works that do not qualify under US law for copyright protection, which included ‘works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author’.

While it follows that a work solely created by a machine is not eligible for US copyright, US law is silent on the issue of ownership of a work created solely or jointly by a machine. Assuming the machine’s contribution to a joint work with a person could qualify for US copyright, would the owner of the machine jointly own such work with the other person, or would the other person be the sole owner of the work?

It would seem reasonable that the owner of the machine would be a joint owner of the work, but there is no explicit US law or case dictating copyright ownership for work generated by a machine.

Over in Japan, the government’s intellectual property task force stated in 2016 that Japan’s existing copyright law did not cover creations produced by AI. The Japanese Government is in the process of putting in place new measures in 2017 to seek to give protection in this area.

Are inventions created by AI systems patentable?

Patents are also relevant in the field of AI. If a machine invents something new, can it be patented? Turning to the US first, the law envisages the inventor as an individual who contributes to the conception of an invention; there is no concept of a computer being able to conceive of a patentable invention.

While the term ‘individual’ appears to exclude companies or legal entities from being named an inventor, ‘conception’ is defined by the US Supreme Court as ‘the complete performance of the mental part of the inventive act’ and ‘the formation in the mind of the inventor of a definite and permanent idea of the complete and operative invention as it is thereafter to be applied to practice.’

Under US law, currently, a machine is not likely to be named an inventor since it is not an ‘individual’ and the ‘conception’ standard appears to contemplate inventorship by a person rather than a machine. However, there is no specific prohibition on patenting inventions created by AI, and no US court has yet ruled on the issue.

In the UK, for over a decade, there has been discussion as to whether inventions which are conceived using computers can gain patent protection. The Patents Act 1977 expressly carves out from patent protection programs for computers, to the extent that a patent relates to the program ‘as such’.

Traditionally, that has meant that only certain types of patent application involving computer systems will be granted, and these need to make a certain ‘technical’ contribution. If this hurdle is overcome, the Act sets out that the inventor is the deviser of the invention, albeit that there can be joint inventors.

So, arguably, AI inventions of a certain type are patentable but there are barriers to patentability to be overcome. Indeed, whether robots can create something which is patentable is subject to debate by the European Parliament following the vote mentioned above.

Numerous AI patents have already been granted in Japan which, as of November 2016, was reported to hold more patents in this area than any other country in the world.

Liability for acts and omissions of robots and AI

Turning away from intellectual property ownership for a moment, the question of who is liable for the acts and omissions of robots and AI-delivered outcomes paints a similar story. Clearly, if robots and/or AI are operating in a ‘connected’ environment, there are increased security and hacking risks (not uncommon to any internet-enabled technology).

You are unlikely to be surprised that the law is unclear in many territories in relation to an ‘owner’, manufacturer and/or user’s liability for acts and omissions by robots and AI. 2017 is the year in which many countries are seeking to introduce legislation which will give at least some framework in these areas. For example, the 2017 report from Mady Delvaux examines whether robots should have legal rights and be given legal status as an ‘electronic person’, as well as whether a robot can be held liable for accidents.

There is much talk about whether robots should have a ‘kill switch’ so they could be switched off if need be. The EU report sets out some proposed principles which include:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm. One can immediately see that this principle is likely to be abused - for example, drones have already been used in warfare, and the use of robots would give warring nations a much bigger advantage.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the above. This seems strange given that robots may be granted their own legal status. It will be interesting to keep track of further discussion in this area.
  • A robot must protect its own existence, as long as such protection does not conflict with either of the above.

See article: Robots: Legal Affairs Committee calls for EU-wide rules

2017 is the year

It is clear that 2017 is the year in which numerous countries across the world will look to grapple with and update their laws to deal more comprehensively with AI. The UK, US, EU and Japan have all indicated that they will look at the legal implications of AI (including in relation to intellectual property and liability) in 2017. The UK will have the added complexity of Brexit: if new EU laws seek to deal with AI, the UK may look to keep those (if implemented before the UK leaves the EU) or may need to adopt its own.

For commercial arrangements in which an organisation is procuring AI systems or consultancy from a third-party provider, or is offering these services, the current conclusion is that it is essential to give clarity in the contract(s) to the parties’ intentions around ownership, licensing and exploitation, as well as product and other potential liabilities. We recommend that anyone working in this area who is not doing so already carefully considers the position with their legal team(s) before entering into those contracts.

Please note that the information provided above is for general information purposes only and should not be relied upon as a detailed legal source.

Image: iStock.com/marek_bf

Comments (2)

  • 1
    Frank Land wrote on 28th Apr 2017

    Much of the current concern and discussion of IT and IS is centred on the dark side, ranging from the activities of the 'bad guys' to government-activated surveillance and invasion of privacy. Yet the IT community - academics, consultants, practitioners - posted few warnings of the problems and damage faced by society when they lauded the benefits of the information age.

    Should we not learn from that experience in our projections of what AI might achieve - in particular, the exploitation of AI for purposes which will damage individuals and society by the 'bad guys', by those in authority and indeed, as the use of social media has shown, by ourselves?


  • 2
    Ceri Charlton wrote on 28th Apr 2017

    I would very much like to see the BCS lobby (now) for a legal requirement which would be relatively cheap/painless to implement now, but vastly harder and more costly to implement once it truly becomes necessary (i.e. legislating "after the fact"):

    Increasingly, services are rendered by AIs rather than humans. Last week, I opened a bank account with Revolut. My questions were answered by an AI, via a "messenger"-style application. There are many, many instances (some of which have not yet arisen, due to current technology) in which a human interacting with a machine may genuinely mistake the machine for a person, and it would be in the public interest for it to be unambiguously clear, at the time of entering into a contract, that the 'agent' with whom you were speaking was an AI. Rather than the weak approach of burying words to the effect of "aspects of our services are fulfilled by AIs" 500 pages into a EULA that no one reads, I would advocate the following:

    There should be a specific and formalised question, which AIs performing services are required to answer immediately with a specific response, confirming both their status as an AI and identifying the company/person responsible for them. I would suggest:
    Q."Are you an Artificial Intelligence?"
    A."Yes, I am an Artificial Intelligence. I am registered to the Deepmind Corporation LLP and the individual responsible for me can be contacted at: xxxx@yyy.com, Deepmind, 4 Acacia Avenue, Nowheresville."

    If it were made a legal requirement by 2025, the effort to implement this functionality in future systems would be negligible. Retrofitting such a requirement later (as will inevitably be deemed necessary) would be disproportionately expensive.

    I would be happy to be part of any BCS focus group that was interested in advancing this proposal, or working in the field of regulation/legislation relating to the use of AI.

