On 12 January, MEPs voted for a set of regulations to be drafted to govern the use and creation of robots and artificial intelligence, hot on the heels of the UK Government setting up a commission to look at the issues surrounding artificial intelligence.
Across continents, the law is unclear, differs from country to country and is likely to evolve in this area. Charlotte Walker-Osborn, Head of Technology, Media and Telecoms Sector, with input from Christopher Chan, Intellectual Property Law Partner, both at global law firm Eversheds Sutherland, give us a brief perspective on the current status.
Artificial intelligence is the simulation of human intelligence processes by computer systems and other machines. These processes include machine learning (essentially the acquisition of data and rules for using the data), reasoning and use of the rules to reach conclusions as well as an element of self-correction.
In late 2016, in the UK, the Commons’ Science and Technology Committee published a report on robotics and artificial intelligence (AI). The report recommended that a standing Commission on Artificial Intelligence be established to examine the social, ethical and legal implications of recent and potential developments in AI.
On 12 January, MEPs from the parliament’s Legal Affairs Committee passed Mady Delvaux’s report on robotics and AI. As a result, the European Parliament will vote on draft proposals in February for the creation of specific regulation around the use of robots and AI.
So, pending further regulation, where are we in relation to intellectual property and AI currently in the UK and beyond? Given the differing legal systems, this article touches upon the position in just three key countries of interest: UK, US and Japan, as well as some discussion as to the European position.
Starting with the position in the UK, the Copyright Designs and Patents Act 1988 sets out that: ‘In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken’, and that ‘computer-generated’ means ‘the work is generated by computer in circumstances such that there is no human author of the work.’
There is currently little clarity (whether in case law or otherwise) as to what these necessary arrangements are and, so, ownership is not clear-cut. It is arguable that the organisation which set up the rules for the system has made the arrangements necessary for the creation (and is therefore the owner), but this is not a definitive conclusion. Many other countries have similar provisions and are similarly struggling to give effect to what this means in terms of ownership.
In the US, copyright law does not envisage ownership of work generated by a machine, but the law has recently addressed whether such work is eligible for copyright. The US Copyright Office rules state that it ‘will register an original work of authorship, provided that the work was created by a human being.’ Generally, absent a written agreement, the author of a work owns the work.
In 2016, in response to a US court ruling in a copyright infringement case involving a monkey who had taken a selfie using a camera that a British photographer had set up, the US Copyright Office updated its rules to clarify that ‘copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind”’. The rules listed specific examples of works that do not qualify under US law for copyright protection, which included ‘works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author’.
While it follows that a work solely created by a machine is not eligible for US copyright, US law is silent on the issue of ownership of a work created solely or jointly by a machine. Assuming the machine’s contribution to a joint work with a person could qualify for US copyright, would the owner of the machine jointly own such work with the other person, or would the other person be the sole owner of the work?
It would seem reasonable that the owner of the machine would be a joint owner of the work, but there is no explicit US law or case dictating copyright ownership for work generated by a machine.
Over in Japan, the government’s intellectual property task force stated in 2016 that Japan’s existing copyright law did not cover creations produced by AI. The Japanese Government is putting new measures in place in 2017 that seek to give protection in this area.
Patents are also relevant in the field of AI. If a machine invents something new, can it be patented? Turning to the US first, the law envisages an individual as the inventor who contributes to conception of an invention and, yet, there is no concept of a computer being able to conceive of a patentable invention.
While the term ‘individual’ appears to exclude companies or legal entities from being named an inventor, ‘conception’ is defined by the US Supreme Court as ‘the complete performance of the mental part of the inventive act’ and ‘the formation in the mind of the inventor of a definite and permanent idea of the complete and operative invention as it is thereafter to be applied to practice.’
Under US law, currently, a machine is not likely to be named an inventor since it is not an ‘individual’ and the ‘conception’ standard appears to contemplate inventorship by a person rather than a machine. However, there is no specific prohibition on patenting inventions created by AI, and no US court has yet ruled on the issue.
In the UK, for over a decade, there has been discussion as to whether inventions which are conceived using computers can gain patent protection. The Patents Act 1977 expressly carves out from patent protection inventions which are implemented by computer programs if they ‘relate to that thing as such’.
Traditionally, that has meant that only certain types of patent applications which involve computer systems will be granted, and these need to make a certain ‘technical’ contribution. If this hurdle is overcome, the Act sets out that the inventor is the deviser of the invention, albeit that there can be joint inventors.
So, arguably, AI inventions of a certain type are patentable but there are barriers to patentability to be overcome. Indeed, whether robots can create something which is patentable is subject to debate by the European Parliament following the vote mentioned above.
Japan already holds numerous patents in AI and, as of November 2016, was reported to have more patents in this area than any other country in the world.
Turning away from intellectual property ownership for a moment, the legal position as to who is liable for the acts and omissions of robots and AI-delivered outcomes paints a similar story. Clearly, if robots and/or AI are operating in a ‘connected’ environment, there are also increased security and hacking risks (not uncommon to any internet-enabled technology).
You are unlikely to be surprised that the law is unclear in many territories in relation to an ‘owner’, manufacturer and/or user’s liability for acts and omissions by robots and AI. 2017 is the year in which many countries are seeking to generate legislation which will give at least some framework in these areas. For example, the 2017 report from Mady Delvaux examines whether robots should have legal rights and be given legal status as an ‘electronic person’, as well as whether a robot can be held liable for accidents.
There is much talk about whether robots should have a ‘kill switch’ so they could be switched off if need be. The EU report sets out some proposed principles along these lines (see the article ‘Robots: Legal Affairs Committee calls for EU-wide rules’ for more details).
It is clear that, in numerous countries across the world, 2017 is the year in which governments will look to grapple with and update their laws to deal more comprehensively with AI. The UK, US, EU and Japan have all indicated they will look at the legal implications of AI (including in relation to intellectual property and liability) in 2017. The UK will have the added complexity of Brexit: if new EU laws seek to deal with AI and are implemented before the UK leaves the EU, the UK may look to keep those, or it may need to introduce its own.
Current conclusions in relation to commercial arrangements, where an organisation is procuring AI systems or consultancy from a third-party provider or is offering these services, are that it is essential to give clarity in the contract(s) to the parties’ intentions around ownership, licensing and exploitation, as well as product and other potential liabilities. We recommend that anyone working in this area, if they are not doing so already, carefully considers the position with their legal team(s) before entering into such contracts.
Please note that the information provided above is for general information purposes only and should not be relied upon as a detailed legal source.