Plaintiffs are filing suits against AI developers at a quick pace, arguing that AI creates nothing new but merely reads and copies billions of other people's protected works from the internet. Using these works, and in response to a user's prompts, AI combines elements of existing works into something else, whether art or prose.
The new work may be a derivative of an old, copied work, in which case the developer needs a license from the owner of the old work. Many authors think so, and have joined together to sue, although for now their suits raise more questions than answers.
Are these copied uses "fair" and not infringing? Maybe, maybe not, and there are cases each side can point to. In the Authors Guild suit against HathiTrust, scanning books to enhance research and provide access to print-disabled readers was held to be transformative and fair; the result did not replace the function of the scanned books.
But there is also the recent Warhol decision, in which the U.S. Supreme Court held that the Andy Warhol Foundation's licensing of Warhol's art was not transformative and not a fair use because it substituted for the function of the original work.
AI developers are not internet service providers, so they are not immune from suit under Section 230 of the Communications Decency Act, and authors like Sarah Silverman and image owners like Getty Images are suing. Silverman is a member of a class suing Meta and OpenAI, alleging the defendants used copyright-protected material to create derivative works. Her class intends to test the law of fair use, contending that AI's use of a copyright-protected work to create a new work is not sufficiently transformative to qualify as a fair use under 17 U.S.C. § 107.
Under Section 106 of the Copyright Act, the owner of a protected work has the exclusive right to prepare derivative works based on it, or to authorize others to do so.
Getty Images, the holder of rights to millions of images, sued Stability AI, claiming it infringed by scanning Getty's vast image libraries to enable AI users to create new images based on existing, copyright-protected ones. A court will likely have to determine questions of substantial similarity, whether the new images are transformative fair uses, and the effect of the copying on the potential market for the original images under Section 107.
Adding to the misery, those determinations will need to be made case by case, because the results of the copying will differ each time.
And lastly, Microsoft was sued for $9 billion over alleged AI-induced infringement. Microsoft charges users $360 a year to access its Copilot AI program, which writes new code based on code the program has found on the internet. If it works as advertised, it's a product of great value, and worth every penny of the license fee.
Many a well-known person has expressed grave concern over the dangers posed by AI, but none appears to consider the legal perils as significant as the end of civilization they fear at AI's hands. And so developers have stepped into a legal downpour without an umbrella. For some, these suits may pose an existential threat.
There are solutions that can define what an AI developer can and cannot legally do, and create something of a safe harbor. Some are better than others. The most obvious would be clear court rulings, but many copyright rulings are fact-specific and lack general application. How the courts will rule remains to be seen, and even at best these cases will take years.
Next, the big AI developers are likely asking their favorite politicians to provide cover in the form of immunity laws, like the protection ISPs received at the dawn of the internet era through Section 230. That will take some heavy lifting.
The best solution may be for AI developers to pay copyright holders for the use of their protected works. AI developers can contribute to a pool of royalties to be administered and divided among those who qualify as copyright holders.
How the fund would collect, allocate, and pay these royalties is a challenge, but not an insurmountable one; the method used by ASCAP and BMI may provide a model. Or perhaps AI developers can task AI itself with writing a protocol to identify output that is substantially similar to works already available on the internet.
James B. Astrachan is a partner at Goodell, DeVries, Leech & Dann and teaches Trademark and Unfair Competition Law at the University of Baltimore School of Law.