RSA’s demise from quantum attacks is very much exaggerated, expert says

Three weeks ago, panic swept across some corners of the security world after researchers published a paper claiming a breakthrough that, at long last, put the cracking of the widely used RSA encryption scheme within reach of quantum computing.

Scientists and cryptographers have known for nearly three decades that a factorization method known as Shor’s algorithm makes it theoretically possible for a quantum computer with sufficient resources to break RSA. That’s because the secret prime numbers that underpin the security of an RSA key are easy to calculate using Shor’s algorithm. Computing the same primes with classical methods would take billions of years.

The only thing holding back this doomsday scenario is the massive amount of computing resources required for Shor’s algorithm to break RSA keys of sufficient size. The current estimate is that breaking a 1,024-bit or 2,048-bit RSA key requires a quantum computer with vast resources. Specifically, those resources are about 20 million qubits running in superposition for about eight hours. (A qubit is a basic unit of quantum computing, analogous to the binary bit in classical computing. But whereas a classical bit can represent only a single value such as a 0 or 1, a qubit exists in a superposition of multiple possible states.)
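
The relationship between factoring and RSA’s security can be seen in a toy sketch. The following Python snippet uses tiny textbook primes (real keys use primes hundreds of digits long), so brute-force factoring is instant here; at 2,048 bits it is classically infeasible, which is exactly the gap Shor’s algorithm would close:

```python
# Toy illustration: RSA security rests on the difficulty of factoring n = p * q.
# Anyone who recovers the primes can rebuild the private key.

def make_toy_rsa_key(p, q, e=17):
    """Build a toy RSA key pair from two secret primes (requires Python 3.8+)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # private exponent: modular inverse of e
    return (n, e), d             # public key, private key

def factor_by_trial_division(n):
    """Recover p and q by brute force -- only feasible for tiny n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime")

public, d = make_toy_rsa_key(61, 53)
n, e = public                    # n = 3233 is trivial to factor

# An attacker who can factor n can reconstruct the private key:
p, q = factor_by_trial_division(n)
d_recovered = pow(e, -1, (p - 1) * (q - 1))
assert d_recovered == d

# ...and decrypt anything encrypted with the public key:
msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d_recovered, n) == msg
```

With a 2,048-bit modulus, `factor_by_trial_division` would run for longer than the age of the universe; Shor’s algorithm on a sufficiently large quantum computer would not.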

The paper, published three weeks ago by a team of researchers in China, reported finding a factorization method that could break a 2,048-bit RSA key using a quantum system with just 372 qubits running thousands of operation steps. The finding, if true, would have meant that the fall of RSA encryption to quantum computing could come much sooner than most people believed.

RSA’s demise is greatly exaggerated

At the Enigma 2023 Conference in Santa Clara, California, on Tuesday, computer scientist and security and privacy expert Simson Garfinkel assured researchers that the demise of RSA was greatly exaggerated. For the time being, he said, quantum computing has few, if any, practical applications.

“In the near term, quantum computers are good for one thing, and that is getting papers published in prestigious journals,” Garfinkel, co-author with Chris Hoofnagle of the 2021 book Law and Policy for the Quantum Age, told the audience. “The second thing they are reasonably good at, but we don’t know for how much longer, is they’re reasonably good at getting funding.”

Even when quantum computing becomes advanced enough to provide useful applications, the applications are likely for simulating physics and chemistry, and performing computer optimizations that don’t work well with classical computing. Garfinkel said that the dearth of useful applications in the foreseeable future might bring on a “quantum winter,” similar to the multiple rounds of artificial intelligence winters before AI finally took off.

The problem with the paper published earlier this month was its reliance on Schnorr’s algorithm (not to be confused with Shor’s algorithm), which was developed in 2021. Schnorr’s algorithm is a classical computation based on lattices, which are mathematical structures that have many applications in constructive cryptography and cryptanalysis. The authors of the new paper claimed that Schnorr’s algorithm could be enhanced using the heuristic quantum optimization method called QAOA.

In short order, a host of researchers pointed out fatal flaws in Schnorr’s algorithm that have all but debunked it. Specifically, critics said there was no evidence supporting the authors’ claims of Schnorr’s algorithm achieving polynomial time, as opposed to the exponential time achieved with classical algorithms.

The research paper from three weeks ago seemed to take Schnorr’s algorithm at face value. Even when it’s supposedly enhanced using QAOA—something there’s currently no support for—it’s questionable whether it provides any performance boost.

“All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many,” Scott Aaronson, a computer scientist at the University of Texas at Austin and director of its Quantum Information Center, wrote. “Having said that, this actually isn’t the first time I’ve encountered the strange idea that the exponential quantum speedup for factoring integers, which we know about from Shor’s algorithm, should somehow ‘rub off’ onto quantum optimization heuristics that embody none of the actual insights of Shor’s algorithm, as if by sympathetic magic.”

With Nvidia Eye Contact, you’ll never look away from a camera again

Nvidia’s Eye Contact feature automatically maintains eye contact with a camera for you.

Nvidia recently released a beta version of Eye Contact, an AI-powered software video feature that automatically maintains eye contact for you while on-camera by estimating and aligning gaze. It ships with the 1.4 version of its Broadcast app, and the company is seeking feedback on how to improve it. In some ways, the tech may be too good because it never breaks eye contact, which appears unnatural and creepy at times.

To achieve its effect, Eye Contact replaces your eyes in the video stream with software-controlled simulated eyeballs that always stare directly into the camera, even if you’re looking away in real life. The fake eyes attempt to replicate your natural eye color, and they even blink when you do.

So far, the response to Nvidia’s new feature on social media has been largely negative. “I too, have always wanted streamers to maintain a terrifying level of unbroken eye contact while reading text that obviously isn’t displayed inside their webcams,” wrote The D-Pad on Twitter.

An Nvidia press video for the Broadcast 1.4 update featuring Eye Contact.

A former TV news anchor named Scott Baker also chimed in about Nvidia Eye Contact with his analysis: “As a TV news anchor for decades … this is not quite the right approach. To make communication effective … you have to naturally break eye contact with the camera (just as you would in real life) fairly often. The power of eye contact in human communication is deeply researched. Locking eyes with someone for more than 7-10 seconds is intuitively regarded as creepy or uncomfortable. True across a dinner table, in front of a group, or through a camera.”

This isn’t the first time a company has used simulated eyeballs to maintain eye contact in video calls or video streams. In 2019, Apple introduced its “Eye Contact” feature in FaceTime that kept your peepers always glued to the camera. Like Nvidia’s version of the technology, it also faced a generally negative reception upon launch.

But hey, if non-stop soul-searing eye contact is your thing, you can run Eye Contact yourself by downloading Nvidia Broadcast for free from the company’s website. It requires Windows, an Nvidia RTX graphics card, and a deep desire to freak out anyone watching your video.

Fearing ChatGPT, Google enlists founders Brin and Page in AI fight

An illustration of a chatbot exploding onto the scene, being very threatening.

Benj Edwards / Ars Technica

ChatGPT has Google spooked. On Friday, The New York Times reported that Google founders Larry Page and Sergey Brin held several emergency meetings with company executives about OpenAI’s new chatbot, which Google feels could threaten its $149 billion search business.

Created by OpenAI and launched in late November 2022, the large language model (LLM) known as ChatGPT stunned the world with its conversational ability to answer questions, generate text in many styles, aid with programming, and more.

Google is now scrambling to catch up, with CEO Sundar Pichai declaring a “code red” to spur new AI development. According to the Times, Google hopes to reveal more than 20 new products—and demonstrate a version of its search engine with chatbot features—at some point this year.

The NYT report quotes D. Sivakumar, a former Google research director, on the internal urgency of the situation: “This is a moment of significant vulnerability for Google. ChatGPT has put a stake in the ground, saying, ‘Here’s what a compelling new search experience could look like.’”

ChatGPT can answer questions, write programs, and even compose poetry about Nebraska.

Ars Technica

Unlike Google search, which works primarily through keywords, ChatGPT uses natural language processing to glean the context of what a user is asking, then generate its best attempt at relevant answers. ChatGPT’s output isn’t always accurate, but its performance has been convincing enough to illustrate a potential conversational search interface that would make Google’s technology look antiquated.

Perhaps because of this, Microsoft is reportedly working on a new version of its Bing search engine that will integrate features of ChatGPT. Microsoft made its first OpenAI investment in 2019, and it recently announced a new round of funding to the tune of $10 billion.

Back at Google, Page and Brin have not been very involved with the search engine since they left their daily roles in 2019, but they have long been cheerleaders of bringing AI into Google’s products. Their involvement reflects the gravity of the ChatGPT challenge within Google.

So far, Google has responded to OpenAI by introducing fast-track product approval reviews and tools to help other companies develop their own AI prototypes, according to the NYT. Google also offers software developers and other businesses image-creation technology, along with its AI language model, LaMDA.

Some have wondered if Google has been playing it too safe, worried about the negative societal impacts or copyright implications of generative AI technology. Google seems to recognize this hesitancy internally, and the NYT report mentions that the company may potentially “recalibrate” the level of risk it is willing to take with new AI technology.

In a tweet, OpenAI CEO Sam Altman poked fun at this line in the NYT article by saying that OpenAI aims to decrease the level of risk the company will take while still shipping powerful new AI models.

Recalibration or not, Google says it is committed to AI safety. “We continue to test our AI technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon,” Lily Lin, a spokesperson for Google, said in a statement to the NYT.

But while Google delays, the more nimble OpenAI is shipping generative AI products that could potentially disrupt not only Google, but the entire tech industry as OpenAI works toward its goal of creating ever-more-powerful AI technology. Google’s response to this existential threat may decide the success of the company for years to come.

Amazon is discontinuing its AmazonSmile charity program next month

Amazon’s business practices and footprint have received plenty of criticism over the years. From its misleading products and reviews and its environmental impact to its effect on small businesses and its own employees, its shoppers are left with a fair amount of guilt every time they use its convenient platform. AmazonSmile, which donates 0.5 percent of the price of eligible purchased items to a shopper-selected charity, has been one way for shoppers to ease that sense of guilt. Come February 20, those shoppers will have to find a new path to absolution when AmazonSmile is shuttered.

Amazon emailed participants of the free program about the news on Wednesday. The email said that AmazonSmile, which launched in 2013, “has not grown to create the impact that we had originally hoped.”

AmazonSmile shoppers can pick which charity will receive the 0.5 percent donation from the 1 million 501(c)(3) charitable groups participating. These groups include American Red Cross, Meals on Wheels America, St. Jude Children’s Research Hospital, and local groups, like specific Boys and Girls Club chapters.
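
For a sense of scale, here is the donation math on a hypothetical cart (the $120 figure is invented for illustration):

```python
# AmazonSmile donated 0.5 percent of the price of eligible items.
cart_total = 120.00                     # hypothetical cart of eligible purchases
donation = cart_total * 0.005           # 0.5 percent
assert round(donation, 2) == 0.60       # 60 cents donated on a $120 cart

# The easy-to-make slip of 0.05 percent would be ten times smaller:
assert round(cart_total * 0.0005, 2) == 0.06
```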

Amazon claims AmazonSmile has donated $449,385,192 to global charities and $400 million to US charities as of December 2022. But the tech giant is now telling shoppers that it tried to do too much, and its “ability to have an impact was often spread too thin.”

Unfortunately, Amazon didn’t announce an immediate philanthropic effort to replace AmazonSmile. Instead, it said it will “continue to pursue and invest in other areas where it can make meaningful change.” Its email named charitable efforts it had already been making, such as its Housing Equity Fund for affordable housing and its Future Engineer program that claims to have paid for computer science curriculum for more than 600,000 students.

Amazon’s charitable efforts moving ahead will also focus on natural disaster relief through its massive “logistics infrastructure and technology.”

As the program closes, Amazon said it would give participating charities a bonus donation that totals three months’ worth of donations that the organization received through AmazonSmile in 2022.

“Once AmazonSmile closes, charities will still be able to seek support from Amazon customers by creating their own wish lists,” Amazon said.

Amazon’s closure of AmazonSmile adds to a growing list of reasons for people to frown recently. Earlier this month, Amazon expanded layoff plans from 10,000 workers to 18,000.

Correction: This article previously stated that AmazonSmile donates 0.05 percent of the price of eligible products, but it donates 0.5 percent. The article has been updated. 

Hacker group incorporates DNS hijacking into its malicious website campaign

DNS hijacking concept.

Researchers have uncovered a malicious Android app that can tamper with the wireless router the infected phone is connected to and force the router to send all network devices to malicious sites.

The malicious app, found by Kaspersky, uses a technique known as DNS (Domain Name System) hijacking. Once the app is installed, it connects to the router and attempts to log in to its administrative account by using default or commonly used credentials, such as admin:admin. When successful, the app then changes the DNS server to a malicious one controlled by the attackers. From then on, devices on the network can be directed to imposter sites that mimic legitimate ones but spread malware or log user credentials or other sensitive information.

Capable of spreading widely

“We believe that the discovery of this new DNS changer implementation is very important in terms of security,” Kaspersky researchers wrote. “The attacker can use it to manage all communications from devices using a compromised Wi-Fi router with the rogue DNS settings.”

The researchers continued: “Users connect infected Android devices to free/public Wi-Fi in such places as cafes, bars, libraries, hotels, shopping malls, and airports. When connected to a targeted Wi-Fi model with vulnerable settings, the Android malware will compromise the router and affect other devices as well. As a result, it is capable of spreading widely in the targeted regions.”

DNS is the mechanism that matches a human-readable domain name to the numerical IP address where the site is hosted. DNS lookups are performed by servers operated by a user’s ISP or by services from companies such as Cloudflare or Google. By changing the DNS server address in a router’s administrative panel from a legitimate one to a malicious one, attackers can cause all devices connected to the router to receive malicious domain lookups that lead to lookalike sites used for cybercrime.
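
The mechanism can be sketched in a few lines of Python. This is not Kaspersky’s code or the malware’s; the domain and the IP addresses (drawn from the RFC 5737 documentation ranges) are invented for illustration:

```python
# A stub resolver forwards queries to whatever DNS server the router hands out,
# so swapping the router's DNS setting silently redirects every client.

LEGIT_DNS = {"bank.example.com": "203.0.113.10"}    # honest resolver's answers
ROGUE_DNS = {"bank.example.com": "198.51.100.66"}   # attacker's lookalike site

def resolve(domain, dns_table):
    """Stand-in for a DNS lookup against the currently configured server."""
    return dns_table.get(domain)

# Before the attack: the router points clients at a legitimate resolver.
assert resolve("bank.example.com", LEGIT_DNS) == "203.0.113.10"

# After the malware logs in with default credentials (e.g. admin:admin) and
# rewrites the router's DNS setting, every device on the network gets the
# attacker's answers instead:
assert resolve("bank.example.com", ROGUE_DNS) == "198.51.100.66"
```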

The Android app is known as Wroba.o and has been in use for years in various countries, including the US, France, Japan, Germany, Taiwan, and Turkey. Curiously, the DNS hijacking technique the malware is capable of is being used almost exclusively in South Korea. From 2019 through most of 2022, attackers lured targets to malicious sites through links sent in text messages, a technique known as smishing. Late last year, the attackers incorporated DNS hijacking into their activities in that Asian nation.

Infection flow with DNS hijacking and smishing.

The attackers, known in the security industry as Roaming Mantis, designed the DNS hijacking to work only when devices visit the mobile version of a spoofed website, most likely to ensure the campaign goes undetected.

While the threat is serious, it has a major shortcoming—HTTPS. Transport Layer Security (TLS) certificates that serve as the underpinning for HTTPS bind a domain name to a private encryption key that’s known only to the site operator. People directed to a malicious site masquerading as Ars Technica using a modern browser will receive warnings that the connection isn’t secure or will be asked to approve a self-signed certificate, a practice that users should never follow.
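
Python’s `ssl` module makes those safeguards concrete: a default client context requires both a certificate that chains to a trusted CA and a hostname match, which is what defeats a hijacked DNS answer.

```python
import ssl

# The defaults for a modern TLS client, as shipped in Python's ssl module:
ctx = ssl.create_default_context()

assert ctx.check_hostname is True              # name on the cert must match
assert ctx.verify_mode == ssl.CERT_REQUIRED    # self-signed certs are rejected

# A hijacked DNS answer can steer the victim to the attacker's server, but that
# server cannot present a valid certificate for the real domain, so the TLS
# handshake fails before any page content is exchanged.
```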

Another way to combat the threat is to ensure the password protecting a router’s administrative account is changed from the default one to a strong one.

Still, not everyone is versed in such best practices, which leaves them open to visiting a malicious site that looks almost identical to the legitimate one they intended to access.

“Users with infected Android devices that connect to free or public Wi-Fi networks may spread the malware to other devices on the network if the Wi-Fi network they are connected to is vulnerable,” Thursday’s report stated. “Kaspersky experts are concerned about the potential for the DNS changer to be used to target other regions and cause significant issues.”

300+ models of MSI motherboards have Secure Boot turned off. Is yours affected?

Secure Boot is an industry standard for ensuring that Windows devices don’t load malicious firmware or software during the startup process. If you have it turned on—as you should in most cases, and it’s the default setting mandated by Microsoft—good for you. If you’re using one of more than 300 motherboard models made by manufacturer MSI in the past 18 months, however, you may not be protected.

Introduced in 2011, Secure Boot establishes a chain of trust between the hardware and software or firmware that boots up a device. Prior to Secure Boot, devices used software known as the BIOS, which was installed on a small chip, to instruct them how to boot up and recognize and start hard drives, CPUs, memory, and other hardware. Once finished, this mechanism loaded the bootloader, which activates tasks and processes for loading Windows.

The problem was: The BIOS would load any bootloader that was located in the proper directory. That permissiveness allowed hackers who had brief access to a device to install rogue bootloaders that, in turn, would run malicious firmware or Windows images.

When Secure Boot falls apart

About a decade ago, the BIOS was replaced with the UEFI (Unified Extensible Firmware Interface), an OS in its own right that could prevent the loading of system drivers or bootloaders that weren’t digitally signed by their trusted manufacturers.

UEFI relies on databases of both trusted and revoked signatures that OEMs load into the non-volatile memory of motherboards at the time of manufacture. The signatures list the signers and cryptographic hashes of every authorized bootloader or UEFI-controlled application, a measure that establishes the chain of trust. This chain ensures the device boots securely using only code that’s known and trusted. If unknown code is scheduled to be loaded, Secure Boot shuts down the startup process.
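
A heavily simplified sketch of that check (real Secure Boot also verifies digital signatures against certificates in the databases; this hypothetical version reduces it to hash allow-listing) looks like this:

```python
import hashlib

# Hypothetical sketch: db holds hashes of trusted boot images, dbx holds
# hashes that were later revoked. All image contents here are invented.
db = {
    hashlib.sha256(b"trusted-bootloader-v1").hexdigest(),
    hashlib.sha256(b"trusted-bootloader-v2").hexdigest(),
}
dbx = {hashlib.sha256(b"trusted-bootloader-v1").hexdigest()}  # v1 was revoked

def may_execute(image: bytes) -> bool:
    """Allow boot only for known-good, non-revoked images."""
    digest = hashlib.sha256(image).hexdigest()
    return digest in db and digest not in dbx

assert may_execute(b"trusted-bootloader-v2") is True
assert may_execute(b"trusted-bootloader-v1") is False  # trusted once, revoked
assert may_execute(b"rogue-bootloader") is False       # unknown, refused

# MSI's "Always Execute" default effectively replaces may_execute() with a
# function that returns True for everything.
```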

A researcher and student recently discovered that more than 300 motherboard models from Taiwan-based MSI, by default, aren’t implementing Secure Boot and are allowing any bootloader to run. The models work with various hardware and firmware, including many from Intel and AMD (the full list is here). The shortcoming was introduced sometime in the third quarter of 2021. The researcher accidentally uncovered the problem when attempting to digitally sign various components of his system.

“On 2022-12-11, I decided to setup Secure Boot on my new desktop with a help of sbctl,” Dawid Potocki, a Poland-born researcher who now lives in New Zealand, wrote. “Unfortunately I have found that my firmware was… accepting every OS image I gave it, no matter if it was trusted or not. It wasn’t the first time that I have been self-signing Secure Boot, I wasn’t doing it wrong.”

Potocki said he found no indication motherboards from manufacturers ASRock, Asus, Biostar, EVGA, Gigabyte, and NZXT suffer the same shortcoming.

The researcher went on to report that the broken Secure Boot was the result of MSI inexplicably changing its default settings. Users who want to implement Secure Boot—which really should be everyone—must access the settings on their affected motherboard. To do that, hold down the Del key while the device is booting up. From there, select the menu that says Security \ Secure Boot or something to that effect, and then select the Image Execution Policy submenu. If your motherboard is affected, Removable Media and Fixed Media will be set to “Always Execute.”

To fix, change “Always Execute” for these two categories to “Deny Execute.”

In a Reddit post published on Thursday, an MSI representative confirmed Potocki’s findings. The representative wrote:

We preemptively set Secure Boot as Enabled and “Always Execute” as the default setting to offer a user-friendly environment that allows multiple end-users flexibility to build their PC systems with thousands (or more) of components that included their built-in option ROM, including OS images, resulting in higher compatibility configurations. For users who are highly concerned about security, they can still set “Image Execution Policy” as “Deny Execute” or other options manually to meet their security needs.

The post said that MSI will release new firmware versions that will change the default settings to “Deny Execute.” The above-linked subreddit contains a discussion that may help users troubleshoot any problems.

As mentioned, Secure Boot is designed to prevent attacks in which an untrusted person surreptitiously gets brief access to a device and tampers with its firmware and software. Such hacks are usually known as “Evil Maid attacks,” but a better description is “Stalker Ex-Boyfriend attacks.”

Pioneering Apple Lisa goes “open source” thanks to Computer History Museum

The Apple Lisa 1, released in 1983.

Apple, Inc.

As part of the Apple Lisa’s 40th birthday celebrations, the Computer History Museum has released the source code for Lisa OS version 3.1 under an Apple Academic License Agreement. With Apple’s blessing, the Pascal source code is available for download from the CHM website after filling out a form.

Lisa Office System 3.1 dates back to April 1984, during the early Mac era, and it was the equivalent of operating systems like macOS and Windows today.

The entire source package is about 26MB and consists of over 1,300 commented source files, divided nicely into subfolders that denote code for the main Lisa OS, various included apps, and the Lisa Toolkit development system.


First released on January 19, 1983, the Apple Lisa remains an influential and important machine in Apple’s history, pioneering the mouse-based graphical user interface (GUI) that made its way to the Macintosh a year later. Despite its innovations, the Lisa’s high price ($9,995 retail, or about $30,300 today) and lack of application support held it back as a platform. A year after its release, the similarly capable Macintosh undercut it dramatically in price. Apple launched a major revision of the Lisa hardware in 1984, then discontinued the platform in 1985.

A screenshot of the Apple Lisa Office System.

The Lisa was not the first commercial computer to ship with a GUI, as some have claimed in the past—that honor goes to the Xerox Star—but Lisa OS defined important conventions that we still use in windowing OSes today, such as drag-and-drop icons, movable windows, the waste basket, the menu bar, pull-down menus, copy and paste shortcuts, control panels, overlapping windows, and even one-touch automatic system shutdown.

With the Lisa OS source release, researchers and educators will now be able to study how Apple developers implemented those historically important features four decades ago. Apple’s Academic license permits using and compiling the source code for “non-commercial, academic research, educational teaching, and personal study purposes only.”

The Computer History Museum had previously teased the release of the code in 2018, but after spending some time in review, they decided to hold back its release until the computer’s 40th birthday—the perfect gift to honor this important machine’s legacy.

1923 cartoon predicts 2023’s AI art generators

Excerpt of a 1923 cartoon that predicted a “cartoon dynamo” and “idea dynamo” that could create cartoon art automatically. The full cartoon is reproduced below.

In 1923, an editorial cartoonist named H.T. Webster drew a humorous cartoon for the New York World newspaper depicting a fictional 2023 machine that would generate ideas and draw them as cartoons automatically. It presaged recent advancements in AI image synthesis, one century later, that actually can create artwork automatically.

The vintage cartoon carries the caption “In the year 2023 when all our work is done by electricity.” It depicts a cartoonist standing by his drawing table and making plans for social events while an “idea dynamo” generates ideas and a “cartoon dynamo” renders the artwork.

Interestingly, this division of labor resembles today’s neural networks. In the actual 2023, the role of the “idea dynamo” would likely be played (albeit imperfectly) by a large language model like GPT-3, and the “cartoon dynamo” is most similar to an image-synthesis model like Stable Diffusion.

A 1923 cartoon by H.T. Webster captioned “In the year 2023 when all our work is done by electricity.”

In 2014, the blog Paleofuture profiled Webster’s work and this cartoon in particular, noting that at the start of the 1920s, only 35 percent of Americans had electricity at home. Electricity and the devices it powered represented a radical new way to get things done. Yesterday, someone on Reddit noticed the cartoon again, and it went viral on social media.

Interestingly, despite rapid advances in generative AI technology over the past two years, image-synthesis models aren’t that great at line art yet, as a cartoonist named Douglas Bonneville often notes on Twitter.

But improvements in AI models that master hand-drawn cartoons may be just around the corner. And unlike other early-1900s future projections that involved personal butterfly wings and citywide networks of pneumatic tubes, this prediction from Webster seems to hit fairly close to the mark.

More than 4,400 Sophos firewall servers remain vulnerable to critical exploits

More than 4,400 Internet-exposed servers are running versions of the Sophos Firewall that are vulnerable to a critical exploit that allows hackers to execute malicious code, a researcher has warned.

CVE-2022-3236 is a code-injection vulnerability allowing remote code execution in the User Portal and Webadmin of Sophos Firewalls. It carries a severity rating of 9.8 out of 10. When Sophos disclosed the vulnerability last September, the company warned it had been exploited in the wild as a zero-day. The security company urged customers to install a hotfix and, later on, a full-blown patch to prevent infection.

According to recently published research, more than 4,400 servers running the Sophos firewall remain vulnerable. That accounts for about 6 percent of all Sophos firewalls, security firm VulnCheck said, citing figures from a search on Shodan.

“More than 99% of Internet-facing Sophos Firewalls haven’t upgraded to versions containing the official fix for CVE-2022-3236,” VulnCheck researcher Jacob Baines wrote. “But around 93% are running versions that are eligible for a hotfix, and the default behavior for the firewall is to automatically download and apply hotfixes (unless disabled by an administrator). It’s likely that almost all servers eligible for a hotfix received one, although mistakes do happen. That still leaves more than 4,000 firewalls (or about 6% of Internet-facing Sophos Firewalls) running versions that didn’t receive a hotfix and are therefore vulnerable.”

The researcher said he was able to create a working exploit for the vulnerability based on technical descriptions in this advisory from the Zero Day Initiative. The research’s implicit warning: Should exploit code become public, there’s no shortage of servers that could be infected.

Baines urged Sophos firewall users to ensure they’re patched. He also advised users of vulnerable servers to check two log files for indicators of possible compromise: /logs/csc.log and /log/validationError.log. When either contains the_discriminator field in a login request, there was likely an attempt, successful or otherwise, to exploit the vulnerability, he said.
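
That check is easy to script. The sketch below scans the two log paths named in the advisory for the indicator string; the sample log line is invented, and the scan simply skips paths that don’t exist on the machine:

```python
from pathlib import Path

INDICATOR = "the_discriminator"
LOG_PATHS = ["/logs/csc.log", "/log/validationError.log"]

def find_indicator(text: str) -> bool:
    """Return True if any line of the log text contains the indicator."""
    return any(INDICATOR in line for line in text.splitlines())

def scan_logs(paths=LOG_PATHS):
    """Return the subset of log paths that contain the indicator."""
    hits = []
    for p in paths:
        path = Path(p)
        if path.exists() and find_indicator(path.read_text(errors="replace")):
            hits.append(p)
    return hits

# Invented example of a log line recording a suspicious login request:
sample = 'POST /userportal {"username": "x", "the_discriminator": "..."}'
assert find_indicator(sample) is True
assert find_indicator("ordinary login entry") is False
```

A hit is only an indicator of an exploit *attempt*; confirming an actual compromise requires deeper forensics.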

The silver lining in the research is that mass exploitation isn’t likely because of a CAPTCHA that must be completed during authentication by web clients.

“The vulnerable code is only reached after the CAPTCHA is validated,” Baines wrote. “A failed CAPTCHA will result in the exploit failing. While not impossible, programmatically solving CAPTCHAs is a high hurdle for most attackers. Most Internet-facing Sophos Firewalls appear to have the login CAPTCHA enabled, which means, even at the most opportune times, this vulnerability was unlikely to have been successfully exploited at scale.”

In a statement, Sophos officials wrote: “Sophos took immediate steps to remediate this issue with an automated hotfix sent out in September 2022. We also alerted users who don’t receive automatic hotfixes to apply the update themselves. The remaining 6% of the Internet-facing versions that Baines is guestimating in his article are running old, unsupported version of the software. This is a good opportunity to remind these users, as well as all users of any type of outdated software, to follow best security practices and upgrade to the most recent version available, like Sophos does on a regular basis with its customers.”

Artists file class-action lawsuit against AI image generator companies

A computer-generated gavel hovers over a laptop.

Some artists have begun waging a legal fight against the alleged theft of billions of copyrighted images used to train AI art generators to reproduce unique styles without compensating artists or asking for consent.

A group of artists represented by the Joseph Saveri Law Firm has filed a US federal class-action lawsuit in San Francisco against AI-art companies Stability AI, Midjourney, and DeviantArt for alleged violations of the Digital Millennium Copyright Act, violations of the right of publicity, and unlawful competition.

The artists taking action—Sarah Andersen, Kelly McKernan, Karla Ortiz—”seek to end this blatant and enormous infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work,” according to the official text of the complaint filed to the court.

Using tools like Stability AI’s Stable Diffusion, Midjourney, or the DreamUp generator on DeviantArt, people can type phrases to create artwork in styles similar to those of living artists. Since the mainstream emergence of AI image synthesis in the last year, AI-generated artwork has been highly controversial among artists, sparking protests and culture wars on social media.

A selection of images generated by Stable Diffusion. Knowledge of how to render them came from scraped images on the web.

One notable absence from the defendants named in the complaint is OpenAI, creator of the DALL-E image synthesis model that arguably got the ball rolling on mainstream generative AI art in April 2022. Unlike Stability AI, OpenAI has not publicly disclosed the exact contents of its training dataset, and it has commercially licensed some of its training data from companies such as Shutterstock.

Despite the controversy over Stable Diffusion, the legality of how AI image generators work has not been tested in court, although the Joseph Saveri Law Firm is no stranger to legal action against generative AI. In November 2022, the same firm filed suit against GitHub over its Copilot AI programming tool for alleged copyright violations.

Tenuous arguments, ethical violations

An assortment of robot portraits generated by Stable Diffusion as found on the Lexica search engine.

Alex Champandard, an AI analyst who has advocated for artists’ rights without dismissing AI tech outright, criticized the new lawsuit in several threads on Twitter, writing, “I don’t trust the lawyers who submitted this complaint, based on content + how it’s written. The case could do more harm than good because of this.” Still, Champandard thinks that the lawsuit could be damaging to the potential defendants: “Anything the companies say to defend themselves will be used against them.”

To Champandard’s point, we’ve noticed that the complaint includes several statements that potentially misrepresent how AI image synthesis technology works. For example, the fourth paragraph of section I says, “When used to produce images from prompts by its users, Stable Diffusion uses the Training Images to produce seemingly new images through a mathematical software process. These ‘new’ images are based entirely on the Training Images and are derivative works of the particular images Stable Diffusion draws from when assembling a given output. Ultimately, it is merely a complex collage tool.”

In another section that attempts to describe how latent diffusion image synthesis works, the plaintiffs incorrectly compare the trained AI model with “having a directory on your computer of billions of JPEG image files,” claiming that “a trained diffusion model can produce a copy of any of its Training Images.”

During the training process, Stable Diffusion drew from a large library of billions of scraped images. Using this data, its neural network statistically “learned” how certain image styles appear without storing exact copies of the images it has seen. In rare cases of images that are overrepresented in the dataset (such as the Mona Lisa), however, a type of “overfitting” can occur that allows Stable Diffusion to spit out a close representation of the original image.
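The scale mismatch makes the “directory of billions of JPEG image files” comparison easy to sanity-check: if the trained model actually stored its training images, each image would have to fit in roughly a byte or two. A rough arithmetic sketch, using approximate public figures for the checkpoint size and training-set size (both are assumptions on our part, not figures taken from the complaint):

```python
# Back-of-the-envelope check on the "directory of billions of JPEGs" claim.
# Assumed figures (not from the complaint): a Stable Diffusion v1 checkpoint
# is roughly 4 GB, and its LAION-derived training set contains roughly
# 2.3 billion image-text pairs.
model_size_bytes = 4 * 10**9       # ~4 GB checkpoint (assumption)
training_images = 2.3 * 10**9      # ~2.3 billion images (assumption)

bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.2f} bytes available per training image")
# Well under two bytes per image -- far too little to store even a
# thumbnail, let alone a full copy of each JPEG.
```

Whatever the exact figures, the conclusion is the same: the model’s weights are orders of magnitude too small to contain copies of the training data, which is why researchers describe the training process as extracting statistical patterns rather than archiving images.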

Ultimately, if trained properly, latent diffusion models always generate novel imagery and do not create collages or duplicate existing work—a technical reality that potentially undermines the plaintiffs’ argument of copyright infringement, though their argument that the AI image generators create “derivative works” is an open question without a clear legal precedent to our knowledge.

Some of the complaint’s other points, such as unlawful competition (by duplicating an artist’s style and using a machine to replicate it) and infringement on the right of publicity (by allowing people to request artwork “in the style” of existing artists without permission), are less technical and might have legs in court.

Despite its issues, the lawsuit comes after a wave of anger about the lack of consent from artists who feel threatened by AI art generators. By their own admission, the tech companies behind AI image synthesis have scooped up intellectual property to train their models without consent from artists. They’re already on trial in the court of public opinion, even if they’re eventually found compliant with established case law regarding the harvesting of public data from the Internet.

“Companies building large models relying on Copyrighted data can get away with it if they do so privately,” tweeted Champandard, “but doing it openly *and* legally is very hard—or impossible.”

Should the lawsuit go to trial, the courts will have to sort out the differences between ethical violations and alleged legal breaches. The plaintiffs hope to prove that AI companies benefit commercially and profit richly from using copyrighted images; they’ve asked for substantial damages and permanent injunctive relief to stop allegedly infringing companies from further violations.

When reached for comment, Stability AI CEO Emad Mostaque replied that the company had not received any information on the lawsuit as of press time.