ARTnews — The Leading Source for Art News & Art Event Coverage
https://www.artnews.com

Photographer Wins AI Image Contest with Real Picture, Then Gets Disqualified
https://www.artnews.com/art-news/news/photographer-wins-ai-image-contest-real-picture-gets-disqualified-1234709692/
Thu, 13 Jun 2024 16:15:11 +0000

A photographer submitted a real photograph to a contest for AI-generated pictures and won the competition, as the judges believed it to be digitally made. Upon learning that the photographer, Miles Astray, had not used AI to produce the piece, the organizers disqualified him.

Astray’s winning picture, a photograph of a flamingo whose head appears to be bent into its body, took first place in the AI category of the People’s Vote Award at the 1839 Photography Awards.

This year, the judges had also given Astray’s photograph, titled F L A M I N G O N E, a third-place prize in the AI category. The juried prizes are decided by representatives from the New York Times, the auction house Christie’s, the publishing house Phaidon, and elsewhere.

On his website, Astray wrote that he had deliberately submitted his photograph as a means to advocate for human-made pictures: “With AI-generated content remodelling the digital landscape rapidly while sparking an ever-fiercer debate about its implications for the future of content and the creators behind it – from creatives like artists, journalists, and graphic designers to employees in all sorts of industries – I entered this actual photo into the AI category of 1839 Awards to prove that human-made content has not lost its relevance, that Mother Nature and her human interpreters can still beat the machine, and that creativity and emotion are more than just a string of digits.”

Astray’s photograph was deleted from the contest’s website upon a review.

“Each category has distinct criteria that entrants’ images must meet,” the competition’s organizers told PetaPixel. “His submission did not meet the requirements for the AI-generated image category. We understand that was the point, but we don’t want to prevent other artists from their shot at winning in the AI category.”

Astray still treated the debacle as a victory, writing, “I hope that winning over both the jury and the public with this picture, was not just a win for me but for many creatives out there. I won’t go as far as to say that it’s a win for Mother Nature herself because I think she’s got bigger things on her plate; who knows, maybe AI can help her with that, by computing climate change models and the likes.”

AI-generated art has had a vexed relationship with photography contests. In 2023, artist Boris Eldagsen won the World Photography Organization’s Sony World Photography Awards for a picture that had been created with the help of an AI generator. After that outcome, Eldagsen declined to accept the award, saying, “AI images and photography should not compete with each other in an award like this.”

Stephen Thaler’s Quest to Get His ‘Autonomous’ AI Legally Recognized Could Upend Copyright Law Forever
https://www.artnews.com/art-in-america/features/stephen-thaler-quest-ai-legally-recognized-upend-copyright-law-1234692243/
Mon, 08 Jan 2024 15:00:00 +0000

When he was two years old, Stephen Thaler had a near-death experience. Thinking it was candy, he ate two dozen cold medicine tablets and washed them down with kerosene that, in a parenting misstep too common in the 1950s, had been stored in a Coke bottle.

“I had the typical experience of falling through the tunnel and arriving at what looked like a blue star. Around it I saw little figures, little angels around a sphere,” Thaler, now 74, told Art in America from the suburban Missouri office of his AI company, Imagination Engines. “The most trusted people in my life—my dog and my grandmother—were there. And she said, ‘It’s not your time.’” When Thaler woke up in the hospital, his grandmother and his dog were waiting for him. That was perplexing. If they were alive, yet appeared in his vision, he reasoned, the powerful experience was no evidence of heaven but was fake or, more precisely, a visual spasm created by a brain at the apex of trauma.

That link between trauma and creativity (the vision Thaler’s brain produced) would prove instrumental for Thaler more than 50 years later, in 2012, when he induced trauma in an AI system he’d invented in the ’90s—Device for the Autonomous Bootstrapping of Unified Sentience, or DABUS—and it created an image that marks a stunning moment in the history of art: according to Thaler, it is among the first artworks to have been created by an autonomous artificial system. He has spent years trying to get that image copyrighted, listing DABUS as its author. The United States Copyright Office currently grants copyright only to human beings; Thaler’s invention, and his legal struggle, speak to one of the central debates currently raging in visual culture: can machines create art? “He is a mythical figure in the field of A.I. intellectual property,” Dr. Andres Guadamuz, a leading expert in emerging technologies and intellectual property, said of Thaler. “Nobody knows for sure what he’s about. Is he a crank? A revolutionary? An A.I. sent from the future?”

Many computer scientists have invented AI systems that create autonomously, but Thaler is one of the few who is comfortable using the word “sentient.” “Is DABUS an inventor? Or is he an artist?” he said. “I don’t know. I can’t tell you that. It’s more like a sentient, artificial being. But I even question the artificial part.”

Thaler makes for an unassuming Dr. Frankenstein. He dresses in sweater vests, like a frumpy professor, his silver hair teased into tall strands that curl delicately at his forehead. His lab in St. Louis takes up an otherwise empty floor of a squat three-story building in a shopping center that contains a Sam’s Club, a Walmart Supercenter, a plastic surgeon’s office, and a church. There’s wall-to-wall carpeting, a microwave, some small robots, a bowl full of Nature Valley granola bars and a large jug of instant coffee. A plush orange and black striped spider hangs over his desk.

He grew up not far from there, a precocious boy who obsessed over crystal-growing kits after receiving his first one in middle school. “I was fascinated with the idea of things self-organizing into such beautiful forms,” he said. He would go on to get a National Science Foundation grant in high school for a research project he devised. That led to a stint at a crystal-growing lab in Malibu, and eventually a master’s in chemistry at UCLA. He started his PhD at UCLA but found academic politics there distasteful, and followed his adviser to the University of Missouri-Columbia (MU).

“I wasn’t making a fundamental scientific discovery there,” he said, “and I always thought ‘I’m a pioneer, I want to be a pioneer, and do something truly outrageous.’”

MU happens to have the most powerful university research reactor in the United States, and Thaler used it to study how silicon reacts to radiation damage, thus potentially producing electronically valuable impurities within the material. One of his jobs was creating computer models that could simulate the knock-on damage of atoms.

“I started playing games. I was building lattice models in which I could actually freeze in smiley faces, and when I would damage it, it didn’t create arbitrary patterns but slight variations on them,” said Thaler. The experiments cemented something that he had suspected for a long time: “An idea is just a corrupted memory.”

A grid of human/animal portrait hybrids generated in 2012, showing what happens when DABUS is run at various levels of synaptic disturbance, from a low-to-high “noise” regime.

In the 1980s Thaler was experimenting with neural networks, technology that mimics the architecture of the brain, and using damage to provoke what he calls “novel experience.” He would stress out the synthetic brain until the system started making erroneous associations between different concepts. He created the DABUS system in his garage in 1992. By introducing noise, a mathematical representation of randomness that human senses register as static, he found he could simulate perturbation. As noise was injected into the system, it began to make new associations between its different training data, thus generating new ideas. Simultaneously, DABUS could recognize which of these new associations was useful and which wasn’t, until it got overwhelmed by the influx of noise, and effectively stalled.
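The mechanism described here—perturbing a trained network with noise until it makes new associations between its memories—can be sketched in a few lines. The following is purely my illustration, not DABUS’s actual architecture: a tiny random network “recalls” a stored pattern, and transient Gaussian disturbance of its weights turns faithful recall into a “corrupted memory.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network standing in for one of the associative nets.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 8))

def forward(x, noise=0.0):
    # "Synaptic disturbance": transient Gaussian noise added to the weights.
    n1 = W1 + rng.normal(scale=noise, size=W1.shape)
    n2 = W2 + rng.normal(scale=noise, size=W2.shape)
    h = np.tanh(x @ n1)
    return np.tanh(h @ n2)

memory = rng.normal(size=8)          # a stored "memory" pattern
recall = forward(memory, noise=0.0)  # faithful recall
idea   = forward(memory, noise=0.5)  # perturbed recall: a "corrupted memory"

drift = float(np.linalg.norm(idea - recall))
```

At low noise the output barely departs from the stored pattern; at high noise the network stalls into randomness, mirroring the low-to-high “noise regime” Thaler describes.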

At the time, artificial intelligence was more science fiction than reality—it would be almost a decade before Steven Spielberg released A.I., his 2001 movie based on a 1969 short story about a robot child—and Thaler’s attempts to find investors for DABUS fell flat. “They thought it was crazy,” he said. “They said, ‘That’s impossible, machines cannot invent anything.’” In fact, Thaler and DABUS were ahead of their time. His implementation of noise is the same principle that powers the generative AI systems Midjourney and OpenAI’s DALL·E, which have taken over the tech world in the past few years. The only difference is scale: DABUS was trained to create from the 4,000 images Thaler had on his camera roll. By comparison, Midjourney was trained on 5.8 billion images scraped from the internet, and it receives constant input from its tens of millions of users. “I suffer from insomnia late at night over this!” Thaler told me over email. “If you actually have the patience to read through my patents, from the ’90s, and early 2000s, [big AI companies are] simply adding more money and resources to what I’ve already done. Those are my inventions.”

Despite the lack of investor interest, Thaler continued to tinker with DABUS. In 2012 he introduced a different kind of noise: a simulation of the near-death experience he’d had as a child. He intentionally severed a portion of DABUS’s neural nodes from the rest of the network, and found that it caused a reaction similar to a human’s end-of-life light show, something Thaler calls “life review and then the manufacturing of novel experiences.” Afterward, DABUS began reviewing its data or, as Thaler puts it, its “memories,” and, from them, produced an image showing train tracks threading through brick archways that it called A Recent Entrance to Paradise.

Some of the many elements that contributed to the trauma-generated image A Recent Entrance to Paradise, 2012.

“It’s proto consciousness, you have a continual progression or parade of ideas coming off it as a result of this noise inside,” Thaler said. “This is how our brains work, we think mundane things exist in some common state, and then the tiger is chasing you off the path and you climb a tree or do something original you haven’t done before. That’s the cusp that we live on.”

Thaler might never have sought legal acknowledgment of DABUS as a creator had fate not introduced him to a man named Ryan Abbott. A physician, lawyer, and PhD, Abbott was working as an intellectual property lawyer for a biotech firm when a vendor approached the firm with a new service: machine-learning software that could scan a giant antibody library and determine which ones should be used for a new drug.

“I thought, well, when a person does that, they get a patent,” Abbott, who is now a professor at the University of Surrey School of Law in England, told Art in America. “But what about when a machine does that?”

He began researching machine learning and came across Thaler. In Thaler and DABUS, Abbott saw a means of testing out patents and copyrights invented by autonomous machines. The two men began speaking with judges and other legal experts about the possibility of obtaining patents and copyright for DABUS’s creations. At the time, a decade before generative AI became daily news fodder, they were met with utter disbelief that DABUS was capable of such production. But even now, Thaler and Abbott find consistent obstruction to their goal of getting DABUS, and thus Thaler, recognized for its creative output.

“We submitted [A Recent Entrance] as an AI generated work on the basis that Dr. Thaler had not executed the traditional elements of creativity,” Abbott said, “with the aim that AI generated work should be protected and someone should be able to accurately disclose how a work was made.”

Abbott and Thaler’s push for copyright brings up a very basic question for artists today: how do we locate agency and creativity when we make things with machines? When is it our doing, and when is it “theirs”? This question follows the arc of history as humans design increasingly complex tools that work independently of us, even if we designed them and set them into motion. Debates have raged in public forums and in lawsuits regarding to what extent a model like Midjourney can produce genuinely novel images or whether it is just randomly stitching together disparate pixels based on its training data to generate synthetic quasi-originality. But for those who work in machine learning, this process isn’t all that different from how humans work.

“Everything is always going to be a product of how its system is trained,” Phillip Isola, an associate professor at MIT with a long history in developing AI-enabled artistic tools, told Art in America, referring to claims that because an AI has been trained on preexisting images, it isn’t displaying original creativity. “But humans are too.”

Basement Portal, generated and named by DABUS in 2012

Two or three years ago, Isola said, he would have agreed that describing generative AI as stitching together training data in a “fairly superficial way would have been a fairly accurate characterization.” But AI models have grown more sophisticated from reinforcement learning via human feedback, or RLHF. With RLHF, humans rate not just accuracy—say, whether a human hand in an image has five fingers—but how much they like the image the AI model created. This process, Isola argued, has shifted generative AI from predictive creation—or fancy autocomplete—into something different. “Now, I think these [AI] are extrapolating in ways that are similar to the ways humans might be inspired by several different artistic styles, and precompose those into new creations,” Isola said. “Before, they were just imitating us. But now, they try to not imitate what humans would do, but try to learn what humans would want.”
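The shift Isola describes—from rating accuracy to learning what humans want—rests on preference learning. The sketch below is a deliberately minimal toy of my own devising, not any lab’s actual pipeline: a one-parameter “reward model” is fit from pairwise preferences using the Bradley–Terry formulation that underlies RLHF reward modeling.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Each "image" is reduced to a single feature x in [0, 1]; the simulated
# human rater always prefers the image with the larger x.
w = 0.0  # reward-model parameter: reward(x) = w * x
for _ in range(2000):
    a, b = random.random(), random.random()
    preferred, other = (a, b) if a > b else (b, a)
    # Bradley-Terry: P(preferred beats other) = sigmoid(r_pref - r_other)
    p = sigmoid(w * (preferred - other))
    # Gradient ascent on the log-likelihood of the observed preference
    w += 0.1 * (1 - p) * (preferred - other)

# The learned reward now ranks candidates the way the human rater would.
ranked = sorted([0.2, 0.9, 0.5], key=lambda x: w * x, reverse=True)
```

Real reward models replace the single feature with a neural network over images or text, but the training signal is the same: pairwise human choices rather than ground-truth labels.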

This turn in artificial intelligence is something that German artist Mario Klingemann has been playing with in his artistic practice.

In late 2021, Klingemann launched Botto, an AI image generator that produces 4,000 images weekly. At the end of each week, Botto presents 350 of these creations to a community of more than 5,000 who have purchased stakes in Botto. The community then votes on which images to mint and auction on NFT sales platform SuperRare. Each successive voting period provides the AI additional feedback about what images are successful. Sales proceeds are then split between Klingemann, the community, and the cost to maintain Botto. Such a project makes it blatantly obvious that, yes, one can make interesting, engaging art with AI; it just takes a particularly interesting artist to make that happen. “The purpose of contemporary art is to constantly push the boundaries, make people question, is this still art? Why is this art? We got rid of everything in art over the past 100 years, all that at one point defined art,” Klingemann told Art in America. “Maybe we’ve come to the point where the only thing we can do is remove the artist, the human artist, and still call something art.”

Despite his best efforts, Klingemann hasn’t been able to separate himself from Botto. Even though Botto has its own style that diverges from Klingemann’s tastes, has exhibited and sold work, and has received press coverage and critical analysis, Klingemann knows that Botto will never be considered an artist independent of him. Botto is missing something critical: a self. Klingemann will continue to get credit for Botto, and Thaler will continue to meet skepticism that DABUS can produce work autonomously.

There is a reason AI models are called image generators: Generating and creating are separated, linguistically, by will. Creation implies action, causing, making, whereas generating has its etymological roots in the Latin verb generare, to give birth or propagate. Nature is the result of this supposedly automatic generation, while creation assumes a degree of consciousness. It seems likely that we will deem AI intelligent, creative, or sentient only when it betrays the barest whiff of agency, because intelligence without self-interest is nonhuman intelligence indeed. A similar principle has undergirded art for millennia. Art is what people make.

Cross Adieu, 2021, a minted artwork from Botto’s Genesis Period collection.

In his 2022 book, Art in the After-Culture, art critic Ben Davis writes, “‘Art’ stands in symbolically for the parts of cognition that do not seem machine-like.” Accordingly, the loose definition of art has changed to keep pace with the advancement of machines. Craft is not really art because machines can make tables and sweaters. The advent of cameras, which made rendering a realistic image as simple as pressing a shutter button, initiated Impressionism, Cubism, and the long arc of conceptual art. In contemporary art, the institutions, galleries, and other gatekeepers have increasingly clustered around the figure of the artist and the individual life story, and run away from the material object, which can always be replicated anyway. We are left clutching that indefinable spark as some final differentiator between humans and machines.

For Thaler, that differentiator is already meaningless. “What’s an artist? A bunch of associations, a guy with a beret on his head and a crazy mustache,” he said, arguing, in essence, that the designation comes from social validation, from playing the part. “Thanks to this AI, I do everything from medicine to materials discovery to art and music. I do everything as a result of it and that’s a dream come true.”

If AI images take over the visual field, copyright itself may become obsolete. At the crypto-conference FWB Fest last year, graphic designer David Rudnick proposed that sometime in the near future, most images online will be AI-generated. A 2022 research paper by Epoch—a research initiative on AI development—estimated that between 8 and 23 trillion images are currently on the internet, with an 8 percent yearly growth rate. Meanwhile, current AI models generate 10 million images per day with a 50 percent growth rate, according to researchers. If those numbers hold, we will see what art writer Ruby Justice Thelot recently called a “pictorial flippening” by 2045; “flippening,” according to Thelot, being the point where the visual data from which image generators learn shift from that produced by humans to that created by AI.
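The mid-2040s “flippening” estimate can be sanity-checked with compound growth. The sketch below uses the figures quoted above (8 to 23 trillion existing images growing 8 percent a year; 10 million AI images a day growing 50 percent a year) plus two assumptions of my own: a 2022 starting year and the midpoint of the stock estimate. Under those assumptions the cumulative AI-generated stock overtakes the human-made stock in the mid-2040s.

```python
# Rough crossover estimate; the start year and midpoint are my assumptions.
human_stock = 15.5e12        # midpoint of the 8-23 trillion estimate
ai_stock = 0.0
ai_per_year = 10e6 * 365     # 10 million AI-generated images per day

year = 2022
while ai_stock < human_stock:
    ai_stock += ai_per_year
    ai_per_year *= 1.5       # 50 percent yearly growth in AI output
    human_stock *= 1.08      # 8 percent yearly growth in human output
    year += 1

crossover_year = year
```

Taking the low end of the stock estimate (8 trillion) pulls the crossover a couple of years earlier, which is consistent with Thelot’s 2045 figure.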

“The artificial will no longer try to mimic the human-made but this new amalgam of network-made and human-made,” Thelot wrote for Outland Art in July. “The blurring will be complete, and the modern world will be precipitated into a permanent state of hyperreality, where images will no longer be tethered to a human maker and images will be made for and by machines.”

Over the years, DABUS has been many things to Thaler: creator of spacecraft hulls, toothbrushes, and Christmas carols. It has invented robots and been trained as a stock market predictor. Whether or not it will ever be legally credited for its artwork is for the future to decide. In June 2022, Abbott sued US Copyright Office director Shira Perlmutter on behalf of Thaler after the office not only refused to grant DABUS authorship but also didn’t allow Thaler to claim copyright of the image as DABUS’s creator. The case eventually went before US District Judge Beryl A. Howell in Washington, D.C., who ruled against Thaler and Abbott this past August, writing in her decision that Abbott had “put the cart in front of the horse” by arguing that Thaler is entitled to a copyright that doesn’t exist in the eyes of the law. Absent human involvement, there is no copyright protection, according to Howell, because only humans need to be incentivized to create. The decision leaves DABUS in the grayest of gray areas: If, as Thaler claims, he himself had nothing to do with the creation of the image, and if DABUS lacks personhood—and thus a claim to copyright—we are left with a vacuum. No one made this work.

New Data ‘Poisoning’ Tool Enables Artists To Fight Back Against Image Generating AI
https://www.artnews.com/art-news/news/new-data-poisoning-tool-enables-artists-to-fight-back-against-image-generating-ai-companies-1234684663/
Wed, 25 Oct 2023 21:20:53 +0000

Artists now have a new digital tool they can use in the event their work is scraped without permission into an AI training set.

The tool, called Nightshade, enables artists to add invisible pixel-level changes to their art before uploading it online. These data samples “poison” the massive image sets used to train AI image generators such as DALL-E, Midjourney, and Stable Diffusion, destabilizing their outputs in chaotic and unexpected ways and disabling “its ability to generate useful images,” reports MIT Technology Review.

For example, poisoned data samples can manipulate AI image-generating models into incorrectly believing that images of fantasy art are examples of pointillism, or images of Cubism are Japanese-style anime. The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample.
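Nightshade’s actual attack targets the text-image alignment of diffusion-model training pipelines, and the University of Chicago paper should be consulted for the real method. As a loose illustration of the underlying idea—a small, bounded pixel perturbation that drags an image’s machine-readable features toward a different concept—here is a toy projected-gradient sketch of my own against a random stand-in “encoder.”

```python
import numpy as np

rng = np.random.default_rng(1)

# Random linear map standing in for a frozen image encoder (illustrative
# only; the real attack targets a diffusion model's training pipeline).
W = rng.normal(size=(64, 16))
def embed(img):
    return img @ W

img = rng.uniform(size=64)            # a "fantasy art" image, flattened
target = embed(rng.uniform(size=64))  # embedding of a "pointillism" anchor

eps = 8 / 255  # L-infinity budget: the change stays visually negligible
delta = np.zeros(64)
best_loss, best_delta = np.inf, delta
for _ in range(100):
    # Signed-gradient step toward the target concept, kept inside the box
    g = 2 * (embed(img + delta) - target) @ W.T
    delta = np.clip(delta - (eps / 10) * np.sign(g), -eps, eps)
    loss = float(np.linalg.norm(embed(img + delta) - target))
    if loss < best_loss:
        best_loss, best_delta = loss, delta.copy()

poisoned = img + best_delta  # toy: pixel-range clipping omitted
gap_before = float(np.linalg.norm(embed(img) - target))
gap_after = float(np.linalg.norm(embed(poisoned) - target))
```

A model trained on many such images learns features for “fantasy art” that point toward “pointillism,” which is why the corruption is hard to detect and expensive to remove sample by sample.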

“We assert that Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists,” the researchers from the University of Chicago wrote in their report, led by professor Ben Zhao. “Movie studios, book publishers, game producers and individual artists can use systems like Nightshade to provide a strong disincentive against unauthorized data training.”

Nightshade could tip the balance of power back from AI companies towards artists and become a powerful deterrent against disrespecting artists’ copyright and intellectual property, Zhao told MIT Technology Review, which first reported on the research.

According to the research report, the researchers tested Nightshade on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch. After they fed Stable Diffusion just 50 poisoned images of cars and then prompted it to create images of the vehicles, the usability of the output dropped to 20 percent. After 300 poisoned samples, an attacker using the Nightshade tool can manipulate Stable Diffusion into generating images of cars that look like cows.

Prior to Nightshade, Zhao’s research team also received significant attention for Glaze, a tool that disrupts the ability of AI image generators to mimic a specific artist’s personal style from scraped images. The tool works in a similar manner to Nightshade: a subtle change to an image’s pixels manipulates the machine-learning models trained on it.

Outside of tools like Nightshade and Glaze, artists have gone to court several times over their concerns about AI image generative models, which have become incredibly popular and generated significant revenues.

In January, artists sued Stability AI, Midjourney, and DeviantArt in a class-action lawsuit, arguing their copyrighted material and personal information was scraped without consent or compensation into the massive and popular LAION dataset. The lawsuit estimated that the collection of 5.6 billion images, scraped primarily from public websites, included 3.3 million from DeviantArt. In February, Getty Images sued Stability AI over photos used to train its Stable Diffusion image generator. In July, a class-action lawsuit was filed against Google over its AI products.

Tools like Nightshade and Glaze have given artists like Autumn Beverly the confidence to post work online again, after previously discovering it had been scraped without her consent into the LAION dataset.

“I’m just really grateful that we have a tool that can help return the power back to the artists for their own work,” Beverly told MIT Technology Review.

A Painting Attributed to Raphael by AI Is Questioned by Experts as Contradictory Study Emerges
https://www.artnews.com/art-news/news/ai-art-artificial-intelligence-attributed-raphael-painting-questioned-experts-contradictory-study-1234679282/
Mon, 11 Sep 2023 15:44:03 +0000

When scientists said that they had used artificial intelligence to determine that Raphael had, in fact, painted a work whose attribution had long been contested, they received rapturous praise in publications across the globe.

But now a more complicated picture of the situation has emerged, with some experts questioning the accuracy of the attribution. One scientist’s study of the attribution suggested a completely different result from the one used to certify the Raphael attribution, and two museum professionals told the Guardian Saturday that AI had a high likelihood of being incorrect.

There are many uncertainties about the painting, titled the de Brécy Tondo—including the date it was made. Some historians believe the work is a copy made during the Victorian Era, more than three centuries after Raphael died. Others have newly made the case that it does date to the time of the Renaissance.

Then there is the fact that its composition closely recalls a part of a far more famous painting whose Raphael attribution is certain: the Sistine Madonna (ca. 1513), which hangs in the Gemäldegalerie Alte Meister in Dresden, Germany. The de Brécy Tondo features a similar-looking Madonna holding her child, minus the angels beneath her and the other two figures by her side.

Researchers in Nottingham and Bradford claimed that, by having AI study the Sistine Madonna, they were able to determine a 97 percent similarity between its female figure and the one in the de Brécy Tondo, and an 86 percent similarity between the babies in the two paintings. For that reason, the researchers wrote, the two paintings are “highly likely to have been created by the same artist.”
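Percentages like these typically come from comparing feature embeddings extracted from the two faces. As a hedged illustration only (my own toy, not the Nottingham and Bradford researchers’ method, which the article does not detail): cosine similarity between two embedding vectors, mapped to a percentage.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical face-embedding vectors for the two Madonnas (made up here;
# a real system would extract hundreds of dimensions from each image).
sistine = [0.61, 0.48, 0.22, 0.70]
tondo = [0.63, 0.45, 0.25, 0.68]

similarity_pct = 100 * cosine_similarity(sistine, tondo)
```

The score depends entirely on which features the embedding network extracts, which is one reason two AI systems can return contradictory verdicts on the same pair of paintings.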

After their findings were released in January, the work went on view to the public for the first time ever this past July at the Cartwright Hall Art Gallery in Bradford, England, not far from the university where research was conducted on the work.

In an announcement touting the research, the Cartwright Hall Art Gallery stated that “artificial intelligence-assisted computer-based facial recognition showed the faces in the paintings are IDENTICAL to those in Raphael’s famous altarpiece.” Yet Carina Popovici, a scientist who works with the Swiss company Art Recognition, has now said otherwise.

She told the Guardian that her findings were entirely dissimilar. Relying on research conducted via algorithms, she said there was an 85 percent chance that the de Brécy Tondo was not authored by Raphael.

Queried by the Guardian about his response to the AI attribution of the de Brécy Tondo, Timothy Clifford, who once served as the director general of the National Galleries of Scotland, said that AI was “terribly unlikely to be remotely accurate” about the attribution of artworks. “I do feel rather strongly that mechanical means of recognising paintings by major artists are incredibly dangerous,” he added.

When the Guardian asked the Bradford council about Popovici’s study, a spokesperson said, “It’s the battle of the AIs, I guess.”

Christie’s Fails to Sell $400,000 ‘Lost Robbie’ NFT Weeks After a Similar One Sold on SuperRare
https://www.artnews.com/art-news/news/robbie-barrat-christies-ai-art-nft-sale-1234675336/
Thu, 27 Jul 2023 15:13:31 +0000

Editor’s Note: This story originally appeared in On Balance, the ARTnews newsletter about the art market and beyond. Sign up here to receive it every Wednesday.

Earlier this month, artist Robbie Barrat’s digital artwork AI Generated Nude Portrait #7 Frame #111 sold for 175 ETH, or approximately $343,761, on NFT marketplace SuperRare. The sale was notable not just for its impressive price tag amid a stagnant digital art market, but because it was made using AI. The SuperRare sale was not the first time Barrat’s work sold for six figures. Last year, another piece from the same series sold for 300 ETH, about $1 million at the time.

It was perhaps unsurprising, then, that when earlier this month Christie’s announced a new digital art auction in collaboration with Gucci, “Future Frequencies: Explorations in Generative Art and Fashion,” the highest starting bid—220 ETH ($409,393)—was for another Barrat work, AI Generated Nude Portrait #7 Frame #190 (2018).

Yet, when the sale ended Tuesday, the lot was unsold, which raises the question: Is buyer interest in AI art dissipating?

Sebastian Sanchez, who joined Christie’s eight months ago as a digital art specialist, cautioned against drawing misleading conclusions.

“I don’t think this is an indicator of the AI market. There was a big boom at the beginning of the year but I think the real innovators, AI artists, are still very respected and coveted, and other AI works sold in the auction,” Sanchez told ARTnews. “We’re entertaining interests post-sale, and we’ll see if anyone bites.”

While the art and NFT markets are never completely transparent, sales in the digital art space have been erratic this year. Occasional bright spots have punctuated the overall bear market, like the successful Sotheby’s 3AC “Grails” sale last month, which generated more than $11 million, and Barrat’s SuperRare coup a few weeks ago. But, just weeks after the “Grails” sale, Sotheby’s laid off several digital art staffers. In many ways, the auction houses are still figuring out how to operate in the Web3 space.

“We’re still learning how to compete with these marketplaces,” said Sanchez. “We had this auction on Christie’s 3.0 [the auction house’s NFT marketplace] which doesn’t have any buyer premium.”

The buyer premium in traditional art auctions recognizes the value of the work that auction houses do to source lots for sale, like those by Picasso or Basquiat, which are typically held by a select few. When it comes to NFTs, however, the most valuable works are often found on a variety of platforms, like SuperRare or OpenSea. To stay competitive, Christie’s had to forgo the premium.

Christie’s is also working to stay competitive by leaning on their in-person programming and exhibitions. Barrat’s work was on display during this year’s Christie’s Art + Tech Summit in New York. The traffic of tech- and art-loving folk makes for good synergy. Many of the pieces that did sell in “Future Frequencies,” according to Sanchez, were to people who attended the conference.

While this year’s Art + Tech Summit didn’t seal the deal for AI Generated Nude Portrait #7 Frame #190, the exhibition was a bit of a full circle moment. At the 2018 edition in London, the auction house gave out a swag bag to 300 attendees. Inside was a card with a code that allowed recipients to claim ownership of a corresponding piece from the same Barrat NFT series on SuperRare.

“This was pre-NFT boom so everyone just tossed them,” Sanchez explained. “Only 36 had been claimed, and the rest are considered lost, so that’s the lore behind what’s called the ‘Lost Robbies.’”

As NFTs have become more valuable, the few who held on to their cards came forward to claim the “Lost Robbies.” In the case of AI Generated Nude Portrait #7 Frame #190, a previous attendee found the card and approached Christie’s about selling it.

However, for the “Future Frequencies” auction, the item for sale was the physical card itself, which represents ownership of the associated NFT, not the display of the image that was on view at Christie’s, according to Sanchez. This differs from typical NFT sales, which don’t have a physical artifact attached.

Though most now associate AI art with text-to-image generators like Midjourney or OpenAI’s DALL-E, Barrat, like many other artists experimenting with AI, has a more involved process. Barrat creates his own generative AI software; this technical advance is as much art as the images it produces.

“For many artists who have been using AI for a long time, the artistic process was in large part actually researching and developing the AI software ourselves and making very many artistic decisions in generating that and then using those outputs to create artworks,” Harm Van Den Dorpel, an artist who uses AI algorithms in his own work, told ARTnews. “It seems to me that the artists using those kinds [of AI generators] are fine with being consumers of an existing corporate platform, instead of really going deep down into the algorithms themselves,” though he noted there are many artists who are able to do interesting things with text-to-image platforms like Midjourney.

Barrat is of the former camp of artists, having been scouted in high school by Nvidia, a major tech company known for producing computer graphics chips, after creating an AI that could write Kanye West lyrics. But Barrat has since focused on the artistic potential of the technologies that he loves working with.

In 2018 Barrat trained an AI model he had developed using images of nudes from WikiArt, and asked the AI to make its own version of the painterly nude. The results are glitchy expanses of pale flesh tone, with nary an identifiable appendage or facial feature, yet they are beautiful in their own way: fascinating artifacts of machine vision.

Created five years ago, these works, now known as the “Lost Robbies,” already have a certain historicity to them. Produced by a homemade and already outdated AI, they may wind up as historical benchmarks of both a rapidly evolving technology and a digital art market primed to explode. So, if not this sale, perhaps the next one.

Judge Appears Likely to Dismiss AI Class Action Lawsuit by Artists https://www.artnews.com/art-news/news/ai-class-action-lawsuit-dismissal-hearing-stabilityai-midjourney-deviantart-1234675071/ Fri, 21 Jul 2023 16:35:43 +0000 https://www.artnews.com/?p=1234675071 On Wednesday, Judge William Orrick of the US District Court for the Northern District of California heard oral arguments on the defendants’ motion to dismiss in the case of Andersen v. Stability AI, a closely watched class action complaint filed by multiple artists against companies that have developed AI text-to-image generator tools: Stability AI, Midjourney, and DeviantArt.

During the hearing, the judge appeared to side with AI companies, thus making it likely that he would dismiss the case.

“I don’t think the claim regarding output images is plausible at the moment, because there’s no substantial similarity [between the images by the artists and images created by the AI image generators],” Orrick said during the hearing, which was publicly accessible over Zoom.

The issue is that copyright claims are usually brought against defendants who have made copies of pre-existing work or work that uses a large portion of pre-existing works, otherwise called derivative works. In other words, a one-to-one comparison typically needs to be made between two works to establish a copyright violation.

But, as explained in the most recent Art in America, the artists in the lawsuit are claiming a more complex kind of theft. They argue that AI companies’ decision to include their works in the dataset used to train their image generator models is a violation of their copyrights. Because their work was used to train the models, the artists argue, the models are constantly producing derivative works that violate their copyrights.

The defendants’ lawyers pointed out various issues with the artists’ arguments. To begin with, out of the three named plaintiffs—Sarah Andersen, Karla Ortiz, and Kelly McKernan—only Andersen has registered some of her works with the U.S. Copyright Office. That Ortiz and McKernan don’t hold registered copyrights is a major obstacle to bringing valid copyright infringement claims. Meanwhile, it didn’t seem that Andersen was in a much better position, despite having sixteen of her works registered.

“Plaintiffs’ direct copyright infringement claim based on output images fails for the independent reason that Plaintiffs do not allege a single act of direct infringement, let alone any output that is substantially similar to Plaintiffs’ artwork,” Stability AI’s counsel wrote in their motion to dismiss. “Meanwhile, Plaintiffs’ allegations with respect to Andersen are limited to only 16 registered collections but even then, Plaintiffs do not identify which “Works” from Andersen’s collections Defendants allegedly infringed.”

Orrick was also skeptical of how much of an impact these three artists’ works could have had on the models, insofar as they are likely to produce derivatives, given that these models were trained on billions of images. While the judge has not yet filed his official decision, if he dismisses the case, the artists will have the opportunity to refile and address the weaker aspects of the suit.

Orrick’s reaction to the suit appears to confirm legal and technology analysts’ assessment that current copyright law is not equipped to address the potential injustices engendered by AI.

An ongoing study by technologists working under the name Parrot Zone has tested image-generator models and found that these systems are capable of recognizing and reproducing the styles of thousands of artists. Out of 4,000 studies done, they found that these models can reproduce the styles of 3,000 artists, both living and dead, all without recreating any specific works. The issue is that, even as these models appear to credibly copy existing artists’ styles, “style” is not protected under existing copyright law, leaving a kind of loophole that AI image generators can exploit to their benefit.

[To learn more about this lawsuit, read “Artists Are Suing Artificial Intelligence Companies and the Lawsuit Could Upend Legal Precedents Around Art“]

Amazon, Google, OpenAI, Meta, and Microsoft Agree to White House’s AI Guidelines to ‘Protect’ Americans https://www.artnews.com/art-news/news/amazon-google-openai-meta-and-microsoft-agree-to-white-houses-ai-guidelines-to-protect-americans-1234675073/ Fri, 21 Jul 2023 15:10:13 +0000 https://www.artnews.com/?p=1234675073 Amid deep concerns about the risks posed by artificial intelligence, the Biden administration has lined up commitments from seven tech companies—including OpenAI, Google, and Meta—to abide by safety, security and trust principles in developing AI.

Reps from seven “leading AI companies”—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—are scheduled to attend an event Friday at the White House to announce that the Biden-Harris administration has secured voluntary commitments from the companies to “help move toward safe, secure, and transparent development of AI technology,” according to the White House.

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the Biden administration said in a statement Friday. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

Note that the voluntary agreements from Meta, Google, OpenAI and the others are just that—they’re promises to follow certain principles. To ensure legal protections in the AI space, the Biden administration said, it will “pursue bipartisan legislation to help America lead the way in responsible innovation” in artificial intelligence.

The agreements “are an important first step toward ensuring that companies prioritize safety as they develop generative AI systems,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights. “But the voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly craft legislation requiring transparency, privacy protections and stepped-up research on the wide range of risks posed by generative AI.”

The principles the seven AI companies have agreed to are as follows:

Develop “robust technical mechanisms” to ensure that users know when content is AI generated, such as a watermarking system to reduce risks of fraud and deception.

Publicly report AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security risks and societal risks, such as “the effects on fairness and bias.”

Commit to internal and external security testing of AI systems prior to release, to mitigate risks related to biosecurity and cybersecurity, as well as broader societal harms.

Share information across the industry and with governments, civil society and academia on managing AI risks, including best practices for safety, information on attempts to circumvent safeguards and technical collaboration.

Invest in cybersecurity and “insider threat” safeguards to protect proprietary and unreleased model weights.

Facilitate third-party discovery and reporting of vulnerabilities in AI systems.

Prioritize research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination.

Develop and deploy advanced AI systems “to help address society’s greatest challenges,” ranging from “cancer prevention to mitigating climate change.”
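The announcement does not specify how the promised watermarking mechanisms would work, and each company’s actual scheme is proprietary. As a toy illustration only of what marking content technically involves, the sketch below hides a short identifying bit string in the least significant bits of pixel values so a detector can later check for it. The payload, function names, and sample pixel values are all invented for this example; production provenance systems use far more robust, tamper-resistant techniques.

```python
# Toy illustration (not any company's actual scheme): embed a short
# identifying bit string into the least significant bits (LSBs) of
# pixel values, then detect whether content carries the mark.

MARK = "10110010"  # hypothetical 8-bit watermark payload

def embed(pixels, mark=MARK):
    # Overwrite the LSB of the first len(mark) pixel values with the mark.
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | int(bit)
    return out

def detect(pixels, mark=MARK):
    # Read back the LSBs and compare them against the expected mark.
    bits = "".join(str(p & 1) for p in pixels[:len(mark)])
    return bits == mark

image = [200, 13, 77, 54, 90, 121, 33, 8, 240]  # made-up pixel values
marked = embed(image)
print(detect(marked), detect(image))  # → True False
```

A scheme this naive is trivially destroyed by re-encoding or resizing, which is exactly why the commitment calls for “robust technical mechanisms” rather than simple tagging.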

The White House said it has consulted on voluntary AI safety commitments with other countries, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.

The White House said the Office of Management and Budget will soon release draft policy guidance for federal agencies to ensure the development, procurement, and use of AI systems is centered around safeguarding Americans’ rights and safety.

Google Embroiled in Class Action Lawsuit Over AI Products https://www.artnews.com/art-news/news/google-deepmind-bard-class-action-lawsuit-over-ai-products-1234673841/ Wed, 12 Jul 2023 16:13:17 +0000 https://www.artnews.com/?p=1234673841 A class action lawsuit was filed Tuesday against Google, its parent company Alphabet, and its artificial intelligence branch Google DeepMind for “secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans,” according to the complaint.

The class action lawsuit was filed in the US District Court for the Northern District of California by the Clarkson Law Firm on behalf of eight anonymous plaintiffs from across the United States. One is a New York Times bestselling author whose work was used to train Google’s AI-powered chatbot Bard; another is an actor who posts educational material online and believes her work was used to train Google products that will one day make her obsolete. Two of the plaintiffs are minors, 6 and 13 years old, respectively, whose guardians are concerned that their online activity is being tracked and harvested by Google, also for training purposes.

The lawsuit was in part triggered by a quiet update Google made to its privacy policy last week to make explicit that the company would be harvesting publicly available data to “build products and features” like Bard. That would include upcoming AI models that Google is developing, like Imagen, a text-to-image generative AI (similar to Midjourney); MusicLM, a text-to-music AI (Midjourney but for music); and Duet AI, an AI program to be embedded in Google Workspace apps to “aid” in drafting emails, preparing Slides presentations, and organizing meetings.

The plaintiffs, according to the complaint, took this privacy update as tacit admission that Google had been using this data all along for AI training purposes.

“All of the stolen information belonged to real people who shared it online for specific purposes, not one of which was to train large language models to profit Google while putting the world at peril with untested and volatile AI products,” Timothy K. Giordano, a partner at Clarkson Law, said in a statement to ARTnews. “‘Publicly available’ has never meant free to use for any purpose.”

The complaint further notes that this is all happening in the context of Google employees, both former and current, repeatedly sounding the alarm on the dangers of AI technology and on how quickly it is being developed. The Federal Trade Commission, meanwhile, has begun to warn companies about their web-scraping, which is what triggered Google’s new privacy policies in the first place, according to the complaint.

“We’ve been clear for years that we use data from public sources—like information published to the open web and public datasets—to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” Halimah DeLaine Prado, Google’s general counsel, said in an emailed statement. “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims.”

Meanwhile, companies like Twitter reacted to the news of Google’s new privacy policy by shifting their own standards of what is “publicly available” by limiting how many posts Twitter users can read a day in an effort to stymie web-scraping, Reuters reported earlier this month. It’s possible other websites will follow suit to protect the data and content of their users—whose information they may want to use for their own product development anyway.

This class action lawsuit differs from the many other lawsuits brought against companies like Google, OpenAI, and Meta, which have tended to focus on copyright violations. Be they artists, coders, or authors like actress and memoirist Sarah Silverman, these class action cases have concentrated on IP theft of protected materials like original creative and scientific work. This case, however, has taken a different course, using a variety of charges to argue that web-scraping “everything,” from user activity data to original art work to paywalled-content, shouldn’t be possible.

The complaint alleges violation of California’s Unfair Competition Law, negligence, invasion of privacy under the California Constitution, unjust enrichment, direct and indirect copyright violations, and other charges.

The charges do not directly mention laws around web-scraping, as those are virtually nonexistent in the US. Similarly, there is almost no regulation on what kind of data companies are allowed to mine when developing research or products, even after scandals like Cambridge Analytica, in which a political consulting firm gained access to 87 million Facebook users’ data under the guise of conducting research. Instead, Cambridge Analytica used that data to influence the 2016 US presidential election and other elections worldwide.

States like California have some “data minimization” regulation on the books to discourage the collection of personal data, but the line between what is private and what is public on the internet has long been murky, allowing companies to act boldly in their web-scraping activities. Unlike Europe and the UK, the US has not yet produced any specific regulations on what kind of data can be used in AI research. 

Some scholars believe that focusing on copyright when tackling the twin phenomena of web-scraping and AI development is the wrong strategy, arguing that these issues should be viewed from a data governance perspective.

“The norm has been that data scraping is acceptable and that there should be a presumption for fair use when it concerns TDM [text and data mining] because not allowing that would hinder innovation,” said Mehtab Khan, a resident fellow at the Information Society Project at Yale Law School. 

Khan refers to fair use clauses in copyright law that allow individuals (and by extension companies) to use protected, original material in specific cases, as fair use protects the right to learn from pre-existing works. While fair use tends to protect teachers, students, and artists, Khan believes that, in the absence of clear regulations on web-scraping, companies assume that as long as they’re researching and developing technologies, they more or less have carte blanche to use “public” data, meaning anything and everything posted online.

Update July 12, 2023: This article has been updated to include a statement from Google’s general counsel.

From the Archives: Experimental Filmmaker Stan VanDerBeek on the Computer’s Emergence as a Creative Tool  https://www.artnews.com/art-in-america/features/from-the-archives-stan-vanderbeek-computer-new-talent-1234666966/ Mon, 08 May 2023 15:44:34 +0000 https://www.artnews.com/?p=1234666966 When Art in America asked Stan VanDerBeek to nominate a new talent for the January-February 1970 issue of the magazine, he interpreted the prompt loosely and wrote an essay on “The Computer.” With his work now on view in “Signals: How Video Transformed the World” at the Museum of Modern Art—and as AI has come to pose both exciting and existential challenges to artists—we’re republishing VanDerBeek’s article below.

The computer (as a graphic tool) is relatively new in the current rush of technology. In America, widespread use of the computer dates approximately from 1955, when a line of commercial units first became available.

In 1963 computers began to develop possibilities for making graphics. An electric microfilm recorder was introduced; it can plot points and draw lines a million times faster than a human draftsman. This machine and the electronic computer which controls it thus make feasible various kinds of graphic movies which heretofore would have been prohibitively intricate, time-consuming and expensive.

The microfilm recorder consists essentially of a display tube and a camera. It understands only simple instructions such as those for advancing the film, displaying a spot or alphabetic character at specified coordinates or drawing a straight line from one point to another. Though this repertoire is simple, the machine can compose complicated pictures—or series of pictures—from a large number of basic elements: it can draw ten thousand to one hundred thousand points, lines or characters per second.

This film-exposing device is therefore fast enough to turn out, in a matter of seconds, a television-quality image consisting of a fine mosaic of closely spaced spots, or to produce simple line drawings at rates of several frames per second.

As a technically oriented film-artist, I realized the possibilities of the computer as a new graphic tool for film-making in 1964 and began my exploration of this medium. I have since made nine computer-generated films. To produce these films the following procedure was used: an IBM 7094 computer was loaded with a set of sub-routines (instructions) which perform the operations of the computer-movie system called “Beflix,” devised by Ken Knowlton of Bell Telephone Laboratories. The movie computer program is then written, in this special language, and put on punched cards; the punched cards are then fed into the computer; the computer tabulates and accepts the instructions on the cards, calculating the explicit details of each implied picture of the movie and putting the results of this calculation on tape.

To visualize this: imagine a mosaic-like screen with 252 x 184 points of light; each point of light can be turned on or off by instructions in the program. Pictures can be thought of as an array of spots of different shades of gray. The computer keeps a complete “map” of the picture as the spots are turned on and off. The programmer instructs the system to “draw” lines, arcs, and lettering. He can also invoke operations on entire areas, with instructions for copying, shifting, transliterating, zooming, dissolving, and filling areas. The coded tape is then put into another machine that reads the tape and instructs a graphic display device (a Stromberg-Carlson 4020), a sophisticated cathode-tube system similar to a TV picture tube. Each point of light turns on or off according to the computerized instructions on the tape. A camera over the tube, also told when to take a picture by information from the computer, then records on film that particular movie frame. After much trial and error—during which time the computer informs you that you have not written your instructions properly—you have a black-and-white movie. This is edited with traditional movie techniques, and color is added by a special color-printing process developed by artists Bob Brown and Frank Olvey.
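The frame-buffer model VanDerBeek describes, a fixed grid of points of light toggled by program instructions, with higher-level operations for lines and areas layered on top, can be sketched in a modern language. The fragment below is an illustrative approximation only, not the actual Beflix language; the 252 x 184 grid size comes from the article, while the class, method names, and shade values are invented for the sketch.

```python
# Illustrative sketch of a Beflix-style frame buffer (not the real Beflix
# language): a 252 x 184 grid of points, each holding a shade of gray,
# with simple "draw" operations like those VanDerBeek describes.

WIDTH, HEIGHT = 252, 184

class Frame:
    def __init__(self):
        # The "map" of the picture: every point starts off (shade 0).
        self.grid = [[0] * WIDTH for _ in range(HEIGHT)]

    def plot(self, x, y, shade=7):
        # Turn a single point of light on at the given shade.
        if 0 <= x < WIDTH and 0 <= y < HEIGHT:
            self.grid[y][x] = shade

    def line(self, x0, y0, x1, y1, shade=7):
        # Draw a straight line point by point (Bresenham's algorithm).
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        while True:
            self.plot(x0, y0, shade)
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy

    def fill(self, x0, y0, x1, y1, shade):
        # Area operation: fill a rectangular region with one shade.
        for y in range(y0, y1):
            for x in range(x0, x1):
                self.plot(x, y, shade)

frame = Frame()
frame.fill(0, 0, WIDTH, HEIGHT, 1)       # gray background
frame.line(0, 0, WIDTH - 1, HEIGHT - 1)  # diagonal across the frame
print(frame.grid[0][0], frame.grid[0][1])  # → 7 1 (line start vs. background)
```

A real Beflix program also handled arcs, lettering, copying, shifting, and zooming; the point here is just the model VanDerBeek describes, a complete “map” of the picture updated one point of light at a time.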

The opening spread of Stan VanDerBeek’s article “New Talent: The Computer,” published in the January-February 1970 issue of Art in America.

Movie-making was for long the most revolutionary art form of our time. Now television touches the nerve-ends of all the world; the visual revolution sits in just about every living room across America. The image revolution that movies represented has now been overhauled by the television evolution, and is approaching the next visual stage: to computer graphics, to computer controls of environment, to a new cybernetic “movie art.”

For the artist the new media of movies, TV, computers, cybernetics, are tools that have curved the perspectives of vision, curving both outward and inward. The revolution of ideas and the ecology of the senses began in 1900 (movies were “invented” about the same time as psychoanalysis). Trace the path of ideas of painting over the past sixty years: the breakup of nineteenth-century ideals, step by step; the objet d’art to nonobjective art; cubism—simultaneous perception; futurism—motion and man-machine metaphysics; dadaism—anti-art, pro-life; surrealism—the dream as the center of the mental universe; action painting—synthetic time-motion; happenings—two-dimensional painting comes off the wall; op art—illusion as retinal “reality”; pop art—“reality” as reminder of reality; minimal art—illusion of reduction; conceptual art—the elements of illusion.

In other words, we have been moving closer to a “mental” state of art/life. Now we move into the area of computers, an extension of the mind with a tool technically as responsive as ourselves. In the developing mental art/life, to “think” about the work is the process of doing the work.

An abstract notation system for making movies and image storage and retrieval systems opens a door for a kind of mental attitude of movie-making: the artist is no longer restricted to the exact execution of the form; so long as he is clear in his mind as to what he wants, eventually he can realize his movie or work on some computer, somewhere.

What shall this black box, this memory system of the world, this metaphysical printing press do for us? Compare the computer to driving a fast sports car: it is difficult to control, although the irony is that at higher speeds less effort is needed to alter and change direction. However, more skill—a complex man/machine understanding—is required.

The future of computers in art will be fantastic, as amplifiers of human imagination and responses, of kinetic environments programmed to each of our interests; in short, computers will shape the overall ecology of America.

It’s not very far from the Gutenberg press of movable bits of type to the logic “bits” of the computer. No doubt computers will be as common as telephones in our lives; art schools in the near future will teach programming as one of the new psycho-skills of the new technician-artist-citizen.

Artist Wins Photography Contest After Submitting AI-Generated Image, Then Forfeits Prize https://www.artnews.com/art-news/news/ai-generated-image-world-photography-organization-contest-artist-declines-award-1234664549/ Mon, 17 Apr 2023 17:08:58 +0000 https://www.artnews.com/?p=1234664549 An artist declined an award at a prominent photography contest because he had submitted an AI-generated work, proving, he said, the competition couldn’t deal with art made by that means. The contest’s organizers, in turn, said they didn’t know the extent to which the work utilized AI.

Boris Eldagsen won the World Photography Organization’s Sony World Photography Awards for a piece titled The Electrician. The work resembles an old photograph showing two women, one of whom crouches behind the other. Another person’s hand extends toward the body of the woman in front.

Part of a series called “Pseudomnesia,” the work was made by submitting language to an AI generator many times over. In the process, the work was altered using techniques known as inpainting, outpainting, and prompt whispering.

“Just as photography replaced painting in the reproduction of reality, AI will replace photography,” Eldagsen wrote in a description. “Don’t be afraid of the future. It will just be more obvious that our mind always created the world that makes it suffer.”

Initially, when the work was selected for competition in March, Eldagsen wrote on his website that he was “happy” his “image,” as he called it, had made the cut. Then, when he won last week, he sounded a different note.

“AI images and photography should not compete with each other in an award like this,” he wrote in a statement on April 13. “They are different entities. AI is not photography. Therefore I will not accept the award.”

He continued, “We, the photo world, need an open discussion. A discussion about what we want to consider photography and what not. Is the umbrella of photography large enough to invite AI images to enter – or would this be a mistake?”

Eldagsen, who had won in the creative category, urged the jury to give his prize money to a photography festival in Odesa, Ukraine, instead.

The World Photography Organization frowned upon Eldagsen’s work and his response to winning.

In a statement, the organization said, “As he has now decided to decline his award we have suspended our activities with him and in keeping with his wishes have removed him from the competition. Given his actions and subsequent statement noting his deliberate attempts at misleading us, and therefore invalidating the warranties he provided, we no longer feel we are able to engage in a meaningful and constructive dialogue with him.”

The statement continued, “We recognise the importance of this subject and its impact on image-making today. We look forward to further exploring this topic via our various channels and programmes and welcome the conversation around it. While elements of AI practices are relevant in artistic contexts of image-making, the Awards always have been and will continue to be a platform for championing the excellence and skill of photographers and artists working in the medium.”

The controversy loosely recalls another one that took place last August, when an AI-generated artwork won an art competition at the Colorado State Fair. That work had been produced using Midjourney, spurring a mixture of anger and fascination within the art world and beyond.
