Getty rejects AI-generated imagery because it doesn’t want to deal with any more copyright issues.
Getty Images will no longer accept any content created using artificial intelligence. In a message first sent to the site's contributors, Getty confirmed to Gizmodo that it will immediately reject any submissions made with AI image generators.
Getty made clear that any images produced with well-known programs like Stable Diffusion, DALL-E, and Midjourney are now prohibited. Additionally, all previously posted images that were produced with AI will be deleted. According to the Getty statement, the decision was made in response to the unresolved copyright issues surrounding AI-generated images, particularly "concerning the underlying material and metadata used to develop these models." Images that have been altered by hand with programs like Photoshop and Illustrator remain acceptable.

Getty CEO Craig Peters told The Verge that there were "unaddressed rights issues" with AI art, a particular problem for hosting sites that must uphold strict copyright control over the content they host. Expanding on the company's position in the image-hosting sector, Peters said, "Our firm has never been about the simplicity of creating imagery or the ensuing volume. Connecting and breaking through is key."

AI image generators are getting more and more sophisticated, but questions remain about how convincingly these systems can fool viewers. This week, OpenAI revealed that its DALL-E image generator can change people's appearance. Even though the company claims to use detection techniques to identify any violent or sexual content, some images will inevitably get through. Given that Getty has previously dealt with lawsuits over dubious copyright claims, it is likely trying to head off any negative consequences in the current atmosphere of uncertainty. In one recent instance, the CEO of a game company used AI-generated content to win a regional art competition. Getty representative Alex Lazarou told Gizmodo that the company's larger library contains only "very limited" AI-generated content so far, but that it already has "strong controls" in place to monitor it.
Another issue is the platform's ability to track AI-generated material. The Getty spokesperson said the company continues to develop technology to recognize people and other objects in photos uploaded to the site. Peters told The Verge that the company will have to rely on users flagging questionable images while it builds automatic filters, and that it is collaborating with the Coalition for Content Provenance and Authenticity, or C2PA, to establish image-validation mechanisms for the site.

However, open-source generators such as Stability AI's Stable Diffusion lack any genuine content filter and have been used to produce pornographic images. Furthermore, it doesn't take much to show how heavily Stable Diffusion depends on protected content. Lexica, which hosts thousands of user-generated images, is full of pictures bearing warped Getty Images watermarks. According to tech blogger Andy Baio, this AI system in particular is built on datasets gathered by LAION that include billions of internet-scraped photos. The system is said to have scraped numerous image-hosting sites, including Pinterest, Flickr, DeviantArt, and user-created blogs hosted on Blogspot and Tumblr. Additionally, of the 12 million images from LAION's dataset that Baio analyzed, almost 35,000 came from sources including Getty Images, VectorStock, and Shutterstock. Lazarou said the company was "communicating with other companies and communities to understand perspectives concerning the broader landscape, how the legal or regulatory bodies might address, and whether we might be helpful to resolve," but did not elaborate on how the company plans to respond to its images being used by AI systems.
Getty is not the only platform to ban AI-generated images. Newgrounds, the long-running platform for creative content, was one of the first to forbid AI-generated images last year. Fur Affinity, a well-known furry art community, told its users on September 5 that it was changing its policy to exclude AI art because it "lacked in creative worth," particularly since popular programs sample the work of other artists to produce content. InkBlot, a new site for hosting artwork, made clear this month that AI art is not acceptable there either.