Now anyone can use powerful AI tools for creating images. What could go wrong?

Want to see what a hybrid between a duck and a corgi looks like? Now you can conjure one up with artificial intelligence.

OpenAI announced on Wednesday that anyone can now use its AI-powered DALL-E software to create a seemingly endless range of images simply by typing in a few words, months after the startup began slowly rolling it out to users.

The move will expand access to a new generation of AI-powered tools that have challenged fundamental ideas about creativity and art. It could also raise fresh concerns about misuse.

OpenAI wrote in a blog post that lessons from “real-world” use had allowed it to improve its safety systems enough to open the tool to everyone. The company said its AI has been able to rebuff users’ attempts to create sexual, violent, or otherwise prohibited content.

The public now has access to three powerful, well-known AI systems that can turn a few words into an image. DALL-E 2 is not the only option: Midjourney was made publicly available in July, and Stable Diffusion was released by Stability AI in August. Each offers some free credits to users who want to try making images with AI online, though users aren’t required to use them.

These so-called generative AI systems are already being used for experimental films, magazine covers, and real-estate ads. An image generated with Midjourney recently won an art competition at the Colorado State Fair, causing a stir among artists.

In just a few short months, these AI systems have been adopted by millions. Midjourney’s Discord server has over 2.7 million users who can submit prompts. OpenAI said in its Wednesday blog post that it has more than 1.5 million active users, who collectively create more than 2 million images with its system every day. These tools can still be difficult to use well, and it may take several attempts before a prompt yields the right image.

Many of the images users have shared online in recent weeks show impressive results: otherworldly landscapes, paintings of French aristocrats as penguins, and a faux vintage photograph of a man walking a tardigrade.

Even industry veterans have been struck by the technology’s rapid rise and by the elaborate prompts people devise to produce images. Andrej Karpathy, who stepped down as Tesla’s director of AI in July, said that after getting access he initially felt “frozen” while trying to decide what to type, and eventually settled on “cat.”

“The art of prompts that the community has discovered over the last few months for text -> image models is astounding,” he said.

The technology has potential downsides, however. AI experts have raised concerns about the systems’ open-ended nature, which lets them generate all kinds of images from words, and warned that their ability to automate image-making means they can automate bias as well. One simple example: when I gave DALL-E 2 the prompt “a banker dressed for a big day at work,” the results were all images of middle-aged white men in suits.

“They’re letting people find loopholes in it by using it,” said Julie Carpenter, a research scientist and fellow at the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.

These systems could also be used for nefarious purposes, such as stoking fear or spreading disinformation via images that have been altered with AI or are entirely fabricated.

There are limits on what images users can make. OpenAI requires users of DALL-E 2 to agree to a content policy stating that they will not create, upload, or share any images “that are not G-rated” or that could cause harm. DALL-E 2 also won’t run prompts containing certain banned words. But it is possible to circumvent these limits by manipulating the wording: DALL-E 2 won’t process the prompt “a photo of a bird covered in blood,” but it will generate images for “a photo of a bird covered in viscous red fluid.” OpenAI itself has mentioned this sort of “visual synonym” in its documentation for DALL-E 2.

Chris Gilliard, a Just Tech Fellow at the Social Science Research Council, thinks the companies behind these image generators are severely underestimating the “endless creativity” of people looking to do harm with these tools.

“I feel like this is yet another example of people releasing tech that’s sorta half-baked in terms of figuring out how it’s going to be used to cause chaos and create harm,” he said, “and then hoping that later on there will be some way to address those harms.”

Some stock-image services, meanwhile, have begun banning AI-generated images from their sites to head off potential issues. Getty Images confirmed on Wednesday that it will not accept submissions of images created with generative AI models and will remove any submissions that used those models. The decision applies to Getty Images and its other stock-image services.

“There are open questions with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the underlying imagery and metadata used to train these models,” the company said in a statement.

Catching and restricting such images, however, could prove challenging.
