Exploring Artificially Generated Images Via Open Journey
It seems like a lot of what we’ve been waiting for regarding AI has become accessible all at once. As someone interested in the subject, I wanted to try an image generator and see what I could do with it before I dove into a large language model like ChatGPT. I chose Openjourney as my sandbox and decided to see what it would generate if I gave it parameters involving my local team, the Cincinnati Bengals. Within minutes I received some incredible results, which I share below.
After entering a simple set of parameters, I refreshed the model a few times to see what, if anything, would change on subsequent output. Each iteration gave me something slightly different. The output played with the basic colors of the Bengals uniform – orange, black, and white. Some played on the stripes; some played on the Bengal logo itself, offering a different image of a cat. It shifted the backgrounds and changed the look of the face mask, but interestingly enough, all of the helmets faced right even though I didn’t put that into the seed description. Note – I just used very simple commands. Someone with a better grasp of image description could get a lot more out of it than I could here.
Although a fun exercise, I am curious about the benefits and the drawbacks of these types of tools. The first benefit is obvious: people like myself without any design training can now whip up images for a project without much effort. This would’ve been very beneficial when I ran an outdoor blog and needed illustrations to break up blocks of text in my articles. I’d also suspect that if you wrote fiction, it would be fairly straightforward to whip up pictures of elves, robots, monsters, or anything else you fancy. In the past, your options were to use one of the free libraries, which weren’t that great, or to buy an image, which was just another expense in a venture that wasn’t generating much revenue to start.
The second benefit may lie in its ability to incubate design possibilities. For example, each of these helmets takes a new approach to the face guard. How helpful that is I don’t know, but it could act as a prompt for other, more pertinent design questions. For example, helmet safety and how a helmet can reduce concussions need serious attention, especially for youths. Maybe a tweak in design could transfer the impact away and out so the head receives less force in contact. Perhaps you could plug in images and let the generator whip up ideas for other safety equipment like car seats.
As far as drawbacks, it goes without saying these AIs are being trained on something, and the original digital artists most likely didn’t consent to it. Is this fair use of one’s artwork? If you use these images, are you potentially liable? My understanding is that these lawsuits are beginning to be filed and will work themselves through the courts. Congress probably needs to step up, but I’m not sure it has the expertise to do so. I am curious what the take is in the professional ranks and what pricing pressure this puts on their work. There’s already plenty of consternation among voice actors about being replaced by AI even now.
But beyond that, there’s one large negative aspect that I don’t think we’re ready to confront. Altogether, these tools are creating some incredible work. I’ve seen images from paid versions like DALL-E and Midjourney that are absolutely breathtaking. They’ve even solved that pesky hand problem, so who knows what the next iteration may bring to the table in a few years or even less. My guess is it will evolve into the live-action space, perhaps generating entire movies based on a script and a set of descriptions. Of course, this makes it virtually impossible to distinguish deepfakes, and we’ll have to deal with that as a society and a democracy.