Hayao Miyazaki is the co-founder of Studio Ghibli, a Japanese animation studio known worldwide for its stunning, emotionally resonant stories and films. At the core of Studio Ghibli’s work is a deep engagement with questions of humanity: what it means to be human, and how to care for one another and the world […]
Diffusion models and language models are both iterative in the same loose sense people call ‘autoregressive’: the output is fed back in as input until the task is complete.
With diffusion models, this means the model is fed an image that is 100% noise; it removes some small percentage of the noise, and the partially denoised image is fed back in so another small percentage can be removed. This repeats until a defined stopping point (usually a set number of passes).
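The loop described above can be sketched in a few lines. This is a toy, not a real diffusion model: the hard-coded `target` stands in for what a trained network would actually predict (the noise to subtract), and the step fraction stands in for the noise schedule. It only illustrates the feed-the-output-back-in structure.

```python
import random

def denoise_step(image, step_fraction=0.1):
    """One pass: remove a small fraction of the remaining 'noise'.

    In a real diffusion model, a trained network predicts the noise
    present in the image; here we fake that by nudging each pixel
    toward a fixed hypothetical clean value of 0.5.
    """
    target = [0.5] * len(image)
    return [px + step_fraction * (t - px) for px, t in zip(image, target)]

random.seed(0)
image = [random.random() for _ in range(8)]  # start from pure noise

# A set number of passes is the "defined stopping point".
for _ in range(50):
    image = denoise_step(image)  # output fed back in as input

# After 50 passes, every pixel has converged close to the target.
print(max(abs(px - 0.5) for px in image))
```

Each pass removes only 10% of the remaining deviation, which is why many passes are needed; real samplers trade off pass count against quality in much the same way.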
Combining images, and using one image to control the generation of another, has been available for quite a while. ControlNet and IP-Adapters let you do exactly that: ‘Put this coat on this person’ or ‘Take this picture and redo it in this style’. Here’s an 11-month-old YouTube video explaining how to do this using open-source models and software: https://www.youtube.com/watch?v=gmwZGC8UVHE
It’s nice for non-technical people that OpenAI will sell you a subscription to an agent that can perform this kind of image generation, but it’s not doing anything new in terms of image generation.
I know them, and I’ve used them a bit. I even mentioned them in an earlier comment. The capabilities of OpenAI’s new model are on a different level in my experience.
I can’t help but feel that people here either haven’t tried the new OpenAI image model, or have never actually used any of the existing AI image generators before.
https://www.reddit.com/r/StableDiffusion/comments/1jlj8me/4o_vs_flux/ - read the comments there. That’s a community dedicated to running local diffusion models. They’re familiar with all the tricks. They’re pretty damn impressed too.