
Machine Learning in Linux: ImaginAIry – Pythonic generation of images

In Operation

We can generate images and animations from the command line. In the examples below we generate a single image and an animation, but you can also chain text prompts together to produce multiple images or animations from one command.

$ imagine "Romantic painting of a ship sailing in a stormy sea, with dramatic lighting and powerful waves"

[Generated image: ship sailing in a stormy sea]

$ imagine --gif "an owl"

[Generated animation: an owl]
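
To produce several images in one run, you can pass multiple prompt strings to a single invocation (the prompts below are our own):

$ imagine "a watercolor fox" "a pencil sketch of a lighthouse"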

By default, the software uses Stable Diffusion v1.5.

The --model option lets you use many other models. Choose from Stable Diffusion 1.4, Stable Diffusion 1.5, Stable Diffusion 1.5 – Inpainting, Stable Diffusion 2.0, Stable Diffusion 2.0 – Depth, Stable Diffusion 2.0 – Inpainting, Stable Diffusion 2.0 v – 768×768, Stable Diffusion 2.1, Stable Diffusion 2.1 – Inpainting, Stable Diffusion 2.1 v – 768×768, Instruct Pix2Pix – Photo Editing, OpenJourney V1, OpenJourney V2, OpenJourney V4, or a path to custom weights.
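
For example, switching models is a single flag. The alias shown here (openjourney-v4) follows the project's README at the time of writing; check the documentation for the exact names your version accepts:

$ imagine --model openjourney-v4 "Romantic painting of a ship sailing in a stormy sea"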

Models are downloaded automatically on first use and stored in ~/.cache/huggingface/. You can also import your own models.

The software automatically adds a number of negative prompts. A negative prompt is the inverse of a regular prompt: it tells the model what not to generate. Negative prompts help eliminate unwanted details such as mangled hands, extra fingers, or blurry, out-of-focus output. You can also supply your own with the --negative-prompt option.
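
For instance (the wording of this negative prompt is our own):

$ imagine --negative-prompt "blurry, out of focus, mangled hands" "portrait of an astronaut"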

As you’d expect, there is a whole raft of other command-line options letting you set things like prompt strength, image height and width, upscaling, face fixing, the sampler, masks for inpainting, the number of times to repeat a render, and much more besides.
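
Here is a sketch combining a few of these flags; the values are illustrative, and imagine --help gives the authoritative list:

$ imagine --height 512 --width 512 --upscale --fix-faces --repeats 3 "portrait of a sea captain"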

Images are generated more quickly from a persistent shell session, started with the command $ aimg, because the model is not reloaded for every render. Besides saving time, this also gives you an interactive prompt. There is also a web interface, which is started with the command $ aimg server.
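
A typical session might look like this; the commands inside the shell are sketched from the project's README, and your version may differ:

$ aimg
imagine "a cozy cabin in a snowy forest"
exit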

There are too many other features to list exhaustively. Here are the highlights:

  • Image generation guided by ControlNet.
  • Image (re)colorization.
  • Instruction-based image editing with InstructPix2Pix.
  • Prompt-based masking with clipseg (see the example after this list).
  • Face enhancement with CodeFormer.
  • Upscaling with Real-ESRGAN; for example, upscale my-images/*.jpg in the aimg shell upscales a whole folder of images.
  • Tiled images.
  • Depth maps for “translations” of existing images.
  • Outpainting.
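
As a taste of prompt-based masking, the following sketch follows the pattern in the project's README; the file name and prompts are our own:

$ imagine --init-image portrait.jpg --mask-prompt "face" --mask-mode replace "a clown face"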

Summary

ImaginAIry is another extremely useful tool for generating Stable Diffusion images. The command line offers a great deal of power and flexibility. For example, with a single command you can generate a whole series of images for the same prompt using different generation models. Images can also be generated in code.
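
Here is a minimal sketch of generating images in code, assuming the imagine and ImaginePrompt names exported in the project's README; check the API of your installed version:

from imaginairy import imagine, ImaginePrompt

# Two prompts rendered in one pass; seeds pinned for reproducibility.
prompts = [
    ImaginePrompt("a ship in a stormy sea", seed=1),
    ImaginePrompt("an owl at dusk", seed=2),
]

# imagine() yields one result per prompt; save each to disk.
for i, result in enumerate(imagine(prompts)):
    result.save(f"output_{i}.jpg")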

The web interface is currently extremely basic and lacks the flexibility of the command line. If you’re looking for a web interface for Stable Diffusion, you’d be better served by Easy Diffusion, Stable Diffusion web UI, or InvokeAI.

Images and animations are saved to ~/outputs/ by default; this can be changed with the --outdir option.

Website: github.com/brycedrennan/imaginAIry
Support:
Developer: Bryce Drennan and many contributors
License: MIT License

For other useful open source apps that use machine learning/deep learning, we’ve compiled this roundup.

ImaginAIry is written in Python. Learn Python with our recommended free books and free tutorials.

Pages in this article:
Page 1 – Introduction and Installation
Page 2 – In Operation and Summary
