How to Run Stable Diffusion On Your Laptop

In the last year, several machine learning models that generate images from textual descriptions have become available to the public. This has been an interesting development in the AI space. However, most of these models have remained closed source for valid ethical reasons. Until now…

The latest of these models is Stable Diffusion, which is an open machine learning model developed by Stability AI to generate digital images from natural language descriptions.

Initial Notes

A couple of notes before we get to it. I tried several guides online and was unable to get a smooth experience with any of them. The main goal of this guide is to provide instructions for running Stable Diffusion on an Apple Silicon (M1) Mac.

Note: I didn’t follow the Mac guide in that repo, as by the time I found it I had already figured out most of the workarounds needed to get the model working.

Get the Code

Let’s start with getting the code. I am using InvokeAI’s fork of Stable Diffusion.

git clone https://github.com/nunocoracao/InvokeAI
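Everything below assumes you run commands from the repository root, so change into the clone first (the directory name below assumes git's default clone target):

```shell
# Enter the cloned repository; guarded so the line is harmless
# if you cloned into a different location
if [ -d InvokeAI ]; then
  cd InvokeAI
fi
```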

Get the Model

Now you need the actual model file containing the weights for the network. Go to Hugging Face’s site and log in, or create an account if you don’t have one. Accept the terms on the model card, and download the file called sd-v1-4-full-ema.ckpt.
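Once the download finishes, the fork needs the weights at a specific path inside the cloned repo — at the time of writing that is models/ldm/stable-diffusion-v1/model.ckpt, but check the repo's README in case the layout has changed. From the repo root (the SRC path below assumes the file landed in ~/Downloads):

```shell
# Create the folder the loader looks in, then move the downloaded weights
# there under the expected name (path per the fork's README; verify yours)
SRC="$HOME/Downloads/sd-v1-4-full-ema.ckpt"
mkdir -p models/ldm/stable-diffusion-v1
if [ -f "$SRC" ]; then
  mv "$SRC" models/ldm/stable-diffusion-v1/model.ckpt
fi
```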

Setup Environment

Install Xcode

The first step is to install Xcode:

xcode-select --install

Install Conda

Most of the solutions I’ve seen use Conda to manage the required packages. I ended up using Anaconda; download it from the Anaconda website and, once installed, confirm the command is available:

conda --version

Note: conda requires that both the python and pip commands are available in the terminal.
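A quick way to check that requirement before continuing — this loop only warns rather than aborting, since some systems expose the interpreters as python3/pip3 instead:

```shell
# Warn (without aborting) if either prerequisite is missing from PATH
for cmd in python pip; do
  command -v "$cmd" || echo "warning: $cmd not found on PATH" >&2
done
```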

Install Rust

When following some other guides, I kept hitting problems at the next part of the process. After many tries, I figured out that I was missing the Rust compiler:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Build and Turn On the Environment

Now we will create the invokeai environment and activate it:

PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml

If you need to rebuild the environment:

PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env update -f environment-mac.yml

If you are on an Intel Mac, the command is:

PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
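If you're not sure which of the two commands applies to your machine, uname reports the CPU architecture: arm64 means Apple Silicon (use osx-arm64), x86_64 means Intel (use osx-64):

```shell
# Prints the CPU architecture, which maps to the CONDA_SUBDIR value above
uname -m
```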

Now activate the environment and preload the models:

conda activate invokeai
python scripts/preload_models.py

Have Fun…

Now it’s time to play around with Stable Diffusion. Run:

python scripts/invoke.py --full_precision --web

Then open your browser at localhost:9090. I’ve been running mine with 512x512 images, around 100 steps for the final images, and a CFG (classifier-free guidance) scale of 7.5. As a sampler, I prefer the results from DDIM.

Disclaimer & Other Options

Even though I installed this on both a Mac and a Windows machine, the performance on the Windows machine with an Nvidia RTX 2070 was far better. There are a ton of options for running the Stable Diffusion model, some locally, some in the cloud (e.g., Google Colab), so don’t get frustrated if you want to try this out but don’t have access to a machine that can run it.
