Generative AI: Text to Images
Are you interested in learning more about AI? Don't forget to follow me on LinkedIn.
What is Stable Diffusion?
Process of Stable Diffusion
Let's Move to the Coding Part
Prerequisites:
- Click "Save".
- Click on Connect/Reconnect -> Connect to a hosted runtime.
July 1, 2023
First, we need to install some Python libraries and authenticate using an API token:
[1]: !pip install --upgrade huggingface_hub
Collecting huggingface_hub
Downloading huggingface_hub-0.15.1-py3-none-any.whl (236 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 236.8/236.8 kB 9.7 MB/s eta 0:00:00
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (3.12.2)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (2023.6.0)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (2.27.1)
Requirement already satisfied: tqdm>=4.42.1 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (4.65.0)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (4.6.3)
Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (23.1)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface_hub) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface_hub) (2023.5.7)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface_hub) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface_hub) (3.4)
Installing collected packages: huggingface_hub
Successfully installed huggingface_hub-0.15.1
Next, we need to authenticate using a token from your HuggingFace account. When you execute the code below, you will be prompted in your notebook to paste in your HuggingFace API token:
[2]: from huggingface_hub import notebook_login
notebook_login()
After successfully logging into your HuggingFace account, we're going to download the diffusers and transformers Python libraries:
[3]: !pip install -qq -U diffusers transformers
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 27.6 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.2/7.2 MB 83.8 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.8/7.8 MB 80.5 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 70.8 MB/s eta 0:00:00
We need to create a Stable Diffusion model pipeline so we can pass the model a text prompt and have it generate an image from that prompt. You might notice that one of the parameters we're passing is the path to a Stable Diffusion model hosted on HuggingFace. The examples in this post were tested with v1.5; you can try swapping in v2.1, the latest version at the time of writing:
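The pipeline cell itself did not survive this export cleanly, so here is a minimal sketch of what it looks like, assuming the runwayml/stable-diffusion-v1-5 checkpoint (stabilityai/stable-diffusion-2-1 would be the v2.1 swap). The download output that follows was produced by running that cell:
```
import torch
from diffusers import StableDiffusionPipeline

# Model id is an assumption -- the post only says a v1.5 checkpoint hosted on
# HuggingFace was used; swap in "stabilityai/stable-diffusion-2-1" to try v2.1.
model_id = "runwayml/stable-diffusion-v1-5"

# Load the pipeline weights in half precision to fit comfortably on a Colab GPU.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)

# Move the pipeline onto the GPU of the hosted runtime.
pipe = pipe.to("cuda")
```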
The cache for model files in Transformers v4.22.0 has been updated. Migrating
your old cache. This is a one-time only operation. You can interrupt this and
resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
Downloading (…)ain/model_index.json: 0%| | 0.00/541 [00:00<?, ?B/s]
Fetching 15 files: 0%| | 0/15 [00:00<?, ?it/s]
Downloading model.safetensors: 0%| | 0.00/492M [00:00<?, ?B/s]
Downloading (…)tokenizer/merges.txt: 0%| | 0.00/525k [00:00<?, ?B/s]
Downloading (…)rocessor_config.json: 0%| | 0.00/342 [00:00<?, ?B/s]
Downloading (…)_encoder/config.json: 0%| | 0.00/617 [00:00<?, ?B/s]
Downloading (…)cial_tokens_map.json: 0%| | 0.00/472 [00:00<?, ?B/s]
Downloading (…)_checker/config.json: 0%| | 0.00/4.72k [00:00<?, ?B/s]
Downloading (…)cheduler_config.json: 0%| | 0.00/308 [00:00<?, ?B/s]
Downloading model.safetensors: 0%| | 0.00/1.22G [00:00<?, ?B/s]
Downloading (…)tokenizer/vocab.json: 0%| | 0.00/1.06M [00:00<?, ?B/s]
Downloading (…)e6a/unet/config.json: 0%| | 0.00/743 [00:00<?, ?B/s]
Downloading (…)8e6a/vae/config.json: 0%| | 0.00/547 [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 0%| | 0.00/806 [00:00<?, ?B/s]
Downloading (…)ch_model.safetensors: 0%| | 0.00/3.44G [00:00<?, ?B/s]
Downloading (…)ch_model.safetensors: 0%| | 0.00/335M [00:00<?, ?B/s]
Cannot initialize model with low cpu memory usage because `accelerate` was not
found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is
strongly recommended to install `accelerate` for faster and less memory-intense
model loading. You can do so with:
```
pip install accelerate
```
.
`text_config_dict` is provided which will be used to initialize
`CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
# Initialize a prompt
prompt = "a peaceful beach at sunset"
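The generation cell is not shown in this export either; a minimal sketch of that step, reusing the `pipe` object created above:
```
# Run the text-to-image pipeline on the prompt and take the first generated image.
image = pipe(prompt).images[0]

# In a notebook, evaluating the PIL image displays it inline; you could also
# save it to disk, e.g. image.save("beach_sunset.png") (an example filename).
image
```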