PixArt-Sigma is a diffusion transformer model that is capable of image generation at 4K resolution. The model shows significant improvements over previous-generation PixArt models like PixArt-Alpha and other diffusion models through dataset and architectural improvements. AWS Trainium and AWS Inferentia are purpose-built AI chips to accelerate machine learning (ML) workloads, making them ideal for cost-effective deployment of large generative models. By using these AI chips, you can achieve optimal performance and efficiency when running inference with diffusion transformer models like PixArt-Sigma.
This post is the first in a series in which we will run multiple diffusion transformers on Trainium- and Inferentia-powered instances. In this post, we show how you can deploy PixArt-Sigma to Trainium- and Inferentia-powered instances.
Solution overview
The steps outlined below will be used to deploy the PixArt-Sigma model on AWS Trainium and run inference on it to generate high-quality images.
- Step 1 – Prerequisites and setup
- Step 2 – Download and compile the PixArt-Sigma model for AWS Trainium
- Step 3 – Deploy the model on AWS Trainium to generate images
Step 1 – Prerequisites and setup
To get started, you will need to set up a development environment on a trn1, trn2, or inf2 host. Complete the following steps:
- Launch a trn1.32xlarge or trn2.48xlarge instance with a Neuron DLAMI. For instructions on how to get started, refer to Get Started with Neuron on Ubuntu 22 with Neuron Multi-Framework DLAMI.
- Launch a Jupyter Notebook server. For instructions on setting up a Jupyter server, refer to the following user guide.
- Clone the aws-neuron-samples GitHub repository, for example with git clone https://github.com/aws-neuron/aws-neuron-samples.git.
- Navigate to the hf_pretrained_pixart_sigma_1k_latency_optimized.ipynb notebook.
The provided example script is designed to run on a Trn2 instance, but you can adapt it for Trn1 or Inf2 instances with minimal modifications. Specifically, within the notebook and in each of the component files under the neuron_pixart_sigma directory, you will find commented-out changes to accommodate Trn1 or Inf2 configurations.
Step 2 – Download and compile the PixArt-Sigma model for AWS Trainium
This section provides a step-by-step guide to compiling PixArt-Sigma for AWS Trainium.
Download the model
You will find a helper function in cache_hf_model.py in the above-mentioned GitHub repository that shows how to download the PixArt-Sigma model from Hugging Face. If you are using PixArt-Sigma in your own workload and prefer not to use the script included in this post, you can use the huggingface-cli to download the model instead.
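As a minimal alternative sketch (not part of the sample repository), you can also download the same checkpoint programmatically with the huggingface_hub Python library; the cache directory name here is an assumption that matches the one used later in this post:

# Minimal sketch: download the PixArt-Sigma checkpoint into a local cache
# directory. The cache_dir value is an assumption; change it as needed.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    cache_dir="pixart_sigma_hf_cache_dir_1024",
)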
The Neuron PixArt-Sigma implementation contains a few scripts and classes. The various files and scripts are broken down as follows:
├── compile_latency_optimized.sh # Full Model Compilation script for Latency Optimized
├── compile_throughput_optimized.sh # Full Model Compilation script for Throughput Optimized
├── hf_pretrained_pixart_sigma_1k_latency_optimized.ipynb # Notebook to run Latency Optimized PixArt-Sigma
├── hf_pretrained_pixart_sigma_1k_throughput_optimized.ipynb # Notebook to run Throughput Optimized PixArt-Sigma
├── neuron_pixart_sigma
│ ├── cache_hf_model.py # Model Download Script
│ ├── compile_decoder.py # VAE Decoder Compilation Script and Wrapper Class
│ ├── compile_text_encoder.py # Text Encoder Compilation Script and Wrapper Class
│ ├── compile_transformer_latency_optimized.py # Latency Optimized Transformer Compilation Script and Wrapper Class
│ ├── compile_transformer_throughput_optimized.py # Throughput Optimized Transformer Compilation Script and Wrapper Class
│ ├── neuron_commons.py # Base Classes and Attention Implementation
│ └── neuron_parallel_utils.py # Sharded Attention Implementation
└── requirements.txt
This notebook will help you download the model, compile the individual component models, and invoke the generation pipeline to generate an image. Although the notebooks can be run as a standalone sample, the next few sections of this post will walk through the key implementation details within the component files and scripts to support running PixArt-Sigma on Neuron.
For each component of PixArt (T5, Transformer, and VAE), the example uses Neuron-specific wrapper classes. These wrapper classes serve two purposes. The first purpose is to allow us to trace the models for compilation:
import torch.nn as nn
from transformers import T5EncoderModel

class InferenceTextEncoderWrapper(nn.Module):
    def __init__(self, dtype, t: T5EncoderModel, seqlen: int):
        super().__init__()
        self.dtype = dtype
        self.device = t.device
        self.t = t

    def forward(self, text_input_ids, attention_mask=None):
        # Return the encoder's last hidden state cast to the target dtype,
        # wrapped in a list so the traced output structure is fixed.
        return [self.t(text_input_ids, attention_mask)['last_hidden_state'].to(self.dtype)]
Refer to the neuron_commons.py file for all wrapper modules and classes.
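As a hypothetical usage sketch (the subfolder layout and seqlen value below are assumptions based on the PixArt-Sigma checkpoint, not taken from the sample), the wrapper can be instantiated around a Hugging Face T5 encoder like this:

# Hypothetical sketch: wrap the T5 text encoder so it exposes the fixed
# forward signature and output structure that tracing expects.
import torch
from transformers import T5EncoderModel

text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    subfolder="text_encoder",   # assumption: diffusers-style checkpoint layout
    torch_dtype=torch.bfloat16,
)
wrapper = InferenceTextEncoderWrapper(torch.bfloat16, text_encoder, seqlen=300)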
The second reason for using wrapper classes is to modify the attention implementation to run on Neuron. Because diffusion models like PixArt are typically compute-bound, you can improve performance by sharding the attention layer across multiple devices. To do this, you replace the linear layers with NeuronX Distributed's RowParallelLinear and ColumnParallelLinear layers:
from neuronx_distributed.parallel_layers import ColumnParallelLinear, RowParallelLinear
from transformers.models.t5.modeling_t5 import T5Attention

def shard_t5_self_attention(tp_degree: int, selfAttention: T5Attention):
    orig_inner_dim = selfAttention.q.out_features
    dim_head = orig_inner_dim // selfAttention.n_heads
    original_nheads = selfAttention.n_heads
    selfAttention.n_heads = selfAttention.n_heads // tp_degree
    selfAttention.inner_dim = dim_head * selfAttention.n_heads
    orig_q = selfAttention.q
    selfAttention.q = ColumnParallelLinear(
        selfAttention.q.in_features,
        selfAttention.q.out_features,
        bias=False,
        gather_output=False)
    selfAttention.q.weight.data = get_sharded_data(orig_q.weight.data, 0)
    del(orig_q)
    orig_k = selfAttention.k
    selfAttention.k = ColumnParallelLinear(
        selfAttention.k.in_features,
        selfAttention.k.out_features,
        bias=(selfAttention.k.bias is not None),
        gather_output=False)
    selfAttention.k.weight.data = get_sharded_data(orig_k.weight.data, 0)
    del(orig_k)
    orig_v = selfAttention.v
    selfAttention.v = ColumnParallelLinear(
        selfAttention.v.in_features,
        selfAttention.v.out_features,
        bias=(selfAttention.v.bias is not None),
        gather_output=False)
    selfAttention.v.weight.data = get_sharded_data(orig_v.weight.data, 0)
    del(orig_v)
    orig_out = selfAttention.o
    selfAttention.o = RowParallelLinear(
        selfAttention.o.in_features,
        selfAttention.o.out_features,
        bias=(selfAttention.o.bias is not None),
        input_is_parallel=True)
    selfAttention.o.weight.data = get_sharded_data(orig_out.weight.data, 1)
    del(orig_out)
    return selfAttention
Refer to the neuron_parallel_utils.py file for more details on the parallel attention implementation.
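For orientation, here is a minimal sketch of what the get_sharded_data helper used above might look like. This is an assumption based on the common tensor-parallel slicing pattern, not the sample's exact code, so check neuron_parallel_utils.py for the real implementation:

# Minimal sketch (assumption): slice a weight tensor along the given
# dimension so each tensor-parallel rank keeps only its own shard.
import torch
from neuronx_distributed.parallel_layers import parallel_state

def get_sharded_data(data: torch.Tensor, dim: int) -> torch.Tensor:
    tp_rank = parallel_state.get_tensor_model_parallel_rank()
    shard = data.shape[dim] // parallel_state.get_tensor_model_parallel_size()
    if dim == 0:
        return data[tp_rank * shard:(tp_rank + 1) * shard].clone()
    return data[:, tp_rank * shard:(tp_rank + 1) * shard].clone()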
Compile individual sub-models
The PixArt-Sigma model consists of three components. Each component is compiled so the entire generation pipeline can run on Neuron:
- Text encoder – A 4-billion-parameter encoder, which translates a human-readable prompt into an embedding. In the text encoder, the attention layers are sharded, along with the feed-forward layers, with tensor parallelism.
- Denoising transformer model – A 700-million-parameter transformer, which iteratively denoises a latent (a numerical representation of a compressed image). In the transformer, the attention layers are sharded, along with the feed-forward layers, with tensor parallelism.
- Decoder – A VAE decoder that converts our denoiser-generated latent to an output image. For the decoder, the model is deployed with data parallelism.
Now that the model definition is ready, you need to trace the model to run it on Trainium or Inferentia. You can see how to use the trace() function to compile the decoder component model for PixArt in the following code block:
compiled_decoder = torch_neuronx.trace(
    decoder,
    sample_inputs,
    compiler_workdir=f"{compiler_workdir}/decoder",
    compiler_args=compiler_flags,
    inline_weights_to_neff=False
)
Refer to the compile_decoder.py file for more details on how to instantiate and compile the decoder.
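As an illustrative sketch, the sample_inputs passed to trace() above could be built as a random latent. The shape here is an assumption for a 1024x1024 image with a 4-channel VAE latent (spatially downscaled by 8x); check compile_decoder.py for the values the sample actually uses:

# Hypothetical sketch: a random latent with the shape the VAE decoder
# expects for a 1024x1024 output image (assumed 4 channels, 8x downscale).
import torch

sample_inputs = torch.rand((1, 4, 128, 128), dtype=torch.bfloat16)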
To run models with tensor parallelism, a technique used to split a tensor into chunks across multiple NeuronCores, you need to trace with a pre-specified tp_degree. This tp_degree specifies the number of NeuronCores to shard the model across. The example then uses the parallel_model_trace API to compile the encoder and transformer component models for PixArt:
compiled_text_encoder = neuronx_distributed.trace.parallel_model_trace(
    get_text_encoder_f,
    sample_inputs,
    compiler_workdir=f"{compiler_workdir}/text_encoder",
    compiler_args=compiler_flags,
    tp_degree=tp_degree,
)
Refer to the compile_text_encoder.py file for more details on tracing the encoder with tensor parallelism.
Finally, you trace the transformer model with tensor parallelism:
compiled_transformer = neuronx_distributed.trace.parallel_model_trace(
    get_transformer_model_f,
    sample_inputs,
    compiler_workdir=f"{compiler_workdir}/transformer",
    compiler_args=compiler_flags,
    tp_degree=tp_degree,
    inline_weights_to_neff=False,
)
Refer to the compile_transformer_latency_optimized.py file for more details on tracing the transformer with tensor parallelism.
You will use the compile_latency_optimized.sh script to compile all three models as described in this post, so these functions will run automatically when you work through the notebook.
Step 3 – Deploy the model on AWS Trainium to generate images
This section walks through the steps to run inference with PixArt-Sigma on AWS Trainium.
Create a diffusers pipeline object
The Hugging Face diffusers library is a library for pre-trained diffusion models, and includes model-specific pipelines that bundle the components (independently trained models, schedulers, and processors) needed to run a diffusion model. The PixArtSigmaPipeline is specific to the PixArtSigma model, and is instantiated as follows:
pipe: PixArtSigmaPipeline = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    torch_dtype=torch.bfloat16,
    local_files_only=True,
    cache_dir="pixart_sigma_hf_cache_dir_1024")
Refer to the hf_pretrained_pixart_sigma_1k_latency_optimized.ipynb notebook for details on pipeline execution.
Load compiled component models into the generation pipeline
After each component model has been compiled, load them into the overall generation pipeline for image generation. The VAE model is loaded with data parallelism, which allows us to parallelize image generation across the batch size or multiple images per prompt. For more details, refer to the hf_pretrained_pixart_sigma_1k_latency_optimized.ipynb notebook.
vae_decoder_wrapper.model = torch_neuronx.DataParallel(
    torch.jit.load(decoder_model_path), [0, 1, 2, 3], False
)

text_encoder_wrapper.t = neuronx_distributed.trace.parallel_model_load(
    text_encoder_model_path
)
Finally, the loaded models are added to the generation pipeline:
pipe.text_encoder = text_encoder_wrapper
pipe.transformer = transformer_wrapper
pipe.vae.decoder = vae_decoder_wrapper
pipe.vae.post_quant_conv = vae_post_quant_conv_wrapper
Compose a prompt
Now that the model is ready, you can write a prompt to convey what kind of image you want generated. When creating a prompt, you should always be as specific as possible. You can use a positive prompt to convey what you want in your new image, including a subject, action, style, and location, and can use a negative prompt to indicate features that should be removed.
For example, you can use the following positive and negative prompts to generate a photo of an astronaut riding a horse on Mars without mountains:
# Subject: astronaut
# Action: riding a horse
# Location: Mars
# Style: photo
prompt = "a photo of an astronaut riding a horse on mars"
negative_prompt = "mountains"
Feel free to edit the prompt in your notebook using prompt engineering to generate an image of your choosing.
Generate an image
To generate an image, you pass the prompt to the PixArt model pipeline, and then save the generated image for later reference:
# pipe: variable holding the PixArt generation pipeline with each of
# the compiled component models
images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_images_per_prompt=1,
    height=1024,  # number of pixels
    width=1024,   # number of pixels
    num_inference_steps=25  # number of passes through the denoising model
).images

for idx, img in enumerate(images):
    img.save(f"image_{idx}.png")
Cleanup
To avoid incurring additional costs, stop your EC2 instance using either the AWS Management Console or the AWS Command Line Interface (AWS CLI).
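As a minimal sketch, you could also stop the instance programmatically with boto3; the instance ID and region below are placeholders, so substitute your own values:

# Minimal sketch: stop an EC2 instance with boto3. The instance ID and
# region are placeholder assumptions; replace them with your own.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])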
Conclusion
In this post, we walked through how to deploy PixArt-Sigma, a state-of-the-art diffusion transformer, on Trainium instances. This post is the first in a series focused on running diffusion transformers for different generation tasks on Neuron. To learn more about running diffusion transformer models with Neuron, refer to Diffusion Transformers.
About the Authors
Achintya Pinninti is a Solutions Architect at Amazon Web Services. He helps public sector customers, enabling them to achieve their objectives using the cloud. He specializes in building data and machine learning solutions to solve complex problems.
Miriam Lebowitz is a Solutions Architect focused on empowering early-stage startups at AWS. She leverages her experience with AI/ML to guide companies in selecting and implementing the right technologies for their business objectives, setting them up for scalable growth and innovation in the competitive startup world.
Sadaf Rasool is a Solutions Architect in Annapurna Labs at AWS. Sadaf collaborates with customers to design machine learning solutions that address their critical business challenges. He helps customers train and deploy machine learning models leveraging AWS Trainium or AWS Inferentia chips to accelerate their innovation journey.
John Gray is a Solutions Architect in Annapurna Labs, AWS, based out of Seattle. In this role, John works with customers on their AI and machine learning use cases, architects solutions to cost-effectively solve their business problems, and helps them build a scalable prototype using AWS AI chips.