Deploying your Large Language Model (LLM) is not necessarily the final step in productionizing your Generative AI application. An often forgotten, yet crucial part of the MLOps lifecycle is properly load testing your LLM and ensuring it is ready to withstand your expected production traffic. Load testing, at a high level, is the practice of testing your application, or in this case your model, with the traffic it would expect in a production setting to ensure that it is performant.
Previously we've discussed load testing traditional ML models using open-source Python tools such as Locust. Locust helps capture general performance metrics such as requests per second (RPS) and latency percentiles on a per-request basis. While this is effective with more traditional APIs and ML models, it doesn't capture the full story for LLMs.
LLMs traditionally have a much lower RPS and higher latency than traditional ML models due to their size and larger compute requirements. In general, the RPS metric does not provide the most accurate picture either, as requests can vary greatly depending on the input to the LLM. For instance, one query might ask to summarize a large chunk of text while another might require only a one-word response.
This is why tokens are seen as a much more accurate representation of an LLM's performance. At a high level, a token is a chunk of text: whenever an LLM processes your input, it "tokenizes" it. What exactly constitutes a token depends on the specific LLM you are using, but you can think of it as a word, a sequence of words, or a group of characters.
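To make this concrete, here is a minimal sketch of counting tokens in Python. It uses the open-source tiktoken tokenizer purely as an illustration; Claude and other hosted models use their own tokenizers, so actual counts will differ by model.

# Minimal sketch: counting tokens with an open-source tokenizer (tiktoken).
# Illustrative only; each LLM provider uses its own tokenizer, so token
# counts for Claude/Bedrock models will differ from what is shown here.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize the following paragraph in one sentence."
tokens = encoding.encode(prompt)

print(f"Number of tokens: {len(tokens)}")
print(f"Token IDs: {tokens}")
print(f"Token strings: {[encoding.decode([t]) for t in tokens]}")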
In this article we will explore how to generate token-based metrics so you can understand how your LLM is performing from a serving/deployment perspective. By the end you will have an idea of how to set up a load-testing tool to benchmark different LLMs, whether you are comparing many models, different deployment configurations, or a combination of both.
Let's get hands-on! If you are more of a video-based learner, feel free to follow my corresponding YouTube video down below:
NOTE: This article assumes a basic understanding of Python, LLMs, and Amazon Bedrock/SageMaker. If you are new to Amazon Bedrock, please refer to my starter guide here. If you want to learn more about SageMaker JumpStart LLM deployments, refer to the video here.
DISCLAIMER: I am a Machine Learning Architect at AWS and my opinions are my own.
Table of Contents
- LLM-Specific Metrics
- LLMPerf Intro
- Applying LLMPerf to Amazon Bedrock
- Additional Resources & Conclusion
LLM-Specific Metrics
As we briefly discussed in the introduction with regard to LLM hosting, token-based metrics generally provide a much better representation of how your LLM responds to different payload sizes or types of queries (summarization vs QnA).
Traditionally we have always tracked RPS and latency, which we will still see here, but more so at a token level. Here are some of the metrics to be aware of before we get started with load testing:
- Time to First Token: This is the duration it takes for the first token to be generated. It is especially relevant when streaming; for instance, when using ChatGPT we start processing information as soon as the first piece of text (token) appears.
- Total Output Tokens Per Second: This is the total number of tokens generated per second; you can think of it as a more granular alternative to the requests per second we traditionally track.
These are the major metrics that we'll focus on, along with a few others such as inter-token latency that will also be displayed as part of the load tests. Keep in mind that the parameters influencing these metrics include the expected input and output token sizes. We specifically play with these parameters to get an accurate understanding of how our LLM performs across different generation tasks, as illustrated in the sketch below.
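To make these definitions concrete, here is a small sketch (using hypothetical, illustrative timestamps) of how time to first token, output tokens per second, and inter-token latency can be derived from the timing of a streamed response:

# Illustrative sketch: deriving token-level metrics from hypothetical timestamps.
# Assume we recorded the request start time and the arrival time of each streamed token.
request_start = 0.0                                   # seconds
token_arrival_times = [0.8, 0.85, 0.91, 0.96, 1.02]   # hypothetical arrival times of 5 tokens

time_to_first_token = token_arrival_times[0] - request_start
total_generation_time = token_arrival_times[-1] - request_start
output_tokens_per_second = len(token_arrival_times) / total_generation_time

# Inter-token latency: average gap between consecutive tokens after the first one
gaps = [t2 - t1 for t1, t2 in zip(token_arrival_times, token_arrival_times[1:])]
inter_token_latency = sum(gaps) / len(gaps)

print(f"Time to first token: {time_to_first_token:.2f}s")
print(f"Output tokens per second: {output_tokens_per_second:.2f}")
print(f"Inter-token latency: {inter_token_latency:.3f}s")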
Now let's take a look at a tool that enables us to toggle these parameters and display the relevant metrics we need.
LLMPerf Intro
LLMPerf is built on top of Ray, a popular distributed computing framework for Python. LLMPerf specifically leverages Ray to create distributed load tests that simulate real-time, production-level traffic.
Note that any load-testing tool will only be able to generate your expected amount of traffic if the client machine it runs on has enough compute power to match the expected load. For instance, as you scale the concurrency or throughput expected for your model, you also want to scale the client machine(s) running your load test.
Now, specifically within LLMPerf, there are a few exposed parameters that are tailored for LLM load testing, as we've discussed:
- Model: This is the model provider and the hosted model that you're working with. For our use case it will be Amazon Bedrock and Claude 3 Sonnet specifically.
- LLM API: This is the API format in which the payload should be structured. We use LiteLLM, which provides a standardized payload structure across different model providers, simplifying the setup process, especially if we want to test different models hosted on different platforms.
- Input Tokens: The mean input token length; you can also specify a standard deviation for this number.
- Output Tokens: The mean output token length; you can also specify a standard deviation for this number.
- Concurrent Requests: The number of concurrent requests for the load test to simulate.
- Test Duration: You can control the duration of the test; this parameter is specified in seconds.
LLMPerf exposes all of these parameters through its token_benchmark_ray.py script, which we configure with our specific values. Let's now look at how we can configure this specifically for Amazon Bedrock.
Applying LLMPerf to Amazon Bedrock
Setup
For this example we'll be working in a SageMaker Classic Notebook Instance with a conda_python3 kernel and an ml.g5.12xlarge instance. Note that you want to select an instance with enough compute to generate the traffic load that you want to simulate. Also make sure that you have your AWS credentials available for LLMPerf to access the hosted model, be it on Bedrock or SageMaker.
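As an assumed setup step (the exact commands may differ; check the LLMPerf README for current instructions), you would clone the repository into the notebook environment and install it along with LiteLLM, for example:

%%sh
# Assumed environment setup; refer to the LLMPerf README for the latest instructions.
git clone https://github.com/ray-project/llmperf.git
pip install -e llmperf/
pip install litellm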
LiteLLM Configuration
We first configure our LLM API structure of choice, which is LiteLLM in this case. LiteLLM supports a variety of model providers; here we configure the completion API to work with Amazon Bedrock:
import os
from litellm import completion

os.environ["AWS_ACCESS_KEY_ID"] = "Enter your access key ID"
os.environ["AWS_SECRET_ACCESS_KEY"] = "Enter your secret access key"
os.environ["AWS_REGION_NAME"] = "us-east-1"

response = completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"content": "Who is Roger Federer?", "role": "user"}]
)
output = response.choices[0].message.content
print(output)
To work with Bedrock we point the model ID towards Claude 3 Sonnet and pass in our prompt. The neat part of LiteLLM is that the messages key has a consistent format across model providers.
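If you want a quick sanity check of time to first token before running a full load test, one option is LiteLLM's streaming mode. The sketch below is a rough, single-request measurement that assumes the same Bedrock model ID and AWS credentials configured above and simply times when the first streamed chunk arrives:

# Rough sketch: measuring time to first token for a single streamed request.
# Assumes the same Bedrock model ID and AWS credentials configured above.
import time
from litellm import completion

start = time.time()
stream = completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"content": "Who is Roger Federer?", "role": "user"}],
    stream=True,
)

first_token_time = None
chunk_count = 0
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        if first_token_time is None:
            first_token_time = time.time() - start
        chunk_count += 1

print(f"Time to first token: {first_token_time:.2f}s")
print(f"Streamed chunks received: {chunk_count}")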
Post-execution, we can focus on configuring LLMPerf for Bedrock specifically.
LLMPerf Bedrock Integration
To execute a load test with LLMPerf we can simply use the provided token_benchmark_ray.py script and pass in the parameters that we discussed earlier:
- Input Tokens Mean & Standard Deviation
- Output Tokens Mean & Standard Deviation
- Max number of completed requests for the test
- Duration of the test
- Concurrent requests
In this case we also specify our API format to be LiteLLM, and we can execute the load test with a simple shell script like the following:
%%sh
python llmperf/token_benchmark_ray.py \
--model bedrock/anthropic.claude-3-sonnet-20240229-v1:0 \
--mean-input-tokens 1024 \
--stddev-input-tokens 200 \
--mean-output-tokens 1024 \
--stddev-output-tokens 200 \
--max-num-completed-requests 30 \
--num-concurrent-requests 1 \
--timeout 300 \
--llm-api litellm \
--results-dir bedrock-outputs
In this case we keep the concurrency low, but feel free to toggle this number depending on what you're expecting in production. Our test will run for 300 seconds, and after that duration you should see an output directory with two files: one with statistics for each individual inference and one with the mean metrics across all requests for the duration of the test.
We can make this look a little neater by parsing the summary file with pandas:
import json
from pathlib import Path
import pandas as pd

# Load JSON files
individual_path = Path("bedrock-outputs/bedrock-anthropic-claude-3-sonnet-20240229-v1-0_1024_1024_individual_responses.json")
summary_path = Path("bedrock-outputs/bedrock-anthropic-claude-3-sonnet-20240229-v1-0_1024_1024_summary.json")

with open(individual_path, "r") as f:
    individual_data = json.load(f)
with open(summary_path, "r") as f:
    summary_data = json.load(f)

# Print summary metrics
df = pd.DataFrame(individual_data)
summary_metrics = {
    "Model": summary_data.get("model"),
    "Mean Input Tokens": summary_data.get("mean_input_tokens"),
    "Stddev Input Tokens": summary_data.get("stddev_input_tokens"),
    "Mean Output Tokens": summary_data.get("mean_output_tokens"),
    "Stddev Output Tokens": summary_data.get("stddev_output_tokens"),
    "Mean TTFT (s)": summary_data.get("results_ttft_s_mean"),
    "Mean Inter-token Latency (s)": summary_data.get("results_inter_token_latency_s_mean"),
    "Mean Output Throughput (tokens/s)": summary_data.get("results_mean_output_throughput_token_per_s"),
    "Completed Requests": summary_data.get("results_num_completed_requests"),
    "Error Rate": summary_data.get("results_error_rate")
}
print("Claude 3 Sonnet - Performance Summary:\n")
for k, v in summary_metrics.items():
    print(f"{k}: {v}")
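The individual-responses file loaded into df above can also be used for per-request percentiles. As a rough sketch, assuming the per-request records expose a ttft_s field (inspect the file for the exact column names in your LLMPerf version):

# Rough sketch: per-request percentiles from the individual responses DataFrame.
# Assumes a "ttft_s" column exists; check df.columns for the exact names
# produced by your LLMPerf version.
if "ttft_s" in df.columns:
    print("TTFT p50 (s):", df["ttft_s"].quantile(0.50))
    print("TTFT p95 (s):", df["ttft_s"].quantile(0.95))
    print("TTFT p99 (s):", df["ttft_s"].quantile(0.99))
else:
    print("Available columns:", list(df.columns))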
The final load test results will look something like the following:
Here we can see the input parameters that we configured, along with the corresponding results: time to first token (in seconds) and throughput in terms of mean output tokens per second.
In a real-world use case you might use LLMPerf across many different model providers and run tests across those platforms. Used holistically at scale, the tool can help you identify the right model and deployment stack for your use case.
Additional Resources & Conclusion
The entire code for the sample can be found at this associated GitHub repository. If you also want to work with SageMaker endpoints, you can find a Llama JumpStart deployment load testing sample here.
All in all, load testing and evaluation are both crucial to ensuring that your LLM is performant against your expected traffic before pushing to production. In future articles we'll cover not just the evaluation portion, but how to create a holistic test combining both components.
As always, thank you for reading, and feel free to leave any feedback and connect with me on LinkedIn and X.