Training and deploying massive AI models requires advanced distributed computing capabilities, but managing these distributed systems shouldn't be complex for data scientists and machine learning (ML) practitioners. The newly released command line interface (CLI) and software development kit (SDK) for Amazon SageMaker HyperPod simplify how you can use the service's distributed training and inference capabilities.
The SageMaker HyperPod CLI provides data scientists with an intuitive command line experience, abstracting away the underlying complexity of distributed systems. Built on top of the SageMaker HyperPod SDK, the CLI offers straightforward commands for common workflows like launching training or fine-tuning jobs, deploying inference endpoints, and monitoring cluster performance. This makes it ideal for quick experimentation and iteration.
For more advanced use cases requiring fine-grained control, the SageMaker HyperPod SDK enables programmatic access to customize your ML workflows. Developers can use the SDK's Python interface to precisely configure training and deployment parameters while maintaining the simplicity of working with familiar Python objects.
In this post, we demonstrate how to use both the CLI and SDK to train and deploy large language models (LLMs) on SageMaker HyperPod. We walk through practical examples of distributed training using Fully Sharded Data Parallel (FSDP) and model deployment for inference, showcasing how these tools streamline the development of production-ready generative AI applications.
Prerequisites
To follow the examples in this post, you must have the following prerequisites:
Because the use cases we demonstrate involve training and deploying LLMs with the SageMaker HyperPod CLI and SDK, you must also install the following Kubernetes operators in the cluster:
Install the SageMaker HyperPod CLI
First, install the latest version of the SageMaker HyperPod CLI and SDK (the examples in this post are based on version 3.1.0). From your local environment, run the following command (you can also install it in a Python virtual environment, as shown after the command):
# Install the HyperPod CLI and SDK
pip install sagemaker-hyperpod
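If you prefer an isolated setup, the following is a minimal sketch of installing the package inside a Python virtual environment (standard Python tooling; adjust the environment name and paths as needed):
# Create and activate a virtual environment, then install the CLI and SDK
python3 -m venv .hyperpod-venv
source .hyperpod-venv/bin/activate
pip install "sagemaker-hyperpod>=3.1.0"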
This command sets up the tools needed to interact with SageMaker HyperPod clusters. For an existing installation, make sure you have the latest version of the package installed (sagemaker-hyperpod>=3.1.0) to be able to use the relevant set of features. To verify that the CLI is installed correctly, you can run the hyp command and check the output:
# Check that the HyperPod CLI is correctly installed
hyp
The output will be similar to the following and includes instructions on how to use the CLI:
Usage: hyp [OPTIONS] COMMAND [ARGS]...
Options:
  --help  Show this message and exit.
Commands:
  create               Create endpoints or pytorch jobs.
  delete               Delete endpoints or pytorch jobs.
  describe             Describe endpoints or pytorch jobs.
  get-cluster-context  Get context related to the current set cluster.
  get-logs             Get pod logs for endpoints or pytorch jobs.
  get-monitoring       Get monitoring configurations for Hyperpod cluster.
  get-operator-logs    Get operator logs for endpoints.
  invoke               Invoke model endpoints.
  list                 List endpoints or pytorch jobs.
  list-cluster         List SageMaker Hyperpod Clusters with metadata.
  list-pods            List pods for endpoints or pytorch jobs.
  set-cluster-context  Connect to a HyperPod EKS cluster.
For more information on CLI usage and the available commands and their respective parameters, refer to the CLI reference documentation.
Set the cluster context
The SageMaker HyperPod CLI and SDK use the Kubernetes API to interact with the cluster. Therefore, make sure the underlying Kubernetes Python client is configured to execute API calls against your cluster by setting the cluster context.
Use the CLI to list the clusters available in your AWS account:
# List all HyperPod clusters in your AWS account
hyp list-cluster
[
   {
     "Cluster": "ml-cluster",
     "Instances": [
       {
         "InstanceType": "ml.g5.8xlarge",
         "TotalNodes": 8,
         "AcceleratorDevicesAvailable": 8,
         "NodeHealthStatus=Schedulable": 8,
         "DeepHealthCheckStatus=Passed": "N/A"
       },
       {
         "InstanceType": "ml.m5.12xlarge",
         "TotalNodes": 1,
         "AcceleratorDevicesAvailable": "N/A",
         "NodeHealthStatus=Schedulable": 1,
         "DeepHealthCheckStatus=Passed": "N/A"
       }
     ]
   }
]
Set the cluster context, specifying the cluster name as input (in our case, ml-cluster):
# Set the cluster context for subsequent commands
hyp set-cluster-context --cluster-name ml-cluster
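To confirm that the context has been set, you can retrieve it with the get-cluster-context command listed in the CLI help output:
# Verify the cluster context that the CLI and SDK will use
hyp get-cluster-context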
Train models with the SageMaker HyperPod CLI and SDK
The SageMaker HyperPod CLI provides a straightforward way to submit PyTorch model training and fine-tuning jobs to a SageMaker HyperPod cluster. In the following example, we schedule a Meta Llama 3.1 8B model training job with FSDP.
The CLI executes training using the HyperPodPyTorchJob Kubernetes custom resource, which is implemented by the HyperPod training operator that must be installed in the cluster, as discussed in the prerequisites section.
First, clone the awsome-distributed-training repository and create the Docker image that you will use for the training job:
cd ~
git clone https://github.com/aws-samples/awsome-distributed-training/
cd awsome-distributed-training/3.test_cases/pytorch/FSDP
Then, log in to Amazon Elastic Container Registry (Amazon ECR) to pull the base image and build the new container:
export AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
export ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
export REGISTRY=${ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/
docker build -f Dockerfile -t ${REGISTRY}fsdp:pytorch2.7.1 .
The Dockerfile in the awsome-distributed-training repository referenced in the preceding code already contains the HyperPod elastic agent, which orchestrates the lifecycle of training workers on each container and communicates with the HyperPod training operator. If you're using a different Dockerfile, install the HyperPod elastic agent following the instructions in HyperPod elastic agent.
Next, create a new registry for your training image if needed and push the built image to it:
# Create registry if needed
REGISTRY_COUNT=$(aws ecr describe-repositories | grep "fsdp" | wc -l)
if [ "$REGISTRY_COUNT" -eq 0 ]; then
    aws ecr create-repository --repository-name fsdp
fi
# Login to registry
echo "Logging in to $REGISTRY ..."
aws ecr get-login-password | docker login --username AWS --password-stdin $REGISTRY
# Push image to registry
docker image push ${REGISTRY}fsdp:pytorch2.7.1
After you have successfully created the Docker image, you can submit the training job using the SageMaker HyperPod CLI.
Internally, the SageMaker HyperPod CLI uses the Kubernetes Python client to build a HyperPodPyTorchJob custom resource and then create it on the Kubernetes cluster.
You can modify the CLI command for other Meta Llama configurations by exchanging the --args for the desired arguments and values; examples can be found in the Kubernetes manifests in the awsome-distributed-training repository, and a brief sketch follows the command below.
In the given configuration, the training job will write checkpoints to /fsx/checkpoints on the FSx for Lustre PVC.
hyp create hyp-pytorch-job \
    --job-name fsdp-llama3-1-8b \
    --image ${REGISTRY}fsdp:pytorch2.7.1 \
    --command '[
      hyperpodrun,
      --tee=3,
      --log_dir=/tmp/hyperpod,
      --nproc_per_node=1,
      --nnodes=8,
      /fsdp/train.py
    ]' \
    --args '[
      --max_context_width=8192,
      --num_key_value_heads=8,
      --intermediate_size=14336,
      --hidden_width=4096,
      --num_layers=32,
      --num_heads=32,
      --model_type=llama_v3,
      --tokenizer=hf-internal-testing/llama-tokenizer,
      --checkpoint_freq=50,
      --validation_freq=25,
      --max_steps=50,
      --checkpoint_dir=/fsx/checkpoints,
      --dataset=allenai/c4,
      --dataset_config_name=en,
      --resume_from_checkpoint=/fsx/checkpoints,
      --train_batch_size=1,
      --val_batch_size=1,
      --sharding_strategy=full,
      --offload_activations=1
    ]' \
    --environment '{"PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:32"}' \
    --pull-policy "IfNotPresent" \
    --instance-type ml.g5.8xlarge \
    --node-count 8 \
    --tasks-per-node 1 \
    --deep-health-check-passed-nodes-only false \
    --max-retry 3 \
    --volume name=shmem,type=hostPath,mount_path=/dev/shm,path=/dev/shm,read_only=false \
    --volume name=fsx,type=pvc,mount_path=/fsx,claim_name=fsx-claim,read_only=false
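For example, to iterate quickly you might swap only a few of the --args entries while leaving the rest of the command unchanged. The following snippet is illustrative only (the values are arbitrary, not a tuned configuration); complete argument sets for other Meta Llama model sizes are available in the Kubernetes manifests in the awsome-distributed-training repository:
# Illustrative only: replace these entries inside the --args list above for a shorter run
      --checkpoint_freq=10,
      --validation_freq=10,
      --max_steps=20,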
The hyp create hyp-pytorch-job command supports additional arguments, which can be discovered by running the following:
hyp create hyp-pytorch-job --help
The preceding example contains the following relevant arguments:
- --command and --args offer flexibility in setting the command to be executed in the container. The command executed is hyperpodrun, implemented by the HyperPod elastic agent that is installed in the training container. The HyperPod elastic agent extends PyTorch's ElasticAgent and manages the communication of the various workers with the HyperPod training operator. For more information, refer to HyperPod elastic agent.
- --environment defines environment variables and customizes the training execution.
- --max-retry indicates the maximum number of restarts at the process level that will be attempted by the HyperPod training operator. For more information, refer to Using the training operator to run jobs.
- --volume is used to map persistent or ephemeral volumes to the container.
If successful, the command will output the following:
Using version: 1.0
2025-08-12 10:03:03,270 - sagemaker.hyperpod.training.hyperpod_pytorch_job - INFO - Successfully submitted HyperPodPytorchJob 'fsdp-llama3-1-8b'!
You can follow the status of the training job through the CLI. Running hyp list hyp-pytorch-job will show the status first as Created and then as Running after the containers have been started:
NAME                     NAMESPACE       STATUS      AGE
--------------------------------------------------------------------------------
fsdp-llama3-1-8b         default         Running     6m
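Because the CLI created a HyperPodPyTorchJob custom resource, you can also inspect it directly with kubectl. This is a minimal sketch that assumes the resource type is exposed to kubectl as hyperpodpytorchjob; check kubectl api-resources if the name differs in your cluster:
# Inspect the underlying Kubernetes custom resource (resource name is an assumption)
kubectl get hyperpodpytorchjob fsdp-llama3-1-8b -n default -o yaml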
To list the pods that were created by this training job, run the following command:
hyp list-pods hyp-pytorch-job --job-name fsdp-llama3-1-8b
Pods for job: fsdp-llama3-1-8b
POD NAME                                NAMESPACE
----------------------------------------------------------------------
fsdp-llama3-1-8b-pod-0                  default
fsdp-llama3-1-8b-pod-1                  default
fsdp-llama3-1-8b-pod-2                  default
fsdp-llama3-1-8b-pod-3                  default
fsdp-llama3-1-8b-pod-4                  default
fsdp-llama3-1-8b-pod-5                  default
fsdp-llama3-1-8b-pod-6                  default
fsdp-llama3-1-8b-pod-7                  default
You can follow the logs of one of the training pods that get spawned by running the following command:
hyp get-logs hyp-pytorch-job --pod-name fsdp-llama3-1-8b-pod-0 --job-name fsdp-llama3-1-8b
...
2025-08-12T14:59:25.069208138Z [HyperPodElasticAgent] 2025-08-12 14:59:25,069 [INFO] [rank0-restart0] /usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/agent/server/api.py:685: [default] Starting worker group
2025-08-12T14:59:25.069301320Z [HyperPodElasticAgent] 2025-08-12 14:59:25,069 [INFO] [rank0-restart0] /usr/local/lib/python3.10/dist-packages/hyperpod_elastic_agent/hyperpod_elastic_agent.py:221: Starting workers with worker spec worker_group.spec=WorkerSpec(role="default", local_world_size=1, rdzv_handler=, fn=None, entrypoint="/usr/bin/python3", args=('-u', '/fsdp/train.py', '--max_context_width=8192', '--num_key_value_heads=8', '--intermediate_size=14336', '--hidden_width=4096', '--num_layers=32', '--num_heads=32', '--model_type=llama_v3', '--tokenizer=hf-internal-testing/llama-tokenizer', '--checkpoint_freq=50', '--validation_freq=50', '--max_steps=100', '--checkpoint_dir=/fsx/checkpoints', '--dataset=allenai/c4', '--dataset_config_name=en', '--resume_from_checkpoint=/fsx/checkpoints', '--train_batch_size=1', '--val_batch_size=1', '--sharding_strategy=full', '--offload_activations=1'), max_restarts=3, monitor_interval=0.1, master_port=None, master_addr=None, local_addr=None)...
2025-08-12T14:59:30.264195963Z [default0]:2025-08-12 14:59:29,968 [INFO] __main__: Creating Model
2025-08-12T15:00:51.203541576Z [default0]:2025-08-12 15:00:50,781 [INFO] __main__: Created model with total parameters: 7392727040 (7.39 B)
2025-08-12T15:01:18.139531830Z [default0]:2025-08-12 15:01:18 I [checkpoint.py:79] Loading checkpoint from /fsx/checkpoints/llama_v3-24steps ...
2025-08-12T15:01:18.833252603Z [default0]:2025-08-12 15:01:18,081 [INFO] __main__: Wrapped model with FSDP
2025-08-12T15:01:18.833290793Z [default0]:2025-08-12 15:01:18,093 [INFO] __main__: Created optimizer
We elaborate on more advanced debugging and observability features at the end of this section.
Alternatively, if you prefer a programmatic experience and more advanced customization options, you can submit the training job using the SageMaker HyperPod Python SDK. For more information, refer to the SDK reference documentation. The following code will yield the equivalent training job submission to the preceding CLI example:
import os

from sagemaker.hyperpod.training import HyperPodPytorchJob
from sagemaker.hyperpod.training import ReplicaSpec, Template, VolumeMounts, Spec, Containers, Resources, RunPolicy, Volumes, HostPath, PersistentVolumeClaim
from sagemaker.hyperpod.common.config import Metadata

REGISTRY = os.environ['REGISTRY']

# Define job specifications
nproc_per_node = "1"    # Number of processes per node

replica_specs = [
    ReplicaSpec(
        name = "pod",   # Replica name
        replicas = 8,
        template = Template(
            spec = Spec(
                containers =
                [
                    Containers(
                        # Container name
                        name="fsdp-training-container",

                        # Training image
                        image=f"{REGISTRY}fsdp:pytorch2.7.1",

                        # Volume mounts
                        volume_mounts=[
                            VolumeMounts(
                                name="fsx",
                                mount_path="/fsx"
                            ),
                            VolumeMounts(
                                name="shmem",
                                mount_path="/dev/shm"
                            )
                        ],
                        env=[
                            {"name": "PYTORCH_CUDA_ALLOC_CONF", "value": "max_split_size_mb:32"},
                        ],

                        # Image pull policy
                        image_pull_policy="IfNotPresent",
                        resources=Resources(
                            requests={"nvidia.com/gpu": "1"},
                            limits={"nvidia.com/gpu": "1"},
                        ),

                        # Command to run
                        command=[
                            "hyperpodrun",
                            "--tee=3",
                            "--log_dir=/tmp/hyperpod",
                            "--nproc_per_node=1",
                            "--nnodes=8",
                            "/fsdp/train.py"
                        ],

                        # Script arguments
                        args = [
                            '--max_context_width=8192',
                            '--num_key_value_heads=8',
                            '--intermediate_size=14336',
                            '--hidden_width=4096',
                            '--num_layers=32',
                            '--num_heads=32',
                            '--model_type=llama_v3',
                            '--tokenizer=hf-internal-testing/llama-tokenizer',
                            '--checkpoint_freq=2',
                            '--validation_freq=25',
                            '--max_steps=50',
                            '--checkpoint_dir=/fsx/checkpoints',
                            '--dataset=allenai/c4',
                            '--dataset_config_name=en',
                            '--resume_from_checkpoint=/fsx/checkpoints',
                            '--train_batch_size=1',
                            '--val_batch_size=1',
                            '--sharding_strategy=full',
                            '--offload_activations=1'
                        ]
                    )
                ],
                volumes = [
                    Volumes(
                        name="fsx",
                        persistent_volume_claim=PersistentVolumeClaim(
                            claim_name="fsx-claim",
                            read_only=False
                        ),
                    ),
                    Volumes(
                        name="shmem",
                        host_path=HostPath(path="/dev/shm"),
                    )
                ],
                node_selector={
                    "node.kubernetes.io/instance-type": "ml.g5.8xlarge",
                },
            )
        ),
    )
]

run_policy = RunPolicy(clean_pod_policy="None", job_max_retry_count=3)

# Create and start the PyTorch job
pytorch_job = HyperPodPytorchJob(
    # Job name
    metadata = Metadata(
        name="fsdp-llama3-1-8b",
        namespace="default",
    ),
    # Processes per node
    nproc_per_node = nproc_per_node,
    # Replica specs
    replica_specs = replica_specs,
)

# Launch the job
pytorch_job.create()
Debugging training jobs
In addition to monitoring the training pod logs as described earlier, there are several other helpful ways of debugging training jobs:
- You can submit training jobs with an additional --debug True flag, which will print the Kubernetes YAML to the console when the job starts so it can be inspected by users.
- You can view a list of current training jobs by running hyp list hyp-pytorch-job.
- You can view the status and corresponding events of the job by running hyp describe hyp-pytorch-job --job-name fsdp-llama3-1-8b.
- If the HyperPod observability stack is deployed to the cluster, run hyp get-monitoring --grafana and hyp get-monitoring --prometheus to get the Grafana dashboard and Prometheus workspace URLs, respectively, to view cluster and job metrics.
- To monitor GPU utilization or view directory contents, it can be helpful to execute commands or open an interactive shell into the pods. You can run commands in a pod by running, for example, kubectl exec -it <pod-name> -- nvtop to run nvtop for visibility into GPU utilization. You can open an interactive shell by running kubectl exec -it <pod-name> -- /bin/bash.
- The logs of the HyperPod training operator controller pod can have valuable information about scheduling. To view them, run kubectl get pods -n aws-hyperpod | grep hp-training-controller-manager to find the controller pod name and run kubectl logs -n aws-hyperpod <controller-pod-name> to view the corresponding logs.
Deploy models with the SageMaker HyperPod CLI and SDK
The SageMaker HyperPod CLI provides commands to quickly deploy models to your SageMaker HyperPod cluster for inference. You can deploy both foundation models (FMs) available on Amazon SageMaker JumpStart and custom models with artifacts stored on Amazon S3 or FSx for Lustre file systems.
This functionality automatically deploys the chosen model to the SageMaker HyperPod cluster through Kubernetes custom resources, which are implemented by the HyperPod inference operator that must be installed in the cluster, as discussed in the prerequisites section. It is optionally possible to automatically create a SageMaker inference endpoint as well as an Application Load Balancer (ALB), which can be used directly through HTTPS calls with a generated TLS certificate to invoke the model.
Deploy SageMaker JumpStart models
You can deploy an FM that is available on SageMaker JumpStart with the following command:
hyp create hyp-jumpstart-endpoint \
  --model-id deepseek-llm-r1-distill-qwen-1-5b \
  --instance-type ml.g5.8xlarge \
  --endpoint-name deepseek-distill-qwen-endpoint-cli \
  --tls-certificate-output-s3-uri s3://<bucket-name>/ \
  --namespace default
The preceding code includes the following parameters:
- --model-id is the model ID in the SageMaker JumpStart model hub. In this example, we deploy a DeepSeek R1-distilled version of Qwen 1.5B, which is available on SageMaker JumpStart.
- --instance-type is the target instance type in your SageMaker HyperPod cluster where you want to deploy the model. This instance type must be supported by the chosen model.
- --endpoint-name is the name that the SageMaker inference endpoint will have. This name must be unique. SageMaker inference endpoint creation is optional.
- --tls-certificate-output-s3-uri is the S3 bucket location where the TLS certificate for the ALB will be stored. This can be used to directly invoke the model through HTTPS. You can use S3 buckets that are accessible by the HyperPod inference operator IAM role.
- --namespace is the Kubernetes namespace the model will be deployed to. The default value is set to default.
The CLI supports more advanced deployment configurations, including auto scaling, through additional parameters, which can be viewed by running the following command:
hyp create hyp-jumpstart-endpoint --help
If successful, the command will output the following:
Creating JumpStart model and sagemaker endpoint. Endpoint name: deepseek-distill-qwen-endpoint-cli.
 The process may take a few minutes...
After a few minutes, both the ALB and the SageMaker inference endpoint will be available, which can be observed through the CLI. Running hyp list hyp-jumpstart-endpoint will show the status first as DeploymentInProgress and then as DeploymentComplete when the endpoint is ready to be used:
| name                               | namespace   | labels   | status             |
|------------------------------------|-------------|----------|--------------------|
| deepseek-distill-qwen-endpoint-cli | default     |          | DeploymentComplete |
To get more visibility into the deployment pod, run the following commands to find the pod name and inspect the corresponding logs:
hyp list-pods hyp-jumpstart-endpoint --namespace <namespace>
hyp get-logs hyp-jumpstart-endpoint --namespace <namespace> --pod-name <pod-name>
The output will look similar to the following:
2025-08-12T15:53:14.042031963Z WARN  PyProcess W-195-model-stderr: Capturing CUDA graph shapes: 100%|??????????| 35/35 [00:18<00:00,  1.63it/s]
2025-08-12T15:53:14.042257357Z WARN  PyProcess W-195-model-stderr: Capturing CUDA graph shapes: 100%|??????????| 35/35 [00:18<00:00,  1.94it/s]
2025-08-12T15:53:14.042297298Z INFO  PyProcess W-195-model-stdout: INFO 08-12 15:53:14 llm_engine.py:436] init engine (profile, create kv cache, warmup model) took 26.18 seconds
2025-08-12T15:53:15.215357997Z INFO  PyProcess Model [model] initialized.
2025-08-12T15:53:15.219205375Z INFO  WorkerThread Starting worker thread WT-0001 for model model (M-0001, READY) on device gpu(0)
2025-08-12T15:53:15.221591827Z INFO  ModelServer Initialize BOTH server with: EpollServerSocketChannel.
2025-08-12T15:53:15.231404670Z INFO  ModelServer BOTH API bind to: http://0.0.0.0:8080
You can invoke the SageMaker inference endpoint you created through the CLI by running the following command:
hyp invoke hyp-jumpstart-endpoint \
    --endpoint-name deepseek-distill-qwen-endpoint-cli \
    --body '{"inputs":"What is the capital of USA?"}'
You will get an output similar to the following:
{"generated_text": " What is the capital of France? What is the capital of Japan? What is the capital of China? What is the capital of Germany? What is"}
Alternatively, if you prefer a programmatic experience and advanced customization options, you can use the SageMaker HyperPod Python SDK. The following code will yield the equivalent deployment to the preceding CLI example:
from sagemaker.hyperpod.inference.config.hp_jumpstart_endpoint_config import Model, Server, SageMakerEndpoint, TlsConfig
from sagemaker.hyperpod.inference.hp_jumpstart_endpoint import HPJumpStartEndpoint

model = Model(
    model_id='deepseek-llm-r1-distill-qwen-1-5b',
)

server = Server(
    instance_type="ml.g5.8xlarge",
)

endpoint_name = SageMakerEndpoint(name="deepseek-distill-qwen-endpoint-cli")

tls_config = TlsConfig(tls_certificate_output_s3_uri='s3://<bucket-name>/')

js_endpoint = HPJumpStartEndpoint(
    model=model,
    server=server,
    sage_maker_endpoint=endpoint_name,
    tls_config=tls_config,
    namespace="default"
)

js_endpoint.create()
Deploy custom models
You can also use the CLI to deploy custom models with model artifacts stored on either Amazon S3 or FSx for Lustre. This is useful for models that have been fine-tuned on custom data. You must provide the storage location of the model artifacts as well as a container image for inference that is compatible with the model artifacts and SageMaker inference endpoints. In the following example, we deploy a TinyLlama 1.1B model from Amazon S3 using the DJL Large Model Inference container image.
In preparation, download the model artifacts locally and push them to an S3 bucket:
# Install huggingface-hub if not present on your machine
pip install huggingface-hub
# Download model
hf download TinyLlama/TinyLlama-1.1B-Chat-v1.0 --local-dir ./tinyllama-1.1b-chat
# Upload to S3
aws s3 cp ./tinyllama-1.1b-chat s3://<bucket-name>/models/tinyllama-1.1b-chat/ --recursive
Now you can deploy the model with the following command:
hyp create hyp-custom-endpoint \
  --endpoint-name my-custom-tinyllama-endpoint \
  --model-name tinyllama \
  --model-source-type s3 \
  --model-location models/tinyllama-1.1b-chat/ \
  --s3-bucket-name <bucket-name> \
  --s3-region <region> \
  --instance-type ml.g5.8xlarge \
  --image-uri 763104351884.dkr.ecr.us-west-2.amazonaws.com/djl-inference:0.33.0-lmi15.0.0-cu128 \
  --container-port 8080 \
  --model-volume-mount-name modelmount \
  --tls-certificate-output-s3-uri s3://<bucket-name>/ \
  --namespace default
The preceding code contains the following key parameters:
- --model-name is the name of the model that will be created in SageMaker
- --model-source-type specifies either fsx or s3 for the location of the model artifacts
- --model-location specifies the prefix or folder where the model artifacts are located
- --s3-bucket-name and --s3-region specify the S3 bucket name and AWS Region, respectively
- --instance-type, --endpoint-name, --namespace, and --tls-certificate-output-s3-uri behave the same as for the deployment of SageMaker JumpStart models
Similar to SageMaker JumpStart model deployment, the CLI supports more advanced deployment configurations, including auto scaling, through additional parameters, which you can view by running the following command:
hyp create hyp-custom-endpoint --help
If successful, the command will output the following:
Creating sagemaker model and endpoint. Endpoint name: my-custom-tinyllama-endpoint.
 The process may take a few minutes...
After a few minutes, both the ALB and the SageMaker inference endpoint will be available, which you can observe through the CLI. Running hyp list hyp-custom-endpoint will show the status first as DeploymentInProgress and then as DeploymentComplete when the endpoint is ready to be used:
| name                         | namespace   | labels   | status             |
|------------------------------|-------------|----------|--------------------|
| my-custom-tinyllama-endpoint | default     |          | DeploymentComplete |
To get more visibility into the deployment pod, run the following commands to find the pod name and inspect the corresponding logs:
hyp list-pods hyp-custom-endpoint --namespace <namespace>
hyp get-logs hyp-custom-endpoint --namespace <namespace> --pod-name <pod-name>
The output will look similar to the following:
INFO  PyProcess W-196-model-stdout: INFO 08-12 16:00:36 [monitor.py:33] torch.compile takes 29.18 s in total
INFO  PyProcess W-196-model-stdout: INFO 08-12 16:00:37 [kv_cache_utils.py:634] GPU KV cache size: 809,792 tokens
INFO  PyProcess W-196-model-stdout: INFO 08-12 16:00:37 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 395.41x
INFO  PyProcess W-196-model-stdout: INFO 08-12 16:00:59 [gpu_model_runner.py:1626] Graph capturing finished in 22 secs, took 0.37 GiB
INFO  PyProcess W-196-model-stdout: INFO 08-12 16:00:59 [core.py:163] init engine (profile, create kv cache, warmup model) took 59.39 seconds
INFO  PyProcess W-196-model-stdout: INFO 08-12 16:00:59 [core_client.py:435] Core engine process 0 ready.
INFO  PyProcess Model [model] initialized.
INFO  WorkerThread Starting worker thread WT-0001 for model model (M-0001, READY) on device gpu(0)
INFO  ModelServer Initialize BOTH server with: EpollServerSocketChannel.
INFO  ModelServer BOTH API bind to: http://0.0.0.0:8080
You can invoke the SageMaker inference endpoint you created through the CLI by running the following command:
hyp invoke hyp-custom-endpoint \
    --endpoint-name my-custom-tinyllama-endpoint \
    --body '{"inputs":"What is the capital of USA?"}'
You will get an output similar to the following:
{"generated_text": " What is the capital of France? What is the capital of Japan? What is the capital of China? What is the capital of Germany? What is"}
Alternatively, you can deploy using the SageMaker HyperPod Python SDK. The following code will yield the equivalent deployment to the preceding CLI example:
from sagemaker.hyperpod.inference.config.hp_endpoint_config import S3Storage, ModelSourceConfig, TlsConfig, EnvironmentVariables, ModelInvocationPort, ModelVolumeMount, Resources, Worker
from sagemaker.hyperpod.inference.hp_endpoint import HPEndpoint

model_source_config = ModelSourceConfig(
    model_source_type="s3",
    model_location="models/tinyllama-1.1b-chat/",
    s3_storage=S3Storage(
        bucket_name="<bucket-name>",
        region='<region>',
    ),
)

worker = Worker(
    image="763104351884.dkr.ecr.us-west-2.amazonaws.com/djl-inference:0.33.0-lmi15.0.0-cu128",
    model_volume_mount=ModelVolumeMount(
        name="modelmount",
    ),
    model_invocation_port=ModelInvocationPort(container_port=8080),
    resources=Resources(
        requests={"cpu": "30000m", "nvidia.com/gpu": 1, "memory": "100Gi"},
        limits={"nvidia.com/gpu": 1}
    ),
)

tls_config = TlsConfig(tls_certificate_output_s3_uri='s3://<bucket-name>/')

custom_endpoint = HPEndpoint(
    endpoint_name="my-custom-tinyllama-endpoint",
    instance_type="ml.g5.8xlarge",
    model_name="tinyllama",
    tls_config=tls_config,
    model_source_config=model_source_config,
    worker=worker,
)

custom_endpoint.create()
Debugging inference deployments
In addition to monitoring the inference pod logs, there are several other helpful ways of debugging inference deployments:
- You can access the HyperPod inference operator controller logs through the SageMaker HyperPod CLI. Run hyp get-operator-logs hyp-custom-endpoint --since-hours 0.5 or hyp get-operator-logs hyp-jumpstart-endpoint --since-hours 0.5 to access the operator logs for custom and SageMaker JumpStart deployments, respectively.
- You can view a list of inference deployments by running hyp list hyp-custom-endpoint or hyp list hyp-jumpstart-endpoint.
- You can view the status and corresponding events of deployments by running hyp describe hyp-custom-endpoint --name <endpoint-name> or hyp describe hyp-jumpstart-endpoint --name <endpoint-name> for custom and SageMaker JumpStart deployments, respectively.
- If the HyperPod observability stack is deployed to the cluster, run hyp get-monitoring --grafana and hyp get-monitoring --prometheus to get the Grafana dashboard and Prometheus workspace URLs, respectively, to view inference metrics as well.
- To monitor GPU utilization or view directory contents, it can be helpful to execute commands or open an interactive shell into the pods. You can run commands in a pod by running, for example, kubectl exec -it <pod-name> -- nvtop to run nvtop for visibility into GPU utilization. You can open an interactive shell by running kubectl exec -it <pod-name> -- /bin/bash.
For more information on the inference deployment features in SageMaker HyperPod, see Amazon SageMaker HyperPod launches model deployments to accelerate the generative AI model development lifecycle and Deploying models on Amazon SageMaker HyperPod.
Clean up
To delete the training job from the corresponding example, use the following CLI command:
hyp delete hyp-pytorch-job --job-name fsdp-llama3-1-8b
To delete the model deployments from the inference example, use the following CLI commands for SageMaker JumpStart and custom model deployments, respectively:
hyp delete hyp-jumpstart-endpoint --name deepseek-distill-qwen-endpoint-cli
hyp delete hyp-custom-endpoint --name my-custom-tinyllama-endpoint
To avoid incurring ongoing costs for the instances running in your cluster, you can scale down or delete the instances.
Conclusion
The new SageMaker HyperPod CLI and SDK can significantly streamline the process of training and deploying large-scale AI models. Through the examples in this post, we've demonstrated how these tools provide the following benefits:
- Simplified workflows – The CLI offers straightforward commands for common tasks like distributed training and model deployment, making the powerful capabilities of SageMaker HyperPod accessible to data scientists without requiring deep infrastructure knowledge.
- Flexible development options – Although the CLI handles common scenarios, the SDK enables fine-grained control and customization for more complex requirements, so developers can programmatically configure every aspect of their distributed ML workloads.
- Comprehensive observability – Both interfaces provide robust monitoring and debugging capabilities through system logs and integration with the SageMaker HyperPod observability stack, helping you quickly identify and resolve issues during development.
- Production-ready deployment – The tools support end-to-end workflows from experimentation to production, including features like automatic TLS certificate generation for secure model endpoints and integration with SageMaker inference endpoints.
Getting started with these tools is as simple as installing the sagemaker-hyperpod package. The SageMaker HyperPod CLI and SDK provide the right level of abstraction for both data scientists looking to quickly experiment with distributed training and ML engineers building production systems.
For more information about SageMaker HyperPod and these development tools, refer to the SageMaker HyperPod CLI and SDK documentation or explore the example notebooks.
About the authors

Giuseppe Angelo Porcelli is a Principal Machine Learning Specialist Solutions Architect for Amazon Web Services. With several years of software engineering and an ML background, he works with customers of any size to understand their business and technical needs and design AI and ML solutions that make the best use of the AWS Cloud and the Amazon Machine Learning stack. He has worked on projects in different domains, including MLOps, computer vision, and NLP, involving a broad set of AWS services. In his free time, Giuseppe enjoys playing football.
Shweta Singh is a Senior Product Manager on the Amazon SageMaker Machine Learning platform team at AWS, leading the SageMaker Python SDK. She has worked in several product roles at Amazon for over 5 years. She has a Bachelor of Science degree in Computer Engineering and a Master of Science in Financial Engineering, both from New York University.
Nicolas Jourdan is a Specialist Solutions Architect at AWS, where he helps customers unlock the full potential of AI and ML in the cloud. He holds a PhD in Engineering from TU Darmstadt in Germany, where his research focused on the reliability, concept drift detection, and MLOps of industrial ML applications. Nicolas has extensive hands-on experience across industries, including autonomous driving, drones, and manufacturing, having worked in roles ranging from research scientist to engineering manager. He has contributed to award-winning research, holds patents in object detection and anomaly detection, and is passionate about applying cutting-edge AI to solve complex real-world problems.