SageMaker JumpStart

SageMaker JumpStart provides pretrained, open-source models for a wide range of problem types to help you get started with machine learning. You can incrementally train and tune these models before deployment. JumpStart also provides solution templates that set up infrastructure for common use cases, and executable example notebooks for hands-on machine learning with SageMaker.

Through the JumpStart landing page in the updated Studio experience, you can deploy, fine-tune, and evaluate pretrained models from popular model hubs.

In addition, the JumpStart landing page in Amazon SageMaker Studio Classic provides access to pretrained models, solution templates, and examples.

The following sections describe how to access JumpStart models using Amazon SageMaker Studio and Amazon SageMaker Studio Classic.



The updated Studio experience is still in beta; it is expected to be fully available by the end of April 2024.

Open JumpStart in Studio

In Amazon SageMaker Studio, open the JumpStart landing page either through the Home page or the Home menu on the left-side panel. This opens the SageMaker JumpStart landing page where you can explore model hubs and search for models.

  • From the Home page, choose JumpStart in the Prebuilt and automated solutions pane.
  • From the Home menu in the left panel, navigate to the SageMaker JumpStart node.

Use JumpStart in Studio

From the SageMaker JumpStart landing page in Studio, you can explore model hubs from providers of both proprietary and publicly available models.

You can find specific hubs or models using the search bar. Within each model hub, you can search directly for models, sort by provided attributes, or filter based on a list of provided model tasks.

Manage JumpStart in Studio

Choose a model to see its model detail card. In the upper right-hand corner of the model detail card, choose Fine-tune, Deploy, or Evaluate to start working through the fine-tuning, deployment, or evaluation workflows, respectively. Note that not all models are available for fine-tuning or evaluation. For more information on each of these options, see Use foundation models in Studio.

Model settings

When using a pre-trained JumpStart foundation model in Amazon SageMaker Studio, the Model artifact location (Amazon S3 URI) is populated by default. To edit the default Amazon S3 URI, choose Enter model artifact location. Not all models support changing the model artifact location.

Data settings

In the Data field, provide the Amazon S3 URI that points to your training dataset location. The default Amazon S3 URI points to an example training dataset. To edit the default Amazon S3 URI, choose Enter training dataset and change the URI. Be sure to review the model detail card in Amazon SageMaker Studio for information on formatting training data.
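Before pasting a URI into the Data field, it can help to sanity-check its shape. The following sketch is plain Python, not part of the Studio UI or the SageMaker SDK; the `is_valid_s3_uri` helper is an invented name for illustration, and it only checks the URI's format, not whether the bucket or object exists:

```python
from urllib.parse import urlparse

def is_valid_s3_uri(uri: str) -> bool:
    """Return True if the string looks like a well-formed S3 URI.

    Local, illustrative check only -- it does not verify that the
    bucket or object actually exists.
    """
    parsed = urlparse(uri)
    # An S3 URI has the s3:// scheme, a bucket name, and an object key.
    return parsed.scheme == "s3" and bool(parsed.netloc) and bool(parsed.path.strip("/"))

print(is_valid_s3_uri("s3://my-bucket/training-data/train.jsonl"))  # True
print(is_valid_s3_uri("https://my-bucket/train.jsonl"))             # False
```

A check like this catches the common mistake of pasting an HTTPS console link (`https://...`) where an `s3://` URI is expected.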


Hyperparameters

You can customize the hyperparameters of the training job used to fine-tune the model. The available hyperparameters differ depending on the model.

The following hyperparameters are available for Palmyra models:

Epochs – One epoch is one cycle through the entire dataset. Multiple intervals complete a batch, and multiple batches eventually complete an epoch. Multiple epochs are run until the accuracy of the model reaches an acceptable level, or the error rate drops below an acceptable threshold.

Learning rate – The amount that values should be changed between epochs. As the model is refined, its internal weights are nudged and error rates are checked to see if the model improves. A typical learning rate is 0.1 or 0.01, where 0.01 is a much smaller adjustment that could cause the training to take a long time to converge, whereas 0.1 is much larger and can cause the training to overshoot. It is one of the primary hyperparameters that you might adjust when training your model. Note that for text models, a much smaller learning rate (5e-5 for BERT) can result in a more accurate model.

Batch size – The number of records from the dataset to select for each interval and send to the GPUs for training.
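The interplay of these three hyperparameters can be seen in a minimal training loop. The following sketch is plain Python, not SageMaker code; the toy dataset and the `train` helper are invented for illustration. It fits a single weight with mini-batch gradient descent, where each epoch is one full pass over the data, each batch is one gradient step, and the learning rate scales that step:

```python
# Toy dataset: 32 points on the line y = 2x (so the ideal weight is 2.0).
data = [(x, 2.0 * x) for x in [i / 32 for i in range(32)]]

def train(epochs: int, batch_size: int, learning_rate: float) -> float:
    """Fit a single weight w in y = w * x with mini-batch gradient descent."""
    w = 0.0
    for _ in range(epochs):                        # one epoch = one full pass
        for i in range(0, len(data), batch_size):  # one gradient step per batch
            batch = data[i:i + batch_size]
            # Gradient of mean squared error with respect to w.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= learning_rate * grad              # step scaled by learning rate
    return w

print(round(train(epochs=20, batch_size=8, learning_rate=0.1), 3))  # converges near 2.0
```

Running `train` with fewer epochs leaves the weight farther from 2.0, and a learning rate that is far too large (for example, 10.0 here) makes each step overshoot so the weight diverges instead of converging, which is the trade-off described above.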

Review the tooltips and additional information in the model detail card in the Studio UI to learn more about hyperparameters specific to the model of your choice.

For more information on available hyperparameters, see Commonly supported fine-tuning hyperparameters.


Deployment

Specify the training instance type and output artifact location for your training job. In the fine-tuning workflow of the Studio UI, you can only choose from instance types that are compatible with your chosen model. The default output artifact location is the SageMaker default bucket. To change the output artifact location, choose Enter output artifact location and change the Amazon S3 URI.
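The SageMaker default bucket follows the naming convention sagemaker-{region}-{account-id}. The following sketch builds an output URI in that bucket; the `default_output_uri` helper and the `output/` prefix are illustrative assumptions, not the exact path the Studio UI uses:

```python
def default_output_uri(region: str, account_id: str, job_name: str) -> str:
    """Build an S3 URI in the SageMaker default bucket.

    The bucket name sagemaker-{region}-{account_id} follows the SageMaker
    default-bucket convention; the 'output' prefix is illustrative only.
    """
    bucket = f"sagemaker-{region}-{account_id}"
    return f"s3://{bucket}/output/{job_name}"

print(default_output_uri("us-east-1", "111122223333", "my-finetune-job"))
# s3://sagemaker-us-east-1-111122223333/output/my-finetune-job
```

Knowing this convention makes it easier to locate training output in the Amazon S3 console when you keep the default artifact location.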


Security

Specify the security settings to use for your training job, including the IAM role that SageMaker uses to train your model. You can also choose whether your training job connects to a virtual private cloud (VPC), and specify encryption keys to protect your data.