Today, we are excited to announce that Pixtral 12B (pixtral-12b-2409), a state-of-the-art 12 billion parameter vision language model (VLM) from Mistral AI that excels in both text-only and multimodal tasks, is available for customers through Amazon Bedrock Marketplace. Amazon Bedrock Marketplace is a new capability in Amazon Bedrock that enables developers to discover, test, and use over 100 popular, emerging, and specialized foundation models (FMs) alongside the current selection of industry-leading models in Amazon Bedrock. You can also use this model with Amazon SageMaker JumpStart, a machine learning (ML) hub that provides access to algorithms and models that can be deployed with one click for running inference.
In this post, we walk through how to discover, deploy, and use the Pixtral 12B model for a variety of real-world vision use cases.
Overview of Pixtral 12B
Pixtral 12B, Mistral’s inaugural VLM, delivers robust performance across a range of benchmarks, surpassing other open models and rivaling larger counterparts, according to Mistral’s evaluation. Designed for both image and document comprehension, Pixtral demonstrates advanced capabilities in vision-related tasks, including chart and figure interpretation, document question answering, multimodal reasoning, and instruction following—several of which are illustrated with examples later in this post. The model processes images at their native resolution and aspect ratio, providing high-fidelity input handling. Unlike many open source alternatives, Pixtral 12B achieves strong results in text-based benchmarks—such as instruction following, coding, and mathematical reasoning—without sacrificing its proficiency in multimodal tasks.
Mistral developed a novel architecture for Pixtral 12B, optimized for both computational efficiency and performance. The model consists of two main components: a 400-million-parameter vision encoder, responsible for tokenizing images, and a 12-billion-parameter multimodal transformer decoder, which predicts the next text token based on a sequence of text and images. The vision encoder was specifically trained to natively handle variable image sizes, enabling Pixtral to accurately interpret high-resolution diagrams, charts, and documents while maintaining fast inference speeds for smaller images such as icons, clipart, and equations. This architecture supports processing an arbitrary number of images of varying sizes within a large context window of 128k tokens.
License agreements are a critical decision factor when using open-weights models. Similar to other Mistral models, such as Mistral 7B, Mixtral 8x7B, Mixtral 8x22B, and Mistral Nemo 12B, Pixtral 12B is released under the commercially permissive Apache 2.0, providing enterprise and startup customers with a high-performing VLM option to build complex multimodal applications.
Performance metrics and benchmarks
Pixtral 12B is trained to understand both natural images and documents, achieving 52.5% on the Massive Multi-discipline Multimodal Understanding (MMMU) reasoning benchmark, surpassing a number of larger models according to Mistral. MMMU evaluates multimodal models on college-level problems that require reasoning over both images and text, with more than 11,000 questions spanning disciplines such as art and design, business, science, health and medicine, humanities and social science, and tech and engineering. The model shows strong abilities in tasks such as chart and figure understanding, document question answering, multimodal reasoning, and instruction following. Pixtral can ingest images at their natural resolution and aspect ratio, giving the user flexibility over the number of tokens used to process an image. Pixtral can also process multiple images in its long context window of 128,000 tokens. Unlike previous open source models, Pixtral doesn’t compromise on text benchmark performance to excel in multimodal tasks, according to Mistral.
You can review the benchmarks published by Mistral for more detail on how the model compares with other open and proprietary models.
Prerequisites
To try out Pixtral 12B in Amazon Bedrock Marketplace, you will need the following prerequisites:
- An AWS account that will contain all your AWS resources.
- An AWS Identity and Access Management (IAM) role to access Amazon Bedrock Marketplace and Amazon SageMaker endpoints. To learn more about how IAM works with Amazon Bedrock Marketplace, refer to Set up Amazon Bedrock Marketplace.
- Access to accelerated instances (GPUs) for hosting the model, such as ml.g6.12xlarge. Refer to Requesting a quota increase for access to GPU instances; a quick way to check your current quota programmatically is sketched after this list.
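Before deploying, you can optionally verify that your account already has quota for the recommended instance type. The following is a minimal sketch using the AWS SDK for Python (Boto3) and the Service Quotas API; the quota name string is an assumption, so verify the exact name in the Service Quotas console for your Region.

```python
import boto3

# Service Quotas client in the Region where you plan to deploy the endpoint
service_quotas = boto3.client("service-quotas", region_name="us-west-2")

# Assumption: SageMaker endpoint quotas follow the "<instance type> for endpoint usage" naming pattern
target = "ml.g6.12xlarge for endpoint usage"

params = {"ServiceCode": "sagemaker"}
while True:
    page = service_quotas.list_service_quotas(**params)
    for quota in page["Quotas"]:
        if quota["QuotaName"] == target:
            print(f"{quota['QuotaName']}: {quota['Value']}")
    if "NextToken" not in page:
        break
    params["NextToken"] = page["NextToken"]
```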
Deploy Pixtral 12B in Amazon Bedrock Marketplace
On the Amazon Bedrock console, you can search for models that help you with a specific use case or language. The results of the search include both serverless models and models available in Amazon Bedrock Marketplace. You can filter results by provider, modality (such as text, image, or audio), or task (such as classification or text summarization).
To access Pixtral 12B in Amazon Bedrock Marketplace, follow these steps:
- On the Amazon Bedrock console, choose Model catalog under Foundation models in the navigation pane.
- Filter for Hugging Face as a provider and choose the Pixtral 12B model, or search for Pixtral in the Filter for a model input box.
The model detail page provides essential information about the model’s capabilities, pricing structure, and implementation guidelines. You can find detailed usage instructions, including sample API calls and code snippets for integration.
The page also includes deployment options and licensing information to help you get started with Pixtral 12B in your applications.
- To begin using Pixtral 12B, choose Deploy.
You will be prompted to configure the deployment details for Pixtral 12B. The model ID will be prepopulated.
- Read carefully and accept the End User License Agreement (EULA).
- The Endpoint Name is automatically populated. You can rename the endpoint if you prefer.
- For Number of instances, enter a number of instances (between 1 and 100).
- For Instance type, choose your instance type. For optimal performance with Pixtral 12B, a GPU-based instance type like ml.g6.12xlarge is recommended.
Optionally, you can configure advanced security and infrastructure settings, including virtual private cloud (VPC) networking, service role permissions, and encryption settings. For most use cases, the default settings will work well. However, for production deployments, you might want to review these settings to align with your organization’s security and compliance requirements.
- Choose Deploy to begin using the model.
When the deployment is complete, Endpoint status should change to In Service. After the endpoint is in service, you can test Pixtral 12B capabilities directly in the Amazon Bedrock playground.
- Choose Open in playground to access an interactive interface where you can experiment with different prompts and adjust model parameters like temperature and maximum length.
This is an excellent way to explore the model’s reasoning and text generation abilities before integrating it into your applications. The playground provides immediate feedback, helping you understand how the model responds to various inputs and letting you fine-tune your prompts for optimal results.
You can quickly test the model in the playground through the UI. However, to invoke the deployed model programmatically with Amazon Bedrock APIs, you need to use the endpoint ARN as the model ID in the Amazon Bedrock SDK.
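For example, a minimal programmatic invocation with the AWS SDK for Python (Boto3) might look like the following sketch; the endpoint ARN is a placeholder that you would replace with the ARN from your deployment details page.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

# Placeholder: copy the endpoint ARN from your Amazon Bedrock Marketplace deployment
endpoint_arn = "arn:aws:sagemaker:us-west-2:111122223333:endpoint/pixtral-12b-endpoint"

response = bedrock_runtime.converse(
    modelId=endpoint_arn,  # the endpoint ARN acts as the model ID
    messages=[{"role": "user", "content": [{"text": "Describe what a vision language model can do in two sentences."}]}],
    inferenceConfig={"temperature": 0.3, "maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])
```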
Pixtral 12B use cases
In this section, we provide example use cases of Pixtral 12B using sample prompts. We have defined helper functions to invoke the Pixtral 12B model using Amazon Bedrock Converse APIs:
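The complete helpers are available in the GitHub repository referenced at the end of this post; the following is a minimal sketch of such a helper, assuming the Converse API and a Marketplace endpoint ARN (the invoke_pixtral name, the ARN, and the default inference parameters are placeholders).

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

# Placeholder: replace with the endpoint ARN from your Marketplace deployment
MODEL_ID = "arn:aws:sagemaker:us-west-2:111122223333:endpoint/pixtral-12b-endpoint"


def invoke_pixtral(prompt, image_paths=None, temperature=0.3, max_tokens=2048):
    """Send a text prompt plus optional local images to Pixtral 12B via the Converse API."""
    content = [{"text": prompt}]
    for path in image_paths or []:
        # The Converse API expects one of png, jpeg, gif, or webp as the image format
        image_format = path.rsplit(".", 1)[-1].lower().replace("jpg", "jpeg")
        with open(path, "rb") as f:
            content.append({"image": {"format": image_format, "source": {"bytes": f.read()}}})
    response = bedrock_runtime.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": content}],
        inferenceConfig={"temperature": temperature, "maxTokens": max_tokens},
    )
    return response["output"]["message"]["content"][0]["text"]
```

The use case examples that follow call this helper with different prompts and images.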
Visual logical reasoning
One of the interesting use cases of vision models is solving logical reasoning problems or visual puzzles. Pixtral 12B is highly capable of solving logical reasoning questions. Let’s explore an example.
We use the following input image.
Our prompt and input payload are as follows:
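A representative call, using the invoke_pixtral helper sketched earlier (the prompt wording and the puzzle.png file name are illustrative placeholders):

```python
prompt = (
    "Solve the puzzle shown in this image. Explain your reasoning step by step "
    "before stating the final answer."
)
print(invoke_pixtral(prompt, image_paths=["puzzle.png"]))
```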
We get the following response:
Structured product information
Extracting product information is crucial for the retail industry, especially on sites that host third-party sellers, where product images are the most accessible resource. Accurately capturing relevant details from these images is vital for a product’s success in ecommerce. For instance, using advanced visual models like Pixtral 12B, retailers can efficiently extract key attributes from clothing product images, such as color, style, and patterns. This capability not only streamlines inventory management but also enhances customer experiences by providing essential information that aids in informed purchasing decisions.
We use the following input image.
Our prompt and input payload are as follows:
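A representative attribute-extraction call with the same helper (the prompt, attribute keys, and file name are placeholders):

```python
prompt = (
    "Extract the product attributes from this image and return them as JSON with the "
    "keys 'category', 'color', 'style', 'pattern', and 'material'. "
    "Use null for any attribute you cannot determine from the image."
)
print(invoke_pixtral(prompt, image_paths=["product.jpg"]))
```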
We get the following response:
Vehicle damage assessment
In the insurance industry, image analysis plays a crucial role in claims processing. For vehicle damage assessment, vision models like Pixtral 12B can be used to compare images taken at policy issuance with those submitted during a claim. This approach can streamline the evaluation process, potentially reducing loss adjustment expenses and expediting claim resolution. By automating the identification and characterization of automobile damage, insurers can enhance efficiency, improve accuracy, and ultimately provide a better experience for policyholders.
We use the following input images.
Our prompt and input payload are as follows:
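Because the helper accepts multiple images, a representative comparison call might look like the following (prompt and file names are placeholders):

```python
prompt = (
    "The first image was taken when the policy was issued and the second was submitted "
    "with a claim. Compare the two images, describe any new damage, and identify the "
    "affected areas of the vehicle."
)
print(invoke_pixtral(prompt, image_paths=["vehicle_policy.jpg", "vehicle_claim.jpg"]))
```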
We get the following response:
Handwriting recognition
Another capability of vision language models is recognizing handwriting and extracting handwritten text. Pixtral 12B performs well at extracting content from complex and poorly handwritten notes.
We use the following input image.
Our prompt and input payload are as follows:
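A representative transcription call (prompt and file name are placeholders):

```python
prompt = (
    "Transcribe the handwritten text in this image exactly as written, preserving line "
    "breaks, and flag any words you are unsure about."
)
print(invoke_pixtral(prompt, image_paths=["handwritten_note.jpg"]))
```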
We get the following response:
Reasoning of complex figures
VLMs excel at interpreting and reasoning about complex figures, charts, and diagrams. In this particular use case, we use Pixtral 12B to analyze an intricate image containing GDP data. Pixtral 12B’s advanced capabilities in document understanding and complex figure analysis make it well-suited for extracting insights from visual representations of economic data. By processing both the visual elements and accompanying text, Pixtral 12B can provide detailed interpretations and reasoned analysis of the GDP figures presented in the image.
We use the following input image.
Our prompt and input payload are as follows:
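A representative figure-analysis call (prompt and file name are placeholders):

```python
prompt = (
    "Analyze the GDP chart in this image. Summarize the overall trend, identify the "
    "countries with the highest and lowest values, and call out any notable changes."
)
print(invoke_pixtral(prompt, image_paths=["gdp_chart.png"]))
```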
We get the following response:
Clean up
To avoid unwanted charges, clean up your resources. If you deployed the model using Amazon Bedrock Marketplace, complete the following steps:
Delete the Amazon Bedrock Marketplace deployment
- On the Amazon Bedrock console, under Foundation models in the navigation pane, choose Marketplace deployments.
- In the Managed deployments section, locate the endpoint you want to delete.
- Verify the endpoint details to make sure you’re deleting the correct deployment:
- Endpoint name
- Model name
- Endpoint status
- Select the endpoint, and choose Delete.
- Choose Delete to delete the endpoint.
- In the deletion confirmation dialog, review the warning message, enter confirm, and choose Delete to permanently remove the endpoint.
Conclusion
In this post, we showed you how to get started with the Pixtral 12B model in Amazon Bedrock and deploy the model for inference. The Pixtral 12B vision model enables you to solve multiple use cases, including document understanding, logical reasoning, handwriting recognition, image comparison, entity extraction, extraction of structured data from scanned images, and caption generation. These capabilities can drive productivity in a number of enterprise use cases, including ecommerce (retail), marketing, the financial services industry (FSI), and more.
For more Mistral resources on AWS, check out the GitHub repo. The complete code for the samples featured in this post is available on GitHub. Pixtral 12B is also available in Amazon SageMaker JumpStart; refer to Pixtral 12B is now available on Amazon SageMaker JumpStart for details.
About the Authors
Deepesh Dhapola is a Senior Solutions Architect at AWS India, where he assists financial services and fintech clients in scaling and optimizing their applications on the AWS platform. He specializes in core machine learning and generative AI. Outside of work, Deepesh enjoys spending time with his family and experimenting with various cuisines.
Preston Tuggle is a Sr. Specialist Solutions Architect working on generative AI.
Shane Rai is a Principal GenAI Specialist with the AWS World Wide Specialist Organization (WWSO). He works with customers across industries to solve their most pressing and innovative business needs using AWS’s breadth of cloud-based AI/ML services including model offerings from top tier foundation model providers.
John Liu has 14 years of experience as a product executive and 10 years of experience as a portfolio manager. At AWS, John is a Principal Product Manager for Amazon Bedrock. Previously, he was the Head of Product for AWS Web3 / Blockchain. Prior to AWS, John held various product leadership roles at public blockchain protocols and fintech companies, and also spent 9 years as a portfolio manager at various hedge funds.