GenAI News
Evaluate RAG responses with Amazon Bedrock, LlamaIndex and RAGAS
In the rapidly evolving landscape of artificial intelligence, Retrieval Augmented Generation (RAG) has emerged as a game-changer, revolutionizing how foundation models (FMs) interact with...
Build a Multi-Agent System with LangGraph and Mistral on AWS
Agents are revolutionizing the landscape of generative AI, serving as the bridge between large language models (LLMs) and real-world applications. These intelligent, autonomous systems...
Innovating at speed: BMW’s generative AI solution for cloud incident analysis
This post was co-authored with Johann Wildgruber, Dr. Jens Kohl, Thilo Bindel, and Luisa-Sophie Gloger from BMW Group.
The BMW Group—headquartered in Munich, Germany—is a vehicle...
Ground truth generation and review best practices for evaluating generative AI...
Generative AI question-answering applications are pushing the boundaries of enterprise productivity. These assistants can be powered by various backend architectures including Retrieval Augmented Generation...
Time series forecasting with LLM-based foundation models and scalable AIOps on...
Time series forecasting is critical for decision-making across industries. From predicting traffic flow to forecasting sales, accurate predictions enable organizations to make informed decisions,...
Dynamic metadata filtering for Amazon Bedrock Knowledge Bases with LangChain
Amazon Bedrock Knowledge Bases offers a fully managed Retrieval Augmented Generation (RAG) feature that connects large language models (LLMs) to internal data sources. It’s...
Accelerate AWS Well-Architected reviews with Generative AI
Building cloud infrastructure based on proven best practices promotes security, reliability, and cost efficiency. To achieve these goals, the AWS Well-Architected Framework provides comprehensive...
Customize DeepSeek-R1 distilled models using Amazon SageMaker HyperPod recipes – Part...
Increasingly, organizations across industries are turning to generative AI foundation models (FMs) to enhance their applications. To achieve optimal performance for specific use cases,...
Pixtral-12B-2409 is now available on Amazon Bedrock Marketplace
Today, we are excited to announce that Pixtral 12B (pixtral-12b-2409), a state-of-the-art 12-billion-parameter vision language model (VLM) from Mistral AI that excels...
Reduce conversational AI response time through inference at the edge with...
Recent advances in generative AI have led to the proliferation of a new generation of conversational AI assistants powered by foundation models (FMs). These latency-sensitive...