GenAI News
Accelerate digital pathology slide annotation workflows on AWS using H-optimus-0
Digital pathology is essential for the diagnosis and treatment of cancer, playing a critical role in healthcare delivery and pharmaceutical research and development. Pathology...
How Travelers Insurance classified emails with Amazon Bedrock and prompt engineering
This is a guest blog post co-written with Jordan Knight, Sara Reynolds, and George Lee from Travelers.
Foundation models (FMs) are used in many ways and...
DeepSeek-R1 model now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart
Today, we are announcing that DeepSeek AI’s first-generation frontier model, DeepSeek-R1, is available through Amazon SageMaker JumpStart and Amazon Bedrock Marketplace to deploy for...
How Aetion is using generative AI and Amazon Bedrock to unlock...
The real-world data collected and derived from patient journeys offers a wealth of insights into patient characteristics and outcomes, and the effectiveness and safety...
Streamline grant proposal reviews using Amazon Bedrock
Government and non-profit organizations evaluating grant proposals face a significant challenge: sifting through hundreds of detailed submissions, each with unique merits, to identify the...
Deploy DeepSeek-R1 Distilled Llama models in Amazon Bedrock
Open foundation models (FMs) have become a cornerstone of generative AI innovation, enabling organizations to build and customize AI applications while maintaining control over...
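For readers who want a concrete starting point for the deployment how-to above, here is a minimal, hypothetical sketch of invoking a DeepSeek-R1 Distilled Llama model after it has been made available in Amazon Bedrock (for example via Custom Model Import). The model ARN is a placeholder, and the request/response schema assumes the Llama text-generation format; the post itself may use a different deployment path or payload.

```python
# Illustrative sketch only: invoke a DeepSeek-R1 Distilled Llama model already
# deployed in Amazon Bedrock. The model ARN below is a placeholder, and the
# body follows the Llama text-generation schema, which may differ for other
# deployment options.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "arn:aws:bedrock:us-east-1:111122223333:imported-model/EXAMPLE"  # placeholder ARN

response = bedrock_runtime.invoke_model(
    modelId=MODEL_ID,
    body=json.dumps(
        {
            "prompt": "Explain why open foundation models matter for enterprises.",
            "max_gen_len": 512,
            "temperature": 0.2,
        }
    ),
)

# For Llama-style models the response JSON carries the completion in "generation".
print(json.loads(response["body"].read())["generation"])
```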
Generative AI operating models in enterprise organizations with Amazon Bedrock
Generative AI can revolutionize organizations by enabling the creation of innovative applications that offer enhanced customer and employee experiences. Intelligent document processing, translation and...
Track LLM model evaluation using Amazon SageMaker managed MLflow and FMEval
Evaluating large language models (LLMs) is crucial as LLM-based systems become increasingly powerful and relevant in our society. Rigorous testing allows us to understand...
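As a rough illustration of the tracking side of the workflow above, the sketch below logs evaluation scores to a SageMaker managed MLflow tracking server. The tracking server ARN is a placeholder, and `evaluate()` is a hypothetical stand-in for an FMEval evaluation run, not the library's actual API.

```python
# Illustrative sketch only: log LLM evaluation scores to SageMaker managed MLflow.
# The tracking server ARN is a placeholder; evaluate() is a hypothetical stand-in
# for an FMEval evaluation algorithm.
import mlflow


def evaluate(model_id: str) -> dict:
    """Hypothetical stand-in for an FMEval evaluation run; returns placeholder scores."""
    return {"factual_knowledge": 0.82, "toxicity": 0.01}


# SageMaker managed MLflow accepts the tracking server ARN as the tracking URI.
mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/EXAMPLE"
)
mlflow.set_experiment("llm-evaluation")

with mlflow.start_run(run_name="baseline-eval"):
    scores = evaluate("my-candidate-model")
    for metric_name, value in scores.items():
        mlflow.log_metric(metric_name, value)
```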
Optimizing AI responsiveness: A practical guide to Amazon Bedrock latency-optimized inference
In production generative AI applications, responsiveness is just as important as the intelligence behind the model. Whether it’s customer service teams handling time-sensitive inquiries...
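As a quick orientation to the feature named in the headline, the following sketch requests latency-optimized inference through the Bedrock Converse API by setting the `performanceConfig` latency field. The model ID is only an example; latency optimization is limited to supported models and Regions, so treat both as assumptions to verify against the post and current documentation.

```python
# Illustrative sketch only: request latency-optimized inference via the Bedrock
# Converse API. Model ID and Region are examples; availability varies by model.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-2")

response = bedrock_runtime.converse(
    modelId="us.anthropic.claude-3-5-haiku-20241022-v1:0",  # example inference profile ID
    messages=[{"role": "user", "content": [{"text": "Give me a one-line status update."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    performanceConfig={"latency": "optimized"},  # request the latency-optimized tier
)

print(response["output"]["message"]["content"][0]["text"])
```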
Develop a RAG-based application using Amazon Aurora with Amazon Kendra
Generative AI and large language models (LLMs) are revolutionizing organizations across diverse sectors, enabling customer experience improvements that would traditionally take years to make...
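To make the RAG pattern behind this headline concrete, here is a bare-bones retrieve-then-generate sketch using Amazon Kendra's Retrieve API for retrieval and the Bedrock Converse API for generation. The Kendra index ID, model ID, and sample question are placeholders, and the Aurora-backed storage the post pairs with Kendra is not shown here.

```python
# Illustrative sketch only: minimal retrieve-then-generate loop for a RAG
# application. Index ID, model ID, and question are placeholders.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

INDEX_ID = "00000000-0000-0000-0000-000000000000"  # placeholder Kendra index ID
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # example Bedrock model ID


def answer(question: str) -> str:
    # Retrieve semantically relevant passages from the Kendra index.
    retrieved = kendra.retrieve(IndexId=INDEX_ID, QueryText=question)
    context = "\n\n".join(item["Content"] for item in retrieved["ResultItems"][:5])

    # Ground the model's answer in the retrieved passages.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    response = bedrock_runtime.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]


print(answer("What does the onboarding guide say about security training?"))
```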