Gen AI News Talk

Deploy conversational agents with Vonage and Amazon Nova Sonic

This post is co-written with Mark Berkeland, Oscar Rodriguez, and Marina Gerzon from Vonage. Voice-based technologies are transforming the way businesses engage with customers across customer support, virtual assistants, and intelligent agents. However, creating real-time, expressive, and highly responsive voice interfaces still requires navigating a...

Enabling customers to deliver production-ready AI agents at scale

AI agents will change how we all work and live. AWS CEO Matt Garman shared a vision of a technological shift as transformative as the advent of the internet. I'm energized by this vision because I've witnessed firsthand how these intelligent agent systems...

Monitor agents built on Amazon Bedrock with Datadog LLM Observability

This post was co-written with Mohammad Jama, Yun Kim, and Barry Eom from Datadog. The emergence of generative AI agents in recent years has transformed the AI landscape, driven by advances in large language models (LLMs) and natural language processing (NLP). The focus is shifting...

Amazon Bedrock Knowledge Bases now supports Amazon OpenSearch Service Managed Cluster as vector store

Amazon Bedrock Knowledge Bases has extended its vector store options by enabling support for Amazon OpenSearch Service managed clusters, further strengthening its capabilities as a fully managed Retrieval Augmented Generation (RAG) solution. This enhancement builds on the core functionality of Amazon Bedrock Knowledge Bases...

How PayU built a secure enterprise AI assistant using Amazon Bedrock

This is a guest post co-written with Rahul Ghosh, Sandeep Kumar Veerlapati, Rahmat Khan, and Mudit Chopra from PayU. PayU offers a full-stack digital financial services system that serves the financial needs of merchants, banks, and consumers through technology. As a Central Bank-regulated financial institution in...

Accelerate generative AI inference with NVIDIA Dynamo and Amazon EKS

This post is co-written with Kshitiz Gupta, Wenhan Tan, Arun Raman, Jiahong Liu, and Eiluth Triana Isaza from NVIDIA. As large language models (LLMs) and generative AI applications become increasingly prevalent, the demand for efficient, scalable, and low-latency inference solutions has grown. Traditional inference systems...