How Tines enhances security analysis with Amazon Quick Suite

Organizations face challenges in quickly detecting and responding to user account security events, such as repeated login attempts from unusual locations. Although security data...

How Lendi revamped the refinance journey for its customers using agentic...

This post was co-written with Davesh Maheshwari from Lendi Group and Samuel Casey from Mantel Group. Most Australians don’t know whether their home loan is...

Building a scalable virtual try-on solution using Amazon Nova on AWS:...

In this first post in a two-part series, we examine how retailers can implement a virtual try-on to improve customer experience. In part 2,...

Building specialized AI without sacrificing intelligence: Nova Forge data mixing in action

Large language models (LLMs) perform well on general tasks but struggle with specialized work that requires understanding proprietary data, internal processes, and industry-specific terminology....

Build safe generative AI applications like a Pro: Best Practices with...

Are you struggling to balance generative AI safety with accuracy, performance, and costs? Many organizations face this challenge when deploying generative AI applications to...

Build a serverless conversational AI agent using Claude with LangGraph and...

Customer service teams face a persistent challenge: existing chat-based assistants frustrate users with rigid responses, while direct large language model (LLM) implementations lack the...

NVIDIA and Partners Show That Software-Defined AI-RAN Is the Next Wireless...

AI-RAN is moving from lab to field, showing that a software-defined approach is the only viable way to build future AI-native wireless networks. Ahead of...

NVIDIA Advances Autonomous Networks With Agentic AI Blueprints and Telco Reasoning...

Autonomous networks — intelligent, self-managing telecommunications operations — are moving from a future vision to a current priority for telecom operators. In the latest...

Large model inference container – latest capabilities and performance enhancements

Modern large language model (LLM) deployments face an escalating cost and performance challenge driven by token count growth. Token count, which is directly related...