Case Study: How New Relic Built an Internal GenAI Engine on AWS
Coverage of aws-ml-blog
In a recent post, the AWS Machine Learning Blog details the architecture and strategy behind New Relic NOVA, an internal generative AI assistant built to streamline engineering workflows and address knowledge fragmentation within large engineering organizations.
The Context: The Cost of Information Sprawl
For mature technology companies, internal documentation often becomes a double-edged sword. While necessary for operations, the sheer volume of technical guides, architectural decision records, and runbooks spread across disparate repositories can create significant friction. Engineers frequently spend valuable cycles searching for specific information rather than writing code or solving customer problems. This phenomenon, often referred to as "information sprawl," is a primary target for enterprise adoption of generative AI.
While much of the current market focus remains on customer-facing AI features, the immediate return on investment for many enterprises lies in internal productivity tools. By reducing the mean time to discovery for internal knowledge, organizations can achieve measurable gains in developer velocity and operational efficiency.
The Gist: From Knowledge Assistant to Productivity Engine
The source article outlines New Relic's journey in building NOVA to solve this specific friction. Originally conceived as a knowledge assistant to help engineers navigate internal documentation, the tool's scope expanded significantly during development. New Relic collaborated with the AWS Generative AI Innovation Center to accelerate the project, moving from concept to a production-grade tool capable of synthesizing complex internal data.
Built on Amazon Bedrock, NOVA allows employees to interact with company knowledge bases using natural language. Rather than returning a list of links (standard search behavior), the system synthesizes answers based on retrieved context. The post highlights that this implementation has transformed how New Relic employees access systems, effectively turning the tool into a comprehensive productivity engine rather than a simple chatbot.
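The post does not publish NOVA's code, but the "synthesize an answer from retrieved context" behavior it describes maps onto Amazon Bedrock Knowledge Bases' `retrieve_and_generate` API. The sketch below is illustrative only; the knowledge base ID, model ARN, and question are placeholder assumptions, not values from the article.

```python
# Illustrative sketch: the general shape of a RAG query against Amazon
# Bedrock Knowledge Bases via boto3's retrieve_and_generate. NOVA's actual
# implementation is not public; IDs and ARNs below are placeholders.

def build_rag_request(question, knowledge_base_id, model_arn):
    """Build the request payload for bedrock-agent-runtime retrieve_and_generate."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": knowledge_base_id,
                "modelArn": model_arn,
            },
        },
    }

request = build_rag_request(
    "How do I rotate credentials for the billing service?",  # hypothetical question
    knowledge_base_id="KB123EXAMPLE",  # placeholder knowledge base ID
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
)

# With AWS credentials configured, the call itself would be:
#   import boto3
#   client = boto3.client("bedrock-agent-runtime")
#   response = client.retrieve_and_generate(**request)
#   answer = response["output"]["text"]  # a synthesized answer, not a list of links
```

The managed service handles retrieval, prompt assembly, and model invocation in one call, which is the infrastructure-overhead reduction the article highlights.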
Why This Matters
This case study is significant for technical leaders evaluating the build-vs-buy decision for internal AI tools. It demonstrates a practical application of Retrieval-Augmented Generation (RAG) using managed services, reducing the infrastructure overhead typically associated with hosting Large Language Models (LLMs). Furthermore, it underscores the value of strategic partnerships, in this case with AWS's Generative AI Innovation Center, to navigate the complexities of model selection and integration.
For a deeper look at how New Relic leveraged AWS services to modernize their internal knowledge management, we recommend reading the full case study.
Read the full post on the AWS Machine Learning Blog
Key Takeaways
- **Internal Efficiency Focus**: New Relic targeted internal developer productivity and knowledge fragmentation as a primary use case for Generative AI.
- **Evolution of Scope**: The project, named NOVA, evolved from a basic documentation assistant into a broader productivity engine for the company.
- **Infrastructure Strategy**: The tool was built using Amazon Bedrock, leveraging managed services to handle model interaction and scaling.
- **Strategic Collaboration**: New Relic partnered with the AWS Generative AI Innovation Center to accelerate development and implementation.