New Relic
Radar
The proactive intelligence product that became the foundation for New Relic's AI platform.
Early iterations of the Radar mobile inbox and desktop insight cards — the two surfaces that made up the initial product. The underlying intelligence engine was later absorbed into New Relic Alerts and AI.
Context
This project started at a different company. At Immediately, I worked on Gong — a mobile app for salespeople that proactively surfaced timely insights about prospects: who they were meeting with, what they were interested in, and when the right moment was to reach out. The core idea was to stop making people dig for information and instead deliver the right signal at the right time. New Relic acquired the Immediately team, and the question became whether we could apply that same concept to engineering observability. The answer was Seymour — later renamed Radar — which would eventually become the foundation of New Relic's AI platform.
The Problem
New Relic was, by design, a reactive tool. You opened it when something was wrong or when you needed to answer a specific question. The platform was extraordinarily powerful, but only if you knew what to ask. The issue was that most performance problems followed recognizable patterns — N+1 database queries, over-provisioned EC2 instances, applications quietly consuming memory before crashing. These weren't novel incidents; experienced engineers had seen all of them before, and New Relic had the data to catch them. But nothing was watching for them proactively. An engineering team might have a host slowly running out of disk space for two weeks before anyone noticed, or a microservice making redundant database calls on every request, adding milliseconds to millions of transactions per day. The data to surface those issues existed in New Relic already — there just wasn't anything connecting it to the engineers who needed to act on it.
The Approach
We started by asking what patterns every experienced engineer already knew to look for: disk space trending toward a threshold, CPU consistently above 90%, query counts per transaction climbing, memory utilization growing in ways that didn't match traffic. The first version was rules-based — a set of algorithmic checks that evaluated these known patterns continuously and surfaced an insight when something crossed a threshold that mattered.
The design challenge was making each insight genuinely actionable rather than just descriptive. “Your disk space is low” isn't useful. “This host will run out of disk space in 18 days at current growth rate, and here are the top consuming processes” is something an engineer can actually do something with. Every insight needed context (why this matters), specificity (exactly which system, exactly what numbers), and a clear path forward.
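The disk space example above amounts to a simple linear projection: measure the growth rate from recent samples and extrapolate to exhaustion. A minimal sketch of what one such rules-based check might look like — the names (`DiskSample`, `days_until_full`) and the sample data are hypothetical illustrations, not Radar's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class DiskSample:
    timestamp: float   # epoch seconds
    used_bytes: int

def days_until_full(samples, capacity_bytes):
    """Linearly project disk exhaustion from recent usage samples.

    Returns the estimated days until the disk fills at the current
    growth rate, or None if usage is flat or shrinking (no insight
    worth surfacing).
    """
    first, last = samples[0], samples[-1]
    elapsed_days = (last.timestamp - first.timestamp) / 86400
    growth_per_day = (last.used_bytes - first.used_bytes) / elapsed_days
    if growth_per_day <= 0:
        return None
    return (capacity_bytes - last.used_bytes) / growth_per_day

# Two weeks of daily samples: 40 GB used, growing ~1 GB/day on a 100 GB disk.
samples = [DiskSample(t * 86400, (40 + t) * 10**9) for t in range(14)]
remaining = days_until_full(samples, 100 * 10**9)
# 53 GB used at day 13; 47 GB of headroom at 1 GB/day → 47.0 days
```

The specificity requirement falls out naturally: the check already knows which host, the current numbers, and the projected date, so the insight can say "18 days remaining" rather than "disk space is low."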
Gong's proactive delivery model translated directly — engineers shouldn't have to go looking for problems any more than salespeople should have to dig for prospect signals. But where Gong used a card-based feed that let items scroll away, we found that proactive insights needed to persist. Engineers didn't want to miss something because they hadn't checked the app recently; they needed something closer to an inbox — where findings accumulated, could be tracked, and didn't feel as ephemeral as alerts. We initially designed for mobile, but learned that proactive intelligence isn't the same category as alerts — it doesn't demand immediate action. Engineers generally preferred to review and investigate at their desk, which made the desktop interface the primary surface, with mobile secondary.
The Solution
The Web Insight Card
The desktop version opened each insight into a full investigation view. For a disk space issue affecting multiple hosts, the card showed a comparison table — hostname, current usage, estimated days remaining — alongside a threaded comment interface where engineers could coordinate directly within the insight without switching to Slack.
As the system expanded from threshold-based rules into anomaly detection, the card format scaled to handle more complex, less predictable insights — unusual traffic patterns, potential security events, behavioral anomalies that didn't match historical norms. The consistent card structure meant engineers could trust their mental model regardless of what type of insight they were looking at.
The Outcome
More durable than any single metric was the reframing: Radar changed what New Relic was — not just a platform for answering questions, but one that watched your systems and told you what to pay attention to. The algorithmic and ML foundations built for Radar became the core of New Relic's AI platform, which the company subsequently built its product strategy around. The thread from Gong to Seymour to Radar to New Relic AI is a straight line: the same conviction that the right insight at the right time is worth more than a warehouse of data you have to search yourself.