Collaborative Debugging with AI
Relvy transforms incident response from a solitary debugging session into a collaborative investigation between engineers and AI. This partnership ensures that human expertise guides the process while AI handles the repetitive and time-consuming tasks.
How Relvy Works With Engineers
Review & Approve
Review and approve investigation steps before execution, ensuring every action aligns with your expertise.
Modify & Guide
Modify queries, add custom analysis, and guide the investigation with your domain knowledge.
Override AI
Override or skip AI-suggested steps at any time—engineers retain full control.
Continuous Feedback
Provide feedback to Relvy during investigations. Your feedback is saved as runbooks to improve future AI performance.
Learning Mode: Guided AI, Continuous Improvement
Learning Mode lets you configure Relvy to propose a debugging plan and wait for engineer approval before executing any steps.
- Enable or disable Learning Mode in the Settings page.
- While enabled, Relvy is in a “learning phase”—it does not take action on its own.
- Engineers review, approve, or modify each step, providing feedback directly to Relvy.
- All feedback is saved as runbooks, which guide Relvy’s future investigations.
- When your team is confident, disable Learning Mode to let Relvy autonomously investigate incidents using the knowledge it has learned from your feedback.
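The Learning Mode behavior above can be pictured as a simple gate plus a feedback log. This is only an illustrative sketch; the class and method names (`LearningSession`, `propose`, `record_feedback`) are hypothetical and not Relvy's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class LearningSession:
    """Hypothetical model of Learning Mode: propose plans, wait for approval."""
    learning_mode: bool = True              # toggled in the Settings page
    runbooks: list = field(default_factory=list)

    def propose(self, plan):
        # While Learning Mode is on, nothing runs without engineer approval.
        if self.learning_mode:
            return {"status": "awaiting_approval", "plan": plan}
        return {"status": "executing", "plan": plan}

    def record_feedback(self, feedback):
        # Engineer feedback is persisted as a runbook for future investigations.
        self.runbooks.append(feedback)

session = LearningSession()
result = session.propose(["check recent deployments", "analyze metrics"])
session.record_feedback("prioritize cache-related queries")
```

Disabling `learning_mode` flips the gate, matching the hand-off described above: Relvy investigates autonomously but still draws on the accumulated runbooks.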
Investigation Workflow
Relvy receives an alert or incident description
Relvy creates a structured investigation plan with multiple steps
Engineers review and modify the plan as needed (in Learning Mode, Relvy waits for approval before executing)
Relvy executes queries and fetches data from the relevant sources
Relvy analyzes results and correlates findings
Engineers ask follow-up questions to guide Relvy toward a deeper investigation
The investigation concludes with actionable insights
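The workflow steps above can be sketched as a single loop. Everything here is a stand-in for illustration, assuming hypothetical helpers (`create_plan`, `execute_step`, `correlate`) rather than any real Relvy interface; the engineer's role appears as a `review_plan` callback.

```python
def create_plan(alert):
    # Step 2: turn the alert into an ordered list of investigation steps.
    return [f"query logs for {alert}", f"check deployments near {alert}"]

def execute_step(step):
    # Step 4: run the query against a data source (stubbed out here).
    return {"step": step, "data": None}

def correlate(findings):
    # Steps 5-7: correlate results into actionable insights.
    return {"insights": [f["step"] for f in findings]}

def investigate(alert, review_plan):
    plan = create_plan(alert)                    # Relvy proposes a plan
    plan = review_plan(plan)                     # Step 3: engineer edits/approves
    findings = [execute_step(s) for s in plan]   # Relvy gathers data
    return correlate(findings)                   # analysis -> insights

# An engineer who approves the plan unchanged:
result = investigate("high-latency-alert", review_plan=lambda p: p)
```

The `review_plan` hook is the collaborative piece: an engineer can reorder, drop, or append steps before anything executes.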
Shareable Investigation Notebooks
Breaking Down Tribal Knowledge Barriers
In most organizations, each system component ends up being deeply understood by only a handful of developers. This creates a “tribal knowledge” bottleneck during incidents. With Relvy, every investigation and runbook becomes a shareable, searchable artifact—so on-call engineers and AI can review previous investigations and runbooks for faster, more effective incident response. Tribal knowledge is diffused across the team and the AI, making expertise accessible to everyone.
Coaching Relvy
As engineers investigate or simulate scenarios, they provide feedback to Relvy. This feedback is captured as runbooks, allowing Relvy to learn your team’s unique systems, workflows, and best practices. Over time, Relvy becomes well-versed in your production environment—not just a generic AI solution, but a coachable teammate tailored to your stack.
Real-World Collaboration Example
Scenario: A microservice is experiencing high latency.
1. Relvy creates initial plan: Check recent deployments, analyze metrics, examine logs
2. Engineer adds context: “This service was recently updated with new caching logic. Check cache-related queries as well”
3. Relvy adjusts focus: Prioritizes cache-related queries and deployment analysis
4. Collaborative analysis: AI executes queries and presents its analysis. Engineer and AI work together to identify the root cause
5. Knowledge capture: The entire investigation is saved as a shareable notebook
6. Future reference: Next time a similar issue occurs, Relvy can reference this investigation
This collaborative approach ensures that both human expertise and AI capabilities are leveraged effectively, leading to faster resolution times and better knowledge retention across your team.