One of the world's largest network modernization programs needed AI that understood how the business thinks, not just where the data lives.
A multi-year radio access network modernization across thousands of live cell sites. Four parallel operational tracks. And the knowledge that connected it all was trapped in the heads of a handful of senior program managers.
Field execution, HSE compliance, outage management, and formal closeout ran simultaneously at every site. Vendor teams, general contractors, program managers, and safety inspectors each held a different fragment of the operational picture.
No single person or system could answer the question that mattered: "What should happen next at this site, and why?" The tribal knowledge that connected these fragments lived entirely in people's heads. When those people were unavailable, the program slowed down.
An initial AI pilot took the standard route: connect an LLM to operational databases and let teams query in natural language. Text-to-SQL. RAG over documentation. It fell short in three ways.
No relational reasoning. The LLM had access to tables, not to a business model. It could not connect a "Ready" site to its expired maintenance window and a vendor whose HSE certification lapsed last week.
Tribal knowledge cannot be prompted. The rules governing vendor eligibility, escalation paths, and exception handling were never written down. No amount of prompt engineering could surface them.
Confidence without correctness. Simple lookups worked. Multi-step reasoning across scheduling, compliance, and vendor history produced answers that were articulate, specific, and wrong.
Instead of pointing an LLM at databases, we built a Business Knowledge Graph that encodes the entities, relationships, metrics, and decision rules that experienced operators carry in their heads.
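A minimal sketch of what "encoding the business model" means in practice, assuming a simple in-memory graph. The entity names, properties, and the rule itself are illustrative, not the production schema; the point is that a decision rule traverses typed relationships rather than querying isolated tables:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    type: str
    props: dict = field(default_factory=dict)

class BusinessKnowledgeGraph:
    """Toy knowledge graph: typed nodes plus labeled, directed edges."""
    def __init__(self):
        self.nodes: dict[str, Node] = {}
        self.edges: list[tuple[str, str, str]] = []  # (source_id, relation, target_id)

    def add_node(self, node: Node):
        self.nodes[node.id] = node

    def add_edge(self, src: str, relation: str, dst: str):
        self.edges.append((src, relation, dst))

    def neighbors(self, node_id: str, relation: str):
        return [self.nodes[d] for s, r, d in self.edges
                if s == node_id and r == relation]

# Entities mirroring the example above: a "Ready" site whose maintenance
# window expired and whose assigned vendor's HSE certification lapsed.
g = BusinessKnowledgeGraph()
g.add_node(Node("site-117", "Site", {"status": "Ready"}))
g.add_node(Node("mw-42", "MaintenanceWindow", {"expired": True}))
g.add_node(Node("vendor-9", "Vendor", {"hse_cert_valid": False}))
g.add_edge("site-117", "HAS_WINDOW", "mw-42")
g.add_edge("site-117", "ASSIGNED_TO", "vendor-9")

def ready_but_blocked(graph, site_id):
    """Decision rule: a 'Ready' site is actually blocked if its maintenance
    window expired or its vendor lacks a valid HSE certification."""
    site = graph.nodes[site_id]
    if site.props.get("status") != "Ready":
        return []
    blockers = []
    for w in graph.neighbors(site_id, "HAS_WINDOW"):
        if w.props.get("expired"):
            blockers.append(f"maintenance window {w.id} expired")
    for v in graph.neighbors(site_id, "ASSIGNED_TO"):
        if not v.props.get("hse_cert_valid"):
            blockers.append(f"vendor {v.id} HSE certification lapsed")
    return blockers

print(ready_but_blocked(g, "site-117"))
# → ['maintenance window mw-42 expired', 'vendor vendor-9 HSE certification lapsed']
```

This is exactly the multi-hop connection the text-to-SQL pilot could not make: the answer lives in the relationships, not in any single table.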
The graph wasn't built by reading documentation or reverse-engineering schemas. It was populated through structured conversational sessions with stakeholders using Morrie, an adaptive system that conducts Socratic-style interviews, asks progressively sharper domain questions, and constructs graph nodes in real time as experts describe how the business actually works.
14 sessions across program managers, field leads, and HSE supervisors. Each session elicited entities, relationships, thresholds, and exception logic that no documentation captured.
14 sessions · 42 entities, 18 metrics, 12 decision rules captured · Every node traceable to a specific conversation
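One way to make "every node traceable to a specific conversation" concrete is to tag each elicited element with the session that produced it. This is a hedged sketch under assumed names (the node types, IDs, and rule text are illustrative, not Morrie's actual data model):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ElicitedNode:
    id: str
    type: str        # "entity", "metric", or "decision_rule"
    statement: str   # the expert statement the node encodes
    session_id: str  # which interview session produced it

graph: dict[str, ElicitedNode] = {}

def capture(node: ElicitedNode):
    """Add an elicited node to the graph, provenance attached."""
    graph[node.id] = node

# Hypothetical example: a threshold rule stated by an HSE supervisor.
capture(ElicitedNode(
    id="rule-hse-cert-window",
    type="decision_rule",
    statement="A vendor cannot be assigned if its HSE certification "
              "expires within 14 days of the scheduled work.",
    session_id="session-07",
))

def provenance(node_id: str) -> str:
    """Trace a node back to the conversation it came from."""
    node = graph[node_id]
    return f"{node.id} ({node.type}) <- {node.session_id}"

print(provenance("rule-hse-cert-window"))
# → rule-hse-cert-window (decision_rule) <- session-07
```

Provenance is what makes the graph auditable: when a rule fires, the answer can cite the expert conversation it was derived from.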
Which sites need attention now? What's blocking them? What should we do? The first agent deployed answered the hardest question in the program.
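The agent's core loop can be sketched as: evaluate decision rules per site, surface blockers, and rank a recommended next action. The rules, fields, and severities below are assumptions for illustration, not the deployed rule set:

```python
# Toy site records; in production these would come from the knowledge graph.
SITES = [
    {"id": "site-201", "status": "Ready", "window_expired": True,  "vendor_cert_ok": True},
    {"id": "site-202", "status": "Ready", "window_expired": False, "vendor_cert_ok": False},
    {"id": "site-203", "status": "Done",  "window_expired": False, "vendor_cert_ok": True},
]

# Each rule: (severity, condition, blocker text, recommended next action).
RULES = [
    (2, lambda s: s["window_expired"],     "maintenance window expired", "rebook maintenance window"),
    (1, lambda s: not s["vendor_cert_ok"], "vendor HSE cert lapsed",     "reassign to certified crew"),
]

def sites_needing_attention(sites):
    """Answer the three questions: which sites, what blocks them, what next."""
    findings = []
    for site in sites:
        if site["status"] != "Ready":
            continue  # only in-flight sites can be blocked
        for severity, cond, blocker, action in RULES:
            if cond(site):
                findings.append({"site": site["id"], "severity": severity,
                                 "blocker": blocker, "next_action": action})
    # Highest-severity blockers first: "what should happen next, and why".
    return sorted(findings, key=lambda f: -f["severity"])

for f in sites_needing_attention(SITES):
    print(f["site"], "->", f["next_action"], "because", f["blocker"])
# → site-201 -> rebook maintenance window because maintenance window expired
# → site-202 -> reassign to certified crew because vendor HSE cert lapsed
```

The design choice that matters: the LLM narrates and explains, but the blocker logic itself is deterministic rules over the graph, which is what keeps answers correct rather than merely articulate.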
The Intelligence Warehouse is the compounding asset. The first use case is the investment. Everything after rides on the knowledge already encoded.
| Use Case | BKG Reuse | Accuracy | Time to Live | Cost |
|---|---|---|---|---|
| Site Execution Intelligence: blocker detection, prioritization, next-action | Baseline | 96.1% | 6 weeks | 100% |
| HSE Compliance Prediction: predict likely audit failures by site and vendor | 76% | 95.4% | 13 days | 9% |
| Maintenance Window Optimization: scheduling, conflict detection, expiry alerts | 81% | 95.8% | 10 days | 8% |
| Vendor Performance & Assignment: throughput, SLA breach prediction, crew matching | 84% | 96.2% | 9 days | 7% |
The cost collapse is structural. Each subsequent use case reuses the same ontology, the same metrics layer, and most of the same decision logic. Only net-new decision rules and additional entity relationships need to be built, versus the 42 entities, 18 metrics, and 12 decision rules already in the graph from use case 1.
Four use cases. One knowledge foundation. Each one faster, cheaper, and more accurate.