Case Study
Consumer Healthcare / OTC & Wellness

How a Global Consumer Healthcare Company Unified 6 Competitive Arenas into a Single Intelligence Layer

Nielsen said share was down 1.2 points. Internal sales said billing was up 8%. Both were right. Nobody could explain why.

96.1%
Diagnostic accuracy
first use case
6 weeks
Kickoff to
production
Same day
Signal-to-insight
down from 3-4 weeks
<10%
Relative cost per
subsequent use case
01

Six businesses under one roof.
Each fighting a different war
with the same blind spots.

The company operates across six categories, including oral health, pain relief, digestive health, respiratory, and vitamins. Each category has different competitors, channels, seasonality, and data sources. A single portfolio, but six separate intelligence requirements.

6
Distinct competitive
arenas
5+
External data sources
(Nielsen, Kantar, IQVIA)
3-4 wks
Lag from signal
to assembled insight
12
Personas needing
different views

NielsenIQ measures market share monthly across general trade (GT) and modern trade (MT), but undercounts the pharmacy channel where pain relief and respiratory products move. Kantar tracks household penetration on a different cadence with a different geography taxonomy. Internal primary sales data show distributor billing. Secondary sales data from the field are patchy in newly expanded rural territories.

A category head sees oral health share dip 1.2 points in Nielsen. She asks sales. They say primary billing is up 8%. Both are correct: the company shipped more, but a competitor launched a Rs 10 sachet that captured trial in rural outlets where Nielsen coverage is thin. By the time someone manually assembles the picture, the competitor has been in-market for 6 weeks.

02

Five dashboards. Five data sources.
Zero cross-referencing.

Nielsen in one portal. Kantar in another. Internal sales in SAP. Pharmacy data in a quarterly spreadsheet. Brand health in a research vendor's platform. Each technically accessible. None connected.

~2 days
Per Monthly Review

Brand managers spent roughly two days assembling the monthly category picture: pulling data from each source, reconciling geography taxonomies, and building slides. The output was backward-looking, arrived too late to act on, and cross-domain questions were often not asked at all.

×

Different taxonomies, no entity resolution. Nielsen uses "Urban North." Internal sales uses state-level RSM territories. Kantar uses SEC classifications by city tier. A question like "is our share loss concentrated where we recently expanded distribution?" required manual joins across 3 naming systems.
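The manual join described above can be sketched in a few lines. This is a hedged illustration, not the company's actual resolution logic: the taxonomy labels, mapping tables, and metric names below are invented for the example, and a production system would hold these mappings in the knowledge graph rather than in hard-coded dicts.

```python
# Illustrative entity resolution across three geography taxonomies.
# All labels and mappings here are hypothetical examples.

NIELSEN_TO_CANON = {"Urban North": "north", "Urban South": "south"}
RSM_TO_CANON = {"UP-RSM-02": "north", "TN-RSM-01": "south"}   # state-level RSM territories
KANTAR_TO_CANON = {"Tier-1 North": "north", "Tier-1 South": "south"}  # SEC city tiers

def canonical_view(nielsen_share, rsm_billing, kantar_penetration):
    """Join three per-taxonomy metric dicts on a canonical region key."""
    view = {}
    for taxo_map, metrics, metric_name in [
        (NIELSEN_TO_CANON, nielsen_share, "share_pts"),
        (RSM_TO_CANON, rsm_billing, "billing_growth"),
        (KANTAR_TO_CANON, kantar_penetration, "penetration"),
    ]:
        for raw_geo, value in metrics.items():
            region = taxo_map.get(raw_geo)
            if region is not None:
                view.setdefault(region, {})[metric_name] = value
    return view

# One canonical row now answers a question that spanned three systems:
view = canonical_view({"Urban South": -1.2}, {"TN-RSM-01": 0.08}, {"Tier-1 South": 0.41})
```

Once every source is keyed to the same canonical region, "is our share loss concentrated where we recently expanded distribution?" becomes a lookup instead of a two-day reconciliation exercise.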

×

No cross-domain diagnosis. Nielsen shows share is down. Internal data shows billing is up. Neither can tell you that a competitor launched a low-unit-price SKU in pharmacy outlets that Nielsen undercounts. That connection requires crossing system boundaries. No dashboard did this.

×

Pharmacy channel blind spot. Chemist outlets drive 30-40% of sales in pain relief and respiratory. Nielsen's pharmacy coverage is thin. HCP recommendation patterns were tracked in a separate IQVIA system. An entire channel operated without integrated intelligence.

03

One graph across all six arenas.
Market data, sales data, and
channel data connected.

The Intelligence Warehouse models the business as a single graph: brands, categories, geographies, channels, competitors, and the external measurement systems that track them. When share moves, the graph traces why across system boundaries.
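A minimal sketch of what "modeling the business as a single graph" can mean in practice: typed nodes for brands, categories, geographies, channels, competitors, and data sources, with labeled edges between them. The class and the example entities below are illustrative assumptions, not the production schema.

```python
# Hypothetical sketch of a typed business knowledge graph.
from collections import defaultdict

class BusinessGraph:
    def __init__(self):
        self.nodes = {}                 # entity name -> entity type
        self.edges = defaultdict(list)  # entity name -> [(relation, target)]

    def add_node(self, name, node_type):
        self.nodes[name] = node_type

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, name, relation=None):
        """Targets reachable from `name`, optionally filtered by edge label."""
        return [dst for rel, dst in self.edges[name]
                if relation is None or rel == relation]

g = BusinessGraph()
g.add_node("Oral Health", "category")
g.add_node("Pharmacy", "channel")
g.add_node("Nielsen", "data_source")
g.add_edge("Oral Health", "sold_through", "Pharmacy")
g.add_edge("Nielsen", "undercounts", "Pharmacy")
```

Encoding measurement systems as first-class nodes is what lets a traversal surface facts like "Nielsen undercounts the pharmacy channel this category sells through" during diagnosis, instead of leaving that caveat in an analyst's head.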

Business Knowledge Graph
Entity · Metric · Decision
46 entities · 28 metrics · 15 decision rules · 10 domain clusters
Knowledge Elicitation via Morrie
Adaptive sessions with brand managers, category heads, and sales leaders

The graph was populated through structured sessions with stakeholders across brand, sales, and analytics using Morrie. Not "what reports do you use?" but "when Nielsen says share is down and sales says billing is up, how do you figure out who's right?" 18 sessions across 12 personas.

Morrie: Oral health share dropped 1.2 points in last month's Nielsen read. Internal billing is up 8% in the same period. Walk me through how you reconcile this.

Category head: First: which geography and which channel? Nielsen is strong in GT and MT but weak in pharmacy. If the loss is in "Urban South" and I know we just expanded rural South distribution, the billing uplift is pipeline fill that hasn't converted to offtake. Two different signals, not a contradiction.

[Captured] Diagnostic rule: Share_vs_Billing_Divergence → Geography_Channel_Mix + Pipeline_vs_Offtake split. Nielsen_Coverage_Gap flagged for pharmacy channel.

Morrie: And if the geography matches? Urban GT, where both Nielsen coverage and your billing are strong.

Category head: Then a competitor did something. I'd check if someone dropped a low-price sachet in the same outlets. In oral care, a Rs 10 trial pack can shift numeric distribution 3-4 points in a month. Nielsen catches it, but I won't see it in my data because my volumes didn't change. The category just got bigger beneath me.

[Captured] RCA: Same_Geo_Divergence → Competitor_Activity (New_SKU OR Trade_Scheme). Category_Expansion: Haleon_Volume_Stable + Category_Volume_Up = Share_Dilution, not Sales_Loss.

Morrie: So share dilution versus actual sales loss requires knowing if the category grew. How quickly do you get that signal today?

Category head: Monthly, with a 3-week lag. By the time I see it, the competitor has had 6 weeks of uncontested distribution. If I had the signal in week 2, I could have run a counter-scheme before the trial converts to repeat purchase.

[Captured] Critical gap: Nielsen_Lag (3 wks) + Analysis_Assembly (1 wk) = 4-week detection delay. Counter-scheme decision window: 2 weeks max. Net: 2-4 weeks lost response time.
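The decision rules captured in the transcript can be expressed as a small rule function. This is a simplified sketch of the logic the sessions elicited; the thresholds, argument names, and return labels are illustrative assumptions, not the production rules.

```python
# Illustrative encoding of the elicited diagnostic rules.
# Thresholds and labels are assumptions made for this sketch.

def diagnose(share_delta_pts, billing_delta_pct, same_geography,
             category_volume_delta_pct, own_volume_delta_pct):
    """Coarse root-cause label for a share-down / billing-up divergence."""
    if share_delta_pts >= 0 or billing_delta_pct <= 0:
        return "no_divergence"
    if not same_geography:
        # Loss and uplift sit in different geo/channel mixes:
        # likely pipeline fill that has not converted to offtake.
        return "geography_channel_mix / pipeline_vs_offtake"
    if category_volume_delta_pct > 0 and abs(own_volume_delta_pct) < 1.0:
        # Own volumes flat while the category grew beneath the brand.
        return "share_dilution (category expansion)"
    # Same geography, no category expansion: competitor action.
    return "competitor_activity (new SKU or trade scheme)"

# The transcript's opening scenario: share down 1.2 pts, billing up 8%,
# same geography, category growing while own volumes hold steady.
label = diagnose(-1.2, 8.0, True, 5.0, 0.2)
```

The point of encoding the rules this way is that they fire automatically on every data refresh, rather than living in one category head's intuition.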

18 sessions · 46 entities, 28 metrics, 15 decision rules · Every node traceable to a specific conversation

04

First use case: Sales Intelligence

Cross-domain diagnosis across Nielsen, Kantar, and internal data. When share moves, trace whether it's distribution, pricing, competition, or channel mix. Automated anomaly detection and root-cause decomposition across all six arenas.

Diagnostic Accuracy
96.1%
Validated against root-cause analyses produced by senior category managers across 3 arenas over 2 months.
Previously: manual, 2 days per review
Time to Production
6 weeks
Kickoff to live. Includes knowledge elicitation, graph build, Nielsen/Kantar/SAP/IQVIA data mapping, entity resolution, and agent deployment.
Signal-to-Insight
Same day
When Nielsen data refreshes, the graph cross-references with internal data automatically. Anomalies surfaced in hours, not weeks.
Previously: 3-4 week lag
Cross-Domain Queries
--
Of questions requiring 2+ systems answered correctly. Previously, these required manual assembly and were often not asked.
05

Build the graph once.
Deploy use cases at a fraction.

The first use case encoded the business model across all six arenas. Market Pulse rode on that same foundation, adding demand signal and competitive intelligence layers at a fraction of the original cost.

Use Case | BKG Reuse | Accuracy | Time to Live | Cost
Sales Intelligence (cross-domain diagnosis, share decomposition, competitive response detection) | Baseline | 96.1% | 6 weeks | 100%
Market Pulse (symptom-led social signals, Google Trends demand sensing, competitive creative detection, share-of-search) | 72% | 95.2% | 13 days | 10%

Relative implementation cost: Sales Intelligence 100% · Market Pulse 10%

Why It Compounds

Market Pulse adds social listening, symptom-led search signals, and competitive creative detection entities to the existing brand-category-geography-channel graph. The ontology from Sales Intelligence carries directly because the brand, competitor, channel, and geography model is shared. Only the new signal sources and their associated decision rules need to be built.

5
New entities
for Market Pulse
7
New metrics
for Market Pulse
6
New decision rules
for Market Pulse

Versus the 46 entities, 28 metrics, and 15 decision rules already in the graph.

Summary

The Intelligence Warehouse is the compounding asset.

Two use cases. One knowledge foundation. Six arenas, finally connected.

2
Use cases
in production
95.7%
Average diagnostic
accuracy
13d
Market Pulse
time to live
10%
Market Pulse cost
vs. Sales Intelligence