That took time to accept. Many of the worst-performing clusters had excellent RSSI, clear dominance, and coverage plots that looked textbook clean. Yet they produced persistent call drops, handover failures, and poor voice quality day after day.
The issues were rarely coverage gaps. They came from how the radio system behaved under real traffic load. That distinction is not obvious until you spend enough time correlating drive data against busy-hour OSS counters and watch the two tell completely different stories.
Neighbor lists were typically planned once during rollout and rarely touched afterward. Over months, traffic patterns shifted. New sites were added. Antenna adjustments changed dominance areas. The original neighbors became outdated, but the configuration stayed the same.
Handovers were sent to poor targets, producing drops that drive tests in quiet conditions never caught. The fix required pulling OSS neighbor performance counters, cross-referencing them with measurement reports, identifying the actual dominant candidates under load, and rebuilding the neighbor set accordingly.
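That rebuild logic can be sketched in a few lines. This is a hypothetical illustration, not the actual tooling used: the counter fields, thresholds, and function name are all assumptions, but the shape of the decision is the one described above, keeping configured neighbors that perform under load and adding strong measured candidates the configuration is missing.

```python
# Hypothetical sketch of a neighbor-list rebuild. Field names and
# thresholds (min_attempts, max_fail_rate, min_rssi_dbm) are illustrative.

def rebuild_neighbors(oss_ho_stats, meas_reports, min_attempts=50,
                      max_fail_rate=0.10, min_rssi_dbm=-95):
    """Return a rebuilt neighbor set for one serving cell.

    oss_ho_stats : {neighbor_id: (attempts, failures)} from busy-hour counters
    meas_reports : {candidate_id: avg_rssi_dbm} reported by mobiles under load
    """
    keep = set()
    # Keep configured neighbors that actually perform under real traffic.
    for nbr, (attempts, failures) in oss_ho_stats.items():
        if attempts >= min_attempts and failures / attempts <= max_fail_rate:
            keep.add(nbr)
    # Add strong measured candidates missing from the configured list.
    for cand, rssi in meas_reports.items():
        if rssi >= min_rssi_dbm:
            keep.add(cand)
    return sorted(keep)
```

The point of the sketch is that both inputs are busy-hour data; a neighbor list rebuilt from quiet-hour drive measurements alone would repeat the original mistake.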
Aggressive frequency reuse improved capacity metrics on paper, but the interference effects were load-dependent and invisible in idle mode. They appeared only when multiple TRXs were active simultaneously, which meant post-launch drive tests run during low-traffic hours missed the problem entirely.
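A simplified carrier-to-interference calculation shows why idle-mode measurements hide this. The model below is an assumption for illustration (equal-power co-channel interferers, free-space sum in the linear domain, no frequency hopping), not a planning-grade formula, but it captures how C/I degrades as more co-channel TRXs become active:

```python
import math

def carrier_to_interference_db(serving_dbm, interferer_dbm, active_trx):
    """Simplified C/I in dB when `active_trx` co-channel TRXs, each
    received at `interferer_dbm`, transmit simultaneously.
    Interferer powers are summed in the linear (mW) domain."""
    interference_mw = active_trx * 10 ** (interferer_dbm / 10)
    return serving_dbm - 10 * math.log10(interference_mw)
```

With a serving signal at -80 dBm and each co-channel interferer at -100 dBm, one active TRX gives 20 dB of C/I, comfortably clean; four simultaneously active TRXs cost 10·log10(4) ≈ 6 dB, dropping C/I to about 14 dB. The idle-mode drive test only ever sees the first case.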
Small differences in configuration between adjacent cells were rarely flagged during commissioning. Power control thresholds, handover margins, and timing advance settings that looked reasonable in isolation produced unpredictable behavior when cells interacted under load.
The problem was that cells were audited individually. The interaction between adjacent cells under load was never modeled. Fixing this required cluster-level parameter audits, comparing every adjacent pair, not reviewing each cell in isolation.
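A cluster-level audit of this kind is mechanically simple once framed as pairwise comparison. The sketch below is a hypothetical illustration, with made-up parameter names and tolerances, of checking every adjacent pair rather than each cell on its own:

```python
# Hypothetical adjacent-pair parameter audit. Parameter names and
# tolerance values are illustrative, not from any specific vendor OSS.

def audit_adjacent_pairs(cell_params, adjacencies, tolerances):
    """Flag parameter mismatches between adjacent cells.

    cell_params : {cell_id: {param: value}}
    adjacencies : iterable of (cell_a, cell_b) pairs
    tolerances  : {param: max_allowed_difference}
    Returns a list of (cell_a, cell_b, param, difference) findings.
    """
    findings = []
    for a, b in adjacencies:
        for param, tol in tolerances.items():
            diff = abs(cell_params[a][param] - cell_params[b][param])
            if diff > tol:
                findings.append((a, b, param, diff))
    return findings
```

Each cell passes a per-cell sanity check by construction here; only the pairwise view surfaces the mismatch, which mirrors why commissioning audits of individual cells missed these problems.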
The most effective troubleshooting combined drive-test measurements with OSS counters. The drive data showed where problems occurred. OSS counters showed when and under what load. Neither was sufficient alone. A drop appearing in both sources, correlated by location and time of day, was far more actionable than either finding in isolation.
Only patterns confirmed in both sources were acted on. Single-source findings were treated as hypotheses, not conclusions.
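The confirmed-versus-hypothesis split can be expressed as a small triage step. This is a schematic sketch under assumed inputs: drive-test drop events keyed by cell and hour, and a precomputed set of cell-hours where the OSS drop counter exceeded its baseline.

```python
# Schematic dual-source triage. The (cell_id, hour) keying and the
# idea of a precomputed OSS "spike" set are illustrative assumptions.

def triage(drive_events, oss_spikes):
    """Split drive-test drops into confirmed findings and hypotheses.

    drive_events : [(cell_id, hour)] drops observed while driving
    oss_spikes   : {(cell_id, hour)} cell-hours where the OSS drop
                   counter exceeded its baseline
    """
    confirmed = [e for e in drive_events if e in oss_spikes]
    hypotheses = [e for e in drive_events if e not in oss_spikes]
    return confirmed, hypotheses
```

Only the `confirmed` list feeds directly into corrective action; the `hypotheses` list goes back for more data collection, which is exactly the discipline the paragraph above describes.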