WCDMA / HSUPA  ·  packet performance

By late 2011, voice KPIs alone were no longer telling the full story. Networks could meet call setup and drop targets while users complained that data sessions felt slow, stalled, or unstable. These problems were harder to diagnose because they rarely showed up as outright failures in any single counter.

Data performance fails quietly. Unlike voice, it degrades before it collapses. A call drops, and you know something is wrong. A data session retries silently and times out. The user blames the network. The dashboard stays green.

What voice metrics did not cover

In several WCDMA clusters, voice performance looked acceptable while packet-switched KPIs quietly degraded. High RRC connection success rates masked frequent uplink instability, excessive retransmissions, and rising interference during busy hours.

Metric              | Voice view               | Data view (same cells)
RRC setup success   | 97.8%, within target     | High, but RRC state churn elevated
Drop rate           | 1.4%, within target      | Session timeout rate not tracked
Uplink interference | Not flagged by voice KPI | UL noise rise 4-6 dB above baseline at peak
User perception     | No voice complaints      | Slow network, stalled pages, failed downloads
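The mismatch in the table can be expressed as a simple cross-check. This is a minimal sketch, not an OSS query: the cell name, field names, and all three thresholds are illustrative assumptions; only the KPI values come from the table above.

```python
# Hypothetical per-cell KPI snapshot; the numbers mirror the table above.
cells = [
    {"cell": "A1", "rrc_setup_pct": 97.8, "drop_pct": 1.4, "ul_noise_rise_db": 5.0},
]

# Illustrative targets (assumptions, not vendor or operator defaults):
VOICE_SETUP_TARGET = 97.0
VOICE_DROP_TARGET = 2.0
UL_NOISE_RISE_LIMIT_DB = 3.0

def voice_green(c):
    """Voice KPIs within target -- the view the dashboard shows."""
    return (c["rrc_setup_pct"] >= VOICE_SETUP_TARGET
            and c["drop_pct"] <= VOICE_DROP_TARGET)

def data_red(c):
    """Uplink noise rise above the assumed acceptable busy-hour level."""
    return c["ul_noise_rise_db"] > UL_NOISE_RISE_LIMIT_DB

# Cells where voice looks fine but data is quietly degrading.
masked = [c["cell"] for c in cells if voice_green(c) and data_red(c)]
print("voice green, data red:", masked)
```

The point of the check is the intersection: neither condition alone raises an alarm, and the voice-only dashboard never evaluates the second one.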

Data sessions did not drop cleanly like calls. They lingered, retried, and eventually timed out. The failure mode was invisible to the KPI framework, originally designed around circuit-switched voice.

What the counters showed

Correlating RRC state transitions, HSUPA power control counters, and uplink interference indicators with actual traffic patterns surfaced the problem. Cells serving a mix of stationary voice users and mobile data users behaved very differently from the voice-only assumptions used during original tuning.

Counter correlation, busy-hour analysis:

RRC connected mode transitions (CELL_DCH to CELL_FACH):
-- elevated in data-heavy sectors
-- UE oscillating between states due to inactivity timer mismatch

HSUPA max power limited events increasing:
-- UE hitting uplink power ceiling before throughput target met
-- correlates with rising UL RTWP in the same sectors

RTWP trend (uplink noise rise):
-- baseline: -104 dBm
-- busy hour observed: -98 to -100 dBm
-- 4-6 dB rise absorbing uplink budget
-- voice unaffected (power-controlled, lower data rate)
-- HSUPA sessions throttled, retransmission rate up
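The RTWP figures above reduce to a per-sector noise-rise calculation. A minimal sketch, assuming invented sector names and a 3 dB alarm line; the -104 dBm baseline and the busy-hour range are the values from the counters above.

```python
RTWP_BASELINE_DBM = -104.0   # quiet-hour uplink noise floor from the analysis above

# Hypothetical busy-hour RTWP per sector (dBm); sector names are invented.
busy_hour_rtwp = {"S1": -99.0, "S2": -103.5, "S3": -98.0}

NOISE_RISE_ALARM_DB = 3.0    # assumed flagging threshold

# Noise rise = busy-hour RTWP minus quiet-hour baseline, in dB.
noise_rise_db = {s: rtwp - RTWP_BASELINE_DBM for s, rtwp in busy_hour_rtwp.items()}
flagged = sorted(s for s, rise in noise_rise_db.items() if rise > NOISE_RISE_ALARM_DB)

for s in sorted(noise_rise_db):
    mark = " [FLAG]" if s in flagged else ""
    print(f"{s}: noise rise {noise_rise_db[s]:.1f} dB{mark}")
```

With these numbers, a 4-6 dB rise puts the data-heavy sectors well over the alarm line while the power-controlled voice traffic stays inside its budget, which is exactly why voice KPIs stayed green.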

Capacity imbalance within NodeBs

A recurring issue was uneven traffic distribution across sectors of the same NodeB. Some sectors consistently carried more packet traffic because of indoor penetration or nearby hotspots. Scheduler behavior and uplink power targets were still tuned uniformly across all sectors.

Pattern observed across multiple NodeBs
Sector A (facing office building): 60% of NodeB packet traffic
Sector B: 25%
Sector C: 15%
All three sectors: identical uplink noise rise threshold, same scheduler config
Sector A hits the UL noise limit first at busy hour; data throughput throttled
Sectors B and C: headroom still available, not utilized
Downlink resources on Sector A: still largely free

Result: capacity available in the NodeB, but inaccessible because per-sector thresholds did not reflect actual usage
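The stranded-capacity effect can be made concrete with a small sketch. The traffic shares are the ones observed above; the noise-rise values and the uniform 4 dB limit are assumptions for illustration.

```python
# Hypothetical NodeB with a uniform per-sector uplink limit, as in the pattern above.
packet_share = {"A": 0.60, "B": 0.25, "C": 0.15}   # share of NodeB packet traffic
ul_noise_rise_db = {"A": 5.5, "B": 1.5, "C": 0.8}  # assumed busy-hour values
UNIFORM_LIMIT_DB = 4.0                             # same threshold on every sector (assumed)

# Sectors throttled because they hit the uniform limit.
limited = [s for s in packet_share if ul_noise_rise_db[s] >= UNIFORM_LIMIT_DB]

# Headroom left unused on the quieter sectors.
headroom = {s: UNIFORM_LIMIT_DB - ul_noise_rise_db[s]
            for s in packet_share if s not in limited}

print("throttled:", limited)          # the data-heavy sector hits the limit first
print("unused headroom (dB):", headroom)
```

The per-sector view shows the NodeB is not out of capacity: the sector carrying 60% of the traffic is capped while the others sit on unused headroom, which is what uniform thresholds cannot express.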

The fix was not adding capacity everywhere. It was targeted tuning based on observed usage patterns: adjusting uplink power targets, load thresholds, and scheduler behavior for the data-heavy sectors specifically. That stabilized data sessions without affecting voice on the other sectors.

What changed in the analysis approach

Voice optimization had a relatively clean set of counters: setup success, drop rate, handover success, and congestion. Data performance required a wider set pulled together: RRC state behavior, HSUPA power events, interference trends, retransmission rates, and session continuity indicators. None of these told the full story individually.

Minimum counter set for packet performance diagnosis:

RRC state transitions -- stability of the session layer
HSUPA max power events -- uplink budget exhaustion
RTWP per sector -- interference floor trend
HSDPA CQI distribution -- downlink quality seen by the UE
E-RAB / session drop rate -- session-level failure, separate from call drop
Retransmission ratio -- silent indicator of link-quality stress
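Combining the six indicators might look like the sketch below. Every field name, sample value, and threshold here is an assumption chosen for illustration; the point is only that the verdict comes from evaluating the set together, not any single counter.

```python
# One sector's busy-hour snapshot of the six indicators listed above (assumed values).
sector = {
    "rrc_state_transitions_per_session": 9.0,  # CELL_DCH <-> CELL_FACH churn
    "hsupa_max_power_events": 420,             # uplink budget exhaustion
    "rtwp_noise_rise_db": 5.0,                 # interference floor trend
    "median_cqi": 14,                          # downlink quality seen by the UE
    "session_drop_pct": 2.8,                   # session-level failure rate
    "retx_ratio_pct": 11.0,                    # silent link-quality stress
}

# Illustrative thresholds -- assumptions, not vendor defaults.
checks = {
    "rrc_state_transitions_per_session": lambda v: v > 6,
    "hsupa_max_power_events": lambda v: v > 100,
    "rtwp_noise_rise_db": lambda v: v > 3.0,
    "median_cqi": lambda v: v < 10,            # low CQI means poor downlink
    "session_drop_pct": lambda v: v > 2.0,
    "retx_ratio_pct": lambda v: v > 5.0,
}

failing = [name for name, check in checks.items() if check(sector[name])]
print(f"{len(failing)}/6 indicators failing:", failing)
```

Here five of the six indicators fail while the downlink CQI is fine, pointing at an uplink-side problem even though no individual counter proves it alone.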

Pulling these together at busy-hour granularity, per sector rather than per NodeB, gave a usable picture. Averaging across the NodeB or over 24 hours obscured these patterns, just as per-cell averaging had hidden them in voice analysis.
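The masking effect of coarse granularity is easy to demonstrate. A sketch with invented numbers: one sector spikes to 5 dB of noise rise in a single busy hour, yet the NodeB-wide 24-hour average stays far below any plausible alarm line.

```python
# Hypothetical hourly uplink noise rise (dB) for three sectors of one NodeB.
noise_rise = {
    "A": [1.0] * 24,   # data-heavy sector
    "B": [0.5] * 24,
    "C": [0.5] * 24,
}
noise_rise["A"][18] = 5.0  # assumed busy-hour spike on sector A

ALARM_DB = 3.0  # assumed alarm threshold

# NodeB-wide 24-hour average: the spike vanishes into 71 quiet samples.
all_samples = [v for series in noise_rise.values() for v in series]
nodeb_avg = sum(all_samples) / len(all_samples)

# Per-sector busy-hour view: the spike is obvious.
busy_hour_peaks = {s: max(series) for s, series in noise_rise.items()}

print(f"NodeB 24h average: {nodeb_avg:.2f} dB (below the {ALARM_DB} dB alarm)")
print("per-sector busy-hour peaks:", busy_hour_peaks)
```

A 5 dB busy-hour problem averages out to well under 1 dB at NodeB-day granularity, which is why the per-sector, busy-hour cut was the one that produced a usable picture.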

Data performance fails quietly and degrades well before it collapses. Unless OSS counters, interference trends, and usage patterns are analyzed together, the network can look fine while users grow increasingly frustrated. That realization pushed the work toward deeper, analytics-driven optimization. The tools at that point were basic: mostly SQL queries against OSS exports and manual counter correlation. But the discipline of looking across multiple data sources at once, rather than trusting a single dashboard KPI, started here.

WCDMA  ·  HSUPA  ·  RAN Optimization  ·  Packet Performance  ·  OSS Analytics  ·  Interference Management  ·  Telecommunications
