
Productivity Monitoring in Contact Centers — What to Watch

Vik Chadha · 14 min read

Productivity monitoring in a contact center is the practice of observing operational and agent-level data — in real time and historically — to identify what is working, what is not, and where to intervene. It is distinct from productivity tracking (the review cadence that structures when you look at data) and from workforce analytics (the analysis layer that answers broader questions about staffing, scheduling, and cost). Monitoring is the observation itself: what data to watch, what each signal means, and what action each signal should trigger.

The difference matters because most contact centers have monitoring tools (the ACD dashboard, the time tracking system, the WFM wallboard) but use them as passive displays rather than active management instruments. A wallboard showing that service level is at 62% is not monitoring — it is display. Monitoring means someone is watching that number, knows that 62% is below the 80/20 target, understands that it has been below target for 45 minutes, has diagnosed the likely cause (3 unplanned absences plus higher-than-forecast volume), and has initiated a response (voluntary overtime offer to the afternoon shift).
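The response logic in that example reduces to a simple rule: a service level breach matters once it persists. Below is a minimal sketch of such a check in Python, assuming interval-level service level readings arrive in chronological order; the `SLReading` shape, the 80% target, and the 45-minute escalation window are illustrative, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

SL_TARGET = 0.80                        # 80/20: 80% of calls answered within 20 seconds
ESCALATE_AFTER = timedelta(minutes=45)  # illustrative escalation window

@dataclass
class SLReading:
    at: datetime    # interval timestamp
    value: float    # service level for the interval, 0.0 to 1.0

def breach_duration(readings: list[SLReading]) -> timedelta:
    """How long service level has been continuously below target,
    counting back from the most recent reading."""
    below_since = None
    for r in readings:                        # chronological order
        if r.value < SL_TARGET:
            below_since = below_since or r.at # keep the earliest breach time
        else:
            below_since = None                # breach ended; reset
    if below_since is None:
        return timedelta(0)
    return readings[-1].at - below_since

def needs_intraday_response(readings: list[SLReading]) -> bool:
    # A 62% reading sitting on a wallboard is display; a 62% reading
    # that has persisted for 45 minutes should trigger a response.
    return breach_duration(readings) >= ESCALATE_AFTER
```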

The data sources

Productivity monitoring in a contact center pulls from multiple systems. Each system provides a different piece of the picture.

| Data source | What it provides | Productivity questions it answers |
| --- | --- | --- |
| ACD (Automatic Call Distributor) | Call volume by interval, queue times, abandon rates, calls per agent, AHT by agent and call type, service level by interval | Are agents handling the expected number of calls? Is AHT within target? Which intervals are understaffed? |
| Time tracking | Clock-in/out times, break durations, time by activity category (calls, training, admin, coaching) | Are agents working their scheduled hours? How much time is productive (on-phone) vs. off-phone? What is actual shrinkage? |
| WFM system | Schedule, forecast, staffing requirements, adherence | Are agents following the schedule? Is actual volume matching forecast? Is the operation staffed correctly for demand? |
| QA evaluations | Call quality scores by agent, rubric category scores, trend over time | Is the agent doing the work correctly, not just quickly? Are quality issues concentrated in specific agents, call types, or rubric categories? |
| CRM / ticketing | First call resolution, disposition codes, after-call work content, escalation frequency | Are agents resolving issues or passing them along? Is documentation adequate? Which call types generate the most rework? |

The integration requirement: These data sources are useful individually but powerful when connected. An agent with low AHT (fast) but low FCR (not resolving) looks productive on the ACD report but is generating callbacks — net negative productivity. Monitoring requires looking at multiple data sources together, not each in isolation. See workforce analytics for how to build the connected data layer.
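As a minimal sketch of that cross-source check, assume per-agent AHT (from the ACD) and FCR (from the CRM) have already been exported as plain dictionaries; the agent IDs, the AHT target, and the FCR floor below are all illustrative.

```python
# Hypothetical per-agent exports: ACD gives AHT in minutes, CRM gives FCR as a rate.
aht_by_agent = {"a101": 4.1, "a102": 6.3, "a103": 3.9}     # ACD export
fcr_by_agent = {"a101": 0.55, "a102": 0.78, "a103": 0.81}  # CRM export

AHT_TARGET = 5.8   # minutes
FCR_FLOOR = 0.70

def fast_but_not_resolving(aht: dict[str, float], fcr: dict[str, float]) -> list[str]:
    """Agents who look productive on the ACD report (low AHT) but are
    generating callbacks (low FCR): net negative productivity."""
    return [
        agent for agent in aht.keys() & fcr.keys()
        if aht[agent] < AHT_TARGET and fcr[agent] < FCR_FLOOR
    ]

print(fast_but_not_resolving(aht_by_agent, fcr_by_agent))  # ['a101']
```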

What to monitor: real-time signals

Real-time monitoring answers the question: "Is the operation running correctly right now, and if not, what needs to happen in the next 30 minutes?"

| Signal | What it tells you | Normal range | Action trigger |
| --- | --- | --- | --- |
| Service level (current interval) | Whether enough agents are available to answer calls within the target time | 80/20 or per your SLA | Below target for 2+ consecutive intervals → check staffing vs. requirement. Intraday response needed |
| Calls in queue | How many customers are waiting right now | Fewer than 5% of agents logged in (e.g., fewer than 5 calls in queue for 100 agents) | Queue exceeding threshold for more than 10 minutes → overflow action, break deferrals, or VOT request |
| Agents available | How many agents are in "available" state (ready to take the next call) | 10–15% of agents logged in | Below 5% → agents are fully loaded with no buffer; any additional call will queue. Above 25% → overstaffed for current volume |
| Longest wait time | How long the longest-waiting customer has been in queue | Under 60 seconds for an 80/20 operation | Over 120 seconds → immediate attention. Customers abandon at increasing rates past this point |
| Adherence (current) | How many agents are in the activity they should be in per the schedule | 90%+ of agents in correct state | Below 85% → multiple agents off-schedule. Check for extended breaks, late returns from lunch, agents in ACW longer than expected |
| AHT (rolling, current shift) | Whether the current shift's handle time is trending to target | Within 10% of target for the call mix | Rising AHT during the shift → investigate. Possible causes: new call type, system issue making lookups slow, agent mix (more new agents on this shift) |
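The action triggers in the table above reduce to threshold checks over a snapshot of the floor. Here is a minimal sketch; the `FloorSnapshot` shape is an assumption, the thresholds come straight from the table, and everything else is illustrative. Sustained-duration triggers (the 2-interval and 10-minute rules) would layer on the same persistence pattern as the service level example earlier.

```python
from dataclasses import dataclass

@dataclass
class FloorSnapshot:
    agents_logged_in: int
    agents_available: int
    calls_in_queue: int
    longest_wait_sec: int
    adherence_pct: float   # agents currently in the correct scheduled state

def realtime_alerts(s: FloorSnapshot) -> list[str]:
    alerts = []
    available_pct = s.agents_available / s.agents_logged_in * 100
    if available_pct < 5:
        alerts.append("No buffer: any additional call will queue")
    elif available_pct > 25:
        alerts.append("Likely overstaffed for current volume")
    if s.calls_in_queue > s.agents_logged_in * 0.05:     # 5%-of-agents threshold
        alerts.append("Queue above threshold: consider overflow, break deferrals, or VOT")
    if s.longest_wait_sec > 120:
        alerts.append("Longest wait over 120s: abandon risk rising")
    if s.adherence_pct < 85:
        alerts.append("Adherence below 85%: check extended breaks and long ACW")
    return alerts

# 100 agents logged in, 3 available, 9 in queue, 95s longest wait, 88% adherence:
print(realtime_alerts(FloorSnapshot(100, 3, 9, 95, 88.0)))
```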

Who watches real-time signals

| Role | What they monitor | Response authority |
| --- | --- | --- |
| Supervisor | Their team's adherence, agent states, individual AHT | Coaching conversations, break management, agent-level intervention |
| WFM analyst / intraday manager | Service level, queue depth, staffing vs. requirement, forecast accuracy | Intraday adjustments: skill changes, overtime offers, schedule adjustments, break rescheduling |
| Ops manager | Escalated situations: sustained SL miss, staffing crisis, system outage | Authorization for mandatory overtime, cross-account moves (BPO), incident response |

What to monitor: historical signals

Historical monitoring answers the question: "Is the operation getting more or less productive over time, and why?" These are reviewed on a daily, weekly, and monthly cadence.

Agent-level signals

| Signal | What it reveals | What it does not reveal |
| --- | --- | --- |
| AHT trend (4-week) | Whether the agent is getting faster, slower, or staying flat | Why. A rising AHT could be positive (agent is being more thorough, FCR is improving) or negative (agent is struggling with a new call type). Must be interpreted alongside FCR and QA |
| Calls per hour trend | Throughput — how many contacts the agent handles in a productive hour | Quality. An agent with rising calls/hour and falling QA scores is cutting corners to go faster |
| QA score trend | Whether call handling quality is improving, declining, or stable | Cause. A declining QA score may indicate a training gap, a new call type the agent has not been coached on, or burnout |
| Adherence trend | Whether the agent consistently follows the schedule | Intent. An agent with 88% adherence may have a childcare issue causing repeated late returns from break — a different situation than an agent who simply extends breaks |
| FCR | Whether the agent resolves issues on the first contact | Difficulty mix. An agent handling complex technical calls will have lower FCR than an agent handling address changes — raw FCR must be compared within the same call type |

The agent productivity profile: No single metric defines an agent's productivity. The combination creates a profile. For a framework that uses multiple metrics to categorize agent performance patterns (fast-thorough, fast-sloppy, slow-thorough, inconsistent, declining), see the agent productivity measurement guide.
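As a rough illustration of how the combination might drive those labels, here is a minimal sketch assuming AHT is expressed relative to target and QA as a score out of 100. The cutoffs are invented for illustration; the real framework lives in the measurement guide linked above.

```python
def agent_profile(aht_vs_target: float, qa_score: float, fcr: float) -> str:
    """Classify an agent from three signals together, never one alone.

    aht_vs_target: agent AHT / target AHT (1.0 = on target, <1.0 = faster)
    qa_score:      0-100 quality evaluation average
    fcr:           first-call-resolution rate, 0.0 to 1.0
    """
    fast = aht_vs_target < 0.95
    thorough = qa_score >= 85 and fcr >= 0.75
    if fast and thorough:
        return "fast-thorough"    # the profile to study and replicate
    if fast and not thorough:
        return "fast-sloppy"      # speed is coming from cut corners
    if not fast and thorough:
        return "slow-thorough"    # coach for efficiency; protect the quality
    return "needs-investigation"  # slow and low quality: training gap? call mix?

print(agent_profile(0.88, 91, 0.82))  # fast-thorough
print(agent_profile(0.85, 70, 0.58))  # fast-sloppy
```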

Team and operation-level signals

| Signal | What it reveals | Review frequency |
| --- | --- | --- |
| Service level trend (weekly) | Whether the operation is consistently meeting SLA or trending down | Weekly. A 4-week downward trend requires structural investigation — not just intraday fixes |
| Occupancy trend | Whether agents are busy enough (too low = overstaffed) or too busy (too high = burnout risk) | Weekly. Sustained occupancy above 88% degrades quality and accelerates attrition |
| Shrinkage, actual vs. planned | Whether the staffing model reflects reality | Monthly. If actual shrinkage exceeds planned by 3+ points, the staffing calculation is wrong |
| Forecast accuracy | Whether the volume forecast is producing usable staffing plans | Weekly. Consistent over- or under-forecasting means every schedule is wrong |
| Overtime as % of total hours | Whether overtime is occasional (acceptable) or structural (a staffing problem) | Weekly. Above 5% for 3+ consecutive weeks = structural overtime |
| Attrition rate by tenure | Whether the operation is losing agents faster than it can replace them, and where in the lifecycle | Monthly. Early attrition (0–90 days) above 50% indicates training or onboarding problems |
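The weekly and monthly thresholds in the table are easy to wire into an automated check. A minimal sketch, with the table's thresholds hard-coded and the input names invented for illustration:

```python
def team_health_flags(
    occupancy_pct: float,
    shrinkage_actual_pct: float,
    shrinkage_planned_pct: float,
    ot_pct_by_week: list[float],  # overtime as % of total hours, recent weeks
    early_attrition_pct: float,   # 0-90 day attrition
) -> list[str]:
    flags = []
    if occupancy_pct > 88:
        flags.append("Sustained occupancy above 88%: quality and attrition risk")
    if shrinkage_actual_pct - shrinkage_planned_pct >= 3:
        flags.append("Shrinkage 3+ points over plan: staffing calculation is wrong")
    if len(ot_pct_by_week) >= 3 and all(p > 5 for p in ot_pct_by_week[-3:]):
        flags.append("Overtime above 5% for 3+ consecutive weeks: structural")
    if early_attrition_pct > 50:
        flags.append("Early attrition above 50%: training or onboarding problem")
    return flags

print(team_health_flags(90.5, 34.0, 30.0, [6.2, 5.8, 7.1], 42.0))
```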

The monitoring-to-action chain

Monitoring has no value unless it leads to action. Each observation should follow a chain: signal → diagnosis → decision → action → verification.

Example 1: AHT spike

| Step | What happens |
| --- | --- |
| Signal | Real-time AHT for the current shift is 7.2 minutes vs. a 5.8-minute target — 24% above target |
| Diagnosis | Check by call type: billing calls are at normal AHT, but technical support calls spiked to 11 minutes (normally 8). Check for a system issue or new problem type. Find that a client pushed a software update that is generating a new error — no troubleshooting flowchart exists for it |
| Decision | Create an interim resolution script. Route the new issue type to senior agents until the script is validated |
| Action | Supervisor distributes the interim script. WFM adjusts the AHT assumption for the forecast. Senior agents handle the new call type |
| Verification | Monitor AHT for the next 2 hours. Technical support AHT drops from 11 to 9 minutes with the interim script. Update the troubleshooting flowchart formally within 48 hours |
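The diagnosis step above, breaking AHT out by call type instead of staring at the blended number, is worth scripting so it runs on demand. A minimal sketch, assuming call records have been pulled as (call type, handle minutes) pairs; the baselines and the 15% deviation cutoff are illustrative.

```python
from collections import defaultdict

# Hypothetical pull from the ACD: (call_type, handle_minutes)
calls = [("billing", 5.6), ("billing", 6.0), ("tech_support", 11.2),
         ("tech_support", 10.8), ("tech_support", 11.5)]

baseline_aht = {"billing": 5.8, "tech_support": 8.0}  # normal AHT per call type

def aht_by_type(calls):
    """Average handle time per call type."""
    totals, counts = defaultdict(float), defaultdict(int)
    for call_type, minutes in calls:
        totals[call_type] += minutes
        counts[call_type] += 1
    return {t: totals[t] / counts[t] for t in totals}

for call_type, aht in aht_by_type(calls).items():
    deviation = (aht - baseline_aht[call_type]) / baseline_aht[call_type]
    if deviation > 0.15:  # 15%+ over normal for that call type
        print(f"{call_type}: {aht:.1f} min vs {baseline_aht[call_type]} baseline -- investigate")
```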

Example 2: Adherence decline on a specific team

| Step | What happens |
| --- | --- |
| Signal | Team B's weekly adherence dropped from 92% to 86% over 3 weeks |
| Diagnosis | Check individual adherence: 3 of 12 agents are below 80%. Check the pattern: all 3 are returning late from lunch (12–15 minutes late, consistently). Check whether the schedule changed: the team's lunch break was moved to 11:00 AM (from 12:00 PM) in the latest schedule cycle to cover a mid-day volume peak |
| Decision | The early lunch time is the likely cause. Options: revert the lunch time, stagger lunches so some agents keep the later time, or coach the agents on the new expectation |
| Action | Supervisor has a team conversation about the schedule change, explains the reason, and asks for feedback. Two agents have a genuine constraint (school pickup at 11:45 requires the later lunch). Lunches are staggered: 6 agents at 11:00, 6 at 12:00 |
| Verification | Monitor adherence for 2 weeks. Team B returns to 91% |

Example 3: Declining FCR across the operation

| Step | What happens |
| --- | --- |
| Signal | FCR dropped from 74% to 68% over 6 weeks — a consistent downward trend across all teams |
| Diagnosis | Not agent-specific — all teams declined. Check by call type: FCR for "account inquiry" calls dropped from 82% to 65%. Check what changed: a new verification process was added 7 weeks ago that requires agents to send a verification email and wait for customer response before resolving. Calls that previously resolved in one contact now require a callback |
| Decision | The new verification process is the root cause. It was added for fraud prevention — removing it is not an option. But the process can be modified: agents can complete verification via SMS during the call instead of email after the call |
| Action | Process change: SMS verification replaces email verification for account inquiries. Training delivered on the new process |
| Verification | Monitor FCR for account inquiry calls over 4 weeks. FCR recovers from 65% to 78% (full recovery to the original 82% was not expected, since some calls still require a callback for complex verification) |

What monitoring does not tell you

Every monitoring signal has limits. Misinterpreting what a signal means — or treating it as a complete picture — leads to wrong actions.

| Signal | What it does not tell you |
| --- | --- |
| Low AHT | That the agent is productive. Low AHT with low FCR means the agent is ending calls quickly without resolving them — creating callbacks that double the work |
| High adherence | That the agent is working effectively during scheduled time. An agent with 98% adherence who spends calls reading a script without listening to the customer will have high adherence and low quality |
| High calls per hour | That the operation is productive. High calls per hour can indicate short calls, which can indicate agents are not fully resolving issues or are rushing through required steps |
| Low QA scores | That the agent is not trying. Low QA may indicate a training gap, an uncalibrated evaluator, a rubric that penalizes an effective but unconventional approach, or an agent who handles disproportionately difficult calls |
| High service level | That the operation is at peak efficiency. Consistently high service level (e.g., 95/20 against an 80/20 target) may indicate overstaffing — the operation is paying for agents who are idle |

Monitoring for remote and hybrid agents

Remote and hybrid agents require additional monitoring considerations because the supervisor cannot observe the agent directly.

| In-office monitoring | Remote equivalent |
| --- | --- |
| Supervisor sees who is at their desk | Time tracking shows login/logout times compared to schedule |
| Supervisor notices an agent on an extended break | Activity monitoring shows time away from work applications |
| Supervisor overhears a difficult call and intervenes | Real-time call monitoring or listen-in capability from the ACD |
| Supervisor sees an agent struggling with a system and helps | Screen-level activity data shows whether the agent is navigating systems efficiently |
| Supervisor checks the team wallboard in the office | Remote dashboard access with the same real-time data |
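The first row, comparing clock-in times against the schedule, is a direct computation once the time tracking export and the WFM schedule are joined. A minimal sketch; the record shapes, the dates, and the five-minute grace period are all illustrative.

```python
from datetime import datetime, timedelta

GRACE = timedelta(minutes=5)  # illustrative grace period

# Hypothetical joined records: (agent, scheduled_start, actual_login)
shifts = [
    ("a101", datetime(2024, 5, 6, 8, 0), datetime(2024, 5, 6, 8, 2)),
    ("a102", datetime(2024, 5, 6, 8, 0), datetime(2024, 5, 6, 8, 19)),
]

for agent, scheduled, actual in shifts:
    late_by = actual - scheduled
    if late_by > GRACE:
        print(f"{agent} logged in {late_by.seconds // 60} min late vs schedule")
```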

What to avoid: Monitoring remote agents differently than in-office agents on metrics that should be the same. AHT, FCR, QA, adherence, and calls per hour should be measured identically regardless of location. If remote agents have different productivity outcomes, the investigation should focus on whether remote work eligibility criteria are correct — not on adding surveillance.

BPO monitoring additions

BPO operations monitor everything above plus client-specific dimensions.

| Additional signal | What it reveals |
| --- | --- |
| Per-client service level | Whether each client's SLA is being met. Aggregate SL can mask a client that is consistently below target |
| Per-client AHT and FCR | Performance differences across accounts. An agent may be productive on Client A but struggling on Client B — a training gap on Client B's product, not a general performance issue |
| Billable utilization | What percentage of paid hours are spent on client-billable activity. Non-billable time (training, bench, internal meetings) that is not tracked leads to overstated utilization |
| Client hour allocation accuracy | Whether cross-trained agents' hours are attributed to the correct client account. Misallocation distorts client-level productivity and cost metrics |
| Contract profitability indicators | Labor cost per client vs. revenue per client. If overtime on one account is eroding margin, monitoring catches it before the quarterly review |
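Billable utilization from the table above is a simple ratio, but the denominator is where operations go wrong: every paid hour belongs in it, tracked or not. A minimal sketch, assuming hours have already been categorized per agent; the category names are illustrative.

```python
def billable_utilization(hours_by_activity: dict[str, float]) -> float:
    """Billable utilization = client-billable hours / total paid hours.

    Every paid hour belongs in the denominator; leaving training, bench,
    or internal meetings untracked silently inflates the result.
    """
    billable = {"client_calls", "client_acw", "client_email"}  # illustrative
    total = sum(hours_by_activity.values())
    on_client = sum(h for a, h in hours_by_activity.items() if a in billable)
    return on_client / total if total else 0.0

week = {"client_calls": 28.0, "client_acw": 4.0, "training": 3.0,
        "bench": 3.0, "internal_meetings": 2.0}
print(f"{billable_utilization(week):.0%}")  # 80%
```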

Building the monitoring practice

Monitoring is not a tool purchase — it is an operational discipline. The tools (ACD, time tracking, WFM, QA system) provide the data. The practice is who watches what, when, and what authority they have to act.

| Element | What to define |
| --- | --- |
| Roles and ownership | Who monitors real-time signals (typically WFM or a lead supervisor)? Who reviews historical trends (ops manager)? Who acts on each type of signal? |
| Escalation thresholds | At what point does a signal require escalation? Service level below 70% for 30 minutes → escalate to ops manager. Adherence below 80% for a team → supervisor investigates same-day |
| Review cadence | Real-time signals: continuously during shift. Historical signals: daily check, weekly review, monthly analysis |
| Documentation | When an action is taken in response to a monitoring signal, document what was observed, what was done, and what happened. This record builds institutional knowledge and prevents the same problem from being re-diagnosed from scratch each time it recurs |
| Feedback to agents | Agents should see their own metrics — AHT, adherence, QA scores, calls per hour. Monitoring that is visible only to management creates distrust. Monitoring that is visible to the agent enables self-correction |
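The documentation element, together with the signal → diagnosis → decision → action → verification chain from earlier, lends itself to explicit structure. A minimal sketch of a monitoring-action log entry; the field names are illustrative, and the example values paraphrase the AHT spike walkthrough above.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MonitoringAction:
    """One pass through the chain: signal -> diagnosis -> decision ->
    action -> verification. Stored so the same problem is not
    re-diagnosed from scratch the next time it recurs."""
    observed_at: datetime
    signal: str             # e.g. "Shift AHT 7.2 min vs 5.8 target (+24%)"
    diagnosis: str
    decision: str
    action: str
    owner: str
    verification: str = ""  # filled in after the follow-up window closes

log: list[MonitoringAction] = []
log.append(MonitoringAction(
    observed_at=datetime(2024, 5, 6, 14, 30),  # illustrative timestamp
    signal="Shift AHT 7.2 min vs 5.8 target (+24%)",
    diagnosis="Tech support AHT spiked to 11 min after a client software update",
    decision="Interim resolution script; senior agents take the new issue type",
    action="Script distributed; WFM adjusted the forecast AHT assumption",
    owner="intraday manager",
))
```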