Overlay

New & Improved Performance Tracking.

Contact center team leaders were flying blind. The dashboard existed — it just didn't tell them what they actually needed to know, in a way they could act on. The redesign wasn't about adding more data. It was about surfacing the right data, in the right hierarchy, at the moment a decision needed to be made.

A decision-support problem, not a data problem

A CCaaS platform — cloud-based contact center software enabling omnichannel customer conversations — had an analytics problem. Team leaders responsible for monitoring agent performance and allocating resources to high-demand queues were working with a dashboard that was dense, poorly structured, and not built around how they actually made decisions.

I took full design ownership from discovery through to a shipped analytics system: design decisions, visual direction, component definitions, and a purpose-built analytics design system that was later integrated into the global platform system.

This wasn't a data design problem. It was a decision-support problem. The question wasn't "how do we display these metrics better?" — it was "what does a team leader need to see to make a good call in the next 30 seconds?"

What the UX audit showed

Two product discovery workshops with developers, stakeholders, and designers produced concrete deliverables upfront — personas, initial prototypes, and a UX audit of the existing design. We were diagnosing a real system, not hypothesising about an abstract problem.

Three things were clearly broken

  • Wrong metrics surfaced. The dashboard was showing data — but not the data team leaders needed to manage their teams. The metrics that drove day-to-day decisions weren't prominently available; the ones that were prominent weren't actionable.
  • No way to act from the dashboard. Team leaders could see a queue was overwhelmed but couldn't reassign agents from within the same view. Insight and action were disconnected.
  • Poor hierarchy — couldn't scan quickly. Contact center environments are fast-moving. The existing design required too much cognitive work to extract a status read in the time available.

Two tracks, deliberately

  • Track 1 — Existing clients. Re-engaged current platform users for both interviews and usability testing. Deep domain knowledge, real operational context, and grounded feedback on the existing tool's frustrations.
  • Track 2 — usertesting.com. Pre-screened for contact center size, metric familiarity, and reporting practices. Unbiased participants with no prior platform exposure — honest signal on whether the design stood on its own.

Why both mattered: Existing clients gave us depth and domain expertise. Unbiased participants gave us an honest read on whether the design worked without institutional knowledge propping it up. Using only existing clients risked validating familiarity rather than usability. Every metric we decided to surface was validated through both tracks — not assumed, not stakeholder-driven. If team leaders couldn't identify why a metric mattered or what they'd do with it, it didn't make the cut.

The process

Discovery

Set the foundation — personas, UX audit, initial prototypes, and alignment on what success looked like before any screens were designed in earnest.

Wireframes

Started with structure before visual decisions, then built interactive prototypes for stakeholder feedback. The writing team refined copy for clarity and scannability — in a dashboard context, label language is as much a UX decision as layout.

Moderated testing

Conducted on usertesting.com, covering three goals: navigational functionality, hierarchy clarity, and metric comprehension. Task-based scenarios asked participants to make realistic decisions from the dashboard. Five participants per round, with synthesis done directly in the platform using its built-in voice and written annotation on recorded sessions.

Iteration

Prototypes were refined between rounds based on findings, not shipped after a single pass. The loop continued until task success and comprehension hit the bar we'd set. Stakeholder alignment was maintained through a combination of good working relationships and evidence-backed decisions — no significant conflict, because the direction was grounded in data they'd been part of gathering.

Built around the decision, not the data

An analytics dashboard designed around how team leaders make decisions under pressure — not around the data architecture of the underlying system.

For team leaders

A clear, scannable view of team performance with the right metrics surfaced at the right level of hierarchy. Queue status visible at a glance. Agent assignment actionable from within the same view — no context switching between insight and action.
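
A sketch of what "insight and action in the same view" can mean structurally: the hypothetical queue-card model below couples status metrics with the reassignment action itself. All names here (QueueCard, reassignAgents, and so on) are illustrative assumptions, not the platform's actual data contract.

```typescript
// Hypothetical queue card model: status metrics and the action a team
// leader can take live in the same object, so the dashboard can render
// insight and action together. All names are illustrative.
type QueueStatus = "healthy" | "busy" | "overwhelmed";

interface QueueCard {
  queueId: string;
  name: string;
  status: QueueStatus;     // the at-a-glance read
  waitingContacts: number; // metrics that drive the status
  longestWaitSeconds: number;
  agentsAvailable: number;
  // Reassignment is invoked from the card itself, not from a separate
  // admin screen: no context switch between seeing and acting.
  reassignAgents: (agentIds: string[]) => Promise<void>;
}

// The 30-second decision path, end to end, inside one view.
async function handleOverload(card: QueueCard, idleAgentIds: string[]): Promise<void> {
  if (card.status === "overwhelmed" && idleAgentIds.length > 0) {
    await card.reassignAgents(idleAgentIds);
  }
}
```

The design point the sketch encodes: the action is a property of the thing being observed, so acting never requires leaving the view where the problem was spotted.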

For agents

An overview of their own performance — giving them visibility into the same data their team leader sees, supporting accountability and self-management.

The analytics design system — built specifically for this product, later integrated into the global platform design system. Data visualisation components, metric cards, status indicators, hierarchy patterns — all defined, documented, and scalable across the platform.
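
For a sense of what "defined, documented, and scalable" can look like at the component level, here is a minimal sketch of the kind of contracts such a system might expose. Every name and prop below is an assumption for illustration, not the shipped API.

```typescript
// Illustrative component contracts for an analytics design system.
// Names and props are assumptions, not the shipped API.
type Trend = "up" | "down" | "flat";
type Severity = "ok" | "warning" | "critical";

// Metric card: one validated metric plus the context needed to act on it.
interface MetricCardProps {
  label: string;                            // copy owned by the writing team
  value: number;
  unit?: string;                            // e.g. "%", "s", "calls"
  trend?: Trend;                            // direction, not just a snapshot
  severity?: Severity;                      // drives the status indicator
  density: "glance" | "summary" | "detail"; // hierarchy level it renders at
  onDrillDown?: () => void;                 // path from metric to detail view
}

// Status indicator: the fastest possible read, reusable across surfaces.
interface StatusIndicatorProps {
  severity: Severity;
  pulse?: boolean; // draw attention only when action is likely needed
}
```

Treating density and severity as first-class props is one way a component library can bake the scan-speed hierarchy into every surface that reuses it, rather than leaving it to per-screen layout decisions.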

Impact

Qualitative wins, honestly presented

Quantitative usability metrics aren't available to share here. What the testing process validated qualitatively: team leaders could identify queue status and agent utilisation at a glance — the scan-speed problem the audit had flagged was resolved. Participants understood what each metric meant and, critically, what action it implied. Agent assignment from within the dashboard worked as intended.

The analytics design system shipped and was integrated into the global platform system, making it reusable across other product surfaces.

Rigour as a leadership act

Full ownership of a B2B enterprise analytics product meant:

  • Structuring a two-track research approach that produced unbiased signal alongside domain-expert depth — and making the case for both when one would have been easier.
  • Validating every metric decision through user testing rather than stakeholder assumption.
  • Building an analytics design system that didn't just serve this product but integrated into the global system — thinking beyond the immediate scope.

The research rigour on this project is what I'm most proud of. It would have been faster to trust the existing audit and ship. Running two independent participant tracks, synthesising findings, and iterating until the comprehension bar was hit took more time — and produced a significantly better outcome.