The Coaching Lifecycle

The Case Study

The context

Coaching was meant to be the engine of sales readiness at Bigtincan — the place where reps practiced skills, received feedback, and demonstrated readiness for real customer conversations.

It sat at the intersection of enablement, performance, and leadership visibility. When Coaching worked, it drove adoption and confidence. When it didn’t, it created confusion, hesitation, and heavy reliance on Customer Success to fill the gaps.

How it started

When Bigtincan acquired Brainshark in 2021, we inherited a Coaching experience built on early-2000s enterprise patterns. The workflow relied on rigid, state-driven paths that predated modern SaaS expectations.

Over time, incremental enhancements were layered on — new features, shortcuts, and patches — but the underlying structure never evolved. By the time this work began, Coaching had become fragile, opaque, and increasingly difficult to reason about.

The future pressure

By late 2025, as Vector Capital took Bigtincan private and the Showpad integration began, Coaching needed to function as a coherent, platform-level system.

While a full backend rebuild wasn’t possible, this moment created the opportunity for targeted backend changes in service of clearer structure — allowing us to move beyond surface-level UX fixes and finally define a more predictable lifecycle.

The tension

The inherited Brainshark architecture hard-coded two primary paths, submission and feedback, and couldn’t be re-architected end-to-end; only limited, targeted backend changes became possible later. We still needed to deliver incremental value, support new capabilities, and signal platform maturity.

The challenge wasn’t identifying what was broken.
It was improving clarity and usability inside a system that had never been designed as a lifecycle.

The problem

Opening an activity dropped users straight into either the submission flow or the feedback flow, with no expectation-setting, no orientation, and no sense of progression. Recording lived on a separate page, disconnected from the rest of the experience, while core context — instructions, criteria, and participant selection — was buried inside flows rather than surfaced upfront.

Iteration technically existed, but it wasn’t visible: takes were hidden, and non-submitted attempts received no feedback. Reviewers were thrust into a single rep’s submission with little context, managers relied on a separate product (Scorecards) for limited visibility, and CS teams spent hours explaining how the system worked.

Fundamental issue

Coaching had no structure. It wasn’t one experience — it was multiple mini-flows stitched together. That’s what we needed to fix.

Fragmented pages, unpredictable paths, and no sense of “what comes next.”

Understanding the system

Working closely with CS and Engineering, we saw that these weren’t isolated UX issues — they were symptoms of a system without an organizing model.

The workflow behaved as rigid legacy paths with scattered context and unclear transitions. Without a predictable sequence, reps couldn’t tell what came next, reviewers had to reconstruct intent manually, and managers lacked a reliable picture of progress.

A pattern began to emerge: Coaching needed to be understandable as a sequence — expectations, attempts, and review — not a set of disconnected screens.

The constraints

  • Legacy backend that couldn’t be fully re-architected, limiting workflow states to incremental additions (sketched after this list)

  • High-stakes submission model with no true drafts

  • Multi-role complexity across participants, reviewers, and managers

  • Phased engineering capacity requiring incremental progress
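To make the first constraint concrete, here is a minimal sketch assuming a simple status enum; every name is hypothetical rather than Bigtincan’s actual schema. Legacy states could not be renamed or removed, only extended, so new lifecycle states had to be layered alongside them.

```typescript
// Illustrative only: names and states are assumptions, not the real schema.
// The point is the constraint: legacy states could not be restructured,
// only extended with additive states and transitions.

// Legacy, hard-coded states.
type LegacyState = "NotStarted" | "Submitted" | "Reviewed";

// States added incrementally, alongside the legacy ones.
type AddedState = "Practicing" | "ReadyForReview";

type ActivityState = LegacyState | AddedState;

// New transitions still had to route through the existing states.
const transitions: Record<ActivityState, ActivityState[]> = {
  NotStarted: ["Practicing"],
  Practicing: ["Practicing", "ReadyForReview"], // multiple takes
  ReadyForReview: ["Submitted"],
  Submitted: ["Reviewed"],
  Reviewed: ["Practicing"], // iterate after feedback
};
```

In practice, this meant every addition still had to resolve into the hard-coded submission and feedback paths described above.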

What I owned

I led design direction from 2022 to 2025, from stabilizing the inherited Brainshark experience to shaping the later unified Coaching designs.

Strategy & Direction

  • Defined clearer structure across the Coaching workflow with Product and CS

  • Sequenced improvements to deliver value within backend constraints

  • Established consistent role-aware patterns across participant, reviewer, and manager experiences

  • Ran regular CS reviews that shaped key decisions, including removing a “quick submit” shortcut that would have created dead ends

Design & Delivery

  • Designed a unified Activity Page to reduce fragmented entry points

  • Reworked Instructions, Practice, Review, and Feedback into a more connected flow

  • Improved dashboards to better reflect real user behavior

  • Partnered closely with Engineering to ensure feasibility at each phase

How it came together

Rather than arriving fully formed, the structure emerged through the work.

Phase 1 and Phase 2 focused on stabilizing and clarifying the legacy flows — surfacing context, reducing friction, introducing early practice patterns, and adding Roleplay AI — even though the underlying structure couldn’t yet change.

Instead of pursuing a risky backend rebuild, we expressed structure through design: reshaping surfaces, states, and transitions so the workflow felt more predictable within legacy constraints.

As friction decreased, a clearer sequence began to surface:

Understand expectations → attempt → review → iterate

By Phase 3, we were able to define that sequence explicitly. It became the organizing principle for what needed to be visible upfront, what belonged together, and how different roles should experience the same activity.

With targeted backend changes, Coaching shifted from rigid, state-driven flows into a system people could actually reason about.
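As an illustration of that organizing principle (the types and field names here are mine, not the production model), the sequence can be sketched as one shared lifecycle position that every role reads from:

```typescript
// A hedged sketch, not the actual data model: every role derives its
// "what comes next" from the same lifecycle position on the same activity.

type LifecycleStep = "expectations" | "attempt" | "review" | "iterate";
type Role = "participant" | "reviewer" | "manager";

interface Activity {
  step: LifecycleStep;
  instructions: string; // surfaced upfront, not buried in a flow
  takeCount: number;    // attempts are visible, not hidden
  submitted: boolean;
}

function nextAction(activity: Activity, role: Role): string {
  if (role === "reviewer") {
    return activity.step === "review" ? "Score the submission" : "Nothing to review yet";
  }
  if (role === "manager") {
    return activity.submitted ? "Check readiness" : "Monitor progress";
  }
  switch (activity.step) {
    case "expectations": return "Read instructions and criteria";
    case "attempt":      return "Record a take";
    case "review":       return "Wait for feedback";
    case "iterate":      return "Apply feedback and re-attempt";
  }
}
```

Deriving each role’s next step from the same position is what let participants, reviewers, and managers experience the same activity coherently.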

A unified flow with predictable steps for participants and reviewers.

The final results

Unified Activity Page

A single, stable surface where the coaching lifecycle became clear

Instructions, criteria, takes, submissions, reviewers, and feedback lived in one predictable place, giving all roles a shared point of reference. Reps could understand what was expected next without guessing or jumping between flows.

What this enabled:
Clear navigation, fewer accidental submissions, and context that stayed intact across the activity

One surface for the lifecycle

instructions, takes, reviews, and feedback finally lived together.

Your submissions

iteration became visible and comparable.

Role clarity in action

reviewers instantly saw who was stuck, who submitted, and who needed nudging.

Dashboards that matched real behavior

Different roles, different needs — one coherent framework

Participants
To-dos, pending reviews, and completed activities were consolidated into one dashboard, making next steps explicit

Reviewers
Pending reviews, criteria, takes, and context were aligned in a single, consistent pattern, reducing context-hunting and review errors

Managers
Progress and readiness could be monitored directly within Coaching, instead of through Scorecards and fragmented metadata

What this enabled:
Clearer ownership, reduced manual tracking, and more reliable visibility across roles

Participant Dashboard

to-dos, pending reviews, and completed work merged into one predictable flow.

Reviewer Dashboard

video, transcript, criteria, and scoring aligned into a single reviewing surface.

Manager Dashboard

cross-team readiness, bottlenecks, and progress finally became visible.

Recording & takes redesigned around iteration

Practice became intentional, not fragile

Recording was redesigned as a focused, full-page experience with visible takes, save/delete controls, and instant AI feedback where technically feasible. Iteration was supported explicitly before submission, rather than hidden inside the flow.

What this enabled:
Higher-quality takes, lower cognitive load, and more consistent review context
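A minimal sketch of that separation, with hypothetical names and a deliberately simplified session model: saving a take and submitting one are distinct operations, which is what made iteration safe by default.

```typescript
// Hypothetical sketch of the take-before-submit model described above;
// the interfaces and function names are illustrative assumptions.

interface Take {
  id: string;
  recordedAt: Date;
  transcript?: string; // attached once processing completes
  aiFeedback?: string; // instant feedback, where technically feasible
}

interface PracticeSession {
  takes: Take[];            // every take stays visible and comparable
  submittedTakeId?: string; // unset until the rep explicitly submits
}

// Saving never submits: reps can iterate without risk.
function saveTake(session: PracticeSession, take: Take): void {
  session.takes.push(take);
}

// Submitting is a separate, explicit choice over a saved take.
function submitTake(session: PracticeSession, takeId: string): void {
  if (!session.takes.some(t => t.id === takeId)) {
    throw new Error("Only a saved take can be submitted");
  }
  session.submittedTakeId = takeId;
}
```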

Practice became safe — reps iterated without fear of submitting the wrong take

Choice before saving — a lightweight checkpoint that reduced anxiety and improved quality

Transcript, video, and AI feedback combined into one learning loop, so reps improved faster with clearer context.

Reviewer scoring

structured criteria anchored directly to transcript and video made evaluations faster and more consistent, significantly reducing reviewer variance.

Submission switching

Iteration made visible

Reps and reviewers could compare submissions side-by-side, making progress visible and review context explicit instead of hidden.

What this enabled:
Clearer progression, safer iteration, and more consistent reviews

Iteration became visible and comparable — a major shift from the opaque, “submit once” legacy model

Phase 1 — Stabilization & Practice Validation

2022–2023

Cleaner flows, transcript editing within feedback, reduced UI drift, and early practice support within legacy limits.

Outcome

CS reported reps practicing roughly 2–3× more before submitting.

Phase 2 — Roleplay AI

2023–2024

Scenario-based practice with AI scoring and inline feedback, strengthening Coaching’s differentiation.

Outcome

Positioned Coaching as an AI-forward differentiator during merger planning.

Phase 3 — Unified Lifecycle

2025

Unified Activity Page, clearer role-aware dashboards, and more explicit workflow states.

Status at transition

Designs were built and validated with CS; rollout timing shifted due to platform integration priorities.

Delivered impact — Phases 1 & 2

Behavioral changes (validated by the CS team)

  • Improved clarity and reliability within legacy flows

  • Reduced UI inconsistency and accidental errors

  • Stronger engagement with practice through Roleplay AI

  • Clear validation that users wanted safer iteration and a more predictable flow

Strategic outcomes

  • Created the foundation for AI-supported coaching

  • Positioned Coaching as a differentiator during acquisition planning

Validated impact — Phase 3

  • Unified Activity Page resolved fragmented context and unpredictable navigation in CS validation

  • Role-aware dashboards closed long-standing visibility gaps within Coaching

  • Iteration became explicit through visible takes and submission switching

  • Targeted backend changes supported clearer lifecycle states

Internal user feedback

“I finally understand where I am in coaching — it’s not a mystery anymore.” — Sales Rep
“This cut my review time in half.” — Enterprise CS Manager
“My reps finally feel safe to practice.” — Sales Director

Strategic

  • Established a durable lifecycle model that positioned Coaching as platform-ready

  • Created a coherent foundation for AI-supported coaching

  • Produced reusable lifecycle patterns that informed adjacent platform work

Where it stalled

The design worked — the system wasn’t ready.

  • True practice-first cycles required backend rebuilding

  • Legacy architecture limited how far fragmentation could be removed

  • Multi-role complexity stretched existing system boundaries

  • Acquisition timelines delayed Phase 3 rollout

The real lesson

Clarity starts with structure.
The biggest unlock wasn’t the UI — it was defining the lifecycle model itself. Once that clicked, decisions became obvious.

What I'd do differently

This work reinforced how much clarity depends on structure. Many usability issues weren’t about missing features, but about missing sequence.

In hindsight, earlier alignment with Engineering on backend evolution would have allowed the system to support the workflow more directly. Even so, establishing a clearer structure created a foundation that extended beyond Coaching into other parts of the platform.