The Intelligence Layer

The Case Study

Context

By 2023, AI was table stakes in enterprise SaaS — but it created value only when it was accurate, explainable, and trustworthy.

At Bigtincan, AI directly impacted:

  • Content discovery

  • Sales productivity

  • Platform trust

When it worked, adoption accelerated.
When it failed, trust collapsed.

Starting point

SearchAI launched as a semantic layer alongside keyword search.

It proved the tech.
It exposed the problem:

AI was outpacing the platform’s ability to support it coherently.

Developer's Proof of Concept

Pressure

With GenieAI on the roadmap and acquisitions accelerating, AI could no longer ship as standalone features.

It had to become a shared platform capability.

Core tension

AI raised expectations — but the platform wasn’t built for intelligence as a system.

  • Conflicting search models

  • Fragmented retrieval logic

  • Product-specific AI behavior

  • Minimal analytics & experimentation

  • UX involved too late

The challenge wasn’t adding AI.
It was stopping intelligence from fragmenting the experience.

The problem

AI-powered search and traditional search could return different answers on the same page.

Across the platform:

  • SearchAI and keyword search disagreed

  • GenieAI relied on SearchAI, but UI patterns weren’t consistent

  • Some areas had no real search — only filters or manual navigation

Users couldn’t tell:

  • Which answer to trust

  • Where AI applied

  • Why results differed

  • Whether content was complete or scoped

Where trust broke

A sales rep searches for a pitch deck before a customer call.

On the same screen:

  • SearchAI highlights one document as the “best result”

  • Traditional search ranks a different file first

Both look valid.
Neither explains why.

The rep hesitates — not because AI failed, but because the system presented two conflicting truths.

The root issue

There was no shared intelligence layer.

AI existed as features, not as a system.

What we learned

These weren’t isolated UX issues. They were symptoms of:

  • Fragmented retrieval models

  • Inconsistent discovery entry points

  • No unified search surface

  • AI logic disconnected from user mental models

The pattern was clear:
AI needed to help users find, understand, and act on content — not live as scattered tools across the platform.

The constraints

  • AI logic defined before design involvement

  • Limited analytics and experimentation tooling

  • Acquisition and merger priorities shifting the roadmap

  • Multiple products with inconsistent search foundations

Impact was qualitative and directional, validated through Customer Success feedback, leadership alignment, and recurring usability patterns — not clean quantitative attribution.

What I owned

I focused on experience definition, interaction patterns, and trust models, working within engineering-defined AI constraints.

SearchAI

  • UX and interaction design within dev-defined logic

  • Visual and UI refinement to reduce confusion and duplication

  • Desktop and mobile experiences

GenieAI

  • Research and early concepting

  • User mental models and trust patterns

  • UI foundations for Knowledge Scopes

  • Handoff prior to full execution

Global AI Search (POC)

  • Design-led research and problem framing

  • Platform-level recommendations

  • Exploratory wireframes to align cross-functional teams

The intelligence layer didn’t arrive fully formed.
It emerged in phases, each exposing new constraints and sharper principles.

Phase 1 — SearchAI Stabilization

2023

Intervention

Clarified AI vs keyword results within legacy constraints

Outcome

Usability improved, core issues remained

Phase 2 — GenieAI & Trust Patterns

2023–2024

Intervention

Expanded AI into an assistant with Knowledge Scopes

Outcome

Enabled safe enterprise adoption

Scope as a Trust Boundary

Enterprise adoption required control.
Knowledge Scopes functioned as a trust mechanism:

  • Users understood what AI could and couldn’t see

  • Admins reduced risk without disabling value

Select a knowledge scope

Admin creates a knowledge scope
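The scoping idea above can be sketched in code. This is a minimal, hypothetical illustration — the class and function names are invented for this sketch, not Bigtincan's actual implementation — showing the key property: the scope check runs before retrieval, so the assistant can state exactly what it can and cannot see.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeScope:
    """An admin-defined boundary on what the assistant may retrieve."""
    name: str
    allowed_sources: set = field(default_factory=set)

@dataclass
class Document:
    title: str
    source: str

def retrieve(query: str, corpus: list, scope: KnowledgeScope) -> list:
    """Return only documents inside the active scope.

    Scope filtering happens before matching, so out-of-scope content
    never influences results — reducing risk without disabling value.
    """
    visible = [d for d in corpus if d.source in scope.allowed_sources]
    return [d for d in visible if query.lower() in d.title.lower()]

corpus = [
    Document("Q3 Pitch Deck", "sales-content"),
    Document("Internal Pricing Memo", "finance"),
]
sales_scope = KnowledgeScope("Sales", allowed_sources={"sales-content"})
results = retrieve("pitch", corpus, sales_scope)
# Only the sales-content document is visible within this scope.
```

The design choice mirrors the UX principle: users see a scope name, admins control its sources, and the filter is a hard boundary rather than a ranking penalty.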

Phase 3 — Global Search POC

2025

Intervention

Reframed AI as shared discovery layer across products

Status

Direction-setting. Not shipped due to acquisition priorities.

AI as an Overlay, Not a Destination

AI lived alongside existing UI, not in a separate experience:

  • Inline summaries

  • Contextual suggestions

  • Side-panel reasoning

  • Action prompts aligned to workflows

Why it mattered:
Lower cognitive load, higher trust, no forced behavior change.

POC concept showing AI layered into search results instead of pulling users into a standalone experience.

Prompt-based refinement explored how AI could narrow search intent without obscuring results.

Separate Retrieval from Interpretation

  • Search (retrieval): find the right things

  • AI (interpretation): summarize, explain, suggest

  • Combined moments: accelerate decisions

Why it mattered:
AI felt understandable, not magical or random.

Concept model separating search (retrieval) from AI interpretation to make system behavior easier to understand.
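The separation can be made concrete with a toy sketch. Everything here is illustrative — the naive keyword match and the function names are assumptions for the example, not the shipped system — but it shows the boundary: retrieval returns items, interpretation only talks about them.

```python
# Hypothetical sketch: retrieval finds things, interpretation explains them.

def retrieve(query: str, documents: list[str]) -> list[str]:
    """Retrieval layer: return candidate items (naive keyword match here)."""
    return [d for d in documents if query.lower() in d.lower()]

def interpret(query: str, hits: list[str]) -> str:
    """Interpretation layer: summarize what retrieval returned.

    Kept separate so users can see *what* was found independently
    of *what the AI says about it* — behavior stays explainable.
    """
    if not hits:
        return f"No documents matched '{query}'."
    return f"{len(hits)} document(s) matched '{query}'; top match: {hits[0]}."

docs = ["Enterprise pitch deck", "Pricing one-pager", "Pitch follow-up email"]
hits = retrieve("pitch", docs)
summary = interpret("pitch", hits)
```

Because interpretation consumes retrieval's output rather than replacing it, the two layers can disagree visibly — which is exactly the failure mode the unified layer was meant to surface instead of hide.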

Optimize for “Next Steps,” Not Clever Answers

High-value AI moments enabled action:

  • Summaries without opening files

  • Cross-item suggested actions

  • Creation flows triggered in context

Why it mattered:
AI supported momentum, not distraction.

POC exploration of AI-driven summaries and actions triggered directly from search results.

Behavioral & UX

Reduced confusion between AI and non-AI results

Clearer mental models for AI outputs

Safer adoption through scoped knowledge

Strategic

Positioned AI as a platform capability, not a feature

Informed future search and consolidation discussions

Reduced long-term UX debt risk

Validated

Customer Success confirmed improved trust and comprehension

Leadership aligned on the need for unified search

Global Search POC referenced in platform-level decisions

Where it stalled

Platform and merger priorities paused execution

Limited analytics constrained validation

AI systems evolved faster than UX maturity

The real lesson

AI doesn’t fail because models are weak.
It fails when experiences fragment.

Trust comes from consistency, retrieval quality, and clear mental models — not novelty.

What I'd do differently

Push for earlier design involvement in AI architecture, especially around retrieval and ranking.
Several UX issues were locked in by technical decisions made before design input.

Even so, defining platform-level principles such as separating retrieval from interpretation, treating AI as an overlay, and optimizing for next steps created a durable foundation beyond any single feature.