The Intelligence Layer

The Case Study

The context

By 2023, AI had rapidly become table stakes in enterprise SaaS. Customers expected semantic understanding, instant answers, and intelligent assistance — but only if the results were accurate, explainable, and trustworthy.

At Bigtincan, AI sat at a critical intersection:

  • Content discovery

  • Sales productivity

  • Platform credibility

When it worked, it accelerated adoption.
When it didn’t, it eroded trust faster than almost any other feature.

How it started

The first AI capability to ship was SearchAI, a semantic search experience layered alongside the existing keyword search as a proof of concept.

Technically, it worked.
Experientially, it exposed something deeper.

AI was being introduced faster than the platform could support it coherently.

The future pressure

As GenieAI entered the roadmap and acquisition planning accelerated, AI could no longer exist as isolated features.

It needed to operate as a shared platform capability — consistently, safely, and across products.

The tension

AI raised expectations — but the platform wasn’t designed for intelligence as a system.

  • Multiple search paradigms across products

  • Fragmented retrieval logic

  • Product-specific AI implementations

  • Limited analytics and experimentation tooling

  • AI logic often defined before UX involvement

The challenge wasn’t adding AI features.
It was preventing intelligence from fragmenting the experience further.

The problem

SearchAI and traditional search could surface different answers on the same page.

GenieAI relied on SearchAI for retrieval, but its UI patterns weren’t consistently exposed across products. Other areas lacked true search entirely, relying on inline filtering or manual navigation.

Users couldn’t tell:

  • Which answer to trust

  • Where AI applied

  • Why results differed

  • Whether content was complete or scoped

A concrete moment where trust broke

A sales rep searches for a pitch deck before a customer call.

On the same screen:

  • SearchAI surfaces a summarized document as the “best result”

  • Traditional search ranks a different file first

Both appear valid.
Neither explains why.

The rep hesitates — not because AI failed, but because the system offered two conflicting truths.
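
Under the hood, the two systems weren't measuring the same thing. Keyword search scores term overlap; semantic search scores meaning. The toy TypeScript sketch below (invented documents, scores, and vectors, not Bigtincan's actual ranking logic) shows how both can be "right" by their own measure while ranking the same files differently:

    // Toy illustration of why two search systems can disagree on the same
    // library. Documents, scores, and vectors are invented for this sketch.

    interface Doc {
      title: string;
      text: string;
      embedding: number[]; // pretend this came from an embedding model
    }

    const docs: Doc[] = [
      { title: "Pitch Deck v7 (2021)", text: "pitch deck pitch deck template", embedding: [0.2, 0.9] },
      { title: "Q3 Customer Story", text: "how we won the renewal", embedding: [0.9, 0.3] },
    ];

    const queryTerms = ["pitch", "deck"];
    const queryEmbedding = [0.85, 0.35]; // semantically closest to the customer story

    // Keyword relevance: naive term-frequency overlap.
    const keywordScore = (d: Doc) =>
      queryTerms.reduce((sum, term) => sum + d.text.split(term).length - 1, 0);

    // Semantic relevance: cosine similarity between embeddings.
    const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);
    const norm = (a: number[]) => Math.sqrt(dot(a, a));
    const semanticScore = (d: Doc) =>
      dot(queryEmbedding, d.embedding) / (norm(queryEmbedding) * norm(d.embedding));

    // The stale deck wins on keywords; the customer story wins on meaning.
    // Put both on one screen with no explanation, and the user sees two "best" answers.
    for (const d of docs) {
      console.log(d.title, "keyword:", keywordScore(d), "semantic:", semanticScore(d).toFixed(2));
    }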

Fundamental issue

There was no shared intelligence layer.
AI existed as features — not as a system.

Understanding the system

Through close collaboration with Product, Customer Success, and Engineering, it became clear these weren’t isolated UX issues.

They were symptoms of:

  • Fragmented retrieval models

  • Inconsistent entry points into discovery

  • No unified search surface across products

  • AI logic divorced from user mental models

A pattern emerged:

AI needed to help users find the right content, understand it in context, and act — not exist as separate tools scattered across the platform.

The constraints

  • AI logic often defined before design involvement

  • Limited access to usage analytics and experimentation tools

  • Acquisition and merger priorities shifting roadmap focus

  • Multiple products with inconsistent search foundations

Impact for this work was therefore qualitative and directional, validated through Customer Success feedback, leadership alignment, and usability patterns rather than clean quantitative attribution.

What I owned

My role focused on experience definition, interaction patterns, and trust models, working within engineering-defined AI constraints.

SearchAI

  • UX and interaction design within dev-defined logic

  • Visual and UI refinement to reduce confusion and duplication

  • Desktop and mobile experiences

GenieAI

  • Research and early concepting

  • Mental models and trust patterns

  • UI foundations for Knowledge Scopes

  • Handoff prior to full execution

Global AI Search POC

  • Design-led research and framing

  • Platform-level recommendations

  • Exploratory wireframes to align teams

How it came together

Rather than arriving as a single vision, the intelligence layer emerged over time.

Each phase surfaced new constraints — and new clarity.

From this, a set of consistent design principles began to take shape.

Phase 1 — SearchAI Stabilization

2023

Focused on clarifying AI vs keyword results, reducing duplication, and improving comprehension within an engineering-defined system.

Outcome

Improved usability and reduced confusion — but structural issues remained.

Phase 2 — GenieAI & Trust Patterns

2023-2024

Expanded AI beyond search into an assistant experience.
Knowledge Scopes became critical for enterprise trust and safe adoption.

Outcome

Customers could adopt AI while maintaining control over scope and accuracy.

Phase 3 — Global Search POC

2025

Reframed AI as a shared discovery foundation across products.

Defined principles for:

  • Unified search entry points

  • Assistant behaviors

  • Platform-wide intelligence

Status

Exploratory and direction-setting. Not shipped due to acquisition priorities.

The solution & hypothesis

AI as an Overlay, Not a Destination

Rather than pulling users into a separate AI experience, AI was designed to live alongside existing UI:

  • Inline summaries

  • Contextual suggestions

  • Side-panel reasoning

  • Action prompts that respected existing workflows

Why it mattered:
Users stayed oriented, cognitive load dropped, and trust increased — without forcing behavior change.
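
One way to make the principle concrete: model every AI moment as an attachment to a surface the user is already on, never as a destination of its own. The TypeScript sketch below uses hypothetical names (SurfaceContext, AIAssist, assistsFor) to illustrate the pattern; it is not Bigtincan's actual component model:

    // Hypothetical model of "AI as an overlay": every AI moment is typed as an
    // attachment to a surface the user is already on, never a standalone route.

    type SurfaceContext =
      | { kind: "document"; documentId: string }
      | { kind: "search-results"; query: string }
      | { kind: "collection"; collectionId: string };

    type AIAssist =
      | { kind: "inline-summary"; text: string }
      | { kind: "contextual-suggestion"; label: string; targetId: string }
      | { kind: "side-panel-reasoning"; explanation: string; sources: string[] }
      | { kind: "action-prompt"; label: string; run: () => void };

    // The contract: assists are derived from the current surface, so users
    // never have to leave it to benefit from AI.
    function assistsFor(surface: SurfaceContext): AIAssist[] {
      switch (surface.kind) {
        case "document": // summary rendered in place, above the fold
          return [{ kind: "inline-summary", text: "Key points from this deck…" }];
        case "search-results": // reasoning lives beside results, not instead of them
          return [
            { kind: "side-panel-reasoning", explanation: "Why these results match", sources: [] },
          ];
        case "collection": // action prompts respect the workflow the user is in
          return [{ kind: "action-prompt", label: "Draft a share message", run: () => {} }];
      }
    }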

Separate Retrieval from Interpretation

A deliberate distinction emerged:

  • Search (retrieval): finding the right things

  • AI (interpretation): summarizing, explaining, suggesting

  • Combined moments: accelerating decisions

This prevented AI from feeling random or magical.

Why it mattered:
Users understood where answers came from — and why they could trust them.
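
In code terms, the distinction is a contract: retrieval returns ranked, attributable items, and interpretation operates only over what retrieval returned, so every summary can point back to its sources. A minimal hypothetical sketch (stubbed retrieve and interpret functions, not the actual SearchAI or GenieAI interfaces):

    // Hypothetical contract for the retrieval/interpretation split. Retrieval
    // answers "which things?"; interpretation answers "what do they mean?";
    // combined moments compose the two without blurring them.

    interface RetrievedItem {
      id: string;
      title: string;
      score: number;   // retrieval confidence, explainable to the user
      scopeId: string; // the knowledge scope it was drawn from
    }

    interface Interpretation {
      summary: string;
      suggestions: string[];
      sources: RetrievedItem[]; // every claim traces back to retrieved items
    }

    // Retrieval: finding the right things.
    async function retrieve(query: string): Promise<RetrievedItem[]> {
      return []; // real system: semantic + keyword retrieval, merged and ranked
    }

    // Interpretation: summarizing and suggesting, but only over retrieved
    // items, so "where did this come from?" always has an answer.
    async function interpret(items: RetrievedItem[]): Promise<Interpretation> {
      return { summary: "", suggestions: [], sources: items };
    }

    // A combined moment: accelerate the decision without hiding the sources.
    async function answer(query: string): Promise<Interpretation> {
      return interpret(await retrieve(query));
    }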

Optimize for “Next Steps,” Not Clever Answers

The most valuable AI moments weren’t verbose responses — they were clear next actions:

  • Summaries without opening files

  • Suggested actions across one or multiple items

  • Creation flows triggered directly from context

Why it mattered:
AI supported momentum and decision-making instead of becoming another content surface.
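
In data terms, optimizing for next steps means the response shape carries actions, not just prose. A small sketch with illustrative names (NextStep, NextStepResponse):

    // Hypothetical shape of a "next steps" oriented response: a short summary
    // plus executable actions, rather than a wall of generated text.

    interface NextStep {
      label: string;            // e.g. "Share these decks with the account team"
      appliesTo: string[];      // one or many item ids
      run: () => Promise<void>; // executes inside the existing workflow
    }

    interface NextStepResponse {
      summary: string;   // enough to decide without opening files
      steps: NextStep[]; // the real payload: what to do next
    }

    // Illustrative value only: content and ids are invented.
    const example: NextStepResponse = {
      summary: "Three decks match this account; v7 is the most recent and approved.",
      steps: [
        {
          label: "Share v7 with the account team",
          appliesTo: ["deck-v7"],
          run: async () => {
            /* open the existing share flow in place */
          },
        },
      ],
    };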

Scope as a Trust Boundary

Enterprise adoption required control.

Knowledge Scopes became a trust mechanism, not just a configuration:

  • Users understood what AI could and couldn’t see

  • Admins reduced risk without disabling value
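
A minimal sketch of the idea (hypothetical types; the production model is richer): the scope filters content before retrieval, so out-of-scope material is simply invisible to the AI, and the same object can state that boundary to the user.

    // Hypothetical sketch of a knowledge scope as a hard retrieval boundary.

    interface KnowledgeScope {
      id: string;
      name: string;                 // e.g. "Approved Sales Collateral"
      includedSources: Set<string>; // content sources the AI may read
    }

    interface ContentItem {
      id: string;
      sourceId: string;
      title: string;
    }

    // Enforced before retrieval, not after generation: the model never sees
    // out-of-scope content, so there is nothing to leak or misquote from.
    function visibleTo(scope: KnowledgeScope, items: ContentItem[]): ContentItem[] {
      return items.filter((item) => scope.includedSources.has(item.sourceId));
    }

    // The same object that constrains the AI can explain the boundary to users:
    function scopeDisclosure(scope: KnowledgeScope): string {
      return `Answers are drawn only from "${scope.name}".`;
    }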

Behavioral & UX impact

  • Reduced confusion around AI vs non-AI results

  • Clearer mental models for AI outputs

  • Safer adoption through scoped knowledge

Strategic impact

  • Positioned AI as a platform capability, not a feature

  • Informed future consolidation discussions

  • Reduced long-term UX debt risk

Validated impact

  • Customer Success feedback confirmed improved trust and comprehension

  • Leadership alignment on the need for unified search

  • POC used as reference in platform-level discussions

Where it stalled

  • Platform and merger priorities paused execution

  • Limited analytics constrained validation

  • AI systems evolved faster than UX maturity

The real lesson

AI doesn’t fail because models are weak.
It fails when experiences are fragmented.

Trust comes from consistency, retrieval quality, and clear mental models — not novelty.

What I'd do differently

I would push harder for earlier design involvement in AI architecture decisions — particularly around retrieval and ranking logic. Several UX issues we later addressed were already “locked in” by technical choices made before design input.

Even so, establishing platform-level principles — separating retrieval from interpretation, treating AI as an overlay, and optimizing for next steps — created a durable foundation that extended beyond any single feature.