
AI Knowledge Assistant for Enterprise Support Platform
Client: Enterprise SaaS Company (Confidential)
Role: Product UX/UI Designer
Sector: Enterprise Knowledge Platform
Year: 2026
Integrating AI into enterprise software requires more than adding a chat interface. It means designing structured interaction layers that preserve trust, traceability, and system integrity within complex SaaS environments.
Context
This project focused on embedding an AI assistant into an existing enterprise support portal used for ticketing and knowledge management.
The platform already included dashboards, structured knowledge content, and escalation flows within a multi-tenant SaaS environment.
The challenge was not to add AI as a feature, but to integrate it into the system without compromising structure, predictability, or trust.
Brief & Objectives
The goal was to reduce unnecessary ticket creation by strengthening self-service through AI-assisted knowledge retrieval.
The experience needed to remain:
Structured and traceable
Consistent across tenant configurations
Aligned with backend and data constraints
The focus was clarity and system coherence rather than conversational novelty.
My Role
I led the interaction and UI design of the AI assistant across:
Conversation model definition
Escalation logic
Knowledge Base refinement
Visual hierarchy and layout exploration
Failure states and edge case mapping
Engineering alignment
The work required balancing conversational flexibility with enterprise-grade structure.
Designing AI as Part of the System
Embedding instead of overlaying
The assistant was integrated into the primary support entry points (homepage search, knowledge consumption, and escalation triggers), reinforcing existing workflows rather than introducing a parallel chat layer.
A structured interaction model
The experience evolves in stages:
Homepage: single query + compact response
Escalation: transition to a dedicated workspace
Workspace: two-pane layout (conversation + structured resources)
This structure improves scannability, traceability, and validation of AI output. The intent was not to simulate human conversation, but to support structured reasoning.
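The staged model above can be sketched as a small state machine. The stage names and the transition trigger (a follow-up exchange) are illustrative assumptions for this sketch, not the product's actual implementation:

```typescript
// Sketch of the staged interaction model: homepage → escalation → workspace.
// Stage names and the "next exchange advances the stage" rule are assumptions.
type Stage = "homepage" | "escalation" | "workspace";

interface InteractionState {
  stage: Stage;
  exchanges: number; // completed query/response pairs so far
}

// Advance the state when the user continues the conversation.
function next(state: InteractionState): InteractionState {
  switch (state.stage) {
    case "homepage":
      // A follow-up question moves past the single compact response.
      return { stage: "escalation", exchanges: state.exchanges + 1 };
    case "escalation":
      // The transition lands in the dedicated two-pane workspace.
      return { stage: "workspace", exchanges: state.exchanges + 1 };
    case "workspace":
      // Further exchanges stay within the workspace.
      return { ...state, exchanges: state.exchanges + 1 };
  }
}
```

Modeling the stages explicitly, rather than as one open-ended chat, is what keeps each step scannable and traceable.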
Making AI legible
Enterprise users need clarity about where answers come from.
To support this:
Responses are separated from structured resource references
Related sources appear in a dedicated panel
Citations link directly to articles
Content can be previewed side-by-side
Transparency was treated as a primary design principle.
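The separation of answer from sources implies a response shape like the following. All field names here are hypothetical, chosen only to illustrate the two-pane split:

```typescript
// Illustrative data shape: the answer and its source references are kept
// apart so the UI can render them in separate panes. Field names are assumed.
interface Citation {
  articleId: string; // links directly to a Knowledge Base article
  title: string;
  snippet: string;   // shown in the side-by-side preview
}

interface AssistantResponse {
  answer: string;        // rendered in the conversation pane
  citations: Citation[]; // rendered in the dedicated resources panel
}

// Split a response for the two-pane layout: conversation vs. resources.
function panes(r: AssistantResponse): { conversation: string; resources: string[] } {
  return {
    conversation: r.answer,
    resources: r.citations.map(c => `${c.title} (#${c.articleId})`),
  };
}
```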
Rule-based escalation
Escalation was designed to be measurable and predictable.
After a defined number of exchanges, the assistant prompts for resolution confirmation. If unresolved, ticket creation is suggested through a clear transition.
Within the Knowledge Base flow, existing feedback mechanisms were leveraged before escalation.
This reduces premature ticket creation while respecting real user behavior patterns.
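The escalation rule above can be expressed as a small decision function. The threshold of three exchanges is an assumed value for this sketch; the actual limit was a product decision:

```typescript
// Sketch of rule-based escalation. MAX_EXCHANGES is an assumption.
const MAX_EXCHANGES = 3;

type EscalationAction =
  | "continue"            // keep answering within the conversation
  | "confirm_resolution"  // ask the user whether the issue is resolved
  | "suggest_ticket";     // offer a clear transition to ticket creation

// `resolvedByUser` is null until the user has answered the confirmation prompt.
function escalationAction(
  exchanges: number,
  resolvedByUser: boolean | null
): EscalationAction {
  if (exchanges < MAX_EXCHANGES) return "continue";
  if (resolvedByUser === null) return "confirm_resolution";
  return resolvedByUser ? "continue" : "suggest_ticket";
}
```

Because the rule is deterministic, the escalation point is measurable: the same number of exchanges always triggers the same prompt.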
Designing within constraints
The system operates in a multi-tenant environment with variable branding and data completeness.
Design decisions accounted for:
Customer configuration variability
API and data limitations
Explicit capability constraints (e.g., non-multimodal AI)
UI patterns avoided implying unsupported functionality.
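One way to enforce this is to gate UI affordances on explicit tenant capability flags, so the interface can never imply unsupported functionality. The configuration shape below is hypothetical:

```typescript
// Hypothetical per-tenant configuration flags; names are assumptions.
interface TenantConfig {
  brandName: string;
  aiEnabled: boolean;
  multimodal: boolean;            // explicitly false in this deployment
  knowledgeBaseComplete: boolean; // data completeness varies per tenant
}

// Render the attachment/upload affordance only when the capability exists.
function showAttachmentButton(cfg: TenantConfig): boolean {
  return cfg.aiEnabled && cfg.multimodal;
}
```

Deriving visibility from capability flags rather than hiding features ad hoc keeps behavior consistent across tenant configurations.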
Deliverables
Homepage AI integration explorations (conservative vs AI-first hierarchy)
Conversation-to-workspace transition model
Two-pane workspace layout with source traceability
Knowledge Base navigation refinement
Escalation UX patterns
Failure-state and system-availability guidelines
Design System Extension
The project introduced modular AI patterns integrated into the existing portal system:
AI search container variants
Persistent AI entry component
Two-pane workspace structure
Citation and source highlighting patterns
Resolution confirmation components
Clear loading and limitation states
All additions were designed for incremental implementation aligned with backend readiness.
Results & Learnings
Key learnings:
In enterprise environments, structure builds trust.
Visible sources matter more than conversational tone.
Escalation performs best when rule-based and measurable.
Multi-tenant systems require explicit handling of variability.
Clear communication of capability limits reinforces credibility.
*Client and product details have been anonymized due to confidentiality.