Liya Chat

Enterprise conversational AI

Deploy intelligent, context-aware chat agents on your own knowledge base. Bring together agents, retrieval, persistent memory, and guardrails — all in a single conversational interface.

BETA PERFORMANCE METRICS
< 1.2s
Avg response latency
94%
Answer grounding rate
10M+
Conversations in beta
0.1%
Escalation rate
Integrates with
Confluence · Notion · SharePoint · S3 · Slack · Teams
How it works

From question to grounded answer in milliseconds

Liya Chat chains intent routing, retrieval, generation, and guardrails into a single coherent pipeline — invisible to the end user.

01

Connect Your Knowledge Base

Point Liya Chat at your existing knowledge sources — internal docs, Confluence, Notion, SharePoint, S3, or a custom vector store. Ingestion and indexing happen automatically.

Supports structured and unstructured data. Incremental sync keeps the index fresh as your docs change. No manual chunking or embedding pipelines to manage.
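The incremental-sync idea can be sketched in a few lines: hash each document and re-embed only what changed. This is a toy illustration of the concept, not Liya Chat's actual indexer — the `embed()` stand-in and the data shapes are assumptions.

```python
import hashlib

def incremental_sync(docs, index):
    """Re-index only documents whose content hash changed.

    `docs` maps doc_id -> text; `index` maps doc_id -> (hash, embedding).
    The embed() placeholder stands in for whatever embedding model a
    real pipeline calls.
    """
    def embed(text):
        # Placeholder: a real deployment calls an embedding model here.
        return [float(len(text))]

    changed = []
    for doc_id, text in docs.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        if index.get(doc_id, (None, None))[0] != digest:
            index[doc_id] = (digest, embed(text))
            changed.append(doc_id)
    return changed
```

On the first pass everything is indexed; on later passes only edited documents trigger re-embedding, which is what keeps the index fresh without full re-ingestion.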

02

Intent Classification

Every message is routed through an intent classifier before any generation happens. This determines which domain pack, retrieval strategy, and agent path to activate.

Intent routing is configurable per deployment. You can define custom intents, fallback policies, and escalation paths for out-of-scope queries.
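A configurable router with a fallback policy might look like the following minimal sketch. The intent names, keyword triggers, and handler paths here are hypothetical examples, not shipped defaults.

```python
def route_intent(message, routes, fallback="escalate-to-human"):
    """Pick the first route whose trigger keywords appear in the message.

    `routes` is an ordered mapping of intent name -> (keywords, handler path).
    Out-of-scope messages fall through to the fallback policy, mirroring
    the escalation paths described above.
    """
    text = message.lower()
    for intent, (keywords, path) in routes.items():
        if any(kw in text for kw in keywords):
            return intent, path
    return "out-of-scope", fallback

# Hypothetical per-deployment route table.
routes = {
    "answer-question": (["policy", "how do i", "what is"], "rag-pipeline"),
    "file-ticket":     (["broken", "bug", "not working"], "ticket-agent"),
}
```

A production classifier is model-based rather than keyword-based, but the contract is the same: every message resolves to exactly one intent and one downstream path before generation starts.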

03

Retrieval-Augmented Generation

Relevant context is fetched from your knowledge base using semantic search, filtered by user role and document permissions. The agent only sees what the user is allowed to see.

Hybrid search (dense + sparse) for precision. Re-ranking ensures the most relevant chunks surface. Retrieved sources are cited in responses.
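The dense + sparse blend can be illustrated with a toy scorer: cosine similarity on vectors plus token overlap, mixed by a weight. Real deployments use model embeddings and a lexical scorer like BM25; the vectors, tokens, and `alpha` weighting below are illustrative assumptions.

```python
import math

def hybrid_search(query_vec, query_tokens, corpus, top_k=3, alpha=0.5):
    """Score chunks with a weighted blend of dense and sparse relevance.

    `corpus` is a list of (chunk_id, vector, tokens). `alpha` trades off
    the dense (cosine) score against the sparse (token-overlap) score.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def overlap(q, t):
        return len(q & t) / len(q) if q else 0.0

    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * overlap(query_tokens, set(tokens)), cid)
        for cid, vec, tokens in corpus
    ]
    scored.sort(reverse=True)  # re-rank: most relevant chunks surface first
    return [cid for _, cid in scored[:top_k]]
```

The `top_k` cutoff here corresponds to the `"top_k": 5` knob in the API walkthrough below: only the highest-ranked chunks reach the agent's context.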

04

Agent Response with Memory

The agent generates a grounded response using the retrieved context. Multi-turn memory persists across sessions — users pick up where they left off, agents remember past context.

Memory is scoped per user, per workspace, or per conversation — configurable at deployment time. No context window leakage between users.
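The scoping guarantee comes down to how the memory store is keyed. A minimal sketch, assuming an in-memory store (real deployments persist this server-side):

```python
class ScopedMemory:
    """Conversation memory keyed by an explicit scope, so context can
    never leak across users. Scope granularity (user / workspace /
    conversation) mirrors the deployment-time setting described above."""

    def __init__(self, scope="user"):
        self.scope = scope
        self._store = {}

    def _key(self, user_id, workspace_id, session_id):
        # The scope decides which identifiers form the lookup key.
        return {
            "user": (user_id,),
            "workspace": (workspace_id,),
            "conversation": (user_id, session_id),
        }[self.scope]

    def remember(self, user_id, workspace_id, session_id, turn):
        key = self._key(user_id, workspace_id, session_id)
        self._store.setdefault(key, []).append(turn)

    def recall(self, user_id, workspace_id, session_id):
        return self._store.get(self._key(user_id, workspace_id, session_id), [])
```

With `scope="user"`, a returning user sees their history from any session; a different user with the same session pattern sees nothing, which is the "no context window leakage" property.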

05

Guardrails & Policy Enforcement

Every response passes through layered policy checks: grounding verification (no hallucinations beyond retrieved context), content filtering, and role-based output masking.

Guardrails run asynchronously with near-zero added latency. All decisions are logged with a tamper-evident audit trail.
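Grounding verification can be sketched as a check that the response stays within the retrieved context. The word-overlap heuristic below is a crude stand-in for the model-based verification the pipeline actually runs; the threshold is an assumed parameter.

```python
def grounding_check(response, retrieved_chunks, threshold=0.5):
    """Flag a response as ungrounded when too few of its content words
    appear anywhere in the retrieved context."""
    context_words = set()
    for chunk in retrieved_chunks:
        context_words.update(chunk.lower().split())
    # Ignore short function words; score the rest against the context.
    words = [w for w in response.lower().split() if len(w) > 3]
    if not words:
        return True
    grounded = sum(1 for w in words if w in context_words)
    return grounded / len(words) >= threshold
```

Paired with the `"action": "block_if_ungrounded"` policy from the API walkthrough below, a failing check would block the response instead of returning it.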

API Walkthrough

The full pipeline in one call.

Send a user message with session context. Retrieval, generation, grounding verification, and memory update all happen automatically.

request.sh
curl -X POST https://api.liyaengine.com/v1/run \
  -H "x-api-key: $LIYA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "pack": "enterprise-chat",
    "intent": "answer-question",
    "input": {
      "message": "What is our parental leave policy for contractors?",
      "session_id": "sess_u82kd1",
      "user": {
        "id": "usr_9f3c",
        "role": "contractor",
        "permissions": ["hr-policy-read"]
      }
    },
    "retrieval": {
      "sources": ["hr-handbook", "contractor-guide"],
      "top_k": 5
    },
    "guardrails": {
      "grounding": { "action": "block_if_ungrounded" },
      "content": { "policy": "professional" }
    }
  }'
response.json
{
  "response": {
    "content": "Based on the Contractor Guide (updated March 2025), contractors on contracts of 12+ months are eligible for 4 weeks of parental leave...",
    "intent": "answer-question",
    "confidence": 0.96,
    "sources": [
      { "doc": "contractor-guide", "section": "Leave Entitlements", "relevance": 0.92 }
    ],
    "metadata": {
      "guardrails_passed": true,
      "grounding_verified": true,
      "flags": []
    }
  },
  "session": {
    "id": "sess_u82kd1",
    "turns": 3,
    "memory_updated": true
  },
  "execution": {
    "steps": 3,
    "latency_ms": 980,
    "retrieval_ms": 145
  }
}
Deployment

Deploy anywhere

Embedded Widget
Drop a script tag into any web app. The chat widget connects to your Liya Chat deployment and inherits your brand config.
Slack / Teams Integration
Deploy Liya Chat as a bot in your Slack workspace or Microsoft Teams environment. Works with existing identity and permission models.
API-First
Build your own frontend. Liya Chat exposes a streaming REST API — SSE for real-time output, webhooks for async agent runs.
White-Label
Deploy Liya Chat under your own brand with custom domain, theme, and identity. Available on Enterprise.
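For the API-first path, streamed output arrives as Server-Sent Events. The sketch below assembles tokens from an SSE stream; the `{"delta": ...}` event shape and the `[DONE]` terminator are assumptions for illustration, not documented wire format.

```python
import json

def parse_sse(stream_lines):
    """Assemble streamed tokens from Server-Sent Events lines.

    Assumes each event is a `data:` line carrying JSON like
    {"delta": "..."} and that the stream ends with `data: [DONE]` --
    check the docs for the actual event schema.
    """
    tokens = []
    for line in stream_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, blank keep-alives, other fields
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        tokens.append(json.loads(payload)["delta"])
    return "".join(tokens)
```

In practice the lines would come from an HTTP response body held open by the server; the parsing logic is the same.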
Liya Engine components
Agent Mode
Multi-turn conversational agent
Retrieval
Hybrid semantic + keyword search
Memory
Persistent cross-session context
Intent Routing
Configurable intent classifier
Domain Packs
Any pack — HR, Legal, Finance, Healthcare
Guardrails
Grounding verification, content filter, RBAC
Beta — limited spots

Ready to deploy enterprise chat?

Liya Chat is in beta with select enterprise design partners. Apply to join and get white-glove onboarding.

Apply for beta · Read the docs