Conversational AI
<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
{{BloomIntro}}
Conversational AI and chatbots are AI systems designed to engage in natural language dialogue with humans – answering questions, completing tasks, providing information, and maintaining coherent multi-turn conversations. From simple rule-based FAQ bots to sophisticated LLM-powered assistants that can code, plan, research, and reason, conversational AI spans a wide spectrum. Modern conversational AI powers customer service agents, personal assistants (Siri, Alexa, Google Assistant), enterprise knowledge bases, and research tools, handling billions of interactions daily.
</div>
__TOC__
<div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Remembering</span> ==
* '''Chatbot''' – A software application designed to simulate conversation with human users, especially over the internet.
* '''Conversational AI''' – AI systems capable of understanding and generating natural language in interactive dialogue contexts.
* '''Turn''' – One exchange in a conversation: one message from the user and one response from the system.
* '''Context window''' – The amount of conversation history the model can process when generating a response.
* '''Intent recognition''' – Identifying the user's goal or purpose from their message ("I want to book a flight" → intent: book_flight).
* '''Entity extraction''' – Identifying and extracting key information from user input (dates, locations, names, numbers).
* '''Slot filling''' – Collecting all required pieces of information (slots) needed to complete a task (destination, date, passenger count for a booking).
* '''Dialogue state tracking''' – Maintaining a representation of what has been established in the conversation so far.
* '''NLU (Natural Language Understanding)''' – The component that interprets user input: intent + entities.
* '''NLG (Natural Language Generation)''' – The component that generates the system's response.
* '''Dialogue policy''' – The decision about what action to take given the current dialogue state.
* '''Retrieval-augmented chatbot''' – A chatbot that retrieves relevant documents or knowledge base entries before generating responses.
* '''Fallback''' – A response generated when the system cannot confidently handle the user's input.
* '''Grounding''' – Connecting chatbot outputs to verified facts, documents, or knowledge bases to reduce hallucination.
* '''RLHF (Reinforcement Learning from Human Feedback)''' – A training approach used to align LLM chatbots with human preferences (used in ChatGPT).
</div>
<div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Understanding</span> ==
Conversational AI has evolved through three generations:

'''Rule-based bots''': Decision trees and pattern matching (ELIZA, 1966; early customer service bots). Predictable and interpretable, but brittle: they fail on any unanticipated input. Still widely used for structured, high-volume, simple tasks.

'''Intent-based systems''' (Rasa, Dialogflow): Train NLU models to recognize intents and extract entities from user input; a dialogue manager selects the appropriate response template or action based on the intent. More flexible than rules, but still requires exhaustive intent definition and breaks down on complex multi-step conversations.

'''LLM-based conversational AI''' (ChatGPT, Claude): Large language models generate responses contextually from the full conversation history. No explicit intent definition is needed – the model understands arbitrary natural language. Dramatically more capable for complex, open-ended conversations, but prone to hallucination, harder to control, and expensive at scale.

'''The key components of production conversational AI''':
* '''NLU''': What does the user want? (intent, entities)
* '''Dialogue management''': What should the system do? (retrieve information, call an API, ask for clarification)
* '''Response generation''': How should the system say it? (template, retrieval, generation)
* '''Memory''': What do we know about this user and conversation? (session state, user profile)
* '''Integration''': What external systems does it connect to? (databases, APIs, CRMs)

'''Grounding and RAG''': The most critical improvement for production LLM chatbots is retrieval augmentation – anchoring responses in verified documents rather than generating from parametric memory. This dramatically reduces hallucination and enables factual accuracy for domain-specific bots.
</div>
<div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Applying</span> ==
'''Building a RAG-powered customer service chatbot:'''
<syntaxhighlight lang="python">
from openai import OpenAI
from sentence_transformers import SentenceTransformer
import faiss

client = OpenAI()
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Build knowledge base index from FAQ/documents
docs = [
    "Shipping takes 3-5 business days for standard delivery.",
    "Returns are accepted within 30 days of purchase with receipt.",
    "Our customer service hours are 9am-6pm EST, Monday-Friday.",
    # ... more documents
]
doc_embeddings = embedder.encode(docs)
index = faiss.IndexFlatL2(doc_embeddings.shape[1])
index.add(doc_embeddings.astype('float32'))

def retrieve_context(query: str, top_k: int = 3) -> str:
    q_emb = embedder.encode([query]).astype('float32')
    _, ids = index.search(q_emb, top_k)
    return "\n\n".join([docs[i] for i in ids[0]])

def chat(conversation_history: list, user_message: str) -> str:
    # Retrieve relevant context
    context = retrieve_context(user_message)
    # Build conversation with system prompt + retrieved context
    messages = [
        {"role": "system", "content": f"""You are a helpful customer service assistant.
Answer questions based ONLY on the following context. If the answer isn't in the context,
say "I don't have that information - please contact support@company.com."

Context:
{context}"""}
    ] + conversation_history + [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0.1
    )
    return response.choices[0].message.content

# Multi-turn conversation loop
history = []
while True:
    user_input = input("You: ")
    if user_input.lower() in ['quit', 'exit']:
        break
    response = chat(history, user_input)
    history.extend([
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": response}
    ])
    print(f"Bot: {response}")
</syntaxhighlight>
; Chatbot technology stack selection
: '''Simple FAQ, high volume''' → rule-based / intent-based (Rasa, Dialogflow, Amazon Lex)
: '''Complex tasks, enterprise''' → LLM + RAG + tool use (LangChain, LlamaIndex)
: '''Voice interface''' → ASR (Whisper) → LLM → TTS (ElevenLabs)
: '''Regulated domain''' → intent-based with human escalation; strict output guardrails
: '''Open-domain assistant''' → GPT-4o, Claude, Gemini via API
</div>
<div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Analyzing</span> ==
{| class="wikitable"
|+ Conversational AI Approach Comparison
! Approach !! Flexibility !! Hallucination Risk !! Control !! Cost
|-
| Rule-based || Very low || None || Very high || Very low
|-
| Intent-based (Rasa/Dialogflow) || Medium || Low || High || Low
|-
| LLM (raw) || Very high || High || Low || High
|-
| LLM + RAG || High || Low-medium || Medium || Medium-high
|-
| LLM + tools + RAG || Very high || Low || Medium || High
|}
'''Failure modes''':
* '''Hallucination''' – LLMs generate plausible but false information with confidence.
* '''Context window overflow''' – in long conversations, older context is lost.
* '''Prompt injection''' – users craft inputs to override system instructions.
* '''Escalation failure''' – the bot doesn't recognize when a conversation needs human handoff.
* '''Sycophancy''' – the model agrees with incorrect user assertions rather than correcting them.
</div>
<div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Evaluating</span> ==
Chatbot evaluation:
# '''Task completion rate''': does the bot achieve the user's goal?
# '''Hallucination rate''': sample 200 conversations, manually verify factual claims.
# '''Escalation appropriateness''': does the bot know when to hand off to a human?
# '''User satisfaction (CSAT)''': post-conversation surveys.
# '''Response latency''': p50/p95 time-to-first-token.
# '''Safety''': red-teaming for jailbreaks, harmful content generation, and inappropriate advice.
Expert practitioners monitor live conversations with random sampling and use LLM-as-judge for automated quality scoring at scale.
</div>
<div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Creating</span> ==
Designing a production conversational AI system:
# Define scope: what can the bot do? What must it escalate?
# Build knowledge base: curate, chunk, embed, and index all relevant documents.
# System prompt: define persona, capabilities, constraints, escalation triggers.
# RAG pipeline: retrieve top-5 chunks on each turn; include in context.
# Guardrails: input validation (detect abuse, PII), output filtering (harmful content, confidential data).
# Human escalation: trigger on low-confidence signals, explicit requests, negative sentiment.
# Feedback loop: review escalated conversations for bot improvement.
# Monitoring: CSAT, containment rate, escalation rate as key KPIs.

[[Category:Artificial Intelligence]]
[[Category:Natural Language Processing]]
[[Category:Conversational AI]]
</div>
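The guardrail and escalation steps in the design checklist above can be sketched in plain Python. This is a minimal illustration, not a specific library's API: the function names, regex patterns, keyword lists, and the 0.5 confidence threshold are all assumptions chosen for the example.

```python
import re

# Illustrative PII patterns (assumption: email + US-style phone numbers only;
# production systems typically use a dedicated PII detection service).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

# Illustrative trigger lists; real systems would use a sentiment model
# and retrieval confidence scores instead of keyword matching.
ESCALATION_PHRASES = ("human", "agent", "representative", "speak to a person")
NEGATIVE_MARKERS = ("terrible", "useless", "angry", "cancel my account")

def validate_input(message: str) -> dict:
    """Flag PII in user input so it can be redacted before reaching the LLM."""
    found = {name: pat.findall(message) for name, pat in PII_PATTERNS.items()}
    return {name: hits for name, hits in found.items() if hits}

def should_escalate(message: str, confidence: float, threshold: float = 0.5) -> bool:
    """Hand off to a human on low confidence, explicit requests, or negative sentiment."""
    text = message.lower()
    if confidence < threshold:
        return True
    if any(phrase in text for phrase in ESCALATION_PHRASES):
        return True
    return any(marker in text for marker in NEGATIVE_MARKERS)

# An explicit request for a human triggers handoff even at high confidence.
print(should_escalate("Let me talk to a human please", confidence=0.9))  # True
print(validate_input("My email is jane@example.com"))
```

In a deployed bot, the confidence signal would come from the RAG pipeline (for example, retrieval distances or an answerability score), and a positive `should_escalate` result would route the conversation to a human queue while logging it for the feedback-loop review described above.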