NLP

How to read this page: This article maps the topic from beginner to expert across six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Scan the headings to see the full scope, then read from wherever your knowledge starts to feel uncertain.

Natural Language Processing (NLP) is the branch of artificial intelligence concerned with enabling computers to understand, interpret, and generate human language. From search engines and virtual assistants to machine translation and sentiment analysis, NLP is one of the most pervasive AI technologies, touching billions of people every day. The field has undergone a revolution with the advent of transformer-based language models.

Remembering

  • Token — The basic unit of text that NLP models process. Tokens can be words, subwords, or characters depending on the tokenization strategy.
  • Tokenization — The process of splitting text into tokens. Common algorithms: Byte-Pair Encoding (BPE), WordPiece, SentencePiece. (See the sketch after this list.)
  • Corpus — A large collection of text used to train NLP models.
  • Vocabulary — The set of all unique tokens a model knows. Modern LLMs typically have vocabularies of 32k–100k tokens.
  • Part-of-Speech (POS) tagging — Labeling each word with its grammatical role (noun, verb, adjective, etc.).
  • Named Entity Recognition (NER) — Identifying and classifying entities in text (persons, organizations, locations, dates).
  • Sentiment analysis — Determining the emotional tone of text (positive, negative, neutral).
  • Machine translation — Automatically converting text from one language to another.
  • Stemming — Reducing words to their root form (e.g., "running" → "run"). Often too aggressive.
  • Lemmatization — Reducing words to their dictionary form using linguistic rules (e.g., "better" → "good").
  • Stop words — Common words (the, is, at) often removed in preprocessing as they carry little semantic meaning.
  • TF-IDF — Term Frequency–Inverse Document Frequency; a statistical measure of how important a word is to a document in a collection.
  • Word embeddings — Dense vector representations of words that capture semantic relationships (Word2Vec, GloVe).
  • Perplexity — A metric for evaluating language models; lower perplexity indicates better prediction of text sequences.
  • BLEU score — Bilingual Evaluation Understudy; a metric for evaluating machine translation quality.
  • Large Language Model (LLM) — A neural network trained on massive text corpora to predict and generate text.
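
To make tokenization concrete, here is a minimal sketch using the HuggingFace AutoTokenizer API. The bert-base-uncased checkpoint is just an illustrative choice; the exact subword splits depend on the vocabulary.

<syntaxhighlight lang="python">
from transformers import AutoTokenizer

# Load the WordPiece tokenizer that ships with BERT (uncased)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Rare or long words are split into subword pieces; '##' marks a continuation
print(tokenizer.tokenize("Tokenization handles unseen words gracefully"))
# e.g. ['token', '##ization', 'handles', 'unseen', 'words', 'gracefully']
</syntaxhighlight>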

Understanding

NLP's fundamental challenge is the ambiguity of language. The same sentence can mean different things in different contexts ("I saw her duck" — did someone see a person bend down, or see her waterfowl?). Humans resolve this using world knowledge, context, and pragmatics. Teaching machines to do the same is the core problem.

The evolution of NLP mirrors advances in representation learning:

Rule-based systems (1950s–1980s): Hand-crafted grammars and lexicons. Brittle, but interpretable.

Statistical NLP (1990s–2000s): Probabilistic models like Hidden Markov Models and n-gram language models. Better generalization, but still limited by sparse data.

Word embeddings (2013+): Word2Vec and GloVe showed that words with similar meanings cluster together in vector space. "King − Man + Woman ≈ Queen" is the famous demonstration of captured relational semantics.
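
A hedged sketch of that analogy arithmetic using gensim's pre-trained GloVe vectors (glove-wiki-gigaword-50 is one convenient small download; the top neighbor is typically, though not guaranteed to be, "queen"):

<syntaxhighlight lang="python">
import gensim.downloader as api

# Downloads 50-dimensional GloVe vectors on first use (~66 MB)
vectors = api.load("glove-wiki-gigaword-50")

# king - man + woman: most_similar adds 'positive' vectors and subtracts 'negative' ones
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# e.g. [('queen', 0.85...)]
</syntaxhighlight>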

Sequence-to-sequence with attention (2014–2017): Encoder-decoder architectures with attention mechanisms enabled machine translation breakthroughs. Attention allows the model to "look back" at relevant parts of the input when generating each output token.

Transformer era (2017+): The "Attention Is All You Need" paper replaced recurrence entirely with self-attention, enabling massively parallel training. BERT (encoder-only) enabled classification tasks; GPT (decoder-only) enabled generation. Models scaled from millions to hundreds of billions of parameters.
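
To show what self-attention actually computes, here is a minimal NumPy sketch of single-head scaled dot-product attention with toy dimensions. Real transformers add multiple heads, masking, and learned positional information.

<syntaxhighlight lang="python">
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # pairwise similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                          # each token becomes a weighted mix of all tokens

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                     # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (5, 8)
</syntaxhighlight>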

A key insight: language modeling — predicting the next word — is an extraordinarily rich self-supervised task that forces models to learn syntax, semantics, facts, and reasoning as a byproduct.
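
A small sketch of that objective in action, using GPT-2 via transformers (gpt2 is an arbitrary small checkpoint; the predicted continuation is typical but not guaranteed):

<syntaxhighlight lang="python">
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits             # shape: (1, seq_len, vocab_size)

# The last position's logits score every vocabulary token as the next word
next_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_id]))              # e.g. ' Paris'
</syntaxhighlight>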

Applying

Text classification with a pre-trained transformer (HuggingFace):

<syntaxhighlight lang="python">
from transformers import pipeline

# Sentiment analysis using a pre-trained model
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

results = classifier([
    "This film was absolutely wonderful!",
    "The service was terrible and I want a refund."
])
# [{'label': 'POSITIVE', 'score': 0.9998}, {'label': 'NEGATIVE', 'score': 0.9996}]

# Named Entity Recognition
ner = pipeline("ner", grouped_entities=True)
ner("Apple was founded by Steve Jobs in Cupertino, California.")
# [{'entity_group': 'ORG', 'word': 'Apple', ...},
#  {'entity_group': 'PER', 'word': 'Steve Jobs', ...},
#  {'entity_group': 'LOC', 'word': 'Cupertino, California', ...}]
</syntaxhighlight>

Common NLP task → model mapping:

  • Text classification → BERT fine-tuned, DistilBERT, RoBERTa
  • Text generation → GPT-2/3/4, LLaMA, Mistral
  • Translation → MarianMT, NLLB, Google Translate API
  • Summarization → BART, Pegasus, T5
  • Question answering → RoBERTa fine-tuned on SQuAD
  • Semantic search → Sentence-BERT, E5, text-embedding-ada-002
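
As a worked example of the semantic-search row, here is a minimal sketch with the sentence-transformers library (all-MiniLM-L6-v2 is one common lightweight checkpoint, not the only option):

<syntaxhighlight lang="python">
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = ["How do I reset my password?",
        "Shipping takes 3-5 business days.",
        "Contact support to change your login credentials."]
query = "I forgot my account password"

# Embed the query and documents, then rank documents by cosine similarity
doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]
print(docs[int(scores.argmax())])   # likely the password-related document
</syntaxhighlight>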

Analyzing

{| class="wikitable"
|+ NLP Approach Trade-offs
! Approach !! Strengths !! Weaknesses
|-
| Rule-based || Deterministic, interpretable, no data needed || Brittle, doesn't generalize, high maintenance
|-
| Statistical (n-gram) || Simple, fast, works on small data || Can't capture long-range dependencies
|-
| RNN/LSTM || Handles sequences, captures context || Slow to train, vanishing gradients over long sequences
|-
| Transformer (BERT-style) || Rich contextual representations, transfer learning || High compute cost, large memory, positional encoding limits
|-
| LLMs (GPT-4, Claude) || Few-shot learning, generalist capability || Very expensive, hallucination, opaque reasoning
|}

Key failure modes:

  • Hallucination — LLMs confidently generate factually incorrect text because they learn statistical patterns, not truth.
  • Bias amplification — Models trained on internet text inherit and sometimes amplify demographic biases present in the data.
  • Out-of-vocabulary (OOV) issue — Rare words, misspellings, or domain-specific terminology not in the training vocabulary are poorly handled.
  • Evaluation mismatch — Automated metrics like BLEU and ROUGE correlate poorly with human judgment for generation quality.

Evaluating

Expert NLP practitioners evaluate systems holistically:

Intrinsic vs. extrinsic evaluation: Intrinsic metrics (perplexity, BLEU) measure the model's internal properties. Extrinsic metrics measure performance on the downstream task. Always prioritize downstream task performance — a model with worse perplexity can outperform on the actual application.
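
For illustration, an intrinsic metric like perplexity can be computed directly from a causal LM's loss. A hedged sketch with GPT-2 (any causal checkpoint works; the sample sentence is arbitrary):

<syntaxhighlight lang="python">
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Natural language processing enables computers to understand text."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    # With labels == inputs, the model returns mean cross-entropy over next-token predictions
    loss = model(ids, labels=ids).loss

print(torch.exp(loss).item())  # perplexity = exp(average negative log-likelihood)
</syntaxhighlight>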

Human evaluation: For generation tasks, human raters evaluating fluency, coherence, factuality, and helpfulness remain the gold standard. A/B tests comparing model outputs are common in production.

Bias and fairness auditing: Run the model on carefully constructed test sets designed to reveal disparate treatment. Tools like WinoBias (gender bias in coreference) are standard benchmarks.

Error analysis: Rather than global metrics, inspect specific failure cases. Cluster errors by type (wrong entity span, wrong sentiment polarity, factual error). This guides targeted improvements more effectively than global metric optimization.

Expert practitioners distinguish between capability (what the model can do in optimal conditions) and reliability (how consistently it performs across diverse, realistic inputs). Production systems must optimize for reliability.

Creating

Designing an end-to-end NLP system:

1. Task and data specification

  • Define the exact input and output format
  • Audit data quality: label noise, class imbalance, domain coverage
  • Establish human performance as a ceiling

2. Model architecture selection

<syntaxhighlight lang="text">
Input Text
    ↓
Tokenizer (BPE/WordPiece)
    ↓
Pre-trained Transformer Backbone (BERT/RoBERTa/LLaMA)
    ↓
[Optional: Domain-adaptive pre-training on in-domain unlabeled data]
    ↓
Task-specific head (Classification / Seq2Seq / Token classification)
    ↓
Fine-tune on labeled data → Evaluate → Iterate
</syntaxhighlight>
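
A condensed sketch of the fine-tuning step using the HuggingFace Trainer. SST-2 from GLUE stands in for your labeled data, and the hyperparameters are placeholders, not recommendations:

<syntaxhighlight lang="python">
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch examples
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="sst2-finetune",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
</syntaxhighlight>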

3. Serving considerations

  • Quantize the model (INT8/FP16) for faster inference (see the sketch after this list)
  • Use ONNX Runtime or TensorRT for optimized inference
  • Cache embeddings for frequently queried inputs
  • Set up monitoring for input distribution drift
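
A minimal sketch of the quantization bullet using PyTorch's dynamic INT8 quantization. Transformer layers are dominated by nn.Linear, so this is typically where dynamic quantization helps; the checkpoint name is just an example:

<syntaxhighlight lang="python">
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english")
model.eval()

# Convert nn.Linear weights to INT8; activations stay FP32 and are quantized on the fly
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

# 'quantized' is a drop-in replacement for CPU inference, usually smaller and faster
</syntaxhighlight>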

4. Quality safeguards

  • Output filtering for sensitive content
  • Confidence thresholds to trigger human review
  • Retrieval augmentation to reduce hallucination on factual queries
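
To make the confidence-threshold safeguard concrete, here is a hypothetical routing helper. Both route_prediction and the 0.85 threshold are illustrative assumptions, not a standard API; thresholds should be tuned on held-out data.

<syntaxhighlight lang="python">
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def route_prediction(text, threshold=0.85):
    """Hypothetical helper: auto-accept confident predictions, escalate the rest."""
    result = classifier(text)[0]          # {'label': ..., 'score': ...}
    if result["score"] < threshold:
        return {"action": "human_review", **result}
    return {"action": "auto_accept", **result}

print(route_prediction("The product is fine, I guess."))
</syntaxhighlight>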