Natural Language Processing
<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
{{BloomIntro}}
Natural Language Processing (NLP) is the branch of artificial intelligence concerned with enabling computers to understand, interpret, and generate human language. From search engines and virtual assistants to machine translation and sentiment analysis, NLP is one of the most pervasive AI technologies, touching billions of people every day. The field has undergone a revolution with the advent of transformer-based language models.
</div>

__TOC__

<div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Remembering</span> ==
* '''Token''' – The basic unit of text that NLP models process. Tokens can be words, subwords, or characters depending on the tokenization strategy.
* '''Tokenization''' – The process of splitting text into tokens. Common algorithms: Byte-Pair Encoding (BPE), WordPiece, SentencePiece.
* '''Corpus''' – A large collection of text used to train NLP models.
* '''Vocabulary''' – The set of all unique tokens a model knows. Modern LLMs typically have vocabularies of 32k–100k tokens.
* '''Part-of-Speech (POS) tagging''' – Labeling each word with its grammatical role (noun, verb, adjective, etc.).
* '''Named Entity Recognition (NER)''' – Identifying and classifying entities in text (persons, organizations, locations, dates).
* '''Sentiment analysis''' – Determining the emotional tone of text (positive, negative, neutral).
* '''Machine translation''' – Automatically converting text from one language to another.
* '''Stemming''' – Reducing words to their root form (e.g., "running" → "run"). Often too aggressive.
* '''Lemmatization''' – Reducing words to their dictionary form using linguistic rules (e.g., "better" → "good").
* '''Stop words''' – Common words (the, is, at) often removed in preprocessing as they carry little semantic meaning.
* '''TF-IDF''' – Term Frequency–Inverse Document Frequency; a statistical measure of how important a word is to a document in a collection.
* '''Word embeddings''' – Dense vector representations of words that capture semantic relationships (Word2Vec, GloVe).
* '''Perplexity''' – A metric for evaluating language models; lower perplexity indicates better prediction of text sequences.
* '''BLEU score''' – Bilingual Evaluation Understudy; a metric for evaluating machine translation quality.
* '''Large Language Model (LLM)''' – A neural network trained on massive text corpora to predict and generate text.
</div>

<div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Understanding</span> ==
NLP's fundamental challenge is the '''ambiguity''' of language. The same sentence can mean different things in different contexts ("I saw her duck" – did someone see a person bend down, or see her waterfowl?). Humans resolve this using world knowledge, context, and pragmatics. Teaching machines to do the same is the core problem.

The evolution of NLP mirrors advances in representation learning:

'''Rule-based systems''' (1950s–1980s): Hand-crafted grammars and lexicons. Brittle, but interpretable.

'''Statistical NLP''' (1990s–2000s): Probabilistic models like Hidden Markov Models and n-gram language models. Better generalization, but still limited by sparse data.
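To make the n-gram idea concrete, here is a minimal bigram language model – a toy sketch for illustration (the three-sentence corpus is invented, and a real model would add smoothing for unseen pairs):

<syntaxhighlight lang="python">
from collections import Counter, defaultdict

# Toy corpus; a real n-gram model is trained on millions of sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "the cat chased the dog",
]

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    for prev, nxt in zip(tokens, tokens[1:]):
        bigram_counts[prev][nxt] += 1

def next_word_probs(prev):
    """Maximum-likelihood estimate of P(next | prev), no smoothing."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probs("the"))
# {'cat': 0.333..., 'mat': 0.166..., 'dog': 0.333..., 'log': 0.166...}
# Any pair never seen in the corpus gets probability zero – exactly the
# sparse-data limitation noted above.
</syntaxhighlight>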
'''Word embeddings''' (2013+): Word2Vec and GloVe showed that words with similar meanings cluster together in vector space. "King − Man + Woman ≈ Queen" is the famous demonstration of captured relational semantics.

'''Sequence-to-sequence with attention''' (2014–2017): Encoder-decoder architectures with attention mechanisms enabled machine translation breakthroughs. Attention allows the model to "look back" at relevant parts of the input when generating each output token.

'''Transformer era''' (2017+): The "Attention Is All You Need" paper replaced recurrence entirely with self-attention, enabling massively parallel training. BERT (encoder-only) enabled classification tasks; GPT (decoder-only) enabled generation. Models scaled from millions to hundreds of billions of parameters.

A key insight: language modeling – predicting the next word – is an extraordinarily rich self-supervised task that forces models to learn syntax, semantics, facts, and reasoning as a byproduct.
</div>

<div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Applying</span> ==
'''Text classification with a pre-trained transformer (HuggingFace):'''
<syntaxhighlight lang="python">
from transformers import pipeline

# Sentiment analysis using a pre-trained model
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
results = classifier([
    "This film was absolutely wonderful!",
    "The service was terrible and I want a refund."
])
# [{'label': 'POSITIVE', 'score': 0.9998}, {'label': 'NEGATIVE', 'score': 0.9996}]

# Named Entity Recognition
ner = pipeline("ner", grouped_entities=True)
ner("Apple was founded by Steve Jobs in Cupertino, California.")
# [{'entity_group': 'ORG', 'word': 'Apple', ...},
#  {'entity_group': 'PER', 'word': 'Steve Jobs', ...},
#  {'entity_group': 'LOC', 'word': 'Cupertino, California', ...}]
</syntaxhighlight>

; Common NLP task → model mapping
: '''Text classification''' → BERT fine-tuned, DistilBERT, RoBERTa
: '''Text generation''' → GPT-2/3/4, LLaMA, Mistral
: '''Translation''' → MarianMT, NLLB, Google Translate API
: '''Summarization''' → BART, Pegasus, T5
: '''Question answering''' → RoBERTa fine-tuned on SQuAD
: '''Semantic search''' → Sentence-BERT, E5, text-embedding-ada-002
</div>

<div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Analyzing</span> ==
{| class="wikitable"
|+ NLP Approach Trade-offs
! Approach !! Strengths !! Weaknesses
|-
| Rule-based || Deterministic, interpretable, no data needed || Brittle, doesn't generalize, high maintenance
|-
| Statistical (n-gram) || Simple, fast, works on small data || Can't capture long-range dependencies
|-
| RNN/LSTM || Handles sequences, captures context || Slow to train, vanishing gradients over long sequences
|-
| Transformer (BERT-style) || Rich contextual representations, transfer learning || High compute cost, large memory, positional encoding limits
|-
| LLMs (GPT-4, Claude) || Few-shot learning, generalist capability || Very expensive, hallucination, opaque reasoning
|}

'''Key failure modes:'''
* '''Hallucination''' – LLMs confidently generate factually incorrect text because they learn statistical patterns, not truth.
* '''Bias amplification''' – Models trained on internet text inherit and sometimes amplify demographic biases present in the data.
* '''Out-of-vocabulary (OOV) inputs''' – Rare words, misspellings, or domain-specific terminology absent from the training vocabulary are handled poorly (see the tokenizer sketch after this list).
* '''Evaluation mismatch''' – Automated metrics like BLEU and ROUGE correlate poorly with human judgment for generation quality.
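Subword tokenization is the standard mitigation for the OOV failure mode. A minimal sketch using the HuggingFace tokenizer for the DistilBERT model from the Applying section (the example words are arbitrary, and the exact splits depend on the learned vocabulary):

<syntaxhighlight lang="python">
from transformers import AutoTokenizer

# WordPiece decomposes words it has never seen into known subword pieces,
# so no input is truly "out of vocabulary" – at worst a word falls back
# to individual characters. Continuation pieces are prefixed with "##".
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

print(tokenizer.tokenize("unhappiness"))             # common morphology: few pieces
print(tokenizer.tokenize("electroencephalography"))  # rare word: many small pieces
</syntaxhighlight>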
</div>

<div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Evaluating</span> ==
Expert NLP practitioners evaluate systems holistically:

'''Intrinsic vs. extrinsic evaluation''': Intrinsic metrics (perplexity, BLEU) measure the model's internal properties. Extrinsic metrics measure performance on the downstream task. Always prioritize downstream task performance: a model with worse perplexity can still outperform on the actual application.

'''Human evaluation''': For generation tasks, human raters evaluating fluency, coherence, factuality, and helpfulness remain the gold standard. A/B tests comparing model outputs are common in production.

'''Bias and fairness auditing''': Run the model on carefully constructed test sets designed to reveal disparate treatment. Tools like WinoBias (gender bias in coreference) are standard benchmarks.

'''Error analysis''': Rather than relying on global metrics alone, inspect specific failure cases. Cluster errors by type (wrong entity span, wrong sentiment polarity, factual error). This guides targeted improvements more effectively than global metric optimization.

Expert practitioners distinguish between '''capability''' (what the model can do in optimal conditions) and '''reliability''' (how consistently it performs across diverse, realistic inputs). Production systems must optimize for reliability.
</div>

<div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Creating</span> ==
Designing an end-to-end NLP system:

'''1. Task and data specification'''
* Define the exact input and output format
* Audit data quality: label noise, class imbalance, domain coverage
* Establish human performance as a ceiling

'''2. Model architecture selection'''
<syntaxhighlight lang="text">
Input Text
  → Tokenizer (BPE/WordPiece)
  → Pre-trained Transformer Backbone (BERT/RoBERTa/LLaMA)
  → [Optional: Domain-adaptive pre-training on in-domain unlabeled data]
  → Task-specific head (Classification / Seq2Seq / Token classification)
  → Fine-tune on labeled data → Evaluate → Iterate
</syntaxhighlight>

'''3. Serving considerations'''
* Quantize the model (INT8/FP16) for faster inference
* Use ONNX Runtime or TensorRT for optimized inference
* Cache embeddings for frequently queried inputs
* Set up monitoring for input distribution drift

'''4. Quality safeguards'''
* Output filtering for sensitive content
* Confidence thresholds to trigger human review (see the sketch after this list)
* Retrieval augmentation to reduce hallucination on factual queries
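As an illustration of the confidence-threshold safeguard, here is a minimal sketch reusing the sentiment pipeline from the Applying section (the 0.9 cutoff and the <code>needs_human_review</code> field are hypothetical choices for this sketch; in practice the threshold is tuned on a validation set against review capacity):

<syntaxhighlight lang="python">
from transformers import pipeline

# Illustrative policy: accept high-confidence predictions automatically,
# route everything else to a human reviewer. The 0.9 cutoff is an
# assumption for this sketch, not a recommended value.
CONFIDENCE_THRESHOLD = 0.9

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

def classify_with_review(text):
    result = classifier(text)[0]  # {'label': ..., 'score': ...}
    return {
        "label": result["label"],
        "score": result["score"],
        "needs_human_review": result["score"] < CONFIDENCE_THRESHOLD,
    }

print(classify_with_review("The product is fine, I guess."))
</syntaxhighlight>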
</div>

[[Category:Artificial Intelligence]]
[[Category:Natural Language Processing]]
[[Category:Machine Learning]]