== Understanding ==

LLMs are next-token predictors: they complete text in a manner consistent with their training distribution. A prompt is not a command but a '''context that constrains the probability distribution''' over possible continuations. When you write a clear, structured prompt, you are steering the model toward the region of its output space that contains useful, accurate responses.

'''Why prompts matter so much''': The model's behavior is entirely determined by its weights plus its input context. Since you cannot change the weights at inference time (without fine-tuning), the prompt is your only lever. Small changes (adding "think step by step," restructuring information, or providing a clear role) can swing response quality dramatically.

'''Chain-of-thought''' works because LLMs trained on vast amounts of human-written text have learned that reasoning traces precede correct conclusions in textbooks, solutions, and technical writing. By prompting "let's think step by step," you nudge the model into the portion of its output distribution where reasoning traces are followed by correct answers.

'''The anatomy of an effective prompt''':

<syntaxhighlight lang="text">
[System/Role]      - Who the model is and what constraints it operates under
[Context/Input]    - Background information, document, or data relevant to the task
[Task/Instruction] - What exactly to do, stated clearly and unambiguously
[Format]           - What the output should look like (JSON? Numbered list? Table?)
[Examples]         - (Optional) 1-5 input-output demonstrations
[Output Cue]       - A partial beginning of the expected response to prime generation
</syntaxhighlight>

The more precisely each of these components is specified, the less the model must infer, and inference is where errors enter.
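The chain-of-thought cue described above is just a string transformation applied before the prompt is sent to a model. The following is a minimal sketch in Python; the helper name <code>build_cot_prompt</code> is hypothetical, not a library API, and no model call is made.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a Q/A scaffold and append a step-by-step cue,
    so the model emits a reasoning trace before its final answer.
    (Hypothetical helper for illustration; not a real library function.)"""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )

# Example usage: a classic word problem where direct answers often fail
# but reasoning traces tend to succeed.
prompt = build_cot_prompt(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)
print(prompt)
```

The resulting string would be passed as the user message to whatever completion API is in use; the cue primes the model to continue with intermediate steps rather than a bare answer.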
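The six-part anatomy above can be sketched as a small assembly function. This is an illustrative Python sketch with a hypothetical helper name (<code>assemble_prompt</code>); real prompt templates vary, but the component order shown here mirrors the anatomy.

```python
def assemble_prompt(role, context, task, fmt, examples=(), output_cue=""):
    """Join the prompt components in the order given by the anatomy:
    role, context, task, format, optional few-shot examples, output cue.
    (Hypothetical helper for illustration; not a library API.)"""
    parts = [role, context, task, f"Output format: {fmt}"]
    for example_input, example_output in examples:
        # Few-shot demonstrations: paired input/output examples.
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    if output_cue:
        # A partial beginning of the response, to prime generation.
        parts.append(output_cue)
    return "\n\n".join(parts)

# Example usage with made-up content for each component.
prompt = assemble_prompt(
    role="You are a precise technical summarizer.",
    context="Document: LLMs predict the next token given a context.",
    task="Summarize the document in one sentence.",
    fmt="A single plain-text sentence.",
    examples=[("Water boils at 100 C at sea level.",
               "Water's boiling point is 100 C at standard pressure.")],
    output_cue="Summary:",
)
print(prompt)
```

Ending the prompt with the output cue (<code>Summary:</code>) means the model's continuation begins directly with the desired content, which reduces preamble and off-format responses.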