Reasoning Models and the Architecture of Thought
== Remembering ==

* '''Reasoning Models''' – Advanced AI models designed specifically to carry out complex, multi-step logical deduction, mathematics, and problem-solving, moving beyond simple pattern matching and text generation.
* '''System 1 vs. System 2 Thinking''' – A psychological framework from Daniel Kahneman applied to AI. ''System 1'': fast, automatic, intuitive (standard LLMs). ''System 2'': slow, deliberate, analytical, step-by-step logic (reasoning models).
* '''Chain-of-Thought (CoT) Prompting''' – The foundational technique that unlocked AI reasoning. Instead of asking the AI only for the final answer, the user prompts the AI to "think step by step," leading the model to explicitly write out its intermediate logical calculations (a minimal prompt sketch follows this list).
* '''Latent Reasoning (Hidden Thoughts)''' – A feature of advanced reasoning models (such as OpenAI's o1). Before generating the final output for the user, the model enters a hidden, internal loop in which it generates, tests, and discards thousands of logical steps in the background.
* '''Tree of Thoughts (ToT)''' – An advanced reasoning architecture. Instead of a single, linear chain of logic, the AI generates multiple branching paths of reasoning simultaneously, evaluates each branch, abandons the dead ends, and follows the most promising branch to the solution (see the search sketch after this list).
* '''Self-Correction''' – A critical capability of a true reasoning model: realizing mid-thought that a previous logical step was incorrect, backtracking, fixing the error, and resuming the calculation (see the correction-loop sketch after this list).
* '''Search and Planning''' – The integration of classic computer-science search algorithms (such as Monte Carlo Tree Search, used in AlphaGo) with LLMs. The model treats reasoning as a large search space, actively searching for the correct path to the goal.
* '''Compute-Optimal Scaling (Inference-Time Compute)''' – A major shift in AI economics. Traditional models spend essentially all of their compute during ''training''; reasoning models also spend large amounts of compute during ''inference'' (while answering the user), sometimes taking minutes or hours to "think" about a single prompt.
* '''Formal Logic & Math Validation''' – The benchmarks used to test reasoning models. While standard LLMs are often evaluated on open-ended writing or trivia, reasoning models are tested on PhD-level physics questions, complex coding challenges, and formal mathematical proofs.
* '''The Verification Gap''' – The principle that it is vastly cheaper computationally to ''verify'' whether an answer is correct than to ''generate'' a correct answer from scratch. Reasoning models exploit this by generating multiple candidate answers and verifying them internally (see the best-of-N sketch after this list).
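To make the Chain-of-Thought item above concrete, here is a minimal sketch contrasting a direct prompt with a step-by-step prompt. The <code>call_llm</code> function is a hypothetical placeholder, not a real API; the prompt wording is only illustrative.

<syntaxhighlight lang="python">
# Minimal sketch of Chain-of-Thought prompting.
# `call_llm` is a hypothetical placeholder: swap in a real model API call.

def call_llm(prompt: str) -> str:
    return "<model output would appear here>"  # placeholder, not a real model

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Direct prompt: ask only for the final answer (System 1 style).
direct_prompt = question + "\nGive only the final number."

# Chain-of-Thought prompt: ask the model to write out intermediate steps
# before committing to a final answer (System 2 style).
cot_prompt = (
    question
    + "\nThink step by step. Show each intermediate calculation, "
    "then give the final answer on a line starting with 'Answer:'."
)

print(call_llm(direct_prompt))
print(call_llm(cot_prompt))
</syntaxhighlight>

The only difference between the two prompts is the explicit instruction to show intermediate work, which is what pushes the model toward the slower, System 2 style of output.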
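The Tree of Thoughts idea can be sketched as a simple beam search over partial reasoning paths, under the assumption that a model can both propose candidate next steps and score a path. The helpers <code>propose_thoughts</code> and <code>score_thought</code> below are toy placeholders; in a real system both would be backed by model calls.

<syntaxhighlight lang="python">
# Sketch of a Tree-of-Thoughts style search, written here as a beam search:
# expand several partial reasoning paths per step, score them, keep the best few.

from typing import List

def propose_thoughts(path: List[str], k: int) -> List[str]:
    # Placeholder: a real system would ask the model for k candidate next steps.
    return [f"{path[-1]} -> step option {i}" for i in range(k)]

def score_thought(path: List[str]) -> float:
    # Placeholder: a real system would ask the model (or a verifier) to rate the path.
    return float(len(path))

def tree_of_thoughts(problem: str, depth: int = 3, branch: int = 3, beam: int = 2) -> List[str]:
    frontier = [[problem]]                    # each element is one partial reasoning path
    for _ in range(depth):
        candidates = []
        for path in frontier:
            for thought in propose_thoughts(path, branch):
                candidates.append(path + [thought])
        candidates.sort(key=score_thought, reverse=True)  # evaluate every branch
        frontier = candidates[:beam]                      # abandon the dead ends
    return max(frontier, key=score_thought)

print(tree_of_thoughts("Prove the claim."))
</syntaxhighlight>

The published Tree of Thoughts work explores several search strategies; a beam search is used here only because it is the shortest way to show the expand / evaluate / prune loop.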
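Self-correction can likewise be sketched as a loop: draft a solution, run a checker over it, and feed any detected error back so the model can revise the faulty step. Both <code>call_llm</code> and <code>find_error</code> are hypothetical placeholders under this assumption, not part of any real library.

<syntaxhighlight lang="python">
# Sketch of a self-correction loop: draft, check, revise until the check passes
# or the compute budget runs out.

from typing import Optional

def call_llm(prompt: str) -> str:
    return "<revised solution would appear here>"  # placeholder, not a real model

def find_error(solution: str) -> Optional[str]:
    # Placeholder checker: a real one might re-run the arithmetic, execute
    # generated code, or call a verifier model. Return None when no error is found.
    return None

def solve_with_self_correction(problem: str, max_rounds: int = 5) -> str:
    solution = call_llm("Solve step by step:\n" + problem)
    for _ in range(max_rounds):
        error = find_error(solution)
        if error is None:
            break                     # the draft passed the check
        # Point the model at the faulty step and ask it to fix and continue.
        solution = call_llm(
            "Problem:\n" + problem
            + "\n\nPrevious attempt:\n" + solution
            + "\n\nA checker found this issue: " + error
            + "\nRevise the faulty step and complete the solution."
        )
    return solution

print(solve_with_self_correction("Compute 17 * 24 without a calculator."))
</syntaxhighlight>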
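Finally, the verification gap can be illustrated with best-of-N sampling, which is also one simple way of spending extra inference-time compute: sample the model several times and keep only a candidate that passes a cheap, exact check. The polynomial and the <code>call_llm</code> stub below are assumptions chosen purely so the verifier can be exact.

<syntaxhighlight lang="python">
# Sketch of exploiting the verification gap: generating a correct answer is hard,
# but checking a candidate against the problem is cheap.

import random
from typing import Optional

def call_llm(prompt: str) -> str:
    return str(random.randint(1, 5))  # placeholder: pretend the model guesses a number

def verify(candidate: int) -> bool:
    # Cheap, exact check: is the candidate a root of x^3 - 6x^2 + 11x - 6?
    return candidate**3 - 6 * candidate**2 + 11 * candidate - 6 == 0

def best_of_n(prompt: str, n: int = 8) -> Optional[int]:
    for _ in range(n):
        raw = call_llm(prompt)
        try:
            candidate = int(raw.strip())
        except ValueError:
            continue                 # unparsable sample, try again
        if verify(candidate):
            return candidate         # first candidate that verifies wins
    return None                      # no sample passed verification

print(best_of_n("Give an integer root of x^3 - 6x^2 + 11x - 6. Answer with one number."))
</syntaxhighlight>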