Attention Perception
<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
{{BloomIntro}}
Attention and Perception are the gateway processes of the mind. Perception is the process of organizing and interpreting sensory information to understand the environment, while Attention is the cognitive mechanism that allows us to focus on specific stimuli while filtering out others. Together, they construct our "subjective reality." Far from being a passive, camera-like recording, perception is an active, "top-down" constructive process in which the brain uses prior knowledge and expectations to make sense of ambiguous sensory data.
</div>
__TOC__
<div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Remembering</span> ==
* '''Perception''' – The process of selecting, organizing, and interpreting sensory information.
* '''Attention''' – The cognitive process of selectively concentrating on one aspect of the environment.
* '''Sensation''' – The physical process of sensory receptors responding to external stimuli (light, sound, pressure).
* '''Top-Down Processing''' – Using prior knowledge, expectations, and context to influence perception.
* '''Bottom-Up Processing''' – Sensory analysis that begins with the raw data and works up to integration.
* '''Selective Attention''' – Focusing on one stimulus while ignoring distractions (e.g., the Cocktail Party Effect).
* '''Divided Attention''' – Attempting to process multiple sources of information or perform multiple tasks at once.
* '''Inattentional Blindness''' – Failing to see visible objects when attention is directed elsewhere (e.g., the "invisible gorilla").
* '''Change Blindness''' – Failing to notice a significant change in a visual scene.
* '''Gestalt Principles''' – Principles of organization (e.g., proximity, similarity, closure) that explain how we perceive patterns.
* '''Proprioception''' – The sense of the relative position of one's own body parts.
* '''Multisensory Integration''' – The way the brain combines information from different senses (e.g., the McGurk Effect).
* '''Psychophysics''' – The study of the relationship between physical stimuli and the sensations they produce.
* '''Absolute Threshold''' – The minimum stimulation needed to detect a stimulus 50% of the time.
</div>
<div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Understanding</span> ==
Our brain does not "see" the world; it builds a model of it.

'''Perception as Inference''': Helmholtz described perception as "unconscious inference." When you see a chair, your eyes receive a flat 2D image. Your brain infers the 3D structure from shadows, perspective, and your knowledge of what chairs are. This is why optical illusions work: they exploit the brain's "shortcuts."

'''The Spotlight of Attention''': Attention is often compared to a spotlight that we can move around our environment, either covertly or overtly.
* '''Broadbent's Filter Model''': Suggests we have a limited capacity and must filter out information early in processing.
* '''Treisman's Attenuation Model''': Suggests unattended information is "turned down" rather than completely blocked, which explains why we still hear our own name across a loud room.

'''Feature Integration Theory''': Anne Treisman proposed that perception occurs in two stages:
# '''Pre-attentive stage''': Features (color, shape, movement) are processed automatically and in parallel.
# '''Focused attention stage''': Features are "bound" together into a single object. Without attention, "illusory conjunctions" (e.g., reporting a green circle after viewing a red circle and a green square) can occur.
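The two-stage account above can be sketched as a toy simulation. Assume each display item is a (color, shape) pair: the pre-attentive stage registers the feature lists in parallel, and the binding step pairs them. Without attention, binding is left to chance, which can produce illusory conjunctions. The function name and the shuffle-based failure mode here are illustrative assumptions, not a published model.
<syntaxhighlight lang="python">
import random

def bind_features(items, attention=True, seed=None):
    """Toy sketch of Treisman's two stages (illustrative, not a published model).

    Pre-attentive stage: colors and shapes are registered as separate,
    free-floating feature lists. Focused attention binds them correctly;
    without attention, bindings are shuffled, so features can recombine
    into objects that were never presented (illusory conjunctions).
    """
    rng = random.Random(seed)
    colors = [color for color, _shape in items]   # registered in parallel
    shapes = [shape for _color, shape in items]   # registered in parallel
    if not attention:
        rng.shuffle(colors)  # binding fails: colors reattach at random
    return list(zip(colors, shapes))

display = [("red", "circle"), ("green", "square")]
print(bind_features(display, attention=True))            # veridical percept
print(bind_features(display, attention=False, seed=7))   # may yield a "green circle"
</syntaxhighlight>
Note that even when binding fails, the individual features are all still present; only their pairing is wrong, which matches how illusory conjunctions are reported in the lab.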
</div>
<div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Applying</span> ==
'''Simulating a Visual Search Task (Feature vs. Conjunction Search):'''
<syntaxhighlight lang="python">
import random

def simulate_visual_search(n_items, search_type='feature'):
    """
    Simulates the reaction-time logic of a visual search.
    'feature': Search for a red 'O' among green 'O's (parallel).
    'conjunction': Search for a red 'O' among green 'O's and red 'X's (serial).
    """
    # Feature search: the target "pops out", so RT is independent of set size
    if search_type == 'feature':
        base_rt = 400  # ms
        noise = random.uniform(0, 20)
        return base_rt + noise
    # Conjunction search: RT increases linearly with set size
    elif search_type == 'conjunction':
        base_rt = 400
        per_item_cost = 20  # ms per item scanned
        return base_rt + (n_items * per_item_cost) + random.uniform(0, 50)

# Compare set sizes 10 vs. 50
print(f"Feature Search (10 items): {simulate_visual_search(10, 'feature'):.1f}ms")
print(f"Feature Search (50 items): {simulate_visual_search(50, 'feature'):.1f}ms")
print(f"Conjunction Search (10 items): {simulate_visual_search(10, 'conjunction'):.1f}ms")
print(f"Conjunction Search (50 items): {simulate_visual_search(50, 'conjunction'):.1f}ms")
# Feature search time is flat; conjunction search time scales with set size.
</syntaxhighlight>
; Design Applications
: '''User Interface (UI) Design''' – Using Gestalt principles (grouping, contrast) to guide user attention.
: '''Aviation/Driving''' – Understanding the limits of divided attention to prevent "cognitive tunneling."
: '''Advertising''' – Using "bottom-up" salience (bright colors, movement) to capture reflexive attention.
: '''Gaming''' – Managing cognitive load so players don't miss critical cues (inattentional blindness).
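The absolute threshold defined earlier (detection 50% of the time) can also be put into code. A common way to model it is a logistic psychometric function, where detection probability rises smoothly with stimulus intensity and crosses 50% exactly at the threshold. The threshold and slope values below are illustrative assumptions, not empirical constants.
<syntaxhighlight lang="python">
import math

def detection_probability(intensity, threshold=10.0, slope=0.8):
    """Logistic psychometric function (sketch with assumed parameters).

    Returns the probability of detecting a stimulus of the given intensity.
    By construction, p = 0.5 exactly when intensity == threshold, matching
    the 50%-detection definition of the absolute threshold.
    """
    return 1.0 / (1.0 + math.exp(-slope * (intensity - threshold)))

# Detection probability rises through 50% at the (assumed) threshold of 10
for intensity in (6, 8, 10, 12, 14):
    p = detection_probability(intensity)
    print(f"intensity {intensity:>2}: p(detect) = {p:.2f}")
</syntaxhighlight>
The smooth S-shaped curve, rather than a hard cutoff, is why the threshold is defined statistically: near-threshold stimuli are sometimes detected and sometimes missed.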
</div>
<div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Analyzing</span> ==
{| class="wikitable"
|+ Dorsal vs. Ventral Streams
! Stream !! Path !! Function !! Description
|-
| Ventral || Temporal Lobe || "What" || Object recognition, faces, color.
|-
| Dorsal || Parietal Lobe || "Where/How" || Spatial awareness, movement, guiding action.
|}
'''The Binding Problem''': How does the brain combine the output of different specialized areas (one for color, one for motion, one for shape) into a unified experience of a "flying red bird"? This remains one of the greatest mysteries in neuroscience. Synchronized neural firing (gamma oscillations) is a leading hypothesis.
</div>
<div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Evaluating</span> ==
Evaluating theories of perception:
# '''Ecological Validity''': Does the laboratory finding (like 2D illusions) hold true in the complex 3D real world?
# '''Robustness''': Does the theory explain cross-modal phenomena (e.g., how the smell of a food changes its perceived taste)?
# '''Computational Efficiency''': Is the proposed model (like Bayesian inference) mathematically feasible for a biological brain to implement in milliseconds?
</div>
<div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Creating</span> ==
Future Directions:
# '''Predictive Coding''': Developing AI architectures that perceive the world by predicting future frames and only processing "prediction errors."
# '''Sensory Substitution''': Creating devices that allow the blind to "see" via sound or tactile feedback on the skin (exploiting neuroplasticity).
# '''Virtual/Augmented Reality''': Engineering "perceptual tricks" to make digital environments feel physically real.
# '''Attention Enhancement''': Using neurofeedback or non-invasive brain stimulation (tDCS) to improve focus in high-stakes environments.
</div>
[[Category:Cognitive Science]]
[[Category:Perception]]
[[Category:Attention]]