== Understanding ==

Neuroscience generates diverse data requiring different AI tools:

'''Neural decoding with ML''': Given neural spike trains or fMRI activations, can we decode what the subject is seeing, thinking, or intending? Decoders range from linear regression (simple, interpretable) to deep learning (powerful, opaque). Meta AI's "Brain Decoding" work (2023) used MEG signals to reconstruct perceived images with over 70% top-5 accuracy via a multimodal embedding alignment approach.

'''Representational Similarity Analysis (RSA)''': Compare the similarity structure of AI model representations (e.g., ResNet activations) to neural representations (e.g., fMRI responses). If the geometry of object representations matches between model and brain, the model may be implementing a similar computation. This approach has revealed that deep CNNs trained on ImageNet predict ventral visual stream responses better than any previous model class.

'''Calcium imaging analysis''': Two-photon calcium imaging captures the activity of thousands of neurons simultaneously as changes in fluorescence intensity. A typical ML pipeline:

# Cellpose detects cell boundaries in fluorescence images.
# Suite2p or CaImAn extracts calcium traces from the detected cells.
# Dimensionality reduction (PCA, UMAP) reveals low-dimensional dynamics in neural population activity.
# Recurrent networks model those dynamics.

'''Brain-computer interfaces''': BCIs decode motor intentions from neural signals to restore movement and communication for paralyzed patients. BrainGate used linear decoders on 96-electrode Utah arrays; more recent systems use deep learning for higher accuracy and better generalization. In 2024, a patient with ALS used a BCI speech decoder that reached 78 words per minute, a major step toward typical conversational rates (roughly 150 words per minute).

'''Computational models of cognition''': Reinforcement learning has been used to model dopamine-based learning in the basal ganglia. Predictive coding frameworks are implemented as hierarchical generative models.
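The RSA comparison described above can be sketched in a few lines. This is a toy illustration with synthetic data and made-up dimensions (20 stimuli, a hypothetical 100-voxel "brain" and 512-unit "model" driven by a shared latent structure); a real analysis would use measured fMRI response patterns and actual network activations.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix, as a condensed vector:
    1 - Pearson correlation between the response patterns of each
    pair of stimuli (rows of `responses`)."""
    return pdist(responses, metric="correlation")

rng = np.random.default_rng(0)

# Toy data: 20 stimuli sharing a 10-dim latent structure.
# "Brain" = 100 voxels, "model" = 512 units, each a noisy
# linear readout of the same latents.
latent = rng.normal(size=(20, 10))
brain = latent @ rng.normal(size=(10, 100)) + 0.5 * rng.normal(size=(20, 100))
model = latent @ rng.normal(size=(10, 512)) + 0.5 * rng.normal(size=(20, 512))

# RSA score: rank correlation between the two dissimilarity structures.
rho, _ = spearmanr(rdm(brain), rdm(model))
print(f"RSA (Spearman rho) = {rho:.2f}")
```

Because both systems inherit the same latent geometry, the two RDMs correlate strongly even though the voxel and unit spaces have different dimensionality — the point of RSA is exactly that it compares geometries, not raw activations.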
Transformer attention has structural similarities to cortical attention circuits. These computational models make testable predictions that experiments can validate or falsify.
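The reinforcement-learning account of dopamine mentioned above is usually illustrated with temporal-difference (TD) learning: early in training the prediction error fires at the reward, and after learning it shifts to the predictive cue. The sketch below is the standard textbook TD(0) model, not anything specific from this article; the trial length, learning rate, and reward timing are arbitrary choices.

```python
import numpy as np

# TD(0) model of the dopamine reward-prediction error (textbook sketch).
# Within each trial, a cue occurs at t=0 and a reward of 1.0 at t=9.
# The pre-cue state has value fixed at 0 (cue arrival is unpredictable).
T, alpha, gamma = 10, 0.1, 1.0
V = np.zeros(T + 1)                  # V[T] = 0 is the terminal state
reward = np.zeros(T)
reward[T - 1] = 1.0

def run_trial(V):
    """One trial of TD(0) updates; returns the prediction errors."""
    delta = np.empty(T + 1)
    delta[0] = gamma * V[0] - 0.0    # error at cue onset (pre-cue value = 0)
    for t in range(T):
        # TD error: delta = r_t + gamma * V(t+1) - V(t)
        delta[t + 1] = reward[t] + gamma * V[t + 1] - V[t]
        V[t] += alpha * delta[t + 1]
    return delta

first = run_trial(V)
for _ in range(500):
    last = run_trial(V)

# Early training: the error fires at the reward.
# Late training: it fires at the cue, as in dopamine recordings.
print(f"early: cue {first[0]:.2f}, reward {first[-1]:.2f}")
print(f"late:  cue {last[0]:.2f}, reward {last[-1]:.2f}")
```

On the first trial the error is 1.0 at the reward and 0 at the cue; after training the pattern inverts, which is the signature Schultz-style dopamine experiments test for.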