Multimodal AI Models and the Architecture of Perception
== Remembering ==
* '''Multimodal AI''': Artificial intelligence systems capable of processing, understanding, and generating multiple forms (modalities) of data, such as text, images, audio, and video.
* '''Modality''': A specific type of data or format of information. Text, images, and audio are distinct modalities.
* '''Cross-Modal Learning''': The process by which an AI model learns the relationships between different modalities, for example learning that the text word "dog" corresponds to the visual pixels of a dog in an image.
* '''Embedding Space''': The shared vector space into which a model maps inputs from different modalities. A well-trained multimodal model maps an image of an apple and the word "apple" to nearby points in this space.
* '''Vision-Language Models (VLMs)''': A common type of multimodal model that combines computer vision and natural language processing, allowing the AI to answer questions about an image or generate an image from text.
* '''Contrastive Language-Image Pretraining (CLIP)''': A foundational architecture developed by OpenAI. It trains two neural networks jointly, one for text and one for images, to predict which images correspond to which text descriptions, producing a large, shared multimodal embedding space (a sketch of this training objective appears after this list).
* '''Audio-Visual Models''': Models that process sound and video together, allowing them to understand context, such as matching a speaker's lip movements to the audio track or identifying an action from its sound.
* '''Early Fusion vs. Late Fusion''':
** ''Early fusion'': combining the raw data from different modalities at the input layer, before any modality-specific processing.
** ''Late fusion'': processing each modality in its own network first and combining the resulting outputs at the end.
* '''Tokenization''': The process of breaking data down into discrete units (tokens). In multimodal AI, text is tokenized into word pieces and images into small image patches, so a transformer can process both with the same attention machinery (see the patch example after this list).
* '''Generative Multimodal Models''': AI systems that can not only ''understand'' multiple modalities but also ''create'' them, for example generating a video directly from a text prompt, or generating a spoken voice from text.
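
The following is a minimal sketch, in PyTorch, of the two ideas above that are most often expressed directly in code: a CLIP-style contrastive objective over a shared embedding space, and "tokenizing" an image into patches. The encoders are omitted, and the batch size, embedding dimension, temperature, and patch size are illustrative assumptions rather than the actual CLIP hyperparameters.

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    """Symmetric contrastive loss over image-text similarity (CLIP-style).

    image_features: (batch, dim) outputs of an image encoder
    text_features:  (batch, dim) outputs of a text encoder
    The i-th image and the i-th caption are assumed to be the matching pair.
    """
    # Project both modalities onto the unit sphere of the shared embedding space.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Cosine-similarity logits between every image and every caption in the batch.
    logits = image_features @ text_features.t() / temperature

    # Matching image-caption pairs sit on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Pull matching pairs together and push mismatched pairs apart, in both directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

def patchify(images, patch_size=16):
    """Split (batch, channels, H, W) images into flattened patch tokens."""
    b, c, h, w = images.shape
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    # (b, c, H/ps, W/ps, ps, ps) -> (b, num_patches, c * ps * ps)
    return patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * patch_size * patch_size)

# Toy usage with random tensors standing in for real images and encoder outputs.
imgs = torch.randn(8, 3, 224, 224)
tokens = patchify(imgs)           # (8, 196, 768) patch tokens
img_emb = torch.randn(8, 512)     # stand-in for image-encoder output
txt_emb = torch.randn(8, 512)     # stand-in for text-encoder output
loss = clip_contrastive_loss(img_emb, txt_emb)
</syntaxhighlight>

In a real system the random tensors would be replaced by the outputs of trained image and text encoders, and the patch tokens would be linearly projected and fed to a transformer alongside text tokens; the point of the sketch is only that both modalities end up as sequences of vectors in the same shared space.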