Visual Grounding
<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
{{BloomIntro}}
Visual grounding is the AI capability to connect language to specific visual regions – locating the objects, regions, or relationships described by text within an image or video. While image classification says "there is a dog," visual grounding says "the dog is in the bottom-left corner, sitting on a red mat." This capability underpins a family of tasks: referring expression comprehension (find what a phrase describes), visual question answering (answer questions about specific image regions), grounded captioning (generate text anchored to specific regions), and phrase grounding (link words to image regions). Visual grounding is essential for robots, assistive AI for blind and low-vision users, and multimodal reasoning systems.
</div>

__TOC__

<div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Remembering</span> ==
* '''Visual grounding''' – Localizing the image regions that correspond to natural language descriptions.
* '''Referring Expression Comprehension (REC)''' – Given an image and a referring expression ("the woman in the red dress on the left"), locate the described object.
* '''Phrase grounding''' – Linking each phrase in a caption to the corresponding image region.
* '''Visual Question Answering (VQA)''' – Answering questions about image content; often requires grounding attention to the relevant regions.
* '''Region proposal''' – Generating candidate bounding boxes in an image; the first stage of many grounding pipelines.
* '''DETR (Detection Transformer)''' – An end-to-end object detection transformer; adapted for grounding in MDETR and others.
* '''MDETR''' – Modulated DETR: a joint vision-language model for end-to-end grounding.
* '''Grounding DINO''' – Open-vocabulary object detection and grounding that combines the DINO detector with grounded pre-training.
* '''GLIP (Grounded Language-Image Pre-training)''' – Unifies detection and grounding pre-training for open-vocabulary detection.
* '''RefCOCO''' – A widely used referring expression dataset (120,000+ expressions for COCO images).
* '''Bounding box''' – A rectangle locating an object in an image; the primary output format for grounding.
* '''Region features''' – Visual features extracted from specific image regions (RoI pooling, RoI align) for multimodal reasoning.
* '''Open-vocabulary detection''' – Detecting objects described by arbitrary text, not just a fixed set of training categories.
* '''SAM (Segment Anything Model)''' – Meta's foundation model for segmenting any object from a point or box prompt.
</div>

<div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Understanding</span> ==
Visual grounding requires simultaneously understanding language semantics and visual scene structure, then aligning them. This is fundamentally harder than either task alone: it requires knowing what "the woman on the left in the red dress" means (language), identifying all relevant visual regions (vision), and matching the description to the correct region (grounding).

'''Two-stage vs. end-to-end''': Early grounding systems used two stages:
# generate region proposals (Selective Search, RPN),
# rank the proposals by language-visual similarity.
Modern end-to-end systems (MDETR, Grounding DINO) jointly process image and text, generating grounded outputs in a single forward pass. End-to-end approaches generally outperform two-stage pipelines but require more data and training.
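
For intuition, here is a minimal sketch of the two-stage idea, with CLIP standing in for the proposal-ranking model (early systems used task-specific similarity networks rather than CLIP). The image path and the proposal boxes are hypothetical placeholders.
<syntaxhighlight lang="python">
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("street_scene.jpg").convert("RGB")  # placeholder file name
expression = "the woman in the red dress"

# Stage 1: region proposals. A real system would use an RPN or Selective Search;
# here the candidate (x1, y1, x2, y2) boxes are hard-coded placeholders.
proposals = [(40, 80, 220, 400), (300, 120, 520, 380), (10, 10, 120, 90)]
crops = [image.crop(box) for box in proposals]

# Stage 2: rank the proposal crops by image-text similarity.
inputs = processor(text=[expression], images=crops, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
scores = outputs.logits_per_text[0]  # one similarity score per proposal crop
best = int(scores.argmax())
print(f"Best box for '{expression}': {proposals[best]} (score {scores[best].item():.2f})")
</syntaxhighlight>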

'''Grounding DINO''': The current standard for open-vocabulary grounding. It combines DINO (a strong transformer-based detector) with language-conditioned attention. Given an image and any text query, it outputs bounding boxes for the described objects. Crucially, it generalizes to objects not seen during training ("open vocabulary"), making it far more flexible than fixed-category detectors.

'''SAM + language''': SAM can segment any object from a bounding box or point prompt. Combining Grounding DINO (text → bounding box) with SAM (box → precise segmentation mask) gives a powerful open-vocabulary segmentation pipeline. LangSAM makes this combination accessible in a few lines of code.

'''Multimodal LLMs and grounding''': GPT-4V, LLaVA, and Qwen-VL can discuss image regions conversationally. However, precise bounding box output requires specialized models. The field is rapidly moving toward unified models that can both ground (output boxes) and reason (generate text about grounded regions) in a single framework (e.g., Qwen2-VL, InternVL2).
</div>

<div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Applying</span> ==
'''Open-vocabulary grounding with Grounding DINO + SAM:'''
<syntaxhighlight lang="python">
from PIL import Image
import numpy as np

# Method 1: Grounding DINO for bounding box grounding
from groundingdino.util.inference import load_model, load_image, predict, annotate

model = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py",
                   "weights/groundingdino_swint_ogc.pth")
image_source, image = load_image("street_scene.jpg")

# Ground a natural language description (phrases separated by " . ")
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="the woman in the red dress . the yellow car on the left",
    box_threshold=0.35,
    text_threshold=0.25,
)
annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
Image.fromarray(annotated[..., ::-1]).save("grounded_output.jpg")  # annotate() returns a BGR array
print(f"Found {len(boxes)} objects: {phrases}")

# Method 2: LangSAM (Grounding DINO + SAM combined)
from lang_sam import LangSAM

lang_sam = LangSAM()
image = Image.open("garden.jpg").convert("RGB")

# Get segmentation masks for any text description
masks, boxes, phrases, logits = lang_sam.predict(image, "red flowers")

# masks: one pixel-level mask per detection (convert to boolean numpy arrays;
# some versions return torch tensors)
for i, (mask, phrase) in enumerate(zip(masks, phrases)):
    mask = np.asarray(mask).astype(bool)
    masked_image = np.array(image)
    masked_image[~mask] = 0  # keep only the grounded region
    Image.fromarray(masked_image).save(f"grounded_mask_{i}.png")
</syntaxhighlight>

; Visual grounding systems
: '''Open-vocabulary detection''' – Grounding DINO, GLIP, OWL-ViT (Google)
: '''Segmentation from text''' – LangSAM, SEEM, X-Decoder
: '''Referring expression''' – MDETR, TransVG, SeqTR
: '''Multimodal reasoning''' – Qwen2-VL, InternVL2, LLaVA-1.6 (grounded output)
: '''Video grounding''' – TubeDETR, MOMA, CLIP4Clip for temporal grounding
</div>

<div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Analyzing</span> ==
{| class="wikitable"
|+ Visual Grounding Benchmarks (RefCOCO, accuracy@0.5 IoU)
! Model !! val !! testA (people) !! testB (objects) !! Speed
|-
| TransVG || 81.0% || 82.7% || 78.4% || Fast
|-
| MDETR || 86.8% || 89.6% || 81.4% || Moderate
|-
| Grounding DINO || 90.6% || 92.5% || 86.7% || Moderate
|-
| Qwen2-VL (7B) || 91.4% || 93.2% || 87.8% || Moderate
|-
| GPT-4o (with tools) || ~89% || ~91% || ~85% || Slow (API)
|}

'''Failure modes''': Attribute confusion – "the large red ball" may ground to a small red ball or a large blue ball. Relational grounding failures – "the ball to the left of the box" requires spatial reasoning that models handle inconsistently. Overcrowded scenes – models struggle when many similar objects are present. Out-of-vocabulary objects – even open-vocabulary models fail on rare, unusual objects.
</div>

<div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Evaluating</span> ==
Visual grounding evaluation:
# '''RefCOCO/RefCOCO+/RefCOCOg''': standard referring expression benchmarks; report accuracy@0.5 IoU (the predicted box overlaps the ground truth by more than 50% IoU) – see the sketch after this list.
# '''Flickr30k Entities''': phrase grounding benchmark.
# '''Open vocabulary''': COCO novel-categories test (train on COCO-base, test on COCO-novel).
# '''Accuracy by referent type''': people vs. objects vs. scenes – performance varies significantly.
# '''Robustness''': test with paraphrased expressions, negations, and spatial relations.
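
A minimal sketch of the accuracy@0.5 IoU metric used by these benchmarks; the predicted and ground-truth boxes below are made-up values for illustration.
<syntaxhighlight lang="python">
# Boxes are (x1, y1, x2, y2). A prediction counts as correct when its IoU with
# the ground-truth box exceeds 0.5.
def iou(box_a, box_b):
    # Intersection rectangle (empty intersections clamp to zero area)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def accuracy_at_05(predictions, ground_truths):
    hits = sum(iou(p, g) > 0.5 for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

preds = [(48, 85, 210, 395), (300, 300, 400, 400)]  # illustrative predictions
gts   = [(40, 80, 220, 400), (100, 100, 200, 200)]  # illustrative ground truth
print(f"accuracy@0.5 IoU: {accuracy_at_05(preds, gts):.2f}")  # 0.50
</syntaxhighlight>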
</div>

<div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Creating</span> ==
Building a grounding-enabled visual search system:
# Ingest the image library; extract Grounding DINO features.
# Text query → Grounding DINO → bounding boxes + confidence scores.
# Apply SAM to convert boxes to precise segmentation masks.
# Post-processing: NMS to remove duplicate detections; threshold on confidence (see the sketch after this section).
# Interface: display the image with highlighted regions and confidence scores.
# Applications: e-commerce visual product search, accessibility (describe what's at position X), medical image ROI identification, surveillance (find the person in the red jacket), robotics (pick up the blue cup).

[[Category:Artificial Intelligence]]
[[Category:Computer Vision]]
[[Category:Visual Grounding]]
</div>
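
A minimal sketch of the post-processing in step 4, using torchvision's built-in non-maximum suppression; the boxes, scores, and thresholds below are illustrative placeholders rather than real model output.
<syntaxhighlight lang="python">
import torch
from torchvision.ops import nms

# Illustrative detections in (x1, y1, x2, y2) format with confidence scores
boxes = torch.tensor([
    [40.0, 80.0, 220.0, 400.0],    # two heavily overlapping boxes for the same object
    [45.0, 85.0, 225.0, 405.0],
    [300.0, 120.0, 520.0, 380.0],  # a separate, low-confidence detection
])
scores = torch.tensor([0.92, 0.81, 0.30])

# Step 1: confidence threshold (0.35 mirrors the box_threshold used in the Applying section)
keep_conf = scores >= 0.35
boxes, scores = boxes[keep_conf], scores[keep_conf]

# Step 2: greedy NMS drops the lower-scoring member of each overlapping pair
keep = nms(boxes, scores, iou_threshold=0.5)
print(boxes[keep], scores[keep])  # only the 0.92 box survives
</syntaxhighlight>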