== Applying ==

'''Fine-tuning a pre-trained ResNet for a custom image classification task:'''

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

# Load pre-trained ResNet-50
model = models.resnet50(weights='IMAGENET1K_V2')

# === Strategy 1: Feature Extraction (small dataset, <1k images) ===
# Freeze ALL pre-trained weights
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for 5 classes
model.fc = nn.Linear(model.fc.in_features, 5)
# Only model.fc parameters are trainable (new modules default to requires_grad=True)

# === Strategy 2: Fine-tuning (larger dataset, 1k+ images) ===
# Unfreeze the last 2 residual blocks plus the head
for name, param in model.named_parameters():
    if 'layer4' in name or 'layer3' in name or 'fc' in name:
        param.requires_grad = True
    else:
        param.requires_grad = False

# Use discriminative learning rates: lower for pre-trained layers, higher for the new head
optimizer = torch.optim.Adam([
    {'params': model.layer3.parameters(), 'lr': 1e-5},  # small LR
    {'params': model.layer4.parameters(), 'lr': 1e-4},  # medium LR
    {'params': model.fc.parameters(),     'lr': 1e-3},  # large LR
])

# === Preprocessing must match pre-training ===
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])  # ImageNet stats
])

dataset = ImageFolder("data/my_dataset/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
</syntaxhighlight>

A minimal training loop that puts these pieces together is sketched after the strategy guide below.

; Strategy selection guide
: '''<100 examples per class''' → Feature extraction only; freeze all backbone layers; train only the head
: '''100–1000 examples''' → Unfreeze last 1–2 blocks + head; use a low LR for the backbone
: '''1000–10k examples''' → Fine-tune the last half of the backbone; discriminative LRs
: '''>10k examples''' → Full fine-tuning, or even training from scratch if the domain is very different
: '''No labeled data''' → Zero-shot (CLIP, GPT) or self-supervised domain adaptation (see the zero-shot sketch below)
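
'''Training loop (sketch):''' The snippet above prepares the model, optimizer, and data loader but stops short of training. The following is a minimal sketch that ties them together, continuing from the <code>model</code>, <code>optimizer</code>, <code>loader</code>, and <code>dataset</code> variables defined above; the epoch count and device handling are assumptions, not part of the original example.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
criterion = nn.CrossEntropyLoss()

num_epochs = 10  # assumption: tune for your dataset
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()  # updates only the unfrozen parameter groups
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1}: loss {running_loss / len(dataset):.4f}")
</syntaxhighlight>

Because frozen parameters have <code>requires_grad=False</code> and are absent from the optimizer's parameter groups, only the unfrozen layers change during this loop.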
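
'''Zero-shot classification (sketch):''' For the no-labeled-data case in the guide above, a pre-trained vision-language model such as CLIP can classify images directly from text prompts, with no weight updates at all. This is a minimal sketch using the Hugging Face <code>transformers</code> CLIP wrapper; the checkpoint name, prompt strings, and image path are illustrative assumptions.

<syntaxhighlight lang="python">
# Hypothetical zero-shot setup: checkpoint, prompts, and path are assumptions
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a dog"]   # hypothetical class prompts
image = Image.open("data/my_dataset/example.jpg")   # hypothetical image path

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image's similarity to each text prompt
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
</syntaxhighlight>

Since no parameters are trained, accuracy here depends heavily on how the class prompts are phrased rather than on dataset size.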