Neural Radiance Fields and 3D AI
== Understanding ==

'''The NeRF idea''': Represent a scene as a function f(x, y, z, θ, φ) → (RGB color, density σ), where (x, y, z) is a 3D position and (θ, φ) is a viewing direction. This function is parameterized by a neural network. To render a novel view: for each pixel, cast a ray through the scene, sample points along the ray, query the network for color and density at each point, and integrate (volume rendering) to produce the pixel color. The network is trained on input images with known camera poses by minimizing pixel color reconstruction error. A sketch of the volume-rendering step is given below.

'''Why it's remarkable''': The network implicitly encodes the geometry, appearance, and lighting of the scene, without any explicit 3D model. Photorealistic novel view synthesis from 20–100 input photographs of a real scene is possible after a few minutes of training.

'''Gaussian Splatting''' (Kerbl et al., 2023) replaced the implicit MLP with an explicit representation: millions of 3D Gaussian distributions, each with a position, a covariance (shape), an opacity, and a color (spherical harmonics for view-dependent appearance). Splatting renders these Gaussians as 2D projections onto the camera plane, which is far faster than ray marching. The result is real-time rendering of NeRF-quality scenes at 100+ FPS, and 3DGS has become the dominant approach for practical applications. A sketch of the projection step follows the volume-rendering example below.

'''Text-to-3D''' (DreamFusion): Use a 2D diffusion model as a "prior" to supervise NeRF optimization. At each training step: render the NeRF from a random viewpoint; apply the diffusion model's score function via Score Distillation Sampling (SDS) to push the rendered image toward the text prompt; backpropagate through the NeRF. This transfers 2D generative-model knowledge to 3D without any 3D training data. Quality is improving rapidly (Shap-E, Zero123, Wonder3D). The last sketch below shows one SDS step.
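The volume-rendering quadrature described above fits in a few lines. The sketch below is a minimal NumPy version for a single ray; <code>field</code> is a hypothetical stand-in for the trained MLP, and the uniform sampling is a simplification (real implementations add hierarchical sampling).

<syntaxhighlight lang="python">
# Minimal sketch of NeRF's volume-rendering quadrature for a single ray.
# `field` is a hypothetical stand-in for the trained MLP, not a real API.
import numpy as np

def field(points, view_dir):
    """Stand-in for the NeRF network: 3D points (+ viewing direction) ->
    RGB in [0, 1] and non-negative density sigma."""
    rgb = 0.5 * (1.0 + np.tanh(points))               # (N, 3) fake colors
    sigma = np.exp(-np.linalg.norm(points, axis=-1))  # (N,) fake densities
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    # Sample points along the ray r(t) = origin + t * direction.
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgb, sigma = field(points, direction)

    # alpha_i = 1 - exp(-sigma_i * delta_i); the transmittance T_i is the
    # probability the ray reaches sample i without being absorbed.
    delta = np.diff(t, append=far)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha

    return (weights[:, None] * rgb).sum(axis=0)       # final pixel color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
</syntaxhighlight>

Training then minimizes the squared error between such rendered pixels and the ground-truth pixels of the posed input images.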
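For Gaussian Splatting, the core of the rasterizer is projecting each 3D Gaussian to a 2D screen-space Gaussian. Below is a minimal sketch assuming a pinhole camera with focal lengths fx and fy; the linearization of the perspective projection is the EWA-style approximation used by 3DGS, but the names here are illustrative, not the API of any particular implementation.

<syntaxhighlight lang="python">
# Minimal sketch: project one 3D Gaussian to a 2D screen-space Gaussian.
# Function and variable names are illustrative, not a real 3DGS API.
import numpy as np

def project_gaussian(mean, cov3d, world_to_cam, fx, fy):
    R, t = world_to_cam[:3, :3], world_to_cam[:3, 3]
    m = R @ mean + t                      # Gaussian mean in camera coordinates
    x, y, z = m

    # Jacobian of the perspective map (x, y, z) -> (fx*x/z, fy*y/z) at m.
    J = np.array([[fx / z, 0.0, -fx * x / z**2],
                  [0.0, fy / z, -fy * y / z**2]])

    cov_cam = R @ cov3d @ R.T             # rotate covariance into camera frame
    cov2d = J @ cov_cam @ J.T             # 2x2 screen-space covariance
    center = np.array([fx * x / z, fy * y / z])
    return center, cov2d

center, cov2d = project_gaussian(
    mean=np.array([0.0, 0.0, 3.0]),
    cov3d=0.01 * np.eye(3),
    world_to_cam=np.eye(4),
    fx=500.0, fy=500.0)
</syntaxhighlight>

The rasterizer then sorts Gaussians by depth and alpha-blends them front to back per pixel, C = Σ_i c_i α_i Π_{j<i} (1 − α_j), where each α_i is the Gaussian's opacity times its 2D falloff at the pixel.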
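Finally, one SDS step can be sketched as follows (PyTorch). The diffusion model <code>unet</code>, its noise schedule <code>alphas_cumprod</code>, and the text embedding <code>text_emb</code> are hypothetical stand-ins for a pretrained text-to-image model, and the weighting w(t) = 1 − ᾱ_t is one common choice rather than the only one.

<syntaxhighlight lang="python">
# Minimal sketch of one Score Distillation Sampling (SDS) step.
# `unet`, `text_emb`, and `alphas_cumprod` are hypothetical stand-ins for
# a pretrained text-to-image diffusion model and its noise schedule.
import torch

def sds_step(rendered, unet, text_emb, alphas_cumprod, optimizer):
    """rendered: (1, 3, H, W) image rendered from the NeRF at a random
    viewpoint, differentiable w.r.t. the NeRF parameters in `optimizer`."""
    t = torch.randint(20, 980, (1,))                    # random timestep
    a = alphas_cumprod[t].view(1, 1, 1, 1)              # cumulative alpha_bar_t
    noise = torch.randn_like(rendered)
    noisy = a.sqrt() * rendered + (1 - a).sqrt() * noise  # forward diffusion

    with torch.no_grad():                               # no grads through the UNet
        eps_pred = unet(noisy, t, text_emb)             # text-conditioned noise prediction

    # SDS gradient w(t) * (eps_pred - noise), injected directly into the
    # rendered image and backpropagated into the NeRF parameters.
    grad = (1 - a) * (eps_pred - noise)
    optimizer.zero_grad()
    rendered.backward(gradient=grad)
    optimizer.step()
</syntaxhighlight>

Because the UNet is queried but never updated, the 2D model acts purely as a critic: its score pulls each random rendering toward images matching the prompt, while the only trainable parameters are the NeRF's.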