Lec 6. Shading
2025-10-31
Shading
Shading is local.
Compute light reflected toward camera at a specific shading point.
Viewer direction v.
Surface normal n.
Light direction l (for each of many lights).
Surface parameters (color, shininess, etc.).

Shading ≠ shadowing: shading is local, so no shadows are generated.
Diffuse reflection
Light is scattered uniformly in all directions.
Lambert's cosine law.
Light falloff: $1/r^2$.
Diffusely reflected light intensity:
$$L_d = k_d \, (I/r^2) \, \max(0, l \cdot n)$$
where $k_d$ is the diffuse reflection coefficient (color) and $I/r^2$ is the energy arriving at the shading point.

Specular reflection
Bright near the mirror reflection direction.
Half vector:
$$h = \frac{l + v}{\|l + v\|}$$
Specularly reflected light intensity:
$$L_s = k_s \, (I/r^2) \, \max(0, n \cdot h)^p$$
where $k_s$ is the specular reflection coefficient and $p$ is the shininess exponent.

Ambient term
Add constant color to account for disregarded illumination and fill in black shadows.
This is approximate / fake!
Ambient light intensity:
$$L_a = k_a I_a$$
where $k_a$ is the ambient coefficient and $I_a$ is the ambient light intensity.
Blinn-Phong reflection model
$$L = L_a + L_d + L_s$$
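A minimal numpy sketch of evaluating the Blinn-Phong model at one shading point, assuming point lights given as (position, intensity) pairs; the function and parameter names are illustrative, not from the lecture:

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x)

def blinn_phong(p, n, v, lights, kd, ks, ka, Ia, shininess):
    """Evaluate L = La + Ld + Ls at shading point p.
    n: unit surface normal, v: unit direction toward the viewer,
    lights: list of (position, intensity) pairs."""
    L = ka * Ia                                   # ambient term La = ka * Ia
    for light_pos, I in lights:
        l = light_pos - p
        r2 = np.dot(l, l)                         # squared distance -> 1/r^2 falloff
        l = normalize(l)
        h = normalize(l + v)                      # half vector
        L = L + kd * (I / r2) * max(0.0, np.dot(n, l))                  # diffuse
        L = L + ks * (I / r2) * max(0.0, np.dot(n, h)) ** shininess     # specular
    return L

# example: one light above and to the side of the shading point
n = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 0.0, 1.0])
lights = [(np.array([2.0, 2.0, 2.0]), np.array([10.0, 10.0, 10.0]))]
print(blinn_phong(np.zeros(3), n, v, lights,
                  kd=np.array([0.7, 0.2, 0.2]), ks=np.full(3, 0.5),
                  ka=np.full(3, 0.05), Ia=np.ones(3), shininess=64))
```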

Shading frequencies
Flat shading
Triangle face is flat — one normal vector.
Not good for smooth surfaces.
Gouraud shading (vertex shading)
Interpolate colors from vertices across triangle.
Each vertex has a normal vector.
Phong shading (pixel shading)
Interpolate normal vectors from vertices across triangle.
Compute color (full shading model) at each pixel.
NOT the Phong reflection model.

Defining per-vertex normal vectors.
Best: from underlying smooth surface (geometry).
Simple scheme: average of face normals of adjacent faces.
$$N_v = \frac{\sum_{f \in F_v} N_f}{\left\| \sum_{f \in F_v} N_f \right\|}$$
where $F_v$ is the set of faces adjacent to vertex $v$ and $N_f$ is the normal vector of face $f$.
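A small numpy sketch of this simple averaging scheme, assuming a triangle mesh given as a vertex array and a face index array (names are illustrative):

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Per-vertex normal = normalized sum of the unit normals of adjacent faces.
    vertices: (V, 3) float array, faces: (F, 3) integer array of vertex indices."""
    vn = np.zeros_like(vertices)
    for i0, i1, i2 in faces:
        fn = np.cross(vertices[i1] - vertices[i0], vertices[i2] - vertices[i0])
        fn = fn / np.linalg.norm(fn)              # unit face normal N_f
        vn[i0] += fn; vn[i1] += fn; vn[i2] += fn  # accumulate over F_v
    return vn / np.linalg.norm(vn, axis=1, keepdims=True)

# example: two triangles sharing an edge
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], dtype=float)
print(vertex_normals(verts, np.array([[0, 1, 2], [1, 3, 2]])))
```

Weighting each face normal by the triangle's area (i.e., summing the unnormalized cross products) is a common variation.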
Defining per-pixel normal vectors.
Barycentric interpolation of vertex normal vectors.
Normalize the interpolated normal vector at each pixel.
Graphics pipeline
From application to display:

| Operation | Pipeline stage |
| --- | --- |
| Model, view, projection transformations | Vertex processing |
| Sampling triangle coverage | Rasterization |
| Z-Buffer visibility test | Fragment processing |
| Shading | Fragment processing and vertex processing |
| Texture mapping | Fragment processing and vertex processing |

Shader programs
Program vertex and fragment processing stages.
Describe operation on a single vertex (or fragment).
Shader function executes once per fragment.
Outputs color of surface at the current fragment's screen sample position.
Implementation: GPUs (heterogeneous and multi-core processor).
Texturing
Texture mapping
Different colors at different surface points.
Surfaces are 2D: every 3D surface point also has a place where it goes in the 2D image (texture).

Goal: "flatten" 3D object onto 2D UV coordinates. Find UV coordinates for each vertex such that the distortion is minimized.
Distances in UV correspond to distances on the mesh.
Angles of a 3D triangle are the same as the angles of its triangle in the UV plane.
Fix the $(u, v)$ coordinates of the boundary; interior vertices should lie at the barycenter of their neighbors (see the sketch after this list):
$$v_i = \frac{1}{\mathrm{valence}(i)} \sum_{(i, j) \in \text{neighbors}} v_j$$
Cuts are usually required.
A texture can be reused multiple times (e.g., tiled across a surface).
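A toy relaxation sketch of the barycenter condition above, assuming the boundary (u, v) coordinates are already fixed; real parameterizers solve the resulting sparse linear system directly rather than iterating:

```python
import numpy as np

def relax_uv(num_verts, edges, boundary_uv, iters=2000):
    """Move each interior vertex toward the average (barycenter) of its neighbors.
    edges: iterable of (i, j) undirected edges; boundary_uv: dict vertex -> fixed (u, v)."""
    neighbors = [[] for _ in range(num_verts)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    uv = np.zeros((num_verts, 2))
    for vtx, coord in boundary_uv.items():
        uv[vtx] = coord
    for _ in range(iters):                       # Jacobi-style iteration
        new_uv = uv.copy()
        for vtx in range(num_verts):
            if vtx in boundary_uv or not neighbors[vtx]:
                continue                         # boundary vertices stay pinned
            new_uv[vtx] = uv[neighbors[vtx]].mean(axis=0)
        uv = new_uv
    return uv

# example: one interior vertex (4) surrounded by a fixed square boundary (0..3)
boundary = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
print(relax_uv(5, edges, boundary))              # vertex 4 ends up at (0.5, 0.5)
```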
Barycentric coordinates
Interpolation across triangles
Specify values at vertices, get smooth interpolation across triangle.
Texture coordinates, colors, normal vectors, ...
How? Barycentric coordinates.
Barycentric coordinates
Given triangle with vertices A, B, C.
Any point (x,y) in the plane of the triangle can be expressed as:
(x,y)=αA+βB+γC
where α+β+γ=1.
α, β, γ are the barycentric coordinates of point (x,y) with respect to triangle ABC.
Inside triangle: α,β,γ≥0.
Given (x,y):
$$\alpha = \frac{-(x - x_B)(y_C - y_B) + (y - y_B)(x_C - x_B)}{-(x_A - x_B)(y_C - y_B) + (y_A - y_B)(x_C - x_B)}$$
$$\beta = \frac{-(x - x_C)(y_A - y_C) + (y - y_C)(x_A - x_C)}{-(x_B - x_C)(y_A - y_C) + (y_B - y_C)(x_A - x_C)}$$
γ=1−α−β
Interpolation of value V at vertices A, B, C:
$$V = \alpha V_A + \beta V_B + \gamma V_C$$
Barycentric coordinates are not invariant under projection (should do interpolation before projection).
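A small sketch of computing barycentric coordinates with the formulas above and using them to interpolate a per-vertex attribute (the names below are illustrative):

```python
def barycentric(x, y, A, B, C):
    """Barycentric coordinates (alpha, beta, gamma) of (x, y) w.r.t. 2D triangle ABC."""
    (xa, ya), (xb, yb), (xc, yc) = A, B, C
    alpha = (-(x - xb) * (yc - yb) + (y - yb) * (xc - xb)) / \
            (-(xa - xb) * (yc - yb) + (ya - yb) * (xc - xb))
    beta = (-(x - xc) * (ya - yc) + (y - yc) * (xa - xc)) / \
           (-(xb - xc) * (ya - yc) + (yb - yc) * (xa - xc))
    return alpha, beta, 1.0 - alpha - beta

def interpolate(bary, VA, VB, VC):
    """Interpolate any per-vertex value V (color, uv, normal, ...)."""
    a, b, g = bary
    return a * VA + b * VB + g * VC

# point inside the triangle: all three coordinates are non-negative
bary = barycentric(0.25, 0.25, (0, 0), (1, 0), (0, 1))
print(bary)                              # (0.5, 0.25, 0.25)
print(interpolate(bary, 1.0, 0.0, 0.0))  # interpolated scalar value = 0.5
```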
Texture queries
Diffuse color: simple texture mapping.
For each rasterized screen pixel (x,y) (usually a pixel's center), evaluate texture coordinate (u,v).
Sample texture color at (u,v).
Bilinear interpolation

$$u_0 = \mathrm{lerp}(s, u_{00}, u_{10}), \quad u_1 = \mathrm{lerp}(s, u_{01}, u_{11})$$
$$u = \mathrm{lerp}(t, u_0, u_1)$$
where $\mathrm{lerp}(t, a, b) = (1 - t)a + tb$.
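A numpy sketch of a bilinearly filtered texture lookup following the lerps above; the texel-center convention (centers at half-integer positions) is an assumption, and real samplers also offer wrap modes besides the clamping used here:

```python
import numpy as np

def lerp(t, a, b):
    return (1.0 - t) * a + t * b

def sample_bilinear(tex, u, v):
    """tex: (H, W, C) array; (u, v) in [0, 1]."""
    h, w = tex.shape[:2]
    x = u * w - 0.5                         # continuous texel coordinates
    y = v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    s, t = x - x0, y - y0                   # fractional offsets inside the cell
    x0, x1 = np.clip([x0, x0 + 1], 0, w - 1)
    y0, y1 = np.clip([y0, y0 + 1], 0, h - 1)
    u00, u10 = tex[y0, x0], tex[y0, x1]     # the four nearest texels
    u01, u11 = tex[y1, x0], tex[y1, x1]
    u0 = lerp(s, u00, u10)
    u1 = lerp(s, u01, u11)
    return lerp(t, u0, u1)

# example on a tiny 2x2 single-channel texture
tex = np.array([[[0.0], [1.0]], [[1.0], [0.0]]])
print(sample_bilinear(tex, 0.5, 0.5))       # blends all four texels -> [0.5]
```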
Hard case: a screen pixel covers many texels (its footprint in texture space is large), so a single point sample cannot represent them all.

Screen pixel "footprint" in texture space.

Super-sampling can help with anti-aliasing, but it is costly.
In fact we just need to get the average value within a range.
Solution: MIP mapping.
Different pixel → different-sized footprint.
Allow (fast, approx., square) range queries.
"MIP" = multum in parvo (multitude in a small space).


How to compute MIP level D?

Estimate texture footprint using texture coordinates of neighboring screen samples.
$$D = \log_2 L, \quad L = \max\left( \sqrt{\left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2},\ \sqrt{\left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2} \right)$$
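A small sketch of estimating D from the texture-coordinate differences of neighboring screen samples (one pixel to the right and one above), assuming (u, v) in [0, 1] and a square texture; clamping to level 0 under magnification is a common convention:

```python
import numpy as np

def mip_level(uv, uv_right, uv_up, tex_size):
    """uv, uv_right, uv_up: (u, v) of the current sample and its screen neighbors."""
    du_dx, dv_dx = (uv_right - uv) * tex_size   # footprint edge along screen x, in texels
    du_dy, dv_dy = (uv_up - uv) * tex_size      # footprint edge along screen y, in texels
    L = max(np.hypot(du_dx, dv_dx), np.hypot(du_dy, dv_dy))
    return np.log2(max(L, 1.0))                 # D = log2(L), clamped at level 0

# example: the footprint spans roughly 4 texels -> level 2
uv = np.array([0.50, 0.50])
print(mip_level(uv, uv + [4 / 256, 0], uv + [0, 4 / 256], tex_size=256))
```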
Trilinear interpolation
Bilinearly sample the two nearest MIP levels, then linearly interpolate between them using the fractional part of D.

MIP mapping limitations: overblur (because of square approximation of footprint and trilinear interpolation).
The actual footprint shape is often irregular.

Anisotropic filtering can do better.

Lookup axis-aligned rectangular footprint.
Diagonal footprints still a problem.
EWA filtering (elliptical weighted average).

Multiple lookups.
Weighted average.
MIP hierarchy still helps.
Can handle irregular footprints.
Applications of textures
In modern GPUs, texture = memory + range query (filtering).
A general method to bring data to fragment calculations; many applications.
Environment mapping
Place a reflective ball, capture the environment.
Distortion at poles.

Cube mapping
A direction vector maps to the point on the cube along that direction.
The cube is textured with 6 square texture maps.
Need direction → face computation.
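A sketch of the direction → face computation: the component with the largest magnitude selects the face, and the other two components (divided by it) give the in-face (u, v). The per-face orientation below follows one common convention; different APIs differ:

```python
def cubemap_face_uv(x, y, z):
    """Map a direction (x, y, z) to (face, u, v) with u, v in [0, 1]."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:      # +x / -x face dominates
        face, m, u, v = ('+x', ax, -z, -y) if x > 0 else ('-x', ax, z, -y)
    elif ay >= az:                 # +y / -y face
        face, m, u, v = ('+y', ay, x, z) if y > 0 else ('-y', ay, x, -z)
    else:                          # +z / -z face
        face, m, u, v = ('+z', az, x, -y) if z > 0 else ('-z', az, -x, -y)
    # remap from [-1, 1] to [0, 1]
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)

print(cubemap_face_uv(0.2, 0.3, -0.9))   # looks up the -z face
```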

Bump mapping
Adding surface detail without adding more triangles.
Perturb surface normal per pixel (only for shading).
Original normal:
$$n(p) = (0, 1)$$
Perturbed normal:
$$n'(p) = \mathrm{normalize}(-dp,\ 1)$$
where
$$dp = c \, [h(p+1) - h(p)]$$

Note that the perturbation is expressed in the surface's local (tangent) frame.
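A sketch of the flatland case above: perturb the normal of a locally flat surface from a 1D height map, in the local frame where the unperturbed normal is (0, 1). `c` is the scaling constant from the formula:

```python
import numpy as np

def perturbed_normal_1d(h, p, c=1.0):
    """h: 1D array of heights, p: texel index (p + 1 must exist), c: bump scale."""
    dp = c * (h[p + 1] - h[p])          # finite-difference slope of the height field
    n = np.array([-dp, 1.0])            # n'(p) = normalize(-dp, 1)
    return n / np.linalg.norm(n)

# a rising ramp tilts the normal away from the original (0, 1)
heights = np.array([0.0, 0.1, 0.3, 0.6])
print(perturbed_normal_1d(heights, p=1))
```

In 3D the same idea uses two finite differences along u and v, giving normalize(-dp/du, -dp/dv, 1) in the tangent frame.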
Displacement mapping
A more advanced approach.
Uses the same height texture as bump mapping.
Actually moves the vertices.

3D procedural noise + solid modeling

3D textures and volume rendering

Shadow mapping
Draw shadows using rasterization.
The points NOT in shadow must be seen both from the light and from the camera.
Pass 1: render a depth image from the light source.
Pass 2: render the scene from the camera; for each pixel, project the point back into light space and compare its depth with the stored depth image.
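A sketch of the pass-2 test, assuming the pass-1 depth image and the light's view-projection matrix are given; matrix conventions and depth ranges vary by API, so this is illustrative:

```python
import numpy as np

def in_shadow(p_world, light_vp, shadow_map, bias=1e-3):
    """Project a camera-visible point into light space and compare depths.
    light_vp: (4, 4) light view-projection matrix; shadow_map: (H, W) depths in [0, 1]."""
    q = light_vp @ np.append(p_world, 1.0)
    q = q / q[3]                                  # perspective divide -> NDC in [-1, 1]
    u, v, depth = 0.5 * (q[:3] + 1.0)             # remap to [0, 1]
    h, w = shadow_map.shape
    ix = int(np.clip(u * w, 0, w - 1))
    iy = int(np.clip(v * h, 0, h - 1))
    # shadowed if something closer to the light was recorded at this texel
    return depth > shadow_map[iy, ix] + bias

# toy example: an identity "light camera" and a uniform depth image
print(in_shadow(np.array([0.0, 0.0, 0.5]), np.eye(4), np.full((4, 4), 0.2)))  # True
```

The bias term is the tolerance mentioned below: without it, self-comparison of nearly equal floating-point depths produces shadow acne.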
Problems with shadow mapping.
Hard shadows (point lights only).
Quality depends on the shadow map resolution (a general problem with image-based techniques).
Involves equality comparisons of floating-point depth values, which raises issues of scale, bias, and tolerance.
Rasterization cannot handle global effects well.
Ray tracing: accurate but very slow (offline).
Trace light rays from the camera.
A point on an object may be illuminated by:
Light source directly (shadow ray).
Light reflected off an object (reflected ray).
Light transmitted through a transparent object (refracted ray).
What if we start from the light instead?
Follow rays of light from a point light source, determine which rays enter the camera lens through the imaging window, and compute the color of the projection.