
When you do photon tracing, you begin with beams of known intensity, wavelength, and direction, because you know the light source. Everything that reaches the sensor combines to form the image.

When you do ray tracing, you don't immediately know whether 10 million photons were captured. You know how many rays you are going to send based on the resolution of the image. This also means there are problems with, say, light coming through narrow apertures, because you limit yourself to a finite number of rays.
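As a sketch of that point (all names here are illustrative, not from any particular renderer): in a backward ray tracer the sample budget is fixed by the canvas resolution before any light transport happens.

```python
# Sketch: in backward ray tracing, the number of primary samples is
# decided by the image resolution, not by the scene's lighting.
# Illustrative only; real renderers fire many rays per pixel.

WIDTH, HEIGHT = 200, 200

def primary_rays(width, height):
    """One ray per pixel, aimed through the pixel center."""
    return [(x + 0.5, y + 0.5) for y in range(height) for x in range(width)]

rays = primary_rays(WIDTH, HEIGHT)
print(len(rays))  # 40000 samples, fixed up front by the canvas size
```

The sample count is 200 × 200 = 40,000 no matter how the light behaves.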

If you use photon tracing and a light source sends 10 million photons, of which 70,000 are absorbed by the sensor, that is 70,000 data points defining the image, which may be plotted on a 200x200 canvas. If you use ray tracing with a 200x200 canvas you will get an image of the same size, but only 40,000 samples. Even if you expand the canvas, there may be photons that can reach the camera but whose paths the ray tracer never finds.
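A minimal sketch of the forward direction, under stand-in assumptions (the 0.7% hit rate and the random transport are mine, chosen only to mirror the 70,000-out-of-10-million figure above; real transport follows the scene geometry):

```python
# Sketch: forward photon tracing. Photons start at the light with known
# properties; only those that reach the sensor are binned into pixels.
# The number of useful samples depends on the scene, not on the canvas
# resolution. Scaled down to 100,000 photons so it runs quickly.
import random

random.seed(0)
WIDTH, HEIGHT = 200, 200
canvas = [[0.0] * WIDTH for _ in range(HEIGHT)]

n_emitted = 100_000
hits = 0
for _ in range(n_emitted):
    # Stand-in for real light transport: most photons miss the sensor.
    if random.random() < 0.007:  # ~0.7% reach the camera, as in the text
        x = random.uniform(0, WIDTH)   # sub-pixel landing position
        y = random.uniform(0, HEIGHT)
        canvas[int(y)][int(x)] += 1.0  # deposit energy into a pixel bin
        hits += 1

print(hits)  # roughly 0.7% of n_emitted
```

The hit count is a property of the simulated scene; the canvas only stores the results.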

For instance, say some photons hit the camera at pixel x-coordinates between 10.0003 and 10.0005 after passing through a fine aperture.

This will illuminate the camera in a photon tracing simulation. The ray tracer will not capture those photons unless its resolution is roughly 10,000 times higher: it traces back from pixels 10 and 11 and can't sample every intermediate position. The photon tracer accounts for those cases naturally.
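The aperture case can be shown in a few lines (illustrative values from the example above; the half-pixel sample offsets are my assumption about where a one-ray-per-pixel tracer samples):

```python
# Sketch: a photon through a fine aperture lands at x = 10.0004.
# A forward photon tracer deposits it into pixel 10 no matter how narrow
# the bright interval is; a backward tracer firing one ray per pixel
# (through centers 0.5, 1.5, ..., 199.5) never samples the interval
# [10.0003, 10.0005]. Illustrative only.

photon_x = 10.0004        # sub-pixel hit position from the example
pixel = int(photon_x)     # photon tracer: bin by flooring
print(pixel)              # 10 -- the pixel still gets illuminated

ray_samples = [x + 0.5 for x in range(200)]  # backward tracer's x samples
inside = [s for s in ray_samples if 10.0003 <= s <= 10.0005]
print(len(inside))        # 0 -- no ray ever probes that interval
```

You would need on the order of 1 / 0.0001 = 10,000 times more samples per pixel before a backward ray lands inside an interval that narrow.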


