The so-called Main Camera controls the rendering process by iterating over the pixels and storing the results at their indices. First it uses the Multi Sampler to get the position inside the pixel to be sampled. Then the Coord Mapper maps the generated two-dimensional coordinates into the three-dimensional scene. Finally the Render Pass samples rays from the Context and returns the color. This model makes it easy to extend the set of possible Render Passes and Coord Mappers.
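The loop described above can be sketched roughly as follows. This is an illustrative outline only, not the renderer's actual API: the `coord_map` and `render_pass` callables stand in for the Coord Mapper and Render Pass, and the jittered pixel sampling stands in for the Multi Sampler.

```python
import random

def render(width, height, samples_per_pixel, coord_map, render_pass):
    """Main-camera loop sketch: sample positions inside each pixel,
    map them into the scene, and average the returned colors."""
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            total = 0.0
            for _ in range(samples_per_pixel):
                # Multi Sampler stand-in: jittered position inside the pixel
                u = (x + random.random()) / width
                v = (y + random.random()) / height
                # Coord Mapper stand-in: 2D pixel coords -> a ray into the scene
                ray = coord_map(u, v)
                # Render Pass stand-in: trace the ray and return a color
                total += render_pass(ray)
            image[y][x] = total / samples_per_pixel
    return image
```

Because the sampler, mapper, and pass are separate callables, swapping in a new Render Pass or Coord Mapper does not touch the camera loop.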
Sampling the ray is done by the Context. The Context contains Colliders, light Sources, and some global settings. Colliders are responsible for reporting the tightest bounding box that encloses them and for telling where rays hit them. With these methods the Context can build up an acceleration structure and find the nearest hit for a given ray.
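A minimal Collider could look like the sphere below. The class and method names (`bounding_box`, `hit`) are my own illustration of the two responsibilities named above, not the renderer's real interface; the ray direction is assumed to be normalized.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box given by its two corner points."""
    lo: tuple
    hi: tuple

class SphereCollider:
    """Hypothetical Collider: reports a tight bounding box and ray hits."""
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def bounding_box(self):
        c, r = self.center, self.radius
        return AABB(tuple(x - r for x in c), tuple(x + r for x in c))

    def hit(self, origin, direction):
        """Return the nearest positive ray parameter t, or None on a miss.
        Assumes |direction| == 1."""
        oc = tuple(o - c for o, c in zip(origin, self.center))
        b = sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - self.radius ** 2
        disc = b * b - c
        if disc < 0:
            return None
        t = -b - disc ** 0.5
        return t if t > 0 else None
```

The bounding boxes feed the acceleration structure; `hit` is what the Context queries to find the nearest intersection.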
Then the Shader instance assigned to the Collider that was hit is asked for feedback, and the returned color is used. Indirect illumination is achieved by calling the Context's sampling method from the shader and using the returned value to calculate the feedback.
Constructing the image from indirect rays only is physically the most correct approach. This can be done by using Emission Shaders instead of light Sources. However, scenes rendered with this method converge much more slowly; they usually need more than ten times as many samples.
Direct light Sources generate random samples on their area and can tell the amount of light cast toward the target. But to stay consistent, these lights sometimes also need to tell, for an existing ray, what amount it should receive, which can be a complex task. Shaders can then pick up the lighting and add the result to the indirect lights and their own emission.
This means that lights only emit but let other rays pass through. Direct lighting could be implemented to give more realistic results, but these calculations are not worth the minimal realism boost. Later, directly sampled Colliders may solve it anyway. That would be done by casting direct rays toward the bounding sphere: when such a ray hits something else, it returns black; otherwise the color would be normalized based on the sphere's visible area.
Note that simply stopping recursive ray casting when a hit limit is reached introduces bias. Unbiased renderers usually start to kill rays with a certain probability and multiply the light amount received by the inverse of that probability. The result then converges to the real solution, but these methods introduce additional noise and some slowdown.
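The kill-and-reweight scheme described here is commonly called Russian roulette. A toy sketch of the estimator, with a constant per-bounce radiance and attenuation standing in for real shading (both names are illustrative):

```python
import random

def roulette_trace(radiance, attenuation, survive_prob):
    """Russian-roulette path termination sketch.

    Each bounce picks up `radiance`, attenuated by `attenuation` per bounce.
    After every bounce the ray survives with probability `survive_prob`;
    the surviving weight is divided by that probability, so the estimator
    stays unbiased: its expectation equals the infinite recursion,
    radiance / (1 - attenuation).
    """
    total, weight = 0.0, 1.0
    while True:
        total += weight * radiance
        if random.random() >= survive_prob:
            return total                    # ray killed for this sample
        weight *= attenuation / survive_prob  # compensate for killed paths
```

Averaging many runs converges to the analytic value, but individual samples spread more than a fixed-depth cutoff would, which is the extra noise mentioned above.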
Instead, I let the user set the bounce count at which sampling stops. If someone wants a really realistic image, they can set it to a higher number; a value around ten is usually good. But a user who wants to reduce render time, or wants an image without bounces, can set the maximum bounce count lower. These images show renders with zero and three bounces.
While ray casting is happening, bounces are tracked by the Context, and generic methods are used to separate behaviour based on ray type. This enables useful optimizations and hacks: collision detection can be skipped when it is not needed, and shaders can behave differently based on the bounce history. Shaders can have settings for how many recursive rays to cast on a camera hit; in other words, branched sampling can be implemented.
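The ray-type dispatch might look like the sketch below. The `RayType` enum and the branching rule are assumptions of mine, chosen only to show how branched sampling hangs off the ray type: many recursive rays on a camera hit, a single ray deeper in the bounce tree.

```python
from enum import Enum, auto

class RayType(Enum):
    CAMERA = auto()
    DIFFUSE = auto()
    REFLECTIVE = auto()
    SHADOW = auto()

class BranchingShader:
    """Hypothetical shader that branches based on the incoming ray type."""
    def __init__(self, camera_branch_count=4):
        self.camera_branch_count = camera_branch_count

    def recursive_ray_count(self, ray_type):
        # Branched sampling: spend extra rays only where the camera
        # sees them directly; deeper bounces get a single ray.
        return self.camera_branch_count if ray_type is RayType.CAMERA else 1
```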
In theory one could approximate the solution by casting random rays on the normal's hemisphere and multiplying by the BRDF, which in this case equals the dot product of the normal and the light direction divided by two. Instead I sample rays with the distribution whose density matches the BRDF. This method produces much less noise.
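For a Lambertian surface, sampling with a density proportional to the BRDF's cosine term is the standard cosine-weighted hemisphere sampling (here via the disk-projection construction, in local coordinates where the normal is the z axis):

```python
import math, random

def cosine_weighted_direction():
    """Sample a unit direction on the local hemisphere (normal = +z)
    with pdf cos(theta) / pi, so the cosine term of the Lambertian
    BRDF cancels out of the Monte Carlo estimator."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)                 # radius on the unit disk
    phi = 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # project the disk up onto the sphere
    return x, y, z
```

Directions near the normal (where the cosine is large) are drawn more often, which is exactly why this estimator is less noisy than uniform hemisphere sampling.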
One way is to cast a Lambertian random ray and ask the light source whether that ray hits it. When it does, the light returns the color cast in that direction. Let us call this sampling method the density method. For Lambert shaders this would, in most cases, result in too much noise.
Instead one could ask the light source for a random sample point and ask what color it casts in that direction, then multiply by the BRDF. In this case, the result also has to be multiplied by the light source's visible area. Let us call this the BRDF method.
This basically means that for a Lambertian shader it is practical to use the BRDF method for direct lighting and the density method for indirect lighting. This results in quickly converging images where the lights are transparent.
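A single-sample estimate of the BRDF method might look like this. All names are illustrative stand-ins, and the sketch simplifies the geometry: it uses a flat projected-area factor and omits the light's own orientation cosine and the visibility test.

```python
import math

def direct_light_brdf_method(hit_point, normal, albedo,
                             light_sample, light_radiance, light_area):
    """BRDF-method sketch for a Lambert surface: the light supplies a
    random surface point; we weight its radiance by the Lambert BRDF
    (albedo / pi), the surface cosine, and the light's visible area
    converted to a solid-angle factor."""
    to_light = tuple(l - p for l, p in zip(light_sample, hit_point))
    dist2 = sum(d * d for d in to_light)
    dist = math.sqrt(dist2)
    wi = tuple(d / dist for d in to_light)
    cos_surf = max(0.0, sum(n * w for n, w in zip(normal, wi)))
    solid_angle = light_area / dist2   # visible-area factor (simplified)
    brdf = albedo / math.pi            # Lambert BRDF
    return brdf * cos_surf * solid_angle * light_radiance
```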
It is hard to construct the BRDF from the roughness model even for reflective shaders. When we importance sample such a shader, we cast one ray mirrored about the normal. It seems fairly natural that one could randomly sample a normal based on the roughness distribution function and use that as the shading normal.
Then the density method for direct lighting would be simple. For light sources that are large compared to the roughness, this method would need fewer samples; in the other case it is better to use the BRDF method. Sadly, it is hard to construct the BRDF based on the roughness.
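The sample-a-normal idea can be sketched as follows. The perturbation model here (a uniform offset scaled by the roughness, then renormalized) is my own crude stand-in for a proper roughness distribution function; only the mirror step is standard.

```python
import math, random

def perturbed_normal(normal, roughness):
    """Jitter the shading normal by the roughness, then renormalize.
    The uniform jitter is a placeholder for a real microfacet-style
    roughness distribution."""
    n = tuple(c + roughness * (2.0 * random.random() - 1.0) for c in normal)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def reflect(direction, normal):
    """Mirror the incoming direction about the (possibly perturbed) normal."""
    d = sum(a * b for a, b in zip(direction, normal))
    return tuple(a - 2.0 * d * b for a, b in zip(direction, normal))
```

With roughness zero this degenerates to a perfect mirror; larger roughness spreads the reflected rays.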
I will list here all the bias-introducing tricks that reduce noise or increase speed. Such tricks are skipping caustics, or increasing roughness once a blurry hit has already occurred. Sample clamping and bilaterally blurred renders are also planned to be implemented at some point.
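Of these, sample clamping is the simplest to illustrate (the function name and tuple-of-channels color representation are my own):

```python
def clamp_sample(color, max_value):
    """Firefly suppression by sample clamping (a biased trick): cap each
    channel of a radiance sample before accumulation. Rare, very bright
    paths such as caustics get darkened, trading lost energy for less
    visible noise."""
    return tuple(min(c, max_value) for c in color)
```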
I will list here all the tricks that are good for achieving renders that are only partially photorealistic. Such tricks are currently the ability to skip ray collision per material based on ray type, and the ability to skip shadow casting for lights and materials. Materials may, for example, be invisible only to reflective rays. It is also possible to create an entity that casts shadows but is invisible otherwise.