# 3D system and rendering pipeline

This sub-project describes how the 3D worlds are going to be rendered. We want a camera to move around moving and dynamic objects. These objects may need a physics and collision system.

## the idea

Each object has a bounding box or capsule, and a BVH is maintained while the objects move around (physically or not). Then:

* to render a frame, we cull these bounding volumes
* at rendering time, the bounding box or sphere is turned into a quad or triangle fan, and the shader associated with the object is called (after the proper world-object-camera transformations)
* each object can be queried for:
  a) ray-object intersection ("return the distance to the object from this point P in this direction D")
  b) signed distance field ("what is the minimum distance to the object from this point P?")

So in the end, we will:

a) move the camera and lights along paths
b) transform the bounding volumes and cull for visible boxes
c) project them
d) call the objects' respective shaders for rendering

We want to use shadow maps, so multiple passes are expected.

## future step

Have an exporter from the Blender modelling software. That would be a converter from simple Blender-exported files to our internal format (as an asset for our AssetManager, or as C++ code directly).

## later improvement

How do we handle transparency? Multi-ray-casting?