Rendering in AR: implementation and basic rules
We continue our series of articles on AR development. At this stage, our AR device has determined its location, the planes around it, the environment map, the camera position, and other parameters. Now we move on to building a 3D scene in an engine and rendering models. In this article, we will talk about what rendering is and how it is implemented in 3D space.
What is rendering?
Simply put, rendering is visualization. In computer graphics, both 3D artists and programmers use the term for the creation of a flat image, a digital raster picture, from a 3D scene. In one form or another, this process is present in many fields: the film industry, video games, animation, architectural design, and others.
Rendering is one of the most technically challenging steps in working with 3D graphics. During this process, the scene's 3D geometry, textures, and lighting data are converted into a color value for each pixel of a 2D image. The render is often the last or penultimate stage of a project, after which the work is considered complete or needs only light post-processing.
So rendering with light estimation is the third integral component of any AR project.
Light Estimation Method and its implementation options
Light Estimation is a method that applies the laws of real-world lighting to 3D objects. Ideally, the lighting would be so realistic that virtual objects couldn't be distinguished from real ones, but that is still a long way off. A renderer easily conveys the materials created by a 3D artist, while lighting is a much harder problem.
Unlike games with ready-made static lighting, in AR the lighting constantly changes with the user's position. Objects should therefore be lit by the surrounding light sources, such as the sun outdoors or the lamps and windows in a room.
There are three main approaches to estimating the lighting of a space.
Spherical Functions and Inverse Rendering
To determine how a ray travels from a light source, hits an object, and reaches the camera, we need a depth map and the rendering equation. By solving the equation in reverse, from the brightness of each pixel we can work out where the light came from. Tracing several such rays gives the most likely solutions, so we can roughly determine where the main light sources are and how bright they are.
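As a toy illustration of this idea (a minimal sketch, not a production implementation; `estimate_light_direction` and the synthetic data are ours), here is how a single dominant light direction can be recovered from per-pixel brightness, assuming a simple Lambertian model where brightness is proportional to the dot product of the surface normal and the light direction:

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Estimate one dominant light direction from observed brightness,
    assuming a Lambertian model: intensity ~ normal . light.

    normals:     (N, 3) unit surface normals (e.g. from a depth map)
    intensities: (N,)   observed pixel brightness values
    Returns a unit vector pointing towards the light.
    """
    # Least-squares solve of  normals @ l = intensities  for l.
    l, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return l / np.linalg.norm(l)

# Toy data: light shining from (0, 0, 1), three pixels with known normals.
true_l = np.array([0.0, 0.0, 1.0])
normals = np.array([[0, 0, 1], [0.707, 0, 0.707], [0, 0.707, 0.707]])
intensities = normals @ true_l          # synthetic "pixel brightness"
print(estimate_light_direction(normals, intensities))  # ~ [0, 0, 1]
```

Real systems solve a much harder version of this, with unknown albedo, multiple lights, and sensor noise, but the least-squares step above is the core intuition.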
A faster method for determining lighting is spherical functions. When you see a 3D object change brightness as it moves from light into shadow, it is done with this calculation method. Spherical functions can be thought of as a histogram of light over the sphere: the simpler the shape of the function, the softer the resulting light map; the more complex the shape, the more accurate it is.
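In practice, "spherical functions" usually means spherical harmonics (SH). A minimal sketch of how second-order SH lighting (9 coefficients per color channel, which is, for example, the representation ARCore's Environmental HDR mode exposes) is evaluated for a surface normal:

```python
import numpy as np

def eval_sh_irradiance(sh_coeffs, normal):
    """Evaluate second-order spherical-harmonic lighting for a normal.

    sh_coeffs: (9, 3) array, 9 SH coefficients per RGB channel
    normal:    (3,) unit surface normal
    Returns an RGB irradiance value.
    """
    x, y, z = normal
    # Real SH basis functions for bands 0..2 (standard constants).
    basis = np.array([
        0.282095,                        # Y(0, 0)
        0.488603 * y,                    # Y(1,-1)
        0.488603 * z,                    # Y(1, 0)
        0.488603 * x,                    # Y(1, 1)
        1.092548 * x * y,                # Y(2,-2)
        1.092548 * y * z,                # Y(2,-1)
        0.315392 * (3.0 * z * z - 1.0),  # Y(2, 0)
        1.092548 * x * z,                # Y(2, 1)
        0.546274 * (x * x - y * y),      # Y(2, 2)
    ])
    return basis @ sh_coeffs  # (3,) RGB

# Example: flat ambient (band 0) plus extra light from above (band 1, Y).
sh = np.zeros((9, 3))
sh[0] = 0.5
sh[1] = 0.2
up, down = np.array([0.0, 1.0, 0.0]), np.array([0.0, -1.0, 0.0])
print(eval_sh_irradiance(sh, up), eval_sh_irradiance(sh, down))
```

Keeping only the first coefficient gives a flat ambient light; adding bands 1 and 2 introduces directionality, which is why fewer coefficients mean softer lighting.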
Machine Learning
The goal of machine learning is to predict an outcome from input data. The more diverse the data, the easier it is for the machine to find patterns and give an accurate result. In this case, you need to collect as much data as possible so that the neural network can produce a lighting map from ready-made photo samples and example 3D models.
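As a toy sketch of this approach (the architecture and shapes here are illustrative, not taken from any specific product), a small convolutional network can regress a lighting representation, such as the 27 SH coefficients from the previous section, directly from a camera frame:

```python
import torch
import torch.nn as nn

class LightEstimationNet(nn.Module):
    """Toy CNN that regresses 9 SH coefficients per RGB channel
    (27 values) from a camera frame. Illustrative only; real systems
    are trained on large datasets of HDR environment captures."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 27)

    def forward(self, frame):               # frame: (B, 3, H, W)
        x = self.features(frame).flatten(1)
        return self.head(x).view(-1, 9, 3)  # (B, 9, 3) SH coefficients

frame = torch.rand(1, 3, 240, 320)  # a normalized camera frame
sh = LightEstimationNet()(frame)    # feed into eval_sh_irradiance above
```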
Sun and shadow position
Many AR apps require access to the device's geolocation. This is done to determine the position of the sun relative to the user. It can be calculated with fairly simple formulas, which also tell us where a shadow will fall and how soft it will be. This method works well outdoors but is unsuitable for enclosed spaces.
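A sketch of those formulas, using a simplified solar-position model (Cooper's declination approximation plus the hour-angle formula; accurate to a degree or two, which is enough to place a sun light and orient its shadow):

```python
import math

def sun_elevation_azimuth(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth in degrees.

    lat_deg:     observer latitude in degrees
    day_of_year: 1..365
    solar_hour:  local solar time, 0..24 (12 = solar noon)
    """
    lat = math.radians(lat_deg)
    # Solar declination (Cooper's approximation).
    decl = math.radians(23.45) * math.sin(
        math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Hour angle: the sun moves 15 degrees per hour from solar noon.
    h = math.radians(15.0 * (solar_hour - 12.0))
    elev = math.asin(math.sin(lat) * math.sin(decl)
                     + math.cos(lat) * math.cos(decl) * math.cos(h))
    azim = math.atan2(
        math.sin(h),
        math.cos(h) * math.sin(lat) - math.tan(decl) * math.cos(lat))
    # Shift azimuth so 0 degrees = north.
    return math.degrees(elev), (math.degrees(azim) + 180.0) % 360.0

print(sun_elevation_azimuth(48.85, 172, 12.0))  # midsummer noon in Paris
```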
Environment mapping method
Environment mapping is a method used in 3D graphics to model reflections on the surfaces of objects. It is a great way to make rendering more realistic, and it is very popular now. An environment map captures the world around us in six directions, assembled into a cube (a cube map).
The AR device constantly captures and refines the image of the surrounding world in all six directions. To do this, many headsets have a rear camera, and some applications use both phone cameras to see what is behind the user. Neural networks can also be used to fill in the missing parts of the environment. Still, the user only sees the part of the world they are looking at, so the peripheral parts can be kept blurry to reduce the load on the system.
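A hedged sketch of how such a cube map is used for reflections: compute the mirror direction, pick the cube face from the dominant axis, and project onto that face's texture coordinates. Face orientation conventions differ between graphics APIs; the mapping below is illustrative:

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror reflection of the view direction about the surface normal."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def cubemap_face_uv(d):
    """Map a direction vector to (face, u, v) in a cube map.
    Faces: 0 +X, 1 -X, 2 +Y, 3 -Y, 4 +Z, 5 -Z."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:               # dominant X axis
        face, u, v, m = (0, -z, -y, ax) if x > 0 else (1, z, -y, ax)
    elif ay >= az:                          # dominant Y axis
        face, u, v, m = (2, x, z, ay) if y > 0 else (3, x, -z, ay)
    else:                                   # dominant Z axis
        face, u, v, m = (4, x, -y, az) if z > 0 else (5, -x, -y, az)
    # Map from [-1, 1] to [0, 1] texture coordinates.
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)

n = np.array([0.0, 1.0, 0.0])              # floor normal
view = np.array([0.0, -0.707, 0.707])      # camera looking down and forward
print(cubemap_face_uv(reflect(view, n)))   # which face the mirror ray hits
```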
Basic components of rendering
People unconsciously pick up subtle cues about how objects or living things are lit in a real environment. When a virtual object has no shadow, or a glossy material fails to reflect the surrounding space, users feel that the object doesn't belong in the scene, even if they cannot explain why.
Therefore, rendering AR objects according to the lighting in the scene is crucial for immersion and a more realistic user experience. Let's consider the basic components of good, realistic rendering.
Light Direction
We have already said that light sources can be recovered through inverse rendering. You can define one main light source and apply it to all objects, which is much more convenient than lighting each object individually. This gives us the correct direction of light for the whole scene.
Ambient Light
This is a reference point for further rendering: a general diffuse light that comes from the environment and illuminates everything around. It is usually represented by a light intensity and a color temperature.
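A minimal sketch (function names are ours) of how these last two components combine: one estimated directional light shared by all objects, plus an ambient term described by intensity and color:

```python
import numpy as np

def shade(albedo, normal, light_dir, light_color, ambient_color):
    """Diffuse shading: one shared directional light plus ambient.

    albedo:        (3,) surface base color
    normal:        (3,) unit surface normal
    light_dir:     (3,) unit vector pointing towards the light
    light_color:   (3,) RGB intensity of the main light
    ambient_color: (3,) RGB intensity of the ambient term
    """
    n_dot_l = max(0.0, float(np.dot(normal, light_dir)))  # Lambert's law
    return albedo * (ambient_color + light_color * n_dot_l)

# A warm key light from above plus a dim, cool ambient.
print(shade(np.array([0.8, 0.8, 0.8]),
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([1.0, 0.95, 0.8]),
            np.array([0.15, 0.17, 0.2])))
```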
Reflections
Light reflects differently from mirror-like and matte surfaces: metal reflects the environment like a mirror, while a matte object scatters incoming light diffusely. Different surfaces also pick up the colors of the surrounding space and objects. For example, if you put a blue cube next to a white ball, the ball will take on a bluish tint.
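A rough sketch of the mirror-versus-matte distinction, assuming we already have the environment color along the exact mirror direction and a pre-blurred average of the environment (which image-based lighting pipelines typically precompute):

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB colors."""
    return [x + (y - x) * t for x, y in zip(a, b)]

def surface_color(albedo, env_sharp, env_blurred, metallic, roughness):
    """Crude image-based-lighting approximation.

    env_sharp:   environment color along the exact mirror direction
    env_blurred: pre-blurred, cosine-weighted environment color
    Rough surfaces see an ever more blurred reflection; matte surfaces
    scatter light, so they see only the blurred average, tinted by albedo.
    """
    specular = lerp(env_sharp, env_blurred, roughness)
    diffuse = [a * e for a, e in zip(albedo, env_blurred)]
    return lerp(diffuse, specular, metallic)

# A white matte ball near a blue wall: the blurred environment is bluish,
# so the ball picks up a blue tint.
print(surface_color([1, 1, 1], [0.2, 0.2, 1.0], [0.6, 0.6, 0.9],
                    metallic=0.0, roughness=1.0))
```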
Specular Highlights
These are shiny spots on the surface of objects that reflect a direct light source. The highlight moves across a 3D object as the viewer's position in the scene changes.
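A sketch using the classic Blinn-Phong model: the highlight depends on the half-vector between the light and view directions, which is exactly why it shifts as the viewer moves:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def specular_highlight(normal, light_dir, view_dir, shininess=32.0):
    """Blinn-Phong specular term. The half-vector between the light and
    view directions makes the highlight move with the viewer."""
    half = normalize(normalize(light_dir) + normalize(view_dir))
    return max(0.0, float(np.dot(normal, half))) ** shininess

n = np.array([0.0, 1.0, 0.0])
l = np.array([0.0, 1.0, 1.0])
print(specular_highlight(n, l, np.array([0.0, 1.0, -1.0])))  # mirror dir: ~1.0
print(specular_highlight(n, l, np.array([0.0, 1.0,  1.0])))  # off angle: ~0.0
```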
Shading
Shading is the intensity of light across different areas of an object. It depends on the angle at which light hits the surface and on how close the light source is. Different parts of the same object can be shaded differently. For example, if light falls on a ball from the right, then the shading, and accordingly the shadow, will be on the left.
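A minimal sketch of how both factors enter the shading of a point lit by a nearby point light: Lambert's cosine term for the angle, and inverse-square falloff for the distance:

```python
import numpy as np

def point_light_shading(point, normal, light_pos, light_intensity):
    """Diffuse intensity at a surface point from a nearby point light.
    Combines Lambert's cosine term with inverse-square distance falloff,
    so both the angle and the distance to the light matter."""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    n_dot_l = max(0.0, float(np.dot(normal, to_light / dist)))
    return light_intensity * n_dot_l / (dist * dist)

p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 1.0, 0.0])
print(point_light_shading(p, n, np.array([0.0, 1.0, 0.0]), 10.0))  # close: 10.0
print(point_light_shading(p, n, np.array([0.0, 2.0, 0.0]), 10.0))  # farther: 2.5
```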
Shadows
Shadows add realism to the scene and let the viewer trace the spatial relationships between objects. They give a greater sense of depth to a 3D scene and its objects. A lit object both casts a shadow of its own and can receive shadows from other objects.
You can also use a quick method and build a shadow map. The idea is simple: we render the scene from the point of view of the light source; everything we see is lit, and everything we cannot see must be in shadow.
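A minimal sketch of the resulting per-pixel test (the depth map is assumed to be already rendered from the light's viewpoint; the bias value is illustrative and tuned per scene):

```python
import numpy as np

def is_in_shadow(shadow_map, light_space_uv, light_space_depth, bias=0.005):
    """Classic shadow-map test.

    shadow_map:        (H, W) depths rendered from the light's viewpoint
    light_space_uv:    (u, v) in [0, 1], the point projected into the
                       light's view
    light_space_depth: distance from the light to the point
    If something nearer to the light occupies this texel, the point is
    in shadow. The bias avoids self-shadowing ("shadow acne").
    """
    h, w = shadow_map.shape
    x = min(int(light_space_uv[0] * w), w - 1)
    y = min(int(light_space_uv[1] * h), h - 1)
    return light_space_depth - bias > shadow_map[y, x]

# Toy 2x2 shadow map: the light sees depth 1.0 everywhere except one texel.
sm = np.array([[1.0, 1.0], [0.4, 1.0]])
print(is_in_shadow(sm, (0.25, 0.75), 0.9))  # True: occluder at depth 0.4
print(is_in_shadow(sm, (0.75, 0.25), 0.9))  # False: nothing closer
```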
Colour bleeding
In computer graphics and 3D rendering, color bleeding is a phenomenon where one colored object tints another with its color. Color can also be reflected, for example, from the colored walls of the scene. This is implemented much like inverse rendering: we emit rays from the object and look at which objects they intersect.
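A toy sketch of that ray-casting idea (scene objects are spheres here purely for simplicity): rays leave a point on the object, and the colors of whatever they hit are averaged into a bounced-light tint:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hemisphere(normal):
    """Random direction in the hemisphere around the normal."""
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    return d if np.dot(d, normal) > 0 else -d

def hit_sphere(origin, direction, center, radius):
    """Ray-sphere intersection test (True on any forward hit)."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    return disc > 0 and -b - np.sqrt(disc) > 1e-4

def bounced_color(point, normal, spheres, n_rays=256):
    """Average color of surrounding objects hit by rays from `point`:
    a one-bounce estimate of the color bleeding onto the object."""
    total = np.zeros(3)
    for _ in range(n_rays):
        d = sample_hemisphere(normal)
        for center, radius, color in spheres:
            if hit_sphere(point, d, center, radius):
                total += np.dot(d, normal) * color  # cosine weighting
                break  # nearest-hit logic omitted for brevity
    return total / n_rays

# A blue wall (big sphere) to the right of a point on a white ball.
blue_wall = (np.array([2.0, 0.0, 0.0]), 1.0, np.array([0.1, 0.1, 0.9]))
print(bounced_color(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                    [blue_wall]))  # a bluish tint bleeding onto the ball
```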
Final words
Rendering is one of the most important stages in creating augmented reality: the visual quality of 3D objects depends on it. By using the methods of 3D graphics and the basic rules for setting up lighting, you can create truly lifelike objects that are close to real ones.