Graphics & Optimization #1: Lighting

Squeezing out every drop of performance is essential for running a 3D app on an Android phone, and running it in VR doubles both the necessity and the difficulty. Fortunately for VR Rehearsal, the presenter and the virtual audience stay in fixed positions in the environment, which makes it possible to optimize graphics performance substantially in several areas. By applying a number of tricks, we managed to render more than 50 animated humanoid models in a realistic environment at a decent framerate.

Because our environments are completely static, lightmapping is a natural solution: it pre-computes the lighting data, including global illumination, into a texture. At runtime, the shader then obtains lighting by simply sampling the lightmap texture. The following diffuse shader, used for rendering the environment, reads the lightmap data to apply lighting:

 

Shader "VR_Rehearsal_app/LightmapDiffuse"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags
        {
            "RenderType" = "Opaque"
            "Queue" = "Geometry"
            "IgnoreProjector" = "True"
        }
        LOD 100

        Pass
        {
            Name "Lightmap"
            Lighting Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma fragmentoption ARB_precision_hint_fastest
            // enable to make fog work
            //#pragma multi_compile_fog

            #include "UnityCG.cginc"

            struct appdata
            {
                half4 vertex : POSITION;
                half2 uv : TEXCOORD0;
                half2 uv_lm : TEXCOORD1;
            };

            struct v2f
            {
                half4 vertex : SV_POSITION;
                half4 uv : TEXCOORD0; // xy: texture uv; zw: lightmap uv
                //UNITY_FOG_COORDS(1)
            };

            uniform sampler2D _MainTex;
            uniform half4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                UNITY_INITIALIZE_OUTPUT(v2f, o);
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv.xy = TRANSFORM_TEX(v.uv, _MainTex);
                o.uv.zw = v.uv_lm * unity_LightmapST.xy + unity_LightmapST.zw;
                //UNITY_TRANSFER_FOG(o, o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // sample the diffuse texture
                fixed4 col = tex2D(_MainTex, i.uv.xy);
                // sample and decode the lightmap
                col.rgb *= DecodeLightmap(UNITY_SAMPLE_TEX2D(unity_Lightmap, i.uv.zw));
                //UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }
            ENDCG
        }
    }
}

 

The fragment shader here is extremely simple: one line reads the diffuse texture, another reads the lightmap, and the two are multiplied together (fog can be applied as well without too much pain). The resulting quality, however, is surprisingly good:

Large auditorium environment

Small conference room environment

However, reality isn't always so easy. It took us a long time to tweak the parameters of Unity's lightmap system to generate good lightmaps for our scenes. At first we were troubled by light bleeding: some triangles received light bleeding in from the lightmap of another triangle. We solved this by closing all the edges of the model and increasing the margin between lightmap patches. Then we experienced strange stripe artifacts on the walls of the conference room environment. The problem turned out to come from inappropriate shadow parameters, and increasing the baked shadow radius of all the scene lights fixed it.

 

We've removed real-time light calculation for the static environment, but what about the animated characters instantiated at runtime? Lightmaps cannot be pre-computed for moving objects. Fortunately, Unity also provides light probes, which are based on the idea that the lighting at any position can be approximated by interpolating between discrete samples. The interpolation is far cheaper than real lighting computation, yet still produces reasonably good quality. All we need to do is set up our sampling points in the scene:

Setting up light probes in conference room

Setting up light probes in auditorium

and use that information to interpolate lighting in our shader. The interpolation can be done in either the vertex shader or the fragment shader; since per-vertex work is far cheaper, we naturally do it in the vertex shader:


Shader "VR_Rehearsal_app/LightProbeDiffuse"
{
    Properties
    {
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
    }
    SubShader
    {
        Tags
        {
            "RenderType" = "Opaque"
            "Queue" = "Geometry"
            "IgnoreProjector" = "True"
            "BW" = "TrueProbes"
        }
        LOD 100

        Pass
        {
            Name "LightProbe"
            Tags
            {
                "LightMode" = "ForwardBase"
            }
            CGPROGRAM

            #pragma vertex vert
            #pragma fragment frag
            #pragma fragmentoption ARB_precision_hint_fastest
            #include "UnityCG.cginc"
            #include "Lighting.cginc"

            struct appdata
            {
                half4 vertex : POSITION;
                fixed3 normal : NORMAL;
                half2 texcoord : TEXCOORD0;
            };

            struct v2f
            {
                half4 pos : SV_POSITION;
                half2 uv : TEXCOORD0;
                //fixed4 vLightingFog : TEXCOORD1; // xyz: vertex light color; w: vertex fog data
                fixed3 vLightingFog : TEXCOORD1; // xyz: vertex light color
            };

            uniform sampler2D _MainTex;
            uniform half4 _MainTex_ST;

            v2f vert(appdata v)
            {
                v2f output;
                UNITY_INITIALIZE_OUTPUT(v2f, output);

                output.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                output.uv = TRANSFORM_TEX(v.texcoord, _MainTex);

                half3 worldPos = mul(_Object2World, v.vertex).xyz;
                fixed3 worldN = normalize(mul((float3x3)_Object2World, v.normal));

                // light probe: evaluate the interpolated spherical harmonics
                output.vLightingFog.xyz = ShadeSH9(float4(worldN, 1.0));

                //output.vLightingFog.w = exp(-length(_WorldSpaceCameraPos - worldPos) * unity_FogParams.x);
                return output;
            }

            fixed4 frag(v2f input) : SV_TARGET
            {
                fixed4 col = tex2D(_MainTex, input.uv) * fixed4(input.vLightingFog.xyz, 1.0);
                return col;
                //return lerp(unity_FogColor, col, input.vLightingFog.w);
            }
            ENDCG
        }
    }

    FallBack Off
    //FallBack "Mobile/VertexLit"
}

 

Notice that ShadeSH9 in the vertex shader is the function that evaluates the interpolated probe lighting. The fragment shader stays simple: it reads the diffuse texture and multiplies the color by the per-vertex interpolated lighting. The following pictures compare the character shaded with the light-probe shader and with a normal diffuse shader.
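To make the math behind ShadeSH9 concrete, here is a rough Python sketch (our own illustration, not code from the project) of evaluating second-order spherical harmonics lighting for a given normal. This is essentially what ShadeSH9 computes from the blended probe coefficients that Unity uploads per object:

```python
# Evaluate lighting from 9 spherical-harmonics coefficients (bands 0-2),
# the same representation Unity's light probes use.
SH_BASIS = [
    lambda x, y, z: 0.282095,                    # Y(0,0)
    lambda x, y, z: 0.488603 * y,                # Y(1,-1)
    lambda x, y, z: 0.488603 * z,                # Y(1,0)
    lambda x, y, z: 0.488603 * x,                # Y(1,1)
    lambda x, y, z: 1.092548 * x * y,            # Y(2,-2)
    lambda x, y, z: 1.092548 * y * z,            # Y(2,-1)
    lambda x, y, z: 0.315392 * (3 * z * z - 1),  # Y(2,0)
    lambda x, y, z: 1.092548 * x * z,            # Y(2,1)
    lambda x, y, z: 0.546274 * (x * x - y * y),  # Y(2,2)
]

def shade_sh9(coeffs, normal):
    """coeffs: 9 RGB triples (interpolated probe data);
    normal: unit direction (x, y, z). Returns an RGB triple."""
    x, y, z = normal
    rgb = [0.0, 0.0, 0.0]
    for c, basis in zip(coeffs, SH_BASIS):
        b = basis(x, y, z)
        for i in range(3):
            rgb[i] += c[i] * b
    return tuple(rgb)
```

Per vertex, the cost is just this small polynomial in the normal, which is why the interpolation is so much cheaper than a full lighting computation.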

Left: Light probed diffuse shader. Right: Unity standard shader.

Now that we have precomputed lighting data (either lightmaps or light probes) for the different environments, the last thing to do is load them correctly. We have to load the data dynamically because we generate everything in the same scene (which is easier to manage than multiple scenes). However, it turns out that Unity by default cannot bake lightmaps into prefabs, and it cannot automatically load multiple sets of light probes. We ended up writing our own helper scripts that bake lightmaps into prefabs, based on this thread: http://forum.unity3d.com/threads/problems-with-instantiating-baked-prefabs.324514/, and that export all the light-probe information as binary files.
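The actual helper scripts are Unity C# editor code, but the binary-export idea itself is simple. As a rough illustration (a hypothetical flat format of our own, not the project's actual one), each probe's position plus its 9 RGB SH coefficients can be packed into one blob and read back at load time:

```python
import struct

# Hypothetical flat layout, little-endian:
# [probe count: int32] then per probe: 3 floats position + 27 floats
# SH coefficients (9 RGB triples).
FLOATS_PER_PROBE = 3 + 27

def pack_probes(probes):
    """probes: list of (position, coeffs), position = 3 floats,
    coeffs = flat list of 27 floats. Returns a bytes blob."""
    out = [struct.pack("<i", len(probes))]
    for pos, coeffs in probes:
        out.append(struct.pack("<%df" % FLOATS_PER_PROBE,
                               *(list(pos) + list(coeffs))))
    return b"".join(out)

def unpack_probes(blob):
    """Inverse of pack_probes: restores the list of (position, coeffs)."""
    (count,) = struct.unpack_from("<i", blob, 0)
    probes, offset = [], 4
    for _ in range(count):
        vals = struct.unpack_from("<%df" % FLOATS_PER_PROBE, blob, offset)
        probes.append((list(vals[:3]), list(vals[3:])))
        offset += 4 * FLOATS_PER_PROBE
    return probes
```

With per-environment blobs like this on disk, switching environments is just a matter of reading the matching file and handing the coefficients back to the runtime, instead of relying on whatever single probe set the loaded scene happens to carry.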