
Oculus Developer Gives Tips for Developing for Mobile VR


Trevor Dasch, a Developer Relations Engineer at Oculus, wrote a lengthy article this week on how to avoid rendering problems while developing for the Oculus Quest and the Oculus Go. Most of the tips and tricks in the developer post are aimed at the Oculus Quest, but there is plenty to take away for the Go as well. If you are working on a VR project and plan to ship your game on one of Oculus's wireless platforms, there is a lot to understand, and approaching development as if you were building a PC game is a mistake.

Dasch gives a list of things not to do while developing for these platforms. While the items that follow “are technically possible to do on a mobile chipset,” he writes, “we strongly suggest not doing them. However, this isn’t always a hard and fast rule, I’ve seen developers do most of them and still make framerate, but you will save yourself much pain by avoiding everything on this list.”

Below, we list what he says not to do, followed by an excerpt of what he says about each technique and why you shouldn’t be doing it on mobile.

  • Deferred Rendering
  • Depth Pre-Pass
  • Post-Processing
  • Realtime Shadows
  • Depth (and framebuffer) Sampling
  • Mirrors/Portals

Each of these is frowned upon when developing for the wireless Oculus platforms. Although you don’t have to avoid them completely, you will likely find other techniques more useful and better suited to your purposes.

Deferred Rendering and Why You Shouldn’t Do It 

“Besides resolve cost, deferred rendering is only an advantage if your geometry is complex and you have multiple lights. Both of which can’t really be achieved on mobile anyway, because of the limited power of the GPU to both push a large number of vertices and to compute pixel fill.

The answer, for the time being, is to stick to forward rendering. There may also be a place for a good forward+ implementation, though I haven’t seen one yet.”

Depth Pre-Pass and Why You Shouldn’t Do It 

“First of all, the amount of time you save on fragment fill by doing a depth pre-pass should be minimal if you sort your geometry before submitting your draw calls. Drawing front to back will cause the regular depth test to reject your pixels, so you’d only avoid the pixel fill work for geometry that wasn’t sorted correctly, or where both objects overlap each other at different points.”

“Second, it requires doubling your draw calls, since everything has to be submitted first for the depth pre-pass, and then for the forward pass. Since draw calls are quite heavy on the CPU, this is something you will want to avoid.”

“Third, all the vertices need to be processed twice, which will usually add more GPU time than you save by avoiding filling a few pixels twice. This is because vertex processing is relatively more time intensive on mobile than PC, and processing fragments will be relatively less (since the framebuffer size is usually smaller and the fragment program tends to be less complex).”
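
To make the front-to-back sorting he recommends concrete, here is a minimal sketch of ours (not code from Dasch’s post) that orders opaque draw calls by distance from the camera before submission. The DrawItem struct and function names are hypothetical stand-ins for whatever your engine actually uses.

    #include <stdlib.h>

    /* Hypothetical draw-call record; your engine's equivalent will differ. */
    typedef struct {
        float dist_sq;  /* squared distance from the camera to the object */
        int   mesh_id;  /* whatever handle your renderer uses to submit the draw */
    } DrawItem;

    static int compare_front_to_back(const void *a, const void *b) {
        float da = ((const DrawItem *)a)->dist_sq;
        float db = ((const DrawItem *)b)->dist_sq;
        return (da > db) - (da < db);  /* nearest first */
    }

    /* Sort opaque draws nearest-first so the regular depth test rejects most
       occluded fragments, removing the need for a separate depth pre-pass. */
    void sort_opaque_draws(DrawItem *items, size_t count) {
        qsort(items, count, sizeof(DrawItem), compare_front_to_back);
    }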

Post-Processing and Why You Shouldn’t Do It 

“The main problem with post-processing on mobile is, once again, resolve cost. Producing a second image will cause another resolve, which immediately removes about 1ms from your GPU impact. Not to mention the time it takes to compute your post-processing effect, which can be quite resource intensive depending on the effect. It’s better to avoid post-processing altogether.”

Realtime Shadows and Why You Shouldn’t Do Them

“I would consider this the most controversial item on this list. There are many apps that have successfully shipped on mobile with full realtime shadows. However, there are significant trade-offs for doing so, that in my opinion are worth avoiding.”

“A common technique for realtime shadows (and Unity’s default) is cascading shadow maps, which means your scene is rendered multiple times with various viewport sizes. This adds 1-4x the number of times your geometry must be processed by the GPU, which inherently limits the amount of vertices your scene can support. It also adds the resolve cost of the shadow map texture, which will be relative to the size of the texture.”

“At the other end of the GPU pipeline, you have two options when sampling your shadow maps: hard shadows and soft shadows. Hard shadows are quicker to render, but they have an unavoidable aliasing problem. Because of the way shadow maps work (testing the depth of the pixel against the depth of your shadow map), only a binary result can come from this test, in shadow or not in shadow. You can’t bilinearly sample your shadow map, because it represents a depth value, not a color value. Soft shadows should be avoided, because they require multiple samples into the shadow map, which of course is slow.”

“Your best bet is to bake all the shadows that you can, and if you need realtime shadows, figure out a different technique. Blob shadows are generally acceptable if your lighting is mostly diffuse. Geometry shadows can also work quite well if you need hard lighting and your shadow surface is a flat plane.”
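
For the “geometry shadows” onto a flat plane that he mentions, one common approach is the classic planar-projection shadow matrix. The sketch below is our illustration, not code from the article; it assumes column-major, OpenGL-style matrix storage and builds M = dot(plane, light) * I - light * plane^T, which squashes a model onto the plane so it can be drawn as a dark, unlit silhouette.

    /* Classic planar-projection shadow matrix: M = dot(plane, light) * I - light * plane^T.
       Multiplying a model matrix by M flattens the geometry onto the plane, where it can
       be drawn in a dark, unlit color. Assumes column-major (OpenGL-style) storage. */
    void planar_shadow_matrix(float m[16], const float plane[4], const float light[4])
    {
        /* plane = (a, b, c, d) with ax + by + cz + d = 0;
           light = position (w = 1) or direction (w = 0). */
        float dot = plane[0] * light[0] + plane[1] * light[1] +
                    plane[2] * light[2] + plane[3] * light[3];

        for (int col = 0; col < 4; ++col) {
            for (int row = 0; row < 4; ++row) {
                m[col * 4 + row] = -light[row] * plane[col];
                if (row == col) {
                    m[col * 4 + row] += dot;
                }
            }
        }
    }

Because the flattened geometry is drawn a second time, this still costs extra draw calls, but it avoids the shadow-map resolve and sampling costs Dasch warns about.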

Depth Sampling and Why You Shouldn’t Do It 

“When MSAA is enabled, your tile actually has a buffer that is large enough to hold all of your samples (i.e., 2x the pixels for 2xMSAA, 4x for 4xMSAA). This means that by default, if you sample the depth buffer, it will have to execute your fragment shader on a per-sample basis which means it will be 2x or 4x more time intensive than you would expect. There is a way to ‘fix’ this, which is to call glDisable(FETCH_PER_SAMPLE_ARM). However, the problem with this is it will only retrieve the value for the first sample, not the result of blending the samples, which means MSAA is functionally disabled when this is on.”
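
If you do decide to trade MSAA correctness for speed in a pass that samples depth or the framebuffer, the call he names comes from the GL_ARM_shader_framebuffer_fetch extension. The snippet below is a hedged sketch of ours: it assumes the token is exposed by the platform’s gl2ext.h header and guards the call behind an extension check.

    #include <GLES3/gl3.h>
    #include <GLES2/gl2ext.h>  /* exposes the ARM framebuffer-fetch tokens on most SDKs */
    #include <string.h>

    /* Sketch: opt out of per-sample shading before a pass that samples depth or the
       framebuffer, accepting that only the first sample will be fetched (which, as
       Dasch notes, functionally disables MSAA for that read). */
    void disable_per_sample_fetch(void)
    {
    #ifdef GL_FETCH_PER_SAMPLE_ARM
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if (ext && strstr(ext, "GL_ARM_shader_framebuffer_fetch")) {
            glDisable(GL_FETCH_PER_SAMPLE_ARM);
        }
    #endif
    }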

Mirrors/Portals and Why You Shouldn’t Do Them

“There is a solution that takes advantage of modified shaders and the stencil buffer. Every material in your scene would have two versions of your shader, one that only draws if a certain bit in your stencil buffer is 0, and one that only draws if it’s 1. Then what you would do is draw the mirror mesh with a material that sets that bit in your stencil buffer, draw your scene using the first set of shaders, set up your camera with the reflection matrices, and finally, draw the scene with your second set of shaders. This will produce the reflection you’re looking for without filling any more pixels than necessary and avoiding an unnecessary resolve. What it won’t do is avoid drawing a bunch of objects twice (which is unavoidable with any solution).”
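
Dasch describes the stencil check as living in per-material shader variants; in plain OpenGL ES the same masking is usually expressed with the fixed-function stencil test. The sketch below is our rough illustration of the pass ordering he outlines, with draw_mirror_mesh, draw_scene, set_main_camera, and set_reflected_camera as hypothetical engine hooks; depth handling inside the mirror region is omitted for brevity.

    #include <GLES3/gl3.h>

    /* Hypothetical engine hooks. */
    void draw_mirror_mesh(void);
    void draw_scene(void);
    void set_main_camera(void);
    void set_reflected_camera(void);  /* applies the mirror's reflection matrix */

    /* Rough pass ordering for the stencil-masked mirror described above. */
    void render_with_mirror(void)
    {
        glEnable(GL_STENCIL_TEST);

        /* 1. Mark the mirror's pixels: write 1 into the stencil buffer, no color. */
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glStencilMask(0xFF);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        draw_mirror_mesh();
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilMask(0x00);  /* stop writing stencil */

        /* 2. Draw the normal scene only where the stencil bit is 0. */
        glStencilFunc(GL_EQUAL, 0, 0xFF);
        set_main_camera();
        draw_scene();

        /* 3. Draw the reflected scene only where the stencil bit is 1. */
        glStencilFunc(GL_EQUAL, 1, 0xFF);
        set_reflected_camera();
        draw_scene();

        glDisable(GL_STENCIL_TEST);
    }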

We only pulled what we found most important from each section and left a couple of sections out entirely. This is a fantastic piece, and every developer, inside or outside of VR, should take the time to read it in full. For more VR news and community updates, make sure to check back at VRGear.com.
