A Beginner's Guide to Developing VR Apps

December 19, 2016

While the thought of transferring your knowledge of app development from a rectangular screen to a fully immersive environment can seem daunting, the virtual reality development landscape is very much at the same place mobile app development was just a few short years ago.


Yes, the barrier to entry is high. But so are the opportunities.


And with new developments and tools constantly making it easier to create your own virtual reality application, there’s never been a better time to dive in.


So what are the essentials that any developer looking to break into the VR space needs to know?


We sat down with Jack Donovan, a virtual reality developer for New York-based Iris VR, to talk about everything from the basics of VR development to coding with the Oculus Rift development kit.


When did you first start developing for virtual reality?


Well, during my junior year of college I saw the Oculus DK1 headset at a public event, tried it, and it blew my mind. I was like, 'I need this. This is what I want to devote my life to.'


How is your approach different when you’re developing something virtual vs. something that’s not?


I think the dividing line isn't necessarily between something virtual and something non-virtual. I would say it's between 2D and 3D development.


The big difference is that with 2D development, you're working under the assumption that you have a screen and windows that can be maximized and moved a certain distance to the left or right. So if you consider a classic desktop application, you design visual elements based on the height and width of the window you're working with, and you understand that the window may be resized by the user from time to time.


In VR and 3D, however, there are no bounds. There is no box that everything fits in. It's just an infinite expanse. And so you have to change the way you think about code architecture.

Jack Donovan - Iris VR


Could you elaborate on what you mean by code architecture?


Sure. In 2D, you're just working with x and y. But in 3D, you have z, which adds depth. So things are not only higher or lower than each other; they can also be farther back.


Here's a clunky analogy: it's like an iPad versus a refrigerator. With 3D, not everything is at the closest distance to you; things can go as far back as you design them. You can think about design as you would in real life. If you want to put an object behind another object, you can do that. You have that depth.
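To make the extra axis concrete, here is a minimal, hypothetical sketch (not from any engine): once every object carries a z coordinate, "behind" becomes a real spatial relationship you can compute with, for example by drawing far objects before near ones.

```python
import math

def distance_from_viewer(obj, viewer=(0.0, 0.0, 0.0)):
    """Euclidean distance from the viewer to an object's position."""
    x, y, z = obj["position"]
    vx, vy, vz = viewer
    return math.sqrt((x - vx) ** 2 + (y - vy) ** 2 + (z - vz) ** 2)

# Two objects at the same screen position (x, y) but different depths (z).
lamp = {"name": "lamp", "position": (1.0, 0.0, 2.0)}
wall = {"name": "wall", "position": (1.0, 0.0, 10.0)}

# Draw farthest-first so closer objects occlude farther ones (the
# "painter's algorithm"; real engines use a depth buffer instead).
draw_order = sorted([lamp, wall], key=distance_from_viewer, reverse=True)
print([o["name"] for o in draw_order])  # farthest object comes first
```

In a 2D layout the lamp and the wall would simply overlap; the z value is what lets one sit behind the other.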


And I guess you’re always thinking about where a user is situated when you’re developing in 3D?


Exactly. And you take for granted how limited you are by two dimensions when you’re using a classic application, because that’s what we’ve been using forever. But in VR, you can start to think about how to uninhibit your mind to work more efficiently with an extra dimension.


Does it actually feel that way? More freeing? I can imagine it feeling like a burden.


It definitely feels like you’re more free, and for two reasons.


One is the depth. To go back to the fridge analogy, I can place something behind an object instead of having to move it outside the fridge.


The other thing to consider is the bounds of the screen. I no longer have to try to fit everything within a predefined space. The space is now 360 degrees in every direction around the user, not to mention the fact that they can walk or move around. There are no boundaries at all.


You take for granted how limited you are by two dimensions when you’re using a classic application, because that’s what we’ve been using forever. But in VR, you can start to think about how to uninhibit your mind to work more efficiently with an extra dimension. — Jack Donovan


When you tackle a project, what is the first thing you think about?


I have to decide what engine I’m going to use. That's the first big decision.


Right now there are two popular modern game engines that are being used for VR development: Unity and Unreal. Between the two of them, I think something like 95% of today’s VR content has been developed on one of those two engines.


There are about five lesser-known engines that support VR, or you can build your own engine, though I'd never recommend it.


What are the reasons you would choose a specific engine over the other?


Coding language is important: what language you're comfortable developing in. It's also important to know what comes out of the box with the engine.


For instance, if you put a 3D model into Unreal and the same model into Unity without adding any special treatment to either, the model will look better in Unreal. It'll have higher-quality graphics, for example. The interesting thing is that Unreal and Unity are both based on the same graphics library, but Unreal assumes you want certain qualities and applies them without telling you.


So Unreal is less customizable?


No, it's just as customizable as Unity, but you'd have to dismantle the qualities it gives to an object or model before you could start adding your own modifications.


For instance, if you throw a lamp into the Unreal engine, it might give it a bloom effect right out of the box. So if you didn't want that bloom, you'd have to manually remove it. Unity, by contrast, takes a from-the-ground-up approach: you add a lamp, and then if you want a bloom, you add a bloom component.
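The contrast can be sketched with a toy component system. This is illustrative Python, not real Unity or Unreal API: one factory returns a bare object you extend explicitly, the other applies an opinionated default you would have to strip off.

```python
class SceneObject:
    """A toy scene object that holds a list of attached components."""

    def __init__(self, name):
        self.name = name
        self.components = []

    def add_component(self, component):
        self.components.append(component)
        return self

def unity_style_lamp():
    # Ground-up philosophy: nothing is applied unless you ask for it.
    return SceneObject("lamp")

def unreal_style_lamp():
    # Opinionated defaults: a bloom-like effect comes pre-attached,
    # and you would have to remove it if you didn't want it.
    return SceneObject("lamp").add_component("bloom")

print(unity_style_lamp().components)   # []
print(unreal_style_lamp().components)  # ['bloom']
```

Neither approach is more capable; the difference is whether customization starts from zero or from the engine's defaults.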


Is there a preference among virtual reality developers?


Well, a lot of people find Unity a much easier place to start because it's less complex. In a broad sense, there are two camps: Unreal is looked at as the engine that triple-A enterprise game developers use, and Unity is more widely used by indie game developers.


Why do you think that is?


That’s a really hard question to answer, and one that everyone is trying to come up with an answer for right now, because there are definitely engine wars in the industry. They can both be used to make exactly the same product, but it's just a matter of how it feels.


To give some background, Unreal has been used in triple-A games since the first incarnation of Unreal Tournament, which launched back in the late '90s. So I think people who are accustomed to making enterprise triple-A games already know the breakdown of Unreal as an engine. They know all of the tricks they have to use.


Whereas Unity has a more unfamiliar but pragmatic approach. It makes everything almost too easy, so you could publish something with Unity that's really terrible, and we do see a lot of Unity games that are pretty bad.


But overall, the idea behind rapid iteration is that you can make something in Unity that could absolutely be shipped as a triple-A title. In fact, Pokémon Go was built on Unity.


So I guess the idea is that indie developers tend to use Unity because they can build everything from the ground up, which makes an experience feel more personal, whereas Unreal feels a little more packaged.


With Unreal, you can start a project, press play, and you can walk around and look. You don’t have to worry about questions like "Why am I able to look around? Can I change the way I look around?"


With Unity, you have to think about those questions; it's definitely more minimalist at the beginning. Not a lot is going on, so you have to tell it what is going to happen.


I’ve heard that some people actually program in VR with an Oculus Rift development kit. Do you ever program inside a virtual space?


Yeah, I love to program in VR when I'm working on VR stuff.


So you work with the headset on?


Yeah, it's super awesome. There are a handful of companies making virtual reality applications for productivity.


One of the major applications is called Bigscreen, which takes the signal your computer is trying to render and displays it inside a virtual world. For instance, I can put on my headset and project whatever my computer is going to render onto an IMAX-sized screen inside the virtual environment. And I can manipulate that giant screen with my keyboard and mouse just as I normally would.

Developing in Bigscreen


And then you can move around within the environment and look at the giant screen from different angles? Do you prefer to program this way?


There is a certain kind of work I prefer to do this way: anything hyper-focused and totally detached from reality, head-down sort of work.


I like it because I can customize it in so many different ways, and I'm not limited to just my monitor anymore, so I can have Spotify take up a fifty-foot side of a building in the distance. I can basically design the computer I'm working on and change it at any time.


The environment also really shapes how you feel and how your mind functions. If you're in a secluded forest, you might be in a meditative state that lets you focus more on code than if you're just working in an office, you know?


What do you usually use for your background?


I use a couple of virtual applications. I use Bigscreen and another one called Envelop VR. In Envelop I tend to use a space background. But the cool thing about Envelop is that it lets you create a different screen for each application, so I spread them all over the place. When I use Bigscreen, I have a huge open deck that looks out onto a city skyline at night, and so I just code ‘outside’.


If I was someone with an interest in virtual reality and a slight interest in coding, where would you suggest I start?


I would tell you to start with Unity. Hands down. Unity is super easy for people to learn. I have two friends who picked up Unity in a week, and it's great because you can get your first Unity project up and running in the same sitting that you install it. It would only take about an hour for you to be able to walk around in a virtual space.


In terms of learning advanced graphics and networking, the Unity community is super helpful and abundant. I really think Unity is the reason that VR is so successful. Their spiel is all about democratizing game development, and not to sound like a Unity salesman, but I think they nailed it. That's exactly what it does. Everyone can program a game all of a sudden. You can use Google to learn the basics really quickly and then it's just a matter of focusing your time on the aspects of your project you’d like to supplement with more knowledge.


What would be your dream virtual reality development project?


Honestly, my dream project would be something like No Man's Sky in VR: to create an infinite, open world with procedurally generated environments and simulate as many real-world properties as possible.


So I’m guessing you’ve played No Man’s Sky?


Yes, I'm loving it. It's so good. It's beautiful. Working in VR for the past couple of years, essentially all I do at Iris is procedural generation. We have this data and we're generating virtual worlds from it. So to see that idea applied with an element of randomness added, and then expanded to eighteen quintillion planets instead of a single building? It's so cool. It's also insane that the developers who created the game have not seen the entire world they've created. And no one will see the entire world.

As someone who’s so familiar with virtual reality development, do you find yourself constantly thinking about the development process when you’re in an environment or playing a game? Do you feel kind of snobby about it?


Sometimes, but I wouldn't necessarily say snobby. When I play non-VR games, I see a lot of the tricks or shortcuts developers use, and I'll notice graphical failures or glitches. But in VR I don't really notice, because it's such a visceral, immersive experience that I don't think about the fact that it's a piece of technology.


I think as a medium, VR is powerful enough to detract from any sort of self awareness. When you’re in VR you don't think “Here I am wearing a headset and holding two plastic controllers,” you're thinking “Here I am holding two medieval swords. What's going on around me? What do I do?”


What’s the most visceral experience you’ve had in VR?


It was one of the very first demos Oculus was showing, for a game called EVE: Valkyrie. I put on the Oculus headset, looked around, and I was in this spaceship cockpit. When I looked down I saw my torso and my legs below my hands holding the controls, and I instantly felt a sense of presence. And then the cockpit lights up, you're in this massive tube, and you launch out of the tube, and the feeling of speed is crazy.


I put people through this demo all the time and they lean back in their chairs because they feel like they're going so fast. It's amazing. And so it shoots you out the end of the tube and instantly you feel like you can see forever, because it's space. I don't know how they created this sense of depth and scale but you just feel so small in this massive expanse. After that, I just wanted to do it a million times.

When you’re talking about feeling the expanse of space in this game, is something like that up to a designer or a developer, or a combination of both?


I would say it's a combination of both. You have the basics on the technical side, and the precedent is established: the idea of a skybox.


Every game has a skybox, which is whatever sits at the very extent of what you can see. But to create a better sense of depth, you need the designer to say, "Okay, we need to make things in the distance move very slowly." This creates a sense that things aren't just at an arbitrary distance, that they're actually very, very far away. And then you make the stars glitter, because if you just have a static image filling the distance, then no matter how far away it is, it's going to feel stale, like you're existing inside a sphere instead of an open space.
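The "distant things move slowly" trick is parallax, and the core relationship is simple enough to sketch. This is a rough, illustrative calculation with made-up distances, not anything from EVE: Valkyrie's actual implementation: for the same sideways camera movement, apparent shift falls off with distance, so truly distant objects appear almost frozen.

```python
def apparent_shift(camera_move, distance):
    """Approximate on-screen shift: closer objects shift more."""
    return camera_move / distance

camera_move = 10.0  # meters the camera moves sideways

nearby_ship = apparent_shift(camera_move, distance=100.0)     # 0.1
far_station = apparent_shift(camera_move, distance=10_000.0)  # 0.001
distant_star = apparent_shift(camera_move, distance=1e9)      # ~0

# The star's shift is effectively zero, which reads as "very far away"
# rather than "painted on a sphere around the player".
print(nearby_ship, far_station, distant_star)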


So if you were hired for that job and the client said, "Make us a galaxy that feels like it has robust depth," would you know how to tackle that?


Yeah, I think I could implement anything a designer gave me, but I'm not sure I could design it myself, because there's so much thought that goes into that stuff. When you hear how a team built something, you think, "Oh, of course," but to come up with this stuff independently, to really identify the things that make people feel like they're in another environment, requires a very special kind of person, and I am not that person (laughs).


So from a development standpoint, the way they created that specific feeling of depth was an incredibly creative endeavor?


Yeah. And it's a combination of a lot of very small things, which reminds me: I think a lot of the feeling of depth in that game, ironically, came from the audio.


Audio is huge in VR for a sense of immersion and space, because you don't just have static left and right channels in your headphones; you've got a full 360 degrees. For instance, if there is sound coming from above you and your head is cocked to one side, the sound is going to reflect off your shoulder, bounce into your ear, and hit the upper corner. There's crazy, very minute stuff we don't think about in real life.


Check out Virtual Barber Shop on YouTube when you get a chance. It's a great example: you can hear a buzzer on the back of your neck and scissor snips around your ears. Somehow, even though the audio is only in your ears, it gives you a sense of how big the room is, and that's so weird. You're not using your eyes, but you have a sense of scale.

How does a virtual reality developer deal with something like audio?


If you're using the Unity engine, for instance, there is an entire audio system that you can hook into from your main project code. You can configure all sorts of different properties, like physical materials or how much a sound reflects, and then you have to provide it with the user's head rotation so it can filter the audio properly.


But I would say that at the end of the day it’s another central component of your project in the same way that graphics or input would be.
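Feeding head rotation into the audio system, as described above, can be sketched with a toy constant-power pan law. This is a simplified illustration, not Unity's actual API (real engines layer HRTF filtering, occlusion, and reverb on top), but the core input is the same: the source direction relative to where the head is facing.

```python
import math

def stereo_gains(source_azimuth_deg, head_yaw_deg):
    """Return (left_gain, right_gain) for a sound source, given the
    source's direction and the listener's head yaw, in degrees."""
    # Angle of the source relative to where the head is facing.
    relative = math.radians(source_azimuth_deg - head_yaw_deg)
    # Map the relative angle to a pan position in [-1, 1]
    # (-1 = fully left, +1 = fully right).
    pan = math.sin(relative)
    # Constant-power pan law keeps perceived loudness steady.
    angle = (pan + 1) * math.pi / 4  # 0 .. pi/2
    return math.cos(angle), math.sin(angle)

# Sound dead ahead: roughly equal gain in both ears.
left, right = stereo_gains(source_azimuth_deg=0, head_yaw_deg=0)

# Turn the head 90 degrees left: the same source is now fully to
# the listener's right, so the right channel takes over.
l2, r2 = stereo_gains(source_azimuth_deg=0, head_yaw_deg=-90)
```

The point is that head rotation is a live input to the audio pipeline: every time the user turns, the gains (and, in a real engine, the filters) are recomputed.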


Do you have a specific headset in mind when you’re tackling a project in virtual development? Or are you trying to build things that are compatible with all of them?


Yeah, we're working towards 100% hardware agnosticism. The more headsets that can run our software, the better. But it takes a lot of work. Some of it is technical: figuring out which things to track and which things to load at runtime based on the headset the person has plugged in. But a lot of the choices and hurdles come down to basic hardware differences. The HTC Vive has hand controllers, and the Oculus Rift will have controllers eventually, but right now its primary input method is an Xbox gamepad.


So if you want to create an agnostic experience, you can't rely heavily on hand tracking because there is no way to replicate that on Oculus Rift.


A couple of light questions before we close this out. As someone who knows very little about development, could you tell me how long it would take to code a virtual version of Flappy Bird?


About four hours.


What about an open-ended world similar to that in The Matrix?


It depends on how deep you want to go with how the world defines itself versus how you define the world.


I could make it as simple as walking down a hallway, and at the end of the hallway you either come to a right-hand turn, a left-hand turn, or a T junction where you can choose to go left or right.


I can have that randomized, so that the game generates a hallway and adds one of those three pieces to the end of it, each leading to another hallway. That's a really simple example of how you can use randomization to create a virtually open-ended world. You could add complexity forever, until you had a perfectly accurate representation of our world, but to do that you'd need to understand our world better than any human does.
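The hallway idea above can be sketched in a few lines. This is a minimal illustration of the technique, not code from any shipped game; the piece names are made up:

```python
import random

# Each hallway ends in one of three pieces, and every piece leads
# to another hallway, so the "world" extends as long as you walk.
PIECES = ["left turn", "right turn", "T junction"]

def generate_corridor(num_segments, seed=None):
    """Return a list of (hallway, end-piece) pairs."""
    rng = random.Random(seed)
    return [("hallway", rng.choice(PIECES)) for _ in range(num_segments)]

# Same seed, same maze: procedural worlds are random but reproducible,
# which is how games like No Man's Sky can share one universe without
# storing eighteen quintillion planets on disk.
layout = generate_corridor(3, seed=42)
print(layout)
```

Adding complexity is then a matter of enriching the piece set and the rules for combining pieces, rather than hand-placing every hallway.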
