Augmented Reality: Apple's Revolutionary Offering

June 28, 2017

Summary

 

  • Augmented reality is the next frontier for mobile.
  • Apple just released their ARKit to developers.
  • We discuss the features and implications of this release.

 

Investors need to think about Apple's (NASDAQ:AAPL) future. Since we see no new super products coming down the pike (currently, any major car products are vaporware), we need to look to the current product line for growth, or at the very least to maintain its current cash-generating position.

 

Since December 2014, the iPhone has accounted for over 60% of Apple's gross revenue in most quarters (most recently 63%). In its early years, the iPhone completely redefined the product category (today, the previous types are termed "feature phones"), and growth was explosive. Now, with most developed markets near saturation and high-end phones out of the price range of many in developing nations, growth of all smartphone sales is slowing, high-end models particularly. Thus, iPhone sales growth has been extremely modest for the last few years, even negative at times.

 

Apple has weathered this change by continuing to charge premium prices for its product (against the predictions of many naysayers). It can only do this for two reasons.

 

  1. Its design and build quality are unsurpassed, and
  2. It's always on the cutting edge of new technology.

 

For these two reasons, customers feel that there is value in the iconic product.

 

The Big Question

 

This leaves the investor with the important question:

 

  • What will make Apple grow?

 

Or, for the dividend investor who is less interested in growth, the more modest framing would be:

 

  • What will allow Apple to maintain its leadership and margins?

 

Assuming that quality continues as it has throughout Apple's long history, we must focus on reason number two above. That is:

 

  • How will Apple maintain leadership in new technologies?

 

In the end, this is the important question to the investor.

 

Augmented reality is one area in which Apple intends to take an unprecedented lead. This is what we will address here in detail. My goal is to provide a deeper understanding of exactly what Apple has just done and why it is so revolutionary.

 

WWDC

 

A few weeks ago, Apple held its Worldwide Developers Conference (WWDC), opening with the keynote address where Tim Cook and friends introduced new features across the product line. Many focused on the iPad Pro, the new iOS and macOS features, or the HomePod speaker. For the long term, however, the real news for the investor is the augmented reality capabilities introduced (and the related machine learning).

 

In an earlier article, I explained the importance of augmented reality as well as some of the technical details that make it so difficult. I discussed as well some of the many different aspects of the industry, and different players, that will be affected. Here I discuss how Apple's system is groundbreaking and what it means in terms of opportunity.

 

Ramifications

 

This move by Apple will give it a tremendous lead over the competing systems: Android, promoted by Alphabet (GOOG) (NASDAQ:GOOGL), and Windows Mobile from Microsoft (NASDAQ:MSFT). It is unlikely that either will come up with a truly competitive system any time soon.

Augmented Reality Systems – Some Major Issues

 

Problem #1 – Objects in space

 

  • Essentially, AR is the ability of a visual system to put virtual, computer-generated objects into a view of the real world.

 

For a simple data overlay, this is not a complex issue: the text is formatted to the screen size and displayed. For interactive objects that are supposed to appear as part of the scene, however, it is extremely complex. If you have a baby dragon flying through your living room, you want it to sit on the sofa, not be positioned half submerged in the cushion. If you have a virtual mechanical part and want to illustrate how to position it in a real machine, you want it to appear in front of things behind it and behind things in front of it. Additionally, you want the image to change size on the screen to reflect its apparent distance and angle of view from the user's perspective. To do all this, you need to know where each surface is in the current visual space.

 

Think about it. In a virtual reality world, such as a game or flight simulator, the program knows exactly where every object and every surface is. This is part of the pre-programmed environment.

 

In AR, however, this is not the case. If there is a table, for example, the system must recognize that it is there, and more importantly, what its extent is in space. This is something our human visual system does for us automatically. We seldom have to think about the question "Where is the surface of that table?" And so we take it for granted, never realizing just how extraordinarily difficult this is.

 

A computer system, however, must compute this from visual images. Not only that, but for the most part, it must compute this from simple, two-dimensional images.

 

If we place a virtual glass of water on the table, we want it to appear to be resting on the surface, supported by it. For this we need to know the extent of the table, and know it from every angle as we move the device around in real space.

 

So, the most difficult problem for an AR system is that it must first interpret its environment from the visual images given to it by the system camera. This is one area in which AR and AI overlap – the visual recognition of objects in a scene. It is an extremely difficult problem that requires significant compute resources.
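To make this concrete, here is a minimal sketch (in Swift, using ARKit's standard plane-detection API) of how an app receives the surfaces the system has recognized; the view-controller wiring around it is assumed:

```swift
import ARKit

// Minimal sketch: ask ARKit to find horizontal surfaces and report each one.
// ARKit does all the scene interpretation; the app just receives anchors.
class PlaneFinder: NSObject, ARSCNViewDelegate {

    func startPlaneDetection(in sceneView: ARSCNView) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal   // detect tables, floors, etc.
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Called whenever ARKit has recognized a new surface in the camera feed.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        // The anchor carries the surface's center and extent in world space.
        print("Surface found: \(plane.extent.x) m x \(plane.extent.z) m")
    }
}
```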

 

Problem #2 – Permanence In Space

 

This is related to the detection of objects, but it is a distinct problem in its own right. It is not trivial to place an object in a real scene and then have it stay in precisely the same position as we move around it.
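A sketch of how ARKit exposes this: rather than drawing at a screen position, the app registers an anchor at a fixed pose in world coordinates, and the tracking system keeps the attached content there as the camera moves. (The transform passed in is whatever pose the app has computed.)

```swift
import ARKit

// Sketch: pin content to a fixed point in the real world. Once the anchor is
// added, ARKit's world tracking keeps it in place as the device moves around.
func pinObject(in session: ARSession, atWorldTransform transform: simd_float4x4) {
    let anchor = ARAnchor(transform: transform)  // a fixed pose in world space
    session.add(anchor: anchor)                  // ARKit now tracks it for us
}
```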

 

Problem #3 – Lighting

 

The next big problem in AR is lighting. This may not be important for some applications, but if you want your virtual object to look somewhat realistic in the real world, it needs to appear to be lit by the existing light sources in the real-world environment. That might be as simple as a single bare light bulb in a room or sunlight on a beach, or as complex as a nighttime street with multiple streetlamps, strung Christmas bulbs, and storefront windows.

 

Placing an object into the real world realistically means it must appear to be lit by all these sources. So an AR system needs to analyze various shadows cast by existing objects in the scene in order to compute the locations and qualities of all the sources. It even needs to account for light cast by virtual sources.
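ARKit publishes its estimate of the scene's illumination with every camera frame. The sketch below (the light-node setup around it is assumed) applies that estimate to a virtual light so a model appears lit like its real surroundings:

```swift
import ARKit
import SceneKit

// Sketch: read ARKit's per-frame light estimate and apply it to a virtual
// light, so the model's brightness and warmth track the real-world lighting.
func updateLighting(from frame: ARFrame, on lightNode: SCNNode) {
    guard let estimate = frame.lightEstimate else { return }
    lightNode.light?.intensity = estimate.ambientIntensity           // ~1000 lumens is neutral
    lightNode.light?.temperature = estimate.ambientColorTemperature  // in Kelvin
}
```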

Problem #4 – Apparent Size

 

A fourth problem in AR, though not as difficult as the first two, is the visual size of the objects placed into the scene. As we all know, an object that is distant has a smaller visual image than when it is closer. So, when placing a virtual object into a view of the real world, we need to know its apparent distance from the viewing device (iPhone, etc.) and the angle of view of the camera.

 

Related aspects include the geometric location of the viewer in relation to the scene. Is the person tall and looking down at the Pokémon on the sidewalk? Or is she a child, viewing the creature from eye level? The current image of both the real scene and the virtual additions will be defined by this three-dimensional location.

 

Let's say you (as a programmer) want to place an avatar at a point on a road. In reality, that is just a point on a flat, two-dimensional screen. Depending on the angle of view, that point might be three feet away or 20 yards away, and you need to resize the avatar accordingly. This is a reasonably simple transform mathematically, but before you can apply it you need to know the apparent distance, which is garnered from the description of the scene from problem #1 above.
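As a toy illustration of that transform: under a simple pinhole-camera model, the on-screen size falls off linearly with distance. (The focal length below is an illustrative value; a real system reads it from the calibrated camera intrinsics.)

```swift
// Toy pinhole-camera illustration: projected size shrinks linearly with distance.
// The focal length is an assumed, illustrative value.
func apparentSize(realSizeMeters: Float, distanceMeters: Float,
                  focalLengthPixels: Float = 1500) -> Float {
    // projected size (px) = focal length (px) * real size / distance
    return focalLengthPixels * realSizeMeters / distanceMeters
}

let near = apparentSize(realSizeMeters: 0.5, distanceMeters: 1.0)   // ~750 px
let far  = apparentSize(realSizeMeters: 0.5, distanceMeters: 10.0)  // ~75 px
```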

 

ARKit

 

ARKit is Apple's API for building AR applications for iOS devices.

 

An application programming interface is a set of programming routines that allows a developer to perform complex actions that have been programmed previously by experts and tested over time. I explain the concept in detail here.

 

Essentially, an API does all the hard work for the programmer. That is probably truer here than of any other API. ARKit provides services for each of the problems listed above. In each case, the incredible work of interpreting the real-world scene, including all the artificial intelligence programming behind it, is hidden under the hood; the programmer just requests a description of nearby surfaces. Placing a model into the scene subjects it to the automatically detected light sources and resizes it as it moves in relation to the viewer, or as the user moves the viewing device around it.
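For example, "requesting a description of nearby surfaces" can be as small as a single hit-test call, which asks ARKit where a screen point meets a detected real-world plane. (A sketch; the calling context is assumed.)

```swift
import ARKit

// Sketch: one call asks ARKit where a touched screen point meets a detected
// real-world surface, returning that point's pose in world coordinates.
func surfacePosition(at screenPoint: CGPoint, in sceneView: ARSCNView) -> simd_float4x4? {
    let results = sceneView.hitTest(screenPoint, types: .existingPlaneUsingExtent)
    // The first result is the nearest detected surface along the camera ray.
    return results.first?.worldTransform
}
```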

 

If you send a model of your virtual object into the system and place it on a real tabletop or floor, the system will see that it is sitting on the surface, and you can wander around it and view it from any angle. It will appear to rest in precisely the same place as you move around. Even if you leave the room and return through another door, you will find your virtual object precisely where you left it. (Well, almost precisely: in one case, the programmer walked in a circle through several other rooms, and the floating airplane shifted about one foot to readjust itself when he got back.)

 

All these capabilities are provided by ARKit and are available to any iOS developer, and the programs will run on most existing iOS devices (some limitations with pre-iPhone 6s devices).

 

In one how-to-program video, Brian Advent shows us how to make a simple game that places the sample spaceship at a random point in the viewing field. The user then touches the screen; if you touch the ship, it disappears and a new one appears. Brian builds the app and runs it in literally less than 20 minutes. This is, of course, given an existing model, and there is no motion in the game, but all the code for the app project was developed from scratch in that time, using ARKit. The spaceship appears to float in space in the real world. You can change your angle of view, even walk around it, and it will appear to hover motionless. All this in 20 minutes. It is that simple.
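Here is a hedged sketch of that game's core logic (the `ship` node and the respawn ranges are my assumptions, not Brian Advent's actual code): a SceneKit hit test checks whether a touch lands on the spaceship, and if so, the ship is respawned at a random nearby point.

```swift
import ARKit
import SceneKit

// Sketch of the tap-the-spaceship game logic described above. The ship node
// and the respawn ranges are illustrative assumptions.
final class ShipGame {
    private let sceneView: ARSCNView
    private let ship: SCNNode

    init(sceneView: ARSCNView, ship: SCNNode) {
        self.sceneView = sceneView
        self.ship = ship
    }

    func handleTap(at point: CGPoint) {
        // SceneKit hit test: which virtual nodes lie under the touch?
        let hits = sceneView.hitTest(point, options: nil)
        guard hits.contains(where: { $0.node === ship }) else { return }
        // "Destroy" the ship by moving it to a new random spot in front of the user.
        ship.position = SCNVector3(Float.random(in: -1.0 ... 1.0),
                                   Float.random(in: -0.5 ... 0.5),
                                   Float.random(in: -2.0 ... -0.5))
    }
}
```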

 

Example

 

The following is an example of what ARKit can do. The dancing virtual girl is a model developed with a popular program, MikuMikuDance, here.

 

Notes:

 

  1. This was recorded live, in a real street.
  2. The blue rectangles show the AR system defining the sidewalk surface.
  3. Note how the girl always appears to be right on the sidewalk, never sunk into it or floating over it! This is really extraordinary.
  4. Note how objects in the background, such as cars passing by, are effortlessly blocked by the virtual girl.
  5. Note that the viewer can move in close, back away, or pass behind the dancer. In every case, the model is shown from the right perspective. A close-up of the hand reveals more detail.
  6. Note how the shadows are completely realistic – perhaps a bit light in comparison to that of the man who walks by, but perfectly in sync with the dancer. All this is computed on the fly and projected onto the real-world surface. The angle of the viewer does not matter; even as he moves to the other side, the shadows stay true to the real-world light source.

 

The important thing is how easy it is to make this "scene." In the one-hour workshop, Introduction to ARKit, the presenters were able to go through all the steps necessary to do so.

 

Reviews

 

In a TWIT.tv webcast, Apple ARKit Will Revolutionize AR, Andy Ihnatko remarks that for the programmer, "It means so much of your work is done."

 

Rene Ritchie adds:

 

A series of technologies they announced… that could be transformative on their own, but when put together… I mean how long is it before that airplane is using machine learning to fly around the room without hitting objects?

 

That remark is probably a very realistic look into the not so distant future.

 

Meanwhile, a Motherboard article quotes:

 

"The most impressive aspect of ARKit is that it tends to just work," said Cody Brown, founder of virtual reality production studio IRL, in an online interview with Motherboard.

 

"Another incredible aspect of ARKit is how it handles lighting adjustments in real time, continued Brown. "I can only imagine the math and magic underneath this tech to make it work."

 

Leo Laporte from TWIT.tv notes:

 

Unlike Google's Tango – you already have the hardware that you need.

 

It is critical for the investor to understand this point.

 

Not only is the programming made easy, but the end user does not have to buy any other hardware. All the functionality is available on any iPhone above a model 5. (The model 5's CPU is only 32-bit, not the required 64-bit architecture. Also, the 5s and 6/6 Plus will not have access to the most precise positioning system.) Full device translocation information is available on iPads with an A9 chip or higher (2017 and on). This is unlike other systems that require goggles or other special hardware.
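In practice, an app needs only a one-line runtime check to confirm that the device's chip supports world tracking before starting a session. (A minimal sketch.)

```swift
import ARKit

// Sketch: confirm at runtime that this device can do full world tracking
// (roughly, an A9 chip or later) before starting the AR session.
func startARIfSupported(on sceneView: ARSCNView) {
    guard ARWorldTrackingConfiguration.isSupported else {
        print("This device does not support ARKit world tracking")
        return
    }
    sceneView.session.run(ARWorldTrackingConfiguration())
}
```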

 

Aside: How Does Apple Do This?

 

This is a good question. I believe Apple has been preparing for AR for a long time. Apple surprised the tech world with the A7 chip in the iPhone 5s – the first 64-bit processor for mobile phones. It caught the cellular world flat-footed, and all the rest scurried to catch up. But Apple not only had the then-current state of computing in mind; they were building for the future. The 5s had Touch ID so that it could be a test platform for Apple Pay (although it was limited to the iTunes and App Stores). Similarly, 64-bit CPUs and massive GPUs (graphics processors) were in part preparation for the AR and AI computing needs.

 

A lot of AI processing is done on GPUs; they are perfect for most types of neural network and deep learning systems. Apple's work in this area clearly has been leading in this direction. This is Apple's way of thinking strategically and preparing for the future. It is a hidden strength that the investor should appreciate.

 

Summary

 

What Apple has done is revolutionary in the field. They have provided developers with a very easy-to-use system for incorporating virtual elements into real-world views. They have given anyone an extraordinary tool that does most of the hard work of interpreting the visual scene, so that 2D or 3D models can simply be plopped into the real world. The whole methodology of discovering surfaces is also novel.

 

When the iOS 11 update is available in the fall, it will instantly become by far the largest AR platform. Obviously, there are limitations to the current software, and developers will find them. But just as assuredly, Apple will address them and expand with new features.

 

An interesting point is that while the iPhone 6 and earlier models are limited in the tracking they can do (presumably due to hardware issues), they will be retired in September when the new models are introduced. Thus, in September, every model for sale by Apple will have full AR capabilities.

 

Apple has just created yet one more very significant moat around the iOS platform.

 

In September, Apple devices will automatically become the default devices for AR, while Android rushes to catch up. Yet Apple will remain far ahead for, at the very least, three years. Remember, these are all very hard problems that Apple has solved.

 

  • They have solved them in a unique way,
  • They have provided the services in a very simple manner, and
  • They work.

 

This will not be easily duplicated, particularly because the software depends so heavily on the hardware to work efficiently.

 

And another thing. Let's face it, Pokémon Go is maybe OK on an iPhone - the Pokémon are just small creatures on your screen - but other uses of AR will be so much better on an iPad. So Apple has just given people a great reason to go out and buy an iPad.

 

Some other interesting demos and technical videos are available here.
