Augmented reality will be a critical technology.
Apple just released a developer tool, ARKit.
We examine how this will drive revenue.
Investors need to think about the future of Apple (AAPL). Since we see no certainty of any new blockbuster hardware products coming down the pike, we need to look to the current product line for growth, or at the very least to maintain its current cash-generating position.
Apple has weathered the slowing of smartphone sales growth by continuing to charge premium prices for its products (against the predictions of many naysayers). It can do this for two critical reasons: the design and build quality of the products, and the fact that it is always on the cutting edge of new technology.
For these reasons, customers feel that there is value in the iconic product.
How will Apple maintain leadership in new technologies?
In the previous post on this topic, I explained Apple’s new software developers’ tool for augmented reality, the ARKit. I went into detail on what the API, or tool kit, provides for developers, and why it is such an important breakthrough.
First, what is AR (for those who missed the previous post)?
Essentially, AR is the ability of a visual system to put virtual, computer-generated objects into a view of the real world.
These objects or models may be static or dynamic. They may be isolated or interactive.
Revolutionary is all fine and dandy, but the investor wants to know how a free service is going to drive revenue. The answer to that is two-fold:
Hardware and software.
Let’s make no bones about this, when iOS 11 is released in September, it will enable hundreds of millions of iPhones and iPads to be AR viewers, and it will instantly become the largest AR platform on earth. Additionally, developers will have had the toolkit for three months, so there should be a rash of new apps. The integration with Unity and Unreal gaming engines will greatly accelerate the development of new games, so by the new year there will most likely be several high level games available to add substance to the slew of simplistic ones that will come right out of the gate.
All this will drive demand for iPhones that will include both current owners trading up and switchers looking for the new experience. My feeling is that switchers will provide an interesting phenomenon. Typically, the holiday season provides a huge sales bump, and sales tail off in the following quarters. I believe that there will be many people who will be moved to switch only after seeing the technology personally on friends' iPhones. This will lead to greater adoption in the winter quarter than is usual, strengthened by a sales build-up in China, which is not affected by Christmas.
In many respects, Apple has lost its differentiation, particularly in China. Fans of the iPhone do see benefits, but there is no really compelling, highly visible factor that differentiates. The AR capabilities will be strong, obvious, and desirable enough to drive more sales there.
The user will not need any specialized hardware, only an iPhone or iPad with a sufficiently advanced processor. (An A7 or A8 will work but will not support the more advanced World Tracking feature, only basic tracking, so the iPhone 5s and 6 models will be limited.)
The new AR features will almost assuredly drive an even greater upgrade push for this year’s new models. It is commonly thought that Apple will release three new models this year: the 7s, 7s Plus, and a third that some call the iPhone 8.
The question is – what will be the differentiators? My guess is that there will be several special features, but one of them will be that the iPhone 8 will be billed as a super advanced AR and AI phone. To promote this, there will be an extra processor.
Typically, Apple has a new processor each year that is significantly more powerful than its predecessor. This will be the A11 chip. My guess is that, along with the iPhone 7s/Plus, the iPhone 8 will sport the A11, but also a separate processor – a Neural Engine chip.
In May, Bloomberg reported on a possible AI processor called Neural Engine. They wrote:
Apple’s operating systems and software features would integrate with devices that include the chip. For example, Apple has considered offloading facial recognition in the photos application, some parts of speech recognition, and the iPhone’s predictive keyboard to the chip, the person said. Apple also plans to offer developer access to the chip so third-party apps can also offload artificial intelligence-related tasks, the person said.
The interesting thing here is that at WWDC, Apple announced, alongside ARKit, another framework called Core ML – for machine learning. The system is designed to help with many AI tasks, particularly neural network and deep learning systems that would benefit from this type of chip. If there really is a special iPhone 8, then this would be a strong feature.
There already is a dual camera on the 7 Plus phone, so it is possible that this feature will move to the regular 7s this year. On the 7 Plus, the second camera is a 2x telephoto. Now, one use for dual cameras is to accurately detect distance. I am not sure if this works well if one is a telephoto and the other normal angle of view.
On the iPhone 8, however, I believe the cameras will have enhanced quality and be specifically configured to aid in AR scene recognition. This will provide better data for scene analysis, and quicker and more efficient processing.
But it is possible that a stereo camera is not necessary for ARKit to work well. Basically, the system works the way your two eyes do. With a distance between the two lenses, the view of the world is slightly shifted, and the shift is inversely proportional to the distance: close points shift significantly more than far points. Your brain, or the device's processing system, can therefore compute the distance of an object from how much the two images differ.
The ARKit, however, can do the same by selecting two frames in time while the device is moving. Thanks to the sensitivity of the inertial sensors, your iPhone or iPad knows precisely how far, and in precisely which direction, it has shifted, and can stereo-optically compute point distance as well – perhaps even more accurately than a stereo camera with just a few centimeters between the lenses.
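The geometry described above reduces to a similar-triangles formula: depth equals focal length times baseline, divided by the disparity (the pixel shift between the two views). The baseline is either the separation between two lenses or, in the motion-stereo case, the distance the device has moved between frames. A minimal sketch in Python – the focal length, baseline, and disparity values are illustrative, not actual iPhone specs:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulate a point's depth from the shift (disparity) between two views.

    Works identically for a two-lens stereo camera (baseline = lens separation)
    and for motion stereo (baseline = device translation between two frames,
    known from the inertial sensors).
    """
    if disparity_px <= 0:
        raise ValueError("the point must shift between the two views")
    return focal_length_px * baseline_m / disparity_px

# Close points shift more than far points: with the same 10 cm baseline,
# a larger disparity means a nearer object.
near = depth_from_disparity(focal_length_px=1000, baseline_m=0.10, disparity_px=100)
far = depth_from_disparity(focal_length_px=1000, baseline_m=0.10, disparity_px=10)
```

Here `near` works out to 1.0 meter and `far` to 10.0 meters, which is why a device that knows its own motion precisely can triangulate distances without a second lens.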
Already the processing of the AR content is impressive. The system captures a still, analyzes it for surfaces and lighting, provides hit detection and point-distance information, then allows the app to draw virtual content into the display – all in real time while the i-device is in constant motion. But a Neural Engine processor and dual cameras would increase the capability even further.
In any case – AR will drive strong incentives to upgrade iPhones, or switch to them.
The iPad has not done well over the last few years. Even in the last quarter, year-over-year sales fell significantly. But the AR demos have clearly shown us one thing: some simple AR apps will work fine on an iPhone, but any complex interaction really needs a bigger screen.
Pokémon Go, with its limited virtual content, does not require a wide screen. But for a truly interactive game, or to view any kind of panoramic activity, an iPad is definitely superior. Check the WWDC demo below; skip to time 1:00 to see the actual game demo. This clearly would not work well on a smartphone.
Thus I believe we will see a real revival in iPad sales once this technology is well known and good apps are available. Through its MobileFirst program, IBM (IBM) will undoubtedly come out with important enterprise-level AR products. So sales will come from a number of areas.
It’s indisputable that a smartphone is not the best window into an AR experience. A larger tablet, while providing more detail, still is an object with limited extent that you need to hold in your hand. The future definitely is glasses.
Currently, there are several players in the AR/VR headset business. The most notable is Microsoft (MSFT) with the HoloLens, which provides a pretty amazing experience, and the company deserves kudos for its efforts. In its current configuration, however, the headset is both too bulky and too expensive: at $3,000-$5,000, it is not ready for the mass market. Clearly, all this will improve with time, but when remains to be seen.
Apple itself is reported to be working on its own glasses. We have little idea as to the ultimate specs, but we can imagine they will lead the pack in comfort and responsiveness. The company just bought SensoMotoric Instruments, a German eye-motion-detection company, so it is likely this type of control will appear in the product.
Gene Munster of Loup Ventures writes:
Our best guess is that Apple Glasses, an AR-focused wearable, will be released mid FY20. This is based on the significant resources Apple is putting into AR, including ARKit and the recent SensoMotoric Instruments acquisition. We believe Apple sees the AR future as a combination of the iPhone and some form of a wearable. With an average sale price of $1,300 we expect initial demand to be limited at just over 3m units compared to 242m iPhones that year. This equates 2% of sales in FY20 increasing to 10% ($30B) in FY22 when we expect the ASP to be about ~$1,000.
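Munster's figures are easy to sanity-check. Taking "just over 3m units" as 3.1 million for the arithmetic (my assumption, not his exact number):

```python
# FY20 projection: just over 3m Apple Glasses units at a $1,300 ASP.
units_fy20 = 3.1e6
asp_fy20 = 1300
glasses_revenue_fy20 = units_fy20 * asp_fy20  # roughly $4B

# FY22 projection: $30B of glasses revenue at an ASP of about $1,000
# implies the unit volume Munster has in mind.
implied_units_fy22 = 30e9 / 1000  # 30 million units
```

So the FY20 case is roughly a $4B business next to 242m iPhones, growing to an implied 30 million units by FY22 – a roughly tenfold unit ramp in two years, which shows how aggressive the back half of that forecast is.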
He envisions that glasses will replace the iPhone altogether in a decade.
The other half of the revenue equation is software. Since Apple does not charge for the use of its developer tools, there are several ways it can earn revenue from software.
The first would be if it made its own apps to sell. While this is a possibility, it is not likely. Even the iWork suite is now free on iOS devices and on the web. But it is possible.
More likely, Apple will continue to rely on third-party developers to produce apps and get its income from its percentage of App Store sales. This will be significant: App Store revenue is increasing, and services are growing as a portion of total revenue.
Revenue will increase for two reasons. First will be existing users who buy new titles and in-app purchases. The other will be the increase in sales to new customers as the AR platform draws them in.
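Apple's standard App Store split is 70/30, with the commission dropping to 15% on subscriptions retained past the first year, so the platform's take from AR app sales can be sketched directly (the $1B gross figure below is purely illustrative):

```python
def apple_cut(gross_sales, subscription_after_year_one=False):
    """Apple's share of App Store gross: 30% standard, 15% for
    subscriptions retained past the first year."""
    rate = 0.15 if subscription_after_year_one else 0.30
    return gross_sales * rate

# A hypothetical $1B of AR app and in-app purchases would yield Apple ~$300m.
cut = apple_cut(1_000_000_000)
```

This is why a free developer tool can still be a revenue engine: every paid AR app, in-app purchase, and subscription it inspires flows through this split.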
Apple’s new ARKit system is not without its limitations. One important limitation is that it does not understand very much of its environment. It is very good on the horizontal surfaces that it detects, and these are certainly the most important physical features. But it does not have a complete model of your room (or wherever you are), and it is unclear to what extent it even recognizes walls. The demo with the warrior in the bedroom does appropriately obscure the figure when walls are in the way, but ultimately, we want an AR system to be aware of existing physical objects so that it can interact with them – or at the very least, not fly through them.

The key word here is “ultimately.” There are a few reasons why we do not have this capability now. One is that algorithms may not yet be available for a good enough interpretation. Second, the developer resources are not available: there are a limited number of capable engineers, and they are all busy working on the problems that they just did solve. These problems are not solved by run-of-the-mill community college programming graduates (no offense intended, kudos to you all); they require higher-level training and specialization.
Finally, one reason that higher level environment knowledge is not generated is likely to be compute power. It is totally amazing that a two-year-old iPhone can interpret what it can, then process the updated video frame with virtual images, and do all this in real time at 60 frames per second. The processing power to do greater interpretation of the external world is likely just not possible at this time.
In his presentation of Serif's Affinity on the new iPad Pro, with the A10X processor, Ash Hewson noted that “Because of (Apple’s graphics API) Metal, we can achieve performance more than four times that of an i7 quad-core desktop PC.”
Apple is able to squeeze extraordinary performance out of its hardware due to its design expertise, and the fact that it has control over both the hardware and the software. No doubt this will lead to improvements in the AR system in the future.
Tango (originally Project Tango) is a similar AR/VR platform from Alphabet (GOOG) (NASDAQ:GOOGL). It does greater analysis of the environment (although I believe this is sent to a cloud server for processing), so it has more of a true 3D concept of a room than ARKit does. This allows virtual objects to interact with walls, objects, etc., instead of just horizontal surfaces.
The problem with Tango is that it is not as simple to use as ARKit. It also requires special hardware, and so is available for only one tablet and one smartphone.
With the addition of the ARKit developers toolkit, Apple has created yet one more very significant moat around the iOS platform. When iOS 11 is released in September, it will instantly become the largest AR platform in the world, with literally hundreds of millions of potential users.
In spite of some limitations, it provides the easiest-to-use development platform, which will attract thousands of AR apps. The existence of these apps will not only be a new source of direct sales revenue, but will bring new customers to the iPhone and promote the iPad.
It is impossible to predict revenue directly attributable to AR, but it is clear it will be substantial.
Your comments are appreciated.
Disclosure: I am/we are long AAPL, IBM.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.