How Oblong Helped IBM Build 'Immersion Rooms'

November 21, 2018
Above: IBM Immersion Room at the Watson Experience Center.
Image Credit: IBM

 

John Underkoffler is best known as the science adviser for the landmark sci-fi film Minority Report, in which actor Tom Cruise uses “data gloves” and gesture controls to manipulate a transparent computer. As the CEO of Oblong Industries, he has been trying to bring that vision to life.

 

He founded the company in 2006 and in 2012 launched Mezzanine, software that lets enterprise collaborators interact with screens via gestures. I visited Oblong’s warehouse in downtown Los Angeles in May 2017. There, I saw huge video walls curved to immerse the user inside a visual experience, making data and the connections between objects visible at a glance. He showed me how you could grab an image and seamlessly toss it from a computer screen to a big video wall.

 

Now the company has shown off the technology with its partner, IBM. Big Blue recently took the wraps off its IBM Watson Experience Centers, which feature an “immersion room” with 45 high-definition displays totaling 93 million pixels, all acting as one screen.

 

One of the centers, the IBM Watson Immersive AI Lab, opened in Austin, Texas. Under the hood is Oblong’s g-speak spatial operating environment. To date, IBM has taken more than 15,000 people through the Watson Experience Centers, helping them turn “terabytes into insights.”

 

I spoke with Underkoffler and Pete Hawkes, Oblong’s director of interaction design.

 

Here’s an edited transcript of our interview.

Above: Pete Hawkes (left) and John Underkoffler show off immersion room displays in downtown Los Angeles in May 2017.
Image Credit: Dean Takahashi

 

VentureBeat: You’re doing something with IBM now?

John Underkoffler: And have been for a long time. I’m sitting here with my fabulous colleague Pete Hawkes, who is Oblong’s director of interaction design. He’s been the prow of the ship on most of the IBM work.

 

VentureBeat: They have a facility in Austin that they’re opening up now for visits?

Pete Hawkes: Right. I was there all last week. They call it — let’s see, they have various names for it. It’s basically an immersive design lab. I can give you their proper name for it later. But the lab is at one of IBM’s older complexes there. It’s been there a long time, and they have a few different facilities on it, including a very significant design presence in one of the buildings, where many of the thousand or so designers they’ve hired over the last few years reside.

 

The immersive design lab has a nice large Mezzanine space, alongside a reconfigurable room they use for testing and prototyping experiences for their three executive briefing centers, which are at Astor Place in New York, in Cambridge, Massachusetts, and in the Financial District in San Francisco.

 

Underkoffler: As a bit of background, we’ve been doing heavy work with IBM since probably 2012. When you visited our warehouse, you almost certainly saw some of that work; it’s where we do all the large-scale prototyping for those IBM projects and similar projects with other customers.

Above: IBM Watson Experience Center
Image Credit: IBM

 

VentureBeat: I wonder how big some of these displays are that they’re using, the scale of what they’re visualizing.

Hawkes: At Astor Place, the first large immersive space they set up, there’s a 41-foot display wall: three sections of Christie MicroTiles at 7,200 by 2,700 pixels. Then it has an immersion room of 45 HD planar displays, which tally to 93.3 million pixels. It’s a lot of space to work with, and it wraps around you in 300 degrees.
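That 93.3 million figure is simply the 45 immersion-room panels at full HD, multiplied out. A quick back-of-the-envelope check, assuming each panel is 1,920 by 1,080 pixels:

```python
# Sanity check of the pixel count quoted above.
# Assumes each of the 45 immersion-room panels is full HD (1,920 x 1,080).
panels = 45
pixels_per_panel = 1920 * 1080              # 2,073,600 pixels per panel
immersion_room = panels * pixels_per_panel  # 93,312,000, roughly 93.3 million
print(f"{immersion_room:,} pixels")
```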

 

The first space is a little bit more architecturally grounded; it’s five sides of a hexagon. The newer spaces are rounded out into 15 panels, so it’s more of a curved space, 15 columns with the same pixel count. We simulate at a similar scale at our warehouse. Austin has a unique configuration in that they’ve installed their production displays on a rolling mechanism that allows them to flatten out and then re-curve the wall live.

 

Underkoffler: It’s the Transformer space.

 

Hawkes: They were a little space-constrained at their lab in Austin. Up until about a year ago they were very tight-lipped, keeping their cards close as to the content and the specifics of what you could see in the space. They released a few teasers for the press, but they really wanted folks to schedule time to come and visit rather than sharing much about what they were doing.

 

Fortunately we received permission to submit the work we’ve been producing for the spaces for national and international design competitions this year. A film crew went through and documented a bunch of the work. We were shortlisted for the UX Design Awards in Europe, and then also tapped for Austin Design Week this last week as a stop on one of many tours throughout the city. We hosted a two-hour event that demonstrated a bit of the content and also went into, from a design standpoint, how we produced content at that scale.

Underkoffler: The beginning of that whole thing was like many of our beginnings. We sort of inherited someone else’s design for the space. That’s often how we end up starting with a client. A bunch of architects had already designed this hexagonal room with 45 giant displays, and the separate 41-foot pixel wall, by the time IBM came to us and said, “Let’s work together and put something in this.”

 

As we all know, the world is filled with giant display walls. But almost none of them are interactive in any way. We’re kind of the only people in the world who can turn that stuff on, who can make it fully interactive. There are hardware-based companies that will let you do — not even Dragon’s Lair things. More like pre-programmed branch points. That’s hardly interactive. In order to tell their story around Watson and other novel and difficult-to-explain offerings, IBM knew they needed an experience that was as live and real-time and interactive as the stuff they were trying to get the world to understand and buy.

 

They came to us. We’d already transformed their primary research facility, the T.J. Watson Research Center in Yorktown Heights. We built half a billion pixels, something like that, an enormous number of interoperable pixels in the main demo facility there on the ground floor. They came back to us and said, “We have this new problem. We have these enormous pixel spaces. We need to tell the story of AI, the story of cognitive computing.”

Above: Pete Hawkes showing an Immersion Room.
Image Credit: IBM

 

It was a process of working with their designers and their communications people to put together an evocation of these very abstract ideas around AI and cognitive computing, and do it in a visceral way using the Oblong spatial wands for control. It’s a uniquely great way to deal with a 41-foot wall when you’re not going to use touch. You can’t run up and down the wall. You need that laser-pointer action at a distance. All of our expertise up to that time got focused and funneled into building a set of experiences that made it worth having that many linear feet, that many square feet of pixels.
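That laser-pointer action at a distance comes down to simple ray casting: take the wand’s tracked position and pointing direction and intersect that ray with the plane of the wall to get a cursor location. Here is a minimal sketch of the geometry; the function and the example wall layout are hypothetical and are not Oblong’s g-speak API:

```python
import numpy as np

def wand_cursor(wand_pos, wand_dir, wall_point, wall_normal):
    """Intersect the wand's pointing ray with the wall plane.

    wand_pos:    wand position in room coordinates (meters)
    wand_dir:    unit vector along the wand's pointing direction
    wall_point:  any point on the wall plane
    wall_normal: unit normal of the wall plane
    Returns the 3D point on the wall the wand is aimed at, or None
    if the wand points parallel to or away from the wall.
    """
    denom = np.dot(wand_dir, wall_normal)
    if abs(denom) < 1e-6:               # pointing parallel to the wall
        return None
    t = np.dot(wall_point - wand_pos, wall_normal) / denom
    if t < 0:                           # wall is behind the wand
        return None
    return wand_pos + t * wand_dir

# Example: a wall in the x-y plane at z = 0, wand 3 m back, aimed slightly left.
direction = np.array([-0.2, 0.0, -1.0])
cursor = wand_cursor(np.array([0.0, 1.5, 3.0]),
                     direction / np.linalg.norm(direction),
                     np.array([0.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0]))
print(cursor)   # point on the wall where the cursor would land
```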

 

VentureBeat: What are they using that for as far as problems to solve?

Hawkes: They’ve shifted over the years. Initially it was primarily a marketing experience. There were a lot of explanatory videos trying to create some hype, but also some understanding of the AI space. What we’ve brought with live software is the ability to work with actual data sets and integrate Watson capabilities, services, and other things directly into the software.

 

They still have a narrative slant to a lot of their primary experiences. These are tailored to specific industries. The ones that they’re using today are — there’s a story around disaster response, a hurricane that is approaching New York City. There’s a separate story around the financial services sector, specifically identifying fraud and how AI can help in that manner. We just released a new experience this year around supply-chain management and the implications of AI in that space.

 

In addition to that, some of the more exciting work we’ve done goes beyond pure storytelling; it’s software that’s new and different every single time. One of the more popular experiences is called News Discovery, where they analyze all of the news they can scrape from the world’s servers — upwards of 40,000 to 50,000 articles in a single day — and analyze the content of those articles for high-level concepts and sentiment. Are they primarily positive or negative? It then lets you look at related concepts and view them geographically, so you can see how those concepts and sentiments stack up differently in Asia versus North America. You can link concepts together and dive directly into the web-based news content, which is viewable in the space.
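Hawkes doesn’t spell out the pipeline behind News Discovery, but the shape he describes, tagging each article with high-level concepts and a sentiment score and then rolling those up by region, could be sketched roughly like this. Every name below is hypothetical, and the concept and sentiment extraction is assumed to come from an upstream service such as Watson Natural Language Understanding:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Article:
    region: str        # e.g. "Asia", "North America"
    concepts: list     # high-level concepts extracted from the text
    sentiment: float   # -1.0 (negative) .. +1.0 (positive)

def rollup(articles):
    """Aggregate average sentiment per (region, concept) pair."""
    totals = defaultdict(lambda: [0.0, 0])
    for a in articles:
        for concept in a.concepts:
            entry = totals[(a.region, concept)]
            entry[0] += a.sentiment
            entry[1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

feed = [
    Article("Asia", ["supply chain", "semiconductors"], 0.4),
    Article("North America", ["supply chain"], -0.2),
    Article("Asia", ["supply chain"], -0.1),
]
print(rollup(feed))   # average sentiment per (region, concept)
```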

Above: Oblong’s g-speak spatial operating environment powers this room.
Image Credit: IBM

 

That’s a lot of fun from a software standpoint, but the way we’ve utilized the space to sift and sort through that massive amount of data is what makes it truly unique. If you haven’t seen that particular application, I highly recommend finding some time the next time you’re in any one of these hubs.

 

VentureBeat: Have they credited this for helping them accomplish particular goals?

Hawkes: Right now they’re not selling it — it’s not a “buy one of these rooms with this service” offering. Their underlying goal is to demystify AI and start a conversation with executives, who are the primary audience for these experiences.

 

I can only speak from personal use of the software, but it’s highly gratifying. Compare it to, say, what Google gives you when you search the news for the day: you get what Google thinks are the 10 or 12 best things in a list, and that’s all you can see at a time. Many of those results are fed to you because of dollars people have spent to put them in front of you, rather than because they’re relevant to your query or your actual intent.

What’s unique about this particular experience is that when they hand people the wand, or take them through the experience, there’s no predefined path or course, because it changes. As you see the data, you respond in real time. Your queries change based on a more visual and visceral response to the results you’re seeing, and the filtering changes along with them.

VentureBeat: Do you have an idea of how much money they’ve put into each one of these?

Hawkes: That’s a good question. I can tell you that from a timeline standpoint, it takes anywhere from four to six months to create the material for each one of these experiences. We spend two months in an early fact-finding and research phase, helping both the designers and the engineers better understand the industry. They bring in subject matter experts from both inside and outside IBM to help us better understand how we might create a narrative or design an interface within that context. Our engineers get comfortable with the services and what they’re capable of doing on the Watson side.

 

For the latest module we’re taking on insurance claims, assessing the viability of the photos submitted with claims. We’re training our own model on various data sets to see if we can accurately identify whether a photograph of a vehicle shows severe, moderate, or minor damage. It’s a lot of fun to use. Rather than working with canned algorithms that have already been proven in the field, we’re creating our own for some of these experiences.
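Oblong hasn’t published details of that damage-grading model, but a common way to prototype a three-class image classifier like this is to fine-tune a pretrained network. A minimal sketch, assuming a recent PyTorch/torchvision setup and an ImageFolder-style dataset of labeled claim photos, neither of which is confirmed by the interview:

```python
# Hypothetical sketch, not IBM's or Oblong's actual model: fine-tune a
# pretrained CNN to grade vehicle damage as minor / moderate / severe.
import torch.nn as nn
from torchvision import models, transforms

classes = ["minor", "moderate", "severe"]          # assumed label set

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(classes))   # 3-way damage head

preprocess = transforms.Compose([                  # standard ImageNet preprocessing
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# train_set = torchvision.datasets.ImageFolder("claim_photos/", transform=preprocess)
# ...a standard fine-tuning loop over a DataLoader would follow here.
```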

 

Increasingly, they’re starting to showcase case studies where Watson is being used out in the world. They call it Watson in the Wild. It covers everything from engaging visitors at a museum in São Paulo to financial services. They can’t talk about many of the customers using Watson directly, because they’re heavily NDA’d. Oil and gas is another field that leverages these pretty regularly.

Above: IBM helps people understand machine learning in “immersion rooms.”
Image Credit: IBM

 

VentureBeat: Is IBM your biggest customer or user here?

Underkoffler: As you might recall, there are two sides to the house at Oblong. One is the Mezzanine product side, where we have more than 150 Fortune 500-style customers all over the planet, on six continents. The team Pete runs is the parallel-universe version of that, for customers like IBM that already have and use a large number of Mezzanine systems but want to go further. They need custom experiences. They need to be with us on the absolute cutting edge. It’s fair to say that IBM is our largest customer on Pete’s side of the house, what we call client solutions.

 

Hawkes: We do work with other customers as well, though. It’s not exclusively IBM on the custom side.

 

VentureBeat: When does everybody else get this sort of thing, then?

Hawkes: [laughs] To IBM’s credit — and I think a lot of this came from a few visionary folks within that organization, people that understood our take on interface, and some folks in research who had experienced firsthand how well we could implement these things — the large creative agencies that typically set up these spaces don’t really know how to operate on a live software level as directly. They might have some sharp folks around, but the standard mode is generating high-end commercial-like material. It’s large videos and other things that are static. The nice part about this engagement with IBM is they’ve not only given us license to explore new modes, but they’ve also pulled their content and their use of the spaces in that direction.

 

Underkoffler: For us, what’s exciting is that our work with IBM and other similarly minded customers lets us push forward the state of the art around the idea of massive-scale interaction. Cinema really is a pretty good analog. What can you do if the screen is 30 feet high and 55 feet wide? Things work differently at that scale. The work we do with IBM is breaking all kinds of new ground in that mode of interaction, in that kind of interactive space.

 

But to your question about when we all get it, we want the answer to be “as soon as possible.” For example, Pete runs monthly meetings here where the local hacker and graphics community is welcome to come in. Everyone’s invited, and people just try things out on the giant wall. It’s like handing people a film camera, a flatbed editing rig, and a Rialto once a month to see what they can dream up when they have access to this large format. You should come by someday.

Above: This is a 300-degree immersion room at IBM.
Image Credit: IBM

 

Hawkes: There’s actually one tonight. It’s called “Slim Shader.” [laughs] Shader language is great because it cuts across an interesting swath of engineers, game designers, VR developers, mobile developers, and designers. We get a really interesting mix of folks. I would also say our team has invested a lot of time in building accessible tools for non-technical people. We write a lot of difficult C++ software to drive these spaces in a way that makes sense.

 

One of the big hurdles with new tech is we’re so eager for it that we push it out the door before it’s really ready. This is the big problem with VR — illness because framerates are poor, or the display technology isn’t quite ready. We invest a lot of time making sure we have a high-fidelity experience, stuff that from a UX standpoint is very solid, but then we also create a lot of accessible tools so that folks can bring their own skill sets to the table. Designers can iterate rapidly at scale, without having to filter their ideas through an engineer, for example. That’s important to us.

 

We also have a strong relationship with many of the local schools. We’re right in the middle of UCLA, USC, ArtCenter, CalArts, and Otis. Not only do we employ designers and artists from those schools, but other artists and engineers at Oblong and I teach at UCLA and USC. We do mentorships with ArtCenter. I’m teaching a course next term in the film school at USC, in Media Arts and Practice. I’ve been teaching there for the last two or three years now. We teach technology — Arduino and Processing — for artists and designers. The class spends the last few weeks at our warehouse downtown interacting with many of these same data sets. We clean and scrub the data down into a format they can play with, and we encourage them to create tangible interfaces.

 

This is another way — making sure that the younger generation, those that are just graduating from the top programs in the area, are familiar with this as a potential mode for creating content, for interacting with data, and understanding what’s possible. Many of these students end up working at some of the top companies around the world. ArtCenter folks are primarily working in automotive. Many of the USC folks go into entertainment and Hollywood, or a handful of other startups here in L.A. or around the Bay Area. We keep tabs on them, and they on us. These same concepts and ideas, we hope, trickle out into the world in a meaningful way.

Above: IBM created Watson Experience Centers to help humans grasp complex ideas.
Image Credit: IBM

 

VentureBeat: Is there any kind of explanation you’d give about the ROI on this? Why would somebody want to spend a lot of money to set up a center like IBM has done? How would you describe the advantage or the return?

Hawkes: The return on investment — again, I’m speaking from a design standpoint. When I work with designers in the space, or people who have worked with traditional modes, there’s often a moment where their brains kind of crack open and they start seeing things differently. A lot of that has to do with three fundamental concepts that underlie g-speak, the platform we use to drive all of this.

 

The first is that computing systems should be inherently multi-user. Everything we use, all the fanciest bits we have from Apple and Microsoft and the rest, consists primarily of single-user interfaces. They might be connected through the cloud, but they don’t function the way we do when we’re sitting around a tabletop working on a hard problem on a whiteboard together. Creating systems that are on equal footing with that cognitive space is very important to us. The second, as part of that, is making sure the system itself is spatial: the screens aren’t just on the walls, they’re on the walls for a reason, and content physically behaves like it would in a real space.

 

And then the last part is just connecting multiple hardware devices into a unified interface. We have Macs. We have phones. We have Windows machines. They all do different things well for different reasons. But if they’re connected together, networked in a meaningful way — not just a cloud-based way or a file-sharing way or chatting with one another, but networked in a way that they can unify to create something bigger — that becomes extremely exciting. The ROI is more around changing hearts and minds, I think, in some cases.
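This is not the g-speak API itself, but the third idea, many devices feeding one shared room-scale interface, can be illustrated with a toy event bus that any machine publishes into and any screen renders from. All of the names here are hypothetical:

```python
import queue
from dataclasses import dataclass

@dataclass
class PointerEvent:
    device: str            # "wand-1", "phone-42", "macbook-3", ...
    wall: str              # which display surface the event lands on
    x_m: float             # horizontal position on that wall, in meters
    y_m: float             # vertical position on that wall, in meters

class SharedRoom:
    """Toy stand-in for a room-wide event bus: every device publishes,
    every renderer drains, so all screens can reflect the same state."""
    def __init__(self):
        self.events = queue.Queue()

    def publish(self, event: PointerEvent):
        self.events.put(event)

    def drain(self):
        while not self.events.empty():
            yield self.events.get()

room = SharedRoom()
room.publish(PointerEvent("wand-1", "north-wall", 3.2, 1.4))
room.publish(PointerEvent("phone-42", "immersion-room", 0.5, 1.1))
for ev in room.drain():
    print(f"{ev.device} -> {ev.wall} at ({ev.x_m} m, {ev.y_m} m)")
```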

 

We’re excited because the cost per pixel continues to plummet. As you walk around now, even your corner bodega probably has a large display screen with prices in it. Fast food restaurants and the rest — more and more of our architecture is becoming display surface. This is a concept that we’ve understood for a while now. But what we don’t know is what to do with those pixels once they’re there. Is it just a billboard? Could it be active space?

Above: Oblong’s prototype for an immersion room in May 2017.
Image Credit: Dean Takahashi

 

Similarly, there’s a lot of excitement in the VR and AR space, but on the content side, now that folks have the heavy hardware, they’re scrambling to generate good material. We have really strong shops sprouting up all over Los Angeles creating cinematic experiences and art-based experiences, and some industry-focused companies as well. But what they’re really struggling with is the interface. How do humans actually interact? What are the implications, from a UX standpoint, of trust? I have to blind myself to the real world in order to get this magical virtual world.

 

Standing in an immersive space like IBM’s space, we can have a group of 12, 15, 20 people having a shared digital experience with the same level of immersion, without losing context in the real world. We feel there’s a lot of power in that. These aren’t new concepts. They’ve been kicking around. Years ago UC Santa Barbara developed the AlloSphere, the near-spherical sound and digital display surface, to explore similar concepts. Others have existed in Chicago and San Diego for data visualization.

 

IBM is pushing these to be active interfaces. I think originally they spec’d out the space for its cinematic appeal. They wanted it to be more like a fancy theater of sorts. I don’t think they expected it to become the live interface it has become.

 

VentureBeat: How long would you say it’s taken to get to this point, at least with IBM?

Hawkes: Astor Place launched in 2014, so it’s been four years now. They started changing their approach to things a couple of years into the original exchange, when they started asking us the right questions instead of just dictating content.

 

They’ve needed to open the space up and make it a little less special. It’s been exciting to see that this year, both in the ability to document it and submit it for awards, but also in events: Austin Design Week was the second design panel we’ve held in the space. They had another one earlier in the year, in August, at Astor Place, where they invited designers to both see the space and participate in a panel discussion on how we create content for it. They let us write a blog post about it, which was great. [laughs]

Above: John Underkoffler shows off an immersion room display in May 2017.
Image Credit: Dean Takahashi

 

VentureBeat: I made a visit to Light Field Lab in San Jose. They’re doing large-scale holograms, which should be interesting sometime soon.

Hawkes: I look forward to seeing that. We have a relationship with Looking Glass, and we recently set up a demonstration at the warehouse that integrates their display with our wands. Looking Glass creates a live digital lenticular display. They’re pushing toward the idea of an actual hologram, but the fact that you can look around a three-dimensional object without any special glasses or anything else, and share that experience with other people, is very exciting.

 

One last remark. I don’t think you have to have 93 million pixels to get the type of experience that IBM is getting. All of the tools we’ve built to drive those much larger spaces I think have just as much utility in a small office or living room space. We have been talking with various hardware companies. As embedded hardware moves into the screens we watch TV on, they suddenly become a lot more active. There’s a lot of new potential there. We want to be ready with the right approach to interface and tools, so that we can drive that potential.
