At GDC 2019 later this month, Valve’s Principal Experimental Psychologist, Mike Ambinder, will present the latest research pertaining to brain-computer interfaces—using signals from the brain as computer input. Ambinder says that BCI is still a “speculative technology,” but it could play an important role in the way players interact with the games of the future.
As time moves forward, the means by which users interact with computers have become increasingly natural. First came the punch card, then the command line, then the mouse… and now we’ve got touchscreens, voice assistants, and VR/AR headsets which read the precise position of our head and hands for natural interactions with the virtual world.
More natural computer interfaces make it easier for us to communicate our intent to a computer, making computers more accessible and useful with less time spent learning abstract input systems.
Perhaps the final frontier of computer input is the brain-computer interface (BCI). Like the virtual reality system envisioned in The Matrix (1999), the ultimate form of BCI would be some sort of direct neural input/output interface where the brain can directly ‘talk’ to a computer and the computer can directly ‘talk’ back, with no abstract I/O needed.
While we’re far, far away from anything like direct brain I/O, some headway has been made in recent years, at least on the input side—’brain reading’, if you will. And while it’s early, there’s exciting potential for the technology to transform the way we interact with computers, and how computers interact with (and react to) us.
At GDC 2019 later this month in San Francisco, Valve’s Principal Experimental Psychologist, Mike Ambinder, will present an overview of recent BCI research with an eye toward its applicability to gaming. The session, titled Brain-Computer Interfaces: One Possible Future for How We Play, will take place on Friday, March 22nd. The official description reads:
While a speculative technology at the present time, advances in Brain-Computer Interface (BCI) research are beginning to shed light on how players may interact with games in the future. While current interaction patterns are restricted to interpretations of mouse, keyboard, gamepad, and gestural controls, future generations of interfaces may include the ability to interpret neurological signals in ways that promise quicker and more sensitive actions, much wider arrays of possible inputs, real-time adaptation of game state to a player’s internal state, and qualitatively different kinds of gameplay experiences. This talk covers both the near-term and long-term outlook of BCI research for the game industry but with an emphasis on how technologies stemming from this research can benefit developers in the present day.
Ambinder holds a B.A. in Computer Science and Psychology from Yale, and a PhD in Psychology from the University of Illinois; according to his LinkedIn profile, he’s been working at Valve for nearly 11 years.
The session details say that the presentation’s goal is to equip developers with an “understanding of the pros and cons of various lines of BCI research as well as an appreciation of the potential ways this work could change the way players interact with games in the future.”
While the description of the upcoming GDC presentation doesn’t specifically mention AR/VR, the implications of combining BCI and AR/VR are clear: by better understanding the user, the virtual world can be made even more immersive. Like eye-tracking technology, BCI signals could be used, to some extent, to read the state and intent of the user, providing useful input for an application or game. Considering Valve’s work in the VR space, we’d be surprised if Ambinder doesn’t touch on the intersection of VR and BCI during the presentation.