Deepfakes have had a bad rap. From the infamous Steve Buscemi x Jennifer Lawrence Golden Globes mash-up to the anatomically eye-watering trailer for Cats, you could be forgiven for thinking that these vocal-visual AI-engineered syntheses will be the sole force dragging us into the end of days. While the dangers surrounding disinformation are patently very real, now that the door has been irrevocably opened there is an important flipside emerging via avatar-specializing retail/brand-based technologists looking to put more positive possibilities into the ether.
Two currently in-beta projects spotlight the new opportunities, aligned to a burgeoning commercial era conjoining retail, gaming and cinema where mobilizing your digital self will be a standard daily activity:
The first is Superpersonal, an AI-powered face-swapping app that captures a user’s face and micro mannerisms to create a hyper-realistic moving image, which is then melded onto an approximately representative body; if you’re tall and skinny the body you’ve virtually acquired won’t be a diminutive beast-hunk. Consider it you, but as your best fashion model self. The whole process now takes only three minutes (to give an idea of how fast the cloud computing process is moving, back in February it took 20 minutes).
The verisimilitude is astonishing and attributable to a team that perfectly straddles the art-science divide; founder and CEO Yannis Konstantinidis is also the founder of Nomint—an award-winning, London-based animation studio for advertising whose R&D, established to probe how cutting-edge deep learning AI could enhance content production (such as creating ads that change depending on time, weather or viewer), spawned the seeds of Superpersonal.
Working alongside AI expert Dr. Jamil Sawas, the overarching focus on what Konstantinidis refers to as “redefining product placement” turned to fashion, and a trial in February with London College of Fashion’s Fashion Innovation Agency (FIA) and British brand Hanger.
The most obvious potential use is being able to try on clothes online at speed; retailers would only need to shoot a comparatively small cross-section of body sizes because as long as it’s a reasonable match the extreme realism of the face and facial mannerisms is enough to create a compelling bond.
“When we see a good approximation, we don’t feel it’s not ourselves; what we’re working on now is how much deviation from actual reality is possible,” says Konstantinidis. With e-tail sales predicted to more than double, hitting $4.88 trillion by 2021, but still dogged by the specter of huge online returns (25% of online purchases are sent back), the practical value is enormous.
Beyond virtual fit it could also mean transposing consumers directly into fashion ads, literally putting themselves in the picture. Konstantinidis reveals that Superpersonal is currently developing emoji stickers (users create cartoonish personal avatars for use in messaging and on social media) as a gateway to this idea.
If it seems unlikely to catch on, you need only look at Gucci’s late 2018 collaboration with Silicon Valley avatar-creation and messaging app Genies: it not only allows people to create digital clones of themselves (via one million customization options) but also to deck themselves out in 200 replicas of real Gucci items. In future, they’ll be able to purchase those items, both digital and real, through the app in one click.
The aforementioned issue of deviation is critical. With flaw-softening tech of the variety found in photo-booths now intrinsic to mobile phones (making even the most apparently natural shots somewhat enhanced) and the popularity of gaming soaring, what constitutes the ‘real you’ is an increasingly flexible notion. Minor manipulations of our virtual selves could be considered just the virtual-era equivalent of using make-up, a form of self-expression. For most, the confrontation of an exact mirror-image avatar of the kind previously used by in-store scanning booths has proven to be a considerable turn-off, compounded by the gamer-led appetite for post-reality identities.
“Most of the media frames this kind of tech as dystopian, but there are other angles. Superpersonal acknowledges that virtual fit is about more than physicality; it’s about self-perception. If people can superimpose themselves onto a brand’s models, tweaking themselves to fit, they may even become part of that brand’s messaging. At a time when consumers are increasingly keen to get close to brands, this is something that will facilitate that closeness,” says Matthew Drinkwater, who heads the FIA.
Konstantinidis concurs: “There’s a balance to be found here because you want to see yourself in the picture, but the question is ultimately, which you do you want to see?”
The Superpersonal app, part of a new breed of brand-applicable tech rethinking deepfakes.
SUPERPERSONAL / FIA
With representation a major issue in advertising and marketing of every flavour, and at least 50% of consumers persuaded to buy by user-generated content (assume that figure will skyrocket if that person is yourself), the prospect of having a personal avatar to travel with you across e-stores, games and anywhere else in the virtual-verse is extremely compelling.
Enter innovation two, another version of rapid digitization created by Austrian technologists Reactive Reality, which is working with AI-supported AR engines enabling “users to experience a wide range of products before buying them, allowing users to immerse themselves in AR scenes with apparel, objects and landmarks, and furnish and decorate their own places.” In short, it’s making digital doubles of everything, populating a parallel virtual universe.
Reactive Reality has just released a collaborative project with the FIA and (gaming-fascinated) British fashion designer Charli Cohen. Reactive Reality’s proprietary Pictofit 3D app lets users create (non-modifiable) avatars of themselves, which they can then use to try on an entirely digitized version of Cohen’s latest collection.
Avatar-creation requires two people: one person to model (while wearing skin-tight clothing) and another to walk around them three times, photographing them with nothing more than a smartphone camera to amass the requisite data points. The entire process takes just 30-40 seconds, achieving photorealism and exact body measurements.
Exactly how the AI tech (secret sauce) works can’t be divulged, but there are parallels with Google’s latest smartphone, which includes a feature capable of instantaneously aggregating multiple shots into a single image, accurately merging angles and eliminating “flaws” such as blinking. While still in research mode, it’s reputedly almost retail-ready.
For brands, this virtual replication could translate to clothing, people and stores/destinations, which will require considerable coordination between businesses. Consider creating your own avatar to try on a bridal gown or suit in digital form and then (virtually) experience a stroll through your wedding venue. “We always talk about what the business model is for digital clothes and so on, but this other world in gaming is already economically viable [see Gucci initiative mentioned above]. The idea is that everywhere around us is another layer of reality, but who controls the over-arching matrix that you’d step into? Who owns this AR space?” says Drinkwater. He cites forerunners such as American company Verses, which bills itself as “powering the spatial web,” and Magic Leap’s colonial declaration of the “Magicverse.”
Governance of the spatial web is clearly still up for grabs, as is a slew of ethical debate as the prospect of conversant AI hovers close by: “As you can imagine, we’re already thinking about how digital humans come into this, how people will converse with people both fictional and real within these spaces,” confirms Drinkwater. “I think the sense of fear, of being unnerved by these deepfakes and virtual replication concepts, is good because there are clearly many sinister ways these tools can be used. But that’s exactly why the creative industries need to get on board – to find ways of engaging that are more meaningful and positive.”