Images in a Post-Truth Era

Article by: Junko Yoshida

We can instantly edit, enhance and distort photos. Shouldn’t we think first before enabling that same ability for augmented reality?

In designing a system, traditional camera manufacturers such as Nikon, Canon, and Pentax have shared a common goal: the pursuit of realism. For decades, they understood that professional and amateur photographers strive to capture life moments as accurately and vividly as they’ve occurred.

But if this traditional photorealism occupies one end of the spectrum, synthetic images — now more aptly called “computational photography” — have claimed the opposite end.

I had a rude awakening last week at Qualcomm’s Tech Summit, where the chip giant announced Snapdragon 865. Smartphone cameras, enabled by advanced apps processors loaded with powerful AI engines and their ability to handle multiple image sensors, are changing the very meaning of “photography” as I’ve known it.

Values of smartphone cameras

Smartphone camera users today love that they can edit and alter their captured images in real time, directly on their smartphones, creating an “alternative” reality at the click of a button.

Snap Inc.’s senior director of engineering, Yurii Monastyrshyn, summed it up succinctly at the Tech Summit, telling his audience, “While cameras were created to capture memory, Snap is reinventing the camera to be a platform of communication, entertainment, search, and e-commerce.”

Right. Welcome to the photography of alternative reality.

For many smartphone users, the value of photography is fast shifting. Smartphone cameras provide instant gratification through real-time editing, beautification, or even distortion of a captured image, which can then be shared instantly with the world.

Snap is leveraging the Snapdragon 865’s new “Hexagon NN [neural network] direct” capability (which lets developers write apps directly to the “metal”) so that a Snapchat app on your smartphone can change your face in real time — “close to 30 frames per second,” as Monastyrshyn noted — to make you look as if you are, say, 15 years old.

Snap executive demos the Snapchat app by showing his face as it appears today…


…and then changing his appearance as a video stream in real time.

Another smartphone app company, Loom.ai, unveiled an avatar-based video conference app at the Tech Summit.

Using Qualcomm’s Snapdragon 865’s AI engine, Loom.ai’s LoomieTalk applies deep learning and creates “expressive” 3D avatars on a smartphone. By superimposing the avatar layer on top of the live video, LoomieTalk can track and mirror the facial expressions and movements of a smartphone user while he or she is video conferencing via mobile.
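To make the mechanics concrete: a landmark-driven avatar app typically measures distances between tracked facial features and converts them into expression weights for the 3D model. The sketch below is a hypothetical illustration of that mapping step; the function name and thresholds are my assumptions, not Loom.ai’s actual API.

```python
# Hypothetical sketch: mapping a tracked facial measurement to an avatar
# "mouth open" expression weight, the kind of retargeting a LoomieTalk-style
# app performs every frame. Thresholds here are illustrative assumptions.

def mouth_open_weight(lip_gap, face_height, closed=0.02, open_full=0.12):
    """Map a lip gap (in pixels) to a 0..1 blendshape weight."""
    normalized = lip_gap / face_height            # make it scale-invariant
    w = (normalized - closed) / (open_full - closed)
    return max(0.0, min(1.0, w))                  # clamp to the valid range

# A nearly closed mouth drives the blendshape to 0; a wide-open one saturates.
print(mouth_open_weight(lip_gap=2.0, face_height=200.0))
print(mouth_open_weight(lip_gap=30.0, face_height=200.0))
```

In a real app, dozens of such weights (brows, eyelids, jaw) would be computed per frame from a deep-learning landmark tracker and fed to the avatar renderer.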

The “expressive” 3D image on the left is a LoomieTalk avatar of the person on the far right.

Count me among those who hate video conferencing in any situation. So, I don’t really understand why showing up on a live video conference as an avatar and talking to other false faces could possibly make video conferencing less painful or more “productive,” as Loom.ai claims. But then, what do I know?

Maybe I’m the only one who prefers to present myself as I actually am.

Computational photography

All this effort on AI and 5G does not mean that Qualcomm is forgoing its pursuit of bringing professional-quality photos and videos to smartphones.

On the contrary, Qualcomm integrated a new Spectra 480 ISP (image signal processor) into the Snapdragon 865 that it unveiled at the Tech Summit.

Pointing out the Spectra 480 ISP’s ability to process 2 gigapixels per second, Qualcomm said this speed “delivers new camera features, including Dolby Vision video capture, 8K video recording, capture [of] 200-megapixel photos, and simultaneous capture [of] 4K HDR video and 64-MP photos.”
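Some back-of-the-envelope arithmetic shows why 2 gigapixels per second is the headline number. Assuming 8K UHD means 7680 × 4320 pixels and a 30-fps capture rate (the frame rate is my assumption; Qualcomm’s quote doesn’t specify it), 8K video alone consumes roughly half the budget:

```python
# Back-of-the-envelope check of the Spectra 480's quoted 2-gigapixel/s budget.
# Assumes 8K UHD = 7680 x 4320 pixels and a 30 fps capture rate.

ISP_BUDGET = 2_000_000_000  # pixels per second, per Qualcomm's claim

def pixel_rate(width, height, fps):
    """Raw pixel throughput required for a given capture mode."""
    return width * height * fps

rate_8k30 = pixel_rate(7680, 4320, 30)   # ~995 million pixels per second
print(f"8K @ 30 fps needs {rate_8k30 / 1e9:.2f} GP/s")
print(f"Fits within the 2 GP/s budget: {rate_8k30 <= ISP_BUDGET}")
```

The same budget explains the 200-megapixel still claim: a single 200-MP capture is a tenth of one second’s worth of pixels at that rate.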

That’s all fine and dandy. But by packing all these bells and whistles into the new application processor, the message Qualcomm drives home is the emergence of “computational photography.”

P.J. Jacobowitz, Qualcomm’s staff manager of product marketing for cameras, came onstage and described legacy cameras as “just one camera, one lens, one image sensor, and one image signal processor,” adding, “It’s hard to innovate with just one camera.” Tallying up the five cameras — five lenses, five sensors, five ISPs — integrated into Xiaomi’s latest smartphone, he declared, “Computational photography is the future of photography.”

What computational photography can offer by using five cameras in a smartphone

As Kevin Krewell, principal analyst at TIRIAS Research, wrote for EE Times, “The capabilities of smartphone applications processors to apply computational photography to these tiny sensors is putting a lot of pressure on traditional camera vendors.”

I couldn’t agree more.

He explained, “The camera industry focused on a mission of recreating the image the photographer saw in the most realistic way possible (in a basically very analog way, even if a digital sensor is used). But now the goal is to dissect the image into components, apply (multiple) filters to each element, and then reassemble an image in a purely synthetic manner.”
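One of the best-known examples of the “reassemble synthetically” approach Krewell describes is multi-frame stacking: averaging several noisy captures of the same scene to produce a cleaner result than any single exposure could. The sketch below is a generic illustration of the principle, not Qualcomm’s or any vendor’s actual pipeline:

```python
# Minimal illustration of multi-frame stacking, a computational-photography
# staple: several noisy captures of the same scene are averaged, cutting
# noise by roughly sqrt(N) for N frames. Generic sketch, not a vendor pipeline.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 128.0)            # the "true" scene: flat gray

# Five captures (one per camera, or a burst), each with simulated sensor noise.
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(5)]

single_noise = np.std(frames[0] - scene)    # residual noise in one frame
stacked = np.mean(frames, axis=0)           # "reassemble": average the stack
stacked_noise = np.std(stacked - scene)     # residual noise after stacking

print(f"single-frame noise: {single_noise:.1f}, stacked: {stacked_noise:.1f}")
```

The output image never existed on any one sensor readout; it is assembled from pieces, which is exactly the departure from analog realism Krewell is pointing at.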

Krewell concluded, “The end product may be very appealing, but it may not resemble reality in any way.”

Again, welcome to photography designed for the post-truth era.

Here comes XR (Extended Reality)

But until you see what Qualcomm has in store with its new XR (extended reality) platform, you ain’t seen nothing yet.

The Snapdragon XR2 5G Platform, slated to debut next year, “unites 5G and AI for the first time,” according to the company.

Describing the XR2 as “built to enable unrivaled extended-reality experiences,” Qualcomm claimed that “the Snapdragon XR2 5G Platform enables users to explore every angle of their virtual world in a 360° spherical view that captures the scene in vivid detail.”

Snapdragon XR2 reportedly comes with significant improvements over Qualcomm’s current, widely adopted XR platform, enhancing CPU and GPU performance, video bandwidth, resolution, and AI.

Most notable, however, is XR2’s support for seven concurrent cameras and its dedicated computer vision processor.

Because the XR2 provides low-latency camera pass-through, users can interact with a blend of the virtual and real worlds on VR devices, providing a boost for “mixed reality” capabilities.
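At its core, that pass-through blend is per-pixel alpha compositing: a rendered virtual layer is mixed over the live camera frame so real objects stay visible inside the artificial scene. The following is a schematic sketch of the idea under that assumption, not Qualcomm’s implementation:

```python
# Schematic of mixed-reality "pass-through" blending: a virtual layer is
# alpha-composited over the live camera frame. alpha = 1 shows only the
# virtual world; alpha = 0 passes reality through untouched.
import numpy as np

def composite(camera, virtual, alpha):
    """Per-pixel linear blend of the virtual layer over the camera frame."""
    return alpha * virtual + (1.0 - alpha) * camera

camera = np.full((4, 4), 100.0)   # stand-in for a live camera frame
virtual = np.full((4, 4), 200.0)  # stand-in for rendered virtual content
alpha = np.zeros((4, 4))
alpha[:2, :] = 1.0                # top half fully virtual, bottom pass-through

blended = composite(camera, virtual, alpha)
```

The “low-latency” part matters because this blend must track head motion within milliseconds, which is why the XR2 dedicates a computer vision processor to it.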

I highly recommend watching Qualcomm’s XR2 promotional video, posted below. It’s only 90 seconds long, and I guarantee that you won’t be wasting your time.

When I saw the clip, I was both amazed and amused by what mixed reality can offer. At the same time, I found the video clip alarming.

In one scene, a kid wearing VR/AR/XR goggles runs through his home and up the staircase, jumping over a heap of laundry (or something) on the floor. He sees the walls around him as black tiles or rubble-like bricks. He sees the stuff on the floor as an obstacle created by debris, and he has to jump over it.

Of course, it’s fun for a kid to run through his ordinary home as though he is jumping through a combat zone. I used to do it myself — without goggles.

I made that observation to Mike Demler, a senior analyst at The Linley Group, who was sitting next to me. “Yep, we did,” he agreed, but back then “it wasn’t XR. It’s called imagination.”

For sure, the power of VR/AR/XR is awesome. It’s amazing to see that technology can restructure any environment so vividly that it can provide a truly immersive experience, letting you interact with real objects — accurately positioned — in an artificial world of extended reality.

What disturbed me was XR’s ability to spoon-feed kids a prefab imaginary world.

Real people are perfectly capable of imagining wild things on their own, and we can immerse ourselves in them. Is it possible that XR might rob from our younger generations the opportunity to create and inhabit their own imaginary worlds?

“I found it incredibly irresponsible [of tech companies] to push this [extended reality] without doing any studies on the potential impact on children,” said Demler.

I second that.

Certainly, we understand that VR and AR can be effective tools for professionals working in auto repair or plumbing services, in warehouses or thermonuclear war zones. But how do we serve our kids by pre-booking their flights of fancy?

Whose alterations?

The same applies to the sort of photography I discussed above. It’s great that technologies can make it easy to edit and alter the images we capture. But whose alterations? How we change and edit things on smartphones is predefined by the prevailing apps.

In a world where more and more people just cut and paste, then socialize pre-produced messages created by third-party apps, fewer folks end up creating their own messages. Qualcomm becomes the ghost in the machine.

Making yourself look like an eight-year-old is amusing. But what if you want to pose as a 12th-century Chinese emperor? Or a dancing platypus? Can you still imagine that silly image yourself, or are you going to wait for a corporation to sell you the app?

With goggles.
