OPINION

The Future According to Nvidia

I spent last week at Nvidia’s GPU Technology Conference, and I expect this will be the last year it will go by that name. The company has evolved significantly during the last decade with robotics, artificial intelligence, and even complete workstations and servers taking it well beyond its GPU roots. My bet is this will become Nvidia’s Developer Conference going forward, as the firm displaces Intel in the hearts and minds of developers and buyers.

One interesting pivot I’m anticipating: as Nvidia watches the Qualcomm/Apple/Intel battle, it has to see that something similar could be in its own future with Intel, and it may need to pivot to IBM Power or AMD Epyc (AMD actually partners with Nvidia better than Intel does). This is because Intel’s long-term plan is to make Nvidia redundant, and it is about 24 months out from executing it.

If Nvidia doesn’t pivot away from Intel by that time, it will be facing the possibility of the same near-death experience Qualcomm just experienced. Nvidia isn’t stupid and clearly has to see this coming.

One thing that is missing from Nvidia — largely because its change has been gradual, and it doesn’t fully get that it is no longer primarily a parts vendor but a solutions vendor — is an effective way to convey how all the things it is doing will change the world.

Corning created several videos that it used to showcase its vision, called “A Day Made of Glass.” While I don’t have the resources to create a video of what the future will look like when all of this Nvidia technology matures, I think I can describe it.

I’ll do that this week and then close with my product of the week: a new HP headset I saw at GTC, called the “Reverb,” that I think now sets the bar for virtual reality headsets.

Foundational Elements

The elements I’m going to use to build this story range from Nvidia’s new small form factor Jetson AI products for edge computing to its Data Science Workstations and Servers, its autonomous car and robotics solutions, its graphics and imaging products and enhancements, and its coming advances in networking and interconnect.

I’m not going to name the products, but I will walk you through what “A Day Imagined by Nvidia” might be.

An Imaginary ‘Day Imagined by Nvidia’

The video begins with a black screen. First, a compelling tune can be heard softly in the background. At the bottom of the screen, a digital caption reads “Monday Morning 2025: AI-generated music, unique, based on a collection of favorite songs by [your name here].” The music builds and light increases as if you were opening your eyes. The colors around you are fluid, as what you see shifts through scenes ranging from fantasy to science fiction, highlighting the breadth of what is possible before finally settling on a contemporary setting that clearly has been digitally rendered.

A voice in the background, sounding a lot like the Avengers’ Jarvis, asks for a preference. Another voice, evidently belonging to the person whose eyes you are looking through, says “surprise me.” The room alters to look like something out of the movie Avatar, which itself was mostly rendered. Text at the bottom of the screen says “RTX real-time rendering, room scale.”

The video bypasses any initial trip to the bathroom for obvious reasons, and the Jarvis voice asks what you would like for breakfast, offering a series of choices. It reminds you that you have a virtual meeting in 15 minutes and asks for your preference on appearance. By way of advice, it tells you that the attendees are Japanese native speakers, that they are conservative, and that they dislike the colors green and yellow.

A virtual screen then appears, resembling a three-panel mirror that shows the user’s image in three choices of formal clothing, none with green or yellow, and all conservative with an Asian influence.

The user’s voice is heard selecting choice two, and that image comes alive as a mirror image of the user. Jarvis asks whether, if the user hasn’t finished breakfast in time, he should enter the meeting as the user’s avatar or as himself, noting that the people the user is meeting with value starting on time.

The user indicates he wants the avatar to open as him while he walks into the kitchen, which has small versions of what look to be industrial robots putting the finishing touches on breakfast. As the user approaches the table, a reflection off one of the appliances shows us the user’s face, and we see that he is wearing a set of augmented reality glasses, which is why the room has been changing its appearance on command.

We fast-forward through breakfast until Jarvis speaks again, saying it is time for the user to enter the meeting. The user gives the command “proceed,” and he suddenly appears to be transported into a rich conference room with floating virtual displays and other users all well dressed, with some of the clothing actually appearing to morph and change as we watch.

We observe a short meeting, during which the people talking are enhanced by dynamic displays that emerge and vanish on command to highlight key points or elaborate on certain subjects. We drift away from this virtual meeting to get an overhead view of a child’s room that is just starting to brighten as virtual animals line up at attention waiting for the user’s daughter to get out of bed.

The child scolds some of the virtual animals and engages with others who appear to be talking about today’s assignments, thus helping the child think through her day at school. We skip the bathroom visit, fast-forwarding to the child being advised on what to wear. The video then advances to the child fully dressed and eating breakfast in the same kitchen.

A reflection shows that the child is wearing a smaller set of AR glasses. The Jarvis voice announces that it is time to leave for class, and a door that we didn’t see before opens to a small windowed room with a chair that has seatbelts.

The child enters the room, sits on the chair, and we now move out of the house to see that the chair isn’t a chair at all but an autonomous personal transport that whisks the child off to school to the sound of the old Jetsons’ theme song. We follow the child into class where other children are drawing simple stick figures and lines on active screens.

Those screens — also using RTX technology, based on what we see on the bottom of the screens — instantly translate the rough drawings into photorealistic images of what the children intend to create. The kids all appear to be having fun when the teacher asks them to be seated.

The teacher then begins a lecture on protecting endangered animals. As she speaks, the screen behind her dynamically reflects what she is talking about, highlighting the animals. A child raises her hand and asks a question. One of the animals perks up and asks if it can answer. After getting a “yes” response, the virtual animal then takes over the class and continues the lesson.

We then transition out of the school and visually travel across the globe to find a woman, who appears to be the wife and mother, watching her child perform in school in real time. She speaks her child’s name, and the child instantly perks up as her mother compliments her on the question she just asked. They enter into a side dialog that appears private to them.

We now realize the child never actually went to school. That transport was just a simulation. Instead, that child and all the other children are attending virtual classrooms while remaining in their respective homes, safe from contact with disease or any travel-related risks.

We see a series of short scenes showing pets being fed by robots and robot pets playing with actual pets, along with robots doing much of the maintenance around the home, which is actually located in a beautiful remote setting. It isn’t clear whether that location actually exists or is rendered.

At the end of the day the family gathers, with the mother attending virtually (we’ll put time zone differences aside for now), and they enjoy what appears to be a movie together. However, the actors often seem to look and sound like the family members, and they often are asked what the character should do next, with the answer having a dynamic impact on the direction and ending of the film.

We shift to a shot of the mother getting into a strange vehicle that starts on wheels and then lifts off and flies to the airport. We look over her shoulder while she clearly is creating a dynamic birthday card for her daughter, which she evidently plans to give her when she arrives.

We follow her path at high speed and notice she is guided through the airport, which uses biometrics to identify and authenticate her, and that there are no lines. When she boards, a very Jarvis-like voice welcomes her, tells her that the personalized meal she wants will be ready after takeoff, and offers her favorite beverage, which has been stocked specifically for her. You’ll note that everything appears to happen like clockwork, and there is no attendant in sight.

The camera pulls back as the sun sets, showing robotic farming machines, cleaning machines, delivery machines, and an impression of millions of people mostly working from home but interacting digitally as if they all were in the same place. Everyone appears to be happy, because much of the stress associated with what they do has been removed.

Wrapping Up: Reality

Now, much like Corning’s effort, this is a best-case scenario that depicts a world that likely will never actually exist in total. Still, the fact is that what Nvidia is creating could eliminate not only the jobs we don’t like doing, but also all of the waiting and wasted time we currently suffer through.

Ideally, the video I think Nvidia actually may create also would highlight the gaps it may not yet realize exist in the solutions its AI, graphics, and computing technologies anticipate.

What I want to leave you with is that we all need to be thinking more about the world as we want it to be. Otherwise, we likely will be surprised by the world we get. As I flew back from Nvidia’s conference, I finally got around to watching Blade Runner 2049, the sequel to the 1980s classic, which portrays an alternative world that I personally wouldn’t want to live in, one that uses much of the same technology in very different ways.

In short, if more of us don’t imagine a very different future, many of us may not want to live in the future we get. Nvidia is one of the companies that could make a far better future. Here’s hoping we get to live in it.

Rob Enderle's Product of the Week

I have a lot of VR headsets in my office, most of which I wouldn’t give to someone I didn’t like. The resolution is bad, they are uncomfortable, and the only thing I can think they’d be good for is to make fun of the poor idiot wearing them.

A lot of this is due to the fact that the first set of headsets based on the Microsoft specification focused too much on being inexpensive and not enough on providing a good experience. The two exceptions are the Goovis Cinego and Samsung’s Microsoft-based headset, as both companies pushed resolution to create far better offerings.

Well, I had a chance to try the HP Reverb Virtual Reality Headset at GTC, and I was impressed. The price point is closer to US$600 for the base configuration and $650 for the pro configuration (you’ll want the pro, though, because it comes with leather face pads that can be wiped down easily and should both wear and feel better, making them well worth the $50 upcharge in my opinion).


HP Reverb VR Headset

This new headset sets the bar with regard to how light it is on your head and how comfortable and easy to use the mounting straps are (no adjustments; you just slide it on your head). It even has decent speakers on it, though I’d likely pull them off and use one of my own noise-canceling over-the-ear headsets.

In use, the extra resolution — around 2K per eye, like my Goovis — made it feel like I was actually in the virtual environments. While I have yet to find a VR game title I like, this headset would be ideal for business or most medium-usage commercial deployments. Heavy use likely would require the device to be a bit more robust. The headset will work with most laptop and desktop PCs, but the perfect use likely would couple it with HP’s backpack computers.

There are cameras on the headset, so when we finally get hand recognition, it should work with this solution. For now, it still uses the Microsoft controllers. Although they are good for games, I prefer hand interfaces for commercial use, similar to what Microsoft has showcased with its HoloLens 2.

In the end, this is the first VR headset I think I could live with (the Goovis is mostly for movies and TV), which makes the HP Reverb Virtual Reality Headset my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Rob Enderle

Rob Enderle has been an ECT News Network columnist since 2003. His areas of interest include AI, autonomous driving, drones, personal technology, emerging technology, regulation, litigation, M&E, and technology in politics. He has an MBA in human resources, marketing and computer science. He is also a certified management accountant. Enderle currently is president and principal analyst of the Enderle Group, a consultancy that serves the technology industry. He formerly served as a senior research fellow at Giga Information Group and Forrester. Email Rob.

