To stay in the present moment, we need to be looking up instead of down at a screen. We’ll use this image as a starting point.
Like many things in life, it’s beautiful. We don’t need to augment it with images or 3D models to make it more impressive. However, let’s say you want to know something about this place. Maybe it’s busy inside, or it’s not open yet and you want to see what kind of baking classes they offer. Today, you would look down at your phone and open an app like Yelp or Google Maps, or try to find the business’s website or social media profile. This pulls you away from the moment, so how can we prevent that?
Augmented Reality (AR) allows us to keep looking up, especially once AR glasses are released in the near future. I don’t want to view a 2D screen overlaid on the real world, as that would be no better than what I could do with my phone. I want to talk to someone or something about what I need, so let’s imagine something that looks more human, that I can talk to and get the info I want. We’ll call this the assistant. It should blend in with the scene, but also be noticeable if I’m looking for it. Here is a female assistant waving to us.
The assistant should only start listening when I look at her. I should also be able to easily control when she is listening. For this, we’ll need to add a mic button, but it only needs to display when we’re interacting with the assistant. For now, it’ll be attached to the screen (bottom-middle), but in the future we may be able to use some control on the device or another approach that doesn’t require a separate button in our view.
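That gaze-plus-button behavior boils down to a small rule: the assistant listens only when the user is looking at her *and* the mic is enabled. Here’s a minimal sketch of that rule; the function and input names are hypothetical, not part of any existing API:

```typescript
type ListeningState = "idle" | "listening";

// Hypothetical inputs: whether the user's gaze is on the assistant
// (e.g. from head-pose or eye-tracking) and whether the mic button
// is toggled on.
function nextListeningState(
  isGazingAtAssistant: boolean,
  micEnabled: boolean
): ListeningState {
  // Listen only when both conditions hold; otherwise stay idle,
  // so the assistant never eavesdrops when we look away or mute her.
  return isGazingAtAssistant && micEnabled ? "listening" : "idle";
}
```

The same rule also answers when to show the mic button at all: it only needs to appear once the user’s gaze is on the assistant.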
That’s it. That should be all we need. I should be able to walk up, ask the assistant for info about the classes, and then move on, never having to look down.
There are some additional use cases that we would want to cover, but which should hold to the same requirement of limiting how much is displayed in our view. One use case is that we want to get info from the assistant, but we don’t know exactly what she knows and we’re not exactly sure how to ask for it. You’ve probably experienced this frustration when asking Alexa or Google Home for something. For this, we can display buttons next to the assistant (in 3D space) to help guide the conversation.
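One way to drive those buttons is from whatever the business has configured the assistant to know about. A rough sketch, with entirely hypothetical names and a made-up topic list for the bakery example:

```typescript
// Hypothetical config: topics the business/developer has taught
// the assistant about (see the customization discussion below).
interface AssistantConfig {
  knownTopics: string[];
}

// Turn each known topic into a tappable prompt, capped so the
// buttons don't clutter the user's view.
function suggestionButtons(config: AssistantConfig, max = 3): string[] {
  return config.knownTopics
    .slice(0, max)
    .map((topic) => `Tell me about ${topic}`);
}
```

Tapping a button would then feed that prompt to the assistant exactly as if the user had spoken it, so the guided and spoken paths stay one conversation.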
There are also cases where I might want to keep talking to this assistant as I walk away, or I might prefer not having any 3D models displayed in my world, or I want an assistant always available to help regardless of where I am. For this, we will attach the assistant to the screen so it always displays in our view (bottom-left) and make it easy to remove from view when desired.
Finally, as a developer or business, I want to control the type of experience that I provide to potential customers so that it’s unique, memorable, and aligns with my brand. It would be quite boring to end up in a world where we only interact with a Google, Amazon, or Apple assistant, or where these companies control the types of interactions we can create and push us toward their walled gardens. For this reason, we’re putting a lot of focus on allowing businesses and developers to fully control the appearance and interactions of these assistants, and to have them work across all platforms, native and web.