Mixed Reality with the HoloLens: Using a virtual light switch to control a real-world lamp

Author
Jasper Bogers
At Mirabeau, we keep track of innovative technology and how it might benefit our customers. As referenced in an earlier interview on technology trends (in Dutch), Augmented Reality (AR) is something we find promising. One of the best ways to understand the potential of a technology is to experiment with it. So in early 2017, a small band of Mirabeau engineers came together and, in a three-day hackathon, set up an architecture that allows somebody wearing a Microsoft HoloLens to flick a switch in virtual reality (VR) and turn on a real lamp.


AR, VR, MR, HR

To avoid confusion, first a short explanation of the differences between the various -reality concepts:

  • Reality. The world we all live in.
  • Augmented Reality (AR). Whatever you see, enhanced with virtual information displayed on top. Think of Google Glass.
  • Mixed Reality (MR). A term for AR in which you can interact with the virtual elements. Microsoft refers to the HoloLens as Mixed Reality. The same concept is sometimes called Hybrid Reality (HR).
  • Virtual Reality (VR). Total immersion, where the senses are served by virtual input. Think HTC Vive and Oculus Rift. The term VR might also refer to the part of the experience in AR/MR that is not the real-world input.

Requirements

Requirements we set out with for the HoloLens Hackathon were as follows:

  • The setup should use the Microsoft HoloLens.
  • The setup should use an action in VR to trigger a real world event. This makes the experience more tangible and fun, and with that, the possibilities more imaginable.
  • The setup should not be bound to one physical location, meaning the physical location of the operator in VR should not limit the location of the lamp.
  • Given the limited amount of time, the architecture should be such that the components can be developed simultaneously by splitting the team in small groups.
  • Given our limited experience with some of the technology, the second principle of the Agile Manifesto should be heeded: "Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage."

Below is a demonstration of the results, followed by details about the architecture, requirements, technology used, and learnings from this innovation project.

Demonstration

[Video: AR/VR Lab HoloLens Lamp, uploaded by Mirabeau BV on 2017-04-13]

Architecture

[Diagram: HoloLens, AWS, Raspberry Pi and LED architecture]

The lamp was connected to a Raspberry Pi 3 running Raspbian Linux. A breadboard was used to connect a simple LED during development, but for an actual 220-volt lamp we designed a custom PCB.

The Raspberry Pi was registered as a thing in AWS IoT. AWS takes care of access keys, pub/sub for events, API and message definitions, and maintaining shadows of things in case they're offline when an event occurs that should affect them. We had planned to build some of that ourselves using AWS SQS, SNS and Lambda, but it turned out not to be necessary. AWS IoT lets you download a script in Python, Java or JavaScript that the thing can use to interact with other things through AWS. We installed the Python script on the Raspberry Pi and adjusted it for the exact interaction we wanted; it communicates over the MQTT protocol.
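To give an idea of what that script boils down to, here is a minimal sketch of the Raspberry Pi side using the AWS IoT Device SDK for Python. The endpoint, certificate paths, topic name and GPIO pin are illustrative assumptions, not our exact configuration:

    # Subscribe to a switch topic via AWS IoT and drive a GPIO pin.
    # Endpoint, certificate paths, topic and pin number are placeholders.
    import json
    from time import sleep

    import RPi.GPIO as GPIO
    from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

    LAMP_PIN = 18  # hypothetical GPIO pin driving the LED/relay

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LAMP_PIN, GPIO.OUT)

    def on_message(client, userdata, message):
        # Expect a JSON payload such as {"state": "on"}.
        payload = json.loads(message.payload)
        GPIO.output(LAMP_PIN, GPIO.HIGH if payload.get("state") == "on" else GPIO.LOW)

    client = AWSIoTMQTTClient("lamp-raspberry-pi")
    client.configureEndpoint("xxxxxxxx.iot.eu-west-1.amazonaws.com", 8883)
    client.configureCredentials("root-ca.pem", "private.key", "certificate.pem")
    client.connect()
    client.subscribe("lamp/switch", 1, on_message)  # QoS 1: at-least-once delivery

    # Keep the process alive; messages arrive on a background thread.
    while True:
        sleep(1)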

Building a Unity3D application for the HoloLens is best done by following Microsoft's instructions, starting with https://developer.microsoft.com/nl-nl/windows/mixed-reality/install_the_tools. The workflow is as follows: in Unity3D you build the assets and their behaviour, and you export that as a project to Visual Studio, where you refine it, test it, run it through a HoloLens emulator, and eventually deploy it to the actual HoloLens. The biggest hurdle was getting connectivity set up in both the emulator and the HoloLens; trial and error through building, deploying and debugging can be time-consuming.

Since AWS IoT takes care of so much, we expected that registering the HoloLens as a thing would be a good idea too. Unfortunately, this turned out to be more work than we had time for (AWS Lambda supports C# these days, but AWS IoT doesn't), so we ended up setting up a small Java application that talks to AWS IoT using the secure script provided and offers an (insecure) REST API for the HoloLens to connect to. This kind of loose coupling allowed separate groups to work on separate parts of the total setup independently. Most of the work ended up being, again, in connectivity. It's a reminder that underneath the shiny new devices, the same infrastructure concerns apply as with traditional web applications.
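Our bridge was written in Java, but the pattern is simple enough to sketch in a few lines of Python; treat the snippet below as an illustration of the idea rather than our actual code. The Flask route, topic name and credentials are assumptions:

    # Sketch of the bridge: a plain REST endpoint for the HoloLens that
    # forwards the command to AWS IoT over the secure MQTT connection.
    # Ours was a Java application; names and paths here are illustrative.
    import json

    from flask import Flask, jsonify
    from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

    app = Flask(__name__)

    iot = AWSIoTMQTTClient("lamp-bridge")
    iot.configureEndpoint("xxxxxxxx.iot.eu-west-1.amazonaws.com", 8883)
    iot.configureCredentials("root-ca.pem", "private.key", "certificate.pem")
    iot.connect()

    @app.route("/lamp/<state>", methods=["POST"])
    def switch_lamp(state):
        # The HoloLens app only needs to POST to /lamp/on or /lamp/off;
        # all AWS IoT specifics stay behind this boundary.
        if state not in ("on", "off"):
            return jsonify(error="state must be 'on' or 'off'"), 400
        iot.publish("lamp/switch", json.dumps({"state": state}), 1)
        return jsonify(state=state)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)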

Learnings, improving and expanding the concept

  1. The setup was built in 3 days by approximately 5 people. Cutting up the architecture into loosely coupled parts sped up development, but it was a far cry from a proper delivery pipeline. The way code is maintained between Unity and Visual Studio does not seem optimal, and test automation of a virtual user interface is a new domain.
  2. Using the IoT setup, more data sources could be connected. For example, ambient background noise could be factored into audio volume levels, and light intensity into the colour of the light switch.
  3. The HoloLens offers ways to stream audio & video as perceived by the wearer. Showing this on a separate screen (or another VR device) can help involve an audience in the experience.
  4. As excited as people are about the possibilities of augmented and mixed or hybrid reality (AR/MR/HR) as compared to VR, few people were positive about the gesture controls the HoloLens offers. Considering cognitive services and image recognition, and the ease with which the HoloLens maps the room the wearer is standing in, you could say we're moving into an age where computers start understanding us better, so we can put less effort into understanding computers. The painstaking precision required to operate a HoloLens app with gestures suggests it isn't mature enough in that respect. Alternatives such as the Manus VR gloves are worth exploring, but closer to home you could build interaction around the cursor focus itself (much like the on-hover effect of a mouse cursor), or by using Cortana - something we didn't even touch on in this hackathon.
  5. We used a wooden board with an open space where the virtual switch could be displayed. The actual positioning of the virtual switch is currently simply "slightly right of wherever the center of view is when you started the app". Using image recognition, it might be possible to recognise where the virtual object should be located and automatically position it there. This could facilitate all kinds of ways to explore a virtual world using real-world triggers.
  6. Using the pinch movement to activate an object is one thing; being able to pick it up and place it elsewhere, possibly arranging a set of objects, is a different matter. Although promising for concepts such as arranging furniture in a room before purchase, we now know the detail of the interaction involved would require thorough testing and fine-tuning.
  7. The audio on the HoloLens is of decent quality. It's worth exploring how to use that to improve the experience, given the limited field of view the HoloLens offers.
  8. The HoloLens is a functional Windows computer sitting on your head with no cables attached to it; it requires no tethered desktop computer. The USB port is used for charging and deploying (not for browsing files stored on the device). It is WiFi and Bluetooth capable. It is relatively light and has an adjustable strap, and if you wear glasses, you don't need to take them off. In other words: compared to VR devices such as the HTC Vive, you are not nearly as encumbered and will experience a lot more freedom of movement.
  9. The battery life of the HoloLens is short, and charging it while in use may feel like it's not charging at all. Given that you need a cable to charge it, this also limits the freedom of movement that it otherwise offers.
  10. While we did not notice a delay activating the lamp from the application running in the HoloLens emulator on a laptop, even when using AWS IoT in the Ireland region, we did notice a delay using the actual HoloLens. Granted: the way we set up the asynchronous connection is far from optimized, and there is some network hopping involved because of devices running on a separate network at Mirabeau, but it can't yet be ruled out that the performance hit is caused by the HoloLens itself. When building an AR/MR application, it's worth taking into account that it might have to deal with poor internet connectivity, just as you would for a website or web app.
  11. Our HoloLens app does not register whether or not the lamp has actually been turned on or off. It would be nice to close that feedback loop: action in VR, validation in the real world, reaction in VR (see the sketch after this list).
  12. The power of the HoloLens, and of AR/MR/HR/VR in general, is what it allows you to do that you can't already do with a regular user interface. One can get used to having to use gestures, but one wouldn't bother if the task at hand can be achieved through more conventional means. The word augmented implies exactly that.
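As a thought experiment for learning 11, the Raspberry Pi could report its actual state to its AWS IoT device shadow after switching the pin, and the bridge could expose that for the HoloLens app to read back. A minimal sketch, reusing the MQTT client from the earlier snippet; the thing name and payload shape are assumptions:

    # Report the lamp's real state to the AWS IoT device shadow, so the
    # HoloLens app can verify the action. Thing name is a placeholder.
    import json

    SHADOW_UPDATE_TOPIC = "$aws/things/lamp-raspberry-pi/shadow/update"

    def report_lamp_state(client, is_on):
        # "reported" is the device's view of reality; AWS IoT keeps it
        # available even while other parties are offline.
        payload = {"state": {"reported": {"lamp": "on" if is_on else "off"}}}
        client.publish(SHADOW_UPDATE_TOPIC, json.dumps(payload), 1)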

Tags

C# Cloud Development Hacking Innovation IoT Unity3D Microsoft HoloLens AWS Raspberry Pi