Application developers are also racing to have their software ready to tie into this wearable computer with an optical head-mounted display. Simon Karger, head of surgical and interventional products at Cambridge Consultants, a product development and design firm based in Cambridge, Mass., discussed the potential of Google Glass with The Progressive Physician. The Q&A with Karger below was edited for length and style.
Q: How would you use Google Glass? What possible applications can you foresee for healthcare?
A: Google Glass has huge potential in healthcare, particularly in surgery, where it could provide surgeons with exactly the information they need at the right time. Its information delivery can enhance a wide range of surgical processes, for instance by helping surgeons visualize what lies below the skin to ensure more accurate navigation.
Other examples include overlaying pre-operative images (such as CT or MRI scans) and virtual surgical plans (e.g., excise this region, avoid these regions), or integrating real-time sensor data, such as grasp-force readings from a laparoscopic tool or patient vitals like heart rate and blood pressure.
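To make the overlay idea concrete, here is a minimal sketch that blends a registered pre-operative slice over a live camera frame and draws vitals on top. It assumes OpenCV and NumPy rather than Glass's own Android-based SDK, and the file names and vitals values are placeholders.

```python
# Minimal sketch of an augmented-reality-style overlay: blend a pre-operative
# image onto a live camera frame and draw patient vitals in the corner.
# Assumes OpenCV (cv2) and NumPy; file names and vitals values are placeholders.
import cv2
import numpy as np

def overlay_frame(camera_frame, preop_image, heart_rate, blood_pressure, alpha=0.35):
    # Resize the pre-op image to match the camera frame (a real system would
    # need proper image registration to the patient, not a naive resize).
    h, w = camera_frame.shape[:2]
    preop = cv2.resize(preop_image, (w, h))

    # Alpha-blend the pre-op scan over the live view.
    blended = cv2.addWeighted(camera_frame, 1.0 - alpha, preop, alpha, 0)

    # Draw real-time vitals text onto the display.
    text = f"HR {heart_rate} bpm  BP {blood_pressure}"
    cv2.putText(blended, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                0.8, (0, 255, 0), 2)
    return blended

if __name__ == "__main__":
    frame = cv2.imread("camera_frame.png")     # placeholder live frame
    ct_slice = cv2.imread("ct_slice.png")      # placeholder pre-op CT slice
    out = overlay_frame(frame, ct_slice, heart_rate=72, blood_pressure="120/80")
    cv2.imwrite("overlay.png", out)
```

In practice the pre-operative image would have to be registered to the patient's anatomy rather than simply resized, and the vitals would stream from monitoring equipment instead of being hard-coded.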
As surgical operations become more data-driven and information-rich, surgeons will need more sophisticated visualization tools to take advantage of this additional information in the operating room.
Another exciting capability is the 'touchless' interface--using Google Glass's microphone to pick up speech commands, or using the integrated camera to recognize and respond to gestures. Surgeons have to maintain sterility throughout an operation, which means not touching anything that isn't sterile.
That requirement isn't compatible with traditional computer interfaces, where you have to move a mouse or touch a screen, and it limits the benefits of using a computer during an operation -- benefits such as quickly bringing up patient history, test results, or pre-operative X-rays.
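As a rough illustration of a touchless, voice-driven interface, the sketch below listens for a few spoken commands and maps them to display actions. Google Glass itself exposes voice triggers through its Android-based SDK; this example instead uses the generic Python SpeechRecognition package, and the command phrases are hypothetical.

```python
# Minimal sketch of a touchless voice-command loop, assuming the Python
# SpeechRecognition package and a working microphone. The command phrases
# and the actions they map to are hypothetical placeholders.
import speech_recognition as sr

COMMANDS = {
    "show history": "display patient history",
    "show vitals": "display live vitals",
    "show x-ray": "display pre-operative x-ray",
}

def listen_for_command(recognizer, microphone):
    with microphone as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source, phrase_time_limit=3)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""   # nothing intelligible was heard

if __name__ == "__main__":
    r, mic = sr.Recognizer(), sr.Microphone()
    while True:
        heard = listen_for_command(r, mic)
        for phrase, action in COMMANDS.items():
            if phrase in heard:
                print(f"Command recognized: {action}")
```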
There are a number of opportunities for diagnostic applications as well, combining Google Glass's camera with optical diagnostic tests, such as colorimetric tests or lateral flow tests (the same sort of technology used in pregnancy tests).
Using image processing algorithms on an image of the test, you can start to achieve the accuracy of dedicated clinical diagnostic machines with the ease of use of visually read tests. That means, for example, getting immediate blood test results instead of waiting for a sample to be processed at a lab.
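A rough sketch of that kind of image-based read-out is below: it compares the darkness of a lateral flow test line against the control line in a photo of the strip. The region-of-interest coordinates and decision threshold are placeholders and would need calibration against a real test cartridge.

```python
# Minimal sketch of reading a lateral flow test from a photo: measure how dark
# the test line is relative to the control line. Assumes OpenCV and NumPy;
# ROI coordinates and the threshold are placeholders, not calibrated values.
import cv2
import numpy as np

CONTROL_ROI = (50, 100, 20, 60)   # (y, x, height, width) of the control line
TEST_ROI = (50, 180, 20, 60)      # (y, x, height, width) of the test line
POSITIVE_RATIO = 0.3              # test/control darkness ratio treated as positive

def line_darkness(gray, roi):
    y, x, h, w = roi
    patch = gray[y:y + h, x:x + w]
    # Darker pixels mean a stronger line; invert so higher = stronger signal.
    return 255.0 - float(np.mean(patch))

def read_strip(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    control = line_darkness(gray, CONTROL_ROI)
    test = line_darkness(gray, TEST_ROI)
    if control < 10:                       # no visible control line: invalid test
        return "invalid"
    return "positive" if test / control > POSITIVE_RATIO else "negative"

if __name__ == "__main__":
    print(read_strip("test_strip.jpg"))    # placeholder image path
```

A production diagnostic would also need to locate the strip automatically, correct for lighting, and be clinically validated, but the basic principle -- quantifying line intensity instead of eyeballing it -- is the same.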
Q: What are the potential drawbacks and advantages of such a tool?
A: When you compare augmented reality tools like Google Glass against virtual reality tools, such as Oculus Rift, augmented reality has an advantage in applicability. While virtual reality tools shut out the rest of the world, augmented reality provides visibility into the surgical process at hand, enhanced with additional data overlaid across the display.
That means the surgeon can continue to pay attention to the patient while also accessing the additional data. Plus, because the interface is touchless, the surgeon doesn't need to physically engage with the technology to use it. As a result, they can instantly obtain all the information they need while remaining sterile, and without stepping away from the operating table.
But even though it's a great way to present information that can augment surgical processes, what's really key is the data behind it all, and this is where Google Glass is limited. At the end of the day, Google Glass is a display--all the data it overlays must come from somewhere, and if that data is not available, or not seamlessly integrated into the workflow, the display won't provide that much benefit.
When Google Glass is used in the consumer realm, it can easily connect to Google Maps, grab your contacts, or pull up a video. In a clinical environment, however, it will rely on pre-operative and real-time data that are not readily available without significant processing. Overcoming this drawback would require incorporating sensor-rich tools that can capture data in real time and feed it to Google Glass to be overlaid on the display.
Q: How soon do you think Google Glass will become part of mainstream healthcare?
A: It will be quite some time before Google Glass becomes mainstream in healthcare, and even longer before it is commonplace in the operating room. The consumer world views Google Glass as a hot and sexy new product, but bear in mind that the consumer industry reinvents itself every nine months with new iterations of operating systems and applications. The medical field, on the other hand, takes much longer to undergo change and implement new technological processes.
It could be quite some time before we see Google Glass go mainstream--even up to 10 years--because it's not yet an established, validated technology for surgical or clinical procedures.
What I expect we will see are some of the components of Google Glass technology--advanced visualization, image processing, and touchless interfaces--start to make inroads when there are specific, compelling clinical needs that can only be met with this additional information.
Once these technologies are more pervasive--in the operating room, for example--we can expect many more clinical use cases to emerge.
Q: Why is healthcare so slow to embrace such new innovations?
A: To put it simply: because you can kill people. Technology carries so many subtle risks that even consumers are still adapting to them, let alone the medical industry. Google Glass, for instance, is built on Android, a non-controlled operating system.
Some regulated medical systems run on iOS, which more tightly limits how applications and the operating system interact, creating greater stability in the overall system. Android does not have these safeguards, and there is much more liability involved with non-controlled systems--imagine, for instance, the software running your pacemaker crashing.