Matthew Katz recently posted a link to his KevinMD article, Google Glass for medicine: 4 reasons why it could be disastrous, saying:
Am I just turning into a technophobe? My post on KevinMD about Google Glass.
As a person who has been using Google Glass for the past three months in a health care setting, I believe you have become a technophobe.
Privacy Violations: The same issue applies to cellphones. Are you going to ban them from your practice?
Hackable: Personal computers are hackable as well. Ban them? (I worked in security for a Swiss bank two decades ago, back when they said they'd never connect to the Internet because of security concerns. There are risks with all technology, just as with everything else in life. You can't ban life; instead, you need to mitigate the risks.)
Concern with multitasking: This is probably his strongest point, yet it still seems weak based on my experience with Google Glass. The interruptions I get from Glass, wearing it all the time, are similar to the interruptions I get from phone calls, overhead pages, and other staff members knocking on my door.
Google's and medicine's goals aren't aligned: Again, on the surface this seems like a valid point. However, from my experience dealing with pharmaceutical companies, medical device manufacturers, and insurance companies, I suspect that Google's goals may be more closely aligned with medicine's than those of most companies working in health care.
Over on his article, I added a couple of additional thoughts, edited for the blog here:
The other point that I would make is that Google Glass is not in BETA. It is not even in ALPHA. It is still a prototype. I think it is premature to make determinations about what a prototype is likely to do to a business. You might want to go back and look at the history of the Xerox machine.
The Smithsonian article, Making Copies, is a good starting point:
At first, nobody bought Chester Carlson's strange idea. But trillions of documents later, his invention is the biggest thing in printing since Gutenberg.
Companies turned down the Xerox machine because so few people made copies before it existed that they didn't think it would sell.
My experience with Glass, so far, is similar to my experiences with the Apple Newton in the early 90s. A lot of people didn't think much of the Newton back then, and it never really took off, but it laid the groundwork for smartphones today.
I wouldn't be surprised to see Glass follow a similar path and in twenty years be an all but forgotten precursor to ubiquitous wearable computing.
One last thought: it is worth looking at the technology adoption lifecycle, studied since the 1950s and popularized by Everett Rogers in his 1962 book Diffusion of Innovations.
Google Glass is at the very front end of the adoption lifecycle, where only a few innovators have been using it. As has become common these days, when a new innovation comes along, it often gets a backlash, and it seems that the backlash against an innovation is proportional to the potential disruption the innovation carries.
As a final comment, I'd encourage you to read a blog post I wrote back in 2007 about Twitter:
In a previous post about ad:tech, I mentioned how I learned about NY Times' Facebook page from a twitter by Steve Rubel. I commented about this in the press room, and one of the reporters was surprised to hear that twitter was still around and active. I reflected back on hearing speakers at OMMA predict the demise of Twitter, Facebook and Second Life and it struck me that the standard technology adoption curve that we all hear so much about, may have a lot of interesting nuances.
I've never been a big fan of PowerPoint, dating back to my training as a speaker in the '90s. The audience should be focusing on you and what you are saying, not reading your script and looking at pictures on a screen. If you must use PowerPoint, follow Guy Kawasaki's 10/20/30 rule: no more than 10 slides, no more than 20 minutes, and no font smaller than 30 points.
Instead, if I am using visual aids in a presentation, I prefer to use tools related to the presentation itself. When I speak about social media, I like to use Buffer to send preloaded tweets out to TweetChat using a hashtag. The key points still get displayed on the screen, the audience gets more of a chance to interact, and it illustrates using the technology.
On Thursday morning, I will be doing a presentation introducing a group of librarians to Google Glass. So, the challenge I came up with for myself was: could I use Google Glass as a replacement for PowerPoint?
The first issue was finding a way to present what is on the Glass screen through a projector. I've done presentations using the screencast capability of the Glass app on my smartphone. This works very well if you are presenting to a small group that can gather around the phone, but for a larger crowd, I needed some way to connect my smartphone to a projector.
My first attempt was to use the old TV-out approach. My current smartphone is a Samsung Galaxy S4. Some Samsung phones, like several other phones, can display to old-fashioned televisions, and to many projectors, using a cable that plugs into the headphone jack. I have such a cable that I've used with other phones, but I couldn't get it to work on the S4. My guess is that there is a setting I need to enable, which I haven't been able to find. Any suggestions are appreciated.
The second idea was to use an MHL cable. MHL, or Mobile High-definition Link, has a micro-USB plug on one end and an HDMI plug on the other. You can use it to display what is on your phone screen on a high-definition television. I don't watch much television, so we don't have an HDTV, and I don't have any MHL or HDMI cables. I must admit I haven't looked, but most times I've done presentations, the projectors accept RCA input (the old-fashioned TV connector) or VGA input (the standard for PC monitors); HDMI inputs are far from ubiquitous, so an RCA or VGA approach would work better.
My next thought was to find some way to connect the phone to a laptop and the laptop to the projector. I've connected other Android phones to my laptop using the USB cable and running Android Development Tools (ADT) on the laptop. The Samsung Galaxy S4, like many smartphones, does not have USB debugging enabled by default. How To Enable Samsung Galaxy S4 USB Debugging provides good instructions on how to do this.
However, when I connected the smartphone to my laptop and started ADT, I got a message that the Samsung Galaxy S4 was offline. It took me a while to figure out the problem. Newer versions of Android add security to the device: you need to authorize the specific laptop to debug the phone. I was using Android Debug Bridge (adb) version 1.0.29, which does not support this authorization. When I upgraded to version 1.0.31 and tried again, a message popped up on my smartphone asking whether I wanted to allow the laptop to debug the phone. I said yes, and adb started working.
The Dalvik Debug Monitor Server (ddms) can display the screen of the Android device on the laptop. However, it only captures single images. To see the screen as it changes, I downloaded Android Projector.
Getting Android Projector to work also required a few steps. First, I made sure I had a current version of Java running on my laptop. Then I started adb and authorized my laptop to debug my phone. Next, I started ddms and connected to my phone. With ddms running, I started Android Projector. The screen came up nicely. I rotated it to match the orientation of the Glass screencast, hooked my laptop up to a projector, and could display what I was seeing on Glass to the whole room.
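The laptop-side sequence above can be sketched as a short command transcript. Tool names are as of mid-2013, and exact paths depend on where the Android SDK is installed, so treat this as a sketch rather than a script:

```shell
# Check that adb is new enough to support the on-phone authorization prompt.
adb version            # want Android Debug Bridge version 1.0.31 or later

# Restart the adb server and confirm the phone is authorized.
adb kill-server
adb start-server
adb devices            # the phone should be listed as "device", not "offline"

# Start the Dalvik Debug Monitor Server (from the SDK tools directory),
# connect it to the phone, then launch Android Projector.
ddms &
```

If `adb devices` reports the phone as "unauthorized" or "offline", look for the allow-debugging dialog on the phone's screen.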
The one caveat: there tends to be a three-to-five-second lag between when a card comes up on Glass and when it makes it to the projector. An aside: I could have put Glass into debug mode and connected Glass directly to the laptop. I tried this, but then you need to remain connected via a USB cable, which ties you down and loses some of the effect.
With this in place, I could now show how Glass works on the projector. The next step was to put a presentation up on Glass. I've been doing a little bit of Glass development and have created GlassDeck. It allows you to create a bundled deck of cards as a timeline item. It is written in PHP, based on the quickstart Mirror API guide. It is still fairly primitive; I wrote it mostly as a programming exercise. You can save your GlassDecks and share them with others. If you log into GlassDeck, you can find my presentation at 106686438536671985498:Presentation. Even if you don't have Glass, you can see the HTML that I used to create the cards. If you do have Glass, you can edit it and create your own presentations.
This is all fairly primitive still, but has potential. I look forward to refining my GlassDeck app, finding easier ways to display Glass on a projector, and perhaps even using a remote for Glass at some point. Remotte is creating one such remote that might be useful for doing presentations using Glass.
So, Thursday, I'll do a presentation using Google Glass. I'll let you know how it goes. Let me know your thoughts on ways to make doing presentations using Google Glass even easier.
A few days ago, I wrote a blog post about My First Google Glass App in PHP. Since then, I've continued to enhance it, talked with people who have been testing it, and offered suggestions to others trying to get started. Here are some of the things I've been telling people.
The first place to start changing code is in index.php. I read through the various operations that send cards to the timeline, started making a few changes here and there, and then got bolder in my changes. One important tip, especially if you're developing code and sending lots of test cards to your timeline: add the DELETE action to each card so you can delete them when your testing is over.
$menu_item = new Google_MenuItem();
$menu_item->setAction("DELETE");
$timeline_item->setMenuItems(array($menu_item)); // $timeline_item: the card being inserted
While you're at it, you may want to add the ability to PIN or UNPIN a timeline item. This is the same as the code above for adding the DELETE action, but use TOGGLE_PINNED instead. (It took me a little while to find that action.)
Another minor glitch in the sample PHP code: it makes reference to $service_base_url, but that variable isn't set anywhere. You should change it to $base_url, or set $service_base_url = $base_url. Once you do, some of the images start working.
Another issue someone ran into is that the code is written to use SQLite2. If you have SQLite3 but not SQLite2, the PHP code doesn't work. Fortunately, I have both. There is a comment on the SQLite3 page that describes how to migrate from SQLite2 to SQLite3. I haven't tried it, because I wanted to migrate to MySQL instead.
Another thing to keep in mind: if you have different projects, make sure that config.php points to a different database for each project; otherwise, you can run into various issues with the credentials.
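For example, each project's config.php can point at its own database. The variable names below are illustrative, not necessarily the ones in the quickstart's config.php, so match them to your copy:

```php
<?php
// Hypothetical config.php fragment: give each project its own database
// so stored OAuth credentials from one project can't collide with another.
$api_client_id     = 'YOUR_CLIENT_ID';      // from the Google API Console
$api_client_secret = 'YOUR_CLIENT_SECRET';
$sqlite_database   = 'glassdeck_project_one.sqlite';  // unique per project
```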
One person asked how to make bundled cards. I didn't find any good documentation about this, so I hacked around until I figured it out. To add bundled cards to a timeline item, you use the setHtmlPages method, passing it an array of strings containing HTML.
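As a sketch of what that array might look like: the helper function and the recipe text below are my own illustration; only setHtml and setHtmlPages come from the Mirror API PHP client bundled with the quickstart.

```php
<?php
// Build one HTML string per card in the bundle. The helper and the card
// text are hypothetical; the <article><section> markup follows the
// patterns shown in the Mirror API Playground.
function make_deck_pages(array $steps) {
  $pages = array();
  foreach ($steps as $step) {
    $pages[] = '<article><section><p>' . htmlspecialchars($step) . '</p></section></article>';
  }
  return $pages;
}

$pages = make_deck_pages(array('Preheat the oven', 'Mix the batter', 'Bake for 25 minutes'));

// With a timeline item from the quickstart code, you would then call
// something like:
//   $timeline_item->setHtml(array_shift($pages));  // cover card
//   $timeline_item->setHtmlPages($pages);          // bundled sub-cards
```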
To get a good idea of the possible HTML, take a look at the Google Mirror API Playground. It has lots of good examples, and I used it to tweak my application.
To put a map in a timeline card, you should read the section of the location documentation, Rendering maps on timeline cards.
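From that documentation, the map is just an img tag with a special glass://map URL embedded in the card's HTML. A minimal sketch follows; the helper function is my own, and the exact URL scheme and w/h/marker parameters follow my reading of the docs, so verify them before relying on this:

```php
<?php
// Hypothetical helper that builds the <img> tag for a map on a timeline
// card, per the Mirror API location docs (glass://map URL scheme assumed).
function glass_map_img($width, $height, $lat, $lng) {
  $src = sprintf('glass://map?w=%d&h=%d&marker=0;%.6f,%.6f',
                 $width, $height, $lat, $lng);
  return '<img src="' . $src . '" width="100%" height="100%">';
}

echo glass_map_img(640, 360, 41.308274, -72.927884); // New Haven, CT
```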
The next thing to look at is the util.php code. This is the code which stores credential information in the SQLite2 database. I changed the code around to use MySQL instead. There were a few error conditions that I didn't properly handle, which prevented some people from accessing the app. However, once that was fixed, I started adding the code to save decks of timeline cards. The first part of that is completed. Now, I just need to add code so people can save multiple decks, optionally share them, and retrieve them.
One person suggested that some of this would probably be better done in Drupal. I like working in Drupal and I've thought about using this as a framework for my application. However, I wanted to get past the mirror API complexity first.
To see the latest version of my app, check out GlassDeck. If you want the MySQL util.php code or have questions, contact me via Google+.
Over the past week or so, in the limited free time I've had, I've put together my first Google Glass app. It is a fairly simple application that sends a bundled deck of timeline cards to your Google Glass, similar to how the New York Times app sends a bundle of articles, or how several emails, birthday notifications, or other bits of information get bundled together. Look for the white triangle in the upper right corner of the card.
I started the project simply to learn my way around writing apps in Glass. I chose to use PHP since it is the language I'm most comfortable with to program webpages. I started by downloading the Mirror Quick Start code for PHP from Google and making small modifications to it.
My initial thought was to create a tool I could use for doing presentations. The goal would be to use the Glass app as a PowerPoint replacement. I often mirror what I'm seeing in Glass to my smartphone, which I've found useful in demonstrating Glass. I thought it would be nice to build a presentation in Glass and then run through it, showing what is on the smartphone screen, ideally either hooking the smartphone up directly to a projector or sharing it to a computer connected to a projector.
As I got the app a little further developed, I started thinking more generally about ways a deck of Timeline cards could be used. I've used my app to load a recipe that I can follow while cooking. I've used it to load a poem. I've used it to load a todo list.
Getting started was fairly easy. I started with The Google Mirror API Developer Preview, used the sample apps on that page, and then started my own application.
First, I created my project at the Google API Console. I selected Create… on the menu on the left to create my project. I turned on the Glass Mirror API in the Services section. I requested OAuth permission and went to the API Access section to create a Client ID for the web application.
I then downloaded the PHP code using GIT:
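The clone command looked something like this. The repository URL is from memory and the project may have moved since, so check the Mirror API Developer Preview page for the current location:

```shell
# Assumed repository location for the PHP Mirror API quickstart (verify
# against the Developer Preview page before using):
git clone https://github.com/googleglass/mirror-quickstart-php.git
cd mirror-quickstart-php
```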
I changed the config.php section to use the API key information from the API Access section of the Google API Console and started hacking. I read different parts of the code and started making small changes. When I got a better sense of what could be done, I started making larger changes. Glass Explorers are welcome to test out the app, and provide feedback. Beginner Glass Developers who want more information are welcome to contact me directly.
You can see the app at Glass Deck
This morning, there were two news stories about Google Glass.
The first, Woodbridge man one of those chosen to test new Google Glass technology, comes from an interview about Google Glass that I did with Jim Shelton of the Register about a week ago.
The second, Analysis & Video | Google Glass Review, is by a friend who is also a Glass Explorer and was not as impressed as I am. He writes:
It falls short because in the end the only people who likely will be willing to immerse themselves in 24/7 digital living are the several thousand “Glass Explorers” Google invited to purchase the $1500 product.
As one of the other Glass Explorers in Connecticut, I would like to present a contrasting viewpoint. I received my Google Glass just over a month ago, and I'm very pleased with it.
It is true that currently, everything I can do with Google Glass, I can do with a smartphone. It is also true that just about everything I can do with a smartphone, I can do with a laptop and a digital camera.
However, I find it easier to take and share pictures and videos with Glass than it is to take and share pictures with a smartphone, just as I find that task easier on a smartphone than I do with a laptop and a camera.
Yet looking only at the current applications of a prototype seems a bit narrow. I have chosen to explore Glass, not for what it can currently do, but for what it will be possible to do in the future with it. I've already started developing apps for Glass as well as brainstorming with other Glass Explorers around the world.
One of the most exciting areas is looking at Glass as a sensor in health care and in grids for big data analysis.
As I commented in my interview in the New Haven Register, I believe that Google Glass is to wearable computing what the Apple Newton was to PDAs and Smartphones.
People maligned the Apple Newton, and its product life was not spectacular. Yet it laid the groundwork for PDAs and smartphones. Lon is probably right, the only people willing to spend $1,500 on a prototype are innovators and early adopters. Everyone else is likely to wait until wearable computing becomes more developed and ubiquitous. At that point, I'll set my Google Glass next to my Apple Newton and the core memory from an old PDP-8.
I didn't address the price point issue. I do believe that $1,500 is steep for participating in a development program with a prototype, but not out of line.
On the other hand, I expect that by the time the third generation of wearable computing comes out, older versions will be in the $200-$300 range.