Over the past week or so, during the limited free time I've had, I've put together my first Google Glass app. It is a fairly simple application that sends a bundled deck of timeline cards to your Google Glass, similar to how the New York Times app sends a bundle of articles, or how several emails, birthday notifications, or other bits of information arrive bundled together. Look for the white triangle in the upper right corner of the card.
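In Mirror API terms, a bundle is just a set of timeline items that share a `bundleId`, with one item marked as the cover. Here is a rough sketch of how the payloads might be built; the `buildDeck` helper and the deck contents are my own illustration, not code from the app:

```php
<?php
// Build the request bodies for a bundled deck of timeline cards.
// Items sharing the same bundleId are grouped into one stack on Glass;
// the item with isBundleCover = true is shown on top (the white triangle).
function buildDeck(array $slides, string $bundleId): array {
    $items = [];
    foreach ($slides as $i => $text) {
        $items[] = [
            'text' => $text,
            'bundleId' => $bundleId,
            'isBundleCover' => ($i === 0), // first slide is the cover card
        ];
    }
    return $items;
}

// Each item in the returned array would be inserted via the timeline API.
$deck = buildDeck(['Title card', 'Step one', 'Step two'], 'deck-123');
echo json_encode($deck[0]), "\n";
```

Each element of the returned array would then be inserted as its own timeline item; Glass does the grouping based on the shared `bundleId`.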
I started the project simply to learn my way around writing apps for Glass. I chose PHP, since it is the language I'm most comfortable with for programming web pages. I started by downloading the Mirror Quick Start code for PHP from Google and making small modifications to it.
My initial thought was to create a tool that I could use for giving presentations. The goal would be to use the Glass app as a PowerPoint replacement. I often mirror what I'm seeing in Glass to my smartphone, and I've found this useful in demonstrating Glass. I've thought it would be nice to build a presentation in Glass and then run through it, showing what is on the smartphone screen. Ideally, I'd either hook the smartphone up directly to a projector, or share its screen to a computer connected to a projector.
As I got the app a little further developed, I started thinking more generally about ways a deck of Timeline cards could be used. I've used my app to load a recipe that I can follow while cooking. I've used it to load a poem. I've used it to load a todo list.
Getting started was fairly easy. I started with the Google Mirror API Developer Preview. I used the sample apps on that page, and then started my own application.
First, I created my project at the Google API Console. I selected Create… on the menu on the left to create my project. I turned on the Glass Mirror API in the Services section. I requested OAuth permission and went to the API Access section to create a Client ID for the web application.
I then downloaded the PHP code using GIT:
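The command looked something like this (the repository path is from memory and may have moved, so check the Mirror API Quick Start page for the current location):

```shell
# Clone Google's Mirror API quick start project for PHP
git clone https://github.com/googleglass/mirror-quickstart-php.git
```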
I changed the config.php section to use the API key information from the API Access section of the Google API Console and started hacking. I read different parts of the code and started making small changes. When I got a better sense of what could be done, I started making larger changes. Glass Explorers are welcome to test out the app, and provide feedback. Beginner Glass Developers who want more information are welcome to contact me directly.
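The changes to config.php amounted to dropping in the credentials from the console. Roughly, it looks like this; the variable names follow the quick start approximately and the values are placeholders:

```php
<?php
// config.php (excerpt) — placeholder values; substitute the Client ID,
// client secret, and API key from the API Access section of the
// Google API Console. Variable names here are approximate.
$api_client_id     = 'YOUR_CLIENT_ID.apps.googleusercontent.com';
$api_client_secret = 'YOUR_CLIENT_SECRET';
$api_simple_key    = 'YOUR_API_KEY';
$base_url          = 'https://example.com/glass-deck'; // must match the OAuth redirect URI
```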
You can see the app at Glass Deck.
This morning, there were two news stories about Google Glass.
The first, Woodbridge man one of those chosen to test new Google Glass technology is from an interview I did with Jim Shelton from the Register about Google Glass about a week ago.
The second, Analysis & Video | Google Glass Review, is by a friend who is also a Glass Explorer, and was not as impressed as I am. He writes:
It falls short because in the end the only people who likely will be willing to immerse themselves in 24/7 digital living are the several thousand “Glass Explorers” Google invited to purchase the $1500 product.
As one of the other Glass Explorers in Connecticut, I would like to present a contrasting viewpoint. I received my Google Glass just over a month ago, and I'm very pleased with it.
It is true that currently, everything that I can do with Google Glass, I can do with a smartphone. It is also true that just about everything I can do with a smartphone, I can do with a laptop and a digital camera.
However, I find it easier to take and share pictures and videos with Glass than it is to take and share pictures with a smartphone, just as I find that task easier on a smartphone than I do with a laptop and a camera.
Yet looking only at the current applications of a prototype seems a bit narrow. I have chosen to explore Glass, not for what it can currently do, but for what it will be possible to do in the future with it. I've already started developing apps for Glass as well as brainstorming with other Glass Explorers around the world.
One of the most exciting areas is looking at Glass as a sensor in health care and in grids for big data analysis.
As I commented in my interview in the New Haven Register, I believe that Google Glass is to wearable computing what the Apple Newton was to PDAs and Smartphones.
People maligned the Apple Newton, and its product life was not spectacular. Yet it laid the groundwork for PDAs and smartphones. Lon is probably right, the only people willing to spend $1,500 on a prototype are innovators and early adopters. Everyone else is likely to wait until wearable computing becomes more developed and ubiquitous. At that point, I'll set my Google Glass next to my Apple Newton and the core memory from an old PDP-8.
I didn't address the price point issue. I do believe that $1,500 is steep for participating in a development program with a prototype, but not out of line.
On the other hand, I expect that by the time the third generation of wearable computing comes out, older versions will be in the $200-$300 range.
I've been writing a lot about the potential of Glass and things that could be developed for it, and a lot of my focus has been on Glass as the prototype for future wearable computing and Glass for special niches. Yet you can do a lot with Glass as is, particularly in terms of social media. However, even in this area, there is work to be done.
I now wear Google Glass most of the time that I'm awake. I've run into issues, from time to time, with it not posting to Twitter or Facebook. It needs to be a bit more reliable in this way. I haven't really used the voice to text to add captions. I just don't trust voice to text, or "Gotham boys two text", as my friends like to call it, enough for tweeting.
Yet as a social media manager, tweeting for many accounts, I wish I could easily select a Twitter account to send the pictures to. The same thing applies to Facebook. I'd like to be able to post a picture to a page.
Google+ has even more issues. Currently, I'm following about two thousand people on Google+. With Glass and Ingress, my use of Google+ has been picking up, and we'll see what the latest changes result in. I've also made copious use of Circles to organize things, so, when I try to share a picture, I'm given about sixty different choices of who to share things with on Google+, as well as one for Twitter, a couple for Facebook, and a few for random other apps that I've been testing.
Ideally, when I select share, I should be able to select the platform next, and then within the platform select the specifics for that platform. For example, tap on the picture, and get the Share or Delete option. Tap on Share, and get Twitter, Facebook, Tumblr, Path, or Google+. Tap on Google+ and then get my sixty choices.
On top of that, ever since Google+ came out, I have been calling for hierarchical circles. I have circles for Connecticut, New York, Massachusetts, and a few other states. Within Connecticut, I have circles for Woodbridge, New Haven, and a couple other towns. It would be nice if I could set up Woodbridge as a subset of Connecticut. If I add someone to Woodbridge, I also want them in my Connecticut circle.
This could also be used to help people with lots of circles organize them better. For example, my circle hierarchy might be something like Locations, and Topics. Within Locations, I would have Connecticut, Massachusetts, New York, etc. Within Connecticut, I would have Woodbridge, New Haven, etc. Within Topics, I would have technology, healthcare, games, and others. Within technology, I would have Glass, Ingress, Drupal and so on.
To share a picture with just my friends in Woodbridge, I'd tap on Share, Google+, Locations, Connecticut, Woodbridge. Yes, that would be five taps, but it would be much easier than scrolling back and forth among over sixty different cards. Glass could be configured to make hierarchical selection optional, for people with much smaller numbers of circles.
In terms of getting the news on Glass, there is plenty to work with. For me, the high priority tweets, direct messages, and Gmail messages seem about right. CNN news is pretty good. It comes up with a picture and a headline, and if I drill down, I can share the story. The New York Times does not fare as well. They take up most of the card with their logo, and only have small pictures and no text about the stories. When I drill down, I've not been able to find an option to share the stories. Generally speaking, I don't look at the Times on Glass, and I'm thinking of removing them from my timeline.
As a camera, I'm getting to like Glass, but have some concerns. I'm never sure exactly what I'm aiming at, so sometimes my pictures come out off center. Mostly, however, I like the way photography works, especially being able to hit the shutter button, then tap on the side of my Glass a couple of times to get a picture online.
A friend has gotten Glass and is apparently considering returning it. It doesn't do much that you can't already do with a cellphone. I'd take that a bit further. Everything I can do with Glass, I can do much better with a laptop and a DSLR. I just can't do it as quickly, easily, or seamlessly.
In terms of my use of Glass and Social Media yesterday, several photos I shared were retweeted. One person asked if my photo of Lt. Gov. Nancy Wyman is the first portrait of a Lt. Gov. that has been shared via Glass. So far, I haven't found any others.
That's my Glass and Social Media recap for right now. What are you finding?
I'm active in several discussions about Glass online and recently a couple questions came up where I shared fairly long comments. To try and keep together some of what I'm writing about Glass in one place, I'm adding them here.
The first question was from a UK firm that asked where people saw Glass going in healthcare.
I work for a Community Health Center in the United States and have recently gotten Google Glass. We've been having lots of discussions about how we hope to use Google Glass.
Enhancing our Telemedicine program
(See http://quality.chc1.com/echo/ for more information about our Telemedicine program)
Making our EHRs available to our medical providers via Glass, including improved ways to do screenings and enter information into our EHR system.
Using Glass as an advocacy tool to help people recognize the social determinants of health around them.
The second question asked which markets were likely to be largest for Glass, and whether people thought it would be law enforcement. I replied:
My father-in-law is a retired Federal agent. He is very excited about Glass from a law enforcement perspective. I work in health care, and I'm very excited about it from that perspective. Friends work in marketing and creative services and are very excited about it from that angle.
I think it is way too early to try and guess which market will be biggest. If I were guessing, I might go with health care, because it is such a large market. As a nation we spend a lot more on health care than we do on law enforcement, unless you include the full defense budget.
I also think it is useful to look beyond the current Glass prototype. Where do you see this going? I tend to think of Glass in terms of wearable computing. If we add devices like the Fitbit and the Pebble into the same class and ask where this class of devices is going, the question gets even more interesting.
To sum it up, I'd take an old saying and twist it around for Glass: "Follow your interests and the market will follow."
What are your thoughts?
Yesterday, I blogged about my plans to get together with a friend to talk about Glass development. I went on to share some initial thoughts, which mostly revolved around Glass as a device used to retrieve information. Yet much of today's discussion focused on a different aspect of Glass, Glass as a sensor, used to transmit information.
I touched on Glass as a sensor a little bit at the end of yesterday's blog post, when I talked about using it in fitness, along the lines of Fitbit. Yet my friend, an MIT engineering graduate and son of a retired MIT professor with strong ties back to his alma mater, encouraged me to think more about Glass as a sensor.
In the past, we had worked together on complex event processing projects and developed code for analyzing complex data using Matlab. We talked a lot about various sensor related projects at MIT, so this shift of discussion wasn't a surprise.
What information is Glass capable of gathering right now? Images. Sounds. Location. Can it gather fine motions? Temperature? Other data? What might one be able to do if one could take this information and use it to trigger events?
How can this information be accessed? It looks like location information can be subscribed to with the Mirror API, but other information may need some sort of special Android App for Glass to be developed.
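For the location piece, the Mirror API lets an app subscribe to the locations collection and receive pings at a callback URL. A minimal sketch of the request body (the `buildLocationSubscription` helper and the callback URL are my own placeholders; with the PHP client library, the body would be passed to the subscriptions insert call):

```php
<?php
// Sketch of a Mirror API subscription request body for location updates.
// The callback URL and user token here are placeholders for illustration.
function buildLocationSubscription(string $callbackUrl, string $userToken): array {
    return [
        'collection'  => 'locations',  // notify when the user's location updates
        'callbackUrl' => $callbackUrl, // must be an HTTPS endpoint on your server
        'userToken'   => $userToken,   // echoed back so you know which user it is
    ];
}

echo json_encode(buildLocationSubscription(
    'https://example.com/notify.php', 'user-42'
)), "\n";
```

When Glass reports a new location, Google POSTs a notification to the callback URL, and the server can then fetch the latest location and decide whether to trigger an event.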
So, I'm starting to explore working with the Mirror API a little bit more. I've sent messages to my Google Glass from the sample apps as well as from the playground. The next step will be to create something on my server.
Now, I've spoken with a few different people about developing for Glass. It will be interesting to see who comes up with what.