Rabbit, Rabbit, Rabbit. Well, here we are, another October. As with other months, when I get the time, I start off with a childhood invocation for good luck.
It was October, thirty-seven years ago, when a classmate of mine from high school disappeared. They found her body later in the month, but never found the murderer. Last year, during Hurricane Sandy, towards the end of October, my mother died in a car accident.
Looking back over my career, many of my job changes took place in October. My youngest daughter was born in October, as were some of my closest longtime friends.
It's October, and the Government is shut down. This weekend, I sat on the porch, after making a batch of green apple jelly. Yes, I'm connected online. With my Google Glass, I get notifications as they happen. But there is something about sitting on the porch, having just made jelly.
I thought about when my mother was a kid. Yes, she heard about the bombing of Pearl Harbor fairly quickly via the radio, but most news traveled much more slowly then, and slower still before the radio and telegraph. How much is this always-on, instant notification contributing to dysfunction in Washington, where people seem more interested in the political theatre of the sound bite than in sound governing?
How much is the medium the message?
I've been reading The Blithedale Romance by Nathaniel Hawthorne. The setting is a utopian community in the mid-nineteenth century. The hero is sick and reads books that other members of the community bring to him. Yet I'm reading it as an ebook on my smartphone. What is the mixed message of a nineteenth-century novel on a twenty-first-century device?
Kim and I have started watching "H+". It is a series about human implants similar to Google Glass, and a mass kill-off of people with the implants caused by a network virus. The medium is the message, as my wife and I watch it on an old TV hooked up to an old Roku that still manages to get YouTube. I watched an episode on Google Glass, which pushes the medium-is-the-message idea even further.
And here I am, writing a blog post about it.
It is a post-apocalyptic world, and I've been thinking about this new millennialism, a resurgence of apocalyptic thinking. No, we didn't have a Mayan apocalypse. We haven't had an apocalypse as a result of people of the same gender who love each other now being able to marry one another.
Now, even though the Federal Government is shut down, you can go online and purchase health insurance. Like same-sex marriage, for some this looks like the end of the world. For others, the Federal Government shutdown looks like the end of the world.
But as I sat on the porch over the weekend, with a kitchen full of jams and jellies that I've made, and as I sit in my chair now, writing my blog post and listening to the large dog snore on the couch next to me, this is nothing like the end of the world in all the dystopian post-apocalyptic stories.
So I say Rabbit, Rabbit, Rabbit, bringing back all the simple childhood hopes and memories in this complicated hyper-connected world as I think of dogs and jelly and porches, and trying to get back to sleep.
Matthew Katz recently posted a link to his article in KevinMD, Google Glass for medicine: 4 reasons why it could be disastrous saying:
Am I just turning into a technophobe? My post on KevinMD about Google Glass.
As a person who has been using Google Glass for the past three months in a health care setting, I believe you have become a technophobe.
Privacy Violations: The same issue applies to cellphones. Are you going to ban them from your practice?
Hackable: Personal computers are hackable as well. Ban them? (I worked in security for a Swiss bank two decades ago, when they said they'd never connect to the Internet because of security issues. There are risks with all technology, just like everything else in life. You can't ban life; instead, you need to mitigate risks.)
Concern with multitasking: This is probably the strongest point, which also seems pretty weak, based on my experience with Google Glass. The interruptions I get from Google Glass, wearing it all the time, are similar to the interruptions I get from phone calls, overhead pages, and other staff members knocking on my door.
Google's and medicine's goals aren't aligned: Again, on the surface, this seems like a valid point. However, from my experience dealing with pharmaceutical companies, medical device manufacturers, and insurance companies, I suspect that Google's goals may be more closely aligned with medicine's goals than those of most companies working in health care.
Over on his article, I added a couple additional thoughts, edited for the blog here:
The other point that I would make is that Google Glass is not in BETA. It is not even in ALPHA. It is still a prototype. I think it is premature to make determinations about what a prototype is likely to do to a business. You might want to go back and look at the history of the Xerox machine.
The Smithsonian Article, Making Copies is a good starting point.
At first, nobody bought Chester Carlson's strange idea. But trillions of documents later, his invention is the biggest thing in printing since Gutenberg
Companies turned down the Xerox machine because so few people made copies before it existed that they didn't think it would sell.
My experience with Glass, so far, is similar to my experiences with the Apple Newton in the early 90s. A lot of people didn't think much of the Newton back then, and it never really took off, but it laid the groundwork for smartphones today.
I wouldn't be surprised to see Glass follow a similar path and in twenty years be an all but forgotten precursor to ubiquitous wearable computing.
One last thought: it is worth looking at the Technology Adoption Life Cycle, as written about back in the 1950s, particularly by Everett Rogers in his book Diffusion of Innovations.
Google Glass is at the very front end of the adoption lifecycle, where only a few innovators have been using it. As has become more and more common these days, when a new innovation comes along, it often gets a backlash. It seems that the backlash against an innovation is proportional to the potential disruption it carries.
As a final comment, I'd encourage you to read a blog post I wrote back in 2007 about Twitter:
In a previous post about ad:tech, I mentioned how I learned about NY Times' Facebook page from a twitter by Steve Rubel. I commented about this in the press room, and one of the reporters was surprised to hear that twitter was still around and active. I reflected back on hearing speakers at OMMA predict the demise of Twitter, Facebook and Second Life and it struck me that the standard technology adoption curve that we all hear so much about, may have a lot of interesting nuances.
Back in July, I wrote a blog post, Players Who Suit Ingress building off Richard Bartle's 1996 article about types of players in virtual games.
In the article, I suggested that Ingress players may have similar characteristics as players of MUDs back in the 1990s. Key player types include people who build things, people who destroy things, and people who explore.
Ingress just came out with a new update that provides information about a player's activity. This information maps nicely onto some of these player types.
As an example, the first category Ingress lists is Discovery with the number of Unique Portals Visited. I've currently visited 476 different portals. It is enough to get me a first level badge, which only requires 100 different portals, but not enough for the second level badge of 1000 portals. I suspect some of this depends on where you live. Visiting 1000 different portals may be easier if you live in New York City than if you live in the middle of Kansas.
The second category is Building. There are four different statistics provided: Hacks, Resonators Deployed, Links Created, and Control Fields Created. I am currently at 7,869 hacks, adding over 500 new hacks a week. That is still a first level badge, having hit 2,000 hacks, but not yet at the second level badge of 10,000 hacks. However, at my current rate, I should hit the second level in about a month.
I have deployed 10,539 resonators. That gets me the second level badge. The third level is 30,000 resonators, so that will probably be quite a while yet.
I've created 2,721 links, which gets me the second level badge for 1,000 links and a little over halfway to the third level badge of 5,000 links. I have created 267 control fields, which gets me the first level badge at 100, and halfway to the second level badge of 500.
On the Combat side, I've destroyed 4,521 enemy resonators. Again, past level 1, at 2,000, but not yet halfway to level 2 at 10,000. I've destroyed 500 enemy links and 108 enemy control fields. I don't see badges for those; perhaps I haven't destroyed enough. On the other hand, it is interesting to see that I've deployed over twice as many resonators as I've destroyed, and created over five times as many links as I've destroyed.
I guess I'm more of a builder than a destroyer. How about you?
I've never been a big fan of PowerPoint, dating back to my training as a speaker in the 90s. The audience should be focusing on you and what you are speaking about, not on reading your script and looking at pictures on a screen. If you must use PowerPoint, you should follow Guy Kawasaki's 10-20-30 rule: no more than 10 slides, no more than 20 minutes, and no font smaller than 30 points.
Instead, if I am using visual aids in a presentation, I prefer to use tools related to the presentation. When I speak about social media, I like to use Buffer to send preloaded tweets out to Tweetchat using a hashtag. The key points still get displayed on the screen, the audience gets more of a chance to interact, and it illustrates using the technology.
On Thursday morning, I will be doing a presentation introducing a group of librarians to Google Glass. So, the challenge I came up with for myself was: could I use Google Glass as a replacement for PowerPoint?
The first issue was to find a way to present what is on the screen through a projector. I've done presentations using the screencast capabilities of the Glass app on my smartphone. This works very well if you are presenting to a small group that can gather around the cellphone, but for a larger crowd, I needed some way to connect my smartphone to a projector.
My first attempt was to use the old TV-out approach. My current smartphone is a Samsung Galaxy S4. Some Samsung phones, like several other phones, can display to old-fashioned televisions, and many projectors, using a cable that plugs into the headphone jack. I have a cable like that which I've used on other phones, but I couldn't get it to work on the S4. My guess is that there is a setting I need to enable, which I haven't been able to find. Any suggestions are appreciated.
The second idea was to use an MHL cable. MHL, or Mobile High-Definition Link, has a micro-USB plug on one end and an HDMI plug on the other. You can use it to display what is on your phone screen on a high-definition television. I don't watch much television, so we don't have an HDTV, and I don't have any MHL or HDMI cables. I must admit that I haven't looked, but it seems that most times I've done presentations, the projectors accept RCA input (the old-fashioned TV connector) or VGA input (the standard for PC monitors), while HDMI inputs are far from ubiquitous, so finding an RCA or VGA approach would work better.
My next thought was to try and find some way to connect the cellphone to a laptop and connect the laptop to the projector. I've connected other Android phones to my laptop using the USB cable and running Android Development Tools, ADT, on the laptop. The Samsung S4, like many smartphones, does not default to having debugging enabled. How To Enable Samsung Galaxy S4 USB Debugging provides good instructions on how to do this.
However, when I connected the smartphone to my laptop and started ADT, I got a message that the Samsung Galaxy S4 was offline. It took me a while to figure out what the problem was. Newer versions of Android add security to the device: you need to permit the specific laptop to debug the phone. I was using Android Debug Bridge (adb) version 1.0.29, which does not support this type of security. When I upgraded to version 1.0.31 and tried to enter debug, a message popped up on my smartphone asking if I wanted to allow the laptop to debug the phone. I said yes, and adb started working.
The Dalvik Debug Monitor Server (ddms) provides the ability to display the screen of the Android device on the laptop. However, this only works for single images. To be able to see the screen as it changes, I downloaded Android Projector.
Getting Android Projector to work also required a few steps. First, I needed to make sure I had a current version of Java running on my laptop. Then, I started adb and authorized my laptop to debug my phone. Next, I started ddms and connected to my phone. With ddms running, I then started Android Projector. The screen came up nicely. I rotated it to match the orientation of the Glass app screencast, hooked my laptop up to a projector, and could display what I was seeing on Glass to the whole room.
The one caveat: there tends to be a lag of three to five seconds between when a card comes up on Glass and when it makes it to the projector. An aside: I could have put Glass into debug mode and connected Glass directly to the laptop. I tried this, but then you need to remain connected via a USB cable, which ties you down and loses some of the effect.
With this in place, I could now display how Glass works on the projector. The next step was to put a presentation up on Glass. I've been doing a little bit of Glass development and have created GlassDeck. It allows you to create a bundled deck of cards as a timeline item. It is written in PHP, based on the quickstart Mirror API guide. It is still fairly primitive; I wrote it mostly as a programming exercise. You can save your GlassDecks and share them with others. If you log into GlassDeck, you can find my presentation at 106686438536671985498:Presentation. Even if you don't have Glass, you can see the HTML that I used to create the cards. If you do have Glass, you can edit it and create your own presentations.
This is all fairly primitive still, but has potential. I look forward to refining my GlassDeck app, finding easier ways to display Glass on a projector, and perhaps even using a remote for Glass at some point. Remotte is creating one such remote that might be useful for doing presentations using Glass.
So, Thursday, I'll do a presentation using Google Glass. I'll let you know how it goes. Let me know your thoughts on ways to make doing presentations using Google Glass even easier.
A few days ago, I wrote a blog post about My First Google Glass App in PHP. Since then, I've continued to enhance it, talk with people who have been testing it, and offer suggestions to others trying to get started. Here are some of the things I've been telling people.
The first place to start changing code is in index.php. What I did was read through the various operations that send cards to the timeline. I started making a few changes here and there, and then started getting bolder in my changes. One important tip, especially if you're developing code and sending lots of test cards to your timeline: add the DELETE action to each card so you can delete them when your testing is over.
$menu_item = new Google_MenuItem();
$menu_item->setAction("DELETE");
$timeline_item->setMenuItems(array($menu_item));
While you're at it, you may want to add functionality to PIN or UNPIN a timeline item. This is the same as the above code for adding the DELETE action, but use TOGGLE_PINNED. (It took me a little while to find that action.)
Another minor glitch in the sample PHP code: it makes reference to $service_base_url, but that isn't set anywhere. You should change it to $base_url, or set $service_base_url = $base_url. If you do this, some of the images start working.
An issue that another person ran into is that the code is written to use SQLite2. If you have SQLite3 but not SQLite2, the PHP code doesn't work. Fortunately, I have both. There is a comment on the SQLite3 page that talks about how to migrate from SQLite2 to SQLite3. I haven't tried that, because I wanted to migrate to MySQL.
Another thing to keep in mind: If you have different projects, make sure that you set up the config.php to point to different databases for each project, otherwise, you can run into various issues with the credentials.
One person asked about how to make bundled cards. I didn't find any good documentation about this, so I hacked around until I figured it out. To add bundled cards to a timeline item, you need to use the setHtmlPages method, passing it an array of strings containing HTML.
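Here is a minimal sketch of what that might look like, based on the Google_TimelineItem class in the quickstart's PHP client library; the HTML content and variable name are just illustrative:

```php
// Sketch: create a bundled deck of cards by giving one timeline item
// additional HTML pages via setHtmlPages. Assumes the Mirror API PHP
// client library from the quickstart is loaded and authorized.
$timeline_item = new Google_TimelineItem();
$timeline_item->setHtml("<article><p>Cover card</p></article>");
$timeline_item->setHtmlPages(array(
  "<article><p>Second card in the bundle</p></article>",
  "<article><p>Third card in the bundle</p></article>"
));
```

The first card (set with setHtml) becomes the cover of the bundle, and each string in the array becomes a card inside it.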
To get a good idea of the possible HTML code, take a look at the Google Mirror API Playground. It has lots of good examples, and I used it to tweak my application.
To put a map in a timeline card, you should read the section of the location documentation, Rendering maps on timeline cards.
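As I recall from that documentation, the map is just an img tag with a special glass://map URL embedded in the card's HTML; the exact parameters below are from memory, so treat them as an assumption and check the docs:

```php
// Sketch: a timeline card that renders a map. The glass://map URL scheme
// is described in the Mirror API location docs; the coordinates here are
// arbitrary examples, and the parameter names should be verified.
$timeline_item = new Google_TimelineItem();
$timeline_item->setHtml(
  '<article><img src="glass://map?w=240&h=360&marker=0;42.369,-71.08" ' .
  'height="360" width="240"></article>'
);
```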
The next thing to look at is the util.php code. This is the code which stores credential information in the SQLite2 database. I changed the code around to use MySQL instead. There were a few error conditions that I didn't properly handle, which prevented some people from accessing the app. However, once that was fixed, I started adding the code to save decks of timeline cards. The first part of that is completed. Now, I just need to add code so people can save multiple decks, optionally share them, and retrieve them.
One person suggested that some of this would probably be better done in Drupal. I like working in Drupal and I've thought about using this as a framework for my application. However, I wanted to get past the mirror API complexity first.
To see the latest version of my app, check out GlassDeck. If you want the MySQL util.php code or have questions, contact me via Google+.