This evening, I sat down for my evening positive attitude adjustment and found that Howard Rheingold had shared on Facebook a link to Jason Feifer's piece in Fast Company, Google Makes You Smarter, Facebook Makes You Happier, Selfies Make You a Better Person.
It was, in my opinion, a very well-written response to Sherry Turkle's recent Op-Ed in the New York Times, The Documented Life, in which she complains about selfies.
My initial reaction to Turkle's piece was to write Sisyphus' Selfie. I've been intending to write more on this, and I started to write a comment on Howard's status. Yet as it grew, I thought I should really make it part of my blog post.
I started off:
I must say, as an active participant in LambdaMOO back in the mid 90s and a friend of many of the researchers and cyberanthropologists who became involved there, I've always found Turkle to be a bit full of herself (and other stuff).
I read her Op-Ed and found that my opinion of her hasn't improved over the past 18 years. I've been meaning to write a blog post about her article, very similar to Feifer's, but perhaps from a slightly different angle.
This is where I decided to merge the comment into this blog post. One person suggested simply calling Turkle a Luddite, and then went on to repeat various assertions of Turkle's that are tangential to the article, claiming them to be facts.
I think Luddite is an overused word amongst technophiles and so I want to present a slightly different idea.
Marc Prensky, in his famous article Digital Natives, Digital Immigrants, presents the idea of people who have grown up in a digital culture as digital natives. Those who have moved into a digital culture, having grown up in a different one, are digital immigrants.
In my mind, this fits nicely with some of what Turkle talks about. Yes, growing up in a digital culture does change the way we think and act. Yet this also points to the biggest problem with what Turkle has to say.
She is looking at digital culture from the viewpoint of a digital immigrant. For example, consider her comment:
We don’t experience interruptions as disruptions anymore. But they make it hard to settle into serious conversations with ourselves and with other people because emotionally, we keep ourselves available to be taken away from everything.
This sure sounds a lot to me like that old grandmother living in the immigrant community complaining about how people these days just don’t do things the way they used to in the old world, and how much better the old world was.
I pause to think a little more and glance at my daughter creating something in Minecraft. She is a digital native. Me? Having been on the Internet for over thirty years, and on bulletin boards and programming computers long before that, I tend to think of myself as a digital pioneer, or perhaps a digital aborigine.
Yes, working with computers for all these years has changed my way of thinking. A critic might compare it to the way mercury changed the thinking of hatmakers, and my children might have other comments about having a Dad that has been online longer than they have.
Yet I relish my experiences with technology and I’m glad that my children are having even greater experiences with it. I love the camaraderie of other digital pioneers or digital aborigines.
Through my discussions with friends on Facebook, I’ve also found myself talking about Jacques Ellul, whether or not people need to learn to program, representations of transhumanism, The Power of Patience and Civil Religion and how it relates to prophetic religion, the social contract, the way we interact through digital media, and if there are implications for a Great Awakening.
And, for that matter, I let a young college student from Iran borrow my Google Glass this afternoon, so he could take a selfie of him wearing Google Glass, standing next to a robot.
Technology does change the way we think and act. There is much that needs to be discussed about it. I’m happy that Facebook has given me topics to Google and become smarter about. I’m just not sure that Turkle is really adding much of value to the conversation.
Rabbit, Rabbit, Rabbit. Well, here we are, another October. Like other months, when I get time, I start off with a childhood invocation for good luck.
But it's October. Thirty-seven years ago this month, a classmate of mine from high school disappeared. They found her body later in the month, but never found the murderer. Last year, towards the end of October during Hurricane Sandy, my mother died in a car accident.
Looking back over my career, many of my job changes took place in October. My youngest daughter was born in October, as were some of my closest long time friends.
It's October, and the Government is shut down. This weekend, I sat on the porch, after making a batch of green apple jelly. Yes, I'm connected online. With my Google Glass, I get notifications as they happen. But there is something about sitting on the porch, having just made jelly.
I thought about when my mother was a kid. Yes, she heard via the radio fairly quickly about the bombing of Pearl Harbor, but most news was much slower then, and even slower before the radio and telegraph. How much is this always-on, instant notification contributing to dysfunction in Washington, where people seem more interested in the political theatre of the sound bite than in sound governing?
How much is the medium the message?
I've been reading The Blithedale Romance by Nathaniel Hawthorne. The setting is a utopian community in the mid nineteenth century. The hero is sick and reads books that other members of the community bring to him. Yet I'm reading it as an ebook on my smartphone. What is the mixed message of a nineteenth century novel on a twenty-first century device?
Kim and I have started watching "H+". It is a series about human implants, similar to Google Glass, and a mass kill-off of people with the implants caused by a network virus. The medium is the message, as my wife and I watch it on an old TV hooked up to an old Roku which still manages to get YouTube. I watched an episode on Google Glass, which pushes the medium-is-the-message idea even further.
And here I am, writing a blog post about it.
It is a post-apocalyptic world, and I've been thinking about this new millennialism, a resurgence of apocalyptic thinking. No, we didn't have a Mayan apocalypse. We haven't had an apocalypse as a result of people of the same gender who love each other now being able to marry one another.
Now, even though the Federal Government is shutdown, you can go online and purchase health insurance. Like same-sex marriage, for some this looks like the end of the world. For others, the Federal Government shutdown looks like the end of the world.
But as I sat on the porch over the weekend, with a kitchen full of jams and jellies that I've made, and as I sit in my chair now, writing my blog post and listening to the large dog snore on the couch next to me, this is nothing like the end of the world in all the dystopian post-apocalyptic stories.
So I say Rabbit, Rabbit, Rabbit, bringing back all the simple childhood hopes and memories in this complicated hyper-connected world as I think of dogs and jelly and porches, and trying to get back to sleep.
Matthew Katz recently posted a link to his article in KevinMD, Google Glass for medicine: 4 reasons why it could be disastrous saying:
Am I just turning into a technophobe? My post on KevinMD about Google Glass.
As a person who has been using Google Glass for the past three months in a health care setting, I believe you have become a technophobe.
Privacy Violations: The same issue applies to cellphones. Are you going to ban them from your practice?
Hackable: Personal computers are hackable as well. Ban them? (I worked with security for a Swiss bank two decades ago, when they said they'd never connect to the Internet because of security issues. There are risks with all technology, just like everything else in life. You can't ban life; instead, you need to mitigate risks.)
Concern with multitasking: This is probably the strongest of the points, yet it also seems pretty weak based on my experience with Google Glass. The interruptions I get from Google Glass, wearing it all the time, are similar to the interruptions I get from phone calls, overhead pages, and other staff members knocking on my door.
Google's and medicine's goals aren't aligned: Again, on the surface, this seems like a valid point. However, from my experience dealing with pharmaceutical companies, medical device manufacturers, and insurance companies, I suspect that Google's goals may be more closely aligned with medicine's than those of most companies working in health care.
Over on his article, I added a couple of additional thoughts, edited for the blog here:
The other point that I would make is that Google Glass is not in BETA. It is not even in ALPHA. It is still a prototype. I think it is premature to make determinations about what a prototype is likely to do to a business. You might want to go back and look at the history of the Xerox machine.
The Smithsonian article, Making Copies, is a good starting point:
At first, nobody bought Chester Carlson's strange idea. But trillions of documents later, his invention is the biggest thing in printing since Gutenberg.
Companies turned down the xerox machine because so few people made copies before it existed that they didn't think it would sell.
My experience with Glass, so far, is similar to my experiences with the Apple Newton in the early 90s. A lot of people didn't think much of the Newton back then, and it never really took off, but it laid the groundwork for smartphones today.
I wouldn't be surprised to see Glass follow a similar path and in twenty years be an all but forgotten precursor to ubiquitous wearable computing.
One last thought: it is worth looking at the Technology Adoption Life Cycle, studied since the 1950s and written about most notably by Everett Rogers in his book Diffusion of Innovations.
Google Glass is at the very front end of the adoption life cycle, where only a few innovators have been using it. As has become more and more common these days, when a new innovation comes along, it often gets a backlash. It seems that the backlash against an innovation is proportional to the potential disruption the innovation carries.
As a final comment, I'd encourage you to read a blog post I wrote back in 2007 about Twitter:
In a previous post about ad:tech, I mentioned how I learned about the NY Times' Facebook page from a twitter by Steve Rubel. I commented about this in the press room, and one of the reporters was surprised to hear that twitter was still around and active. I reflected back on hearing speakers at OMMA predict the demise of Twitter, Facebook and Second Life, and it struck me that the standard technology adoption curve that we all hear so much about may have a lot of interesting nuances.
Back in July, I wrote a blog post, Players Who Suit Ingress, building off Richard Bartle's 1996 article about types of players in virtual games.
In the article, I suggested that Ingress players may have similar characteristics as players of MUDs back in the 1990s. Key player types include people who build things, people who destroy things, and people who explore.
Ingress just came out with a new update that provides information about a player's activity. This information maps nicely to some of these player types.
As an example, the first category Ingress lists is Discovery, with the number of Unique Portals Visited. I've currently visited 476 different portals. It is enough to get me the first level badge, which requires only 100 different portals, but not enough for the second level badge of 1,000 portals. I suspect some of this depends on where you live. Visiting 1,000 different portals may be easier if you live in New York City than if you live in the middle of Kansas.
The second category is Building. There are four different statistics provided: Hacks, Resonators Deployed, Links Created, and Control Fields Created. I am currently at 7,869 hacks, adding over 500 new hacks a week. That is still a first level badge, having hit the 2,000 hacks required, but not yet at the second level badge of 10,000 hacks. However, at my current rate, I should hit the second level in about a month.
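A quick back-of-the-envelope check of that estimate, using only the numbers above:

```python
# Estimate weeks until the second-level Hacks badge,
# using the stats reported above.
current_hacks = 7869
badge_threshold = 10000
hacks_per_week = 500  # "adding over 500 new hacks a week"

weeks_remaining = (badge_threshold - current_hacks) / hacks_per_week
print(round(weeks_remaining, 1))  # → 4.3 weeks, i.e. about a month
```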
I have deployed 10,539 resonators. That gets me the second level badge. The third level is 30,000 resonators, so that will probably be quite a while yet.
I've created 2,721 links, which gets me the second level badge at 1,000 links and puts me a little over halfway to the third level badge of 5,000 links. I have created 267 control fields, which gets me the first level badge at 100, and halfway to the second level badge of 500.
On the Combat side, I've destroyed 4,521 enemy resonators. Again, past level 1 at 2,000, but not yet halfway to level 2 at 10,000. I've destroyed 500 enemy links and 108 enemy control fields. I don't see badges for those. Perhaps I haven't destroyed enough. On the other hand, it is interesting to see that I've deployed over twice as many resonators as I've destroyed and created over five times as many links as I've destroyed.
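Checking those build-versus-destroy ratios against the stats above:

```python
# Build-versus-destroy ratios from my Ingress stats above.
resonators_deployed = 10539
resonators_destroyed = 4521
links_created = 2721
links_destroyed = 500

print(round(resonators_deployed / resonators_destroyed, 2))  # → 2.33, over twice as many
print(round(links_created / links_destroyed, 2))             # → 5.44, over five times as many
```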
I guess I'm more of a builder than destroyer. How about you?
I've never been a big fan of PowerPoint, dating back to my training as a speaker in the 90s. The audience should be focusing on you and what you are speaking about, not on reading your script and looking at pictures on a screen. If you must use PowerPoint, you should follow Guy Kawasaki's 10-20-30 rule: no more than 10 slides, no more than 20 minutes, and no font smaller than 30 points.
Instead, if I am using visual aids in a presentation, I prefer to use tools related to the presentation. When I speak about social media, I like to use Buffer to send preloaded tweets out to Tweetchat using a hashtag. The key points still get displayed on the screen, the audience gets more of a chance to interact, and it illustrates using the technology.
On Thursday morning, I will be doing a presentation introducing a group of librarians to Google Glass. So the challenge I came up with for myself was: could I use Google Glass as a replacement for PowerPoint?
The first issue was to find a way to present what is on the screen through a projector. I've done presentations using the screencast capabilities of the Glass app on my smartphone. This works very well if you are presenting to a small group that can gather around the cellphone, but for a larger crowd, I needed to find some way to connect my smartphone to a projector.
My first attempt was to use the old TV Out approach. My current smartphone is a Samsung Galaxy S4. Some Samsung phones, like several other phones, have the ability to display to old-fashioned televisions, and to many projectors, using a cable that plugs into the headphone jack. I have such a cable, which I've used on other phones, but I couldn't get it to work on the S4. My guess is that there is a setting I need to enable, which I haven't been able to find. Any suggestions are appreciated.
The second idea was to use an MHL cable. MHL, or Mobile High-definition Link, has a micro-USB plug on one end and an HDMI plug on the other. You can use it to display what is on your phone screen on a high definition television. I don't watch much television, so we don't have an HDTV, and I don't have any MHL or HDMI cables. I must admit that I haven't looked closely, but it seems like most times I've done presentations, the projectors accept RCA input (the old-fashioned TV connector) or VGA input (the standard for PC monitors), while HDMI inputs are far from ubiquitous, so an RCA or VGA approach would work better.
My next thought was to try and find some way to connect the cellphone to a laptop and connect the laptop to the projector. I've connected other Android phones to my laptop using a USB cable and running the Android Development Tools (ADT) on the laptop. The Samsung S4, like many smartphones, does not have debugging enabled by default. How To Enable Samsung Galaxy S4 USB Debugging provides good instructions on how to do this.
However, when I connected the smartphone to my laptop and started ADT, I got a message that the Samsung Galaxy S4 was offline. It took me a while to figure out what the problem was. Newer versions of Android add security to the device: you need to permit the specific laptop to debug the phone. I was using Android Debug Bridge (adb) version 1.0.29, which does not support this type of security. When I upgraded to version 1.0.31 and tried to enter debug mode, a message popped up on my smartphone asking if I wanted to allow the laptop to debug the phone. I said yes, and adb started working.
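For anyone scripting this, the device state that tripped me up shows up in the output of `adb devices`. Here's a small Python sketch of parsing that output; the helper name and the serial number are my own inventions, but the header line and tab-separated serial/state format are what adb actually prints:

```python
def parse_adb_devices(output):
    """Parse the output of `adb devices` into {serial: state}.

    States include "device" (authorized and ready) and "offline";
    newer adb versions also report "unauthorized" while the phone
    is waiting for you to accept the debugging prompt.
    """
    devices = {}
    for line in output.splitlines()[1:]:  # skip "List of devices attached" header
        parts = line.split()
        if len(parts) >= 2:
            serial, state = parts[0], parts[1]
            devices[serial] = state
    return devices

# The kind of output I was seeing before upgrading adb:
sample = "List of devices attached\n4df1e2b3\toffline\n"
print(parse_adb_devices(sample))  # → {'4df1e2b3': 'offline'}
```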
The Dalvik Debug Monitor Server (ddms) provides the ability to display the screen of the Android device on the laptop. However, this only works for single images. To be able to see the screen as it changes, I downloaded Android Projector.
Getting Android Projector to work also required a few steps. First, I needed to make sure I had a current version of Java running on my laptop. Then, I started adb and authorized my laptop to debug my phone. Next, I started ddms and connected it to my phone. With ddms running, I then started Android Projector. The screen came up nicely. I rotated it to match the orientation of the Glass app screencast, then hooked my laptop up to a projector, and I could display what I was seeing on Glass to the whole room.
The one caveat: there tends to be a lag of three to five seconds between when a card comes up on Glass and when it makes it to the projector. An aside: I could have put Glass into debug mode and connected Glass directly to the laptop. I tried this, but then you need to remain connected via a USB cable, which ties you down and loses some of the effect.
With this in place, I could now display how Glass works on the projector. The next step was to put a presentation up on Glass. I've been doing a little bit of Glass development and have created GlassDeck. It allows you to create a bundled deck of cards as a timeline item. It is written in PHP, based on the Mirror API quickstart guide. It is still fairly primitive; I wrote it mostly as a programming exercise. You can save your GlassDecks and share them with others. If you log into GlassDeck, you can find my presentation at 106686438536671985498:Presentation. Even if you don't have Glass, you can see the HTML that I used to create the cards. If you do have Glass, you can edit it and create your own presentations.
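GlassDeck itself is PHP, but the underlying Mirror API idea is simple: a "deck" is just a set of timeline items that share a bundleId, with one card marked as the bundle cover. Here's a rough Python sketch of building such a bundle; the function name and card HTML are mine, while bundleId, isBundleCover, and html are real Mirror API timeline item fields:

```python
def make_deck(bundle_id, cards):
    """Build Mirror API timeline item dicts for a bundled deck of cards.

    `cards` is a list of HTML strings; the first one becomes the
    bundle cover, which is the card that shows in the Glass timeline.
    """
    items = []
    for i, html in enumerate(cards):
        items.append({
            "html": html,
            "bundleId": bundle_id,
            "isBundleCover": i == 0,  # first card is the cover "slide"
        })
    return items

deck = make_deck("presentation-1", [
    "<article><h1>Google Glass</h1></article>",
    "<article><p>What is it?</p></article>",
])
print(len(deck), deck[0]["isBundleCover"], deck[1]["isBundleCover"])  # → 2 True False
```

Each of these dicts would then be inserted through the Mirror API's timeline insert call; GlassDeck does the equivalent in PHP.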
This is all fairly primitive still, but has potential. I look forward to refining my GlassDeck app, finding easier ways to display Glass on a projector, and perhaps even using a remote for Glass at some point. Remotte is creating one such remote that might be useful for doing presentations using Glass.
So, Thursday, I'll do a presentation using Google Glass. I'll let you know how it goes. Let me know your thoughts on ways to make doing presentations using Google Glass even easier.