10 Years Later: What 2004 Predicted For The Internet Of 2014

Monday, December 8th 2014


This blog turns 10 later this month. I’m no longer nearly as prolific a writer as I was back then, but I’m still kind of amazed that I’ve kept at it this long. Among other things since then: I got my master’s, moved cities/jobs twice, got married, and had a daughter. Wow.

While all 625 old posts are still available in the archives, I implore you to pretend most of them aren’t there. With the benefit of a decade’s hindsight I just see typos, odd sentence structures, weird choices in my URL structure that still haunt me today, and all-around questionable writing galore.

There’s one exception: I do want to point out the second post I ever wrote, way back on 12/26/04. I titled it simply “Googlezon”. While I was a bit late to the party at the time, I pointed out an interesting little movie called EPIC 2014. It forecasted the internet and society of 2014, from the perspective of 2004. It’s about 8 minutes long, and still exists on the web in Flash format today (remember, this predates YouTube! Ancient history!).

EPIC posits a 2014 where Google and Amazon have merged (after Google bought TiVo), Microsoft has bought Friendster, the New York Times has gone print-only, and more.

But buried among these amusing predictions are grains of truth. EPIC’s forecasts of how we generate and consume news aren’t that far off from reality, and it seems to have pretty accurately predicted the rise of Big Data. EPIC is a fun look back at where the web was, and where it might still be going. I’ll check in with you again in 2024.

(side note: While researching this piece, I realized that the Robin Sloan who worked on this short film is the same Robin Sloan who wrote one of the best books I read last year.)

08. December 2014 by Chad Haefele
Categories: HP Updates, Libraries/Info Sci, Ramblings, Tech | Leave a comment

Holiday gift guide: Motorola Keylink

Tuesday, December 2nd 2014

I have a strange fascination with all the holiday gift guide lists that pop up this time of year. I’ve always wanted to do one, but also feel like I’d be reinventing the wheel. Many people more interesting than me have already done the job. But I do want to point to at least one item, something new that I don’t think is getting enough review coverage: the Motorola Keylink.


Basic Features

The Keylink ($24.99) is billed as a “phone and key finder”, and it works well for that: attach the small Keylink to your keychain. Lose track of your phone? Push a button on the Keylink to make the phone ring. Lose your keys? A button in the Motorola Connect app works the other way around: the Keylink beeps.

Better Security

That’s all well and good. But my favorite feature is one that’s getting far less billing: if your phone runs the latest version of Android (5.0/Lollipop), the Keylink can let you bypass your phone’s lock code.

Lollipop introduced a handy new feature to Android devices: the idea of a trusted Bluetooth device. You can tell Android that if you’re connected to a certain Bluetooth device (like your car or a home stereo), there’s no need to enter a lock code. If you go out of range of that device, the lock code becomes necessary again. Handy while driving, and in a bunch of other situations too. I spend most of my day away from my Bluetooth devices, so I didn’t have anything I could use to take advantage of this feature. But the Keylink uses Bluetooth!

I attached it to my keys, which spend most of the day in my pocket. As long as the Keylink is near my phone, no lock code is necessary. But if my phone gets more than about 30 feet away from me, the code snaps back into place. I’ve had a lock code on my phone in the past, but it’s always been a very simple one. I have to enter it countless times per day, so anything truly secure got annoying fast. Now I’m free to use a much more complex code, knowing that I’ll rarely have to enter it. I still wish my phone had fingerprint-based security like the iPhone’s, but using the Keylink as a trusted Bluetooth device makes for an interesting and convenient alternative way to keep my phone a bit more secure.

The Keylink’s battery should last about a year, and is replaceable.

Who’s it for?

Anyone who carries an Android phone and a keyring should find the Keylink useful. Just make sure their phone runs the latest version of Android: the Nexus 4/5/6 all fit the bill, along with a short list of other devices that should grow soon.

Where is it?

The Keylink is often out of stock on Motorola’s website. But it’s in stock at many T-Mobile stores, where buying in person also lets you skip Motorola’s shipping charge.

02. December 2014 by Chad Haefele
Categories: Ramblings, Reviews, Tech | Leave a comment

From Exposition to Resolution: Looking at User Experience as a Narrative Arc

Thursday, November 20th 2014


The nice people at Optimal Workshop asked me to write a guest post over at their blog. It’s all about mapping the narrative arc onto a user’s journey through a website, an area I’ve been turning over in my head lately. Go take a look!

As full disclosure, I was paid for the guest post.

20. November 2014 by Chad Haefele
Categories: Libraries/Info Sci, Publications, Ramblings | Leave a comment

Usability and Amazon Premium Headphones

Wednesday, September 24th 2014

I’ve finally found a pair of headphones that I actually enjoy using: Amazon’s Premium Headphones.

As a product category, headphones continually frustrate me. I use them all the time while commuting. I shove them in my messenger bag, fish them out at odd times, and usually end up losing them within a year. I also have relatively small ear canals (according to my doctor), so in-ear types often don’t fit me well or end up hurting after far too little time.

My ideal pair of headphones would, in no particular order:

  • Be tangle-free or wireless
  • Include some kind of controls (volume, play/pause, etc)
  • Fold or coil up into a compact size
  • Fit in or on my ears
  • Produce at least average sound (I’m not an audiophile)
  • Be cheap (< $20) for replacement purposes

I’ve lived with cheap Skullcandy in-ear headphones for years, and they meet some of these qualifications: they’re cheap, have a volume control, sound decent, coil up well, and mostly fit in my ears thanks to coming with different sizes of rubber earbuds. But that fit isn’t ideal, and I’m constantly untangling them.

I also own a pair of Motorola S305 Bluetooth headphones, for situations where wireless is important. They don’t fold up and are too expensive to replace regularly, but are otherwise a good choice and meet the rest of my criteria.

Now I think I’ve found a new favorite pair, from an unlikely source: The headphones that come with Amazon’s Fire phone are nearly perfect!

Say what you will about the Fire phone itself, but the accessory headphones (available separately as the awkwardly named “Amazon Premium Headphones”) tackle headphone usability in some interesting ways:

  • Most of the cable is flat, not round, and relatively stiff. This part of the cable never gets tangled at all.
  • The earbuds themselves are magnetic, and stick together when not in use. This reduces tangles even more.
  • The built-in controls are simple and useful. Tap the button once to pause/resume, or twice to go to the next track. And the volume controls are the first I’ve seen on a wired pair that directly control my phone’s volume, instead of just modulating what’s going through the headphone cable.
  • The earbuds don’t go deeply into the ear canal, meaning they actually fit me. They’re shaped similarly to Apple’s current earbuds, but those always fell right out of my ears. Amazon has slightly tweaked the shape for a more secure fit.

So they’re tangle-free, have excellent controls, coil up well, fit in my ears, sound decent enough, and cost $10-$15. I love these things, even if I’m still a bit confused that something decent came out of the Fire phone’s release. I’d better go stock up on some extras while they’re still available.

24. September 2014 by Chad Haefele
Categories: Ramblings, Reviews, Tech | Leave a comment

ALA 2014: My Session is Available Online

Friday, September 19th 2014


Earlier this summer I gave a talk with Emily King at ALA 2014 in Las Vegas: Focusing on the Big Picture: Re-Imagining the Library Website.

The session was recorded, and the audio and slides are now available online to conference attendees. We had a full room, and some great discussion! We covered our whole website redesign process – how we moved from 20,000+ flat HTML files to a nicely managed WordPress site with a few hundred pages.

(I’m also kind of thrilled to be able to check off “be listed in the same conference proceedings as Stan Lee” from my bucket list.)

19. September 2014 by Chad Haefele
Categories: Libraries/Info Sci, Presentations | Leave a comment

Usability testing with Optimal Workshop

Wednesday, July 9th 2014

Usability testing is one of the best parts of my job. I love hearing from users about how they interact with the library’s website and then figuring out what we can change to better meet their needs.

The dark side of this testing is the sheer time involved. Recruiting, scheduling, and sitting down with each individual user can be a daunting commitment of staff hours. I’ll say upfront: that type of testing is still great! It definitely has a place. But we’ve started using a tool that lets us run more tests, more often: Optimal Workshop.

One important bit: While Optimal Workshop has a free plan, you’ll get the most out of it if you spring for the paid level. It’s on the pricey side, but keep in mind that they offer a 50% discount to educational customers.

What we did

We used two of the suite’s three tools in a study earlier this year: Chalkmark and Optimal Sort. We advertised the tests with a pop-up on our homepage that was displayed to half our visitors. All respondents were able to enter a drawing for a $50 Amazon gift card at the end. We expected to run the tests for at least two weeks to get enough responses, but after just a week we had more than 500 and were able to wrap the study up early. That number exceeded my wildest expectations! Here’s how we used each tool:

Chalkmark

Think of Chalkmark as a first-click test. You display a screenshot or design draft to your users, and ask them where they’d click first to accomplish a given task. Results are displayed in a heatmap that’s easy to parse at a glance. For example, we asked users where they’d click first to search for a book on our homepage:

82% of clicks were either in our main search box or on the link to our catalog. That’s great! They were able to find their way to a book search easily. Another 7% clicked on our Research Tools menu. While that’s not ideal, it’s also not a bad option; they’ll see a page with a link to the catalog next. That leaves about 11% of our users who went astray. Thanks to some demographic questions we asked, we know a little about them and can try to figure out what was confusing or unintuitive to them in future tests. We can also view other heatmaps based on those demographic questions, which is proving useful.

(Side note: We asked library staff to take the same test, and got very different results! Fascinating, but the implications are still unclear and a topic for another time.)

Optimal Sort

Optimal Sort is analogous to an in-person card sorting exercise: users are shown a list of text items and asked to sort them into categories. We used it to get at how our menu navigation could or should be organized. Results are shown in a matrix of where each item was sorted:

Our results mostly validated our existing menu organization choices, but along the way we accidentally discovered something interesting!

We provided users with the option to sort items into a category called “I don’t know what these items are”. The original idea was to avoid users sorting an item randomly if they didn’t truly have an idea of where it should go. But a couple of items proved unexpectedly popular in this category, so now we know that some of our naming conventions need to be addressed.

Optimal Workshop’s third tool is Treejack, which is designed to test a site structure. We haven’t used it yet, but I’m looking forward to putting it through its paces.

Summing Up

Our website is an iterative project, one that is never truly finished. Optimal Workshop lets us run frequent tests without significant staff time spent on execution, and reach more users than we ever could in person. Even the free plan, with its 10-response limit, is still useful enough to get actionable data in the right context.

Are any other libraries using it? I’d love to hear what you’re testing.

09. July 2014 by Chad Haefele
Categories: Libraries/Info Sci, Reviews, Tech, UNC | 3 comments

ALA 2014: My two WordPress presentations

Thursday, June 12th 2014

After a couple years off, I’m returning to ALA’s annual conference this year. I’m obviously excited to see colleagues and the Vegas sights, but I’m also looking forward to my two presentations there. If you’d like to come hear about how we redesigned the UNC Libraries website and moved it into WordPress, you’ve got two options:

I’m running through a short lightning-talk-style overview of our process at the Tech Speed Dating session organized by LITA’s Code Year Interest Group. That’s Saturday, 6/28 from 1:00-2:30 in Convention Center room N119. There are a bunch of other great talks on that session’s list too, including a demo from SparkFun.

Think of that as the preview for the full session on Sunday. Emily King and I have a whole session to ourselves, where we’ll walk through our redesign and content strategy development process from start to finish. This one’s Sunday, 6/29 from 4:30-5:30 in Convention Center room N243. Late in the day, I know, but come rest and learn before hitting the Strip.

Both sessions will cover how we made WordPress work for us, how our migration worked, and what our ongoing content & site maintenance has been like since launch. I hope to see you there!

12. June 2014 by Chad Haefele
Categories: Libraries/Info Sci, Presentations, UNC | 2 comments

My presentations from Computers in Libraries 2014

Friday, April 11th 2014

I was fortunate enough to have two presentations accepted at Computers in Libraries this year in DC. As always, I’m not sure if my slides make much sense without my accompanying narration, but I’m happy to answer questions about them.

Both sessions were collaborations. I presented “Moving Forward: Redesigning UNC’s Library Website” with Kim Vassiliadis, and “Rock your library’s content with WordPress” with Chad Boeninger. Thanks to all who came out! We had some great discussions during and after.

Moving Forward: Redesigning UNC's Library Website from chaefele

Rock your library’s content with WordPress from chaefele

11. April 2014 by Chad Haefele
Categories: Libraries/Info Sci, Presentations, Tech, UNC | Leave a comment

Semi-Automatic Chat: Speeding up reference questions in Pidgin

Monday, March 17th 2014

This is an expanded write-up of a lightning talk I presented at the 2014 LAUNC-CH conference:

Some background: We answer reference questions via chat at the reference desk using the amazing Libraryh3lp service. We log in and conduct chats with Pidgin. Libraryh3lp isn’t required for this to work, but Pidgin is.

A few months ago, a colleague asked me if there was a way to quickly cut and paste frequent responses into a chat. We end up repeating ourselves quite a bit when a common question comes up, and it seems rather inefficient.

Thankfully, Pidgin has a built-in plugin called (aptly enough) Text Replacement.

To get it up and running:

  • In Pidgin, go to the Tools menu.
  • Click Plugins.
  • Check the box next to Text Replacement.
  • While Text Replacement is highlighted, click Configure Plugin.

This opens the screen where you configure your text replacements. The basic idea is that you set a keyword; whenever a user types that keyword, Pidgin automatically replaces it with a pre-set block of text. So for example, in our case typing “$hi” will produce: “Hi, how can I help you today?”

To add a new replacement at the Configure screen:

  • Fill out the ‘you type’ and ‘you send’ boxes appropriately. I recommend starting each ‘you type’ trigger with a $, which should help avoid accidental replacements.
  • Uncheck the ‘only replace whole words’ box.
  • Click Add.
  • Click Close.

Now your text replacement is active! Repeat as necessary to create others.

We use Pidgin at multiple computers simultaneously, so I wanted to be able to duplicate these replacements at each station without having to do it manually.

Pidgin stores the plugin’s text replacement library here:
C:\Users\USERNAME\AppData\Roaming\.purple\dict

To move this file to another computer:

  • On the destination PC, repeat the first chunk of steps above to enable the Text Replacement plugin.
  • Copy the dict file from the source PC to the same location on the destination PC.
  • Restart Pidgin on the destination PC.
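
If you maintain more than a couple of desk machines, even that manual copy gets tedious. Below is a minimal sketch of scripting the copy in Python, assuming the stations can all reach a shared network drive holding a master copy of the dict file (the S:\ path is hypothetical). Run it on each destination PC after enabling the plugin, then restart Pidgin.

# Copy a master Text Replacement library into the local Pidgin profile.
# Assumes Windows and a hypothetical shared-drive location for the master file.
import os
import shutil

SOURCE = r"S:\reference-desk\pidgin\dict"  # master copy (hypothetical path)

# Pidgin stores the plugin's library under the roaming profile:
# C:\Users\USERNAME\AppData\Roaming\.purple\dict
dest = os.path.join(os.environ["APPDATA"], ".purple", "dict")

os.makedirs(os.path.dirname(dest), exist_ok=True)  # create .purple if it doesn't exist yet
shutil.copy2(SOURCE, dest)
print("Copied replacements to " + dest + " -- restart Pidgin to load them.")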

Now we’re in business! The next step was to figure out exactly what we wanted to replace. Read more if you’re interested.

17. March 2014 by Chad Haefele
Categories: Libraries/Info Sci, Presentations, Tech, UNC | 7 comments

My week with Google Glass: Personal life thoughts

Friday, March 14th 2014

I was lucky enough to spend last week with a loaner pair of Google Glass. My workplace purchased them and asked me to try them out and evaluate them for possible library use or in-house app development. I’m far from the first person to write about their experience with Glass, but I wanted to write up my experience and reactions as an exercise in forcing myself to think critically about the technology. I’m splitting it into two posts: one about the impact and uses of Glass in libraries was posted yesterday, and this is the second, my more general impressions as a Glass user and how it might fit into my daily life.

To cut to the chase: Google Glass is an extremely impressive piece of technology squeezed into a remarkably small package. But it does have issues, and Google is right to declare that it isn’t ready for mass market adoption yet.

What I didn’t like about Glass:

  • Battery life is anemic at best, especially when using active apps like Word Lens. I rarely got more than 4-5 hours of use out of Glass, and sometimes as little as 30 minutes.
  • I’m blind without my (regular) glasses. I know that prescription lenses are now available for Glass, but the $250 price tag means there’s no way I could justify getting them for a one week trial. And because Glass’ frame doesn’t fold up in the way that regular glasses do, there’s no easy way to carry them around to swap out with regular glasses for occasional use. Despite being impressively small for what they do, they’re still too bulky.
  • Many apps on Glass are launched with a spoken trigger phrase. Remembering them all is awkward at best, and I sometimes flashed back to the MS-DOS days of hunting for the right .exe file to run.
  • Confusingly, Glass does not auto-backup photos and videos taken with it. My Android phone dumps all media to my Google account, but Glass won’t do that unless it’s plugged in and on wifi.
  • Style and social cues, the two elephants in the room, have to be addressed. Right now I don’t think I could ever get up the courage to wear Glass in public on a regular basis. But when the tech shrinks even more and can be embedded in my regular glasses, then things will get interesting. The social mores around wearable tech still need to be worked out. I did not feel comfortable pushing those bounds except in very limited circumstances (like walking around on a rainy cold day with my bulky hood pulled up around Glass), and rarely wore Glass in public as a result.
  • Taking a picture by winking alternately delighted and horrified me. I’d love to see more refined eye movement gesture controls, instead of just the one that’s associated with so much unfortunate subtext.

What I loved about Glass:

But as a camera, Glass excels. My daughter is 13 months old, and invariably stops doing whatever ridiculously cute thing she’s doing the moment I get out a camera to capture it. The camera becomes the complete focus of her attention. But if I’m wearing Glass, I can take a picture at a moment’s notice without stopping what I was doing. A wink or simple voice command, and I have a snapshot or short video saved for perpetuity. In my week I got some amazing Glass pictures of my daughter that I never would have otherwise. For a brief moment this alone made the $1500 price tag seem oddly reasonable.

Side note: This easy hands-free capture of photos and video has fascinating implications for personal data and photo management. With such a giant pile of media produced, managing it and sorting through the bad shots becomes a herculean task. I don’t know that there’s a solution for this yet, though admittedly I think Google Plus’ automatic enhancement and filtering of photos is a great first step.

Back to what I like about Glass:

Other than taking photos of kids, I ran into three other use cases that genuinely excited me about using Glass in everyday life:

Thanks to Strava’s integration with Google Glass, I was able to try Glass on a short cycling excursion. With live ambient access to my speed, direction, distance, and maps, I was in biking heaven. And I still had access to a camera at a moment’s notice too! Admittedly, all of this is stuff that my smartphone can do too. But using a smartphone while on a bike is a dicey proposition at best, and something I really don’t want to do. Glass’ ambient presentation of information and reliance on voice controls make the idea viable. I’m not sure I’d use it on a busy road, but on paths or dedicated bicycle lanes I’m sold.

I also happened to have Glass on while cooking dinner, and while I couldn’t figure out how to easily load a recipe other than searching the web for it, I have to assume an Epicurious or other recipe-centric app isn’t far off. Voice-controlled access to recipes and cooking tips, without having to touch buttons with my messy or salmonella-laden hands, is something I want.

My third compelling use case is the Word Lens app I mentioned previously. Real-time, ambient translation! Not that I need another reason to want to visit Paris, but I really want to try this in action in a foreign country.

Analysis:

All three of these cases have one simple thing in common: They involve a task that is greatly improved by becoming hands-free. Taking pictures of my daughter at play, assistance while cooking a meal, and ambient translation of text are all much better (or only possible at all) by removing the hands-on requirement of an interface. I believe this hands-free factor will be key in which apps are successful on Glass (and other future wearable tech) and which fall by the wayside.

Other functions, like saving voice notes to Evernote or doing live video chat, were kind of neat but didn’t strike me as particularly revolutionary. My phone does all of that well enough for me already, and the tasks aren’t significantly enhanced by becoming hands free. Navigation while driving is something I never felt comfortable doing with Glass, as I found it somehow more distracting than doing the same on my phone.

But much of what I tried on Glass doesn’t really fall into a category of something I liked or disliked. Instead, many of the apps just seem silly to me. While I might want to post to Facebook or Twitter from Glass, do I really need pop-up notifications of new posts in the corner of my eye? The prototype Mini Games app from Google features a version of tennis where you have to crane your neck around awkwardly to play, or pretend to balance blocks on your head. I tried things like this once, and then moved on. And while it’s nice in theory to be able to play music on Glass, the low-quality speakers and the ease of annoying your neighbors mean I’d never want to actually use it.

Some of my confusion or frustration with these functions will no doubt be addressed in future generations of the hardware. But if I can give some amateur advice to Glass developers: Focus on making everyday tasks hands free, and you’ll win me over.

When Glass inevitably hits a more consumer-friendly price point, I’ll probably pick one up. Right now I have a hard time recommending it at $1500, but of course even Google considers this a sort of beta product. This is a test-bed for wearable technology, and I’m grateful to have had a glimpse of the future.

14. March 2014 by Chad Haefele
Categories: Libraries/Info Sci, Ramblings, Reviews, Tech | 1 comment
