Dave Perillo’s Disney Attraction Posters & Postcards: A Complete List

Believe it or not, some info is still difficult to find online. Whenever I get to the end of a thorny research question, no matter how trivial, I like to share what I’ve found back with the rest of the web.

Dave Perillo is one of my favorite artists right now. I collect prints/posters/whatever I’m supposed to call them. Dave did a series of posters based on Disney attractions and rides that I think capture a ton of the magic of those experiences. He’s got a semi-vintage style that I’m into, and I like that most of them include somebody on the ride itself. They encapsulate the experience of riding in a slightly abstract way, and aren’t just a well-posed picture of the attraction itself.

There are too many of them for me to justify owning, but then I discovered 5″x7″ postcard versions! Now that’s achievable. But it was frustratingly hard to find a complete list of which posters exist. After extensive googling and browsing ExpressoBeans, here’s what I think is a complete list of all 16 prints, with release dates when I could verify them. I believe they were released in sets of 4:



  • Dumbo
  • Peter Pan’s Flight
  • Pirates of the Caribbean
  • Mr. Toad’s Wild Ride

  • Splash Mountain
  • Snow White’s Scary Adventures
  • Enchanted Tiki Room
  • Mad Tea Party (teacups)

  • 20,000 Leagues Under the Sea (2019)
  • Carousel of Progress (2019)
  • Peoplemover (2019)
  • Journey into Imagination (2019)

Here’s my collection! The bottom four are even signed.

collection of all 16 postcards

I’m disappointed that Disney chose to change the borders on the cards twice, but I still love having a complete set. Now to consider framing options.

Movies Anywhere is Great for Consumers

""
In early 2014 Disney announced a surprisingly reasonable approach to their digital movie sales: Buy it once, access it on many services. “Disney Movies Anywhere” ensured that when you bought one of their movies on any of iTunes, Amazon, Google, or Vudu, you also got it on the other three services at no extra charge.

Ever since then, Disney movies (including Marvel and Star Wars) have been the only digital movies I’ve been willing to buy. If any one service went out of business, I knew I’d still be able to access the movies elsewhere. And since this coincided roughly with the time my daughter started watching Disney movies over and over again, I had to know they wouldn’t disappear into the void.

Even when DMA dropped support for Microsoft’s movie store recently, purchased & linked movies weren’t removed from your Microsoft account. Very reasonable.

As a bonus, DMA meant digital movie sellers could actually compete with each other on price. I don’t use Vudu, but if they put a Disney movie on sale I could buy it and watch it on my iPad via iTunes.

Sure, the UltraViolet movie locker service has been around longer and did similar things, but it never had direct integration with platform-specific services like iTunes and Google Play. DMA was a breath of fresh air.

Today it got even better – Disney Movies Anywhere is now simply “Movies Anywhere”, since Disney added new studios to the mix. Fox, Warner Bros, Sony, and Universal movies now sync across services too!

That leaves Paramount and Lionsgate as the holdouts, and yes it’s slightly annoying to have to scrutinize a movie’s studio before I buy it, but this is still huge. I had a number of movies trapped in Vudu that I got via free promotions over the years, but I never watched them because I hate Vudu’s apps so much. Now those movies are also safely stashed in my Google, iTunes, and Amazon accounts.

And I’ll say it again: I can buy movies at lower prices from different stores when they go on sale! For example: The Lego Batman Movie is $2.49 on Amazon right now, so I bought it and watched it tonight via Google. The same movie is $19.99 elsewhere. This sounds simple, but it’s new territory for digital movies.

I’m strangely excited about this. Go sign up for Movies Anywhere. Link your various accounts and they’ll even give you a pile of free movies right now.

Amazon Echo: The Case for Voice Commands

Windows 10 is almost upon us, just over a day away as I write. Among other new features, I’ve seen article after article talking about the integration of voice controls into Win10. Has their time finally arrived? For a long, long time I was skeptical of voice controls in any context. Way back in elementary school I played with an early version of Kurzweil Voice, and I was impressed if it got more than 10% of my speech right. I think that experience colored my expectations until very recently.

My Android phone has had voice commands built in for years, but other than setting timers or alarms I almost never use them. So for a while, I expected my use of Win10’s voice commands to run along the same lines: I’d think it was neat, play with it for a bit, and then forget about it entirely. Then in February my Amazon Echo arrived, and completely changed my thinking.

Amazon Echo

I bought the Echo almost on a whim, thinking once again that it would be a neat toy for a while but probably not have long term utility. I’m as surprised as anyone that now, four months later, I still use it multiple times a day. When I get home from work, I usually blurt out three commands as I unpack:

  • “Alexa, turn on the lights”: I have two lamps on a WeMo switch, which the Echo controls.
  • “Alexa, how’s the traffic?”: The Echo reads me a report of the traffic between my daughter’s daycare and my home, giving me a rough idea of how long I have to make dinner before she and my wife arrive.
  • “Alexa, play NPR”: This one does what you’d expect – it plays a live stream of my local NPR station.

Then while I’m cooking, I usually ask Alexa to set a couple timers or add things to my shopping list. Later in the evening I often ask Alexa to play music by a certain band or in a given genre, and then I control the volume by voice commands too.

This is all done hands free, while I get other stuff done, and I almost never have to repeat myself or cancel a command the Echo heard incorrectly. We’ve come a long way since my arguments with Kurzweil Voice.

And ok, I’ll admit that I’m on the fence about just how useful it really is to have the Echo turn on lights for me. A plain old-fashioned light switch is a pretty darn perfect UI already. But using a voice command to trigger lights still delights me in a Jetsons kind of way.

I didn’t set out to write a review of the Echo here (although if I did, I’d say it was totally worth the early bird $99 price but the current $179 is too steep of an ask). Instead, my point is that voice commands can fill some very valuable niches. I still don’t use voice for dictation, but it turns out voice recognition is a very good way to do a handful of things in my life. I can group them into general categories:

  • Asking for brief reports and updates like traffic, weather, or checking for new messages and alerts
  • Starting or stopping a background process, like a timer or music
  • Toggling a system setting like volume, wifi, and bluetooth connections

Today I do these things on my computer nearly constantly throughout the workday, without the benefit of voice commands. If Windows 10 lets me do them by voice instead, without breaking stride to open another program or dig through settings menus, that’s a bunch of small gains that will add up to a big improvement in how I work. I’m truly excited to try Win10’s voice recognition and see where it goes. Maybe you’ll even catch me dictating an email someday – but probably not in public.

10 Years Later: What 2004 Predicted For The Internet Of 2014

epic 2014

This blog turns 10 later this month. I’m no longer nearly as prolific a writer as I was back then, but I’m still kind of amazed that I’ve kept at it this long. Among other things since then: I got my master’s, moved cities/jobs twice, got married, and had a daughter. Wow.

While all 625 old posts are still available in the archives, I implore you to pretend most of them aren’t there. With the benefit of a decade’s hindsight I just see typos, odd sentence structures, weird choices in my URL structure that still haunt me today, and all-around questionable writing galore.

There’s one exception: I do want to point out the second post I ever wrote, way back on 12/26/04. I titled it simply “Googlezon”. While I was a bit late to the party at the time, I pointed out an interesting little movie called EPIC 2014. It forecast the internet and society of 2014, from the perspective of 2004. It’s about 8 minutes long, and still exists on the web in Flash format today (remember, this predates YouTube! Ancient history!).

EPIC posits a 2014 where Google and Amazon have merged (after Google bought TiVo), Microsoft has bought Friendster, the New York Times has gone print-only, and more.

But buried among these amusing predictions are grains of truth. EPIC’s forecasts of how we generate and consume news aren’t that far off from reality, and it seems to have pretty accurately predicted the rise of Big Data. EPIC is a fun look back at where the web was, and where it might still be going. I’ll check in with you again in 2024.

(side note: While researching this piece, I realized that the Robin Sloan who worked on this short film is the same Robin Sloan who wrote one of the best books I read last year.)

Holiday gift guide: Motorola Keylink

I have a strange fascination with all the holiday gift guide lists that pop up this time of year. I’ve always wanted to do one, but also feel like I’d be reinventing the wheel. Many more interesting people than me have already done the job. But I do want to point to at least one item, something new that I don’t think is getting enough review coverage: the Motorola Keylink.

Motorola Keylink

Basic Features

The Keylink ($24.99) is billed as a “phone and key finder”. And it works well for that: Attach the small Keylink to your keychain. Lose track of your phone? Push a button on the Keylink to make the phone ring. Lose your keys? A button in the Motorola Connect app does it the other way around: the Keylink beeps.

Better Security

That’s all well and good. But my favorite feature is one that’s getting far less billing: if your phone is running the latest version of Android (5.0/Lollipop), the Keylink can let you bypass your phone’s lock code.

Lollipop introduced a handy new feature to Android devices, the idea of a trusted bluetooth device. You can tell Android that if you’re connected to a certain bluetooth device (like your car or a home stereo) then there’s no need to use a lock code. If you go out of range of that bluetooth device, the lock code becomes necessary again. Handy while driving, and in a bunch of other situations too. I spend most of my day away from my bluetooth devices, so I didn’t have anything I could use to take advantage of this feature. But the Keylink uses bluetooth!

I attached it to my keys, which spend most of the day in my pocket. As long as the Keylink is near my phone, no lock code necessary. But if my phone gets more than about 30 feet from me, then the code snaps back into place. I’ve had a lock code on my phone in the past, but it’s always been a very simple one. I have to enter it countless times per day, so anything truly secure got annoying fast. Now I’m free to use a much more complex code, knowing that I’ll rarely have to enter it. I still wish that my phone had fingerprint-based security like the iPhone, but using the Keylink as a trusted bluetooth device makes for an interesting and convenient alternate method to keep my phone a bit more secure.

The Keylink’s battery should last about a year, and is replaceable.

Who’s it for?

Anyone who carries an Android phone and a keyring should find the Keylink useful. Just make sure their phone is on the latest version of Android. The Nexus 4/5/6 all fit the bill, plus a short list of other devices that should grow soon.

Where is it?

The Keylink is often out of stock on Motorola’s website. But it’s in stock at many T-Mobile stores, which also lets you skip Motorola’s shipping charge.

Usability and Amazon Premium Headphones

I’ve finally found a pair of headphones that I actually enjoy using: Amazon’s Premium Headphones.

As a product category, headphones continually frustrate me. I use them all the time while commuting. I shove them in my messenger bag, fish them out at odd times, and usually end up losing them within a year. I also have relatively small ear canals (according to my doctor), so in-ear types often don’t fit me well or end up hurting after far too little time.

My ideal pair of headphones would, in no particular order:

  • Be tangle-free or wireless
  • Include some kind of controls (volume, play/pause, etc)
  • Fold or coil up into a compact size
  • Fit in or on my ears
  • Produce at least average sound (I’m not an audiophile)
  • Be cheap (< $20) for replacement purposes

I’ve lived with cheap Skullcandy in-ear headphones for years, which met some of these qualifications: They’re cheap, have a volume control, sound decent, coil up well, and mostly fit in my ears thanks to coming with different sizes of rubber earbuds. But that fit isn’t ideal, and I’m constantly untangling them.

I also own a pair of Motorola S305 bluetooth headphones, for situations where wireless is important. They don’t fold up and are too expensive to replace regularly, but are otherwise a good choice and meet all my criteria.

Now I think I’ve found a new favorite pair, from an unlikely source: The headphones that come with Amazon’s Fire phone are nearly perfect!

Say what you will about the Fire phone itself, but the accessory headphones (available separately as the awkwardly named “Amazon Premium Headphones”) tackle headphone usability in some interesting ways:

  • Most of the cable is flat, not round, and relatively stiff. This part of the cable never gets tangled at all.
  • The earbuds themselves are magnetic, and stick together when not in use. This reduces tangles even more.
  • The built-in controls are simple and useful. Tap the button once to pause/resume, or twice to go to the next track. And the volume controls are the first I’ve seen on a wired pair that directly control my phone’s volume, instead of just modulating what’s going through the headphone cable.
  • The earbuds don’t go deeply into the ear canal, meaning they actually fit me. They’re shaped similarly to Apple’s current earbuds, but those always fell right out of my ears. Amazon has slightly tweaked the shape for a more secure fit.

So they’re tangle-free, have excellent controls, coil up well, fit in my ears, sound decent enough, and cost $10-$15. I love these things, even if I’m still a bit confused that something decent came out of the Fire phone’s release. I’d better go stock up on some extras while they’re still available.

My week with Google Glass: Personal life thoughts

I was lucky enough to spend last week with a loaner pair of Google Glass. My workplace purchased them and asked me to try them out and evaluate them for possible library use or development of apps by the library. I’m far from the first person to write about their experience with Glass, but I wanted to write up my experience and reactions as an exercise in forcing myself to think critically about the technology. I’m splitting it into two posts: one about the impact and uses of Glass in libraries was posted yesterday, and this is the second: my more general impressions as a Glass user and how it might fit into my daily life.

To cut to the chase: Google Glass is an extremely impressive piece of technology squeezed into a remarkably small package. But it does have issues, and Google is right to declare that it isn’t ready for mass market adoption yet.

What I didn’t like about Glass:

  • Battery life is anemic at best, especially when using active apps like Word Lens. I rarely got more than 4-5 hours of use out of Glass, and sometimes as little as 30 minutes.
  • I’m blind without my (regular) glasses. I know that prescription lenses are now available for Glass, but the $250 price tag means there’s no way I could justify getting them for a one week trial. And because Glass’ frame doesn’t fold up in the way that regular glasses do, there’s no easy way to carry them around to swap out with regular glasses for occasional use. Despite being impressively small for what they do, they’re still too bulky.
  • Many apps on Glass are launched with a spoken trigger phrase. Remembering them all is awkward at best, and I sometimes flashed back to MS-DOS days, hunting for the right .exe file to run.
  • Confusingly, Glass does not auto-backup photos and videos taken with it. My Android phone dumps all media to my Google account, but Glass won’t do that unless it’s plugged in and on wifi.
  • Style and social cues, the two elephants in the room, have to be addressed. Right now I don’t think I could ever get up the courage to wear Glass in public on a regular basis. But when the tech shrinks even more and can be embedded in my regular glasses, then things will get interesting. The social mores around wearable tech still need to be worked out. I did not feel comfortable pushing those bounds except in very limited circumstances (like walking around on a rainy cold day with my bulky hood pulled up around Glass), and rarely wore Glass in public as a result.
  • Taking a picture by winking alternately delighted and horrified me. I’d love to see more refined eye movement gesture controls, instead of just the one that’s associated with so much unfortunate subtext.

What I loved about Glass:

Nora through Glass

But as a camera, Glass excels. My daughter is 13 months old, and invariably stops doing whatever ridiculously cute thing she’s doing the moment I get out a camera to capture it. The camera becomes the complete focus of her attention. But if I’m wearing Glass, I can take a picture at a moment’s notice without stopping what I was doing. A wink or simple voice command, and I have a snapshot or short video saved for perpetuity. In my week I got some amazing Glass pictures of my daughter that I never would have otherwise. For a brief moment this alone made the $1500 price tag seem oddly reasonable.

Side note: This easy hands-free capture of photos and video has fascinating implications for personal data and photo management. With such a giant pile of media produced, managing it and sorting through the bad shots becomes a herculean task. I don’t know that there’s a solution for this yet, though admittedly I think Google Plus’ automatic enhancement and filtering of photos is a great first step.

Back to what I like about Glass:

Other than taking photos of kids, I ran into three other use cases that genuinely excited me about using Glass in everyday life:

Biking with Glass

Thanks to Strava’s integration with Google Glass, I was able to try Glass on a short cycling excursion. With live ambient access to my speed, direction, distance, and maps, I was in biking heaven. And I still had access to a camera at a moment’s notice too! Admittedly, all of this is stuff that my smartphone can do too. But using a smartphone while on a bike is a dicey proposition at best, and something I really don’t want to do. Glass’ ambient presentation of information and reliance on voice controls make the idea viable. I’m not sure I’d use it on a busy road, but on paths or dedicated bicycle lanes I’m sold.

I also happened to have Glass on while cooking dinner, and while I couldn’t figure out how to easily load a recipe other than searching the web for it, I have to assume an Epicurious or other recipe-centric app isn’t far off. Voice-controlled access to recipes and cooking tips, without having to touch buttons with my messy or salmonella-laden hands, is something I want.

My third compelling use case is the Word Lens app I mentioned previously. Real-time, ambient translation! Not that I need another reason to want to visit Paris, but I really want to try this in action in a foreign country.

Analysis:

All three of these cases have one simple thing in common: They involve a task that is greatly improved by becoming hands-free. Taking pictures of my daughter at play, assistance while cooking a meal, and ambient translation of text are all much better (or only possible at all) by removing the hands-on requirement of an interface. I believe this hands-free factor will be key in which apps are successful on Glass (and other future wearable tech) and which fall by the wayside.

Other functions, like saving voice notes to Evernote or doing live video chat, were kind of neat but didn’t strike me as particularly revolutionary. My phone does all of that well enough for me already, and the tasks aren’t significantly enhanced by becoming hands free. Navigation while driving is something I never felt comfortable doing with Glass, as I found it somehow more distracting than doing the same on my phone.

But much of what I tried on Glass doesn’t really fall into a category of something I liked or disliked. Instead, many of the apps just seem silly to me. While I might want to post to Facebook or Twitter from Glass, do I really need pop-up notifications of new posts in the corner of my eye? The prototype Mini Games app from Google features a version of tennis where you have to crane your neck around awkwardly to move, or pretend to balance blocks on your head. I tried things like this once, and then moved on. And while it’s nice in theory to be able to play music on Glass, the low quality speakers and the ease of annoying your neighbors mean I’d never want to actually use it.

Some of my confusion or frustration with these functions will no doubt be addressed in future generations of the hardware. But if I can give some amateur advice to Glass developers: Focus on making everyday tasks hands free, and you’ll win me over.

When Glass inevitably hits a more consumer-friendly price point, I’ll probably pick one up. Right now I have a hard time recommending it at $1500, but of course even Google themselves consider this a sort of beta product. This is a test bed for wearable technology, and I’m grateful to have had a glimpse of the future.

My week with Google Glass: Library-centric thoughts

I was lucky enough to spend last week with a loaner pair of Google Glass. My workplace purchased them and asked me to try them out and evaluate them for possible library use or development of apps by the library. I’m far from the first person to write about their experience with Glass, but I wanted to write up my experience and reactions as an exercise in forcing myself to think critically about the technology. I’m splitting it into two posts: one about the impact and uses of Glass in libraries, and a second about my more general impressions as a Glass user and my overall daily life.

Without further ado, let’s look at the library perspective: I came away with one major area for library Glass development in mind, plus a couple of minor (but still important) ones.

One big area for library development on Google Glass: Textual capture and analysis

Image from AllThingsD

One of the most impressive apps I tried with Glass, and one of only a handful of times I was truly amazed by its capabilities, was a translation app called Word Lens. Word Lens gives you a realtime view of any printed text in front of you, translated into a language of your choice. In practice I found the translation’s accuracy to be lacking, but the fact that this works at all is amazing. It even attempts to replicate the font and placement of the text, giving you a true augmented view and not just raw text. Word Lens admittedly burned through Glass’ battery in less than half an hour and made the hardware almost too hot to touch, but imagine this technology rolled forward into a second or third generation product! While similar functionality is available in smartphone apps today (this is a repeating refrain about using Glass that I’ll come back to in my next post), translation, archiving, and other manipulation of text in this kind of ambient manner via Glass makes it many times more useful than a smartphone counterpart. Instead of having to point a phone at one sign at a time, street signs and maps could be automatically translated as you wander a foreign city or sit with research material in another language.

I want to see this taken further. Auto-save the captured text into my Evernote account, and while you’re at it, save a copy of every word I look at all day. Or all the way through my research process. Make that searchable, even the pages I just flipped past because I thought they didn’t look valuable at the time. Dump all that into a text-mining program and save every image I’ve looked at for future use in an art project. I admit I drool a little bit over the prospect of such a tool existing. Again, a smartphone could do all of this too. But using Glass instead frees up both of my hands and lets the capture happen in a way that doesn’t interfere with the research itself. The possibilities here for digital humanities work seem endless, and I hope explorations of the space include library-sponsored efforts.

Other areas for library development on Google Glass:

Tours and special collections highlights

The University of Virginia has already done some work in this area. While wandering campus with their app installed, Glass alerts you when you’re close to a location referenced in their archival photo collections and shows you the old image of your current location. This is neat, and, especially while Glass is still new, will likely get your library some press. NC State’s libraries have done great work with their Wolfwalk mobile device tour, for example, which seems like a natural product to port over to Glass. This is probably also the most straightforward kind of Glass app for a library or campus to implement. Google’s own Field Trip app for Glass and smartphones already points out locations of historical or other interest as you walk around town. The concept is proven, works, and is ready to be built upon.

Wayfinding within the library

While it would likely require some significant infrastructure and data cleanup, I would love to see a Glass app that directs a library user to a book on the shelf or the location of their reserved study room or consultation appointment. I imagine arrows appearing to direct someone left, right, straight, or even to crouch down to the lower shelf. While the tour idea is in some ways a passive app, wayfinding would be more active and possibly more engaging.

Wrap-up

The secondary use cases above are low-hanging fruit, and I expect libraries to jump onboard with them quickly. Again, UVA has already forged a path for at least one of them. And I fully expect generic commercial solutions to emerge to handle these kinds of functions in a plug and play style.

Textual capture and analysis is a tougher nut to crack. I know I don’t have the coding chops to make it happen, and even if I started to learn today I wouldn’t pick it up in time before someone else gets there. Because someone will do this. Evernote, maybe, or some other company ready to burst onto the scene. But what if a library struck first? Or even someone like JSTOR or HathiTrust? I’m not skilled enough to do it, but I know there are people out there in libraryland (and related circles) who are. I want to help our users better manage their research, to take it further than something like Zotero or the current complicated state of running a sophisticated text mining operation. The barriers to entry on this kind of thing are still high, even as we struggle to lower them. Ambient information gathering as enabled by wearable technology like Glass has the potential to help researchers over the wall.

Tomorrow I’ll write up my more general, less library oriented impressions of using Glass.

Proquest Flow now offers free accounts. Why?

Flow logo

Fine print: My opinions and thoughts here are as always my own, and not necessarily those of the UNC Libraries.

I’ve wanted to write about the state of citation management for months now, and the idea kept rattling around in the back of my head. There’s so many options for managing research and citations out there, and I support a couple of them as part of my job. I frequently get asked which one is the best to go with. When Proquest announced a free version of Flow last week, I couldn’t avoid the topic any longer. I was originally going to do a compare/contrast review of the major options out there, but I find the Flow announcement so interesting that now I want to focus on it entirely.

Flow is Proquest’s successor to Refworks. Their official line is that Refworks isn’t going away, but I have to believe that Refworks’ lifespan is limited at this point. Why would Proquest want to develop two similar products in parallel forever? That has to be a huge resource drain. Refworks hasn’t seen a major new feature in years, and still doesn’t support collaborative folders, while Flow seems to be adding interesting options all the time.

Flow is a promising product, but not quite at 100% yet. The web import tool in particular has a long way to go before matching the utility of Zotero’s. At the same time, the Flow UI provides a pleasantly minimalist reading experience and fills in a number of feature gaps present in Refworks (especially collaboration and PDF archiving), while streamlining the clunky Refworks UI into something much more usable.

But I’m not here to just review Flow as a product. What confuses me is this new business model of providing a free account. Flow’s free accounts include 2GB of storage and collaboration with up to 10 people per project. If an institution subscribes to the paid version of Flow, their users get bumped up to 10GB of storage and unlimited collaboration. The institution itself gets access to analytics data and a handful of other administrative features.

The free Flow option is certainly superior to Mendeley’s free plan, which also includes 2GB of storage but limits collaboration to just 3 users per account. I find Mendeley’s pricing for extra collaboration slots insane (plans start at $49/month and go up sharply after that), but that’s an argument for another time. Zotero, admittedly my personal favorite citation management tool, by comparison offers a paltry 300MB of storage but allows collaboration with an unlimited number of users. My point is that the free Flow plan, with 2GB and 10 collaborators, is a pretty attractive option compared to the competition. I’d be willing to bet that the vast majority of our users would be satisfied with those limitations.

Flow or Refworks access at an institutional level is not cheap. We’re facing our fifth or sixth consecutive year of hard budget choices, and while we have no plans to cancel our Refworks/Flow access I have to wonder at what point that becomes a viable option. Other than the obvious Big Data potential, I don’t know what Proquest’s endgame is by offering free Flow accounts. I hope they’ve thought through what the option looks like to their paying customers.