I was feeling a bit stressed the other day. On an impulse, I wrote on my Facebook wall:
I have decided I am in need of advice. Please send me all the advice you have. I will pick whatever seems applicable.
I didn’t really know what I wanted or expected, but it struck me as a funny thing to write. I also thought it might be a little like flipping coins or consulting the I Ching or a horoscope when you’re trying to make a decision: whatever comes up, you just project your own reading onto it (or, in the case of coins, flipping repeatedly until you get the result you want), and end up doing what you secretly really wanted to do all along.
My friends came through. I received lots of advice. Some of it was generic: “Buy low, sell high”, “Wash behind your ears”, “Don’t take any wooden nickels.” Some consisted of popular culture references: “Don’t blink”, “One word: plastics” and “Only an asshole gets killed for a car.” Some of it was surreal: “Don’t wash mushrooms in a washing machine”, “Don’t pat a burning dog.” Two friends shared advice they’d had from their parents: “Tell them to stop, if they don’t then punch them” and “Don’t drink whisky or you’ll have nothing to fall back on when you’re forty.” One person sent me detailed advice on cooking asparagus.
Two people suggested what looked like proverbs: “Never take two camels when a mule will suffice” and “If ye will not when ye may, when ye will ye shall have nay.” Two sent in summaries of their own current ideological programs: “Have no fear” and “Disconnect”. One quoted the great Jon Langford: “Get the money, don’t leave anything behind.” Two couched their advice in corporate speak: “Ideate. Iterate. Keep the throughput in the green zone” and “Proactively leverage synergies across the enterprise.” One whimsically advised me “Don’t stick your prick in daisies”. That one took me a moment or two to figure out. Another told me to “Fallow your heart”. I couldn’t decide if that was a deft epigram … or just a typo.
Not all of these were easy to apply to my own life, but “Have no fear” and “Disconnect” sound like good ideas. “Have a good time, all of the time” is also a nice goal, but potentially challenging. Something about “Red wine to start. The rest will come” also appealed to me. But in the end, the one that struck the deepest chord was the enigmatic “Run, you clever boy. And remember.”
I can’t explain why that was what I needed to hear just then. But it was, and it made me feel much better.
I feel as if I’ve made a discovery. Specific advice is overrated, unless you’re buying something. The key is to get advice that’s as generic or random as possible, and then you can read into it whatever you want.
- Jeff: Have you been to Hill Country?
- Annie: I'm vegetarian.
- Jeff: ...
- Annie: Sorry, didn't mean to cut that conversation short.
- Jeff: Conversation? That cut our friendship short.
- Actually ... having been dragged to Hill Country by Jeff, along with the rest of the dev team ... I can confirm that some of the sides are pretty good and they even do a reasonable salad. If you can get over the incongruity of being a vegetarian at a barbecue restaurant, and ignore the fact that all your fellow diners are tearing into chunks of cooked meat the size of a smallish SUV, you needn't actually starve.
Richard Kadrey said it best:
What does the death of Delicious teach us? The cloud isn’t your friend. The cloud will lure you into its van, but there’s no candy in there.
Substitute ‘Google Reader’ for ‘Delicious’, and the lesson is the same.
Today, Google announced that they’re pulling the plug on Reader, their RSS aggregation service. This has prompted predictable cries of outrage in the Twitterverse, as people who love and use Reader daily suddenly face the chilling prospect of life without it after July 1st.
I am not a Google Reader user … or rather, I am, but I don’t use Reader on the web to read news. Instead, I use applications - NetNewsWire on the Mac, and Reeder on my iPod. Both of these applications offer the option to use Google Reader to synchronize your news feeds. When I read something on my iPod and then switch to the Mac, the thing I’ve just read is marked as read, so I don’t have to read it all over again. If I flag something to read it later on the Mac, it’s flagged on the iPod. If I subscribe to a new newsfeed on the iPod (or on the Mac, or even on Google Reader itself) it’s there on all the other devices I might use. It’s seamless, transparent, essential.
Come July 1st, Reeder and NetNewsWire aren’t going to be able to leverage Google Reader to provide this very useful functionality. They’ll either have to drop the feature, or switch to another provider that offers the same functionality, or implement it themselves. Google has just pulled the rug out from underneath them, and they - and many similar apps or services - are going to have to scramble to find another solution.
If you go to Reeder’s settings panel, the very first item is “Google Reader Account”. Same thing with NetNewsWire: under Sync settings there’s a checkbox that says ‘Sync with Google Reader’. There are no other options, no alternative services.
The fundamental error that the creators of both these excellent apps have made is that they programmed against an application, not against an API. What they need isn’t Google Reader: it’s something that offers the same API endpoints and the same functionality as Google Reader. That something could be anything. So long as it looks and behaves like Google Reader, it doesn’t have to be Google Reader. My guess would be that an averagely-capable programmer could write a simple webapp implementing the API and duplicating the synchronization and storage functionality of Google Reader in a weekend.
What the app makers then need to do is to change their apps so that instead of saying “Synchronize with Google Reader” they say “Synchronize with …” and let you specify the service of your choice. Provided that service supports a synchronization API, the app shouldn’t care who or what it is.
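To make that concrete, here’s a rough sketch in Python of what programming against the API rather than the application could look like. The class, the endpoint paths and the field names are all invented for illustration - this is not Google Reader’s real API - but the shape is the point: the client depends only on a base URL that the user supplies.

import json
import urllib.request

class SyncClient:
    """Client for ANY service exposing a Reader-style sync API.

    The service is just a base URL supplied by the user. Nothing here
    knows or cares whether it's Google Reader, a clone, or a weekend
    webapp running on your own server.
    """

    def __init__(self, base_url: str, auth_token: str):
        self.base_url = base_url.rstrip("/")
        self.auth_token = auth_token

    def _request(self, path: str, payload=None):
        # POST if there's a payload, GET otherwise.
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(
            self.base_url + path,
            data=data,
            headers={
                "Authorization": "Bearer " + self.auth_token,
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def subscriptions(self):
        return self._request("/subscriptions")

    def mark_read(self, item_id: str):
        self._request("/items/mark-read", {"id": item_id})

    def flag(self, item_id: str):
        self._request("/items/flag", {"id": item_id})

# "Synchronize with ..." - the user picks the service, not the vendor.
client = SyncClient("https://reader.example.org/api", auth_token="...")

Point the same client at a different base URL and it syncs against a different provider; nothing else in the app has to change.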
I’ve actually wanted this for a while, simply because I don’t see that what I read is any of Google’s business. I would have been happy to deploy a synchronization webapp on my own server and keep it all to myself. Not everyone has their own server, of course, but I’m sure there’d be businesses who’d be happy to step up and provide synchronization as a service - if the apps supported it.
This error - programming against a specific application, not a general API - repeats throughout the app ecosystem. Reeder lets me choose one of three bookmarking services - Pinboard, Delicious and Zootool - to save links to. It lets me save articles to two offline reading services - Instapaper and Pocket. It lets me send updates to two different micro-messaging services - Twitter and App.net. Each one gets a separate entry in the Reeder settings panel.
What if, instead of Twitter or App.net, I wanted to use Identi.ca or Tent.is? Or my own server running tentd? Sorry, no can do. Those options aren’t offered. Even though each of these is fundamentally the same kind of thing as Twitter, and may even support precisely the same API (or, if they don’t, could easily be made to). It’s the same thing with the other categories: however many services an app developer squeezes into a settings panel, there are always a few more that aren’t offered. Yet in many cases, the app could talk to them without changing a single line of code.
I’m not singling out Reeder - a truly excellent application - here. All app developers are doing something similar. They pick the current market leader for some particular functionality, they implement support for it, and they put it in their app. If there are a few equally prominent competitors, they might support two or three. But that’s all.
In the same way, a bunch of other apps that I use offer support for Dropbox. Why not? Dropbox is a wonderful service. But Dropbox is now just one of many similar services. Why support Dropbox and not Box.net, or Pogoplug, or Microsoft SkyDrive, or iCloud? Or a private cloud storage service implemented using an open-source solution running on someone’s server somewhere?
“Oh, but we can’t possibly support all those different applications,” the app makers say. Right. You can’t support a half dozen different applications. But you could support just one API for any functionality you need to implement. One API for cloud storage. One API for bookmarking. One API for RSS synchronization. One API for micro-messaging. One API for sending articles to an offline reader. One, open, documented, well-defined API for each type of service.
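In code, that boils down to defining one abstract interface per category of service and treating each provider as an interchangeable implementation. A minimal Python sketch, with hypothetical names of my own choosing:

from abc import ABC, abstractmethod

class CloudStorage(ABC):
    """One API for cloud storage. The app codes against this and nothing else."""

    @abstractmethod
    def upload(self, path: str, data: bytes) -> None: ...

    @abstractmethod
    def download(self, path: str) -> bytes: ...

class DropboxStorage(CloudStorage):
    # One provider's implementation lives here ...
    def upload(self, path, data): ...
    def download(self, path): ...

class PrivateServerStorage(CloudStorage):
    # ... and someone's personal server is just another implementation.
    def upload(self, path, data): ...
    def download(self, path): ...

def save_backup(storage: CloudStorage, path: str, data: bytes):
    # The app neither knows nor cares which provider it was handed.
    storage.upload(path, data)

Supporting a new provider then means writing one more small implementation, not rewiring the app.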
The beauty of this is that if a given provider - say Microsoft - doesn’t offer the required API, they will actually be under pressure to implement it. “I’d like to use SkyDrive,” says Joe User, “but it’s not compatible with the apps I like on my smartphone.” When Microsoft realizes that everyone’s still using Dropbox instead of SkyDrive, they’re going to wise up and roll out the necessary API support pretty fast.
Actually, they probably won’t. This is Microsoft we’re talking about, after all. But I can dream.
The death of Google Reader is a teachable moment. App developers should take note of what just happened to Reeder and NetNewsWire and all the other dozens of newsreaders that used Google Reader for synchronization. They tied a critical feature of their software to one specific implementation. When Google took their ball and went home, they got screwed. Don’t let that happen to you.
Using open, well-defined, standard APIs is the Way of the Web. Locking your product to someone else’s proprietary implementation is a recipe for heartache. Program to the API, not the application. And if the API doesn’t exist, get together with a bunch of like-minded folks and design one. In the long run, it’s better for everyone.
The Verge has a long article about Google’s design process, which talks about the way that Google has tried to create a unified but attractive and effective look and feel to all its applications, across multiple platforms.
In some ways, the article is frustrating: lots of fawning praise for Google’s approach, relatively few actionable recommendations that rise above the obvious (‘have your designers and developers talk to each other’). For me, however, the money quote comes when Duarte, one of the Google designers interviewed, says:
“These are objects. They feel, not necessarily real, but they feel virtual. They’re not trying to be fake things, not … fake leather, fake wood, fake brushed aluminum.”
That, in case you didn’t spot it, is a dig at poor old Apple. For many years, Apple was the company that got design and UX right while everyone else was getting it wrong. To a large extent, it still is. The elegance and success of products like the iPhone and the iPad stem from much more than the superficial ‘lickability’ of their gleaming, futuristic interfaces. They’re the product of the way things work consistently and predictably, the way that the animations and the visual hints work to build an impression of objects that are, in Duarte’s words, “not real, but virtual”.
Unfortunately, Apple has recently tarnished its reputation for making good design choices by embracing skeuomorphic designs (complete with fake leather) for some of its built-in apps in MacOS and iOS. Skeuomorphism isn’t always wrong: if you can give the user a hint about functionality or meaning by alluding to some real-world object, it makes sense. Slavishly reproducing the look or functionality of something in the real world, however, is almost never a good idea. It’s a kind of cargo-cult approach to UX. Apple’s crude and ugly designs for the Calendar and Address Book are particularly shocking, both because they come from a company with a long history of making good choices and because their appearance is so glaringly different from everything around them.
Apple’s clumsy flirtation with skeuomorphism has spawned a backlash in the form of the flat design movement (or, as some commentators would have it, the almost flat design movement). But flat design isn’t a panacea. The flat design of RealMac Software’s Clear to-do list app, for example, works well (so much so that the iTunes Store is full of Clear knock-offs). On the other hand, all the flat design in the world can’t save Microsoft’s bizarre and confusing Windows 8 interface (the UI formerly known as Metro).
The reason Clear works has less to do with the look of the interface and more to do with the way it works. It hinges on a set of easily-learnable gestures that are tied to underlying behaviors. Essentially, Clear gives the user a set of virtual objects that they can manipulate, objects that may not have any direct mapping to the real world, but which behave in consistent and predictable ways. Once you have that, you don’t need skeuomorphism. Flat design is all the design Clear needs. You could implement Clear so that its objects appeared to have depth and texture, but you wouldn’t gain anything by it.
Slavishly adhering to the rules of flat design is probably just as much of a blind alley as trying to minutely recreate every detail of real-world objects. Or perhaps not just as much - at least a flat design won’t create the kind of visual atrocities that you get from excessive skeuomorphism. But really what we should be aiming for is ‘just enough design’: enough detail, in short, to help the user recognize and understand the virtual objects and metaphors that underlie the application. A user interface is the map of a virtual world: the first and most important duty of a designer is to choose a design that accurately and informatively reflects that underlying ‘virtuality’.
In my distant youth, I wasted a great many hours playing a space trading game called Elite. Elite ran on an early 8-bit computer called the BBC Micro and, despite the crudity of the graphics, was seriously addictive. David Braben, one of the authors, went on to leverage the greater power of the PC to produce more ‘realistic’ and complex variants under the title Frontier, but for me Elite will always be the game.
Now there’s going to be a new Elite, the Kickstarter-funded Elite: Dangerous, which has passed its funding target with more than £1.5M raised.
One of the most interesting aspects of this Kickstarter is that science-fiction publisher Gollancz kicked in a big chunk of funding, in return for rights to the tie-in novels. Elite and Frontier were always sold with booklets that contained SF short stories and novellas, intended to flesh out the universe. The most famous of these was the novella The Dark Wheel by SF writer Robert Holdstock, which accompanied the first game in the series, but later games were also sold with collections of short stories, mostly by lesser-known writers.
The list of authors included one very little known writer indeed. Via a friend who worked for Frontier Developments, I ended up contributing a bunch of short texts and one complete short story, for which I was paid quite generously. Thanks to fan-site Life on the Frontier, I’ve just re-read my own story for the first time in close to twenty years. It’s not a classic of the genre, but it’s not as embarrassing as I feared. Perhaps I should reach out to Gollancz …
It’s rare that I actually manage to drag myself to the movie theater to see a new movie. Some of that has to do with my own inertia. Some of it has to do with an increasing reluctance to pay inflated prices to sit through the thirty-minute Ordeal by Commercial that precedes every showing. And some of it has to do with all the other inconveniences of watching a film in the company of people who can’t leave their cellphones alone for more than four minutes without suffering separation anxiety. What this means is that I don’t see many mainstream movies on their first run and do most of my movie-watching on seatback LCDs in aircraft, an environment that makes good movies painful and bad movies excruciating.
At a friend’s suggestion, however, I went to see The Hobbit (or, I should say, The Hobbit: An Unexpected Journey, as this is merely the first of three of these enormous things that are going to roll over us at one-year intervals). By and large, I had a good time.
The first and biggest obstacle to my enjoyment was, of course, the 3D. My friend had tried to buy tickets for the 2D showing, but the theater pulled some kind of bait-and-switch on him. When I got to the theater, he handed me my 3D glasses and muttered “Welcome to Hell”.
Unlike the 3D in Avatar and Prometheus, which was relatively unobtrusive much of the time, the 3D in The Hobbit was in your face continuously. If the illusion were perfect, that wouldn’t matter. 3D in movies today, however, is anything but perfect. In The Hobbit it seemed particularly egregious, making it look as if the world was divided crudely into two planes. Selected objects in the near plane seemed to be detached from a background plane in which everything else was happening. The result was an unrealistic two-dimensional look that ruined many of the best moments in the film. Between bad 3D, heavy-handed use of depth-of-field effects and eyestrain, I kept getting jerked out of the story. Instead of 3D creating an immersive experience, it just reminded me constantly that I was watching a movie. I really believe that 3D is one of the biggest steps backward in the history of film and The Hobbit did nothing to change my mind.
Most of the other major problems with The Hobbit probably stem from the fact that the accountants have apparently decreed that it has to be a trilogy. That means that a lot of extra material has to be shoe-horned in to pad the story out to the requisite length. I don’t mind the extensive exposition so much, but the scenes featuring Radagast the Brown as a kind of woodsy comic hobo cry out to be cut in their entirety. Radagast - who never appears ‘on-stage’ in either The Hobbit or The Lord of the Rings - is a vital figure, the anchor point of the continuum whose far end is Saruman. It is because Gandalf values the despised Radagast and what he represents that he is able to remain true to himself, rather than falling into the trap of ambition that destroys Saruman. That’s not, however, a reason for adding him into the movie. The film’s invented scenes featuring Radagast are not only embarrassing to watch, they miss the point of the character as well. Cut here, please.
The film also features a certain amount of what I think of as ‘fairground ride’ moments: giant explosions, crashing timbers, people dangling from things, epic falls in which the laws of physics are suspended, and so forth. I have a low threshold for this stuff, even when it hasn’t obviously been put there to support the videogame tie-in (George Lucas, I’m looking at you). Instead of making me excited, it just makes me mutter “Oh, come on.” Unfortunately, it seems that the director was contractually obligated to throw in some fixed quantity of thrills and spills, to the detriment of the film.
Despite the obnoxious 3D and other obvious weak points, I found many things to like about The Hobbit. The ‘landscape porn’ that Peter Jackson does so well is as beguiling here as it was in The Lord of the Rings. The key to Tolkien’s success is the way that he draws the reader into the world of Middle-Earth; Peter Jackson found Middle-Earth in the landscapes of New Zealand, and the films are never better than when they show the characters surrounded by these vast, beautiful landscapes. The heart-stopping scenery immerses the viewer in the story in a way that all the clunky 3D effects in the world cannot.
The other ace Jackson has to play is Ian McKellen as Gandalf. McKellen might not perfectly incarnate Gandalf, but he’s one of the few actors with the stature and the ability to even attempt the role. He’s as good a Gandalf as we’ll see in this lifetime and that’s no small thing.
There are other high-quality performances too. Martin Freeman delivers an excellent Bilbo, Hugo Weaving a reliable Elrond. Cate Blanchett, whose scenery-chewing episode supported by third-rate audio and visual effects was one of the low points of The Fellowship of the Ring, atones for it with a scene featuring Galadriel as she should have been, all subtlety and quiet majesty. Perhaps it’s simply that Jackson shoots her as if she were a particularly lovely piece of landscape, perhaps it’s that Blanchett is an actress whose strength lies in nuance rather than melodrama. In any case, she is vastly better here than she was in the previous films.
Inserts aside, Jackson is wise enough to stick fairly close to the source material, but he does add some lovely visual touches of his own making. The Great Goblin and his court are almost Bosch-like, at once comic and horrifying. Scenes of wargs racing downhill under a moonlit sky are pure beauty. Not all of the scenes featuring computer-animated creatures quite hold up, but the technology has evolved even since The Lord of the Rings and the apparent realism can sometimes be spectacular. Even the animated Gollum, who didn’t always convince me in the earlier films, held up well here.
The hardest choices that Jackson had to make may have been to do with the portrayal of the dwarves. In the book, I remember them as a largely undifferentiated mass. With the exception of Thorin, Fili and Kili, I wouldn’t be able to tell you what their various defining traits were. Jackson has done a good job of turning them all into individuals, although the result is a kind of weird composite palette that seems to draw on every possible representation of dwarves ever seen in fantasy: white-bearded gnomes, Disneyesque grotesques, rugged pseudo-Celts, fantasy exotics. Aidan Turner’s startlingly handsome Kili rubs shoulders with a bulbous-nosed Bombur and a Dwalin who could have stumbled off the set of a Mad Max movie. I’m not entirely sure it was a good idea, but while the visual gamut from caricature to heartthrob can be disconcerting, it’s less grating than you might expect.
One thing that Jackson deserves credit for is the way that he fleshes out the character of Thorin. Reading the book never gave me a great sense of the dwarf leader as a person. I had no strong mental image of him and his most-defined attribute seemed to be a general dourness. Here, ably played by Richard Armitage, he emerges as a powerful and conflicted figure, as much a major character as Bilbo or Gandalf. When he is onscreen, you start to wonder if the whole thing shouldn’t be retitled The Tragedy of Thorin Oakenshield.
Jackson’s other great insight is the way that the story hinges on the issue of home and homelessness. Bilbo’s comfortable, relentlessly English petit bourgeois existence with all its little comforts stands in stark contrast to the rootless existence of the dwarves. They are literally refugees and their hunger for a home of their own is the force that drives them all on. It’s a pity that Jackson is obliged to drive that idea home with a sledgehammer, to make sure that it’s not lost on even the dullest multiplex audience. Despite that, it’s a sharp observation and one that adds some depth to a superficially simple tale of adventure.
I ended up enjoying The Hobbit, although the reviews apparently say that I shouldn’t have. If you have a few hours to spare, by all means go along and see it.
Just not in 3D.
I wanted to play around with a new static site generator called middleman, which looks as if it will do lots of things I can use. middleman is distributed as a Ruby gem, and I have Ruby, so I can just type:
gem install middleman
and I’ll be good to go. Easy-peasy lemon-squeezy, as we used to say where I come from.
The installation is smooth as silk. Encouraged, I tell middleman to build a skeleton for my first site. Instead of a skeleton, I get an error: ExecJS can’t find a JavaScript runtime, so I’ll have to install one from its list of supported runtimes. First up is therubyracer, which embeds Google’s V8 engine.
therubyracer won’t install because libv8 isn’t installed. libv8 won’t install because my stock version of gcc isn’t new enough. I can already see where this one is going, so I decide to cut my losses and try the next alternative in the list.
therubyrhino implements Mozilla’s Rhino engine. It has no dependencies, or at least none that I can’t satisfy. It installs without problems.
I run middleman again. Same error. therubyrhino may be installed, but ExecJS can’t find it. Searching Google for possible tips, I come across various pieces of advice, none of which seem to apply. Apparently I can resolve the problem by adding some explicit instructions to “the middleman Gemfile”. There is no hint in anything I can find to say where I might find this Gemfile, or even whether it exists on my system at all. It seems it’s connected to something called bundler, which I could probably investigate some more. At this point, however, the rabbit hole is starting to deepen vertiginously.
I go with the next fallback, Node. By now this is starting to feel increasingly like buying an F-15 to deal with a roach problem in the kitchen. Still, I’ve been meaning to do some stuff with Node anyway.
I quickly find that Node isn’t going to install because it considers that the stock version of Python on my box is outdated. Ruh-roh.
There was a time when, faced with an unsatisfied dependency, I would simply dive in and build everything from source. I have learned not to do that on CentOS. The great virtue of CentOS is that it is a very stable, very conservative system. If you use the package manager for everything, you have an excellent chance of having a generally smooth ride. The flip side of this is that if you try to slip in something that CentOS doesn’t officially support, the operating system will punish you with mindless, merciless ferocity. Few things in the universe are quite as vindictive as a CentOS install that catches you trying to put something past it. And it turns out that replacing the stock Python is really the big no-no. If you do that, you will break yum, the doors of Hell will gape wide and the foul fiend will walk the earth, gathering souls for his unearthly kingdom. Or at least you’ll probably never be able to upgrade anything on your box ever again.
Fortunately, I find some instructions that explain how to install Python 2.7.3 on CentOS 5.8 as an alternate. That works relatively smoothly. On with installing Node.
Installing Node from a repository doesn’t work because the only repository that had packages for CentOS 5.8 mysteriously went away about two years ago. But I find more instructions on installing Node which look like they’ll take me the rest of the way.
The Node configure script breaks with a cryptic error, so I have to edit it to point it at my Python 2.7.3 install. After that, it runs normally, so I run make. The build fails:
cc1: error: unrecognized command line option “-Wno-old-style-declaration”
Nothing I can find online is any help at all. I take a guess that it’s because my stock gcc (v4.1.6, which seems to be the latest supported for CentOS 5.8) is too old.
I’m hesitant to upgrade gcc because experience has shown that touching major software like gcc is a recipe for a whole world of pain. See remarks about trying to go round the package manager above. However, it turns out that I can upgrade to gcc44 using yum without replacing the default compiler.
Node now builds successfully. Next I have to edit the Makefile to tell it to use my newer Python, but once I’ve done that, it even installs.
Hmm, now that I have access to a relatively modern gcc, I could probably go back and try libv8 again. But first, let’s see if ExecJS can find Node.
Amazing … middleman ran without error, and built something in the place where I wanted it. This is success … of a kind.
I’m too tired to enjoy my victory. What began as a simple attempt to find out whether a promising tool did what I needed has turned into a multi-hour process full of cryptic error messages and wading through pages of even more cryptic Google results, searching for the one hint that will get me out of the swamp. By the end, I’ve effectively run out of time: I don’t have the time to do what I set out to do because I spent so much time wrestling with the install (and whining about it on Tumblr). I’ll just have to come back to it another day.
This is not a criticism of any of the excellent software packages I have tried to install. It’s just pretty much my experience with doing software installs on Linux generally. You are never far from getting sucked into the vortex of dependency hell. On several occasions, I have simply given up on something because it became clear that the process of trying to get it running was going to suck up entire days of my life, with no guarantee of success at the end.
The alternative to the well-intentioned chaos of Linux, of course, is the sterile monoculture of MacOS X or Windows, where everything is locked down and the user is spoon-fed from a limited menu, but even there, there are no guarantees. The Mac App Store, the ultimate spoonfeeding experience, can’t seem to download one application that I bought, insisting that the file is damaged and refusing to launch it.
Creating software so that it can be installed across a range of machines and configurations is just hard. That’s all there is to it. Smarter people than me have made tremendous strides in arranging things so that some idiot who doesn’t really know what he’s doing can get stuff going without too much trouble, much of the time. Unfortunately, we’re not there yet.
The one observation that I might make is that these problems are associated with a widening gap between development and production environments. The guys who make cool new toys like Node or middleman are building them on development boxes that they control, probably running some OS that encourages you to install the latest and greatest of everything, like Ubuntu. Modern gcc? Latest Python? Of course they have that installed. They probably built it from source themselves. So they’re never going to run into the problems you get when you try to deploy their software on a production box running older, known-stable versions of key software.
Maybe the lesson is this: if you as a developer want people to use your software in production, try being more conservative when you build out your development environment. If you don’t need features from the absolute newest version, try using an older one. The more modest you can be in your requirements, the smoother the installs will go, and the more widely your software will be adopted.
And the more chance people like me will have of actually getting something done.
In an attempt to discover new music, I’ve been using the random ‘radio’ feature of various online music services. I pick an artist or a genre that I like, and let the service suggest tracks that are related to that. Based on that, I’ve been able to infer something about the algorithms that each service uses. Here’s what I’ve worked out so far.
GET chosen_artist_or_genre
SET albums = the_3_related_albums_we_have_permission_to_play
REPEAT
    PLAY random_song_from(albums)
    PLAY random_song_from(albums)
    PLAY ad_for_kosher_adult_diapers
END
GET chosen_artist_or_genre
SET songs = songs_approximately_like_song_user_likes
SET song = random_song_from_selected_set(songs)
REPEAT
    FOR i FROM 1 TO 3
        IF was_last_played_less_than_3_minutes_ago(song)
            PLAY something_else
        ELSE
            PLAY song
        END
    END
    SELECT ad FROM (ad_for_spotify_feature, ad_for_artist_in_totally_unrelated_genre)
    PLAY ad
END
GET chosen_artist_or_genre
REPEAT
    IF random(5) > 0
        SET songs = songs_exactly_like_song_user_likes
        SET song = worst_song_in_selected_set(songs)
        PLAY song
    ELSE
        PLAY random_song_by_conor_obersts_bright_eyes
    END
END
I am still not a Ron Paul supporter. But I am watching with a certain amount of open-mouthed amazement (or should that be amusement?) as the Republican Party continues their battle to airbrush him out of history.
Their latest efforts to give the impression that the party is unanimously united behind the officially-sanctioned candidate are detailed in this Fox News video. It seems that the RNC is playing fast and loose with its own rules on delegates, dismissing elected delegates and even trying to rewrite the rulebook on the fly to make sure that Paul can’t be nominated. Never mind that Romney has enough votes to secure the nomination: the RNC wants to make damn sure that Paul’s name can’t even be placed into nomination. The official story is that everyone wants Romney, and contrary opinions will not be heard.
It’s hard not to love the Republican Party when they do things like this. They are like perfect movie villains, deplorable along multiple dimensions at once. From nonsense-spewing bigots like Todd Akin to the way that the party machine demonstrates an almost Stalinist eagerness to enforce absolute conformity in backing their anointed choice, the Republicans are doing everything they can to make sure they will never be mistaken for the lesser of two evils.
It’s hard not to get the idea that while the Republicans like the sound of the word ‘democracy’, they’re none too keen on it in actual practice.
UPDATE: The RNC did offer Ron Paul the opportunity to speak at the convention … provided he submitted his speech for prior vetting, and promised to endorse Romney completely.
Proving once again that there is no such thing as ‘targeted’ email, I just received unsolicited email apparently from someone called Lisa Benson, who appears to be backing a Congressional candidate in Arizona. The mail begins:
If you are receiving this letter it is because you and I have worked together fighting back against the Islamization of America, or have been defending and protecting Israel as colleagues, friends and donors for many compelling projects. I thank you for your steadfast devotion.
It’s rather sobering to be reminded that there are really people who worry about ‘the Islamization of America’. Because I’m pretty sure that’s not actually a thing.
The one thing the message lacks is a link that says “I detest you and everything you stand for. Please take me off your mailing list.”
But that’s what spam filters are for.
A long time ago, in a galaxy far, far away, there was a director called Ridley Scott, and he made two of the best science-fiction movies ever. One of these was called “Blade Runner”, and it dealt with big hairy themes such as what it means to be human, mortality, and what Los Angeles would look like if it never stopped raining and the whole city was covered with huge gloomy pyramids and people who don’t speak English. The other was called “Alien”, and it is also about what it means to be human, only this time it provides an answer: being human means that you’re dog chow for pretty much everything else in the universe, especially eight-foot-tall creatures with extensible jaws and acid for blood.
Both films offer an intensely atmospheric vision of the future. The city of “Blade Runner” doesn’t just feel like a real place, it feels solid and weighty and gloomy and oppressive in a way that no subsequent movie, no matter how big the CGI budget, has ever quite equalled. “Alien” divides its action between an alien planet - which is creepy and weird and strange and threatening - and a spaceship - which is gritty and industrial and scuffed and kicked-about, run by a crew who are also gritty and industrial and kicked-about. Both movies are immersive: Scott evokes these future environments so perfectly that you feel like you’re right there.
Another strength of the movies is that there are pretty much no plot holes and no loose ends. Everything that happens is explicable, everything that happens advances the story. Characters act in ways that are consistent with their goals and personalities. Any time you find yourself asking “But why did X do Y?”, there is an answer, and it’s an answer that deepens your understanding and appreciation of the movie.
That was Ridley Scott, and that was “Alien” and “Blade Runner”. Fast forward to 2012 and someone who claims to be Ridley Scott has just released a new movie. This one is called “Prometheus”, and it’s unmistakably set in the same universe as “Alien” (and, conceivably, “Blade Runner” as well, although that’s not important).
“Prometheus” looks gorgeous. You can recognize Scott’s touch in the composition, the lighting, the whole look and feel. The environments of “Prometheus” are not as memorable as those in “Blade Runner” or “Alien”, but they’re pretty good. They’re certainly not going to disappoint any sci-fi fan who dreams of shiny spaceships and mysterious alien worlds.
There, however, is where most of the resemblance to a Ridley Scott movie ends. “Prometheus” is a squirming mess of plot-holes, inconsistencies, and inexplicable behavior.
One of the things that made “Alien” terrifying was that the characters make all the right moves and the monster still goes through them like a linebacker through a preschool creche. Unlike a typical horror movie, they don’t put themselves in danger by doing absurdly dumb things (they make one Big Mistake, but there’s a reason even for that). They act like real people trying their damnedest to save their own lives and it does them precisely no good whatsoever. That is real horror.
The hand-picked crew of the zillion-dollar exploratory starship Prometheus are a completely different story. Not only do these supposedly elite scientists and explorers lack the professionalism and basic smarts of the grungy roustabouts of the Nostromo, they actually seem to be dysfunctional and incompetent to a degree that makes you wonder if they could safely operate anything more complex than a smartphone. Sure, they have Your Plastic Pal Who’s Fun to Be With to do the physical and intellectual heavy-lifting, but the first rule of Ridley Scott movies is “Don’t Trust the Robot”, so you know how that one’s going to turn out.
I’m sorry, was that a spoiler? I was trying so hard to avoid them.
Anyway, the crew of the Prometheus are a pack of assholes, they act like total idiots, and mayhem ensues. (That’s not a spoiler: if you didn’t know there was going to be mayhem, you’re queuing for the wrong movie). Some of the mayhem is vaguely believable and advances the plot. Some of it just doesn’t serve any purpose whatsoever, except to thin an overlarge cast, something that could more usefully have been done at the script-writing stage. In between, characters act in ways that leave you going “What? Why?”
“Alien” was tight: everything that happened moved the plot along, everyone acted in ways that were consistent with their character. “Prometheus” is sloppy: there are too many unnecessary characters, there are interactions that don’t make sense, action that doesn’t add anything except action, people whose actions are simply not believable. The Plucky Heroine starts out frail and ends up superhuman. It’s a mess, and none of it is necessary. Scott could have fixed all the inconsistencies at the writing stage, told the same story and made a much stronger movie.
“Prometheus” isn’t all bad. It does look gorgeous. It’s decent science-fiction. Buried in it are meditations on powerful concepts - mortality and attachment and faith and what it means to be human and more besides - with hints of more to come. There are iconic characters struggling to get out. Above all, it reboots the tapped-out “Alien” franchise in a way that I would almost call brilliant. Instead of making just another installment of “Alien”, Scott opens up his universe and sets up for a fresh arc that could head off in some really fascinating directions. It’s not a prequel so much as the start of something entirely new.
The problem is that when you go in expecting something close to perfection, and you get instead the kind of amateurish fumbling that you’d expect from the hack director of next summer’s big popcorn movie, it’s hard not to come away feeling disappointed.
A few years back, I used to hang out on a semi-regular basis in a fake-Irish bar (very fake-Irish; it was actually French) with a good friend. On most nights, as the foam settled on the second pint of Guinness, the conversation would turn to a theme that she used to summarize as “Why do we suck?”
For the record, my friend doesn’t actually suck. She’s awesome, and she does awesome stuff Every Single Day. However, she thought that she sucked, which is another matter. Or perhaps not.
Anyway, the concept of sucking vs. being awesome was just reinjected into my consciousness by a blog post titled How to Stop Sucking and Be Awesome Instead written by Jeff Atwood of Coding Horror, which was promptly retweeted, reblogged, summarized and recirculated by pretty much everyone on the Internet who could actually tear themselves away from looking at cat pictures for thirty seconds. Here, finally, was the secret formula that we all needed to stop sucking forever.
The first thing that I read was the summary of Jeff’s thesis, distilled down to three points: embrace the suck, do it in public, pick stuff that matters. I have plenty of respect for Jeff Atwood, but this immediately put my hackles up. The sites that reposted the piece, usually expanding very slightly on his three bullet points, made it sound like just another variant on the theme of “The reason why you fail is because you fear failure, so you don’t try.” That little claim gets trotted out in pretty much every single piece of self-help pornography that you read on the Internet or anywhere else. If we could only get over our fear of not succeeding, then everything would fall into place.
You know what? I call bullshit. I really don’t believe that we are all quivering geniuses, trembling on the brink of unleashing something wonderful but held back by our fear of embarrassing ourselves in public. I’m sure some such people exist. I think few of us fit that pattern, though.
In my own case, I can say with a certain degree of confidence that ‘fear of failure’ is not holding me back. I am extremely familiar with failure, but I don’t lie awake at night worrying that I’m about to fail. Fear that I might not succeed isn’t stopping me doing anything (except perhaps dance, gymnastics and free-fall parachuting, and I think you have to concede that the third of these is actually a pretty reasonable fear to have). I have plenty of projects that I work on, all of which are substantially lame. The reason that they are lame, however, is not because I am held back by my fear of failure and public scorn. They are lame because I am intellectually lazy and easily-distracted (ooh, look, cat pictures!). The things I make suck because I Don’t Do The Hard Work, not because I’m too intimidated by the possibility of failure to let my genius flower.
Once you move beyond the bullet points, Jeff’s piece contains some decent advice. His first recommendation, ‘embrace the suck’, is really just a restatement of the idea familiar to writers under the name of “shitty first drafts”. Don’t aim for perfection in your first version: just get it out of your head and onto paper or into code as the case may be. Then refine it from there. It’s not a new idea, but it’s one that’s been tried and tested and found to work. So I’m with him there.
To illustrate the second point, the slides talk about various Internet projects and remind you that you can get pretty much all the training you need for free. He doesn’t really say why it’s good to do stuff in public - it’s presented more as if it’s a virtue in and of itself - but I can fill in that blank. You do stuff in public because that’s how you get the feedback you need to improve. You can’t see what needs to be fixed if nobody but you ever looks at it or tries to use it. Again, that’s fair enough. It’s a recurrent theme, particularly in photography or art circles: get critiques of your work and pay attention to what people tell you. ‘Doing it in public’ is just a means to that end.
Finally, ‘do stuff that matters’. To whom? To you, presumably, or you won’t care enough to work on it. And to others, or they won’t care enough to use whatever you make or to comment on it. Again, that’s fair enough. You have a limited amount of time at your disposal. It should be obvious that you can’t afford to waste it on stuff no one cares about. (Should be obvious, but alas …)
So, all summed up, his points are fair enough, but if you were looking for the magical recipe that will transport you from Suckville to Awesometown, you’re going to be disappointed. They are not the Secret you are looking for.
I can tell you the Secret, though, and it is this: Do the Fucking Work. I know a number of people who I think of as successful, and I do not believe that any of them ever battled heroically to overcome their fear of failure. They might have unconsciously followed Jeff Atwood’s three principles, but that wasn’t why they succeeded. The reason that they succeeded was because they saw something they wanted to do and they worked at it like maniacs until it was done. There is no other way.
Now I’ve spelled that out for you, you can stop reading all those blog posts and self-help books about ‘success’ and ‘creativity’ and ‘overcoming your fear of failure’. Ignore all that shit, and just go ahead and put your time and energy where it matters. Start now.
A few years back, the popular social site Digg decided to launch a major redesign, probably as part of an effort to better integrate advertising so that they could pay their bills. Digg had flirted with changes before, most of which were initially deplored by Digg’s highly-vocal user community and then grumblingly accepted.
This time, something went horribly wrong. It’s not entirely obvious what was so bad about the new design, but users hated it with a passion. Traffic to the site fell off by 25% in the three months following the launch. Today, the once vibrant Digg feels like a ghost town. Meanwhile, Reddit, a site that has embraced the Craigslist credo of “Designers? We doan need no steenkin’ designers” and still clings proudly to a look-and-feel that looks as if it was thrown together by a 14-year-old in 1996, is thriving.
The fact that sites like Reddit or Craigslist can survive for years with a design that has all the visual appeal of well-aged roadkill doesn’t mean that design is unimportant. Both visual design and UX are key factors in creating a site that is easy to navigate, clear to read, and enjoyable to use. But it’s important not to change things arbitrarily. Familiarity is a big part of positive UX. Changes disrupt learned patterns. Existing users - particularly the power or heavy users who drive the community - may find the new site harder to interact with, simply because they’re accustomed to a certain way of working. Some users will never adapt. Even if the learning curve is small, even if the changes are positive, all they see is that it now takes them longer to do what they’re used to doing. They complain, they resent the change, and they leave. If the new design makes enough of your core users rage-quit, you’re in real trouble.
The latest popular site to play Redesign Roulette is the bookmarking site bit.ly. Bitly pulled off an impressive trick. They entered a market that was not only saturated - because anyone can make a URL shortening service, and pretty much everyone has - but also dominated by a single player, TinyURL. They set out to take over the market just by being visibly better than the competition. Amazingly, they seemed to have pretty much managed it.
Now, however, they’ve launched a redesign, and it’s not being well received. A certain percentage of users will always complain when something changes, but this time there’s some real substance to the criticism. On the old Bitly, the first thing you saw when you went to the site was a big field where you could paste the URL that you wanted shortened. You pasted in the link, Bitly shortened it, you copied it, and went on your way. If you cared, you could hang around and customize the link, or even look at stats to see how many of your friends had clicked the last link you shortened, but that was icing on the cake. The core of the cake was this: you could shorten URLs, and you could do it quickly and easily.
The new redesign has changed that. The big field has disappeared. Its place has been taken by a list of the links you shortened recently (now called ‘bitmarks’), plus a search field that lets you search through them. You can create ‘bundles’ of links and add notes to them. Shortening - the core functionality of the site - has been relegated to a field in the upper right. It’s not even a field: you have to click on a piece of text that says ‘Add a Bitmark’, then paste your link and hit Return. When you do that, Bitly pops up a dialog that gives you the option to type a note. Then you click a Save button to actually create your link, after which you need to click again to copy the URL. The one thing that every Bitly user wants to do has gone from a simple one- or two-step operation to a cumbersome multi-step procedure in which you’re deluged with irrelevant information at every turn.
At first sight, this looks like madness. Prime real estate is given over to features that are little or never used (when is the last time you wanted to search your saved links?), while the core functionality of the site is squeezed in as a cluttered afterthought. It’s as if you bought a new car and discovered that the entire dashboard was taken up by the radio, with the steering wheel and accelerator hidden away in the glovebox.
More troubling still, Bitly CEO Peter Stern appears to be in denial about the reaction:
It’s the response from the vocal minority who are quick to complain about any change.
he told Techcrunch dismissively. It’s never a good sign when a company reacts to criticism from its customers by saying “No, you’re wrong.” Even if they’re right, it’s poor customer relations and smacks of hubris. But in this case, I think the users are right. In terms of what most users want to do, the design is a step backwards.
The problem, I suspect, is that the new design is driven by what Bitly wants users to do, not by what users want Bitly to do. Bitly wants to promote deeper ‘engagement’ with the site, presumably as a step towards monetization. There’s not much scope to make money if your users just paste, click and leave. But if you can draw them into a more complex interaction with the site then more opportunities emerge. Bitly is a business: it has to put profit first.
Digg’s redesign was similarly motivated. Digg didn’t want to make the best social bookmark sharing site possible: they weren’t too far off that target. They wanted to make one that could make them money. The problem was that what Digg wanted and what users wanted ended up being at odds. The redesign was for Digg’s benefit, and the end result was that users walked, killing the very goose whose golden eggs Digg had hoped to pocket. If Bitly isn’t careful, they could join Digg on their plunge to obscurity.
Redesigns don’t kill websites; redesigns that put your needs ahead of your users’ needs kill websites. Bear that in mind.
There’s a story that the humorist W.C. Fields once asked for a drink in a bar and was told that he couldn’t be served because it was Election Day. Outraged, Fields demanded to know how this came to be a law. “Why, the legislature made this law — the people voted for it,” the barman answered. Fields responded “That’s carrying democracy too far!”
Beyond the humor of Fields’ answer, there’s a serious point. Pure democracy doesn’t protect anyone’s rights. Ninety-nine people could vote to deprive one person of rights, property, even life. In fact, fifty-one people could vote against the interests of the other forty-nine. Warren Ellis puts it like this:
You want to know about voting. I’m here to tell you about voting. Imagine you’re locked in a huge underground night-club filled with sinners, whores, freaks and unnameable things that rape pitbulls for fun. And you ain’t allowed out until you all vote on what you’re going to do tonight. You like to put your feet up and watch “Republican Party Reservation”. They like to have sex with normal people using knives, guns, and brand new sexual organs you did not even know existed. So you vote for television, and everyone else, as far as your eye can see, votes to fuck you with switchblades. That’s voting. You’re welcome.
The most recent instance of this has been in North Carolina, where a majority of voters supported an amendment to the state constitution that would restrict the definition of marriage to exclude same-sex marriages. In doing so, they weren’t doing something merely symbolic. The decision to recognize a particular union as a marriage or not has real implications. It has consequences for the rights of same-sex partners to inherit property or to adopt children together, for their finances, even for the right of one partner to determine what medical care their partner should receive or visit them in hospital if they are seriously ill, and more besides. In essence, the voters of North Carolina just voted to deprive a selected group of their fellow citizens of some of the same rights that they enjoy.
Most of them seem to have done so for religious reasons. As followers of one possible interpretation of a collection of rather arbitrarily-edited and often ambiguous religious texts written more than twenty centuries ago, they believe that same-sex relationships are innately ‘sinful’ and ‘wrong’. They believe that they have the right - even the duty - to punish their neighbors for violating the moral code they have chosen for themselves. They even believe that this issue is so important that it takes precedence over the general recommendations made by the founder of their religion. On this one issue, the words of the latecomer Paul apparently trump even Jesus’s clear command to “love thy neighbor as thyself’.
The problem of the ‘tyranny of the majority’ is endemic in democracy. The reason why democracies don’t generally allow majorities to ride roughshod over the rights of minorities - or at least not overtly - is that most of them have some ground rules built in. The purpose of these ground rules - state and national constitutions in the case of the United States, common law in the United Kingdom and so on - is to ensure that the will of the people can be expressed insofar as it doesn’t trample on the rights of any group. For democracy not to devolve into tyranny, the ground rules must set limits to the power of the popular vote.
Some of the most important ground rules are framed in terms of human rights. They say that whatever else you decide, you can’t take these rights away. That’s how you avoid the tyranny of the majority in a democracy, or at least try to limit its capacity to do harm.
The problem is that the ground rules can’t be set in stone. There has to be a mechanism for updating them to reflect changing times. In the US, that mechanism is called amendments to the constitution. Amendments should be used sparingly and, in my view, they should always go in the direction of increasing rather than reducing people’s rights. The 18th Amendment (and the Volstead Act that enforced it) is an example of a change that went in the wrong direction and had to be repealed later.
In North Carolina, voters voted to change the ground rules. They also voted in the direction of reducing the rights of their fellow citizens.
In the words of W.C. Fields, that’s carrying democracy too far.
Reddit is currently hosting an interesting AMA with a botnet operator and malware coder, which begins with some useful (but obvious) tips on protecting yourself from drive-by downloads. While reading the page, the following quote caught my eye:
The ‘deep web’ is full of furfags and pedophiles, 50% of I2P deep web is furry porn, 30% conspiracy crackheads and the remaining pedophiles. TOR deep web has more pedophiles and less furfags. It’s awful.
Bear that in mind the next time you see a vendor boasting about how they can “search the deep web” …
The Philadelphia Fraternal Order of Police would very much like to oust retired police captain Ray Lewis from the police union because of his involvement in the Occupy protests. The FOP has accused Captain Lewis of “not respecting” the uniform by wearing it at protests, and wants to see him kicked out and stripped of his pension and benefits.
I happened to be standing very close to Captain Lewis when he was arrested in New York last November, and took video of his arrest. During the time I was there, I never saw him act in any way that would discredit his uniform or the Philly PD, nor did he ever present himself as anything other than what he was - a retired cop expressing his personal point of view. He was a dignified, calming presence at the protests and his graceful act of civil disobedience rightly won him the immediate admiration of everyone in the crowd. There is no doubt in my mind that his actions, far from disgracing his service, actually raised it in the estimation of many people there.
Voltaire’s biographer, Evelyn Hall, famously summarized the French philosopher’s beliefs with the phrase: “I disapprove of what you say, but I will defend to the death your right to say it.” The Fraternal Order of Police’s attitude might be better summed up as “I disapprove of what you say, and I intend to punish you for daring to say it.”
I recently came across a post in which John Scalzi explains why he deleted his Klout account. To paraphrase his argument in my own words, Klout creates an artificial anxiety about your Klout score, which leads to you behaving in ways that are presumably in Klout’s interest, not yours.
I’m a little surprised that anyone takes Klout seriously, given that Klout scores are notably arbitrary. I signed up for Klout, looked at my score once and then immediately lost interest. Klout attempted to get me back by sending me emails I hadn’t asked for and from which I couldn’t unsubscribe (their software was broken) but eventually I managed to wriggle off their mailing list, and that was it as far as I was concerned.
Klout’s … let’s call it a psychic model … is a variant of what I call ‘leaderboard anxiety’. The idea is that some service sets up a metric by which you can evaluate your status. You are then supposed to obsess over this number and - the goal of the whole thing - keep coming back to the service to check on your score. Klout scores, Facebook friends, Twitter followers, Foursquare mayorships, Reddit karma, all work on the same principle: get more, improve your score, earn worthless badges, feel bad about yourself if you don’t ‘measure up’.
Closely related is ‘attention anxiety’. Again, there’s a ‘score’ to track, but this time it’s a local score linked to a specific action that you have taken: Tumblr reblogs, Twitter retweets, Facebook likes, upvotes on Digg or Reddit, Pinterest repins. You do something - tweet something witty, blog something insightful - and then you check back obsessively to find out how many people have liked or repeated what you said.
The infamous Zynga, maker of Facebook games, invented another anxiety to keep you coming back. Call it ‘tamagotchi anxiety’, or ‘spinning plates anxiety’. You have an unstable system - a Farmville farm - that requires constant attention. Unless you attend to it continuously, everything goes rapidly to hell, and your virtual pets reprove you pathetically for your heartlessness, tongues lolling and little x’s stamped on their tiny eyes. It’s rather like being a system administrator, but with less heavy lifting of servers into racks.
These features aren’t accidental: they’re deliberately engineered and their common goal is to get you to keep coming back to the service and keep participating. Maybe if I do this, I can improve my Klout score. Maybe if I post a funnier tweet, more people will retweet it. It’s an artificial addiction based on our need for constant validation.
Anxieties aren’t the only tool social media has up its sleeve to keep us hooked. Another is what you might call “distraction satisfaction”. We’re drawn to look for new stimuli, for little crumbs of new information that give us something to think about or act on. Twitter and Facebook play on this. It’s very easy to think “I’ll just check my Twitter/Facebook/email/phone messages to see if anything new has come in; it’ll only take a second.” But of course it doesn’t just take a second. If there is something new, it leads us off down a procrastinatory rabbit hole. If there is nothing new, we’re left feeling dissatisfied, so we try again. If Twitter didn’t deliver, try Facebook. If Facebook didn’t deliver, there’s email, Reddit, Digg, text messages, Twitter again …
Catering to our natural urge to procrastinate might be a little more benign than deliberately inducing anxieties but it’s equally insidious. Moreover, the end goal is the same: to keep us coming back and to get us to participate. As a bonus, our own involvement makes the service more effective at dragging our friends into a similar spiral. As we send out our tweets and post our Instagram photos, we’re baiting the trap for others. When we mention someone’s Twitter handle, or tag them in a photo on Facebook, we’re passing out a little packet of distraction. The designers know this. Each new feature added to a social media service is designed not to increase the utility of the service for its users, but to increase the utility of its users to the service.
The lesson to take away? Social media is not your friend. You are being manipulated in ways that are harmful to you.
Speaking for myself, I’m mostly immune to leaderboard anxiety, but I do worry about how many people repost or retweet what I say (although based on my performance to date, I should just give up). And, I will admit it, I check my email and Twitter far more often than I should.
The odd thing is that following people on Twitter gives me a quite different anxiety, one that I don’t think has been designed by the social psychologists. It’s that all the people I follow seem to be doing such cool stuff - building web applications, staying on the cutting edge of their discipline, writing novels, taking photos, traveling to exotic places, making art. When I read about their projects and their successes, I start to feel anxious about how little I’m doing in comparison.
Maybe it’s because I spend too much time using social media …
Publisher Tor Books has announced that they will begin selling their ebooks unprotected by DRM. This seems to be a consequence of something that’s been brewing for a while, and is a move that has been predicted by various observers (good call, Charlie).
If this is the beginning of the end for DRM, it’s high time. DRM is largely ineffectual at preventing piracy, but it is a persistent annoyance to honest readers. DRM locks you into reading the way that the vendor wants you to read, not the way you want to read. DRM’d ebooks, like DRM’d content of any kind, are hostage to changes in technology and unilateral policy changes down the line. You never really own anything that has DRM on it. While I can’t say that I never buy DRM’d ebooks, the knowledge that something comes encumbered by DRM is always enough to make me think twice. Very often, it’s enough to make me not buy at all.
Conversely, I buy very happily indeed from vendors who trust and respect their customers enough to offer them open ebooks in a choice of formats. O’Reilly have seen a lot of my dollars. When I learned, in the wake of the Tor announcement, that Nightshade Books sell their titles DRM-free through Baen Books (whose own catalog is also sold in multiple open formats), I went on a minor spending spree, picking up a number of books I’d had my eye on for a while. (Tough shit, Amazon - Baen got the sale, you didn’t).
It’s too early to tell if the publishing industry as a whole will follow suit. Still, the signs are hopeful. In the meantime, I think the onus is on readers to reward the early adopters and let them know that the experiment can work. Time to go shopping …
In the Republic of Genoa, a “pittima” was a person employed to try to shame debtors into paying their debts by accosting them in public [Wikipedia, Italian version]. The pittima, who was often a person incapable of performing other work, wore distinctive red clothing, attracting more attention to the debtor as the pittima followed them through the streets or the markets.
… and I go down to seek for money
from those who keep it from those who have loaned it
I ask for it timidly … but in the midst of the crowd
[“A’ Pittima”, Fabrizio de André]
The job of pittima couldn’t exist in the present-day US: the Fair Debt Collection Practices Act makes it illegal for debt collectors to “reveal or discuss the nature of debts with third parties”.
A new website called The Debtor List seems to have brought the idea of the pittima into the twenty-first century, providing a form for creditors to enter details about alleged debtors and then distribute their claims via ‘social media tools’ (the site’s tagline is “Use the Power of Social Media to Get Paid”).
Some commentators have questioned the legality of the scheme, although as the Debtor List is not itself a collection agency the Fair Debt Collection Practices Act may not apply. What’s interesting, however, is the transition that it implies. The historical pittima would accost his victims in the places where other people would see them - at church, or in the market, or the street. The modern variant, represented by sites like the Debtor List or reputation sites, does its work instead in the virtual spaces of Facebook and Twitter.
Our ‘public places’ are no longer physical spaces.
In Neal Stephenson’s science-fiction novel, “Snow Crash”, people can buy wearable computers that give them permanent access to the Metaverse - the gigantic shared virtual reality in which much of the novel’s action takes place. It’s something of a fringe choice: people who choose to festoon themselves with electronics just so that they can stay connected at all times are known contemptuously as ‘gargoyles’.
The novel’s protagonist (whose name is Hiro Protagonist) eventually succumbs to the lure of technology, leading to a scene in which he’s chatting with his friend Y.T. and offers to research something in the Metaverse (or, as we’d say, ‘on the web’). Y.T. hears traffic noise in the background, and realizes that he can’t be at his desk. “Oh my god, you didn’t”, she says. Poor Hiro, finally unmasked as a complete nerd, can only offer the ultimate nerd defense. “Yes”, he says weakly, “but it’s really cool …”
Enter Google Glass, which, incidentally, gives me a strong sense of déjà vu. It’s a slicker version of the wearable computers that my friend Rehmi used to build. It’s also a lot like some of the blue-sky concepts we kicked around when I was in a wearables group at Sony CSL Paris. Ten years on, miniaturization and an infrastructure that includes ubiquitous 4G mean that those ideas now have a chance of becoming a reality.
It remains to be seen whether people who are lucky enough not to have to wear glasses all the time will want to turn themselves into gargoyles just so that they can order coffee at Starbucks by blinking, or order concert tickets by twitching their nose. But even if the device never has real mass appeal, there are bound to be some people who’ll accept the inconvenience and the social humiliation because, after all, “it’s really cool” …
Responsive design, the trendy new way of getting your websites to work unaltered across a range of devices, has been getting a lot of attention lately. I’m a big fan of responsive design: I do a certain amount of web-browsing on an iPod Touch, and - for all Apple’s pinch-to-zoom wizardry - reading non-responsive pages can be painful.
Responsive design is pretty new, but its adherents have figured out a good set of basic techniques which work well in most cases. Unfortunately, in the world of the web, no problem stays solved for long. Just when everything looked rosy, Apple came along and messed everything up for everyone.
They did this, not in the way that Internet Explorer used to - by being too lame - but by being too good. The problem is Apple’s new high-resolution Retina display, as shipped in their latest piece of so-sexy-I-want-to-lick-it-all-over consumer electronics, the new iPad. The problem, basically, is this: the Retina display is so good that images sized for a standard-resolution display look ugly. Now, the responsive folks want a way to serve up clean, crisp hi-res images to the iPad and its ilk, and boring, stodgy old lo-res images to those poor souls still stuck with 72dpi displays.
Kelly proposes instead that the browser itself should be capable of sending HEAD requests for the possible variants of each image. If the page includes an ‘image.png’, and the browser is running on a device with a Retina display, it should send a HEAD request for both ‘image.png’ and for ‘image-2x.png’. Depending on what it gets back, it should then decide which image to load into the page. In this way, you avoid wasting bandwidth by downloading a lo-res version that you’re only going to throw away.
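To make the mechanics concrete, here’s a rough sketch of that probing logic in TypeScript, written as page script rather than the native browser behavior Kelly has in mind (the function name is invented; the ‘-2x’ suffix follows Kelly’s naming convention):

```typescript
// Decide which variant of an image to request, by probing the server
// with a HEAD request for the hi-res name.
async function pickBestSource(baseUrl: string): Promise<string> {
  // Standard-resolution display: no probe needed, just use the base image.
  if (window.devicePixelRatio <= 1) {
    return baseUrl;
  }
  // image.png -> image-2x.png, per the naming convention above.
  const hiResUrl = baseUrl.replace(/(\.[a-z]+)$/i, "-2x$1");
  try {
    const probe = await fetch(hiResUrl, { method: "HEAD" });
    return probe.ok ? hiResUrl : baseUrl; // a 404 means no hi-res version exists
  } catch {
    return baseUrl; // network failure: fall back to the standard image
  }
}
```

A native implementation would run this logic before fetching anything, but the shape of the exchange - probe first, then download - is the same.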
There are some problems, however. First, the number of requests is climbing: for each image, you’re sending two - or more - HEAD requests, followed by a GET once you’ve decided which image you’re actually going to use. In these days of Keep-Alive connections, that may not be such a big deal, but it seems inelegant. If nothing else, the server’s error log files are going to explode with 404s. If the site’s ‘not found’ handling involves any significant processing, the server will also take a noticeable hit.
Then there’s the question of layout. Some browsers will hold off on laying out the page until they know how big an image is going to be. Yes, we should all be specifying our image sizes in the <img> tag or, better, in the CSS. Not everyone does, though. So the browser either has to wait until all the HEAD requests and the final GET request have returned, or it has to start laying things out and then do it all over again (producing an ugly page redraw) once the final results are in. Browsers have to do this now anyway, but with more requests in the pipe, the wait is longer.
How can we avoid this awkward fumbling with HEAD requests, with the user agent essentially feeling the web server up to see what it’s got? My suggestion would be: don’t play a guessing game; if you want something, come straight out and ask for it.
The Accept header is one of the lesser-used features of the HTTP standard. It allows a browser to say “I can handle these particular formats, and here’s my order of preference”. In theory, the web server could inspect the list and return the most appropriate content. In practice, almost no servers do anything useful with the Accept header. Many do, however, do something very useful with the related Accept-Encoding header: if a browser uses Accept-Encoding to say that it can accept compressed content, the server can give back the web page in compressed form, for substantial savings all round.
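For the curious, here’s what “doing something useful with Accept-Encoding” looks like in practice - a toy Node server sketched in TypeScript, with the port and response body invented for the example:

```typescript
// Toy content negotiation: gzip the response only for clients whose
// Accept-Encoding header says they can handle it.
import * as http from "node:http";
import * as zlib from "node:zlib";

http.createServer((req, res) => {
  const body = "<html><body>Hello, negotiated world.</body></html>";
  const acceptEncoding = String(req.headers["accept-encoding"] ?? "");

  if (acceptEncoding.includes("gzip")) {
    // The client advertised gzip support: compress, and tell caches
    // that this response varies by the Accept-Encoding request header.
    res.writeHead(200, {
      "Content-Type": "text/html",
      "Content-Encoding": "gzip",
      "Vary": "Accept-Encoding",
    });
    res.end(zlib.gzipSync(body));
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(body);
  }
}).listen(8080);
```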
In principle, Accept and Accept-Encoding represent, for me, a better way to approach this problem. The browser says explicitly what it would like; the server, if it can, honors the browser’s preferences. There are no extra HEAD requests and no waiting: the browser’s desires are expressed in the request, and the server comes straight back with the content the browser wants.
So a browser could send, for example, a header called Accept-Resolution, with values such as ‘low’, ‘standard’, ‘high’, depending on the resolution of its screen (and the speed of its connection). The server then hands back the appropriate version of the image.
You could probably implement this today, using mod_rewrite rules. You could even - if you don’t mind browser-sniffing - do it on the server side without waiting for someone to implement an Accept-Resolution header.
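As a sketch of what the server side might look like - in a small Node (TypeScript) handler rather than actual mod_rewrite rules, simply because it’s easier to show; the Accept-Resolution header is the hypothetical one proposed above, and the directory layout and ‘-2x’ naming are assumptions:

```typescript
// Serve the hi-res variant of an image when the client asks for it
// via the (proposed) Accept-Resolution header and the file exists.
import * as http from "node:http";
import * as fs from "node:fs";
import * as path from "node:path";

const IMAGE_DIR = "./images"; // assumed layout: image.png next to image-2x.png

http.createServer((req, res) => {
  const wantsHiRes = req.headers["accept-resolution"] === "high";
  const base = path.join(IMAGE_DIR, path.basename(req.url ?? "/"));
  const hiRes = base.replace(/(\.[a-z]+)$/i, "-2x$1");

  // Fall back to the standard version if no hi-res file exists.
  const file = wantsHiRes && fs.existsSync(hiRes) ? hiRes : base;
  if (!fs.existsSync(file)) {
    res.statusCode = 404;
    res.end("not found");
    return;
  }
  // Hint to caches that responses differ by this header (more on proxies below).
  res.writeHead(200, { "Vary": "Accept-Resolution" });
  fs.createReadStream(file).pipe(res);
}).listen(8080);
```

You could exercise a server like this with curl -H "Accept-Resolution: high" http://localhost:8080/image.png and compare what comes back with and without the header.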
There’s one big gotcha, and that’s caching proxies. If Joe Desktop requests the standard image for his PC and Jane iPad then pulls the same URL through the same proxy, she gets the cached lo-res version and sees the jaggies on her Retina display. If Jane gets there first, Joe gets served up a gigantic image he can’t use (and if Joe Desktop is actually Joe Smartphone on a slow 3G connection, that’s going to hurt).
So either proxies will need to handle the ‘Accept-Resolution’ header - HTTP’s existing Vary mechanism is the obvious hook: if servers send ‘Vary: Accept-Resolution’, any cache that honors Vary will keep the variants separate - or images will have to be served with no-cache directives. I’d like to rule out the second ‘solution’ for reasons of economy, and suggest that proxies will need to be made resolution-aware.
Are there other options? I can think of just one more. Instead of having the browser say what it would like, the server can tell it what it can have, using a response header. When the agent requests the initial web page, the server returns a header that says, in effect, ‘hi-res spoken here’. That response would be an invitation to the browser to dynamically rewrite each image request to specify a hi-res version instead of the base version (we can use Kelly’s proposed naming conventions). Before the agent even knows which images it’s going to need to download, the server has already told it what it’s allowed to ask for. If there’s no hi-res version available for any particular image, the server just returns a standard-res version. Finally, as a not-insignificant bonus, the different resolution versions have different names, so we get around the whole caching problem.
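Here’s a rough simulation of that flow as page script, again in TypeScript. A real user agent would rewrite the image URLs before issuing any requests; a script can only do it after the fact, and the ‘X-HiRes-Available’ header name is pure invention - but it shows the shape of the exchange:

```typescript
// Simulate the server-hint flow: ask the server (via a HEAD request for
// the page itself) whether it speaks hi-res, then rewrite image URLs.
async function upgradeImagesIfOffered(): Promise<void> {
  const probe = await fetch(window.location.href, { method: "HEAD" });
  const hiResSpoken = probe.headers.get("X-HiRes-Available") === "yes";
  if (!hiResSpoken || window.devicePixelRatio <= 1) {
    return;
  }
  for (const img of Array.from(document.images)) {
    // image.png -> image-2x.png; the server has promised to answer this
    // name with standard-res content if no hi-res file actually exists.
    img.src = img.src.replace(/(\.[a-z]+)$/i, "-2x$1");
  }
}
```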