Journal tags: hype

In the margins

John Willshire has been pondering web marginalia AKA stuff you put in your sidebar.

He has a particular fondness for the good ol’ blogroll. I’ve still got my analogue equivalent on my homepage—the bedroll. It’s a list of links to people who’ve stayed over. Maybe I should also have a regular blogroll, but I suspect it would just be a reproduction of feeds I’m subscribed to.

Then there’s marginalia at the level of a blog post, rather than a whole blog. Kevin Marks points out that this is something that Vannevar Bush described his theoretical memex doing—a device I was just talking about. Kevin created a proof of concept showing outbound and inbound links.

Outbound links are annotated versions of the A elements in a blog post. Inbound links are webmentions (which should now include this post of mine).
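For anyone curious about the plumbing, the Webmention spec boils down to two steps: discover the receiver’s webmention endpoint, then POST the source and target URLs to it. Here’s a rough server-side sketch of sending one—the URLs are placeholders, and the endpoint discovery is simplified to checking the HTTP Link header only (a full implementation would also look for link and a elements in the HTML):

// A minimal sketch of sending a webmention, per the W3C Webmention spec.
// The URLs are placeholders; endpoint discovery is deliberately simplified.
async function sendWebmention(source, target) {
  // Discover the endpoint from a Link header with rel="webmention".
  const response = await fetch(target, { method: 'HEAD' });
  const linkHeader = response.headers.get('Link') || '';
  const match = linkHeader.match(/<([^>]+)>;\s*rel="?webmention"?/);
  if (!match) {
    throw new Error('No webmention endpoint found');
  }
  const endpoint = new URL(match[1], target).href;
  // Notify the endpoint with a form-encoded POST of source and target.
  return fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ source, target })
  });
}

sendWebmention('https://example.com/my-reply', 'https://example.org/original-post');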

Kevin has those links in the margins on either side of the blog post. I’ve also got links that go with my blog posts, but they’re displayed linearly:

  1. the post itself,
  2. any responses (webmentions),
  3. related posts, something I only recently added, and
  4. posts from the same day further back in time.

Do they still count as marginalia when they’re presented vertically rather than alongside? For mobile devices, I’m not sure there’s any alternative.

A memex in every web browser

When Matthew Modine’s character first shows up in Christopher Nolan’s Oppenheimer, I figured the rest of the cinema audience wouldn’t have appreciated me shouting out “VANNEVAR BUSH IN THE HOUSE!” so I screamed it on the inside.

The Manhattan Project was not his only claim to fame or infamy. When it comes to the world we now live in, Bush’s idea of the memex has been almost equally influential. His article As We May Think became a touchstone for Douglas Engelbart and later Tim Berners-Lee.

But as Matt Thompson points out:

…the device he describes does not resemble the internet or anything I’ve ever found on it.

Then he says:

What Bush was describing sounds to me like what you might get if you turned a browser history — the most neglected piece of the software — into a robust and fully featured machine of its own. It would help you map the path you charted through a web of knowledge, refine those maps, order them, and share them

Yes! This!! I 100% agree with the description of browser history as “the most neglected piece of the software.” While I wouldn’t go as far as Chris when he says web browsers kind of suck, I’m kind of amazed that there hasn’t been more innovation and competition in this space.

If anything, we’ve outsourced the management of our browsing history to services like Delicious and Pinboard, or to tools like Obsidian and Roam Research. Heck, the links section of my website is my attempt to manage and annotate my own associative trails.

Imagine if that were baked right into a web browser. Then imagine how beautiful such a rich source of data might look.

Like Matt says:

I don’t think anything like this exists. So Bush’s essay still transfixes me.

Nailspotting

I’m sure you’ve heard the law of the instrument: when all you have is a hammer, everything looks like a nail.

There’s another side to it. If you’re selling hammers, you’ll depict a world full of nails.

Recent hammers include cryptobollocks and virtual reality. It wasn’t enough for blockchains and the metaverse to be potentially useful for some situations; they staked their reputations on being utterly transformative, disrupting absolutely every facet of life.

This kind of hype is a terrible strategy in the long-term. But if you can convince enough people in the short term, you can make a killing on the stock market. In truth, the technology itself is superfluous. It’s the hype that matters. And if the hype is over-inflated enough, you can even get your critics to do your work for you, broadcasting their fears about these supposedly world-changing technologies.

You’d think we’d learn. If an industry cries wolf enough times, surely we’d become less trusting of extraordinary claims. But the tech industry continues to cry wolf—or rather, “hammer!”—at regular intervals.

The latest hammer is machine learning, usually—incorrectly—referred to as Artificial Intelligence. What makes this hype cycle particularly infuriating is that there are genuine use cases. There are some nails for this hammer. They’re just not as plentiful as the breathless hype—both positive and negative—would have you believe.

When I was hosting the DiBi conference last week, there was a little section on generative “AI” tools. Matt Garbutt covered the visual side, demoing tools like Midjourney. Scott Salisbury covered the text side, showing how you can generate code. Afterwards we had a panel discussion.

During the panel I asked some fairly straightforward questions that nobody could answer. Who owns the input (the data used by these generative tools)? Who owns the output?

On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.

Scott explicitly warned people off using generative tools for production code. His advice was to stick to side projects for now.

Matt took a closer look at where these tools could fit into your day-to-day design work. Mostly it was pretty sensible, except when he suggested that there could be any merit to using these tools as a replacement for user testing. That’s a terrible idea. A classic hammer/nail mismatch.

I think I moderated the panel reasonably well, but I have one regret. I wish I had first read Baldur Bjarnason’s new book, The Intelligence Illusion. I started reading it on the train journey back from Edinburgh but it would have been perfect for the panel.

The Intelligence Illusion is very level-headed. It is neither pro- nor anti-AI. Instead it takes a pragmatic look at both the benefits and the risks of using these tools in your business.

It has excellent advice for spotting genuine nails. For example:

Generative AI has impressive capabilities for converting and modifying seemingly unstructured data, such as prose, images, and audio. Using these tools for this purpose has less copyright risk, fewer legal risks, and is less error prone than using it to generate original output.

Think about transcripts of videos or podcasts—an excellent use of this technology. As Baldur puts it:

The safest and, probably, the most productive way to use generative AI is to not use it as generative AI. Instead, use it to explain, convert, or modify.

He also says:

Prefer internal tools over externally-facing chatbots.

That chimes with what I’ve been seeing. The most interesting uses of this technology that I’ve seen involve a constrained dataset. Like the way Luke trained a language model on his own content to create a useful chat interface.

Anyway, The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration on Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

Brandolini’s blockchain

I’ve already written about how much I enjoyed hosting Leading Design San Francisco last week.

All the speakers were terrific. Lola’s talk was particularly …um, interesting:

In this talk, Lola will share her adventures in the world of blockchain, the hostility she experienced in her first go-round in 2018, and why she’s chosen to head back to a technology that is going through its largest reputational and social crisis to date.

Wait …I was supposed to stand on stage and introduce a talk that was (at least partly) about blockchain? I have opinions.

As it turned out, Lola warned me that I’d be making an appearance in her talk. She was going to quote that blog post. Before the talk, I asked her how obnoxious I could be about blockchain in her intro. She told me to bring it.

So in the introduction, I deployed all the sarcasm I had in me and said:

Listen, we designers have a tendency to be over-critical of things sometimes. There are all these ideas that we dismiss: phrenology, homeopathy, flat-earthism …blockchain. Haters gonna hate.

I remember somebody asking online a while back, “Why the hate for web3?” And someone I know responded by saying “We hate it because we understand it.” I think there’s a lot of truth to that.

But look, just because blockchains are powering crypto ponzi schemes and N F fucking Ts, it’s worth remembering that it’s also simply a technology. It’s a technological solution in search of a problem.

To be fair, it’s still early days. After all, it’s only been over a decade now.

It’s like the law of the instrument says: when all you have is a hammer, everything looks like a nail. Blockchain is like that. Except the hammer is also made of glass.

Anyway, Lola is going to defend the indefensible and talk about blockchain. One thing to keep in mind is this: remember when everyone was talking about “The Cloud”? And then it turned out that you could substitute the phrase “someone else’s server” for “The Cloud?” Well, every time you hear Lola say the word “blockchain”, I’d like you to mentally substitute the phrase “multiple copies of a spreadsheet.”

Please give an open mind and a warm welcome to Lola Oyelayo Pearson!

I got some laughs. I also got lots of gasps and pearl-clutching, as though I were saying something taboo. Welcome to San Francisco.

Lola gave as good as she got. I got a roasting in her talk.

And just to clarify, Lola and I are friends—this was a consensual smackdown.

There was a very serious point to Lola’s talk. Cryptobollocks and other blockchain-powered schemes have historically been very bro-y, and exploitative of non-bro communities. Lola wants to fight that trend.

I get it. But it reminds me a bit of the justifications you hear from people who go to work at Facebook claiming that they can do more good from the inside. Whatever helps you sleep at night.

The crux of Lola’s belief is this: blockchain technology is inevitable, therefore it is incumbent on us as ethical designers to ensure that the technology is deployed in a way that empowers people instead of exploiting them.

But I take issue with the premise. Blockchain technology is not inevitable. That’s the worst kind of technological determinism. It’s defeatist. It’s a depressing view of “progress” driven not by people, but by technological forces beyond our control.

I refuse to accept that anti-humanist deterministic view.

In any case, for technological determinism to have any validity, there needs to be something to it. At least virtual reality and machine learning are based on some actual technologies. In the case of cryptobollocks, there is no there there. There is nothing except the hype, which is why you’ll see blockchain enthusiasts trying to ride the coattails of trending technologies in a logical fallacy that goes something like this:

  1. There are technologies that will be really big in the future,
  2. blockchain is a technology, therefore
  3. blockchain will be really big in the future.

Blockchain is bullshit. It isn’t even very clever bullshit. And it certainly isn’t inevitable.

Today, the distant future

It’s a bit of a cliché to talk about living in the future. It’s also a bit pointless. After all, any moment after the big bang is a future when viewed from any point in time before it.

Still, it’s kind of fun when a sci-fi date rolls around. Like in 2015 when we reached the time depicted in Back To The Future 2, or in 2019 when we reached the time of Blade Runner.

In 2022 we are living in the future of web standards. Again, technically, we’re always living in the future of any past discussion of web standards, but this year is significant …in a very insignificant way.

It all goes back to 2008 and an interview with Hixie, editor of the HTML5 spec at the WHATWG at the time. In it, he mentioned the date 2022 as the milestone for having two completely interoperable implementations.

The far more important—and ambitious—date was 2012, when HTML5 was supposed to become a Candidate Recommendation, which is standards-speak for done’n’dusted.

But the mere mention of the year 2022 back in the year 2008 was too much for some people. Jeff Croft, for example, completely lost his shit (Jeff had a habit of posting angry rants and then denying that he was angry or ranty, but merely having a bit of fun).

The whole thing was a big misunderstanding and soon irrelevant: talk of 2022 was dropped from HTML5 discussions. But for a while there, it was fascinating to see web designers and developers contemplate a year that seemed ludicrously futuristic. Jeff wrote:

God knows where I’ll be in 13 years. Quite frankly, I’ll be pretty fucking disappointed in myself (and our entire industry) if I’m writing HTML in 13 years.

That always struck me as odd. If I thought like that, I’d wonder what the point would be in making anything on the web to begin with (bear in mind that both my own personal website and The Session are now entering their third decade of life).

I had a different reaction to Jeff, as I wrote in 2010:

Many web developers were disgusted that such a seemingly far-off date was even being mentioned. My reaction was the opposite. I began to pay attention to HTML5.

But Jeff was far from alone. Scott Gilbertson wrote an angry article on Webmonkey:

If you’re thinking that planning how the web will look and work 13 years from now is a little bit ridiculous, you’re not alone.

Even if your 2022 ronc-o-matic web-enabled toaster (It slices! It dices! It browses! It arouses!) does ship with Firefox v22.3, will HTML still be the dominant language of web? Given that no one can really answer that question, does it make sense to propose a standard so far in the future?

(I’m re-reading that article in the current version of Firefox: 95.0.2.)

Brian Veloso wrote on his site:

Two-thousand-twenty-two. That’s 14 years from now. Can any of us think that far? Wouldn’t our robot overlords, whether you welcome them or not, have taken over by then? Will the internet even matter then?

From the comments on Jeff’s post, there’s Corey Dutson:

2022: God knows what the Internet will look like at that point. Will we even have websites?

Dan Rubin, who has indeed successfully moved from web work to photography, wrote:

I certainly don’t intend to be doing “web work” by that time. I’m very curious to see where the web actually is in 14 years, though I can’t imagine that HTML5 will even get that far; it’ll all be obsolete before 2022.

Joshua Works made a prediction that’s worryingly close to reality:

I’ll be surprised if website-as-HTML is still the preferred method for moving around the tons of data we create, especially in the manner that could have been predicted in 2003 or even today. Hell, iPods will be over 20 years old by then and if everything’s not run as an iPhone App, then something went wrong.

Someone with the moniker Grand Caveman wrote:

In 2022 I’ll be 34, and hopefully the internet will be obsolete by then.

Perhaps the most level-headed observation came from Jonny Axelsson:

The world in 2022 will be pretty much like the world in 2009.

The world in 2009 is pretty much like 1996 which was pretty much like the world in 1983 which was pretty much like the world in 1970. Some changes are fairly sudden, others are slow, some are dramatic, others subtle, but as a whole “pretty much the same” covers it.

The Web in 2022 will not be dramatically different from the Web in 2009. It will be less hot and it will be less cool. The Web is a project, and as it succeeds it will fade out of our attention and into the background. We don’t care about things when they work.

Now that’s a sensible perspective!

So who else is looking forward to seeing what the World Wide Web is like in 2036?

I must remember to write a blog post then and link back to this one. I have no intention of trying to predict the future, but I’m willing to bet that hyperlinks will still be around in 14 years.

Speaking of long bets…

Hope

My last long-distance trip before we were all grounded by The Situation was to San Francisco at the end of 2019. I attended Indie Web Camp while I was there, which gave me the opportunity to add a little something to my website: an “on this day” page.

I’m glad I did. While it’s probably of little interest to anyone else, I enjoy scrolling back to see how the same date unfolded over the years.

’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.

But when I scroll down through my “on this day” page, it also feels like descending deeper into the dark waters of linkrot. For each year back in time, the probability of a link still working decreases until there’s nothing but decay.

Sadly this is nothing new. I’ve been lamenting the state of digital preservation for years now. More recently Jonathan Zittrain penned an article in The Atlantic on the topic:

Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone.

In one sense, linkrot is the price we pay for the web’s particular system of hypertext. We don’t have two-way linking, which means there’s no centralised repository of links which would be prohibitively complex to maintain. So when you want to link to something on the web, you just do it. An a element with an href attribute. That’s it. You don’t need to check with the owner of the resource you’re linking to. You don’t need to check with anyone. You have complete freedom to link to any URL you want to.

But it’s that same simple system that makes the act of linking a gamble. If the URL you’ve linked to goes away, you’ll have no way of knowing.

As I scroll down my “on this day” page, I come across more and more dead links that have been snapped off from the fabric of the web.

If I stop and think about it, it can get quite dispiriting. Why bother making hyperlinks at all? It’s only a matter of time until those links break.

And yet I still keep linking. I still keep pointing to things and saying “Check this out!” even though I know that over a long enough timescale, there’s little chance that the link will hold.

In a sense, every hyperlink on the World Wide Web is a little act of hope. Even though I know that when I link to something, it probably won’t last, I still harbour that hope.

If hyperlinks are built on hope, and the web is made of hyperlinks, then in a way, the World Wide Web is quite literally made out of hope.

I like that.

Connections

Fourteen years ago, I gave a talk at the Reboot conference in Copenhagen. It was called In Praise of the Hyperlink. For the most part, it was a gushing love letter to hypertext, but it also included this observation:

For a conspiracy theorist, there can be no better tool than a piece of technology that allows you to arbitrarily connect information. That tool now exists. It’s called the World Wide Web and it was invented by Sir Tim Berners-Lee.

You know those “crazy walls” that are such a common trope in TV shows and movies? The detectives enter the lair of the unhinged villain and discover an overwhelming wall that’s like looking at the inside of that person’s head. It’s not the stuff itself that’s unnerving; it’s the red thread that connects the stuff.

Red thread. Blue hyperlinks.

When I spoke about the World Wide Web, hypertext, apophenia, and conspiracy theorists back in 2006, conspiracy theories could still be considered mostly harmless. It was the domain of Dan Brown potboilers and UFO enthusiasts with posters on their walls declaring “I Want To Believe”. But even back then, 9/11 truthers were demonstrating a darker side to the fun and games.

There’s always been a gamification angle to conspiracy theories. Players are rewarded with the same dopamine hits for “doing the research” and connecting unrelated topics. Now that’s been weaponised into QAnon.

In his newsletter, Dan Hon wrote QAnon looks like an alternate reality game. You remember ARGs? The kind of designed experience where people had to cooperate in order to solve the puzzle.

Being a part of QAnon involves doing a lot of independent research. You can imagine the onboarding experience in terms of being exposed to some new phrases, Googling those phrases (which are specifically coded enough to lead to certain websites, and certain information). Finding something out, doing that independent research will give you a dopamine hit. You’ve discovered something, all by yourself. You’ve achieved something. You get to tell your friends about what you’ve discovered because now you know a secret that other people don’t. You’ve done something smart.

We saw this in the games we designed. Players love to be the first person to do something. They love even more to tell everyone else about it. It’s like Crossfit. 

Dan’s brother Adrian also wrote about this connection: What ARGs Can Teach Us About QAnon:

There is a vast amount of information online, and sometimes it is possible to solve “mysteries”, which makes it hard to criticise people for trying, especially when it comes to stopping perceived injustices. But it’s the sheer volume of information online that makes it so easy and so tempting and so fun to draw spurious connections.

This is something that Molly Sauter has been studying for years now, like in her essay The Apophenic Machine:

Humans are storytellers, pattern-spotters, metaphor-makers. When these instincts run away with us, when we impose patterns or relationships on otherwise unrelated things, we call it apophenia. When we create these connections online, we call it the internet, the web circling back to itself again and again. The internet is an apophenic machine.

I remember interviewing Lauren Beukes back in 2012 about her forthcoming book about a time-travelling serial killer:

Me: And you’ve written a time-travel book that’s set entirely in the past.

Lauren: Yes. The book ends in 1993 and that’s because I did not want to have to deal with Kirby the heroine getting some access to CCTV cameras and uploading the footage to 4chan and having them solve the mystery in four minutes flat.

By the way, I noticed something interesting about the methodology behind conspiracy theories—particularly the open-ended never-ending miasma of something like QAnon. It’s no surprise that the methodology is basically an inversion of the scientific method. It’s the Texas sharpshooter fallacy writ large. Well, you know the way that I’m always going on about design principles and the way that good design principles should be reversible? Conspiracy theories take universal principles and invert them. Take Occam’s razor:

Do not multiply entities without necessity.

That’s what they want you to think! Wake up, sheeple! The success of something like QAnon—or a well-designed ARG—depends on a mindset that rigorously inverts Occam’s razor:

Multiply entities without necessity!

That’s always been the logic of conspiracy theories from faked moon landings to crop circles. I remember well when the circlemakers came clean and showed exactly how they had been making their beautiful art. Conspiracy theorists—just like cultists—don’t pack up and go home in the face of evidence. They double down. There was something almost pitiable about the way the crop circle UFO crowd were bending over backwards to reject proof and instead apply the inversion of Occam’s razor to come up with even more outlandish explanations to encompass the circlemakers’ confession.

Anyway, I recommend reading what Dan and Adrian have written about the shared psychology of QAnon and Alternate Reality Games, not least because they also suggest some potential course corrections.

I think the best way to fight QAnon, at its roots, is with a robust social safety net program. This not-a-game is being played out of fear, out of a lack of safety, and it’s meeting peoples’ needs in a collectively, societally destructive way.

I want to add one more red thread to this crazy wall. There’s a book about conspiracy theories that has become more and more relevant over time. It’s also wonderfully entertaining. Here’s my recommendation from that Reboot presentation in 2006:

For a real hot-tub of conspiracy theory pleasure, nothing beats Foucault’s Pendulum by Umberto Eco.

…luck rewarded us, because, wanting connections, we found connections — always, everywhere, and between everything. The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Web talk

At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted a talk we’d been working on.

The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:

How We Built The World Wide Web In Five Days

This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.

At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.

But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.

I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.

There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.

We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:

It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.

In the meantime, you can read the text, watch the video, or look at the slides (but the slides really don’t make much sense in isolation).

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see how there was an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.

Et voilà!

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
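A bookmarklet like that is just a javascript: URL. Here’s a rough sketch of the idea—the /links/new path and its query parameters are hypothetical placeholders rather than my site’s actual posting endpoint:

// A minimal sketch of a link-posting bookmarklet: grab the current page's
// title and URL, then open a posting form in a new window.
// The posting URL and its parameters are made up for illustration.
(function () {
  var url = encodeURIComponent(document.location.href);
  var title = encodeURIComponent(document.title);
  window.open(
    'https://example.com/links/new?url=' + url + '&title=' + title,
    'postlink',
    'width=600,height=500'
  );
})();

Squash that onto one line, put javascript: in front of it, and save it as a bookmark.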

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface:

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to:

Well, each one of those tags also has a corresponding RSS feed:

…and so on.

That means you can subscribe to just the links tagged with something you’re interested in. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to:

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is:

The corresponding RSS feed is:

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.

Meanwhile, I’ll be linking, linking, linking…

An associative trail

Every now and then, I like to revisit Vannevar Bush’s classic article from the July 1945 edition of the Atlantic Monthly called As We May Think in which he describes a theoretical machine called the memex.

A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

1945! Apart from its analogue rather than digital nature, it’s a remarkably prescient vision. In particular, there’s the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Many decades later, Anne Washington ponders what a legal memex might look like:

My legal Memex builds a network of the people and laws available in the public records of politicians and organizations. The infrastructure for this vision relies on open data, free access to law, and instantaneous availability.

As John Sheridan from the UK’s National Archives points out, hypertext is the perfect medium for laws:

Despite the drafter’s best efforts to create a narrative structure that tells a story through the flow of provisions, legislation is intrinsically non-linear content. It positively lends itself to a hypertext based approach. The need for legislation to escape the confines of the printed form predates all the major innovators and innovations in hypertext, from Vannevar Bush’s vision in “As We May Think”, to Ted Nelson’s coining of the term “hypertext”, through to Tim Berners-Lee’s breakthrough world wide web. I like to think that Nelson’s concept of transclusion was foreshadowed several decades earlier by the textual amendment (where one Act explicitly alters – inserts, omits or amends – the text of another Act, an approach introduced to UK legislation at the beginning of the 20th century).

That’s from a piece called Deeply Intertwingled Laws. The verb “to intertwingle” was another one of Ted Nelson’s neologisms.

There’s an associative trail from Vannevar Bush to Ted Nelson that takes some other interesting turns…

Picture a new American naval recruit in 1945, getting ready to ship out to the Pacific to fight against the Japanese. Just as the ship is leaving the harbour, word comes through that the war is over. And so instead of fighting across the islands of the Pacific, this young man finds himself in a hut on the Philippines, reading whatever is to hand. There’s a copy of The Atlantic Monthly, the one with an article called As We May Think. The sailor was Douglas Engelbart, and a few years later when he was deciding how he wanted to spend the rest of his life, that article led him to pursue the goal of augmenting human intellect. He gave the mother of all demos, featuring NLS, a working hypermedia system.

Later, thanks to Bill Atkinson, we’d get another system called HyperCard. It was advertised with the motto Freedom to Associate, in an advertising campaign that directly referenced Vannevar Bush.

And now I’m using the World Wide Web, a hypermedia system that takes in the whole planet, to create an associative trail. In this post, I’m linking (without asking anyone for permission) to six different sources, and in doing so, I’m creating a unique associative trail. And because this post has a URL (that won’t change), you are free to take it and make it part of your own associative trail on your digital memex.

Talking about hypertext

#CSSday starts off with a great history lesson of our industry by @adactio

I’ve just published a transcript of the talk I gave at the HTML Special that preceded CSS Day a couple of weeks back. I’ve also recorded an audio version for your huffduffing pleasure.

It’s not like the usual talks I give. The subject matter was assigned to me, Mission Impossible style. PPK wanted each speaker to give an entire talk on just one HTML element. He offered me the best element of them all: the A element.

There were a few different directions I could’ve taken it. I could’ve tried to make it practical, but I quickly dismissed that idea. Instead I went in the completely opposite direction, making it as pretentious as possible. I figured a talk about hypertext could afford to be winding and circuitous, building on some of the ideas I wrote about in my piece for The Manual a few years back. It’s quite self-indulgent of me, but I used it as an opportunity to geek out about some of my favourite things; from Borges, Babbage, and Bletchley to Leibniz, Lovelace, and Licklider.

I wouldn’t usually write out an entire talk word-for-word in advance, but somehow it felt right for this one. In fact, my talk preparation this time ‘round was very similar to the process Charlotte recently wrote about:

  1. Get everything out of my head and onto a mind map.
  2. Write chunks of content in short bursts—this was when I was buddying up with Paul.
  3. Put together a slide deck of visuals to support the narrative.
  4. Practice delivering the talk so I don’t look like I’m just reading off a screen.

It takes me a long time to prepare talks. As the deadline for this one approached, I was getting quite panicked. It was touch and go there for a while, but I managed to get it done in time.

I’m pleased with how it turned out. On the day, I had fun delivering it. People seemed to like it too, which was gratifying.

Although with this kind of talk, it was inevitable that I wouldn’t be able to please everyone.

I guess this talk was a one-off affair. That said, if you’re putting on an event and you think this subject matter would be appropriate, let me know. I’d be more than happy to deliver it again.

Taking an online book offline

Application Cache is—as Jake so infamously described—not a good API. It was specced and shipped before developers had a chance to figure out what they really needed, and so AppCache turned out to be frustrating at best and downright dangerous in some situations. Its over-zealous caching combined with its byzantine cache invalidation ensured it was never going to become a mainstream technology.

There are very few use-cases for AppCache, but I think I hit upon one of them. Six years ago, A Book Apart published HTML5 For Web Designers. A year and a half later, I put the book online. The contents are never going to change. There’s a second edition of the book out now but if you want to read all the extra bits that Rachel added, you’re going to have to buy the book. The website for the original book is static and unchanging. That’s what made it such a good candidate for using AppCache. I could just set it and forget it.

Except that’s no longer true. AppCache is being deprecated and browsers are starting to withdraw support. Chrome is already making sure that AppCache—like geolocation—no longer works on sites that aren’t served over HTTPS. That’s for the best. In retrospect, those APIs should never have been allowed over unsecured HTTP.

I mentioned that I spent the weekend switching all my book websites over to HTTPS, so AppCache should continue to work …for now. It’s only a matter of time before AppCache is removed completely from many of the browsers that currently support it.

Seeing as I’ve got the HTML5 For Web Designers site running on HTTPS now, I might as well go all out and make it a progressive web app. By far the biggest barrier to making a progressive web app is that first step of setting up HTTPS. It’s gotten cheaper—thanks to Let’s Encrypt Certbot—but it still involves mucking around in the command line with root access; I never wanted to become a sysadmin. But once that’s finally all set up, the other technological building blocks—a Service Worker and a manifest file—are relatively easy.
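Registering the Service Worker, for instance, only takes a couple of lines of JavaScript with a feature-detection guard (the script name here is just an example, not necessarily what the site uses):

// Register a Service Worker if the browser supports it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}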

In this case, the Service Worker is using a straightforward bit of logic:

  • On installation, cache absolutely everything: HTML, CSS, images.
  • When anything is requested, grab it from the cache.
  • If it isn’t in the cache, try the network.
  • If the network doesn’t work, show an offline page (or image).
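Translated into code, that logic looks something like this. It’s a cut-down sketch rather than the site’s actual Service Worker—the cache name and file list are made up for illustration:

// A cut-down sketch of that cache-first logic.
// The cache name and the list of files are placeholders.
const cacheName = 'html5forwebdesigners-v1';
const filesToCache = ['/', '/index.html', '/styles.css', '/offline.html'];

// On installation, cache absolutely everything.
addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(cacheName).then(function (cache) {
      return cache.addAll(filesToCache);
    })
  );
});

// When anything is requested, grab it from the cache,
// fall back to the network, and show an offline page if that fails too.
addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (response) {
      return response || fetch(event.request);
    }).catch(function () {
      return caches.match('/offline.html');
    })
  );
});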

Basically I’m reproducing AppCache’s overzealous approach. It works for this site because the content is never going to change. I hope that this time, I really can just set it and forget it. I want the site to be an historical artefact, available at the same URL for at least my lifetime. I don’t want to have to maintain it or revisit it every few years to swap out one API for another.

Which brings me back to the way AppCache is being deprecated…

The Firefox team are very eager to ditch AppCache as soon as possible. On the one hand, that’s commendable. They’re rightly proud of shipping Service Workers and they want to encourage people to use the better technology instead. But it sure stings for the suckers (like me) who actually went and built stuff using AppCache.

In a weird way, I think this rush to deprecate AppCache might actually hurt the adoption of Service Workers. Let me explain…

At last year’s Edge Conference, Nolan Lawson gave a great presentation on storing data in the browser. He enumerated the many ways—past and present—that we could store data locally: WebSQL, Local Storage, IndexedDB …the list goes on. He also posed the question: why aren’t more people using insert-name-of-latest-API-here? To me it seemed obvious why more people weren’t diving into using the latest and greatest option for local data storage. It was because they had been burned before. The developers who rushed into trying previous solutions end up being mocked for their choice. “Still using that ol’ thing? Pffftt!”

You can see that same attitude on display from Mozilla as they push towards removing AppCache. Like in a comment that refers to developers using AppCache in production as “the angry hordes”. Reminds me of something Tom said:

In that same Mozilla thread, Soledad echoes Tom’s point:

As a member of the devrel team: I think that this should be better addressed in a blog post that someone from the team responsible for switching AppCache off should write, so everyone can understand the reasons and ask questions to those people.

I’d rather warn people beforehand, pointing them to that post and help them with migration paths than apply emergency mitigation strategies when a lot of people find their stuff stopped working in the newer Firefox…

Bravo! That same approach should have also been taken by the Chrome team when it came to their thread about punishing display:browser in manifest files. There was absolutely no communication with developers about this major decision. I only found out about it because Paul happened to mention it to me.

I was genuinely shocked by this:

Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

I can confirm that smell. When I was making the manifest file for HTML5 For Web Designers, I really wanted to put display: browser because I want people to be able to copy and paste URLs (for the book, for individual chapters, and for sections within chapters). But knowing that if I did that, Android users would never see the “add to home screen” prompt made me question that decision. I felt strong-armed into declaring display: standalone. And no, I’m not mollified by hand-waving reassurances that the Chrome team will figure out some solution for this. Figure out the solution first, then punish the saps like me who want to use display: browser to allow people to share URLs.

Anyway, the website for HTML5 For Web Designers is now using AppCache and Service Workers. The AppCache part will probably be needed for quite a while yet to provide offline support on iOS. Apple are really dragging their heels on Service Worker support, with at least one WebKit engineer actively looking for reasons not to implement it.

There’s a lot of talk about making apps work offline, but I think it’s just as important that we consider making information work offline. Books are a great example of this. To use the tired transport tropes, the website for a book is something you might genuinely want to access when you’re on a plane, or in the underground, or out at sea.

I really, really like progressive web apps. But I also think it’s important that we don’t fall into the trap of just trying to imitate native apps on the web. I love the idea of taking the best of the web—like information being permanently available at a URL—and marrying that up with the best of native—like offline access. I also like the idea of taking the best of books—a tome of thought—and marrying it up with the best of the web—hypertext.

I’d love to see more experimentation around online/offline hypertext/books. For now, you can visit HTML5 For Web Designers, add it to your home screen, and revisit it whenever and wherever you like.

Relinkification

On Jessica’s recommendation, I read a piece on the Guardian website called The eeriness of the English countryside:

Writers and artists have long been fascinated by the idea of an English eerie – ‘the skull beneath the skin of the countryside’. But for a new generation this has nothing to do with hokey supernaturalism – it’s a cultural and political response to contemporary crises and fears

I liked it a lot. One of the reasons I liked it was not just the text of the writing, but the hypertext of the writing. Throughout the piece there are links off to other articles, books, and blogs. For me, this enriched the piece and set me off down some rabbit holes of hyperlinks with fascinating follow-ups waiting at the other end.

Back in 2010, Scott Rosenberg wrote a series of three articles over the course of two months called In Defense of Hyperlinks:

  1. Nick Carr, hypertext and delinkification,
  2. Money changes everything, and
  3. In links we trust.

They’re all well worth reading. The whole thing was kicked off with a well-rounded debunking of Nicholas Carr’s claim that hyperlinks harm text. Instead, Rosenberg finds that hyperlinks within a text embiggen the writing …providing they’re done well:

I see links as primarily additive and creative. Even if it took me a little longer to read the text-with-links, even if I had to work a bit harder to get through it, I’d come out the other side with more meat and more juice.

Links, you see, do so much more than just whisk us from one Web page to another. They are not just textual tunnel-hops or narrative chutes-and-ladders. Links, properly used, don’t just pile one “And now this!” upon another. They tell us, “This relates to this, which relates to that.”

The difference between a piece of writing being part of the web and a piece of writing being merely on the web is something I talked about a few years back in a presentation called Paranormal Interactivity at ‘round about the 15 minute mark:

Imagine if you were to take away all the regular text and only left the hyperlinks on Wikipedia, you could still get the gist, right? Every single link there is like a wormhole to another part of this “choose your own adventure” game that we’re playing every day on the web. I love that. I love the way that Wikipedia uses links.

That ability of the humble hyperlink to join concepts together lies at the heart of Tim Berners-Lee’s World Wide Web …and Ted Nelson’s Project Xanadu, and Douglas Engelbart’s Dynamic Knowledge Environments, and Vannevar Bush’s idea of the Memex. All of those previous visions of a hyperlinked world were—in many ways—superior to the web. But the web shipped. It shipped with brittle, one-way linking, but it shipped. And now today anyone can create a connection between two ideas by linking to resources that represent those ideas. All you need is an HTML document that contains some A elements with href attributes, and a URL to act as that document’s address.

Like the one you’re accessing now.

Not only can I link to that article on the Guardian’s website, I can also pair it up with other related links, like Warren Ellis’s talk from dConstruct 2014:

Inventing the next twenty years, strategic foresight, fictional futurism and English rural magic: Warren Ellis attempts to convince you that they are all pretty much the same thing, and why it was very important that some people used to stalk around village hedgerows at night wearing iron goggles.

There is definitely the same feeling of “the eeriness of the English countryside” in Warren’s talk. If you haven’t listened to it yet, set aside some time. It is enticing and disquieting in equal measure …like many of the works linked to from the piece on the Guardian.

There’s another link I’d like to make, and it happens to be to another dConstruct speaker.

From that Guardian piece:

Yet state surveillance is no longer testified to in the landscape by giant edifices. Instead it is mostly carried out by software programs running on computers housed in ordinary-looking government buildings, its sources and effects – like all eerie phenomena – glimpsed but never confronted.

James Bridle has been confronting just that. His recent series The Nor took him on a tour of a parallel, obfuscated English countryside. He returned with three pieces of hypertext:

  1. All Cameras Are Police Cameras,
  2. Living in the Electromagnetic Spectrum, and
  3. Low Latency.

I love being able to do this. I love being able to add strands to this world-wide web of ours. Not only can I say “this idea reminds me of another idea”, but I can point to both ideas. It’s up to you whether you follow those links.

Manual

I wrote a thing. I wrote it for The Manual, Andy’s rather lovely tertial dead-tree publication.

The thing I wrote is punnily titled As We May Link (and if you get the reference to what I’m punning on, I think you’re going to like it). It’s quite different to most of the other articles I’ve written. My usual medium is hypertext, but I knew that The Manual was going to be published by pressing solutions of pigments onto thinly-sliced pieces of vegetation.

There’s a lot of fetishisation around that particular means of production, often involving the emotional benefit of consuming the final product in the bathtub, something I’ve never understood—surely water is as unsuitable an environment for paper-based analogue systems as it is for silicon-based digital devices?

In any case, I wanted to highlight the bound—in both senses of the word—nature of The Manual’s medium. So I wrote about hypertext …but without being able to use any hyperlinks; an exercise in dancing about architecture, as whoever the hell it was so eloquently put it.

I’m pleased with how it turned out, though I suspect that’s entirely down to Carolyn’s editing skills combined with a lovely illustration by Rob Bailey.

If you’d like to read what I wrote (and what Tiffani wrote and what Ethan wrote and what all the other contributors wrote), then you can order The Manual online …but you’ll have to wait until it is delivered to you over a network of roads and other meatspace transportation routes. The latency is terrible, but the bandwidth is excellent: when you finally have the book in your hands, you’ll find that it contains an astonishing number of atoms.

That’s assuming there’s no packet loss.

Revving up

I was away in Berlin for a few days, delivering a workshop to the good people at Aperto. I had a good time, made even better by some excellent Spring weather and the opportunity to meet up with Anthony and Colin while I was there.

I came home to find that, in my absence, rev="canonical" usage has gone stratospheric. First off, there are the personal sites like CollyLogic and Bokardo. Then there are the bigger fish, Flickr among them.

Excellent! I’d just like to add one piece of advice to anyone implementing or thinking of implementing rev="canonical": if you are visibly linking to the short url of the current page, please remember to use rev="canonical" on that A element as well as on any LINK element you’ve put in the HEAD of your document. Likewise, for the coders out there, if you are thinking of implementing a rev="canonical" parser—and let’s face it, that’s a nice piece of low-hanging fruit to hack together—please remember to also check for rev attributes on A elements as well as on LINK elements. If anything, I would prioritise human-visible claims of canonicity over invisible metacrap.
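
In markup, that means declaring the short URL twice: once invisibly in the HEAD and once on the visible link in the BODY (the short URL here is entirely made up):

  <head>
    <link rev="canonical" href="http://short.example/abc">
  </head>
  <body>
    <p>Short URL: <a rev="canonical" href="http://short.example/abc">http://short.example/abc</a></p>
  </body>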

Actually, there’s a whole bunch of nice metacrapital things you can do with your visible hyperlinks. If you link to an RSS feed in the BODY of your document, use the same rel values that you would use if you linked to the feed from a LINK element in the HEAD. If you link to an MP3 file, use the type attribute to specify the right mime-type (audio/mpeg). The same goes for linking to Word documents, PDFs and any other documents that aren’t served up with a mime-type of text/html. So, for example, here on my site, when I link to the RSS feed from the sidebar, I’m using type and rel attributes: href="/journal/rss" rel="alternate" type="application/rss+xml". I’m also quite partial to the hreflang attribute but I don’t get the chance to use that very often—this post being an exception.
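
Here’s the kind of thing I mean (the file names are made up):

  <a href="/journal/rss" rel="alternate" type="application/rss+xml">Subscribe to the RSS feed</a>
  <a href="/media/interview.mp3" type="audio/mpeg">Listen to the interview</a>
  <a href="/docs/report.pdf" type="application/pdf">Download the report</a>
  <a href="/artikel" hreflang="de" rel="alternate">Read this in German</a>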

The rev="canonical" convention makes a nice addition to the stable of semantic richness that can be added to particular flavours of hyperlinks. But it isn’t without its critics. The main thrust of the argument against this usage is that the rev attribute currently doesn’t appear in the HTML5 spec. I’ve even seen people use the past tense to refer to an as-yet unfinished specification: the rev attribute was taken out of the HTML5 spec.

As is so often the case with HTML5, the entire justification for dropping rev seems to be based on a decision made by one person. To be fair, the decision was based on available data from 2005. In light of recent activity and the sheer number of documents that are now using rev="canonical"—Flickr alone accounts for millions—I would hope that the HTML5 community will have the good sense to re-evaluate that decision. The document outlining the design principles of HTML5 states:

When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.

The unbelievable speed of adoption of rev="canonical" shows that it fulfils a real need. If the HTML5 community ignore this development, not only would they not be paving a cowpath, they would be refusing to acknowledge that a well-trodden cowpath even exists.

The argument against rev seems to be that it can be confusing and could result in people using it incorrectly. By that argument, new elements like header and footer should be kept out of any future specification for the same reason. I’ve already come across confusion on the part of authors who thought that these new elements could only be used once per document. Fortunately, the spec explains their meaning.

The whole point of having a spec is to explain the meaning of elements and attributes, be it for authors or user-agents. Without a spec to explain what they mean, elements like P and A don’t make any intuitive sense. It’s no different for attributes like href or rev. To say that rev isn’t a good attribute because it requires you to read the spec is like saying that in order to write English, you need to understand the language. It’s neither a good nor bad thing, it’s just a statement of the bleedin’ obvious.

Now go grab yourself the very handy bookmarklet that Simon has written for auto-discovering short urls.

Shrtr

In one of those instances of convergent online evolution, the subject of URL shorteners has been popping up a lot lately. You know: TinyURL, bit.ly, tr.im, and the like. I suspect a lot of this talk was prompted by the launch of the DiggBar and its accompanying short URL service that serves up your content in an iframe—time to break out that frame-busting JavaScript you haven’t needed for years.
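
For the record, that frame-busting JavaScript is only a few lines. Something like this:

  <script>
    // If this page has been loaded inside someone else's frame
    // (the DiggBar, say), break out and load it at the top level.
    if (window.top !== window.self) {
      window.top.location.href = window.self.location.href;
    }
  </script>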

David Weiss writes about the security implications of URL shortening services. Meanwhile, Joshua Schachter talks about the danger of link rot:

The worst problem is that shortening services add another layer of indirection to an already creaky system. A regular hyperlink implicates a browser, its DNS resolver, the publisher’s DNS server, and the publisher’s website. With a shortening service, you’re adding something that acts like a third DNS resolver, except one that is assembled out of unvetted PHP and MySQL, without the benevolent oversight of luminaries like Dan Kaminsky and St. Postel.

Dave Winer agrees:

We need to prepare for the day when N of the URL shorteners go out of business. When that happens a large part of the web will die. It will not be a good day.

Take the case of Twitter. Messages on Twitter are archived and addressable. If those messages contain links, they are shortened using TinyURL. If TinyURL were to disappear, it would leave a swamp of unresolved endpoints. Jason Kottke has a modest proposal:

In cases where shortening is necessary, Twitter should automatically use a shortener of their own. That way, users know what they’re getting and as long as Twitter is around, those links stay alive.

That would definitely work for that particular case. Of course Twitter could disappear, taking its archive of messages with it, but that’s a different situation. The loss of shortened URLs would be tightly coupled to the loss of the original messages.

But Twitter is just one example. What about the rest of us? Right now, if someone wants to pass around a shortened version of one of my URLs, they could use any one of the many URL shortening services out there. The result is potentially a score of different short URLs leading to the same endpoint. If some of those services disappear, link rot spreads.

Ideally, I should be able to specify a desired short URL for my content. This is something that Dopplr is already doing with its dplr.it domain.

Kellan says that they’re also putting together a URL shortener over at Flickr. He’s thinking about how to specify a short URL for a document: some way of specifying “here’s the short URL for this page” in the same way that we can specify “here’s the stylesheet for this page” or “here’s the RSS feed for this page”.

The rel attribute is used for stylesheets and RSS feeds so perhaps that’s the way to go. Something along the lines of rel="alternate shorter" in the same way that we can point to an alternate stylesheet with rel="alternate stylesheet". But in this case, we’re actually pointing to the same resource but with a different URL. So maybe something like rel="alternate shorter self" would be more accurate. Heck, we could probably throw the bookmark value in there too: rel="alternate shorter self bookmark".

Kevin pointed out on Twitter that rev (reverse relationship) would be more suitable than rel.

Google introduced rel="canonical" recently. It’s a way of pointing from an alternate URL back to the canonical URL of the current document: the relationship of the linked document to the current document is “canonical”.

If you’re linking from the canonical URL to an alternate URL (like, say, a shortened URL), you could use rev="canonical": the relationship of the current document to the linked document is “canonical”.
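
Putting the two directions side by side might make the distinction clearer (the URLs are made up):

  <!-- On the alternate (short) URL, pointing back to the canonical URL: -->
  <link rel="canonical" href="http://example.com/articles/a-long-descriptive-title">

  <!-- On the canonical URL, pointing out to its shortened alternative: -->
  <link rev="canonical" href="http://short.example/abc">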

This certainly seems to be the more semantically correct way of pointing to a shortened URL. Alas, rev is a beleaguered little emo attribute: no-one understands it. At least, that’s the claim of the HTML5 community, who plan to drop it completely.

Personally, I share Paul’s intuitions:

HTML is a living language and the HTML5 WG should behave more like the OED rather than the French Government.

So if enough of us publish documents using ARIA roles, accesskey or rev attributes, they will not go gentle into that good night.

Should the idea of distributed, rather than centralised, URL shortening take off, I can imagine a situation where short URL auto-discovery is as commonplace as feed auto-discovery. So if I paste a link into a microblogging site like Twitter, or choose to “Mail this page” from my browser, then the website or mail client could check the head of the document for a preferred short URL. It’s a little bit like OpenID delegation: I could either create my own URL shortening service or specify a provider I trust.
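
A rough sketch of what that auto-discovery might look like on the client side, assuming the page in question can be fetched (cross-origin restrictions notwithstanding):

  <script>
    // Fetch a page, parse it, and look for rev="canonical"
    // on either a LINK element or an A element.
    async function discoverShortUrl(pageUrl) {
      const response = await fetch(pageUrl);
      const html = await response.text();
      const doc = new DOMParser().parseFromString(html, 'text/html');
      const el = doc.querySelector('link[rev~="canonical"], a[rev~="canonical"]');
      if (!el) {
        return null;
      }
      // Resolve relative URLs against the page they were found on.
      return new URL(el.getAttribute('href'), pageUrl).href;
    }
  </script>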

Josh is already playing around with shortened links back to posts on his blog. Now suppose he also specifies the short URL (using rev="canonical") on those blog posts…

Update: Kellan has now implemented rev="canonical" auto-discovery.

Update 2: …and Dopplr have duly implemented rev="canonical" which works a treat with Kellan’s auto-discovery tool. Here’s an example.

Update 3: This just keeps getting better. Now there’s a blog devoted to rev="canonical" which has already documented not one, but two Wordpress plugins.

Orangutans, Oxen and Ogham Stones

Sean McGrath is delivering the closing keynote at XTech 2008. Sean would like to reach inside and mess with our heads today. He plans to modify our brain structures, talking about the movable Web.

Even though Sean has been doing tech stuff for a long time, he freely admits that he doesn’t know what the Web is. He quotes Dylan:

I was so much older then, I’m so much younger now.

Algorithms + Data Structures = Programs is a book by Niklaus Wirth from 1978. Anyone remember Pascal? Sean went to college here at Trinity in 1983, doing four years of computer science, which is where he came across that book.

Computing is all about language …human language. People first, machines second. Information is really about words, not numbers. Words give the numbers context.

Sean used to sit in his student bedsit and think about what algorithms actually are. He was also around at the birth of SGML in 1985. More words, then. Then he got involved in the creation of XML …even more words. Then the Web came along. HTML is, yup, more words. Even JavaScript is words. His epiphany was realising that HTTP was about sending words across the wire. The Web is fundamentally words.

There’s a Bob Dylan documentary whose title Sean took as a sign from God …or at least from Dylan.

Sean explains Ogham stones — horizontal lines from top to bottom. One stone in particular is the Rosetta Stone of Ogham writing. The translation on that stone is If I were you, I would not stand here. The Irish have been using words for a long time. They’ve also been hacking for a long time. Dolmens are an example of neolithic hacking.

Early manuscripts demonstrate the long Irish history of writing unit test cases for Cascading Style Sheets. A common thread in books from the Book Of Ballymote onwards was that they came from a religious background. Joyce came along with the world’s first hypertext novel, Finnegans Wake. Sean goes from Yeats to Shane MacGowan, quoting Summer In Siam as a sublime piece of Zen metaphysics:

When it’s Summer in Siam then all I really know is that I truly am in the Summer in Siam.

The Irish will even go to war over words. Copyright was a big bone of contention between St. Finnian and his student St. Columba in the 6th Century. St. Columba ran a proto-Pirate Bay. If you saw him coming, you’d bury your books. There was a war between St. Finnian and St. Columba in which 3,000 people lost their lives. Finally, the High King of Ireland said As to every cow its calf, so to every book its copy, the first official statement on copyright. But because books were actually written on cows (vellum), the statement is ambiguous.

Here’s a picture. Nobody in the room knows what it is. We haven’t had our brains rewired yet.

Sean loves the simplicity of the idea that computing is words. Sadly, it’s just not true. There are plenty of images and video on the Web.

Back to that picture. It’s a cow. One person in the room sees the cow.

Sean likes the idea of the Web as electronic Ogham stones. But he sought the 2nd path to Web enlightenment. He realised that not only is the Web not just all words, the Web doesn’t exist at all.

What is the true nature of the words on the Web? Here’s something Sean created using Finite State Machines: a mobile app called Mission Control that generated documents based on the user, the device, the location and the network. There were no persistent documents. No words, just evaporation, as Leonard Cohen said.

There are three models for the world.

  • Model A is the platonic model. Documents exist on the server before you observe them. You request them over HTTP.
  • Model B is Bishop Berkeley’s model. Stuff exists but we twist it (using CSS for example).
  • Model C is that nothing exists until you observe it. In quantum physics there is the idea that observing a system actually defines the system.

Model A exists within Model B which exists within Model C. Model C is the general case. If you have a system that is that dynamic, you could generate Model B and therefore Model A. Look at the way our sites have evolved over time. We used to create Model A websites. Then we switched over to Model B with Web Standards. Now we’re at Model C — we’re not going to create any actual content at all. There is no content but there is also an infinite amount of content at the same time. We generate a tailor-made document for each user but we don’t hold on to that document, we throw it away. So what content actually exists on the Web?
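
To make Model C concrete, here’s a toy sketch of my own (not Sean’s code): a server that conjures up each document on request and keeps nothing.

  // A toy Model C server: no document exists until a request arrives.
  // Each response is generated on the fly and thrown away afterwards.
  const http = require('http');

  http.createServer((request, response) => {
    // In a real system the user, device, location and network would all
    // shape the document; here a timestamp stands in for all of that.
    const page = '<!DOCTYPE html><title>Model C</title>' +
      '<p>This page was generated at ' + new Date().toISOString() +
      ' and will never exist again.</p>';
    response.writeHead(200, {'Content-Type': 'text/html; charset=utf-8'});
    response.end(page);
  }).listen(8080);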

PHP, Django, Rails, Google App Engine …on the Web, Model C wins. It’s even starting to happen on the client side with Ajax, Silverlight and Air. It’s spooky sometimes to view source and see no actual content, just JavaScript to generate the content.

Doing everything dynamically is fine as long as it scales. It’s better to solve the problems of scalability than to revert to the static model. The benefits of Model C are just so much greater than those of Model A.

Amazon are making great services but they are rubbish at naming things, like Mechanical Turk.

So where are all the words? HTTP still delivers words to me but they are generated on the fly. The programs that generate them are hidden.

The Web is becoming a Web of silos. As the Web becomes more dynamic, it’s harder for the little guy to compete (behind me I hear Simon grumble something about Moore’s Law). So we build silos on the client side; so-called Rich Internet Applications. We’re losing URIs.

Model C is Turing complete, user-sensitive, location-sensitive and device-sensitive. It’s scalable if it’s designed right. It’s commercially viable if it’s deployed right.

But we lose hypertext and deep linking as we know it. Perhaps we will lose search. Will the Googlebot download that JavaScript and eval it to spider it? URIs have emergent properties because they can be bookmarked, tagged and mashed up. We are also losing simplicity: simply surfing documents.

So is it worth it?

Mu. That means I reject the premise of the question. We have no choice. We are heading towards Model C whether we want to or not. That’s bad for the librarians, such as the Orangutan librarian from Discworld. Read Borges’s The Garden of Forking Paths. Sean recommends reading Borges first and Pratchett second — it just doesn’t work the other way around. Now Sean mentions Borges and John Wilkins — Jesus, this is just like my Hypertext talk at Reboot! Everyone has a good laugh about taxonomies. Model C makes it possible to build the library of Babel — every possible book that is 410 pages long. But the library of Babel is, in Standish’s view, useless. He says that a library is not useful for the books it contains but for the books that it doesn’t contain — the rubbish has been filtered out. How will we filter out the rubbish on a Model C Web?

Information content is inversely related to probability, said Claude Shannon. George Dyson figured out that the library of Babel would contain somewhere between a googol and a googolplex of books.

Nothing that Sean has seen this week at XTech has rocked his belief that we are marching towards Model C. Our content is going into the cloud, despite what Steven Pemberton would wish for.

When Sean first started using the Web, you had static documents and you had a cgi-bin. Now we generate our documents dynamically. We are at an interesting crossroads right now between Joycean documents and Turing applications. Is there a middle way, a steady-state model? Sean doesn’t think so because he now believes that the Web doesn’t actually exist. The Web is really just HTTP. The value of URIs is that we can name things. It’s still important that we use URIs wisely.

Perhaps HTML, to anthropomorphise it, is trying to be too clever. Perhaps HTML, in trying to balance documents and applications, is a jack of all trades and a master of none.

Sean now understands what Fielding was talking about. There is no such thing as a document. All there is is HTTP. Dan Connolly has a URI for his Volkswagen Beetle because it’s on the Web. Sean is now at peace, understanding the real value of HTTP + URIs.

Now Sean will rewire our brains by showing us the cow in the picture. Once we see the cow, we cannot unsee it.

Reboot slides

The first day of Reboot 9 in Copenhagen is at an end. It’s been quite an inspiring day: lots of good talks but, more importantly, lots of great conversations with smart interesting people. This is my second year here so today has been a nice mix of meeting up with old friends and getting to meet new people.

This year’s theme is “human”, a typically philosophical subject for this blue-sky conference. Getting into the spirit of things, I gave a presentation called soul. It wasn’t quite as pretentious as last year’s talk but it was certainly a rambling, haphazard affair. I really just wanted to tie in a bunch of ideas that I’ve been thinking about lately: lifestreams, portable social networks, online activity as gaming… but mostly it was a recruitment drive for Hack Day.

You can download the slides of “soul” as a PDF (with notes).

I was in the first speaking slot and I was very happy to get it over and done with. I had been slightly panicking over this talk and only really got it together during an extended stay at Stansted airport on the way to Denmark. Thanks for the two hour delay, Easy Jet.

Even with the main talk done, I had one more task to accomplish. I foolishly agreed to do a micro-presentation—we can’t call them Pecha-Kucha, donchyaknow—of 15 slides with exactly 20 seconds per slide. I finished the slides for that shortly beforehand and then started psyching myself up for it by hyperventilating and increasing my heart-rate.

I think it paid off. I had an absolute blast, people seemed to enjoy it and Andy asked if I had been possessed by the spirit of Simon Willison.

Oh, and the subject of the rat-a-tat talk was Hypertext: a quick list of tips for improving your links with rel, rev and various microformats. Help yourself to a PDF of the slides.

Update: Here’s a video of my micro-presentation. I was even more incoherent than I feared.