Journal tags: communication

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

Accessibility testing

I was doing some accessibility work with a client a little while back. It was mostly giving their site the once-over, highlighting any issues that we could then discuss. It was an audit of sorts.

While I was doing this I started to realise that not all accessibility issues are created equal. I don’t just mean in their severity. I mean that some issues can—and should—be caught early on, while other issues can only be found later.

Take colour contrast. This is something that should be checked before a line of code is written. When designs are being sketched out and then refined in a graphical editor like Figma, that’s the time to check the ratio between background and foreground colours to make sure there’s enough contrast between them. You can catch this kind of thing later on, but by then it’s likely to come with a higher cost—you might have to literally go back to the drawing board. It’s better to find the issue when you’re at the drawing board the first time.
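
For reference, the number that design tools report when you check a colour pairing comes from the WCAG relative-luminance formula. Here's a rough sketch of that calculation (the function names and sample colours are purely for illustration); body text needs a ratio of at least 4.5:1 to pass WCAG AA:

    <script>
      // A rough sketch of the WCAG contrast-ratio calculation.
      // Colours are [r, g, b] values in the 0 to 255 range.
      function relativeLuminance([r, g, b]) {
        const linearise = (value) => {
          const c = value / 255;
          return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
        };
        return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b);
      }

      function contrastRatio(foreground, background) {
        const l1 = relativeLuminance(foreground);
        const l2 = relativeLuminance(background);
        const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
        return (lighter + 0.05) / (darker + 0.05);
      }

      // Mid-grey text (#777) on a white background just misses the 4.5:1 target.
      console.log(contrastRatio([119, 119, 119], [255, 255, 255])); // ≈ 4.48
    </script>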

Then there’s the HTML. Most accessibility issues here can be caught before the site goes live. Usually they’re issues of omission: form fields that don’t have an explicitly associated label element (using the for and id attributes); images that don’t have alt text; pages that don’t have sensible heading levels or landmark regions like main and nav. None of these are particularly onerous to fix and they come with the biggest bang for your buck. If you’ve got sensible forms, sensible headings, alt text on images, and a solid document structure, you’ve already covered the vast majority of accessibility issues with very little overhead. Some of these checks can also be automated: alt text for images; labels for inputs.
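
Pulled together, those basics might look something like this (a minimal sketch with made-up content and filenames):

    <body>
      <header>
        <nav aria-label="Main">
          <a href="/">Home</a>
          <a href="/articles/">Articles</a>
        </nav>
      </header>
      <main>
        <h1>Page title</h1>
        <h2>A sensible subheading</h2>
        <!-- Every image gets alt text -->
        <img src="/images/team.jpg" alt="The team gathered around a whiteboard">
        <form>
          <!-- The for attribute matches the input's id, explicitly associating them -->
          <label for="email">Email address</label>
          <input type="email" id="email" name="email">
          <button>Subscribe</button>
        </form>
      </main>
    </body>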

Then there’s interactive stuff. If you only use native HTML elements you’re probably in the clear, but chances are you’ve got some bespoke interactivity on your site: a carousel; a mega dropdown for navigation; a tabbed interface. HTML doesn’t give you any of those out of the box so you’d need to make your own using a combination of HTML, CSS, JavaScript and ARIA. There’s plenty of testing you can do before launching—I always ask myself “What would Heydon do?”—but these components really benefit from being tested by real screen reader users.
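
As an illustration of the kind of bespoke widget I mean, here's a stripped-down sketch of a tabbed interface built from HTML, ARIA, and a little JavaScript. A real implementation needs arrow-key support, focus management, and, as I said, testing with real screen reader users:

    <div role="tablist" aria-label="Example tabs">
      <button role="tab" id="tab-1" aria-controls="panel-1" aria-selected="true">First</button>
      <button role="tab" id="tab-2" aria-controls="panel-2" aria-selected="false" tabindex="-1">Second</button>
    </div>
    <div role="tabpanel" id="panel-1" aria-labelledby="tab-1">First panel content.</div>
    <div role="tabpanel" id="panel-2" aria-labelledby="tab-2" hidden>Second panel content.</div>
    <script>
      // Show the chosen panel and update ARIA state when a tab is activated.
      // Keyboard handling (arrow keys, Home/End) is deliberately left out of this sketch.
      const tabs = document.querySelectorAll('[role="tab"]');
      tabs.forEach((tab) => {
        tab.addEventListener('click', () => {
          tabs.forEach((other) => {
            const selected = other === tab;
            other.setAttribute('aria-selected', String(selected));
            other.tabIndex = selected ? 0 : -1;
            document.getElementById(other.getAttribute('aria-controls')).hidden = !selected;
          });
        });
      });
    </script>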

So if you commission an accessibility audit, you should hope to get feedback that’s mostly in that third category—interactive widgets.

If you get feedback on document structure and other semantic issues with the HTML, you should fix those issues, sure, but you should also see what you can do to stop those issues going live again in the future. Perhaps you can add some steps in the build process. Or maybe it’s more about making sure the devs are aware of these low-hanging fruit. Or perhaps there’s a framework or content management system that’s stopping you from improving your HTML. Then you need to execute a plan for ditching that software.

If you get feedback about colour contrast issues, just fixing the immediate problem isn’t going to address the underlying issue. There’s a process problem, or perhaps a communication issue. In that case, don’t look for a technical solution. A design system, for example, will not magically fix a workflow issue or route around the problem of designers and developers not talking to each other.

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

Upgrade paths

After I jotted down some quick thoughts last week on the disastrous way that Google Chrome rolled out a breaking change, others have posted more measured and incisive takes:

In fairness to Google, the Chrome team is receiving the brunt of the criticism because they were the first movers. Mozilla and Apple are on board with making the same breaking change, but Google is taking the lead on this.

As I said in my piece, my issue was less to do with whether confirm(), prompt(), and alert() should be deprecated but more to do with how it was done, and the woeful lack of communication.

Thinking about it some more, I realised that what bothered me was the lack of an upgrade path. Considering that dialog is nowhere near ready for use, it seems awfully cart-before-horse-putting to first remove a feature and then figure out a replacement.

I was chatting to Amber recently and realised that there was a very different example of a feature being deprecated in web browsers…

We were talking about the KeyboardEvent.keyCode property. Did you get the memo that it’s deprecated?

But fear not! You can use the KeyboardEvent.code property instead. It’s much nicer to use too. You don’t need to look up a table of numbers to figure out how to refer to a specific key on the keyboard—you use its actual value instead.
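
A quick sketch of the difference, listening for the Escape key both ways:

    <script>
      document.addEventListener('keydown', (event) => {
        // The old way: a magic number from a lookup table (27 is Escape).
        if (event.keyCode === 27) {
          console.log('Escape was pressed');
        }
        // The newer way: the key's actual value, no lookup table needed.
        if (event.code === 'Escape') {
          console.log('Escape was pressed');
        }
      });
    </script>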

So the way that change was communicated was:

Hey, you really shouldn’t use the keyCode property. Here’s a better alternative.

But with the more recent change, the communication was more like:

Hey, you really shouldn’t use confirm(), prompt(), or alert(). So go fuck yourself.

Foundations

There was quite a kerfuffle recently about a feature being removed from Google Chrome. To be honest, the details don’t really matter for the point I want to make, but for the record, this was about removing alert and confirm dialogs from cross-origin iframes (and eventually everywhere else too).

It’s always tricky to remove a long-established feature from web browsers, but in this case there were significant security and performance reasons. The problem was how the change was communicated. It kind of wasn’t. So the first that people found out about it was when things suddenly stopped working (like CodePen embeds).

The Chrome team responded quickly and the change has now been pushed back to next year. Hopefully there will be significant communication before that to let site owners know about the upcoming breakage.

So all’s well that ends well and we’ve all learned a valuable lesson about the importance of communication.

Or have we?

While this was going on, Emily Stark tweeted a more general point about breakage on the web:

Breaking changes happen often on the web, and as a developer it’s good practice to test against early release channels of major browsers to learn about any compatibility issues upfront.

Yikes! To me, this appears wrong on almost every level.

First of all, breaking changes don’t happen often on the web. They are—and should be—rare. If that were to change, the web would suffer massively in terms of predictability.

Secondly, the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

I wasn’t the only one surprised by this message.

Simon says:

No, no, no, no! One of the best things about developing for the web is that, as a rule, browsers don’t break old code. Expecting every website and application to have an active team of developers maintaining it at all times is not how the web should work!

Edward Faulkner:

Most organizations and individuals do not have the resources to properly test and debug their website against Chrome canary every six weeks. Anybody who published a spec-compliant website should be able to trust that it will keep working.

Evan You:

This statement seriously undermines my trust in Google as steward for the web platform. When did we go from “never break the web” to “yes we will break the web often and you should be prepared for it”?!

It’s worth pointing out that the original tweet was not an official Google announcement. As Emily says right there on her Twitter account:

Opinions are my own.

Still, I was shaken to see such a cavalier attitude towards breaking changes on the World Wide Web. I know that removing dangerous old features is inevitable, but it should also be exceptional. It should not be taken lightly, and it should certainly not be expected to be an everyday part of web development.

It’s almost miraculous that I can visit the first web page ever published in a modern web browser and it still works. Let’s not become desensitised to how magical that is. I know it’s hard work to push the web forward, constantly adding new features while also maintaining backward compatibility, but it sure is worth it! We have collectively banked three decades’ worth of trust in the web as a stable place to build a home. Let’s not blow it.

If you published a website ten or twenty years ago, and you didn’t use any proprietary technology but only stuck to web standards, you should rightly expect that site to still work today …and still work ten and twenty years from now.

There was something else that bothered me about that tweet and it’s not something that I saw mentioned in the responses. There was an unspoken assumption that the web is built by professional web developers. That gave me a cold chill.

The web has made great strides in providing more and more powerful features that can be wielded in learnable, declarative, forgiving languages like HTML and CSS. With a bit of learning, anyone can make web pages complete with form validation, lazily-loaded responsive images, and beautiful grids that kick in on larger screens. The barrier to entry for all of those features has lowered over time—they used to require JavaScript or complex hacks. And with free(!) services like Netlify, you could literally drag a folder of web pages from your computer into a browser window and boom!, you’ve published to the entire world.
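
A rough sketch of what I mean, using made-up filenames and class names; none of this needs a single line of JavaScript:

    <form>
      <!-- Built-in form validation: the browser handles it -->
      <label for="email">Email</label>
      <input type="email" id="email" name="email" required>
      <button>Sign up</button>
    </form>

    <!-- A lazily-loaded responsive image -->
    <img src="/images/photo-small.jpg"
         srcset="/images/photo-small.jpg 600w, /images/photo-large.jpg 1200w"
         sizes="(min-width: 40em) 50vw, 100vw"
         loading="lazy"
         alt="A description of the photo">

    <style>
      /* A grid that kicks in on larger screens */
      @media (min-width: 40em) {
        .gallery {
          display: grid;
          grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
          gap: 1em;
        }
      }
    </style>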

But the common narrative in the web development community—and amongst browser makers too apparently—is that web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today.

Absolute bollocks.

You can choose to make it really complicated. Convince yourself that “the modern web” is inherently complex and convoluted. But then look at what makes it complex and convoluted: toolchains, build tools, pipelines, frameworks, libraries, and abstractions. Please try to remember that none of those things are required to make a website.

This is for everyone. Not just for everyone to consume, but for everyone to make.

Letters of exclusion

I think my co-workers are getting annoyed with me. Any time they use an acronym or initialism—either in a video call or Slack—I ask them what it stands for. I’m sure they think I’m being contrarian.

The truth is that most of the time I genuinely don’t know what the letters stand for. And I’ve got to that age where I don’t feel any inhibition about asking “stupid” questions.

But it’s also true that I really, really dislike acronyms, initialisms, and other kinds of jargon. They’re manifestations of gatekeeping. They demarcate in-groups from outsiders.

Of course if you’re in a conversation with an in-group that has the same background and context as you, then sure, you can use acronyms and initialisms with the confidence that there’s a shared understanding. But how often can you be that sure? The more likely situation—and this scales exponentially with group size—is that people have differing levels of inside knowledge and experience.

I feel sorry for anyone trying to get into the field of web performance. Not only are there complex browser behaviours to understand, there’s also a veritable alphabet soup of initialisms to memorise. Here’s a really good post on web performance by Harry, but notice how the initialisms multiply like tribbles as the post progresses until we’re talking about using CWV metrics like LCP, FID, and CLS—alongside TTFB and SI—to look at PLPs, PDPs, and SRPs. And fair play to Harry; he expands each initialism the first time he introduces it.

But are we really saving any time by saying FID instead of first input delay? I suspect that the only reason why the word “cumulative” precedes “layout shift” is just to make it into the three-letter initialism CLS.

Still, I get why initialisms run rampant in technical discussions. You can be sure that most discussions of particle physics would be incomprehensible to outsiders, not necessarily because of the concepts, but because of the terminology.

Again, if you’re certain that you’re speaking to peers, that’s fine. But if you’re trying to communicate even a little more widely, then initialisms and abbreviations are obstacles to overcome. And once you’re in the habit of using the short forms, it gets harder and harder to apply context-shifting in your language. So the safest habit to form is to generally avoid using acronyms and initialisms.

Unnecessary initialisms are exclusionary.

Think about on-boarding someone new to your organisation. They’ve already got a lot to wrap their heads around without making them figure out what a TAM is. That’s a real example from Clearleft. We have a regular Thursday afternoon meeting. I call it the Thursday afternoon meeting. Other people …don’t.

I’m trying—as gently as possible—to ensure we’re not being exclusionary in our language. My co-workers indulge me, even if it’s just to shut me up.

But here’s the thing. I remember many years back when a job ad went out on the Clearleft website that included the phrase “culture fit”. I winced and explained why I thought that was a really bad phrase to use—one that is used as code for “more people like us”. At the time my concerns were met with eye-rolls and chuckles. Now, as knowledge about diversity and inclusion has become more widespread, everyone understands that using a phrase like “culture fit” can be exclusionary.

But when I ask people to expand their acronyms and initialisms today, I get the same kind of chuckles. My aversion to abbreviations is an eccentric foible to be tolerated.

But this isn’t about me.

Performance and people

I was helping a client with a bit of a performance audit this week. I really, really enjoy this work. It’s such a nice opportunity to get my hands in the soil of a website, so to speak, and suggest changes that will have a measurable effect on the user’s experience.

Not only is web performance a user experience issue, it may well be the user experience issue. Page speed has a proven demonstrable direct effect on user experience (and revenue and customer satisfaction and whatever other metrics you’re using).

It struck me that there’s a continuum of performance challenges. On one end of the continuum, you’ve got technical issues. These can be solved with technical solutions. On the other end of the continuum, you’ve got human issues. These can be solved with discussions, agreement, empathy, and conversations (often dreaded or awkward).

I think that, as developers, we tend to gravitate towards the technical issues. That’s our safe space. But I suspect that bigger gains can be reaped by tackling the uncomfortable human issues.

This week, for example, I uncovered three performance issues. One was definitely technical. One was definitely human. One was halfway between.

The technical issue was with web fonts. It’s a lot of fun to dive into this aspect of web performance because quite often there’s some low-hanging fruit: a relatively simple technical fix that will boost the performance (or perceived performance) of a website. That might be through resource hints (using link rel="preload" in the HTML) or adjusting the font loading (using font-display in the CSS) or even nerdier stuff like subsetting.

In this case, the issue was with the file format of the font itself. By switching to woff2, there were significant file size savings. And the great thing is that @font-face rules allow you to specify multiple file formats so you can still support older browsers that can’t handle woff2. A win all ‘round!
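
Putting those pieces together, the fix looked roughly like this (with the URLs and font names changed for illustration):

    <!-- Resource hint: start fetching the font file early -->
    <link rel="preload" href="/fonts/example.woff2" as="font" type="font/woff2" crossorigin>

    <style>
      @font-face {
        font-family: "Example Sans";
        /* Browsers use the first format they understand, so woff2 goes first */
        src: url("/fonts/example.woff2") format("woff2"),
             url("/fonts/example.woff") format("woff");
        /* Show fallback text straight away, swap in the web font when it arrives */
        font-display: swap;
      }
    </style>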

The performance issue that was right in the middle of the technical/human continuum was with images. At first glance it looked like a similar issue to the fonts. Some images were being served in the wrong formats. When I say “wrong”, I guess I mean inappropriate. A photographic image, for example, is probably going to be best served as a JPG rather than a PNG.

But unlike the fonts, the images weren’t in the direct control of the developers. These images were coming from a Content Management System. And while there’s a certain amount of processing you can do on the server, a human still makes the decision about what file format they’re uploading.

I’ve seen this happen at Clearleft. We launched an event site with lean performant code, but then someone uploaded an image that’s megabytes in size. The solution in that case wasn’t technical. We realised there was a knowledge gap around image file formats—which, let’s face it, is kind of a techy topic that most normal people shouldn’t be expected to know.

But it was extremely gratifying to see that people were genuinely interested in knowing a bit more about choosing the right format for the right image. I was able to provide a few rules of thumb and point to free software for converting images. It empowered those people to feel more confident using the Content Management System.

It was a similar situation with the client site I was looking at this week. Nobody is uploading oversized images in order to deliberately make the site slower. They probably don’t realise the difference that image formats can make. By having a discussion and giving them some pointers, they’ll have more knowledge and the site will be faster. Another win all ‘round!

At the other end of continuum was an issue that wasn’t technical. From a technical point of view, there was just one teeny weeny little script. But that little script is Google Tag Manager which then calls many, many other scripts that are not so teeny weeny. Third party scripts …the bane of web performance!

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

Remember when I did a countdown of the top four web performance challenges? At the number one spot is other people’s JavaScript.

Now, one technical solution would be to remove the Google Tag Manager script. But that’s probably not very practical—you’ll just piss off some other department. That said, if you can’t find out which department was responsible for adding the Google Tag Manager script in the first place, it might well be an option to remove it and then wait and see who complains. If no one notices it’s gone, job done!

More realistically, there’s someone who’s added that Google Tag Manager script for their own valid reasons. You’ll need to talk to them and understand their needs.

Again, as with images uploaded in a Content Management System, they may not be aware of the performance problems caused by third-party scripts. You could try throwing numbers at them, but I think you get better results by telling the story of performance.

Use tools like Request Map Generator to help them visualise the impact that third-party scripts are having. Talk to them. More importantly, listen to them. Find out why those scripts are being requested. What are the outcomes they’re working towards? Can you offer an alternative way of providing the data they need?

I think many of us developers are intimidated or apprehensive about approaching people to have those conversations. But it’s necessary. And in its own way, it can be as rewarding as tinkering with code. If the end result is a faster website, then the work is definitely worth doing—whether it’s technical work or people work.

Personally, I just really enjoy working on anything that will end up improving a website’s performance, and by extension, the user experience. If you fancy working with me on your site, you should get in touch with Clearleft.

Overlay gap

I think a lot about Danielle’s talk at Patterns Day last year.

Around about the six minute mark she starts talking about gaps and overlaps.

Gaps are where hidden complexity lives. If we don’t have a category to cover it, in effect it becomes invisible. But that doesn’t mean it’s not there. Unidentified gaps cause inconsistency and confusion.

Overlaps occur when two separate categories encompass some of the same areas of responsibility. They cause conflict, duplication of effort, and unnecessary friction.

This is the bit I keep thinking about. It’s such an insightful lens to view things through. On just about any project, tensions are almost always due to either gaps (“I thought someone else was doing that”) or overlaps (“Oh, you’re doing that? I thought we were doing that”).

When I was talking to Gerry on his new podcast recently, we were trying to figure out why web performance is in such a woeful state. I mused that there may be a gap. Perhaps designers think it’s a technical problem and developers think it’s a design problem. I guess you could try to bridge this gap by having someone whose job is to focus entirely on performance. But I suspect the better—but harder—solution is to create a shared culture of performance, of the kind Lara wrote about in her book:

Performance is truly everyone’s responsibility. Anyone who affects the user experience of a site has a relationship to how it performs. While it’s possible for you to single-handedly build and maintain an incredibly fast experience, you’d be constantly fighting an uphill battle when other contributors touch the site and make changes, or as the Web continues to evolve.

I suspect there’s a similar ownership gap at play when it comes to the ubiquitous obtrusive overlays that are plastered on so many websites these days.

Kirill Grouchnikov recently published a gallery of screenshots showcasing the beauty of modern mobile websites:

There are two things common between the websites in these screenshots that I took yesterday.

  1. They are beautifully designed, with great typography, clear branding, all optimized for readability.
  2. I had to install Firefox, Adblock Plus and uBlock Origin, as well as manually select and remove additional elements such as subscription overlays.

The web can be beautiful. Except it’s not right now.

How is this dissonance possible? How can designers and developers who clearly care about the user experience be responsible for unleashing such user-hostile interfaces?

PM/Legal/Marketing made me do it

I get that. But surely the solution can’t be to shrug our shoulders, pass the buck, and say “not my job.” Somebody designed each one of those obtrusive overlays. Somebody coded up each one and pushed them into production.

It’s clear that this is a problem of communication and understanding, rather than a technical problem. As always. We like to talk about how hard and complex our technical work is, but frankly, it’s a lot easier to get a computer to do what you want than to convince a human. Not least because you also need to understand what that other human wants. As Danielle says:

Recognising the gaps and overlaps is only half the battle. If we apply tools to a people problem, we will only end up moving the problem somewhere else.

Some issues can be solved with better tools or better processes. In most of our workplaces, we tend to reach for tools and processes by default, because they feel easier to implement. But as often as not, it’s not a technology problem. It’s a people problem. And the solution actually involves communication skills, or effective dialogue.

So let’s say it is someone in the marketing department who is pushing to have an obtrusive newsletter sign-up form shoved in the user’s face. Talk to them. Figure out what their goals are—what outcome they’re hoping to get to. If they don’t seem to understand the user-experience implications, talk to them about that. But it needs to be a two-way conversation. You need to understand what they need before you start telling them what you want.

I realise that makes it sound patronisingly simple, and I know that in actuality it’s a sisyphean task. It may be that genuine understanding between people is the wickedest of design problems. But even if this problem seems insurmountable, at least you’d be tackling the right problem.

Because the web can’t survive like this.

Telling the story of performance

At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

Measuring performance is important. Communicating the story of performance is equally important.

Cat encounters

The latest episode of Ariel’s excellent Offworld video series (and podcast) is all about Close Encounters Of The Third Kind.

I have such fondness for this film. It’s one of those films that I love to watch on a Sunday afternoon (though that’s true of so many Spielberg films—Jaws, Raiders Of The Lost Ark, E.T.). I remember seeing it in the cinema—this would’ve been the special edition re-release—and feeling the seat under me quake with the rumbling of the musical exchange during the film’s climax.

Ariel invited Rose Eveleth and Laura Welcher on to discuss the film. They spent a lot of time discussing the depiction of first contact communication—Arrival being the other landmark film on this topic.

This is a timely discussion. There’s a new book by Daniel Oberhaus published by MIT Press called Extraterrestrial Languages:

If we send a message into space, will extraterrestrial beings receive it? Will they understand?

You can read an article by the author on The Guardian, where he mentions some of the wilder ideas about transmitting signals to aliens:

Minsky, widely regarded as the father of AI, suggested it would be best to send a cat as our extraterrestrial delegate.

Don’t worry. Marvin Minsky wasn’t talking about sending a real live cat. Rather, we transmit instructions for building a computer and then we can transmit information as software. Software about, say, cats.

It’s not that far removed from what happened with the Voyager golden record, although that relied on analogue technology—the phonograph—and sent the message pre-compiled on hardware; a much slower transmission rate than radio.

But it’s interesting to me that Minsky specifically mentioned cats. There’s another long-term communication puzzle that has a cat connection.

The Yucca Mountain nuclear waste repository is supposed to store nuclear waste for 10,000 years. How do we warn our descendants to stay away? We can’t use language. We probably can’t even use symbols; they’re too culturally specific. A think tank called the Human Interference Task Force was convened to agree on the message to be conveyed:

This place is a message… and part of a system of messages… pay attention to it! Sending this message was important to us. We considered ourselves to be a powerful culture.

This place is not a place of honor…no highly esteemed deed is commemorated here… nothing valued is here.

What is here is dangerous and repulsive to us. This message is a warning about danger.

A series of thorn-like threatening earthworks was deemed the most feasible solution. But there was another proposal that took a two-pronged approach with genetics and folklore:

  1. Breed cats that change colour in the presence of radioactive material.
  2. Teach children nursery rhymes about staying away from cats that change colour.

This is the raycat solution.

Toast

Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable.

First off, this all kicked off with the announcement of “intent to implement”. That makes it sound like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing and this is something that will hopefully be addressed.

Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. I’m assuming that, should the experiment prove successful, it’s not a foregone conclusion that the final element will be called toast (minus the sexually-transmitted-disease prefix). If this turns out to be a useful feature, there will surely be a discussion between implementers about the naming of the finished element.

This is the ideal candidate for a web component. It makes total sense to create a custom element along the lines of std-toast. At first I was confused about why this was happening inside of a browser instead of first being created as a standalone web component, but it turns out that there’s been a fair bit of research looking at existing implementations in libraries and web components. So this actually looks like a good example of paving an existing cowpath.
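
To be clear, the snippet below is not the std-toast proposal’s API (that lives in the proposal’s repo); it’s just a hand-rolled sketch of what a toast-style custom element looks like in general, to show why this pattern is such a natural fit for web components:

    <script>
      // A purely illustrative custom element, not the std-toast proposal itself.
      class MyToast extends HTMLElement {
        connectedCallback() {
          // Announce the message politely to assistive technology
          this.setAttribute('role', 'status');
          // Dismiss itself after a few seconds
          setTimeout(() => this.remove(), 5000);
        }
      }
      customElements.define('my-toast', MyToast);

      // Usage:
      const toast = document.createElement('my-toast');
      toast.textContent = 'Your changes have been saved.';
      document.body.append(toast);
    </script>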

But it didn’t come across that way. The timing of announcements felt like this was something that was happening without prior discussion. Terence Eden writes:

It feels like a Google-designed, Google-approved, Google-benefiting idea which has been dumped onto the Web without any consideration for others.

I know that isn’t the case. And I know how many dedicated people have worked hard on this proposal.

Adrian Roselli also remarks on the optics of this situation:

To be clear, while I think there is value in minting a native HTML element to fill a defined gap, I am wary of the approach Google has taken. A repo from a new-to-the-industry Googler getting a lot of promotion from Googlers, with Googlers on social media doing damage control for the blowback, WHATWG Googlers handling questions on the repo, and Google AMP strongly supporting it (to reduce its own footprint), all add up to raise alarm bells with those who advocated for a community-driven, needs-based, accessible web.

Dave Cramer made a similar point:

But my concern wasn’t so much about the nature of the new elements, but of how we learned about them and what that says about how web standardization works.

So there’s a general feeling (outside of Google) that there’s something screwy here about the order of events. A lot of discussion and research seems to have happened in isolation before announcing the intent to implement:

It does not appear that any discussions happened with other browser vendors or standards bodies before the intent to implement.

Why is this a problem? Google is seeking feedback on a solution, not on how to solve the problem.

Going back to my early confusion about putting a web component directly into a browser, this question on Discourse echoes my initial reaction:

Why not release std-toast (and other elements in development) as libraries first?

It turns out that std-toast and other in-browser web components are part of an idea called layered APIs. In theory this is an initiative in the spirit of the extensible web manifesto.

The extensible web movement focused on exposing low-level APIs to developers: the fetch API, the cache API, custom elements, Houdini, and all of those other building blocks. Layered APIs, on the other hand, focuses on high-level features …like, say, an HTML element for displaying “toast” notifications.

Layered APIs is an interesting idea, but I’m worried that it could be used to circumvent discussion between implementers. It’s a route to unilaterally creating new browser features first and standardising after the fact. I know that’s how many features already end up in browsers, but I think that the sooner that authors, implementers, and standards bodies get a say, the better.

I certainly don’t think this is a good look for Google given the debacle of AMP’s “my way or the highway” rollout. I know that’s a completely different team, but the external perception of Google amongst developers has been damaged by the AMP project’s anti-competitive abuse of Google’s power in search.

Right now, a lot of people are jumpy about Microsoft’s move to Chromium for Edge. My friends at Microsoft have been reassuring me that while it’s always a shame to reduce browser engine diversity, this could actually be a good thing for the standards process: Microsoft could theoretically keep Google in check when it comes to what features are introduced to the Chromium engine.

But that only works if there is some kind of standards process. Layered APIs in general—and std-toast in particular—hint at a future where a single browser vendor can plough ahead on their own. I sincerely hope that’s a misreading of the situation and that this has all been an exercise in miscommunication and misunderstanding.

Like Dave Cramer says:

I hear a lot about how anyone can contribute to the web platform. We’ve all heard the preaching about incubation, the Extensible Web, working in public, paving the cowpaths, and so on. But to an outside observer this feels like Google making all the decisions, in private, and then asking for public comment after the feature has been designed.

Shadows and smoke

When I wrote about a year of learning with Charlotte, I made an off-hand remark in parentheses:

Hiring Charlotte was an experiment for Clearleft—could we hire someone in a “junior” position, and then devote enough time and resources to bring them up to a “senior” level? (those quotes are air quotes—I find the practice of labelling people or positions “junior” or “senior” to be laughably reductionist; you might as well try to divide the entire web into “apps” and “sites”).

It breaks my heart to see so many of my colleagues prefix their job titles with “senior” (not least because it becomes completely meaningless when every single Visual Designer is also a “Senior Visual Designer”).

I remember being at a conference after-party a few years ago chatting to a very talented front-end developer. She wasn’t happy with where she was working. I advised her to get a job somewhere else. After all, she lived and worked in San Francisco, where her talents are in high demand. But she was hesitant.

“They’ve promised me that in a few more months, my job title would become ‘Senior Developer’”, she said. “Ah, right,” I said, “and what happens then?” “Well”, she said, “I get to have the word ‘senior’ on my resumé.” That was it. No pay rise. No change in responsibilities. Just a word on a piece of paper.

I had always been suspicious of job titles, but that exchange put me over the edge. Job titles can be downright harmful.

Dan recently wrote about the importance of job titles. I love Dan, but I couldn’t disagree with him more in this instance.

He cites two situations where he believes job titles have value:

Your title tells your colleagues how to interact with you.

No. Talking to your colleagues tells your colleagues how to interact with you. Job titles attempt to short-cut that. They do a terrible job of it.

What you need to know are the verbs that your colleagues are adept in: designing, developing, thinking, communicating, facilitating …all of that gets squashed down into one reductionist noun like “Copywriter” or “Designer”.

At Clearleft, we’ve recently started kicking off projects with an exercise called “Fuzzy Edges” that Boxman has been refining. In it, we look ahead to all the upcoming project roles (e.g. “Who will lead playbacks and demos?”, “Who will run stakeholder interviews?”, “Who will lead design direction?”). Together, everyone on the project comes to a consensus on who has which roles.

It’s really, really important to clarify these roles at the start of each project, and it’s exactly the kind of thing that can’t be summed up in a job title. In fact, the existence of job titles can lead to harmful assumptions like “Oh, I figured you were leading playbacks and demos!” or “Oh, I assumed they were running stakeholder interviews!”, or worse: “Hey, you can’t lead design direction because that’s not in your job title!”

The role assignments can vary hugely from project to project, which is great. People are varied and multi-faceted. Trying to force the same people into the same roles over and over again would be demoralising and counter-productive. I fear that’s exactly what job titles do—they reinforce barriers.

Here’s the second reason Dan gives for the value of job titles:

Your title tells your clients how to interact with you.

Again, no. Talking to your clients tells your clients how to interact with you.

Dan illustrates his point by recounting a tale of deception, demonstrating that a well-placed lie about someone’s job title can mollify the kind of people who place great stock in job titles. That’s not solving the real problem. Again, while job titles might appear to be shortcuts to a shared understanding, they’re actually more like façades covering up trapdoors.

In recounting the perceived value of job titles, there’s an assumption that the titles were arrived at fairly. If someone’s job title is “Senior Designer” and someone’s job title is “Junior Designer”, then the senior person must be the better, more experienced designer, right?

But that isn’t always the case. And that’s when job titles go from being silly pointless phrases to being downright damaging, causing real harm.

Over on Rands in Repose, there’s a great post called Titles are Toxic. His experience mirrors mine:

Never in my life have I ever stared at a fancy title and immediately understood the person’s value. It took time. I spent time with those people — we debated, we discussed, we disagreed — and only then did I decide: “This guy… he really knows his stuff. I have much to learn.” In Toxic Title Douchebag World, titles are designed to document the value of an individual sans proof. They are designed to create an unnecessary social hierarchy based on ego.

See? There’s no shortcut for talking to people. Job titles are an attempt to cut out one of the most important aspects of humans working together.

The unspoken agreement was that these titles were necessary to map to a dimwitted external reality where someone would look at a business card and apply an immediate judgement on ability based on title. It’s absurd when you think about it – the fact that I’d hand you a business card that read “VP” and you’d leap to the immediate assumption: “Since his title is VP, he must be important. I should be talking to him”. I understand this is how a lot of the world works, but it’s precisely this type of reasoning that makes titles toxic.

So it’s not even that I think that job titles are bad at what they’re trying to do …I think that what they’re trying to do is bad.

Communication for America

Mandy has written a great article about making remote teams work. It’s an oft-neglected aspect of working on a product when you’ve got people distributed geographically.

But remote communication isn’t just something that’s important for startups and product companies—it’s equally important for agencies when it comes to client communication.

At Clearleft, we occasionally work with clients right here in Brighton, but that’s the exception. More often than not, the clients are based in London, or somewhere else in the UK. In the case of Code for America, they’re based in San Francisco—that’s eight or nine timezones away (depending on the time of year).

As it turned out, it wasn’t a problem at all. In fact, it worked out nicely. At the end of every day, we had a quick conference call, with two or three people at our end, and two or three people at their end. For us, it was the end of the day: 5:30pm. For them, the day was just starting: 9:30am.

We’d go through what we had been doing during that day, ask any questions that had cropped up over the course of the day, and let them know if there was anything we needed from them. If there was anything we needed from them, they had the whole day to put it together while we went home. The next morning (from our perspective), it would be waiting in our in/drop-boxes.

Meanwhile, from the perspective of Code for America, they were coming into the office every morning and starting the day with a look over our work, as though we had been beavering away throughout the night.

Now, it would be easy for me to extrapolate from this that this way of working is great and everyone should do it. But actually, the whole timezone difference was a red herring. The real reason why the communication worked so well throughout the project was because of the people involved.

Right from the start, it was clear that, because of time and budget constraints, we’d have to move fast. We wouldn’t have the luxury of debating everything in detail and getting every decision signed off. Instead we had a sort of “rough consensus and running code” approach that worked really well. It worked because everyone understood that was what was happening—if just one person was expecting a more formalised structure, I’m sure it wouldn’t have gone quite so smoothly.

So we provided materials in whatever level of fidelity made sense for the idea under discussion. Sometimes that was a quick sketch. Sometimes it was a fairly high-fidelity mockup. Sometimes it was a module of markup and CSS. Whatever it took.

Most of all, there was a great feeling of trust on both sides of the equation. It was clear right from the start that the people at Code for America were super-smart and weren’t going to make any outlandish or unreasonable requests of Clearleft. Instead they gave us just the right amount of guidance and constraints, while trusting us to make good decisions.

At one point, Jon was almost complaining about not getting pushback on his designs. A nice complaint to have.

Because of the daily transatlantic “stand up” via teleconference, there was a great feeling of inevitability to the project as it came together from idea to execution. Inevitability doesn’t sound like a very sexy attribute of a web project, but it’s far preferable to the kind of project that involves milestones of “big reveals”—the Mad Men approach to project management.

Oh, and we made sure that we kept those transatlantic calls nice and short. They never lasted longer than 10 or 15 minutes. We wanted to avoid the many pitfalls of conference calls.