Link tags: computer

She Built a Microcomputer Empire From Her Suburban Home

The story of Lore Harp McGovern is like something from Halt and Catch Fire.

Capt. Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part One, 1982) - YouTube

Wow! Grace Hopper has always been a hero to me, but I had no idea she was such a fantastic presenter. She’s completely engaging, with the timing and deadpan delivery of a stand-up comedian at times.

Your brain does not process information and it is not a computer | Aeon Essays

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
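For contrast, the pipeline the essay describes is trivially real on the computer side. Here’s a minimal TypeScript sketch of it; the buffer and key names are invented for illustration:

```typescript
// Computers really do work this way: encode a stimulus as a
// representation, hold it in a short-term buffer, copy it into
// long-term storage, and later retrieve an exact copy by key.
const shortTermBuffer: string[] = [];
const longTermMemory = new Map<string, string>();

shortTermBuffer.push("representation of a visual stimulus");

// Transfer from the buffer into long-term storage...
longTermMemory.set("memory-0001", shortTermBuffer.pop()!);

// ...and retrieve it, bit for bit. The essay's point is that brains
// do nothing of the sort.
console.log(longTermMemory.get("memory-0001"));
```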

A holy communion | daverupert.com

You and I are partaking in something magical.

AI isn’t the app, it’s the UI - Stack Overflow Blog

In some ways, the fervor around AI is reminiscent of blockchain hype, which has steadily cooled since its 2021 peak. In almost all cases, blockchain technology serves no purpose but to make software slower, more difficult to fix, and a bigger target for scammers. AI isn’t nearly as frivolous—it has several novel use cases—but many are rightly wary of the resemblance. And there are concerns to be had; AI bears the deceptive appearance of a free lunch and, predictably, has non-obvious downsides that some founders and VCs will insist on learning the hard way.

This is a good level-headed overview of how generative language model tools work.

If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. That’s what AI does. That’s the whole story.

There’s very practical advice on deciding where and when these tools make sense:

The sweet spot for AI is a context where its choices are limited, transparent, and safe. We should be giving it an API, not an output box.
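That sweet spot can be made concrete. Here’s a minimal TypeScript sketch of “an API, not an output box”; the action names and allow-lists are hypothetical. Whatever the model emits gets parsed into a small menu of safe actions, and anything else degrades to a no-op:

```typescript
// A sketch of giving a model an API instead of an output box: its
// choices are limited to a fixed, transparent menu of safe actions.
// The action names and allow-lists here are hypothetical.

type Action =
  | { kind: "addLabel"; label: string }
  | { kind: "assignReviewer"; reviewer: string }
  | { kind: "noop" };

const allowedLabels = new Set(["bug", "feature", "question"]);
const allowedReviewers = new Set(["alice", "bob"]);

// Parse the model's raw output, admitting only whitelisted choices.
function parseAction(raw: string): Action {
  try {
    const data = JSON.parse(raw);
    if (data.kind === "addLabel" && allowedLabels.has(data.label)) {
      return { kind: "addLabel", label: data.label };
    }
    if (data.kind === "assignReviewer" && allowedReviewers.has(data.reviewer)) {
      return { kind: "assignReviewer", reviewer: data.reviewer };
    }
  } catch {
    // Malformed JSON falls through to the safe default below.
  }
  // Anything outside the menu degrades safely to a no-op.
  return { kind: "noop" };
}

// A hallucinated action is rejected rather than executed:
console.log(parseAction('{"kind": "deleteDatabase"}')); // { kind: "noop" }
```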

The Technium: Dreams are the Default for Intelligence

I feel like there’s a connection here between what Kevin Kelly is describing and what I wrote about guessing (though I think he might be conflating consciousness with intelligence).

This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.

Automate Mindfully | Jorge Arango

But a machine for writing isn’t the same as a machine that writes for you. A machine for viewing photos isn’t the same thing as a machine that travels in your stead. A machine for sketching isn’t the same thing as a machine that designs. I love doing these things and doing them more efficiently. But I have no desire to have them done for me. It’s a key distinction: Do not automate the work you are engaged in, only the materials.

The computer built to last 50 years | ploum.net

A fascinating look at what it might take to create a truly sustainable long-term computer.

Let’s Not Dumb Down the History of Computer Science | Opinion | Communications of the ACM

I don’t think I agree with Don Knuth’s argument here from a 2014 lecture, but I do like how he sets out his table:

Why do I, as a scientist, get so much out of reading the history of science? Let me count the ways:

  1. To understand the process of discovery—not so much what was discovered, but how it was discovered.
  2. To understand the process of failure.
  3. To celebrate the contributions of many cultures.
  4. Telling historical stories is the best way to teach.
  5. To learn how to cope with life.
  6. To become more familiar with the world, and to know how science fits into the overall history of mankind.

Top Secret Rosies

I need to seek out this documentary, Top Secret Rosies: The Female Computers of World War II.

It would pair nicely with another film, The ENIAC Programmers Project.

Hyperland, Intermedia, and the Web That Never Was — Are.na

In 1990, the science fiction writer Douglas Adams produced a “fantasy documentary” for the BBC called Hyperland. It’s a magnificent paleo-futuristic artifact, rich in sideways predictions about the technologies of tomorrow.

I remember coming across a repeating loop of this documentary playing in a dusty corner of a Smithsonian museum in Washington DC. Douglas Adams wasn’t credited but I recognised his voice.

Hyperland aired on the BBC a full year before the World Wide Web. It is a prophecy waylaid in time: the technology it predicts is not the Web. It’s what William Gibson might call a “stub,” evidence of a dead node in the timeline, a three-point turn where history took a pause and backed out before heading elsewhere.

Here, Claire L. Evans uses Adams’s documentary as an opening to dive into the history of hypertext, starting with Bush’s Memex, Nelson’s Xanadu and Engelbart’s oNLine System. But then she describes some lesser-known hypertext systems:

In 1985, the students at Brown who encountered Intermedia had never seen anything like it before in their lives. The system laid a world of information at their fingertips, saved them hours at the library, and helped them work through tangles of thought.

Beyond Smart Rocks

Claire L. Evans on computational slime molds and other forms of unconventional computing that look beyond silicon:

In moments of technological frustration, it helps to remember that a computer is basically a rock. That is its fundamental witchcraft, or ours: for all its processing power, the device that runs your life is just a complex arrangement of minerals animated by electricity and language. Smart rocks.

Living in Alan Turing’s Future | The New Yorker

Portrait of the genius as a young man.

It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.

Chaos Design: Before the robots take our jobs, can we please get them to help us do some good work?

This is a great piece! It starts with a look back at some of the great minds of the nineteenth century: Herschel, Darwin, Babbage and Lovelace. Then it brings us, via JCR Licklider, to the present state of the web before looking ahead to what the future might bring.

So what will the life of an interface designer be like in the year 2120? or 2121 even? A nice round 300 years after Babbage first had the idea of calculations being executed by steam.

I think there are some missteps along the way (I certainly don’t think that inline styles—AKA CSS in JS—are necessarily a move forwards) but I love the idea of applying chaos engineering to web design:

Think of every characteristic of an interface you depend on to not ‘fail’ for your design to ‘work.’ Now imagine if these services were randomly ‘failing’ constantly during your design process. How might we design differently? How would our workflows and priorities change?
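One way to play with that in the browser is a service worker that fails a random fraction of requests while you design. This is only a sketch of the idea, not anything from the article; the 25% rate is an arbitrary assumption, and it assumes TypeScript’s “webworker” lib for the FetchEvent type.

```typescript
// A chaos-design sketch: randomly fail requests so you can watch the
// interface degrade during the design process itself.

const FAILURE_RATE = 0.25; // arbitrary: fail roughly one request in four

self.addEventListener("fetch", (event) => {
  const fetchEvent = event as FetchEvent;
  if (Math.random() < FAILURE_RATE) {
    // Pretend whatever service this request depends on is down.
    fetchEvent.respondWith(
      new Response("Service unavailable (chaos design)", { status: 503 })
    );
  }
  // Otherwise do nothing: the request falls through to the network.
});
```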

Norbert Wiener’s Human Use of Human Beings is more relevant than ever.

What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.

1969 & 70 - Bell Labs

Pictures of computers (of the human and machine varieties).

Oh Hello Ana - Colours of 2018

I love this idea of comparing human colour choices to those of a computer:

I decided to do two things: the top three most used colours of the photo decided by “a computer” and my hand picked choices. This method ended up revealing a couple of things about me.
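The “computer” half of that comparison is a small exercise in pixel counting. Here’s a minimal TypeScript sketch, assuming the photo has already been drawn to a canvas and read out as ImageData; real photos would probably want their colours quantised first, since near-identical shades otherwise split the vote.

```typescript
// Count every pixel's colour in an ImageData (e.g. from
// canvasContext.getImageData) and return the three most used.
function topThreeColours(image: ImageData): string[] {
  const counts = new Map<string, number>();
  const { data } = image; // RGBA bytes, four per pixel

  for (let i = 0; i < data.length; i += 4) {
    const key = `rgb(${data[i]}, ${data[i + 1]}, ${data[i + 2]})`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }

  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most frequent first
    .slice(0, 3)
    .map(([colour]) => colour);
}
```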

I also love that this was the biggest obstacle to finding representative imagery:

I wanted this to be an exciting task but instead I only found repeated photos of my cat.

This is the story of the ZX81…

This could’a, should’a, would’a been a great blog post.

March 1981: Shakin’ Stevens was top of the charts, Tom Baker was leaving Doctor Who and Clive Sinclair was bringing computers to the masses. Britain was moving into a new age, and one object above all would herald its coming.

Infovore » Pouring one out for the Boxmakers

This is a rather beautiful piece of writing by Tom (especially the William Gibson bit at the end). This got me right in the feels:

Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things… have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That’s it: back to single-serving websites for single-serving use cases.

A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it’s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.