An art piece where an electrical pylon is set on its tip in an industrial park, as if it were a lawn dart of the gods

How long has this been going on?

It’s easy to feel like AI* was the domain of science fiction and IBM moonshots until ChatGPT burst into the public consciousness like the Kool-Aid Man. It’s easy, but it’s not correct.

What even is AI?

Language models have been around a lot longer than that, quietly fixing your spelling, your grammar, and the missed semicolons at the ends of your code lines. Is a recommendation engine AI? What about optical character recognition? What about fraud detection? What about Clippy?

I think part of the confusion is that normal humans, technology-obsessed humans, science-fiction writers, advanced statisticians, and several other constituencies are all using the same words to describe different things. For purposes of this post, let’s say that AI* is a software behavior that has a large database of previous actions, the ability to accept correction or scoring, and the capacity to suggest appropriate actions.

Teaching your phone to spell your friend’s name right, even though it’s an unusual spelling, is a manifestation of this definition of AI. Scanning and translating a menu in another language? That’s AI-ish. Generating a list of action items from a recorded meeting? That’s AI.
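To make the three-part definition concrete, here’s a deliberately tiny sketch of the phone-spelling example. The class and names are invented for illustration; no real autocorrect works this simply, but each method maps onto one clause of the definition.

```python
from collections import defaultdict

class ToySuggester:
    """A toy illustration of the three-part definition of AI*:
    stored history, correction/scoring, and suggested actions."""

    def __init__(self):
        # "a large database of previous actions" (toy-sized here):
        # maps what you typed -> counts of what you corrected it to
        self.history = defaultdict(lambda: defaultdict(int))

    def accept_correction(self, typed, corrected):
        # "the ability to accept correction or scoring"
        self.history[typed][corrected] += 1

    def suggest(self, typed):
        # "the capacity to suggest appropriate actions":
        # return the most frequently seen correction, if any
        seen = self.history.get(typed)
        if not seen:
            return typed  # nothing learned yet, leave it alone
        return max(seen, key=seen.get)

phone = ToySuggester()
phone.accept_correction("Caitlin", "Kaitlynne")  # teach the unusual spelling
phone.accept_correction("Caitlin", "Kaitlynne")
print(phone.suggest("Caitlin"))  # Kaitlynne
```

Squint at a spell checker, a recommendation engine, or a fraud detector, and you can find the same three pieces, just with vastly bigger databases and fancier scoring.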

I’m not saying there hasn’t been an explosion of interest and money in the last couple of years, but it’s also not quite as surprising as it looks.

Copilot and Microsoft

I was talking to a friend about GitHub Copilot and how it has built-in metrics dashboards showing how much money you save by using it. Copilot launched as a fully-baked AI product with obvious business value, an excellent market, and a huge data lake. It’s pretty much the ideal combination: a useful, usable product, from a trusted maker, with a huge set of resources and the runway to really make something of itself.

Copilot’s rollout on the rest of the Microsoft ecosystem is a little bumpier, because the data is less structured and predictable, but it still feels like Microsoft is thinking about the short- and long-term effects of what they’re doing.

Obviously, they’ve been thinking about this for a long time, since before their investment in, or even consideration of, OpenAI. The path is old, maybe older than grammar check. If I had to put my finger on it, I’d say it starts with FrontPage.

FrontPage was, in a word, terrible at generating HTML. But it was one of the first mass-market no-code/low-code software products (Netscape also did this, but didn’t charge, or have certification tests). The garbage FrontPage spit out was not human-editable, and iterating on it was… not ok. But a person who didn’t know their ass from “a href” could make a webpage that worked.

If we follow that line through, from FrontPage (and Visual Basic, which was low-code but much less mass-market) to Word’s move to XML file formats, to PowerPoint templates, we see there has always been a drive to make computing easier to sell. Have some macros. Have some templates. Have a little animated assistant asking if it can help. The acquisition of GitHub didn’t change Microsoft; it just worked out well as a way to capture a whole new market of people who needed to get things done.

I feel a little bit like when ChatGPT first released, we were using it to write bad sestinas, and Microsoft was studying the sword. If by sword we mean “provable business value.”

That doesn’t mean they’re going to win outright. There are a lot of people and money and attention in the space, and first-mover advantage is real, but not everything. It is super interesting to watch how the conversation is playing out, with a lot of concern about video spoofing and everyone losing their jobs, while in the meantime this 40-plus-year-old company is just kind of strolling along, making unflashy, unthreatening upgrades to the way corporations work.

Marconi

If you know who Marconi is at all, you think of him as the father of radio. In the most useful, commercial, practical sense, he invented wireless telegraphy and changed the world in the span of 30 years. But he couldn’t have gotten there without Hertz and Righi and Maxwell, and all the pioneers of wired telegraphy. In 1897, he sent the first message we would recognize as deliberate radio communication. In 1912, hundreds of people were saved from the Titanic because her wireless operators could send a message to the Carpathia. Long-distance transmission ranges grew only as cheap, reliable power became available to transmitters (sound familiar?).

By 1922, entertainment broadcasts were available, and by the ’30s, radio transmissions were a part of both public and family life. In one way, the ability to broadcast to millions of people instantly seemed to appear almost overnight. In another way, all the foundations were there, and it just took someone to see the commercial value and rally the industrial base around it.

For more on this story, see Thunderstruck by Erik Larson.

So what?

The industry paroxysm around AI feels all new, and exciting, and a little threatening, but I think that when you really dig into it, you’ll see it’s an acceleration and intensification of capabilities we already had: the same things, faster and louder. It feels like a revolution because it’s widely available, not because it’s all that novel. It may change the world, but lots of things change the world, and the impact of those changes is determined by society, not by the technology.