Leaning Into AI: Lessons from Technology’s Past

As I sit here watching a younger developer on YouTube voice concerns that AI-assisted programming will erode our ability to understand the underlying systems, I can’t help but feel like I’ve seen this before.

Every time we abstract functionality into a commonly used library, or a feature becomes part of a language’s standard toolkit, we step one layer further away from the bare metal. It’s no different from when bulky discrete circuits were reduced to integrated circuits on tiny silicon chips. We abstract complex, cumbersome processes into commoditized components, then build even larger abstractions on top of them to solve new, more complex problems.

That’s the nature of technological progress. Abstraction on top of abstraction.

The same worries surfaced during the rise of the Internet and Google: that we’d lose the ability to think critically, to remember, or to work without constant access to external resources. Our relationship with knowledge has certainly changed, but it’s hard to argue that we haven’t built even more powerful systems as a result.

I’ll admit, the acceleration of technology is deeply concerning at times. Nuclear weaponry was probably the first real sign that innovation could outstrip our ability to fully grasp its consequences. AI is another exponential curve, and one that carries genuine risks.

But I don’t believe there’s any realistic path backward.

If you’re comfortable with how radio, television, modern medicine, the Internet, and Google have reshaped human capability, even while introducing new vulnerabilities, then I think it’s time to lean into AI with the same mindset. It’s simply the next layer of abstraction that we’re building on.

Progress has always been uncomfortable. That’s how we know it’s happening.