Evolving with AI
Principles we need to adapt to the Collective Intelligence era.
In early 2024 I found myself prototyping an AI system that foreshadows my own replacement. It was a moment I figured would come, just not so quickly.
AI is ideally suited to take on the work of language, specifically the language of computers. As a result, software developers like me are getting an early taste of what happens when an employee is asked to play an active role in automating their own profession.
This moment is critical - we can either follow the headlong AI rush into an uncertain future, or we can formulate a plan for how the AI transition takes place and what values we imbue it with.
For me, the AI transition has come in phases.
Phase 1: The Writer
I first incorporated AI into my development workflow as part of a project for Mozilla's Innovation team. We were building a local AI code assistant designed to preserve privacy, energy efficiency, and model choice.
Back then models could guess the next line of code, but the programmer was still the writer. Most of my coworkers were dabbling in AI, but many were either too proud of their craft or too skeptical of AI's code quality to go all-in. I embraced the assist; it helped our small innovation team keep pace with larger players.
This was no small task. GitHub Copilot dominated, but its pace of change felt constrained under the weight of Microsoft. Meanwhile, upstarts like Windsurf, Cursor, and Anthropic's Claude were quickly carving out market share by using their tools to build their tools. They had millions of dollars in funding and were months ahead of us. Add the limitations of local compute and we were soon outmatched.
We pivoted away from the AI editor, but I kept leaning into AI assistance.
Phase 2: The Editor
AI in the toolkit was a relief: fewer dead ends, fewer Stack Overflow walks of shame.
I was no longer a writer - I was an editor, evaluating code, fixing bugs, renaming, reordering, planning. The bosses were impressed. I felt smart. The collaboration felt symbiotic. I needed Claude, and for a while Claude needed me.

Benchmark scores of AI models in different tasks over time. Source: Our World in Data
Phase 3: The Architect
Within two months, AI's share of my output jumped from 35% to 90%. The key skill shifted to being an "AI whisperer": learning how to talk to models in ways that produced better results. Prompt engineering - a term I'd once scoffed at - was making a significant difference.
Writing specs for Claude improved my thinking. I collected rules of thumb: "ALWAYS minimize client-side state" and "REMEMBER TO validate user assumptions." When AI went awry, I found threatening to switch models was effective - or at least therapeutic.
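Those rules of thumb eventually stopped living in my head and moved into a project rules file that the assistant reads on every run. Here's a minimal sketch of what such a file can look like - the filename and the specific rules are illustrative, not my actual setup:

```markdown
<!-- .windsurfrules: project-level rules the AI assistant loads automatically.
     Filename and contents are a hypothetical example. -->

# Project rules

- ALWAYS minimize client-side state; derive UI from server data where possible.
- REMEMBER TO validate user assumptions before implementing a spec.
- Prefer small, reviewable diffs; one feature per branch.
- When uncertain, ask a clarifying question instead of guessing.
```

The point is less the specific rules than the habit: writing them down forces you to articulate your standards, and the model stops needing to be reminded of them in every prompt.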
Phase 4: The Task Master
Even the skeptics were using AI regularly now, and you no longer needed clever prompts. Product managers noticed. The more we sped up, the more features got packed into the roadmap.
I was no longer a writer, an editor, or an AI whisperer. I was a taskmaster: line up the work, specify the outcomes, dispatch agents, review results, march. I found I could run multiple tasks at the same time. Some people in my org had half a dozen agents simultaneously doing their bidding.

My Most Recent Windsurf Statistics
I was shipping features as fast as we could design them. It no longer made sense for me to tackle UI-related code; our designer could vibe code it just by describing what she wanted. I invited her to move from Figma comps and Replit prototypes straight to commits in our codebase. That meant I didn't have to perform a rough translation. The foundations I needed just appeared in the codebase, and they were pretty sturdy.
On one hand, I was sad to lose a part of the job I'd built my career on - fleshing out coded interfaces and creating interactions that felt nice. On the other hand, this meant I could focus on more complex tasks. Of course, those tasks ushered in an existentially unnerving new identity for me. Out with the "creative coder", in with the "AI developer".
Phase 5: The Bewildered Button Pusher
My code contributions on GitHub increased 5x from 2024 to 2025. But that graph is dimming a bit. As the codebase grows more complex, oversight and review consume more of my time.

AI's speed and quality remain contested; some studies claim it can slow developers down. For my part, while I certainly feel more productive, at times AI and I miss the mark. One lapse in oversight on a botched AI feature cost me a week untangling its scattered mistakes.
The blind acceptance of AI output is what researchers call "cognitive offloading". A good friend and digital artist, Peter Burr, described the experience:
"I am no longer building a world through layered invention. I am curating an overflow of possibilities, generating meaning from the medium's excess."
I'm worried about how such heavy AI usage affects my own feelings of enjoyment and self-worth. If I'm no longer a writer of code, who am I? Just an AI button pusher? What's the value in that?
Less and less, monetarily, according to a freelancer and former coworker:
"hourly rates seem to be plummeting over the last couple of years. I'm still charging $100 an hour but lots of places are like 'that is pretty high'. I saw a posting for a Senior Dev at [a major travel service] that was only $60 an hour. I honestly don't know how much longer I'll be able to pull it off for! I don't think there's gonna be a market for 50 year old front-end devs soon ☹️."
Money aside, the rules of development are different now, and the identity I built as a "creative coder" no longer carries as much weight.
Phase 6: The Adaptive Worker
The clearest thing to me now is that I, along with a majority of the workforce, will need to adapt to stay relevant. I changed my job title to AI Engineer and took on thornier, less visible systems work. It is hard.
It's the work I used to go into dreading I'd mess up. But with AI there are no dead ends. And while you can certainly check out mentally and let AI do its thing, I've found that close attention and oversight will save you some real headaches.
I still get existential. What are we giving up with AI, what are we gaining, and are we okay with it? Am I becoming obsolete? Am I complicit in driving forward an unstoppable technological leviathan that will swallow jobs and spit out societal idiocracy?
The Implications of AI
First, to be clear, America is all in on AI.
AI companies accounted for 80 percent of the gains in US stocks so far in 2025. And the investments are even bigger. By one estimate, Meta, Amazon, Microsoft, Google, and Tesla will by the end of this year have collectively spent $560 billion on AI-related capital expenditures, despite having brought in just $35 billion in AI-related revenue.

That investment will inevitably come at the cost of jobs - up to 300 million of them by 2030. Erik Lortie calls this "The Great Hollowing":
"First, the stack makes one talented human 10x more productive. This is praised. Margins improve. Then someone asks why that one human is still on payroll when the workflow can be automated. This is how the middle class gets hollowed out - not by collapse, but by compliance."
Enter the alarmists, like Anthony Moser, "railing about the environmental harms, cognitive harms, racist output, the problems with consent and copyright, disinformation, surveillance, exploitation of workers, how people think it makes them faster when it [possibly] makes them slower."
So what to do?

from The Oatmeal, by Matthew Inman
Banning AI would be like banning technology. AI is incredibly enabling: anyone can write code, build a business, or make a Shrimp Jesus. Whatever can be built with AI is being built.

Erik Hoel observes that "we are not witnessing the end of capitalism, but the early formation of a new phase: autocapitalism, in which automation, artificial intelligence, and robotics are set to redefine the relationship between labour, capital, and production."
That evolution is happening faster than the infrastructure we erect to house it, power it, and keep it in check. Our atomic creations are smashing together at particle-accelerator speed; it will take time to understand the new elements we've created. We need principles, not panic.
Establishing Principles
Hereâs a working set.
We Must Adapt
Rapid change will require an adaptive mindset. Weâre moving from deterministic systems to probabilistic ones where failure is not a breakdown of design but a necessary condition. What matters now is the ability to operate coherently in an environment of continuous volatility.
We Must Act with Intention
To adapt, we'll need both patience and presence. Tools accelerate; it's up to us to decelerate. Some of us will find slack in our days. Use it: read the diffs, learn from the model, think ahead, give time to your coworkers. Take fewer actions, and make them more meaningful. Now is the time to put in the habits, processes, guardrails, and incentives we want, in order to create a sustainable work culture.
Handle the human work and let AI handle the busywork.
We Must Work Together
Simone Weil called attention the "rarest and purest form of generosity". I think of attention as compost for community: if you only water what comes to the surface, the soil that sustains it erodes. When attention goes to humans, though, it builds healthy roots. Right now we're flooding the topsoil - that outer ring of social media that is growing increasingly parasocial - or tending to our own private plots even as they shrink around us.
The middle ring of community - the place of shared work, not just shared attention - is drying up. Young Americans spend around 30% less time socializing in person than they did two decades ago. In a recent Pew study, half of respondents said AI will worsen people's ability to form meaningful relationships.

The hollowing out of social life isn't just cultural; it's structural. We've allowed the architecture of our digital world to be defined almost entirely by private incentives, not public ones. If attention has become our default currency, it's because we've built no shared system to value anything else.
We Must Create Better Civic Structures

Mapping the Post Naive Era, Mozilla
AI now functions like a public utility, inseparable from the electricity it consumes. Yet unlike electricity, it has no national standards for algorithmic transparency or model auditing; no federal agency charged with monitoring large-scale AI deployments; no universal curriculum teaching citizens how to use these systems to shape their lives. Governance remains fragmented across corporate policy pages and voluntary ethics boards. As of 2023, only two states had issued official guidance on AI usage in schools.
Globally, comprehensive AI laws are sparse, and the ones that do exist, like the EU AI Act, are slow to implement. Meanwhile, local governments struggle to keep up with bots, predictive policing, and misinformation - technologies that already shape civic life but remain largely unregulated.
We need civic architecture grounded in the public interest - open, collaborative, standardized, distributed. Treat AI like a utility: fund education, mandate oversight, define controls, and acknowledge what can't be controlled.
We Must Acknowledge Collective Intelligence
Humans have always been nodes in a larger network of collective intelligence - natural, social, and technological. Ancient and emergent. Designing for these systems holistically requires not just an understanding of them, but an empathy for them. Knowing when to nurture, intervene, or listen determines how wisely we evolve.
Where Does this Leave Us?
The future of intelligence depends less on what we know and more on how we create meaning together. As knowledge commoditizes, value shifts to places of creative friction - where we must sit with contradiction, balancing intuition and logic until something new emerges. That tension isn't a bug; it's the source code of invention.
AI loves to declare certainty and bury confusion. But confusion is where we live - where innovation happens and empathy grows. As our tools evolve to work faster, our work is to think slower, attend more deeply, and balance acceleration with reflection. That's how we'll evolve with intelligence, and not be buried underneath it.