I lived through COVID without writing a word about it. The spread, the panic, the lockdowns, the strange rituals — masks, hand sanitizer at every door, the six-foot distance that became reflex. Then the vaccines, and the slow return to something ordinary. And then one day, a year or two later, I realized: I had no record of having been there. Something historical had happened around me. I had nothing to show for having witnessed it.
I do not want to make that mistake again.
Something large is happening right now with AI. I can take notes while it happens. That is what this post is.
The timeline
~2022 — First words with ChatGPT
I do not remember the exact date. I was poking at it — asking questions to find where it breaks, watching it give stupid replies, laughing. This is dumb. That was my conclusion. I closed the tab.
~2022–2023 — A blurry image from a machine
I opened a Stable Diffusion web UI and typed a prompt. What came out was laughable. It did not draw what I asked. Faces melted together. Hands had seven fingers.
Fast forward two years: text-to-image works almost flawlessly. No ghost fingers. No obvious artifacts. You can still tell — there is a quality to AI images that gives them away. But there are no blatant mistakes. No unnatural features. The gap between what you ask for and what you get has nearly closed.
Early 2024 — Annoyance, and a policy
People started sending AI-generated pull requests. The code was not bad in obvious ways — it was formatted, described, plausible. But it made mistakes. Subtle ones. The kind a careful reviewer catches and a careless one ships. I was annoyed that people were not being careful, that they were throwing AI output at me to review as if I were a human compiler.
My own policy at the time: get AI to write a function — something small, with a clearly defined interface, where you can judge whether it works. But compose those functions yourself. Orchestrate yourself. Keep the reasoning yours.
I thought this was the paradigm for a while. Write the small pieces with AI. Own the structure. I did not think AI would ever reason about a large codebase and produce working code from a bare requirement. I was wrong, but I did not know that yet.
Early 2024 — Copy, paste, distrust
At work, I was not using any coding agent. The company had not completed the legal process to allow AI tools access to the codebase — privacy concerns, data exposure. I had no access.
Outside of that, I was using ChatGPT for coding the only way available: copy a snippet, paste a question, paste the answer back. It worked sometimes. But I never fully trusted the output. The code looked right. It often was not. I had no way to verify it quickly, and that uncertainty made me hesitant to lean on it. I did not know what coding agents existed or how to evaluate them. ChatGPT and the occasional Google search were the whole toolkit.
August 2024 — Agentic coding, first attempt
I was using GitHub Copilot from inside VS Code — agentic mode, letting it do more than autocomplete. Getting it to actually make a code change required nagging.
June 2025 — Laid off, and leaning in
I was laid off. I used AI more heavily than I ever had before. I exported my entire work history from Jira and fed it to AI to build my resume — not to write it for me, but to surface what mattered from years of tickets and notes. The result is here.
The original two-page resume was not working. I was applying and hearing nothing. We figured out that AI was probably doing the first-pass screening and auto-rejecting me. So I made the resume bigger. I also built workflows to generate cover letters from job descriptions and my own resume. It felt strange to do. Mechanical in a way I had not expected.
And then the eerie part: I was being rejected by AI when applying for jobs. I kept thinking — people are now going to encounter AI as their first point of contact with companies, and the AI may give them nothing useful.
It did not take long. Within months, AI chatbots were everywhere. Most of them are nearly useless. They send you in circles. They deflect. They cannot handle anything outside the script. Maybe they reduce call volume for companies, but as a customer — trying to cancel an airline ticket, trying to do something that requires actual judgment — the chatbot does not help. It gets in the way.
November 2025 — Claude
I switched to Claude. What struck me immediately was how it used tools — and how quickly. With Copilot, or Codex before it, I had to push the agent to make a change. Claude just did it. You asked, it reached for the file, made the change, moved on.
Around the same time, I tried to stop Claude from co-signing my commits. I fiddled with it for a bit, then stopped caring, and Claude has co-signed everything since. Someone told me they actually liked the co-sign: it showed their commits came from Claude, not Copilot.
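For what it's worth, Claude Code does expose a setting for this. If I recall the docs correctly, a snippet like the following in ~/.claude/settings.json turns the co-author trailer off (treat the exact key name as my assumption from memory, not gospel):

```json
{
  "includeCoAuthoredBy": false
}
```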
March 18, 2026 — Ten minutes
I ran Claude Code with --dangerously-skip-permissions. Along with the task, I told it: find the Telegram bot credentials somewhere in this repo, find my Telegram contact from any message history, and send me a message when you are done. I thought the task would take hours. I expected it might surprise me by finishing sooner. I did not expect it to finish the task, hunt down the credentials, find my contact, and actually send me a Telegram message — all in under ten minutes.
March 2026 — The first agent at work
I wrote my first agentic program at work: a system that reads incoming alerts and triages them autonomously. It reads context, makes a judgment, acts. It does something that used to require a human on-call at 3am.
I built it in a week.
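To make the shape of that loop concrete, here is a minimal sketch. Every name in it is invented, and the judgment step is stubbed with rules where the real system would consult a model; it illustrates the architecture, not the implementation.

```python
# Hypothetical sketch of an alert-triage agent: ingest an alert,
# gather context, ask for a judgment, act on it.
from dataclasses import dataclass


@dataclass
class Alert:
    service: str
    message: str
    severity: str  # "info" | "warning" | "critical"


def gather_context(alert: Alert) -> str:
    # A real system would pull in logs, recent deploys, runbooks.
    return f"{alert.service}: {alert.message} (severity={alert.severity})"


def judge(context: str) -> str:
    # Stand-in for the model call that makes the actual judgment.
    # Returns one of: "page", "ticket", "ignore".
    if "critical" in context:
        return "page"
    if "warning" in context:
        return "ticket"
    return "ignore"


def triage(alert: Alert) -> str:
    # The full loop: context -> judgment -> action. A real act() would
    # page the on-call, file a ticket, or suppress the alert.
    return judge(gather_context(alert))


triage(Alert("payments", "error rate above threshold", "critical"))  # "page"
```

The point of the structure is that the human decision ("is this worth waking someone up for?") is isolated in one function, so swapping the rule stub for a model call changes nothing else.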
This post updates as new moments happen. Last updated: March 2026.