So I asked myself: Who exactly is taking these jobs?
I've been using Claude for a while now, so I figured, let's start there. Went to Anthropic's blog, read through a few posts, and one of them was called Building AI agents. Sounded cool.
But then… boom. It started throwing around terms like vector embeddings, ANN search, token budget optimization, memory, planning, tool use, stateful agents, function calls, even simulation-based reasoning.
Yeah, a lot of it just went straight over my head.
So I did what anyone would do: went into research mode.
Reddit, Twitter, Medium. Asked Claude himself. Found a lot of buzz, not much clarity.
Eventually (after dodging a bunch of hypey YouTube thumbnails) I stumbled on Stanford CS229, which gave a high-level explanation of how machine learning and LLMs sort of work. At the end of that rabbit hole, someone mentioned another course:
→ Stanford CS336 — Language Modeling from Scratch.
This one's gold. It's a full lecture series on YouTube that builds up the whole idea of LLMs from scratch. No tools, no hacks, no shortcuts — just how things actually work, from first principles.
So yeah. If you're like me and wondering why dev jobs might be going away — this might be a good place to start.
And what have I learned so far?
Honestly? Not much.
Like 60% of it still goes over my head. But that's okay. With every "what is this and why is it like that?" question I throw at ChatGPT, and every dumb thing I pause to Google, I get a little more clarity.
No project to show. No big insight.
Just trying to understand, slowly. And that feels good enough right now.
🧠 Resources I mentioned:
- Anthropic blog on Building AI Agents: https://lnkd.in/d-cR25eJ
- Stanford CS229 – Machine Learning: https://lnkd.in/dByDajeZ
- Stanford CS336 – Language Modeling from Scratch: https://lnkd.in/dzDQa_eY
