The AI developer job market is genuinely strange right now. There's a mismatch between what gets talked about (foundation model research, AGI, billion-parameter training runs) and what actually gets hired for (shipping AI features, integrating LLM APIs, building reliable agent systems).
Here's what I'm actually seeing.
What's Getting Hired
AI product engineers. Not researchers, not ML scientists — engineers who can take an LLM API and ship something with it. Build the RAG pipeline, integrate the agent, wire up the tooling, deploy it, monitor it. This is the largest hiring category by volume.
The skills: standard backend engineering plus LLM API familiarity, prompt engineering judgment, vector database experience, and the operational awareness to make AI systems reliable in production.
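The RAG-pipeline work this role centers on fits in a few lines. Here is a toy sketch, not a production system: the bag-of-words "embedding" and the `llm` callable are stand-ins for a real embedding model and LLM API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query: str, docs: list[str], llm) -> str:
    # Stuff retrieved context into the prompt and call the model.
    context = "\n".join(retrieve(query, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```

The production version swaps in a real embedding model and vector database, but the shape — embed, retrieve, assemble prompt, call model — is the same.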
Agent systems engineers. As companies get serious about autonomous agents, they need engineers who can design agent architectures that actually work reliably. This requires understanding orchestration, tool design, error handling, human-in-the-loop patterns, and observability.
This role is newer and less well-defined than AI product engineer, but it's growing fast.
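The orchestration, tool-dispatch, and error-handling concerns above can be sketched as a minimal agent loop. The `model` callable and its action format (`{"tool": ..., "args": ...}` or `{"final": ...}`) are hypothetical stand-ins for a real LLM client and its tool-calling protocol:

```python
def run_agent(model, tools: dict, task: str, max_steps: int = 5):
    """Minimal agent loop. `model` inspects the history and returns either
    {"tool": name, "args": {...}} or {"final": answer}."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(history)
        if "final" in action:
            return action["final"]
        name = action.get("tool")
        if name not in tools:
            # Feed the error back to the model instead of crashing,
            # so it has a chance to recover on the next step.
            history.append({"role": "tool", "content": f"error: unknown tool {name}"})
            continue
        try:
            result = tools[name](**action.get("args", {}))
        except Exception as exc:
            result = f"error: {exc}"
        history.append({"role": "tool", "content": str(result)})
    # The step cap is the crudest reliability guard: agents must not loop forever.
    raise RuntimeError("agent exceeded max_steps without finishing")
```

Real agent systems add retries, structured tool schemas, human-in-the-loop checkpoints, and tracing, but they are elaborations of this loop.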
ML platform engineers. Companies that run their own fine-tuned models need infrastructure: training pipelines, evaluation frameworks, model deployment and serving, A/B testing for model changes. This is MLOps at scale.
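One small piece of that platform work — A/B testing model changes — is often a deterministic traffic split, so the same user always hits the same variant without the platform storing any assignment state. A sketch (the salt and variant names are illustrative):

```python
import hashlib

def assign_variant(user_id: str, variants: dict[str, float],
                   salt: str = "model-ab-1") -> str:
    """Deterministically map a user to a model variant by hashing.
    `variants` maps variant name -> traffic fraction (should sum to 1.0).
    Changing the salt reshuffles all assignments for a new experiment."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) / 16**64  # uniform in [0, 1)
    cumulative = 0.0
    for name, share in variants.items():
        cumulative += share
        if bucket < cumulative:
            return name
    return name  # guard against float rounding at the top edge
```

Hash-based assignment is stateless and reproducible, which matters when you later need to join model outputs back to experiment arms for evaluation.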
AI tooling developers. Building the tools other developers use to work with AI. SDK development, developer tools, IDE integrations, CLI tools. This is a smaller market but pays well.
What's Not as Hot as It Seems
Prompt engineering as a job title. Every developer who uses LLMs does prompt engineering. It's a skill, not a role. Companies hiring for this often just want a product engineer who understands AI.
Pure RAG implementation. Building a basic RAG pipeline is now well-understood and achievable by most engineers with LLM API experience. It's not differentiated enough to be a specialty.
Fine-tuning without MLOps. "I can fine-tune models" is worth less and less without the infrastructure to serve, version, and iterate on those models in production. Fine-tuning the model is the easy part.
Skills That Actually Matter
Evaluation. The ability to measure whether an AI system is working — not just "does it produce output" but "does it produce correct, safe, consistent output." Building evals, running evals, using eval results to improve systems. This is underrated and undersupplied.
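The core of "building evals, running evals" is a harness that scores system outputs against named checks and keeps the failures for inspection. A minimal sketch — `system`, the case format, and the check functions are all assumptions for illustration:

```python
def run_evals(system, cases: list[dict], checks: dict) -> dict:
    """Run `system` over eval cases and score with named check functions.
    Each check takes (output, case) and returns True on pass.
    Returns per-check pass rates plus the failing cases for debugging."""
    results = {name: {"passed": 0, "failures": []} for name in checks}
    for case in cases:
        output = system(case["input"])
        for name, check in checks.items():
            if check(output, case):
                results[name]["passed"] += 1
            else:
                # Keep the failure, not just the count: failures are
                # what you actually read to improve the system.
                results[name]["failures"].append({"case": case, "output": output})
    return {
        name: {"pass_rate": r["passed"] / len(cases), "failures": r["failures"]}
        for name, r in results.items()
    }
```

Separate checks for correctness, safety, and consistency make it visible which dimension regressed when you change a prompt or model.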
System design for AI. How do you design a system where one component is a probabilistic function (the LLM) that can fail in unpredictable ways? This requires different patterns than deterministic systems, and not enough engineers know them.
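One of those patterns is wrapping the probabilistic call in validation, retries, and a deterministic fallback, so a bad generation degrades the feature instead of crashing it. A hedged sketch (the function names are illustrative, not a library API):

```python
import time

def call_with_guardrails(llm, prompt: str, validate, retries: int = 2,
                         fallback=None, backoff: float = 0.0):
    """Treat the LLM as a component that can fail or return garbage:
    validate every output, retry on failure, then fall back."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            output = llm(prompt)
            if validate(output):
                return output
            # Output came back but failed validation: treat it as a failure.
            last_error = ValueError(f"invalid output: {output!r}")
        except Exception as exc:
            last_error = exc
        time.sleep(backoff * (2 ** attempt))  # exponential backoff between attempts
    if fallback is not None:
        return fallback  # degrade gracefully rather than crash
    raise last_error
```

The key design choice is that validation is mandatory, not optional: in a deterministic system you trust the return value, here you never do.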
Observability for AI systems. Logging LLM calls, tracing multi-step agent runs, identifying where quality degraded, understanding cost drivers. Regular APM tools don't cover this well. Engineers who can instrument AI systems are in demand.
Domain knowledge. The most employable AI developers aren't the ones who know LLMs better than everyone else — they're the ones who combine solid LLM competence with deep knowledge of a specific domain (legal, healthcare, finance, enterprise software).
How to Position Yourself
The positioning question is: what problem can you solve that requires both AI knowledge and something else?
"I can build AI systems" is not a position — it's a capability increasingly shared by many developers.
"I can build reliable AI systems for healthcare document processing, understand the compliance requirements, and have shipped three of them" is a position.
The something else can be:
- A specific industry or domain
- A specific technical specialty (distributed systems, databases, mobile)
- A specific scale (startup speed vs. enterprise reliability)
- A specific workflow (DevOps, security, data engineering)
The developers I see doing the best are combining AI skills with a specific domain or specialty, not trying to be pure AI generalists.
The Honest Picture
There's real demand, and there will be more. But it's not "any AI keyword on your resume gets you hired" anymore. The bar for what counts as genuine AI engineering competence is rising fast. Companies have been burned by AI projects that didn't ship and aren't hiring for potential anymore — they want demonstrated experience with production AI systems.
Build things. Ship them. Write about what you learned. That's the portfolio that gets you hired.