AI-First is a company culture where everyone turns to AI for every task before trying to solve the problem any other way.1 I’ve got a few problems with the idea of jumping to AI first for everything.2

In 2025, a lot of companies announced they were going “AI-First”. Not everyone is in favor of the idea. Duolingo, for example, came under heavy fire for its AI-First announcement, with most of the backlash accusing the company of replacing contractors and human nuance with AI.3 That’s a worthwhile conversation, but I’m here to consider something else: why the AI-First mindset is bad for business.

AI shouldn’t be thinking for you

The first reason not to use AI for everything is that the output is not always the important part of the job. When someone asks you to write a strategy, for example, producing a nice, well-argued text is the part AI can do well, but it isn’t the part that matters. What matters is thinking through the options and arriving at the solution that is best for the company. That means deeply considering the answer using all the information you have about the state of the market and the company – information the AI doesn’t have and that is relatively hard to give it.

In a task like this, AI is not deeply considering the necessary information, and it isn’t helping you understand why one strategy might make more sense than another. It is just giving you an answer that looks nice. The AI-First result is a document that makes it look like you did the assignment when you actually did not. You might fool your team, or even yourself, into believing AI did a good job, but you didn’t do yours.

AI lacks context

This is not just a problem for high-level documents. Paul Graham argues that “writing is thinking”, so when you outsource any of your writing to AI, you outsource your thinking with it.

I’m not suggesting you should never use AI to correct your grammar, or for tasks like summarizing, formatting, or doing research. Rather, I’m saying that if you use AI to do any core piece of writing for you, you rob yourself of the opportunity to think things through and truly understand, for yourself, the situation you are writing about.4

AI shouldn’t be used to solve problems you can’t solve yourself

The best coders right now are the ones who plan out their work, break it down, and then ask the AI to fill in the pieces. The better you understand a problem, the better you can give AI concrete tasks that act as stepping stones to the finished product. If you ask it to do too much, you end up rewriting large parts and otherwise cleaning up the mess that AI coding tools tend to make.

The lesson is that AI works best at creating boilerplate and filling in details, while planning and organizing are still a challenge for it. In coding, I haven’t seen good success with AI taking over larger projects or doing any work the developers couldn’t have done themselves.
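
To make the “stepping stones” idea concrete, here is a minimal sketch of what a well-scoped request might look like. The CSV-report scenario and function names are hypothetical – my own illustration, not taken from any real project: the human keeps the plan (module layout, signatures, and a test that pins down the expected behavior) and hands the AI only the narrow job of filling in one small body at a time.

```python
# Hypothetical sketch: the human keeps the plan (module layout, function
# signatures, and a test that defines "done"); the AI is only asked to
# fill in one small, easily verifiable body at a time.

import csv
from io import StringIO


def parse_rows(raw_csv: str) -> list[dict[str, str]]:
    """Stepping stone #1 (delegated to AI): parse CSV text into row dicts."""
    return list(csv.DictReader(StringIO(raw_csv)))


def total_by_region(rows: list[dict[str, str]]) -> dict[str, float]:
    """Stepping stone #2 (delegated to AI): sum the 'amount' column per 'region'."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals


def test_total_by_region() -> None:
    # Written by the human up front, so each delegated piece is easy to check.
    raw = "region,amount\nnorth,10\nsouth,5\nnorth,2.5"
    assert total_by_region(parse_rows(raw)) == {"north": 12.5, "south": 5.0}


if __name__ == "__main__":
    test_total_by_region()
    print("ok")
```

Each piece is small enough that the AI’s output can be checked against the test before you move on to the next one, instead of handing over the whole design and hoping for the best.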

Maybe someday AI will improve to the point where we can use it to solve problems we can’t solve ourselves, but for now, if you can solve the problem yourself, AI is good at taking away some of the menial bits of the job. If you can’t do it yourself, you won’t be able to do it with AI, either.5

AI is mediocre

The theme in the examples above is simple: AI is great at doing mediocre work. If you have lots of semi-repetitive work that requires minimal thinking and minimal context, by all means, please use AI.6

But if you need someone to do a task that requires considerable planning or deep context, don’t outsource the job to AI. And definitely don’t outsource the task to AI if the process of producing the work is as important as the end result itself.

A recent study7 by MIT agreed, saying, “AI has already won the war for simple work, 70% prefer AI for drafting emails, 65% for basic analysis. But for anything complex or long-term, humans dominate by 9-to-1 margins.”

What I am doing instead

So how am I using AI? I love AI as a tutor: I ask it to help me think through a problem and to explain things I don’t understand. It’s really nice to be able to ask AI any question at any time without having to nudge a colleague out of their flow state.

As for adopting AI at the company level, I prioritize making sure everyone is familiar with the tools, but I don’t treat all AI usage as good AI usage.8 Experimentation is what matters now, and for that to work everyone needs access to the tools, formal and informal learning opportunities, and the psychological safety to share their experience. Everyone I work with is trying AI for something and talking about what works and what doesn’t. Beyond that, I trust my team to use AI where it makes sense – and not for every task.

I’ll probably write more in the future about the best ways to use AI in products, but for now I’ll say this: since AI works best in straightforward cases where only mundane “thinking” and minimal context are required, I plan to focus on using it inside existing workflows, rather than building new workflows entirely out of AI, where it can more easily go off the rails or lacks the context it needs.

Bottom line

AI works best when you can understand and break down the problem yourself, then assign the parts that require only mediocre thinking to AI. If, instead, you use AI first for everything, you can expect a lot of sub-par results.

Notes

  1. Here are two definitions from advocates of the idea. 

  2. To be clear, I’m neither a Luddite nor an AI naysayer, but I have seen a number of new technologies find their way into far more places than they belong because of hype. While I agree with the claim that “AI is different” from previous technology changes, it’s not so different that it should become the default for every problem.

  3. The backlash didn’t seem to impact the business. 

  4. In fact, I feel so strongly about this that I don’t use AI for this blog at all. The point of this blog is for me to think through important topics for myself, and I can’t do that if AI is doing the ‘thinking’. I do like the cute red panda character AI generates, but if art were the core point of this blog I would probably be forced to admit how mediocre that is, too. 

  5. Even for menial tasks like summarizing or formatting, I’ve seen problems caused by using AI. When it summarizes a thread or writes a ticket, for example, it doesn’t have the full context – and without context, AI is more prone to making mistakes. The more important the task, the more context is generally needed and the less likely AI is to do a good job.

  6. I like the way The Economist put it: AI excels at tasks that “don’t need a deep understanding of the company, and are ‘easily verifiable’.” Gary Marcus argues that this is because LLMs lack the ability to form or properly use models of the world.

  7. Yes, this is THAT MIT Study. 

  8. I’ve heard of some companies creating leaderboards of the people who use AI the most, and rewarding people for using AI, as if all AI usage were good. I can’t help but be reminded of companies that used to track how many lines of code their employees wrote.