
To AI or not to AI

LLMs can make us faster than ever—but they can also quietly take over our judgment. Here’s what I’ve learned after a year of using them almost daily.

This is not an AI-bashing post, and it’s not a fanboy love letter either. It’s just me trying to put into words what it actually feels like to work with LLMs as a developer after 25+ years in this job.

For a little over a year I’ve been using LLMs both on my personal projects and at work. Not every single day at the beginning, but now it’s pretty close to daily. I started small, and over time it’s become a natural part of how I work.

I’m not interested in the usual “LLMs are cool” versus “LLMs will replace us all” debate. I’m more interested in questions like:

  • How does AI actually change the way we work?
  • What does it do to our skills and habits over time?
  • Where’s the line between using a tool and giving up control?

Because I still remember very clearly what life as a developer looked like before any of this existed.


From Guessing Docs to Asking an LLM

I’ve been writing software for around 27 years now. When I started, everything was slower and more painful in a very specific way.

I remember documentation that was hard to find, often incomplete, and in some unfortunate cases both. I remember following a book’s code examples that simply would not compile, and spending hours trying to figure out why. It wasn’t like I could just search for the answer either: there was no online documentation, no Stack Overflow to assist. Times were different. Sometimes you’d find a function with a cryptic three-letter name and have fun guessing: “okay, but what does this actually do?”

I remember my Java days, when the JDK docs were just a huge list of every method under the sun, with no examples and no sense of how the pieces fit together. You’d scroll through endless flat lists of methods and think: “Okay, great, but how do I actually use this in a real program?”

Then things started to improve. You started to find documentation online, and I still remember how good the old Java Tutorial was, with tiny, focused examples. Little snippets showing how different pieces worked together. I had this trick where I’d find the method in the JDK docs, copy its name, paste it into the tutorial search, and boom: an example that made sense.

That alone felt like a revolution.

Fast forward to today, and instead of hunting through tutorials, I can just ask an LLM:

“Hey, how can I read from a file using this function?”

And instead of one tiny snippet, it happily writes the entire program.

From my perspective, that’s at least a 10x improvement in how quickly we can move. But it also raises a big question:

If the machine can fill in more and more of the gaps for us, what happens to our own understanding over time?


The Phone Number Problem

Let me explain what I mean with a non-technical example.

Before smartphones, people actually remembered phone numbers. I had a tiny paper phone book with microscopic handwriting, but I still knew the important numbers by heart. I can still hear my best friend Simon asking me how I could remember everyone’s number so easily.

Then smartphones came along. Suddenly I no longer needed to remember numbers. I could simply search by name and never see the actual number. Over time, I realized I was happy not having to remember each number by heart. It was simpler, more logical… so that part of my brain just… switched off.

Fast forward 20+ years: earlier this year, while I was out, I needed to call my wife and didn’t have my phone on me. I borrowed someone else’s, and when they asked, “What’s the number?”, my brain just froze.

I eventually remembered it, but it took effort. That experience stuck with me. Not because forgetting a number is some huge tragedy, but because it’s a good example of what happens when a tool takes over a task completely:

We don’t just stop doing the task. We slowly lose the skill behind it.

I’ve noticed something similar starting to happen with LLMs.


How I Actually Use LLMs Day to Day

I use LLMs several times a week for work, sometimes several times a day. It ranges from simple stuff to more complex tasks.

Here are a few real examples.

1. Let the AI handle the boring SQL

I needed an SQL script. Not rocket science, but still a bit tedious to write by hand.

Instead of opening an editor and crafting it from scratch, I opened Gemini (one of the LLMs cleared for use at work) and typed:

“Could you create an SQL script that does X, Y, Z, compares A and B, and orders the result by date ascending?”

Hit enter. Boom — script done.

I simply copy–pasted it, adjusted table and column names, ran it in dev, verified the result, and then used it.
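To give a flavour of what comes back, here’s a sketch of the kind of script I mean; the tables and columns are invented stand-ins, since the prompt above deliberately leaves the real names out:

```sql
-- Hypothetical tables and columns, just to illustrate the shape of the output:
-- compare the expected amount on each order with what was actually paid,
-- keep the mismatches, and sort by date ascending.
SELECT o.order_id,
       o.created_at,
       o.expected_amount,
       p.paid_amount,
       p.paid_amount - o.expected_amount AS difference
FROM   orders o
JOIN   payments p ON p.order_id = o.order_id
WHERE  p.paid_amount <> o.expected_amount
ORDER  BY o.created_at ASC;
```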

That entire thing was much faster than writing it myself, and the quality was perfectly fine.

This is where LLMs shine: I know what I want, and I let the AI handle the tedious “how”.

2. Generating a proposal template

I had to write a document outlining technical projects for the next six months. I knew exactly what we needed to do, what the problem space for each project was, and what the trade-offs were. But as I started to write it all up in a presentable form, I realized it would take a considerable amount of time. Time I simply didn’t have.

So I started looking in our company wiki for a usable template, maybe something from one of the other teams or even something from a Staff engineer. To my dismay I didn’t find anything that suited my needs, and I really needed this document out quickly.

So I decided to try an LLM:

“Could you create a template for a technical projects proposal for the next 6 months, with room for problem statement, options, pros/cons, and impact?”

Instantly I had:

  • A clean, structured document.
  • Sections that made sense.
  • A layout that made it easier to tell the story.

Then I just filled in the content with my own thinking, and done. I already had all the substance; it just needed the shiny new template to live in, so the collaboration was just right.

Again: I drove the “what” and the reasoning. The AI handled the formatting and structure.

3. Mappers and tests on autopilot

Another classic: mapping between DTOs and domain models.

Writing mappers and mapper tests by hand is not exactly the highlight of my day. It’s repetitive, easy to mess up, and frankly boring as hell. I would much rather look at some tech debt and figure out how to get rid of it, or do pretty much anything else, for that matter…

So I came up with a prompt something like:

“Please create a mapper from FooDto to Foo, and back. Fields have the same names. There’s one enum that also needs mapping. Once the mappers are done, kindly add unit tests that verify all mandatory properties, and make sure not to test implementation details but just the outcome.”

I hit enter and get a full mapper implementation in Kotlin (or Go, etc.) and a full set of tests, just as I asked.

It feels like having a junior dev sitting next to you who’s insanely fast at typing boilerplate.
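To make that concrete, here’s a rough sketch of the kind of mapper and test that comes back. FooDto, Foo, and the Status enum are hypothetical stand-ins matching the prompt above, and the test uses kotlin.test; the real generated code obviously depends on your project setup:

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals

// Hypothetical DTO and domain types, matching the names in the prompt above.
enum class StatusDto { ACTIVE, INACTIVE }
enum class Status { ACTIVE, INACTIVE }

data class FooDto(val id: Long, val name: String, val status: StatusDto)
data class Foo(val id: Long, val name: String, val status: Status)

// The kind of mapper the LLM typically produces: field-by-field copies
// plus an explicit enum mapping in both directions.
fun FooDto.toDomain() = Foo(
    id = id,
    name = name,
    status = when (status) {
        StatusDto.ACTIVE -> Status.ACTIVE
        StatusDto.INACTIVE -> Status.INACTIVE
    },
)

fun Foo.toDto() = FooDto(
    id = id,
    name = name,
    status = when (status) {
        Status.ACTIVE -> StatusDto.ACTIVE
        Status.INACTIVE -> StatusDto.INACTIVE
    },
)

class FooMapperTest {
    // Outcome-focused test: map there and back and check nothing is lost,
    // without poking at how the mapping is implemented.
    @Test
    fun `maps dto to domain and back without losing data`() {
        val dto = FooDto(id = 1L, name = "example", status = StatusDto.ACTIVE)
        assertEquals(dto, dto.toDomain().toDto())
    }
}
```

Nothing clever in there, and that’s exactly the point: it’s the boilerplate I’m happy to hand off.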

And honestly? This part is fantastic. It removed all the tedious work and allowed me to focus on other items in my day.

To be fair, sometimes Copilot doesn’t get it completely right: it might use the wrong assertion library or test non-existent properties. But again, it’s all much faster than I could type. The important thing is to go over the code and make sure it’s exactly what you wanted.

4. Understanding unfamiliar flows

Sometimes I need to dig into an older part of the codebase, or something someone else wrote.

For this I’ve started to play with Claude Code, where I can simply ask:

“Document the flow for this entity, starting from the REST endpoints down to the database. Highlight heavy computation and sensitive areas.”

Or another example:

“Analyze the existing unit tests and tell me if anything obvious seems missing.”

That’s insanely powerful. Instead of manually tracing everything, I get:

  • A high-level overview,
  • A map of the flow,
  • A rough sense of where to pay attention.

This is where LLMs act almost like a code tour guide. It has genuinely helped, especially when you consider the alternative: an out-of-date README.md, maybe a couple of ADRs that are also possibly outdated, and having to swim through the code without even knowing where to start.


Where It Starts to Get Dangerous

So far, everything sounds great, right?

  • Faster code generation
  • Faster template and scaffolding generation
  • Faster mappers and simple unit tests
  • Better code understanding and high-level overviews

But here’s the part that worries me.

At some point, I caught myself doing something different on my personal projects. Mind you, these are experiments: learning new languages or frameworks, or just doing things that might not be so straightforward.

Instead of asking “Can you help me do X?” I started asking “How would you solve this problem?”

That’s a subtle but important shift.

That means I’m not just using the AI to accelerate my own plan. I’m asking it to propose the plan.

This can of course be super helpful for brainstorming sessions, especially when working fully remote or if you’re a solo indie developer.

Unfortunately, the long-term result was that I stopped pushing back and being critical of the proposed solution. I simply accepted whatever the LLM gave as an answer.

With enough years of experience you can push back and be critical, not out of ego, but as a way to probe the reasoning behind the LLM’s response and have that meaningful brainstorming session: to discover something you might have overlooked, or to see things from a different angle. Yet it becomes harder and harder to push back, simply because it’s so easy to get answers. If the proposed solution is completely wild, I believe you still retain the ability to push back… but otherwise it becomes too easy to just sit back and think, “Awesome, let’s do that.”


Senior vs Junior: Same Tool, Very Different Risk

If you’ve been burned by bad architectures, race conditions, and hard-to-debug systems over many years, you develop a kind of instinct.

  • You can smell over-engineering.
  • You can feel when coupling is too tight.
  • You know when a solution is elegant vs. “clever but cursed”.

So when the LLM gives you a massive wall of code, you can still look at it and go:

  • “Nope.”
  • “Too complex.”
  • “This hides a time bomb.”
  • “This is technically correct but practically awful.”

But if you’re an associate or junior, how are you supposed to know? Especially when the LLM can give you a complete solution in a matter of seconds. If this magical tool gives you a full solution that:

  • Compiles,
  • Runs,
  • Passes the (AI-generated) tests,

I believe that anyone’s initial instinct would be to say:

“Looks good to me. Ship it.”

And that’s where the risk is.

You might be inadvertently introducing technical debt, code that is unnecessarily complex, or code that is tightly coupled and difficult to maintain.

All without realizing it.


PRs, Shallow Reviews, and AI-Generated Code

Now combine that with how pull requests are often handled nowadays. The majority are:

  • Reviewed in a hurry,
  • Viewed in isolation (a few lines at a time),
  • Approved without running the code locally,
  • Approved without truly understanding the flow,
  • Focused on styling issues or nitpicks.

We’re all super busy. Feature pressure is constant, and we keep getting told that we need to accelerate, that we need to be a high-performance team. So the reality is that code reviews don’t get the attention they deserve.

Now imagine a PR where:

  • Most of the code was generated by an LLM,
  • The author doesn’t fully understand all of it,
  • The reviewer doesn’t have time to deeply inspect it.

That’s how you slowly end up with code bloat, misused abstractions, duplicate logic, repeated external calls (multiple calls to the same service done a few lines apart) and hidden technical debt.

And nobody really owns it, because “the AI wrote it”.


So… Should We Use LLMs or Not?

For me, the answer is:

Yes, we absolutely should use them — but with intention.

We, as developers, have always built tools to make ourselves more effective:

  • Compilers,
  • IDEs,
  • Static analysis,
  • CI/CD,
  • Linters,
  • Frameworks.

LLMs are just the next step in that evolution. But here’s my main point:

The tool should speed up the how, while we stay in charge of the what and why.

The danger is when we let the tool:

  • Decide the architecture.
  • Decide the design.
  • Decide the trade-offs.

…while we stop thinking critically about it.


Practical Advice (Especially If You’re More Junior)

If you’re a junior or early-career dev, here’s how I’d suggest using LLMs:

  1. Use them to speed up grunt work

    • Mappers, simple boilerplate, test skeletons, docs templates.
    • Use them like a very fast assistant, not a replacement for thinking.
  2. Make sure you understand the code

    • If the AI writes code you don’t understand, that’s a signal.
    • Slow down. Read it. Ask why it’s doing what it’s doing.
    • If you still don’t get it, don’t just ship it.
  3. Ask for reviews with intent

    • Don’t hide the fact that you used an LLM.
    • Say: “I used an LLM for this, and I’m not 100% sure about X and Y. Can you help me verify?”
    • That’s not weakness. That’s professionalism.
  4. Lean on your team

    • Tech leads, seniors, mid-levels — they’re there for a reason.
    • It should be normal to say: “I let the tool generate this, but I’m not fully confident yet.”
    • If you’re punished for that honesty, that’s not on you. That’s a culture problem.
  5. Remember that tools don’t carry accountability — you do

    • If something goes wrong in production, nobody is blaming “the LLM”.
    • It’s your name on the commit.
    • So stay in the loop. Stay responsible.

Final Thoughts

I’m not anti-AI. Far from it. But I’m also not in the “AI will code everything, humans can retire” camp. Right now, the tools are powerful and immature at the same time:

  • They hallucinate.
  • They over-engineer.
  • They happily generate bad tests.
  • They create complexity that looks impressive until you try to maintain it.

So my stance is this:

Use the tool. Enjoy the speed-up. But you stay in the driver’s seat.

You decide:

  • What needs to be built.
  • Why it matters.
  • Whether the generated solution is good enough.
  • When to say “Thanks, but no — I’ll do this part myself.”

And if you’re early in your career and unsure, this is where good teams matter:

  • Mentoring,
  • Pairing,
  • Honest questions,
  • Safe spaces to say “I don’t understand this yet.”

At the end of the day, LLMs are just tools. They’re here to help us — not to quietly take over our judgment. As long as we remember that, we’ll be fine.

Photo by Nahrizul Kadri on Unsplash
