Posts with tag: "LLM"

2 posts match this tag

What Makes a Good Tool for Claude Code

I’ve been using Claude Code extensively for personal projects, and similar AI coding tools at work. Recently I came across this excellent blog post that resonated with a lot of my experience.

One part stuck with me though: Noah emphasizes that tools fail with LLMs when they’re “overly complex,” and that the Unix philosophy is particularly well-suited to tool calling. But then I thought about git.

Git breaks the Unix philosophy completely. It’s sprawling, stateful, and complex. And yet Claude Code handles it effortlessly. It composes commands that, even after 10+ years of daily git usage, I wouldn’t think to use. It handles rebasing, cherry-picking, complex resets—stuff that trips up experienced developers regularly.
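As a concrete illustration of the kind of composition I mean (the branch and commit names here are invented for the demo), this is the sort of one-shot rebase it will produce where I’d have reached for a cherry-pick loop — transplanting the useful commits of a branch while leaving an unwanted base commit behind, shown in a throwaway repo:

```shell
# Sketch in a throwaway repo; branch/commit names are made up for the demo.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git checkout -q -b main
git config user.email demo@example.com && git config user.name demo

echo base > file && git add file && git commit -qm "base"
git checkout -q -b feature
echo spike > spike && git add spike && git commit -qm "unwanted spike"
echo one > one && git add one && git commit -qm "real change 1"
echo two > two && git add two && git commit -qm "real change 2"

# Replay only feature~2..feature onto main, dropping "unwanted spike":
git rebase -q --onto main feature~2 feature
git log --format=%s main..feature   # real change 2, real change 1
```

One flag (`--onto`) does what would otherwise be a checkout plus two cherry-picks, and knowing it exists — and its exact argument order — is exactly the kind of accumulated git trivia the model has absorbed.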

So if simplicity and the Unix philosophy aren’t the whole story, what else matters?

I’ve come up with three “hallmarks” of a tool that works well for LLM tool calling.

1. It’s been around for a long time and/or is used by lots of people


Harnessing Frustration: Using LLMs to Overcome Activation Energy

One of my biggest weaknesses as a software engineer is procrastination when facing a new project. When the scope is unclear, I tend to wait until I’ve “felt out” the problem before starting anything. I know I’ll feel better and work much faster once I get “stuck in,” but I still struggle with that first step: overcoming the “activation energy” required to engage with the details.

LLMs have been a game-changer for me in this respect: I can just throw a couple of sentences at them describing the shape of the problem. This leads to one of two outcomes:

  1. The LLM comes up with a good solution, usually in a slightly different way than I was thinking. I realize “oh wow, the solution is much simpler than I thought”. Straight away I start thinking about the consequences of implementing and improving what the LLM suggested.
  2. The LLM comes up with a solution that I intuitively recognize as “wrong”. My immediate reaction is frustration (“How could it get it so wrong?”), which leads me to go back and forth with the model, explaining why its solution could not possibly work. But while I argue with the model, my brain is churning away, generating variations or alternative approaches that could work. After a while, even if the AI is still on the wrong track, the debate triggers a moment of inspiration where the solution suddenly comes to me. I’ll excitedly start a new conversation and begin working through it with the model.

The key is the emotional reaction I have immediately to the LLM’s response, either excitement or frustration. By harnessing this immediate feedback loop, I get my brain out of its passive, procrastination mode. It’s almost like a jolt: either I’m thrilled because it’s simpler than I thought, or I’m spurred to action by the urge to correct a perceived ‘wrong’ answer. This forces me to engage with the problem in a meaningful way.