For context: I have been programming professionally for 26 years.
I built crazy big and complex projects with my bare hands.
And now we have AI. Where does it put us, developers?
Where does it put us, developers, who started from reading books because the internet wasn't that useful back in the day?
Where does it put us, developers, whose identity was tied to knowledge: the ability to find and dig through information and convert it into something actionable, like a new piece of software or a fix for a bug in an existing one?
Where does it leave us, developers, who always aspired to learn new stuff every day or better understand the tools we already work with?
Those who worked their way to mastering some niche in our industry, or chose to be generalists and cover multiple adjacent topics?
Was it all for nothing?
Can anyone with the ability to just type letters on a keyboard and read letters from a screen now do what we do, just because there is this magic AI that can produce code, find bugs, and automate things?
A lot of people I know, myself included, are asking these questions in one form or another these days.
There are a lot of people who never learned to code bragging on social media about how they built their own Slack, X, WordPress, dating app, and so on; the list goes on.
Every day someone says "programming is dead, programming is automated now". (And if it is not dead yet, it certainly will be within 6-18 months.)
But is it dead? It doesn't feel that way to me, and I suspect it doesn't feel that way to a lot of software developers either.
I know one thing: AI is indeed very useful, but you have to learn how to use it, how to talk to it, and exactly which tool or combination of tools to use for which task. And then you have to validate the result.
Can you use another AI to validate the result?
What is the result by the way?
Also, what are you even working on? That's important.
There is a big difference between dealing with a legacy project that has been in production for a decade and is generating a lot of money, with a team of 5-50 people (both programmer and non-programmer team members), and something you want to do on the side where you typed "git init" an hour ago.
There is a difference between just starting to use AI and dealing with consequences of using it on the project for a year.
There is a difference between AI answering your questions/augmenting your knowledge and you letting it code and auto-accept changes.
I have been gradually introducing myself to parts of the AI coding hype over the last 2 years, and I found some things that work for me and some that don't. That's what I want to share.
I will be brief where possible.
First, there were AI chats, where we could ask AI about stuff and get some responses.
That felt like magic and it felt like this is it, the future is here. It knows so much!
Yeah, there is a lot of data it was trained on, but not enough.
It was like asking something on StackOverflow and getting the answer fast, without opening 25 links from a search engine results page and without being harassed by arrogant people with huge egos, severely inflated by the power the platform gave them.
In the context of coding, it was very interesting to talk about general coding questions or feed it some buggy snippets of code, where it would find the problem and offer a solution.
But it was barely useful in day-to-day work, because most of us are not developing tough algorithms every day.
Most of us are dealing with yet another change in the latest version of some package that doesn't want to work with the rest of the project. And this package was released a day, a week, a month ago, whereas the LLM's training data has a cutoff date a couple of years before that, so it has no idea what you are talking about.
Cursor was a big thing. I might be mistaken, but it was the first, or one of the first, AI coding IDEs/agents. It could read files in the project, write files, and issue commands. That's a big shift compared to copy/pasting snippets of your code as context before actually asking the question.
That was a huge step.
I missed most of that hype wave actually, but I saw the effect of that wave.
Lots of people came to Appliku's Discord server asking for help, because they made something working on their localhost with Cursor, but it either had severe bugs when deployed or they couldn't deploy it at all.
And that's okay. What was distinct about the crowd in this wave was that they couldn't tell you anything about their project at all, except "I made it with Cursor".
That's when I developed my initial, strongly negative impression of AI coding, and the phrase "I made it with Cursor" became a synonym for a lot of crap code and me rolling my eyes.
Now we have numerous coding agents: CLI tools, IDEs, and even (partially?) web-based tools.
These tools learned not only to read and update files; they also search the web, interact with the browser, and write their own tools and call them to do things that an LLM normally can't do on its own.
The industry is really changing.
In a lot of companies you are expected to use AI tools, anywhere from a "get familiar" mandate to "write as much code as you can with AI".
There are also some companies and people who hang on to the past and deny the change, and regardless of what I think about the quality of AI coding, I don't have much respect for people in the industry who ignore this change.
At least give it a try out of curiosity!
I think curiosity is one of the greatest traits of software developers. Always was, always will be.
If you are not curious enough to give such a huge technological change a proper try, that's GG for your career for sure.
My current thinking is that as a software developer, I am responsible for what code I produce, regardless of tools that I use.
For this reason there is no multi-agentic coding happening here with 100k lines of code (LoC) produced per day.
I actually need to review all of the code on all important projects (not all of them are important).
Also, on very important projects I need to have the mental map of what is going on in the code base.
Of course, there are less important projects and throw-away proofs of concept (PoC), where I am somewhere on the spectrum from being less attentive to implementation details to mostly vibe coding without having looked at the code base at all.
None of such projects are going to "production" anyway, they are mostly internal tooling.
So, the tools:
Important note about the directory structure:
If a project has related repositories, I create a folder into which I clone all of the repositories, and add an AGENT.md (symlinked to CLAUDE.md, so it is the same file and both Claude and non-Claude agents can use it) to describe what these folders are for. This is useful to save the agent some time and tokens on every planning prompt.
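A minimal sketch of that setup; the folder name, repo names, and descriptions here are made-up placeholders, not the actual Appliku layout:

```shell
# Create the base folder that will hold the related repositories
# (in real life you would `git clone` each repo into it).
mkdir -p myproject-base && cd myproject-base

# Write the shared agent instructions once, describing what each folder is for.
cat > CLAUDE.md <<'EOF'
# Repos in this folder
- ./myproject-backend: the API (hypothetical example)
- ./myproject-frontend: the web UI (hypothetical example)
EOF

# Expose the same file under the vendor-neutral name, so both
# Claude and non-Claude agents pick up the same instructions.
ln -sf CLAUDE.md AGENT.md
```

Since AGENT.md is a symlink, editing either name updates both, and there is only one source of truth to keep current.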
For example, the directory of Appliku looks like this:

```
appliku-base
├── ./appliku-backend
├── ./appliku-cli
├── ./appliku-docs
├── ./appliku-frontend
├── ./appliku-marketing
├── ./appliku-site
├── ./bin
├── ./.claude
├── ./.git
├── ./.idea
├── ./specs
└── ./.zed
```
I run claude or codex in this folder. In this particular example the folder is also a repository itself, because I keep various tools in it.
The downside of having it as a repository is that CC doesn't always understand that it needs to cd into one of the subdirectories to run git commands, and when running them in the root it gets results for the base repo.
My spec workflow looks roughly like this:

1. "Look at this spec @spec-filename.md, tell me what is missing, raise concerns, ask me any questions, let's fill the gaps, tell me where I am wrong and what should be done differently", or something along those lines.
2. Start a new conversation with CC in plan mode, tell it to look at the spec and implement it, review the plan it comes up with, and either correct it or let it cook.
3. Run `codex review --uncommitted` and act on the feedback. This makes CC call Codex to review the changes that were just made and not yet committed to git. This so often brings up a lot of issues, especially if the change is big, and that's where CC often forgets half of the stuff it was supposed to do.
4. Ask Codex to "review uncommitted changes that I made in the context of this spec: @spec-filename.md", then copy all the concerns it brings up to CC and iterate until it is done.

A note about Codex: sometimes it is so thorough that it will find some other specs in the repo, and even if I started a different conversation with a different approach to the task, it will be like: "Actually, you wrote something else in this spec, remember? WELL I DO! Humans... We are going to respect that spec, so this new one is incorrect." That's really annoying at times.
Producing code is not the only way I use coding agents.
If the project at hand is very big, if I am not that familiar with it or not sure about my understanding of it, or if it has multiple interdependent repositories, it is time to ask my AI coworkers to: "Explain to me how doSomethingImportant() works and what the call/data flow is throughout the project." This is always such a great help, both on a new project and on an existing one where I can't really remember anything. This is pretty much the stage where you onboard yourself onto a new project, and this trick really helps reduce weeks of onboarding time to minutes.
Since AI agents can call bash scripts, I use that to automate interaction with external tools. For example, I like using GitLab issues/boards, and that's where I keep my tasks. So I had CC write a few bash scripts to interact with my GitLab kanban board in a certain way (via the glab CLI tool).
On that board I have a lane for backlog and a lane for TODO. TODO holds tickets that are ready to be worked on, while the backlog is just a cemetery of ideas that might never be acted on. One of the scripts reads tickets ONLY from the TODO lane.
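A sketch of what such a script might look like, assuming the TODO lane is backed by a GitLab label of the same name (board lists in GitLab are label filters underneath) and that glab is installed and authenticated; the function name is made up:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build the glab invocation for a given lane label, so the agent
# only ever sees tickets from that lane (defaults to TODO).
lane_cmd() {
  local lane="${1:-TODO}"
  printf 'glab issue list --label %s\n' "$lane"
}

# Running this for real requires an authenticated glab;
# uncomment to actually query the board:
# $(lane_cmd TODO)
```

Keeping the lane name as a parameter makes it easy to point the same script at another lane later without editing it.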
Now I can tell CC:
- "Start working on ticket 1654": it pulls the ticket body and starts planning the change.
- "Pick the next ticket and start working on it": it picks a ticket from the TODO lane and works on it. I don't usually have too many tickets in this lane, but there might be 2-4 if I had a moment of managerial clarity and made a few well-thought-out tickets.

Spreadsheets. I hate them.
I had CC write me a tool to pull data from Google Search Console, and it can use that, together with SEMrush and Ahrefs exports, to build a list of things to work on for SEO.
Some people's brains are great at looking at all these spreadsheets. I am not one of them, so AI enables me to work on that.
My SEO-related workflow is pretty dumb so far; I am sure there are numerous ways to improve it, I have just started here.
There are projects where I am actually vibe coding. These are usually tools and throw-away projects where I either need some tool or want to make something quick, with low/zero effort, to get to a PoC fast and understand whether there is even a point in pursuing it and spending any effort at all.
I occasionally build a specific log viewer for a project that grabs not only logs but also enriches them with data from other systems; it might be used only once, but it will help resolve some specific problem.
Also, the idea of building a game is still there (I am sure I am not the only software developer with such a dream, right?), but I am too energy- and time-constrained to actually dive deep into that, so I entertain myself by fully vibe coding it.
I actually made a couple of specs and tried to implement a simple game in several different stacks recently:
It turned out that Defold has absolutely zero CLI tooling, so AI can't check its work.
Godot is better, but AI keeps failing with its not-exactly-JSON format for some files. The CLI tooling is there, yet it is still super hard for an AI agent to verify anything.
The best result I got was with Golang and raylib. Everything was implemented, and any bugs were fixed really fast.
My latest vibe coding experiment was with Golang and Ebitengine; it wasn't really a game, more like a terrain generator: https://github.com/kostja-me/goheim
I might get back to it soon and tinker with it.
That's all I wanted to share :)