How I'm using AI in coding at the end of 2025
There have been a lot of these posts around this year. I've read quite a few and have found it very useful to see where others are in terms of usage and thinking. I'd particularly flag these two sources, which I found very balanced and reassuring.
How I use AI agents to write code
Nolan Lawson, 2025-12-22
ShopTalkShow 691: Dave tries vibe coding a tennis app
Dave Rupert and Chris Coyier, 2025-11-17
I'm keen to write this, not because I think anyone's particularly interested in reading it, but because I can use it as a line in the sand, a benchmark to look back on in the future and see how things have evolved.
Work usage
I use AI tools in a few different ways. In my day job I obviously take things more seriously and use tools very cautiously, remaining sceptical and testing everything.
My main use now is GitHub Copilot within VS Code but before that I was using Windsurf (formerly known as Codeium). With Windsurf I rarely used the chat feature. I pretty much just stuck to the code suggestions/autocomplete functionality. I never really saw it as doing any thinking for me, more just saving keystrokes, enabling me to type code faster.
With GitHub Copilot I now actively use the chat. I ask it to explain blocks of code, find particular functionality in the file system, refactor and write new code.
Claude
There's a wide choice of LLMs with different modes but I've generally stuck with Claude by Anthropic, currently Claude Sonnet 4.
Modes
I read up on the differences between the different modes and try to use the best tool for the job.
Ask mode
I now use Ask mode for finding code, e.g. "find the places where users are grouped into roles". This is great because you don't need to know the actual variable or function names. You can use natural language and it will generally understand what you are looking for and give you a list of possible matches. It's also not just a list, like with a standard text search. It will try to describe what is happening in each so you can find the relevant block faster.
Ask mode is also good for explaining code. The most obvious example I can think of is if you've got a regular expression. You can ask it to explain the expression and it will give you a long-form, natural-language explanation of what it will match.
Agent mode
I use Agent mode when I want it to actually write code. One use is refactoring. If I've found a block of code more difficult to understand than I feel it should be then I will highlight the block and ask "can this be simplified to make it more human readable" and it will offer a suggestion, which I can then keep or remove.
Another use case is when I have lots of logic, for example, nested if statements but with some AND logic, some OR and some NOT thrown in. Maybe I've coded it as it's written by the product owner in the spec. I can ask it to rewrite the code in a clearer way, maintaining the logic, and it does this kind of job well. Obviously it needs thorough testing.
As an LLM is essentially just pattern matching, it's very good at doing things in a consistent way. GitHub Copilot lets you add particular files as context and it will then follow what has gone before in terms of code style and variable or function naming.
I've found it useful with TypeScript, generating the types or interfaces for me, or knowing what type is expected in a specific situation. Most of the issues with an instance of a type missing a property or trying to use one that doesn't exist seem to have gone away - they get picked up and corrected.
When it comes to writing new code via the chat this definitely feels like a step up in risk from the other functionality I have mentioned so far. I still tend to ask for small additions or revisions so that I can check everything as I go.
Plan mode
I've read that prompting lots of little bits is less effective than just handing over the masterplan. As a dev I understand this - you need the overview to know where a project is heading in order to plan effectively and make the right architectural decisions. This is where Plan mode comes into its own, apparently, but as this is my job I'm still being cautious. I'm leaving the wider scope and higher risk "vibe coding" for my own projects for the time being.
Fun personal projects
I've written a number of little games, using AI help to different degrees. This has been quite different to my work setup and has been mainly typing prompts into ChatGPT.
I started by just asking for help with some particularly complicated coding, like some game physics - detecting object collisions, recreating a realistic bouncing effect, random movement. These are things that I'm sure I could have achieved but would have taken me a long time. I would write most of the code but get it to help me with some of the complexity.
ChatGPT really seems to lead you into vibe coding. You ask it to do something and it ends its response with suggestions of what it could do next for you. With some small new projects I have allowed it to write all my code, and where things are not right I have asked it to fix them, so I'm not touching the code myself - proper vibe coding. After each response I copy the code into VS Code or sometimes a CodePen pen to run, see what happens then prompt again.
This way of working means that there's not a working copy as such. If ChatGPT breaks something that it then can't easily fix, or its fix then breaks something else, you can end up with a broken app. This can be very annoying if you've invested a lot of time. I've found that I need to copy the response code into a new file or commit each new version so that I can roll back and paste the code back into ChatGPT if I need to.
Results have generally been good. For making something like a game from scratch its suggestions of what it could do for you next have been useful. It hasn't always been perfect code - there are still bugs - but in fairness it's no worse than when I've coded something myself for fun, probably with less caution than I would use in work.
Personal project examples
These projects have used ChatGPT to write some of the game physics - collisions, bounce effects, etc.
Presentation
While it seems to understand coding logic and syntax well in JavaScript and TypeScript, I've had quite poor results when asking it to do anything presentational, like CSS or SVG. CSS layout and styling have not been too bad but it seems completely unaware of any newer features, I guess because they're far less commonly used in the wild.
It's just about starting to use custom properties but this feels like a recent development. I think that if you want to use any newer features you would have to stipulate this in the prompt.
With SVG it seems very capable of just drawing a complete mess. It miscalculates the points in paths and has no awareness of when elements overlap each other. I've found that using an AI for this type of work actually takes longer than hand coding it.
Accessibility
I don't have a huge amount of faith in LLMs producing accessible code. They can often just use divs and spans for everything when there's a more semantic choice available. If you specifically ask it to make something more accessible then there's a tendency for it to just toss in some aria attributes and think it's done the job. I guess that's the pattern it has seen without any real understanding.
I don't think it can be trusted to know what is accessible. This is still one for humans.
Other experimentation
I'm always interested in what other tools are out there and what they might be able to do for me.
Stitch
I tried Stitch by Google. It's a UI designer, where you explain your requirements in a prompt and it gives you high-fidelity designs. I have used this to get some quick ideas based on product requirements, which I have then discussed with colleagues in a meeting. It worked well as a visual starting point, better than trying to describe everything in words.
As it is built on the idea of pattern matching it will always provide designs that are well established and predictable. Maybe that's a good thing from a user experience point of view as the interfaces should be easy to understand. It's never going to win any design awards or offer anything remotely interesting but it will do the job and give a solid foundation from which to get creative.
Final thoughts
I feel I'm still edging forward slowly, gradually learning how it can help, what sorts of jobs it does well, what it does badly. By playing around with the tools outside of work I feel my knowledge is growing.
For me the skill in getting productive use out of AI tooling is around picking the right battles - knowing when to use it and when not to. It's not perfect and it can be very frustrating but used for the right jobs at the right times it is definitely useful.
I can't get on board with vibe coding everything and it replacing people any time soon but I also can't accept the argument that it's terrible and we shouldn't use it.
There's a lot of uncertainty around AI but there are two things I am certain about: it's here to stay, and it will continue to get better.