Five Tips for Better AI Programming with Large Language Models

After using LLMs like ChatGPT, Claude, and Gemini for coding assistance, I’ve noticed a significant improvement in their capabilities over the past two years. However, getting consistently good results can still be tricky. Here are five key tips I’ve learned for getting better code from AI models.
Include Your Entire Codebase
The first and most impactful tip is to provide the AI with your complete codebase as context, not just isolated snippets. This helps the model understand your coding style, patterns, and (to some extent) architectural decisions. The AI can see how you handle database access, which libraries you use, and your existing implementation patterns. This makes it generate much more consistent and relevant code. I have a separate blog post on this if you want to read about my workflow!
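One way to do this is a small script that concatenates your source files into a single prompt-ready string, with a path header before each file so the model can see the project layout. This is only a minimal sketch: the `pack_codebase` name, the extension list, and the header format are all my own choices, and dedicated tools exist for this.

```python
from pathlib import Path

# File extensions worth sending as context; adjust this set for your stack.
INCLUDE = {".py", ".js", ".ts", ".svelte", ".md", ".toml", ".json"}

def pack_codebase(root: str) -> str:
    """Concatenate every matching source file under `root` into one string,
    with a path header before each file so the model sees the layout."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in INCLUDE:
            body = path.read_text(encoding="utf-8", errors="ignore")
            parts.append(f"===== {path.relative_to(root)} =====\n{body}")
    return "\n\n".join(parts)
```

You would then paste (or pipe) the result into the model's context, e.g. `pack_codebase("src")`.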
Include Library Documentation
While AI models have broad knowledge of popular libraries, their training data always has a cutoff date. This means they may not know about newer versions or API changes. To get around this, include the documentation for the specific versions of libraries you’re using. Many projects now provide an llms.txt file specifically for this purpose. If not, you can usually grab documentation in Markdown format from the library’s code repository like GitHub.
I recommend keeping this documentation in version control alongside your project so you can track which versions you’re working with. You can also use the same tool as in the previous tip to package up all your code including the documentation to send to the AI.
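If you keep those docs checked in, a small helper can fold them into the same context bundle. A minimal sketch, assuming you store each library's `llms.txt` or Markdown docs in a folder such as `docs/llms/` (the folder layout and the `load_docs` name are assumptions, not a standard):

```python
from pathlib import Path

def load_docs(docs_dir: str) -> str:
    """Concatenate the pinned library docs checked into the repo
    (e.g. each dependency's llms.txt or README for the version you use)."""
    parts = []
    for path in sorted(Path(docs_dir).glob("*.txt")) + sorted(Path(docs_dir).glob("*.md")):
        body = path.read_text(encoding="utf-8", errors="ignore")
        parts.append(f"===== docs: {path.name} =====\n{body}")
    return "\n\n".join(parts)
```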
Use a Strong Initial Prompt
The way you frame your request to the AI makes a big difference. When prompting an AI for code generation, what has worked well for me is to include the following in the prompt:
- Tell it that it’s an “expert developer” – this may seem silly but it noticeably improves output quality.
- Request complete files rather than snippets to avoid missing context.
- Ask it to preserve code comments since models often strip these out.
- Direct it to carefully consider the full project context before responding.
- Specify your preferred libraries and frameworks.
- Include any project-specific constraints or requirements like performance budgets or package restrictions.
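The checklist above can be captured in a reusable prompt template. A sketch of one way to assemble it (the `build_prompt` name and the exact wording are my own, not a standard prompt):

```python
def build_prompt(task: str, context: str, libraries: list[str], constraints: list[str]) -> str:
    """Assemble an initial code-generation prompt following the checklist above."""
    return "\n\n".join([
        "You are an expert developer.",
        "Carefully consider the full project context below before responding.",
        "Return complete files, not snippets, and preserve all existing code comments.",
        f"Preferred libraries/frameworks: {', '.join(libraries)}.",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Task: {task}",
        f"Project context:\n{context}",
    ])
```

Keeping the template in one place also means every request to the model starts from the same baseline, so you can tweak one line and see its effect across tasks.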
Let the Model Think
Rather than asking for code directly, you get better results by letting the model think through the problem first. Ask it to outline its approach and planned changes before writing any code. This helps the model converge on better solutions. Recent reasoning models like OpenAI o1 and DeepSeek-R1 have validated this as a technique for improving output quality, but any model can benefit from a chain-of-thought approach.
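In practice this is a two-turn exchange: one prompt that asks only for a plan, and a follow-up that asks for the implementation of that plan. A sketch of the two prompt builders (the function names and wording are illustrative, not a fixed recipe):

```python
def plan_prompt(task: str) -> str:
    """First turn: ask for an approach, explicitly forbidding code."""
    return (
        f"Before writing any code, outline your approach to this task: {task}\n"
        "List the files you plan to change and why. Do not write code yet."
    )

def code_prompt(task: str, plan: str) -> str:
    """Second turn: feed the model's own plan back and ask for the implementation."""
    return (
        f"Task: {task}\n"
        f"You previously proposed this plan:\n{plan}\n"
        "Now implement it, following the plan step by step."
    )
```

Send `plan_prompt(...)` first, review the plan (and correct it if needed), then send `code_prompt(...)` with the approved plan.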
Choose the Right Model
For routine tasks like writing small functions or implementing UI components, strong general-purpose models like Claude 3.5 Sonnet or GPT-4o work well. However, for complex architectural decisions or thorny problems, specialized reasoning models like OpenAI o1 or DeepSeek-R1 are worth trying.
Keep in mind that reasoning models are typically more expensive and have strict usage limits, so you can save them for cases where standard models aren’t giving satisfactory results.
If even reasoning models struggle with your request, that’s often a sign that the problem needs to be broken into smaller pieces. In that case I take a step back and approach it more incrementally. Sometimes I’ll ask the AI itself to help divide a complex task into more manageable chunks.
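The escalation path described above can be summarized as: try the cheap model, escalate to a reasoning model only if the answer is unsatisfactory, and fall back to splitting the task. A sketch under stated assumptions: `ask` and `judge` are placeholders you would wire to your own API client and review step, and the model names are just the examples from this post.

```python
def solve(task, ask, judge):
    """Escalation loop: cheap model first, reasoning model second,
    then fall back to asking for a task breakdown.
    `ask(model, prompt)` returns the model's answer;
    `judge(answer)` returns True if the answer is satisfactory."""
    answer = ask("claude-3-5-sonnet", task)
    if judge(answer):
        return answer
    # Escalate to the pricier, rate-limited reasoning model.
    answer = ask("o1", task)
    if judge(answer):
        return answer
    # Last resort: have the model split the task into smaller chunks.
    return ask("claude-3-5-sonnet", f"Break this task into smaller independent steps: {task}")
```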
Have you found other effective techniques for working with AI coding assistants? Share your experiences in the comments below.
Looking for a video version of this blog post? You can find it below: