Stanislav Khromov

While we’ve seen AI assistants like Claude produce impressive small-scale projects, such as one-off scripts or simple games like Flappy Bird clones, their potential for real-world, large-scale software development has remained largely untapped. In this post, I want to share a programming workflow that lets you harness the power of long-context models for substantial, existing projects.

The Evolution of AI Prompting

To understand the significance of this approach, let’s first look at how we’ve been able to prompt LLMs as the technology has evolved:

  1. Basic Prompting: This relies solely on the model’s pre-existing knowledge. While useful, it often falls short when dealing with rapidly changing technologies or specialized libraries.
  2. Retrieval Augmented Generation (RAG): This method supplements the model’s knowledge with specific documentation or code snippets. However, the results can be inconsistent: only small pieces (chunks) of the added data are retrieved for each query, and you never know exactly which chunks the model will end up seeing.
  3. Long Context Models: The latest advancement lets us place entire codebases and documentation sets into the AI’s context. We can preload the chat session with all of our code and documentation, so the model can draw on any part of it directly instead of relying on retrieved fragments.
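To make the RAG limitation in step 2 concrete, here is a toy sketch of the chunking-and-retrieval step. Real pipelines use embedding similarity and a vector store rather than keyword overlap, and the chunk sizes and scoring here are purely illustrative, but the structural problem is the same: only a handful of fragments ever reach the model.

```python
def chunk(text: str, size: int = 60) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Score chunks by naive keyword overlap and return the top k.
    (Real RAG systems use embedding similarity instead.)"""
    words = set(query.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(words & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]

# A stand-in for project documentation:
docs = (
    "The payments module retries failed charges three times. "
    "The auth module issues tokens that expire after one hour. "
    "The reporting module aggregates daily totals at midnight."
)

query = "how long do auth tokens last"
context = retrieve(chunk(docs), query)

# Only these few chunks are pasted into the prompt; everything else
# in the knowledge base is invisible to the model for this query.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query
```

With a long-context model, the `retrieve` step disappears: the entire `docs` string (or codebase) goes into the context up front.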

The Power of Claude Projects

Claude Projects stands out by letting you upload your entire knowledge base into the AI’s context. This gives the model excellent recall and lets you ask about any aspect of the uploaded content.

To make this process more manageable, I developed a tool called ai-digest, which you can invoke in your project by running npx ai-digest. The utility compiles all the code and documentation you need into a single file, which you can then upload to Claude Projects (or any other LLM with a long context window).
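The idea behind the aggregation step is straightforward: walk the project tree and concatenate every relevant file into one markdown document, each section headed by its path. Here is a rough Python equivalent of that step; this is a minimal sketch of the concept, not ai-digest’s actual implementation (the real tool also handles ignore files, binary files, and other options), and the extension filter is just an illustrative default.

```python
from pathlib import Path

def digest(root: str, exts: tuple[str, ...] = (".py", ".js", ".md")) -> str:
    """Concatenate matching files under root into one markdown string,
    each file preceded by a heading with its relative path."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            rel = path.relative_to(root)
            body = path.read_text(errors="ignore")
            parts.append(f"# {rel}\n\n```\n{body}\n```\n")
    return "\n".join(parts)

# Write the combined file, ready to upload to a long-context model:
# Path("codebase.md").write_text(digest("."))
```

The single output file is what you add to a Claude Project’s knowledge, so every query in that project can see the whole codebase at once.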

To see a complete demonstration of this workflow, check out this video:

This approach to using LLMs for large-scale software development opens up new possibilities. By giving the AI the entire context of your project, you can tackle complex features and integrations that span multiple files in a large codebase. As these models evolve, their context windows are likely to expand dramatically, letting us fit even more inside: documentation for every library you use, architecture diagrams and overviews, and more.

While this approach doesn’t replace the need for human judgment and oversight, it significantly accelerates the development process and reduces the friction of working with an AI pair programmer.

Have you experimented with using AI in your development workflow? I’d love to hear about your experiences in the comments below!

πŸ‡ΈπŸ‡ͺ Full-stack impostor syndrome sufferer & Software Engineer at Schibsted Media Group
