## Writing Rules
- Using pseudo-XML for your rules can help LLM adherence
  - LLMs are trained on a lot of structured data (such as XML); see [this blog post](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/use-xml-tags) for more information.
  - On _many_ occasions I have been given feedback that after simply restructuring rules into an XML-like format, the LLM adheres to them more closely.
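As an illustration of that XML-like structure, a rule set might look like the sketch below (the tag names and rule content here are invented for the example, not a required schema):

```xml
<rules>
  <rule id="error-handling">
    Always wrap external API calls in explicit error handling;
    never silently swallow exceptions.
  </rule>
  <rule id="testing">
    When you change behaviour, update or add a test in the same change.
  </rule>
</rules>
```

The tags give the model unambiguous boundaries between rules, which is harder to achieve with free-flowing prose.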
- Only enable the rules you actually want to use
  - Think about the signal-to-noise ratio of your rules (and context in general): _how much information could you be told at once and remember?_
  - If you have a lot of rules that are not relevant to the current task, you're just adding noise and misleading the prediction engine while increasing the token count.
- Do not blindly import and enable all my (or anyone else's) rules!
- Be clear, concise, and specific in your rules; avoid ambiguity.
- Use emphasis (e.g. bold, italic, __underline__) to highlight important parts of your rules.
- As well as global rules, consider adding project-specific rules such as .clinerules, CLAUDE.md or similar that relate to repository-specific behaviour (e.g. "To build the application, you must run make build" etc.).
  - I have an _example_ of what these might look like in [Cline/Rules/adhoc/_repo-specific-rules.md](./Cline/Rules/adhoc/_repo-specific-rules.md).
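To give a feel for it, a minimal repository-specific rules file (a hypothetical CLAUDE.md; the commands and paths below are invented for illustration) might contain entries like:

```markdown
# Project rules

- To build the application, run `make build`.
- To run the test suite, run `make test`; all tests must pass before committing.
- Never edit files under `generated/` by hand; they are produced by the build.
```

Short, imperative, repository-specific statements like these tend to work better than long explanations.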
- Rules are often transferable between agentic coding tools
  - While I write a lot of my rules in Cline, for 95% of them there's no reason they can't be used with other agentic coding tools such as Claude Code without modification.
- Get AI to help you write or improve rules
  - If you spend a long time on a difficult problem with a coding agent and you finally crack it, get it to:
    - Summarise the fix
    - Explain why previous attempts did not work
    - Explain what led it down the wrong paths initially
    - Write a concise, clear rule (prompt) that could be used in the future (or added to your global rules if it's a common issue) to prevent the issue from happening again, or at least aid with debugging.
Example:
> You fixed it! That's taken a long time to fix. Can you please respond with details on:
> 1. What the fix was
> 2. Why it wasn't picked up earlier
> 3. What information could I have provided to AI coding agents in the future - not just for this project but also other projects in general?
> With those in mind, I would also like you to create a 1 to 3 sentence prompt I can provide to future AI coding agents that would help them avoid similar issues in the future.
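The kind of rule that request might produce could look like the following (this output is entirely invented for illustration):

```markdown
When debugging dependency resolution failures, check the lockfile and the
package manager version first; a mismatch between the two commonly produces
errors that look like broken dependencies rather than version drift.
```

A rule like this can then be dropped into your global or project-specific rules verbatim.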
---
## Getting High Quality Outcomes
- Treat an agent like you would someone who just joined your team: don't assume they know anything about your codebase or intended outcomes. Unless the task is very simple and self-explanatory, a single sentence is probably not going to be enough for a prompt. GIGO: garbage in, garbage out.
- Manage the context window usage effectively (see other notes here on this).
- Start with a plan: break down large or complex tasks into a checklist of items to complete, and have the agent follow the list and mark off items as it completes them.
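In practice, such a plan checklist might look like this (the task items below are invented for illustration):

```markdown
## Plan: add CSV export

- [x] Add an `export_csv` function with unit tests
- [x] Wire it into the CLI behind an `--export` flag
- [ ] Update the README with usage instructions
- [ ] Run the full test suite and fix any regressions
```

Having the agent tick items off as it goes gives you a natural checkpoint to review each step before it moves on.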
- Make use of tools (MCP servers); they extend and enhance LLMs with access to up-to-date information, new capabilities and integrations.
- Always have tools available to the agent that allow it to search the web and look up package docs