Thoughts of Mads Nedergaard

Written without AI

Principles for code reviews

10 min read·27/10/2025

Over the years I've adopted and preached the following principles for code reviews in an attempt to make this infamous part of our daily work as software engineers more meaningful.

There are plenty of great articles on how to write good pull requests and how to review them, but these principles are more high-level and focused on the overall strategy.

In my opinion, the goal of a code review is to teach and learn, create shared context and distribute knowledge, share responsibility and ownership, reduce bugs - and only lastly to keep the codebase clean.

These principles aim for high velocity and for getting things shipped (and enjoying the process) over clean and perfect code. Clean code should never be the goal. For early-stage startups, ensuring the perfect architecture and future-proofing abstractions doesn't matter if you're not making money...

Disclaimer

Your mileage may vary, or you may work in an environment where a small mistake in your internal dashboard can break the internet. These ideas are intended for the rest of us and especially in startups where we’d rather let a bug loose once in a while than slow down all the engineering work.


1. Prioritise reviewing

Always prioritise reviewing PRs and unblocking your colleagues. Reviewing PRs might be an interruption for you, but remember that someone is potentially blocked until you do 🚧

If you cannot review immediately, let the author know - ideally by asking how urgent it is.

Utilise tooling to make sure you know in real time when your review is requested - email is really not the best workflow for this (IMO).

Linear made this much easier this year (if your team is already using it), but otherwise there are tools like Gitify.
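If you live in the terminal, the GitHub CLI can also surface pending review requests - a small sketch (assuming `gh` is installed and authenticated; the repo name is just an example):

```shell
# Open PRs across GitHub where your review has been requested
gh search prs --review-requested=@me --state=open

# Or scoped to a single repository
gh search prs --review-requested=@me --state=open \
  --repo electricitymaps/electricitymaps-contrib
```

Wire that into a status bar or a periodic script and you have a poor man's notifier.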

An example of a notification from Linear

Reviewing fast generally means people will keep making small, bite-sized PRs - whereas slow reviews make authors bundle more work into the same PR: if you know your reviewer will take a long time, you try to get more done at once or keep adding new work to the PR.


2. Approve optimistically

Only left some minor comments? Just approve the PR ✅

You left feedback and that's great - now trust that your colleague will look into it and solve it in an appropriate way!

Approve with an "after X is addressed" comment if something really matters to you, but always keep the bigger picture in mind.

Now you have unblocked the author: they can address the comments and merge when they are ready, instead of requiring another round of reviewing with all the context switching and delays that entails for both of you.
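With the GitHub CLI this is a one-liner - a sketch, where the PR number and comment body are of course made up:

```shell
# Approve while flagging the one thing that should be addressed before merging
gh pr review 123 --approve \
  --body "Approving optimistically ✅ Please rename the helper before merging, see my comment."
```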

This is probably controversial and requires seniority in the team, but by not approving you're adding a significant time investment for both you and the author. Yes, you won't get to test or review their solution to the feedback - but if it's minor things like naming, styling or small refactors, is it really worth the extra time and effort for everyone involved?

If you have ever received a review that only pointed out a typo and then had to wait hours for the reviewer to come back and approve the PR, you know how silly this can feel 🫠

Disclaimer

With that said, this principle has a lot of caveats and there are plenty of situations where this should not be the default approach, such as:

  • If the feedback you left requires substantial work that should be reviewed, tested and discussed
  • If the PR is currently going to break things and it definitely cannot go out like this
  • If the PR has auto-merge enabled and you don't require resolved comments before merging
  • If you don't have automated tests and linting
  • If the author is junior and you don't trust them to test their work before merging

In general though, I've found that this principle helps the team ship faster by avoiding the extra roundtrip, and it fosters a culture of trust.


3. Make reviewing painless

Invest in the reviewing experience as it pays back in many ways.

The hypothesis behind this principle is that:

  1. the easier it is to review a PR, the faster it will be reviewed
  2. the faster a PR is reviewed, the easier it is to make smaller PRs
  3. the smaller PRs are, the easier they are to review

Or with a beautifully crafted illustration:

The loop of reviews

There are many ways to approach this, but in my experience the following are the most impactful.

3.1. Preview environments

A preview environment (a.k.a. preview build) is a complete build of the work in your branch/PR, served so that a reviewer can access it without having to check out the branch and run things locally.

An example of a GitHub deployment of a preview

It can be a website like https://preview-12e946---api-portal-staging-jfnx5klx2a-ew.a.run.app/ (there's a cold start so be patient), a downloadable binary, or whatever makes sense in your context.

The magic happens when this is automatically created and made available directly on the PR, so that a reviewer can very easily test your work.

How to do it

For websites, most modern hosting providers like Cloudflare, Vercel, Netlify, Fly.io and similar make this part of their core offering. Alternatively, you can often stitch something together yourself: at Electricity Maps we recently used Cloud Run revisions with zero traffic + GitHub Deployments (via the GH API) to get it working as in the example above (happy to share more details if anyone cares).
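For context, the Cloud Run side of that setup looks roughly like this - a sketch, where the service name, image path, region and tag are all made up, and the exact flags may differ in your setup:

```shell
# Deploy the branch as a new Cloud Run revision that receives zero
# production traffic, but gets its own tagged preview URL
gcloud run deploy api-portal-staging \
  --image "europe-docker.pkg.dev/my-project/app/api-portal:${GITHUB_SHA}" \
  --region europe-west1 \
  --no-traffic \
  --tag "preview-${GITHUB_SHA:0:6}"

# Register it as a GitHub Deployment so it shows up directly on the PR;
# a follow-up deployment status with environment_url makes the
# "View deployment" button point at the tagged preview URL
gh api "repos/${GITHUB_REPOSITORY}/deployments" \
  -f ref="${GITHUB_SHA}" \
  -f environment="preview" \
  -F auto_merge=false
```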

Why it matters

If you have to stash your ongoing work, check out a branch and spin up your local development environment just to test a PR, it's very easy to postpone reviewing. I've been guilty of that plenty of times! By making something testable in your browser (or similar), you make it significantly easier to review the code.

3.2. Git Worktrees

Git worktrees are pretty cool and probably still not a commonly used feature (although they're getting a revival for AI-based development).

In short, they allow you to have multiple working directories from the same Git repository without having to clone it every time.

And for reviewing code this is amazing - it means you can check out a branch and test it WITHOUT having to stash your ongoing work!

I usually have a review worktree that I always keep around and can easily use with an alias:

# setup, first time only (also make sure you have the GitHub CLI installed)
git worktree add ../review   # creates a second working directory at ../review
alias review="cd ~/dev/work/review && gh pr checkout"  # adjust to wherever you added the worktree

# usage - alias + branch name (gh pr checkout also accepts a PR number or URL)
review mn/api/migrate-to-oxlint

There's a great guide here if you want a more detailed explanation.
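A few housekeeping commands that go with this setup, in case the worktree ever gets in your way:

```shell
# See which worktrees exist and what each has checked out
git worktree list

# Remove a worktree you no longer need (add --force if it has local changes)
git worktree remove ../review

# Clean up stale bookkeeping if you deleted a worktree directory manually
git worktree prune
```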

3.3. Automate the basics and make it fast

This should hopefully come as no surprise, but having basic checks in place for linting, typechecking and unit tests is a no-brainer, so a reviewer doesn't have to do those manually.

What I want to highlight here is that you should invest in making those fast so that they are almost always done before a reviewer gets started. Early feedback is key and helps the author fix any issues before they switch context.

How to do that is too context-specific, but don't be afraid to think outside of how your company normally works if needed - if the general CI system is too slow, consider alternative approaches specifically for your project/team to ensure early feedback. Sorry DevOps ¯\_(ツ)_/¯
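One low-tech option is a Git pre-push hook, so the cheap checks run before CI even starts - a sketch assuming a Node project, where the npm script names are made up and should be adapted to your stack:

```shell
#!/usr/bin/env sh
# .git/hooks/pre-push (make it executable with chmod +x)
# Fail the push early if the basics don't pass
set -e

npm run lint       # e.g. oxlint .
npm run typecheck  # e.g. tsc --noEmit
npm run test:unit  # fast unit tests only - keep the slow suites in CI
```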

3.4. Review your own 💩

Review your own PR like you would review a PR from someone else.

In the age of AI, this becomes even more important - please don't ask your reviewer to clean up code your LLM generated, that job is for you.

Remember: your reviewer has dropped whatever they were working on to help unblock you, so make sure they don't have to spend that time pointing out silly things you should have caught yourself.


4. Focus on the bigger picture

Focus on what matters most when giving reviews. Not all nitpicks are worth pointing out, and the ones that are should ideally be prefixed with "nitpick:" or similar in comments so it's clearer to the author.

With that said, nitpicks can still be useful at times.

Ideally you have a culture where it's considered okay for the author to ignore them if they deem that best - in my world, nitpicks are almost always optional.


5. Be kind, ask questions

This should go without saying, but always be kind to the human behind the screen!

Receiving code reviews (as well as being asked to give them) can be anxiety-inducing, especially early in your career. Remember that text is a shitty medium that doesn't capture emotion well, so always make sure your messages are clearly framed and stay constructive.

The Code Review Anxiety Workbook is a great resource for diving more into this.

Secondly, ask questions instead of giving orders:

The Socratic method in software engineering works great for code reviews.

As an example, "What is the purpose of this function?" or "Is there a reason why you did X here instead of Y?" gives better grounds for a fruitful discussion than "this approach X is bad, please do Y instead".

In essence (how cliché), just treat people like you'd want to be treated 💚


My rambles end here, I appreciate you if you made it all the way down here!
All of the above are my personal opinions based on my personal experiences, so take it as such and let me know if you find any of it helpful (or not).