Feedback for Codecov AI reviewer in beta
We're excited to introduce the Codecov AI Reviewer Assistant! It's designed to help you review code changes and suggest improvements.
We'd love your feedback on:
- The ease of interacting with the AI assistant
- The usefulness of the AI's responses
This issue is intended to gather feedback on the beta feature.
Hi team,
I recently integrated the Codecov AI Reviewer into this repository and requested an AI review on this PR using the command @codecov-ai-reviewer review. However, I haven't seen any response or feedback from the AI yet.
Could you please help me troubleshoot this? Here's what I've already checked:
- The Codecov AI app is installed and permissions seem correct.
- The command format was used as specified.
- Waited some time to ensure there wasn't a delay.
If there's any configuration or additional setup I might have missed, please let me know. Thanks in advance for your support!
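For what it's worth, the trigger is just a plain PR comment, so it can also be posted from a terminal. A minimal sketch, assuming the GitHub CLI (`gh`) is installed and authenticated for the repository, with `42` as a placeholder PR number:

```shell
# Post the review trigger as a regular PR comment via the GitHub CLI.
# Assumes `gh` is authenticated for this repo; 42 is a placeholder PR number.
TRIGGER="@codecov-ai-reviewer review"   # exact phrase the bot listens for
gh pr comment 42 --body "$TRIGGER"
```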
Best regards, Andi
Hi @andihalberkamp Thanks for reaching out. It looks like you have set up the AI bot correctly. I've reached out to the team to investigate this issue further. Thanks for your patience.
Hi @andihalberkamp The bot is probably working, but it may not respond if it doesn't have any feedback on your code or anything to review. Can you try making some meaningful changes and using the command @codecov-ai-reviewer review in your PR comment again? Thank you.
Feedback after use:
:+1: The GOOD
From the start (e.g. from my first 14 days of daily use):
- Easy to set up; no complaints. (:+1: for limiting the needed permissions)
- Free access to an AI for open-source projects (:+1: for giving back to the community)
- Documentation is easy to find (and appropriately leads users here during the BETA)
TL;DR - Suggestion: expand the FAQ (as answered by @aj-codecov)
This GH issue provides a feeling of support and communicates interest in Codecov's community right from the get-go. (again, I'm sure this is nothing new to your team)
Regarding the FAQ, a quick win could be expanding it to answer these questions:
UPDATE: Thanks to @aj-codecov for responding in this GHI and already answering :+1:
- What @prompts does @codecov-ai-reviewer recognize? RESOLVED HERE
  - e.g. `review` - triggers a PR review
  - etc.
- What are the rate/usage/API limits of the bot? ANSWERED IN THIS GHI
  - ~e.g. set expectations for the beta (especially for open-source projects)~
  - ~e.g. justify any throttling that is enforced (the OpenAI backend probably is not cost-free to leverage)~
  - ~e.g. a C.M.A. statement :shrug: (e.g. we reserve discretion to stop evil)?~
  - e.g. honor system during the beta (e.g. abuse will lose)
- Can review instructions be specified to the bot? ANSWERED IN THIS GHI
  - e.g. at this time the bot uses fixed instructions passed to OpenAI (rationale: along with the user data already mentioned)
- What about security? ANSWERED IN THIS GHI
  - ~e.g. get ahead of this historically tricky question by setting expectations (the AI does have write access to the PR, after all)~
  - e.g. see the terms and policy links :shrug: (might get users to pay more attention)
Hopefully I've left the questions open enough that they apply to more than just me.
Thanks for reading this far. Hopefully this early feedback continues to help :+1:
Re-visit (updates from recent re-review)
- Now working. The bot now (as of Jan 2025) responds with feedback for in-progress reviews (even if it has nothing else to comment).
- NO throttling (e.g. works at your scale)
- Integrates by using the GitHub PR Review workflow
:-1: Where the beta still shows
The ease of interacting with the AI assistant
- Not "hard", but currently this is where the BETA really shows; the AI still feels un-tuned and requires human supervision. Hopefully this continues to improve during the beta.
TL;DR - AI should infer BCP (OUT-OF-DATE)
(EDIT: This seems to be mostly fixed as of May, 2025)
A lot of the flaws and defects I see in the AI's responses are avoidable by modern LLMs simply by adding the constraint that any generated code should follow relevant best practices. As I'm sure your team already knows, reviews are a chance to enforce things like BCP and project conventions. Most human reviewers are held to this via a contributors guide in the project, where conventions and standards are enumerated. Modern AI (pick any LLM) should be able to make some sense of these common documents; perhaps the codecov-ai-reviewer could look for directions there? The rationale is to shift some of the power, and responsibility, back to the users: those with strict standards like me (see my own Convention Enhancement Proposal - 4 for example) can leverage existing documentation, and those without strict standards remain unconstrained by arbitrary standards. :shrug: This is already possible with GitHub's Copilot AI by attaching the contributors guide during review chats, as an example to compare.
- NIT - The @reference still does not appear to be correct (but it does work now).
  TL;DR - Bot should look like a normal bot.
  While this is more nit-picky, the bot's user mention should be picked up by GitHub and link to a relevant profile or homepage. As I'm sure your team already knows, there is often resistance to adopting new tools/tech by many users; this rather simple function of the `@` mention tags is a way of reassuring users by pointing at a profile or page that answers "who or what is this user/bot?!?" before anyone has to ask. :shrug: Nevertheless the mentions, while unlinked, __are__ working, so this is probably fine during the beta phase; sorry if this is not actionable feedback.
- NIT - Bot lacks feedback for "completed" review status.
  TL;DR - If you give a user a feature... they will ask for another.
  While this is more nit-picky, I think it would be nice to also have feedback for when the bot is done with its review. Extra points if you can update (e.g. edit) the initial response comment in the PR once done. This is especially helpful when the bot has no review comments afterward.
- The bot, as any AI does, sometimes suggests incorrect, dangerous, or worse code. An advisory (e.g. just a C.M.A. disclaimer) to remind us humans to think before we just accept suggestions is not present. Even coffee cups have warnings these days.
Summary - Works, just not ready for unsupervised use yet?
@Adal3n3 (and team):
> The bot is probably working, but it may not respond if it doesn't have any feedback on your code or anything to review.

The bot is certainly working now. I even found some of the suggestions useful. However, as with all betas, I emphasize patience and due diligence when considering the bot's code suggestions. At least right now, this is still a beta.
:+1: Thanks for reading. I genuinely hope this is helpful to both the engineering team and any fellow users interested in the beta.
This is also not working for me. I have installed the GitHub app, but there is no response or indication that anything is actually happening after submitting a comment with @codecov-ai-reviewer review
(Putting this here, since https://docs.codecov.com/docs/beta-codecov-ai indicates this is where you'd like feedback.)
Looking to enable this for some Mozilla repos. Our Legal team has the following questions:
Our two biggest concerns with AI services are that they not train on our data and that the provider addresses copyright risk re the outputted code by implementing measures not to copy code and/or offering an indemnity for IP claims. Neither of those are addressed in their terms. Could we propose changes to the terms or is there anything they can tell us that would reassure us about those issues?
Hi y'all! Catching up here. @andihalberkamp One of the first things we're working on is giving better feedback about what the bot is doing: sometimes, if it doesn't find anything relevant to review, it doesn't post a comment, and it feels like nothing happened, which isn't the case. CC @mikebronner, as I believe this also answers your Q.
@reactive-firewall Tremendous feedback, can't thank you enough here. Almost everything you have suggested is on the list of things to tackle in the next few months of development. To answer a couple of your points directly: there is no throttling today; we're optimizing for usage during the beta, but will obviously tamp down if we see somebody abusing it. We do not have the ability to fine-tune review instructions at this time, but it is something we're considering longer term. I'll come back to you in a couple weeks; we're overhauling some of the underlying infrastructure of the bot and I expect a smoother experience come end of January.
@larseggert Here are the terms of the bot, with links to various additional resources: https://github.com/apps/codecov-ai. We do not train on your data, and we do not make direct code suggestions at this time, so as far as I can see we can't infringe copyright; let me know if your security team has further questions and I'm happy to chase down answers for you. I'll chat with our legal team re: copyright language for the future, though; this is a great callout, and some of what we're working on will eventually be more relevant to copyright risk.
So I just tried this out on a very simple PR, and I must say I am not impressed. The suggestions:
- make the code less readable (even when they say they will increase readability)
- introduce syntax errors (`largest_ok_mtu` -> `largest_ok_mgu`)
- suggest incorrect comment changes
- are not indented correctly and don't follow Rust formatting style
I think this needs quite a bit of work before I'd consider it useful.
Hi all, just letting you know we've made a number of improvements to the bot:
- You'll now see a response every single time, whether it has a comment to make or not
- It will let you know when it is complete either by saying "nothing to review" or making a comment
- We're now using Claude under the hood to provide better responses than the ChatGPT model we had used before
This all involved some pretty fundamental changes to how we built the bot, and we've got a few more things up our sleeve to further improve responses. Please give it a try and continue to give us feedback, and we'll continue to build what you need!
@reactive-firewall Thank you for sharing your feedback! We're thrilled to hear that some of the suggestions were helpful. We recently improved the bot experience to address the 'nothing to review' feedback, and we're glad to see it's working as intended for you. We'd love to hear more of your thoughts!
Not only will I not be using this feature, by implementing it at all you have guaranteed that I will no longer use any of your services ever again.
@zackw sorry to hear that. Just to be clear, the AI reviewer is not automatically part of your Codecov experience. You need to install the Codecov AI GitHub integration to use it. So, if you don't want any AI features, no problem. Just don't install the integration.
I don't think you understand. Your organization has permanently lost my trust by jumping on the LLM bandwagon. I will never use any Codecov services again and I will tell everyone I know to do the same.
Is there a way to tell the bot that its feedback is not correct?
Hi, is the AI Reviewer available for Codecov self-hosted?
@isatfg unfortunately AI Review is not available on Codecov self-hosted at this time.
@duelinggalois you should be able to :+1: / :-1: on a review; that should let us know. Out of curiosity, would you be willing to talk about an instance where we got it wrong? If it's on a public repo, you can link it here or shoot me an email at [email protected]
Yesterday, I commented @codecov-ai-reviewer review (twice) in a simple PR, but both times I received a message "On it! We are reviewing the PR and will provide feedback shortly". However, no feedback was posted. Any idea what went wrong? In the same PR, I also added Copilot as a Reviewer (twice) and feedback was provided in around a minute.
Hey Graeme
Thanks for writing in. I'll take a look into this for you.

Regards,
Rohan Bhaumik
Senior Product Manager @ Sentry
Tried @codecov-ai-reviewer review on a PR and:
- It commented on the same line of code, with the same comment, 3 times
- I think the agent should be able to respond to replies on its comments, for example asking it to generate the code it suggested.
For @codecov-ai-reviewer test:
- The PR had about 65% coverage
- Sentry determined no other tests were needed. Feels like there should have been something
- I had another instance where I asked it to generate tests and it just never did
I just tried it and got the following comment in response:
Seer had some issues with your request. Please try again.
@codecov-ai-reviewer review also stopped working about a week ago for me.
:speak_no_evil: Likewise, @codecov-ai-reviewer review also stopped working for me (unsure exactly when in my case; it occurred sometime after June's roll-out of the thumbs up/down feedback feature, where it was last known to be working for configured projects).
Example of non-response: https://github.com/reactive-firewall-org/multicast/pull/475#issuecomment-3316225306
This is a regression for me specifically, but still likely related to @mcfongtw's issue.
Observations:
- The Codecov AI dashboard still reports as configured. (no changes on my end)
- The Codecov AI GitHub App still reports as installed and configured. (no changes on my end)
- https://status.codecov.com shows GitHub services (including Actions, PRs, and APIs) are all "operational" :shrug:
FYI, for some reason, AI-assisted code review started working for me again a few days ago. I did not change anything on my end.