LLM Policy document?
In #1478, @hodgesmr noted:
I did not see an LLM policy in the repository, but I felt it was important to note that, given open-source sensitivities to those tools.
We have not yet discussed this as maintainers, but it is probably time to do so (or perhaps too late! 😆):
- What are our general views on contributions involving LLMs?
- Are there standard policies we might adopt or adapt? (For reference, @marcharper shared: https://wiki.gentoo.org/wiki/Project:Council/AI_policy)
Personally: I am not keen to review or maintain code generated by an LLM. In teaching, I have become jaded after seeing a significant amount of low-quality, AI-generated code that students could neither explain nor justify. It is often syntactically plausible but mathematically or logically incorrect—e.g., LLMs confidently producing code to give “solutions” to intractable differential equations.
That said, I have no objection to contributors using an LLM for idea exploration or boilerplate as long as the resulting code is genuinely authored, understood, and validated by a human. I would probably struggle to identify where the line is here.
My view is similar. We've already had someone aggressively push an AI slop PR, which was a total waste of time, and I don't want to encourage more of that in any way.
In principle I'm not opposed to the use of code generation tools. However with LLMs, unless we have some assurance on the full provenance of the generated code (for example, knowing that the LLM was only trained on code with permissive licenses), then we run the risk of accepting license-violating code. At my workplace, we are not, for example, allowed to copy-paste code from Stack Overflow. We can learn from it, but ultimately we must create original implementations. I'm in favor of policies that defend the integrity of our work, until the legalities associated with AI generated code are more settled.
Looking at the code for #1478, it looks straightforward to me and I don't have any specific concerns about potential inclusion of problematic code. We don't have an LLM policy in place currently and I don't want to block a new contribution, so let's make a one-off decision in this case.
Yeah I completely agree. :+1:
Honestly, I agree with everything that has been said so far, so I’m not sure it’s worth repeating.
As for the new contribution: since we don’t currently have a clear policy in place, we shouldn’t reject it.
I’m also not opposed to the use of code generation tools in principle, but I share the concerns raised about “copyright issues”. Assessing whether someone genuinely authored, understood, and validated the code could be easier in some cases and harder in others.
I do wonder whether, at this point, the simplest approach is to state that we do not accept any content that has been created with the assistance of such tools.
Having read https://matthodges.com/posts/2025-12-14-claude-axelrod-prisoners-dilemma/ (which claims "autonomous research through 200+ strategies, novel design creating a Bayesian opponent-modeling strategy" and clarifies that Claude was used to do it all, as opposed to it being done "with the assistance of Claude Code"), I'm more keen to adopt a strong line like you suggest @Nikoleta-v3. Let's add this to the README:
### Code Generation Policy
We do not accept any content that has been created with the assistance of generative code tools.
Regarding #1478:
As it seems clear that Claude wrote all the code (and came up with the "idea") I am less keen to review, merge and then have to maintain the PR.
If we were to merge it, we should make sure this is described in the docstrings, and we should correct the misleading narrative about the novelty of the work (Bayesian modelling of an opponent is not as novel as some of the writing suggests; arguably Downing's strategy, FirstByDowning, did this in Axelrod's original tournament as well as in the second one).
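For context, the idea being discussed is old enough to sketch in a few lines. This is a minimal, hypothetical illustration of Downing-style Bayesian opponent modelling (not the code from the PR or from the library): track the opponent's conditional cooperation frequencies given our previous move, and play whichever action maximises expected one-shot payoff. The class and attribute names are illustrative; payoffs are the standard prisoner's dilemma values (R=3, P=1, T=5, S=0).

```python
class DowningStyleModel:
    """Toy sketch of Downing-style Bayesian opponent modelling."""

    def __init__(self):
        # Laplace-smoothed counts [opponent cooperations, total plays],
        # conditioned on our own previous move ("C" or "D").
        self.coop_after = {"C": [1, 2], "D": [1, 2]}
        self.last_move = None

    def observe(self, opponent_move):
        """Update the conditional model after seeing the opponent's reply."""
        if self.last_move is None:
            return
        counts = self.coop_after[self.last_move]
        counts[1] += 1
        if opponent_move == "C":
            counts[0] += 1

    def choose(self):
        """Pick C or D by comparing expected one-shot payoffs."""
        p_c = self.coop_after["C"][0] / self.coop_after["C"][1]  # P(opp C | we played C)
        p_d = self.coop_after["D"][0] / self.coop_after["D"][1]  # P(opp C | we played D)
        # Expected payoffs with R=3, S=0, T=5, P=1.
        ev_coop = 3 * p_c + 0 * (1 - p_c)
        ev_defect = 5 * p_d + 1 * (1 - p_d)
        self.last_move = "C" if ev_coop > ev_defect else "D"
        return self.last_move
```

Under the uninformative prior this sketch starts by defecting (expected payoff 3 vs 1.5), which echoes the known behaviour of Downing's original entry; the point is simply that conditional-frequency opponent modelling predates the PR by several decades.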