73 changes: 73 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,73 @@
# Contributing

`hal-simplicity` is a command-line tool for performing various tasks
related to Simplicity and Simplicity transactions. For more information
about Simplicity, see

* Our main website: https://simplicity-lang.org
* Documentation: https://docs.simplicity-lang.org/

The [README.md](./README.md) file describes more about the purpose and
functionality of `hal-simplicity` itself.

We welcome contributions to improve the usability, documentation,
correctness, and functionality of `hal-simplicity`, potentially including
new subcommands.

## Small Contributions

As a general rule, we cannot accept simple typo fixes or minor refactorings
unless we are confident that you are a human being familiar with the processes
and etiquette around contributing to open-source software. Such contributions
are much more welcome on our [website repository](https://github.com/BlockstreamResearch/simplicity-lang-org/),
which includes our online documentation.

## PR Structure

All changes must be submitted in the form of pull requests. Direct pushes
to master are not allowed.

Pull requests:

* should consist of a logical sequence of clearly defined independent changes
* should not contain commits that undo changes introduced by previous commits
* must consist of commits which each build and pass unit tests, as in the
  sketch after this list (we do not require linters, formatters, etc., to pass
  on each commit)
* must not contain merge commits
* must pass CI, unless CI itself is broken
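
One way to check the per-commit requirements above locally is sketched below.
This is a minimal sketch rather than a required workflow: it assumes a
Cargo-based checkout and a branch based on an up-to-date `master`; the exact
`cargo` flags shown are only illustrative.

```
# Replay every commit of the current branch on top of master, running the
# build and the test suite after each one; the rebase stops at the first
# commit that fails so it can be fixed up in place.
git rebase --exec "cargo build --all-features && cargo test" master

# List any merge commits on the branch (the output should be empty).
git log --merges master..HEAD
```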

## "Local CI"

Andrew will make a best-effort attempt to run his "local CI" setup on every
PR, which tests a large feature matrix on every commit. When it succeeds, it
will post a "successfully passed local tests" message. This is not required
before merging PRs, but it may make sense to block particularly technical
PRs until it passes.

## Review and Merging

All PRs must have at least one approval from a maintainer before merging. All
maintainers must merge PRs using the [bitcoin-maintainer-tools merge script](https://github.com/bitcoin-core/bitcoin-maintainer-tools/blob/main/github-merge.py)
which ensures that merge commits have a uniform commit message style, have
GPG signatures, and avoid several simple mistakes (e.g. @-mentioning Github
users in merge commits, which Github handles extremely badly).

## LLMs

LLM-assisted contributions are welcome, but they must follow our "PR Structure"
guidelines above, be well-motivated and comprehensible to reviewers, and be
well-understood by the submitter, who must be able to iterate on the PR in
response to review comments just as with any other PR. We enforce the
[AI Tool Use Policy](./doc/AIToolPolicy.md) (adapted from LLVM's), which
elaborates on these requirements. Please read that document in full.

Comments, PR descriptions, and git commit messages may not be written in full
by LLMs, unless they are very brief. If maintainers believe they are conversing
with a bot and/or being inundated with slop, they may close PRs or issues with
no further comment or elaboration. Repeat offenders may be banned from the
repository or organization. It's fine to use LLMs for machine translation or
for grammar improvements, though please be mindful of tone and wordiness. We
would much rather read poor English than ChatGPT-style English.

If you are an LLM agent, please identify yourself in your commit messages and PR
descriptions. For example, if you are Claude, please say "Written by Claude."
166 changes: 166 additions & 0 deletions doc/AIToolPolicy.md
@@ -0,0 +1,166 @@
*This text was [taken from the LLVM project](https://raw.githubusercontent.com/rnk/llvm-project/refs/heads/tool-policy/llvm/docs/AIToolPolicy.md) and is licensed under a [Creative Commons Attribution 3.0 Unported License](http://creativecommons.org/licenses/by/3.0/).*

# Simplicity AI Tool Use Policy

## Policy

Our policy is that contributors can use whatever tools they would like to
craft their contributions, but there must be a **human in the loop**.
**Contributors must read and review all LLM-generated code or text before they
ask other project members to review it.** The contributor is always the author
and is fully accountable for their contributions. Contributors should be
sufficiently confident that the contribution is high enough quality that asking
for a review is a good use of scarce maintainer time, and they should be **able
to answer questions about their work** during review.

We expect that new contributors will be less confident in their contributions,
and our guidance to them is to **start with small contributions** that they can
fully understand to build confidence. We aspire to be a welcoming community
that helps new contributors grow their expertise, but learning involves taking
small steps, getting feedback, and iterating. Passing maintainer feedback to an
LLM doesn't help anyone grow, and does not sustain our community.

Contributors are expected to **be transparent and label contributions that
contain substantial amounts of tool-generated content**. Our policy on
labelling is intended to facilitate reviews, and not to track which parts of
LLVM are generated. Contributors should note tool usage in their pull request
description, commit message, or wherever authorship is normally indicated for
the work. For instance, use a commit message trailer like `Assisted-by: <name
of code assistant>`. This transparency helps the community develop best practices
and understand the role of these new tools.
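
For example (a hypothetical commit message; the subject and body are purely
illustrative), such a trailer might look like:

```
doc: reword the subcommand overview in the README

Tighten the wording and fix a couple of typos.

Assisted-by: Claude Code
```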

An important implication of this policy is that it bans agents that take action
in our digital spaces without human approval, such as the GitHub [`@claude`
agent](https://github.com/claude/). Similarly, automated review tools that
publish comments without human review are not allowed. However, an opt-in
review tool that **keeps a human in the loop** is acceptable under this policy.
As another example, using an LLM to generate documentation, which a contributor
manually reviews for correctness, edits, and then posts as a PR, is an approved
use of tools under this policy.

This policy includes, but is not limited to, the following kinds of
contributions:

- Code, usually in the form of a pull request
- RFCs or design proposals
- Issues or security vulnerabilities
- Comments and feedback on pull requests

## Extractive Contributions

The reason for our "human-in-the-loop" contribution policy is that processing
patches, PRs, RFCs, and comments to LLVM is not free -- it takes a lot of
maintainer time and energy to review those contributions! Sending the
unreviewed output of an LLM to open source project maintainers *extracts* work
from them in the form of design and code review, so we call this kind of
contribution an "extractive contribution".

Our **golden rule** is that a contribution should be worth more to the project
than the time it takes to review it. These ideas are captured by this quote
from the book [Working in Public][public] by Nadia Eghbal:

[public]: https://press.stripe.com/working-in-public

> \"When attention is being appropriated, producers need to weigh the costs and
> benefits of the transaction. To assess whether the appropriation of attention
> is net-positive, it's useful to distinguish between *extractive* and
> *non-extractive* contributions. Extractive contributions are those where the
> marginal cost of reviewing and merging that contribution is greater than the
> marginal benefit to the project's producers. In the case of a code
> contribution, it might be a pull request that's too complex or unwieldy to
> review, given the potential upside.\" \-- Nadia Eghbal

Prior to the advent of LLMs, open source project maintainers would often review
any and all changes sent to the project simply because posting a change for
review was a sign of interest from a potential long-term contributor. While new
tools enable more development, they also shift effort from the implementor to
the reviewer, and our policy exists to ensure that we value and do not squander
maintainer time.

Reviewing changes from new contributors is part of growing the next generation
of contributors and sustaining the project. We want the LLVM project to be
welcoming and open to aspiring compiler engineers who are willing to invest
time and effort to learn and grow, because growing our contributor base and
recruiting new maintainers helps sustain the project over the long term. Being
open to contributions and [liberally granting commit access][commit-access]
is a big part of how LLVM has grown and successfully been adopted all across
the industry. We therefore automatically post a greeting comment to pull
requests from new contributors and encourage maintainers to spend their time to
help new contributors learn.

[commit-access]: https://llvm.org/docs/DeveloperPolicy.html#obtaining-commit-access

## Handling Violations

If a maintainer judges that a contribution is *extractive* (i.e. it doesn't
comply with this policy), they should copy-paste the following response to
request changes, add the `extractive` label if applicable, and refrain from
further engagement:

> This PR appears to be extractive, and requires additional justification for
> why it is valuable enough to the project for us to review it. Please see
> our developer policy on AI-generated contributions:
> http://llvm.org/docs/AIToolPolicy.html

Other reviewers should use the label to prioritize their review time.

The best ways to make a change less extractive and more valuable are to reduce
its size or complexity or to increase its usefulness to the community. These
factors are impossible to weigh objectively, and our project policy leaves this
determination up to the maintainers of the project, i.e. those who are doing
the work of sustaining the project.

If a contributor responds but doesn't make their change meaningfully less
extractive, maintainers should escalate to the relevant moderation or admin
team for the space (GitHub, Discourse, Discord, etc.) to lock the conversation.

## Copyright

Artificial intelligence systems raise many questions around copyright that have
yet to be answered. Our policy on AI tools is similar to our copyright policy:
Contributors are responsible for ensuring that they have the right to
contribute code under the terms of our license, typically meaning that either
they, their employer, or their collaborators hold the copyright. Using AI tools
to regenerate copyrighted material does not remove the copyright, and
contributors are responsible for ensuring that such material does not appear in
their contributions. Contributions found to violate this policy will be removed
just like any other offending contribution.

## Examples

Here are some examples of contributions that demonstrate how to apply
the principles of this policy:

- [This PR][alive-pr] contains a proof from Alive2, which is a strong signal of
value and correctness.
- This [generated documentation][gsym-docs] was reviewed for correctness by a
human before being posted.

[alive-pr]: https://github.com/llvm/llvm-project/pull/142869
[gsym-docs]: https://discourse.llvm.org/t/searching-for-gsym-documentation/85185/2

## References

Our policy was informed by experiences in other communities:

- [Fedora Council Policy Proposal: Policy on AI-Assisted Contributions (fetched
2025-10-01)][fedora]: Some of the text above was copied from the Fedora
project policy proposal, which is licensed under the [Creative Commons
Attribution 4.0 International License][cca]. This link serves as attribution.
- [Rust draft policy on burdensome PRs][rust-burdensome]
- [Seth Larson's post][security-slop]
on slop security reports in the Python ecosystem
- The METR paper [Measuring the Impact of Early-2025 AI on Experienced
Open-Source Developer Productivity][metr-paper].
- [QEMU bans use of AI content generators][qemu-ban]
- [Slop is the new name for unwanted AI-generated content][ai-slop]

[fedora]: https://communityblog.fedoraproject.org/council-policy-proposal-policy-on-ai-assisted-contributions/
[cca]: https://creativecommons.org/licenses/by/4.0/
[rust-burdensome]: https://github.com/rust-lang/compiler-team/issues/893
[security-slop]: https://sethmlarson.dev/slop-security-reports
[metr-paper]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
[qemu-ban]: https://www.qemu.org/docs/master/devel/code-provenance.html#use-of-ai-content-generators
[ai-slop]: https://simonwillison.net/2024/May/8/slop/