Establishing a Clear Policy on AI-Assisted Contributions · Issue #1663 · crossbario/autobahn-python

Hi everyone,

This issue is to open a discussion on a crucial topic for the future of our project(s):

The use of AI-powered coding assistants (like Anthropic Claude, Google Gemini, GitHub Copilot, etc.) by project contributors.

Here is the problem:

which leads to:

The goal here is to gather opinions, have an open discussion, and figure out and agree on how we move forward with all of this as an open-source project, and ultimately to crystallize the outcome into an AI policy and more; please see the deliverables below.

Your Input is Crucial

This policy will shape how we work going forward in this unfolding new world. Please share:

Let's work together to create a policy that protects our project while remaining welcoming to contributors who use modern tools responsibly!

Ah, and sorry for the "long" issue; I took quite some time and care to collect material and thoughts exhaustively (my understanding / eyes / views / opinions), so we have some meat to chew on and discuss ;)

Applicable To

This issue is relevant to all of these projects, all WAMP-related:

Rather than filing one issue on each of the above 10 repositories, I've decided it makes more sense for the discussion to happen in one repository only: the one with the most GitHub stars, which is Autobahn|Python. But once the discussion concludes, I will file the corresponding 9 issues on the respective repos, promise.

Sidenote 1: Collecting all of this, I just realized a) how crazy this whole endeavour (WAMP etc.) has turned out to be, b) how much we have achieved with all of you contributing (OSS, oh yeah!), and c) that I am crazy! Did I mention that already? Well, it's true ;)

Sidenote 2: Personally, I have lately done quite some experimentation with "AI" in various ways and for various uses, and I am quite thrilled and optimistic that AI can indeed help us tame the beast described above! At least for me, for hacking, coding and all that, it is an incredible catalyst / accelerator and time saver, and time is of the essence, always "too little" and all. Which is part of the reason I am filing this issue.

Deliverables Meta-Goal: Making the Right Thing the Easy Thing

Before diving into the details, let's be clear about our philosophy. We've all seen compliance processes fail because they create overhead without value. Our goal is different:

We want to create a process that:

We explicitly reject:

The principle is simple: If following the process makes developers' work better, they'll actually follow it.

Our Intent: Responsible Innovation

As AI tools become more powerful and integrated into our workflows, it's vital that we proactively establish a clear policy to:

  1. Protect the legal integrity of our codebase
  2. Respect our licensing commitments to users and contributors
  3. Enable responsible use of productivity-enhancing AI tools
  4. Create transparent documentation of our development practices
  5. Lead by example in the open source community

This affects all our projects, from the dual-licensed Crossbar.io to the permissively-licensed Autobahn|XYZ family.

The Core Challenge: AI, Authorship, and Copyright

The central issue stems from a fundamental principle in copyright law (e.g., as interpreted by the U.S. Copyright Office):

A work must be created by a human to be copyrightable. An AI cannot be an author and cannot hold copyright.

This has several critical consequences for us:

  1. The Ownership Gap: Code generated by an AI without significant human creative input or modification is not owned by the user who prompted it. It may fall into the public domain.

  2. The "Union License" Problem: AI models are trained on vast datasets containing code under various licenses (MIT, GPL, Apache, proprietary, etc.). If AI output is considered a derivative work of its training data, the legal implications are staggering:

  3. The "Derivative Work" Interpretation Chaos: The term "derivative work" itself is a legal minefield:

How This Impacts Our Projects

The risks differ depending on the project's license, but they are significant in all cases.

For Permissively-Licensed Projects (e.g., Autobahn|XYZ - MIT License)

For Dual-Licensed Projects (e.g., Crossbar.io - EUPL + Commercial)

For our dual-licensed projects, the introduction of un-owned, AI-generated code creates two severe problems that impact both sides of our license. One risk is, again, license pollution. The other risk concerns the dual-licensing model itself, which is based entirely on my current company (typedef int GmbH, Germany, which funded much of the development) owning 100% of the copyright, achieved through our Contributor Assignment Agreement (CAA).

Real-World Context: The Industry is Taking Notice

Several high-profile projects and organizations are grappling with this issue:

This isn't theoretical - it's a present challenge that responsible projects must address.

Why This Isn't Just Paranoia

Before you run for the basement, remember: we're not abandoning AI tools, we're learning to use them responsibly. Many industries have navigated similar transitions:

Similarly, AI tools won't replace programmers, but we need clear boundaries between "AI-assisted" and "AI-generated" code.

The good news: By addressing this proactively, we:

  1. Protect our project's legal integrity
  2. Give contributors clear guidelines
  3. Can still benefit from AI as a productivity tool
  4. Position ourselves as a responsible leader in the OSS community

A Proposed Path Forward: A Multi-Layered Approach

To address this comprehensively, I propose we develop a two-pronged strategy:

1. Human Contributor Policy (AI_POLICY.rst)

A formal policy that contributors must follow, covering:
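
To make this concrete, here is a minimal sketch of how such a file could be structured. All section names and wording below are my assumptions, meant purely as a strawman for this discussion:

    AI Contribution Policy (draft sketch)
    =====================================

    Disclosure
    ----------
    Contributions with substantial AI assistance must say so, for
    example via an "AI-Assisted: yes" trailer in the commit message.

    Human Authorship
    ----------------
    Every contribution must contain significant human creative input;
    raw, unmodified AI output is not acceptable.

    Provenance
    ----------
    Do not submit AI output that reproduces third-party code verbatim
    or near-verbatim.

    Review
    ------
    Maintainers may ask about the provenance of a change during review.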

2. AI Assistant Guidelines (CLAUDE.md)

A machine-readable file that instructs AI assistants on how to behave when working with our codebase:
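
As a strawman, such a file might read like this; every directive below is a placeholder I made up for illustration, not an agreed rule:

    # AI Assistant Guidelines for this Repository (strawman)

    - Assist the human author; do not autonomously generate whole new
      modules or features.
    - Never reproduce code verbatim from other repositories or from
      training data.
    - Prefer small, reviewable diffs; every suggestion must be reviewed,
      modified where needed, and understood by the human contributor
      before it is committed.
    - Follow the project's existing code style and test conventions.
    - When unsure about the licensing implications of a suggestion, say
      so explicitly instead of guessing.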

Proposed Implementation Timeline

If we reach consensus, I suggest:

  1. Week 1-2: Gather community feedback on this issue
  2. Week 3: Draft initial policy documents based on feedback
  3. Week 4: Review period for draft policies
  4. Month 2: Finalize and merge policies with clear effective date
  5. Ongoing: Update as we learn from real-world application

Questions for Discussion

  1. Disclosure threshold: What level of AI assistance requires disclosure? Any use? Substantial use (>X lines)?
  2. Enforcement: How do we verify compliance? Honor system? Code review flags? (One possible mechanical aid is sketched after this list.)
  3. Retroactive application: Do we need to audit recent contributions?
  4. Tooling: Should we develop linters or hooks to detect potential AI-generated patterns?
  5. Education: How do we help contributors understand what constitutes "significant human creative input"?
  6. Risk tolerance: Given the legal uncertainty, how conservative should our policy be?
  7. Evolution: How do we update our policy as case law develops?
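
On question 2 (and touching questions 1 and 4): one lightweight mechanical aid would be a commit-msg hook that enforces a disclosure trailer. Here is a minimal sketch in Python, assuming a hypothetical "AI-Assisted: yes|no" trailer convention that we would first have to agree on:

    #!/usr/bin/env python3
    # commit-msg hook sketch (hypothetical): reject commits that lack an
    # explicit "AI-Assisted: yes" or "AI-Assisted: no" trailer, so that
    # disclosure becomes a mechanical default instead of an afterthought.
    # The trailer name and values are assumptions for this discussion.
    import re
    import sys

    TRAILER = re.compile(r"^AI-Assisted:\s*(yes|no)\s*$",
                         re.MULTILINE | re.IGNORECASE)

    def main(msg_file: str) -> int:
        # git passes the path of the commit message file as argv[1]
        with open(msg_file, encoding="utf-8") as f:
            msg = f.read()
        if TRAILER.search(msg):
            return 0  # disclosure present, accept the commit
        sys.stderr.write(
            "commit rejected: please add an 'AI-Assisted: yes' or "
            "'AI-Assisted: no' trailer to your commit message\n"
        )
        return 1

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))

Note that this only implements the honor system in mechanical form: it verifies that contributors made a statement, not that the statement is true.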

The Bottom Line

We're navigating uncharted legal waters. Different jurisdictions will likely reach different conclusions about AI and derivative works. Our policy needs to be protective enough to safeguard the project while practical enough to not discourage contribution.

This isn't about fear - it's about responsible stewardship of a codebase that others depend on.

Thanks a lot for your attention and time!
