GitHub - sam-paech/slop-forensics

A toolkit for generating & analyzing "slop" (over-represented lexical patterns) in LLM outputs.

Generate a standardised set of outputs from several models for downstream analysis.

Analyze a model's outputs for repetitive words, bigrams, trigrams, vocabulary complexity, and slop scores.

Aggregate findings across models to build canonical slop lists of over-represented words and phrases.

🌳 Phylogenetic Tree Building

Cluster models based on slop profile similarity using parsimony (PHYLIP) or hierarchical clustering.

Try it in Colab: https://colab.research.google.com/drive/1SQfnHs4wh87yR8FZQpsCOBL5h5MMs8E6?usp=sharing

  1. Prerequisites & Installation
  2. Project Structure
  3. Configuration / Environment Setup
  4. Usage
  5. How it Works
  6. License
  7. Contact
Prerequisites & Installation
  1. Python 3.7+

  2. The required Python dependencies are listed in requirements.txt. Install them via:

    pip install -r requirements.txt
  3. PHYLIP (optional)

  4. NLTK data (recommended):
    We use punkt, punkt_tab, stopwords, and cmudict for parts of the analysis. Download via:

    import nltk
    nltk.download('punkt')
    nltk.download('punkt_tab')
    nltk.download('stopwords')
    nltk.download('cmudict')
Project Structure

slop-forensics/
  ├─ scripts/
  │   ├─ generate_dataset.py
  │   ├─ slop_profile.py
  │   ├─ create_slop_lists.py
  │   ├─ generate_phylo_trees.py
  │   └─ ...
  ├─ slop_forensics/
  │   ├─ config.py
  │   ├─ dataset_generator.py
  │   ├─ analysis.py
  │   ├─ metrics.py
  │   ├─ phylogeny.py
  │   ├─ slop_lists.py
  │   ├─ utils.py
  │   └─ ...
  ├─ data/
  │   └─ (internal data files for slop lists, e.g. slop_list.json, etc.)
  ├─ results/
  │   ├─ datasets/
  │   ├─ analysis/
  │   ├─ slop_lists/
  │   ├─ phylogeny/
  │   └─ ...
  ├─ .env.example
  ├─ requirements.txt
  ├─ README.md  ← You are here!
  └─ ...
Configuration / Environment Setup
  1. Copy .env.example to .env and update the variables:
  2. In .env, set OPENAI_API_KEY to an OpenRouter or OpenAI-compatible key.
  3. (Optional) Set PHYLIP_PATH if the pars/consense binaries are not in your PATH.

Example .env contents:

# .env
OPENAI_API_KEY=sk-or-v1-xxxxxx
OPENAI_BASE_URL="https://openrouter.ai/api/v1"
PHYLIP_PATH="/usr/local/bin"

Note: If you are not using OpenRouter, you can point to another OpenAI-compatible service by changing OPENAI_BASE_URL.
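To make the three variables concrete, here is a minimal, hypothetical sketch of how a pipeline might read them from the environment; the project's actual defaults live in slop_forensics/config.py, and the function name load_settings is illustrative, not part of the codebase.

```python
import os

def load_settings(env=os.environ):
    """Read pipeline settings from the environment (hypothetical sketch;
    the real configuration logic lives in slop_forensics/config.py)."""
    return {
        "api_key": env.get("OPENAI_API_KEY", ""),
        # Default to OpenRouter; swap this for any OpenAI-compatible endpoint.
        "base_url": env.get("OPENAI_BASE_URL", "https://openrouter.ai/api/v1"),
        # Empty string means "rely on $PATH to find pars/consense".
        "phylip_path": env.get("PHYLIP_PATH", ""),
    }

settings = load_settings({"OPENAI_API_KEY": "sk-or-v1-xxxxxx"})
print(settings["base_url"])  # https://openrouter.ai/api/v1
```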

Usage

Below is a typical workflow, using mostly defaults. Adjust paths/arguments as desired.

Note: several default parameters are pre-configured in slop_forensics/config.py.

1. Generate Datasets

Use generate_dataset.py to prompt the specified LLMs for story outputs.

python3 scripts/generate_dataset.py \
  --model-ids x-ai/grok-3-mini-beta,meta-llama/llama-4-maverick,meta-llama/llama-4-scout,google/gemma-3-4b-it \
  --generate-n 100
2. Analyze Outputs & Profile Slop

Once data is generated, run slop_profile.py to calculate word/bigram/trigram usage, repetition scores, slop scores, etc.

python3 scripts/slop_profile.py

3. Create Slop Lists

Use create_slop_lists.py to combine analysis results from multiple models into a master "slop list".

python3 scripts/create_slop_lists.py
4. Generate Phylogenetic Trees

Combining stylometric analysis with bioinformatics, we use our generated slop profiles to infer relationships between models purely from their outputs. With the generate_phylo_trees.py script, we create a pseudo-phylogenetic tree (via PHYLIP parsimony or hierarchical clustering fallback).

The parsimony algorithm differs from hierarchical clustering in that it tries to infer lineage from the fewest number of mutations. Here, a mutation is represented as the presence or absence of a given word/phrase in each model's over-represented list. For more info, see the next section.

python3 scripts/generate_phylo_trees.py
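The presence/absence encoding above can be sketched with toy data: each model's slop list becomes a binary trait vector, and the Hamming distance between two vectors counts the "mutations" separating them. The model names and word lists below are hypothetical, and this shows only the distance computation, not the full PHYLIP parsimony run.

```python
from itertools import combinations

# Toy slop profiles (hypothetical data): each model's over-represented words.
profiles = {
    "model_a": {"tapestry", "delve", "nestled"},
    "model_b": {"tapestry", "delve", "bustling"},
    "model_c": {"moreover", "henceforth"},
}

# Encode each model as a binary trait vector: 1 = word is in its slop list.
vocab = sorted(set().union(*profiles.values()))
vectors = {m: [int(w in words) for w in vocab] for m, words in profiles.items()}

# Hamming distance = number of presence/absence flips ("mutations") between models.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

for m1, m2 in combinations(sorted(profiles), 2):
    print(m1, m2, hamming(vectors[m1], vectors[m2]))
```

On this toy data, model_a and model_b differ by only 2 flips while model_c differs from both by 5, so a tree built from these distances would group model_a with model_b.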
How it Works

1. Slop Profiling: Identifying Over-Used Words and Phrases

Purpose:
We analyze each model's outputs to find words, phrases, and patterns that are frequently overused (what we call "slop").

How we do it:

Result: We produce detailed profiles (saved as JSON files) showing which words and phrases each model repeats most, along with these metrics.
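As a rough illustration of the profiling idea, here is a minimal sketch that scores over-representation as a model-vs-baseline frequency ratio. The corpora, the smoothing constant, and the ratio formula are all hypothetical simplifications, not the project's actual metrics.

```python
from collections import Counter

# Toy corpora (hypothetical): model output vs. a human-written baseline.
model_text = "the tapestry of the bustling city a tapestry of life the tapestry"
human_text = "the city was busy and the streets were full of life in the city"

def freq(text):
    """Relative frequency of each word in a whitespace-tokenized text."""
    words = text.split()
    counts = Counter(words)
    total = len(words)
    return {w: c / total for w, c in counts.items()}

model_f, human_f = freq(model_text), freq(human_text)

# Over-representation ratio: model frequency vs. baseline frequency
# (smoothed so words absent from the baseline don't divide by zero).
eps = 1e-6
slop_scores = {w: f / (human_f.get(w, 0) + eps) for w, f in model_f.items()}
top = max(slop_scores, key=slop_scores.get)
print(top)  # tapestry
```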

2. Slop List Creation: Making a Reference of Frequently Over-Used Words

Purpose:
We create comprehensive lists of commonly overused words and phrases (slop lists), which help identify repetitive patterns across multiple models.

How we do it:
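One plausible aggregation rule, sketched with hypothetical per-model data: a word enters the canonical list only when enough different models over-use it, which filters out quirks of any single model. The threshold and the input lists below are illustrative, not the project's actual criteria.

```python
from collections import Counter

# Hypothetical per-model slop profiles produced by the profiling step.
per_model_top_words = {
    "model_a": ["tapestry", "delve", "nestled"],
    "model_b": ["tapestry", "delve", "bustling"],
    "model_c": ["tapestry", "moreover"],
}

# Keep only words over-represented in at least MIN_MODELS models.
MIN_MODELS = 2
counts = Counter(w for words in per_model_top_words.values() for w in words)
slop_list = sorted(w for w, n in counts.items() if n >= MIN_MODELS)
print(slop_list)  # ['delve', 'tapestry']
```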

Result:
We produce several canonical lists:

3. Phylogenetic Tree Building: Grouping Models by Similarity of Slop Usage

Purpose:
We infer a lineage tree based on similarity of each model's slop profile.

How we do it:

Result:
We produce visual tree diagrams (both circular and rectangular), as well as data files (.nwk and .nex) showing relationships among models based on their repetitive language patterns.
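For readers unfamiliar with the .nwk files mentioned above: Newick is a compact parenthesized text format for trees. A minimal sketch of serializing a toy cluster tree (the tree shape and model names are hypothetical, and real Newick output typically also carries branch lengths):

```python
# Serialize a nested-tuple tree to a Newick string.
# Leaves are model-name strings; internal nodes are (left, right) pairs.
def to_newick(node):
    if isinstance(node, str):  # leaf: just the model name
        return node
    left, right = node         # internal node: recurse into both subtrees
    return f"({to_newick(left)},{to_newick(right)})"

tree = (("model_a", "model_b"), "model_c")
print(to_newick(tree) + ";")  # ((model_a,model_b),model_c);
```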

This pipeline allows you to clearly see which words and phrases each language model tends to overuse, combines these insights into helpful reference lists, and visually clusters models by their linguistic habits.

License

This project is licensed under the MIT License.

Contact

For questions or feedback:

If you use Slop Forensics in your research or work, please cite it as:

@software{paech2025slopforensics,
  author = {Paech, Samuel J},
  title = {Slop Forensics: A Toolkit for Generating \& Analyzing Lexical Patterns in LLM Outputs},
  url = {https://github.com/sam-paech/slop-forensics},
  year = {2025},
}
