
Credential Digger - Supporting Keras Model

Credential Digger is a GitHub scanning tool that identifies hardcoded credentials (passwords, API keys, secret keys, tokens, personal information, etc.). Compared to other GitHub scanners, Credential Digger has a clear advantage in reducing false positives in the scan reports: it uses two machine learning models to identify false positives, especially in password identification.

Credential Digger finds credentials hardcoded in a repository. The tool is composed of the components described below.

The database is structured in the following way (arrows point to foreign keys).

The project includes three components: a db (sql folder), a client (credentialdigger folder), and a user interface (ui folder).

create_table.sql defines the db schema.

Note that, given the file_name and commit_hash of a discovery, both the commit and the file are accessible at the following addresses:

REPO_URL/commit/COMMIT_HASH
REPO_URL/blob/COMMIT_HASH/file_name
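For instance, the two addresses can be assembled from the pieces of a discovery as follows (the repository URL, commit hash, and file name below are placeholder values, and the helper function is illustrative, not part of the client API):

```python
def discovery_urls(repo_url, commit_hash, file_name):
    """Return the (commit_url, file_url) pair for a discovery."""
    commit_url = f'{repo_url}/commit/{commit_hash}'
    file_url = f'{repo_url}/blob/{commit_hash}/{file_name}'
    return commit_url, file_url

# Placeholder values for illustration only
commit_url, file_url = discovery_urls(
    'https://github.com/SAP/credential-digger', 'abc123', 'ui/server.py')
# commit_url == 'https://github.com/SAP/credential-digger/commit/abc123'
# file_url   == 'https://github.com/SAP/credential-digger/blob/abc123/ui/server.py'
```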

This client can be used to easily interact with the db. It offers a scanner for git repositories based on Hyperscan (other scanners can be implemented).

Please note that the database must be up and running.

The user interface can be used to easily perform scans and flag the discoveries.

  1. Prepare the .env file and edit it with the correct data

    cp .env.sample .env
    vim .env  # Insert real credentials
  2. Run the db using docker-compose:

    sudo docker-compose up --build postgres

    Consider not exposing the db port in production.

  3. Install the dependencies for the client.

    sudo apt install libhyperscan-dev libpq-dev
  4. Install the Python requirements from the requirements.txt file.

    pip install -r requirements.txt
  5. Set which models you want to use in ui/server.py

    MODELS = ['SnippetModel', 'PathModel']
  6. Run the ui.

The ui is available at http://localhost:5000/

Warning: to use the keras models, make sure the credentialdigger PyPI package is NOT installed.

Run the db on a different machine

If the db and the client run on different machines, clone this repository on both of them.

Then, execute steps 1 and 2 of the installation section above on the machine running the db, and execute the remaining steps on the machine running the client.

When the db and the client/ui run on separate machines, the port of the db must be exposed.

Use machine learning models

Currently no pretrained keras models are provided.

If available, the models and their respective tokenizers are expected to be found in the models_data directory, each in its own subdirectory. Model hyperparameters can be found in the models/keras_support folder.

Here is a typical directory example for keras models:

models_data
├── __init__.py
├── path_model
│   ├── model_path.h5
│   └── tokenizer.pickle
└── snippet_model
    ├── model_classifier.h5
    ├── model_extractor.bin
    └── tokenizer.pickle

Note that snippet_extractor is still a fasttext model.
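Before starting the ui, the layout above can be sanity-checked with a few lines of standard-library Python. The file names are taken from the tree above; the function name itself is hypothetical, not part of the project:

```python
from pathlib import Path

# Expected files per subdirectory, as shown in the tree above
EXPECTED = {
    'path_model': ['model_path.h5', 'tokenizer.pickle'],
    'snippet_model': ['model_classifier.h5', 'model_extractor.bin',
                      'tokenizer.pickle'],
}

def missing_model_files(models_data='models_data'):
    """Return the paths of expected model files that are absent."""
    root = Path(models_data)
    return [str(root / sub / name)
            for sub, names in EXPECTED.items()
            for name in names
            if not (root / sub / name).is_file()]
```

An empty result means the directory matches the expected layout.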

The File Path Model classifies a discovery as a false positive according to its file path, i.e. when the path indicates that the code portion is used for tests or examples.
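As a purely illustrative stand-in for the idea (the actual classifier is a pretrained Keras model, not a rule; the pattern list below is invented for this sketch), a crude path heuristic might look like:

```python
import re

# Illustrative only: NOT the pretrained Keras path model.
# The real model learns such patterns from data.
FP_PATH_HINTS = re.compile(
    r'(^|/)(tests?|examples?|samples?|docs?|mocks?)(/|$)',
    re.IGNORECASE)

def looks_like_false_positive(file_path):
    """Flag paths whose segments suggest test or example code."""
    return bool(FP_PATH_HINTS.search(file_path))
```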

The Code Snippet Model identifies password-based authentication in code and differentiates between real and fake passwords.

WARNING: this model is pre-trained with synthetic data in order to protect privacy. It helps to reduce the false positives related to password recognition, but with a lower precision compared to a model pre-trained with real data.

# Instantiate a client connected to the running db (use your real credentials)
from credentialdigger.cli import Client
c = Client(dbname='MYDB', dbuser='MYUSER', dbpassword='*****',
           dbhost='localhost', dbport=5432)

Refer to the Wiki for further information.

