A CLI tool written in Go that generates git commit messages or code review summaries using ChatGPT AI (gpt-4o, gpt-4 models). It also automatically installs a git prepare-commit-msg hook.
Commit messages can be translated into another language (en, zh-tw, or zh-cn), and multiple models are supported (gpt-4, gpt-4o, etc.).

Install via Homebrew:
```sh
brew tap appleboy/tap
brew install codegpt
```
Install via Chocolatey: `choco install codegpt`

Install via the install script:

```sh
# Download and run the install script
bash < <(curl -sSL https://raw.githubusercontent.com/appleboy/CodeGPT/main/install.sh)
```

Or download and run manually:
```sh
chmod +x install.sh
./install.sh
```

Configurable Environment Variables

| Variable Name | Default Value | Description |
|---------------|---------------|-------------|
| VERSION | latest | The CodeGPT version to install (defaults to the latest release) |
| INSTALL_DIR | $HOME/.codegpt/bin | Installation directory |
| CURL_INSECURE | false | Skip SSL verification (true/false) |
Example usage:
```sh
# Install a specific version to a custom directory
VERSION=1.1.0 INSTALL_DIR=/opt/codegpt ./install.sh
```
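If the download host's TLS certificate cannot be verified (for example, behind an intercepting proxy), the `CURL_INSECURE` variable from the table above can be combined with the others. This is only a sketch; skipping SSL verification should be a last resort:

```sh
# Skip SSL verification during download (use only if you understand the risk)
CURL_INSECURE=true ./install.sh
```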
The script will:
- Download the pre-compiled binaries from the release page.
- Change the binary permissions to 755 and copy the binary to the system bin directory.

Use the `codegpt` command as shown below:
```sh
$ codegpt version
Version: 1.1.0
Git Commit: 899396a
Build Time: 2025-05-16T15:52:38Z
Go Version: 1.24.3
OS/Arch: darwin/arm64
```
Install from source code:
```sh
go install github.com/appleboy/CodeGPT/cmd/codegpt@latest
```

Using VS Code devcontainer
Add the feature to your devcontainer.json:
"features": { "ghcr.io/kvokka/features/codegpt:1": {} }
First, create your OpenAI API Key. The OpenAI Platform allows you to generate a new API Key.
Set the environment variable `OPENAI_API_KEY`:

```sh
export OPENAI_API_KEY=sk-xxxxxxx
```
Alternatively, store your API key in a custom config file:
```sh
codegpt config set openai.api_key sk-xxxxxxx
```
This will create a `.codegpt.yaml` file in your home directory (`$HOME/.config/codegpt/.codegpt.yaml`).
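As a quick sanity check, you can print the generated file; the exact key layout described in the comment below is an assumption and may differ between versions:

```sh
# Print the stored configuration (path taken from the text above).
cat "$HOME/.config/codegpt/.codegpt.yaml"
# The api_key set earlier is expected to appear under an `openai` section
# (assumed layout; the file is plain YAML managed by codegpt).
```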
The following options are available:

| Option | Description |
|--------|-------------|
| openai.base_url | Replace the default base URL (https://api.openai.com/v1). |
| openai.api_key | Generate API key from openai platform page. |
| openai.org_id | Identifier for this organization sometimes used in API requests. See organization settings. Only for openai service. |
| openai.model | Default model is gpt-4o; you can change to another custom model (Groq or OpenRouter provider). |
| openai.proxy | HTTP/HTTPS client proxy. |
| openai.socks | SOCKS client proxy. |
| openai.timeout | Default HTTP timeout is 10s (ten seconds). |
| openai.skip_verify | Default skip_verify is false; you can change it to true to ignore SSL verification. |
| openai.max_tokens | Default max tokens is 300. See reference max_tokens. |
| openai.temperature | Default temperature is 1. See reference temperature. |
| git.diff_unified | Generate diffs with <n> lines of context; default is 3. |
| git.exclude_list | Exclude files from the git diff command. |
| openai.provider | Default service provider is openai; you can change to azure. |
| output.lang | Default language is en; available languages are zh-tw, zh-cn, and ja. |
| openai.top_p | Default top_p is 1.0. See reference top_p. |
| openai.frequency_penalty | Default frequency_penalty is 0.0. See reference frequency_penalty. |
| openai.presence_penalty | Default presence_penalty is 0.0. See reference presence_penalty. |
| prompt.folder | Default prompt folder is $HOME/.config/codegpt/prompt. |
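All of the options above are set through the same `codegpt config set` interface shown earlier. A few illustrative calls (the values are examples, not recommendations):

```sh
# Example values only; adjust to your workflow.
codegpt config set openai.model gpt-4o
codegpt config set output.lang zh-tw
codegpt config set git.diff_unified 5
codegpt config set openai.max_tokens 500
```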
How to Customize the Default Prompt Folder
The default prompt folder is located at `$HOME/.config/codegpt/prompt`. You can change this to another directory by executing:
```sh
codegpt config set prompt.folder /path/to/your/prompt
```
When the prompt files are loaded from the custom folder, you will see messages similar to the following:
```
save code_review_file_diff.tmpl to /Users/xxxxx/.config/codegpt/prompt/code_review_file_diff.tmpl
save summarize_file_diff.tmpl to /Users/xxxxx/.config/codegpt/prompt/summarize_file_diff.tmpl
save summarize_title.tmpl to /Users/xxxxx/.config/codegpt/prompt/summarize_title.tmpl
save conventional_commit.tmpl to /Users/xxxxx/.config/codegpt/prompt/conventional_commit.tmpl
```

How to Change to Azure OpenAI Service
Get the API key, Endpoint, and Model deployments list from the Azure Resource Management Portal on the left menu.
Update your config file:
```sh
codegpt config set openai.provider azure
codegpt config set openai.base_url https://xxxxxxxxx.openai.azure.com/
codegpt config set openai.api_key xxxxxxxxxxxxxxxx
codegpt config set openai.model xxxxx-gpt-4o
```

Support for Gemini API Service
You can use the Gemini API or VertexAI Gemini service. See the Gemini API documentation and VertexAI documentation.
Update the following parameters in your config file:

| Option | Description | Example | Required |
|--------|-------------|---------|----------|
| openai.provider | Set to gemini to use the Gemini provider | gemini | Yes |
| gemini.api_key | API key for Gemini or VertexAI | xxxxxxx | Yes |
| gemini.model | Model name (see Gemini models) | gemini-2.0-flash | Yes |
| gemini.backend | Gemini backend: BackendGeminiAPI (default, for the Gemini API) or BackendVertexAI (for VertexAI) | BackendGeminiAPI | No |
| gemini.project_id | VertexAI project ID (required if using BackendVertexAI) | my-gcp-project | Cond. |
| gemini.location | VertexAI location (required if using BackendVertexAI) | us-central1 | Cond. |

Example: Gemini API (default backend)
```sh
codegpt config set openai.provider gemini
codegpt config set openai.model gemini-2.0-flash
codegpt config set gemini.api_key xxxxxxx
# gemini.backend defaults to BackendGeminiAPI, so you can omit it
```
Example: VertexAI backend

```sh
codegpt config set openai.provider gemini
codegpt config set openai.model gemini-2.0-flash
codegpt config set gemini.backend BackendVertexAI
codegpt config set gemini.project_id my-gcp-project
codegpt config set gemini.location us-central1
```
```mermaid
flowchart TD
    User([User])
    subgraph CodeGPT
        GeminiClient([Gemini Provider])
    end
    subgraph Google
        GeminiAPI([Gemini API])
        VertexAI([VertexAI Gemini])
    end
    User -->|Completion / GetSummaryPrefix| GeminiClient
    GeminiClient -- BackendGeminiAPI --> GeminiAPI
    GeminiAPI -- "Response (text, usage)" --> GeminiClient
    GeminiClient -- BackendVertexAI --> VertexAI
    VertexAI -- "Response (text, usage)" --> GeminiClient
    GeminiClient --> User
```
Support for Anthropic API Service

To build with the Anthropic API, see the Anthropic API documentation. Create an API key from the Anthropic API page, then update the `provider` and `api_key` in your config file.
```sh
codegpt config set openai.provider anthropic
codegpt config set openai.api_key xxxxxxx
codegpt config set openai.model claude-3-5-sonnet-20241022
```
See the model list from the Anthropic API documentation.
How to Change to Groq API Service

Get the API key from the Groq API Service (please visit here). Update the `base_url` and `api_key` in your config file.
```sh
codegpt config set openai.provider openai
codegpt config set openai.base_url https://api.groq.com/openai/v1
codegpt config set openai.api_key gsk_xxxxxxxxxxxxxx
codegpt config set openai.model llama3-8b-8192
```
GroqCloud currently supports several models; see the Groq documentation for the current list.
How to Change to Ollama API Service

You can use the Llama3 model from the Ollama API Service (please visit here). Update the `base_url` in your config file.
```sh
# pull llama3 8b model
ollama pull llama3
ollama cp llama3 gpt-4o
```
Try to use the Ollama API Service:
```sh
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```
Update the `base_url` in your config file. You don't need to set the `api_key` in the config file.
```sh
codegpt config set openai.base_url http://localhost:11434/v1
```
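With Ollama serving its OpenAI-compatible API locally, the usual commit workflow described below applies unchanged. A minimal sketch, assuming Ollama runs on the default port and the llama3 model was aliased to gpt-4o as shown above:

```sh
# Generate a commit message against the local Ollama endpoint.
git add <files...>
codegpt commit --preview
```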
How to Change to OpenRouter API Service

You can see the supported models list; model usage can be paid for by users, developers, or both, and may shift in availability. You can also fetch models, prices, and limits via the API.
The following example uses a free model (for example, meta-llama/llama-3-8b-instruct:free):
```sh
codegpt config set openai.provider openai
codegpt config set openai.base_url https://openrouter.ai/api/v1
codegpt config set openai.api_key sk-or-v1-xxxxxxxxxxxxxxxx
codegpt config set openai.model google/learnlm-1.5-pro-experimental:free
```
To include your app in the rankings on openrouter.ai, you can set `openai.headers` in your config file:
```sh
codegpt config set openai.headers "HTTP-Referer=https://github.com/appleboy/CodeGPT X-Title=CodeGPT"
```
There are two methods for generating a commit message using the `codegpt` command: CLI mode and Git Hook.
You can call `codegpt` directly to generate a commit message for your staged changes:
```sh
git add <files...>
codegpt commit --preview
```
The commit message is shown below:
```
Summarize the commit message using the gpt-4o model
We are trying to summarize a git diff
We are trying to summarize a title for the pull request
================Commit Summary====================

feat: Add preview flag and remove disableCommit flag in commit command and template file.

- Add a `preview` flag to the `commit` command
- Remove the `disableCommit` flag from the `prepare-commit-msg` template file

==================================================
Write the commit message to .git/COMMIT_EDITMSG file
```
Or translate all git commit messages into a different language (Traditional Chinese, Simplified Chinese, or Japanese):
```sh
codegpt commit --lang zh-tw --preview
```
Consider the following outcome:
```
Summarize the commit message using the gpt-4o model
We are trying to summarize a git diff
We are trying to summarize a title for the pull request
We are trying to translate a git commit message to Traditional Chinese language
================Commit Summary====================

功能:重構 codegpt commit 命令標記

- 將「codegpt commit」命令新增「預覽」標記
- 從「codegpt commit」命令中移除「--disableCommit」標記

==================================================
Write the commit message to .git/COMMIT_EDITMSG file
```
You can replace the tip of the current branch by creating a new commit. Just use the `--amend` flag: `codegpt commit --amend`
The default commit message template is as follows:
```
{{ .summarize_prefix }}: {{ .summarize_title }}

{{ .summarize_message }}
```
Change the format with a template string using the `--template_string` parameter:
```sh
codegpt commit --preview --template_string \
  "[{{ .summarize_prefix }}]: {{ .summarize_title }}"
```
Change the format with a template file using the `--template_file` parameter:
```sh
codegpt commit --preview --template_file your_file_path
```
Add a custom variable to the git commit message template:
```
{{ .summarize_prefix }}: {{ .summarize_title }}

{{ .summarize_message }}

{{ if .JIRA_URL }}{{ .JIRA_URL }}{{ end }}
```
Add a custom variable to the git commit message template using the `--template_vars` parameter:
```sh
codegpt commit --preview --template_file your_file_path --template_vars \
  JIRA_URL=https://jira.example.com/ABC-123
```
Load a custom variable from a file using the `--template_vars_file` parameter:
```sh
codegpt commit --preview --template_file your_file_path --template_vars_file your_file_path
```
The `template_vars_file` format is as follows:
```
JIRA_URL=https://jira.example.com/ABC-123
```
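Putting the template flags together, here is a minimal end-to-end sketch; the file names, the prefix format, and the JIRA URL are placeholders chosen purely for illustration:

```sh
# Hypothetical template and variables files used only for illustration.
cat > commit_template.tmpl <<'EOF'
[{{ .summarize_prefix }}] {{ .summarize_title }}

{{ .summarize_message }}

{{ if .JIRA_URL }}Ref: {{ .JIRA_URL }}{{ end }}
EOF

cat > template_vars.txt <<'EOF'
JIRA_URL=https://jira.example.com/ABC-123
EOF

# Preview a commit message rendered through the custom template.
codegpt commit --preview --template_file commit_template.tmpl --template_vars_file template_vars.txt
```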
You can also use the prepare-commit-msg hook to integrate `codegpt` with Git. This allows you to use Git normally and edit the commit message before committing.
To install the hook in the Git repository, run `codegpt hook install`. To remove the hook, run `codegpt hook uninstall`.
Stage your files and commit after installation:
```sh
git add <files...>
git commit
```
`codegpt` will generate the commit message for you and pass it back to Git. Git will open it with the configured editor for you to review/edit it. Then, to commit, save and close the editor!
```
$ git commit
Summarize the commit message using the gpt-4o model
We are trying to summarize a git diff
We are trying to summarize a title for the pull request
================Commit Summary====================

Improve user experience and documentation for OpenAI tools

- Add download links for pre-compiled binaries
- Include instructions for setting up OpenAI API key
- Add a CLI mode for generating commit messages
- Provide references for OpenAI Chat completions and ChatGPT/Whisper APIs

==================================================
Write the commit message to .git/COMMIT_EDITMSG file
[main 6a9e879] Improve user experience and documentation for OpenAI tools
 1 file changed, 56 insertions(+)
```
You can use `codegpt` to generate a code review message for your staged changes by running `codegpt review`.
Or translate all code review messages into a different language (Traditional Chinese, Simplified Chinese, or Japanese):
```sh
codegpt review --lang zh-tw
```
See the following result:
```
Code review your changes using gpt-4o model
We are trying to review code changes
PromptTokens: 1021, CompletionTokens: 200, TotalTokens: 1221
We are trying to translate core review to Traditional Chinese language
PromptTokens: 287, CompletionTokens: 199, TotalTokens: 486
================Review Summary====================

總體而言，此程式碼修補似乎在增加 Review 指令的功能，允許指定輸出語言並在必要時進行翻譯。以下是需要考慮的潛在問題：

- 輸出語言沒有進行輸入驗證。如果指定了無效的語言代碼，程式可能會崩潰或產生意外結果。
- 此使用的翻譯 API 未指定，因此不清楚是否存在任何安全漏洞。
- 無法處理翻譯 API 調用的錯誤。如果翻譯服

==================================================
```
Example PHP code review:

```php
<?php
if( isset( $_POST[ 'Submit' ] ) ) {
    // Get input
    $target = $_REQUEST[ 'ip' ];

    // Determine OS and execute the ping command.
    if( stristr( php_uname( 's' ), 'Windows NT' ) ) {
        // Windows
        $cmd = shell_exec( 'ping ' . $target );
    }
    else {
        // *nix
        $cmd = shell_exec( 'ping -c 4 ' . $target );
    }

    // Feedback for the end user
    $html .= "<pre>{$cmd}</pre>";
}
?>
```
Code review result:

```
================Review Summary====================
Code review:

1. Security: The code is vulnerable to command injection attacks as the user input is directly used in the shell_exec() function. An attacker can potentially execute malicious commands on the server by injecting them into the 'ip' parameter.
2. Error handling: There is no error handling in the code. If the ping command fails, the error message is not displayed to the user.
3. Input validation: There is no input validation for the 'ip' parameter. It should be validated to ensure that it is a valid IP address or domain name.
4. Cross-platform issues: The code assumes that the server is either running Windows or *nix operating systems. It may not work correctly on other platforms.

Suggestions for improvement:

1. Use escapeshellarg() function to sanitize the user input before passing it to shell_exec() function to prevent command injection.
2. Implement error handling to display error messages to the user if the ping command fails.
3. Use a regular expression to validate the 'ip' parameter to ensure that it is a valid IP address or domain name.
4. Use a more robust method to determine the operating system, such as the PHP_OS constant, which can detect a wider range of operating systems.
==================================================
```
Run the following command to test the code: