This repository lists key projects and related demos about CodeFuse.
CodeFuse aims to develop Code Large Language Models (Code LLMs) to support and enhance full-lifecycle, AI-native software development, covering crucial stages such as design and requirements, coding, testing, building, deployment, operations, and insight analysis. The overall framework of CodeFuse is shown below.
- **2024.08** codefuse-ide: released OpenSumi & CodeBlitz for the code IDE; CGE: released the D2Coder-v1 embedding model for code search (a rough embedding-based code-search sketch follows this list).
- **2024.07** D2LLM: released the D2Coder-v1 embedding model for code search; RepoFuse: repository-level code completion with language models using fused dual context.
- **2024.06** codefuse-ai pages released; D2LLM released the Decomposed and Distilled Large Language Models feature for semantic search; MFTCoder released v0.4.2. For more detail, see Release & Next Release.
- **2024.05** ModelCache released v0.2.1 with multimodal support, see CodeFuse-ModelCache; DevOps-Model now supports function calls, see CodeFuse-DevOps-Model. For more detail, see Release & Next Release.
- **2024.04** CodeFuse-muAgent: a multi-agent framework. For more detail, see Release & Next Release.
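The embedding releases above (D2Coder-v1 / CGE) target retrieval-style code search. As a rough illustration of that idea only, and not the actual D2LLM or CGE API, the sketch below embeds a natural-language query and a few code snippets with a generic encoder and ranks the snippets by cosine similarity; the placeholder model `sentence-transformers/all-MiniLM-L6-v2` stands in for a real code-embedding model such as D2Coder-v1.

```python
# Hypothetical sketch of embedding-based code search (not the D2Coder/CGE API).
# Assumes sentence-transformers is installed; the model name is a placeholder.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder encoder

code_snippets = [
    "def read_json(path):\n    import json\n    return json.load(open(path))",
    "def quicksort(xs):\n    return xs if len(xs) < 2 else sorted(xs)",
]
query = "load a json file from disk"

# Encode the candidates and the query, then rank by cosine similarity.
snippet_emb = model.encode(code_snippets, convert_to_tensor=True, normalize_embeddings=True)
query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)
scores = util.cos_sim(query_emb, snippet_emb)[0]

best = int(scores.argmax())
print(f"best match (score={scores[best].item():.3f}):\n{code_snippets[best]}")
```

In a real deployment the snippet embeddings would be precomputed and stored in a vector index rather than re-encoded per query.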
**List of CodeFuse Repositories**

We list the repositories according to the lifecycle stages above.
| Lifecycle Stage | Project Repository | Repo Description | Road Map |
|---|---|---|---|
| Requirement & Design | MFT-VLM | Instruction fine-tuning for vision-language tasks | |
| Coding | MFTCoder | Instruction-tuning framework | |
| Coding | FasterTransformer4CodeFuse | FasterTransformer-based inference engine | |
| Coding | CodeFuse-Eval | Evaluation kits for CodeFuse | |
| Test & Build | TestAgent | TestGPT demo frontend | |
| DevOps | DevOps-Eval | Benchmark for DevOps | |
| DevOps | DevOps-Model | Index for DevOps models | |
| Data Insight | NA | NA | |
| Base | ChatBot | General chatbot frontend for CodeFuse | |
| Base | muAgent | Multi-agent framework | |
| Base | ModelCache | Semantic cache for LLM serving | |
| Base | CodeFuse-Query | Query-based code analysis engine | |
| Others | CoCA | Collinear Attention | |
| Others | Awesome-Code-LLM | Code LLM survey | |
| Others | This Repo | General introduction & index of CodeFuse repos | |

**List of CodeFuse Primary Released Models**

| Model Name | Short Description | Model Links |
|---|---|---|
| CodeFuse-13B | Trained from scratch by CodeFuse | HF ; MS |
| CodeFuse-CodeLLaMA-34B | Fine-tuned on CodeLLaMA-34B | HF ; MS |
| ** CodeFuse-CodeLLaMA-34B-4bits | 4-bit quantized 34B model | HF ; MS |
| CodeFuse-DeepSeek-33B | Fine-tuned on DeepSeek-Coder-33B | HF ; MS |
| ** CodeFuse-DeepSeek-33B-4bits | 4-bit quantized 33B model | HF ; MS |
| CodeFuse-VLM-14B | SoTA vision-language model | HF ; MS |

** -- recommended models
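As a minimal usage sketch rather than an official quick start, the snippet below shows how one of the released models might be loaded with Hugging Face transformers. The repo id `codefuse-ai/CodeFuse-DeepSeek-33B`, the dtype, and the plain-text prompt are assumptions; consult the HF / ModelScope model cards linked above for the exact prompt template and recommended inference settings.

```python
# Minimal sketch: loading a released CodeFuse model with Hugging Face transformers.
# The model id and generation settings are assumptions; see the model card for the
# exact prompt template (HF / ModelScope links in the table above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codefuse-ai/CodeFuse-DeepSeek-33B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # the 4-bit variants target smaller GPUs
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```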
Video demos: a Chinese version is available below; an English version is in preparation.

demo_video.mp4

Online demo: you can try our CodeFuse-CodeLlama-34B model on ModelScope: CodeFuse-CodeLlama34B-MFT-Demo
For more technical details about CodeFuse, please refer to our paper MFTCoder.

If you find our work useful or helpful for your R&D, please feel free to cite our paper as follows.
@article{mftcoder2023,
  title={MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning},
  author={Bingchang Liu and Chaoyu Chen and Cong Liao and Zi Gong and Huan Wang and Zhichao Lei and Ming Liang and Dajun Chen and Min Shen and Hailian Zhou and Hang Yu and Jianguo Li},
  year={2023},
  journal={arXiv preprint arXiv:2311.02303},
  archivePrefix={arXiv},
  eprint={2311.02303}
}