bump v0.4.1 (#2582) · sgl-project/sglang@efc52f8

5 files changed: +11, -11 lines.

Dockerfile.rocm

@@ -1,5 +1,5 @@
 # Usage (to build SGLang ROCm docker image):
-# docker build --build-arg SGL_BRANCH=v0.4.0.post2 -t v0.4.0.post2-rocm620 -f Dockerfile.rocm .
+# docker build --build-arg SGL_BRANCH=v0.4.1 -t v0.4.1-rocm620 -f Dockerfile.rocm .
 
 # default base image
 ARG BASE_IMAGE="rocm/vllm-dev:20241022"
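For reference, a minimal sketch of rebuilding the ROCm image with the bumped tag, assuming Docker is available and the command is run from the directory containing `Dockerfile.rocm` (the tag `v0.4.1-rocm620` follows the usage comment above):

```bash
# Build the SGLang ROCm image against the v0.4.1 release branch
docker build --build-arg SGL_BRANCH=v0.4.1 -t v0.4.1-rocm620 -f Dockerfile.rocm .

# Confirm the new tag is present locally
docker images | grep rocm620
```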

CI runner setup docs

@@ -11,9 +11,9 @@ docker pull nvidia/cuda:12.1.1-devel-ubuntu22.04
 # Nvidia
 docker run --shm-size 128g -it -v /tmp/huggingface:/hf_home --gpus all nvidia/cuda:12.1.1-devel-ubuntu22.04 /bin/bash
 # AMD
-docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.0.post2-rocm620 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.1-rocm620 /bin/bash
 # AMD just the last 2 GPUs
-docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.0.post2-rocm620 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.1-rocm620 /bin/bash
 ```
 
 ### Step 2: Configure the runner by `config.sh`
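A quick way to confirm a runner host actually picks up the new release is to pull the bumped image and print the packaged SGLang version. This is a hedged sketch: it assumes the `lmsysorg/sglang:v0.4.1-rocm620` image ships `python3` with `sglang` importable on the default path.

```bash
# Pull the bumped AMD image and report the SGLang version inside it
docker pull lmsysorg/sglang:v0.4.1-rocm620
docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video \
  lmsysorg/sglang:v0.4.1-rocm620 \
  python3 -c "import sglang; print(sglang.__version__)"   # expected output: 0.4.1
```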

Installation docs

@@ -13,7 +13,7 @@ Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/
 ## Method 2: From source
 ```
 # Use the last release branch
-git clone -b v0.4.0.post2 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.1 https://github.com/sgl-project/sglang.git
 cd sglang
 
 pip install --upgrade pip
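After the updated clone command, a small sanity check helps confirm the working tree really corresponds to the bumped release; this sketch assumes the release is tagged `v0.4.1` in the repository so `git describe` can resolve it.

```bash
# Verify the checkout matches the bumped release
cd sglang
git describe --tags   # expected to report v0.4.1
```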

@@ -26,7 +26,7 @@ Note: To AMD ROCm system with Instinct/MI GPUs, do following instead:
 
 ```
 # Use the last release branch
-git clone -b v0.4.0.post2 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.1 https://github.com/sgl-project/sglang.git
 cd sglang
 
 pip install --upgrade pip
@@ -51,7 +51,7 @@ docker run --gpus all \
 Note: To AMD ROCm system with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images, example and usage as below:
 
 ```bash
-docker build --build-arg SGL_BRANCH=v0.4.0.post2 -t v0.4.0.post2-rocm620 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.4.1 -t v0.4.1-rocm620 -f Dockerfile.rocm .
 
 alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
 --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
@@ -60,11 +60,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/d
 drun -p 30000:30000 \
 -v ~/.cache/huggingface:/root/.cache/huggingface \
 --env "HF_TOKEN=<secret>" \
-v0.4.0.post2-rocm620 \
+v0.4.1-rocm620 \
 python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
 
 # Till flashinfer backend available, --attention-backend triton --sampling-backend pytorch are set by default
-drun v0.4.0.post2-rocm620 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.4.1-rocm620 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
 ```
 
 ## Method 4: Using docker compose
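Once the container started by the `drun ... sglang.launch_server` command above is serving on port 30000, a smoke test from the host could look like the sketch below. This is a hedged example: `/get_model_info` and `/generate` are the server's native endpoints, but the exact payload shown here is an assumption rather than something stated in this diff.

```bash
# Basic smoke test against the server running inside the v0.4.1-rocm620 container
curl http://localhost:30000/get_model_info

curl -X POST http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "The capital of France is", "sampling_params": {"max_new_tokens": 16, "temperature": 0}}'
```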

pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "sglang"
-version = "0.4.0.post2"
+version = "0.4.1"
 description = "SGLang is yet another fast serving framework for large language models and vision language models."
 readme = "README.md"
 requires-python = ">=3.8"
@@ -23,7 +23,7 @@ runtime_common = ["aiohttp", "decord", "fastapi",
 "psutil", "pydantic", "python-multipart",
 "pyzmq>=25.1.2", "torchao>=0.7.0", "uvicorn", "uvloop",
 "xgrammar>=0.1.6"]
-srt = ["sglang[runtime_common]", "torch", "vllm>=0.6.3.post1,<=0.6.4.post1", "cuda-python", "flashinfer==0.1.6", "sgl-kernel"]
+srt = ["sglang[runtime_common]", "torch", "vllm>=0.6.3.post1,<=0.6.4.post1", "cuda-python", "flashinfer==0.1.6", "sgl-kernel>=0.0.2.post8"]
 
 # HIP (Heterogeneous-computing Interface for Portability) for AMD
 # => base docker rocm/vllm-dev:20241022, not from public vllm whl
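The only dependency change here is the new lower bound on `sgl-kernel` in the `srt` extra. A minimal check in a local checkout (run from the directory that contains this `pyproject.toml`):

```bash
# The srt extra should now require sgl-kernel>=0.0.2.post8 alongside flashinfer==0.1.6
grep -n "sgl-kernel" pyproject.toml
```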

sglang package version file

@@ -1 +1 @@
-__version__ = "0.4.0.post2"
+__version__ = "0.4.1"
