XFluids is a parallelized SYstem-wide Compute Language (SYCL) C++ solver for large-scale high-resolution simulations of compressible multi-component reacting flows. It is developed by Prof. Shucheng Pan's group at the School of Aeronautics, Northwestern Polytechnical University.
main developers:
other contributors:
If you use XFluids for academic applications, please cite our paper:
Jinlong Li, Shucheng Pan (2024). XFluids: A unified cross-architecture heterogeneous reacting flows simulation solver and its applications for multi-component shock-bubble interactions. arXiv:2403.05910. (https://arxiv.org/abs/2403.05910)
The following GPUs have been tested:
```bash
sudo apt install software-properties-common -y
sudo apt-add-repository ppa:cantera-team/cantera -y
sudo apt install libcantera-dev libcantera3.1 -y
```
export CANTERA_ROOT=/path/to/cantera
cd ./external && git clone --recurse-submodules https://github.com/Cantera/cantera
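If needed, the Cantera installation can be sanity-checked with a short standalone program before configuring XFluids. The snippet below is a hypothetical check, not part of the XFluids sources; it only assumes a Cantera 3.x C++ installation (linked against libcantera) and the gri30.yaml mechanism that ships with Cantera:

```cpp
// Hypothetical standalone check of the Cantera installation (not part of XFluids).
#include "cantera/core.h"
#include <iostream>

int main() {
    // Load a mechanism distributed with Cantera and query the gas phase.
    auto sol = Cantera::newSolution("gri30.yaml");
    auto gas = sol->thermo();
    gas->setState_TPX(1200.0, Cantera::OneAtm, "CH4:1, O2:2, N2:7.52");
    std::cout << "Cantera OK: " << gas->nSpecies() << " species, T = "
              << gas->temperature() << " K" << std::endl;
    return 0;
}
```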
NOTE: The AdaptiveCpp SYCL implementation is strongly recommended for XFluids; support for Intel oneAPI will be deprecated.
```bash
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
# replace "<llvm-version>" with 14/16/18
sudo ./llvm.sh <llvm-version> all
# NOTE: libclang-<llvm-version>-dev and libomp-<llvm-version>-dev are needed
export COMPILER_PATH=/usr/lib/llvm-<llvm-version>
```
export COMPILER_PATH=/path/to/cuda-toolkit # for cuda SSMP AdaptiveCpp compilation
export COMPILER_PATH=/path/to/rocm-release # for hip SSMP AdaptiveCpp compilation
cmake -DACPP_PATH=/path/to/AdaptiveCpp ..
export ACPP_PATH=/path/to/AdaptiveCpp && \
cmake ..
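With COMPILER_PATH and ACPP_PATH set, the AdaptiveCpp toolchain can be sanity-checked with a minimal SYCL program before building XFluids. This is a standalone sketch (not part of the XFluids sources), compiled for example with `acpp test.cpp -o test`:

```cpp
// Minimal SYCL sanity check (standalone sketch, not part of XFluids):
// fill a buffer on the default device and verify the result on the host.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << std::endl;

    std::vector<int> data(1024, 0);
    {
        sycl::buffer<int> buf(data.data(), sycl::range<1>(data.size()));
        q.submit([&](sycl::handler &h) {
            sycl::accessor acc(buf, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(data.size()),
                           [=](sycl::id<1> i) { acc[i] = 2 * static_cast<int>(i[0]); });
        });
    } // buffer destruction copies the data back to the host

    std::cout << (data[7] == 14 ? "SYCL toolchain OK" : "unexpected result") << std::endl;
    return 0;
}
```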
2.1.1. AdaptiveCpp device discovery: exec "acpp-info" in cmd for device counting

```
$ acpp-info
=================Backend information===================
Loaded backend 0(platform_id): OpenMP
  Found device: hipSYCL OpenMP host device
Loaded backend 1(platform_id): CUDA
  Found device: NVIDIA GeForce RTX 3070
  Found device: NVIDIA GeForce RTX 3070
=================Device information===================
***************** Devices for backend OpenMP *****************
Device 0(device_id)
***************** Devices for backend CUDA *****************
Device 0(device_id)
***************** Devices for backend CUDA *****************
Device 1(device_id)
```

2.1.2. Intel oneAPI device discovery: exec "sycl-ls" in cmd for device counting
```
$ sycl-ls
[opencl:acc(platform_id:0):0(device_id)] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device 1.2
[opencl:cpu(platform_id:1):1(device_id)] Intel(R) OpenCL, AMD Ryzen 7 5800X 8-Core Processor 3.0
[ext_oneapi_cuda:gpu(platform_id:2):0(device_id)] NVIDIA CUDA BACKEND, NVIDIA T600 0.0 [CUDA 11.5]
[ext_oneapi_cuda:gpu(platform_id:2):1(device_id)] NVIDIA CUDA BACKEND, NVIDIA T600 0.0 [CUDA 11.5]
```

2.2. Queue construction: set integer platform_id and device_id ("DeviceSelect" in json file or option: -dev)
NOTE: platform_id and device_id are revealed in [2.1-Device-discovery]("2.1. Device discovery")
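As a reference for choosing those integers, the sketch below (assuming a SYCL 2020 implementation such as AdaptiveCpp; it is not part of the XFluids sources) prints every platform_id/device_id pair and then constructs a queue exactly like the one-liner that follows:

```cpp
// Standalone sketch: list platform_id/device_id pairs and build a queue from them.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    auto platforms = sycl::platform::get_platforms();
    for (size_t p = 0; p < platforms.size(); ++p) {
        std::cout << "platform_id " << p << ": "
                  << platforms[p].get_info<sycl::info::platform::name>() << std::endl;
        auto devices = platforms[p].get_devices();
        for (size_t d = 0; d < devices.size(); ++d)
            std::cout << "  device_id " << d << ": "
                      << devices[d].get_info<sycl::info::device::name>() << std::endl;
    }

    // Pick indices matching the output above (or "DeviceSelect"/-dev for XFluids).
    size_t platform_id = 0, device_id = 0;
    sycl::queue q(sycl::platform::get_platforms()[platform_id].get_devices()[device_id]);
    std::cout << "queue device: "
              << q.get_device().get_info<sycl::info::device::name>() << std::endl;
    return 0;
}
```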
```cpp
auto device = sycl::platform::get_platforms()[platform_id].get_devices()[device_id];
sycl::queue q(device);
```

3. Compile and usage of this project

3.1. Read root <XFluids/CMakeLists.txt>
- CMAKE_BUILD_TYPE is set to "Release" by default; SYCL code targets the host while ${CMAKE_BUILD_TYPE}==Debug
- INIT_SAMPLE sets the problem being tested; the paths to "species_list.dat" and "reaction_list.dat" should be given to MIXTURE_MODEL
- VENDOR_SUBMIT allows submitting parallelism-tuned CUDA/HIP kernel models to the vendor's GPUs; only supported by the AdaptiveCpp compile environment

3.2. Build with cmake
```bash
cd ./XFluids
mkdir build && cd ./build && cmake .. && make -j
```
XFluids automatically reads the <XFluids/settings/*.json> file depending on the INIT_SAMPLE setting.
Append options to XFluids on the command line for other settings; all options are optional and are listed in [6. executable file options]("6. Executable file options")
```bash
./XFluids -dev=1,1,0
mpirun -n mx*my*mz ./XFluids -mpi=mx,my,mz -dev=1,0,0
```
```bash
cd ./XFluids/scripts/KS-DCU
sbatch ./1node.slurm
sbatch ./2node.slurm
```
NOTE: MPI functionality is not supported by the Intel oneAPI SYCL implementation.
4.1. Set MPI_PATH browsed by cmake before build

The cmake system of this project automatically browses for libmpi.so in ${MPI_PATH}/lib; please export MPI_PATH to the MPI installation you want:
export MPI_PATH=/home/ompi
```
-- MPI settings:
-- MPI_HOME: /home/ompi
-- MPI_INC: /home/ompi/include added
-- MPI_CXX lib located: /home/ompi/lib/libmpi.so found
```

5. .json configure file arguments
NOTE: Output data format is controlled by the value of "OutDAT", "OutVTI" in .json file
Use paraview to open *.pvti files for MPI visualization (1D visualization is not allowed; using paraview for 2D/3D visualization is recommended).

BibTeX entry:

```bibtex
@misc{li2024xfluids,
title={XFluids: A unified cross-architecture heterogeneous reacting flows simulation solver and its applications for multi-component shock-bubble interactions},
author={Jinlong Li and Shucheng Pan},
year={2024},
eprint={2403.05910},
archivePrefix={arXiv}
}
```
XFluids has received financial support from the following funding sources: