mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems

mTCP is a highly scalable user-level TCP stack for multicore systems. The mTCP source code is distributed under the Modified BSD License; for details, please refer to the LICENSE file. The license terms of the io_engine driver and the ported applications may differ from mTCP's.

We require the following libraries to run mTCP.

Compiling the PSIO/DPDK/NETMAP/ONVM drivers requires the kernel headers.
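As a rough guide, on a Debian/Ubuntu host the usual build prerequisites (kernel headers plus the NUMA and GMP development libraries) can be installed as sketched below; the package names are assumptions, so substitute whatever equivalents your distribution provides.

    # Assumed Debian/Ubuntu package names; adjust for your distribution.
    sudo apt-get install build-essential linux-headers-$(uname -r) \
                         libnuma-dev libgmp-dev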

We have modified the dpdk package to export net_device stat data (for Intel-based Ethernet adapters only) to the OS. To achieve this, we have created a new loadable kernel module, dpdk-iface-kmod. We have also modified the mk/rte.app.mk file to ease the compilation of mTCP applications. We recommend using our package for the DPDK installation.
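For reference, a loadable kernel module of this kind is typically built and loaded roughly as follows. This is only a sketch, and the module file name is an assumption; the DPDK setup script described below normally takes care of this step for you.

    # Hypothetical sketch; the .ko file name is an assumption.
    cd dpdk-iface-kmod
    make                          # build the module (needs kernel headers)
    sudo insmod ./dpdk_iface.ko   # load it so the dpdk interfaces become visible to the OS
    lsmod | grep dpdk_iface       # confirm the module is loaded
    cd ..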

You can optionally use CCP's congestion control implementation rather than mTCP's, which gives you a wider selection of congestion control algorithms. (This feature is currently experimental and under revision.)

Using CCP for congestion control (disabled by default) requires the CCP library. If you would like to enable CCP, simply run the configure script with the --enable-ccp option (see the sketch after the list below).

  1. Install Rust. Any installation method should be fine. We recommend using rustup:

    curl https://sh.rustup.rs -sSf | sh -s -- -y -v --default-toolchain nightly
  2. Install the CCP command line utility:

    cargo install portus --bin ccp
  3. Build the library (Reno and Cubic are included by default; use ccp get to add others):

  4. You will also need to link your application against -lccp and -lstartccp, as demonstrated in apps/example/Makefile.in.
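For instance, once CCP is installed, a CCP-enabled mTCP build can be configured roughly as follows. This is a minimal sketch: it combines the --with-dpdk-lib path used in the DPDK setup steps below with the --enable-ccp flag mentioned above.

    # Sketch: enable CCP on top of a DPDK build.
    ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --enable-ccp
    make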

mtcp: mtcp source code directory

io_engine: event-driven packet I/O engine (io_engine)

dpdk: Intel's Data Plane Development Kit

apps: mTCP applications

util: useful source code for applications

config: sample mTCP configuration files (may not be necessary)

mTCP can be prepared in four ways: with DPDK, PSIO, ONVM, or netmap.

DPDK VERSION

  1. Download DPDK submodule.

    git submodule init
    git submodule update
  2. Set up DPDK.

    ./setup_mtcp_dpdk_env.sh [<path to $RTE_SDK>]
  3. Bring the dpdk-compatible interfaces up, and then set the RTE_SDK and RTE_TARGET environment variables. If you are using Intel NICs, the interfaces will have the dpdk prefix.

    sudo ifconfig dpdk0 x.x.x.x netmask 255.255.255.0 up
    export RTE_SDK=`echo $PWD`/dpdk
    export RTE_TARGET=x86_64-native-linuxapp-gcc
  4. Set up the mtcp library:

    ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET
    make

    Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding); see the queue check sketch at the end of this list.

  5. Check the configurations in apps/example

  6. Run the applications!

  7. You can revert all your changes by running the following script.

    ./setup_linux_env.sh [<path to $RTE_SDK>]
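As a quick sanity check for the RSS note in step 4, you can inspect the NIC's queue (channel) counts with ethtool before the interface is bound to DPDK; the interface name and queue count below are examples only.

    # Example only: run against the NIC's kernel interface before binding it.
    ethtool -l eth1                    # show maximum and current combined queues
    sudo ethtool -L eth1 combined 8    # e.g. expose 8 queues for 8 mTCP cores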
PSIO VERSION

  1. Run make in io_engine/driver:

  2. Install the driver:

    ./install.py <# cores> <# cores>
  3. Set up the mtcp library:

    ./configure --with-psio-lib=<$path_to_ioengine>
    # e.g. ./configure --with-psio-lib=`echo $PWD`/io_engine
    make

    Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).

  4. Check the configurations in apps/example

  5. Run the applications!
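For reference, launching the bundled example server typically looks something like the sketch below; the flags are assumptions based on apps/example, so check that directory for the exact usage.

    cd apps/example
    # Assumed flags: -p web root directory, -f mTCP config file, -N core count.
    ./epserver -p www_home -f epserver.conf -N 8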

ONVM VERSION

NEW: You can now run mTCP applications (server + client) locally. A local setup is useful when only one machine is available for the experiment. ONVM configurations are placed as .conf files in the apps/example directory. ONVM basics are explained at https://github.com/sdnfv/openNetVM.

Before running the applications, make sure that onvm_mgr is running. Also, no core overlap between the applications and onvm_mgr is allowed.

  1. Install openNetVM by following the installation instructions in the openNetVM repository linked above.

  2. Set up the dpdk interfaces:

  3. Next, bring the dpdk-registered interfaces up. This can be done using:

    sudo ifconfig dpdk0 x.x.x.x netmask 255.255.255.0 up
  4. Set up the mtcp library:

    ./configure --with-dpdk-lib=$<path_to_dpdk> --with-onvm-lib=$<path_to_onvm_lib>
    # e.g. ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --with-onvm-lib=`echo $ONVM_HOME`/onvm
    make

    Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).

  5. Check the configurations in apps/example

  6. Run the applications!

  7. You can revert all your changes by running the setup_linux_env.sh script shown in the DPDK section above.

Notes

Once you have started onvm_mgr, an mTCP application may sometimes fail to launch due to a memory-mapping error.

To prevent this, pass the base virtual address parameter when running the ONVM manager (the core-list argument 0xf8 is not actually used by mTCP NFs but is required), e.g.:

    cd openNetVM/onvm
    ./go.sh 1,2,3 1 0xf8 -s stdout -a 0x7f000000000

NETMAP VERSION

See README.netmap for details.

Tested environments

mTCP runs on Linux-based operating systems (2.6.x for PSIO) with generic x86_64 CPUs, but to help with evaluation, we list our tested environments below.

Intel Xeon E5-2690 octa-core CPU @ 2.90 GHz, 32 GB of RAM (4 memory channels)
10 GbE NIC with Intel 82599 chipset (specifically Intel X520-DA2)
Debian 6.0.7 (Linux 2.6.32-5-amd64)

Intel Core i7-3770 quad-core CPU @ 3.40 GHz, 16 GB of RAM (2 memory channels)
10 GbE NIC with Intel 82599 chipset (specifically Intel X520-DA2)
Ubuntu 10.04 (Linux 2.6.32-47)

Event-driven PacketShader I/O engine (extended io_engine-0.2)

We tested the DPDK version (polling driver) with the Linux 3.13.0 kernel.

Notes

  1. mTCP currently runs with fixed memory pools. That means the sizes of the TCP receive and send buffers are fixed at startup and do not grow dynamically. This can limit performance for large, long-lived connections, so be sure to configure the buffer sizes appropriately for your workload (see the configuration sketch after this list).

  2. The client side of mTCP supports mtcp_init_rss() to create an address pool that can be used to fetch available address space in O(1). To easily congest the server side, this function should be called at application startup.

  3. The supported socket options are currently limited. Please refer to mtcp/src/api.c for details.

  4. The peer communicating with mTCP should have TCP timestamps enabled.

  5. mTCP has been tested with the following Ethernet adapters:

    1. Intel-82598 ixgbe (Max-queue-limit: 16)
    2. Intel-82599 ixgbe (Max-queue-limit: 16)
    3. Intel-I350 igb (Max-queue-limit: 08)
    4. Intel-X710 i40e (Max-queue-limit: ~)
    5. Intel-X722 i40e (Max-queue-limit: ~)
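As a concrete illustration of note 1 above, the per-socket buffer sizes are set in the application's mTCP configuration file. The key names below follow the sample configurations shipped in apps/example and config/, but treat them as assumptions and verify against those files.

    # Excerpt from an mTCP application configuration (key names assumed).
    # receive buffer size of sockets (bytes)
    rcvbuf = 8192
    # send buffer size of sockets (bytes)
    sndbuf = 8192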
Frequently asked questions
  1. How can I quit the application?

  2. My application doesn't use the address specified with ifconfig.

    sudo service network-manager stop
  3. Can I statically set the routing or arp table?

Caution

  1. Do not remove the I/O driver (ps_ixgbe/igb_uio) while running mTCP applications. The application will panic!

  2. Use the ps_ixgbe/dpdk driver contained in this package, not one from elsewhere (e.g., the upstream io_engine GitHub repository).
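Before launching an application, you can quickly confirm that the bundled driver is loaded:

    lsmod | grep -e ps_ixgbe -e igb_uio   # the module for your setup should be listed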

The GitHub issue board is the preferred way to report bugs and ask questions about mTCP.

CONTACTS FOR THE AUTHORS

User mailing list <mtcp-user at list.ndsl.kaist.edu>
EunYoung Jeong <notav at ndsl.kaist.edu>
M. Asim Jamshed <ajamshed at ndsl.kaist.edu>
