tiny-dnn/tiny-dnn: header only, dependency-free deep learning framework in C++14

The project may be abandoned, since the maintainer(s) are looking to move on. If anyone is interested in continuing the project, let us know so that we can discuss next steps.

tiny-dnn is a C++14 implementation of deep learning. It is suitable for deep learning on platforms with limited computational resources, embedded systems, and IoT devices.

Check out the documentation for more info.

Comparison with other libraries

Please see the wiki page.

Dependencies: none. All you need is a C++14 compiler (gcc 4.9+, clang 3.6+, or VS 2015+).

tiny-dnn is header-only, so there's nothing to build. If you want to build the sample programs or unit tests, you need to install cmake and type the following commands:

cmake . -DBUILD_EXAMPLES=ON
make

Then change to the examples directory and run the executables.

If you would like to use an IDE such as Visual Studio or Xcode, you can also use cmake to generate the corresponding project files:

cmake . -G "Xcode"            # for Xcode users
cmake . -G "NMake Makefiles"  # for Windows Visual Studio users

Then open the .sln file in Visual Studio and build (on Windows/MSVC), or type the make command (on Linux/Mac/Windows-MinGW).

Some cmake options are available:

| options | description | default | additional requirements to use |
|---------|-------------|---------|--------------------------------|
| USE_TBB | Use Intel TBB for parallelization | OFF¹ | Intel TBB |
| USE_OMP | Use OpenMP for parallelization | OFF¹ | OpenMP-capable compiler |
| USE_SSE | Use Intel SSE instruction set | ON | Intel CPU which supports SSE |
| USE_AVX | Use Intel AVX instruction set | ON | Intel CPU which supports AVX |
| USE_AVX2 | Build tiny-dnn with AVX2 library support | OFF | Intel CPU which supports AVX2 |
| USE_NNPACK | Use NNPACK for convolution operation | OFF | Acceleration package for neural networks on multi-core CPUs |
| USE_OPENCL | Enable/disable OpenCL support (experimental) | OFF | The open standard for parallel programming of heterogeneous systems |
| USE_LIBDNN | Use Greentea LibDNN for convolution operation with GPU via OpenCL (experimental) | OFF | A universal convolution implementation supporting CUDA and OpenCL |
| USE_SERIALIZER | Enable model serialization | ON² | - |
| USE_DOUBLE | Use double precision computations instead of single precision | OFF | - |
| USE_ASAN | Use Address Sanitizer | OFF | clang or gcc compiler |
| USE_IMAGE_API | Enable Image API support | ON | - |
| USE_GEMMLOWP | Enable gemmlowp support | OFF | - |
| BUILD_TESTS | Build unit tests | OFF³ | - |
| BUILD_EXAMPLES | Build example projects | OFF | - |
| BUILD_DOCS | Build documentation | OFF | Doxygen |
| PROFILE | Build with profiling support | OFF | gprof |

¹ tiny-dnn uses the C++14 standard library for parallelization by default.

² If you don't use serialization, you can switch it off to speed up compilation.

³ tiny-dnn uses Google Test as the default framework for running unit tests. No pre-installation is required; it is downloaded automatically during CMake configuration.

For example, type the following command if you want to use Intel TBB and build the tests:

cmake -DUSE_TBB=ON -DBUILD_TESTS=ON .

You can edit include/config.h to customize default behavior.
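
For example, the numeric type used throughout the library, tiny_dnn::float_t (the element type behind vec_t), follows this configuration: it is single precision by default and double precision when USE_DOUBLE is enabled. A minimal sketch to check what your build picked up (the program below is only illustrative, not part of the library):

#include "tiny_dnn/tiny_dnn.h"
#include <iostream>

// Print the floating-point element size chosen by the build configuration.
// With default settings tiny_dnn::float_t is single precision (4 bytes);
// building with -DUSE_DOUBLE=ON switches it to double precision (8 bytes).
int main() {
    std::cout << "sizeof(tiny_dnn::float_t) = "
              << sizeof(tiny_dnn::float_t) << " bytes" << std::endl;
    return 0;
}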

Construct convolutional neural networks

#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;

void construct_cnn() {
    using namespace tiny_dnn;

    network<sequential> net;

    // add layers
    net << conv(32, 32, 5, 1, 6) << tanh()  // in:32x32x1, 5x5conv, 6fmaps
        << ave_pool(28, 28, 6, 2) << tanh() // in:28x28x6, 2x2pooling
        << fc(14 * 14 * 6, 120) << tanh()   // in:14x14x6, out:120
        << fc(120, 10);                     // in:120,     out:10

    assert(net.in_data_size() == 32 * 32);
    assert(net.out_data_size() == 10);

    // load MNIST dataset
    std::vector<label_t> train_labels;
    std::vector<vec_t> train_images;

    parse_mnist_labels("train-labels.idx1-ubyte", &train_labels);
    parse_mnist_images("train-images.idx3-ubyte", &train_images, -1.0, 1.0, 2, 2);

    // declare optimization algorithm
    adagrad optimizer;

    // train (50-epoch, 30-minibatch)
    net.train<mse, adagrad>(optimizer, train_images, train_labels, 30, 50);

    // save
    net.save("net");

    // load
    // network<sequential> net2;
    // net2.load("net");
}
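
Once trained and saved, the model can be reloaded and used for inference. A minimal sketch (the classify_one helper is hypothetical; it assumes the "net" file written by construct_cnn() above and a 32x32 input vector):

#include "tiny_dnn/tiny_dnn.h"
#include <algorithm>
#include <iostream>
using namespace tiny_dnn;

// Reload the model saved above and classify a single 32x32 input.
void classify_one() {
    network<sequential> net;
    net.load("net");                 // model written by net.save("net")

    vec_t in(32 * 32, 0.0);          // placeholder input; fill with real pixel values
    vec_t out = net.predict(in);     // one activation per output unit (10 here)

    auto best = std::max_element(out.begin(), out.end());
    std::cout << "predicted class: "
              << std::distance(out.begin(), best) << std::endl;
}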

Construct multi-layer perceptron (mlp)

#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;

void construct_mlp() {
    network<sequential> net;

    net << fc(32 * 32, 300) << sigmoid() << fc(300, 10);

    assert(net.in_data_size() == 32 * 32);
    assert(net.out_data_size() == 10);
}
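
The MLP can be trained the same way as the CNN above. A short sketch (the train_mlp helper, optimizer choice, and hyper-parameters are only illustrative), assuming inputs are already flattened into 32*32-element vectors with class labels in [0, 10):

#include "tiny_dnn/tiny_dnn.h"
#include <vector>
using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;

// Train the MLP defined above; inputs are flattened 32x32 vectors,
// labels are class indices in [0, 10).
void train_mlp(const std::vector<vec_t>& images,
               const std::vector<label_t>& labels) {
    network<sequential> net;
    net << fc(32 * 32, 300) << sigmoid() << fc(300, 10);

    gradient_descent optimizer;   // plain SGD; adagrad etc. also work
    net.train<mse>(optimizer, images, labels,
                   16,            // minibatch size
                   10);           // epochs
}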

Another way to construct mlp

#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;
using namespace tiny_dnn::activation;

void construct_mlp() {
    auto mynet = make_mlp<tanh>({ 32 * 32, 300, 10 });

    assert(mynet.in_data_size() == 32 * 32);
    assert(mynet.out_data_size() == 10);
}
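
The object returned by make_mlp is an ordinary network, so the usual predict/train/save calls apply. A minimal sketch (the run_mlp_once helper and the all-zero input are only illustrative) that pushes one sample through it:

#include "tiny_dnn/tiny_dnn.h"
#include <cassert>
using namespace tiny_dnn;
using namespace tiny_dnn::activation;

// Run a single dummy input through the generated MLP.
void run_mlp_once() {
    auto mynet = make_mlp<tanh>({ 32 * 32, 300, 10 });

    vec_t in(32 * 32, 0.0);          // all-zero dummy input
    vec_t out = mynet.predict(in);   // 10 output activations
    assert(out.size() == 10);
}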

For more samples, read examples/main.cpp or the MNIST example page.

Since the deep learning community is growing rapidly, we'd love to get contributions from you to accelerate tiny-dnn development! For a quick guide to contributing, take a look at the Contribution Documents.


The BSD 3-Clause License

We have Gitter rooms for discussing new features and Q&A. Feel free to join us!

