tiny-dnn is a C++11 implementation of deep learning. It is suitable for deep learning on limited computational resources, embedded systems, and IoT devices.
Check out the documentation for more info.
*An unofficial version is also available.
Nothing. All you need is a C++11 compiler.
tiny-dnn is header-only, so there's nothing to build. If you want to run the sample programs or unit tests, you need to install cmake and type the following commands:
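A typical configure step looks like the following (the BUILD_EXAMPLES flag name is an assumption; check the option list below for the flags your checkout actually provides):

cmake . -DBUILD_EXAMPLES=ON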
Then open the generated .sln file in Visual Studio and build (on Windows/MSVC), or type the make command (on Linux/Mac/Windows-MinGW).
Some cmake options are available:
1. tiny-dnn uses the C++11 standard library for parallelization by default.
2. If you don't use serialization, you can switch it off to speed up compilation.
3. tiny-dnn uses Google Test as the default framework for unit tests. No pre-installation is required; it is downloaded automatically during CMake configuration.
For example, type the following command if you want to use Intel TBB and build the tests:
cmake -DUSE_TBB=ON -DBUILD_TESTS=ON .
You can edit include/config.h to customize default behavior.
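As a sketch of that customization (the macro name below is an assumption; check include/config.h for the switches it actually defines), a compile-time option can also be set before the first include:

// Assumed macro: CNN_USE_DOUBLE switches tiny-dnn's float_t from float to double.
// Verify the exact name in include/config.h before relying on it.
#define CNN_USE_DOUBLE
#include "tiny_dnn/tiny_dnn.h"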
construct convolutional neural networks
#include "tiny_dnn/tiny_dnn.h" using namespace tiny_dnn; using namespace tiny_dnn::activation; using namespace tiny_dnn::layers; void construct_cnn() { using namespace tiny_dnn; network<sequential> net; // add layers net << conv<tan_h>(32, 32, 5, 1, 6) // in:32x32x1, 5x5conv, 6fmaps << ave_pool<tan_h>(28, 28, 6, 2) // in:28x28x6, 2x2pooling << fc<tan_h>(14 * 14 * 6, 120) // in:14x14x6, out:120 << fc<identity>(120, 10); // in:120, out:10 assert(net.in_data_size() == 32 * 32); assert(net.out_data_size() == 10); // load MNIST dataset std::vector<label_t> train_labels; std::vector<vec_t> train_images; parse_mnist_labels("train-labels.idx1-ubyte", &train_labels); parse_mnist_images("train-images.idx3-ubyte", &train_images, -1.0, 1.0, 2, 2); // declare optimization algorithm adagrad optimizer; // train (50-epoch, 30-minibatch) net.train<mse>(optimizer, train_images, train_labels, 30, 50); // save net.save("net"); // load // network<sequential> net2; // net2.load("net"); }
construct multi-layer perceptron (mlp)
#include "tiny_dnn/tiny_dnn.h" using namespace tiny_dnn; using namespace tiny_dnn::activation; using namespace tiny_dnn::layers; void construct_mlp() { network<sequential> net; net << fc<sigmoid>(32 * 32, 300) << fc<identity>(300, 10); assert(net.in_data_size() == 32 * 32); assert(net.out_data_size() == 10); }
another way to construct mlp
#include "tiny_dnn/tiny_dnn.h" using namespace tiny_dnn; using namespace tiny_dnn::activation; void construct_mlp() { auto mynet = make_mlp<tan_h>({ 32 * 32, 300, 10 }); assert(mynet.in_data_size() == 32 * 32); assert(mynet.out_data_size() == 10); }
For more samples, read examples/main.cpp or the MNIST example page.
Since the deep learning community is rapidly growing, we'd love to get contributions from you to accelerate tiny-dnn development! For a quick guide to contributing, take a look at the Contribution Documents.
[1] Y. Bengio, Practical Recommendations for Gradient-Based Training of Deep Architectures. arXiv:1206.5533v2, 2012
[2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86, 2278-2324, 1998.
other useful reference lists:
The BSD 3-Clause License
We have a Gitter room for discussing new features & QA. Feel free to join us!