Showing content from https://github.com/gcr/torch-residual-networks/issues/5 below:

BatchNorm after ReLU · Issue #5 · gcr/torch-residual-networks · GitHub

Hi,

I am performing a somewhat similar benchmark, but on caffenet128 (and am moving to ResNets now) on ImageNet.
One thing that I have found: the best position of BN in a non-residual net is after ReLU and without a scale+bias layer (https://github.com/ducha-aiki/caffenet-benchmark/blob/master/batchnorm.md):

| Name | Accuracy | LogLoss | Comments |
|---|---|---|---|
| Before | 0.474 | 2.35 | As in paper |
| Before + scale&bias layer | 0.478 | 2.33 | As in paper |
| After | 0.499 | 2.21 | |
| After + scale&bias layer | 0.493 | 2.24 | |
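For clarity, the two orderings compared in the table ("Before" vs. "After") can be sketched with a minimal hand-rolled batch norm in numpy. This is only an illustration of the layer ordering, not the actual Caffe or Torch setup used in the benchmark; the normalization here omits the learnable scale+bias, matching the "without scale&bias layer" variant:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each feature over the batch dimension.
    # No learnable scale/bias -- the "without scale&bias layer" variant.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def relu(x):
    return np.maximum(x, 0.0)

x = np.random.randn(64, 16)  # toy batch: 64 samples, 16 features

# "Before": BN on the pre-activations, then ReLU (as in the original paper)
before = relu(batch_norm(x))

# "After": ReLU first, then BN -- the ordering that scored best in the table
after = batch_norm(relu(x))

# With BN applied last, every feature the next layer sees is re-centered
# to zero mean, unit variance; with BN-then-ReLU it is not.
print(np.allclose(after.mean(axis=0), 0.0, atol=1e-6))
```

The practical difference: in the "After" ordering the next layer receives normalized inputs directly, whereas in the "Before" ordering the ReLU re-skews the distribution after normalization.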

Maybe it is worth testing this too.

Second, results on CIFAR-10 often contradict results on ImageNet. For example, leaky ReLU > ReLU on CIFAR-10, but it is worse on ImageNet.

P.S. We could cooperate on ImageNet testing, if you agree.

