
#deep_learning #foundations #initialization #lesson_9
Batch normalization (Ioffe & Szegedy, 2015), a technique that inserts layers into the deep net that transform each batch's activations to zero mean and unit variance, has successfully facilitated training of the twenty-two-layer GoogLeNet (Szegedy et al., 2015). However, batch normalization adds a roughly 30% computational overhead to each iteration.
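The zero-mean, unit-variance transform described above can be sketched in a few lines of NumPy. This is a minimal forward pass in training mode only (no running statistics for inference); the learnable scale `gamma` and shift `beta` parameters follow the Ioffe & Szegedy formulation, and the function name and defaults here are illustrative, not from any particular library.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance
    per feature, then apply a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                       # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)   # normalized activations
    return gamma * x_hat + beta

# Example: a batch of 4 samples with 3 features each,
# deliberately shifted and scaled away from (0, 1)
x = np.random.randn(4, 3) * 5 + 2
y = batch_norm(x)
print(y.mean(axis=0))  # ≈ 0 for each feature
print(y.std(axis=0))   # ≈ 1 for each feature
```

The extra mean/variance reductions and the normalization itself run on every forward and backward pass, which is where the computational overhead cited above comes from.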
owner: ronaldokun - All You Need is a Good Init, p1

