Batch normalization has been credited with substantial performance improvements in deep neural nets. Plenty of material on the internet shows how to implement it on an activation-by-activation basis. I've already implemented backprop using matrix algebra, and given that I'm working in high-level languages (while relying on Rcpp (and eventually GPUs) for dense matrix multiplication), ripping everything out and resorting to for-loops would probably slow my code substantially, in addition to being a huge pain.
The batch normalization function is
$$b\left(x_p\right) = \gamma\left(x_p - \mu_{x_p}\right)\sigma_{x_p}^{-1} + \beta$$
where
- $x_p$ is the $p$th node, before it gets activated
- $\gamma$ and $\beta$ are scalar parameters
- $\mu_{x_p}$ and $\sigma_{x_p}$ are the mean and SD of $x_p$. (Note that the square root of the variance plus a fudge factor is normally used; let's assume nonzero elements for compactness.)

A minimal R sketch of this activation-wise function follows the list.
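(My own illustration, for concreteness; bn_node is a made-up helper name, sd() divides by $N-1$ like the rest of this post, and the fudge factor is omitted.)

# activation-wise batch norm for one pre-activation vector x
bn_node <- function(x, gamma, beta) {
  gamma * (x - mean(x)) / sd(x) + beta
}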
In matrix form, batch normalization for a whole layer would be
$$b\left(\mathbf{X}\right) = \left(\gamma \otimes \mathbf{1}_N\right) \odot \left(\mathbf{X} - \mu_{\mathbf{X}}\right) \odot \sigma_{\mathbf{X}}^{-1} + \left(\beta \otimes \mathbf{1}_N\right)$$
where
- $\mathbf{X}$ is $N \times p$
- $\mathbf{1}_N$ is a column vector of ones
- $\gamma$ and $\beta$ are now row $p$-vectors of the per-layer normalization parameters
- $\mu_{\mathbf{X}}$ and $\sigma_{\mathbf{X}}$ are $N \times p$ matrices, where each column is an $N$-vector of columnwise means and standard deviations
- $\otimes$ is the Kronecker product and $\odot$ is the elementwise (Hadamard) product

A quick numerical check of this matrix form follows the list.
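(Again a sketch of my own, with throwaway names; base R's scale() does the columnwise standardization and %x% is the Kronecker product.)

# check: (gamma kron 1_N) (*) scale(X) + (beta kron 1_N) reproduces
# per-column normalization on a toy matrix
Nc <- 5; pc <- 3
Xc <- matrix(rnorm(Nc * pc), Nc)
g0 <- rnorm(pc); b0 <- rnorm(pc)
ones <- matrix(rep(1, Nc))
lhs <- (t(g0) %x% ones) * scale(Xc) + t(b0) %x% ones
rhs <- sapply(1:pc, function(j) g0[j] * (Xc[, j] - mean(Xc[, j])) / sd(Xc[, j]) + b0[j])
max(abs(lhs - rhs))  # ~ 0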
A very simple one-layer neural net with no batch normalization and a continuous outcome is
$$y = a\left(\mathbf{X}\Gamma_1\right)\Gamma_2 + \epsilon$$
where
- $\Gamma_1$ is $p_1 \times p_2$
- $\Gamma_2$ is $p_2 \times 1$
- $a\left(\cdot\right)$ is the activation function
If the loss is $R = \lVert y - \hat{y} \rVert^2$, then the gradients are
$$\frac{\partial R}{\partial \Gamma_1} = -2\,\mathbf{X}^T\left(\hat{\epsilon}\,\Gamma_2^T \odot a'\left(\mathbf{X}\Gamma_1\right)\right)
\qquad
\frac{\partial R}{\partial \Gamma_2} = -2\,a\left(\mathbf{X}\Gamma_1\right)^T\hat{\epsilon}$$
where $\hat{\epsilon} = y - \hat{y}$.
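(Before moving on: these two expressions are easy to verify with finite differences. A throwaway sketch of my own, with fresh names, assuming the loss is summed squared error so the $-2$ factors line up:)

# finite-difference check of dR/dGamma1 for the plain (no batch norm) net
set.seed(2)
n0 <- 6; k1 <- 3; k2 <- 2
X0 <- matrix(rnorm(n0 * k1), n0)
W1 <- matrix(rnorm(k1 * k2), k1)
W2 <- rnorm(k2)
y0 <- rnorm(n0)
act  <- function(v) pmax(v, 0)    # ReLU
actp <- function(v) (v > 0) * 1   # its (sub)derivative
loss <- function(W) sum((y0 - act(X0 %*% W) %*% W2)^2)
ehat <- y0 - act(X0 %*% W1) %*% W2
dW1  <- -2 * t(X0) %*% ((ehat %*% t(W2)) * actp(X0 %*% W1))
eps <- 1e-6
W1p <- W1; W1p[1, 1] <- W1p[1, 1] + eps
(loss(W1p) - loss(W1)) / eps      # approximately dW1[1, 1]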
Under batch normalization, the net becomes
$$y = a\left(b\left(\mathbf{X}\Gamma_1\right)\right)\Gamma_2 + \epsilon$$
Is there a practical way of computing $\partial R/\partial \gamma$, $\partial R/\partial \beta$, and $\partial R/\partial \Gamma_1$ within the matrix framework? A simple expression, without resorting to node-by-node computation?
Update 1:
I've figured out $\partial R/\partial \beta$ -- sort of. It is:
$$\frac{\partial R}{\partial \beta} = \mathbf{1}_N^T\left(-2\,\hat{\epsilon}\,\Gamma_2^T \odot a'\left(b\left(\mathbf{X}\Gamma_1\right)\right)\right)$$
The code below demonstrates it.
set.seed(1)
library(dplyr)
library(foreach)
# numbers of obs, input variables, and hidden nodes
N <- 10
p1 <- 7
p2 <- 4
# ReLU activation and its (sub)derivative
a <- function (v) {
  v[v < 0] <- 0
  v
}
ap <- function (v) {
  v[v < 0] <- 0
  v[v >= 0] <- 1
  v
}
# parameters
G1 <- matrix(rnorm(p1*p2), nrow = p1)
G2 <- rnorm(p2)
gamma <- 1:p2+1
beta <- (1:p2+1)*-1
# error
u <- rnorm(N)
# matrix batch norm function
b <- function(x, bet = beta, gam = gamma){
  xs <- scale(x)                              # columnwise standardization
  gk <- t(matrix(gam)) %x% matrix(rep(1, N))  # gamma kron 1_N
  bk <- t(matrix(bet)) %x% matrix(rep(1, N))  # beta kron 1_N
  gk*xs + bk
}
# activation-wise batch norm function
bi <- function(x, i){
  xs <- scale(x)
  gamma[i]*xs[,i] + beta[i]  # column i of b(x), with scalar gamma[i], beta[i]
}
X <- round(runif(N*p1, -5, 5)) %>% matrix(nrow = N)
# the neural net
y <- a(b(X %*% G1)) %*% G2 + u
Then compute derivatives:
# drdbeta -- the matrix way
drdb <- matrix(rep(1, N*1), nrow = 1) %*% (-2*u %*% t(G2) * ap(b(X%*%G1)))
drdb
[,1] [,2] [,3] [,4]
[1,] -0.4460901 0.3899186 1.26758 -0.09589582
# the looping way
foreach(i = 1:4, .combine = c) %do% {
  sum(-2*u*ap(bi(X %*% G1, i))*G2[i])
}
[1] -0.44609015 0.38991862 1.26758024 -0.09589582
They match. But I'm still confused, because I don't really know why this works. The MatCalc notes referenced by @Mark L. Stone say that the derivative of $\beta \otimes \mathbf{1}_N$ with respect to $\beta$ should follow
$$\frac{\partial\,\mathrm{vec}\left(A \otimes B\right)}{\partial\,\mathrm{vec}\left(A\right)^T} = \left(I_n \otimes K_{qm} \otimes I_p\right)\left(I_{mn} \otimes \mathrm{vec}\left(B\right)\right)$$
for $A$ an $m \times n$ matrix and $B$ a $p \times q$ matrix, with $K_{qm}$ the commutation matrix.
# playing with the Kronecker derivative rule
A <- t(matrix(beta))
B <- matrix(rep(1, N))
diag(rep(1, ncol(A) *ncol(B))) %*% diag(rep(1, ncol(A))) %x% (B) %x% diag(nrow(A))
[,1] [,2] [,3] [,4]
[1,] 1 0 0 0
[2,] 1 0 0 0
snip
[13,] 0 1 0 0
[14,] 0 1 0 0
snip
[28,] 0 0 1 0
[29,] 0 0 1 0
snip
[39,] 0 0 0 1
[40,] 0 0 0 1
This isn't conformable. Clearly I'm not understanding those Kronecker derivative rules. Help with those would be great. I'm still totally stuck on the other derivatives, for $\gamma$ and $\Gamma_1$ -- those are harder because they don't enter additively like $\beta$ does.
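(To try to sort out my confusion, here's my own check of that rule, with a hand-rolled commutation-matrix helper K(). If I've applied it correctly, the 40 x 4 block matrix above is the Jacobian of vec(beta kron 1_N) with respect to vec(beta) -- so it's conformable with vec(beta), not with beta itself, which seems to be the source of my conformability problem.)

# commutation matrix: K(m, n) %*% c(M) == c(t(M)) for any m x n matrix M
K <- function(m, n) {
  out <- matrix(0, m * n, m * n)
  for (i in 1:m) for (j in 1:n) out[(i - 1) * n + j, (j - 1) * m + i] <- 1
  out
}
# reuse A (1 x 4) and B (N x 1) from above
m <- nrow(A); n <- ncol(A); p <- nrow(B); q <- ncol(B)
# Magnus-Neudecker: vec(A kron B) = (I_n kron K(q,m) kron I_p) %*% (vec(A) kron vec(B))
all.equal(matrix(c(A %x% B)),
          (diag(n) %x% K(q, m) %x% diag(p)) %*% (matrix(c(A)) %x% matrix(c(B))))  # TRUE
# holding B fixed, the Jacobian of vec(A kron B) in vec(A) is then
J <- (diag(n) %x% K(q, m) %x% diag(p)) %*% (diag(m * n) %x% matrix(c(B)))
dim(J)  # 40 x 4 -- the block matrix printed above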
Update 2
Reading textbooks, I'm fairly sure that $\partial R/\partial \gamma$ and $\partial R/\partial \Gamma_1$ will require use of the vec() operator. But I'm apparently unable to follow the derivations well enough to be able to translate them into code. For example, $\partial R/\partial \gamma$ is going to involve taking the derivative of $\left(\gamma \otimes \mathbf{1}_N\right) \odot \tilde{X}$ with respect to $\gamma$, where $\tilde{X} \equiv \left(\mathbf{X} - \mu_{\mathbf{X}}\right) \odot \sigma_{\mathbf{X}}^{-1}$ (which we can treat as a constant matrix for the moment).

My instinct is to simply say "the answer is $\tilde{X}$", but that obviously doesn't work because $\tilde{X}$ isn't conformable with $\gamma$.

I know that
$$\mathrm{vec}\left(A \odot B\right) = \mathrm{diag}\left(\mathrm{vec}\left(A\right)\right)\mathrm{vec}\left(B\right)$$
and from this, that
$$\frac{\partial\,\mathrm{vec}\left(\left(\gamma \otimes \mathbf{1}_N\right) \odot \tilde{X}\right)}{\partial\,\mathrm{vec}\left(\gamma \otimes \mathbf{1}_N\right)^T} = \mathrm{diag}\left(\mathrm{vec}\left(\tilde{X}\right)\right)$$
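(That Hadamard/vec identity, at least, is easy to verify numerically -- throwaway names again:)

# check: vec(A (*) B) == diag(vec(A)) %*% vec(B)
A2 <- matrix(rnorm(6), 2); B2 <- matrix(rnorm(6), 2)
all.equal(c(A2 * B2), c(diag(c(A2)) %*% c(B2)))  # TRUE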
Update 3
Making progress here. I woke up at 2AM last night with this idea. Math is not good for sleep.
Here is $\partial R/\partial \Gamma_1$, after some notational sugar: define the "stub"
$$S \equiv -2\,\hat{\epsilon}\,\Gamma_2^T \odot a'\left(b\left(\mathbf{X}\Gamma_1\right)\right)$$
and the weight matrix
$$w \equiv \left(\gamma \otimes \mathbf{1}_N\right) \odot \left(\sigma_{\mathbf{X}\Gamma_1} \otimes \mathbf{1}_N\right)$$
with $\sigma_{\mathbf{X}\Gamma_1}$ here the row $p_2$-vector of columnwise standard deviations of $\mathbf{X}\Gamma_1$. Here's what you have after you get to the end of the chain rule:
$$\frac{\partial R}{\partial \Gamma_1} = \mathbf{X}^T\left(S \odot w\right)$$
And, in fact, it is:
# the "stub": -2 * ehat %*% t(Gamma2), elementwise-multiplied by a'(b(X Gamma1))
stub <- (-2*u %*% t(G2) * ap(b(X%*%G1)))
# w: (gamma kron 1_N) (*) (sigma kron 1_N), sigma = columnwise SDs of X %*% G1
w <- t(matrix(gamma)) %x% matrix(rep(1, N)) * (apply(X%*%G1, 2, sd) %>% t %x% matrix(rep(1, N)))
drdG1 <- t(X) %*% (stub*w)
# element-by-element check, using diag(w[,j]) as in the vec/Hadamard identity above
loop_drdG1 <- drdG1*NA
for (i in 1:7){
  for (j in 1:4){
    loop_drdG1[i,j] <- t(X[,i]) %*% diag(w[,j]) %*% (stub[,j])
  }
}
> loop_drdG1
[,1] [,2] [,3] [,4]
[1,] -61.531877 122.66157 360.08132 -51.666215
[2,] 7.047767 -14.04947 -41.24316 5.917769
[3,] 124.157678 -247.50384 -726.56422 104.250961
[4,] 44.151682 -88.01478 -258.37333 37.072659
[5,] 22.478082 -44.80924 -131.54056 18.874078
[6,] 22.098857 -44.05327 -129.32135 18.555655
[7,] 79.617345 -158.71430 -465.91653 66.851965
> drdG1
[,1] [,2] [,3] [,4]
[1,] -61.531877 122.66157 360.08132 -51.666215
[2,] 7.047767 -14.04947 -41.24316 5.917769
[3,] 124.157678 -247.50384 -726.56422 104.250961
[4,] 44.151682 -88.01478 -258.37333 37.072659
[5,] 22.478082 -44.80924 -131.54056 18.874078
[6,] 22.098857 -44.05327 -129.32135 18.555655
[7,] 79.617345 -158.71430 -465.91653 66.851965
Update 4
Here, I think, is $\partial R/\partial \gamma$. First,
$$b\left(\mathbf{X}\Gamma_1\right) = \left(\gamma \otimes \mathbf{1}_N\right) \odot \widetilde{\mathbf{X}\Gamma_1} + \left(\beta \otimes \mathbf{1}_N\right)$$
where $\widetilde{\mathbf{X}\Gamma_1}$ is the columnwise-standardized version of $\mathbf{X}\Gamma_1$. Similar to before, the chain rule gets you as far as
$$\frac{\partial R}{\partial \gamma} = \widetilde{\mathbf{X}\Gamma_1}^T\left(S \odot \left(\gamma \otimes \mathbf{1}_N\right)\right)$$
It sort of matches:
drdg <- t(scale(X %*% G1)) %*% (stub * t(matrix(gamma)) %x% matrix(rep(1, N)))
loop_drdg <- foreach(i = 1:4, .combine = c) %do% {
  t(scale(X %*% G1)[,i]) %*% (stub[,i, drop = F] * gamma[i])
}
> drdg
[,1] [,2] [,3] [,4]
[1,] 0.8580574 -1.125017 -4.876398 0.4611406
[2,] -4.5463304 5.960787 25.837103 -2.4433071
[3,] 2.0706860 -2.714919 -11.767849 1.1128364
[4,] -8.5641868 11.228681 48.670853 -4.6025996
> loop_drdg
[1] 0.8580574 5.9607870 -11.7678486 -4.6025996
The diagonal on the first is the same as the vector on the second. But really, since the derivative is with respect to a matrix -- albeit one with a certain structure -- the output should be a similar matrix with the same structure. Should I take the diagonal of the matrix approach and simply take it to be $\partial R/\partial \gamma$? I'm not sure.
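(One observation in the meantime: the diagonal of that matrix expression is just a set of columnwise sums, since diag(t(Z) %*% W) equals colSums(Z * W), so the loop vector can be computed without forming the off-diagonal entries at all:)

# the loop vector, computed directly as columnwise sums
colSums(scale(X %*% G1) * stub * (t(matrix(gamma)) %x% matrix(rep(1, N))))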
It seems that I have answered my own question, but I am unsure whether I am correct. At this point I will accept an answer that rigorously proves (or disproves) what I've sort of hacked together.
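(While I wait, here is the kind of finite-difference check that ought to prove or disprove the expressions above. One caveat of my own: b() recomputes the batch mean and SD, while my chain rule treated them as constants, so a mismatch here would point at exactly that assumption.)

# finite-difference check of drdG1, assuming squared-error loss R = sum((y - yhat)^2)
R_of <- function(G) sum((y - a(b(X %*% G)) %*% G2)^2)
eps <- 1e-6
G1p <- G1; G1p[1, 1] <- G1p[1, 1] + eps
(R_of(G1p) - R_of(G1)) / eps  # compare with drdG1[1, 1]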
while(not_answered){
  print("Bueller?")
  Sys.sleep(1)
}
Answers:
Not a complete answer, but to demonstrate what I suggested in my comment if