< Deep Learning > Understand Backpropagation of RNN/GRU and Implement It in Pure Python---1

Understanding GRU

As we know, vanilla RNNs suffer from vanishing (and exploding) gradients. GRU/LSTM were invented to mitigate gradient vanishing, so that information from early time steps can still be encoded in later steps.

Similar to the residual structure in CNNs, a GRU/LSTM can be viewed as an RNN with a residual block. That is, the former hidden state is added almost identically to the newly computed hidden state (through a gate).
For more details about the GRU structure, you can check my earlier blog post.
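Concretely, the standard GRU forward equations, written with the same variable names as the code below (note the code stacks the two gate weight matrices into a single W, with both biases in Wb), are:

r = sigmoid(W_r · [x, prev_s] + b_r)
u = sigmoid(W_u · [x, prev_s] + b_u)
h_cand = tanh(C · [x, r * prev_s] + Cb)
s = u * prev_s + (1 - u) * h_cand

The last line is the residual-style path: the closer the gate u is to 1, the more prev_s is copied through unchanged, which gives gradients a short path backward through time.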

Forward Propagation of GRU

In this section, I will implement a pure Python version of GRU forward propagation.
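One caveat: the snippet relies on a handful of elementary gate helpers (mulGate, addGate, eltwise_mul, sig_act, tan_act) that are not defined in this post. Below is a minimal sketch of the semantics the GRU code assumes; it is reconstructed from how the helpers are called (assuming 1-D vector states), not taken from the original implementation.

import numpy as np

class MulGate:
    # Affine transform: y = W.x + b
    def forward(self, W, b, x):
        return np.dot(W, x) + b
    def backward(self, W, b, x, dy):
        # Gradients w.r.t. W, b and x.
        return np.outer(dy, x), dy, np.dot(W.T, dy)

class AddGate:
    # y = a + b; the gradient passes through to both inputs.
    def forward(self, a, b):
        return a + b
    def backward(self, a, b, dy):
        return dy, dy

class EltwiseMul:
    # y = a * b (element-wise); each input's gradient is scaled by the other input.
    def forward(self, a, b):
        return a * b
    def backward(self, a, b, dy):
        return dy * b, dy * a

class Sigmoid:
    def forward(self, x):
        return 1.0 / (1.0 + np.exp(-x))
    def backward(self, x, dy):
        s = self.forward(x)
        return dy * s * (1.0 - s)

class Tanh:
    def forward(self, x):
        return np.tanh(x)
    def backward(self, x, dy):
        t = np.tanh(x)
        return dy * (1.0 - t * t)

mulGate, addGate, eltwise_mul = MulGate(), AddGate(), EltwiseMul()
sig_act, tan_act = Sigmoid(), Tanh()

With those helpers in place, talk is cheap, so here is the forward pass: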

def forward(self, x, prev_s, W, Wb, C, Cb, V, Vb):
    # Concatenate the input and the previous hidden state, then apply one
    # affine transform that yields the pre-activations of both gates.
    self.x_prev_s = np.concatenate((x, prev_s), axis=0)
    self.hidden_num = len(prev_s)
    self.mulw = mulGate.forward(W, Wb, self.x_prev_s)
    self.mulwsig = sig_act.forward(self.mulw)

    # Split into the reset gate r and the update gate u.
    # Use integer division (//) so the slice index stays an int in Python 3.
    self.r = self.mulwsig[:len(self.mulwsig) // 2]
    self.u = self.mulwsig[len(self.mulwsig) // 2:]

    # Candidate hidden state: tanh(C . [x, r * prev_s] + Cb).
    self.r_state = eltwise_mul.forward(self.r, prev_s)
    self.x_r_state = np.concatenate((x, self.r_state), axis=0)
    self.x_r_state_mulc = mulGate.forward(C, Cb, self.x_r_state)
    self.x_r_state_mulc_tan = tan_act.forward(self.x_r_state_mulc)

    # New hidden state: s = u * prev_s + (1 - u) * candidate.
    self.tmpadd1 = eltwise_mul.forward(self.u, prev_s)
    self.u_fu = 1 - self.u
    self.tmpadd2 = eltwise_mul.forward(self.u_fu, self.x_r_state_mulc_tan)
    self.s = addGate.forward(self.tmpadd1, self.tmpadd2)

    # Output projection.
    self.mulv = mulGate.forward(V, Vb, self.s)

As you can see, we first concatenate prev_s and x and then multiply by the weight matrix W. This is equivalent to multiplying prev_s and x separately by their own weight blocks and adding the results.
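A quick numpy check of that equivalence (all shapes and names here are purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
prev_s = rng.standard_normal(4)
W = rng.standard_normal((8, 7))   # 7 = len(x) + len(prev_s)

# Split W into the block acting on x and the block acting on prev_s.
Wx, Wh = W[:, :3], W[:, 3:]

joint = W.dot(np.concatenate((x, prev_s)))
split = Wx.dot(x) + Wh.dot(prev_s)
assert np.allclose(joint, split)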

Variables ‘r’ and ‘u’ are the two gates. The reset gate ‘r’ controls how much of the previous hidden state is mixed with ‘x’ when forming the candidate state. The update gate ‘u’ controls how much of the previous hidden state flows directly into the next state.

Back Propagation of GRU

As we know, an RNN uses back-propagation through time (BPTT) to compute gradients for each time step. BPTT means the gradients flow backward through time. The code snippet below shows how the gradients flow inside the GRU structure at each time step:

def backward(self, x, prev_s, W, Wb, C, Cb, V, Vb, diff_s, dmulv):
    # Gradient from the output projection, plus the gradient arriving
    # from the next time step (diff_s).
    dV, dVb, dsv = mulGate.backward(V, Vb, self.s, dmulv)
    ds = dsv + diff_s

    # Back through s = u * prev_s + (1 - u) * candidate.
    dtmpadd1, dtmpadd2 = addGate.backward(self.tmpadd1, self.tmpadd2, ds)
    du_fu, dx_r_state_mulc_tan = eltwise_mul.backward(self.u_fu, self.x_r_state_mulc_tan, dtmpadd2)
    du1 = -du_fu  # u_fu = 1 - u, so d(u_fu)/du = -1
    du2, dprev_s0 = eltwise_mul.backward(self.u, prev_s, dtmpadd1)
    du = du1 + du2

    # Back through the candidate state: the tanh, then the C multiply.
    dx_r_state_mulc = tan_act.backward(self.x_r_state_mulc, dx_r_state_mulc_tan)
    dC, dCb, dx_x_r_state = mulGate.backward(C, Cb, self.x_r_state, dx_r_state_mulc)
    dx = dx_x_r_state[:len(x)]
    dr_state = dx_x_r_state[len(x):]

    # Back through the reset gate and the shared gate multiply.
    dr, dprev_s1 = eltwise_mul.backward(self.r, prev_s, dr_state)
    dmulwsig = np.concatenate((dr, du), axis=0)
    dmulw = sig_act.backward(self.mulw, dmulwsig)
    dW, dWb, dx_prev_s = mulGate.backward(W, Wb, self.x_prev_s, dmulw)

    # prev_s feeds three branches, so its gradients are summed.
    dprev_s = dx_prev_s[-self.hidden_num:] + dprev_s1 + dprev_s0

    return (dprev_s, dW, dWb, dC, dCb, dV, dVb)

I have to admit that writing a backpropagation routine is a fiddly job: you have to carefully compute the gradients flowing through every operation without making a single mistake.
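A good way to catch such mistakes is numerical gradient checking: compare each analytic gradient against a central finite difference. Here is a minimal sketch of the idea, applied to the sig_act helper sketched earlier (the grad_check function is my own illustration, not part of the original post):

import numpy as np

def grad_check(f, f_backward, x, eps=1e-5):
    # Compare the analytic gradient of sum(f(x)) with a central finite difference.
    dy = np.ones_like(f(x))            # gradient of sum() w.r.t. f(x)
    analytic = f_backward(x, dy)
    numeric = np.zeros_like(x)
    for i in range(len(x)):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        numeric[i] = (f(x_plus).sum() - f(x_minus).sum()) / (2 * eps)
    return np.max(np.abs(analytic - numeric))

x = np.random.randn(5)
print(grad_check(sig_act.forward, sig_act.backward, x))  # should be tiny, ~1e-9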

There are a few things worth noting.

First, the previous hidden state is used three times in the forward propagation (first branch: computing gates ‘r’ and ‘u’; second branch: mixing information with the input ‘x’ to form the candidate state; third branch: being added almost identically into the final hidden state). So you must sum the hidden-state gradients computed from all three branches. That third, near-identity branch is precisely what helps prevent gradient vanishing.

Second, remember to feed the computed gradient of the hidden state (dprev_s) into the backward pass of the previous time step as its incoming diff_s, as sketched below.
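For completeness, here is a minimal sketch of the outer BPTT loop that wires the per-step backward calls together. All the names here (layers, xs, states, dmulvs) are illustrative assumptions, not from the original post; the weights are shared across time, so their gradients accumulate:

import numpy as np

def bptt_backward(layers, xs, states, dmulvs, W, Wb, C, Cb, V, Vb):
    # layers[t]: the GRU cell object used at step t (forward already run).
    # states[t]: hidden state s produced at step t; dmulvs[t]: dLoss/d(output_t).
    dW, dWb = np.zeros_like(W), np.zeros_like(Wb)
    dC, dCb = np.zeros_like(C), np.zeros_like(Cb)
    dV, dVb = np.zeros_like(V), np.zeros_like(Vb)

    diff_s = np.zeros_like(states[0])  # nothing flows in after the final step
    for t in reversed(range(len(layers))):
        prev_s = states[t - 1] if t > 0 else np.zeros_like(states[0])
        diff_s, dW_t, dWb_t, dC_t, dCb_t, dV_t, dVb_t = layers[t].backward(
            xs[t], prev_s, W, Wb, C, Cb, V, Vb, diff_s, dmulvs[t])
        # Shared weights: accumulate gradients over time steps.
        dW += dW_t; dWb += dWb_t
        dC += dC_t; dCb += dCb_t
        dV += dV_t; dVb += dVb_t
    return dW, dWb, dC, dCb, dV, dVb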
