Gated Recurrent Unit

As we learned in my previous blog post on RNNs, a standard RNN suffers from the vanishing gradient problem.

GRU (Gated Recurrent Unit) aims to solve the vanishing gradient problem that comes with a standard recurrent neural network. GRU can also be considered a variation of the LSTM, because both are designed similarly and, in some cases, produce equally excellent results.

To solve the vanishing gradient problem of a standard RNN, GRU uses two so-called gates: an update gate and a reset gate. Basically, these are two vectors that decide what information should be passed to the output. The special thing about them is that they can be trained to keep information from long ago, without washing it out through time, and to remove information that is irrelevant to the prediction.

[Figure: structure of a GRU unit]
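
For reference, here is a compact summary of the equations that the next sections build up, written in the notation used throughout this post. Note that this post uses the convention where z_t close to 1 keeps the past state; some references swap the roles of z_t and 1 - z_t in the last equation.

```latex
% Summary of the GRU equations in the notation of this post.
% \odot denotes the Hadamard (element-wise) product, \sigma the sigmoid function.
\begin{aligned}
z_t  &= \sigma\left(W^{(z)} x_t + U^{(z)} h_{t-1}\right) && \text{(update gate)} \\
r_t  &= \sigma\left(W^{(r)} x_t + U^{(r)} h_{t-1}\right) && \text{(reset gate)} \\
h'_t &= \tanh\left(W x_t + r_t \odot U h_{t-1}\right)    && \text{(current memory content)} \\
h_t  &= z_t \odot h_{t-1} + (1 - z_t) \odot h'_t         && \text{(final memory)}
\end{aligned}
```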

Internal Mechanism of GRU

  • Let's learn what the update gate does.
    • When x_t is plugged into the network unit, it is multiplied by its own weight W(z).
    • The same goes for h_(t-1), which holds the information from the previous t-1 units and is multiplied by its own weight U(z).
    • Finally, both results are added together and a sigmoid activation function is applied to squash the result between 0 and 1.
    • The update gate helps the model determine how much of the past information (from previous time steps) needs to be passed along to the future. That is really powerful because the model can decide to copy all the information from the past and eliminate the risk of the vanishing gradient problem. A small NumPy sketch of this computation follows the figure below.

[Figure: update gate computation]
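
To make these steps concrete, here is a minimal NumPy sketch of the update gate computation. The names (W_z, U_z, x_t, h_prev) and the sizes are illustrative assumptions, not tied to any particular library:

```python
import numpy as np

def sigmoid(x):
    # Squashes every value into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-x))

input_size, hidden_size = 10, 20          # illustrative sizes
rng = np.random.default_rng(0)

W_z = rng.normal(size=(hidden_size, input_size))   # weight for the current input x_t
U_z = rng.normal(size=(hidden_size, hidden_size))  # weight for the previous hidden state h_(t-1)

x_t = rng.normal(size=(input_size,))      # current input
h_prev = rng.normal(size=(hidden_size,))  # previous hidden state h_(t-1)

# z_t = sigmoid(W(z) x_t + U(z) h_(t-1))
z_t = sigmoid(W_z @ x_t + U_z @ h_prev)
print(z_t.min() > 0, z_t.max() < 1)       # True True: every entry lies between 0 and 1
```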

  • Let's learn what the reset gate does.
    • This gate is used by the model to decide how much of the past information to forget.
    • The steps are the same as for the update gate; the only difference is that the reset gate has its own weights, W(r) and U(r). A matching sketch follows the figure below.

[Figure: reset gate computation]
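
In code, the reset gate looks exactly like the update gate; only the weights differ. A self-contained sketch with the same assumed names:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

input_size, hidden_size = 10, 20
rng = np.random.default_rng(0)

# Same structure as the update gate, but with its own independently trained weights.
W_r = rng.normal(size=(hidden_size, input_size))
U_r = rng.normal(size=(hidden_size, hidden_size))

x_t = rng.normal(size=(input_size,))
h_prev = rng.normal(size=(hidden_size,))

# r_t = sigmoid(W(r) x_t + U(r) h_(t-1))
r_t = sigmoid(W_r @ x_t + U_r @ h_prev)
```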

  • Let's learn about the current memory content.

    • We introduce a new memory content which will use the reset gate to store the relevant information from the past. It is calculated as follows:

      h'_t = tanh(W x_t + r_t ⊙ U h_(t-1))

    • Multiply the input x_t with a weight W and h_(t-1) with a weight U.

    • Calculate the Hadamard (element-wise) product between the reset gate r_t and U h_(t-1). That will determine what to remove from the previous time steps.
    • Sum up the results of step 1 and 2.
    • Apply the nonlinear activation function tanh.
    • We do an element-wise multiplication of h_(t-1) (blue line) and r_t (orange line), and then sum the result (pink line) with the input x_t (purple line). Finally, tanh is used to produce h'_t (bright green line). A NumPy sketch of this computation follows the figure below.
    • Let's take an example:
      • Let's say we have a sentiment analysis problem: determining someone's opinion about a book from a review they wrote. The text starts with "This is a fantasy book which illustrates…" and after a couple of paragraphs ends with "I didn't quite enjoy the book because I think it captures too many details." To determine the overall level of satisfaction with the book, we only need the last part of the review. In that case, as the neural network approaches the end of the text, it will learn to assign the r_t vector values close to 0, washing out the past and focusing only on the last sentences.

[Figure: current memory content computation]
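
The four steps above translate almost line for line into NumPy. The function below is a sketch; W and U here are the candidate-memory weights, separate from the gate weights, and all names are illustrative:

```python
import numpy as np

def current_memory_content(x_t, h_prev, r_t, W, U):
    # h'_t = tanh(W x_t + r_t * (U h_(t-1))), where * is the element-wise product.
    gated_past = r_t * (U @ h_prev)       # steps 1-2: U h_(t-1), gated element-wise by r_t
    return np.tanh(W @ x_t + gated_past)  # steps 1, 3, 4: W x_t, sum both terms, apply tanh

# Tiny usage example with random, untrained values.
rng = np.random.default_rng(0)
input_size, hidden_size = 10, 20
W = rng.normal(size=(hidden_size, input_size))
U = rng.normal(size=(hidden_size, hidden_size))
x_t = rng.normal(size=(input_size,))
h_prev = rng.normal(size=(hidden_size,))
r_t = 1.0 / (1.0 + np.exp(-rng.normal(size=(hidden_size,))))  # stand-in reset gate values in (0, 1)
h_candidate = current_memory_content(x_t, h_prev, r_t, W, U)
print(h_candidate.shape)  # (20,)
```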

  • Let's learn about the final memory at the current time step.

    • As the last step, the network needs to calculate h_t, the vector which holds information for the current unit and passes it down to the network. In order to do that, the update gate is needed. It determines what to collect from the current memory content, h'_t, and what from the previous steps, h_(t-1). That is done as follows:

      h_t = z_t ⊙ h_(t-1) + (1 - z_t) ⊙ h'_t

    • Apply element-wise multiplication to the update gate z_t and h_(t-1).
    • Apply element-wise multiplication to (1-z_t) and h’_t.
    • Sum the results from step 1 and 2.
    • Following through, you can see how z_t (green line) is used to calculate 1-z_t which, combined with h'_t (bright green line), produces a result in the dark red line. z_t is also used with h_(t-1) (blue line) in an element-wise multiplication. Finally, h_t (blue line) is the result of summing the outputs corresponding to the bright and dark red lines. A NumPy sketch of this step follows the figure below.

    • Let's take an example:

      • Let's bring up the example about the book review. This time, the most relevant information is positioned at the beginning of the text. The model can learn to set the vector z_t close to 1 and keep the majority of the previous information. Since z_t will be close to 1 at this time step, 1-z_t will be close to 0, which will ignore a big portion of the current content (in this case, the last part of the review, which explains the book plot) that is irrelevant for our prediction.

[Figure: final memory computation]
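
The last step is essentially one line of NumPy. A sketch under the same assumed names, with a toy demonstration of how z_t trades off old versus new information:

```python
import numpy as np

def final_memory(h_prev, h_candidate, z_t):
    # h_t = z_t * h_(t-1) + (1 - z_t) * h'_t, all products element-wise.
    # z_t close to 1 keeps mostly the previous state;
    # z_t close to 0 replaces it with the new candidate h'_t.
    return z_t * h_prev + (1.0 - z_t) * h_candidate

# Toy demonstration: an all-ones gate keeps the past exactly, an all-zeros gate discards it.
h_prev = np.array([0.5, -1.0, 2.0])
h_candidate = np.array([10.0, 10.0, 10.0])
print(final_memory(h_prev, h_candidate, np.ones(3)))   # [ 0.5 -1.   2. ]
print(final_memory(h_prev, h_candidate, np.zeros(3)))  # [10. 10. 10.]
```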

Conclusion

You can see how GRUs are able to store and filter information using their update and reset gates. That mitigates the vanishing gradient problem, since the model does not wash out its state with every new input but keeps the relevant information and passes it down to the following time steps of the network. If carefully trained, GRUs can perform extremely well even in complex scenarios.
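
To tie everything together, here is a minimal, untrained GRU cell in NumPy that follows the equations from this post. The class name, weight initialization, and sizes are all illustrative assumptions; in practice you would use a GRU layer from a deep learning library.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinimalGRUCell:
    """A minimal (untrained) GRU cell following the equations in this post."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        def w(*shape):
            return rng.normal(scale=0.1, size=shape)
        # Update gate, reset gate, and candidate-memory weights.
        self.W_z, self.U_z = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.W_r, self.U_r = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.W_h, self.U_h = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.hidden_size = hidden_size

    def step(self, x_t, h_prev):
        z_t = sigmoid(self.W_z @ x_t + self.U_z @ h_prev)             # update gate
        r_t = sigmoid(self.W_r @ x_t + self.U_r @ h_prev)             # reset gate
        h_cand = np.tanh(self.W_h @ x_t + r_t * (self.U_h @ h_prev))  # current memory content
        return z_t * h_prev + (1.0 - z_t) * h_cand                    # final memory h_t

# Run the (untrained) cell over a toy sequence of 5 time steps.
cell = MinimalGRUCell(input_size=10, hidden_size=20)
h_t = np.zeros(cell.hidden_size)
for x_t in np.random.default_rng(1).normal(size=(5, 10)):
    h_t = cell.step(x_t, h_t)
print(h_t.shape)  # (20,)
```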