


Let’s start with a high-level insight about where we’re going. Word2Vec uses a trick you may have seen elsewhere in machine learning. We’re going to train a simple neural network with a single hidden layer to perform a certain task, but then we’re not actually going to use that neural network for the task we trained it on! Instead, the goal is just to learn the weights of the hidden layer; we’ll see that these weights are actually the “word vectors” we’re trying to learn. Another place you may have seen this trick is in unsupervised feature learning, where you train an auto-encoder to compress an input vector in the hidden layer and decompress it back to the original in the output layer.
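To make the "hidden-layer weights are the word vectors" idea concrete, here is a minimal NumPy sketch (not the actual Word2Vec training loop; the sizes, variable names, and random weights are invented for illustration). It shows that feeding a one-hot word vector through a linear hidden layer simply selects one row of the weight matrix, which is exactly the word's embedding:

```python
import numpy as np

# Hypothetical tiny sizes for illustration only.
vocab_size, embed_dim = 5, 3
rng = np.random.default_rng(0)

W_hidden = rng.normal(size=(vocab_size, embed_dim))  # input -> hidden weights
W_output = rng.normal(size=(embed_dim, vocab_size))  # hidden -> output weights

# One-hot input vector for the word at index 2 ...
x = np.zeros(vocab_size)
x[2] = 1.0

# ... so the hidden layer (no activation function) is just a row lookup:
hidden = x @ W_hidden
assert np.allclose(hidden, W_hidden[2])

# After training, row i of W_hidden is the "word vector" for word i;
# W_output and the prediction task itself are thrown away.
```

The point of the sketch is that the hidden layer does no real computation for a one-hot input: training the network on the prediction task is just a device for shaping the rows of `W_hidden`.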


