Edited, memorised or added to reading queue on 12-Jul-2019 (Fri)


Optionals ensure that nil values are handled explicitly.
Status: not read





Flashcard 4241528589580

Question
Optionals ensure that nil values are handled explicitly.
Answer
[default - edit me]

Status: not learned, measured difficulty 37% [default], repetition number in this series: 0








and it continues to evolve
Status: not read





Flashcard 4244677463308

Question
Historically, the answer to what makes us human, what makes us deal with the world in the ways we do,
Answer
has been that we have a different kind of intelligence than other animals.

Status: not learned, measured difficulty 37% [default], repetition number in this series: 0








A very different answer to what makes us human is provided in this book. Yes, humans have a different kind of intelligence than other animals, but we also have a different kind of motivation. The human motivation for shared reality—the motivation to share our feelings, thoughts, and concerns with others—is unique to humans. It is captured in “I wish you were here.”
Status: not read





We’re going to train a simple neural network with a single hidden layer to perform a certain task, but then we’re not actually going to use that neural network for the task we trained it on! Instead, the goal is actually just to learn the weights of the hidden layer–we’ll see that these weights are actually the “word vectors” that we’re trying to learn.
Status: not read

Unknown title
…aks and enhancements that start to clutter the explanation. Let’s start with a high-level insight about where we’re going. Word2Vec uses a trick you may have seen elsewhere in machine learning. We’re going to train a simple neural network with a single hidden layer to perform a certain task, but then we’re not actually going to use that neural network for the task we trained it on! Instead, the goal is actually just to learn the weights of the hidden layer–we’ll see that these weights are actually the “word vectors” that we’re trying to learn. Another place you may have seen this trick is in unsupervised feature learning, where you train an auto-encoder to compress an input vector in the hidden layer, and decompress it back t…
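
To make that hidden-layer trick concrete, here is a minimal sketch in Python/NumPy. It is not the excerpted tutorial's code: the toy corpus, the embedding size EMBED_DIM, the full-softmax loss, and plain gradient descent are all assumptions made for illustration (real Word2Vec implementations use optimisations such as negative sampling). The only point is that once training is done, the prediction task is thrown away and the input-to-hidden weight matrix W_in is kept as the word vectors.

# Minimal sketch (assumed toy example, not the tutorial's code) of training a
# one-hidden-layer network and then keeping only its hidden-layer weights.
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
word_to_id = {w: i for i, w in enumerate(vocab)}
V, EMBED_DIM = len(vocab), 10            # EMBED_DIM is an arbitrary choice here

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, EMBED_DIM))    # hidden-layer weights = the word vectors we keep
W_out = rng.normal(scale=0.1, size=(EMBED_DIM, V))   # output-layer weights, discarded after training

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# (center word, nearby word) training pairs with a window of 2 words on each side
pairs = [(word_to_id[corpus[i]], word_to_id[corpus[j]])
         for i in range(len(corpus))
         for j in range(max(0, i - 2), min(len(corpus), i + 3))
         if j != i]

lr = 0.05
for _ in range(200):
    for center, context in pairs:
        h = W_in[center].copy()          # a one-hot input just selects one row of W_in
        probs = softmax(h @ W_out)       # predicted distribution over "nearby" words
        grad_logits = probs
        grad_logits[context] -= 1.0      # gradient of cross-entropy w.r.t. the logits
        grad_h = W_out @ grad_logits     # gradient w.r.t. the hidden activation
        W_out -= lr * np.outer(h, grad_logits)
        W_in[center] -= lr * grad_h

# The trained prediction task is never used again; the rows of W_in are the word vectors.
word_vectors = {w: W_in[i] for w, i in word_to_id.items()}
print(word_vectors["fox"])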




Given a specific word in the middle of a sentence (the input word), look at the words nearby and pick one at random.
Status: not read

Unknown title
…ural network to perform, and then we’ll come back later to how this indirectly gives us those word vectors that we are really after. We’re going to train the neural network to do the following. Given a specific word in the middle of a sentence (the input word), look at the words nearby and pick one at random. The network is going to tell us the probability for every word in our vocabulary of being the “nearby word” that we chose. When I say "nearby", there is actually a "window size" paramet…
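
As a rough illustration of that sampling step, here is a short Python sketch. It is an assumption rather than the article's own code: the example sentence, the function name sample_training_pair, and the window size of 2 are made up (the excerpt itself notes the window size is a tunable parameter).

# Sketch of drawing one (input word, nearby word) training pair at random.
import random

def sample_training_pair(words, window_size=2):
    i = random.randrange(len(words))                   # position of the input (center) word
    lo = max(0, i - window_size)
    hi = min(len(words), i + window_size + 1)
    neighbours = [j for j in range(lo, hi) if j != i]  # word positions inside the window
    return words[i], words[random.choice(neighbours)]  # the randomly chosen nearby word

sentence = "the quick brown fox jumps over the lazy dog".split()
print(sample_training_pair(sentence))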