Edited, memorised or added to reading queue on 24-Aug-2023 (Thu)

#feature-engineering #lstm #recurrent-neural-networks #rnn
For this exercise, the authors developed two separate LSTM models. The first one predicted the likelihood that each donor would respond favorably to the solicitation (0/1), and they calibrated it on the entire calibration data (N = 61,928). The second LSTM model predicted the donation amount in case of donation, and they calibrated it on the individuals who donated in the calibration data (N = 6,456).
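
The two-model setup (a binary response classifier plus a conditional amount regressor trained only on donors) is straightforward to reproduce. Below is a minimal sketch in Keras, not the authors' code: the layer sizes, sequence length, feature count, and the randomly generated arrays are placeholders standing in for the real calibration data.

```python
# Minimal sketch (assumed Keras API; shapes and sizes are illustrative).
import numpy as np
from tensorflow.keras import layers, models

N_PERIODS, N_FEATURES = 24, 5  # hypothetical sequence length / feature count

def build_lstm(output_activation):
    """One LSTM layer followed by a single-unit head."""
    return models.Sequential([
        layers.Input(shape=(N_PERIODS, N_FEATURES)),
        layers.LSTM(32),
        layers.Dense(1, activation=output_activation),
    ])

# Model 1: likelihood of a favorable response (0/1),
# calibrated on the full calibration data (N = 61,928 in the paper).
clf = build_lstm("sigmoid")
clf.compile(optimizer="adam", loss="binary_crossentropy")

# Model 2: donation amount in case of donation,
# calibrated on donors only (N = 6,456 in the paper).
reg = build_lstm("linear")
reg.compile(optimizer="adam", loss="mse")

# Placeholder data: one behavioral sequence per donor.
X = np.random.rand(1000, N_PERIODS, N_FEATURES).astype("float32")
y_response = np.random.randint(0, 2, size=1000)
y_amount = np.random.gamma(2.0, 25.0, size=1000).astype("float32")

clf.fit(X, y_response, epochs=5, verbose=0)
donated = y_response == 1  # regressor sees only the donors
reg.fit(X[donated], y_amount[donated], epochs=5, verbose=0)
```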

Flashcard 7584631360780

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
For this exercise, the authors developed two separate LSTM models. The first one predicted the likelihood that each donor would [...] to the solicitation (0/1), and they calibrated it on the entire calibration data (N = 61,928). The second LSTM model predicted the donation amount in case of donation, and they calibrated it on the individuals who donated in the calibration data (N = 6,456).
Answer
respond favorably

#feature-engineering #lstm #recurrent-neural-networks #rnn
While training a model, the analyst aims to set the parameters and hyperparameters such that the model reaches optimal capacity (Goodfellow et al., 2016) and therefore maximizes the chances that the model will generalize well to unseen data. Models with low capacity underfit the training set and hence have a high bias; models with high capacity, however, may overfit the training set and exhibit high variance.
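
To make the capacity trade-off concrete, here is a minimal sketch (using Keras for illustration; it is not taken from the source paper) of the usual recipe: treat width, depth, and dropout as capacity/regularization knobs, and use a held-out validation set with early stopping to land between underfitting and overfitting.

```python
# Minimal sketch (assumed Keras API; data, shapes, and sizes are illustrative).
import numpy as np
from tensorflow.keras import layers, models, callbacks

X = np.random.rand(1000, 20).astype("float32")  # placeholder features
y = np.random.randint(0, 2, size=1000)          # placeholder binary labels

model = models.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),  # width/depth set model capacity
    layers.Dropout(0.3),                  # regularization curbs variance
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stop when the held-out loss stops improving: too few epochs underfits
# (high bias); too many overfits the training set (high variance).
early_stop = callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
model.fit(X, y, validation_split=0.2, epochs=200,
          callbacks=[early_stop], verbose=0)
```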