Edited, memorised or added to reading queue on 25-Aug-2023 (Fri)

#feature-engineering #lstm #recurrent-neural-networks #rnn
While an LSTM model does not depend on the analyst's ability to craft meaningful model features, traditional benchmarks do rely heavily on human expertise. Consequently, when an LSTM model shows superior results over a traditional response model, as we have shown in the previous illustration, we cannot ascertain whether this is due to the superiority of the LSTM model or to the poor performance of the analyst who designed the benchmark model. To alleviate that concern, we asked 297 graduate students in data science and business analytics from one of the top-ranked specialized masters in the world to compete in a marketing analytics prediction contest. Each author participated and submitted multiple models as well, for a total of 816 submissions. With the LSTM model competing against such a wide variety of human expertise and modelling approaches, it becomes easier to disentangle the model's performance from its human component.

#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
As machine learning models become ubiquitous in our everyday lives, demand for explaining their predictions is growing [5, 16, 14]. In the context of behaviour prediction, we want to understand how previous consumer actions influence model predictions: How does order probability change when products are put into the cart? Does it decrease significantly if a consumer does not return to a webshop for two days?
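A toy illustration of the kind of probe described above: comparing a model's predicted order probability on a consumer's event sequence with and without an extra action. `predict_order_prob` is a hypothetical stand-in scorer, not the authors' model; the event names are invented for the example.

```python
import math

def predict_order_prob(events):
    """Hypothetical stand-in for a trained sequence model's output."""
    # Toy scoring rule: a cart addition contributes more than a page view.
    score = sum(0.8 if e == "add_to_cart" else 0.1 for e in events)
    return 1 - math.exp(-score)  # squash the score into (0, 1)

base = ["view_product", "view_product"]
with_cart = base + ["add_to_cart"]

# How much does one "add to cart" action shift the predicted probability?
delta = predict_order_prob(with_cart) - predict_order_prob(base)
print(f"Effect of adding to cart: +{delta:.2f} order probability")
```

The same perturb-and-compare pattern works with any real model: hold the history fixed, toggle one action, and inspect the change in the prediction.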




Flashcard 7584649186572

Tags
#deep-learning #keras #lstm #python #sequence
Question

... a naive method that splits the 1,000-long sequence into 50 sequences (say) each of length 20 and treats each sequence of length 20 as a separate training case. This is a sensible approach that can work well in practice, but it is blind to temporal dependencies that span more than 20 time steps.

— Training Recurrent Neural Networks, 2013

This means that, as part of framing your problem, you must split long sequences into subsequences that are both long enough to capture [...] for making predictions, but short enough to efficiently train the network.

Answer
relevant context
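The naive splitting framing quoted above can be sketched in a few lines of NumPy. The sequence here is a toy stand-in; the final shape follows the [samples, time steps, features] convention that a Keras LSTM layer expects.

```python
import numpy as np

# Toy univariate sequence of length 1,000 (stand-in for real data).
sequence = np.arange(1000, dtype=np.float32)

# Naive framing: split the 1,000-long sequence into 50 subsequences
# of length 20 and treat each one as a separate training case.
subseq_len = 20
n_subseqs = len(sequence) // subseq_len  # 50

X = sequence[: n_subseqs * subseq_len].reshape(n_subseqs, subseq_len, 1)
print(X.shape)  # (50, 20, 1): [samples, time steps, features]
```

As the quote warns, any dependency spanning more than 20 time steps is invisible to a model trained on these subsequences.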








#deep-learning #keras #lstm #python #sequence
The caution is that LSTMs are not a silver bullet and that you must carefully consider the framing of your problem. Think of the internal state of LSTMs as a handy internal variable to capture and provide context for making predictions. If your problem looks like a traditional autoregression type pro…
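To make "internal state as context" concrete, here is a minimal NumPy sketch of a single LSTM step with randomly initialised weights (purely illustrative, not a trained model): the cell state `c` carries long-term context across time steps, while the hidden state `h` is the per-step output.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM step: gates are computed from input x and previous hidden state h.
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)  # cell state: long-term context, gated update
    h = o * np.tanh(c)          # hidden state: context exposed at this step
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for t in range(5):  # the state threads context from one step to the next
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h.shape, c.shape)
```

Because `h` and `c` persist across the loop, each prediction can depend on everything seen so far, which is exactly the context a hand-crafted autoregressive framing would have to encode manually.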