Edited, memorised or added to reading queue on 27-Jan-2023 (Fri)


#recurrent-neural-networks #rnn
In this specific domain of customer base analysis, probabilistic approaches from the “Buy ’Till You Die” (BTYD) model family represent the gold standard, leveraging easily observable Recency and Frequency (RF, or RFM when also including the monetary value) metrics together with a latent attrition process to deliver accurate predictions (Schmittlein, Morrison, & Colombo, 1987; Fader, Hardie, & Lee, 2005; Fader & Hardie, 2009).


Parent (intermediate) annotation

In this specific domain of customer base analysis, probabilistic approaches from the “Buy ’Till You Die” (BTYD) model family represent the gold standard, leveraging easily observable Recency and Frequency (RF, or RFM when also including the monetary value) metrics together with a latent attrition process to deliver accurate predictions (Schmittlein, Morrison, & Colombo, 1987; Fader, Hardie, & Lee, 2005; Fader & Hardie, 2009). The simple behavioral story which sits at the core of BTYD models – while “alive”, customers make purchases until they drop out – gives these models robust predictive power, especially …
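As a concrete illustration of the RF(M) inputs mentioned above, the pandas sketch below derives them from a raw transaction log, following the usual BTYD convention (frequency = number of repeat purchases, recency = time from first to last purchase, T = length of the observation window). This is not code from the cited papers; the table, column names and calibration cut-off are made up.

import pandas as pd

# Hypothetical transaction log: one row per purchase (all names illustrative).
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "date": pd.to_datetime(["2022-01-05", "2022-03-20", "2022-02-01",
                            "2022-02-15", "2022-06-30"]),
    "amount": [50.0, 30.0, 20.0, 25.0, 40.0],
})
end_of_calibration = pd.Timestamp("2022-12-31")

dates = tx.groupby("customer_id")["date"]
first, last, n = dates.min(), dates.max(), dates.count()

rfm = pd.DataFrame({
    "frequency": n - 1,                                      # repeat purchases
    "recency": (last - first).dt.days,                       # time from first to last purchase
    "T": (end_of_calibration - first).dt.days,               # length of observation window
    "monetary": tx.groupby("customer_id")["amount"].mean(),  # average spend per purchase
})
print(rfm)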





Flashcard 7560252230924

Tags
#bayes #programming #r #statistics
Question
One way to summarize the [...] is by marking the span of values that are most credible and cover 95% of the distribution. This is called the highest density interval (HDI) and is marked by the black bar on the floor of the distribution in Figure 2.5.
Answer
uncertainty


Parent (intermediate) annotation

One way to summarize the uncertainty is by marking the span of values that are most credible and cover 95% of the distribution. This is called the highest density interval (HDI) and is marked by the black bar on the floor …
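Although the excerpt comes from an R-oriented text, the idea is easy to make concrete in a short Python sketch (an illustration, not the book's own code): the 95% HDI is simply the narrowest interval that contains 95% of the posterior samples.

import numpy as np

def hdi(samples, cred_mass=0.95):
    """Shortest interval containing cred_mass of the samples (unimodal case)."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    k = int(np.ceil(cred_mass * n))       # number of samples inside the interval
    widths = x[k - 1:] - x[:n - k + 1]    # width of every candidate interval
    i = int(np.argmin(widths))            # the narrowest candidate is the HDI
    return x[i], x[i + k - 1]

# e.g. posterior draws from an MCMC run (illustrative Beta posterior)
samples = np.random.default_rng(0).beta(14, 8, size=50_000)
print(hdi(samples))                       # roughly (0.44, 0.83) for this posterior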








#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
In principle, one could evaluate the logistic regression model at every single time-step in the consumer history to determine the influence of individual events. However, this would involve the inefficient process of re-calculating features for every time-step. Calculations at timesteps t and t − 1 would be highly redundant: features at t represent the complete history until t and not only what happened in between t − 1 and t.


Parent (intermediate) annotation

In principle, one could evaluate the logistic regression model at every single time-step in the consumer history to determine the influence of individual events. However, this would involve the inefficient process of re-calculating features for every time-step. Calculations at timesteps t and t − 1 would be highly redundant: features at t represent the complete history until t and not only what happened in between t − 1 and t. Generally speaking, explaining the predictions of vector-based methods is more difficult than often assumed. This holds even for linear models like logistic regression. Features are …
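The redundancy can be seen in a tiny sketch (illustrative, not from the paper), using purchase frequency as an example feature: recomputing it from the complete history at every time-step repeats almost all of the work that a single incremental update already does.

import numpy as np

events = np.array([1, 0, 0, 1, 1, 0, 1])   # hypothetical purchase flag per time-step

# Naive: rebuild the full-history feature at every time-step t (O(T^2) work overall).
naive_frequency = [int(events[:t + 1].sum()) for t in range(len(events))]

# Incremental: the feature at t differs from the one at t-1 only by the single event
# between t-1 and t, so a running update reproduces it in O(T).
incremental_frequency, running = [], 0
for e in events:
    running += int(e)
    incremental_frequency.append(running)

assert naive_frequency == incremental_frequency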





#recurrent-neural-networks #rnn
The simple behavioral story which sits at the core of BTYD models – while “alive”, customers make purchases until they drop out – gives these models robust predictive power, especially on the aggregate cohort level, and over a long time horizon.


Parent (intermediate) annotation

… the monetary value) metrics together with a latent attrition process to deliver accurate predictions (Schmittlein, Morrison, & Colombo, 1987; Fader, Hardie, & Lee, 2005; Fader & Hardie, 2009). The simple behavioral story which sits at the core of BTYD models – while “alive”, customers make purchases until they drop out – gives these models robust predictive power, especially on the aggregate cohort level, and over a long time horizon. Extended variants of the original models (e.g., Zhang, Bradlow, & Small (2015), Platzer & Reutterer (2016), Reutterer, Platzer, & Schröder (2021)) improve predictive accuracy …
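The behavioral story can be made explicit with a small simulation. The sketch below is a deliberately simplified discrete-time version of the BTYD assumptions (daily Bernoulli purchase and dropout probabilities rather than the continuous-time processes of the cited models), and all parameter values are made up.

import numpy as np

rng = np.random.default_rng(0)

def simulate_customer(purchase_prob=0.05, dropout_prob=0.01, horizon=365):
    """While 'alive', the customer purchases with purchase_prob per day; each day
    they may also silently drop out for good (the latent attrition process)."""
    purchase_days = []
    for day in range(horizon):
        if rng.random() < purchase_prob:
            purchase_days.append(day)
        if rng.random() < dropout_prob:
            break                          # customer "dies"; the firm never observes this
    return purchase_days

cohort = [simulate_customer() for _ in range(1000)]
print("mean purchases per customer:", np.mean([len(p) for p in cohort]))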





#recurrent-neural-networks #rnn
Toth, Tan, Di Fabbrizio, and Datta (2017) have shown that a mixture of RNNs can approximate several complex functions simultaneously.


Parent (intermediate) annotation

… authors demonstrate the performance of several RNN architectures and benchmark them against more conventional machine learning approaches for predicting purchasing intent. In a similar context, Toth, Tan, Di Fabbrizio, and Datta (2017) have shown that a mixture of RNNs can approximate several complex functions simultaneously. More recently, Sarkar and De Bruyn (2021) demonstrate that a special RNN type can help marketing response modelers to benefit from the multitude of inter-temporal customer-firm interactions …
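One way to read "a mixture of RNNs" is as a mixture-of-experts arrangement: several recurrent experts each produce a prediction and a gating network learns how to weight them per input sequence. The Keras sketch below only illustrates that idea and is not the architecture of Toth et al. (2017); shapes and layer sizes are arbitrary.

from tensorflow.keras import layers, Model

T, F = 50, 8                               # hypothetical sequence length / feature count
inp = layers.Input(shape=(T, F))

# Two recurrent "experts", each with its own prediction head.
p1 = layers.Dense(1, activation="sigmoid")(layers.LSTM(32)(inp))
p2 = layers.Dense(1, activation="sigmoid")(layers.LSTM(32)(inp))

# A gating network assigns softmax weights to the experts for each sequence.
gate = layers.Dense(2, activation="softmax")(layers.GlobalAveragePooling1D()(inp))

# Mixture output = weighted sum of the expert predictions.
experts = layers.Concatenate()([p1, p2])   # shape (batch, 2)
out = layers.Dot(axes=1)([gate, experts])  # shape (batch, 1)

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")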





#feature-engineering #lstm #recurrent-neural-networks #rnn
The LSTM neural network would be well-suited for modeling online customer behavior across multiple websites since it can naturally capture inter-sequence and inter-temporal interactions from multiple streams of clickstream data without growing exponentially in complexity.


Parent (intermediate) annotation

Park and Fader (2004) leveraged internet clickstream data from multiple websites, such that relevant information from one website could be used to explain behavior on the other. The LSTM neural network would be well-suited for modeling online customer behavior across multiple websites since it can naturally capture inter-sequence and inter-temporal interactions from multiple streams of clickstream data without growing exponentially in complexity.
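A possible Keras sketch of that setup (an assumption about the architecture, not code from any of the cited papers): one input sequence per website, one LSTM encoder per stream, and a joint head so that information from one site can inform predictions about behavior on the other. Adding a further site adds one more branch, so the model grows linearly rather than exponentially with the number of streams.

from tensorflow.keras import layers, Model

T, F_A, F_B = 30, 6, 6                     # hypothetical timesteps and per-site feature counts

# One clickstream sequence per website (names are illustrative).
site_a = layers.Input(shape=(T, F_A), name="clicks_site_a")
site_b = layers.Input(shape=(T, F_B), name="clicks_site_b")

# Each stream gets its own LSTM encoder.
enc_a = layers.LSTM(32)(site_a)
enc_b = layers.LSTM(32)(site_b)

# Cross-site (inter-sequence) interactions are learned in the joint layers.
joint = layers.Concatenate()([enc_a, enc_b])
out = layers.Dense(1, activation="sigmoid", name="purchase_propensity")(joint)

model = Model([site_a, site_b], out)
model.compile(optimizer="adam", loss="binary_crossentropy")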





Flashcard 7560264027404

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
It is worth noting that though our study focuses on LSTM neural networks, there are other variants of the RNN as well, such as the Gated Recurrent Unit (GRU), which use internal recurrence and a gating mechanism along with the external recurrence of the RNN (Cho et al., 2014; Chung, Gulcehre, Cho, & Bengio, 2014). However, research seems to suggest that none of the existing variants of the LSTM may significantly improve on the [...] LSTM neural network.
Answer
vanilla


Parent (intermediate) annotation

… recurrence of the RNN (Cho et al., 2014; Chung, Gulcehre, Cho, & Bengio, 2014). However, research seems to suggest that none of the existing variants of the LSTM may significantly improve on the vanilla LSTM neural network.
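In Keras terms, the comparison amounts to swapping the recurrent layer type while holding everything else fixed; a minimal sketch (layer sizes are arbitrary):

from tensorflow.keras import layers, Sequential, Input

T, F = 50, 8                             # hypothetical sequence length / feature count

def make_model(recurrent_layer):
    """Identical architecture; only the recurrent cell type differs."""
    return Sequential([
        Input(shape=(T, F)),
        recurrent_layer(64),
        layers.Dense(1, activation="sigmoid"),
    ])

vanilla_lstm = make_model(layers.LSTM)   # the "vanilla" LSTM baseline
gru_variant = make_model(layers.GRU)     # GRU: fewer gates, fewer parameters

print(vanilla_lstm.count_params(), gru_variant.count_params())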








Flashcard 7560267435276

Tags
#deep-learning #keras #lstm #python #sequence
Question
A sufficiently large single hidden layer Multilayer Perceptron can be used to approximate most functions. Increasing the depth of the network provides an alternate solution that requires fewer neurons and trains [...(how?)]. Ultimately, adding depth is a type of representational optimization.
Answer
faster


Parent (intermediate) annotation

A sufficiently large single hidden layer Multilayer Perceptron can be used to approximate most functions. Increasing the depth of the network provides an alternate solution that requires fewer neurons and trains faster. Ultimately, adding depth is a type of representational optimization.
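A small Keras sketch (illustrative sizes only) of that trade-off: a deep, narrow stack reaches comparable capacity with far fewer weights than a single very wide hidden layer.

from tensorflow.keras import layers, Sequential, Input

D = 100                                  # hypothetical input dimensionality

# Wide: a single large hidden layer.
wide = Sequential([
    Input(shape=(D,)),
    layers.Dense(2048, activation="relu"),
    layers.Dense(1),
])

# Deep: several smaller hidden layers.
deep = Sequential([
    Input(shape=(D,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(1),
])

print("wide:", wide.count_params())      # ~209k parameters
print("deep:", deep.count_params())      # ~46k parameters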
