Edited, memorised or added to reading list

on 27-Jun-2022 (Mon)


#RNN #ariadne #behaviour #consumer #deep-learning #patterns #priority #recurrent-neural-networks #retail #simulation #synthetic-data
CLV models use different strategies for modelling customer behaviour. One of the most reliable is based on the recency (R), frequency (F), and monetary value (M) variables, collectively called RFM [3], [4], [5]. These variables capture aspects of a customer's behaviour and answer the following questions: "How recently did the customer purchase?", "How often do they purchase?", and "How much do they spend?" [2]. RFM variables are sufficient statistics for customer behaviour modelling and are a mainstay of the industry because of their ease of implementation in practice [6], [3].
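The three RFM questions translate directly into simple aggregations over a transaction log. A minimal sketch, using a hypothetical list of (date, amount) purchases for one customer:

```python
from datetime import date

# Hypothetical transaction log for one customer: (purchase_date, amount).
transactions = [
    (date(2022, 1, 10), 35.0),
    (date(2022, 2, 3), 20.0),
    (date(2022, 5, 21), 50.0),
]

def rfm(transactions, as_of):
    """Recency (days since last purchase), frequency (number of
    purchases), and monetary value (total spend) up to `as_of`."""
    past = [(d, a) for d, a in transactions if d <= as_of]
    recency = (as_of - max(d for d, _ in past)).days
    frequency = len(past)
    monetary = sum(a for _, a in past)
    return recency, frequency, monetary

print(rfm(transactions, date(2022, 6, 1)))  # → (11, 3, 105.0)
```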

#RNN #ariadne #behaviour #consumer #deep-learning #patterns #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Recurrent neural networks (RNNs) are feed-forward neural networks (FFNNs) modified by adding recurrent connections, which makes them capable of modelling sequential data for sequence recognition, sequence production, and time-series prediction [15]. RNNs are built from high-dimensional hidden states with non-linear dynamics.
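The recurrent connection means the hidden state at each step depends on both the current input and the previous hidden state. A minimal vanilla-RNN sketch (dimensions and the tanh non-linearity are illustrative, not taken from the paper):

```python
import numpy as np

# Vanilla RNN cell: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                                  # illustrative sizes
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))    # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))   # recurrent weights
b_h = np.zeros(n_hid)

def rnn_step(x, h_prev):
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Unroll over a sequence: the recurrent connection carries history forward.
h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # a sequence of 5 time steps
    h = rnn_step(x, h)
print(h.shape)  # → (8,)
```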

#RNN #ariadne #behaviour #consumer #deep-learning #patterns #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Customer Shopping Pattern Model

We study customer behaviour through time with equal time steps (intervals), as demonstrated in Figure 3. The time interval can be weekly, bi-weekly, monthly, etc. Since the time of first purchase differs between customers, we define a lower limit and an upper limit in time: the lower limit is the start point of our study and the upper limit is its end point. We can then define equal time intervals between the lower and upper limits, as shown in Figure 3. The shopper's purchases are identified in each time interval, and the R, F, and M variables are computed with respect to any point of interest. For example, if the point of interest is at t4, the recency is the time difference between the point of interest and the last purchase before it. The time difference can be expressed in hours, days, weeks, etc., depending on the application. The frequency is the number of purchases conducted between the lower limit and the point of interest, which is F = 3 in this example. The monetary value is the amount the customer has spent on purchases between the lower limit and the point of interest. The R, F, and M values are computed for each customer (identified by a CLN) and for all points of interest.
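The interval-based computation above can be sketched as follows. Purchases are bucketed into equal intervals indexed from the lower limit; names and the dict representation are our assumptions, not the paper's:

```python
# Sketch of interval-based R, F, M at a point of interest.
def rfm_at(purchases, point_of_interest):
    """purchases: dict mapping interval index -> spend in that interval
    (a missing index means no purchase). point_of_interest: index t."""
    past = {t: v for t, v in purchases.items() if t <= point_of_interest}
    last = max(past)                    # interval of the most recent purchase
    recency = point_of_interest - last  # in units of the chosen interval
    frequency = len(past)               # purchases up to the point of interest
    monetary = sum(past.values())       # total spend up to the point of interest
    return recency, frequency, monetary

# A customer who bought in intervals 1, 2, and 4; point of interest t = 4,
# giving F = 3 as in the example above.
print(rfm_at({1: 30.0, 2: 10.0, 4: 25.0}, 4))  # → (0, 3, 65.0)
```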

#RNN #ariadne #behaviour #consumer #deep-learning #patterns #priority #recurrent-neural-networks #retail #simulation #synthetic-data

Learning Machine Architecture

The proposed RNN model consists of one input layer, one hidden (recurrent) layer, and one output layer. The input layer is an auto-encoder which extracts features from the inputs. The CLN, R, F, and M values for each customer at each time-step t form the input sequence to the RNN model presented in Figure 4. The R, F, and M values of the next time-step t + 1 are shown to the model as the target over predefined time intervals. The time-step t can be set depending on the application: for example, weekly or bi-weekly for grocery stores, and every season for sportswear.
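A rough forward-pass sketch of the described pipeline: the (one-hot CLN + R, F, M) input is compressed into a feature vector, passed through a recurrent layer, and mapped to a predicted (R, F, M) for t + 1. All sizes, the ReLU/tanh choices, and the variable names are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cln, n_feat, n_hid = 1000, 32, 64  # hypothetical dimensions
W_enc = rng.normal(scale=0.05, size=(n_feat, n_cln + 3))  # encoding layer
W_xh = rng.normal(scale=0.05, size=(n_hid, n_feat))       # input-to-hidden
W_hh = rng.normal(scale=0.05, size=(n_hid, n_hid))        # recurrent weights
W_out = rng.normal(scale=0.05, size=(3, n_hid))           # output layer

def forward(cln_onehot, rfm_seq):
    """Predict (R, F, M) at t+1 from the (R, F, M) values at steps 1..t."""
    h = np.zeros(n_hid)
    for rfm in rfm_seq:
        x = np.concatenate([cln_onehot, rfm])
        feat = np.maximum(0.0, W_enc @ x)    # ReLU feature representation
        h = np.tanh(W_xh @ feat + W_hh @ h)  # recurrent update
    return W_out @ h                         # predicted (R, F, M) at t+1

cln = np.zeros(n_cln)
cln[125] = 1.0
pred = forward(cln, np.array([[1.0, 1.0, 30.0], [0.0, 2.0, 40.0]]))
print(pred.shape)  # → (3,)
```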


#RNN #ariadne #behaviour #consumer #deep-learning #patterns #priority #recurrent-neural-networks #retail #simulation #synthetic-data
In general, CLNs appear as large integers in transaction data. We use one-hot encoding to break the dependencies between integers. For example, if a store has 50,000 customers, with loyalty numbers starting from 100000, the one-hot encoded representation for CLN u = 100125 is u = [0, ..., 0, 1, 0, ..., 0] (1×50,000), where only element number 125 is one and the rest are zero (Figure 4). The one-hot encoded CLN and the binary representations of R_t, F_t, and M_t are fed to an auto-encoder, which represents each input vector with a feature-representation vector of fixed length. Each input vector is fully connected to the representation layer, where W_VI is the weight matrix to be optimized while training the model.
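The one-hot step in the example above is a single index lookup; a minimal sketch (function name is ours):

```python
import numpy as np

# One-hot encoding of a customer loyalty number (CLN): with 50,000
# customers and loyalty numbers starting at 100000, CLN 100125 maps to
# a 1-by-50,000 vector with a single 1 at element 125.
def one_hot_cln(cln, first_cln=100_000, n_customers=50_000):
    vec = np.zeros(n_customers)
    vec[cln - first_cln] = 1.0  # breaks ordinal dependence between integers
    return vec

u = one_hot_cln(100_125)
print(int(np.argmax(u)), u.sum())  # → 125 1.0
```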

#RNN #ariadne #behaviour #consumer #deep-learning #patterns #priority #recurrent-neural-networks #retail #simulation #synthetic-data
This paper proposes a new model for RFM prediction of customers based on recurrent neural networks (RNNs) with the rectified linear unit activation function. The model uses an auto-encoder to represent features of the input parameters (i.e., customer loyalty number, R, F, and M). The proposed model is the first of its kind in the literature and offers many opportunities for further improvement. The model can be improved by using more training data. It would be interesting to explore deeper structures in the auto-encoder and recursion levels of the model. Clumpiness is another variable which can be studied as an addition to the R, F, and M variables (i.e., RFMC). Another pathway is considering other user parameters (e.g., location, age) for automatic feature extraction and further development of recommender systems.

#recurrent-neural-networks #rnn
One of the primary goals that researchers look to achieve through customer base analysis is to leverage historical records of individual customer transactions and related context factors to forecast future behavior, and to link these forecasts with actionable characteristics of individuals, managerially significant customer sub-groups, and entire cohorts. This paper presents a new approach that helps firms leverage the automatic feature extraction capabilities of a specific type of deep learning model when applied to customer transaction histories in non-contractual business settings (i.e., when the time at which a customer becomes inactive is unobserved by the firm). We show how the proposed deep learning model improves on established models both in terms of individual-level accuracy and overall cohort-level bias. It also helps managers capture seasonal trends and other forms of purchase dynamics that are important to detect in a timely manner for the purpose of proactive customer-base management. We demonstrate the model's performance in eight empirical real-life settings which vary broadly in transaction frequency, purchase (ir)regularity, customer attrition, availability of contextual information, seasonal variance, and cohort size. We showcase the flexibility of the approach and how the model further benefits from taking into account static (e.g., socio-economic variables, demographics) and dynamic context factors (e.g., weather, holiday seasons, marketing appeals).

#recurrent-neural-networks #rnn
Anticipating future customer behavior and making individual-level predictions for a firm’s customer base is crucial to any organization that wants to manage its customer portfolio proactively. More precisely, firms following a customer-centric business approach need to know how their clientele will behave on different future time scales and levels of behavioral complexity (Gupta & Lehmann, 2005; Fader, 2020): What are they going to do in the immediate future and when do they make their next transaction with the focal company, if any? Are some of them at risk of stopping doing business with the firm? How exactly do seasonality and other time-based events influence the propensity of customers to buy?

#recurrent-neural-networks #rnn
Contributions include predictive models and techniques for customer targeting and reactivation timing (Gönül & ter Hofstede, 2006; Simester, Sun, & Tsitsiklis, 2006; Holtrop & Wieringa, 2020), market response models for firm- and/or customer-initiated marketing actions (e.g., Hanssens, Parsons, & Schultz (2003), Blattberg, Kim, & Neslin (2008), Sarkar & De Bruyn (2021)), methods for churn prediction and prevention (e.g., Ascarza (2018), Ascarza, Iyengar, & Schleicher (2016), Lemmens & Gupta (2020)), as well as a growing literature on customer valuation (e.g., McCarthy, Fader, & Hardie (2017), McCarthy & Fader (2018)) and customer prioritizing (Homburg, Droll, & Totzek, 2008). However, none of these qualifies as a (Swiss Army knife-like) general-purpose problem solver that generalizes across the described decision tasks of managing customer relationships. This article makes a first step in this direction. We propose and implement a flexible methodological framework that provides marketing managers with highly accurate forecasts of fine granularity, both in the short and in the long run. Our method also captures seasonal peaks and customer-level dynamics, and allows managers to differentiate between customer groups.

#recurrent-neural-networks #rnn
The challenge of deriving such individual-level predictions is particularly demanding in non-contractual settings (such as most retail businesses, online media consumption, and charity donations). Contrary to subscription-based or contractual settings, where customer “churn” events are directly observable, customer defection in non-contractual business settings is by definition unobserved by the firm and thus needs to be inferred indirectly from past transaction behavior (Reinartz & Kumar, 2000; Gupta et al., 2006). The specific challenge in such settings is to inform managers accurately and in a timely manner of the subtle distinction between a pending defection event (i.e., a customer stops doing business with the focal firm) and an extended period of customer inactivity, because the marketing implications of these two situations are completely different.

#has-images #recurrent-neural-networks #rnn
What would we expect from customers like the first ten individuals, 1001–1010, who started out as occasional benefactors but, through an evolving relationship with the firm, have developed a more regular transaction behavior? Will they continue this trend; will they eventually turn into the firm's premium customers? Conversely, how about the next ten individuals, 1011–1020, who have all made a number of transactions historically but have recently been on an unusually long hiatus? Is the customer-firm relationship at risk, and are these customers potential defectors? A timely response is critical in such a situation, because it is generally easier to regain a customer before their new relationship with a competitor has consolidated.

#recurrent-neural-networks #rnn
In this specific domain of customer base analysis, probabilistic approaches from the “Buy ’Till You Die” (BTYD) model family represent the gold standard, leveraging easily observable recency and frequency (RF, or RFM when also including monetary value) metrics together with a latent attrition process to deliver accurate predictions (Schmittlein, Morrison, & Colombo, 1987; Fader, Hardie, & Lee, 2005; Fader & Hardie, 2009). The simple behavioral story at the core of BTYD models – while “alive”, customers make purchases until they drop out – gives these models robust predictive power, especially on the aggregate cohort level and over a long time horizon. Extended variants of the original models (e.g., Zhang, Bradlow, & Small (2015), Platzer & Reutterer (2016), Reutterer, Platzer, & Schröder (2021)) improve predictive accuracy by incorporating more hand-crafted summary statistics of customer behavior. However, including customer covariates is cumbersome, and an approach to account for time-varying covariates has only recently been introduced by Bachmann, Meierer, and Näf (2021), at the cost of manual labeling and slower performance. Even advanced BTYD models can be too restrictive to adequately capture diverse customer behaviors in different contexts, and the derived forecasts often present the customers' future in an oversimplified way.

#recurrent-neural-networks #rnn
Other [than “Buy ’Till You Die” (BTYD)] options to capture changes between lower- and higher-frequency purchase episodes (as we observe for our customers in Fig. 1), or vice versa, are to adopt a dynamic changepoint model (Fader, Hardie, & Chun-Yao, 2004), a simulation-based model of the type presented by Rust, Kumar, and Venkatesan (2011), or to incorporate additional states besides the absorbing, inactive state of standard BTYD latent attrition models. The latter way of accounting for non-stationarity in transaction sequences can be achieved by applying more general hidden Markov models (see, e.g., Netzer, Lattin, & Srinivasan (2008), Schweidel, Bradlow, & Fader (2011), Romero, van der Lans, & Wierenga (2013)). A Bayesian non-parametric approach to flexibly model purchasing dynamics depending on calendar-time effects, inter-event timing, and customer lifetime was recently proposed by Dew and Ansari (2018). However, all such approaches come at the cost of additional model complexity, rising computational cost, and a loss in the sufficiency

#recurrent-neural-networks #rnn
In this paper, we offer marketing analysts an alternative to these models by developing a deep-learning-based approach that does not rely on any ex-ante data labelling or feature engineering, but instead automatically detects behavioral dynamics, such as seasonality or changes in inter-event timing patterns, by learning directly from the prior transaction history.

#recurrent-neural-networks #rnn
[This] enables us to simulate future transactions at a very fine-grained level and attribute them to the right customer (or any sub-group of the customer base) and calendar time without prior domain knowledge. We explore the capabilities of this novel forecasting approach to customer base analysis in detail, and benchmark the proposed model against established probabilistic models with latent attrition, as well as a non-parametric approach based on Gaussian process priors, in very diverse non-contractual retail and charity scenarios. Our model raises the bar in predictive accuracy on both the individual customer and the cohort level, automatically capturing seasonal and other temporal patterns.

#has-images #recurrent-neural-networks #rnn
Based on our initial discussion, an ideal model for customer base analysis in data-rich environments would combine robust short- and long-term forecasting capability with limited engineering requirements, low computational cost, and a direct link to managerial decision-making. Recognizing that traditional statistical forecasting models often suffer from poor efficiency as model complexity increases, and rely heavily on manual feature engineering and data labeling, Table 1 picks up these issues and compares some of the key differences between stochastic BTYD models and the deep learning approach we present in this section.

#recurrent-neural-networks #rnn
To circumvent additional feature engineering when increasing model flexibility, Salehinejad and Rahnamayan (2016) and Mena, Caigny, Coussement, Bock, and Lessmann (2019) introduced a recurrent neural network (RNN) approach to the domain of customer base analysis by modeling the evolution of RFM variables over time. However, since the focus still remains on predicting hand-engineered RFM metrics, such an approach does not fully leverage the automatic feature extraction capabilities of deep learning methods. Sheil, Rana, and Reilly (2018) take this one step further by allowing the neural network to derive its own internal representation of transaction histories. The authors demonstrate the performance of several RNN architectures and benchmark them against more conventional machine learning approaches for predicting purchasing intent. In a similar context, Toth, Tan, Di Fabbrizio, and Datta (2017) have shown that a mixture of RNNs can approximate several complex functions simultaneously. More recently, Sarkar and De Bruyn (2021) demonstrated that a special RNN type can help marketing response modelers benefit from the multitude of inter-temporal customer-firm interactions accompanying observed transaction flows when predicting the most likely next customer action. However, their approach is limited to single-point, next-step predictions; to extend such forecasts into the long run, one must re-estimate the model repeatedly for each additional future time step.


Prehospital recognition and care of neonatal congenital heart defects
What are the potential causes, assessments and treatments for a neonate with refractory hypoxemia? Apr 17, 2008, updated May 2, 2017. You are called to the emergency room of a community-based hospital 50 nautical miles away for a 3-day-old male in severe respiratory distress. His birth was a spontaneous, uncomplicated normal vaginal delivery. His mother