
MODELING CUSTOMER RELATIONSHIPS AS MARKOV CHAINS

Phillip E. Pfeifer and Robert L. Carraway
Darden School of Business
100 Darden Boulevard
Charlottesville, VA 22903

Journal of Interactive Marketing, 14(2), Spring 2000, 43-55




PFEIFERP@VIRGINIA.EDU



Please do not quote our copy without permission



Introduction

The lifetime value of a customer is an important and useful concept in interactive

marketing. Courtheaux (1986) illustrates its usefulness for a number of managerial problems—

the most obvious if not the most important being the budgeting of marketing expenditures for

customer acquisition. It can also be used to help allocate spending across media (mail versus

telephone versus television), vehicles (list A versus list B), and programs (free gift versus special

price), as well as to inform decisions with respect to retaining existing customers (see, for

example, Hughes 1997). Jackson (1996) even argues that its use helps firms to achieve a

strategic competitive advantage.




Dwyer (1989) helped to popularize the lifetime value (LTV) concept by illustrating the

calculation of LTV for both a customer retention and a customer migration situation. Customer

retention refers to situations in which customers who are not retained are considered lost for

good. In a customer retention situation, nonresponse signals the end of the firm’s relationship

with the customer. In contrast, customer migration situations are those in which nonresponse

does not necessarily signal the end of the relationship. Besides articulating this distinction

between customer retention and migration, Dwyer listed several impediments to the use of LTV.




More recently, Berger and Nasr (1998) argue for the importance of moving beyond

numerical illustrations of the calculation of LTV to consider mathematical models for LTV.

They offer five such mathematical models, couched in the Dwyer customer retention/migration

classification scheme. While Dwyer illustrates the calculation of LTV using two extensive



numerical examples, Berger and Nasr provide mathematical equations for LTV for five

situations—four involving customer retention and one involving customer migration.




Blattberg and Deighton (1996) also formulate a mathematical model for LTV, for the

purpose of helping managers decide the optimal balance between customer acquisition spending

(investments to convince prospects to become customers) and customer retention spending

(investments to convince current customers to continue purchasing from the firm). Whereas the

Berger and Nasr models apply only to customers (people or organizations who have already

purchased from the firm), the Blattberg and Deighton model applies specifically to prospects

(people or organizations who have yet to purchase from the firm). Because the Blattberg and

Deighton model uses an infinite horizon, their resulting equation for LTV is quite simple and easy

to evaluate, and involves none of the summation operators on which the Berger and Nasr finite-

horizon models rely.




This paper builds directly on the work of Berger and Nasr (1998) and Blattberg and

Deighton (1996) in that it introduces a general class of mathematical models, Markov Chain

Models, which are appropriate for modeling customer relationships and calculating LTV. The

major advantage of the Markov chain model (MCM) is its flexibility. Almost all of the situations

previously modeled are amenable to Markov Chain modeling. The MCM can handle both

customer migration and customer retention situations. It can apply either to a customer or to a

prospect. In addition, the flexibility inherent in the MCM means that it can be used in many

other situations not covered by previous models. MCMs have been used successfully in several



areas (White 1993) including marketing (Bronnenberg 1998 is a recent example). A major

purpose of this paper will be to explore their usefulness in modeling the relationship between an

individual customer and a marketing firm.


In addition to its flexibility, the MCM offers other advantages. Because it is a

probabilistic model, it explicitly accounts for the uncertainty surrounding customer relationships.

It uses the language of probability and expected value—language that allows one to talk about

the firm’s future relationship with an individual customer. As direct marketers move toward true

one-to-one marketing, we suggest that their language will also change. Rather than talking

about groups or cohorts of customers, direct marketers will talk about Jane Doe. Rather than

talking about retention rates, they will talk about the probability Jane Doe will be retained.

Rather than talking about average profits from a segment of customers, they will talk about the

expected profit from their relationship with Jane Doe. Because the MCM incorporates this

language of probability and expected value, it is ideally suited for facilitating true one-to-one

marketing.




Another advantage of the MCM is that it is supported by a well-developed theory

about how these models can be used for decision making (see, for example, Puterman 1994).

We will introduce the theory surrounding Markov decision processes, and illustrate how this

theory can help direct marketers manage and optimize their relationships with individual

customers.



The MCM also works quite nicely with the popular Recency, Frequency, Monetary

value (RFM) framework (see, for example, David Shepard Associates, Inc. 1995), which

direct marketers use to categorize customers and manage customer relationships. We will

illustrate the relationship between the MCM and the RFM framework.




This paper is organized as follows. After this introduction, we present the fundamentals

of the MCM. The section following that will illustrate the use of the MCM for a variety of

customer relationship assumptions. For these illustrations, we will choose customer situations

similar to those addressed previously in the literature. A final section will illustrate the use of

MCM and Markov decision processes to help a firm optimize a customer relationship. For this

illustration, we look at a catalog company deciding when to curtail its relationship with the

customer. This last example incorporates both recency and frequency as determinants of the

status of the firm’s relationship with the customer. The paper ends with a summary and

conclusions.




The Markov Chain Model

First we illustrate the fundamentals of Markov chain modeling. Consider the following

situation: the ABC direct marketing company is trying to acquire Jane Doe as a customer. If

successful, ABC expects to receive NC in net contribution to company profits on Jane’s initial

purchase and on each succeeding purchase. Purchases are made at most once a period, at the

end of the period. Consistent with the customer migration models of Dwyer and of Berger and



Nasr, it is possible that Jane will go several periods between purchases. Periods are of equal

length, and the firm uses a per-period discount rate d to account for the time value of money.




For each period that Jane is considered an active customer, ABC will spend money throughout

the period in efforts to remarket to Jane. Let M represent the present value of those

remarketing expenditures. Furthermore, the firm believes the probability Jane will purchase at

the end of any period is a function only of Jane’s recency, the number of periods since Jane last

purchased. (If Jane purchased at the end of last period, Jane is at recency 1 for the current

period.) Let the probability Jane will purchase at the end of any period be pr, where r is Jane

Doe’s recency. For the purposes of illustration, assume that if and when Jane reaches r = 5, the

firm will curtail all future efforts to remarket to Jane.




Thus, there are five possible states of the firm’s relationship with Jane Doe at the end of

any period, corresponding to recencies 1 through 4 and a fifth state, “non customer” or “former

customer,” which we will label r = 5. A key feature of the firm’s relationship with Jane Doe is

that the future prospects for that relationship are a function only of the current state of the

relationship (defined by Jane Doe’s recency) and not of the particular path Jane Doe took to

reach her current state. This property is called the Markov property. The Markov property is

a necessary condition for a stochastic system to be a Markov Chain. It is a property of the

Berger and Nasr models, the Dwyer models, and the Blattberg and Deighton model. Because

all these models exhibit the Markov property with constant probabilities, they can all be



represented as Markov chains. Figure 1 is a graphical representation of the MCM for the

firm’s relationship with Jane Doe.




The probabilities of moving from one state to another in a single period are called

transition probabilities. For Jane Doe, the following 5x5 transition probability matrix

summarizes the transition probabilities shown graphically in Figure 1. The last row of this matrix

reflects the assumption that if Jane is at recency 5 in any future period, she will remain at

recency 5 for the next and all future periods. In the language of Markov chains, r = 5 or

“former customer” is an absorbing state. Once Jane enters that state she remains in that state.





        p1     1 - p1   0        0        0
        p2     0        1 - p2   0        0
P   =   p3     0        0        1 - p3   0
        p4     0        0        0        1 - p4
        0      0        0        0        1




Matrix P is the one-step transition matrix. The t-step transition matrix is defined to be

the matrix of probabilities of moving from one state to another in exactly t periods. It is a well-

known property of Markov chain modeling that the t-step transition matrix is simply the matrix

product of t one-step transition matrices. Thus the (i, j) element of matrix Pt (found by

multiplying P by itself t times) is the probability Jane Doe will be at recency j at the end of

period t given that she started at recency i. In essence, Pt is a tidy way both to summarize and

to calculate the probability forecast of Jane Doe’s recency at any future time point t.
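
For readers who wish to reproduce these calculations, the t-step matrix is easy to compute with standard linear algebra software. The following minimal sketch (in Python with NumPy) builds P from a vector of recency-based purchase probabilities (the values shown are those used in the numerical example later in the paper) and raises it to the fourth power:

    import numpy as np

    def recency_transition_matrix(p):
        """One-step transition matrix for recency states 1..n plus an
        absorbing 'former customer' state; p[r - 1] is the purchase
        probability at recency r."""
        n = len(p)
        P = np.zeros((n + 1, n + 1))
        for r in range(n):
            P[r, 0] = p[r]           # purchase: return to recency 1
            P[r, r + 1] = 1 - p[r]   # nonresponse: move to recency r + 1
        P[n, n] = 1.0                # former customer is absorbing
        return P

    P = recency_transition_matrix([0.3, 0.2, 0.15, 0.05])
    P4 = np.linalg.matrix_power(P, 4)   # the t-step matrix for t = 4
    print(P4[0])   # probability forecast of Jane Doe's state four periods after a purchase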




Now that we have a probability forecast for ABC’s future relationship with Jane Doe,

we turn to the economic evaluation of that relationship. The cash flows received by the firm in

any future period will be a function of Jane Doe’s recency. R, a 5x1 column vector of rewards,

summarizes these cash flows:




        NC – M
        –M
R   =   –M
        –M
        0




If at the end of any future period t Jane Doe makes a purchase and thereby transitions to

r = 1 for the next period, the firm receives NC at time t and is committed to marketing to Jane

throughout the next period. Recalling that M represents the present value of these marketing

expenditures, the total reward to the firm at time t if Jane transitions to r = 1 is NC – M. If Jane

does not make a purchase and transitions to recencies 2, 3, or 4, the firm’s reward is –M,

reflecting its decision to continue to remarket to Jane during the coming period. However, if

Jane transitions to r = 5, the firm has decided to curtail the relationship. The corresponding

reward is zero.




If the ABC company decides to evaluate its relationship with Jane Doe using a horizon

of length T, the remaining challenge is to combine the probability forecasts reflected in Pt with

the reward structure reflected in R. If the firm is risk neutral, it will be willing to make decisions

based on expected net present value. If so, the only remaining challenge is to take Pt, R, T and




d and develop an expression for the expected present value of the firm’s relationship with Jane

Doe. The theory of the Markov decision process provides the mechanism for doing so:


VT  =  Σ (t = 0 to T) [(1 + d)^-1 P]^t R                                        (1)



where VT is the 5x1 column vector of expected present value over T periods. The elements of

VT correspond to the five possible initial states of the Jane Doe relationship. The top element of

VT, corresponding to r = 1 (a Jane Doe relationship that starts with a purchase at t = 0), will be

of particular interest. This is the expected present value of the cash flows from the firm’s

proposed relationship with Jane Doe. It is, in other words, the expected LTV for Jane Doe.

Notice that this value accounts for NC from the initial purchase at t = 0 as well as a trailing

expenditure of M at time T if Jane is an active customer at T.
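
As a minimal computational sketch of equation (1), the finite-horizon value vector can be accumulated term by term. The P and R used here are those of the numerical example that follows:

    import numpy as np

    def finite_horizon_value(P, R, d, T):
        """Equation (1): VT = sum over t = 0..T of [(1 + d)^-1 P]^t R."""
        A = P / (1.0 + d)                 # discounted one-step operator
        V = np.zeros_like(R, dtype=float)
        At = np.eye(len(R))               # A^0
        for _ in range(T + 1):
            V += At @ R
            At = At @ A
        return V

    P = np.array([[0.30, 0.70, 0.00, 0.00, 0.00],
                  [0.20, 0.00, 0.80, 0.00, 0.00],
                  [0.15, 0.00, 0.00, 0.85, 0.00],
                  [0.05, 0.00, 0.00, 0.00, 0.95],
                  [0.00, 0.00, 0.00, 0.00, 1.00]])
    R = np.array([36.0, -4.0, -4.0, -4.0, 0.0])
    print(finite_horizon_value(P, R, d=0.2, T=4))   # top element is the $50.115 reported below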




Rather than selecting some finite horizon T over which to evaluate its relationship with

Jane Doe, the firm might decide to select an infinite horizon. (Because expected cash flows

from a customer relationship usually become quite small at large values of t, the numerical

differences between selecting a long but finite horizon and an infinite horizon are usually

negligible. The infinite horizon approach has the advantage of being computationally simpler.)

For an infinite horizon, it can be shown that





V  =  lim (T→∞) VT  =  {I – (1 + d)^-1 P}^-1 R                                  (2)

where I is the identity matrix.

Let us illustrate the use of (1) and (2) using a numerical example. Suppose that

for Jane Doe, d = 0.2, NC = $40, and M = $4. The numerical values for R turn out to be



$36
$(4)
R = $(4)
$(4)
$0



Suppose further that p1 = 0.3, p2 = 0.2, p3 = 0.15, and p4 = 0.05. The one, two, three, and

four step transition matrices are then






        0.3      0.7      0        0        0
        0.2      0        0.8      0        0
P   =   0.15     0        0        0.85     0
        0.05     0        0        0        0.95
        0        0        0        0        1

        0.2300   0.2100   0.5600   0        0
        0.1800   0.1400   0        0.6800   0
P2  =   0.0875   0.1050   0        0        0.8075
        0.0150   0.0350   0        0        0.9500
        0        0        0        0        1

        0.1950   0.1610   0.1680   0.4760   0
        0.1160   0.1260   0.1120   0        0.6460
P3  =   0.0473   0.0613   0.0840   0        0.8075
        0.0115   0.0105   0.0280   0        0.9500
        0        0        0        0        1

        0.1397   0.1365   0.1288   0.1428   0.4522
        0.0768   0.0812   0.1008   0.0952   0.6460
P4  =   0.0390   0.0331   0.0490   0.0714   0.8075
        0.0098   0.0081   0.0084   0.0238   0.9500
        0        0        0        0        1



The upper row of P4 tells us that if the firm successfully initiates a relationship with Jane Doe at t

= 0, there is a 0.1397 probability Jane will make a purchase at t = 4, a 0.1365 probability Jane

will reach recency 2 at t = 4, . . . , and a 0.4522 probability the firm will curtail its relationship

with Jane at the end of period 4 because she did not make a purchase in any of the four

preceding periods.




Substituting these transition matrices, R as given above, and d = 0.2 into (1) gives




$50.115
$4.220
V4 = $0.592
$(1.980)
$0



which represents the expected present value of the Jane Doe relationship over a four period

horizon. The expected LTV of the proposed relationship with Jane Doe is $50.11. This value

consists of $40 from the initial purchase and $10.11 of expected present value from future cash

flows.




If and when Jane reaches recency 2, her expected customer lifetime value decreases to

$4.22. Notice that $4.22 is the present value now of a Jane Doe customer relationship that

begins with Jane having just transitioned to recency 2. (This is a Jane Doe who purchased one




period ago but failed to purchase in this period.) Similarly, if Jane transitions to recency 3, the

firm’s relationship with Jane has an expected present value of $0.59, while if Jane transitions to

recency 4, the expected present value is a negative $1.98.




This negative expected value for a recency 4 Jane Doe means that the ABC firm loses

money in its efforts to remarket to a recency 4 Jane Doe. Perhaps this is because our analysis

used the relatively short time horizon of T = 4?




A simple inversion of the 5x5 matrix required by (2) allows us to calculate V, the

expected net present value of the Jane Doe relationship for an infinite horizon:




$52.320
$5.554
V = $1.251
$(1.820)
$0



The differences in the results reflect the expected present value of cash flows after t = 4. Using

an infinite horizon, the expected LTV for Jane Doe increases to $52.32.
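
Computationally, (2) is a single linear solve. A minimal sketch using the Jane Doe inputs:

    import numpy as np

    d = 0.2
    P = np.array([[0.30, 0.70, 0.00, 0.00, 0.00],
                  [0.20, 0.00, 0.80, 0.00, 0.00],
                  [0.15, 0.00, 0.00, 0.85, 0.00],
                  [0.05, 0.00, 0.00, 0.00, 0.95],
                  [0.00, 0.00, 0.00, 0.00, 1.00]])
    R = np.array([36.0, -4.0, -4.0, -4.0, 0.0])

    # Equation (2): V = {I - (1 + d)^-1 P}^-1 R, computed as a linear solve
    V = np.linalg.solve(np.eye(5) - P / (1.0 + d), R)
    print(np.round(V, 3))   # [52.320, 5.554, 1.251, -1.820, 0] as reported above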




While all the expected present values have improved under the longer horizon, the

expected present value is still negative at recency 4. Given the economic and probabilistic

assumptions of the model, the firm would do better if it curtailed its relationship with Jane Doe at

recency 4 rather than recency 5.





It is a simple matter to modify the Markov decision model to evaluate this proposed

change in policy. One way would be to reformulate the model using four rather than five states.

A quicker short cut is to modify the existing five-state model by setting p4 equal to zero and the

fourth element of R equal to zero. These two changes reflect the fact that under the new policy,

no money will be spent on a recency 4 Jane Doe and she will automatically transition to recency

5. Making these changes and recalculating V gives

$53.149
$6.621
V = $2.644
$0
$0



As expected, under the new policy the expected present value of a recency 4 Jane Doe is zero.

It is also apparent that because the new policy more effectively deals with Jane Doe if and when

she reaches recency 4, the change improves the value of the entire relationship. The new

expected LTV for the Jane Doe relationship is $53.15. The improved policy has added $0.83

to the expected LTV.
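
In code, the shortcut amounts to two array edits before re-solving (2); a sketch using the same inputs as before:

    import numpy as np

    d = 0.2
    P = np.array([[0.30, 0.70, 0.00, 0.00, 0.00],
                  [0.20, 0.00, 0.80, 0.00, 0.00],
                  [0.15, 0.00, 0.00, 0.85, 0.00],
                  [0.05, 0.00, 0.00, 0.00, 0.95],
                  [0.00, 0.00, 0.00, 0.00, 1.00]])
    R = np.array([36.0, -4.0, -4.0, -4.0, 0.0])

    # New policy: no remarketing at recency 4, so a recency 4 Jane Doe moves to the
    # absorbing state for certain and the recency 4 reward becomes zero
    P[3] = [0.0, 0.0, 0.0, 0.0, 1.0]
    R[3] = 0.0

    V = np.linalg.solve(np.eye(5) - P / (1.0 + d), R)
    print(np.round(V, 3))   # [53.149, 6.621, 2.644, 0, 0] as reported above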




This simple example illustrates the notion that firms can use MCMs not only to evaluate

proposed customer relationships, but also to help manage and improve those relationships.

Markov chain modeling assists not only with prospecting decisions (should we try to initiate a

relationship with Jane Doe?) but also with retention and termination decisions. Finally, notice



that any improvements to a customer relationship are immediately incorporated into the LTV

calculation through the MCM.




Markov Chain Models of Customer Relationships

The ABC firm’s potential relationship with Jane Doe was our first example of a

customer relationship amenable to Markov chain modeling. The Jane Doe relationship might be

characterized as a customer migration situation with purchase probabilities dependent on

recency. This is a case 5 situation, as defined by Berger and Nasr.




Now suppose that, in addition to purchase probabilities, remarketing expenditures and

net contribution also depend on recency. In other words, suppose the amount the firm spends

on remarketing is adjusted based on recency. In addition, suppose the expected net

contribution from Jane Doe purchases is thought to depend on the recency state from which the

purchase is made. Purchase probabilities, net contributions, and remarketing expenses varying

with recency are characteristics of the situation considered by Dwyer in his customer migration

example.




To model this situation using an MCM will require an expanded state space. In order

to account for net contributions that depend on Jane Doe’s recency at time of purchase, we will

need to separate the recency 1 state into four new states. These four new states will be labeled

11, 12, 13, and 14 corresponding to Jane Doe purchasing at the end of a period in which she was



at recency 1, 2, 3, and 4 respectively. The transition matrix and reward vector for this more

complicated customer migration situation are given below:




State    P (columns ordered 11, 12, 13, 14, 2, 3, 4, 5)                                   R

11       p11    0      0      0      1 - p11   0         0         0         NC11 - M11
12       p12    0      0      0      1 - p12   0         0         0         NC12 - M12
13       p13    0      0      0      1 - p13   0         0         0         NC13 - M13
14       p14    0      0      0      1 - p14   0         0         0         NC14 - M14
2        0      p2     0      0      0         1 - p2    0         0         -M2
3        0      0      p3     0      0         0         1 - p3    0         -M3
4        0      0      0      p4     0         0         0         1 - p4    -M4
5        0      0      0      0      0         0         0         1         0





For example, notice that if Jane Doe purchases at recency 2, she transitions to state 12,

where the firm receives NC12 in net contribution and spends M12 for remarketing. Purchases at

recency 3, however, send Jane to state 13, where the firm receives NC13 and spends M13. Just

as net contribution and remarketing expenditures depend on the recency at which Jane

purchased, so too do her subsequent purchase probabilities p11,p12,p13, and p14. Breaking out

the original recency 1 state into four substates has allowed us to apply the MCM to a situation

where purchase amounts, marketing expenditures, and repurchase probabilities depend on

customer recency at the time of purchase.
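
To make the construction concrete, the following sketch assembles the eight-state transition matrix and reward vector in the column order 11, 12, 13, 14, 2, 3, 4, 5. The parameter values are placeholders chosen purely for illustration, not values from the paper:

    import numpy as np

    def migration_model(p1, p_later, nc1, m1, m_later):
        """Eight-state migration model with recency-of-purchase substates.
        p1[x] is the repurchase probability from substate 1_(x+1); p_later[k]
        is the purchase probability at recency k + 2; nc1 and m1 hold the net
        contributions and remarketing costs for substates 11..14; m_later
        holds the remarketing costs at recencies 2, 3, and 4."""
        P = np.zeros((8, 8))
        R = np.zeros(8)
        for x in range(4):                    # substates 11, 12, 13, 14
            P[x, 0] = p1[x]                   # purchase at recency 1 -> substate 11
            P[x, 4] = 1 - p1[x]               # nonresponse -> recency 2
            R[x] = nc1[x] - m1[x]
        for k in range(3):                    # recency states 2, 3, 4
            P[4 + k, 1 + k] = p_later[k]      # purchase at recency k + 2 -> substate 1_(k+2)
            P[4 + k, 5 + k] = 1 - p_later[k]  # nonresponse -> next recency (or former customer)
            R[4 + k] = -m_later[k]
        P[7, 7] = 1.0                         # former customer is absorbing
        return P, R

    # Placeholder numbers purely for illustration
    P, R = migration_model(p1=[0.35, 0.30, 0.25, 0.20], p_later=[0.20, 0.15, 0.05],
                           nc1=[45, 40, 35, 30], m1=[4, 4, 4, 4], m_later=[4, 4, 4])
    V = np.linalg.solve(np.eye(8) - P / 1.2, R)   # equation (2) with d = 0.2
    print(np.round(V, 2))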




So far we have considered only customer migration situations. The MCM can also be

used for customer retention situations. The simplest of such situations is one in which a

customer has a constant probability p of responding in each period, and any nonresponse

signals the end of the customer relationship. The MCM for this situation requires only two

states: customer and former customer. The P and R matrices are given below:


State              P (columns: Customer, Former Customer)        R

Customer           p          1 - p                              NC - M
Former Customer    0          1                                  0


This is a very simple model—so simple that the advantages of Markov chain modeling

are negligible. Berger and Nasr consider this simplest customer-retention situation as case 1.

Notice that because the MCM applies to a period of arbitrary length (discount rate d is defined

as the per-period discount rate), this MCM also applies to the situation considered by Berger

and Nasr as case 2a.

It is a relatively simple matter to expand the MCM to handle the firm’s relationship with

a prospect. Suppose an expenditure A gives the firm probability pa of acquiring the prospect.

The MCM model for this prospecting/customer retention situation requires three states:

prospect, customer, and former customer. The transition matrix and reward vector are as

follows:


State              P (columns: Prospect, Customer, Former Customer)        R

Prospect           0          pa         1 - pa                            -A
Customer           0          p          1 - p                             NC - M
Former Customer    0          0          1                                 0


Once again, this model is simple enough that the benefits of Markov Chain modeling are limited.

In fact, this model is simple enough that equation (2) can be solved algebraically to give the

following expected values:





VProspect           =    -A + (1 + d)^-1 pa VCustomer
VCustomer           =    (NC - M) / [1 - p/(1 + d)]
VFormer Customer    =    0


This example illustrates that while it is possible to extend the MCM to include

prospecting, there is usually little reason to do so. The transition from prospect to customer is

usually simple enough that it can be treated algebraically, without recourse to the MCM.
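
A short sketch makes the agreement between the three-state MCM and the algebraic expressions easy to check; the parameter values are illustrative placeholders:

    import numpy as np

    # Illustrative parameter values, assumed for this sketch only
    A, pa, p, NC, M, d = 5.0, 0.25, 0.6, 40.0, 4.0, 0.2

    # Three-state MCM: prospect, customer, former customer
    P = np.array([[0.0, pa,  1 - pa],
                  [0.0, p,   1 - p ],
                  [0.0, 0.0, 1.0   ]])
    R = np.array([-A, NC - M, 0.0])
    V = np.linalg.solve(np.eye(3) - P / (1 + d), R)   # equation (2)

    # Closed-form expressions given above
    V_customer = (NC - M) / (1 - p / (1 + d))
    V_prospect = -A + pa * V_customer / (1 + d)
    print(np.round(V, 3), round(V_prospect, 3), round(V_customer, 3))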




Next, consider a customer retention situation where purchase amounts, repurchase

probabilities, and remarketing expenditures all depend on frequency—the number of times the

customer has purchased from the firm. For the purposes of this illustration, suppose these

variables change for frequencies 1, 2, and 3, but remain constant for frequencies of 4 or more.

The MCM of this situation requires five states: frequencies 1, 2, 3, and 4 together with “former




customer.” The frequency 4 state might be better labeled “frequency 4 or greater.” The

transition matrix and reward vector are as follows:








State              P (columns: freq. 1, freq. 2, freq. 3, freq. 4, Former Customer)        R

frequency 1        0      p1     0      0      1 - p1                                      NC1 - M1
frequency 2        0      0      p2     0      1 - p2                                      NC2 - M2
frequency 3        0      0      0      p3     1 - p3                                      NC3 - M3
frequency 4        0      0      0      p4     1 - p4                                      NC4 - M4
Former Customer    0      0      0      0      1                                           0


Purchase probabilities, net contributions, and remarketing expenses varying with frequency in a

customer retention situation are characteristics of the situation considered by Dwyer in his

customer retention example. Here we have shown the MCM for this same situation.
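
A sketch of this five-state frequency-based model, again with placeholder parameter values chosen only for illustration:

    import numpy as np

    def frequency_retention_model(p, nc, m):
        """Five-state retention model: frequencies 1-4 plus 'former customer'.
        p[f], nc[f], and m[f] apply to frequency f + 1; frequency 4 is read
        as 'frequency 4 or greater'."""
        P = np.zeros((5, 5))
        R = np.zeros(5)
        for f in range(4):
            nxt = min(f + 1, 3)        # a frequency 4 repurchase stays at frequency 4
            P[f, nxt] = p[f]           # response: move up one frequency
            P[f, 4] = 1 - p[f]         # nonresponse: relationship ends
            R[f] = nc[f] - m[f]
        P[4, 4] = 1.0                  # former customer is absorbing
        return P, R

    # Placeholder values purely for illustration
    P, R = frequency_retention_model(p=[0.30, 0.40, 0.50, 0.55],
                                     nc=[40, 45, 50, 55], m=[4, 4, 4, 4])
    V = np.linalg.solve(np.eye(5) - P / 1.2, R)   # equation (2) with d = 0.2
    print(np.round(V, 2))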




As our final example of MCM of a customer relationship, consider a customer migration

situation in which the firm believes that purchase probabilities, net contribution, and remarketing

expenditures all depend on the customer’s recency, frequency, and monetary value of past

purchases. This is the familiar RFM method for categorizing customers—applied here to

characterize the status of the firm’s relationship with the customer. Let r be the customer’s




recency, let f be the customer’s frequency, and let m be an index corresponding to categories

associated with the monetary value of the customer’s past purchases.




The MCM model for this situation will use states defined by (r, f, m) where each of

these elements are integers with some known upper bound. The upper bound for r might be the

recency at which the firm ends the relationship. The highest frequency value might be a

“frequency 4 or greater” type of category. Finally, the upper bound for m will simply be the

number of monetary value categories the firm has created. The end result is some finite number

of states defined using (r, f, m) that characterize the status of the firm’s relationship with the

customer.
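
In an implementation, each (r, f, m) combination simply becomes one row and column of the transition matrix. A minimal indexing sketch, assuming illustrative upper bounds of 25, 5, and 3 for r, f, and m:

    def state_index(r, f, m, f_max=5, m_max=3):
        """Map an (r, f, m) label to a matrix row/column index (0-based)."""
        return ((r - 1) * f_max + (f - 1)) * m_max + (m - 1)

    def state_label(i, f_max=5, m_max=3):
        """Inverse mapping: recover (r, f, m) from a state index."""
        r, rem = divmod(i, f_max * m_max)
        f, m = divmod(rem, m_max)
        return r + 1, f + 1, m + 1

    # With r = 1..25, f = 1..5, m = 1..3 there are 25 * 5 * 3 = 375 states
    assert state_label(state_index(3, 2, 1)) == (3, 2, 1)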




Because the number of states in this general RFM-based MCM model can be quite

large, we will not attempt to portray the transition probability matrix and reward vector. Instead

we offer Figure 2. Figure 2 focuses on state (r, f, m) and shows all possible transitions to and

from that state. The only path to state (r, f, m) is a nonresponse from state (r - 1, f, m).

Similarly, a nonresponse from state (r, f, m) transitions the customer to state (r + 1, f, m). A

response or purchase from state (r, f, m) transitions the customer to a recency 1 state, to

frequency f + 1 (assuming the next higher frequency is not above the upper bound for

frequency), and to one of several possible monetary value states. Figure 2 uses m - 1, m, and

m + 1 as three possibilities. The firm receives a reward which depends on the monetary value

category to which the customer transitions—higher net contribution, for example, if the customer

moves to a higher monetary value category.




A challenge in constructing an RFM-based MCM will be to define monetary value

categories in such a way that the Markov property will hold. Recall that the Markov property

requires that future prospects for the customer relationship depend only on the current state of

the relationship. By their very nature, monetary value categories based on moving averages will

be non-Markovian. Instead of moving averages, monetary value categories based on the single

last purchase amount or the cumulative total of all previous purchase amounts will be better

suited for Markov chain modeling.




Example Use of Markov Decision Processes to Manage a Customer Relationship

Up to this point we have attempted to illustrate how the MCM can be used to model a

firm’s relationship with a customer in a variety of situations. Presumably, the most important use

of these models is in calculating LTV as a function of some small set of meaningful input

parameters. LTV is perhaps most commonly used to help firms decide how much to spend in

their attempts to initiate relationships.




In this section, we offer an extensive numerical example that will concentrate on the

opposite end of the firm’s relationship with the customer: the firm’s decision to terminate the

relationship. This example will demonstrate the use of the theory of Markov decision processes

to find the best decision strategy in a reasonably complicated situation.



A large catalog company knows that a customer’s response probabilities to its thrice-

yearly catalog depend on both the customer’s recency (how many periods/catalogs since the

customer’s last purchase) and frequency (how many times the customer has purchased from the

firm). These purchase probabilities for a typical customer are given in Table 1.




While the MCM can accommodate purchase amounts and marketing expenses that

vary with recency and frequency, for the purposes of this example we will keep things simple.

Assume the customer’s purchases bring the firm $60 in expected net contribution regardless of

the customer’s recency or frequency at the time of purchase. For each period the customer is

active, the firm spends $1 mid-period in marketing—again regardless of the customer’s recency

or frequency. The firm uses a discount rate of 0.03 per period to account for the time value of

money.




Heretofore the firm has curtailed its relationship with the customer after recency 24.

This is why purchase probabilities for recencies greater than 24 are not available. In light of

projected increases in paper and postage costs, the firm is intent on reexamining this policy.

Perhaps the firm should be more aggressive in paring its mailing list of dormant customers? Or

perhaps the firm’s policy should depend on both recency and frequency—cutting off low

frequency customers sooner than higher frequency customers? It seems fairly obvious that such

a policy will do better—but how much better? Will the increase in profitability justify the costs

associated with implementing the change?



The Markov chain model of this catalog firm’s relationship with a customer will require

121 states. The first 120 states will be the 24x5 combination of 24 possible recencies (r = 1 to

24) with 5 possible frequencies (f = 1 to 5). Notice that frequencies greater than or equal to 5

are all lumped together as frequency 5. The final state is the familiar “former customer” state

that we will label recency 25 and frequency 1. The probabilities in Table 1 will be labeled pr,f,

where (r, f) refers to the recency and frequency of the customer.




The construction of the 121x121 transition matrix follows directly from the problem

description. The customer transitions to state (25, 1) from states (24, 1) through (24, 5) with

probabilities 1 - p24,1 through 1 - p24,5 respectively. The customer transitions to state (1, 5)

from states (1, 4) through (24, 4) with probabilities p1,4 through p24,4 respectively, and from

states (1, 5) through (24, 5) with probabilities p1,5 through p24,5 respectively. For f = 2, 3, or 4

the customer transitions to state (1, f) from states (1, f - 1) through (24, f - 1) with probabilities

p1,f-1 through p24,f-1 respectively. Finally, for recencies 2 through 24 inclusive, the customer

transitions to state (r, f) from state (r - 1, f) with probability 1 - pr-1, f.




The 121-component reward vector is defined as follows:





           NC – M      r = 1
Rr,f  =    –M          r = 2, 3, …, 24
           0           r = 25


where M is equal to the present value of remarketing expenditures or $1/(1.03)^(1/2). Notice the

very simple structure of these cash flows. As mentioned earlier, useful extensions to this model




would be to consider net contributions that vary with frequency and marketing expenditures that

vary with both recency and frequency. These extensions are easy to implement because they

involve changes to the reward vector only.




We are now in a position to evaluate the firm’s current policy of curtailing the customer

relationship after recency 24. Using equation (2) and the inversion of a 121x121 matrix allows

us to calculate the 121x1 column vector V. The first element of V is calculated to be $89.264.

The expected present value of the catalog firm’s relationship with the customer is $89.264.

Included in this number is the $60 net contribution from the initial purchase—so that $29.264 of

the present value of the relationship comes from cash flows after the initial purchase.
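
The 121-state model is straightforward to assemble from Table 1. The sketch below uses the rounded probabilities as printed in Table 1, so its answer may differ slightly in the final digit from the $89.264 reported here:

    import numpy as np

    # Repurchase probabilities from Table 1 (rows: recency 1-24; columns: frequency 1-5)
    p_table = np.array([
        [0.103, 0.121, 0.143, 0.151, 0.163], [0.076, 0.090, 0.106, 0.112, 0.121],
        [0.059, 0.069, 0.081, 0.086, 0.093], [0.045, 0.053, 0.062, 0.066, 0.071],
        [0.038, 0.045, 0.053, 0.056, 0.061], [0.035, 0.041, 0.049, 0.051, 0.056],
        [0.030, 0.035, 0.041, 0.043, 0.047], [0.027, 0.032, 0.038, 0.040, 0.043],
        [0.025, 0.029, 0.035, 0.037, 0.040], [0.021, 0.025, 0.030, 0.031, 0.034],
        [0.021, 0.024, 0.028, 0.030, 0.033], [0.020, 0.024, 0.028, 0.030, 0.032],
        [0.017, 0.020, 0.024, 0.025, 0.027], [0.017, 0.020, 0.024, 0.025, 0.027],
        [0.016, 0.019, 0.022, 0.024, 0.026], [0.015, 0.018, 0.021, 0.022, 0.024],
        [0.014, 0.017, 0.020, 0.021, 0.022], [0.013, 0.016, 0.018, 0.019, 0.021],
        [0.013, 0.015, 0.018, 0.019, 0.020], [0.012, 0.014, 0.017, 0.018, 0.019],
        [0.012, 0.014, 0.016, 0.017, 0.018], [0.011, 0.013, 0.015, 0.016, 0.017],
        [0.011, 0.012, 0.015, 0.015, 0.017], [0.010, 0.012, 0.014, 0.015, 0.016]])

    def catalog_value(p_table, NC=60.0, M=1.0 / 1.03 ** 0.5, d=0.03):
        """States (r, f) for r = 1..24, f = 1..5 plus the absorbing state (25, 1);
        state index = (r - 1) * 5 + (f - 1), absorbing state index = 120."""
        n, absorbing = 121, 120
        P, R = np.zeros((n, n)), np.zeros(n)
        for r in range(1, 25):
            for f in range(1, 6):
                i = (r - 1) * 5 + (f - 1)
                p = p_table[r - 1, f - 1]
                P[i, min(f + 1, 5) - 1] = p                     # purchase -> (1, f + 1), capped at 5
                P[i, absorbing if r == 24 else i + 5] = 1 - p   # nonresponse -> (r + 1, f)
                R[i] = (NC - M) if r == 1 else -M
        P[absorbing, absorbing] = 1.0
        return np.linalg.solve(np.eye(n) - P / (1 + d), R)      # equation (2)

    V = catalog_value(p_table)
    print(round(V[0], 3))   # expected LTV of state (1, 1); the paper reports $89.264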




Examining the calculated expected values for the remaining possible states of the firm’s

relationship tells us that the policy of curtailing the relationship after recency 24 is a fairly good

one. Only one state, state (24, 1) has a negative expected value. The firm could do a little bit

better if it dropped frequency 1 customers after recency 23, rather than 24.




To illustrate in detail how the Markov decision model can be used to help the firm

manage its relationship with the customer, suppose that paper and postage increases raise the

cost of remarketing from $1 to $2 per period, with no other changes. Suppose further that the

firm is free to change its mailing policy as it sees fit with no effect on the customer’s repurchase

probabilities. (While this might be true in the short run, it would not be true in the long run. If

customers learn to expect a certain number of contacts and plan or allocate their purchases



accordingly, cutting back on the number of contacts might increase the purchase probabilities at

lower recencies.)




The firm’s challenge is to find the optimal contact policy. We saw earlier that the

optimal policy at M = $1 was to curtail mailings after recency 23 for a frequency 1 customer

and after recency 24 for frequencies 2, 3, 4, and 5. Let us refer to this policy as (23, 24, 24,

24, 24), reflecting the recency cutoffs for each of the five possible frequencies. Our challenge is

to find the best contact policy (the best string of five recency cutoff values) now that mid-period

marketing expenses are $2.




The problem described is an example of a Markov decision problem, and there is a

large body of knowledge about how to solve these kinds of problems. We will illustrate one

popular approach called the policy improvement algorithm (see Hillier and Lieberman 1986,

715).




The policy improvement algorithm begins by evaluating (calculating V) for some

arbitrary policy. As our initial arbitrary policy we use the firm’s current policy of curtailing

contact after recency 24: (24, 24, 24, 24, 24). Using (2) to evaluate V for this policy gives the

results in Table 2. The expected customer lifetime value using this policy is $69.470.

Subtracting the initial $60 contribution, we see that the higher remarketing cost has cut the value

of the firm’s future relationship with the customer rather drastically—from $29.264 down to

$9.470 if the firm sticks to its initial (24, 24, 24, 24, 24) policy. We can see from Table 2,



however, that there are many states in which the value of the firm’s relationship is negative.

Improvement will be possible.




The next step in the policy improvement algorithm is to revisit the firm’s decision at each

and every state and replace the current decision with a best decision based on the calculated

values. In our example, this means contacting only those customers at states with positive

values and not contacting the rest. The improved policy is therefore (3, 6, 9, 12, 14). Thus

ends the first iteration in the policy improvement algorithm. Policy (3, 6, 9, 12, 14) becomes the

new candidate policy.




To evaluate policy (3, 6, 9, 12, 14), we modified the Markov decision model by

changing the purchase probabilities to zero for any state outside the firm’s contact policy. In

addition, we set the reward for transitioning to any of these states at zero. In effect, if the

customer transitions to a state outside the firm’s contact policy she or he will transition with

probability 1 and with reward zero to state (25, 1). The results of evaluating policy (3, 6, 9, 12,

14) are given in Table 3. The expected customer lifetime value for this strategy has improved to

$71.487.




Turning to the policy improvement step, we notice that every state contacted in the

current policy has positive expected value. So there is no way to improve the current policy by

making further cutbacks. What we need to do now is consider contacting some of the states

we no longer contact. For example, suppose we contacted state (4, 1). Such a contact would



cost us $2 in the middle of the period but would bring us a 0.0450 probability of transitioning to

state (1, 2) and a 0.9550 probability of transitioning to state (5, 1) at the end of the period. The

expected present value of doing so is

-(1.03)^(-1/2)($2) + (1.03)^(-1)[0.0450($80.085) + 0.9550($0)] = $1.528.

You will notice that we have evaluated the proposed change in policy using the values calculated

for the current policy. While this evaluation will not hold if we make more than one change to

the current policy, it is as prescribed by the policy improvement algorithm. After performing

similar calculations for all states not contacted under the current policy, we find several that

project a profitable contact. The improved policy that results is (8, 12, 15, 16, 17).
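
The check just illustrated for state (4, 1) generalizes to every state outside the current contact policy. A sketch of this improvement pass, assuming V is the 121-element value vector calculated for the current policy and p_table holds the Table 1 probabilities, both indexed as in the earlier sketch:

    def improve_policy(cutoffs, V, p_table, M_nominal=2.0, d=0.03):
        """One policy-improvement pass: for every state (r, f) the current policy
        does not contact, test whether a contact projects a positive expected
        value, using the values V calculated for the current policy."""
        new_cutoffs = list(cutoffs)
        M = M_nominal / (1 + d) ** 0.5             # present value of the mid-period contact cost
        for f in range(1, 6):
            best = cutoffs[f - 1]
            for r in range(cutoffs[f - 1] + 1, 25):          # states not contacted today
                p = p_table[r - 1, f - 1]
                v_buy = V[min(f + 1, 5) - 1]                 # value of state (1, f + 1)
                v_no = V[r * 5 + (f - 1)] if r < 24 else 0.0 # value of state (r + 1, f)
                gain = -M + (p * v_buy + (1 - p) * v_no) / (1 + d)
                if gain > 0:                       # contacting this state projects a profit
                    best = r
            new_cutoffs[f - 1] = best
        return new_cutoffs

    # For example, under policy (3, 6, 9, 12, 14), state (4, 1) uses V[(1-1)*5 + 1] = $80.085
    # and projects a gain of about $1.528; checking all non-contacted states this way
    # yields the improved policy (8, 12, 15, 16, 17).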




The expected customer lifetime value for the (8, 12, 15, 16, 17) policy is calculated to

be $74.519. All contacted states have positive expected values and only one of the states not

contacted, state (9, 1), projects a small positive profit if contacted.




Our final stage of the policy improvement algorithm evaluates policy (9, 12, 15, 16, 17)

to find an expected lifetime value of $74.523, all contacted states with positive values, and no

other states projecting a positive profit if contacted. The policy improvement algorithm is

complete. We have found the optimal policy.




The net result of the increase in marketing costs from $1 to $2 has been a change in

optimal policy from (23, 24, 24, 24, 24) to (9, 12, 15, 16, 17) and a decrease in expected

customer lifetime value from $89.267 to $74.523. Subtracting the initial $60 contribution



shows us that the expected future value of the firm’s relationship with the customer decreases

from $29.267 to $14.523.







Summary and Conclusions

This paper introduced a general class of mathematical models, Markov Chain Models,

which are appropriate for modeling customer relationships and calculating LTVs. A major

advantage of this class of models is its flexibility. This flexibility was demonstrated by showing

how the MCM could handle the wide variety of customer relationship situations previously

modeled in the literature. The MCM is particularly useful in modeling complicated customer-

relationship situations for which algebraic solutions will not be possible.




A second advantage of the MCM is that it is a probabilistic model. It incorporates the

language of probability and expected value—language that will help marketers talk about

relationships with individual customers.




The MCM is also supported by a well-developed theory about how these models can

be used for decision making. The use of this theory was demonstrated by a comprehensive

numerical example in which a catalog company adjusted its contact policy in response to

increased mailing costs.



References

Berger, P. D., and N. I. Nasr. 1998. Customer Lifetime Value: Marketing Models and
Applications. Journal of Interactive Marketing 12(1): 17-29.

Blattberg, R. C. 1998. Managing the Firm Using Lifetime-Customer Value. Chain Store
Age (January): 46-49.

Blattberg, R. C. 1987. Research Opportunities in Direct Marketing. Journal of Direct
Marketing 1(1): 7-14.

Blattberg, R., and J. Deighton. 1996. Manage Marketing by the Customer Equity Test.
Harvard Business Review (July-August): 136-144.

Bronnenberg, B. J. 1998. Advertising Frequency Decisions in a Discrete Markov Process
Under a Budget Constraint. Journal of Marketing Research 35(3): 399-406.

Courtheaux, R. J. 1986. Database Marketing: Developing a Profitable Mailing Plan.
Catalog Age (June-July).

David Shepard Associates, Inc. 1995. The New Direct Marketing. 2nd ed. Burr Ridge, IL:
Irwin.

Dwyer, F. R. 1989. Customer Lifetime Valuation to Support Marketing Decision Making.
Journal of Direct Marketing 3(4): 8-15, reprinted in 11(4): 6-13.

Hillier, F. S., and G. J. Lieberman. 1986. Introduction to Operations Research. 4th ed.
Oakland, CA: Holden-Day.

Hughes, A. M. 1997. Customer Retention: Integrating Lifetime Value into Marketing
Strategies. Journal of Database Marketing 5(2): 171-178.

Jackson, B. 1985. Winning and Keeping Industrial Customers. Lexington, MA:
Lexington Books.

Jackson, D. R. 1996. Achieving Strategic Competitive Advantage through the
Application of the Long-term Customer Value Concept. Journal of Database
Marketing 4(2): 174-186.

Puterman, M. J. 1994. Markov Decision Processes: Discrete Stochastic Dynamic
Programming. New York: John Wiley & Sons.

White, D. J. 1993. A Survey of Applications of Markov Decision Processes. The Journal
of the Operational Research Society 44(11): 1073-1096.




Table 1. Catalog Customer Repurchase Probabilities



                               Frequency
Recency      1        2        3        4        ≥5
1            0.103    0.121    0.143    0.151    0.163
2            0.076    0.090    0.106    0.112    0.121
3            0.059    0.069    0.081    0.086    0.093
4            0.045    0.053    0.062    0.066    0.071
5            0.038    0.045    0.053    0.056    0.061
6            0.035    0.041    0.049    0.051    0.056
7            0.030    0.035    0.041    0.043    0.047
8            0.027    0.032    0.038    0.040    0.043
9            0.025    0.029    0.035    0.037    0.040
10           0.021    0.025    0.030    0.031    0.034
11           0.021    0.024    0.028    0.030    0.033
12           0.020    0.024    0.028    0.030    0.032
13           0.017    0.020    0.024    0.025    0.027
14           0.017    0.020    0.024    0.025    0.027
15           0.016    0.019    0.022    0.024    0.026
16           0.015    0.018    0.021    0.022    0.024
17           0.014    0.017    0.020    0.021    0.022
18           0.013    0.016    0.018    0.019    0.021
19           0.013    0.015    0.018    0.019    0.020
20           0.012    0.014    0.017    0.018    0.019
21           0.012    0.014    0.016    0.017    0.018
22           0.011    0.013    0.015    0.016    0.017
23           0.011    0.012    0.015    0.015    0.017
24           0.010    0.012    0.014    0.015    0.016






Table 2. Calculated V for Catalog-Firm Policy (24, 24, 24, 24, 24)
NC = $60, M = $2, and d = 0.03




                                       Frequency
Recency      1            2            3            4            5
1            $ 69.470     $ 79.194     $ 88.052     $ 92.381     $ 95.821
2            $ 4.069      $ 12.697     $ 20.710     $ 24.632     $ 27.837
3            $ 0.184      $ 7.903      $ 15.169     $ 18.737     $ 21.704
4            $ (2.563)    $ 4.415      $ 11.050     $ 14.320     $ 17.069
5            $ (4.371)    $ 2.027      $ 8.155      $ 11.185     $ 13.751
6            $ (5.715)    $ 0.172      $ 5.843      $ 8.660      $ 11.057
7            $ (6.861)    $ (1.473)    $ 3.750      $ 6.351      $ 8.576
8            $ (7.597)    $ (2.634)    $ 2.204      $ 4.613      $ 6.692
9            $ (8.153)    $ (3.598)    $ 0.868      $ 3.090      $ 5.027
10           $ (8.553)    $ (4.384)    $ (0.282)    $ 1.771      $ 3.554
11           $ (8.651)    $ (4.808)    $ (1.016)    $ 0.882      $ 2.538
12           $ (8.682)    $ (5.169)    $ (1.689)    $ 0.056      $ 1.581
13           $ (8.679)    $ (5.512)    $ (2.362)    $ (0.772)    $ 0.611
14           $ (8.426)    $ (5.547)    $ (2.685)    $ (1.231)    $ 0.035
15           $ (8.142)    $ (5.565)    $ (2.996)    $ (1.685)    $ (0.546)
16           $ (7.765)    $ (5.480)    $ (3.197)    $ (2.035)    $ (1.021)
17           $ (7.257)    $ (5.247)    $ (3.243)    $ (2.213)    $ (1.315)
18           $ (6.646)    $ (4.909)    $ (3.174)    $ (2.270)    $ (1.494)
19           $ (5.940)    $ (4.460)    $ (2.984)    $ (2.210)    $ (1.544)
20           $ (5.168)    $ (3.953)    $ (2.737)    $ (2.098)    $ (1.547)
21           $ (4.295)    $ (3.331)    $ (2.362)    $ (1.850)    $ (1.411)
22           $ (3.343)    $ (2.625)    $ (1.902)    $ (1.521)    $ (1.189)
23           $ (2.310)    $ (1.833)    $ (1.354)    $ (1.097)    $ (0.878)
24           $ (1.194)    $ (0.953)    $ (0.715)    $ (0.585)    $ (0.473)
25           $ 0





Table 3. Calculated V for Catalog-Firm Policy (3, 6, 9, 12, 14)
NC = $60, M = $2, and d = 0.03




                                       Frequency
Recency      1            2            3            4            5
1            $ 71.487     $ 80.085     $ 88.337     $ 92.781     $ 96.131
2            $ 6.281      $ 13.702     $ 20.987     $ 25.063     $ 28.159
3            $ 2.578      $ 9.011      $ 15.440     $ 19.197     $ 22.038
4            $ 0          $ 5.620      $ 11.318     $ 14.809     $ 17.417
5            $ 0          $ 3.321      $ 8.424      $ 11.702     $ 14.113
6            $ 0          $ 1.554      $ 6.113      $ 9.207      $ 11.433
7            $ 0          $ 0          $ 4.021      $ 6.927      $ 8.969
8            $ 0          $ 0          $ 2.478      $ 5.219      $ 7.101
9            $ 0          $ 0          $ 1.146      $ 3.728      $ 5.454
10           $ 0          $ 0          $ 0          $ 2.441      $ 3.999
11           $ 0          $ 0          $ 0          $ 1.584      $ 3.000
12           $ 0          $ 0          $ 0          $ 0.792      $ 2.063
13           $ 0          $ 0          $ 0          $ 0          $ 1.114
14           $ 0          $ 0          $ 0          $ 0          $ 0.559
15           $ 0          $ 0          $ 0          $ 0          $ 0
16           $ 0          $ 0          $ 0          $ 0          $ 0
17           $ 0          $ 0          $ 0          $ 0          $ 0
18           $ 0          $ 0          $ 0          $ 0          $ 0
19           $ 0          $ 0          $ 0          $ 0          $ 0
20           $ 0          $ 0          $ 0          $ 0          $ 0
21           $ 0          $ 0          $ 0          $ 0          $ 0
22           $ 0          $ 0          $ 0          $ 0          $ 0
23           $ 0          $ 0          $ 0          $ 0          $ 0
24           $ 0          $ 0          $ 0          $ 0          $ 0
25           $ 0





Figure 1. Graphical Representation of the Markov Chain Model
of the Firm’s Relationship with Jane Doe






[Figure 1 is a state-transition diagram: from each recency state r = 1, 2, 3, 4, a purchase (probability pr) returns the customer to state 1, and a nonresponse (probability 1 - pr) moves her to state r + 1; state 5 (former customer) is absorbing with probability 1.0.]




Figure 2. Graphical Representation of Transitions into and out of
State (r, f, m) for an RFM-based MCM.













[Figure 2 shows the transitions into and out of state (r, f, m): the only path into (r, f, m) is a nonresponse from (r - 1, f, m); a nonresponse from (r, f, m) leads to (r + 1, f, m); a response leads to one of the recency 1 states (1, f + 1, m - 1), (1, f + 1, m), or (1, f + 1, m + 1).]

0 komentar:

MODELING CUSTOMER RELATIONSHIPS AS MARKOV CHAINS

Phillip E. Pfeifer and Robert L. Carraway
Darden School of Business
100 Darden Boulevard
Charlottesville, VA 22903

Journal of Interactive Marketing, 14(2), Spring 2000, 43-55




PFEIFERP@VIRGINIA.EDU



Please do not quote our copy without permission

1


Introduction

The lifetime value of a customer is an important and useful concept in interactive

marketing. Courtheaux (1986) illustrates its usefulness for a number of managerial problems—

the most obvious if not the most important being the budgeting of marketing expenditures for

customer acquisition. It can also be used to help allocate spending across media (mail versus

telephone versus television), vehicles (list A versus list B), and programs (free gift versus special

price), as well as to inform decisions with respect to retaining existing customers (see, for

example, Hughes 1997). Jackson (1996) even argues that its use helps firms to achieve a

strategic competitive advantage.




Dwyer (1989) helped to popularize the lifetime value (LTV) concept by illustrating the

calculation of LTV for both a customer retention and a customer migration situation. Customer

retention refers to situations in which customers who are not retained are considered lost for

good. In a customer retention situation, nonresponse signals the end of the firm’s relationship

with the customer. In contrast, customer migration situations are those in which nonresponse

does not necessarily signal the end of the relationship. Besides articulating this distinction

between customer retention and migration, Dwyer listed several impediments to the use of LTV.




More recently, Berger and Nasr (1998) argue for the importance of moving beyond

numerical illustrations of the calculation of LTV to consider mathematical models for LTV.

They offer five such mathematical models, couched in the Dwyer customer retention/migration

classification scheme. While Dwyer illustrates the calculation of LTV using two extensive

2


numerical examples, Berger and Nasr provide mathematical equations for LTV for five

situations—four involving customer retention and one involving customer migration.




Blattberg and Deighton (1996) also formulate a mathematical model for LTV, for the

purpose of helping managers decide the optimal balance between customer acquisition spending

(investments to convince prospects to become customers) and customer retention spending

(investments to convince current customers to continue purchasing from the firm). Whereas the

Berger and Nasr models apply only to customers (people or organizations who have already

purchased from the firm), the Blattberg and Deighton model applies specifically to prospects

(people or organizations who have yet to purchase from the firm). Because the Blattberg and

Deighton model uses an infinite horizon, their resulting equation for LTV is quite simple and easy

to evaluate, and involves none of the summation operators on which the Berger and Nasr finite-

horizon models rely.




This paper builds directly on the work of Berger and Nasr (1998) and Blattberg and

Deighton (1996) in that it introduces a general class of mathematical models, Markov Chain

Models, which are appropriate for modeling customer relationships and calculating LTV. The

major advantage of the Markov chain model (MCM) is its flexibility. Almost all of the situations

previously modeled are amenable to Markov Chain modeling. The MCM can handle both

customer migration and customer retention situations. It can apply either to a customer or to a

prospect. In addition, the flexibility inherent in the MCM means that it can be used in many

other situations not covered by previous models. MCMs have been used successfully in several

3


areas (White 1993) including marketing (Bronnenberg 1998 is a recent example). A major

purpose of this paper will be to explore their usefulness in modeling the relationship between an

individual customer and a marketing firm.


In addition to its flexibility, the MCM offers other advantages. Because it is a

probabilistic model, it explicitly accounts for the uncertainty surrounding customer relationships.

It uses the language of probability and expected value—language that allows one to talk about

the firm’s future relationship with an individual customer. As direct marketers move toward true

one-to-one marketing, we suggest that their language will also change. Rather than talking

about groups or cohorts of customers, direct marketers will talk about Jane Doe. Rather than

talking about retention rates, they will talk about the probability Jane Doe will be retained.

Rather than talking about average profits from a segment of customers, they will talk about the

expected profit from their relationship with Jane Doe. Because the MCM incorporates this

language of probability and expected value, it is ideally suited for facilitating true one-to-one

marketing.




Another advantage of the MCM is that it is supported by a well-developed theory

about how these models can be used for decision making (see, for example, Puterman 1994).

We will introduce the theory surrounding Markov decision processes, and illustrate how this

theory can help direct marketers manage and optimize their relationships with individual

customers.

4


The MCM also works quite nicely with the popular Recency, Frequency, Monetary

value (RFM) framework (see, for example, David Shepard Associates, Inc. 1995), which

direct marketers use to categorize customers and manage customer relationships. We will

illustrate the relationship between the MCM and the RFM framework.




This paper is organized as follows. After this introduction, we present the fundamentals

of the MCM. The section following that will illustrate the use of the MCM for a variety of

customer relationship assumptions. For these illustrations, we will choose customer situations

similar to those addressed previously in the literature. A final section will illustrate the use of

MCM and Markov decision processes to help a firm optimize a customer relationship. For this

illustration, we look at a catalog company deciding when to curtail its relationship with the

customer. This last example incorporates both recency and frequency as determinants of the

status of the firm’s relationship with the customer. The paper ends with a summary and

conclusions.




The Markov Chain Model

First we illustrate the fundamentals of Markov chain modeling. Consider the following

situation: the ABC direct marketing company is trying to acquire Jane Doe as a customer. If

successful, ABC expects to receive NC in net contribution to company profits on Jane’s initial

purchase and on each succeeding purchase. Purchases are made at most once a period, at the

end of the period. Consistent with the customer migration models of Dwyer and of Berger and

5


Nasr, it is possible that Jane will go several periods between purchases. Periods are of equal

length, and the firm uses a per-period discount rate d to account for the time value of money.




For each period that Jane is considered an active customer, ABC will spend throughout

the period in efforts to remarket to Jane. Let M represent the present value of those

remarketing expenditures. Furthermore, the firm believes the probability Jane will purchase at

the end of any period is a function only of Jane’s recency, the number of periods since Jane last

purchased. (If Jane purchased at the end of last period, Jane is at recency 1 for the current

period.) Let the probability Jane will purchase at the end of any period be pr, where r is Jane

Doe’s recency. For the purposes of illustration, assume that if and when Jane reaches r = 5, the

firm will curtail all future efforts to remarket to Jane.




Thus, there are five possible states of the firm’s relationship with Jane Doe at the end of

any period, corresponding to recencies 1 though 4 and a fifth state, “non customer” or “former

customer,” which we will label r = 5. A key feature of the firm’s relationship with Jane Doe is

that the future prospects for that relationship are a function only of the current state of the

relationship (defined by Jane Doe’s recency) and not of the particular path Jane Doe took to

reach her current state. This property is called the Markov property. The Markov property is

a necessary condition for a stochastic system to be a Markov Chain. It is a property of the

Berger and Nasr models, the Dwyer models, and the Blattberg and Deighton model. Because

all these models exhibit the Markov property with constant probabilities, they can all be

6


represented as Markov chains. Figure 1 is a graphical representation of the MCM for the

firm’s relationship with Jane Doe.




The probabilities of moving from one state to another in a single period are called

transition probabilities. For Jane Doe, the following 5x5 transition probability matrix

summarizes the transition probabilities shown graphically in Figure 1. The last row of this matrix

reflects the assumption that if Jane is at recency 5 in any future period, she will remain at

recency 5 for the next and all future periods. In the language of Markov chains, r = 5 or

“former customer” is an absorbing state. Once Jane enters that state she remains in that state.





p1 1 - p1





0





0





0

p2
P = p3

0
0

1 - p2
0

0
1 - p3

0
0

p4

0

0

0 1 - p4

0

0

0

0

1




Matrix P is the one-step transition matrix. The t-step transition matrix is defined to be

the matrix of probabilities of moving from one state to another in exactly t periods. It is a well-

known property of Markov chain modeling that the t-step transition matrix is simply the matrix

product of t one-step transition matrices. Thus the (i, j) element of matrix Pt (found by

multiplying P by itself t times) is the probability Jane Doe will be at recency j at the end of

period t given that she started at recency i. In essence, Pt is a tidy way both to summarize and

to calculate the probability forecast of Jane Doe’s recency at any future time point t.


7


Now that we have a probability forecast for ABC’s future relationship with Jane Doe,

we turn to the economic evaluation of that relationship. The cash flows received by the firm in

any future period will be a function of Jane Doe’s recency. R, a 5x1 column vector of rewards,

summarizes these cash flows:




NC – M
–M

R =

–M
–M
0





If at the end of any future period t Jane Doe makes a purchase and thereby transitions to

r = 1 for the next period, the firm receives NC at time t and is committed to marketing to Jane

throughout the next period. Recalling that M represents the present value of these marketing

expenditures, the total reward to the firm at time t if Jane transitions to r = 1 is NC – M. If Jane

does not make a purchase and transitions to recencies 2, 3, or 4, the firm’s reward is –M,

reflecting its decision to continue to remarket to Jane during the coming period. However, if

Jane transitions to r = 5, the firm has decided to curtail the relationship. The corresponding

reward is zero.




If the ABC company decides to evaluate its relationship with Jane Doe using a horizon

of length T, the remaining challenge is to combine the probability forecasts reflected in Pt with

the reward structure reflected in R. If the firm is risk neutral, it will be willing to make decisions

based on expected net present value. If so, the only remaining challenge is to take Pt, R, T and


8


d and develop an expression for the expected present value of the firm’s relationship with Jane

Doe. The theory of the Markov decision process provides the mechanism for doing so:


T

VT = S [(1 + d)-1P]tR
t = 0

(1)



where VT is the 5x1 column vector of expected present value over T periods. The elements of

VT correspond to the five possible initial states of the Jane Doe relationship. The top element of

VT corresponding to r = 1 (a Jane Doe relationship that starts with a purchase at t = 0), will be

of particular interest. This is the expected present value of the cash flows from the firm’s

proposed relationship with Jane Doe. It is, in other words, the expected LTV for Jane Doe.

Notice that this value accounts for NC from the initial purchase at t = 0 as well as a trailing

expenditure of M at time T if Jane is an active customer at T.




Rather than selecting some finite horizon T over which to evaluate its relationship with

Jane Doe, the firm might decide to select an infinite horizon. (Because expected cash flows

from a customer relationship usually become quite small at large values of t, the numerical

differences between selecting a long but finite horizon and an infinite horizon are usually

negligible. The infinite horizon approach has the advantage of being computationally simpler.)

For an infinite horizon, it can be shown that





V











lim VT
T→∞








where I is the identity matrix.




=

9

{I – (1 + d)-1 P}-1 R




(2)


Let us illustrate the use of (1) and (2) using a numerical example. Suppose that

for Jane Doe, d = 0.2, NC = $40, and M = $4. The numerical values for R turn out to be



        [ $36  ]
        [ $(4) ]
R   =   [ $(4) ]
        [ $(4) ]
        [ $0   ]



Suppose further that p1 = 0.3, p2 = 0.2, p3 = 0.15, and p4 = 0.05. The one, two, three, and

four step transition matrices are then






        [ 0.30   0.70   0      0      0    ]
        [ 0.20   0      0.80   0      0    ]
P    =  [ 0.15   0      0      0.85   0    ]
        [ 0.05   0      0      0      0.95 ]
        [ 0      0      0      0      1    ]


        [ 0.2300  0.2100  0.5600  0       0      ]
        [ 0.1800  0.1400  0       0.6800  0      ]
P^2  =  [ 0.0875  0.1050  0       0       0.8075 ]
        [ 0.0150  0.0350  0       0       0.9500 ]
        [ 0       0       0       0       1      ]


        [ 0.1950  0.1610  0.1680  0.4760  0      ]
        [ 0.1160  0.1260  0.1120  0       0.6460 ]
P^3  =  [ 0.0473  0.0613  0.0840  0       0.8075 ]
        [ 0.0115  0.0105  0.0280  0       0.9500 ]
        [ 0       0       0       0       1      ]


        [ 0.1397  0.1365  0.1288  0.1428  0.4522 ]
        [ 0.0768  0.0812  0.1008  0.0952  0.6460 ]
P^4  =  [ 0.0390  0.0331  0.0490  0.0714  0.8075 ]
        [ 0.0098  0.0081  0.0084  0.0238  0.9500 ]
        [ 0       0       0       0       1      ]



The upper row of P^4 tells us that if the firm successfully initiates a relationship with Jane Doe at t

= 0, there is a 0.1397 probability Jane will make a purchase at t = 4, a 0.1365 probability Jane

will reach recency 2 at t = 4, . . . , and a 0.4522 probability the firm will curtail its relationship

with Jane at the end of period 4 because she did not make a purchase in any of the four

preceding periods.




Substituting these transition matrices, R as given above, and d = 0.2 into (1) gives




        [ $50.115  ]
        [ $4.220   ]
V_4 =   [ $0.592   ]
        [ $(1.980) ]
        [ $0       ]



which represents the expected present value of the Jane Doe relationship over a four period

horizon. The expected LTV of the proposed relationship with Jane Doe is $50.11. This value

consists of $40 from the initial purchase and $10.11 of expected present value from future cash

flows.




If and when Jane reaches recency 2, her expected customer lifetime value decreases to

$4.22. Notice that $4.22 is the present value now of a Jane Doe customer relationship that

begins with Jane having just transitioned to recency 2. (This is a Jane Doe who purchased one




period ago but failed to purchase in this period.) Similarly, if Jane transitions to recency 3, the

firm’s relationship with Jane has an expected present value of $0.59, while if Jane transitions to

recency 4, the expected present value is a negative $1.98.




This negative expected value for a recency 4 Jane Doe means that the ABC firm loses

money in its efforts to remarket to a recency 4 Jane Doe. Perhaps this is because our analysis

used the relatively short time horizon of T = 4?




A simple inversion of the 5x5 matrix required by (2) allows us to calculate V, the

expected net present value of the Jane Doe relationship for an infinite horizon:




        [ $52.320  ]
        [ $5.554   ]
V   =   [ $1.251   ]
        [ $(1.820) ]
        [ $0       ]



The differences in the results reflect the expected present value of cash flows after t = 4. Using

an infinite horizon, the expected LTV for Jane Doe increases to $52.32.




While all the expected present values have improved under the longer horizon, the

expected present value is still negative at recency 4. Given the economic and probabilistic

assumptions of the model, the firm would do better if it curtailed its relationship with Jane Doe at

recency 4 rather than recency 5.





It is a simple matter to modify the Markov decision model to evaluate this proposed

change in policy. One way would be to reformulate the model using four rather than five states.

A quicker short cut is to modify the existing five-state model by setting p4 equal to zero and the

fourth element of R equal to zero. These two changes reflect the fact that under the new policy,

no money will be spent on a recency 4 Jane Doe and she will automatically transition to recency

5. Making these changes and recalculating V gives

        [ $53.149 ]
        [ $6.621  ]
V   =   [ $2.644  ]
        [ $0      ]
        [ $0      ]



As expected, under the new policy the expected present value of a recency 4 Jane Doe is zero.

It is also apparent that because the new policy more effectively deals with Jane Doe if and when

she reaches recency 4, the change improves the value of the entire relationship. The new

expected LTV for the Jane Doe relationship is $53.15. The improved policy has added $0.83

to the expected LTV.
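
The three valuations above are easy to reproduce. The following self-contained sketch (ours) rebuilds P and R from the stated inputs (d = 0.2, NC = $40, M = $4, and the four purchase probabilities) and recovers the reported vectors up to rounding:

```python
import numpy as np

d, NC, M = 0.2, 40.0, 4.0
p = [0.3, 0.2, 0.15, 0.05]

P = np.zeros((5, 5))
for r, pr in enumerate(p):
    P[r, 0], P[r, r + 1] = pr, 1.0 - pr
P[4, 4] = 1.0
R = np.array([NC - M, -M, -M, -M, 0.0])

# Equation (1), T = 4:             [50.115, 4.220, 0.592, -1.980, 0]
V4 = sum(np.linalg.matrix_power(P / (1 + d), t) @ R for t in range(5))

# Equation (2), infinite horizon:  [52.320, 5.554, 1.251, -1.820, 0]
V = np.linalg.solve(np.eye(5) - P / (1 + d), R)

# Modified policy (p4 = 0, fourth reward = 0):  [53.149, 6.621, 2.644, 0, 0]
P_new, R_new = P.copy(), R.copy()
P_new[3] = [0.0, 0.0, 0.0, 0.0, 1.0]   # recency 4 now moves straight to recency 5
R_new[3] = 0.0                         # and nothing is spent on remarketing
V_new = np.linalg.solve(np.eye(5) - P_new / (1 + d), R_new)

print(np.round(V4, 3), np.round(V, 3), np.round(V_new, 3))
```
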




This simple example illustrates the notion that firms can use MCMs not only to evaluate

proposed customer relationships, but also to help manage and improve those relationships.

Markov chain modeling assists not only with prospecting decisions (should we try to initiate a

relationship with Jane Doe?) but also with retention and termination decisions. Finally, notice



that any improvements to a customer relationship are immediately incorporated into the LTV

calculation through the MCM.




Markov Chain Models of Customer Relationships

The ABC firm’s potential relationship with Jane Doe was our first example of a

customer relationship amenable to Markov chain modeling. The Jane Doe relationship might be

characterized as a customer migration situation with purchase probabilities dependent on

recency. This is a case 5 situation, as defined by Berger and Nasr.




Now suppose that, in addition to purchase probabilities, remarketing expenditures and

net contribution also depend on recency. In other words, suppose the amount the firm spends

on remarketing is adjusted based on recency. In addition, suppose the expected net

contribution from Jane Doe purchases is thought to depend on the recency state from which the

purchase is made. Purchase probabilities, net contributions, and remarketing expenses varying

with recency are characteristics of the situation considered by Dwyer in his customer migration

example.




Modeling this situation with an MCM requires an expanded state space. In order

to account for net contributions that depend on Jane Doe’s recency at time of purchase, we will

need to separate the recency 1 state into four new states. These four new states will be labeled

11, 12, 13, and 14 corresponding to Jane Doe purchasing at the end of a period in which she was



at recency 1, 2, 3, and 4 respectively. The transition matrix and reward vector for this more

complicated customer migration situation are given below:




State     P  (columns ordered 11, 12, 13, 14, 2, 3, 4, 5)                              R

11        p11    0      0      0      1 - p11   0         0         0                 NC11 - M11
12        p12    0      0      0      1 - p12   0         0         0                 NC12 - M12
13        p13    0      0      0      1 - p13   0         0         0                 NC13 - M13
14        p14    0      0      0      1 - p14   0         0         0                 NC14 - M14
2         0      p2     0      0      0         1 - p2    0         0                 -M2
3         0      0      p3     0      0         0         1 - p3    0                 -M3
4         0      0      0      p4     0         0         0         1 - p4            -M4
5         0      0      0      0      0         0         0         1                 0





For example, notice that if Jane Doe purchases at recency 2, she transitions to state 12,

where the firm receives NC12 in net contribution and spends M12 for remarketing. Purchases at

recency 3, however, send Jane to state 13, where the firm receives NC13 and spends M13. Just

as net contribution and remarketing expenditures depend on the recency at which Jane

purchased, so too do her subsequent purchase probabilities p11, p12, p13, and p14. Breaking out

the original recency 1 state into four substates has allowed us to apply the MCM to a situation

where purchase amounts, marketing expenditures, and repurchase probabilities depend on

customer recency at the time of purchase.
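
One way (ours) to assemble the transition matrix and reward vector just described, with the eight states ordered 11, 12, 13, 14, 2, 3, 4, 5 and all parameter values left to the user:

```python
import numpy as np

def migration_model(p1, p2, p3, p4, NC1, M1, M2, M3, M4):
    """Expanded migration model. p1, NC1, and M1 are length-4 sequences for
    states 11-14 (indexed by the recency at which the purchase was made);
    p2-p4 and M2-M4 are scalars for recencies 2-4."""
    P, R = np.zeros((8, 8)), np.zeros(8)
    for j in range(4):                  # states 11, 12, 13, 14
        P[j, 0] = p1[j]                 # repurchase -> state 11
        P[j, 4] = 1.0 - p1[j]           # nonresponse -> recency 2
        R[j] = NC1[j] - M1[j]
    for i, (pr, Mr) in enumerate(zip((p2, p3, p4), (M2, M3, M4))):
        row = 4 + i                     # states 2, 3, 4
        P[row, 1 + i] = pr              # purchase at recency r -> state 1r
        P[row, 5 + i] = 1.0 - pr        # nonresponse -> next recency
        R[row] = -Mr
    P[7, 7] = 1.0                       # recency 5 is absorbing, reward 0
    return P, R
```
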




So far we have considered only customer migration situations. The MCM can also be

used for customer retention situations. The simplest of such situations is one in which a

customer has a constant probability p of responding in each period, and any nonresponse

signals the end of the customer relationship. The MCM for this situation requires only two

states: customer and former customer. The P and R matrices are given below:


State                    P                         R

Customer                 p        1 - p            NC - M
Former Customer          0        1                0






This is a very simple model—so simple that the advantages of Markov chain modeling

are negligible. Berger and Nasr treat this simplest customer-retention situation as their case 1.

Notice that because the MCM applies to a period of arbitrary length (discount rate d is defined

as the per-period discount rate), this MCM also applies to the situation considered by Berger

and Nasr as case 2a.

It is a relatively simple matter to expand the MCM to handle the firm’s relationship with

a prospect. Suppose an expenditure A gives the firm probability pa of acquiring the prospect.

The MCM model for this prospecting/customer retention situation requires three states:

prospect, customer, and former customer. The transition matrix and reward vector are as

follows:


State                    P                                     R

Prospect                 0        pa       1 - pa              -A
Customer                 0        p        1 - p               NC - M
Former Customer          0        0        1                   0




Once again, this model is simple enough that the benefits of Markov Chain modeling are limited.

In fact, this model is simple enough that equation (2) can be solved algebraically to give the

following expected values:





V_Prospect          =   -A + (1 + d)^(-1) pa V_Customer

V_Customer          =   (NC - M) / [1 - p/(1 + d)]

V_Former Customer   =   0





This example illustrates that while it is possible to extend the MCM to include

prospecting, there is usually little reason to do so. The transition from prospect to customer is

usually simple enough that it can be treated algebraically, without recourse to the MCM.
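
The closed-form expressions can be checked against equation (2) directly. In the sketch below (ours), the parameter values are purely illustrative assumptions:

```python
import numpy as np

# Illustrative (assumed) parameter values for the prospect model.
A, pa, p, NC, M, d = 10.0, 0.25, 0.6, 50.0, 5.0, 0.2

# Three-state MCM: prospect, customer, former customer.
P = np.array([[0.0, pa,  1 - pa],
              [0.0, p,   1 - p ],
              [0.0, 0.0, 1.0   ]])
R = np.array([-A, NC - M, 0.0])
V = np.linalg.solve(np.eye(3) - P / (1 + d), R)

# Algebraic expressions from the text.
V_customer = (NC - M) / (1 - p / (1 + d))
V_prospect = -A + pa * V_customer / (1 + d)
assert np.allclose(V, [V_prospect, V_customer, 0.0])
```
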




Next, consider a customer retention situation where purchase amounts, repurchase

probabilities, and remarketing expenditures all depend on frequency—the number of times the

customer has purchased from the firm. For the purposes of this illustration, suppose these

variables change for frequencies 1, 2, and 3, but remain constant for frequencies of 4 or more.

The MCM of this situation requires five states: frequencies 1, 2, 3, and 4 together with “former




customer.” The frequency 4 state might be better labeled “frequency 4 or greater.” The

transition matrix and reward vector are as follows:








State              P                                          R

frequency 1        0     p1    0     0     1 - p1             NC1 - M1
frequency 2        0     0     p2    0     1 - p2             NC2 - M2
frequency 3        0     0     0     p3    1 - p3             NC3 - M3
frequency 4        0     0     0     p4    1 - p4             NC4 - M4
Former Customer    0     0     0     0     1                  0






Purchase probabilities, net contributions, and remarketing expenses varying with frequency in a

customer retention situation are characteristics of the situation considered by Dwyer in his

customer retention example. Here we have shown the MCM for this same situation.




As our final example of MCM of a customer relationship, consider a customer migration

situation in which the firm believes that purchase probabilities, net contribution, and remarketing

expenditures all depend on the customer’s recency, frequency, and monetary value of past

purchases. This is the familiar RFM method for categorizing customers—applied here to

characterize the status of the firm’s relationship with the customer. Let r be the customer’s




recency, let f be the customer’s frequency, and let m be an index corresponding to categories

associated with the monetary value of the customer’s past purchases.




The MCM model for this situation will use states defined by (r, f, m) where each of

these elements are integers with some known upper bound. The upper bound for r might be the

recency at which the firm ends the relationship. The highest frequency value might be a

“frequency 4 or greater” type of category. Finally, the upper bound for m will simply be the

number of monetary value categories the firm has created. The end result is some finite number

of states defined using (r, f, m) that characterize the status of the firm’s relationship with the

customer.
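
A natural bookkeeping step, sketched below with assumed upper bounds, is to enumerate the (r, f, m) combinations and map each one to a row and column index of the transition matrix:

```python
from itertools import product

R_MAX, F_MAX, M_MAX = 24, 5, 3   # assumed upper bounds for r, f, and m

states = list(product(range(1, R_MAX + 1),    # recency
                      range(1, F_MAX + 1),    # frequency
                      range(1, M_MAX + 1)))   # monetary-value category
index = {state: i for i, state in enumerate(states)}

print(len(states))   # 24 x 5 x 3 = 360 customer states, plus a "former customer" state
```
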




Because the number of states in this general RFM-based MCM model can be quite

large, we will not attempt to portray the transition probability matrix and reward vector. Instead

we offer Figure 2. Figure 2 focuses on state (r, f, m) and shows all possible transitions to and

from that state. The only path to state (r, f, m) is a nonresponse from state (r - 1, f, m).

Similarly, a nonresponse from state (r, f, m) transitions the customer to state (r + 1, f, m). A

response or purchase from state (r, f, m) transitions the customer to a recency 1 state, to

frequency f + 1 (assuming the next higher frequency is not above the upper bound for

frequency), and to one of several possible monetary value states. Figure 2 uses m - 1, m, and

m + 1 as three possibilities. The firm receives a reward which depends on the monetary value

category to which the customer transitions—higher net contribution, for example, if the customer

moves to a higher monetary value category.




A challenge in constructing an RFM-based MCM will be to define monetary value

categories in such a way that the Markov property will hold. Recall that the Markov property

requires that future prospects for the customer relationship depend only on the current state of

the relationship. By their very nature, monetary value categories based on moving averages will

be non-Markovian. Instead of moving averages, monetary value categories based on the single

last purchase amount or the cumulative total of all previous purchase amounts will be better

suited for Markov chain modeling.




Example Use of Markov Decision Processes to Manage a Customer Relationship

Up to this point we have attempted to illustrate how the MCM can be used to model a

firm’s relationship with a customer in a variety of situations. Presumably, the most important use

of these models is in calculating LTV as a function of some small set of meaningful input

parameters. LTV is perhaps most commonly used to help firms decide how much to spend in

their attempts to initiate relationships.




In this section, we offer an extensive numerical example that will concentrate on the

opposite end of the firm’s relationship with the customer: the firm’s decision to terminate the

relationship. This example will demonstrate the use of the theory of Markov decision processes

to find the best decision strategy in a reasonably complicated situation.



A large catalog company knows that a customer’s response probabilities to its thrice-

yearly catalog depend on both the customer’s recency (how many periods/catalogs since the

customer’s last purchase) and frequency (how many times the customer has purchased from the

firm). These purchase probabilities for a typical customer are given in Table 1.




While the MCM can accommodate purchase amounts and marketing expenses that

vary with recency and frequency, for the purposes of this example we will keep things simple.

Assume the customer’s purchases bring the firm $60 in expected net contribution regardless of

the customer’s recency or frequency at the time of purchase. For each period the customer is

active, the firm spends $1 mid-period in marketing—again regardless of the customer’s recency

or frequency. The firm uses a discount rate of 0.03 per period to account for the time value of

money.




Heretofore the firm has curtailed its relationship with the customer after recency 24.

This is why purchase probabilities for recencies greater than 24 are not available. In light of

projected increases in paper and postage costs, the firm is intent on reexamining this policy.

Perhaps the firm should be more aggressive in paring its mailing list of dormant customers? Or

perhaps the firm’s policy should depend on both recency and frequency—cutting off low

frequency customers sooner than higher frequency customers? It seems fairly obvious that such

a policy will do better—but how much better? Will the increase in profitability justify the costs

associated with implementing the change?



The Markov chain model of this catalog firm’s relationship with a customer will require

121 states. The first 120 states will be the 24x5 combination of 24 possible recencies (r = 1 to

24) with 5 possible frequencies (f = 1 to 5). Notice that frequencies greater than or equal to 5

are all lumped together as frequency 5. The final state is the familiar “former customer” state

that we will label recency 25 and frequency 1. The probabilities in Table 1 will be labeled pr,f,

where (r, f) refers to the recency and frequency of the customer.




The construction of the 121x121 transition matrix follows directly from the problem

description. The customer transitions to state (25, 1) from states (24, 1) through (24, 5) with

probabilities 1 - p24,1 through 1 - p24,5 respectively. The customer transitions to state (1, 5)

from states (1, 4) through (24, 4) with probabilities p1,4 through p24,4 respectively, and from

states (1, 5) through (24, 5) with probabilities p1,5 through p24,5 respectively. For f = 2, 3, or 4

the customer transitions to state (1, f) from states (1, f - 1) through (24, f - 1) with probabilities

p1,f-1 through p24,f-1 respectively. Finally, for recencies 2 through 24 inclusive, the customer

transitions to state (r, f) from state (r - 1, f) with probability 1 - pr-1, f.




The 121-component reward vector is defined as follows:





           NC - M     r = 1
R_r,f  =   -M         r = 2, 3, …, 24
           0          r = 25


where M is equal to the present value of remarketing expenditures, or $1/(1.03)^(1/2). Notice the

very simple structure of these cash flows. As mentioned earlier, useful extensions to this model




would be to consider net contributions that vary with frequency and marketing expenditures that

vary with both recency and frequency. These extensions are easy to implement because they

involve changes to the reward vector only.
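
The construction just described can be coded compactly. The sketch below (ours; the probabilities of Table 1 are assumed to be supplied as a 24x5 array `prob` indexed by recency and frequency) builds the 121x121 transition matrix and the 121-element reward vector, and the commented lines show how equation (2) would then value each state:

```python
import numpy as np

def catalog_model(prob, NC=60.0, M=1.0 / 1.03 ** 0.5):
    """121-state catalog model. prob[r-1, f-1] is the Table 1 repurchase
    probability for recency r and frequency f. State (r, f) occupies row
    5*(r-1) + (f-1); the former-customer state (25, 1) is row 120."""
    idx = lambda r, f: 5 * (r - 1) + (f - 1)
    P, R = np.zeros((121, 121)), np.zeros(121)
    for r in range(1, 25):
        for f in range(1, 6):
            pr = prob[r - 1, f - 1]
            P[idx(r, f), idx(1, min(f + 1, 5))] = pr         # purchase: recency 1, frequency capped at 5
            nonresponse = idx(r + 1, f) if r < 24 else 120   # recency 25 = former customer
            P[idx(r, f), nonresponse] = 1.0 - pr
            R[idx(r, f)] = (NC - M) if r == 1 else -M
    P[120, 120] = 1.0                                        # former customer is absorbing
    return P, R

# prob = np.array([...])                           # the 24x5 probabilities of Table 1
# P, R = catalog_model(prob)
# V = np.linalg.solve(np.eye(121) - P / 1.03, R)   # equation (2) with d = 0.03
# V[0]                                             # value of a new customer; the text reports $89.264
```
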




We are now in a position to evaluate the firm’s current policy of curtailing the customer

relationship after recency 24. Using equation (2) and the inversion of a 121x121 matrix allows

us to calculate the 121x1 column vector V. The first element of V is calculated to be $89.264.

The expected present value of the catalog firm’s relationship with the customer is $89.264.

Included in this number is the $60 net contribution from the initial purchase—so that $29.264 of

the present value of the relationship comes from cash flows after the initial purchase.




Examining the calculated expected values for the remaining possible states of the firm’s

relationship tells us that the policy of curtailing the relationship after recency 24 is a fairly good

one. Only one state, state (24, 1) has a negative expected value. The firm could do a little bit

better if it dropped frequency 1 customers after recency 23, rather than 24.




To illustrate in detail how the Markov decision model can be used to help the firm

manage its relationship with the customer, suppose that paper and postage increases raise the

cost of remarketing from $1 to $2 per period, with no other changes. Suppose further that the

firm is free to change its mailing policy as it sees fit with no effect on the customer’s repurchase

probabilities. (While this might be true in the short run, it would not be true in the long run. If

customers learn to expect a certain number of contacts and plan or allocate their purchases



accordingly, cutting back on the number of contacts might increase the purchase probabilities at

lower recencies.)




The firm’s challenge is to find the optimal contact policy. We saw earlier that the

optimal policy at M = $1 was to curtail mailings after recency 23 for a frequency 1 customer

and after recency 24 for frequencies 2, 3, 4, and 5. Let us refer to this policy as (23, 24, 24,

24, 24), reflecting the recency cutoffs for each of the five possible frequencies. Our challenge is

to find the best contact policy (the best string of five recency cutoff values) now that mid-period

marketing expenses are $2.




The problem described is an example of a Markov decision problem, and there is a

large body of knowledge about how to solve these kinds of problems. We will illustrate one

popular approach called the policy improvement algorithm (see Hillier and Lieberman 1986,

715).




The policy improvement algorithm begins by evaluating (calculating V) for some

arbitrary policy. As our initial arbitrary policy we use the firm’s current policy of curtailing

contact after recency 24: (24, 24, 24, 24, 24). Using (2) to evaluate V for this policy gives the

results in Table 2. The expected customer lifetime value using this policy is $69.470.

Subtracting the initial $60 contribution, we see that the higher remarketing cost has cut the value

of the firm’s future relationship with the customer rather drastically—from $29.264 down to

$9.470 if the firm sticks to its initial (24, 24, 24, 24, 24) policy. We can see from Table 2,



however, that there are many states in which the value of the firm’s relationship is negative.

Improvement will be possible.




The next step in the policy improvement algorithm is to revisit the firm’s decision at each

and every state and replace the current decision with a best decision based on the calculated

values. In our example, this means contacting only those customers at states with positive

values and not contacting the rest. The improved policy is therefore (3, 6, 9, 12, 14). Thus

ends the first iteration in the policy improvement algorithm. Policy (3, 6, 9, 12, 14) becomes the

new candidate policy.




To evaluate policy (3, 6, 9, 12, 14), we modified the Markov decision model by

changing the purchase probabilities to zero for any state outside the firm’s contact policy. In

addition, we set the reward for transitioning to any of these states at zero. In effect, if the

customer transitions to a state outside the firm’s contact policy she or he will transition with

probability 1 and with reward zero to state (25, 1). The results of evaluating policy (3, 6, 9, 12,

14) are given in Table 3. The expected customer lifetime value for this strategy has improved to

$71.487.
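
Continuing the catalog sketch above, the modification described here can be applied mechanically: every state beyond its frequency's recency cutoff gets a zero reward and a certain transition to the former-customer state. The cutoffs and the commented usage are those of the text; the helper itself is ours.

```python
def apply_policy(P, R, cutoffs):
    """Return (P, R) modified so that any state (r, f) with r > cutoffs[f-1]
    is no longer contacted: its reward is zero and it moves with probability 1
    to the former-customer state (row 120)."""
    idx = lambda r, f: 5 * (r - 1) + (f - 1)
    P2, R2 = P.copy(), R.copy()
    for f in range(1, 6):
        for r in range(cutoffs[f - 1] + 1, 25):
            row = idx(r, f)
            P2[row, :] = 0.0
            P2[row, 120] = 1.0
            R2[row] = 0.0
    return P2, R2

# P, R = catalog_model(prob, M=2.0 / 1.03 ** 0.5)   # remarketing now costs $2 per period
# P2, R2 = apply_policy(P, R, (3, 6, 9, 12, 14))
# V = np.linalg.solve(np.eye(121) - P2 / 1.03, R2)  # V[0] is reported in the text as $71.487
```
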




Turning to the policy improvement step, we notice that every state contacted in the

current policy has positive expected value. So there is no way to improve the current policy by

making further cutbacks. What we need to do now is consider contacting some of the states

we no longer contact. For example, suppose we contacted state (4, 1). Such a contact would



cost us $2 in the middle of the period but would bring us a 0.0450 probability of transitioning to

state (1, 2) and a 0.9550 probability of transitioning to state (5, 1) at the end of the period. The

expected present value of doing so is

-(1.03)^(-1/2)($2) + (1.03)^(-1)[0.0450($80.085) + 0.9550($0)] = $1.528.

You will notice that we have evaluated the proposed change in policy using the values calculated

for the current policy. While this evaluation will not hold if we make more than one change to

the current policy, it is as prescribed by the policy improvement algorithm. After performing

similar calculations for all states not contacted under the current policy, we find several that

project a profitable contact. The improved policy that results is (8, 12, 15, 16, 17).
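
The single-state calculation above is easy to check:

```python
# Contact state (4, 1): pay $2 mid-period; with probability 0.0450 reach
# state (1, 2), valued at $80.085 under the current policy, else reach a
# state worth $0.
value = -2 / 1.03 ** 0.5 + (0.0450 * 80.085 + 0.9550 * 0.0) / 1.03
print(round(value, 3))   # 1.528
```
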




The expected customer lifetime value for the (8, 12, 15, 16, 17) policy is calculated to

be $74.519. All contacted states have positive expected values and only one of the states not

contacted, state (9, 1), projects a small positive profit if contacted.




Our final stage of the policy improvement algorithm evaluates policy (9, 12, 15, 16, 17)

to find an expected lifetime value of $74.523, all contacted states with positive values, and no

other states projecting a positive profit if contacted. The policy improvement algorithm is

complete. We have found the optimal policy.




The net result of the increase in marketing costs from $1 to $2 has been a change in

optimal policy from (23, 24, 24, 24, 24) to (9, 12, 15, 16, 17) and a decrease in expected

customer lifetime value from $89.267 to $74.523. Subtracting the initial $60 contribution



shows us that the expected future value of the firm’s relationship with the customer decreases

from $29.267 to $14.523.







Summary and Conclusions

This paper introduced a general class of mathematical models, Markov Chain Models,

which are appropriate for modeling customer relationships and calculating LTVs. A major

advantage of this class of models is its flexibility. This flexibility was demonstrated by showing

how the MCM could handle the wide variety of the customer relationship situations previously

modeled in the literature. The MCM is particularly useful in modeling complicated customer-

relationship situations for which algebraic solutions will not be possible.




A second advantage of the MCM is that it is a probabilistic model. It incorporates the

language of probability and expected value—language that will help marketers talk about

relationships with individual customers.




The MCM is also supported by a well-developed theory about how these models can

be used for decision making. The use of this theory was demonstrated by a comprehensive

numerical example in which a catalog company adjusted its contact policy in response to

increased mailing costs.



References

Berger, P. D., and N. I. Nasr. 1998. Customer Lifetime Value: Marketing Models and
Applications. Journal of Interactive Marketing 12(1): 17-29.

Blattberg, R. C. 1998. Managing the Firm Using Lifetime-Customer Value. Chain Store
Age (January): 46-49.

Blattberg, R. C. 1987. Research Opportunities in Direct Marketing. Journal of Direct
Marketing 1(1): 7-14.

Blattberg, R., and J. Deighton. 1996. Manage Marketing by the Customer Equity Test.
Harvard Business Review (July-August): 136-144.

Bronnenberg, B. J. 1998. Advertising Frequency Decisions in a Discrete Markov Process
Under a Budget Constraint. Journal of Marketing Research 35(3): 399-406.

Courtheaux, R. J. 1986. Database Marketing: Developing a Profitable Mailing Plan.
Catalog Age (June-July).

David Shepard Associates, Inc. 1995. The New Direct Marketing. 2nd ed. Burr Ridge, IL:
Irwin.

Dwyer, F. R. 1989. Customer Lifetime Valuation to Support Marketing Decision Making.
Journal of Direct Marketing 3(4): 8-15, reprinted in 11(4): 6-13.

Hillier, F. S., and G. J. Lieberman. 1986. Introduction to Operations Research. 4th ed.
Oakland, CA: Holden-Day.

Hughes, A. M. 1997. Customer Retention: Integrating Lifetime Value into Marketing
Strategies. Journal of Database Marketing 5(2): 171-178.

Jackson, B. 1985. Winning and Keeping Industrial Customers. Lexington, MA:
Lexington Books.

Jackson, D. R. 1996. Achieving Strategic Competitive Advantage through the
Application of the Long-term Customer Value Concept. Journal of Database
Marketing 4(2): 174-186.

Puterman, M. J. 1994. Markov Decision Processes: Discrete Stochastic Dynamic
Programming. New York: John Wiley & Sons.

White, D. J. 1993. A Survey of Applications of Markov Decision Processes. The Journal
of the Operational Research Society 44(11): 1073-1096.



Table 1. Catalog Customer Repurchase Probabilities



                              Frequency
Recency        1        2        3        4        ≥5

    1        0.103    0.121    0.143    0.151    0.163
    2        0.076    0.090    0.106    0.112    0.121
    3        0.059    0.069    0.081    0.086    0.093
    4        0.045    0.053    0.062    0.066    0.071
    5        0.038    0.045    0.053    0.056    0.061
    6        0.035    0.041    0.049    0.051    0.056
    7        0.030    0.035    0.041    0.043    0.047
    8        0.027    0.032    0.038    0.040    0.043
    9        0.025    0.029    0.035    0.037    0.040
   10        0.021    0.025    0.030    0.031    0.034
   11        0.021    0.024    0.028    0.030    0.033
   12        0.020    0.024    0.028    0.030    0.032
   13        0.017    0.020    0.024    0.025    0.027
   14        0.017    0.020    0.024    0.025    0.027
   15        0.016    0.019    0.022    0.024    0.026
   16        0.015    0.018    0.021    0.022    0.024
   17        0.014    0.017    0.020    0.021    0.022
   18        0.013    0.016    0.018    0.019    0.021
   19        0.013    0.015    0.018    0.019    0.020
   20        0.012    0.014    0.017    0.018    0.019
   21        0.012    0.014    0.016    0.017    0.018
   22        0.011    0.013    0.015    0.016    0.017
   23        0.011    0.012    0.015    0.015    0.017
   24        0.010    0.012    0.014    0.015    0.016






Table 2. Calculated V for Catalog-Firm Policy (24, 24, 24, 24, 24)
NC = $60, M = $2, and d = 0.03




                                          Frequency
Recency         1            2            3            4            5

    1      $ 69.470     $ 79.194     $ 88.052     $ 92.381     $ 95.821
    2      $  4.069     $ 12.697     $ 20.710     $ 24.632     $ 27.837
    3      $  0.184     $  7.903     $ 15.169     $ 18.737     $ 21.704
    4      $ (2.563)    $  4.415     $ 11.050     $ 14.320     $ 17.069
    5      $ (4.371)    $  2.027     $  8.155     $ 11.185     $ 13.751
    6      $ (5.715)    $  0.172     $  5.843     $  8.660     $ 11.057
    7      $ (6.861)    $ (1.473)    $  3.750     $  6.351     $  8.576
    8      $ (7.597)    $ (2.634)    $  2.204     $  4.613     $  6.692
    9      $ (8.153)    $ (3.598)    $  0.868     $  3.090     $  5.027
   10      $ (8.553)    $ (4.384)    $ (0.282)    $  1.771     $  3.554
   11      $ (8.651)    $ (4.808)    $ (1.016)    $  0.882     $  2.538
   12      $ (8.682)    $ (5.169)    $ (1.689)    $  0.056     $  1.581
   13      $ (8.679)    $ (5.512)    $ (2.362)    $ (0.772)    $  0.611
   14      $ (8.426)    $ (5.547)    $ (2.685)    $ (1.231)    $  0.035
   15      $ (8.142)    $ (5.565)    $ (2.996)    $ (1.685)    $ (0.546)
   16      $ (7.765)    $ (5.480)    $ (3.197)    $ (2.035)    $ (1.021)
   17      $ (7.257)    $ (5.247)    $ (3.243)    $ (2.213)    $ (1.315)
   18      $ (6.646)    $ (4.909)    $ (3.174)    $ (2.270)    $ (1.494)
   19      $ (5.940)    $ (4.460)    $ (2.984)    $ (2.210)    $ (1.544)
   20      $ (5.168)    $ (3.953)    $ (2.737)    $ (2.098)    $ (1.547)
   21      $ (4.295)    $ (3.331)    $ (2.362)    $ (1.850)    $ (1.411)
   22      $ (3.343)    $ (2.625)    $ (1.902)    $ (1.521)    $ (1.189)
   23      $ (2.310)    $ (1.833)    $ (1.354)    $ (1.097)    $ (0.878)
   24      $ (1.194)    $ (0.953)    $ (0.715)    $ (0.585)    $ (0.473)
   25      $  0





Table 3. Calculated V for Catalog-Firm Policy (3, 6, 9, 12, 14)
NC = $60, M = $2, and d = 0.03




                                          Frequency
Recency         1            2            3            4            5

    1      $ 71.487     $ 80.085     $ 88.337     $ 92.781     $ 96.131
    2      $  6.281     $ 13.702     $ 20.987     $ 25.063     $ 28.159
    3      $  2.578     $  9.011     $ 15.440     $ 19.197     $ 22.038
    4      $  0         $  5.620     $ 11.318     $ 14.809     $ 17.417
    5      $  0         $  3.321     $  8.424     $ 11.702     $ 14.113
    6      $  0         $  1.554     $  6.113     $  9.207     $ 11.433
    7      $  0         $  0         $  4.021     $  6.927     $  8.969
    8      $  0         $  0         $  2.478     $  5.219     $  7.101
    9      $  0         $  0         $  1.146     $  3.728     $  5.454
   10      $  0         $  0         $  0         $  2.441     $  3.999
   11      $  0         $  0         $  0         $  1.584     $  3.000
   12      $  0         $  0         $  0         $  0.792     $  2.063
   13      $  0         $  0         $  0         $  0         $  1.114
   14      $  0         $  0         $  0         $  0         $  0.559
   15      $  0         $  0         $  0         $  0         $  0
   16      $  0         $  0         $  0         $  0         $  0
   17      $  0         $  0         $  0         $  0         $  0
   18      $  0         $  0         $  0         $  0         $  0
   19      $  0         $  0         $  0         $  0         $  0
   20      $  0         $  0         $  0         $  0         $  0
   21      $  0         $  0         $  0         $  0         $  0
   22      $  0         $  0         $  0         $  0         $  0
   23      $  0         $  0         $  0         $  0         $  0
   24      $  0         $  0         $  0         $  0         $  0
   25      $  0





Figure 1. Graphical Representation of the Markov Chain Model
of the Firm’s Relationship with Jane Doe






[Figure 1 shows the five recency states as nodes. From each state r = 1, 2, 3, 4 an arc labeled p_r leads back to state 1 (a purchase) and an arc labeled 1 - p_r leads to state r + 1 (a nonresponse); state 5 has a self-loop with probability 1.0.]




Figure 2. Graphical Representation of Transitions into and out of
State (r, f, m) for an RFM-based MCM.













[Figure 2 shows state (r, f, m) at the center. The only arc into (r, f, m) is a nonresponse from (r - 1, f, m). A nonresponse from (r, f, m) leads to (r + 1, f, m); a response leads to one of the recency 1, frequency f + 1 states, shown here as (1, f + 1, m - 1), (1, f + 1, m), and (1, f + 1, m + 1).]
