# Introduction

The data consists of 3 years of weekly sales volume, the price of the product in question, the prices of the main competitors, and the promotion calendar for an FMCG product. The data is provided by SAP.

The task is to identify the volume uplift drivers, measure promotional effectiveness, and measure the cannibalization effect from the main competitors. The goal is to analyze the impact of price reductions and promotions on sales volume, while also taking competitors' prices into account.

In order to capture most of the patterns in the behavior of sales volume during promotion periods, about 150 new variables were derived. A set of different approaches was applied in the modelling stage. Derivation of the baseline price is a key step in the model, since it is used to capture the discount effect. Several time series techniques, such as ARIMA and spectral (Fourier) analysis, were applied to the sales volume and the price in order to investigate seasonal or cyclical (frequency) patterns in customer behavior. LASSO and extreme gradient boosting were used as feature selection tools. The final model is based on OLS, where the significance of all patterns observed in the previous steps is tested.

# Data understanding

Since only a small number of features is provided, the first task was to review plots of the supplied features.

Main observations:

- All but one of our product's competitors are seasonal competitors, so their prices should have limited impact on our sales volumes.
- Competitor 2 and Competitor 7 appear to supply their product beyond the season, since their prices are available for the period that follows the seasonal promotion period. However, no firm conclusion can be drawn, because this behavior is observed only at the end of the supplied price series for these competitors.
- Competitors' prices are very low at the end of their cycle. Most probably the product has a short expiration period, so the competitors are forced to sell the remaining quantities at the end of the season.
- All promotions lead to a boost in sales. For most promotions the price rises at the end of the promotion, and a residual effect of the promotion can be observed after it has ended.
- The mean price of our product is higher than the mean price of each of the competitors. Most probably this means that our producer is the market leader, while the competitors behave as followers.
- A couple of cases are observed where the price is clearly discounted, but the week is not marked as a promotion in the promotion calendar. A likely reason is that retailers also organize promotions, and these cases are not included in the promotion calendar.
- The frequency of the promotions is between 1 and 7 weeks.
- There are cases where the price in the last week of a promotion is higher than the (discounted) price in its first week. This may be due to the weekly aggregation of the data: if a promotion started on a Friday, the sales recorded for that promotion week may not be entirely accurate.
- The strongest effect is observed for promotions of types A, B, and D.
- The effect of competitors' promotions on our sales is used as a proxy for the cannibalization effect. It is measured as the impact of a competitor's price discount on our sales.
- For most promotions, the effect is strongest in the first week of the campaign and decreases in the following weeks.

# Feature engineering

Various data transformation approaches were applied to derive new features, grouped into several sets of variables. The main characteristics relate to the discount applied to the main product during promotion periods. In addition, a dummy-variable approach was used to create separate promotion type variables and variables indicating the week of the promotion.

### Baseline Price Derivation

As the most important step, a baseline price was estimated for our product and for the products of the main competitors. The baseline price is derived from the non-promotion periods and carried into each following promotion period. Two different approaches were used to define the baseline. The first was based on the mean price over the period before the promotion; this value was then used for the promotion period.

The second more sophisticated and finally used approach included the following steps:

- For promotion periods, the prices immediately before and after the promotion were used to linearly interpolate the base price during the promotion
- For periods with no promotion registered in the promotion calendar, the actual price was used
- To smooth the final base price, Hodrick-Prescott filtering was applied with a lambda of 10

### Promotion Discount Derivation

The baseline price above is used in combination with the observed weekly actual prices. The discount is estimated as: (Baseline Price − Actual Price) / Baseline Price. The variable is derived for our company's price and for the competitor prices.
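The formula translates into a one-liner (illustrative values; the guard against a zero baseline is an added safety check):

```python
import numpy as np
import pandas as pd

def promo_discount(actual: pd.Series, baseline: pd.Series) -> pd.Series:
    """Discount = (Baseline Price - Actual Price) / Baseline Price."""
    disc = (baseline - actual) / baseline
    return disc.replace([np.inf, -np.inf], np.nan)   # undefined when the baseline is zero

actual = pd.Series([10.0, 8.0, 10.0])      # week 2 is a promotion week
baseline = pd.Series([10.0, 10.0, 10.0])
disc = promo_discount(actual, baseline)    # 20% discount in week 2
```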

### Additional Variables Derivation

As a next step, additional features were derived that account for price changes, price elasticity, baseline prices, price discount, competitors' base prices, the impact of competitors' price discounts, the log price difference between our own and competitors' prices, the number of competitors, dummies for promotion type and for the week since the start of the promotion, time since the last promotion started, whether a competitor is in promotion, etc. Variables based on the maximum realized sales volume in the periods before the promotion were derived in order to capture the relative effect of the current promotion.

To investigate the cannibalization effect on the market, a base level of the sales volume is created. Sales are first cleaned from the effect of the main product's own promotions in order to isolate the influence of competitors' price discounts on the sales volume. For this purpose, a model with dummies for the promotion type and the promotion week is used. The residuals of this model are smoothed with HP filtering (lambda = 10) and serve as a proxy for the sales volume that would have been observed without promotion campaigns.

### Time Series Effects

The autocorrelation function was applied to the price series. Based on an ARIMA model, additional features were derived as lagged values of the most significant lags. The corresponding plot shows that the first and second lags are the most informative for the sales volume.

In addition, spectral analysis was applied to the sales volume, previously cleaned from price effects, in order to capture frequency patterns in sales that are independent of price. For this purpose, Fourier frequency analysis was used.
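A basic periodogram of this kind can be computed with NumPy's FFT (simulated series with a known 52-week cycle, purely to illustrate how a dominant frequency is identified):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 156                                   # three years of weekly data
t = np.arange(n)
x = np.sin(2 * np.pi * t / 52) + 0.1 * rng.normal(size=n)  # yearly cycle plus noise

spec = np.abs(np.fft.rfft(x - x.mean())) ** 2    # periodogram
freqs = np.fft.rfftfreq(n, d=1.0)                # cycles per week
dominant_period = 1 / freqs[np.argmax(spec[1:]) + 1]  # skip the zero frequency
```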

An additional variable was derived to represent two spikes within a year (a 52-week period), capturing the short-term increase in sales around national holidays. Since no national holiday calendar was available, the relevant weeks of each year were determined through data analysis. The approach was to divide the data into 52-week segments (incomplete for the last year). For each week of the year, the sales volumes across the three years were summed and divided by the number of years for which data was available for that week (2 or 3). Two spikes in volume are clearly visible in the average sales per week of the year, and they are confirmed by similar spikes in the corresponding weeks of each individual year.
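The week-of-year averaging can be sketched as follows (simulated three-year series with spikes planted in the same two weeks of each year; the spike weeks are illustrative, not the actual holiday weeks):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
weeks = np.arange(1, 157)                        # three years of weekly data
vol = pd.Series(100 + rng.normal(0, 2, 156), index=weeks)
vol[[6, 58, 110]] += 40                          # first recurring spike (same week each year)
vol[[33, 85, 137]] += 30                         # second recurring spike

week_of_year = (weeks - 1) % 52 + 1
avg_by_week = vol.groupby(week_of_year).mean()   # average over the available years
spikes = avg_by_week.nlargest(2).index.tolist()  # candidate holiday weeks
```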

A full list of the created variables can be found as an attachment.

# Modeling

### OLS Model

A preferred method for statistical analysis in marketing is OLS, because it allows a straightforward interpretation of the sales volume uplift factors. All variables described in the "Feature engineering" section are tested in the model. Since that phase produced around 150 candidate explanatory variables, a model selection process is required.

The explanatory power and importance of the features were evaluated by applying a gradient boosting tree algorithm (XGBoost) and by building regression models through LASSO analysis. Both analyses rely on an initial calibration of the input parameters. This was done with an exhaustive iterative approach that tries all combinations and selects as optimal the one yielding the highest accuracy. To guard against over-fitting during parameter calibration, K-fold validation with 5 sub-samples was applied. The two model selection approaches resulted in a short list of the most important variables, which was used as the starting point for the OLS model building phase. For LASSO, all variables with non-zero coefficients were added to the short list, while for XGBoost the 20 variables with the highest weight in the final prediction were included.
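The LASSO part of this short-listing step can be sketched with scikit-learn (simulated data; `LassoCV` performs the 5-fold cross-validated penalty calibration described above, and the non-zero coefficients form the short list):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n, p = 150, 20
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, n)  # only features 0 and 1 matter

Xs = StandardScaler().fit_transform(X)                 # put features on one scale
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)       # 5-fold CV selects the penalty
shortlist = np.flatnonzero(lasso.coef_)                # variables kept for the OLS phase
```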

Both forward and backward stepwise selection were used when building the final model. The forward selection process was guided by the short list from the initial phase and relied on expert domain knowledge, statistical evaluation of parameter significance, marginal improvement of the model, and analysis of the correlation matrix. The backward selection process started by forcing all variables into the model and evaluating their significance; insignificant variables were then removed one by one, applying higher thresholds for the short-listed variables and considering the marginal effect on accuracy.

A 5% significance level was generally used to evaluate the statistical significance of the coefficients. Only one parameter estimate enters the model with a p-value above 5% (7.8%); since its p-value is still below 10% and the parameter enters with a logical sign and magnitude, it was kept in the model.

### Best and worst case scenario assessment

A function has been applied to the model predictions in order to assess not only the predicted sales, but also the best and worst case scenarios.

The residuals were tested with the Shapiro-Wilk test and assessed as normally distributed.

From a business point of view it makes sense to have non-static best and worst case intervals. For example, when volumes are at a peak, higher variation can be expected; when sales volumes are low, the interval between the best and worst case shrinks, because the product is a market leader with a stable customer base. To incorporate these two cases into the prediction, the intervals are multiplied by a penalizing parameter with values between 0 and 1.

The function has been derived in the following steps:

**Best case scenario**

- to our predicted volume of sales a correction term is added
- the correction term is a sigmoid function applied to the percentage change in the sales volume since the previous week multiplied by 3 times the standard deviation of the residuals

**Worst case scenario**

- from our predicted volume of sales a correction term is subtracted
- the correction term is a sigmoid function applied to the percentage change in the sales volume since the previous week multiplied by 3 times the standard deviation of the residuals
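One way to implement these two rules (a sketch under the stated assumptions: the sigmoid of the weekly percentage change scales a band of three residual standard deviations; `scenario_bounds` is a hypothetical helper name):

```python
import numpy as np
import pandas as pd

def scenario_bounds(pred: pd.Series, resid_std: float, k: float = 3.0):
    """Best/worst case = prediction +/- sigmoid(weekly % change) * k * std(residuals)."""
    pct = pred.pct_change().fillna(0)
    penal = 1 / (1 + np.exp(-pct))        # in (0, 1): wider bands when volumes jump up
    best = pred + penal * k * resid_std
    worst = pred - penal * k * resid_std
    return best, worst

pred = pd.Series([100.0, 150.0, 90.0])
best, worst = scenario_bounds(pred, resid_std=5.0)
```

The band around the jump in week 2 is wider than the band after the drop in week 3, matching the business intuition above.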

From the graph below it can be seen that the best and worst case scenario are affected by sales volumes being in the higher or lower bands.

**Evaluation**

The following variables entered the model:

- Baseline price of actual price – This is an estimate of the company's product price in the long term, in the absence of promotions. The model shows that an increase in the long-term price level is expected to result in a decrease in sales volume.
- Discount of company promotions – This estimate shows the percentage discount relative to the baseline price when in promotion in the current week. A higher discount is expected to be related to higher sales in the current week.
- Discount from last promotion – This estimate shows the highest discount during the previous promotion. For the first few weeks of the data set it is populated with the mean discount over the development sample. A high discount during the last promotion is expected to result in lower sales in the current week, as customers often overbuy the product when discounts are high. In the final model, the coefficient on the current discount is larger than the coefficient on the last promotion's discount, which indicates that the previous promotion's discount can be controlled for when expectations for the current promotion are high.
- Volume of sales from previous week – This variable enters with a negative coefficient, showing that high sales in the previous week are followed by lower sales in the current week.
- Week of promotion – As expected, the highest sales are usually realised in the first week of a promotion.
- Type A promotion – The model shows that among the 5 promotion types, the non-pricing factors are significant only for promotion A. The expectation is that they result in higher sales.
- Length of last promotion – This parameter also enters the model with a logical coefficient: longer previous promotions are expected to result in lower sales in the current period.
- Competitor 3 price – The only competitor whose pricing policy significantly influences the company's sales volumes is Competitor 3. A higher percentage difference between the company's price and the competitor's price is expected to result in lower sales for the company.
- Only 1 competitor in the market – The analysis shows that having only one competitor in the market influences our company positively. The situation of having no competitors is not investigated, as there is no such data in the sample.
- Holiday/systematic high sales – There are 2 periods within the year that are associated with significantly higher sales volumes. The statistical analysis shows that this behavior is not explained by any other variable in the model.

The model predictions look quite good. In the graph below you can find the observed vs predicted values:

# Conclusion

Generally, elasticity of demand is defined as the percentage change in quantity demanded divided by the percentage change in price. For the continuous variables, elasticity is calculated from the regression coefficients: for each 1% change in a predictor, it gives the corresponding percentage change in sales volume. The elasticity is computed as the mean value of the predictor multiplied by its estimated regression coefficient, divided by the mean value of the dependent variable (volume of sales): (avg(Predictor) × Coefficient) / avg(Volume of Sales). Additionally, the regression formula can be used to simulate different scenarios by inputting different values for the dummy variables and analyzing the changes.

The elasticity for the applicable factors contributing to the volume of sales can be seen in the table below:

| Variable | Elasticity |
| --- | --- |
| datafile['BASE2_ACTUAL_PRICE_y'] | -1.6640498505 |
| datafile['DISCOUNT2_ACTUAL_PRICE_y'] | 0.2273465563 |
| datafile['LAST_DISCOUNT'] | -0.3009602194 |
| datafile['VOLUME_LAG1'] | -0.2775736955 |
| datafile['Price_Relative_Diff__3'] | 0.1079935245 |

In other words, for each 1% increase in the base price we expect the sales volume to decrease by about 1.7%. A 1% increase in the promotional discount is expected to increase the sales volume by about 0.23%. The effect of the other variables can be analysed similarly.


The analysis showed that promotional effectiveness is highly price dependent, while the non-pricing factors are significant only for promotion A. The model also shows that the promotion calendar and planning significantly influence sales volumes, as seen by the inclusion of variables such as the length of the last promotion and the discount during the last promotion.

# Attachments

# SAP - FMCG Consumer Packaged Goods Analytics

## 1. Data Preparation

```
#Load data preparation libraries
import pandas as pd
import numpy as np
#Load Plotly
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
init_notebook_mode(connected=True)
```

```
#Import Data
pr_folder = "C:\\Users\\c10670A\\Documents\\ProjectLibrary\\Datathon_2018"
#Get the data
datafile = pd.read_csv(pr_folder + '\\Data\\1DATATHON_SAP_AI_initial_data.csv', delimiter = ";")
```

```
#Check head
datafile.head(6)
```

```
#Check tail
datafile.tail(6)
```

```
#Replace all 0 prices with NaN
for col in enumerate(datafile.columns):
    if "COMPETITOR" in col[1]:
        datafile.loc[datafile[col[1]] == 0, col[1]] = np.nan
#Check result
datafile.head(20)
```

```
#Check the data
datafile.iloc[:,1:].describe(include = 'all')
```

```
#Populate Promotion Type as a numeric code
datafile['TYPE_OF_PROMOTION_2'] = np.where(datafile['TYPE_OF_PROMOTION']=="A", 1,
                                  np.where(datafile['TYPE_OF_PROMOTION']=="B", 2,
                                  np.where(datafile['TYPE_OF_PROMOTION']=="C", 3,
                                  np.where(datafile['TYPE_OF_PROMOTION']=="D", 4,
                                  np.where(datafile['TYPE_OF_PROMOTION']=="E", 5, 0)))))
```

```
#Plot the Prices
data1 = []
data2 = []
for col in enumerate(datafile.columns):
    if "PRICE" in col[1]:
        graph = go.Scatter(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1])
        data1 = data1 + [graph]
    if "VOLUME_OF_SALES" == col[1]:
        graph = go.Bar(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1],
            yaxis="y2",
            opacity=0.3)
        data1 = data1 + [graph]
        data2 = data2 + [graph]
    if "PROMOTION_2" in col[1]:
        graph = go.Bar(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1],
            opacity=0.8,
            yaxis="y3")
        data1 = data1 + [graph]
        data2 = data2 + [graph]
#Set layout if needed
layout1 = dict(title="Volume Dynamics",
               yaxis=dict(range=[0, 1.5]),
               yaxis2=dict(overlaying="y", side="right"),
               yaxis3=dict(range=[0, 20], overlaying="y", side="left", showticklabels=False),
               legend=dict(orientation="h"))
#Plot result
iplot(dict(data=data1, layout=layout1))
```

```
#Derive variables - calculate discount as 1 - current price/mean actual price before promotion or 1 - interpolated price
import statsmodels.api as sm
import statsmodels.formula.api as smf
import statsmodels.tsa.filters.hp_filter as filt

def calc_disc(price_name, prom_name):
    for i in range(0, len(datafile)):
        value = 1
        #Calculate baseline price
        #Check for type of promotion
        if datafile.loc[i, prom_name] != 0:
            sumPrice = 0
            counter = 0
            track = 0
            #Handle first week promotion
            if i > 0:
                #Iterate through the last prices
                for k in range(i-1, -1, -1):
                    if track == 1 and datafile.loc[k, prom_name] != 0:
                        break
                    elif track == 0 and datafile.loc[k, prom_name] == 0:
                        track = 1
                    if track == 1:
                        sumPrice += datafile.loc[k, price_name]
                        counter += 1
                #Set value
                if counter > 0:
                    value = datafile.loc[i, price_name]/(sumPrice/counter)
                    basePrice1 = sumPrice/counter
                    basePrice2 = np.nan
                else:
                    basePrice1 = datafile.loc[i, price_name]
                    basePrice2 = datafile.loc[i, price_name]
            else:
                basePrice1 = datafile.loc[i, price_name]
                basePrice2 = datafile.loc[i, price_name]
        #If no promotion
        else:
            basePrice1 = datafile.loc[i, price_name]
            basePrice2 = datafile.loc[i, price_name]
        #Populate new columns
        datafile.loc[i, 'DISCOUNT1_' + price_name] = np.where(1 - value == 0, np.nan, 1 - value)
        datafile.loc[i, 'BASE1_' + price_name] = basePrice1
        datafile.loc[i, 'BASE2_' + price_name] = basePrice2
    if price_name == "VOLUME_OF_SALES":
        for i in ["A", "B", "C", "D", "E"]:
            datafile['Promotion ' + i + ' dummy'] = np.where(datafile['TYPE_OF_PROMOTION']==i, 1, 0)
        for i in range(0, len(datafile['TYPE_OF_PROMOTION'])):
            count = 0
            if datafile.loc[i,'TYPE_OF_PROMOTION'] in ['A', 'B', 'C', 'D', 'E']:
                for j in range(i, -1, -1):
                    if datafile.loc[j,'TYPE_OF_PROMOTION'] in ['A', 'B', 'C', 'D', 'E']:
                        count += 1
                    else:
                        break
            else:
                count = 0
            datafile.loc[i,'Week from promotion start'] = count
        # Create week number dummies
        for i in range(1, 8):
            datafile['Week ' + str(i)] = np.where(datafile['Week from promotion start'] == i, 1, 0)
        colnames = ['Promotion ' + p + ' dummy' for p in ["A", "B", "C", "D", "E"]]
        weeknames = ['Week ' + str(p) for p in range(1, 8)]
        for x in enumerate(colnames):
            for i in enumerate(weeknames):
                datafile[x[1] + ' ' + i[1]] = datafile.loc[:, i[1]]*(datafile.loc[:, x[1]])
        model = smf.ols(formula="datafile['VOLUME_OF_SALES'] ~ datafile['BASE2_ACTUAL_PRICE'] + datafile['Promotion A dummy'] + datafile['Promotion E dummy'] + datafile['Week from promotion start'] + datafile['Promotion B dummy'] + datafile['Promotion C dummy'] +datafile['Promotion D dummy'] + datafile['Week 1'] + datafile['Week 2'] + datafile['Week 3'] + datafile['Week 4'] + datafile['Week 5'] + datafile['Week 6'] + datafile['Week 7'] + datafile['Promotion A dummy Week 1'] + datafile['Promotion A dummy Week 2'] + datafile['Promotion A dummy Week 3'] + datafile['Promotion A dummy Week 4'] + datafile['Promotion A dummy Week 5'] + datafile['Promotion A dummy Week 6'] + datafile['Promotion A dummy Week 7'] + datafile['Promotion B dummy Week 1'] + datafile['Promotion B dummy Week 2'] + datafile['Promotion B dummy Week 3'] + datafile['Promotion B dummy Week 4'] + datafile['Promotion B dummy Week 5'] + datafile['Promotion B dummy Week 6'] + datafile['Promotion B dummy Week 7'] + datafile['Promotion C dummy Week 1'] + datafile['Promotion C dummy Week 2'] + datafile['Promotion C dummy Week 3'] + datafile['Promotion C dummy Week 4'] + datafile['Promotion C dummy Week 5'] + datafile['Promotion C dummy Week 6'] + datafile['Promotion C dummy Week 7'] + datafile['Promotion D dummy Week 1'] + datafile['Promotion D dummy Week 2'] + datafile['Promotion D dummy Week 3'] + datafile['Promotion D dummy Week 4'] + datafile['Promotion D dummy Week 5'] + datafile['Promotion D dummy Week 6'] + datafile['Promotion D dummy Week 7'] + datafile['Promotion E dummy Week 1'] + datafile['Promotion E dummy Week 2'] + datafile['Promotion E dummy Week 3'] + datafile['Promotion E dummy Week 4'] + datafile['Promotion E dummy Week 5'] + datafile['Promotion E dummy Week 6'] + datafile['Promotion E dummy Week 7']", data=datafile).fit()
        print(model.summary())
        datafile['Model'] = model.params[0] + model.params[1]*datafile['BASE2_ACTUAL_PRICE'] + model.resid
        #and then filtering
        cycle, trend = sm.tsa.filters.hpfilter(datafile['Model'], 10)
        datafile['BASE2_' + price_name] = np.where(datafile[prom_name] == 0, datafile[price_name], trend)
    else:
        datafile['BASE2_' + price_name] = datafile['BASE2_' + price_name].interpolate()
        datafile['BASE2_' + price_name] = np.where(datafile['BASE2_' + price_name] > 0, datafile['BASE2_' + price_name], np.mean(datafile['BASE2_' + price_name]))
        cycle, trend = sm.tsa.filters.hpfilter(datafile['BASE2_' + price_name], 10)
        datafile['BASE2_' + price_name] = np.where(datafile[prom_name] == 0, datafile['BASE2_' + price_name], trend)
    datafile['BASE2_' + price_name] = np.where(datafile[price_name] > 0, datafile['BASE2_' + price_name], np.nan)
    datafile['DISCOUNT2_' + price_name] = np.where(datafile[price_name] == datafile['BASE2_' + price_name], np.nan, 1 - datafile[price_name]/datafile['BASE2_' + price_name])
    #datafile = datafile.drop(['oopPrice', 'DISCOUNT'], axis=1)
```

```
#Derive discount for ACTUAL_PRICE
calc_disc("ACTUAL_PRICE", "TYPE_OF_PROMOTION_2")
#Check the result for baseline
data1 = []
for col in enumerate(datafile.columns):
    if "ACTUAL_PRICE" == col[1] or "BASE1_ACTUAL_PRICE" == col[1] or "BASE2_ACTUAL_PRICE" == col[1]:
        graph = go.Scatter(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1])
        data1 = data1 + [graph]
    if "VOLUME_OF_SALES" == col[1]:
        graph = go.Bar(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1],
            yaxis="y2",
            opacity=0.3)
        data1 = data1 + [graph]
        data2 = data2 + [graph]
    if "PROMOTION_2" in col[1]:
        graph = go.Bar(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1],
            opacity=0.8,
            yaxis="y3")
        data1 = data1 + [graph]
        data2 = data2 + [graph]
#Set layout if needed
layout1 = dict(title="Volume Dynamics",
               yaxis=dict(range=[0, 1.5]),
               yaxis2=dict(overlaying="y", side="right"),
               yaxis3=dict(range=[0, 20], overlaying="y", side="left", showticklabels=False),
               legend=dict(orientation="h"))
#Plot result
iplot(dict(data=data1, layout=layout1))
```

```
#Check the result for Discounts
data1 = []
for col in enumerate(datafile.columns):
    if "DISCOUNT1_ACTUAL_PRICE" == col[1] or "DISCOUNT2_ACTUAL_PRICE" == col[1]:
        graph = go.Scatter(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1])
        data1 = data1 + [graph]
    if "VOLUME_OF_SALES" == col[1]:
        graph = go.Bar(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1],
            yaxis="y2",
            opacity=0.3)
        data1 = data1 + [graph]
        data2 = data2 + [graph]
    if "PROMOTION_2" in col[1]:
        graph = go.Bar(
            x=datafile.Week,
            y=datafile[col[1]],
            name=col[1],
            opacity=0.8,
            yaxis="y3")
        data1 = data1 + [graph]
        data2 = data2 + [graph]
#Set layout if needed
layout1 = dict(title="Discount Dynamics",
               yaxis=dict(range=[-0.1, 0.4]),
               yaxis2=dict(overlaying="y", side="right"),
               yaxis3=dict(range=[0, 20], overlaying="y", side="left", showticklabels=False),
               legend=dict(orientation="h"))
#Plot result
iplot(dict(data=data1, layout=layout1))
```

```
#Get Competitor Promotions
#Derive variables - smooth competitor price and flag promotion periods
def set_prom(price_name, period, s1, s2):
    for i in range(0, len(datafile)):
        #Temp variables
        value = np.nan
        sumPrice = 0
        counter = 0
        #Calculate average price in a window around week i
        if datafile.loc[i, price_name] > 0:
            for k in range(max(i - period, 0), min(i + period, len(datafile))):
                if datafile.loc[k, price_name] > 0:
                    sumPrice += datafile.loc[k, price_name]
                    counter += 1
            value = sumPrice/counter
        #Set smoothed price
        datafile.loc[i, "SMOOTH_" + price_name + "_" + str(period)] = value
        #Check for missing ("value != np.nan" is always True, so test with np.isnan)
        if not np.isnan(value):
            datafile.loc[i, "VAR_" + price_name + "_" + str(period)] = datafile.loc[i, price_name] - value
            if datafile.loc[i, "VAR_" + price_name + "_" + str(period)] < -1*s1*np.std(datafile[price_name]):
                datafile.loc[i, "TYPE_" + price_name + "_" + str(period)] = 1
                if i > 0:
                    if datafile.loc[i-1, "VAR_" + price_name + "_" + str(period)] < s2*np.std(datafile[price_name]):
                        datafile.loc[i-1, "TYPE_" + price_name + "_" + str(period)] = 1
            else:
                if i > 0:
                    if datafile.loc[i, "VAR_" + price_name + "_" + str(period)] < s2*np.std(datafile[price_name]) and datafile.loc[i-1, "VAR_" + price_name + "_" + str(period)] < -1*s1*np.std(datafile[price_name]):
                        datafile.loc[i, "TYPE_" + price_name + "_" + str(period)] = 1
                    else:
                        datafile.loc[i, "TYPE_" + price_name + "_" + str(period)] = 0
                else:
                    datafile.loc[i, "TYPE_" + price_name + "_" + str(period)] = 0
        else:
            datafile.loc[i, "VAR_" + price_name + "_" + str(period)] = np.nan
            datafile.loc[i, "TYPE_" + price_name + "_" + str(period)] = np.nan
```

```
#Derive discount for competitor
def def_smooth(competitor, per, s1, s2):
    for p in range(per, per+1):
        set_prom(competitor, p, s1, s2)
    #Check the result for Discounts
    data1 = []
    for col in enumerate(datafile.columns):
        if col[1] in [competitor] + ['SMOOTH_' + competitor + '_' + str(p) for p in range(per, per+1)]:
            graph = go.Scatter(
                x=datafile.Week,
                y=datafile[col[1]],
                name=col[1])
            data1 = data1 + [graph]
        if col[1] in ['TYPE_' + competitor + '_' + str(p) for p in range(per, per+1)]:
            graph = go.Bar(
                x=datafile.Week,
                y=datafile[col[1]],
                name=col[1],
                opacity=0.8,
                yaxis="y3")
            data1 = data1 + [graph]
    #Set layout if needed
    layout1 = dict(title=competitor + " Price Dynamics",
                   legend=dict(orientation="h"),
                   yaxis3=dict(range=[0, 20], overlaying="y", side="left", showticklabels=False))
    #Plot result
    iplot(dict(data=data1, layout=layout1))
```

```
#Derive discount and baseline for a competitor price
def comp_disct(competitor, promo):
    calc_disc(competitor, promo)
    #Check the result for baseline
    data1 = []
    for col in enumerate(datafile.columns):
        if competitor == col[1] or "BASE2_" + competitor == col[1]:
            graph = go.Scatter(
                x=datafile.Week,
                y=datafile[col[1]],
                name=col[1])
            data1 = data1 + [graph]
        if promo in col[1]:
            graph = go.Bar(
                x=datafile.Week,
                y=datafile[col[1]],
                name=col[1],
                opacity=0.8,
                yaxis="y3")
            data1 = data1 + [graph]
    #Set layout if needed
    layout1 = dict(title="BASE Price " + competitor,
                   yaxis3=dict(range=[0, 20], overlaying="y", side="left", showticklabels=False),
                   legend=dict(orientation="h"))
    #Plot result
    iplot(dict(data=data1, layout=layout1))
```

```
#Derive discount for competitor
for i in range(1, 8):
    def_smooth("COMPETITOR" + str(i) + "_PRICE", 3, 0.5, 0.1)
```

```
for i in range(1, 8):
    comp_disct("COMPETITOR" + str(i) + "_PRICE", "TYPE_COMPETITOR" + str(i) + "_PRICE_3")
```

```
comp_disct("VOLUME_OF_SALES","TYPE_OF_PROMOTION_2")
```

```
#Calculate empirical elasticity coefficients
def calc_el(price_name, vol_name):
    for i in range(1, len(datafile)):
        if datafile.loc[i, price_name] > 0 and datafile.loc[i-1, price_name] > 0 and (datafile.loc[i, price_name] - datafile.loc[i-1, price_name]) != 0:
            value = ((datafile.loc[i, vol_name] - datafile.loc[i-1, vol_name])/datafile.loc[i-1, vol_name])/((datafile.loc[i, price_name] - datafile.loc[i-1, price_name])/datafile.loc[i-1, price_name])
        else:
            value = np.nan
        datafile.loc[i, 'EL_' + price_name] = value
```

```
#Check the empirical elasticity for each competitor
for i in range(1, 8):
    price = "COMPETITOR" + str(i) + "_PRICE"
    volume = "BASE2_VOLUME_OF_SALES"
    calc_el(price, volume)
    temp = datafile.loc[:, [price, "EL_" + price, volume]]
    temp = temp.sort_values(price).dropna(thresh=1)
    data1 = []
    graph = go.Scatter(
        x=temp[price],
        y=temp["EL_" + price],
        name="EL_" + price)
    data1 = data1 + [graph]
    #Set layout if needed
    layout1 = dict(title="ELASTICITY " + price,
                   legend=dict(orientation="h"))
    #Plot result
    iplot(dict(data=data1, layout=layout1))
```

```
#Export data
export_list = ["Week","VOLUME_OF_SALES","DISCOUNT2_ACTUAL_PRICE", "BASE2_ACTUAL_PRICE"] + ['Week ' + str(p) for p in range(1, 8)]+["TYPE_COMPETITOR" + str(p) + "_PRICE_3" for p in range(1,8)] + ["BASE2_COMPETITOR" + str(p) +"_PRICE" for p in range(1,8)] + ["DISCOUNT2_COMPETITOR" + str(p) + "_PRICE" for p in range(1,8)] +["BASE1_COMPETITOR" + str(p) +"_PRICE" for p in range(1,8)]+["COMPETITOR" + str(p) +"_PRICE" for p in range(1,8)] + ["DISCOUNT1_COMPETITOR" + str(p) + "_PRICE" for p in range(1,8)]+['Promotion ' + p + ' dummy' for p in ["A", "B", "C", "D", "E"]]
datafile.loc[:,export_list].to_csv(pr_folder + '\\Data\\PREP_VARIABLES.csv')
```

```
datafile.head()
```

```
import xgboost as xgb
#Load data preparation libraries
import pandas as pd
import numpy as np
#Load Plotly
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
init_notebook_mode(connected=True)
from sklearn.model_selection import KFold
import statsmodels.formula.api as smf
```

```
#Import Data
pr_folder = "C:\\Users\\c10670A\\Documents\\ProjectLibrary\\Datathon_2018"
#Get the data
datafile = pd.read_csv(pr_folder + '\\Data\\All_data.csv', delimiter = ",").fillna(0)
datafile['Holiday'] = np.where(datafile['Week']==6,1,
np.where(datafile['Week']==58,1,
np.where(datafile['Week']==110,1,
np.where(datafile['Week']==33,1,
np.where(datafile['Week']==85,1,
np.where(datafile['Week']==137,1,0))))))
#Add last discount
```

```
#Uplift factors
model_list = ["BASE2_ACTUAL_PRICE_y","BASE2_ACTUAL_PRICE_PROM", "ACTUAL_PRICE","DISCOUNT2_ACTUAL_PRICE_x", "BASE2_ACTUAL_PRICE_HP_C", "BASE2_ACTUAL_PRICE_HP_T", "DISCOUNT2_ACTUAL_PRICE_y","LAST_DISCOUNT"]
model_list = model_list +["VOLUME_LAG1", "VOLUME_LAG2", "VOLUME_LAG3" ,"VOLUME_AVGLAG2", "VOLUME_AVGLAG3"]
#Promotion effectiveness
model_list = model_list +["Week from promotion start"] +['Week ' + str(p) for p in range(1, 8)]
model_list = model_list +["time_from_prev2"]+["time_from_prev_"+p for p in ["A","B","C","D","E"]]
model_list = model_list +['Promotion ' + p + ' dummy' for p in ["A", "B", "C", "D", "E"]]
model_list = model_list +['VOLUMES_LAST_PROM', 'TIME_LAST_PROM', 'AVG_VOL_LAST_PROM']
#Cannibalization
model_list = model_list +["Price_Diff_" + str(p) for p in range(1,7)]+ ["Price_log_Diff_" + str(p) for p in range(1,7)]
model_list = model_list +["TYPE_COMPETITOR" + str(p) + "_PRICE_3" for p in range(1,8)] + ["BASE2_COMPETITOR" + str(p) +"_PRICE" for p in range(1,8)] + ["DISCOUNT2_COMPETITOR" + str(p) + "_PRICE" for p in range(1,8)] +["BASE1_COMPETITOR" + str(p) +"_PRICE" for p in range(1,8)]+["COMPETITOR" + str(p) +"_PRICE" for p in range(1,8)] + ["DISCOUNT1_COMPETITOR" + str(p) + "_PRICE" for p in range(1,8)]
print(len(model_list))
for i in model_list:
    print(i)
X_columns = datafile.loc[:,model_list].columns
x_train = datafile.loc[:,model_list].values
y_train = datafile["VOLUME_OF_SALES"].values
```
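The `VOLUME_LAG*` and `VOLUME_AVGLAG*` features collected above are derived earlier in the notebook; a minimal sketch of how such lag features can be built with pandas `shift` and `rolling` (assuming the same naming convention) is:

```python
import pandas as pd

df = pd.DataFrame({"VOLUME_OF_SALES": [100, 110, 95, 130, 120]})

# Lagged volumes: the value observed k weeks earlier
for k in (1, 2, 3):
    df["VOLUME_LAG" + str(k)] = df["VOLUME_OF_SALES"].shift(k)

# Average over the previous k weeks (current week excluded via shift(1))
for k in (2, 3):
    df["VOLUME_AVGLAG" + str(k)] = df["VOLUME_OF_SALES"].shift(1).rolling(k).mean()
```

Shifting before averaging keeps the features strictly backward-looking, so the model never sees the week it is trying to predict.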

```
results = pd.DataFrame()
for i in range(1):
    print("ITERATION " + str(i))
    #XGB model hyperparameters (index into the candidate grids)
    xgb_params = {}
    xgb_params['booster'] = ['gbtree', 'gblinear', 'dart'][2]
    xgb_params['objective'] = ['reg:linear', 'reg:gamma', 'reg:tweedie'][0]
    xgb_params['learning_rate'] = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1][1]
    xgb_params['eval_metric'] = ['rmse', 'mae', 'logloss', 'error', 'merror', 'mlogloss', 'ndcg', 'map'][0]
    xgb_params['colsample_bytree'] = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0][8]
    xgb_params['max_depth'] = [1, 2, 4, 6, 8, 10, 12, 14, 16, 20][3]
    xgb_params['subsample'] = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0][5]
    xgb_params['min_child_weight'] = [1, 2, 4, 5, 6, 8, 10, 20, 50, 100][2]
    xgb_params['n_estimators'] = [10, 50, 100, 300, 500, 800, 1000, 1500, 2000][2]
    num_boost_round = [5, 10, 25, 50, 100, 200, 250, 300, 500, 900][3]
    NFOLDS = 5
    #random_state has no effect when shuffle is False, so it is omitted
    kf = KFold(n_splits=NFOLDS)
    ntrain = x_train.shape[0]
    oof_train = np.zeros((ntrain,))
    oof_test_skf = np.empty((NFOLDS, ntrain))
    #Go through each fold, fit the model and accumulate the scores on the train and test samples
    k = 0
    temp1 = 0
    temp2 = 0
    for train_index, test_index in kf.split(x_train):
        x_tr = x_train[train_index]
        y_tr = y_train[train_index]
        x_te = x_train[test_index]
        y_te = y_train[test_index]
        dtrain = xgb.DMatrix(x_tr, y_tr, feature_names=list(X_columns))
        dtrain_test = xgb.DMatrix(x_te, feature_names=list(X_columns))
        gbm = xgb.train(xgb_params, dtrain, num_boost_round=num_boost_round)
        #Squared Pearson correlation between actuals and predictions (R^2-style score)
        temp1 += np.power(np.corrcoef(y_tr, gbm.predict(dtrain))[0, 1], 2)
        temp2 += np.power(np.corrcoef(y_te, gbm.predict(dtrain_test))[0, 1], 2)
        k = k + 1
    print(temp1 / k)
    print(temp2 / k)
```
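The `temp1`/`temp2` accumulators in the fold loop average the squared Pearson correlation between actual and predicted volumes, an R²-style score. As a small self-contained illustration of that metric on made-up numbers:

```python
import numpy as np

def r2_corr(y_true, y_pred):
    """Squared Pearson correlation between actuals and predictions."""
    return np.corrcoef(y_true, y_pred)[0, 1] ** 2

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])  # predictions close to the actuals
score = r2_corr(y_true, y_pred)
```

Unlike the usual coefficient of determination, this score ignores bias and scale in the predictions; it only rewards predictions that move with the actuals.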

```
fimp = pd.Series(gbm.get_score(), name = "Score").sort_values(ascending = False)
print(fimp)
```

```
#Copy File
datafile_2 = datafile.copy()
#Define the list of variables to keep on the original (non-log) scale
nolog_list = ["Week from promotion start"] +['Week ' + str(p) for p in range(1, 8)] + ["TIME_LAST_PROM"]
nolog_list = nolog_list +["time_from_prev2"]+["time_from_prev_"+p for p in ["A","B","C","D","E"]]
nolog_list = nolog_list +['Promotion ' + p + ' dummy' for p in ["A", "B", "C", "D", "E"]]
nolog_list = nolog_list +["TYPE_COMPETITOR" + str(p) + "_PRICE_3" for p in range(1,8)]
nolog_list = nolog_list +["Price_log_Diff_" + str(p) for p in range(1,7)]
formula = "datafile_2['VOLUME_OF_SALES'] ~ "
#log_list = list(fimp.index)
log_list = []
for i in model_list:
    if i not in ['ACTUAL_PRICE'] + nolog_list:
        log_list = log_list + [i]
k = 0
for i in log_list:
    if k == 0:
        formula = formula + " datafile_2['" + i + "']"
    else:
        formula = formula + " + datafile_2['" + i + "']"
    k = k + 1
    #if i not in nolog_list:
    #    datafile_2[i] = np.log(datafile_2[i].replace(0, np.nan))
    #    datafile_2[i] = datafile_2[i].replace(np.nan, 0)
#Log volume
#datafile_2["VOLUME_OF_SALES"] = np.log(datafile_2["VOLUME_OF_SALES"])
model1 = smf.ols(formula=formula, data=datafile_2).fit()
print(model1.summary())
```

```
formula = "datafile['VOLUME_OF_SALES'] ~ "
#for i in list(fimp.index):
# formula = formula + " + datafile['" + i + "']"
remove_list = ['BASE2_COMPETITOR6_PRICE', 'Price_log_Diff_5', 'DISCOUNT2_COMPETITOR5_PRICE', 'BASE2_COMPETITOR1_PRICE',
               'DISCOUNT2_COMPETITOR4_PRICE', 'VOLUMES_LAST_PROM', 'Price_log_Diff_4', 'Price_log_Diff_1',
               'BASE2_COMPETITOR3_PRICE', 'DISCOUNT2_COMPETITOR3_PRICE', 'BASE2_COMPETITOR7_PRICE', 'BASE2_COMPETITOR5_PRICE',
               'DISCOUNT2_COMPETITOR2_PRICE', 'BASE2_COMPETITOR4_PRICE', 'DISCOUNT2_COMPETITOR1_PRICE',
               'DISCOUNT2_COMPETITOR6_PRICE', 'DISCOUNT2_COMPETITOR7_PRICE', 'BASE2_COMPETITOR2_PRICE',
               'VOLUME_AVGLAG2', 'VOLUME_LAG2', 'BASE2_ACTUAL_PRICE_HP_T', 'Price_Diff_6', 'COMPETITOR5_PRICE',
               'COMPETITOR7_PRICE', 'time_from_prev_A', 'time_from_prev_C', 'time_from_prev_D', 'AVG_VOL_LAST_PROM',
               'TYPE_COMPETITOR5_PRICE_3', 'TYPE_COMPETITOR1_PRICE_3', 'BASE1_COMPETITOR6_PRICE', 'BASE1_COMPETITOR7_PRICE',
               'Week 3', 'Price_Diff_2', 'BASE1_COMPETITOR3_PRICE', 'DISCOUNT1_COMPETITOR7_PRICE',
               'DISCOUNT1_COMPETITOR3_PRICE', 'DISCOUNT1_COMPETITOR2_PRICE', 'Price_Diff_1', 'DISCOUNT1_COMPETITOR4_PRICE',
               'Price_Diff_3', 'BASE2_ACTUAL_PRICE_PROM', 'COMPETITOR4_PRICE', 'COMPETITOR6_PRICE', 'Week 7',
               'time_from_prev_B', 'TYPE_COMPETITOR7_PRICE_3', 'VOLUME_LAG3', 'Promotion E dummy', 'VOLUME_AVGLAG3',
               'BASE2_ACTUAL_PRICE_HP_C', 'BASE1_COMPETITOR4_PRICE', 'Price_log_Diff_2', 'Week from promotion start',
               'TYPE_COMPETITOR3_PRICE_3', 'Price_Diff_5', 'COMPETITOR3_PRICE', 'COMPETITOR2_PRICE', 'ACTUAL_PRICE',
               'Promotion B dummy', 'TYPE_COMPETITOR6_PRICE_3', 'DISCOUNT1_COMPETITOR5_PRICE', 'TYPE_COMPETITOR4_PRICE_3',
               'BASE1_COMPETITOR5_PRICE', 'Week 4', 'Price_Diff_4', 'Week 2', 'BASE1_COMPETITOR2_PRICE',
               'BASE1_COMPETITOR1_PRICE', 'DISCOUNT1_COMPETITOR1_PRICE', 'Week 6', 'DISCOUNT2_ACTUAL_PRICE_x',
               'COMPETITOR1_PRICE', 'Week 5', 'Price_log_Diff_6', 'DISCOUNT1_COMPETITOR6_PRICE',
               'Competitor3_Price_Impact', 'Promotion C dummy', 'Competitor1_Price_Impact', 'TYPE_COMPETITOR2_PRICE_3',
               'Promotion D dummy', 'time_from_prev_E', 'time_from_prev2']
k = 0
corr_list = []
for i in model_list:
    if i not in remove_list:
        corr_list = corr_list + [i]
        if k == 0:
            formula = formula + " datafile['" + i + "']"
        else:
            formula = formula + " + datafile['" + i + "']"
        k = k + 1
corr_list = corr_list + ["VOLUME_OF_SALES"]
model = smf.ols(formula=formula, data=datafile).fit()
print(model.summary())
```

```
for i in range(2, 3):
    formula = "datafile['VOLUME_OF_SALES'] ~ "
    model_list2 = ['BASE2_ACTUAL_PRICE_y', 'DISCOUNT2_ACTUAL_PRICE_y', 'LAST_DISCOUNT', 'VOLUME_LAG1',
                   'Week 1', 'Promotion A dummy', 'TIME_LAST_PROM']
    model_list2 = model_list2 + ["Price_Relative_Diff__3"]
    model_list2 = model_list2 + ["COMP_1_only", "Holiday"]
    k = 0
    corr_list = []
    for j in model_list2:
        if k == 0:
            formula = formula + " datafile['" + j + "']"
        else:
            formula = formula + " + datafile['" + j + "']"
        k = k + 1
    corr_list = corr_list + ["VOLUME_OF_SALES"]
    model = smf.ols(formula=formula, data=datafile).fit()
    print(model.summary())
```
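The formula strings in these cells embed `datafile['...']` lookups because many column names (e.g. `Week 1`, `Promotion A dummy`) are not valid Python identifiers. Patsy's `Q()` quoting, which `statsmodels` formulas support, achieves the same thing more idiomatically; a sketch with made-up data:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "VOLUME_OF_SALES": [100, 120, 90, 150, 110, 130],
    "Week 1": [0, 1, 0, 1, 0, 1],
    "Promotion A dummy": [1, 0, 0, 1, 1, 0],
})

# Q("...") quotes column names that are not valid Python identifiers
predictors = ["Week 1", "Promotion A dummy"]
formula = "VOLUME_OF_SALES ~ " + " + ".join('Q("%s")' % c for c in predictors)
model = smf.ols(formula=formula, data=df).fit()
```

This keeps the formula independent of the DataFrame's variable name, so the same string works for `datafile` and `datafile_2` alike.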

# References

- Ana Maria Bobeica, "Marketing Mix Policies in FMCG Case-Study: The Advertising Strategy"
- Tomasz Kolanowski, "Predictive Marketing Mix Modelling in FMCG Packaged Food Category Including Price and Advertising Impact"


## 5 thoughts on “Price and promotion optimization for FCMG”

Excellent work team!

The most difficult part of this challenge is to understand the data, create new features and rerun the predictive models until you achieve good accuracy.

As you mentioned, if you run a predictive model on the initial dataset you will get extremely low modelling accuracy.

I will vote based on the below criteria:

1. business understanding

2. feature engineering

3. modelling accuracy

4. insights & final results

You achieved a good modeling accuracy and you created a number of new features based on your excellent understanding of the data and the business case.

Moreover, you visualized all the variables in a meaningful way and made the right decisions when creating new features.

You could further increase the accuracy of the model by implementing a better base price algorithm and by calculating the baseline volume in a better way.

Nothing more to add after Agamemnon 🙂

I also like your validation approach, keeping in mind the small amount of data, as well as the different scenarios you introduced for the forecasts.

And just to add: keeping in mind the limited time, I would be very happy to see a continuation of your work here.

Questions during your presentation:

1. @agamemnon

What was the most challenging part of this analysis, and how did you come up with the elasticities at the end?

2. @mladensavov

What is the explanatory power of your model? Could you quote the R value? How many explanatory variables did you eliminate to build the model?

3. @tonypetrov

Having too many variables in your model may impact performance. Did you take any measures to combat the potential slowdown?

Hey all, Team BOTS are now presenting their solution and we are LIVE at https://www.youtube.com/watch?v=INUbkBHqYiE

Don't forget to ask your questions here or on YouTube, and after their presentation we will share the resources with you, of course! 🙂

Take the chance and leave your comments now!