
2020_Kaufland_Raven_Delivery


Datathon 2020 Kaufland – Optimize Retail Supply Chain

 

Team Name: Raven Delivery


Authors

  • Borislav Aymaliev
  • Gabriela Vasileva
  • Irina Naskinova
  • Zainab Lawal

Team Toolset

  • Python, Pandas, MySQL
  • Excel
  • Kaufland dataset

 

Business Understanding

Client – Kaufland

In today's innovative era, we can not only optimise cost by predicting consumer behaviour from data analytics, but also apply machine learning to avoid unnecessary costs. Here, to manage storage costs, we consider the other cost factors in order to develop key functions. These key functions encourage a reward, which in this case is a reduction in the amount spent on storage.

Data Understanding

Two datasets were provided as CSV files: the first is the master data and the second is the sales data. The master file contains data on storage costs, expiration periods, and transportation parameters for the items, whereas the sales file contains sales information over the last year for 100 individual items.

The CSV files were initially imported into Excel to get a feel for the data and what could be done with it.

From these files, the key task is to determine the costs attributed to handling, transportation, storage and wastage. The manipulation of this data is discussed further under Data Preparation.

Data Preparation

Raw Data

Dataset 1 → Data Type: Master Sheet → Data Format: CSV

Dataset 2 → Data Type: Sales Sheet → Data Format: CSV

Data preparation

To develop a better understanding, we created two new tables, namely stock per day and order per day, based on information derived from the master sheet and sales sheet provided by our client, Kaufland.

Stock per Day

The stock per day table examines the amount of stock available daily.

Assumptions:

  • On 01/01/2019 the items were at maximum quantity
  • No sales were made by the end of the first day

current_date states the actual date on which the stock check occurs. item_id and item_name correspond to the unique identifier and name of the items available; this information has been provided by our client.

The remaining columns have been created by us based on the stated assumptions and the calculations developed. The following methodology has been executed in accordance with the case details:

[Insert snapshot of table]

Yesterday Quantity

yesterday_qty describes the stock of an item available in the shop and in the warehouse from the day before. For simplification, yesterday_qty on the first day is taken to be equal to the maximum stock available on the shelves. For example, on 01/01/2019 the quantity for Dried Basil is given as 40. Therefore, on the first day:

yesterday_qty = maximum stock

On subsequent dates, yesterday_qty is given by the following equation:

yesterday_qty = quantity of items in storage on the previous day + quantity of items on the shop floor – number of items sold on the previous day

End of day quantity

It is crucial to take into consideration the items that are currently in stock on a daily basis. Similarly to yesterday's quantity, we take the quantity on the shop floor, the quantity in storage and the total sold for each item:

end_day_item_qty = quantity of items in storage on the current day + quantity of items on the shop floor – number of items sold on the current day

In terms of the table columns, this becomes:

end_day_item_qty = yesterday_qty + ordered_qty – sold_qty
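As an illustration, a minimal pandas sketch of this stock-per-day bookkeeping for a single item (the function and the ordered_qty/sold_qty column names are hypothetical placeholders, not the actual pipeline):

import pandas as pd

def stock_per_day(daily: pd.DataFrame, max_stock: int) -> pd.DataFrame:
    # daily: one row per date for a single item, with hypothetical columns
    # ordered_qty (units delivered that day) and sold_qty (units sold that day)
    daily = daily.sort_values("the_date").copy()
    yesterday, end_of_day, prev = [], [], max_stock  # assumption: full stock on day 1
    for _, row in daily.iterrows():
        yesterday.append(prev)
        # end-of-day stock = yesterday's stock + units delivered - units sold
        prev = prev + row["ordered_qty"] - row["sold_qty"]
        end_of_day.append(prev)
    daily["yesterday_qty"] = yesterday
    daily["end_day_item_qty"] = end_of_day
    return daily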

Wastage

It is necessary to identify the items which have expired, as these contribute to the wastage cost incurred by the company. Expired items need to be checked and identified on a daily basis. The time taken for an item to expire is given as mhd.

Determination of expired items

When an item is delivered, the date is registered as the delivered date. On a daily basis, the number of days that have passed between the delivery date and the current date is compared to the mhd. If the difference between these dates is greater than the mhd for an item, the item is disposed of and contributes to the waste cost; otherwise, the item remains in stock.

As an expression:

If current date – delivery date > mhd, consider the item as wastage;

otherwise, keep the item in stock.
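A short sketch of this daily expiry check, assuming a hypothetical stock table with a delivered_date column and the mhd (shelf life in days) taken from the master data:

import pandas as pd

def expired_mask(stock: pd.DataFrame, current_date: str) -> pd.Series:
    # number of days since each batch was delivered
    age_days = (pd.Timestamp(current_date) - pd.to_datetime(stock["delivered_date"])).dt.days
    # an item counts as wastage once it has been in stock longer than its mhd
    return age_days > stock["mhd"]

# usage sketch: split today's stock into wastage and remaining stock
# wastage = stock[expired_mask(stock, "2019-06-01")]
# remaining = stock[~expired_mask(stock, "2019-06-01")]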

Quantity of Items Stored

This column records the quantity of items kept in stock at the end of each day.

Order per Day

The order per day table shows the calculations used to determine the number of orders made daily. The calculations are broken down below:

The order ID, item ID, and item order quantity were obtained from the xx sheet. When an order is made, it is crucial to fill the pallets to the maximum height before they go on the truck for transportation; each pallet should be filled as close to its maximum height as possible to incur lower transportation charges. Each item on a pallet takes up a fraction of the maximum pallet height, which we define as pallet_fraction_per_item_per_order.

actual_order_qty is defined as the difference between the maximum stock to be kept for each item (max_qty) and the number of sales of that item in a day (sales_qty):

actual_order_qty = max_qty – sales_qty

pallet_fraction_per_item_per_order can then be calculated as:

pallet_fraction_per_item_per_order = (actual_order_qty / transport_qty) / 33
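A minimal sketch of the order-per-day calculation in pandas, using hypothetical column names (max_stock from the master data stands in for max_qty) and keeping the constant 33 from the formula above:

import pandas as pd

def order_per_day(master: pd.DataFrame, sales_qty: pd.Series) -> pd.DataFrame:
    # master: item_id, max_stock (max_qty in the text) and transport_qty per item
    # sales_qty: units sold per item_id for the given day, indexed by item_id
    out = master[["item_id", "max_stock", "transport_qty"]].copy()
    out["sales_qty"] = out["item_id"].map(sales_qty).fillna(0)
    # order enough units to bring each item back up to its maximum stock level
    out["actual_order_qty"] = out["max_stock"] - out["sales_qty"]
    # fraction of pallet capacity taken by this order, per the formula above
    out["pallet_fraction_per_item_per_order"] = (
        out["actual_order_qty"] / out["transport_qty"]
    ) / 33
    return out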

 

Data Analysis
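The snippets below read the tables through a db_connection object. It is not defined in the excerpt, but with the Python/MySQL toolset listed above it could be created roughly as follows (the database name and credentials are placeholders):

import pandas as pd
from sqlalchemy import create_engine

# placeholder connection string for a local MySQL database holding the two CSVs
db_connection = create_engine("mysql+pymysql://user:password@localhost:3306/datathon")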

df_sale = pd.read_sql('SELECT * FROM datathon_sale limit 10', con=db_connection)

 

item_id the_date sold_qty
0 40001260 2019-01-02 7
1 40001260 2019-01-03 7
2 40001260 2019-01-04 16
3 40001260 2019-01-05 16
4 40001260 2019-01-06 8
5 40001260 2019-01-07 23
6 40001260 2019-01-08 12
7 40001260 2019-01-09 8
8 40001260 2019-01-10 4
9 40001260 2019-01-11 11

 

 

df_masterdata = pd.read_sql('SELECT * FROM datathon_masterdata limit 10', con=db_connection)

 

item_id item_name unit order_qty transport_qty min_stock max_stock item_prio_ storage_cost mhd
0 101900 Dried Basil ST 20 2880 10 40 0.000 0.000563 548
1 181130 Softcookies ST 28 1680 30 150 0.002 0.000568 98
2 188640 Carrots juice ST 12 1296 6 60 0.002 0.000473 243
3 192100 Ice Cream stick ST 24 5304 40 100 0.002 0.000358 480
4 211110 Cake Cream ST 48 5760 12 48 0.002 0.000557 243
5 236490 Chiken spice ST 30 6300 10 40 0.002 0.000584 405
6 250520 Corn flour ST 12 1296 12 60 0.003 0.000552 547
7 263350 Deo ST 12 3072 18 120 0.004 0.000944 821
8 280560 Soda ST 6 264 6 60 0.005 0.000561 135
9 293220 Bio Bar ST 360 19440 10 50 0.005 0.000758 248

 

item_id  order_qty  transport_qty  min_stock  max_stock
count      10.000000   10.00000      10.000000  10.000000   10.00000
mean   219902.000000   55.20000    4729.200000  15.400000   72.80000
std     57165.041337  107.78662    5570.190517  11.117554   37.57304
min    101900.000000    6.00000     264.000000   6.000000   40.00000
25%    189505.000000   12.00000    1392.000000  10.000000   48.50000
50%    223800.000000   22.00000    2976.000000  11.000000   60.00000
75%    260142.500000   29.50000    5646.000000  16.500000   90.00000
max    293220.000000  360.00000   19440.000000  40.000000  150.00000

item_prio_  storage_cost         mhd
count   10.000000     10.000000   10.000000
mean     0.002700      0.000592  376.800000
std      0.001567      0.000158  225.000642
min      0.000000      0.000358   98.000000
25%      0.002000      0.000553  243.000000
50%      0.002000      0.000562  326.500000
75%      0.003750      0.000580  530.250000
max      0.005000      0.000944  821.000000

 

print (df_sale.describe())

 

item_id   sold_qty
count        10.0  10.000000
mean   40001260.0  11.200000
std           0.0   5.711587
min    40001260.0   4.000000
25%    40001260.0   7.250000
50%    40001260.0   9.500000
75%    40001260.0  15.000000
max    40001260.0  23.000000

 

print (df_sale.describe(include=['object']))

 

the_date
count           10
unique          10
top     2019-01-03
freq             1

 

print (df_sale.describe(include='all'))

 

item_id    the_date   sold_qty
count         10.0          10  10.000000
unique         NaN          10        NaN
top            NaN  2019-01-03        NaN
freq           NaN           1        NaN
mean    40001260.0         NaN  11.200000
std            0.0         NaN   5.711587
min     40001260.0         NaN   4.000000
25%     40001260.0         NaN   7.250000
50%     40001260.0         NaN   9.500000
75%     40001260.0         NaN  15.000000
max     40001260.0         NaN  23.000000

 

df_order = pd.read_sql('SELECT * FROM datathon_order limit 10', con=db_connection)

print (df_order.describe())

 

order_id       item_id  ordered_units_qty
count      10.0  1.000000e+01          10.000000
mean       12.0  5.500268e+06           1.400000
std         0.0  1.247951e+07           2.221111
min        12.0  3.462000e+05           0.000000
25%        12.0  6.034950e+05           0.000000
50%        12.0  6.582700e+05           0.500000
75%        12.0  7.840875e+05           1.750000
max        12.0  4.000126e+07           7.000000

 

group_by_date = df_sale.groupby(by=['the_date'])

sold_data_avg = group_by_date.mean()

sold_data_count = group_by_date.count()

 

print(sold_data_avg)

 

item_id  sold_qty
the_date
2019-01-02  40001260         7
2019-01-03  40001260         7
2019-01-04  40001260        16
2019-01-05  40001260        16
2019-01-06  40001260         8
2019-01-07  40001260        23
2019-01-08  40001260        12
2019-01-09  40001260         8
2019-01-10  40001260         4

df_sale.hist(column='sold_qty')

 

df_sale['sold_qty'].plot(kind='kde')

 

item_id   the_date
40001260  2019-01-07    23
          2019-01-04    16
          2019-01-05    16
          2019-01-08    12
          2019-01-11    11
Name: sold_qty, dtype: int64
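The ranked series above shows the largest daily sold quantities per item; the exact call is not included in the excerpt, but output of this shape can be produced with, for example:

# top five daily sold quantities across (item_id, the_date) pairs
top_sales = df_sale.groupby(["item_id", "the_date"])["sold_qty"].sum().nlargest(5)
print(top_sales)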

 

df_sale['sold_qty'].plot(kind='hist',bins=50,figsize=[12,6])
df_sale['sold_qty'].plot(kind='kde')


Model Preparation

Given the limited time frame, we decided to go ahead with a simplified Q-learning approach. The learning process consists of the following steps:

  1. Agent action
    This is a binary decision to order an item or not to order an item today. It should be decided for all possible items.
    The number of items is 103, so the agent could take 2^103 possible joint actions. To address this, we consider each item independently, and the decision for each item is simply to order or not to order.
  2. “Transportation” and “handling” costs calculation
    In any case, our agent's actions incur transportation and handling costs. The function to calculate them should work on a per-item basis, so that we can attribute these costs to the agent's individual actions, very much like in the case description.
  3. Update of the current state, given the daily sales
    This step is essentially an adjustment to the current state of the stock at the shop. Even though 37025 historical sales records are available in the datathon_sales.csv file, they may be insufficient to complete the training of our agent.
    Therefore, our approach is to sample data points for each item from its historical distribution, roughly preserving the frequency of its sales during the initial adjustment of the algorithm.
  4. “Storage” and approximate “waste” costs calculation
    At the end of the day, storage and waste costs need to be calculated. Storage costs are easier, given the fact that we already know what was sold during the day.
    Waste cost, on the other hand, needs to be attributed to the actions taken on the current day, even though items may expire many days into the future. We would expect the best results from a time-series forecast, but for simplicity we use a simple calculation for our expectation.
  5. Reward formula, given the calculated waste costs.
    The outcome of this formula should provide our agent with information on how they performed for this given day. It is the last step of our feedback loop, from which the agent is to learn the impact of their actions.
    In essence, based on this result our agent adjusts the probability of ordering each item in the future. And when the process continues on the next day, they will base their decision on the old information, augmented with the new feedback.

This is a simple illustrative model-free Q-learning demo: 002_Model-free_Qlearning_Demo
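In the same spirit, a minimal per-item sketch of the loop described in steps 1–5 could look like the following (the state discretisation, cost values and sales sampling are simplified placeholders, not the final implementation):

import numpy as np

rng = np.random.default_rng(0)

def run_item_agent(max_stock, storage_cost, sales_history, days=365,
                   alpha=0.1, gamma=0.9, epsilon=0.1):
    # state: current stock level; action: 0 = do not order, 1 = order
    q = np.zeros((max_stock + 1, 2))
    stock = max_stock
    for _ in range(days):
        state = stock
        # step 1: epsilon-greedy choice between ordering and not ordering
        action = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q[state]))
        # step 2: transportation/handling cost only when an order is placed (placeholder value)
        cost = 1.0 if action == 1 else 0.0
        if action == 1:
            stock = max_stock  # replenish back up to the maximum stock level
        # step 3: sample today's sales from the item's historical distribution
        sold = min(stock, int(rng.choice(sales_history)))
        stock -= sold
        # step 4: storage cost for whatever is left at the end of the day
        cost += storage_cost * stock
        # step 5: the reward is the negative total cost for the day
        reward = -cost
        q[state, action] += alpha * (reward + gamma * q[stock].max() - q[state, action])
    return q

# usage sketch with the Dried Basil parameters from the master data shown earlier
# q_table = run_item_agent(max_stock=40, storage_cost=0.000563, sales_history=[7, 7, 16, 16, 8])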

Next steps are to apply it to our case data.

TBC

Modeling

Reinforcement learning has been selected as the modelling framework.

The main idea we used was to

We had the following major improvements:

Without any of the improvements we had accuracy of around …%, which was very high to begin with.

Although this produced relatively good results, there are cases in which

Evaluation

In terms of time complexity our solution is O(N * M), where N is the number of , and M is .

We used the training data for evaluation of our model (but not training per se).

The results there proved to be .

We noticed that most errors we get are caused by .

We decided to write a custom logic for

Deployment

We ran the algorithm on all the test data and created a

Used Libraries and Technologies

Python for main logic (Open source, https://www.python.org/)

Sources

The sources are available in GitLab. We decided to use separate scripts for separate tasks, since this allowed us to work on specific tasks (e.g. logic) while others were being executed in the background. The sources are in:

This is a simple illustrative Q-learning demo: 002_Model-free_Qlearning_Demo

https://gitlab.com/datathon2020_kaufland_big/orsc/-/tree/master


3 thoughts on "2020_Kaufland_Raven_Delivery"

  1. Hi Everyone,
    I find it really nice that you have tried to follow the case suggestion and implement a Reinforcement Learning algorithm to solve it.
    Please don't hesitate to continue working on this even after the Datathon, and finish the article and actually apply RL to the data set.
    Best regards,

  2. Great approach regarding the data understanding and a good explanation of the variables and derived variables. Sound understanding of the business process and caveats. This paper would benefit from a general explanation of the Reinforcement Learning framework, such as (1).
    Although the team explains how they think the environment and the agent are to act, I would like to see the complete solution and how it really performs on this task.
    Congratulations for the hard work!

    1 – https://arxiv.org/pdf/2005.01643.pdf
