On 22 January 2018, Amazon opened Amazon Go, its first physical store without cashiers or checkout lines: customers simply grab products from the shelves and go, while AI algorithms detect which products they have taken.
Kaufland offers the unique opportunity to work with its internal data on a similar problem: developing a computer vision algorithm that detects which fruit or vegetable is being scanned.
Business problem formulation
It is nice to offer your customers more than sixty types of vegetables. It is not nice if the
customer is forced to search for these products on a scale in a menu with more than sixty items.
Making this procedure more comfortable will enhance customer satisfaction with Kaufland. Your task
is therefore to build an image recognition system that reliably recognizes which type of vegetable
the customer has selected.
A typical user scenario is: the customer weighs one type of vegetable/fruit at a time, which
may (in the case of cherries or plums) or may not (e.g. watermelons) be wrapped in a plastic bag. The algorithm embedded in the scale automatically recognizes the type of fruit/vegetable from an
image taken by a camera located above the scale. The scale's monitor shows the several
fruits/vegetables that are most likely on the scale and asks for the customer's confirmation. Ideally,
a single fruit/vegetable will be shown on the screen, but as many of them share similar
properties, it is also acceptable for several items to be shown to the customer.
The lighting, the angle from which the pictures are taken and the size of the pictures will be the same for every image.
Research problem specification
The goal of this research task is to design an image recognition system that recognizes and ranks
the fruit/vegetable types with associated probabilities. The input to the system is an image taken by a
camera located above the weighing scale in a real store environment.
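To make "recognizes and ranks with associated probabilities" concrete, the sketch below converts raw model scores into a ranked probability list via softmax. The class names, scores, and the `rank_classes` helper are hypothetical illustrations, not part of the case data:

```python
import math

def rank_classes(logits, labels, top_k=3):
    """Convert raw model scores (logits) into probabilities via softmax
    and return the top_k most likely labels with their probabilities."""
    m = max(logits)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# Hypothetical scores for four classes
labels = ["apple", "banana", "cherry", "plum"]
logits = [2.0, 0.5, 1.5, -1.0]
print(rank_classes(logits, labels))  # most probable class first
```

The scale's screen would then display the top-ranked labels and ask the customer to confirm.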
The challenge for you is to recognize 3D objects that are naturally grown. For example, different
types of apples can look quite different even though they are the same kind of fruit. Furthermore, vegetables
appear different when rotated, so your model has to deal with this too. Last but not least,
vegetables are often already wrapped in bags when being weighed. Accordingly, they have to be reliably
recognized in spite of the strong reflections on the bags that may occur depending on the store's
lighting. Ideally, your model works without retraining even if the store gets a new lighting
system, the bags around the vegetables are badly crumpled, and the vegetables have an
extraordinary shape. Perhaps your algorithm will even be better at recognizing fruit than you!
Pre-processing of the image, such as filtering, background removal, and edge detection, may increase the accuracy of your model.
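As an illustration of one such pre-processing step, the sketch below removes a bright background from a tiny synthetic grayscale image by simple thresholding. The image values, the threshold, and the `remove_background` helper are made up for illustration; a real pipeline would more likely rely on a library such as OpenCV or scikit-image:

```python
def remove_background(image, threshold=200):
    """Set pixels brighter than `threshold` (assumed background, e.g. the
    white scale plate) to 0, keeping only the darker foreground object."""
    return [[0 if px > threshold else px for px in row] for row in image]

# Tiny synthetic 4x4 grayscale image: bright plate (255) with a dark fruit (80-120)
image = [
    [255, 255, 255, 255],
    [255,  90, 110, 255],
    [255,  80, 120, 255],
    [255, 255, 255, 255],
]
cleaned = remove_background(image)
print(cleaned[1])  # -> [0, 90, 110, 0]
```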
You are invited to try many different approaches to enhance the accuracy of your model. For
example, you can start with a Convolutional Neural Network. An approach that might enhance
performance further is a Capsule Network (Hinton, 2017) for recognizing and rotating complex three-dimensional objects such as wrinkly potatoes. Your model should achieve high accuracy even if the shape of the vegetables and the lighting conditions differ strongly from those seen during training.
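To make the CNN building block concrete, here is a minimal sketch of the 2D convolution such a network applies at every layer, written in pure Python for readability. The 3x3 vertical-edge kernel is just an illustrative choice; in practice you would use a framework such as Keras rather than this hand-rolled loop:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most deep
    learning frameworks) of a grayscale image with a square kernel."""
    k = len(kernel)
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(k) for b in range(k)))
        out.append(row)
    return out

# A vertical-edge kernel applied to a 4x4 image whose right half is bright
image = [[0, 0, 9, 9]] * 4
kernel = [[-1, 0, 1]] * 3
print(conv2d(image, kernel))  # -> [[27, 27], [27, 27]]
```

A CNN stacks many such filters, learning the kernel values from data instead of fixing them by hand.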
The model will be assessed by its final accuracy on the test dataset. Accuracy will be
calculated as the number of correctly predicted objects over all objects in the test dataset. An object is
considered correctly predicted if its true class is among the top 3 most probable classes in the model output.
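Under that definition, the evaluation metric can be sketched as follows (the label names and ranked predictions are made up for illustration):

```python
def top_k_accuracy(true_labels, ranked_predictions, k=3):
    """Fraction of objects whose true label appears among the model's
    top-k ranked predictions."""
    hits = sum(1 for truth, ranked in zip(true_labels, ranked_predictions)
               if truth in ranked[:k])
    return hits / len(true_labels)

# Each prediction is the model output ranked from most to least probable
true_labels = ["apple", "plum", "carrot"]
ranked_predictions = [
    ["apple", "pear", "quince"],      # hit: rank 1
    ["cherry", "grape", "plum"],      # hit: rank 3
    ["potato", "parsnip", "beet"],    # miss: "carrot" not in top 3
]
print(top_k_accuracy(true_labels, ranked_predictions))  # -> 2/3
```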
The input data are stored in jpeg format and divided into 68 sub-folders (categories). Each
sub-folder corresponds to one fruit/vegetable type and has a unique name that starts with a
number. All images are 640×480 pixels. The number of images per sub-folder varies from a few
up to thousands. It is up to the contestant to extract and prepare train/validation datasets in
the format required by the chosen model.
The data specified in the 'Data description' section come as jpeg files spread across sub-folders according
to their category. The total amount of data is around 9 GB. It is up to the contestant to extract and prepare train/validation datasets. The test dataset will not be available to the contestants, but will be used to assess the models. The pictures below are an excerpt from the dataset and give a preliminary impression of what the real dataset looks like.
The Kaufland expert for the Datathon
Hendrik Lange, Case Mentor, Data Scientist in Kaufland
Hendrik has been working for Kaufland Information Systems (KIS) as a Data Scientist for two years. Currently his work is focused on different approaches for the optimization of prices. Former projects focused on neural network approaches for a recommender system and a customer segmentation project. His favorite tools at the moment are Spark, H2O and Keras. Prior to joining KIS, he worked at the University of Marburg in the department of Social Science in the area of electoral studies. At that time he mainly used generalized linear model approaches.
Expected output and paper
The result should be a clearly presented model that can be realistically implemented in the Kaufland stores.
The main focal point for presenting each team's results from the Datathon is the written article. It will be considered by the jury and will show how well the team has done the job.
Considering the short amount of time and resources in the world of Big Data analysis, it is essential to follow a time-tested, many-project-tested methodology: CRISP-DM. You can read more at http://www.sv-europe.com/crisp-dm-methodology/
The organizing team has tried to do most of the work on phases "1. Business Understanding" and "2. Data Understanding", while the teams are expected to focus on phases 3, 4 and 5 ("Data Preparation", "Modeling" and "Evaluation"), so that the best solutions have the best results in phase 5, Evaluation.
Phase "6. Deployment" mostly stays in the hands of the case-study providing companies, as we aim at continuation of the process after the event. So stay tuned and follow the updates on the website of the event.
1. Business Understanding
This initial phase focuses on understanding the project objectives and requirements from a business perspective, and then converting this knowledge into a data mining problem definition, and a preliminary plan designed to achieve the objectives. A decision model, especially one built using the Decision Model and Notation standard can be used.
2. Data Understanding
The data understanding phase starts with an initial data collection and proceeds with activities in order to get familiar with the data, to identify data quality problems, to discover first insights into the data, or to detect interesting subsets to form hypotheses for hidden information.
3. Data Preparation
The data preparation phase covers all activities to construct the final dataset (data that will be fed into the modeling tool(s)) from the initial raw data. Data preparation tasks are likely to be performed multiple times, and not in any prescribed order. Tasks include table, record, and attribute selection as well as transformation and cleaning of data for modeling tools.
4. Modeling
In this phase, various modeling techniques are selected and applied, and their parameters are calibrated to optimal values. Typically, there are several techniques for the same data mining problem type. Some techniques have specific requirements on the form of data. Therefore, stepping back to the data preparation phase is often needed.
5. Evaluation
At this stage in the project you have built a model (or models) that appears to have high quality, from a data analysis perspective. Before proceeding to final deployment of the model, it is important to more thoroughly evaluate the model, and review the steps executed to construct the model, to be certain it properly achieves the business objectives. A key objective is to determine if there is some important business issue that has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.
6. Deployment
Creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data, the knowledge gained will need to be organized and presented in a way that is useful to the customer. Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data scoring (e.g. segment allocation) or data mining process. In many cases it will be the customer, not the data analyst, who will carry out the deployment steps. Even if the analyst deploys the model it is important for the customer to understand up front the actions which will need to be carried out in order to actually make use of the created models.