The Business Problem
Receipt Bank provides technology that unlocks the value of accounting data and
automates the bookkeeping process. Our AI and automation technologies are used by
over 5,000 accounting & bookkeeping firms and tens of thousands of small business
customers globally. The huge volume of documents that our clients produce, and the
diversity of those documents, introduce complex machine learning challenges. To
extract the valuable information contained in the documents efficiently, we must know
how many items are present in a client’s file: for example, an image may contain 2 receipts,
or a 6-page PDF file may contain 4 different invoices. Applying an ML algorithm to
each item independently is more efficient and reduces business costs.
For this case, your task will be to develop an algorithm that detects how many
documents are contained in a PDF file. More precisely, you will need to build a
model that outputs a probability score for each page of a PDF file being the beginning of a
new document.
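One way to picture the expected output (the function and parameter names below are our own illustration, not something prescribed by the case): the model is a function that maps a PDF to one probability per page. A trivial placeholder might look like this:

```python
from typing import List

def predict_page_probabilities(pdf_path: str, n_pages: int) -> List[float]:
    """Return, for each page of the PDF, the probability that it starts a new document.

    Trivial placeholder: the first page of a file necessarily starts a document,
    and every other page gets a non-committal 0.5. A real model would replace
    this with features computed from the page's text and/or image.
    """
    return [1.0] + [0.5] * max(n_pages - 1, 0)
```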
The Research Problem
While we do not restrict you in any way in the techniques and algorithms you may use to
solve the problem, here are some tips on how to attack it.
The problem could be solved by supervised approaches; we will provide ground truth
labels for the data set. (If you can solve it with an unsupervised approach, that would be
AWESOME.) You could try purely NLP approaches, where you extract the text from the
PDF files and work with it. The problem could also be approached as a computer
vision problem, where you convert the PDFs to images and work with those. The solution
could use any combination of approaches that you can think of.
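As a starting point for either route, here is a minimal sketch of reading a PDF with two commonly used open-source libraries. These are our own choices for illustration (pdfminer.six for text extraction and pdf2image, which needs the poppler utilities, for page rendering); the case does not mandate any particular tooling.

```python
from pdfminer.high_level import extract_text   # pip install pdfminer.six
from pdf2image import convert_from_path        # pip install pdf2image (needs poppler)

pdf_path = "example.pdf"  # hypothetical input file

# NLP route: pull the raw text of a page and build text features from it.
first_page_text = extract_text(pdf_path, page_numbers=[0])

# Computer vision route: render every page to a PIL image and feed the images
# (or features derived from them) to an image model.
page_images = convert_from_path(pdf_path, dpi=150)

print(len(page_images), "pages rendered,", len(first_page_text), "characters on page 1")
```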
The log-loss of a model that assigns a 0.5 probability to every page being the beginning of a new
document is ~0.69, and its accuracy is ~0.51. You should be able to beat
that.
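That baseline is easy to verify: a constant prediction of 0.5 has a log-loss of ln 2 ≈ 0.693 whatever the true labels are. A quick sanity check (a sketch using numpy and scikit-learn, neither of which is required by the case):

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = np.random.randint(0, 2, size=1000)       # dummy 0/1 page labels
y_pred = np.full(y_true.shape, 0.5)               # constant 0.5 prediction
print(log_loss(y_true, y_pred, labels=[0, 1]))    # ~0.6931, i.e. ln(2)
```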
Data Description
The data consist of approximately 1000 PDF files. Each file may contain between 1 and
5 documents. The ground truth labels are provided in JSON format. For each PDF we
provide an array of pages on which new documents start.
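The exact label schema ships with the dataset; purely for illustration, assuming the JSON maps each PDF filename to its array of starting pages (1-indexed), the labels can be expanded into one binary target per page like this:

```python
import json

# Hypothetical label format -- the real schema is defined by the dataset.
# Assumed here: {"file_001.pdf": [1, 3], ...} meaning new documents start on
# pages 1 and 3 of that file.
with open("labels.json") as fh:
    labels = json.load(fh)

def pages_to_targets(start_pages, n_pages):
    """Expand an array of starting pages into a 0/1 target for every page."""
    starts = set(start_pages)
    return [1 if page in starts else 0 for page in range(1, n_pages + 1)]

# A 6-page PDF whose documents start on pages 1 and 3:
print(pages_to_targets([1, 3], 6))  # [1, 0, 1, 0, 0, 0]
```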
[Figure: excerpt of such a PDF file]
Download the dataset for the case here…
See the discussion for this case in the Data.Chat here…
The Receipt Bank experts for the Datathon
Svetlin Mladenov, Case Mentor, Machine Learning Engineer @ Receipt Bank
Svetlin Mladenov has been working in the field of Machine Learning for the past few years, with a primary focus on NLP. Before joining Receipt Bank, Svetlin worked on chatbots and helped create one of the first Bulgarian startups in that field. He graduated from FMI, Sofia University.
Rumen Mihaylov, Case Mentor, Machine Learning Engineer @ Receipt Bank
Rumen graduated in Quantitative Financial Modelling from top universities in France, Spain and Italy. He worked as a financial analyst for OMV in Vienna and as a quantitative developer for a hedge fund in Sofia before joining Receipt Bank.
Marin Delchev, Case Mentor, Machine Learning Engineer @ Receipt Bank
Marin graduated from RWTH Aachen University, one of the top 10 German universities, with a master's thesis in the field of machine learning. Marin worked for VMware, Nemetschek and Apply Financial, and is now among the few who work as machine learning engineers for Receipt Bank, a company ranked as one of the top 10 fastest-growing UK companies.
Expected Output and Paper
Given a PDF file, your solution should output a probability score for each page being the
start of a new document. You should evaluate your algorithm using 5-fold cross-validation,
with log-loss as the validation metric.
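A minimal sketch of that evaluation protocol, assuming per-page feature vectors X and 0/1 targets y have already been built (scikit-learn and a logistic regression are our own illustrative choices, not requirements of the case):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import KFold

# Dummy data stands in for real per-page features and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # one feature vector per page
y = rng.integers(0, 2, size=500)        # 1 if the page starts a new document

fold_losses = []
for train_idx, valid_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    proba = model.predict_proba(X[valid_idx])[:, 1]
    fold_losses.append(log_loss(y[valid_idx], proba, labels=[0, 1]))

print("5-fold log-loss: %.4f +/- %.4f" % (np.mean(fold_losses), np.std(fold_losses)))
```

Note that splitting by individual page can leak pages of the same PDF across folds; grouping the folds by file (for example with scikit-learn's GroupKFold) is a safer choice.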
Your results should be reproducible! Any code, plots, data exploration findings and
results should be pushed to the repository.
Article instructions
The main focal point for presenting each team's results from the Datathon is the written article. It will be considered by the jury and will show how well the team has done the job.
Given the short amount of time and resources available in the world of Big Data analysis, it is essential to follow a time-tested, project-proven methodology such as CRISP-DM. You can read more at http://www.sv-europe.com/crisp-dm-methodology/
The organizing team has tried to do most of the work on phases “1. Business Understanding” and “2. Data Understanding”, while the teams are expected to focus more on phases 3, 4 and 5 (“Data Preparation”, “Modeling” and “Evaluation”), so that the best solutions will have the best results in phase 5, Evaluation.
Phase “6. Deployment” mostly stays in the hands of the companies providing the case studies, as we aim to continue the process after the event. So stay tuned and follow the updates on the website of the event.
1. Business Understanding
This initial phase focuses on understanding the project objectives and requirements from a business perspective, and then converting this knowledge into a data mining problem definition, and a preliminary plan designed to achieve the objectives. A decision model, especially one built using the Decision Model and Notation standard can be used.
2. Data Understanding
The data understanding phase starts with an initial data collection and proceeds with activities in order to get familiar with the data, to identify data quality problems, to discover first insights into the data, or to detect interesting subsets to form hypotheses for hidden information.
3. Data Preparation
The data preparation phase covers all activities to construct the final dataset (data that will be fed into the modeling tool(s)) from the initial raw data. Data preparation tasks are likely to be performed multiple times, and not in any prescribed order. Tasks include table, record, and attribute selection as well as transformation and cleaning of data for modeling tools.
4. Modeling
In this phase, various modeling techniques are selected and applied, and their parameters are calibrated to optimal values. Typically, there are several techniques for the same data mining problem type. Some techniques have specific requirements on the form of data. Therefore, stepping back to the data preparation phase is often needed.
5. Evaluation
At this stage in the project you have built a model (or models) that appears to have high quality, from a data analysis perspective. Before proceeding to final deployment of the model, it is important to more thoroughly evaluate the model, and review the steps executed to construct the model, to be certain it properly achieves the business objectives. A key objective is to determine if there is some important business issue that has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.
6. Deployment
Creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data, the knowledge gained will need to be organized and presented in a way that is useful to the customer. Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data scoring (e.g. segment allocation) or data mining process. In many cases it will be the customer, not the data analyst, who will carry out the deployment steps. Even if the analyst deploys the model it is important for the customer to understand up front the actions which will need to be carried out in order to actually make use of the created models.