Datathon cases

The VMware Case – Organize the Knowledge Base

VMware gives you the chance to better organize the support section of its website by grouping together articles that deal with the same problem.


The Business Case

For 20 years now, VMware, Inc. has been providing cloud computing and virtualization software and services. The current VMware product line includes software in multiple categories:

  • Server software – vSphere, ESX, vCenter
  • Networking and security – NSX
  • Storage and availability – vSAN
  • Cloud management – vRealize Automation, Horizon
  • Desktop software – Workstation, Fusion

The VMware Knowledge Base provides support solutions, error messages and troubleshooting guides.

Currently, there are around 35,000 Knowledge Base (KB) articles, covering multiple product versions and product combinations and written in several languages. With so many and such varied articles, information is often duplicated, and the solution to a single problem can be spread over multiple articles.

Your task will be to cluster similar KB articles together, so that each cluster groups articles discussing the same problem.

The Research Problem

We do not restrict the techniques and algorithms you may use. However, there is no labeled data set with ground truth (target variable), and the context of the articles is very specific, so we encourage you to use unsupervised approaches.

You will need to go through the content of around 35,000 articles and find the ones that resolve similar issues.
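As a starting point for such an unsupervised approach, one could vectorize the article text with TF-IDF and cluster the vectors. The sketch below assumes a hypothetical articles.csv with doc_id and text columns and an arbitrary cluster count; none of these names are part of the provided data.

    # Minimal unsupervised clustering sketch (all file/column names are
    # assumptions for illustration, not the actual dataset layout).
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import MiniBatchKMeans

    articles = pd.read_csv("articles.csv")  # hypothetical file

    # TF-IDF turns each article into a sparse weighted bag-of-words vector.
    vectorizer = TfidfVectorizer(stop_words="english", max_features=50000)
    X = vectorizer.fit_transform(articles["text"].fillna(""))

    # MiniBatchKMeans handles ~35,000 documents comfortably; the cluster
    # count is a guess to be tuned against the validation topics.
    km = MiniBatchKMeans(n_clusters=500, random_state=42)
    articles["cluster"] = km.fit_predict(X)

    print(articles.groupby("cluster")["doc_id"].apply(list).head())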

Data Description

KB articles usually follow a structure:

  • Document id
  • Purpose – brief summary of the guide, present if this article is a usage guide.
  • Symptoms – symptoms of the system and problems that have occurred, present if this article is a troubleshooting guide.
  • Cause – reasons why the issue might have occurred.
  • Resolution – explains steps to be taken by the users.
  • Workaround – steps that can be taken if the resolution guide is not applicable to the users’ case.

Each article also has metadata, which contains the last update date, view count, category, language and the list of products to which the article applies. We are going to focus only on articles written in English.
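To make this layout concrete, one possible in-memory representation is sketched below; every field name is an assumption chosen to mirror the list above, not the actual schema of the files.

    # Illustrative record for one KB article; field names mirror the
    # section list above and are assumptions, not the dataset's schema.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class KBArticle:
        doc_id: str
        purpose: Optional[str] = None      # present in usage guides
        symptoms: Optional[str] = None     # present in troubleshooting guides
        cause: Optional[str] = None
        resolution: Optional[str] = None
        workaround: Optional[str] = None
        # metadata
        last_update: Optional[str] = None
        view_count: int = 0
        category: Optional[str] = None
        language: str = "en"
        products: List[str] = field(default_factory=list)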

Your task is to form clusters of KB articles which discuss similar issues.

We are going to provide __NUMBER_TOP_TOPICS__ top topics and, for each, a list of the corresponding KB document ids. The KB articles have been assigned to these topics by domain experts, so you can use this set of topics as a validation set. We are also withholding a different set of topics, with their corresponding KBs, to be used as a private test set when we evaluate your work. We are going to evaluate your work by looking through it by hand, but you can still use the validation set to get a feeling for what we expect as an outcome.
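One hedged way to use the validation set is to score how well your cluster assignments agree with the expert topics on the overlapping document ids. The adjusted Rand index below is an illustrative choice, not the organizers' metric, and the ids are made up.

    # Sketch: compare cluster assignments with expert-labeled topics.
    # ARI is an assumed agreement measure, not the official evaluation.
    from sklearn.metrics import adjusted_rand_score

    # Hypothetical mappings: doc_id -> expert topic, doc_id -> your cluster.
    expert_topics = {"kb1001": 0, "kb1002": 0, "kb2001": 1}
    my_clusters = {"kb1001": 7, "kb1002": 7, "kb2001": 3}

    common = sorted(set(expert_topics) & set(my_clusters))
    score = adjusted_rand_score(
        [expert_topics[d] for d in common],
        [my_clusters[d] for d in common],
    )
    print(f"Adjusted Rand index on validation docs: {score:.3f}")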

For a head start, we wrote a toy Python script that parses the language metadata from one of the HTML files. You can find it as scrape.py.
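Since scrape.py itself ships with the dataset and is not reproduced here, the snippet below is only a guess at what such a parser might look like, assuming the language sits in a meta tag; the real markup may differ.

    # Hypothetical sketch of reading language metadata from a KB html file;
    # the assumed meta-tag name may not match the real scrape.py or markup.
    from typing import Optional
    from bs4 import BeautifulSoup

    def article_language(path: str) -> Optional[str]:
        with open(path, encoding="utf-8") as f:
            soup = BeautifulSoup(f, "html.parser")
        tag = soup.find("meta", attrs={"name": "language"})  # assumed tag
        return tag["content"] if tag and tag.has_attr("content") else None

    print(article_language("kb_example.html"))  # hypothetical file name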


Download the dataset for the case here…

See the discussion for this case in the Data.Chat here…


The VMware mentors

Shashank Shekhar, Industry Expert, Lead Data Scientist @ VMware, India

Shashank Shekhar is a lead data scientist at VMware with 11 years of experience in data science and machine learning across verticals including consumer insights, customer service, inventory management, merchandising, marketing and pricing, in both B2C and B2B industries. In the past he worked at Target, Amazon and Flipkart, solving various complex business problems using machine learning and data science. He has multiple publications in data science, machine learning, deep learning and image recognition in several international journals of repute. Currently he is leading a team of data scientists working on business problems in the pricing and partner domain.

Pavel Nikolov, Industry Expert, Senior Business Analyst @ VMware

Pavel has knowledge of and experience with the following platforms: MATLAB, Apache Spark, Anaconda, Quantum and R. He also has experience with the programming languages MATLAB, Python, SparkSQL, C, Quantum, R and JavaScript, and with the development environments Eclipse C/C++, MS Visual Studio and Sublime Text.

He is interested in Data Science, Machine Learning, Segmentation, Stochastic Models, Predictive Models, State Estimators, Kalman Filters, Linear Regression, Logarithmic Regression, Polynomial Regression, AR, MA, ARX, ARMAX, LS, WLS, Step-Wise Regression and Data Visualization.

Denitsa Panova, Industry Expert, Business Analyst – Data Science and Advanced Analytics @ VMware

Denitsa has a strong analytical and mathematical background. She is a Business Analyst in the Innovation Information Center (IIC) of Excellence at VMware, where she works daily on improving customer experience. Denitsa holds bachelor's degrees in Mathematics and Economics from the American University in Bulgaria and a master's degree in Data Science from the Barcelona Graduate School of Economics.

Expected Output

The expected output is a list of topics and the KB articles corresponding to each topic. You are not expected to provide a meaningful summary of each topic; in other words, you can simply enumerate your topics.

Article instructions

The main focal point for presenting each team's results from the Datathon is the written article. It will be considered by the jury and will show how well the team has done the job.

Considering the short amount of time and the resources available, in the world of big data analysis it is essential to follow a time-tested, many-project-tested methodology: CRISP-DM. You can read more at http://www.sv-europe.com/crisp-dm-methodology/
The organizing team has tried to do most of the work in phases “1. Business Understanding” and “2. Data Understanding”, while the teams are expected to focus on phases 3, 4 and 5 (“Data Preparation”, “Modeling” and “Evaluation”), so that the best solutions will show in the results of phase “5. Evaluation”.
Phase “6. Deployment” mostly stays in the hands of the companies providing the case studies, as we aim to continue the process after the event. So stay tuned and follow the updates on the website of the event.

1. Business Understanding
This initial phase focuses on understanding the project objectives and requirements from a business perspective, then converting this knowledge into a data mining problem definition and a preliminary plan designed to achieve the objectives. A decision model, especially one built using the Decision Model and Notation standard, can be used.

2. Data Understanding
The data understanding phase starts with an initial data collection and proceeds with activities in order to get familiar with the data, to identify data quality problems, to discover first insights into the data, or to detect interesting subsets to form hypotheses for hidden information.

3. Data Preparation
The data preparation phase covers all activities to construct the final dataset (data that will be fed into the modeling tool(s)) from the initial raw data. Data preparation tasks are likely to be performed multiple times, and not in any prescribed order. Tasks include table, record, and attribute selection as well as transformation and cleaning of data for modeling tools.
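For this case, data preparation would plausibly include keeping only the English articles, as the case states, and normalizing the text before vectorization. A minimal sketch, reusing the hypothetical articles.csv layout from above:

    # Hedged preparation sketch: filter to English and normalize text.
    import re
    import pandas as pd

    articles = pd.read_csv("articles.csv")  # hypothetical layout, as above
    english = articles[articles["language"] == "en"].copy()

    def normalize(text: str) -> str:
        text = text.lower()
        text = re.sub(r"<[^>]+>", " ", text)      # strip leftover html tags
        text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
        return text

    english["text"] = english["text"].fillna("").map(normalize)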

4. Modeling
In this phase, various modeling techniques are selected and applied, and their parameters are calibrated to optimal values. Typically, there are several techniques for the same data mining problem type. Some techniques have specific requirements on the form of data. Therefore, stepping back to the data preparation phase is often needed.

5. Evaluation
At this stage in the project you have built a model (or models) that appears to have high quality, from a data analysis perspective. Before proceeding to final deployment of the model, it is important to more thoroughly evaluate the model, and review the steps executed to construct the model, to be certain it properly achieves the business objectives. A key objective is to determine if there is some important business issue that has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.

6. Deployment
Creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data, the knowledge gained will need to be organized and presented in a way that is useful to the customer. Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data scoring (e.g. segment allocation) or data mining process. In many cases it will be the customer, not the data analyst, who will carry out the deployment steps. Even if the analyst deploys the model it is important for the customer to understand up front the actions which will need to be carried out in order to actually make use of the created models.
