The Business Case
Clients of Ontotext often have large text collections that need to be searched efficiently. They are particularly interested in key concepts such as organizations, people, and locations, but also in the relations between them. ML methods can learn, from already annotated examples, how to extract relations expressed in text. However, these methods need large amounts of expert annotations, which are quite expensive.
Can this limitation be overcome? Yes, if we teach AI to “read” text and Open data knowledge together.
An idea that has become popular lately is to use facts from open knowledge bases such as Wikipedia/DBpedia to automatically annotate large amounts of text – larger than human experts could annotate. Then, AI can learn from those texts how to recognize relations between entities. From then on, AI can automatically scan text and identify new relations, with the by-product of “filling the gaps” in open knowledge bases.
The Research Case
The concrete case that we chose for the Datathon is to devise an algorithm able to recognize relations of the type:
<company1> _is parent of_ <company2>
(or, reversed, <company2> _is subsidiary of_ <company1>)
in free text. The companies will already be annotated; the teams will not have to identify them. The teams only need to decide whether a relation of type _is parent of_ holds. Nota bene: the relation is not symmetric; it has a precise direction. In fact, in the training set, relations between subsidiaries and their parent companies (the reversed direction) are given as negative examples of _is parent of_.
If you choose to implement state-of-the-art deep learning approaches, you may want to use word vector representations.
Some pre-trained word vectors are available online; relevant papers include:
Lin et al., 2016. Neural Relation Extraction with Selective Attention over Instances
Mintz et al., 2009. Distant Supervision for Relation Extraction without Labeled Data
Miwa et al., 2016. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures
Zeng et al., 2015. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks
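As a toy illustration of how pre-trained word vectors can represent a snippet, the sketch below averages per-word vectors into a single feature vector. The vocabulary and the 3-dimensional vectors are invented for the example; in practice you would load real embeddings such as word2vec or GloVe:

```python
# Toy pre-trained word vectors (in practice, load real embeddings such as
# word2vec or GloVe; real dimensions are typically 100-300, not 3).
word_vectors = {
    "centene":  [0.2, 0.1, 0.7],
    "acquired": [0.9, 0.3, 0.1],
    "health":   [0.1, 0.8, 0.2],
    "net":      [0.2, 0.7, 0.3],
}

def snippet_embedding(snippet, vectors, dim=3):
    """Represent a snippet as the average of its known word vectors;
    out-of-vocabulary words are skipped."""
    known = [vectors[w] for w in snippet.lower().split() if w in vectors]
    if not known:
        return [0.0] * dim
    return [sum(col) / len(known) for col in zip(*known)]

emb = snippet_embedding("Centene acquired Health Net", word_vectors)
```

A deep learning model would typically keep the per-word vectors as a sequence rather than averaging them, but the averaged vector is already a usable baseline feature.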
For each pair of companies <company1> and <company2>, one or more text snippets mentioning both are available. The relation expressed in the text is expected to be the parent–subsidiary relation, but exceptions may occur.
The dataset is about inferring parent–subsidiary relations from text. Since we hope for a supervised learning model, many annotated examples are needed. These were obtained automatically by the following process:
– Identify pairs of companies mentioned in news (using Ontotext Named Entity Tagger)
– Ask DBpedia if there is a parent-subsidiary relationship between them.
– If yes, add the example to the training set as positive.
– We automatically generated negatives as well.
For the test set, we kept parent–subsidiary examples that are not recorded in DBpedia. Ontotext knows these from an acquired dataset (it’s expensive; it’s not worth buying it just for the Datathon! 🙂 ).
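The distant-supervision process above can be sketched in a few lines. The KB pairs and candidate pairs below are invented for the example; in the real pipeline the pairs come from DBpedia and the candidates from news annotated by the Ontotext Named Entity Tagger:

```python
# Known parent -> subsidiary pairs, as if queried from DBpedia.
kb_parent_of = {("Centene", "Health_Net"), ("Alphabet", "Google")}

def label_example(company1, company2, kb):
    """Distant supervision: a pair is positive iff the KB lists company1
    as the parent of company2. Note the direction: the reversed pair
    becomes a negative example."""
    return (company1, company2) in kb

# Candidate pairs found co-occurring in news snippets.
candidates = [
    ("Centene", "Health_Net"),   # KB parent-of pair -> positive
    ("Health_Net", "Centene"),   # reversed direction -> negative
    ("Aetna", "Health_Net"),     # unrelated pair -> negative
]
labels = [label_example(c1, c2, kb_parent_of) for c1, c2 in candidates]
```

The quality of such labels depends on the assumption that a snippet mentioning a KB pair actually expresses the relation; as noted above, exceptions occur.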
The input data consists of two files: the training samples (train.csv) and the test samples (test.csv).
The training data has the following columns:
- Company1: string label of the first company, e.g. Google, General_Motors, etc.
- Company2: string label of the second company. Same as above.
- TextSnippet: a longer string containing one or two sentences from a news article that mention the two companies in arbitrary order. The text is copied directly from the news article, without processing, apart from the mentions of the companies, which were standardized.
- IsParent: boolean. True if Company1 is a parent of Company2, false otherwise.
The test set has the same format, except that the IsParent column is missing.
|Company1||Company2||TextSnippet||IsParent|
|Centene Corporation||Health Net||Centene closed the deal with Health Net Inc.||TRUE|
|Health Net||Centene Corporation||Centene closed the deal with Health Net Inc.||FALSE|
|Aetna Inc.||Health Net||Aetna and Health Net are competitors.||FALSE|
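A minimal sketch of reading the training file with the standard library, following the column description above. A small in-memory sample stands in for train.csv so the snippet is self-contained:

```python
import csv
import io

# In practice: open("train.csv", newline="", encoding="utf-8").
# A two-row in-memory sample with the documented columns:
sample = io.StringIO(
    "Company1,Company2,TextSnippet,IsParent\n"
    "Centene Corporation,Health Net,Centene closed the deal with Health Net Inc.,TRUE\n"
    "Aetna Inc.,Health Net,Aetna and Health Net are competitors.,FALSE\n"
)

rows = list(csv.DictReader(sample))
# Parse the boolean label; the test set simply lacks this column.
for row in rows:
    row["IsParent"] = row["IsParent"].strip().upper() == "TRUE"
```

Libraries such as pandas would do the same in one call; the point here is only the expected shape of the records.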
A few excerpts from the articles may help you get a feel for the task:
… where we see continued market weakness,” Michael Lamach, the company’s chairman and chief executive officer, said in the filing. Ingersoll-Rand manufactures an array of products, from Club Car golf cars to Trane air conditioning systems. The company employs about 2,000 in Mecklenburg County, and the majority are at …
… when users of the app attempt to use its navigation features, they are automatically transferred to an app from AutoNavi, a mapping company owned by Chinese internet leader Alibaba Group Holding. While the two apps differ in their design, some users report that they appear to be drawing on similar data. This suggests that Google has partnered with AutoNavi to obtain map data for its return to China…
…Centene Corp. agreed to buy Health Net Inc. for about $6.3 billion in cash and stock,…
...U.S. health insurer Centene Corp (CNC.N) said it would buy rival Health Net Inc (HNT.N) for $6.3 billion to bolster its position …
…Centene closed the deal with Health Net Inc. …
…a day after Centene Corp. said it struck a deal to acquire Health Net Inc. for $6.3 billion…
…health insurer Centene Corp. reported that its top and bottom lines continued to increase in its fourth quarter as the company continues to integrate its Health Net acquisition…
…Something big is likely cooking at struggling restaurant giant Yum! Brands. On Thursday evening, the parent of Taco Bell, KFC and Pizza Hut announced that its board and management are nearing an end of a review of strategic options, including those related to its structure, and will release their conclusions “shortly.” Given the stock’s 19% decline in response to lackluster third quarter earnings on Oct. 7, execs may be feeling the heat to announce value-creating actions such as spinning off its KFC and Pizza Hut operations in China…
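The excerpts suggest that parent–subsidiary relations are often signalled by lexical cues (“agreed to buy”, “acquire”, “parent of”, “owned by”) combined with the order in which the two companies appear. A deliberately naive heuristic baseline along those lines is sketched below; the cue lists and the direction rule are illustrative assumptions, not the intended solution:

```python
# Cues where the parent tends to appear BEFORE the subsidiary...
FORWARD_CUES = ["acquire", "buy", "bought", "parent of", "closed the deal with"]
# ...and cues where the parent tends to appear AFTER the subsidiary.
REVERSE_CUES = ["owned by", "subsidiary of", "unit of"]

def predict_is_parent(company1, company2, snippet):
    """True iff the snippet suggests company1 is the parent of company2."""
    text = snippet.lower()
    i1, i2 = text.find(company1.lower()), text.find(company2.lower())
    if i1 < 0 or i2 < 0:
        return False                 # a company is not mentioned
    if any(cue in text for cue in FORWARD_CUES):
        return i1 < i2               # parent mentioned first
    if any(cue in text for cue in REVERSE_CUES):
        return i1 > i2               # parent mentioned second
    return False                     # no cue -> default negative
```

Such a baseline will miss paraphrases and long-distance cues, which is exactly where learned models with word vectors should help.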
The Ontotext experts for the Datathon
Andrey Tagarev, Case Mentor, Software developer at Ontotext
Andrey is part of the Innovations and Consulting Team of Ontotext. His main interest is applied ML. Over the past year, he has been working on data reconciliation in graph databases with complex ontologies.
Laura Tolosi-Halacheva, Case Mentor, Lead Scientist, Text Analysis at Ontotext
Laura is part of the Innovations and Consulting Team of Ontotext. After receiving a PhD in Computational Biology from the Max Planck Institute for Informatics in Germany, she moved to Ontotext, where she started working on text analytics. Earlier projects include introducing non-linear feature functions in text classification models and methods for model optimization that target a specific precision–recall trade-off. Two very popular projects she worked on were the Twitter analysis for predicting the Brexit voting outcome and methods for detecting rumours in social media. Currently, Laura’s job is to help transition Ontotext’s text analytics technologies towards cognitive computing; her focus is therefore generic natural language understanding, question answering, and relation extraction.
Expected Output and Paper
We are expecting predictions for the observations in the test set.
The main focal point for presenting each team’s results from the Datathon is the written article. It will be considered by the jury and should show how well the team has done the job.
Considering the short amount of time and resources, in the world of Big Data analysis it is essential to follow a time-tested, many-project-tested methodology: CRISP-DM. You can read more at http://www.sv-europe.com/crisp-dm-methodology/
The organizing team has tried to do most of the work on phases “1. Business Understanding” and “2. Data Understanding”, while the teams are expected to focus on phases 3, 4, and 5 (“Data Preparation”, “Modeling”, and “Evaluation”), so that the best solutions achieve the best results in phase 5, Evaluation.
Phase “6. Deployment” mostly stays in the hands of the companies providing the case studies, as we aim at continuing the process after the event. So stay tuned and follow the updates on the website of the event.
1. Business Understanding
This initial phase focuses on understanding the project objectives and requirements from a business perspective, then converting this knowledge into a data mining problem definition and a preliminary plan designed to achieve the objectives. A decision model, especially one built using the Decision Model and Notation standard, can be used.
2. Data Understanding
The data understanding phase starts with an initial data collection and proceeds with activities in order to get familiar with the data, to identify data quality problems, to discover first insights into the data, or to detect interesting subsets to form hypotheses for hidden information.
3. Data Preparation
The data preparation phase covers all activities to construct the final dataset (data that will be fed into the modeling tool(s)) from the initial raw data. Data preparation tasks are likely to be performed multiple times, and not in any prescribed order. Tasks include table, record, and attribute selection as well as transformation and cleaning of data for modeling tools.
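For this task, one common preparation step is to replace the two company mentions with position-aware placeholder tokens, so a model can tell which argument of the relation is which regardless of the company names themselves. The placeholder names below are an illustrative choice:

```python
def mark_entities(snippet, company1, company2):
    """Replace the surface mentions of the two companies with placeholders,
    so a model distinguishes the relation arguments independently of
    their spelling."""
    marked = snippet.replace(company1, "<COMPANY1>")
    marked = marked.replace(company2, "<COMPANY2>")
    return marked

example = mark_entities(
    "Centene closed the deal with Health Net Inc.",
    "Centene", "Health Net",
)
```

Since the dataset already standardizes company mentions, a simple string replacement like this is usually sufficient.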
4. Modeling
In this phase, various modeling techniques are selected and applied, and their parameters are calibrated to optimal values. Typically, there are several techniques for the same data mining problem type. Some techniques have specific requirements on the form of the data, so stepping back to the data preparation phase is often needed.
5. Evaluation
At this stage in the project you have built a model (or models) that appears to have high quality from a data analysis perspective. Before proceeding to final deployment of the model, it is important to evaluate it more thoroughly and to review the steps executed to construct it, to be certain it properly achieves the business objectives. A key objective is to determine if there is some important business issue that has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.
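For this binary task, precision, recall, and F1 on the positive (IsParent = True) class are natural evaluation metrics; a self-contained sketch:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 for the positive class of a boolean task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1(
    [True, True, False, False],
    [True, False, True, False],
)
```

Because negatives (including reversed pairs) dominate such datasets, plain accuracy can be misleading, which is why the positive-class F1 is the more informative summary.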
Creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data, the knowledge gained will need to be organized and presented in a way that is useful to the customer. Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data scoring (e.g. segment allocation) or data mining process. In many cases it will be the customer, not the data analyst, who will carry out the deployment steps. Even if the analyst deploys the model it is important for the customer to understand up front the actions which will need to be carried out in order to actually make use of the created models.