LearnNLPTeam solutions

Ontotext case – Team _A

The objective of our task is to extract parent-subsidiary relationships from text. For example, an article from TechCrunch says: ‘Remember those rumors a few weeks ago that Google was looking to acquire the plug-and-play security camera company, Dropcam? Yep. It just happened.’ From this sentence we can infer that Dropcam is a subsidiary of Google. But there are millions of companies and several million articles talking about them, and a human being gets tired after doing even 10 of these! Trust me 😉 We have developed some cool machine learning models, spanning from classical algorithms to deep neural networks, to do this for you. And there is a bonus! We do not just give you probabilities; we also give you the sentences that triggered the algorithm to make the inference. For instance, when it says Oracle Corp is the parent of Microsys, it can also return the sentence in its corpus that triggered its prediction: ‘Oracle Corp’s Microsys customer support portal was seen communicating with a server’.
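To give a feel for the shape of the task in code, here is a minimal, hypothetical sketch (a TF-IDF plus logistic regression baseline, not our actual RNN-attention model): it scores each text snippet for a candidate company pair and returns the snippet that most strongly triggered the prediction. The training snippets and labels below are made up for illustration.

```python
# Minimal baseline sketch: score snippets for a candidate pair and return
# the highest-scoring snippet as the "evidence" that triggered the prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_snippets = [
    "Google acquired the security camera company Dropcam.",
    "Oracle Corp's Microsys customer support portal was seen communicating with a server.",
    "Google and Microsoft compete in the cloud market.",
]
train_labels = [1, 1, 0]  # 1 = snippet expresses a parent-subsidiary relation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_snippets, train_labels)

def predict_pair(snippets):
    """Return (probability, evidence snippet) for one candidate company pair."""
    probs = model.predict_proba(snippets)[:, 1]
    best = probs.argmax()
    return probs[best], snippets[best]

prob, evidence = predict_pair([
    "Remember those rumors that Google was looking to acquire Dropcam? It just happened.",
    "Google announced quarterly earnings on Thursday.",
])
print(prob, evidence)
```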

8 votes

13 thoughts on “Ontotext case – Team _A”

  1. (4 votes)

    Overall you show great understanding of the problem and the data itself. The structure of the article is well formed. However, there are a few areas in which you can improve.

    First, in your evaluation don’t just add the list of pairs. It is very difficult to tell which is the parent of which just by looking at the list, especially if you’ve never heard of Danaher_Corporation and Pall_Corporation. Who’s the parent and who’s the subsidiary? Was your model correct in predicting this relationship? Add, for instance, your precision or recall values, or add a confusion matrix. Also, among these pairs, which did the algorithm misclassify and, in your opinion, why?
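    For illustration, a minimal sketch of the kind of summary I mean, assuming you have gold and predicted labels per candidate pair (the numbers below are hypothetical):

    ```python
    # Hypothetical gold labels and predictions: 1 = parent-subsidiary, 0 = no relation.
    from sklearn.metrics import precision_score, recall_score, f1_score

    y_true = [1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

    print("precision:", precision_score(y_true, y_pred))
    print("recall:   ", recall_score(y_true, y_pred))
    print("F1:       ", f1_score(y_true, y_pred))
    ```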

    Second, and this one is really minor so don’t worry much about it: if you add an abstract, don’t make it too detailed. Abstracts are for people who are not experts and are not familiar with NLP. It should be a simple layman’s description of what the problem is and how you propose to solve it. Leave the detailed description for the Introduction.

    1. (0 votes)

      Hello Toney! The main reason why I didn’t update the validation scores of the RNN-Attention model is that initially I trained the model and the best validation score was 93% accuracy, but when I used it on the test set the predictions were so terrible that even a layman would ignore them. Later I realized that the problem was that in the training set each pair had an average of 30-40 text snippets, whereas in the test set it was 1-2, so the validation accuracy was not translating to the test set. Instead, I reduced the training set to fewer than 10 snippets per pair and retrained the model; it scored a validation accuracy of 83.2%, the top 20 pairs you see in the article are from this model, and the results look plausible. What I am essentially trying to say is that, unlike other datasets, this particular dataset makes it tricky to report validation scores that actually mean something.
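      For what it’s worth, a minimal sketch of that downsampling step, assuming the training data is a list of records with (hypothetical) 'pair', 'snippet' and 'label' fields:

      ```python
      # Keep at most 10 snippets per company pair so the training distribution
      # is closer to the test set, where most pairs have only 1-2 snippets.
      import random
      from collections import defaultdict

      MAX_SNIPPETS_PER_PAIR = 10

      def downsample(records, seed=42):
          """records: list of dicts like {'pair': ('Google', 'Dropcam'), 'snippet': '...', 'label': 1}"""
          by_pair = defaultdict(list)
          for r in records:
              by_pair[r["pair"]].append(r)
          rng = random.Random(seed)
          kept = []
          for pair_records in by_pair.values():
              rng.shuffle(pair_records)
              kept.extend(pair_records[:MAX_SNIPPETS_PER_PAIR])
          return kept
      ```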

  2. (3 votes)

    Really good work and a very detailed description of your efforts. Seems like you got the teamwork aspect down really well because you’ve managed to accomplish an impressive number of tasks over a single weekend.

    The existence of so many duplicate entries surprised me but I discussed it with Laura and apparently there really are whole snippets that frequently repeat in articles verbatim when discussing parent-subsidiary companies. Overall the data analysis was good and detailed and it was good to see that you discovered and resolved issues before feeding it to the algorithm.

    The task itself can come in two formats in the wild: either with a large corpus of text that needs to be searched for relations, or as a streaming platform where each individual snippet is judged as it is received. Your solution is aimed more at the first case but can be applied to the second. I am satisfied that it is a solid algorithm that produces surprisingly good results. Looking through the pairs you’ve extracted from the (true) test set, the F-score you calculated seems to be supported.

    My only criticism is that I really would have liked to see something about how the traditional ML approaches fared in comparison to your state-of-the-art algorithm. It’s great to know you tried them but I am still not sure how they rank.

    Similarly the idea to generate more training samples with that service is interesting but reading the article, I am not sure if it got anywhere. Did you manage to generate extra training samples? Were they good? Were they used?

    A really minor note, but the article mentions that the parent_of relation isn’t transitive. It (sort of) is, and I believe you meant that it is anti-symmetric (i.e. A is parent_of B means B is NOT parent_of A).
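    For concreteness, a tiny hypothetical sketch of what enforcing that anti-symmetry could look like as a post-processing step over the model’s per-pair scores:

    ```python
    def enforce_antisymmetry(scores):
        """scores: dict mapping (parent, subsidiary) -> predicted probability.
        If both (A, B) and (B, A) are predicted, keep only the more confident direction."""
        kept = {}
        for (a, b), p in scores.items():
            q = scores.get((b, a), -1.0)               # absent reverse direction never wins
            if p > q or (p == q and (a, b) < (b, a)):  # break exact ties deterministically
                kept[(a, b)] = p
        return kept

    print(enforce_antisymmetry({
        ("Google", "Dropcam"): 0.91,   # hypothetical scores
        ("Dropcam", "Google"): 0.34,
    }))
    ```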

    Great job, guys 🙂

    1. (0 votes)

    Yes, Andrey! We meant anti-symmetric; blame 40 hours of sleeplessness 😛 True, I also regret that we didn’t spend more time critically comparing the results of the classical machine learning models with the NNs. Somehow these neural networks are fancy enough to quickly grab the attention, just like our attention model 😉 It was your and Laura’s continued support that helped us reach the finals! Thanks to you 🙂

  3. (2 votes)

    You have done a terrific job at analyzing the data in various ways and at designing a reasonable, directed neural network model for the task. The model uses deep learning and state-of-the-art tools and techniques (though TF.IDF-based SVM solutions have also been tried for comparison).

    What is the baseline F1? Also, what is the accuracy?

    Any results on cross-validation based on the training dataset for different choices of the hyperparameters of the network architecture?

    Any thought what can be done next to further improve the model? Maybe combine TF.IDF with deep learning? Or perform system combination? Did the different systems perform similarly on the training set (e.g., using cross-validation)?
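    As one deliberately simple example of system combination, the per-pair probabilities of the TF.IDF-based model and the neural model could be averaged (the numbers below are hypothetical):

    ```python
    import numpy as np

    def combine(prob_tfidf, prob_nn, weight=0.5):
        """Weighted average of two models' per-pair probabilities."""
        return weight * np.asarray(prob_tfidf) + (1.0 - weight) * np.asarray(prob_nn)

    combined = combine([0.62, 0.10, 0.80], [0.75, 0.05, 0.55])
    print(combined, (combined >= 0.5).astype(int))  # combined scores and hard decisions
    ```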

    1. (1 vote)

      Also, your confusion matrix is non-standard: it should show the raw counts. I wanted to calculate accuracy, but I cannot do it from this matrix.

      BTW, it is nice that the network can give an explanation about what triggered the decision.

      1. (0 votes)

        Yes, Preslav, you are right: our confusion matrix shows normalized scores between 0 and 1. However, this can easily be changed by a parameter inside the notebooks we shared.
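        If the notebooks use scikit-learn’s confusion matrix (an assumption on my part), switching between normalized scores and raw counts is indeed a single parameter:

        ```python
        from sklearn.metrics import confusion_matrix

        y_true = [1, 1, 0, 1, 0]   # hypothetical labels
        y_pred = [1, 0, 0, 1, 1]

        print(confusion_matrix(y_true, y_pred))                    # raw counts
        print(confusion_matrix(y_true, y_pred, normalize="true"))  # rows normalized to sum to 1
        ```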

        1. (0 votes)

          I updated the paper with the confusion matrix without normalization. The result on the test set is different because I retrained the model due to an issue with the Keras tokenizer.

    2. (0 votes)

      Preslav, every single question of yours makes sense and I think we have to address them all properly! On my part, I will try to rerun the algorithm, try to understand for myself the science behind the combination of hyperparameters, and write to you with the results that I find. Thank you!

  4. (3 votes)

    Great work, guys! I’m really happy to see so many graphics and experiments! It’s really important to visualize the data and experiment (I would say more important than achieving top scores) and you did great work!
    Here are some notes I made while reading the article:

    – good analysis and visualisation of the data
    – data augmentation is a good idea when not enough data is provided, or when training complex NN models, but 80k snippets seems like a big enough corpus already. I wouldn’t give that a high priority.
    – you claim there are differences in the text in the train and test sets? It would be nice to see some graphics about accuracy comparisons on the dev set and test set, or some other form of proof.
    – coreference resolution was part of Identrics’ case, as was some dependency parsing; perhaps you could have used their notebooks 🙂
    – I don’t understand this: “The first one was using function from R*R -> R that holds h(a,b) != h(b,a) and add this as feature.”
    – normalizing the company names is a very good idea, especially if you only have 400 companies in all examples (see the sketch after this list)
    – “Now lets preprocess the unlabeled test set in order to use it as corpus for more words and prepare it for input in the models”. You should be very careful not to transfer some knowledge from the test set in the training phase, even through w2v embeddings.
    – Your observation that there are examples in the training data which don’t hold any information about the relation between the two mentioned companies, and yet are in the training set, points to a serious problem (if the task is to detect relations at the sentence level). Also, kudos for finding this! Concatenating the examples to solve the business problem is one option, yes. You could also try to handle the problem on its own; I would suggest using different training sets (from the web), clustering the training examples, or any other analysis which would actually clean up the training data. If this is also true for the test set, it will be very hard to evaluate any model without knowing which of the test examples actually hold information about the parent-subsidiary relation.
    – “It is to be noted that the number of text snippets corresponding to each pair in the training data varied largely from some companies like Google and YouTube having approximately 4000 snippets to smaller companies having 2 or 3 snippets. Such a huge variance created big troubles in the test data which will be explained later.”
    – It looks like you introduced this problem yourself by concatenating all training examples for a company pair into single documents 🙂
    – great set of useful experiments and results in the linked notebooks
    – Also, I agree with Tony about the abstract: keep it simple and let Ontotext sell their case to the audience 🙂
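    On the company-name normalization point above, a sketch of one common way to do it (my own assumption of what is meant, not necessarily what the team implemented) is to replace the two companies of the candidate pair with fixed placeholder tokens before vectorizing:

    ```python
    import re

    def normalize_snippet(snippet, parent, subsidiary):
        """Replace the two candidate companies with placeholder tokens."""
        snippet = re.sub(re.escape(parent), "COMPANY_A", snippet, flags=re.IGNORECASE)
        snippet = re.sub(re.escape(subsidiary), "COMPANY_B", snippet, flags=re.IGNORECASE)
        return snippet

    print(normalize_snippet(
        "Oracle Corp's Microsys customer support portal was seen communicating with a server",
        "Oracle Corp", "Microsys"))
    # -> COMPANY_A's COMPANY_B customer support portal was seen communicating with a server
    ```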

    1. (0 votes)

      I will try to explain what I meant by “The first one was using function from R*R -> R that holds h(a,b) != h(b,a) and add this as feature”. The idea was to create a function that converts an ordered pair of labels into a single value and to use it as a feature, because some classifiers we tried generate additional features based on combinations of the initial ones, and I decided those would only add noise.
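      To make that concrete, one example of such a non-symmetric function over integer-encoded labels is the Cantor pairing function (just an illustration; it may not be the exact function used in the article):

      ```python
      def cantor_pair(a, b):
          """Map an ordered pair of non-negative integers to a single number.
          Not symmetric: cantor_pair(a, b) != cantor_pair(b, a) whenever a != b."""
          return (a + b) * (a + b + 1) // 2 + b

      print(cantor_pair(3, 7))  # 62
      print(cantor_pair(7, 3))  # 58
      ```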

    2. (0 votes)

      Hello Yasen!
      Regarding your question on the train and test data differences we noted:
      As I explained in my reply to Toney above, I initially trained the RNN-Attention model to a best validation accuracy of 93%, but its predictions on the test set were so terrible that even a layman would ignore them. The reason was that in the training set each pair had an average of 30-40 text snippets, whereas in the test set it was 1-2, so the validation accuracy did not translate. After reducing the training set to fewer than 10 snippets per pair and retraining, the model scored a validation accuracy of 83.2%, and the top 20 pairs in the article (which look plausible) come from that model. So, unlike other datasets, this one makes it tricky to report validation scores that mean much. And yes, by concatenating the text snippets we introduced the discrepancy problem ourselves, but we were trying to be creative and thought we would see if it works. BTW, I just realized that you are Laura 😛 Thanks a lot for your mentorship!
