Popular comments by dimitarvasilev

Datathon – HackNews – Solution – FlipFlops

Hi Alberto,

Thank you for your questions. Our answers are as follows:
1. We did not consider it. Not sure why; perhaps all of the chaos and stress of organizing the tasks and starting to produce output quickly blinded us to this option.
2. They are the two most populated classes – Loaded_Language and Name_Calling,Labeling.
3. You are correct, those are the singleton tasks. When the models are applied one after another, the performance drops significantly (the per-stage scores multiply). Our test set score was close to the dev and train-dev set scores (only a 0.005-point drop). One thing we tried was combining the two approaches: we used the individual models for the top 2 propaganda techniques and combined them with the joint model covering the other 16 techniques. Although we got better F1 scores for each propaganda type, the overall score was lower.
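A minimal sketch of the hybrid combination described above, under the assumption that it merges per-technique binary predictions for the two most frequent classes with a joint model's predictions for the remaining 16. All function names and the toy predictors here are illustrative stand-ins, not our actual competition code:

```python
from typing import Set

# The two most populated techniques get dedicated binary models.
TOP2 = ["Loaded_Language", "Name_Calling,Labeling"]

def predict_binary(technique: str, sentence: str) -> bool:
    # Stand-in for a trained per-technique binary classifier
    # (here: a trivial keyword check, for illustration only).
    keyword = technique.split("_")[0].lower()
    return keyword in sentence.lower()

def predict_joint(sentence: str) -> Set[str]:
    # Stand-in for the joint model over the other 16 techniques.
    return {"Flag-Waving"} if "nation" in sentence.lower() else set()

def predict_combined(sentence: str) -> Set[str]:
    # Joint model handles the 16 rarer techniques...
    labels = predict_joint(sentence)
    # ...while the top-2 techniques come from their own models.
    for tech in TOP2:
        if predict_binary(tech, sentence):
            labels.add(tech)
    return labels
```

Note that chaining models stage after stage compounds their errors roughly multiplicatively (e.g. two stages with 0.9 recall each yield about 0.81 end-to-end), which is the performance drop mentioned above.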