5 thoughts on “Datathon – HackNews – Solution – Task3-Antiganda”
Antiganda:
* Summary:
– The full source code is publicly available on GitHub.
– The approach makes a lot of sense (for task 3):
– It is based on BIO encoding followed by a sequence model: a 2-layer bidirectional LSTM neural network.
– Unfortunately, the results are not very strong:
– The system is 3rd on both DEV and TEST (for task 3).
– It is behind the winner by a large margin.
– The article is nice:
– I like that there is code to convert the data to BIO format.
– I also like that the code is integrated into the write-up.
– There is a nice discussion of the deep learning model and the loss function.
– There are even directions for future work.
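As a rough illustration of the BIO conversion step mentioned above (a minimal sketch only, assuming whitespace tokenization and character-offset propaganda spans; the function name and data format are hypothetical, not the authors' actual code):

```python
def to_bio(text, spans):
    """Tag whitespace tokens with B/I/O labels, given a list of
    (start, end) character-offset annotation spans.

    B = first token of a span, I = continuation, O = outside any span.
    """
    out, pos, prev_span = [], 0, None
    for token in text.split():
        start = text.index(token, pos)   # character offset of this token
        pos = start + len(token)
        # Find a span that overlaps this token, if any.
        span = next((s for s in spans if s[0] < pos and start < s[1]), None)
        if span is None:
            out.append((token, "O"))
        elif span == prev_span:
            out.append((token, "I"))     # same span continues
        else:
            out.append((token, "B"))     # a new span begins
        prev_span = span
    return out

# Usage: one propaganda span covering characters 8–23 ("fight the enemy").
tags = to_bio("we must fight the enemy now", [(8, 23)])
```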
* Questions:
1. Why did you choose FastAI? Because it can handle an entire document as an input?
2. Do you see any advantages of FastAI over BERT?
1. FastAI was chosen because of its built-in functionality to handle the entire document via BPTT (backpropagation through time). Doing this in another framework would have required a lot more work, which is not suitable for a hackathon.
2. FastAI is a wrapper over PyTorch – it provides a lot of very sensible defaults and removes a lot of boilerplate. You can use FastAI with any torch.nn module – pytorch-bert included. It would be a logical next step. The reason I did not opt for BERT was that I feared running out of GPU memory.
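To illustrate the BPTT idea from the answer above: the document is split into fixed-length chunks that a recurrent model consumes in order, carrying its hidden state across chunk boundaries, so it effectively sees the whole document without backpropagating through all of it at once. A minimal, framework-free sketch (the chunk length and function name are placeholders, not FastAI's actual internals):

```python
def iter_bptt_chunks(token_ids, bptt=70):
    """Yield consecutive fixed-length chunks of a tokenized document.

    During training, gradients are truncated at chunk boundaries,
    but the recurrent hidden state is passed from one chunk to the
    next, so the state still spans the entire document.
    """
    for i in range(0, len(token_ids), bptt):
        yield token_ids[i:i + bptt]

# Usage: a 200-token document with bptt=70 is processed as three
# chunks of lengths 70, 70, and 60.
chunks = list(iter_bptt_chunks(list(range(200)), bptt=70))
```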
P.S. Actually, one of the models achieved a score of 0.11 on the dev set, which would have come in at second place, but it was somehow overwritten by the scorer, and we had also lost the submission file, so we couldn't resubmit.
Got it. In fact, both BERT and FastAI were quite popular frameworks.
From the results, BERT seems to have worked better though.
Hey, your GitHub link is not working. I wanted to see your solution for research purposes – can you please provide the link?
Hi Krishdkhurana, I think the link is working.
Try again https://github.com/mboyanov/propaganda-deteciton
Cheers,
Deni 🙂