
Popular comments by hurrial

Datathon – HackNews – Solution – LAMAs

Hi Alberto, thanks for the comment.
Let me be more explicit about how we used the BERT base models. We did not reuse the model we created in our previous work. We started from the base BERT models, which are pre-trained on only two unsupervised objectives (masked language modeling and next-sentence prediction), i.e. general context modeling; that is all. Everything else was fine-tuned for this task. The experience we drew on concerns the configurations and hyperparameters that, in our view, work well for classification tasks in the news domain under this paradigm. It is like working with an SVM: once you have some experience with it, you largely know which hyperparameters to optimize when building a model for a new task. In other words, we do not run a full hyperparameter search over all possible parameters of a machine learning algorithm when we already have experience with that algorithm, domain, or task.

Given that time was limited and deep learning models take a long time to train, the options we could explore unfortunately remained limited. That does not mean nothing was done, however; you can see all the steps of this work in the GitHub repository. Unfortunately, the video is also too short to explain the details of our submissions for the three tasks. Please let us know if you think we should improve this article or provide more details during the live panel discussion.
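
For concreteness, here is a minimal sketch of this fine-tuning paradigm, assuming the Hugging Face Transformers library; the checkpoint name, file names, and hyperparameter values are illustrative placeholders rather than our exact configuration:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Load the pre-trained base BERT checkpoint; only the small classification
# head on top is initialized from scratch and learned during fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Hypothetical CSV files with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv",
                                          "validation": "dev.csv"})

def tokenize(batch):
    # Truncate/pad the news texts to a fixed length BERT can handle.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

# Only a handful of hyperparameters are set, guided by prior experience with
# classification tasks in the news domain rather than an exhaustive search.
args = TrainingArguments(
    output_dir="bert-hacknews",
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.evaluate()
```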

Datathon – HackNews – Solution – LAMAs

An additional point in response to the BERT vs. other algorithms question: we have also been experimenting with other algorithms, such as LSTMs, SVMs, and Random Forests, for these classification tasks. However, in all our experiments, BERT has outperformed the rest.
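
For context, a minimal sketch of the kind of classical baselines we compared against (TF-IDF features with a linear SVM and a Random Forest, via scikit-learn); the data and parameters here are placeholders rather than our actual experimental setup, and the LSTM baseline is omitted for brevity:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in data; the real experiments used the datathon's news data.
texts = [
    "markets rally after the central bank announcement",
    "storm causes flight delays across the region",
    "parliament debates the new budget proposal",
    "researchers publish study on sleep and memory",
]
labels = [1, 0, 1, 0]  # toy binary labels for illustration only

baselines = {
    "linear_svm": make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC()),
    "random_forest": make_pipeline(TfidfVectorizer(),
                                   RandomForestClassifier(n_estimators=200)),
}

for name, pipeline in baselines.items():
    # Fit each baseline on the toy data and classify a new example.
    pipeline.fit(texts, labels)
    print(name, pipeline.predict(["senate votes on the spending bill"]))
```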