Introduction to NLP
Natural Language Processing (NLP) is the field of computer science concerned with developing algorithms for the analysis of human language. Artificial Intelligence approaches (e.g. machine learning) have been used to solve many NLP tasks such as parsing, POS tagging, named entity recognition, word sense disambiguation, document classification, machine translation, textual entailment, question answering, and summarization. Natural languages are notoriously difficult for machines to understand and model, mostly because of ambiguity (e.g. humor, sarcasm, puns), lack of clear structure, and diversity (e.g. models for English are not directly applicable to Chinese). Even so, in recent years we are witnessing rapid progress in NLP thanks to deep learning models, which are becoming increasingly complex and able to capture the subtleties of human languages.
Team Members: Tariq Alhindi (email@example.com), Christopher Hidey (firstname.lastname@example.org), Tuhin Chakrabarty (email@example.com). Business Understanding: Automatic detection of propaganda is essential for building tools that help people navigate the web with more awareness of the deliberate or unintentional messages in what they read. Data Understanding: 50,000 articles for task 1; 21,000 sentences for task 2. Data Preparation […]
Dina Zaychik (dzay, firstname.lastname@example.org), Sergey Sedov (Sianur, email@example.com). Task 1: The hypothesis is that propaganda vs. non-propaganda at the article level can be detected using distributional-semantics features. We therefore performed thorough preprocessing: removing URLs, hashtags, unusual symbols, unusual article beginnings, non-English first paragraphs (using the open-source langid package), and very short texts. After that we trained a fastText supervised model (the […]
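A minimal sketch of the cleaning stage described above, using only the standard library. The exact filtering rules and the minimum-length cutoff are assumptions, and the langid language filter and fastText training are omitted here.

```python
import re

def clean_article(text, min_length=50):
    """Strip URLs, hashtags, and unusual symbols; drop very short texts.
    The patterns and the min_length cutoff are illustrative assumptions,
    not the authors' exact rules."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # remove URLs
    text = re.sub(r"#\w+", " ", text)                    # remove hashtags
    text = re.sub(r"[^A-Za-z0-9.,!?' ]+", " ", text)     # drop unusual symbols
    text = re.sub(r"\s+", " ", text).strip()             # collapse whitespace
    return text if len(text) >= min_length else None     # drop short texts

cleaned = clean_article(
    "Check https://example.com #propaganda this is a longer "
    "sample sentence for testing purposes!"
)
```

Articles surviving this pass would then be fed to a fastText supervised classifier.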
The team considered the following properties of the data in coming up with a solution:
Repetition of text
Length of words
Lexical analysis of words
Frequency of words
Bigrams and trigrams of words
Sentiment conveyed by the text
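The hand-crafted properties listed above can be sketched with the standard library alone. The feature names and exact definitions below are illustrative assumptions; sentiment scoring (which would need a lexicon or model) is left out.

```python
from collections import Counter

def extract_features(text):
    """Toy versions of the listed data properties: repetition, word
    length, word frequency, and bigram/trigram counts."""
    words = text.lower().split()
    counts = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    trigrams = Counter(zip(words, words[1:], words[2:]))
    return {
        # repetition: share of tokens that repeat an earlier token
        "repetition": 1 - len(counts) / len(words) if words else 0.0,
        "avg_word_length": sum(map(len, words)) / len(words) if words else 0.0,
        "max_word_freq": counts.most_common(1)[0][1] if counts else 0,
        "n_unique_bigrams": len(bigrams),
        "n_unique_trigrams": len(trigrams),
    }

feats = extract_features("the quick brown fox jumps over the lazy dog the")
```

Feature dictionaries like this can be stacked into a matrix for any off-the-shelf classifier.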
The main modeling approach included:
LSTM (long short-term memory) with embeddings from fastText.
Using Bidirectional LSTMs and trainable embeddings initialized with GloVe for propaganda detection at the article level
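The GloVe-initialization step named in the title above can be sketched as building an embedding matrix whose rows start from pre-trained vectors but remain trainable. This is an assumption-laden sketch (the vocabulary, dimensions, and fallback initialization are invented for illustration), not the authors' code; the BiLSTM itself is omitted.

```python
import numpy as np

def build_embedding_matrix(word_index, glove_vectors, dim):
    """Initialize an embedding matrix from pre-trained GloVe vectors.
    Words missing from GloVe get small random vectors so they can still
    be learned during training; row 0 is reserved for padding."""
    rng = np.random.default_rng(0)
    matrix = np.zeros((len(word_index) + 1, dim))
    for word, idx in word_index.items():
        vec = glove_vectors.get(word)
        matrix[idx] = vec if vec is not None else rng.normal(0, 0.1, dim)
    return matrix

# Hypothetical tiny vocabulary and GloVe lookup for demonstration.
word_index = {"propaganda": 1, "news": 2, "zzz": 3}
glove = {"propaganda": np.ones(5), "news": np.full(5, 2.0)}
matrix = build_embedding_matrix(word_index, glove, 5)
```

In a Keras-style setup, such a matrix would be passed as the initial weights of a trainable Embedding layer feeding a Bidirectional LSTM.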
Abstract: This notebook classifies news articles into two classes, propaganda and non-propaganda, using three types of models: a Naive Bayes classifier, a linear support vector classifier, and a recurrent neural network. The linear SVC shows the best results; the neural network comes close, while the Naive Bayes classifier predicts only one class. The following packages have been used: […]
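A minimal sketch of the two classical baselines compared in that notebook, using scikit-learn pipelines. The toy corpus and labels are invented for illustration; the notebook's real data, features, and evaluation are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical stand-in corpus; labels are illustrative only.
texts = [
    "the enemy lies to you every day",
    "they will destroy our great nation",
    "the council met on tuesday to discuss the budget",
    "rainfall was above average this spring",
]
labels = ["propaganda", "propaganda", "non-propaganda", "non-propaganda"]

# TF-IDF features feeding each classifier, as in the notebook's setup.
svc = make_pipeline(TfidfVectorizer(), LinearSVC())
nb = make_pipeline(TfidfVectorizer(), MultinomialNB())
svc.fit(texts, labels)
nb.fit(texts, labels)

pred = svc.predict(["our great nation will destroy the enemy"])[0]
```

On real, imbalanced data the notebook found the linear SVC strongest, with Naive Bayes collapsing to a single class.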
This work proposes a solution to the HackTheNewsHackathon tasks. The main problem was framed as binary classification into two classes, “propaganda” and “non-propaganda”. It was solved with the open-source library DeepPavlov, using an ensemble of several different models: sklearn models, a shallow-and-wide convolutional model, attention-based bidirectional LSTM and GRU models, and capsule networks.
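The combination step of such an ensemble can be sketched as a simple majority vote over base classifiers. Each model is abstracted here as a callable from text to label; the actual DeepPavlov models (CNN, attention BiLSTM/GRU, capsule networks) and their weighting scheme are not reproduced.

```python
from collections import Counter

def ensemble_predict(models, text):
    """Majority vote over several base classifiers.
    `models` is any iterable of callables text -> label; this stands in
    for the article's DeepPavlov ensemble, whose exact combination rule
    is an assumption here."""
    votes = Counter(model(text) for model in models)
    return votes.most_common(1)[0][0]

# Three dummy classifiers standing in for trained models.
models = [
    lambda t: "propaganda",
    lambda t: "propaganda",
    lambda t: "non-propaganda",
]
result = ensemble_predict(models, "some article text")
```

Soft voting over predicted probabilities is a common refinement when base models expose confidence scores.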
Business Understanding: Fake news is a massive problem for multiple industries and for government, and it needs to be addressed in a more automated way. Providing an automated method to examine text and classify it as fake or propaganda can help reduce the impact of fake news. It is easier said than done, though, as even […]
To do this, we had to go through text cleaning and understanding. We had to find a way to split the data and build a data frame with the following columns: News_Text, News_Number, News_Type.

The data had a lot of filler words, which had to be removed, and some rows where the news number and type were missing. To clean the data we removed the fillers using NLTK stop-word filtration, then tokenized the data using word_tokenize from the NLTK package. The next important step was to lemmatize/stem the data to remove tense from the words and normalize them. Even though it was a time-consuming process, the results were promising.

XGBoost can handle an imbalanced dataset via the scale_pos_weight parameter, and we can throttle the probability threshold to increase sensitivity while sacrificing a reasonable amount of specificity.

Evaluation: this is somewhat tricky for the provided training set, as the data was highly imbalanced; the dependent feature/variable had imbalanced classes.
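The two imbalance-handling ideas above can be sketched without XGBoost itself: computing the recommended scale_pos_weight ratio, and throttling the decision threshold below 0.5 to trade specificity for sensitivity. The example labels, probabilities, and the 0.3 threshold are illustrative assumptions.

```python
def scale_pos_weight(labels):
    """XGBoost's recommended class-imbalance ratio:
    count(negatives) / count(positives), for 0/1 labels."""
    pos = sum(labels)
    return (len(labels) - pos) / pos

def throttle(probabilities, threshold=0.3):
    """Lowering the decision threshold below 0.5 flags more examples as
    positive, raising sensitivity at the cost of specificity."""
    return [1 if p >= threshold else 0 for p in probabilities]

# One positive among five examples -> weight positives 4x.
weight = scale_pos_weight([1, 0, 0, 0, 0])
preds = throttle([0.6, 0.35, 0.2], threshold=0.3)
```

In practice `weight` would be passed as `scale_pos_weight` to the XGBoost classifier, and the threshold tuned on a validation set.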
We are back to participate in another Datathon hosted by the Data Science Society. This time the theme is Text Analytics.
We will not be able to devote ourselves completely to the cause this time because of exams, which start next week. But we will try to keep the article as simple and well detailed as possible, so that it is helpful to any new data science enthusiast looking for a little help. So let's roll.