Making a music sentiment analyzer with distilbert-base-uncased-emotion

Overview:

I created a web app that reads the emotions of a song at the user's request. It is built on DistilBERT, a bidirectional transformer-based model; specifically, I used the multi-class classifier bhadresh-savani/distilbert-base-uncased-emotion · Hugging Face, which is fine-tuned on an emotion dataset of tweets. I used Flask to host my model, and Ainize as a launchpad to deploy it!
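
The serving layer is just a small Flask app wrapped around the Hugging Face pipeline. Here is a minimal sketch; the /predict route and the "lyrics" JSON field are my assumptions, since the post doesn't show the actual app code:

```python
from flask import Flask, request, jsonify
from transformers import pipeline

app = Flask(__name__)

# Load the fine-tuned emotion classifier once at startup.
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
)

@app.route("/predict", methods=["POST"])  # hypothetical route name
def predict():
    lyrics = request.get_json(force=True).get("lyrics", "")
    # truncation=True keeps inputs within DistilBERT's 512-token limit.
    result = classifier(lyrics, truncation=True)[0]
    return jsonify({"emotion": result["label"], "score": result["score"]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```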

Ainize:
Ainize can be thought of as a makerspace for open-source AI and ML projects. It serves as a launchpad for innovative artificial-intelligence solutions and a home for creative AI-driven ideas, even for people without serious computing resources or configurations. The platform runs in the cloud, which makes it highly accessible, and it also provides cloud-based workspaces for training models in interactive coding environments with GPUs. With free, simple deployment, Ainize encourages the use and creation of open-source AI, and it is what brought my music sentiment analyzer to life.

Demo:

The application is very straightforward. You just type in the song and artist, and in seconds, the model will output an emotion associated with the song! Feel free to try it out!
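
Under the hood, a request boils down to fetching the song's lyrics and running them through the classifier. The post doesn't say how lyrics are retrieved, so this sketch assumes the lyricsgenius package and a Genius API token, purely for illustration:

```python
import lyricsgenius
from transformers import pipeline

# Hypothetical token; lyricsgenius is an assumed choice of lyrics source.
genius = lyricsgenius.Genius("YOUR_GENIUS_API_TOKEN")
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
)

def song_emotion(title: str, artist: str) -> str:
    song = genius.search_song(title, artist)
    if song is None:
        return "song not found"
    # Truncate long lyrics to stay within the model's 512-token input limit.
    return classifier(song.lyrics, truncation=True)[0]["label"]
```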

Distilbert-base-uncased-emotion:

I chose this model because it accurately and quickly classifies the emotional sentiment of text across a range of emotions. The base of this model is BERT, or Bidirectional Encoder Representations from Transformers (who would have guessed?). BERT is a 12-layer, 768-hidden, 12-head, 110M-parameter neural network architecture, which is pretty hefty in both size and computational cost. This is what led to the birth of DistilBERT: the smaller, lighter, cheaper, and faster version of BERT. It uses distillation, a technique that compresses a large model (the teacher) into a smaller model (the student) during the pre-training phase. This method retains 97% of BERT's language understanding while reducing its size by 40%, all while being 60% faster. The version I found classifies emotions, since it is fine-tuned on the emotion dataset. The "uncased" just means that the model is not case sensitive. It is very straightforward and easy to use, as it comes with a pipeline that tokenizes and cleans the input for you.
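
Using the model really is just a few lines, since the pipeline handles tokenization under the hood. A quick sketch (the sample sentence is my own, not from the app):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
    return_all_scores=True,  # score every emotion label, not just the top one
)

print(classifier("i feel like a winner today"))
# [[{'label': 'sadness', 'score': ...}, {'label': 'joy', 'score': ...},
#   {'label': 'love', ...}, {'label': 'anger', ...}, {'label': 'fear', ...},
#   {'label': 'surprise', ...}]]
```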

Future improvements:

A problem with the Music Sentiment Analyzer is that it only works for lyrical songs in English. The analyzer is, by design, limited to reading the emotion of songs that have lyrics. It is also not the most accurate at gauging the emotion of lyrics, since language is nuanced. My guess is that it struggles most with "happy" songs because the emotion dataset used to fine-tune the model is built from sentiment analysis of Twitter posts. So, in conjunction with the lyrical sentiment analysis, the next step is to evaluate a song's instrumental cues, such as its chord progressions and notes. This could improve my model's accuracy and allow songs without lyrics to be analyzed as well.
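
As a first step toward those instrumental cues, one could extract pitch-class (chroma) features, which roughly capture chord and key content. This is only a hedged sketch of the idea; librosa is my assumed choice of library, and the post doesn't commit to a particular approach:

```python
import librosa
import numpy as np

def chroma_profile(audio_path: str) -> np.ndarray:
    """Average pitch-class energy over a track: a rough proxy for chords/key."""
    y, sr = librosa.load(audio_path)
    # chroma_cqt maps the signal onto the 12 pitch classes (C, C#, ..., B).
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    # Average over time to get a single 12-dimensional profile for the song.
    return chroma.mean(axis=1)
```

A profile like this could then be fed to a small classifier alongside the lyric-based emotion scores.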
