10 Challenges of sentiment analysis and how to overcome them Part 4

16 February 2023

A guide to evaluating sentiment analysis solutions for market research: What to watch out for when choosing sentiment analysis software

7 min read

Today, in the final part of our series, we look at the last two challenges: training models for specific domains and building trust in their results. 

9. Domain specificity 

We already stressed the importance of training material for machine learning. However, the issue of training material is not only one of quantity but also of adequacy. If the training material reflects the reality of the texts later to be analysed, the chance is much higher that the model will produce valid results. The quality of any machine learning model’s results will thus also reflect the quality of the training material. A model cannot become better than the data it was trained on. 

This leads us to the nature of the text material to be analysed. In sentiment analysis, we often have to deal with diverse material from all kinds of sources like forums, Twitter, Facebook, Instagram, blogs, online shops, etc. On each of these platforms, users have developed their own cultures, styles, codes and quality levels of writing and expressing themselves. Across platforms, we therefore find very different texts: short and long, with good and bad grammar, good and bad spelling, abbreviations versus complete words, and so on. Models that were trained on typical news and Wikipedia datasets struggle with such user-generated data. 

However, there is one more problem: even the topic at hand makes a stark difference. One cannot use a model trained on political comments and expect it to do well on cosmetics. A model trained on electronic products will not work for food. In each case, the vocabulary is different, and for machine learning models this matters. 

For a sentiment analysis model, this domain specificity is a difficult challenge to master. In many cases, a model needs specific training material for each domain to generate valid results. Of course, this creates a dilemma for all who develop sentiment analysis tools. Building a model for each domain takes a lot of time and money. 

Fortunately, it is not necessary to start from nothing in each domain. Large neural models that are trained on billions of words of general text can be adapted to specific tasks through transfer learning. The transformer model BERT (Devlin et al., 2018) has proven to be highly adaptable to different domains. Since then, even larger and more capable language models such as GPT-3 (Brown et al., 2020) have been released and are available for fine-tuning, e.g., via the services of OpenAI. 

The typical steps for transfer learning are: 1) selecting an appropriate base model, 2) designing an annotation schema, 3) annotating a training dataset, and 4) training and evaluating a model. Steps 3 and 4 are repeated until the desired performance is achieved, but it is also worth iterating on the annotation schema from step 2.
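
To make steps 3 and 4 more concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers and datasets libraries. The file name annotations.csv, the column names text and label (integer class ids), the choice of bert-base-uncased as base model and the hyperparameters are illustrative assumptions, not a recommendation for a specific setup.

```python
# Minimal transfer learning sketch: fine-tune a pre-trained transformer on an
# annotated sentiment dataset. File name, column names ("text", "label" with
# integer class ids, e.g. 0=negative, 1=neutral, 2=positive) and the
# hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Step 1: choose a base model (a model closer to the target domain may work better)
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Step 3: load the annotated data and hold out a split for evaluation
data = load_dataset("csv", data_files="annotations.csv")["train"]
data = data.train_test_split(test_size=0.2)
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

# Step 4: train and evaluate; repeat with more annotations until performance is sufficient
args = TrainingArguments(output_dir="sentiment-model", num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=data["train"], eval_dataset=data["test"])
trainer.train()
print(trainer.evaluate())
```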

10. Building trust in a system that is a black box and not always right 

Even current state-of-the-art sentiment analysis models deployed in the domain they were trained on do not reach 100% accuracy. Users will see mistakes, which can undermine the trust they have in the models. This is exacerbated:

  • When the model is a black box, such as a neural network, so that users do not know how it arrives at its results.

  • When its validity and the quality of its results are not tested, demonstrated, and proven so that users do not have a clue what the results mean for them.

  • When the model is not adapted repeatedly to the development of the domain. Social media discussions evolve, word choices change, and new platforms rise in popularity. Models must be re-tested with fresh labelled data and updated as necessary.

Model providers must manage the reputation of their models. They can provide users with the results of their own tests on validation data (data that the model has not seen during training) and give an accuracy estimate. Still, users need a way to communicate with the model provider and point out flaws in the model, especially ones that follow a suspicious pattern. This also places responsibility on the customers. They should be aware of the challenges that sentiment analysis poses and the risks these pose to its benefits. In the end, it is their money being spent and their conclusions and measures that are derived from the results.
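
As an illustration of such a check, the sketch below compares a model's predictions with human labels on a held-out validation set; the file and column names are hypothetical placeholders for whatever the provider shares.

```python
# Hypothetical validation check: compare model predictions with human labels
# on data the model never saw during training. File and column names are
# placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score, classification_report

validation = pd.read_csv("validation_results.csv")  # columns: human_label, model_prediction
print("Accuracy:", accuracy_score(validation["human_label"], validation["model_prediction"]))
print(classification_report(validation["human_label"], validation["model_prediction"]))
```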

Conclusion

When used correctly, sentiment analysis can provide insight into enormous amounts of text representing the views of thousands or millions of opinion holders. However, the 10 challenges we described in this series are not easy to overcome. 

There are three approaches that seem promising: 

  1. Transformer models are currently the best-performing ones thanks to their ability to understand words in context. They continue to evolve with larger language models as well as with advances in GPUs.

  2. Smart text selection is critical for correct analysis. Even the best algorithm cannot make up for problems in the data pipeline, such as irrelevant posts (by language, opinion holder, or topic). This layer of the data pipeline is easier to improve than the sentiment analysis model (a small filtering sketch follows after this list). Reviews are the easiest to work with because they have a clear target (the product) and an opinion holder (the buyer of the product).

  3. Domain-specific models can combine intelligent filters for text selection with sentiment analysis models trained on data from the specific domain. This promises accuracy improvements in comparison to general models. 
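
As an illustration of point 2, here is a minimal text-selection sketch that drops posts in the wrong language before they reach the sentiment model, using the open-source langdetect package; the three-word minimum length is an arbitrary example threshold.

```python
# Minimal text selection sketch: drop posts that are too short or not in the
# target language before they reach the sentiment model.
from langdetect import detect, LangDetectException

def keep_post(text: str, language: str = "en") -> bool:
    if len(text.split()) < 3:         # too short for reliable language detection
        return False
    try:
        return detect(text) == language
    except LangDetectException:       # e.g. emoji-only or URL-only posts
        return False

posts = ["Great product, works as advertised!", "Das Produkt ist super!", "😀😀😀"]
print([p for p in posts if keep_post(p)])  # keeps only the English review
```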

Sentiment analysis remains one of the tough challenges in machine learning. Our general advice on the topic:

  • Prefer models that give fine-grained predictions by aspect, rather than summarising whole texts in one number.

  • Test the validity and quality of the analysis you receive.

  • Ask about the metrics used to evaluate the model and the inter-coder reliability in the training data (a small example of such a reliability check follows below). 

Any provider with a solid model should be happy to help you with this by giving access to the necessary information. If not, mistrust is warranted.
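
Part of that information is the inter-coder reliability mentioned above. As a small illustration with made-up labels, Cohen's kappa between two annotators can be computed like this:

```python
# Illustrative inter-coder reliability check: agreement between two human
# annotators on the same sample of posts (labels here are made up).
from sklearn.metrics import cohen_kappa_score

coder_a = ["pos", "neg", "neu", "neg", "pos", "neu", "pos", "neg"]
coder_b = ["pos", "neg", "neu", "pos", "pos", "neu", "neg", "neg"]

print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")
# As a rough rule of thumb, values above 0.8 indicate good agreement.
```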

References 

Brown, T. B., Mann, B., Ryder, N., et al. (2020). Language Models are Few-Shot Learners. arXiv:2005.14165.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805.

Paul Simmering
Data Scientist at Q Agentur für Forschung GmbH
Thomas Perry
Managing Director at Q Agentur für Forschung GmbH