Automatic Methods for Online Review Classification: An Empirical Investigation of Review Usefulness—An Abstract

Jorge Fresneda, David Gefen

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In recent years, academic and practitioner interest in consumer-generated online reviews (OCRs) has increased. One likely reason for this growing interest is that such reviews are a valuable source of information for consumers making buying decisions online (e.g., Archak et al. 2011). OCRs consist of elements dealing with review valence (e.g., number of stars), review volume (the total number of reviews for the same product), a textual portion (where reviewers can openly provide further information), review usefulness (the number of yes/no answers to the question "was this review helpful to you?"), and verification that the reviewer actually purchased the product. In addition to these variables, readers can also deduce the variance of review valence, since the distribution of reviews across the different valence levels is also reported.

This study presents the available methods for text classification and empirically tests their performance in sorting OCRs by their usefulness variable. Previous research has shown the importance of the usefulness variable, since it correlates with sales impact (Chen and Xie 2008; Ghose and Ipeirotis 2011; Ghose et al. 2012). Useful reviews were shown to be more likely to impact product sales than non-useful reviews, and this effect is stronger for less popular products (Chen and Xie 2008). As a general conclusion, no single method performs significantly better than the others. SVM with class weights has the best overall classification capability, but it fails to classify the non-useful and useful OCRs. S k-means is the most accurate method for classifying useful reviews, but it fails on the other two categories, and its total performance is lower than that of other methods. Finally, SLDA shows the best performance in classifying non-useful reviews; however, it fails on the other two categories and also yields low total performance. As an additional contribution, this work documents the inability of some methods to perform this analysis at all, as is the case for two of the unsupervised learning techniques: LSA and CTM.

The results suggest that these methods fail to address some particular characteristics of OCRs, such as the comprehensibility of the text, its readability, and the type and novelty of the information it contains (Li and Zhan 2011; Ludwig et al. 2013). Additionally, none of these methods evaluates the temporal variable, that is, when the information was released (e.g., Purnawirawan et al. 2012). Further research on automatic classification methods should address these characteristics, which tend to be context dependent and product-type dependent, as has been suggested in previous research (Hong et al. 2014). In this sense, methods that combine semantics with an algorithmic or probabilistic approach can potentially resolve these shortcomings.
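To make the comparison concrete, the sketch below shows one of the supervised methods the study compares, an SVM with class weights, applied to three usefulness categories. It is a minimal illustration only: the abstract does not specify an implementation, so Python with scikit-learn, the TF-IDF features, and the example reviews and labels are all assumptions.

    # Minimal sketch (assumed setup, not the authors' implementation):
    # a class-weighted linear SVM over TF-IDF features.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Hypothetical reviews and usefulness labels (three categories).
    reviews = [
        "Battery lasts two full days; a full charge takes about an hour.",
        "Great product, love it!!!",
        "Works as described and arrived on time.",
        "Detailed comparison with the previous model: lighter and faster.",
    ]
    labels = ["useful", "non-useful", "neutral", "useful"]

    # class_weight='balanced' reweights each class inversely to its
    # frequency, countering skew in the usefulness labels.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LinearSVC(class_weight="balanced"),
    )
    model.fit(reviews, labels)
    print(model.predict(["Thorough list of pros and cons with examples."]))

Evaluating per-class precision and recall on held-out reviews, rather than overall accuracy alone, would surface the kind of failure the study reports, where a method's global performance is strong but it misses one or two usefulness categories.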

Original language: English (US)
Title of host publication: Developments in Marketing Science
Subtitle of host publication: Proceedings of the Academy of Marketing Science
Publisher: Springer Nature
Pages: 1331-1332
Number of pages: 2
DOIs
State: Published - 2017
Externally published: Yes

Publication series

Name: Developments in Marketing Science: Proceedings of the Academy of Marketing Science
ISSN (Print): 2363-6165
ISSN (Electronic): 2363-6173

All Science Journal Classification (ASJC) codes

  • Marketing
  • Strategy and Management
