Article

“Worse Than What I Read?” The External Effect of Review Ratings on the Online Review Generation Process: An Empirical Analysis of Multiple Product Categories Using Amazon.com Review Data

1 HSBC Business School, Peking University, Shenzhen 518055, China
2 Department of Marketing, College of Business Administration, Kookmin University, Seoul 02707, Korea
3 Central Technology, Bangkok 10500, Thailand
4 School of Management, Kyung Hee University, Seoul 02447, Korea
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(19), 10912; https://doi.org/10.3390/su131910912
Submission received: 20 August 2021 / Revised: 26 September 2021 / Accepted: 27 September 2021 / Published: 30 September 2021
(This article belongs to the Collection Marketing and Sustainability)

Abstract

In this paper, we study the online consumer review generation process by analyzing 37.12 million online reviews across nineteen product categories obtained from Amazon.com. The study reveals that the discrepancy between others' ratings and consumers' post-purchase evaluations significantly influences both the valence and quantity of the reviews that consumers generate. Specifically, a negative discrepancy ('worse than what I read') significantly prompts consumers to write negative reviews (supported in 19/19 categories), while a positive discrepancy ('better than what I read') prompts consumers to write positive reviews (supported in 16/19 categories). This implies that others' ratings play an important role in shaping consumers' review generation process. More interestingly, we find that this discrepancy significantly influences consumers' generation of neutral reviews, which are known to amplify the effect of positive or negative reviews by affecting consumers' search behavior or the credibility of the information. However, this effect is asymmetric: negative discrepancies lead consumers to write more neutral reviews, whereas positive discrepancies reduce neutral review generation. Furthermore, our findings provide important implications for marketers who generate fake reviews or selectively solicit reviews favorable to their products to increase sales. Doing so may backfire on firms because negative discrepancies can accelerate the generation of objective or negative reviews.

1. Introduction

‘I bought this cream because of all the wonderful reviews and was hoping it will help my terribly cracked hands. Well, after about a month this cream has done absolutely nothing. And it smells terrible. Will not be buying again and would not recommend it to anyone.’
—An anonymous review about Lotil Original Cream 114 mL posted on 1 February 2011 (Amazon.com)
Because consumers are exposed to or voluntarily seek online reviews in the online shopping environment, interested parties often distort reviews to maximize their own interests. Such manipulated reviews have been reported to be widespread: Yahoo (2018) reported that 30% of reviews on Amazon.com, Yelp, and TripAdvisor were not genuine (Source: https://finance.yahoo.com/news/rise-fake-amazon-reviews-spot-175430368.html, accessed on 26 September 2021), and there is increasing demand for websites such as fakespot.com and reviewmeta.com, which rate the truthfulness of online reviews. Indeed, fake reviews have become one of the most critical issues in the electronic word-of-mouth (WOM) literature [1,2,3,4]. This suggests that consumers are likely to confront a discrepancy between their own post-purchase evaluations and the evaluations of others that they observed before the purchase.
Fake reviews can be a particularly critical issue for sustainable products because user-generated reviews serve as an effective communication tool for online consumers [5,6,7,8,9]. A product's sustainable marketing practices are perceived more effectively through reviews generated by customers with direct experience than through company-initiated information. Firms with sustainable products therefore stand to benefit more from favorable reviews that attract potential customers, but positively biased reviews for a sustainable product, induced by firms or platform providers, could lead consumers to experience larger discrepancies.
Generally, it is widely known that providing recommendations or reviews for other people depends on consumers’ experience with the products they purchase [10,11,12,13,14,15]. Their good or bad experiences motivate consumers to generate WOM for the purpose of self-enhancement, self-efficacy, altruism, revenge, or a desire to share information.
Discrepancies between others’ evaluations and consumers’ post-purchase evaluations may critically influence the purchase decisions of potential future consumers by motivating experienced consumers to generate more positive, negative, or neutral reviews of the purchased product, depending on the direction of the discrepancy. In particular, this effect of others’ evaluations on the online review generation process is a distinctive feature of the electronic WOM context not typically observed in the traditional WOM literature.
For example, online consumers are unavoidably exposed to the overall review rating, which represents other consumers' evaluations, during their purchasing process on online shopping platforms. Many online stores and professional review sites provide these review ratings to help potential consumers judge the quality of a product, and they commonly display them (typically as a standardized numerical measurement such as a 5-star rating) in the most conspicuous place on the screen. Because many previous consumers' evaluations are quantified in this all-inclusive unitary measure, online consumers can readily recognize differences between their own evaluations and those of previous buyers.
Additionally, the anonymity of online reviews can increase the opportunities for consumers to experience this discrepancy and may motivate consumers to write their own reviews by underscoring the importance of review generation as a public good. Unlike traditional WOM, consumers cannot directly identify online reviewers. This anonymity provides ample opportunities for firms and other interested parties to manipulate online reviews to maximize their own interests. The problem of consumers being misled by fake reviews has been widely reported, and such manipulated reviews have become widespread across many industries. Furthermore, consumers may consider providing correct information to other people an essential task when facing discrepancies with anonymous sources.
Based on these observations, it is critical to understand the external effect of others' review ratings, that is, the effect of the discrepancies they create with experienced consumers' own evaluations on the review generation process after purchase. We operationally define this effect induced by others' review ratings as an external effect because the primary purpose of others' review ratings is to provide information about a product or service that helps online consumers make their purchasing decisions; a more detailed discussion follows in the next section. Given the dual role of online review ratings, which help consumers make purchase decisions and also provide further incentives to generate reviews, understanding this external effect is important for correctly verifying the dynamic mechanism by which online reviews affect consumer purchasing decisions [16,17,18,19,20]. The vast majority of eWOM studies have focused on the effect of online reviews on consumer purchase behaviors and the related boundary conditions. In contrast, few studies have explored the role of others' review ratings in consumers' online review generation process.
Thus, in this study, we investigate the external effect of others’ review ratings on consumers’ online reviews. Particularly, we focus on how this discrepancy induced by online review ratings influences the review generation behavior of consumers after the purchase. Specifically, this study addresses the following research questions:
(i)
Whether the discrepancy between others' evaluations, as conveyed by the overall online review rating, and the experienced consumer's own evaluation influences that consumer's review generation process.
(ii)
How different types of discrepancy (positive or negative) influence the online review generation process differently (positive, negative, and neutral reviews).
(iii)
Whether and how the effect of the discrepancy would be influenced by the experienced consumer’s previous review generation experience.
For our analysis, we employed a dataset of daily online reviews from Amazon.com covering 19 industries from 2012 to 2015, comprising a total of 37.12 million reviews. For the sentiment analysis of the review text, we employed VADER (Valence Aware Dictionary and sEntiment Reasoner), a method widely adopted in the information systems literature [21].
Our empirical analysis indicates that the discrepancy between the rating of individual consumers and the overall rating of others’ reviews has a significant effect on the review contents generated by individual consumers. When consumers perceived a positive discrepancy (‘better than what I read’), they generated more positive reviews (16/19 categories supported, 84% of all categories) and fewer negative reviews (16/19 categories, 84% of all categories). When they felt a negative discrepancy (‘worse than what I read’), they generated fewer positive reviews (19/19 categories, 100% of all categories) and more negative reviews (19/19 categories, 100% of all categories). We infer from these findings that negative discrepancies seem to exert a stronger influence than positive discrepancies.
More interestingly, we found that the effect of the discrepancy on generating neutral reviews was asymmetric. When consumers observed a negative discrepancy, they generated more neutral reviews (all categories supported). However, consumers did not necessarily generate more neutral reviews when they observed a positive discrepancy: this effect was found in only 4 of 19 categories (21%) and was not observed in the remaining 15 categories (79%). Our findings suggest that consumers are motivated to share objective information only when they experience a negative discrepancy.
We found further evidence that consumers' prior experience of review generation influences the impact of the external effect of online review ratings. When we analyzed the consumers who had prior experience in generating reviews, we replicated the main results described above, i.e., a negative discrepancy led to more negative reviews and fewer positive reviews, whereas a positive discrepancy led to more positive reviews and fewer negative reviews. Our data analysis also revealed that negative discrepancies increased neutral review generation in almost all categories (18/19 categories, 95% of all categories), while positive discrepancies did so in only about half of the categories (10/19 categories). These findings suggest that experienced reviewers become relatively immune to positive discrepancies but continue to respond sensitively to negative discrepancies even after having generated reviews.
Our study provides important insights into academic research in the eWOM literature. To the best of our knowledge, this study provides the first evidence of the external effect of online review ratings on future review generation by experienced consumers in an online shopping environment based on a large secondary dataset across multiple industries. This study has particularly meaningful implications for understanding the comprehensive and dynamic mechanism of the online review effect on consumers’ purchasing decisions by verifying the reproductive review generation process.
Additionally, our findings provide meaningful managerial implications for ethical marketers and relevant stakeholders by expanding our understanding of the external effect of review ratings on the online consumer's review generation process. Managers with a long-term sustainable marketing strategy should seriously consider this external effect because it drives consumers' tendency to generate reviews, influencing future sales of their products and services. In particular, our findings suggest that aggressive marketing programs that create more favorable reviews for products and services should be implemented with caution. If purchased goods fail to satisfy consumers, such manipulated reviews may backfire on sales by creating a negative discrepancy that, in turn, induces experienced consumers to generate more negative reviews and other consumers to generate more detailed and objective reviews.

2. Literature Review

2.1. Online Review Generation

The term online review is often used interchangeably with electronic word-of-mouth (eWOM). Chen and Xie considered online reviews a type of product information created by users based on personal experience [22]. Online reviews are an effective communication and marketing tool on online platforms for sellers and a source of product information for consumers. Forbes (2017) reported that 90% of consumers read online reviews before visiting a business and that 84% of consumers trust online reviews as much as a personal recommendation (Source: https://www.forbes.com/sites/ryanerskine/2017/09/19/20-online-reputation-statistics-that-every-business-owner-needs-to-know/?sh=37bb711dcc5c, accessed on 26 September 2021). It is also reported that 67% of consumers are influenced by online reviews when they make a purchase decision. Luca argues that a one-star increase in Yelp ratings leads to a 5% to 9% increase in a firm's revenue [23].
Along with its importance, there has been a surge of online review studies in the eWOM literature over the last few decades. These studies have found abundant evidence that both the volume and valence of online reviews play a significant role in influencing consumer purchasing behaviors [24,25,26,27,28]. In particular, they have focused on examining how and why online reviews influence consumers’ purchase decisions. Additionally, other researchers found that online reviews reduce uncertainty and search costs, therefore increasing product knowledge, trust and loyalty, consumer engagement, purchase intention, and willingness to pay for products [29,30,31,32,33,34]. These studies have flourished in various product categories, including movies, travel, restaurants, and grocery shopping, where online reviews play a critical role in signaling the quality of products and services.
However, while the vast majority of prior eWOM studies have focused on the impact of online reviews on consumers' purchasing behaviors and the relevant boundary conditions, fewer studies have examined the review generation process that follows the purchase. Understanding this distinctive generation process is critical to fully account for the impact of online reviews on consumers' purchase behaviors, given the reproductive nature of the online review system [17,35,36]. Consumers are influenced by online reviews when they make a purchase decision; after a purchase, however, they become review generators. Recently, researchers have explored the online review generation process to understand why and how online consumers generate reviews following a purchase [20,37,38]. The psychological factors influencing online review generation are fundamentally similar to those affecting traditional review generation, which has been extensively studied in the traditional WOM literature.
For instance, prior researchers have identified numerous psychological factors that motivate consumers to generate reviews. These factors include self-enhancement [10,39,40], innovativeness and opinion leadership [41], ability and self-efficacy [12,42], individuation [43], neuroticism [44], and altruism [11,13]. In general, consumers have the desire to provide others with accurate and complete information, to signal their expertise [14,45], to present themselves favorably [46,47], to persuade others [48], or to be affiliated with others [47,49]. Particularly, Hennig-Thurau et al., summarized eight reasons why consumers generate online reviews: platform assistance, venting negative feelings, concern for other consumers, extraversion/positive self-enhancement, social benefits, economic incentives, helping the company, and advice seeking [12]. These eight reasons are closely linked with Berger's five functions of generating traditional WOM: impression management, emotion regulation, information acquisition, social bonding, and persuading others [50].
Along with these psychological factors, recent research has identified additional factors that motivate consumers to generate online reviews. For example, by adopting planned behavior theory using restaurant data, Dixit et al., found that perceived behavioral control, subjective norms, ego involvement, and taking vengeance are significant factors in generating online reviews [51]. Thakur also finds that customers’ satisfaction and trust with a retailer lead them to engage in online review generation by more actively utilizing the mobile app [52]. Furthermore, some researchers have also found that the personal traits of consumers can be an essential factor influencing the online review generation behaviors of consumers [36,44].
Additionally, some researchers have provided analyses to investigate the content effect of the online review generation process. Based on the analysis of 336 posts from 88 discussion threads from online discussion forums (e.g., TripAdvisor), Hamilton et al., found that early responses to a post tend to drive the content of the discussion more than the content of the initial query [15]. They attribute the findings to the fact that a common online goal and affiliation makes respondents repeat the attributes mentioned by previous respondents. Askalidis et al., examined the differences between email (prompted) and web (self-motivated) reviews in terms of key metrics, including review rating and volume (238,809 reviews for 27,574 unique products, across four major online retailers) [53]. Godes and Silva used the length of the written review as measured by the number of characters as a proxy of cost [54]. They found an inverse U-shaped relationship between review length and rating (summary statistics in Chevalier and Mayzlin, [55]).
Furthermore, a recent study by Powell et al., argued that the intensity of consumers' participation in generating review comments plays a more critical role in affecting the effectiveness of reviews [56]. Greater intensity leads to more review generation, and this larger number of reviews can make the reviewed product more favorable. Powell et al., found that consumers are more likely to favor a product with a large number of reviews because the volume of reviews increases the credibility and reputation of the product [56]. In addition, they found that this impact of more reviews can mitigate the effect of negative reviews. Additionally, Powerreviews (2020) found that consumers obtain more emotion-based information from longer reviews. For example, consumers feel more positive and stronger connections when exposed to longer reviews, which is critical in influencing their purchasing decisions (Source: https://www.powerreviews.com/blog/why-we-built-the-review-meter/, accessed on 26 September 2021).
As mentioned above, consumers' motivation to write longer reviews depends on various factors, such as psychological factors, personal characteristics, and situational factors [10,11,15,35,41,42,44,48]. Additionally, Gvili and Levy found that consumers' engagement with online reviews can be strongly tied to the social capital and credibility of eWOM channels and to consumers' fundamental attitude toward generating online reviews [57]. Therefore, consumers' likelihood of writing longer reviews can be influenced by various factors they encounter during their online shopping trips.

2.2. Online Review Ratings

Prior studies (Wu and Huberman, 2008; Moe and Schweidel, 2012; Yoo et al., 2013) focused on verifying the factors affecting online review generation [17,58,59]. Wu and Huberman argue that a consumer decides to leave his or her own reviews based on the comparison to previous reviews [58]. They proposed a theory called impact-cost analysis, claiming that consumers analyze whether the impact of their reviews will outweigh the cost of submitting them before leaving comments. Additionally, Moe and Schweidel explored how others’ online reviews influence consumers’ review generation behaviors [17]. They presented a model of a reviewer’s decision and found significant heterogeneity with respect to consumers’ desire to post in high-consensus versus high-variance environments. Yoo et al., found that the greater the disagreement among professional critics, the greater the motivation for expert consumers to step in and break the tie [59].
While these prior studies verify that other reviewers' evaluations are among the crucial factors affecting the online review generation process, their findings are limited: they rest on simple experimental comparisons of review contents, and their boundary conditions do not reflect the distinctive features of the online review system. For example, online consumers are exposed to ample amounts of others' specific review content, yet they read only a few reviews, generally those displayed in the upper portion of the review section. It is thus difficult for consumers to grasp the overall direction of others' opinions from a large number of specific content-based reviews. In contrast, they can identify the degree and direction of others' evaluations from a numeric review rating. Therefore, the different types of reviews must be treated separately when exploring the effect of others' evaluations on consumers' online review generation.
Generally, online reviews are displayed in two formats: ‘verbal comments’ and ‘numerical ratings.’ While both types of online review influence consumers’ behaviors, they have distinctive characteristics. Verbal comments provide full freedom for consumers to express their opinions, feelings, and evaluations. They provide detailed information concerning the products and services, including individual-specific background information and circumstantial details. The contents of the verbal comments are often subjective and involve emotional expressions. On the other hand, numerical ratings are displayed on a platform-specific interval scale (e.g., five-star scale). Ratings provide a succinct and objective measurement of the reviews. It is easy for individual consumers to read and summarize others’ evaluations via ratings. Thus, while ratings lack detailed information regarding products and services, they enable consumers to easily compare others’ evaluations with their own experience within a single-dimensional scale.
In particular, an online consumer is inevitably exposed to an overall numerical review rating during his or her purchasing process. Online review ratings tend to be displayed in the most conspicuous places on online shopping platforms as a representative measurement of others' evaluations [26]. This distinctive feature, ingrained in most online review systems, helps the consumer compare his or her own experience to the evaluations provided by many others, and the unitary measurement of others' review ratings, such as a five-star scale, makes this comparison much easier. Such discrepancies are a particularly notable issue under recent circumstances in which fake reviews have become a serious problem by feeding biased information to consumers.
The anonymity of online reviews provides ample opportunity for firms and other interested parties to favorably manipulate online review ratings to maximize their own interests [1,2,3,4]. Researchers find that fake reviews are not an isolated, industry-specific problem but a global one [4]. Indeed, fake reviews have become one of the most prominent topics in the recent eWOM literature [1,2,3]. However, consumers still rely heavily on online reviews, even though they are aware of the existence of fake reviews. Diamond Research reported that 88% of consumers consider online reviews when they make purchase decisions (Source: https://www.zendesk.com/resources/customer-service-and-lifetime-customer-value/, accessed on 26 September 2021), indicating that consumers are likely to face greater discrepancies when they use online shopping platforms.
Thus, it is critical to understand the effect of online review ratings on the consumers’ online review generation process after they make purchases so as to verify the dynamic mechanism by which online reviews affect purchasing decisions. However, little is known about the effect of discrepancies induced by online review ratings on consumers’ online review generation; the majority of prior studies have focused on the impact of online review ratings on consumers’ purchasing behaviors and their economic value for firms that provide the product [23,60,61].

2.3. External Effect of Online Review Ratings

The primary purpose of the online review rating is to provide more information regarding products to inexperienced consumers and to help them make their purchase decisions. However, as mentioned above, online review ratings unintentionally permit consumers to compare their own evaluations to others' evaluations and thus to recognize any differences. These discrepancies, by affecting consumers' satisfaction and the emotions derived from it, can motivate them to leave their own reviews containing more specific information regarding the products [38,62,63,64,65].
Thus, in this study, we operationally define this unintended effect of online review ratings on consumers' online verbal review generation process as an external effect of online review ratings, following previous literature [66,67,68]. Specifically, the external effect can be considered “a negative effect” caused by a negative discrepancy if a consumer's evaluation is lower than the review ratings observed before the purchase (e.g., “worse than what I read”). Such a negative discrepancy may create a negative effect even if the consumer has a positive experience with the purchased product or service. In contrast, a positive discrepancy can create “a positive effect” if the experienced consumer's evaluation is higher than the observed review ratings (e.g., “better than what I read”). Such a positive discrepancy may create a positive effect even if the consumer has a negative experience.
The external effect of review ratings can accelerate the generation of experienced consumers’ opinions. First, an online review rating helps form the expectations of consumers who are exposed to this rating during their purchase process [62,69,70,71]. Thus, when they recognize the discrepancy between their experience and their expectations, this influences consumer satisfaction. It is well established in the marketing literature that consumer satisfaction is influenced not only by perceived experience but also by expectations [62,63,64,72]. Consumer satisfaction can be defined as “the degree to which a product meets or exceeds the consumer’s expectation about that product” [64]. Thus, expectations induced by others’ review ratings are a crucial factor in determining the satisfaction level of experienced consumers; it may also be an essential driver of the review generation process because satisfaction is a critical driver of consumer-generated reviews for other people [73,74,75,76].
Additionally, consumer satisfaction is related not only to cognitive elements such as expectations but also to affective elements such as consumers' emotional responses [77,78,79]. Giese and Cote defined consumer satisfaction as a summary of affective consumer responses of varying intensity [78]. They argued that consumers feel positive or negative emotions during their purchasing and consumption process and that these various types of emotions contribute to the intensity of their satisfaction. Thus, in our study, the discrepancy with others' review ratings can create a particular type of emotion for experienced consumers and influence their satisfaction or dissatisfaction with the purchased product. In particular, the satisfaction induced by the emotional response of the experienced consumer can play a key role in intensifying the intention to write one's own opinion [74,80,81,82]. These studies found that positive and negative emotions play an important role in influencing the level of consumer satisfaction and lead to WOM generation intention. Additionally, other researchers found that the emotion experienced during the purchase and consumption process can have an important influence on consumers' intention to express their opinions through reviews [83,84,85].
Moreover, online review ratings can enhance the motivation of online consumers to generate reviews by increasing their perceived behavioral control [86]. The numeric measurement of others' evaluations helps consumers compare their own evaluations against them. Because the aggregated evaluations of many others are apparent and easy to understand, consumers can readily perceive the impact of contributing their own reviews. Additionally, the anonymity of the overall review rating, a distinctive feature of online review ratings, can fortify consumers' intention to write their own reviews by emphasizing social benefits when they experience a larger gap between others' review ratings and their own experience [11,86].
In particular, the external effect of others' review ratings can be stronger when consumers perceive a negative discrepancy between their own evaluations and those of others (e.g., “worse than what I read”). One of the important factors affecting the motivation of online consumers to generate reviews is the ego-defensive function [84]: online review generation is motivated by people's need to minimize self-doubt. Consumers tend to seek to reduce their feelings of guilt from not contributing, and these guilty feelings are much greater when the expected consequences of not contributing matter more to the public. Thus, when consumers experience a negative discrepancy in an evaluation, they have a greater motivation to generate their own review than when the discrepancy is positive.
Similarly, another motive for generating reviews is to enhance consumers’ feelings of self-value [11,84]. Consumers tend to feel gratified by making contributions, which help their community validate itself. Thus, they have a greater motivation to leave their own opinions when it is considered vital information that could be helpful for other members of the community to which they belong.
In addition, the external effect may be influenced by consumers' prior experience of discrepancies in online review ratings because their satisfaction and emotions are affected as they accumulate purchases [74,78,79]. Prior research found that consumers' feelings of satisfaction or particular emotions related to their consumption or purchases persist, albeit in mitigated form, even when they have prior experience. In sum, the external effect of others' review ratings can be a critical factor in the process of consumers generating online reviews by creating a discrepancy between the consumer's own experience and the evaluations of other anonymous consumers. This phenomenon is a unique characteristic of online review generation behavior that is ingrained in the online review system.

3. Data

We used Amazon product data, which were also used in McAuley et al., and He and McAuley [87,88]. The dataset contains 142.8 million reviews from May 1996 to July 2014 in 24 product categories. It has two main parts: reviews (titles, descriptions, ratings, and helpfulness votes) and metadata regarding products (product names, descriptions, prices, and brands). We used the aggressively deduplicated review dataset for our analysis, which includes no duplicated reviews. This dataset contains 82.83 million unique reviews and a metadata dataset of 9.4 million products. We dropped 5 product categories with different file formats (e.g., Kindle Store, Apps for Android, and Amazon Instant Video). The final data include a total of 19 product categories (Table 1).
Table 2 describes the variables in our dataset, which contains one entry per review of a product in the broadest categories. We selected unique identifiers of reviews and products, the average rating, review time, count of helpfulness votes, price, sales rank in the broadest category, and brand (nonitalicized in Table 2). Furthermore, we transformed review texts and summaries, product titles, and descriptions into numerical features (italicized in Table 2), namely, the number of words, the number of characters, and sentiment scores.
We used VADER, a rule-based sentiment analysis tool that is specifically attuned to social media sentiments, to extract sentiment scores from review texts and summaries [21]. There are two representations of the sentiment scores. First, the compound valence score is the sum of valences of each word in the text, if and only if they are included in the VADER lexicon, normalized to be between −1 (most negative) and 1 (most positive). Second, we looked at the proportions of the text that are in negative, neutral, or positive lexicons (between 0 and 1 inclusive). Table 3 shows example texts and their sentiment scores (All codes for data processing can be found at https://github.com/cstorm125/amzn_reviews, accessed on 26 September 2021).
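To make the scoring step concrete, the snippet below is a minimal sketch of VADER feature extraction using the open-source vaderSentiment package; the example sentence is ours and purely illustrative (the authors' actual processing code is linked above).

```python
# Minimal sketch of VADER sentiment scoring (pip install vaderSentiment).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_scores(text: str) -> dict:
    # Returns 'neg', 'neu', 'pos' (proportions of the text falling in each
    # lexicon, between 0 and 1) and 'compound' (normalized sum of word
    # valences, between -1 and 1).
    return analyzer.polarity_scores(text)

# Illustrative example; exact values depend on the lexicon version.
print(sentiment_scores("Great build quality, but the battery life is terrible."))
```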

4. Model

We considered two empirical online review generation models to address the research questions mentioned above: first-time reviews and multiple reviews. We defined the first-time online reviewer as the reviewer shown in our online review dataset for the first time. After the first review, if a reviewer reappears in the data, we considered the review by that reviewer as a multiple review.

4.1. Dependent Variable

We consider the length of the review text, along with its sentiment, as the dependent variable of our analysis. Volume and valence have been adopted as characteristics of online reviews in most of the eWOM literature [19,89,90,91]. Thus, we consider both the length and the sentiment of the review text in the generation process. Specifically, we used three separate measurements for the dependent variable: (i) the length of the negative sentiment of the review text (NegLRT), (ii) the length of the neutral sentiment of the review text (NeuLRT), and (iii) the length of the positive sentiment of the review text (PosLRT). These separate dependent variables allow us to verify the effect of review rating discrepancies on the online review generation process across different sentiments.
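The paper does not spell out the exact construction of these three variables. The sketch below shows one plausible operationalization, under the assumption that each sentiment length is the VADER sentiment proportion scaled by the review length in words; the function name and the scaling choice are ours.

```python
# Hypothetical construction of the three dependent variables: scale the
# VADER proportions by the number of words, so each variable measures how
# much of the review carries negative, neutral, or positive sentiment.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_analyzer = SentimentIntensityAnalyzer()

def sentiment_lengths(review_text: str) -> dict:
    scores = _analyzer.polarity_scores(review_text)
    n_words = len(review_text.split())
    return {
        "NegLRT": scores["neg"] * n_words,  # length of negative sentiment
        "NeuLRT": scores["neu"] * n_words,  # length of neutral sentiment
        "PosLRT": scores["pos"] * n_words,  # length of positive sentiment
    }
```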

4.2. Independent Variable

4.2.1. Individual Rating (IRT)

We employed the individual rating (IRT) as a key independent variable measuring a single reviewer's experience of (or evaluation based on the experience of) the purchased product or service. The individual rating ranges from one to five stars: five stars indicate the highest level of satisfaction with an individual's experience, while one star indicates the lowest; three stars indicate neutral feelings about the experienced product or service. To test the effects of positive and negative experiences separately, we coded IRT as two binary variables. A positive individual rating (PosIRT) equals one if the individual rating is greater than the neutral point of 3, and zero otherwise. Similarly, a negative individual rating (NegIRT) equals one if the individual rating is lower than 3, and zero otherwise.
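In code, this coding is a pair of simple indicator transformations; the sketch below assumes the ratings sit in a pandas DataFrame column named IRT (an illustrative name).

```python
import pandas as pd

# Binary experience indicators as defined above: a rating above the neutral
# point of 3 codes as a positive experience, below 3 as negative; a rating
# of exactly 3 codes as neither.
ratings = pd.DataFrame({"IRT": [5, 4, 3, 2, 1]})
ratings["PosIRT"] = (ratings["IRT"] > 3).astype(int)
ratings["NegIRT"] = (ratings["IRT"] < 3).astype(int)
print(ratings)
```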

4.2.2. Discrepancy (DIS)

To measure the discrepancy between consumers' own experiences and others' evaluations, we employed a discrepancy variable (DIS). Specifically, we measured DIS as the difference between the individual rating and the overall rating (ORT) displayed to the particular consumer before he or she provides a rating and review. That is:
$$\mathrm{DIS}_{i,j,t} = \mathrm{IRT}_{i,j,t} - \mathrm{MAORT}_{i,j,t-1}$$
where IRT is the individual rating, MAORT is the moving average of the overall rating, i indexes the brand, j the individual consumer, and t the time at which the consumer provided the rating and review. In the estimation, we used a 5-day moving average of the overall rating. Thus, if an individual rates a product lower than the 5-day moving average of the overall rating, the discrepancy is negative (NegDIS); similarly, if a consumer's rating is higher than the moving average, the discrepancy is positive (PosDIS). However, one might argue that a consumer leaves a rating or review some time after purchasing, so there may be a time lag between the purchase and the generation of the review. Thus, we employed two additional measurements of MAORT over longer ranges: a 10-day moving average and the cumulative average of the overall rating up to the day before the consumer leaves a rating and review. We found consistent results for all three measurements (the results are available upon request).
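The sketch below illustrates this construction, assuming one row per review with illustrative column names (product_id, date, IRT); it computes the lagged 5-day moving average per product and splits DIS into positive and negative parts. Whether PosDIS and NegDIS enter the models as magnitudes or as indicators is not stated in the paper, so the split shown here is an assumption.

```python
import pandas as pd

def add_discrepancy(reviews: pd.DataFrame, window: int = 5) -> pd.DataFrame:
    # Assumes one row per review with columns: product_id, date (daily),
    # and IRT (the star rating). Column names are illustrative.
    daily = (reviews.groupby(["product_id", "date"])["IRT"]
                    .mean().rename("ORT").reset_index())
    # Rolling mean of the daily overall rating, lagged one day so that the
    # benchmark uses only ratings visible before the review is written.
    daily["MAORT"] = (daily.groupby("product_id")["ORT"]
                           .transform(lambda s: s.rolling(window, min_periods=1)
                                                 .mean().shift(1)))
    out = reviews.merge(daily[["product_id", "date", "MAORT"]],
                        on=["product_id", "date"], how="left")
    out["DIS"] = out["IRT"] - out["MAORT"]
    # Split into positive and negative parts (our assumption, see above).
    out["PosDIS"] = out["DIS"].clip(lower=0)
    out["NegDIS"] = (-out["DIS"]).clip(lower=0)
    return out
```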

4.2.3. Product Information Variable

We also included control variables related to the product (or service) information that a consumer observes on the website. First, we included the price of the purchased product (PRC) as the displayed dollar value of the product. Second, we included the length of the product description (DES); the longer the product description, the more information is provided to consumers about the product. We also included a brand variable (BRN), which equals one if the brand of the product is displayed on the website and zero otherwise. Finally, we included year dummies (YRDM) and month dummies (MNDM) to capture potential seasonality in the review generation process.

4.3. Model Specification

We proposed two separate online review generation models: first-time reviews and multiple reviews (M1 and M2, respectively). Specifically,
(1) Online review generation model for the first-time review:
-
M1-a:
$$\mathrm{PosLRT}_{i,j,t} = \beta_0^{Pos} + \beta_{POSIRT}^{Pos}\mathrm{PosIRT}_{i,j,t} + \beta_{NEGIRT}^{Pos}\mathrm{NegIRT}_{i,j,t} + \beta_{POSDIS}^{Pos}\mathrm{PosDIS}_{i,j,t} + \beta_{NEGDIS}^{Pos}\mathrm{NegDIS}_{i,j,t} + \beta_{PRC}^{Pos}\mathrm{PRC}_{i,t} + \beta_{DES}^{Pos}\mathrm{DES}_{i,t} + \beta_{BRN}^{Pos}\mathrm{BRN}_{i,t} + \mathbf{YRDM}\,\boldsymbol{\gamma}^{Pos} + \mathbf{MNDM}\,\boldsymbol{\theta}^{Pos} + e_{i,j,t}^{Pos}$$
-
M1-b:
$$\mathrm{NegLRT}_{i,j,t} = \beta_0^{Neg} + \beta_{POSIRT}^{Neg}\mathrm{PosIRT}_{i,j,t} + \beta_{NEGIRT}^{Neg}\mathrm{NegIRT}_{i,j,t} + \beta_{POSDIS}^{Neg}\mathrm{PosDIS}_{i,j,t} + \beta_{NEGDIS}^{Neg}\mathrm{NegDIS}_{i,j,t} + \beta_{PRC}^{Neg}\mathrm{PRC}_{i,t} + \beta_{DES}^{Neg}\mathrm{DES}_{i,t} + \beta_{BRN}^{Neg}\mathrm{BRN}_{i,t} + \mathbf{YRDM}\,\boldsymbol{\gamma}^{Neg} + \mathbf{MNDM}\,\boldsymbol{\theta}^{Neg} + e_{i,j,t}^{Neg}$$
-
M1-c:
$$\mathrm{NeuLRT}_{i,j,t} = \beta_0^{Neu} + \beta_{POSIRT}^{Neu}\mathrm{PosIRT}_{i,j,t} + \beta_{NEGIRT}^{Neu}\mathrm{NegIRT}_{i,j,t} + \beta_{POSDIS}^{Neu}\mathrm{PosDIS}_{i,j,t} + \beta_{NEGDIS}^{Neu}\mathrm{NegDIS}_{i,j,t} + \beta_{PRC}^{Neu}\mathrm{PRC}_{i,t} + \beta_{DES}^{Neu}\mathrm{DES}_{i,t} + \beta_{BRN}^{Neu}\mathrm{BRN}_{i,t} + \mathbf{YRDM}\,\boldsymbol{\gamma}^{Neu} + \mathbf{MNDM}\,\boldsymbol{\theta}^{Neu} + e_{i,j,t}^{Neu}$$
(2) Online review generation model for the multiple-time reviews.
For the multiple-time review model, we considered a fixed-effect model. Because our data identify individual reviewers, we can account for the unobserved individual effect.
-
M2-a:
$$\mathrm{PosLRT}_{i,j,t} = \alpha_0^{Pos} + \alpha_{POSIRT}^{Pos}\mathrm{PosIRT}_{i,j,t} + \alpha_{NEGIRT}^{Pos}\mathrm{NegIRT}_{i,j,t} + \alpha_{POSDIS}^{Pos}\mathrm{PosDIS}_{i,j,t} + \alpha_{NEGDIS}^{Pos}\mathrm{NegDIS}_{i,j,t} + \alpha_{PRC}^{Pos}\mathrm{PRC}_{i,t} + \alpha_{DES}^{Pos}\mathrm{DES}_{i,t} + \alpha_{BRN}^{Pos}\mathrm{BRN}_{i,t} + \mathbf{YRDM}\,\boldsymbol{\eta}^{Pos} + \mathbf{MNDM}\,\boldsymbol{\phi}^{Pos} + \upsilon_{i,j,t}^{Pos}$$
where $\upsilon_{i,j,t}^{Pos} = \mu_j^{Pos} + \varepsilon_{i,j,t}^{Pos}$, with $\mu_j^{Pos}$ denoting the unobserved individual reviewer effect.
-
M2-b:
$$\mathrm{NegLRT}_{i,j,t} = \alpha_0^{Neg} + \alpha_{POSIRT}^{Neg}\mathrm{PosIRT}_{i,j,t} + \alpha_{NEGIRT}^{Neg}\mathrm{NegIRT}_{i,j,t} + \alpha_{POSDIS}^{Neg}\mathrm{PosDIS}_{i,j,t} + \alpha_{NEGDIS}^{Neg}\mathrm{NegDIS}_{i,j,t} + \alpha_{PRC}^{Neg}\mathrm{PRC}_{i,t} + \alpha_{DES}^{Neg}\mathrm{DES}_{i,t} + \alpha_{BRN}^{Neg}\mathrm{BRN}_{i,t} + \mathbf{YRDM}\,\boldsymbol{\eta}^{Neg} + \mathbf{MNDM}\,\boldsymbol{\phi}^{Neg} + \upsilon_{i,j,t}^{Neg}$$
where $\upsilon_{i,j,t}^{Neg} = \mu_j^{Neg} + \varepsilon_{i,j,t}^{Neg}$.
-
M2-c:
$$\mathrm{NeuLRT}_{i,j,t} = \alpha_0^{Neu} + \alpha_{POSIRT}^{Neu}\mathrm{PosIRT}_{i,j,t} + \alpha_{NEGIRT}^{Neu}\mathrm{NegIRT}_{i,j,t} + \alpha_{POSDIS}^{Neu}\mathrm{PosDIS}_{i,j,t} + \alpha_{NEGDIS}^{Neu}\mathrm{NegDIS}_{i,j,t} + \alpha_{PRC}^{Neu}\mathrm{PRC}_{i,t} + \alpha_{DES}^{Neu}\mathrm{DES}_{i,t} + \alpha_{BRN}^{Neu}\mathrm{BRN}_{i,t} + \mathbf{YRDM}\,\boldsymbol{\eta}^{Neu} + \mathbf{MNDM}\,\boldsymbol{\phi}^{Neu} + \upsilon_{i,j,t}^{Neu}$$
where $\upsilon_{i,j,t}^{Neu} = \mu_j^{Neu} + \varepsilon_{i,j,t}^{Neu}$.
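For readers who wish to replicate the estimation, the sketch below shows one way to fit M1 and M2 with standard Python libraries (statsmodels for the cross-sectional OLS, linearmodels for the reviewer fixed-effects panel model). It assumes DataFrames prepared with the variables defined above; the column names, including reviewer_id and review_time, are illustrative rather than taken from the paper.

```python
import statsmodels.formula.api as smf
from linearmodels.panel import PanelOLS

# Right-hand side shared by all six specifications; year and month dummies
# enter as categorical terms.
RHS = "PosIRT + NegIRT + PosDIS + NegDIS + PRC + DES + BRN + C(year) + C(month)"

def fit_first_time(df, outcome):
    # M1: cross-sectional OLS for first-time reviews; `outcome` is one of
    # 'PosLRT', 'NegLRT', 'NeuLRT'.
    return smf.ols(f"{outcome} ~ {RHS}", data=df).fit()

def fit_multiple(panel_df, outcome):
    # M2: fixed-effects panel model; `panel_df` must carry a two-level
    # (reviewer_id, review_time) index. EntityEffects absorbs the
    # unobserved reviewer effect mu.
    return PanelOLS.from_formula(f"{outcome} ~ 1 + {RHS} + EntityEffects",
                                 data=panel_df).fit()
```

Fitting each of the three outcomes under each model yields the six specifications M1-a through M2-c.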

5. Estimation

For the estimation, we used cross-sectional OLS regression for the first-time review model and a panel-data fixed-effect model for the multiple-time review model. In this section, we report the estimation results for both models and then discuss the relevant managerial implications of our findings.

5.1. Estimation Results of the First-Time Review Model

Figure 1, Figure 2 and Figure 3 report the estimation results of the review generation model for the first review. First, we obtained evidence that experience plays a key role in the generation of reviews. Specifically, we found that both positive and negative experiences (PosIRT and NegIRT) significantly influence review generation for all three dependent variables. This finding is consistent across almost all categories: all 19 categories for positive and negative reviews (Figure 1 and Figure 2) and 18 of 19 categories for neutral reviews, the exception being the health and personal care category (Figure 3). A positive experience significantly increases the generation of positive reviews and decreases the generation of negative reviews.
On the other hand, a negative experience increases the generation of a negative review and decreases the generation of a positive review. However, the magnitude of these effects is asymmetric. The impact of a positive experience on the generation of a positive review is stronger than the effect of a negative experience. We also found that consumers generate fewer neutral reviews when they have positive experiences, while they generate more neutral reviews when they have negative experiences. Interestingly, the effect of a negative experience on neutral review generation is much stronger than its effect on negative review generation. This finding suggests that when consumers have positive experiences, they are more likely to share pleasant emotions instead of objective information. However, when they have negative experiences, they are more likely to write factual information instead of expressing negative emotions.
More importantly, we found that discrepancy plays a key role in generating online reviews. When a negative discrepancy, i.e., 'worse than what I read,' occurs, consumers generate more negative and neutral reviews (Figure 2 and Figure 3). Discrepancy significantly influences the review generation process even after controlling for consumers' experience factors. These patterns are consistently found across all 19 categories in the generation of both negative and neutral reviews. Interestingly, in some categories, such as Grocery and Gourmet Food and Health and Personal Care, only negative discrepancies increased neutral review generation, while negative experiences did not. This finding has practical implications for managers who manage eWOM for their products. For example, suppose a manager hires reviewers and asks them to write positive reviews or inflate product ratings; this manipulation may induce more consumers to generate negative reviews, because a negative discrepancy leads consumers to post objective information about the product and write negative reviews to inform potential consumers.
Furthermore, negative discrepancies decrease positive review generation (Figure 1): even if consumers are satisfied with their experience, they hesitate to write positive reviews when their own evaluation falls below the ratings they read. Thus, while distorted reviews or ratings may successfully induce consumers to try the reviewed products for the first time, they may backfire on the firm by generating negative discrepancies, leading to more negative reviews and fewer positive reviews from experienced consumers.
On the other hand, when a consumer detects a positive discrepancy, i.e., 'better than what I read,' he or she is likely to generate more positive reviews and fewer negative reviews (Figure 1 and Figure 2). Interestingly, a positive discrepancy reduces neutral review generation, unlike a negative discrepancy (Figure 3). This indicates that positive discrepancies, similar to positive experiences, may induce consumers to generate more reviews to share their emotions and satisfaction. Although this pattern for positive discrepancies was consistent, it was not as strong as that for negative discrepancies. We found a significant effect of positive discrepancies on positive review generation in 16 of 19 categories (84% of categories supported, Figure 1) and on generating fewer negative reviews in 16 of 19 categories (84% of categories supported, Figure 2). The exceptions for the positive discrepancy effect on positive review generation are the CDs and Vinyl, Kindle Store, and Digital Music categories; the Kindle Store, Movies and TV, and Digital Music categories are the exceptions for the positive discrepancy effect on negative review generation. In terms of neutral reviews, people generated fewer neutral reviews when they detected positive discrepancies in 12 categories (63% of all categories, Figure 3) and more neutral reviews in 4 categories (21% of all categories, Figure 3).

5.2. Estimation Results of the Multiple Review Model

We conducted similar analyses with the consumers who had generated reviews in the past. The findings are similar to those of the first-time review model; however, we found less consistent evidence for the external effect, while the experience effect was supported in all categories. The estimation results are reported in Figure 4, Figure 5 and Figure 6. As in the first-time model, experience is a key factor in generating online reviews of both sentiments (PosIRT and NegIRT). The direction of the findings is consistent: positive experiences result in more positive reviews and fewer negative reviews, and vice versa. These findings hold across all 19 categories (Figure 4 and Figure 5). We also found that a positive experience results in fewer neutral reviews and that a negative experience generates more neutral reviews (Figure 6). The pattern holds across all 19 categories for positive experience and in 17 categories for negative experience, with the effect of negative experience insignificant in the remaining 2 categories.
In terms of review discrepancies, the findings of the multiple review model show patterns consistent with those for first-time reviewers, with several exceptions across categories, particularly in the positive discrepancy case. We found that negative discrepancies result in more negative reviews and fewer positive reviews, and positive discrepancies lead to more positive reviews and fewer negative reviews. However, the effect of positive discrepancies on review generation was less widely supported than that of negative discrepancies: the relationship held in 13 of 19 categories for positive discrepancies (68% of all categories) but in all 19 categories for negative discrepancies (Figure 4 and Figure 5). For neutral reviews, negative discrepancies increased neutral review generation in almost all categories (18/19 categories), while positive discrepancies decreased neutral review generation in 10 of 19 categories. In sum, our analysis of the multiple-review model shows that the positive external effect on the review generation process is weaker, whereas the negative discrepancy effect remains strong. One possible reason is that consumers might be less sensitive to a discrepancy when they have more experience generating online reviews because of greater familiarity. In particular, people are less susceptible to positive discrepancies, i.e., 'better than what I read'; the negative discrepancy, however, remains a powerful driver inducing people to generate reviews. Nevertheless, little is known about how consumers respond to different types of discrepancies as they accumulate review generation experience. Thus, it would be meaningful for future research to investigate how consumers' review generation experience influences their review generation intention.

6. Conclusions

In this study, we investigate whether and how consumers' online review generation process is influenced by others' review ratings. We focused on the discrepancy between the evaluations of experienced consumers and the anonymous evaluations of others represented in online review ratings. We expand our understanding of the effect of others' reviews on consumers' purchasing processes by investigating the external effect of online review ratings on the consumers' review generation process after the purchase. To address these research questions, we collected sizeable online review data comprising 37.12 million unique reviews over 19 product categories from Amazon.com. We categorized review contents using an information systems technique and analyzed this comprehensive dataset, finding significant empirical evidence for the external effect of online review ratings across various industries.
Our empirical findings make an important contribution to the eWOM literature by shedding light on the external effect of online review ratings on the online review generation process. Our results imply that future research should incorporate the dynamic mechanism of online reviews' impact on consumers' purchasing behaviors and firms' revenue by considering the ongoing process of generating online reviews. Our findings also have meaningful implications for the fake review literature, which addresses one of the most serious concerns of managers and academics. Our data analysis demonstrated that positively distorted reviews generate negative discrepancies, increasing experienced consumers' intention to generate more negative and fewer positive reviews. In addition, negative discrepancies significantly influence experienced consumers, leading them to generate more neutral reviews, which play a critical role in enhancing the impact of negative and positive reviews on future consumers [92]. Thus, intentional manipulation of online reviews might backfire on firms.
The issue of fake reviews is a problem not only for specific firms but also for online shopping and review platform providers. Recently, platform providers themselves have been reported to apply systematically biased review mechanisms [93]. For example, Taobao, the largest online C2C shopping platform, provided distorted review systems that favor positive reviews or ratings when consumers review the product or store they experienced. Additionally, consumers who want to leave comments on Yelp can be systematically exposed to positive, highly rated reviews of other reviewers during the review generation process. This is because online platform providers tend to showcase favorable reviews and ratings to attract more companies or stores to their platforms. However, these systematic manipulations to create more positive reviews might increase the frequency and severity of negative reviews for products and stores by generating a large discrepancy between displayed product evaluations and those of experienced consumers.
These findings also provide additional insight into the ethical marketing literature, where the majority of studies have focused on the adverse effects of firms' unethical behaviors on their financial performance and brand equity, which directly impair sustainable customer relationships [94,95,96]. Unethical manipulation of online reviews by firms or platform providers ultimately harms the focal firms and platform providers by reproducing more negative reviews for potential consumers. Thus, unethical manipulation of reviews can damage the manipulating company or platform provider in the long run by provoking experienced customers and hurting the reputation and credibility of the digital business environment.
Although this study verifies the external effect of online review ratings on consumers' review generation process with significant empirical evidence from ample review data from one of the most representative online shopping platforms, limitations remain in fully understanding this external effect. First, there may be other ways in which review ratings influence consumers' reviews. In the psychology and marketing literature, it is known that people can be affected by a numeric anchor to which they are exposed before making a decision [97,98,99]. This cognitive bias is well documented in the marketing literature. Thus, consumers may tend to be swayed by the rating they observed before they evaluate a product or service.
Additionally, people naturally tend to adopt quickly the majority of others' opinions provided by the relevant group, a heuristic known as "majority rule" [100,101,102,103]. People are likely to apply this simple heuristic when they make a decision. Thus, consumers may tend to follow the direction of others' opinions when they observe discrepancies during the evaluation process. Such alternative explanations for the external effect are also possible. However, our empirical findings significantly support our hypotheses based on consumers' psychological factors induced by the unintended discrepancy, where others' review ratings serve as a reference for consumers. Therefore, it would be a good addition to the literature if future studies investigated the different mechanisms that can induce the external effect of online review ratings.
Additionally, our study calls for future research to verify the key constructs inducing the external effect of others' review ratings and their comprehensive mechanisms. Owing to the limitations of empirical analysis employing secondary data from Amazon.com, our dataset does not include demographic information or individual-level histories of exposure to previous reviews. Thus, our analysis cannot support individual-level causal inference, and such investigation should be undertaken in future studies to expand our understanding of the externality of others' review ratings. While Amazon's design makes it difficult to miss the ratings of each product, we cannot rule out the possibility that some consumers make their purchase decisions regardless of the ratings and therefore entirely ignore the rating information provided by the website. Our data may contain such consumers, whom we cannot separate from the dataset. However, given the highly noticeable presentation of ratings on the platform and the usefulness of such information, we suspect that the proportion of such consumers is not significant.
Furthermore, it can be argued that the quality of the product consumers experience influences their response to the imbalance between others' ratings and their own experience. The quality level of the product can influence consumers' psychological factors, such as attitudes, beliefs, and perceptions [104,105,106,107]. Thus, consumers' behavior in response to a given level of discrepancy may differ when they consume lower-quality products or services. This effect can be related to the brand, price, or other characteristics of the product. It would therefore be interesting for future research to investigate this external effect at the product category level by exploring the moderating effect of product quality on the external impact of others' review ratings and considering the relevant characteristics of each product.
Finally, the all-inclusive, single-number design of online review rating systems may send a misleading quality signal to consumers who place little value on sustainable marketing practices. Consumers differ in how much they appreciate sustainable products, so previous consumers’ ratings are inherently noisy signals of how much weight was given to a product’s sustainable attributes. For example, a consumer who cares deeply about sustainability might leave a positive rating for an environmentally friendly product, whereas the next consumer, who cares little about sustainability, might experience a substantial negative discrepancy. Consequently, the unitary rating system commonly adopted by most online platforms can itself be an additional source of the external effect of review ratings for sustainable products, and verifying this effect could be an essential topic for sustainable marketers.
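To make this argument concrete, the following purely illustrative simulation (all parameter values are assumptions, not estimates from our data) shows how heterogeneous weights on a sustainable attribute turn a unitary average rating into a noisy reference point:

```python
# Purely illustrative simulation: a unitary average rating becomes a noisy
# reference point when reviewers weight a sustainable attribute differently.
# All parameter values are assumptions, not estimates from our data.
import numpy as np

rng = np.random.default_rng(0)

sustainability, other_quality = 0.9, 0.5  # product strong on sustainability only

# Earlier reviewers draw heterogeneous weights on the sustainable attribute.
weights = rng.beta(5, 2, size=1_000)      # mostly sustainability-minded reviewers
utility = weights * sustainability + (1 - weights) * other_quality
ratings = 1 + 4 * utility                 # map utility onto a 1-5 star scale
posted_average = ratings.mean()

# A later consumer who cares little about sustainability (weight = 0.1).
w_new = 0.1
own_evaluation = 1 + 4 * (w_new * sustainability + (1 - w_new) * other_quality)

# A negative value means the product is 'worse than what I read'.
print(f"posted average rating: {posted_average:.2f}")
print(f"own evaluation:        {own_evaluation:.2f}")
print(f"discrepancy:           {own_evaluation - posted_average:.2f}")
```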

Author Contributions

Conceptualization, Y.J.P., J.J. and Y.Y.; methodology, Y.J.P., J.J. and Y.Y.; software, C.P. and Y.Y.; formal analysis, Y.J.P., J.J. and Y.Y.; investigation, Y.J.P., J.J., C.P. and Y.Y.; resources, C.P.; data curation, C.P.; writing—original draft preparation, Y.J.P., J.J., C.P. and Y.Y.; writing—review and editing, Y.J.P., J.J. and Y.Y.; supervision, Y.J.P., J.J. and Y.Y.; project administration, Y.Y.; funding acquisition, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant from Kyung Hee University in 2018 (KHU-20180925).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The review data analyzed in this study are publicly available at https://jmcauley.ucsd.edu/data/amazon/ (accessed on 26 September 2021). The dataset is documented in He and McAuley [88] and McAuley et al. [87].
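Each category file on that page is distributed as a gzipped file with one JSON record per line. A minimal sketch for reading such a file follows; the file name is an example of the 5-core files listed there, and some files on that page are not strict JSON and need `ast.literal_eval` instead:

```python
# Minimal sketch for reading one category file from the dataset above.
# The file name is an example; some files there require ast.literal_eval.
import gzip
import json

def parse_reviews(path):
    """Yield one review record per line of a gzipped JSON-lines file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

for review in parse_reviews("reviews_Automotive_5.json.gz"):
    print(review["asin"], review["overall"], review["summary"])
    break  # inspect only the first record
```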

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hu, N.; Bose, I.; Koh, N.S.; Liu, L. Manipulation of online reviews: An analysis of ratings, readability, and sentiments. Decis. Support Syst. 2012, 52, 674–684.
2. Mayzlin, D.; Dover, Y.; Chevalier, J. Promotional reviews: An empirical investigation of online review manipulation. Am. Econ. Rev. 2014, 104, 2421–2455.
3. Luca, M.; Zervas, G. Fake it till you make it: Reputation, competition, and Yelp review fraud. Manag. Sci. 2016, 62, 3412–3427.
4. Gössling, S.; Zeiss, H.; Hall, C.M.; Martin-Rios, C.; Ram, Y.; Grøtte, I.P. A cross-country comparison of accommodation manager perspectives on online review manipulation. Curr. Issues Tour. 2019, 22, 1744–1763.
5. Lee, K.; Conklin, M.; Bordi, P.; Cranage, D. Restaurants’ healthy eating initiatives for children increase parents’ perceptions of CSR, empowerment, and visit intentions. Int. J. Hosp. Manag. 2016, 59, 60–71.
6. Kucukusta, D.; Perelygina, M.; Lam, W.S. CSR communication strategies and stakeholder engagement of upscale hotels in social media. Int. J. Contemp. Hosp. Manag. 2019, 31, 2129–2148.
7. D’Acunto, D.; Tuan, A.; Dalli, D.; Viglia, G.; Okumus, F. Do consumers care about CSR in their online reviews? An empirical analysis. Int. J. Hosp. Manag. 2020, 85, 102342.
8. Sung, K.K.; Tao, C.-W.W.; Slevitch, L. Restaurant chain’s corporate social responsibility messages on social networking sites: The role of social distance. Int. J. Hosp. Manag. 2020, 85, 102429.
9. Park, E.; Kwon, J.; Kim, S.B. Green Marketing Strategies on Online Platforms: A Mixed Approach of Experiment Design and Topic Modeling. Sustainability 2021, 13, 4494.
10. Fiske, A.P. Using individualism and collectivism to compare cultures—A critique of the validity and measurement of the constructs: Comment on Oyserman et al. Psychol. Bull. 2002, 128, 78–88.
11. Hennig-Thurau, T.; Gwinner, K.P.; Walsh, G.; Gremler, D.D. Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the Internet? J. Interact. Mark. 2004, 18, 38–52.
12. Gruen, T.W.; Osmonbekov, T.; Czaplewski, A.J. eWOM: The impact of customer-to-customer online know-how exchange on customer value and loyalty. J. Bus. Res. 2006, 59, 449–456.
13. Dellarocas, C.; Narayan, R. Tall heads vs. long tails: Do consumer reviews increase the informational inequality between hit and niche products? Robert H. Smith Sch. Bus. Res. Pap. 2007, 06–056.
14. Packard, G.; Wooten, D.B. Compensatory knowledge signaling in consumer word-of-mouth. J. Consum. Psychol. 2013, 23, 434–450.
15. Hamilton, R.W.; Schlosser, A.; Chen, Y.-J. Who’s driving this conversation? Systematic biases in the content of online consumer discussions. J. Mark. Res. 2017, 54, 540–555.
16. Litvin, S.W.; Goldsmith, R.E.; Pan, B. Electronic word-of-mouth in hospitality and tourism management. Tour. Manag. 2008, 29, 458–468.
17. Moe, W.W.; Schweidel, D.A. Online product opinions: Incidence, evaluation, and evolution. Mark. Sci. 2012, 31, 372–386.
18. Filieri, R.; McLeay, F. eWOM and accommodation: An analysis of the factors that influence travelers’ adoption of information from online reviews. J. Travel Res. 2013, 53, 44–57.
19. Yoon, Y.; Polpanumas, C.; Park, Y.J. The impact of word of mouth via Twitter on moviegoers’ decisions and film revenues: Revisiting prospect theory: How WOM about movies drives loss-aversion and reference-dependence behaviors. J. Advert. Res. 2017, 57, 144–158.
20. Rosario, A.B.; de Valck, K.; Sotgiu, F. Conceptualizing the electronic word-of-mouth process: What we know and need to know about eWOM creation, exposure, and evaluation. J. Acad. Mark. Sci. 2020, 48, 422–448.
21. Hutto, C.J.; Gilbert, E.E. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the Eighth International Conference on Weblogs and Social Media (ICWSM-14), Ann Arbor, MI, USA, 1–4 June 2014.
22. Chen, Y.; Xie, J. Online consumer review: Word-of-mouth as a new element of marketing communication mix. Manag. Sci. 2008, 54, 477–491.
23. Luca, M. Reviews, reputation, and revenue: The case of Yelp.com. Harv. Bus. Sch. Work. Pap. 2016, 12–16.
24. Brown, T.J.; Barry, T.E.; Dacin, P.A.; Gunst, R.F. Spreading the word: Investigating antecedents of consumers’ positive word-of-mouth intentions and behaviors in a retailing context. J. Acad. Mark. Sci. 2005, 33, 123–138.
25. Forman, C.; Ghose, A.; Wiesenfeld, B. Examining the relationship between reviews and sales: The role of reviewer identity disclosure in electronic markets. Inf. Syst. Res. 2008, 19, 291–313.
26. De Langhe, B.; Fernbach, P.M.; Lichtenstein, D.R. Navigating by the stars: Investigating the actual and perceived validity of online user ratings. J. Consum. Res. 2016, 42, 817–833.
27. Gavilan, D.; Avello, M.; Martinez-Navarro, G. The influence of online ratings and reviews on hotel booking consideration. Tour. Manag. 2018, 66, 53–61.
28. Hong, S.; Pittman, M. eWOM anatomy of online product reviews: Interaction effects of review number, valence, and star ratings on perceived credibility. Int. J. Advert. 2020, 39, 892–920.
29. Schau, H.J.; Muniz, A.M., Jr. Brand communities and personal identities: Negotiations in cyberspace. Adv. Consum. Res. 2002, 29, 344–349.
30. Xia, L.; Bechwati, N.N. Word of mouse: The role of cognitive personalization in online consumer reviews. J. Interact. Advert. 2008, 9, 3–13.
31. Clemons, E.K.; Gao, G.G. Consumer informedness and diverse consumer purchasing behaviors: Traditional mass-market, trading down, and trading out into the long tail. Electron. Commer. Res. Appl. 2008, 7, 3–17.
32. Trusov, M.; Bucklin, R.E.; Pauwels, K. Effects of word-of-mouth versus traditional marketing: Findings from an internet social networking site. J. Mark. 2009, 73, 90–102.
33. Sparks, B.A.; Browning, V. The impact of online reviews on hotel booking intentions and perception of trust. Tour. Manag. 2011, 32, 1310–1323.
34. Lim, B.C.; Chung, C.M. The impact of word-of-mouth communication on attribute evaluation. J. Bus. Res. 2011, 64, 18–23.
35. Kannan, P.K.; Li, H.A. Digital marketing: A framework, review and research agenda. Int. J. Res. Mark. 2017, 34, 22–45.
36. Li, Y.; Zhang, L. Do online reviews truly matter? A study of the characteristics of consumers involved in different online review scenarios. Behav. Inf. Technol. 2020.
37. Standing, C.; Holzweber, M.; Mattsson, J. Exploring emotional expressions in e-word-of-mouth from online communities. Inf. Process. Manag. 2016, 52, 721–732.
38. Serra-Cantallops, A.; Ramon-Cardona, J.; Salvi, F. The impact of positive emotional experiences on eWOM generation and loyalty. Span. J. Mark.-ESIC 2018, 22, 142–162.
39. De Angelis, M.; Bonezzi, A.; Peluso, A.M.; Rucker, D.D.; Costabile, M. On braggarts and gossips: A self-enhancement account of word-of-mouth generation and transmission. J. Mark. Res. 2012, 49, 551–563.
40. Wojnicki, A.C.; Godes, D. Word-of-Mouth as Self-Enhancement. HBS Marketing Research Paper No. 06-01. 2008. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=908999 (accessed on 26 September 2021).
41. Sun, T.; Youn, S.; Wu, G.; Kuntaraporn, M. Online word-of-mouth (or mouse): An exploration of its antecedents and consequences. J. Comput. Mediat. Commun. 2006, 11, 1104–1127.
42. Huang, C.C.; Lin, T.C.; Lin, K.J. Factors affecting pass-along email intentions (PAEIs): Integrating the social capital and social cognition theories. Electron. Commer. Res. Appl. 2009, 8, 160–169.
43. Ho, J.Y.; Dempsey, M. Viral marketing: Motivations to forward online content. J. Bus. Res. 2010, 63, 1000–1006.
44. Picazo-Vela, S.; Chou, S.Y.; Melcher, A.J.; Pearson, J.M. Why provide an online review? An extended theory of planned behavior and the role of Big-Five personality traits. Comput. Hum. Behav. 2010, 26, 685–696.
45. Schlosser, A.E. Posting versus lurking: Communicating in a multiple audience context. J. Consum. Res. 2005, 32, 260–265.
46. Barasch, A.; Berger, J. Broadcasting and narrowcasting: How audience size affects what people share. J. Mark. Res. 2014, 51, 286–299.
47. Schlosser, A.E. The effect of computer-mediated communication on conformity vs. nonconformity: An impression management perspective. J. Consum. Psychol. 2009, 19, 374–388.
48. Chen, Y.J.; Kirmani, A. Posting strategically: The consumer as an online media planner. J. Consum. Psychol. 2015, 25, 609–621.
49. Ludwig, S.; De Ruyter, K.; Friedman, M.; Brüggen, E.C.; Wetzels, M.; Pfann, G. More than words: The influence of affective content and linguistic style matches in online reviews on conversion rates. J. Mark. 2013, 77, 87–103.
50. Berger, J. Word of mouth and interpersonal communication: A review and directions for future research. J. Consum. Psychol. 2014, 24, 586–607.
51. Dixit, S.; Badgaiyan, A.J.; Khare, A. An integrated model for predicting consumer’s intention to write online reviews. J. Retail. Consum. Serv. 2019, 46, 112–120.
52. Thakur, R. Customer engagement and online reviews. J. Retail. Consum. Serv. 2018, 41, 48–59.
53. Askalidis, G.; Kim, S.J.; Malthouse, E.C. Understanding and overcoming biases in online review systems. Decis. Support Syst. 2017, 97, 23–30.
54. Godes, D.; Silva, J.C. Sequential and temporal dynamics of online opinion. Mark. Sci. 2012, 31, 448–473.
55. Chevalier, J.A.; Mayzlin, D. The effect of word of mouth on sales: Online book reviews. J. Mark. Res. 2006, 43, 345–354.
56. Powell, D.; Yu, J.; DeWolf, M.; Holyoak, K.J. The Love of Large Numbers: A Popularity Bias in Consumer Choice. Psychol. Sci. 2017, 28, 1432–1442.
57. Gvili, Y.; Levy, S. Consumer engagement with eWOM on social media: The role of social capital. Online Inf. Rev. 2018, 42, 482–505.
58. Wu, F.; Huberman, B.A. How public opinion forms. In International Workshop on Internet and Network Economics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 334–341.
59. Yoo, C.W.; Sanders, G.L.; Moon, J. Exploring the effect of e-WOM participation on e-Loyalty in e-commerce. Decis. Support Syst. 2013, 55, 669–678.
60. Park, S.; Nicolau, J.L. Asymmetric Effects of Online Consumer Reviews. Ann. Tour. Res. 2015, 50, 67–83.
61. He, S.; Hollenbeck, B.; Proserpio, D. The Market for Fake Reviews. In Proceedings of the 22nd ACM Conference on Economics and Computation (EC ’21), Budapest, Hungary, 18–23 July 2021.
62. Oliver, R.L. A cognitive model of the antecedents and consequences of satisfaction decisions. J. Mark. Res. 1980, 17, 460–469.
63. Cadotte, E.; Woodruff, R.; Jenkins, R. Expectations and norms in models of consumer satisfaction. J. Mark. Res. 1987, 24, 305–314.
64. Ferrell, O.C.; Hartline, M.D. Marketing Strategy; South-Western, Cengage Learning: Mason, OH, USA, 2011.
65. Choraria, S. Exploring the role of negative emotions on customer’s intention to complain. Vision 2013, 17, 201–211.
66. Buchanan, J.M.; Stubblebine, W.C. Externality. In Classic Papers in Natural Resource Economics; Palgrave Macmillan: London, UK, 1962; pp. 138–154.
67. Park, Y.J.; Zhang, F.; Yoon, Y. The external effect of a migrated star player on domestic sports league: An empirical analysis of three Asian leagues—Japan, Korea and Taiwan. Int. J. Sports Mark. Spons. 2021, 22, 262–292.
68. Feng, Y.; Cao, W.; Shin, G.C.; Yoon, Y. The external effect of international tourism on brand equity development process of multinational firms (MNFs). J. Brand Manag. 2021.
69. Churchill, G.A.; Surprenant, C. An investigation into the determinants of customer satisfaction. J. Mark. Res. 1982, 19, 491–504.
70. Tse, D.K.; Wilton, P.C. Models of consumer satisfaction formation: An extension. J. Mark. Res. 1988, 25, 204–212.
71. Zeithaml, V.A.; Berry, L.; Parasuraman, A. The nature and determinants of customer expectations of service. J. Acad. Mark. Sci. 1993, 21, 1–12.
72. Li, H.; Ye, Q.; Law, R. Determinants of customer satisfaction in the hotel industry: An application of online review analysis. Asia Pac. J. Tour. Res. 2013, 18, 784–802.
73. Liljander, V.; Strandvik, T. Emotions in service satisfaction. Int. J. Serv. Ind. Manag. 1997, 8, 148–169.
74. White, C. The impact of emotions on service quality, satisfaction, and positive word-of-mouth intentions over time. J. Mark. Manag. 2010, 26, 381–394.
75. Sun, M. How does the variance of product ratings matter? Manag. Sci. 2012, 58, 696–707.
76. Guo, Y.; Barnes, S.J.; Jia, Q. Mining meaning from online ratings and reviews: Tourist satisfaction analysis using latent Dirichlet allocation. Tour. Manag. 2017, 59, 467–483.
77. Westbrook, R.A.; Oliver, R.L. The dimensions of consumption emotion patterns and consumer satisfaction. J. Consum. Res. 1991, 18, 84–91.
78. Giese, J.L.; Cote, J.A. Defining consumer satisfaction. Acad. Mark. Sci. Rev. 2000, 1, 1–27.
79. White, C.; Yu, Y. Satisfaction emotions and consumer behavioral intentions. J. Serv. Mark. 2005, 19, 411–420.
80. Dubé, L.; Menon, K. Multiple roles of consumption emotions in post-purchase satisfaction with extended service transactions. Int. J. Serv. Ind. Manag. 2000, 11, 287–304.
81. McQuitty, S.; Finn, A.; Wiley, J. Systematically varying consumer satisfaction and its implications for product choice. Acad. Mark. Sci. Rev. 2000, 10, 231–254.
82. Homburg, C.; Koschate, N.; Hoyer, W.D. The role of cognition and affect in the formation of customer satisfaction: A dynamic perspective. J. Mark. 2006, 70, 21–31.
83. Clary, E.G.; Snyder, M.; Ridge, R.D.; Miene, P.K.; Haugen, J.A. Matching messages to motives in persuasion: A functional approach to promoting volunteerism. J. Appl. Soc. Psychol. 1994, 24, 1129–1146.
84. Daugherty, T.; Eastin, M.S.; Bright, L. Exploring consumer motivations for creating user-generated content. J. Interact. Advert. 2008, 8, 1–24.
85. Beldad, A.; Voutsas, C. Understanding the intention to write reviews for mobile apps among German users: Testing the expanded theory of planned behavior using a structural equation modeling approach. J. Technol. Behav. Sci. 2018, 3, 301–311.
86. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211.
87. McAuley, J.; Targett, C.; Shi, Q.; Van Den Hengel, A. Image-Based Recommendations on Styles and Substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, 9–13 August 2015; pp. 43–52.
88. He, R.; McAuley, J. Ups and downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering. In Proceedings of the 25th International Conference on World Wide Web, Montréal, QC, Canada, 11–15 April 2016; pp. 507–517.
89. Eliashberg, J.; Shugan, S.M. Film critics: Influencers or predictors? J. Mark. 1997, 61, 68–78.
90. Buttle, F.A. Word of mouth: Understanding and managing referral marketing. J. Strateg. Mark. 1998, 6, 241–254.
91. Harrison-Walker, L.J. The measurement of word-of-mouth communication and an investigation of service quality and customer commitment as potential antecedents. J. Serv. Res. 2001, 4, 60–75.
92. Tang, T.; Fang, E.; Wang, F. Is neutral really neutral? The effects of neutral user-generated content on product sales. J. Mark. 2014, 78, 41–58.
93. Aral, S. The Problem With Online Ratings. Sloan Manag. Rev. 2014, 55, 47–52.
94. Kotler, P.; Lee, N. Best of Breed: When it comes to gaining a market edge while supporting a social cause, “corporate social marketing” leads the pack. Soc. Mark. Q. 2005, 11, 91–103.
95. Lee, J.Y.; Jin, C.H. The role of ethical marketing issues in consumer-brand relationship. Sustainability 2019, 11, 6536.
96. Tanveer, M.; Ahmad, A.R.; Mahmood, H.; Haq, I.U. Role of ethical marketing in driving consumer brand relationships and brand loyalty: A sustainable marketing approach. Sustainability 2021, 13, 6839.
97. Lynch, J.G.; Chakravarti, D.; Mitra, A. Contrasts in Consumer Judgments: Changes in Mental Representations or in the Anchoring of Rating Scales? J. Consum. Res. 1991, 18, 284–297.
98. Chapman, G.; Johnson, E. Anchoring, Activation, and the Construction of Values. Organ. Behav. Hum. Decis. Process. 1999, 79, 115–153.
99. Furnham, A.; Boo, C.H. A literature review of the anchoring effect. J. Socio-Econ. 2011, 40, 35–42.
100. Boyd, R.; Richerson, P.J. Culture and the Evolutionary Process; University of Chicago Press: Chicago, IL, USA, 1985.
101. Hutchins, E. Cognition in the Wild; MIT Press: Cambridge, MA, USA, 1996.
102. Gigerenzer, G.; Todd, P.M.; ABC Research Group. Simple Heuristics That Make Us Smart; Oxford University Press: Oxford, UK, 1999.
103. Mercier, H.; Morin, O. Majority rules: How good are we at aggregating convergent opinions? Evol. Hum. Sci. 2019, 1, E6.
104. McFerran, B.; Aquino, K.; Tracy, J.L. Evidence for two facets of pride in consumption: Findings from luxury brands. J. Consum. Psychol. 2014, 24, 455–471.
105. Kessous, A.; Valette-Florence, P. From Prada to nada: Consumers and their luxury products: A contrast between second-hand and first-hand luxury products. J. Bus. Res. 2019, 102, 313–327.
106. Zhang, L.; Zhao, H. Personal value vs. luxury value: What are Chinese luxury consumers shopping for when buying luxury fashion goods? J. Retail. Consum. Serv. 2019, 51, 62–71.
107. Dhaliwal, A.; Singh, D.P.; Paul, J. The consumer behavior of luxury goods: A review and research agenda. J. Strateg. Mark. 2020, 1–27.
Figure 1. Estimation results for the review generation of first-time reviewers: Positive review generation (DV: length of a positive review).
Figure 2. Estimation results for the review generation of first-time reviewers: Negative review generation (DV: length of a negative review).
Figure 3. Estimation results for the review generation of first-time reviewers: Neutral review generation (DV: length of a neutral review).
Figure 4. Estimation results for the review generation of multiple-time reviewers: Positive review generation (DV: length of a positive review).
Figure 5. Estimation results for the review generation of multiple-time reviewers: Negative review generation (DV: length of a negative review).
Figure 6. Estimation results for the review generation of multiple-time reviewers: Neutral review generation (DV: length of a neutral review).
Table 1. Industries included in the sample.
No. | Industry (Full Name) | Industry (Abbreviation)
1 | Automotive | Auto
2 | Baby | Baby
3 | Beauty | Beauty
4 | CDs and Vinyl | CV
5 | Cell Phones and Accessories | Cell
6 | Clothing Shoes and Jewelry | Clothes
7 | Patio Lawn and Garden | Garden
8 | Grocery and Gourmet Food | Grocery
9 | Home and Kitchen | Home
10 | Musical Instruments | Instruments
11 | Kindle Store | Kindle
12 | Movies and TV | Movie
13 | Digital Music | Music
14 | Office Products | Office
15 | Health and Personal Care | PC
16 | Pet Supplies | Pet
17 | Tools and Home Improvement | Tool
18 | Toys and Games | Toys
19 | Video Games | Vgame
Table 2. Summary of variables in dataset.
Variable | Description
reviewer_nb | unique identifier of a review
asin | unique identifier of a product
overall | rating of the product (1: worst, 5: best)
unixReviewTime | Unix time of the day the review was written
helpful_yes | number of people who found the review helpful
helpful_no | number of people who found the review unhelpful
reviewText_len | number of words in the review
reviewText_char | number of characters in the review
summary_len | number of words in the summary
summary_char | number of characters in the summary
reviewText_compound | sentiment of the review; sum of the valences of each word, normalized (−1: most negative, 1: most positive)
reviewText_neg | proportion of the review text in the negative lexicon
reviewText_neu | proportion of the review text in the neutral lexicon
reviewText_pos | proportion of the review text in the positive lexicon
summary_compound | sentiment of the summary; sum of the valences of each word, normalized (−1: most negative, 1: most positive)
summary_neg | proportion of the summary in the negative lexicon
summary_neu | proportion of the summary in the neutral lexicon
summary_pos | proportion of the summary in the positive lexicon
lev1 | broadest category of the product
title_len | number of words in the product title
title_char | number of characters in the product title
desc_len | number of words in the product description
desc_char | number of characters in the product description
price | price of the product in USD
salesRank | sales rank of the product in the broadest category
brand | brand of the product
Table 3. Example review texts and their sentiment scores.
Review Text | Compound | Negative | Neutral | Positive
“This is excellent edition and perfectly true to the orchestral version! It makes playing Vivaldi a joy! I used this for a wedding and was totally satisfied with the accuracy!” | 0.9651 | 0 | 0.52 | 0.48
“Mat and would not hit it off. Instant personality clash for sure. Then again I didn’t buy this DVD to make friends. I bought it to learn blues. 50 good usable licks presented in a way that you can actually learn them. Good camera angles, the sound is fair. Tab could be onscreen but I guess the booklet works just fine. Just the thing for the early intermediate player. This DVD spends a lot of time in my player!” | 0.829 | 0 | 0.851 | 0.149
“I like Simon Phillips, I think he is one of the best drummers in the world. This video, however, was obviously made a long time ago because he looks very young in it. I admit I saw the cover and could see he appeared to be much younger, but because the video had been remastered and redone I hoped some of his more recent performances might be included, they were not. I still enjoyed it but wish he would bring out something containing more recent material.” | 0.6754 | 0.041 | 0.833 | 0.127
“This is not what you’re thinking. This DVD is merely two hours of the ‘flamboyant’ John Patitucci playing songs. He does not show any of his riffs to you, he plays them at full speed so it takes forever to find the pattern. This is $30.00 I’ll never have back!” | 0.4753 | 0 | 0.921 | 0.079
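The sentiment variables in Table 2 and the scores in Table 3 correspond to the output fields of the VADER analyzer [21]. As an illustration, a minimal sketch assuming the open-source vaderSentiment Python package, which may not match the exact configuration used in our analysis:

```python
# Minimal sketch reproducing VADER-style scores such as those in Table 3,
# assuming the open-source vaderSentiment package (pip install vaderSentiment).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
text = ("This is excellent edition and perfectly true to the orchestral "
        "version! It makes playing Vivaldi a joy!")

# Returns 'neg', 'neu', 'pos' (proportions summing to about 1)
# and 'compound' (normalized valence in [-1, 1]).
print(analyzer.polarity_scores(text))
```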
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
