

Procedia Computer Science 31 (2014) 511 - 516

Information Technology and Quantitative Management, ITQM 2013

Sentiment Classification Based on AS-LDA Model

Jiguang Liang a,*, Ping Liu a, Jianlong Tan a, Shuo Bai a,b

a National Engineering Laboratory for Information Security Technologies, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100190, China
b Shanghai Stock Exchange, Shanghai 200120, China

Abstract

We address the task of sentiment classification - identification of the polarity of a subjective document - in this paper. We introduce a sentiment classification method called AS-LDA. In this model, we assume that the words in subjective documents consist of two parts: sentiment element words and auxiliary words, which are sampled respectively from sentiment topics and auxiliary topics. Sentiment element words include the targets of opinions, polarity words and modifiers of polarity words. Experimental results demonstrate that our approach outperforms Latent Dirichlet Allocation (LDA).

© 2014 Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

Selection and peer-review under responsibility of the Organizing Committee of ITQM 2014.

Keywords: sentiment analysis; sentiment classification; Latent Dirichlet Allocation; subjective document

1. Introduction

Sentiment analysis, also known as opinion mining, is the computational study of opinions, sentiments and emotions expressed in text (Liu, 2010). Sentiment analysis has been applied across a wide range of domains in recent years, such as information retrieval (IR) (Pang and Lee, 2008; Liu, 2010; Li et al., 2012), question answering systems (Oh et al., 2012; Kucuktunc et al., 2012) and social networks (Diakopoulos et al., 2010; Tan et al., 2011).

Usually, sentiment analysis can be decomposed into three subtasks: subjective text detection, subjective information extraction and sentiment classification (polarity identification). This paper addresses the third subtask, sentiment classification. In short, sentiment classification aims to automatically predict the sentiment polarity (e.g., positive or negative) of user-generated sentiment data (e.g., reviews, blogs) (Pan et al., 2010).

In this paper, we aim to describe an effective method, called AS-LDA, for the sentiment classification problem. Given a subjective document, we assume that there are two kinds of words in it: sentiment element words and auxiliary words. Sentiment element words are those words that can be used to express certain opinions, sentiments or emotions; they are an essential part of the subjective document. Compared to sentiment element words, auxiliary words are less important: they are used to assist or enhance the expression of meaning. In the AS-LDA model, two kinds of topics, corresponding to sentiment element words

* Corresponding author. Tel.: +86-010-82546747 ; fax: +86-010-82546701 . E-mail address: liangjiguang@iie.ac.cn.

doi: 10.1016/j.procs.2014.05.296

and auxiliary words, are produced. In other words, words in subjective documents are sampled either from the sentiment topics or from the auxiliary topics. Consider the following example, in which sentiment element words are shown in italics.

• Review 1 : There are good performances by Hattie Jacques as the matron, however her character seems a little more subdued and quieter than her previous 'matron's' .

This is a review of a film in which Hattie Jacques is the leading actress. From the italicized items, we can clearly observe that the targets of the opinion (performances, character), polarity words (good, subdued and quieter) and modifiers (a little more) are considered sentiment element words. The rest are auxiliary words, which are sampled from auxiliary topics.
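This partition can be sketched as follows; the tiny lexicons are assumptions made purely for illustration, since the paper does not specify how sentiment candidates are identified.

```python
# Illustrative sketch: partition the tokens of Review 1 into sentiment element
# candidates and auxiliary words. The lexicons below are assumptions for
# illustration only, not the paper's actual resources.
TARGETS = {"performances", "character"}      # targets of the opinions
POLARITY = {"good", "subdued", "quieter"}    # polarity words
MODIFIERS = {"little", "more"}               # modifiers of polarity words

def partition(tokens):
    """Split tokens into sentiment element candidates and auxiliary words."""
    sentiment, auxiliary = [], []
    for tok in tokens:
        word = tok.lower().strip(".,'")
        if word in TARGETS or word in POLARITY or word in MODIFIERS:
            sentiment.append(tok)
        else:
            auxiliary.append(tok)
    return sentiment, auxiliary

review = ("There are good performances by Hattie Jacques as the matron however "
          "her character seems a little more subdued and quieter than her "
          "previous matron's").split()
sent, aux = partition(review)
print(sent)   # sentiment element candidates
print(aux)    # auxiliary words
```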

The rest of the paper is structured as follows. Section 2 begins with a discussion of the related work. Section 3 presents the AS-LDA model for sentiment classification. In Section 4 we provide the evaluation of the proposed method. Then we conclude in Section 5.

2. Related Work

Significant research effort has been invested in sentiment analysis, especially in the domains of movie reviews (Pang et al., 2002; Kennedy and Inkpen, 2006; Maas et al., 2011), product reviews (Cui et al., 2006; Wei and Gulla, 2010), Twitter (Go et al., 2009; Jiang et al., 2011) and microblogs (Bermingham and Smeaton, 2010; Stieglitz et al., 2013). Go et al. (2009) introduce a novel approach that uses distant supervision to automatically classify the sentiment of Twitter messages. Jiang et al. (2011) address target-dependent Twitter sentiment classification: given a query, they classify the sentiments of the tweets as positive, negative or neutral. Glorot et al. (2011) propose a deep learning approach, which learns to extract a meaningful representation for each review in an unsupervised fashion, to tackle the problem of domain adaptation for sentiment classification. Progress on Chinese sentiment analysis, by contrast, is limited by the lack of Chinese sentiment corpora. Wan (2009) focuses on this problem and proposes a cross-lingual sentiment classification method that makes use of a labeled English corpus and unlabeled Chinese data.

Particularly worth mentioning is that Latent Dirichlet Allocation has been successfully used to classify sentiment in recent years. Lin and He (2009) propose a novel probabilistic modeling framework based on LDA, called the joint sentiment/topic (JST) model, for sentiment analysis. This model is fully unsupervised. Li et al. (2010) introduce a Sentiment-LDA model for sentiment analysis with global topics and local dependency. Jo and Oh (2011) describe two models, Sentence-LDA (SLDA) and the Aspect and Sentiment Unification Model (ASUM), to automatically discover which aspects are evaluated in reviews and how sentiments for different aspects are expressed; sentiment classification is thus obtained at a finer granularity, down to sentences and aspects. However, these works do not make use of sentiment information during modeling.

3. Method

As proposed in Section 1, we consider the task of sentiment classification. In this section, we describe a new sentiment model called Auxiliary-Sentiment Latent Dirichlet Allocation (AS-LDA) for sentiment classification.

3.1. Auxiliary-Sentiment Latent Dirichlet Allocation (AS-LDA)

In LDA for sentiment classification, all words in the document are sampled from the global topics, which neglects the particularity of sentiment words. In AS-LDA, sentiment element words and auxiliary words are treated differently. The graphical representation of AS-LDA is shown in Figure 1(b) and the notations are explained in Table 1. The generative process of AS-LDA is described in Table 2.


Fig. 1. (a) Latent Dirichlet Allocation (LDA) model. (b) An extension of LDA to obtain AS-LDA for sentiment classification.

Table 1. Meanings of the notations.

M     the number of documents.                               N     the number of words in a document.
V^a   the number of auxiliary words.                         V^s   the number of sentiment element words.
w^a   the auxiliary words.                                   w^s   the sentiment element words.
z^a   the auxiliary topics.                                  z^s   the sentiment topics.
θ^a   multinomial distribution over auxiliary topics.        θ^s   multinomial distribution over sentiment topics.
ψ     multinomial distribution over auxiliary words.         φ     multinomial distribution over sentiment element words.
γ     Bernoulli variable indicating sentiment element words. λ     hyper-parameter for γ.
α^a   Dirichlet prior vector for θ^a.                        α^s   Dirichlet prior vector for θ^s.
β^a   Dirichlet prior vector for ψ.                          β^s   Dirichlet prior vector for φ.

Table 2. The generative process of reviews with the AS-LDA model.

1: For each auxiliary topic k ∈ {1, 2, ..., K}, do
     draw ψ_k ~ Dir(β^a)
2: End for
3: For each sentiment topic l ∈ {1, 2, ..., L}, do
     draw φ_l ~ Dir(β^s)
4: End for
5: For each document m in D, do
     choose a distribution θ^a_m ~ Dir(α^a) for auxiliary words
     choose a distribution θ^s_m ~ Dir(α^s) for sentiment element words
6: End for
7: For each word w_mn in m, do
     If w_mn is a sentiment candidate word, then draw γ ~ Ber(λ)
       If γ = 1, then
         choose a sentiment topic z^s_mn ~ Multi(θ^s_m)
         choose w_mn from the sentiment word distribution φ of topic z^s_mn
       Else
         choose an auxiliary topic z^a_mn ~ Multi(θ^a_m)
         choose w_mn from the auxiliary word distribution ψ of topic z^a_mn
     Else
       choose an auxiliary topic z^a_mn ~ Multi(θ^a_m)
       choose w_mn from the auxiliary word distribution ψ of topic z^a_mn
8: End for
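The generative story in Table 2 can be sketched as follows; all dimensions, hyper-parameter values and the candidate mask are illustrative assumptions.

```python
# A sketch of the AS-LDA generative process using numpy. The sizes and
# hyper-parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
K, L = 4, 4            # numbers of auxiliary and sentiment topics
Va, Vs = 50, 30        # auxiliary and sentiment vocabulary sizes
alpha_a = alpha_s = 0.1
beta_a = beta_s = 0.01
lam = 0.5              # parameter of the Bernoulli switch gamma

# Steps 1-4: topic-word distributions psi_k (auxiliary) and phi_l (sentiment).
psi = rng.dirichlet([beta_a] * Va, size=K)
phi = rng.dirichlet([beta_s] * Vs, size=L)

def generate_document(n_words, candidate_mask):
    """Steps 5-8: generate one document; candidate_mask flags sentiment candidates."""
    theta_a = rng.dirichlet([alpha_a] * K)   # document's auxiliary topic mix
    theta_s = rng.dirichlet([alpha_s] * L)   # document's sentiment topic mix
    words = []
    for is_candidate in candidate_mask[:n_words]:
        if is_candidate and rng.random() < lam:      # gamma = 1
            z = rng.choice(L, p=theta_s)             # sentiment topic z^s
            words.append(("S", int(rng.choice(Vs, p=phi[z]))))
        else:                                        # gamma = 0, or not a candidate
            z = rng.choice(K, p=theta_a)             # auxiliary topic z^a
            words.append(("A", int(rng.choice(Va, p=psi[z]))))
    return words

doc = generate_document(20, [i % 3 == 0 for i in range(20)])
print(doc[:5])
```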

3.2. Inference in AS-LDA

The inference goal is to estimate the latent variables of AS-LDA. In this paper, we use Gibbs sampling to perform model inference. The joint probability of the topics and the words can be factored as follows:

$$P(w, z \mid \alpha, \beta) = P(w \mid z, \beta)\, P(z \mid \alpha) = \int P(z \mid \theta) P(\theta \mid \alpha)\, d\theta \int P(w \mid z, \phi) P(\phi \mid \beta)\, d\phi \qquad (1)$$

Integrating out θ and φ, we obtain:

$$P(w \mid z, \beta) = \left( \frac{\Gamma(V\beta)}{\Gamma(\beta)^{V}} \right)^{K} \prod_{j=1}^{K} \frac{\prod_{v} \Gamma(n_{j,v} + \beta)}{\Gamma(n_{j,\cdot} + V\beta)} \qquad (2)$$

$$P(z \mid \alpha) = \left( \frac{\Gamma(K\alpha)}{\Gamma(\alpha)^{K}} \right)^{M} \prod_{m=1}^{M} \frac{\prod_{j} \Gamma(n_{m,j} + \alpha)}{\Gamma(n_{m,\cdot} + K\alpha)} \qquad (3)$$

where n_{j,v} is the number of times word v is assigned to topic j, n_{m,j} is the number of times a word in document m is assigned to topic j, and a dot subscript denotes summation over that index.

At each iteration, the topics of words are chosen according to the conditional probability:

$$P(z_{i} = j \mid w, z_{\neg i}, \alpha, \beta) = \frac{P(z, w \mid \alpha, \beta)}{P(z_{\neg i}, w \mid \alpha, \beta)} \propto \frac{n_{\neg i, j, v} + \beta}{n_{\neg i, j, \cdot} + V\beta} \cdot \frac{n_{\neg i, m, j} + \alpha}{n_{\neg i, m, \cdot} + K\alpha} \qquad (4)$$
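A minimal sketch of the sampling step in Equation (4): after removing word i's current assignment from the counts, a new topic j is drawn in proportion to the two ratios (the document-side denominator is constant in j and can be dropped). Variable names and example counts are assumptions, not the authors' code.

```python
# Sketch of one collapsed Gibbs sampling step as in Equation (4).
import numpy as np

def sample_topic(rng, n_jv, n_j, n_mj, v, alpha, beta):
    """Sample a topic for an occurrence of word v from its full conditional."""
    V = n_jv.shape[1]
    p = (n_jv[:, v] + beta) / (n_j + V * beta) * (n_mj + alpha)
    p /= p.sum()                      # normalize the unnormalized conditional
    return int(rng.choice(len(p), p=p))

rng = np.random.default_rng(1)
K, V = 3, 5
n_jv = rng.integers(1, 10, size=(K, V)).astype(float)  # topic-word counts
n_j = n_jv.sum(axis=1)                                 # per-topic word totals
n_mj = rng.integers(0, 5, size=K).astype(float)        # document-topic counts
z = sample_topic(rng, n_jv, n_j, n_mj, v=2, alpha=0.1, beta=0.01)
print(z)
```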

The approximate probability of auxiliary topic j in document m is

$$\theta^{a}_{m,j} = \frac{n^{a}_{m,j} + \alpha^{a}}{n^{a}_{m,\cdot} + K\alpha^{a}} \qquad (5)$$

Fig. 2. Classification performance of a Support Vector Machine (SVM), using LDA and AS-LDA to represent the documents, for different numbers of topics.

The approximate probability of auxiliary word v in topic z is

$$\psi_{z,v} = \frac{n^{a}_{z,v} + \beta^{a}}{n^{a}_{z,\cdot} + V^{a}\beta^{a}} \qquad (6)$$

In a similar way, the solution formulas for sentiment element words are

$$\theta^{s}_{m,j} = \frac{n^{s}_{m,j} + \alpha^{s}}{n^{s}_{m,\cdot} + L\alpha^{s}} \qquad (7)$$

$$\phi_{z,v} = \frac{n^{s}_{z,v} + \beta^{s}}{n^{s}_{z,\cdot} + V^{s}\beta^{s}} \qquad (8)$$
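Equations (5)-(8) all share the same form, a smoothed, normalized count vector; a small helper, with illustrative numbers, makes the pattern explicit.

```python
# One pattern behind Equations (5)-(8): (n_i + prior) / (sum(n) + dim * prior).
# The example counts are illustrative only.
import numpy as np

def point_estimate(counts, prior):
    """Smoothed multinomial estimate, as in Equations (5)-(8)."""
    counts = np.asarray(counts, dtype=float)
    return (counts + prior) / (counts.sum() + counts.size * prior)

# E.g. theta^a_m from a document's auxiliary-topic counts (Equation (5)):
theta_a = point_estimate([3, 0, 1, 2], prior=0.1)
print(theta_a)   # a proper distribution: entries sum to 1
```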

The notations are similar to those in Equations (2) and (3).

4. Experiments

In this section, we demonstrate the effectiveness of our model through comparison experiments on the Chinese sentiment corpus ChnSentiCorp (Tan, 2008). More precisely, we use ChnSentiCorp-Htl-ba-4000 and ChnSentiCorp-NB-ba-4000, corresponding to the hotel and computer domains, to test our model. The document vectors are produced by LDA and AS-LDA after Chinese word segmentation, stop-word removal and other preprocessing. We then use a Support Vector Machine (SVM) to classify the sentiment. The measure we use to compare classification performance is the F-measure.
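The pipeline above might be sketched as follows; scikit-learn and the randomly generated stand-in topic features are assumptions, since the paper does not name its toolkit or data splits.

```python
# Sketch of the evaluation pipeline: topic-proportion features feed an SVM,
# scored by F-measure. The features and labels here are random stand-ins.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_docs, n_topics = 400, 12                        # K + L = 12 as in Figure 2
y = rng.integers(0, 2, size=n_docs)               # stand-in polarity labels
X = rng.dirichlet([0.5] * n_topics, size=n_docs)  # stand-in topic proportions
X[y == 1, 0] += 0.3                               # make classes weakly separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te))
print(round(f1, 3))
```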

In our experiments, the number of topics in LDA is T, and K and L are the numbers of sentiment topics and auxiliary topics in AS-LDA, where T = K + L. The results are shown in Figure 2. From Figure 2, we can observe that AS-LDA performs better than LDA on the two sentiment corpora. We also show how the performance changes as the number of topics increases: AS-LDA achieves its best result when K + L = 12.

5. Conclusions

In this paper, we present a new probabilistic sentiment-topic model called AS-LDA for sentiment classification. With this model, we divide the words in subjective documents into two categories: sentiment element words and auxiliary words. Sentiment element words are sampled from sentiment topics while auxiliary words are sampled from auxiliary topics. We evaluate our model on Chinese sentiment corpora; the results show that AS-LDA is better suited to sentiment classification than LDA.

Acknowledgements

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA06030602), the National Natural Science Foundation of China (No. 61202226), the National 863 Program (No. 2011AA010703) and the IIE Program (No. Y3Z0062201).

References

[1] Bermingham A, Smeaton A F. Classifying sentiment in microblogs: is brevity an advantage? Proceedings of CIKM, 2010: 1833-1836.

[2] Diakopoulos N A, Shamma D A. Characterizing debate performance via aggregated twitter sentiment. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2010: 1195-1198.

[3] Glorot X, Bordes A, Bengio Y. Domain adaptation for large-scale sentiment classification: A deep learning approach. Proceedings of ICML, 2011: 513-520.

[4] Go A, Bhayani R, Huang L. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 2009: 1-12.

[5] Jiang L, Yu M, Zhou M, et al. Target-dependent twitter sentiment classification. Proceedings of ACL-HLT, 2011: 151-160.

[6] Jo Y, Oh A H. Aspect and sentiment unification model for online review analysis. Proceedings of the fourth ACM international conference on Web search and data mining. ACM, 2011: 815-824.

[7] Kucuktunc O, Cambazoglu B B, Weber I, et al. A large-scale sentiment analysis for Yahoo! answers. Proceedings of the fifth ACM international conference on Web search and data mining, 2012: 633-642.

[8] Lin C, He Y. Joint sentiment/topic model for sentiment analysis. Proceedings of CIKM, 2009: 375-384.

[9] Maas A L, Daly R E, Pham P T, et al. Learning word vectors for sentiment analysis. Proceedings of ACL-HLT, 2011: 142-150.

[10] Oh J H, Torisawa K, Hashimoto C, et al. Why question answering using sentiment analysis and word classes. Proceedings of EMNLP-CNLL, 2012: 368-378.

[11] Pan S J, Ni X, Sun J T, et al. Cross-domain sentiment classification via spectral feature alignment.Proceedings of WWW,2010:751-760.

[12] Pang B, Lee L, Vaithyanathan S. Thumbs up?: sentiment classification using machine learning techniques. Proceedings of EMNLP, 2002: 79-86.

[13] Tan C, Lee L, Tang J, et al. User-level sentiment analysis incorporating social networks. Proceedings of SIGKDD, 2011: 1397-1405.

[14] Tan S. ChnSentiCorp [EB/OL]. [2012-12-17]. http://www.searchforum.org.cn/tansongbo/sentLcorpus.jsp.

[15] Stieglitz S, Dang-Xuan L. Emotions and Information Diffusion in Social Media: Sentiment of Microblogs and Sharing Behavior. Journal of Management Information Systems, 2013, 29(4): 217-248.

[16] Wan X. Co-training for cross-lingual sentiment classification. Proceedings of ACL, 2009: 235-243.

[17] Wei W, Gulla J A. Sentiment learning on product reviews via sentiment ontology tree. Proceedings of ACL, 2010: 404-413.