File name: 2808-14159-1-PB
Description -- The content below is sourced from the internet; please evaluate and use it at your own discretion.
In this paper, we systematically explore feature definition and selection strategies for sentiment polarity classification. We begin with basic questions, such as whether to use stemming, term frequency versus binary weighting, negation-enriched features, and n-grams or phrases. We then move on to more complex aspects, including feature selection using frequency-based vocabulary trimming, part-of-speech and lexicon selection (three types of lexicons), as well as expected Mutual Information (MI). Using three product and movie review datasets of various sizes, we show, for example, that some techniques benefit larger datasets more than smaller ones. A classifier trained on only a few features ranked highly by MI outperformed one trained on all features on the large datasets, yet this did not hold on the small dataset. Finally, we perform a space and computation cost analysis to further understand the merits of the various feature types.
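For orientation, the MI-based feature selection the abstract describes can be sketched in a few lines of Python with scikit-learn. This is a minimal illustration, not the paper's code: the toy corpus, the binary bigram vectorizer, and the cutoff k=10 are all hypothetical choices, and the paper's expected-MI computation may differ from scikit-learn's estimator.

    # Hypothetical sketch: rank features by mutual information with the
    # polarity label and train on only the top-k features. Not the
    # paper's implementation; corpus and parameters are illustrative.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import Pipeline

    docs = ["great movie , loved it", "terrible plot , not good",
            "not bad at all", "boring and predictable"]
    labels = [1, 0, 1, 0]  # 1 = positive polarity, 0 = negative

    pipeline = Pipeline([
        # binary=True gives the binary (presence) weighting the abstract
        # contrasts with raw term frequency; ngram_range adds bigrams
        ("vectorize", CountVectorizer(binary=True, ngram_range=(1, 2))),
        # keep only the k features scoring highest by mutual information
        ("select", SelectKBest(mutual_info_classif, k=10)),
        ("classify", MultinomialNB()),
    ])
    pipeline.fit(docs, labels)
    print(pipeline.predict(["not good at all"]))

Swapping SelectKBest for a plain frequency threshold on the vectorizer (its min_df parameter) would correspond to the frequency-based vocabulary trimming the abstract also mentions.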
Download file list
2808-14159-1-PB.pdf