In recent years, Weibo has greatly enriched people's lives. More and more people actively share information and express their opinions and feelings on Weibo. Analyzing the emotions hidden in this information can benefit online marketing, branding, customer relationship management and public opinion monitoring. Sentiment analysis aims to identify the emotional tendency of a microblog message, that is, to classify users' emotions as positive, negative or neutral. This paper presents a novel model that builds a sentiment dictionary with the Word2vec tool based on our Semantic Orientation Pointwise Similarity Distance (SO-SD) model. The resulting dictionary is then used to determine the emotional tendencies of Weibo messages. Experiments validate the effectiveness of the method, which serves as a preliminary exploration of sentiment analysis for Chinese Weibo.
https://www.researchgate.net/publication/286758692_A_Study_on_Sentiment_Computing_and_Classification_of_Sina_Weibo_with_Word2vec
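To make the SO-SD idea concrete, here is a minimal sketch (not the paper's exact formula) that scores a candidate word by its average word2vec similarity to positive seed words minus its average similarity to negative seed words; the seed lists, threshold and gensim settings are placeholders.

```python
# Hypothetical sketch of an SO-SD-style polarity score: semantic orientation is
# approximated as the difference between a word's average word2vec similarity
# to positive seeds and to negative seeds. Seed words and threshold are
# illustrative, not the paper's.
from gensim.models import Word2Vec

POS_SEEDS = ["好", "开心", "喜欢"]   # example positive seed words
NEG_SEEDS = ["差", "难过", "讨厌"]   # example negative seed words

def train_embeddings(tokenized_weibo):
    """tokenized_weibo: list of token lists from segmented Weibo posts."""
    return Word2Vec(tokenized_weibo, vector_size=100, window=5,
                    min_count=5, sg=1, workers=4)

def so_sd_score(model, word):
    """Semantic orientation of `word` relative to the seed lexicons."""
    if word not in model.wv:
        return 0.0
    pos = [model.wv.similarity(word, s) for s in POS_SEEDS if s in model.wv]
    neg = [model.wv.similarity(word, s) for s in NEG_SEEDS if s in model.wv]
    if not pos or not neg:
        return 0.0
    return sum(pos) / len(pos) - sum(neg) / len(neg)

def build_dictionary(model, candidates, threshold=0.05):
    """Assign +1 / -1 / 0 polarity labels to candidate words."""
    lexicon = {}
    for w in candidates:
        s = so_sd_score(model, w)
        lexicon[w] = 1 if s > threshold else (-1 if s < -threshold else 0)
    return lexicon
```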
Friday, March 15, 2019
A Joint Model for Chinese Microblog Sentiment Analysis
Topic-based sentiment analysis of Chinese microblogs aims to identify user attitudes toward specified topics. In this paper, we propose a joint model that combines Support Vector Machines (SVM) and a deep neural network to improve sentiment analysis performance. First, an SVM classifier is constructed using N-gram, NPOS and sentiment lexicon features. Meanwhile, a convolutional neural network learns paragraph representation features that serve as the input of a second SVM classifier. The classification results output by the two classifiers are merged to produce the final prediction. Evaluation on the SIGHAN-8 topic-based Chinese microblog sentiment analysis task shows that the proposed approach ranks second on micro-averaged F1 and fourth on macro-averaged F1 among 13 submitted systems.
https://www.researchgate.net/publication/301449007_A_Joint_Model_for_Chinese_Microblog_Sentiment_Analysis
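A rough sketch of the "two classifiers, merged output" idea using scikit-learn. The paper's exact N-gram/NPOS/lexicon features and merging rule are not spelled out in the abstract, so this version assumes the CNN paragraph vectors are already available as `cnn_feats` and simply averages the two SVM decision scores.

```python
# Minimal sketch: one SVM on character n-gram features, one SVM on externally
# computed CNN paragraph vectors, decision scores averaged at prediction time.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def train_joint(texts, cnn_feats, labels):
    vec = CountVectorizer(ngram_range=(1, 3), analyzer="char")  # character n-grams
    X_ngram = vec.fit_transform(texts)
    svm_ngram = LinearSVC().fit(X_ngram, labels)
    svm_cnn = LinearSVC().fit(cnn_feats, labels)
    return vec, svm_ngram, svm_cnn

def predict_joint(vec, svm_ngram, svm_cnn, texts, cnn_feats):
    # Average the two decision functions and take the highest-scoring class.
    s1 = svm_ngram.decision_function(vec.transform(texts))
    s2 = svm_cnn.decision_function(cnn_feats)
    merged = (np.asarray(s1) + np.asarray(s2)) / 2.0
    classes = svm_ngram.classes_
    if merged.ndim == 1:                      # binary case
        return np.where(merged > 0, classes[1], classes[0])
    return classes[np.argmax(merged, axis=1)]
```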
Towards Building a High-Quality Microblog-Specific Chinese Sentiment Lexicon
Due to the huge popularity of microblogging services, microblogs have become important sources of customer opinions. Sentiment analysis systems can provide useful knowledge to decision support systems and decision makers by automatically aggregating and summarizing the opinions in massive numbers of microblogs. The most important component of a sentiment analysis system is its sentiment lexicon. However, the performance of traditional sentiment lexicons on microblog sentiment analysis is far from satisfactory, especially for Chinese. In this paper, we propose a data-driven approach to build a high-quality microblog-specific sentiment lexicon for Chinese microblog sentiment analysis systems. The core of our method is a unified framework that incorporates three kinds of sentiment knowledge for lexicon construction: word-sentiment knowledge extracted from microblogs with emoticons, sentiment similarity knowledge extracted from word associations across all messages, and prior sentiment knowledge extracted from existing sentiment lexicons. In addition, to improve the coverage of our lexicon, we propose an effective method to detect popular new words in microblogs, which considers not only words' distributions over texts but also their distributions over users. Detected new words with strong sentiment are incorporated into our lexicon. We built a microblog-specific Chinese sentiment lexicon on a large microblog dataset with more than 17 million messages. Experimental results on two microblog sentiment datasets show that our microblog-specific sentiment lexicon can significantly improve the performance of microblog sentiment analysis.
https://www.researchgate.net/publication/301902793_Towards_Building_a_High-Quality_Microblog-Specific_Chinese_Sentiment_Lexicon
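Of the three knowledge sources listed above, the word-sentiment knowledge from emoticon-labeled messages is the easiest to illustrate. A toy sketch using a PMI-style score follows; the emoticon sets are placeholders, and the paper's unified optimization framework and new-word detection are not reproduced.

```python
# Rough sketch of only the first knowledge source: estimating word polarity
# from co-occurrence with positive/negative emoticons via a log-ratio score.
import math
from collections import Counter

POS_EMOTICONS = {"[哈哈]", "[嘻嘻]", "[爱你]"}   # illustrative emoticon sets
NEG_EMOTICONS = {"[泪]", "[怒]", "[衰]"}

def emoticon_lexicon(messages):
    """messages: list of (tokens, emoticons) pairs from segmented microblogs."""
    word_pos, word_neg = Counter(), Counter()
    n_pos = n_neg = 0
    for tokens, emos in messages:
        is_pos = any(e in POS_EMOTICONS for e in emos)
        is_neg = any(e in NEG_EMOTICONS for e in emos)
        if is_pos == is_neg:          # skip ambiguous or unlabeled messages
            continue
        n_pos += is_pos
        n_neg += is_neg
        for w in set(tokens):
            (word_pos if is_pos else word_neg)[w] += 1
    lexicon = {}
    for w in set(word_pos) | set(word_neg):
        p = (word_pos[w] + 1) / (n_pos + 1)   # add-one smoothing
        n = (word_neg[w] + 1) / (n_neg + 1)
        lexicon[w] = math.log(p / n)          # >0 positive, <0 negative
    return lexicon
```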
Sentiment Analysis for Chinese Microblog based on Deep Neural Networks with Convolutional Extension Features
Existing research on Chinese microblog sentiment analysis mainly focuses on analyzing individual posts, and the shortness of microblog text limits feature extraction. Microblogging is a form of communication with friends, so comments are important reference information for the related post. This paper proposes a content extension framework that combines a post and its related comments into a microblog conversation for feature extraction. A novel convolutional auto-encoder extracts contextual information from the conversation as features for the post. A customized DNN (deep neural network) model, stacked from several RBM (Restricted Boltzmann Machine) layers, is used to initialize the network structure. The RBM layers sample the probability distribution of the input data to learn hidden structures for better high-level feature representation. A ClassRBM (classification RBM) layer stacked on top produces the final sentiment label for the post. Experimental results show that, with a proper structure and parameters, the proposed DNN outperforms state-of-the-art shallow learning models such as SVM and NB on sentiment classification, indicating that the proposed DNN model, together with the feature dimensionality extension method, is well suited to classifying short documents.
https://www.researchgate.net/publication/303952937_Sentiment_Analysis_for_Chinese_Microblog_based_on_Deep_Neural_Networks_with_Convolutional_Extension_Features
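A hypothetical PyTorch sketch of just the convolutional auto-encoder step: a post and its comments are concatenated into one "conversation" matrix of word vectors, and a 1-D convolutional auto-encoder learns to reconstruct it, with the encoder output used as extension features. The RBM/ClassRBM stack is not reproduced, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

EMB_DIM, MAX_LEN, FEAT_DIM = 100, 200, 64   # assumed sizes

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(EMB_DIM, FEAT_DIM, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),            # -> (batch, FEAT_DIM, 1)
        )
        self.decoder = nn.Linear(FEAT_DIM, EMB_DIM * MAX_LEN)  # crude reconstruction head

    def forward(self, conv_matrix):             # (batch, EMB_DIM, MAX_LEN)
        feat = self.encoder(conv_matrix).squeeze(-1)           # (batch, FEAT_DIM)
        recon = self.decoder(feat).view(-1, EMB_DIM, MAX_LEN)
        return feat, recon

model = ConvAutoEncoder()
x = torch.randn(8, EMB_DIM, MAX_LEN)            # 8 fake conversations of word vectors
feat, recon = model(x)
loss = nn.MSELoss()(recon, x)                   # reconstruction objective
```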
Context-Aware Chinese Microblog Sentiment Classification with Bidirectional LSTM
Recently, with the rapid development of microblogs, analyzing the sentiment orientation of tweets has become a hot research topic for both academia and industry. Most existing methods treat each microblog as an independent training instance. However, the sentiments embedded in tweets are usually ambiguous and context-dependent; even a non-sentiment word might convey a clear emotional tendency in a microblog conversation. In this paper, we regard the microblog conversation as a sequence and leverage bidirectional Long Short-Term Memory (BLSTM) models to incorporate preceding tweets for context-aware sentiment classification. The proposed method not only alleviates the sparsity problem in the feature space but also captures long-distance sentiment dependencies in microblog conversations. Extensive experiments on a benchmark dataset show that bidirectional LSTM models with context information outperform strong baseline algorithms.
https://www.researchgate.net/publication/308188542_Context-Aware_Chinese_Microblog_Sentiment_Classification_with_Bidirectional_LSTM
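A minimal PyTorch sketch of the conversation-as-sequence idea: each tweet is reduced to a vector (here just an averaged embedding), the sequence of preceding tweets plus the target tweet runs through a bidirectional LSTM, and the last state classifies the target tweet. Sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ContextBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.blstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                             bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, conv_tokens):
        # conv_tokens: (batch, n_tweets, n_words) word ids, target tweet last
        tweet_vecs = self.emb(conv_tokens).mean(dim=2)   # (batch, n_tweets, emb_dim)
        states, _ = self.blstm(tweet_vecs)               # (batch, n_tweets, 2*hidden)
        return self.out(states[:, -1, :])                # logits for the target tweet

model = ContextBiLSTM(vocab_size=50000)
fake_conversations = torch.randint(1, 50000, (4, 5, 30))  # 4 conversations, 5 tweets each
logits = model(fake_conversations)                         # (4, 3)
```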
Chinese Microblog Sentiment Analysis Based on Sentiment Features
As microblogs have increasingly become a platform for netizens to share their ideas, microblog sentiment analysis has attracted wide attention from scholars both in China and abroad. The primary goal of this research is to improve the accuracy of microblog sentiment polarity classification. Considering the characteristics of microblogs, a new method for extracting semantically related features is proposed. First, Chinese word features are selected by representing texts in the vector space model (VSM) and weighting them with TF-IDF. Second, eight proposed microblog semantic features are extracted, including sentence-level sentiment judgments based on an emotion dictionary. Finally, three machine learning methods are used to classify Chinese microblogs with the feature vector that combines the two kinds of features. Experimental results indicate that the proposed feature extraction method outperforms state-of-the-art approaches, and that with these features the classification performance is best when using the Naïve Bayes algorithm.
https://www.researchgate.net/publication/308327580_Chinese_Microblog_Sentiment_Analysis_Based_on_Sentiment_Features
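A scikit-learn sketch of the feature-combination setup described above: TF-IDF word features concatenated with a few hand-crafted microblog features, classified with Naive Bayes. The paper's eight semantic features are not listed in the abstract, so only three illustrative non-negative features are used here.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

POS_WORDS = {"开心", "喜欢"}          # tiny illustrative lexicon

def handcrafted(texts):
    rows = []
    for t in texts:
        rows.append([t.count("!") + t.count("！"),        # exclamation count
                     t.count("["),                        # rough emoticon-marker count
                     sum(w in t for w in POS_WORDS)])     # lexicon hit count
    return csr_matrix(np.array(rows, dtype=float))

def fit(texts, labels):
    vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
    X = hstack([vec.fit_transform(texts), handcrafted(texts)])
    clf = MultinomialNB().fit(X, labels)
    return vec, clf

def predict(vec, clf, texts):
    X = hstack([vec.transform(texts), handcrafted(texts)])
    return clf.predict(X)
```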
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
We introduce a method to train Quantized Neural Networks (QNNs): neural networks with extremely low precision (e.g., 1-bit) weights and activations at run time. At train time, the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
https://www.researchgate.net/publication/308457764_Quantized_Neural_Networks_Training_Neural_Networks_with_Low_Precision_Weights_and_Activations
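An illustrative PyTorch toy layer showing the core trick behind QNN-style training: binarized weights and activations in the forward pass, with a clipped straight-through estimator letting full-precision gradients flow through sign(). Gradient quantization and the optimized GPU kernel from the paper are not reproduced.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                       # values in {-1, +1} (0 maps to 0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # clipped straight-through estimator

class BinaryLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        wb = BinarizeSTE.apply(self.weight)        # binarized weights in the forward pass
        xb = BinarizeSTE.apply(x)                  # binarized activations
        return xb @ wb.t()

layer = BinaryLinear(784, 128)
out = layer(torch.randn(32, 784))
out.sum().backward()                               # full-precision gradients reach layer.weight
```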
Sentiment Target Extraction Based on CRFs with Multi-features for Chinese Microblog
Sentiment target extraction on Chinese microblogs has attracted increasing research attention. Most previous work relies on syntax, such as automatic parse trees, which is subject to noise for informal text such as microblogs. In this paper, we propose a modified CRF model for Chinese microblog sentiment target extraction. The model treats sentiment target extraction as a sequence-labeling problem, incorporating contextual information, syntactic rules and an opinion lexicon into the model as multiple features. The major contribution of this method is that it can be applied to texts in which the target is not explicitly mentioned in the sequence. Experimental results on benchmark datasets show that our method consistently outperforms state-of-the-art methods.
https://www.researchgate.net/publication/308499279_Sentiment_Target_Extraction_Based_on_CRFs_with_Multi-features_for_Chinese_Microblog
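A generic BIO sequence-labeling sketch in the spirit of the multi-feature CRF described above, using the third-party sklearn-crfsuite package. The feature names, the toy opinion lexicon and the BIO tag set are illustrative; the paper's exact feature templates and modified model are not reproduced.

```python
import sklearn_crfsuite

OPINION_WORDS = {"喜欢", "讨厌"}          # toy opinion lexicon

def token_features(sent, i):
    word, pos = sent[i]                    # sent: list of (word, pos_tag) pairs
    feats = {"word": word, "pos": pos, "in_lexicon": word in OPINION_WORDS}
    if i > 0:
        feats["prev_word"] = sent[i - 1][0]
    if i < len(sent) - 1:
        feats["next_word"] = sent[i + 1][0]
    return feats

def to_features(sentences):
    return [[token_features(s, i) for i in range(len(s))] for s in sentences]

# X: features per token, y: BIO tags per token, e.g. ["O", "B-TARGET", "I-TARGET"]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
# crf.fit(to_features(train_sents), train_tags)
# pred = crf.predict(to_features(test_sents))
```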
An approach to sentiment analysis of short Chinese texts based on SVMs
... Experimental results have shown that the Naive Bayes classifier performs the best. An approach to analyzing the sentiment of short Chinese texts is presented in [9]. By using the word2vec tool, sentiment dictionaries from NTU and HowNet are extended. ...
https://www.researchgate.net/publication/308868603_An_approach_to_sentiment_analysis_of_short_Chinese_texts_based_on_SVMs
A Dynamic Conditional Random Field Based Framework for Sentence-Level Sentiment Analysis of Chinese Microblog
https://www.researchgate.net/publication/319051638_A_Dynamic_Conditional_Random_Field_Based_Framework_for_Sentence-Level_Sentiment_Analysis_of_Chinese_Microblog
Speech Recognition With Deep Recurrent Neural Networks
Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long-range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
https://www.researchgate.net/publication/319770184_Speech_Recognition_With_Deep_Recurrent_Neural_Networks
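A minimal PyTorch sketch of the architecture family discussed above: a stacked bidirectional LSTM over acoustic frames trained with CTC loss, so no frame-level alignment is required. All sizes are illustrative rather than the paper's configuration.

```python
import torch
import torch.nn as nn

N_MEL, HIDDEN, LAYERS, N_CLASSES = 40, 256, 3, 62    # 61 phoneme labels + CTC blank

class DeepBLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(N_MEL, HIDDEN, num_layers=LAYERS,
                           bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * HIDDEN, N_CLASSES)

    def forward(self, feats):                          # (batch, time, N_MEL)
        h, _ = self.rnn(feats)                         # (batch, time, 2*HIDDEN)
        return self.proj(h).log_softmax(dim=-1)        # per-frame label log-probabilities

model = DeepBLSTM()
ctc = nn.CTCLoss(blank=0)                              # label 0 reserved as the blank
feats = torch.randn(2, 120, N_MEL)                     # two fake utterances
log_probs = model(feats).transpose(0, 1)               # CTCLoss expects (time, batch, classes)
targets = torch.randint(1, N_CLASSES, (2, 30))         # fake phoneme sequences
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 120, dtype=torch.long),
           target_lengths=torch.full((2,), 30, dtype=torch.long))
loss.backward()
```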
Sentiment analysis of Chinese micro-blog text based on extended sentiment dictionary
Micro-blog texts contain complex and abundant sentiments which reflect users' standpoints or opinions on a given topic. However, existing sentiment classification methods cannot adequately support micro-blog topic monitoring. To solve this problem, this paper presents a sentiment analysis method for Chinese micro-blog text based on an extended sentiment dictionary, to better support the work of network regulators. First, the sentiment dictionary is extended by extracting and constructing a degree adverb dictionary, an Internet slang dictionary, a negation word dictionary and other related dictionaries. Second, the sentiment value of a micro-blog text is obtained through weighted calculation. Finally, micro-blog texts on a topic are classified as positive, negative or neutral. Experimental results show the effectiveness of the proposed method.
https://www.researchgate.net/publication/320682973_Sentiment_analysis_of_Chinese_micro-blog_text_based_on_extended_sentiment_dictionary
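A toy sketch of the dictionary-plus-weights scoring scheme: each sentiment word contributes its polarity value, scaled by a preceding degree adverb and flipped by preceding negation words, and the summed score is thresholded into positive, negative or neutral. The word lists and weights below are placeholders, not the paper's dictionaries.

```python
SENTIMENT = {"好": 1.0, "差": -1.0, "开心": 1.0, "失望": -1.0}   # polarity values
DEGREE    = {"非常": 2.0, "很": 1.5, "有点": 0.8}                # degree adverb weights
NEGATION  = {"不", "没", "没有"}                                  # negation words

def score_text(tokens):
    total = 0.0
    for i, w in enumerate(tokens):
        if w not in SENTIMENT:
            continue
        value = SENTIMENT[w]
        for prev in tokens[max(0, i - 3):i]:      # look back a few tokens
            if prev in DEGREE:
                value *= DEGREE[prev]             # intensify / weaken
            if prev in NEGATION:
                value *= -1                       # flip polarity
        total += value
    return total

def classify(tokens, eps=0.1):
    s = score_text(tokens)
    return "positive" if s > eps else ("negative" if s < -eps else "neutral")

print(classify(["这", "部", "电影", "不", "太", "好"]))   # -> negative
```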
Alternating Multi-bit Quantization for Recurrent Neural Networks
Recurrent neural networks have achieved excellent performance in many applications. However, on portable devices with limited resources, the models are often too large to deploy. For server applications with large-scale concurrent requests, inference latency is also critical given costly computing resources. In this work, we address these problems by quantizing the network, both weights and activations, into multiple binary codes {-1,+1}. We formulate the quantization as an optimization problem. Based on the key observation that, once the quantization coefficients are fixed, the binary codes can be derived efficiently with a binary search tree, alternating minimization is applied. We test the quantization for two well-known RNNs, long short-term memory (LSTM) and the gated recurrent unit (GRU), on language models. Compared with the full-precision counterpart, 2-bit quantization achieves roughly 16x memory saving and roughly 6x real inference acceleration on CPUs, with only a reasonable loss in accuracy. With 3-bit quantization, we can achieve almost no loss in accuracy, or even surpass the original model, with roughly 10.5x memory saving and roughly 3x real inference acceleration. Both results beat existing quantization works by large margins. We extend our alternating quantization to image classification tasks. In both RNNs and feedforward neural networks, the method also achieves excellent performance.
https://www.researchgate.net/publication/322886129_Alternating_Multi-bit_Quantization_for_Recurrent_Neural_Networks
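A simplified NumPy illustration of the alternating scheme: a weight vector w is approximated as sum_k alpha_k * b_k with b_k in {-1, +1}^n. With the alphas fixed, the best per-entry codes are found here by checking all 2^k sign combinations (the paper does this more cleverly with a binary search tree); with the codes fixed, the alphas come from least squares. This is an illustration, not the paper's implementation.

```python
import itertools
import numpy as np

def quantize(w, k=2, iters=10):
    n = w.size
    # Greedy residual initialization of codes B (n x k) and coefficients alphas (k,).
    B = np.empty((n, k))
    alphas = np.empty(k)
    r = w.copy()
    for j in range(k):
        B[:, j] = np.sign(r)
        B[B[:, j] == 0, j] = 1.0
        alphas[j] = np.abs(r).mean()
        r = r - alphas[j] * B[:, j]
    combos = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))  # (2^k, k)
    for _ in range(iters):
        # Codes step: pick, per entry, the sign combination whose value is closest to w_i.
        approx = combos @ alphas                                       # (2^k,)
        B = combos[np.argmin(np.abs(w[:, None] - approx[None, :]), axis=1)]
        # Coefficients step: least-squares fit of the alphas given the codes.
        alphas, *_ = np.linalg.lstsq(B, w, rcond=None)
    return alphas, B

w = np.random.randn(1000)
alphas, B = quantize(w, k=2)
print("relative error:", np.linalg.norm(w - B @ alphas) / np.linalg.norm(w))
```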
Research on sentiment analysis of microblogging based on LSA and TF-IDF
... The anonymity of Weibo makes people willing to express their real sentiments. Many existing techniques of sentiment analysis are based on sentiment lexicons and traditional feature engineering [1]-[8]. Most of these methods need to resort to external resources or manually preprocess word features. ...
https://www.researchgate.net/publication/324462809_Research_on_sentiment_analysis_of_microblogging_based_on_LSA_and_TF-IDF
Multi-label Chinese Microblog Emotion Classification via Convolutional Neural Network
Recently, analyzing people’s sentiments in microblogs has attracted more and more attention from both the academic and industrial communities. Traditional methods usually treat sentiment analysis as a single-label supervised learning problem that classifies a microblog by sentiment orientation or a single emotion label. However, multiple fine-grained emotions may coexist in a single tweet or even a single sentence. In this paper, we regard emotion detection in microblogs as a multi-label classification problem. We leverage the skip-gram language model to learn distributed word representations as input features, and utilize a Convolutional Neural Network (CNN) based method to solve the multi-label emotion classification problem for Chinese microblog sentences without any manually designed features. Extensive experiments are conducted on two public short text datasets. The experimental results demonstrate that the proposed method outperforms strong baselines by a large margin and achieves excellent performance in terms of multi-label classification metrics.
https://www.researchgate.net/publication/308187960_Multi-label_Chinese_Microblog_Emotion_Classification_via_Convolutional_Neural_Network
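A minimal PyTorch sketch of a text CNN for multi-label emotion classification: an embedding layer (which skip-gram vectors would normally initialize), convolutions of several widths max-pooled over the sentence, and a sigmoid output trained with binary cross-entropy so that several emotions can fire at once. Hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class MultiLabelTextCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=100,
                 widths=(2, 3, 4), n_emotions=7):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        self.fc = nn.Linear(n_filters * len(widths), n_emotions)

    def forward(self, ids):                          # (batch, seq_len) word ids
        x = self.emb(ids).transpose(1, 2)            # (batch, emb_dim, seq_len)
        pooled = [c(x).relu().max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))     # raw logits, one per emotion

model = MultiLabelTextCNN(vocab_size=30000)
logits = model(torch.randint(1, 30000, (8, 50)))
targets = torch.randint(0, 2, (8, 7)).float()        # multi-hot emotion labels
loss = nn.BCEWithLogitsLoss()(logits, targets)       # sigmoid + BCE per label
```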
Sentiment Analysis of Chinese Microblog Based on Stacked Bidirectional LSTM
Sentiment analysis on Chinese microblogs has received extensive attention recently. Most previous studies focus on identifying sentiment orientation by encoding as many word properties as possible, while failing to consider contextual features (e.g., the long-range dependencies of words), which are essential for sentiment analysis. In this paper, we propose a Chinese sentiment analysis method that combines the Word2Vec model and a Stacked Bidirectional Long Short-Term Memory (Stacked Bi-LSTM) model. We first employ the Word2Vec model to capture semantic features of words and transform words into high-dimensional word vectors. We evaluate the performance of two typical Word2Vec models: Continuous Bag-of-Words (CBOW) and Skip-gram. We then use the Stacked Bi-LSTM model to extract features from the sequence of word vectors. Next, we apply a binary softmax classifier to predict the sentiment orientation using semantic and contextual features. Moreover, we conduct extensive experiments on a real dataset collected from Weibo, one of the most popular Chinese microblog platforms. The experimental results show that our proposed approach achieves better performance than other machine learning models.
https://www.researchgate.net/publication/331752074_Sentiment_Analysis_of_Chinese_Microblog_Based_on_Stacked_Bidirectional_LSTM
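The CBOW versus Skip-gram comparison mentioned above comes down to a single gensim flag; a tiny sketch of producing both kinds of vectors from segmented Weibo text follows (the corpus and sizes here are placeholders).

```python
from gensim.models import Word2Vec

corpus = [["今天", "天气", "很", "好"], ["这", "部", "电影", "不", "好看"]]  # toy segmented posts

cbow = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)       # CBOW
skipgram = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)   # Skip-gram

vec = skipgram.wv["电影"]   # 100-dimensional word vector, e.g. fed to a Bi-LSTM
```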
Thursday, March 14, 2019
Deep-based Ingredient Recognition for Cooking Recipe Retrieval
Retrieving recipes corresponding to given dish pictures facilitates the estimation of nutrition facts, which is crucial to various health-relevant applications. Current approaches mostly focus on recognizing the food category from global dish appearance without explicit analysis of ingredient composition. Such approaches are incapable of retrieving recipes with unknown food categories, a problem referred to as zero-shot retrieval. On the other hand, content-based retrieval without knowledge of food categories also struggles to attain satisfactory performance due to large visual variations in food appearance and ingredient composition. As the number of ingredients is far smaller than the number of food categories, understanding the ingredients underlying dishes is in principle more scalable than recognizing every food category and thus is suitable for zero-shot retrieval. Nevertheless, ingredient recognition is a far harder task than food categorization, and this seriously challenges the feasibility of relying on ingredients for retrieval. This paper proposes deep architectures for simultaneous learning of ingredient recognition and food categorization, exploiting the mutual but also fuzzy relationship between them. The learnt deep features and semantic ingredient labels are then innovatively applied for zero-shot retrieval of recipes. By experimenting on a large Chinese food dataset with images of highly complex dish appearance, this paper demonstrates the feasibility of ingredient recognition and sheds light on this zero-shot problem peculiar to cooking recipe retrieval.
https://dl.acm.org/citation.cfm?id=2964315
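A hypothetical PyTorch sketch of the joint-learning idea: one shared image backbone with two heads, a softmax head for the food category and a multi-label sigmoid head for ingredients, trained with a weighted sum of the two losses. The backbone and label counts are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

N_CATEGORIES, N_INGREDIENTS = 172, 353        # illustrative label counts

class IngredientNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()           # keep everything up to global pooling
        self.backbone = backbone
        self.cat_head = nn.Linear(feat_dim, N_CATEGORIES)   # food category (softmax)
        self.ing_head = nn.Linear(feat_dim, N_INGREDIENTS)  # ingredients (multi-label)

    def forward(self, images):
        f = self.backbone(images)
        return self.cat_head(f), self.ing_head(f)

model = IngredientNet()
images = torch.randn(4, 3, 224, 224)
cat_logits, ing_logits = model(images)
cat_loss = nn.CrossEntropyLoss()(cat_logits, torch.randint(0, N_CATEGORIES, (4,)))
ing_loss = nn.BCEWithLogitsLoss()(ing_logits,
                                  torch.randint(0, 2, (4, N_INGREDIENTS)).float())
loss = cat_loss + 0.5 * ing_loss              # weighted multi-task objective
```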
Machine learning in the cloud: How it can help you right now
What is machine learning?
Machine learning is the study of algorithms that learn from patterns in data and, based on those patterns, make predictions. Rather than relying on static program instructions, such systems make data-driven predictions or decisions that improve over time without human intervention or additional programming.
One of the concerns, as machine learning becomes more affordable through the use of cloud platforms, is that the technology will be misapplied. This already seems to be a pattern, as cloud providers promote machine learning as having wide value. However, that value won’t be realized if machine learning is applied to systems that can’t benefit from making predictions based on patterns found in data.
So what’s the bottom line with machine learning and the cloud? There is actual value there for businesses, if correctly applied. Enterprises looking for applications for this technology may find that, in some cases, machine learning could be a game-changer for the business.
Finding machine-learning use cases
Machine-learning applications have been widely promoted as the ultimate systems that can provide better value to enterprises. However, machine learning is best leveraged for specific types of applications that benefit the most from this technology, such as fraud detection, predictive marketing, machine monitoring (for the Internet of Things), and inventory management.
Keep in mind that not all machine-learning models are the same; they provide different solution patterns. Most cloud providers, including AWS, Google, and Microsoft, provide support for three types of predictions. Each provider uses its own names, but they boil down to three:
Binary prediction
Category prediction
Value prediction
Let’s explore the potential use cases with each of these.
Binary predictions deal with yes-or-no responses. Use cases where it can be helpful include evaluating data in orders that could suggest fraud, or deciding when it might be worthwhile to try to “upsell” products to a customer based on input from a machine learning-enabled recommendation engine.
The applications for this type of prediction are more numerous than for the other types, because the responses are much less complex: yes or no. Thus, these machine-learning use cases are often found in business processes such as order processing, credit check systems, and engines that recommend videos, music, or other products to users based upon gathered data and learned responses.
Category prediction means that we’re able to look at a data set and, based upon learned information, place that information into a particular category. This is useful when many different types of data are being analyzed and a category should be applied so that the data can be better understood and processed.
For instance, insurance companies place different claims in specific categories, based upon what’s been learned over the years. An example would be to define the likely cause of an accident, even if the information is not a part of the data, such as “alcohol likely involved,” “likely fraudulent,” or “likely weather-related.” The machine-learning system makes these assignments based upon past learning, such as the time of day that the accident occurred, location, type of damage done, age of driver, etc.
Category predictions can work with many different types of applications, such as when we need to place additional meaning around the data and the direct correlation data is not in the existing database. Finance, manufacturing, and retail are all verticals that can use this type of technology.
Value predictions are more complex but also more insightful. They tell you, quantitatively, about likely outcomes culled from the data, again by using learning models to find patterns in that data.
Say we want to find out how many units of a product are likely to sell in the next month. It's good information to know, because it allows us to do tighter manufacturing planning and perhaps economize on travel as salespeople follow up on leads.
The idea is to place these types of predictions within systems that can find this information of value, such as planning and financial systems. Also, they can be part of a management dashboard, so those who make critical decisions in the organization are more likely to find this information of value.
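To make the three prediction styles concrete, here is a toy scikit-learn sketch (with random stand-in data) of a binary classifier, a category classifier, and a value regressor; cloud ML services wrap equivalents of these behind their own APIs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                      # 200 records, 4 features

binary = LogisticRegression().fit(X, rng.integers(0, 2, 200))      # yes/no (e.g., fraud)
category = LogisticRegression().fit(X, rng.integers(0, 3, 200))    # one of several categories
value = LinearRegression().fit(X, rng.normal(size=200))            # numeric value (e.g., units sold)

new_record = rng.normal(size=(1, 4))
print(binary.predict(new_record),
      category.predict(new_record),
      value.predict(new_record))
```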
Machine learning in the cloud: What's available today?
Many open-source and proprietary machine-learning systems support the types of predictions described above, and they've been around for years. However, the cost of these systems, in terms of hardware and software, was until recently out of reach for most enterprises. Moreover, even if a business could afford it, it typically did not have the machine-learning talent required to design the prediction models or deal with the data science required.
Enter cloud-based machine-learning solutions from the big three public cloud providers: Google, AWS, and Microsoft. They are very different from each other but share some commonality, advantages, and limitations.
Advantages of available ML systems
These systems are cheap to operate. You only have to pay a few dollars an hour, on average, to drive your very own machine-learning application such as the ones outlined above.
Public clouds also provide cheap data storage. You can leverage true databases or storage systems as the input of the data into the machine learning-enabled applications.
Finally, they all provide SDKs (software development kits) and APIs that let you embed machine-learning functionality directly into applications, and they support most programming languages. The real value of machine-learning technology comes from its use within applications, because the predictions being made are operations- and transaction-focused: for instance, determining in real time whether a loan application is likely fraudulent and providing a process to deal with the issue immediately, perhaps allowing the applicant to fix any errors and resubmit.
Disadvantages
The machine-learning systems on particular public clouds are pretty much bound to those clouds. So if you use a machine-learning system on cloud A, then the data storage mechanism on cloud A will typically be natively supported. However, your enterprise database is not supported unless you provide data integration between your on-premises data storage system and those in the cloud.
Thus, the key value for the cloud provider is clear: If you, the customer, are looking to take advantage of the native machine-learning system, then you will probably want to take advantage of the native storage systems and native databases as well. Also, the applications live better on the cloud platform if they frequently talk to the machine-learning models, which, in turn, often talk to the data. Get the hook?
Of course, if you’re looking to move the data, applications, and other processes to the cloud, you’re fine. The machine-learning system can be accessed as a native cloud service. But if you’re working with hybrid- or multi-cloud deployments—and most of us are—then the separation of the data from the machine-learning engine will be problematic in terms of performance, cost, and usability. Clearly, machine learning could be a loss leader that is designed to attach more enterprises to the cloud.
Learning to make systems that learn
Although machine learning is being sold as a shiny new tool, it’s actually technology that’s been evolving for years. Current IT economics allow us to consider the power of AI, and the AI instances of machine learning, to finally provide value to the enterprise.
A few events brought us to this point.
First, the rise of cheap data storage, cloud and non-cloud, makes massive data sets available from a single source.
Second, we now have the power to process that data, in terms of sheer processing power, storage, and new big data architectures such as Hadoop.
Third, machine learning and other formerly expensive services are now provided as cheap cloud services that can, in some cases, be rented for pennies an hour.
Still, machine-learning systems need to be created and managed by those who understand machine learning and data-driven decisions. The limitations are not within the technology, but in the limited number of people who can understand and use it. The skills issue will take much longer to solve, but when we do solve it, we’re looking at technology that can be a real game-changer for most businesses.
How are you using machine learning? Has it helped your business? Share your experiences in the comments section.
https://techbeacon.com/enterprise-it/machine-learning-cloud-how-it-can-help-you-right-now
What is machine learning?
Machine learning is really about the study of algorithms that have the ability to learn through patterns and, based on that, make predictions against patterns of data. It’s a better alternative to leveraging static program instructions and instead making data-driven predictions or decisions that will improve over time without human intervention and additional programming.
Machine learning could be a game-changer for the business.
One of the concerns, as machine learning becomes more affordable through the use of cloud platforms, is that the technology will be misapplied. This already seems to be a pattern, as cloud providers promote machine learning as having wide value. However, that value won’t be realized if machine learning is applied to systems that can’t benefit from making predictions based on patterns found in data.
So what’s the bottom line with machine learning and the cloud? There is actual value there for businesses, if correctly applied. Enterprises looking for applications for this technology may find that, in some cases, machine learning could be a game-changer for the business.
[ Webinar: Stop Random Acts of Cloud (and Overspending) At Your Organization ]
Finding machine-learning use cases
Machine-learning applications have been widely promoted as the ultimate systems builds that can provide better value to enterprises. However, machine learning is best leveraged for specific types of applications that will benefit the most from this technology, such as fraud detection, predictive marketing, machine monitoring (for the Internet of Things), and inventory management.
Keep in mind that not all machine-learning models are the same. They provide different solution patterns. Most cloud providers, including AWS, Google, and Amazon, provide support for three types of predictions. They have different names, but they boil down to three:
Binary prediction
Category prediction
Value prediction
Let’s explore the potential use cases with each of these.
Binary predictions deal with yes-or-no responses. Use cases where it can be helpful include evaluating data in orders that could suggest fraud, or deciding when it might be worthwhile to try to “upsell” products to a customer based on input from a machine learning-enabled recommendation engine.
The types of applications we leverage for these types of predictions are more numerous than the other types of predictions, considering that the responses are much less complex: yes or no. Thus, these types of machine-learning use cases often are found in business processes such as order processing, credit check systems, and engines used to recommend videos, music, or other products to users based upon gathered data and learned responses.
Category prediction means that we’re able to look at a data set and, based upon learned information, place that information into a particular category. This is useful when much different types of data are being analyzed and a category should be applied so that data can be better understood and processed.
For instance, insurance companies place different claims in specific categories, based upon what’s been learned over the years. An example would be to define the likely cause of an accident, even if the information is not a part of the data, such as “alcohol likely involved,” “likely fraudulent,” or “likely weather-related.” The machine-learning system makes these assignments based upon past learning, such as the time of day that the accident occurred, location, type of damage done, age of driver, etc.
Category predictions can work with many different types of applications, such as when we need to place additional meaning around the data and the direct correlation data is not in the existing database. Finance, manufacturing, and retail are all verticals that can use this type of technology.
Value predictions are more complex but also more insightful. They tell you quantitatively about likely outcomes from the data culled, again, from using learning models to find patterns in the data.
Say we want to find out how many units of a product are likely to sell in the next month. It's good information to know, because it allows us to do tighter manufacturing planning and perhaps economize on travel as salespeople follow up on leads.
The idea is to place these types of predictions within systems that can find this information of value, such as planning and financial systems. Also, they can be part of a management dashboard, so those who make critical decisions in the organization are more likely to find this information of value.
Machine learning is best leveraged for specific types of applications, such as fraud detection, predictive marketing, machine monitoring (for the IoT), and inventory management.
Machine learning in the cloud: What's available today?
Many open-source and proprietary machine-learning systems support the types of predictions described above, and they've been around for years. However, the cost of these systems, in terms of hardware and software, was until recently out of reach for most enterprises. Moreover, even if a business could afford it, it typically did not have the machine-learning talent required to design the prediction models or deal with the data science required.
Enter cloud-based machine-learning solutions from the big three public cloud providers: Google, AWS, and Microsoft. They are very different from each other but share some commonality, advantages, and limitations.
Advantages of available ML systems
These systems are cheap to operate. You only have to pay a few dollars an hour, on average, to drive your very own machine-learning application such as the ones outlined above.
Public clouds also provide cheap data storage. You can leverage true databases or storage systems as the input of the data into the machine learning-enabled applications.
Finally, they all provide SDKs (software developer kits) and APIs that allow you to embed machine-learning functionality directly into applications, and they support most programming languages. The real value of machine-learning technology is the use from within applications, because the types of predictions that are made are more operations- and transaction-focused—for instance, the ability to determine in real time if a loan application is likely fraudulent and provide a process to immediately deal with the issue, perhaps allowing the applicant to fix any errors and resubmit.
Disadvantages
The machine-learning systems on particular public clouds are pretty much bound to those clouds. So if you use a machine-learning system on cloud A, then the data storage mechanism on cloud A will typically be natively supported. However, your enterprise database is not supported unless you provide data integration between your on-premises data storage system and those in the cloud.
Thus, the key value for the cloud provider is clear: If you, the customer, are looking to take advantage of the native machine-learning system, then you will probably want to take advantage of the native storage systems and native databases as well. Also, the applications live better on the cloud platform if they frequently talk to the machine-learning models, which, in turn, often talk to the data. Get the hook?
Of course, if you’re looking to move the data, applications, and other processes to the cloud, you’re fine. The machine-learning system can be accessed as a native cloud service. But if you’re working with hybrid- or multi-cloud deployments—and most of us are—then the separation of the data from the machine-learning engine will be problematic in terms of performance, cost, and usability. Clearly, machine learning could be a loss leader that is designed to attach more enterprises to the cloud.
Learning to make systems that learn
Although machine learning is being sold as a shiny new tool, it’s actually technology that’s been evolving for years. Current IT economics allow us to consider the power of AI, and the AI instances of machine learning, to finally provide value to the enterprise.
A few events brought us to this point.
First, the rise of cheap data storage, cloud and non-cloud, that makes the same massive data sets available from the same source.
Second, the power that we have to process the data, both in sheer processing power, storage, and new big data architectures such as Hadoop.
Third, the use of machine learning and other formally expensive services, now provided as cheap cloud services that can be rented for pennies an hour, in some cases.
Still, machine-learning systems need to be created and managed by those who understand machine learning and data-driven decisions. The limitations are not within the technology, but in the limited number of people who can understand and use it. The skills issue will take much longer to solve, but when we do solve it, we’re looking at technology that can be a real game-changer for most businesses.
How are you using machine learning? Has it helped your business? Share your experiences in the comments section.
.
https://techbeacon.com/enterprise-it/machine-learning-cloud-how-it-can-help-you-right-now Read More
DASH2M: Exploring HTTP/2 for Internet Streaming to Mobile Devices
.
Today, HTTP/1.1 is the most popular vehicle for delivering Internet content, including streaming video. Standardized in 2015 with a few new features, HTTP/2 is gradually replacing HTTP/1.1 to improve user experience. Yet how HTTP/2 can help improve video streaming delivery has not been thoroughly investigated. In this work, we set out to investigate how to utilize the new features offered by HTTP/2 for video streaming over the Internet, focusing on delivery to mobile devices, as more and more users today watch video on their mobile devices. For this purpose, we design DASH2M, Dynamic Adaptive Streaming over HTTP/2 to Mobile Devices. DASH2M deliberately schedules the streaming content delivery by comprehensively considering the user's Quality of Experience (QoE), the dynamics of the network resources, and the power efficiency of the mobile device. Experiments based on an implemented prototype show that DASH2M can outperform prior strategies in users' QoE while minimizing battery power consumption on mobile devices.
.
https://dl.acm.org/citation.cfm?id=2964313 Read More
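The abstract describes the scheduler only at a high level, so the following toy Python sketch is not the DASH2M algorithm; it merely illustrates the kind of trade-off the paper targets, picking the next segment bitrate from estimated throughput, buffer occupancy, and remaining battery. The bitrate ladder and thresholds are arbitrary assumptions.

# Illustrative only: a toy bitrate picker that trades off throughput,
# buffer level, and battery, in the spirit of the concerns listed in the
# DASH2M abstract. It is NOT the paper's scheduling algorithm.

BITRATES_KBPS = [350, 750, 1500, 3000, 6000]  # assumed bitrate ladder

def pick_bitrate(est_throughput_kbps: float,
                 buffer_seconds: float,
                 battery_fraction: float) -> int:
    """Return the highest sustainable bitrate, scaled back when the buffer
    is short or the battery is low (all thresholds are assumptions)."""
    budget = est_throughput_kbps * 0.8     # keep 20% throughput headroom
    if buffer_seconds < 5.0:               # close to stalling: be conservative
        budget *= 0.5
    if battery_fraction < 0.2:             # low battery: cap quality to save energy
        budget = min(budget, 1500)
    candidates = [rate for rate in BITRATES_KBPS if rate <= budget]
    return candidates[-1] if candidates else BITRATES_KBPS[0]

if __name__ == "__main__":
    print(pick_bitrate(4200, buffer_seconds=12.0, battery_fraction=0.9))   # healthy session
    print(pick_bitrate(4200, buffer_seconds=3.0, battery_fraction=0.15))   # constrained session

A real scheduler such as the one the paper describes would presumably also exploit HTTP/2 features such as multiplexing, stream prioritization, and server push, which a sketch like this does not touch.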
Patterns of Free-form Curation: Visual Thinking with Web Content
.
Web curation involves choosing, organizing, and commenting on content. Popular web curation apps (e.g., Facebook, Twitter, and Pinterest) provide linear feeds that show people the latest content, but provide little support for articulating relationships among content elements. The new medium of free-form web curation enables multimedia elements to be spontaneously gathered from the web, written about, sketched amidst, manipulated, and visually assembled in a continuous space. Through free-form web curation, content is collected, interpreted, and arranged, creating context. We conducted a field study of 1581 students in 6 courses, spanning diverse fields. We derive patterns of free-form curation through a visual grounded theory analysis of the resulting dataset of 4426 curations. From the observed range of invocations of the patterns in the performance of ideation tasks, we conclude that free-form is valuable as a new medium of web curation in how it supports creative visual thinking.
.
https://dl.acm.org/citation.cfm?id=2964303 Read More
Multi-modal Multi-view Topic-opinion Mining for Social Event Analysis
.
In this paper, we propose a novel multi-modal multi-view topic-opinion mining (MMTOM) model for social event analysis in multiple collection sources. Compared with existing topic-opinion mining methods, our proposed model has several advantages: (1) The proposed MMTOM can effectively take into account multi-modal and multi-view properties jointly in a unified and principled way for social event modeling. (2) Our model is general and can be applied to many other applications in multimedia, such as opinion mining and sentiment analysis, multi-view association visualization, and topic-opinion mining for movie reviews. (3) The proposed MMTOM is able to not only discover multi-modal common topics from all collections as well as summarize the similarities and differences of these collections along each specific topic, but also automatically mine multi-view opinions on the learned topics across different collections. (4) Our topic-opinion mining results can be effectively applied to many applications including multi-modal multi-view topic-opinion retrieval and visualization, which achieve much better performance than existing methods. To evaluate the proposed model, we collect a real-world dataset for research on multi-modal multi-view social event analysis, and will release it for academic use. We have conducted extensive experiments, and both qualitative and quantitative evaluation results have demonstrated the effectiveness of the proposed MMTOM.
.
https://dl.acm.org/citation.cfm?id=2964294 Read More
A Digital World to Thrive In: How the Internet of Things Can Make the "Invisible Hand" Work
.
Managing data-rich societies wisely and reaching sustainable development are among the greatest challenges of the 21st century. We are faced with existential threats and huge opportunities. If we don't act now, large parts of our society will not be able to economically benefit from the digital revolution. This could lead to mass unemployment and social unrest. It is time to create the right framework for the digital society to come.
.
https://dl.acm.org/citation.cfm?id=2984749 Read More
Sentiment and Emotion Analysis for Social Multimedia: Methodologies and Applications
.
Online social networks have attracted attention from both academia and the real world. In particular, the rich multimedia information accumulated in recent years provides an easy and convenient way for more active communication between people. This offers an opportunity to study people's behaviors and activities based on that multimedia content. One emerging area is driven by the fact that these massive multimedia data contain people's daily sentiments and opinions. However, existing sentiment analysis typically focuses on textual information and disregards the visual content, which may be just as informative in expressing people's sentiments and opinions. In this research, we attempt to analyze the online sentiment changes of social media users using both the textual and visual content. Nowadays, social media networks such as Twitter have become major platforms of information exchange and communication between users, with tweets as the common information carrier. As an old saying has it, an image is worth a thousand words. The image tweet is a great example of multimodal sentiment. In this research, we focus on sentiment analysis based on visual and multimedia information analysis. We review the state of the art in this topic. Several of our projects related to this research area are also discussed. Experimental results are included to demonstrate and summarize our contributions.
.
https://dl.acm.org/citation.cfm?id=2971475 Read More
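To make the multimodal idea in the abstract above concrete, here is a minimal late-fusion sketch in Python; the per-modality scorers, weights, and thresholds are our assumptions, not the authors' method.

# Minimal late-fusion sketch: combine a text sentiment score and an image
# sentiment score for a single image tweet. The weights, thresholds, and the
# existence of per-modality scorers are assumptions; this is not the authors' pipeline.

def fuse_sentiment(text_score: float, image_score: float,
                   text_weight: float = 0.6) -> str:
    """Both scores are in [-1, 1] (negative .. positive); returns a coarse label."""
    combined = text_weight * text_score + (1.0 - text_weight) * image_score
    if combined > 0.1:
        return "positive"
    if combined < -0.1:
        return "negative"
    return "neutral"

if __name__ == "__main__":
    # A mildly negative caption attached to a cheerful photo.
    print(fuse_sentiment(text_score=-0.2, image_score=0.7))  # -> "positive" with these weights

Even this crude weighted average shows why the visual channel matters: a caption-only classifier would have called the example negative.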