Research on Multimodal Fusion Sentiment Recognition of Online Users Based on Deep Learning

Abstract: Existing research on sentiment recognition of online users is mostly based on the single modality of text; studies that combine the texts posted by online users with the attached images to recognize their sentiment are scarce. This paper designs a deep-learning-based multimodal fusion sentiment recognition model for online users. A word2vec model is used to represent the texts, a BiLSTMs model is built to extract textual sentiment features, and a fine-tuned CNNs based on transfer learning, with VGG16 as the base model, is built to extract visual sentiment features. The extracted textual and visual features are fused at the feature level and fed into an SVM classifier to complete multimodal sentiment recognition. The proposed multimodal fusion model (DNNs-SVM) is compared with designed baseline models, namely word2vec+BiLSTMs, BERT+BiLSTMs, CNNs, fine-tuned CNNs, and DNNs. The experimental results show that multimodal recognition based on the fused textual and visual features outperforms unimodal recognition, and the DNNs-SVM model outperforms all of the designed baseline models.

Key words: sentiment of online users; multimodal fusion; sentiment recognition; BiLSTMs; fine-tuned CNNs; online public opinion; public opinion monitoring
CLC number: G350
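The textual pipeline described in the abstract (word2vec representation followed by a BiLSTMs feature extractor) can be sketched as below. This is a minimal illustration, not the paper's implementation: it assumes TensorFlow/Keras and an embedding matrix exported from a trained word2vec model, and the vocabulary size, sequence length, and feature dimensions are illustrative assumptions rather than values reported in the paper.

```python
# Minimal sketch of the textual branch: posts are represented with word2vec
# embeddings and a BiLSTM turns each token sequence into a fixed-length
# sentiment feature vector. All dimensions below are illustrative assumptions.
import numpy as np
from tensorflow.keras import initializers, layers, models

MAX_LEN = 100        # assumed maximum tokens per post
VOCAB_SIZE = 20000   # assumed vocabulary size
EMBED_DIM = 300      # common word2vec dimensionality
TEXT_FEAT_DIM = 128  # size of the extracted textual feature vector

def build_text_branch(embedding_matrix: np.ndarray) -> models.Model:
    """BiLSTM feature extractor; `embedding_matrix` is exported from word2vec."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(
        input_dim=VOCAB_SIZE,
        output_dim=EMBED_DIM,
        embeddings_initializer=initializers.Constant(embedding_matrix),
        trainable=False,  # keep the pretrained word vectors fixed
    )(inputs)
    # The bidirectional LSTM reads the sequence forwards and backwards and
    # returns one summary vector per post.
    x = layers.Bidirectional(layers.LSTM(TEXT_FEAT_DIM // 2))(x)
    features = layers.Dense(TEXT_FEAT_DIM, activation="relu")(x)
    return models.Model(inputs, features, name="text_branch")
```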
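For the visual branch, the abstract describes a fine-tuned CNNs built by transfer learning with VGG16 as the base model. A hedged sketch, again assuming Keras, is shown below: ImageNet weights are loaded, the early convolutional blocks are frozen, and only the last block plus a new dense head are trained. Which layers are unfrozen and the output dimensionality are assumptions, not details taken from the paper.

```python
# Minimal sketch of the visual branch: ImageNet-pretrained VGG16 as the base
# model, with only its last convolutional block left trainable (fine-tuning)
# and a small head that produces the visual sentiment feature vector.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

IMG_FEAT_DIM = 128  # assumed size of the extracted visual feature vector

def build_image_branch() -> models.Model:
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        # Freeze blocks 1-4; fine-tune block 5 (this split is an assumption).
        layer.trainable = layer.name.startswith("block5")
    x = layers.GlobalAveragePooling2D()(base.output)
    features = layers.Dense(IMG_FEAT_DIM, activation="relu")(x)
    return models.Model(base.input, features, name="image_branch")
```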
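Finally, the abstract states that the textual and visual features are fused at the feature level and fed into an SVM. A minimal scikit-learn sketch follows; it assumes the two branches above have already been trained and used to extract feature matrices, and the RBF kernel and macro-F1 metric are illustrative choices rather than the paper's reported setup.

```python
# Minimal sketch of feature-level fusion and SVM classification: the textual
# and visual feature matrices are concatenated column-wise and an SVM is
# trained on the fused vectors. Kernel and metric choices are assumptions.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.svm import SVC

def fuse_and_classify(text_feats, img_feats, labels, train_idx, test_idx):
    # text_feats: (n_samples, TEXT_FEAT_DIM); img_feats: (n_samples, IMG_FEAT_DIM)
    fused = np.concatenate([text_feats, img_feats], axis=1)  # feature-level fusion
    clf = SVC(kernel="rbf")
    clf.fit(fused[train_idx], labels[train_idx])
    preds = clf.predict(fused[test_idx])
    return f1_score(labels[test_idx], preds, average="macro")
```

Fusing at the feature level (early fusion) lets the SVM see both modalities jointly, whereas the unimodal baselines listed in the abstract classify each modality on its own.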