Between rolled eyes, shrugged shoulders, jazz hands and vocal inflection, it's not hard to tell when someone is being sarcastic face to face. Online, however, you'll need SpongeBob memes and liberal application of the Shift key to get your contradictory point across. Luckily for us netizens, DARPA's Information Innovation Office (I2O) has collaborated with researchers at the University of Central Florida to develop a deep learning AI capable of understanding written sarcasm with a surprising degree of accuracy.
"With the high velocity and volume of social media data, companies rely on tools to analyze data and to provide for customer service. These tools perform tasks such as content moderation, sentiment analysis, and the extraction of relevant messages for the company's customer service representatives to respond to," Dr. Ivan Garibay, UCF Associate Professor of Industrial Engineering and Management Systems, told Engadget via email. "However, these tools lack the sophistication to decipher more nuanced forms of language, such as sarcasm or humor, in which the meaning of a message is not always obvious and explicit. This imposes an extra burden on the social media team, which is already inundated with customer messages, to identify these messages and respond appropriately."
As they explain in a study published in the journal Entropy, Garibay and UCF PhD student Ramya Akula have built "an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text."
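That two-stage design, attention over token embeddings followed by a recurrent layer and a binary classifier, can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the authors' code: the layer sizes, vocabulary size, and class name are all assumptions.

```python
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    """Illustrative sketch: multi-head self-attention over token
    embeddings, then a GRU, then a binary sarcasm head. All sizes
    and names here are invented for the example."""

    def __init__(self, vocab_size=10_000, d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, 1)  # sarcastic vs. not

    def forward(self, token_ids):
        x = self.embed(token_ids)                # (batch, seq, d_model)
        attended, weights = self.attn(x, x, x)   # self-attention: Q = K = V
        _, h_n = self.gru(attended)              # final hidden state
        logits = self.head(h_n[-1]).squeeze(-1)  # (batch,)
        return torch.sigmoid(logits), weights    # probability + attention map

model = SarcasmClassifier()
probs, attn_weights = model(torch.randint(0, 10_000, (2, 12)))
```

Returning the attention weights alongside the prediction is what makes an architecture like this "interpretable": they indicate which input tokens the model leaned on.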
"Essentially, the researchers' approach is focused on discovering patterns in the text that indicate sarcasm," Dr. Brian Kettler, a program manager at I2O who oversees the SocialSim program, explained in a recent press statement. "It identifies cue-words and their relationship to other words that are representative of sarcastic expressions or statements."
The team's methodology differs from the approaches used in previous efforts to spot sarcasm on Twitter using machine learning. "The older way to approach it would be to sit there and define the features we're going to look at," Kettler told Engadget, "maybe linguistic theories about what makes language sarcastic," or labeled markers drawn from a sentence's context, like an ostensibly positive Amazon review of a universally panned product or feature. This model instead learns on its own which words and punctuation marks, such as "just," "again," "really," and "!", to pay attention to. "These are the words in the sentence that hint at sarcasm and, as expected, these receive higher attention than others," the researchers wrote.
[Image: sarcasm AI word graph — Complex Adaptive Systems Lab, University of Central Florida]
For this project, the researchers used a diverse group of datasets sourced from Twitter, Reddit, The Onion, HuffPost and the Sarcasm Corpus V2 Dialogues from the Internet Argument Corpus. "That's the beauty of this approach, all you need are training examples," Kettler said. "Plenty of them, and the system will learn what features in the input text are predictive of the language being sarcastic."
The model also offers a degree of transparency into its decision-making process that isn't typically seen in deep learning AI models like this. The sarcasm AI will actually show the user which linguistic features it has learned and deems important in a given sentence, through a visualization of its attention mechanism.
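The idea behind that kind of visualization can be sketched in a few lines: take the per-token attention weights and render them as a text heatmap. The tokens and weights below are invented for illustration; in the real model the weights would come from the attention layer itself.

```python
import numpy as np

def attention_bars(tokens, weights, width=40):
    """Render per-token attention weights as a crude text heatmap:
    more '#' marks mean more attention. The weights passed in here
    are made up; a trained model would supply them."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a distribution
    return [f"{tok:>10s} | {'#' * int(round(w * width))}"
            for tok, w in zip(tokens, weights)]

tokens  = ["I", "just", "love", "waiting", "in", "line", "!"]
weights = [0.05, 0.30, 0.15, 0.10, 0.05, 0.05, 0.30]  # hypothetical
print("\n".join(attention_bars(tokens, weights)))
```

In a rendering like this, cue words such as "just" and the trailing "!" would show the longest bars, which is exactly the pattern the researchers describe.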
More impressive still are the system's accuracy and precision. On the Twitter dataset, the model achieved a near-perfect F1 score of 98.7 (8.7 points higher than its closest rival), while on the Reddit dataset it scored 81.0, 4 points higher than the competition. On news headlines it scored 91.8, more than five points ahead of similar detection systems, though it did struggle a bit with dialogue (managing an F1 of just 77.2).
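For context, the F1 metric reported here is the harmonic mean of precision (how many posts flagged as sarcastic really were) and recall (how many sarcastic posts were caught), scaled to 0–100 as in the paper's tables. The confusion counts below are invented purely to show the arithmetic:

```python
def f1_score(tp, fp, fn):
    """F1 as a 0-100 score: harmonic mean of precision and recall.
    tp/fp/fn = true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 100 * 2 * precision * recall / (precision + recall)

# Hypothetical counts: 90 sarcastic posts caught, 10 false alarms, 5 missed.
print(round(f1_score(tp=90, fp=10, fn=5), 1))  # → 92.3
```

Because it is a harmonic mean, F1 punishes a model that trades one of the two qualities away, which is why it is the standard headline number for classifiers like this.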
Once the model is developed further, it could become an invaluable tool for both the public and private sectors. Kettler sees this AI fitting into the broader mission of the SocialSim program. "It's part of what we're doing more broadly, which is really looking at and understanding the online information environment," he said, trying to figure out "engagement at a high level [and] how many people are likely to engage with what kind of information."
For example, when the NIH or CDC conducts a public health campaign and solicits feedback online, organizers will have a far easier time gauging overall public opinion about the campaign once sarcastic troll replies and other diversions have been filtered out.
"We want to understand the sentiment," he continued, "where people's interests lie, whether people broadly like something or don't like something, and sarcasm can really trip up sentiment detection... It's an important piece of technology, and one that enables machines to better interpret what we're seeing online."
The UCF team plans to develop the model further so that it can work with languages other than English, before eventually open-sourcing the code. But Garibay notes that a potential sticking point will be their ability to produce "voluminous high-quality datasets in multiple languages. Then the next big challenge will be handling ambiguity, colloquialisms, slang, and coping with the evolution of language."