-
Content-Based Features Predict Social Media Influence Operations
Alizadeh, Meysam,
Shapiro, Jacob,
Buntain, Cody,
and
Tucker, Joshua A.
Science Advances
2020
-
Incident Streams 2019: Actionable Insights and How to Find Them
McCreadie, Richard,
Buntain, Cody,
and
Soboroff, Ian
In Proceedings of the 17th International Conference on Information Systems for Crisis Response and Management
2020
-
Information Processing on Social Media Networks as Emergent Collective Intelligence
Smyth, Martin,
Buntain, Cody,
Dwyer, Debra,
Finn, Joseph,
Jones, Jason,
Garland, Joshua,
and
Egan, Michael
ACM Collective Intelligence 2020
2020
-
Hawkes binomial topic model with applications to coupled conflict-Twitter data
Mohler, George,
McGrath, Erin,
Buntain, Cody,
and
LaFree, Gary
The Annals of Applied Statistics
2020
-
Artificial Dissimilarity: Multi-Modal Content Similarities in Online Disinformation Campaigns
Buntain, Cody,
Padmakumar, Vishakh,
Bonneau, Richard,
Nagler, Jonathan,
and
Tucker, Joshua A
In ACM Collective Intelligence 2020
2020
-
Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube During the 2016 US Presidential Election
Golovchenko, Yevgenii,
Buntain, Cody,
Eady, Gregory,
Yin, Leon,
Brown, Megan A.,
and
Tucker, Joshua A.
International Journal of Press/Politics
2020
[Abs]
This paper investigates online propaganda strategies of the Internet Research Agency (IRA)—Russian “trolls”—during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, “conservative” trolls were more active than “liberal” ones. The IRA also shared content across social media platforms, particularly YouTube—the second-most linked destination among IRA tweets. Although overall news content shared by trolls leaned moderate to conservative, we find troll accounts on both sides of the ideological spectrum, and these accounts maintain their political alignment. Links to YouTube videos were decidedly conservative, however. While mixed, this evidence is consistent with the IRA’s supporting the Republican campaign, but the IRA’s strategy was multifaceted, with an ideological division of labor among accounts. We contextualize these results as consistent with a pre-propaganda strategy. This work demonstrates the need to view political communication in the context of the broader media ecology, as governments exploit the interconnected information ecosystem to pursue covert propaganda strategies.
-
What is BitChute? Characterizing the "Free Speech" Alternative to YouTube
Trujillo, Milo,
Gruppi, Maurício,
Buntain, Cody,
and
Horne, Benjamin D.
In Proceedings of the 31st ACM Conference on Hypertext and Social Media
2020
[Abs]
In this paper, we characterize the content and discourse on BitChute, a social video-hosting platform. Launched in 2017 as an alternative to YouTube, BitChute joins an ecosystem of alternative, low content moderation platforms, including Gab, Voat, Minds, and 4chan. Uniquely, BitChute is the first of these alternative platforms to focus on video content and is growing in popularity. Our analysis reveals several key characteristics of the platform. We find that only a handful of channels receive any engagement, and almost all of those channels contain conspiracies or hate speech. This high rate of hate speech on the platform as a whole, much of which is anti-Semitic, is particularly concerning. Our results suggest that BitChute has a higher rate of hate speech than Gab but less than 4chan. Lastly, we find that while some BitChute content producers have been banned from other platforms, many maintain profiles on mainstream social media platforms, particularly YouTube. This paper contributes a first look at the content and discourse on BitChute and provides a building block for future research on low content moderation platforms.
-
#HandsOffMyADA: A Twitter Response to the ADA Education and Reform Act
Auxier, Brooke E.,
Buntain, Cody L.,
Jaeger, Paul,
Golbeck, Jennifer,
and
Kacorri, Hernisa
In Proceedings of the Conference on Human Factors in Computing Systems (CHI)
2019
[Abs]
Twitter continues to be used increasingly for communication related to advocacy, activism, and social change. This is also the case for the disability community. In light of the recently proposed ADA Education and Reform Act in the United States, we investigate factors for effectiveness of sharing or retweeting messages about topics affecting the rights of people with disabilities. We perform a multifaceted study of the #HandsOffMyADA campaign against the proposed H.R.620 bill to: (1) explore how communication via Twitter compares to previous disability rights movements; (2) characterize the campaign in terms of hashtags, user groups, and content such as accessible multimedia that contribute to dissemination of campaign messages; (3) identify major themes in tweets and responses, and their variation among user groups; and (4) understand how the disability community mobilized for this campaign compared to previous Twitter initiatives.
-
Analyzing a fake news authorship network
Buntain, Cody,
Golbeck, Jennifer,
Auxier, Brooke,
Assefa, Biniyam Girum,
Boyd, Karen,
Byers, Kristen,
Chawla, Gursimran,
Chen, Daniel,
Cooper, Benjamin,
Cupani, Jake,
Daetwyler, Clay,
DeWitt, Nicholas,
Garcia, Suzanne,
Hafer, Christine,
Khan, Misbah,
Lewis, Elo,
Martindale, Marianna,
Mauriello, Matthew,
McNamara, Helen,
McWillie, Sean,
Millay, Daniel,
Munzar, Talal,
Mussenden, Sean,
Orji, Nicholar,
Phung, Lisa,
Rogers, Kristine,
Rytting, Christopher,
Shadan, Tuba,
Sivam, Subhatra,
Stavish, Koralleen,
Subramanian, Aditya,
Tipirneni, Sai,
Topiwala, Rrahul,
Wagner-Riston, Melissa,
Wiriyathammabhum, Peratham,
and
Workneh, Frazer
In iConference 2019 Proceedings
2019
-
TREC Incident Streams: Finding Actionable Information on Social Media
McCreadie, Richard,
Buntain, Cody,
and
Soboroff, Ian
In Proceedings of the 16th International Conference on Information Systems for Crisis Response and Management
2019
[Abs]
The Text Retrieval Conference (TREC) Incident Streams track is a new initiative that aims to mature social media-based emergency response technology. This initiative advances the state of the art in this area through an evaluation challenge, which attracts researchers and developers from across the globe. The 2018 edition of the track provides a standardized evaluation methodology and an ontology of emergency-relevant social media information types, proposes a scale for information criticality, and releases a dataset containing fifteen test events and approximately 20,000 labeled tweets. Analysis of this dataset reveals a significant amount of actionable information on social media during emergencies (> 10%). While this data is valuable for emergency response efforts, analysis of the 39 state-of-the-art systems demonstrates a performance gap in identifying this data. We therefore find the current state of the art is insufficient for emergency responders’ requirements, particularly for rare actionable information for which little prior training data is available.
-
Towards a General Understanding of Coordinated Action Across Online Social Platforms: A Case Study on Russian Manipulation
Buntain, Cody,
Linder, Fridolin,
Bonneau, Richard,
Nagler, Jonathan,
and
Tucker, Joshua A
Technical Report
2019
-
Automatically Identifying Fake News in Popular Twitter Threads
Buntain, Cody,
and
Golbeck, Jennifer
In 2017 IEEE International Conference on Smart Cloud (SmartCloud)
2017
-
A Large Labeled Corpus for Online Harassment Research
Golbeck, Jennifer,
Gnanasekaran, Rajesh Kumar,
Gunasekaran, Raja Rajan,
Hoffman, Kelly M.,
Hottle, Jenny,
Jienjitlert, Vichita,
Khare, Shivika,
Lau, Ryan,
Martindale, Marianna J.,
Naik, Shalmali,
Nixon, Heather L.,
Ashktorab, Zahra,
Ramachandran, Piyush,
Rogers, Kristine M.,
Rogers, Lisa,
Sarin, Meghna Sardana,
Shahane, Gaurav,
Thanki, Jayanee,
Vengataraman, Priyanka,
Wan, Zijian,
Wu, Derek Michael,
Banjo, Rashad O.,
Berlinger, Alexandra,
Bhagwan, Siddharth,
Buntain, Cody,
Cheakalos, Paul,
Geller, Alicia A.,
and
Gergory, Quint
In Proceedings of the 2017 ACM on Web Science Conference (WebSci ’17)
2017
-
I Want to Believe: Journalists and Crowdsourced Accuracy Assessments in Twitter
Buntain, Cody,
and
Golbeck, Jennifer
Technical Report
2017
[Abs]
Evaluating information accuracy in social media is an increasingly important and well-studied area, but limited research has compared journalist-sourced accuracy assessments to their crowdsourced counterparts. This paper demonstrates the differences between these two populations by comparing the features used to predict accuracy assessments in two Twitter data sets: CREDBANK and PHEME. While our findings are consistent with existing results on feature importance, we develop models that outperform past research. We also show limited overlap exists between the features used by journalists and crowdsourced assessors, and the resulting models poorly predict each other but produce statistically correlated results. This correlation suggests crowdsourced workers are assessing a different aspect of these stories than their journalist counterparts, but these two aspects are linked in a significant way. These differences may be explained by contrasting factual with perceived accuracy as assessed by expert journalists and non-experts respectively. Following this outcome, we also show preliminary results that models trained from crowdsourced workers outperform journalist-trained models in identifying highly shared "fake news" stories.
-
Capturing Micro-Expressions of Grievance to Explain Electoral Violence in Sub-Saharan Africa
McGrath, Erin C,
Dunford, Eric,
Buntain, Cody,
and
Backer, David
In American Political Science Association
2017