DOI: 10.1145/2076732.2076746 · ACSAC Conference Proceedings · Research article

The socialbot network: when bots socialize for fame and money

Published: 05 December 2011

Abstract

Online Social Networks (OSNs) have become an integral part of today's Web. Politicians, celebrities, revolutionists, and others use OSNs as a podium to deliver their message to millions of active web users. Unfortunately, in the wrong hands, OSNs can also be used to run astroturf campaigns that spread misinformation and propaganda. Such campaigns usually start by infiltrating the targeted OSN on a large scale. In this paper, we evaluate how vulnerable OSNs are to large-scale infiltration by socialbots: computer programs that control OSN accounts and mimic real users. We adopted a traditional web-based botnet design and built a Socialbot Network (SbN): a group of adaptive socialbots orchestrated in a command-and-control fashion. We operated such an SbN on Facebook, a 750-million-user OSN, for about 8 weeks. We collected data on users' behavior in response to a large-scale infiltration in which socialbots were used to connect to a large number of Facebook users. Our results show that (1) OSNs such as Facebook can be infiltrated with a success rate of up to 80%, (2) depending on users' privacy settings, a successful infiltration can result in privacy breaches in which even more user data are exposed than through purely public access, and (3) in practice, OSN security defenses such as the Facebook Immune System are not effective enough at detecting or stopping a large-scale infiltration as it occurs.
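The command-and-control design described in the abstract can be illustrated with a minimal simulation. This sketch is not the paper's implementation: the class names, the single "connect" command, and the per-target acceptance probability (set to 0.8 here, echoing the reported success rate) are all illustrative assumptions.

```python
import random


class Socialbot:
    """Hypothetical socialbot: drives one fake OSN account."""

    def __init__(self, bot_id, accept_prob):
        self.bot_id = bot_id
        self.accept_prob = accept_prob  # assumed chance a target accepts the request
        self.sent = 0
        self.accepted = 0

    def execute(self, command, targets):
        # Only a "connect" command (sending friend requests) is modeled here.
        if command == "connect":
            for _ in targets:
                self.sent += 1
                if random.random() < self.accept_prob:
                    self.accepted += 1


class Botmaster:
    """Hypothetical C&C node: pushes commands to bots and aggregates results."""

    def __init__(self, bots):
        self.bots = bots

    def dispatch(self, command, targets):
        for bot in self.bots:
            bot.execute(command, targets)

    def infiltration_rate(self):
        sent = sum(b.sent for b in self.bots)
        accepted = sum(b.accepted for b in self.bots)
        return accepted / sent if sent else 0.0


random.seed(7)  # deterministic run for illustration
bots = [Socialbot(i, accept_prob=0.8) for i in range(10)]
master = Botmaster(bots)
master.dispatch("connect", range(25))  # each bot targets 25 users
print(f"infiltration rate: {master.infiltration_rate():.2f}")
```

The point of the sketch is the separation of roles: individual bots only carry out commands, while the botmaster decides what to send and measures the aggregate infiltration rate, mirroring a traditional botnet's command-and-control loop.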



Published In

ACSAC '11: Proceedings of the 27th Annual Computer Security Applications Conference
December 2011
432 pages
ISBN:9781450306720
DOI:10.1145/2076732

Sponsors

  • ACSA: Applied Computer Security Associates

Publisher

Association for Computing Machinery

New York, NY, United States



Conference

ACSAC '11
Sponsor:
  • ACSA
ACSAC '11: Annual Computer Security Applications Conference
December 5 - 9, 2011
Orlando, Florida, USA

Acceptance Rates

Overall acceptance rate: 104 of 497 submissions (21%)


Article Metrics

  • Downloads (last 12 months): 137
  • Downloads (last 6 weeks): 12
Reflects downloads up to 18 Nov 2024


Cited By

  • Detection of Fake Profiles on Social Networking. International Journal of Advanced Research in Science, Communication and Technology, 546–549 (29-Apr-2024). DOI: 10.48175/IJARSCT-17882
  • Use & Abuse of Personal Information, Part II: Robust Generation of Fake IDs for Privacy Experimentation. Journal of Cybersecurity and Privacy, 4(3):546–571 (11-Aug-2024). DOI: 10.3390/jcp4030026
  • Advanced Algorithmic Approaches for Scam Profile Detection on Instagram. Electronics, 13(8):1571 (19-Apr-2024). DOI: 10.3390/electronics13081571
  • The "Russian bots" between social and technological: Examining the ordinary folk theories of Twitter users. New Media & Society (27-May-2024). DOI: 10.1177/14614448241255692
  • A First Look into Fake Profiles on Social Media through the Lens of Victim's Experiences. Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 444–450 (11-Nov-2024). DOI: 10.1145/3678884.3681889
  • CGNN: A Compatibility-Aware Graph Neural Network for Social Media Bot Detection. IEEE Transactions on Computational Social Systems, 11(5):6528–6543 (Oct-2024). DOI: 10.1109/TCSS.2024.3396413
  • Exploring the Design of Technology-Mediated Nudges for Online Misinformation. International Journal of Human–Computer Interaction, 1–28 (17-Jan-2024). DOI: 10.1080/10447318.2023.2301265
  • Threats on online social network platforms: classification, detection, and prevention techniques. Multimedia Tools and Applications (2-Jul-2024). DOI: 10.1007/s11042-024-19724-5
  • Fake Profile Detection on Social Networks: A Survey. Proceedings of International Conference on Recent Innovations in Computing, 403–416 (23-Oct-2024). DOI: 10.1007/978-981-97-3442-9_28
  • The Threat of Disinformation from the Political, Ethical and Social Perspective of Artificial Intelligence. İletişim ve Diplomasi (2-Nov-2023). DOI: 10.54722/iletisimvediplomasi.1358267
