{"source":{"name":"Evidence Hub on Social Media Ban for Kids - A project by the Lisbon Council","url":"https:\/\/socialmediaban.lisboncouncil.net","license":"Creative Commons CC-BY 4.0 International"},"data":[{"data":[0,4,12,38,56,84,92,88],"name":"16 years old - generic account"},{"data":[0,0,2,16,46,40,48,64],"name":"18 years old - generic account"},{"data":[2,8,36,42,42,56,52,58],"name":"16 years old - manosphere-curious account"},{"data":[0,0,0,8,44,44,52,76],"name":"18 years old - manosphere-curious account"}],"_data":[["Exposure Progression","16 years old - generic account","18 years old - generic account","16 years old - manosphere-curious account","18 years old - manosphere-curious account"],["Stage 1",0,0,2,0],["Stage 2",4,0,8,0],["Stage 3",12,2,36,0],["Stage 4",38,16,42,8],["Stage 5",56,46,42,44],["Stage 6",84,40,56,44],["Stage 7",92,48,52,52],["Stage 8",88,64,58,76]],"labels":{"name":"Exposure Progression","values":["Stage 1","Stage 2","Stage 3","Stage 4","Stage 5","Stage 6","Stage 7","Stage 8"]},"metadata":{"link":"https:\/\/doras.dcu.ie\/31681\/1\/DCU_Recommending_Toxicity_Summary_Report_FV%2812%29.pdf","type":"","unit":"Percent (%)","year":"2024","title":"The Role of Algorithmic Recommender Functions on TikTok in Promoting Male Supremacist Influencers","topic":"Harms and Wellbeing","method":"data collection","source":"Summary Report: Recommending Toxicity: The role of algorithmic recommender functions on TikTok in promoting male supremacist influencers, Dublin City University","sub_topic":"","chart_number":"53.0","geographical":"Ireland"},"description":"Note: Stages represent the exposure progression, or cumulative viewing intervals, with Stage 5 occurring after approximately 400 videos or 2\u20133 hours of platform engagement.\r\n\r\nThis table tracks the \"rabbit hole\" effect of social media algorithms by measuring the percentage of toxic\/manosphere content recommended to experimental accounts over eight progressive stages of 
viewing.\r\nConducted by Dublin City University (2024) in Ireland, the study utilised ten experimental accounts on blank smartphones to simulate the digital experiences of 16- and 18-year-old males on TikTok. The researchers tested two distinct user profiles: (1) generic (Gen) accounts seeking \"gender-normative\" interests such as sports, gym content, and gaming; and (2) manosphere-curious (MC) accounts actively seeking \"manfluencer\" content (e.g., Andrew Tate, anti-feminist topics). Researchers manually coded over 29 hours of video to identify the frequency of toxic or male-supremacist recommendations.\r\nThe data demonstrates a rapid escalation in toxic recommendations across all profiles. While most accounts began with 0% toxic recommendations at Stage 1, the algorithmic \"recommender functions\" quickly pivoted: by Stage 5, the 16-year-old generic (Gen) account showed the highest saturation, with 56% of all recommended content classified as toxic, later peaking at 92% by Stage 7; accounts that initially showed interest in manosphere content (MC) were targeted more aggressively at earlier stages (e.g., the 16-year-old MC account hit 36% toxicity by Stage 3). Regardless of whether the initial intent was \"generic\" or \"curious,\" all accounts were fed toxic content within the first 23\u201326 minutes of use, eventually resulting in a majority-toxic feed by the end of the experiment."}