The Role of TikTok's Algorithmic Recommender Functions in Promoting Male Supremacist Influencers
Note: Rounds represent the exposure progression, or cumulative viewing intervals, with Round 5 occurring after approximately 400 videos, or 2–3 hours of platform engagement. This table tracks the "rabbit hole" effect of social media algorithms by measuring the percentage of toxic/manosphere content recommended to experimental accounts across five progressive rounds of viewing.

Conducted by Dublin City University (2024) in Ireland, the study utilised ten experimental accounts on blank smartphones to simulate the digital experiences of 16- and 18-year-old males on TikTok. The researchers tested two distinct user profiles: (1) generic (Gen) accounts seeking "gender-normative" interests such as sports, gym content, and gaming; and (2) manosphere-curious (MC) accounts actively seeking "manfluencer" content (e.g., Andrew Tate and anti-feminist topics). The researchers manually coded over 29 hours of video to identify the frequency of toxic or male-supremacist recommendations.

The data demonstrates a rapid escalation in toxic recommendations across all profiles. Although most accounts began with 0% toxic recommendations in Round 1, the algorithmic "recommender functions" quickly pivoted: by Round 5, the 16-year-old generic (Gen) account showed the highest saturation, with 56% of all recommended content classified as toxic, while accounts that initially expressed interest in manosphere content (MC) were targeted more aggressively in earlier rounds (e.g., the 16 MC accounts reached 36% toxicity by Round 3). Regardless of whether the initial intent was "generic" or "curious," every account was fed toxic content within the first 23–26 minutes of use and ended the experiment with a majority-toxic feed.
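The per-round percentages described above can be illustrated with a minimal sketch. This is a hypothetical example, not the study's actual analysis pipeline: it assumes the manual coding produces, for each account, a list of (round, is_toxic) labels, and computes the share of toxic recommendations per round.

```python
# Hypothetical sketch: computing per-round toxicity percentages from
# manually coded recommendations. The data below is toy data, not the
# study's figures; `toxicity_by_round` is an illustrative helper name.
from collections import defaultdict

def toxicity_by_round(coded_videos):
    """coded_videos: list of (round_number, is_toxic) pairs from manual coding.
    Returns {round_number: percentage of videos coded as toxic}."""
    totals = defaultdict(int)   # videos seen per round
    toxic = defaultdict(int)    # videos coded toxic per round
    for rnd, is_toxic in coded_videos:
        totals[rnd] += 1
        toxic[rnd] += int(is_toxic)
    return {rnd: 100 * toxic[rnd] / totals[rnd] for rnd in sorted(totals)}

# Toy account whose feed grows more toxic across three rounds.
sample = ([(1, False)] * 10
          + [(2, True)] * 2 + [(2, False)] * 8
          + [(3, True)] * 5 + [(3, False)] * 5)
print(toxicity_by_round(sample))  # {1: 0.0, 2: 20.0, 3: 50.0}
```

A rising sequence of these percentages across rounds is what the table's "rabbit hole" escalation would look like in this toy representation.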