4.1 Ranking Potential Threats
As part of the study, experts ranked which, if any, of seven categories of hate & harassment-related threats internet users should prioritize protecting themselves from, and why. In this section, we describe the criteria experts used to rank the categories, then review results for each category.
Ranking criteria. As shown in Table 1, experts were split on the foremost category of threat they thought internet users should prioritize. This was, in part, due to differences in the criteria that 22 of our 24 experts used while ranking (two did not mention any criteria). Their ranking criteria included the severity of (potential) harms that might result from a threat, the prevalence of the threat (i.e., the likelihood of an attack occurring), and the agency of users to mitigate the threat.
For 10 experts, severity of (potential) harms was their primary criterion when ranking threats, and particularly threats to “physical safety, their bodily integrity, [as well as] to their mental health” (P22), echoing Scheuerman et al.’s Framework of Severity [72]. One expert favored this strategy because it allocated attention to those most in need of help:
“People who are targeted by the most severe forms of online hate and harassment are in marginalized communities and they need additional protections.” – P21
Nine experts relied on prevalence as their primary criterion for ranking threats. Experts expressed that this meant any guidance would better resonate with internet users, as it reflected attacks they were more likely to encounter. As P18 explained: “What is the most prevalent problem right now... that people need to be aware of?” For other participants, prevalence reflected a disciplinary norm that stemmed from limited time and resources:
“In computer security, you want to educate people about attacks or threats they are likely to encounter. There are some attacks that are only relevant to government agencies, or high-profile organizations and so on.” – P1
Three experts used agency as their primary criterion for ranking. These experts remarked on the importance of building on user self-efficacy: “What is the lightest lift for a user?” (P23). These experts focused on which threats had the most meaningful existing protections, or where “a well-timed warning or educational intervention” (P20) might be effective.
The differences across our experts in the primary criterion—and even secondary and tertiary criteria—they used for ranking emphasize a challenge for protecting internet users from hate and harassment: there is no consensus yet for which problems to prioritize, or even how to prioritize them. While rankings may meaningfully differ for at-risk groups, many members of those groups may be unaware they are at-risk, or an event may suddenly put them at-risk [83]. General awareness of certain hate and harassment threats can thus provide critical, early protection before they are targeted. In this light, we explore which threats stood out more than others for experts, and where opinions diverged.
Toxic content. On average, toxic content—which includes bullying, hate speech, and sexual harassment—ranked as the highest priority threat across experts, often because of its prevalence. P15 noted that it was “the number one type of harassment that I see.” Others added that toxic content could incur emotional harm and have “significant long-term repercussions” (P16), and that some users “might not even know that they are [experiencing it]” (P6), contributing to a greater need for users to prioritize learning what constitutes toxic content and taking proactive measures to prevent it.
Some experts ranked toxic content with lower priority, as—though it can cause harm—it “usually doesn’t get to physical, severe harm” (P13) and because prevention is better handled at the community-level: “toxic content normalizes certain types of behavior, so it’s a greater danger as a community norm than towards an individual” (P19). Others ranked it lower priority, saying that users had more agency:
“You can remove yourself from those situations either by logging out or by initiating or installing all of the protection features that a lot of online platforms have. It really sucks... [sending toxic content] is not okay—no one should do that—but you can remove yourself from those situations.” – P3
Content leakage. Content leakage—which includes doxxing and non-consensual sharing of intimate images—was ranked the second highest threat on average. Experts pointed to how common this threat is—“people send sexts all the time” (P10)—though noted that users often underestimate the risks, because people “really cannot imagine what it’s like to be doxxed” (P21). The severity of content leakage, experts judged, arose because leakage is irreversible and attacks could easily spill over into users’ “real lives, their experience of life outside” (P3), such as by facilitating stalking. Conversely, other experts rated content leakage a lower priority because it is less prevalent—it “requires more work from the trolls” (P4)—or because users have less agency to prevent it:
“I can’t think of any particular platform that really does an effective job of full control of [content leakage]... A lot of people have to escalate. So it’s not just primarily relying on tools in the online space, but looking at resources that could help them seek justice offline.” – P24
Surveillance. Just five experts ranked surveillance—which includes stalking and monitoring accounts or devices—as the foremost threat in the context of hate and harassment, though it featured in 12 experts’ top three. In general, experts felt surveillance was unlikely to be prevalent and was “more context dependent” (P19). Though experts noted that it had the potential to cause severe harm (e.g., it can be a “high risk to physical safety”), P22 thought that people had more agency to prevent it (i.e., people “generally have more control and can find technical solutions”). Experts emphasized three contexts where this prioritization changed. The first was individuals experiencing intimate partner abuse, as surveillance “often begins before people realize they’re in an abusive relationship” (P12), preceding the phases of abuse identified in Matthews et al. [50]. The second was people in civil society targeted by government-backed harassment and trolls: “one of the biggest digital issues [for journalists], [it] leads to physical threats and imprisonment, or assassination” (P4). The third was prominent individuals [83], as attacks were “more relevant for popular accounts for people of a certain reputation” (P1). Experts broadly commented that incidents with surveillance could be exceptionally severe for targets:
“It’s one of those things where if it happens to you, it’s going to have a significant impact emotionally and for your physical safety. In terms of long term consequences, it impacts how you interact in online spaces.” – P24
Lockout and control. Experts disagreed on how prevalent lockout and control—manipulating devices, being maliciously locked out of one’s account—would be for an internet user specifically in the context of online hate and harassment. However, many felt this was a more general security threat due to the prevalence of phishing and data breaches. For example, P8 noted that the “prevalence is high if you’re vulnerable to a credential stuffing attack” while P17 ranked this threat the lowest because it is “not a primary way perpetrators attack people in the context of hate and harassment.”
Regardless of the prevalence of this threat, experts remarked that being locked out of accounts and devices could facilitate other threats. Experts emphasized that targets “have to lock down [their] accounts and personal information first” (P14) in order to prevent downstream harms, such as content leakage or surveillance. In this way, experts prioritized account security as a locus of agency:
“[Lockout and control] strikes me as the most invasive. So anything where somebody feels like they don’t have control over their own content to me, is the number one [priority].” – P3
Impersonation. Only one expert ranked impersonation—fake profiles or communication posing as the target—as their foremost threat, commenting that it poses a “very immediate threat to personal information, devices, and can have a very large effect on someone’s life” (P14). In terms of severity, experts agreed about the potential for impersonation to affect an individual’s emotional well-being and reputation, as well as “collective harm on people in your network” (P24). Similar to surveillance, experts noted the low prevalence for most internet users, though it could be higher priority for prominent figures.
Impersonation was seen as harder to prepare for, or even impossible to prevent. One expert pointed out the precarity of people who have begun to gain public followings, but may not have all the resources of more prominent public figures:
“The place I see impersonation happen a lot is with low-level influencers... they’re less likely to know it; they won’t have a [support] team.” – P21
Some experts spoke to the challenges of recovering from impersonation: that marginalized people are harmed the most because there are “not a lot of tools or legal protections” (P19) for them, and that it was a “pain in the butt to get platforms to respond to impersonation reports and get them taken down” (P23). One expert with personal experience assisting targets of harassment seemed more optimistic about recovery, saying that in their experience, it “usually turns out more alright than other situations” (P10).
False reporting. No expert in our study ranked false reporting—such as swatting or false abusive account reporting—as the top threat for internet users, though seven put it in their top three. Experts viewed false reporting as a very rare occurrence, though they noted that it was more common on gaming platforms and among “big armies of trolls” used by “authoritarian regimes” (P4).
Experts noted the severity of harms stemming from false reporting could be extremely divergent or unpredictable. P6 shared that false reporting was a “standard bullying tactic” employed by kids—one that might not lead to consequences for those employing it or for those targeted by it (though it would slow the triaging of legitimate complaints). On the other hand, P20 spoke about how swatting could cause extremely severe harm, including being fatal. The viability of false reporting as a tactic, and thus the agency of users to act, largely fell to the review process of the emergency service or platform contacted, which could be complicated by limited resources:
“The claim is usually that the content they have, the video they’ve shared, or the post is of a ‘sexual nature.’ And it doesn’t contain any of it. But because it’s in a foreign language that isn’t supported by the platform, it’s taken down immediately.” – P15
Overloading. Just three experts ranked overloading—including brigading, notification bombing, or denial of service attacks—in their top three threats; similar to false reporting, none ranked it as the top threat. Most experts commented that while overloading could be frustrating, it has low prevalence for most internet users (notable exceptions being those with high-profile accounts or websites). For notification-based or network-based attacks, experts felt such attacks were low severity: “it’s not necessarily going to affect your psyche or your personal well-being” (P4) and “annoying but not as important” (P5). Experts expressed that overwhelming volumes of potentially toxic comments could be far more severe:
“For an individual to get piled on... that was one of the primary tools that Gamergate used to harm their targets. It was very harmful, the scale of the harm, in addition to the toxicity.” – P19
4.2 Prioritizing Current Advice
Experts ranked each of the 45 pieces of advice we collected as “high,” “medium,” or “low” priority, or advice they “don’t recommend.” In reasoning aloud, experts weighed factors such as efficacy, ease of implementation (and the existence of appropriate tooling), and whether advice curtailed a user’s participation online. In this section, we review advice for staying safer from each threat, ordered by the average ranking of each threat from the prior section. We highlight only the advice that experts ranked highly, or where experts felt challenges persist or alternative solutions are needed. The complete set of advice is shown in Figures 1–7.
Preventing toxic content: Agreement about muting and blocking, but challenges around curtailing personal expression. To combat toxic content, experts favored platform-assisted moderation, with 83% highly prioritizing mute people who post abusive messages and 71% block people who post abusive messages (Figure 1). Experts prioritized muting over blocking because blocking is more visible to attackers, who might escalate attacks when they find out they have been blocked. Additionally, blocking impedes potential targets from monitoring their attackers:
“[Targets] don’t want to read misogynist or racist comments, but they need to know that certain conversations exist, or whether they face threats. So they want to mute.” – P4
Muting allows a target to quietly filter offensive users they encounter online (e.g., community members), whereas “blocking sends a signal you no longer want to interact” (P24). As such, experts noted that being aware of and being quick to use these features could curb future harm, in addition to their conventional use when there is an active attacker.
When asked if any advice to help prevent toxic content was missing, 13 experts said that reporting hate and harassment should be included, with eight grouping it with blocking or muting as a standard best practice. Experts recommended reporting to the platform as well as to civil society organizations that can organize multiple reports, noting that reporting was a primary mechanism for platforms to find new issues and make improvements. At the same time, experts lamented that “reporting doesn’t have an immediate impact” (P16) and could be detrimental emotionally if the platform ultimately determined the reported attack did not cross a policy line:
“It’s more harmful for the person [who submitted the report] to get a message that this wasn’t even [determined to be] harmful.” – P24
While experts broadly agreed on the high prioritization of advice for mitigating toxic content, advice that required a user to limit their participation online was far more contentious, even when it was considered to be effective at preventing an attack. Of experts, 63% highly prioritized be selective about which online communities you participate in and just 42% be selective about when and to whom you reveal marginalized aspects of your identity, while 29% of experts did not recommend the latter at all. Among experts who rated either highly, a common refrain was being aware of unsafe communities and what you share as part of dealing with the realities of hate and harassment today:
“As a user, you should be able to decide... where you feel comfortable the most. If you don’t feel comfortable on say, [platform], because a) you’re not sharing that much and b) you’re getting a lot of information pollution, or you don’t find it useful at all, it makes sense to be selective.” – P15
“Heartbreaking. The whole idea of not being able to bring your whole self to an experience... Sadly I would always give that advice for today. I hope it’s not advice I need to give in the future.” – P20
Experts who were opposed expressed concerns that such advice required more nuance than was possible for a general guide. Others felt such recommendations gave up the ability to participate freely:
“I understand the practical reasons behind it, but philosophically it’s not right to expect people to do that... I’ve been doing stuff with [platform type], and there’s this general philosophy we’re trying to disrupt: ‘If you don’t like it you can go somewhere else.’ I don’t like that sensibility being recommended from the top down.” – P10
The most contentious advice for combating toxic content was leave a platform entirely. Only 13% of experts ranked it highly, while 67% ranked it as low priority or did not recommend it. Experts in support highlighted that it could be appropriate as a last resort:
“It’s always a tradeoff between having fun and not receiving too much harm... It’s not the first thing you should do to deal with harm, you should try other things first. But if the harm is too pervasive and this is the only way to prevent it, they should.” – P13
However, most experts opposed this advice due to losing voices of people targeted by hate and harassment, or the quality of life for following it:
“Just imagining the life of a perfectly secure user is really depressing. Is that really a life at all?” – P10
Experts recommended an alternative: taking a break or turning off notifications in order to disconnect. Broadly, advice for combating toxic content was sparser than for the other threats we discuss. However, it was also one of the few threats with protections built into most platforms today.
Preventing content leakage: Agreement about the need to restrict information that’s publicly available, but challenges with the ease of implementation and curtailing personal expression. To combat content leakage, experts recommended that individuals focus on restricting what information they share (Figure 2). 88% of experts highly prioritized never share your home address publicly and 79% highly prioritized limit sharing of personal information online generally, being conscious of incidental information leaks, reasoning that “the more information that’s out there, the more potential for leakage” (P11). For other highly recommended advice, such as set restrictive privacy settings on social media (like using a Privacy Check-Up tool), experts believed user awareness to be low: P3 commented that “most people don’t know they can change their settings.”
Though restricting information sharing was perceived as effective, experts discussed challenges with a cluster of advice that would be effortful to implement. For example, 58% of experts highly prioritized not sharing personal phone numbers, but P6 noted that people might do so accidentally—“maybe you didn’t intend to share it publicly but it’s attached to a review or something.” Similarly, only 25% of experts reported that not keeping digital copies of IDs was a high priority, because digital copies of IDs are becoming very common and sometimes obligatory (e.g., vaccination records to help manage the COVID-19 pandemic). Other pieces of advice that experts thought could be helpful but would require excessive effort for a general internet user included using a second email address for accounts, using third party services to remove information online (e.g., DeleteMe), or ensuring that public records like domain name registration or housing records are tied to a pseudonym.
Experts were deeply divided on whether never send intimate images should be recommended to prevent content leakage: 38% prioritized it highly, 38% prioritized it as medium or low, and 25% would not recommend it. Some experts noted that never sharing would be highly effective—“that’s one of the easy ones” (P12)—while other experts considered the advice to be victim blaming:
“If people want to share intimate images, technology should support their ability to do so.” – P8
To sidestep issues of personal digital expression, experts were in greater agreement that people should encrypt and/or keep intimate imagery offline, as 63% highly prioritized doing so. Experts emphasized the offline part most—“don’t use cloud storage” (P7), “prefer offline to encrypted” (P3)—but mentioned “there are a lot of tools now to keep these under lock and key” (P24). Experts also recommended other tips for sending intimate images more safely, such as only sending them to highly trusted people, or ensuring the images do not include identifying details such as one’s face or tattoos.
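To make concrete what keeping such imagery “under lock and key” can look like, the sketch below illustrates local, offline file encryption using the third-party Python cryptography package. This is an illustrative example of the general technique rather than a tool any expert named; the file names are placeholders, and the generated key must itself be stored safely (e.g., in a password manager), since anyone holding it can decrypt the file.

# Minimal sketch: encrypt a local file so it can be stored offline.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

def encrypt_file(path: str, encrypted_path: str) -> bytes:
    """Encrypt the file at `path`, write the ciphertext, and return the key."""
    key = Fernet.generate_key()              # 32-byte urlsafe-base64 key
    with open(path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(encrypted_path, "wb") as f:
        f.write(ciphertext)
    return key

def decrypt_file(encrypted_path: str, key: bytes) -> bytes:
    """Return the original plaintext bytes, given the stored key."""
    with open(encrypted_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

if __name__ == "__main__":
    key = encrypt_file("photo.jpg", "photo.jpg.enc")  # placeholder file names
    print("Key (store this separately, e.g., in a password manager):", key.decode())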
Another challenge that experts noted for preventing content leakage was that certain pieces of advice would be relevant only for a subset of users. Only 17% of experts advised general users to set up alerts to monitor where your name appears in search results (like Google Alerts):
“Only if you have some higher risk factor. Are you a streamer, or do you work in an industry where you deal with the public in a way that you are more likely to encounter harassment? Working at [a high profile company], this was a huge concern of mine.” – P20
Other experts added that alerts were also only useful for people with unique names, and cautioned that alerts would lead to frequent false alarms for people with common names.
Similarly, experts judged that reviewing old content was only worth the effort for certain groups:
“People will go after you if you are a journalist and write about sensitive topics like politics or extremism. So they will search for what you wrote as a student from 10 years ago, which you may have forgotten about.” – P4
67% of experts considered find your personal information or intimate images in search engines or social media sites to remove or request your data be removed high priority to do once in a while. P6, however, cautioned that overemphasizing this advice “can make people really paranoid,” adding that they only give this advice “if there is a reason, like someone saw a picture of you online or you have an abusive ex.”
Preventing surveillance: Agreement about the usage of privacy tools, but challenges around effectiveness and ease of implementation. High priority advice for surveillance focused primarily on using strong privacy tools, or limiting certain application features that might leak one’s location or identity (Figure 3). However, experts’ evaluation of advice surfaced challenges about whether advice would be effective in mitigating a surveillance threat such as stalking.
73% of experts highly prioritized use secure messaging apps for communication, but multiple experts viewed secure messaging more through a lens of general security threats, rather than hate and harassment. For example, P16, who ranked the advice as high priority, explained: “I do recommend [secure messaging] to people, maybe not in this [hate and harassment] context, but I generally do.” Other highly ranked advice for mitigating surveillance via compromised devices was also more protective against general threats, and less aligned to surveillance for hate and harassment. Advice such as keep your web camera covered when you aren’t using it and use antivirus software to detect spyware on your devices were highly prioritized by 68% and 64% of experts respectively, as they were seen as supporting user agency—they are simple steps that could provide some protection: “no harm in doing it, but I wouldn’t say you need to go home tonight and cover every web camera” (P14). Yet, P8 clarified that cameras were only a superficial concern for surveillance and ranked this as low priority:
“[You’re] not dealing with the root cause. If you’re worried about your web camera, [you] should be worried about bad software in general on your device.” – P8
Thus, despite experts finding some advice in this section high priority, there remains room for new advice and protections that would more effectively protect against surveillance.
Experts were generally not in favor of other, stricter physical access measures, such as use a virtual or PO mail box rather than sharing your home address, do a physical search or digital scan for tracking devices like Airtags or Tiles, or use a second, separate SIM card to prevent tracking of your location or phone calls, due to the substantial effort of implementing the advice. Experts felt this advice “really depends on your threat model” (P9) and expressed that they were “not sure creating an atmosphere of anxiety is needed” (P20) for general internet users. However, experts noted that in some contexts, these practices became critical:
“If you are running from an abusive spouse, then absolutely... But I wouldn’t recommend everyone in the world do this.” – P11
Experts also warned of the challenges of enacting this advice successfully. Searching for physical tracking devices is “really difficult to do... people don’t know how to do a digital scan” (P12) and “may not be possible for people who aren’t well versed” (P15), echoing Gallardo et al.’s findings that detecting surveillance issues is difficult [31]. Likewise, “it’s a lot of work to get a P.O. box for all deliveries. It’s inconvenient for real life” (P12). As a whole, experts felt this advice was best suited to people who knew they were in a surveillance situation, but not something that general internet users needed to be concerned about.
Preventing lockout and control: Agreement about establishing account hygiene, but challenges with the ease of implementation. To protect against account-based threats, experts overwhelmingly favored protections they considered to be basic account hygiene (Figure 4). 96% of experts highly prioritized enable any form of 2FA for your most important accounts, as did 83% use a strong PIN or passcode for your devices, and 74% use a strong, unique password for all of your accounts. As P16 explained regarding 2FA:
“If you are actually worried about people hacking [your account], a password isn’t enough.” – P16
Experts also discussed how 2FA alleviates the need for users to change passwords regularly, noting the reality that many users do not use strong or unique passwords. Experts also noted that users are becoming more familiar with 2FA and finding it “less horrible” [21] than they expected. Only one expert did not recommend 2FA because “people get locked out of basic services often” (P12).
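To illustrate why a second factor raises the bar beyond a password, the sketch below shows how app-based (TOTP) two-factor authentication works, using the third-party pyotp package. The flow is an illustrative assumption of how most authenticator apps operate, not a description of any specific platform’s implementation.

# Minimal sketch of app-based (TOTP) two-factor authentication.
# Assumes the third-party "pyotp" package (pip install pyotp).
import pyotp

# The shared secret is provisioned once, usually by scanning a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives a fresh 6-digit code roughly every 30 seconds,
# so a stolen password alone is not enough to log in.
code = totp.now()
print("Current one-time code:", code)

# Server side: verify the submitted code against the same shared secret.
assert totp.verify(code)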
Favorability of 2FA stopped short of hardware keys (as opposed to SMS or on-device prompts): just 13% of experts stated hardware keys were a high priority, mainly because they were seen as unnecessarily burdensome for general users. P16 felt this level of security was only needed “if you have the nuclear codes,” while others stated this was more important if you had business secrets or professional accounts that might be targeted.
The effort necessary to protect against attackers exploiting weak security questions, or to maintain multiple accounts to avoid a single point of failure, was also viewed as too onerous. Of experts, 62% rated if a website uses security questions... use a password-like response and 84% rated create a pseudonym or use a different email for each of your online accounts as low priority or not recommended. For hardening security-question responses, experts were concerned primarily with users forgetting them. For managing multiple accounts, experts felt the credentials would be too much to remember:
“How are you going to keep track?...we’ve all got at least 10 or 20 different accounts.” – P11
When asked about any missing advice, experts added four pieces for helping prevent lockout and control: keeping account recovery vectors up-to-date (mentioned by 2 experts), checking whether passwords have been exposed by a breach (2), never sharing passwords (1), and keeping an eye out for notifications of suspicious account logins (1).
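For the breach-checking advice above, one widely used mechanism is the Have I Been Pwned “Pwned Passwords” range API, which relies on k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave the device. The sketch below is an illustrative example of that check (the experts did not name a specific service); it assumes the third-party requests package.

# Minimal sketch: check whether a password appears in known data breaches
# via the Have I Been Pwned range API. Only a 5-character hash prefix is sent.
# Assumes the third-party "requests" package (pip install requests).
import hashlib
import requests

def breach_count(password: str) -> int:
    """Return how many times `password` appears in the breach corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():       # each line is "HASH_SUFFIX:COUNT"
        hash_suffix, _, count = line.partition(":")
        if hash_suffix == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(breach_count("password123"))        # widely breached; expect a large count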
Preventing impersonation: Lack of effective advice. Across experts, there was no existing advice—nor any advice they could provide—that a consensus felt was high priority to help prevent impersonation (Figure 5). Advice such as ask friends, family, and colleagues to help keep an eye out for impersonation was ranked as both high and low priority by 35% of experts. As a proactive practice, most experts viewed this as too “paranoid,” particularly in light of the low prevalence of impersonation in their experiences. Similarly, experts raised concerns about feasibility. As P4 put it:
“Do you really think your friends and family and colleagues will spend the time to look out for impersonation for you? They don’t care. They have so many things to do.” – P4
Experts felt this advice was more pertinent when responding to an active or previous attack (i.e., if someone has been or is being impersonated):
“If you were being targeted, you should do this. But not if you didn’t have reason to believe you were being targeted.” – P14
Experts also deemed other forms of bolstering one’s digital identity as infeasible or ineffective: 48% ranked request for your account to be verified as low priority or not recommended, while the same was true for 74% of experts when ranking create accounts with your name on all major platforms. Verification (e.g., a visual indicator of trust available on many social media platforms) was perceived as restricted by platforms to celebrity-like individuals with a sufficiently large audience, and thus out of reach for most internet users. Likewise, managing multiple accounts that a user was not planning to actively use was viewed as burdensome and potentially even harmful due to compounding account security risks (e.g., the reality that many users would likely use weak passwords):
“I don’t recommend that at all. That’s basically saying you need to sign up for everything... If you don’t have good password hygiene and use the same password on all of them, you can be compromised faster.” – P9
The lack of advice for impersonation stems, in part, from the challenge that attacks frequently occur without a target’s knowledge, and often on platforms where the target is not a participant (e.g., fake dating profiles, fake social media accounts).
Preventing false reporting: Lack of effective advice. When gathering existing advice, the only advice we found to combat false reporting was to reach out to law enforcement in advance to warn about you being a potential target of swatting (Figure 6). A majority of experts—69%—ranked this as either low priority or not recommended, most commonly because of the low prevalence of swatting against general internet users:
“If you’re likely to get swatted, then it’s a high priority. If you’re just a regular person and you did this, the police would think you’re crazy... In the general case, you shouldn’t even think about [being swatted].” – P1
Other concerns focused on the perceived indifference of law enforcement, a lack of law enforcement training on how to handle such warnings, or a general distrust of law enforcement (particularly in authoritarian regions):
“This one is complicated. A lot of times law enforcement isn’t well set up to do anything with this information. Maybe a good idea, but it’s contingent on where you are in the world.” – P20
While swatting is the most severe form of false reporting in terms of physical harm, there remains a lack of helpful advice for attacks that attempt to silence a target by having their account terminated. Such attacks depend entirely on the procedures and practices of third-party platforms, which targets can only partially navigate by choosing where they participate.
Preventing overloading: Lack of effective advice. While overloading encompasses multiple threats—such as notification bombing, brigading, or dogpiling—existing online advice we found was limited solely to network security (Figure 7). For use a VPN while online to hide your IP address, there was a large spread of prioritization among experts. For P15, this was a “general thing that everyone should be doing,” whereas for P8, this advice was “pretty in the weeds and not relevant to most, but if you’re targeted, could be reasonable.” Other concerns included barriers to access, usability concerns around proper configuration, and misconceptions about what protections VPNs provide (as recent work has also explored [4, 8, 63]).
Similarly, get DDoS protection for personal websites was prioritized as either medium or low by 70% of experts. P22 felt it was a “no brainer, but not easy,” whereas most experts felt this advice should be restricted to people who had personal websites with a higher likelihood of being targeted.
The lack of guidance for brigading or dogpiling—such as when a person goes viral outside their intended audience—exposes a critical gap in advice today for general internet users. This is particularly problematic as these attacks occur spontaneously, limiting the window for a target to react, or to control the spread of their content once it is shared beyond spheres where they have platform-provided privacy controls.
4.3 Overall Safety Strategies
When we asked experts to describe their personal top three recommendations for general internet users with respect to online hate and harassment, we received responses that varied greatly in specificity. Some experts named discrete actions, such as pieces of advice from Section 4.2, while others spoke broadly about things users should keep in mind. We synthesize the 65 top recommendations of the experts we interviewed below.
Data Minimization (recommended 24 times). Across all experts, the most common top recommendation was to minimize sharing personal information. Experts spoke about the importance of reducing the amount of personal information that is available online, both by being mindful of what a user shares and by deleting existing data that is already online. However, experts were also cautious about recommending that people limit what they share online, noting that it “may not eliminate the potential for things to happen” (P23). Going further, P23 explained that data minimization is not a sustainable solution:
“Putting limits on self-expression may keep you safe in the short term but it’s not good for the health of online spaces overall.” – P23
Echoing this concern, P8 reasoned that the framing of the advice would be crucial:
“Being careful about what you put online is always a reasonable thing to suggest to people. It is a little victim-blaming at the end of the day, right? So it has to be worded appropriately, but certainly good advice.” – P8
In addition to limiting sharing, experts favored auditing security and privacy settings, especially for social media accounts or location tracking. P24 noted that it was important to consider how information is presented online, and to make sure that users know to whom content is visible. Privacy and security settings, similar to limiting information available, were seen by experts as actions where users had agency, which may be why they were the most common pieces of top advice. Further, these recommendations align with our finding that content leakage was, on average, the second most important hate and harassment threat that experts thought general users should be concerned with (see Section 4.1).
Account Security (recommended 18 times). Experts frequently recommended general account security practices, including using 2FA, creating strong and/or unique passwords, and using a password manager. P3 described these tips as putting yourself on the path of least resistance:
“You don’t have to set up the most complicated security system you can think of. Do things that will slightly deter you from having a bad experience online compared to the general public.” – P3
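As a concrete illustration of the “strong and/or unique passwords” practice, the sketch below generates high-entropy credentials with Python’s standard-library secrets module. In practice a password manager handles both the generation and the remembering; the short word list here is an illustrative stand-in for a proper diceware list.

# Minimal sketch: generate strong, unique credentials with the standard library.
import secrets
import string

def random_password(length: int = 20) -> str:
    """A high-entropy password intended to be stored in a password manager."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(words: list[str], n: int = 5) -> str:
    """A diceware-style passphrase that is easier to type (e.g., on a phone)."""
    return "-".join(secrets.choice(words) for _ in range(n))

if __name__ == "__main__":
    print(random_password())
    print(random_passphrase(["orbit", "lantern", "velvet", "cactus", "harbor", "meadow"]))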
Self-Determination and Awareness (recommended 17 times). Experts believed that users should determine for themselves where they choose to engage online:
“Consider the community you’re engaging in and its culture... if you’re going to be on 4chan, you’re going to get hateful content... so it’s better to start off in more protected, smaller, or closed communities with better norms.” – P2
By being more aware of community norms, as well as the potential protections afforded by certain platforms, experts reasoned that users could better avoid harm. Experts also recommended that users pay attention to how long they engage online, or in P2’s words, “decide for yourself how much bullying or harassment you’re willing to endure.” By determining how much abuse an individual is willing to tolerate, experts reasoned that users could decide when to “leave the platform, especially if it’s continuous and targeted – the platform isn’t for you” (P11) or at least temporarily “remove yourself from any situation from which you feel unsafe” (P20).
In a similar vein, experts recommended that users stay aware of how they might be threatened, and what existing tools could help. Searching for yourself online was seen as a good way to “be aware in general of your digital footprint or online presence” (P15). Given that threat modeling is a standard practice in security for enumerating threats, two experts explicitly recommended it, and one expert implicitly: “Think deeply about who has access to your devices and how you keep those secure” (P24).
Safer Through Community (recommended 9 times). The final strategies recommended by experts were communally-focused. Experts recommended reporting hateful or harassing content—“my favorite is still: block aggressively” (P7)—not only for immediate individual relief, but also because doing so would ultimately help foster safer online communities.
“Don’t be a silent bystander... we’re not going to create a better world by being silent about it. Use the tools you’ve got. If you can report, report. If you can stand up for folks, stand up for folks... So it’s not just about protecting yourself, it’s about being a good digital citizen. It’s important because if you’re waiting for others to change, there won’t be change.” – P18
Other experts further supported the need for pro-social behaviors that would improve broader online communities by proactively looking out for others, as well as sharing the responsibility for creating healthier online environments. If users do experience harm, one expert recommended reaching out for help from trusted parties. P13 hoped people who have been targeted would understand that:
“It’s not your fault. As long as we expose ourselves online, there are dangers that we face. Many times, survivors blame themselves for it. They aren’t sure whether it’s harm or if they’re overreacting. Or they think that they did something wrong so they should be blamed for receiving harassment. The internet environment can be toxic sometimes, and platforms may have given you limited tools to address the harassment, so you feel like you have less agency, but it’s not your fault. We should acknowledge that others have responsibility to protect them.” – P13