3.1 Attention Capture Damaging Patterns: Definition and Criteria
As pointed out by Gray et al. [
46], the original definition of damaging patterns, i.e., functionality that exploits people’s psychological vulnerabilities to promote choices that are not in the user’s best interest, leaves many questions unanswered, e.g., “
what is the user being ‘tricked’ into doing, and with what motivation” (p. 3). In our work, we refer to those patterns through which designers explicitly assert control over the user’s experience [
44] to keep users as customers of the service [
10,
81] and generate more income [
85,
128]. Widdicks et al. [
128], for example, reported a statement of a former Facebook employee (taken from [
5]), who said:
“you have a business model designed to engage you and get you to basically suck as much time out of your life as possible and then selling that attention to advertisers.”
We define an attention capture damaging pattern (ACDP) as:
A recurring pattern in digital interfaces that a designer uses to exploit psychological vulnerabilities and capture attention, often leading the user to lose track of their goals, lose their sense of time and control, and later feel regret.
The goal of ACDPs is to maximize continuous usage [
20,
27,
34,
73], daily visits [
20,
73], and interactions [
20,
44,
73,
131] (e.g., clicks, shares, and likes). They make users
“more likely to visit [a digital service] again and click on similar types of rewarding content” [
20], thus creating a
“trap for the user that enables the stakeholder’s goal” [
45].
Table
3 reports five criteria that can be used to further characterize and identify ACDPs. The first two criteria (C1, C2) are related to the
mechanisms exploited by ACDPs:
C1 - Exploit Psychological Vulnerabilities. In the same way that nudges leverage psychological heuristics and biases to guide people toward actions that are in their best interests (e.g., eating healthier) [
23], attention capture damaging patterns exploit the same psychology to induce actions that go against users’ best interests (e.g., spending more time in an app than they would like). By preying on the fact that many user biases are predictable [
7], these designs shove people towards actions that users may not choose if they were making a considered decision [
20]. Nontasil and Payne [
91], for example, concluded that an emotional memory bias might increase the attractiveness of the newsfeed. Lukoff et al. [
73] proposed that recommendations on YouTube might exploit short-term bias, wherein people favor the choice that offers immediate gratification, e.g., watching a new catchy video, at the expense of long-term goals. Similarly, Bongard-Blanchy et al. [
16] stated that ACDPs seduce users with benefits like ease of use and immediate gratification.
Also, ACDPs often leverage a variable schedule of rewards [
13,
20]. According to Skinner’s operant conditioning theory [
113], the most effective way of reinforcing behavior is to follow a variable schedule of rewards: even the task of predicting an outcome is
itself rewarding and triggers the release of dopamine. Burr et al. [
20] reported that exposure to variable reward might occur every time a user engages with an intelligent software agent, as the user typically does not know what items will be presented. Some analyzed papers [
6,
20,
34,
91,
128] even relate attention capture damaging patterns to slot machines, saying, for example, that the newsfeeds of popular social networks exploit the same psychological vulnerabilities targeted in gambling addictions: users do not know in advance which posts will be displayed, and each visualized post may or may not be rewarding, e.g., a photo by a friend
vs. an unwanted advertisement. This uncertainty fosters the temptation to constantly check [
34] and leads to continuous use. In turn, this can even result in reward depletion [
29,
66], where users find themselves scrolling through posts and videos they have already seen while hoping for new items to appear.
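The mechanics of such a variable-ratio schedule can be made concrete with a short sketch. The following TypeScript fragment is a minimal illustration with hypothetical names and probabilities; it illustrates the psychological mechanism, not the implementation of any actual service:

    // Minimal sketch of a variable-ratio reward schedule (hypothetical values).
    type FeedItem = { kind: "rewarding" | "filler"; text: string };

    const REWARD_PROBABILITY = 0.3; // unpredictable: roughly 1 in 3 checks pays off

    function refreshFeed(): FeedItem {
      // The user cannot predict the outcome, which is what makes the act of
      // checking itself reinforcing under operant conditioning.
      if (Math.random() < REWARD_PROBABILITY) {
        return { kind: "rewarding", text: "A friend posted a new photo!" };
      }
      return { kind: "filler", text: "Nothing new (or a post already seen)" };
    }

    // Most refreshes yield nothing, so the user is nudged to keep trying.
    for (let i = 0; i < 5; i++) {
      console.log(refreshFeed());
    }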
C2 - Automate the User Experience. Attention capture damaging patterns may automate the user experience to induce meaningless normative dissociation experiences that direct behavior and keep users on the platform. Normative dissociation is a phenomenon in which a person temporarily experiences a disconnection from physical and emotional experiences. During normative dissociation, people experience a loss of self-awareness and reflection, and they are less inclined to exercise intentional choice. These experiences are typically only realized in hindsight, i.e., once self-reflection is reengaged [
22]. Normative dissociation can characterize different mental states, including daydreaming, flow states, and becoming absorbed in watching a movie [
22].
While these experiences may sometimes be beneficial for the user, e.g., in the case of flow, Baughan et al. [
11] warn that designers may intentionally adopt patterns that promote “zone states,” i.e., absorption in personally meaningless activities with little to no intrinsic value. In the context of a study on Twitter, the authors reported that participants described feelings of being absorbed in a “zombie”-like state when passively scrolling the newsfeed. Similarly, “The 30-Minute Ick Factor” reported by Tran et al. [
119] describes the negative feelings that users experience after noticing that they have unconsciously spent a notable amount of time on social media.
To induce meaningless normative dissociation, ACDPs often remove the need for autonomous decision making [
20,
44], by promoting “endless” sessions [
128]. Gray et al. [
44], for example, report that manipulative designs, including those that can lead to attentional harms, adopt mechanisms that
“automate the process of performing essential tasks without the user’s confirmation” (p. 67). As reported by Chaudhary et al. [
26], there is a need to discuss
“the close correlation between ease of usability and dark persuasive patterns” (p. 788). In analyzing deceptive designs on video streaming platforms, the authors found several functional and helpful features that, in reality, may evolve into damaging patterns with adverse consequences for users’ digital wellbeing. The insidious side of ACDPs is that user interface improvements and simplifications are sometimes a deliberate choice by designers and tech companies to promote frequent and continuous use of technology [
13,
16]. Designers, in particular, often try to improve their services’ design without considering the unintended adverse consequences of their choices [
128]. Consequently, Chaudhary et al. [
26] warn that the trade-off between usability and persuasion is critical, especially when there are ambiguities in the designer’s intentions. After prolonged use, in particular, features like content autoplay may become
“habit-forming designs” [
26].
The three remaining criteria (C3, C4, C5) address the impacts that ACDPs may have on users’ digital wellbeing:
C3 - Lose Track of Goals for Use. Attention capture damaging patterns lead users to lose track of their goals by demanding their attention and introducing frequent distractions [
31,
77]. As with other damaging patterns, ACDPs may
attract [
131] or
divert [
13] attention. As a result, users experience situations in which they are tricked into taking actions that are aligned with the stakeholders’ goals rather than their own [
13,
31,
61,
77]. Lyngs et al. [
77] found that a typical newsfeed on Facebook contains a large amount of attention-grabbing and distracting content. More generally, Conti et al. [
31] classified different malicious designs and strategies that may cause distraction, e.g., catchy videos and animations in advertisements. Frequent distractions are in turn correlated with a decrease in users’
productivity [
27,
73].
C4 - Lost Sense of Time and Control. Attention capture damaging patterns make a person experience a lost sense of time and control. A participant in the study of Cho et al. [
29] explains such a feeling in this way:
“I keep pressing next and flipping a story to another. I just keep pressing... to just waste time rather than actually viewing it” (p. 12).
Damaging designs that lead to attentional harm, in particular, negatively influence users’ sense of agency [
73]. User agency or self-agency is defined as a person’s self-perception of being the initiator of their actions [
115]. ACDPs may present information in a way that reduces user autonomy of choice by adopting coercive and deceptive strategies [
10,
20,
114]. Lukoff et al. [
73] point out that a low sense of agency over technology use is, in turn, associated with negative experiences and a general sense of dissatisfaction over social media use. By surveying and interviewing YouTube users, the authors found that features like recommendations and autoplay often make users feel less in control as they undermine their sense of agency, e.g., because suggestions of new videos are typically “hard to decline.”
C5 - Sense of Regret. Exposure to an attention capture damaging pattern is typically associated with a later sense of regret, e.g., about the time spent on a digital service or a specific interaction with it. As explained by a participant of the study of Tran et al. [
119] speaking about Instagram usage, for instance:
“[It] gave me like, temporary satisfaction. Like, ‘Oh yeah, all these people like my photo,’ or ‘All these people think my story is funny.’ And yeah, it’s great in that moment, but then after it dies down, you’re just kind of just like, ‘What’s the point?”’ (p. 8).
As reported by Cho et al. [
29], regret happens when
“the rewards of a taken action are outweighed by the expected rewards of what could have happened alternatively” (p. 456:2). Regret theory [
105], in particular, defines regret as a counterfactual feeling that
“the past might have unfolded differently, particularly if a different decision had been made” (p. 2). Being exposed to ACDPs increases the chances of using (or continuing to use) a digital service at times when users would not have otherwise [
27], and this causes regret, e.g., when users spend more time than they planned [
6]. Indeed, websites and mobile apps on which we spend the most time, e.g., social networks, are also those we regret using the most [
20]. This tendency is confirmed by the study of Cho et al. [
29], which investigates the relationship between different features of social media and regret. For example, the authors found that repeated use of “following-based” features like newsfeeds and stories quickly depletes content and causes regret. Similarly, “recommendations-based” features with bite-sized content, e.g., Facebook’s Watch Videos, induce users to use the service “just a bit more,” promoting a behavioral cycle that makes users experience a later sense of regret.
3.2 A Typology of Attention Capture Damaging Patterns
Our next step was to develop a typology of ACDPs based on the literature we reviewed. Although the terms typology and taxonomy are often used interchangeably [
35], we purposefully chose
typology to emphasize that our patterns are “ideal types,” i.e., types that represent elements common to most cases across the literature. Unlike a taxonomy, there are no strict decision rules to determine whether a given design pattern fits type A or type B.
In developing the typology, we faced three significant methodological decisions. First, we considered whether to name patterns in academic language or everyday language. Here, we drew upon the early work of the architect Christopher Alexander, who advocated for patterns that are ‘alive,’ which spark inspiration for designers and capture the imagination of the public [
2]. Certainly, Brignull could have given “Sneak into the Basket” a more technical name, e.g., “opt-out e-commerce”; however, we doubt that it would have reached as wide an audience or served its function as a common reference for the design community and beyond. We thus chose to give patterns evocative names using everyday language.
Second, we had to decide whether to include design patterns that met only one part of our definition. For example, in their review of shopping websites, Mathur et al. describe “Countdown Timer” and “Limited-time Message” as deceptive designs that use a sense of time urgency to capture attention [
81]. While these patterns do leverage a sense of time as a
mechanism, we decided to exclude them from our typology as their impact is primarily a financial harm. Instead, we focused on patterns where the impact is also an attentional harm.
Finally, we needed to determine how much context to include in our patterns. A universal challenge for damaging patterns is that not all patterns are harmful all of the time. For instance, Brignull describes how an interface element like “opt-out defaults” (a checkbox or radio button that is pre-selected for the user) might be ethical in one context, but not in another [
18]. On a form for organ donation, it might be ethical to set “donor” as the default. However, the same interface element might be unethical if used to automatically add an iPad case to a user’s shopping cart when they purchase an iPad. Thus, instead of “default settings” Brignull formulates the pattern as “Sneak into basket” that is specific to the e-commerce shopping experience:
“You attempt to purchase something, but somewhere in the purchasing journey the site sneaks an additional item into your basket.” Similarly, EU consumer protection regulation forbids opt-out defaults for shopping baskets and email newsletters specifically, rather than banning the design pattern in general [
1]. The same pattern can have different impacts in different domains.
Even
within a domain, we found that context matters. In their study of the features of YouTube, Lukoff et al. note that in most cases (77%) participants described video recommendations as reducing their sense of agency, but in some cases (23%) participants reported that recommendations actually supported their sense of agency by allowing them to lean back and let YouTube take control [
73]. It depended on the internal state of the user: whether they were visiting YouTube for a specific purpose or just to browse and pass the time. In short, damaging design patterns depend upon context, and ACDPs are no exception. Therefore, our descriptions of damaging patterns also capture the context in which they are most likely to lead to attentional harms.
Table
4 summarizes the typology of
11 attention capture damaging patterns that we extracted from our literature review. The rest of this section describes those 11 designs and reports a definition, context of use, and examples for each.
3.2.1 Infinite Scroll.
Infinite Scroll (N = 16, [
6,
11,
13,
27,
29,
73,
75,
77,
82,
85,
91,
96,
104,
119,
128,
132]) is a design pattern in which, as the user scrolls down in a mobile app or on a website on their PC, more content automatically and continuously loads at the bottom. Despite its advantages, infinite scroll may become a “harmful feature” [
85] or an “anti-pattern” [
128] that promotes endless usage sessions [
13,
128]. According to the studies in our corpus, the effects of patterns like Infinite Scroll can be
“understood or at least reasoned about in terms of established psychological theories” (Nontasil and Payne [
91], p. 3). Infinite Scroll, in particular, can be related to the operant conditioning theory [
113] and the variable reward technique [
82] since it creates the illusion that new interesting content will “flow” forever. Unfortunately, the “quality” of the next visualized items cannot be predicted. Furthermore, Infinite Scroll is a good example of how attention capture damaging patterns automate interactions, reducing the physical and mental effort required to keep spending time on the platform.
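To show how little machinery the pattern requires, the following minimal TypeScript sketch implements the core of a browser-based infinite scroll; the feed element and the loadMoreItems function are hypothetical placeholders for a real page and network request, not the code of any platform discussed here:

    // Minimal sketch of infinite scroll (hypothetical element and endpoint).
    const feed = document.querySelector<HTMLElement>("#feed")!;
    let loading = false;

    async function loadMoreItems(): Promise<string[]> {
      // Placeholder for a network request fetching the next "page" of content.
      return ["post A", "post B", "post C"];
    }

    window.addEventListener("scroll", async () => {
      const nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
      if (nearBottom && !loading) {
        loading = true;
        // No stopping cue: new items are appended before the user reaches the end.
        for (const text of await loadMoreItems()) {
          const post = document.createElement("article");
          post.textContent = text;
          feed.appendChild(post);
        }
        loading = false;
      }
    });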
Context and Examples. Of the 16 papers we reviewed that mentioned Infinite Scroll, 14 were in the context of social media such as Facebook (9, i.e., [
13,
27,
29,
75,
77,
85,
91,
104,
128]), Instagram (5, [
29,
96,
104,
119,
128]), and Twitter (2, [
11,
132]). Therefore, we hypothesize that Infinite Scroll is most problematic with bite-sized content, as in social networks. With Infinite Scroll, social networks provide unlimited new content to the user [
73,
91], with the risk of making users
“passively slip into a dissociative state while scrolling” (Baughan et al. [
11]). Passively and mindlessly scrolling the newsfeed of a social network, in particular, negatively influences users’ digital wellbeing [
126], and it is one of the reasons why people nowadays feel conflicted about the amount of time they spend on their devices [
66]. A participant of the study by Tran et al. [
119], for example, reported that
“I go on Instagram and I just scroll through even though there’s no real purpose.” Similarly, a participant of the study by Aranda et al. [
6] said
“I hate when I spend time just scrolling and scrolling...it’s all mind-numbing, and I don’t benefit from any of it.”
3.2.2 Casino Pull-to-refresh.
Casino Pull-to-refresh (N = 6, [
20,
29,
34,
66,
91,
119]) is an interaction technique through which users can “pull” an interface, e.g., by swiping down on a mobile app, to reload the status of the system manually. As the user performs the swipe, there is an animated reload of the page, e.g., through a reload wheel icon, that may or may not reveal new appealing content, e.g., an incoming email or a new friend’s post. As the papers in our corpus and tech insiders [
68] warn, such a design pattern can be classified as an attention capture damaging pattern that offers a
variable reward to its users. Indeed, it may result in a compulsive usage pattern that makes users repeatedly refresh an app hoping for new content to appear [
6,
66]. In other words, pull-to-refresh exploits the same psychological vulnerabilities typically targeted in gambling addictions, e.g., in slot machines, since new rewards may be available at any time, e.g., messages or notifications.
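The combination of gesture and uncertain outcome can be sketched as follows; the threshold and reward probability are hypothetical, chosen only to illustrate the slot-machine analogy:

    // Minimal sketch of a casino-like pull-to-refresh (hypothetical values).
    const REFRESH_THRESHOLD_PX = 80;
    let startY = 0;

    async function checkForNewContent(): Promise<string[]> {
      // Variable reward: sometimes new posts, sometimes nothing at all.
      return Math.random() < 0.5 ? ["a new friend's post"] : [];
    }

    document.addEventListener("touchstart", (e) => {
      startY = e.touches[0].clientY;
    });

    document.addEventListener("touchend", async (e) => {
      const pulledDown = e.changedTouches[0].clientY - startY;
      if (window.scrollY === 0 && pulledDown > REFRESH_THRESHOLD_PX) {
        // The animated spinner is the "lever pull"; the outcome is unpredictable.
        const items = await checkForNewContent();
        console.log(items.length > 0 ? items : "nothing new; pull again?");
      }
    });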
Context and Examples. As reported by the 6 papers mentioning pull-to-refresh as an ACDP, such an interaction technique characterizes touch-based interfaces and targets social network users on smartphones (N = 5, [
29,
34,
66,
91,
119]). According to Nontasil and Payne [
91], animated pull-to-refresh techniques on social networks’ mobile apps are deliberately modeled on “one-armed bandits.” While we expected to find this pattern in other contexts as well (see the email checking habits described by Oulasvirta et al. [
94]), the fact that this pattern appeared predominantly on social networks suggests that casino-like pull-to-refresh techniques are most problematic when the underlying content is varied and less predictable than, for example, a simple email. In this way, not only the quantity of the reward (whether or not one is present) but also its quality (the degree to which the new social media post(s) satisfy the user) is variable.
3.2.3 Neverending Autoplay.
Neverending Autoplay is a design pattern in which new videos continue playing indefinitely without any user interaction. Many papers included in our review (N = 14, [
13,
16,
20,
26,
27,
29,
61,
73,
75,
77,
82,
96,
119,
128]) describe it as one of the most common attention capture damaging patterns. As with other patterns, autoplay can be a useful feature in some circumstances, e.g., to listen to YouTube’s music videos while working, and detrimental in others, e.g., when it is used to attract attention against the user’s best interests [
73]. In particular, people’s digital wellbeing problems arise when autoplay is “neverending” and cannot be easily turned off. Indeed, similarly to Infinite Scroll, Neverending Autoplay
“works by continuously providing users with yet another film clip for them to watch after one finishes — allowing ‘endless’ video streaming sessions” (Widdicks et al. [
128], p. 5). Bongard-Blanchy et al. [
16] selected Neverending Autoplay as one of the deceptive designs to be investigated. While their recruited participants described autoplay as more acceptable than other strategies, e.g., hiding information, they also found it to be one of the most influential features implemented by digital services to drive people’s behavior. Lukoff et al. [
73] classified autoplay as an ACDP that undermines users’ sense of agency, as it removes the need for autonomous decision-making [
20].
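The mechanism itself fits in a few lines. The following minimal sketch assumes a hypothetical video element and recommendation queue; it illustrates the pattern, not the implementation of any actual platform:

    // Minimal sketch of neverending autoplay (hypothetical player and queue).
    const player = document.querySelector<HTMLVideoElement>("#player")!;
    const upNext = ["clip1.mp4", "clip2.mp4", "clip3.mp4"];
    let index = 0;

    player.addEventListener("ended", () => {
      // No stopping cue: the next recommended clip starts immediately, removing
      // the natural pause in which a user might decide to stop watching.
      player.src = upNext[index % upNext.length];
      index += 1;
      void player.play();
    });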
Context and Examples. Neverending Autoplay is an ACDP that is common in social networks (N = 5, [
27,
29,
61,
77,
128]) and video streaming platforms (N = 6, [
13,
26,
29,
73,
75,
82]). As these services have different characteristics, the pattern can be present in slightly different variations and with different goals. In social networks like Facebook (N = 3) and Instagram (2), videos embedded in the newsfeed start automatically as soon as they appear on the screen. At the same time, users’ stories flow on their own or through a simple tap on the screen. Autoplay is also often active by default, and settings to deactivate it are often difficult to access [
85], meaning that most users experience this pattern during all their usage sessions. YouTube (5), instead, attracts users by automatically (and infinitely) starting a new video [
91] when the previous one ends (Figure
3). As reported by a participant of the study by Lukoff et al. [
73], for instance,
“I often spend more time than I meant to because there is a good related video that seems worth watching so ya know, ‘Just one more’ which becomes a couple hours.” It is worth noting that, unlike other services, YouTube users can easily disable/enable the autoplay functionality through a slider embedded in the video player. While Neverending Autoplay works differently depending on the underlying service, the common point is that there is never a stopping cue or pause for reflection for the user. In particular, videos on social networks are generally short and often consumed with little or no attention, e.g., while passively scrolling the newsfeed [
126]. Thus, Neverending Autoplay on social networks is used to attract the user and maximize the amount of (different) content the user interacts with. On platforms like YouTube, instead, autoplayed videos “follow” the previous ones as step-by-step recommendations, thus enforcing more extended viewing sessions [
26].
3.2.4 Guilty Pleasure Recommendations.
Guilty Pleasure Recommendations (N = 12, [
6,
13,
20,
26,
29,
50,
73,
77,
91,
96,
114,
132]) are personalized suggestions that prey on individual consumer frailty to target every guilty pleasure of the users and keep them on the platform. They offer pleasurable content on some level but also leave users feeling guilty afterward. Many digital services use recommender systems to propose new and appealing content to the user based on their past interactions (content-based approach) or the preferences of similar users (collaborative filtering approach). Recommendations are undoubtedly an important mechanism that can improve the overall user experience with a platform that is designed to maximize a user’s utility [
20]. However, as reported by the 12 papers under analysis, misalignment between the goals of the platform and the user’s goals – i.e., a value alignment problem [
20] – can make recommendations an attention capture damaging pattern that “traps” users in the system and keeps their attention [
13,
114]. These clickbait suggestions [
73,
77] increase the platform’s utility without a benefit for the user. In particular, the paper by Chaudhary et al. [
26] talks about “bias grind,” by referring to UI patterns that
“disproportionately overload user interests and biases [...] providing an infinitely long scroll of Recommendations based on previous watching history” (p. 788). Unfortunately, Guilty Pleasure Recommendations cannot be easily personalized or disabled without third-party tools, e.g., Unhook [
123]. As for other ACDPs,
“the variable schedule of rewards in content recommendations also play a huge role in hooking users” (p. 456:20), as reported by Cho et al. [
29]. Furthermore, Guilty Pleasure Recommendations are particularly harmful to people lacking self-control and self-esteem [
13,
50].
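The value alignment problem can be made concrete with a minimal ranking sketch; the fields and numbers below are hypothetical and purely illustrative of the misalignment, not of any real recommender:

    // Minimal sketch of engagement-driven ranking (hypothetical fields).
    type Candidate = {
      title: string;
      predictedWatchMinutes: number; // platform utility: expected time on site
      matchesUserGoal: boolean; // user utility: ignored by the score below
    };

    function rankForEngagement(candidates: Candidate[]): Candidate[] {
      // Only predicted engagement drives the ordering; whether an item serves
      // the user's own goal plays no role at all.
      return [...candidates].sort(
        (a, b) => b.predictedWatchMinutes - a.predictedWatchMinutes
      );
    }

    const candidates: Candidate[] = [
      { title: "Tutorial the user searched for", predictedWatchMinutes: 8, matchesUserGoal: true },
      { title: "Viral guilty-pleasure compilation", predictedWatchMinutes: 42, matchesUserGoal: false },
    ];
    console.log(rankForEngagement(candidates)[0].title); // the guilty pleasure wins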
Context and Examples. The 12 papers mentioning Guilty Pleasure Recommendations as an ACDP are in the context of social media, e.g., Facebook (4, [
13,
29,
77,
91]), and video streaming platforms, e.g., YouTube (3, [
13,
29,
73]). Both social media and video streaming platforms often provide their users with clickbait [
73,
91] suggestions, thus increasing compulsiveness in their long-term usage [
26]. Video streaming platforms like YouTube display an unlimited number of personalized (often viral) suggestions in almost every part of their interfaces, with the main aim of attracting users’ attention and making them watch more videos [
13] (Figure
4). According to the study by Chaudhary et al. [
26], recommendations in these platforms may be used to extend users’ current viewing sessions and increase the chances of experiencing regret by 34%. In contrast, social networks can recommend different kinds of content, from friends to follow [
13] to games [
77] and trending topics [
132]). In some cases, these recommendations are also deliberately disguised within the users’ newsfeeds (see the Disguised Ads and Recommendations pattern). Furthermore, the study by Cho et al. [
29] highlighted that social networks like Instagram place recommendation-based features close to features that are used more actively, e.g., the search bar, thus causing
“habitual feature tour and sidetracking from the original intention of app use” (p. 1). All in all, the papers under analysis suggest that Guilty Pleasure Recommendations, both on social networks and on video streaming platforms, are particularly harmful when they are frequently updated to increase the platform’s utility [
20]. Lukoff et al. [
73] cited a study by Pew Research that finds that YouTube’s recommender system directs users towards progressively longer videos [
110]. In discussing the study by Bakshy et al. [
9], instead, Burr et al. [
20] noted that Facebook uses users’ feedback to refine its future recommendations and provide them with catchier suggestions.
3.2.5 Disguised Ads and Recommendations.
The
www.deceptive.design website defines the Disguised Ads pattern as
“adverts that are disguised as other kinds of content or navigation, in order to get you to click on them.” This deceptive design has already been investigated by previous works exploring dark patterns in websites (e.g., [
47]) and mobile apps (e.g., [
33]) and has been classified as a form of interface interference [
46]. In our work, we extend the original definition of the Disguised Ads pattern by referring to the practice of mixing disguised advertisements with recommendations that are camouflaged as normal content (N = 14, [
20,
29,
33,
39,
41,
44,
73,
77,
85,
98,
114,
125,
131,
132]).
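A minimal sketch illustrates the camouflage: when sponsored items are rendered through the same template as organic posts, nothing visually distinguishes them. The markup and injection positions below are hypothetical:

    // Minimal sketch of disguising sponsored items as organic feed content.
    type Post = { author: string; body: string; sponsored: boolean };

    function renderPost(post: Post): string {
      // The same template renders both kinds of content, so a sponsored item
      // is visually indistinguishable from a friend's post.
      return `<article class="post"><b>${post.author}</b><p>${post.body}</p></article>`;
    }

    function buildFeed(organic: Post[], sponsored: Post[]): string[] {
      const feed = [...organic];
      // Interleave sponsored items every few positions in the newsfeed.
      sponsored.forEach((ad, i) => feed.splice(i * 3 + 2, 0, ad));
      return feed.map(renderPost);
    }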
Context and Examples. Disguised Ads and Recommendations is an ACDP that is typically used to increase the time users spend on social networks. Indeed, all the 14 papers in our corpus that mentioned this pattern were in the context of social media, e.g., Facebook (6, [
20,
73,
77,
85,
98,
114]) and Twitter (3, [
11,
98,
132]). These digital services purposely inject new, catchy content resembling friends’ posts into their newsfeeds to mislead users into clicking it more often [
44]. Injection may involve not only advertisements [
41,
44,
131], but also sponsored pages [
33,
98] and recommended posts [
29,
77], often by other people the user does not follow. For example, sponsored pages on Facebook and Instagram are mixed with stories and posts from friends and followed pages (Figure
5): when clicking on them, the user is still using (and paying attention to) the same service. In this sense, this kind of injection can be seen as a particular type of personalized recommendation – delivered according to the specific user’s profile – that influences people’s behavior online [
114]. Another distinctive example of the Disguised Ads and Recommendations pattern can be found on Twitter, which often displays tweets from people the user is not following, e.g., tweets of users followed by a friend or generic
“you might like” tweets, presenting them as normal content. Mildner et al. [
85] highlight that “smart” social media newsfeeds with Disguised Ads and Recommendations are tempting and likely increase the chances of prolonging usage sessions, thus causing a sense of regret. As warned by Burr et al. [
20], the pattern of Disguising Ads and Recommendations into social networks is also problematic because newsfeeds become
“a representation of what the ISA expects will elicit the most clicks based on prior behaviour” (p. 755), rather than a representation of the users’ beliefs and preferences. Unfortunately, most users are not able to recognize such a misalignment. Susser et al. [
114], for example, referenced a survey by Pew Research highlighting that more than half of Facebook users do not fully understand why certain posts are included in their newsfeed and others are not.
3.2.6 Recapture Notifications.
Recapture Notifications (N = 7, [
6,
11,
13,
73,
77,
125,
132]) are notifications that are deliberately sent to recapture the attention of a user who escaped or left a digital service for some period of time, e.g., to make them start a new usage session. Several previous works have studied the influence of notifications on users’ digital wellbeing. Indeed, the huge and continuously growing number of notifications that users receive every day [
97] can interfere with daily activities like studying, working, and driving [
4,
48], and make users less productive [
79] and more stressed [
80]. As a consequence, several digital self-control tools nowadays help users filter or block notifications [
87]. While notifications can alert users to important information, the analyzed papers highlight that Recapture Notifications are an attention capture damaging pattern that should be managed or avoided, as they are used as a pretext to make users unlock their devices and go into apps or websites to engage further [
6]. According to a participant in Lukoff et al. [
73], for example, Recapture Notifications
“draw me to YouTube and create my schedule for 20-30 minutes, this creates an addiction” (p. 7). Unfortunately, these notifications are typically activated by default in contemporary digital services [
10] and often distract users [
6,
13,
61].
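The trigger logic behind such notifications can be sketched minimally; the inactivity threshold and message below are hypothetical:

    // Minimal sketch of a recapture-notification trigger (hypothetical values).
    const RECAPTURE_AFTER_MS = 24 * 60 * 60 * 1000; // one day of inactivity

    function maybeRecapture(lastActive: Date, now: Date = new Date()): string | null {
      if (now.getTime() - lastActive.getTime() < RECAPTURE_AFTER_MS) {
        return null; // the user is already engaged; no nudge is needed
      }
      // The content is a pretext: it reports others' activity to pull the user back.
      return "Your friends posted 3 new stories while you were away!";
    }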
Context and Examples. In the papers we analyzed, sending Recapture Notifications is a cross-cutting design pattern that characterizes social media (4, [
11,
13,
77,
132]), video streaming platforms (2, [
13,
73]), and instant messaging applications (1, [
125]). Specifically, the content of notifications plays an important role in determining their (negative) impact [
73], and Recapture Notifications typically convey unimportant information [
6]. Classic examples of Recapture Notifications are those that share information about others’ activities on social networks [
77]. A participant in the study by Lyngs et al. [
77], for instance, stated that
“if I didn’t have things popping up every 30 minutes like ‘this has happened’ I don’t think I would think about Facebook” (p. 8).
3.2.7 Playing by Appointment.
Playing by Appointment (N = 5, [
41,
73,
82,
128,
130]) is an attention capture damaging pattern that forces users to use a digital service at specific times as defined by the service, rather than the user [
130]. It has been classified as a “temporal dark pattern” by Zagal et al. [
130], who related it to time-wasting activities that attempt to test the user’s patience [
41]. The pattern is engineered to encourage users to re-visit a digital service to avoid losing the possibility of earning something, e.g., points or even the ability to progress in a game.
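The appointment window can be sketched minimally; the timings below are hypothetical and loosely modeled on the crop mechanics discussed in the examples that follow:

    // Minimal sketch of a play-by-appointment mechanic (hypothetical timings).
    const RIPE_AFTER_HOURS = 4; // the crop is ready four hours after planting
    const WITHER_AFTER_HOURS = 8; // and becomes worthless four hours later

    function cropValue(hoursSincePlanting: number): number {
      if (hoursSincePlanting < RIPE_AFTER_HOURS) return 0; // too early to harvest
      if (hoursSincePlanting <= WITHER_AFTER_HOURS) return 100; // the harvest window
      return 0; // withered: the player is punished for not returning on schedule
    }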
Context and Examples. Playing by Appointment was originally studied in the context of social media games [
130], wherein resources may wither away if the user does not access the game at specific times [
73]. Zagal et al. [
130], for example, mention the game FarmVille. In this social media game, users who plant crops are encouraged to return to the game after a given amount of time, not only because they can earn points but also because a crop that is not harvested in time loses its value. Zagal et al. [
130] also mention “lighter” versions of Playing by Appointment, e.g., as in some Pokémon games. Here, some Pokémon can only be captured at specific hours of the day, but a player can complete the game even without capturing them. Besides games, we also highlight the possibility of finding the Playing by Appointment pattern in social networks. An example is Snapchat’s social streaks, which count how many consecutive days two people have been sending Snaps to each other [
28]: keeping up a Snapchat streak gives the user extra points, while even a single day without sending a Snap breaks the streak. Another example of Playing by Appointment can be found on the BeReal social network [
14], which asks users to publish a post every day at a time that is randomly selected by the system and communicated to the user through a notification.
3.2.8 Grinding.
Grinding (N = 7, [
33,
41,
46,
47,
82,
128,
130]) is an attention capture damaging pattern that forces users to repeat the same process several times to unlock an achievement [
33]. Like Playing by Appointment, Grinding has been classified as a “temporal dark pattern” [
130]. According to Widdicks et al. [
128] in the context of video games,
“it is in the interest of a game developer to entice players to commit more time to a game than what the player expects or plans, encouraging players to waste time” (p. 2). Through Grinding, digital services “consume” the user’s time and attention by increasing engagement and promising a later achievement [
47], e.g., a new level in a video game or a badge on a social network. Di Geronimo et al. [
33] report that identifying this kind of damaging pattern is not easy, as such patterns are initially disguised as features that increase user engagement.
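The underlying escalation can be sketched with hypothetical numbers: each level demands more repetitions of the same action than the one before, so the time cost per achievement keeps growing:

    // Minimal sketch of a grinding curve (hypothetical formula and numbers).
    function killsRequiredFor(level: number): number {
      return Math.floor(10 * Math.pow(1.5, level - 1));
    }

    for (let level = 1; level <= 5; level++) {
      console.log(`Level ${level}: ${killsRequiredFor(level)} monsters to defeat`);
    }
    // Prints 10, 15, 22, 33, 50: ever more repetition for each new achievement.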
Context and Examples. Grinding is an ACDP that was defined by Zagal et al. [
130] in the context of their research work on “dark game design patterns”. According to the authors, Grinding is common in massively multiplayer games, e.g., World of Warcraft. Here, players are forced into
“needlessly spending time in a game for the sole purpose of extending the game’s duration” (Zagal et al. [
130], p. 3), e.g., killing monsters to gain experience points [
46]. Besides multiplayer games, researchers highlight that Grinding can be adopted in social media games [
46], e.g., FarmVille, and social networks in general. An example is the verified badge on Twitter, which can be achieved if the account is
notable and
active, e.g., with a sufficient number of followers and mentions [
122].
3.2.9 Attentional Roach Motel.
Attentional Roach Motel (N = 6, [
10,
13,
33,
47,
78,
85]) represents the (engineered) difficulty of canceling an account or logging out from an attention-capture digital service, in contrast to the simplicity of creating an account and accessing the service. This deceptive design pattern is an extension of the original Roach Motel pattern described by the
www.deceptive.design website and previous typologies, e.g., [
33,
46,
47]. It was defined as a mechanism that generates situations that are easy for the user to get into but hard to get out of. Besides entrapping users into paid subscriptions, Baroni et al. [
10] highlighted that tech companies can also use the Roach Motel dark pattern to keep users’ attention by keeping them as customers of their services, e.g., by depriving users of the possibility of deleting their accounts [
10,
33,
78,
107]. Similarly, the Attentional Roach Motel pattern may be exploited to make account settings difficult to access [
85], e.g., by relegating them to small drop-down menus, thus hindering the possibility of logging out from a digital service [
33,
47,
78,
85]. As discussed by Mildner and Savino [
85] in their study of Facebook’s damaging design patterns, moving logout buttons into drop-down menus can be considered a form of interface interference [
46]. All in all, UIs adopting the Attentional Roach Motel pattern affect how alternatives are perceived by promoting a predefined action. A way to hide available settings, in particular, is to use deceptive visualizations that leverage the
salience bias [
121], e.g., to create optical illusions and alter people’s perceptions of the different buttons on the user interface.
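The asymmetry at the heart of the pattern can be summarized in a minimal sketch; the deletion steps paraphrase the Facebook flow described in the examples below, while the signup steps are hypothetical:

    // Minimal sketch of roach-motel asymmetry (illustrative step lists).
    const signupSteps = ["enter email", "choose password"];

    const deletionSteps = [
      "find the settings",
      "open 'Your Facebook Information'",
      "locate the 'Deactivation' tab",
      "choose between deactivating and deleting",
      "re-enter the password",
      "state a reason for leaving",
      "dismiss a dialog suggesting to just log out",
    ];

    console.log(`steps in: ${signupSteps.length}, steps out: ${deletionSteps.length}`);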
Context and Examples. Overall, 3 out of the 6 papers under analysis, i.e., [
13,
78,
85], found the Attentional Roach Motel pattern on Facebook, thus suggesting that this pattern is most problematic and evident on social media. By exploring the effects of damaging patterns on Facebook, for example, Mildner and Savino [
85] found that the logout button was moved from the top-navigation bar into the ‘Account’ menu in 2010, thus limiting discoverability. Bhoot et al. [
78], instead, described the overwhelming process of deactivating or deleting a Facebook account: 1) searching for the settings, 2) finding the ‘Deactivation’ tab (included in the ‘Your Facebook Information’ tab), 3) choosing between ‘Deactivating’ and ‘Permanently deleting’ the account, 4) entering the user’s password, 5) inserting a reason for deactivation/cancellation, and 6) skipping a final pop-up dialog suggesting to ‘log out’ instead of deactivating/canceling the account. As reported on the Facebook website [
38], moreover, users’ information is deleted from the platform only after 30 days. Furthermore, logging in during the 30 days following deletion allows the user to reactivate the account. In the case of a deactivated account, logging in anytime after the deactivation will automatically reactivate the account.
3.2.10 Time Fog.
Time Fog (N = 1, [
26]) is an attention capture damaging pattern through which designers deliberately
“induce unawareness by reducing autonomy of monitoring user time spent” (Chaudhary et al. [
26], p. 785). Compared to the original name given by Chaudhary et al. in their investigation of damaging patterns on video streaming platforms, i.e., “feature fog,” we decided to use the word “time” to emphasize that the pattern is specifically about obscuring users’ sense of time spent. The goal of this pattern is to reduce users’ opportunities to get feedback on the time they spend on digital services, e.g., by hiding a video’s elapsed time, thus increasing the chances of longer usage sessions. According to Chaudhary et al. [
26], Time Fog is related to the “hidden information” pattern reported on the
www.deceptive.design website, as well as the “interface interference” described in the Gray et al. taxonomy [
46]. Therefore, as for the Attentional Roach Motel, Time Fog can be considered a deceptive visualization that leverages the
salience bias [
121]. Chaudhary et al. also highlighted a similarity with the ‘menu engineering’ trick [
65], through which restaurants hide costly items in the menus so that they are not directly visible to customers.
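A minimal sketch, with a hypothetical feature flag, shows how simply omitting the elapsed-time readout deprives users of feedback on the time they have spent:

    // Minimal sketch of time fog (hypothetical flag and player controls).
    const SHOW_ELAPSED_TIME: boolean = false;

    function renderPlayerControls(elapsedSeconds: number): string {
      const controls = ["play/pause", "volume", "fullscreen"];
      if (SHOW_ELAPSED_TIME) {
        controls.push(`${Math.floor(elapsedSeconds / 60)} min watched`);
      }
      return controls.join(" | "); // by default, no cue about session length
    }

    console.log(renderPlayerControls(4260)); // play/pause | volume | fullscreen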
Context and Examples. The only paper mentioning Time Fog as an ACDP was in the context of video streaming platforms. One of the examples reported by Chaudhary et al. [
26], in particular, was about Netflix. The authors noted that
“the time elapsed feature that lets a user monitor how much time has elapsed since the start of video is missing from Netflix” (p. 785). The lack of such a feature enforces extended viewing sessions because users cannot easily tell how much time is left until the end of a video. Besides video streaming platforms, another possible example of Time Fog can be found in mobile games, which typically run full-screen, hiding the smartphone’s clock.
3.2.11 Fake Social Notifications.
Fake Social Notifications (N = 9, [
10,
41,
46,
47,
61,
78,
125,
130,
132]) refers to the practice of deceiving users with false social activities and information. Following this ACDP, digital services may send messages on behalf of a user [
41,
46,
130], e.g., by pretending to be them. According to Fitton et al. [
41], these messages can be related to Brignull’s “Friends Spam” deceptive pattern. Similarly, digital services may notify a user of another person’s social activity about content the user has never interacted with [
61,
132]. All in all, these deceptive techniques violate the expectation that received messages actually come from a real person, and they are often designed to spur the user receiving the message to open (and start using) a given digital service. Furthermore, being related to social activities, Fake Social Notifications may leverage our
herd instinct bias of replicating others’ actions [
30] as well as on the
spotlight effect [
43,
84], i.e., an egocentric bias that leads us to perform behaviors that elicit social approval.
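A minimal sketch, with templates echoing the examples reported below, illustrates how such notifications can be generated without any real social activity taking place:

    // Minimal sketch of a fake social notification generator (hypothetical).
    function fakeSocialNudge(contactName: string): string {
      const templates = [
        `${contactName} just tweeted after a while`,
        `${contactName} just joined, say hello`,
      ];
      return templates[Math.floor(Math.random() * templates.length)];
    }

    console.log(fakeSocialNudge("user x"));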
Context and Examples. According to the 9 papers under analysis, Fake Social Notifications are common in games hosted on social networks (4, [
41,
46,
82,
130]) and on social networks themselves (2, [
61,
Games like FarmVille and Candy Crush Saga, for example, may
“impersonate other players by communicating actions they never performed, thus misleading the player about the activities of their friends in the game” (Zagal et al. [
130], p. 6). Games on social networks may also send invitations to join the community to the player’s friends [
41] by spamming all the player’s contacts through messages that claim to be from the player [
46]. Regarding social networks, Kollnig et al. [
61] and Zhang et al. [
132] highlighted the presence of Fake Social Notifications on Twitter, e.g., when the platform sends the notification
“user x just tweeted after a while” (Figure
6a). Finally, an instance of this pattern may also be found in some instant messaging applications, e.g., Telegram, that broadcast messages like
“user x just joined, say hello” (Figure
6b).