
Wikipedia talk:Bot policy/Archive 29


Removing restrictions in software

There is a current thread at WP:VPPOL#Huggle & Rollback which I think deserves some consideration here. Ignoring the specific user (which can be discussed there) and specific thread (that WP:PERM is supposedly a problematic process), a user was sufficiently technical to modify Huggle to remove the permissions check for rollback. He subsequently used that software on wiki.

Another tool this impacts is WP:AWB, which checks that non-admin and bot accounts are listed on the CheckPage (administrators pass the check by default).

I see this choice as WP:GAMING. At least two other users have either said this is not GAMING or did not consider that it might be.

Should this policy have anything to say about scripted tools being modified to circumvent social controls on the tool that happen to be implemented in the tool directly as being GAMING (rather than e.g. WP:AWB having the social controls external to that software in WP:AWBRULES), or is the fact it is GAMING actually a common sense conclusion and we just found two users who came to a different conclusion? --Izno (talk) 21:42, 18 December 2020 (UTC)

Copying my response over:
I think this is really problematic to add into BOTPOL. How much modification of AWB is acceptable until it's no longer considered AWB & I can use AWB without requesting access to AWB? What if I make my own script from scratch to do the same thing (a particular semi-automated edit)? What if I base this script off AWB's regexes? Similar to the Huggle situation. Lines can't be drawn here, which is why making a BOTPOL restriction for this is difficult. If the issue is with rollback, apply the restriction to that guideline, but honestly I think nothing should be done unless it can be shown that the edits themselves are disruptive, problematic anti-vandalism edits. This is also the only enforceable remedy. Also, if the edits aren't problematic I don't see what the big issue is. And it's quite rare that someone is actually capable of, and does, edit the source of a tool and recompile it, so it's not the kinda thing worth legislating over I think - more harm than good will come of it. ProcrastinatingReader (talk) 21:56, 18 December 2020 (UTC)
To add: By releasing a tool publicly under a permissible license, you accept that people can edit it. We also have no policy requiring people to get any particular right before they can use semi-automated tools in general. Thus, whether one uses a particular tool, a modified version of that tool, a derivative of that tool, or their own totally custom tool, these are all equivalent things. It's somewhat illogical (and hence unenforceable, or only arbitrarily enforceable) to try to restrict how much editing someone can do to a tool, until they've done too much and it's a separate tool. Heck, how do you even prove how much someone has edited a tool? What if they've recoded half of the tool and removed the permission requirement, deciding that they don't want their derivative to have that restriction. Is this now unacceptable, too? ProcrastinatingReader (talk) 22:01, 18 December 2020 (UTC)
  • Two things, firstly the software is under an open license, they can do practically whatever they want with it. If they want to remove the check for rollbacker, they can. Secondly: All bot-like editing is covered under BOTPOL. If a user is using an editing tool to edit in a manner that does not rise to the threshold of being a bot/bot-like/automated per MEATBOT, there is not currently a policy that forbids it. If a user is using an editing tool to make the actions they make on-wiki easier, and doesn't require specific permissions to do so, there isn't a policy that forbids it. If we want to prevent people using editing tools that *mimic* an on-wiki action like rollback, we need to a) get consensus that editors should not be performing tasks that normally require a user-right, and b) add that wording to a relevant policy. Botpol is probably the most likely candidate here, as it's the one that most closely deals with automated tools. For what it's worth, I personally do not think editors should be doing tasks that mimic rollback without the relevant right. The point of being *granted permission* to do something is that it's a check on whether the person has the judgement to do a thing, not just the technical ability. Only in death does duty end (talk) 22:25, 18 December 2020 (UTC)
  • Everyone is free to modify any open source software however they want, including the checking of permissions. We, however, are free to a) prohibit the use of such software b) indef-block those who do use it per WP:DE. In practice? A WP:DE block will be handed out to those who bypass the checks and edit disruptively with it. And if User:Example removes the permission from Huggle but doesn't edit disruptively with it, who cares. WP:BOTPOL isn't what governs this, this is WP:DE and WP:CLUE stuff. Headbomb {t · c · p · b} 22:55, 18 December 2020 (UTC)
  • Was approval needed in the first place? In the case of "rollback with Huggle" the question has two parts:
    1. Can an un-approved editor use Huggle to effect rollbacks? and
    2. Can an un-approved editor use another tool or make his own from scratch to effect rollbacks?
The answer to the first question is clearly "no." The answer to the second appears to be "yes." If this is true, then "modifying Huggle to do rollbacks" should be allowed. If it is not true, then the language of the Bot policy needs to be clarified to prohibit using any tool to effect a rollback except that which is provided by Wikipedia itself if you do not have that user-right.
As a side-note, any editor, logged in or not, can effect a rollback by viewing the edit history, selecting the most recent edit by a different editor, opening it, and saving it. This can be done quite rapidly, without thought, and, as it says in WP:Bot policy, with all of the responsibility that would come with using an automated tool. I mention this only to say that if we ban automated and semi-automated tools from effecting a rollback, we haven't prevented people from rolling back edits without thinking about it. davidwr/(talk)/(contribs) 🎄 23:25, 18 December 2020 (UTC)
Rollback-specific considerations should be built into WP:ROLLBACK. WP:CLUE and WP:DE covers the rest. Headbomb {t · c · p · b} 23:32, 18 December 2020 (UTC)
@Davidwr: "rollback" is a specific technical operation that occurs server-side; no editor can initiate a rollback without rollback permissions. Note, this has nothing to do with "reverting", only "rollback"ing. — xaosflux Talk 00:11, 19 December 2020 (UTC)
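For context on the mechanics xaosflux describes: rollback is its own action in the MediaWiki action API, and the server checks the caller's rights before performing it, so no client-side modification can grant it. A minimal sketch of the request a client would assemble (parameter names follow the action API; the helper names are mine, and the token would be fetched beforehand via action=query&meta=tokens&type=rollback):

```python
# Sketch of a MediaWiki action=rollback request. The server enforces the
# "rollback" user-right: a client lacking it gets an error response no
# matter what permission checks were removed from the client-side tool.

def build_rollback_params(title: str, user: str, token: str) -> dict:
    """Assemble the parameters for action=rollback."""
    return {
        "action": "rollback",
        "title": title,   # page whose top edits get reverted
        "user": user,     # editor whose consecutive top edits to revert
        "token": token,   # rollback token, obtained from meta=tokens
        "format": "json",
    }

def is_permission_error(api_response: dict) -> bool:
    """True if the server rejected the call for lack of rights."""
    return api_response.get("error", {}).get("code") == "permissiondenied"
```

The point of the sketch is simply that the permission gate lives in the response handling, server-side, not in the tool.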
The issue here is not WP:DE. See User talk:ChipWolf. The background to this section is that an editor was indef blocked for making 3 correct (as stated by an admin there) Huggle reverts, whilst not being a rollbacker. There is nothing disruptive about it - by definition, 3 good reverts did not disrupt progress toward improving an article or building the encyclopedia - although one may well be on a shorter leash if they mess up with it and hence it is inadvisable. Either way, I think this is a WP:ROLLBACK issue (as I said at VPP). afaik there is no PAG supporting that block, and a block on the basis of a possible 'implicit consensus' (especially without warning) is meh. ProcrastinatingReader (talk) 23:37, 18 December 2020 (UTC)
@Davidwr: as far as I gather, the line here seems to be the frequency at which a tool can edit. Presumably a tool would be considered to require rollback if the frequency it can edit is over some acceptable threshold. I would imagine this should be achieved with API ratelimiting (for users without explicit permission) rather than a social policy. A blanket prohibition of all tools except those on a whitelist would cause more problems imho ~ Chip🐺 01:10, 19 December 2020 (UTC)
The issue here is not frequency, and blocks for this haven't been limited to Huggle/rollback. See eg this. The issue seems to be that some people don't like other people editing source code to disable access limitations set by tool developers, and view this as "gaming". If you made your own high frequency editing tool and used it properly presumably nobody would care; you'd be limited solely by policy on WP:ASSISTED, and WP:MEATBOT if disruptive. ProcrastinatingReader (talk) 01:20, 19 December 2020 (UTC)
Hopefully we can establish where that line is; as I mentioned briefly below, if people truly do have an issue with modifying open-source tools with restrictions for their own use, how would this be effectively implemented in policy? There is of course the issue of how bastardized must the code become before it becomes acceptable to use, and how would this be enforced to ensure fairness? I'm of the opinion it would be almost impossible to enforce as the operations these tools have are essentially identical, they just enable the user in different ways which is completely transparent to other editors. ~ Chip🐺 01:39, 19 December 2020 (UTC)
  • @Izno: Your initial summary doesn't seem to have anything to do with "bots" so I don't think the bot policy is at issue. If this software was being used by an actual bot it still wouldn't really matter - as long as the bot was operating within its approved tasks. As far as the possible actual issue - if anyone is editing disruptively they should be dealt with as a disruptive editor regardless of whether they are using scripts, the webui, the api, their own custom software, or someone else's software. If the person is doing bot-like-editing without being an approved bot - that is already unacceptable, we shouldn't need new special rules to control it. — xaosflux Talk 00:06, 19 December 2020 (UTC)
    I think Izno is talking about amending WP:ASSISTED to say something like "Tools where the developer has attempted to restrict access to certain editors must only be used by those editors. Attempts to circumvent these controls are a violation of this policy." (which I think is a bad idea for reasons above) ProcrastinatingReader (talk) 00:13, 19 December 2020 (UTC)
    Don't think we need a policy on that at all - if someone is being disruptive that is already enough reason to step in, and if they are not then why care?. — xaosflux Talk 01:03, 19 December 2020 (UTC)
    This was my initial thought and it circles back to the open-source discussion; at what point is the tool still considered to be the same tool, and indeed how would you possibly prove or enforce such a policy? Unless there's harm to the wiki or intention to harm, I don't think other editors should care ~ Chip🐺 01:27, 19 December 2020 (UTC)
    I agree with Xaosflux: if someone's usage of {huggle, AWB, a pywikibot script, their own completely custom tool} is disruptive, then handle it as any disruptive editing. If their edit rate is so high that they are flooding watchlists or recent changes, tell them to get a bot flag. We already have very clear policies to handle both of those cases, regardless of what tool is used. If they aren't being disruptive, then who cares? Using Huggle without permission is no more disruptive than using pywikibot without permission. The checkpage is good to weed out complete newbies, but anyone who is capable of recompiling Huggle or AWB is also capable of reading the Mediawiki API docs and writing their own tool. Users are allowed to modify free software. That's probably the main point of "free software", and it is expressly allowed according to the licensing information distributed as part of Huggle. We should not be indef blocking editors in good standing just because they used a piece of free software without "permission". ST47 (talk) 03:44, 19 December 2020 (UTC)
    @ST47: Apologies for my lateness, I've just found this discussion. I'm afraid I don't fully agree here. The problem with simply waiting for the unauthorized user to cause disruption is because with these kinds of tools, if the user does become disruptive, then they become very disruptive. Not only might we have to block, but we would potentially have to check hundreds upon hundreds of edits that were submitted in a short time in order to respond to the disruption. Yes, it is "free software", meaning the licensing allows editors to fork and modify the software just like CC BY-SA allows editors to copy the contents of one article and paste it overtop another article as long as they supply attribution—but that doesn't mean it's always a good idea. Just because a user is technically skilled enough to recompile a tool like Huggle does not necessarily mean they can be trusted to use it responsibly. If an editor was denied access to Huggle or AWB via traditional means, presumably there is a good reason for doing so (admin error does happen, but technical circumvention surely isn't the solution to that). Mz7 (talk) 06:01, 27 December 2020 (UTC)
    I should also clarify that I don't necessarily think blocking is the first resort in a case where an editor has done this kind of circumvention, but I am concerned that we seem to be legitimizing this kind of circumvention where it should be discouraged or prohibited. Mz7 (talk) 06:12, 27 December 2020 (UTC)
    AWB is used to do find+replace the same as pywikibot. I can write a few lines of code on top of pywikibot to do a semiauto find+replace, without using AWB/JWB, and it’s totally acceptable without any authorisations. Pywikibot has no checkpage. Or maybe I can write my own API requests, as ProcBot does. It’ll be at the same (or faster) speed, thus same/more “damage”. So the idea that one can write that code and it’s all ok, but cannot edit AWB code, is illogical imho. Besides, where do you draw the line if we’re now making developer source code policy? If pywikibot tomorrow says “users must be +sysop to use this tool” 95% of bots have to shut down their operations? This is why I say I don’t think anybody advocating for this position has really thought this through. ProcrastinatingReader (talk) 11:47, 27 December 2020 (UTC)
    @ProcrastinatingReader: I don't buy this slippery slope argument. If pywikibot tomorrow says that users must be +sysop to use this tool, that would of course be ridiculous, and we would have a discussion to overturn that (note: the proper response would be discussion). However, requiring preauthorization to use Huggle and AWB is a longstanding community expectation on this project, and it makes no sense that we are allowing editors to intentionally skirt around that expectation. (Indeed, in my view, even if we say here that developers can't unilaterally impose enforceable technical restrictions on their tools, the rollback and check page requirements in the narrow context of Huggle and AWB are no longer merely "developer source code", but full-fledged community expectations.)
    With respect to your find+replace pywikibot example, it seems to me that the reason why we don't have a check page currently for pywikibot is because pywikibot requires a nontrivial amount of technical skill to use properly, and historically only seasoned editors are interested enough to learn it, so we haven't bothered with any kind of pre-check. As soon as any tech-savvy LTA figures out how to use pywikibot on a regular basis to disrupt the project on a wide scale (which is certainly not beyond the realm of possibility, and me saying this might honestly be WP:BEANS), I suspect the community will look favorably on imposing some kind of restriction on pywikibot usage. In other words, pywikibot represents more the exception than the norm in this area, and I can't help but see this as a sort of WP:OTHERSTUFFEXISTS situation. Mz7 (talk) 19:23, 27 December 2020 (UTC)
    This is a policy discussion, not a deletion discussion, so OSE is quite valid imo. It’s bad policy making to not consider the broader effects of a policy. I don’t see what’s so legitimate about a RB restriction and illegitimate about a sysop one. And tbh, if one removes the tag on AWB and maybe disables genfixes it’s pretty hard / impossible to tell if AWB is used or pywikibot or raw API requests. And imo I don’t think it matters. But I’ve said my piece, I think. ProcrastinatingReader (talk) 00:01, 28 December 2020 (UTC)
    Anyone capable of writing a for loop can cause widespread disruption. Using pywikibot and most other tools requires programming familiarity rather than familiarity with Wikipedia; in other words, users don't need to be "seasoned editors". It isn't possible to restrict pywikibot usage by policy because it isn't possible in the first place to make out who is using pywikibot. Similarly, for Huggle and other tools, if someone hacks the tool and makes it use different tags and edit summaries than what the tool normally uses, it's impossible for anyone to tell they're using that tool. Policy should always be enforceable; disallowing Huggle forks isn't an enforceable policy. It can at best be a "discouraged practice". – SD0001 (talk) 17:37, 28 December 2020 (UTC)
    Hmm, that is a good point. I suppose it is always possible for editors to obfuscate their tool activity to make it look like they're manually editing, and there is little we can do beyond restricting API editing in and of itself. On the other hand, it is possible for people who have been blocked indefinitely to simply return a few months later under a new username and try to obfuscate their activity so they remain undetected—that doesn't mean this behavior can't be prohibited by policy. At the very least, I would be satisfied if the outcome of this discussion is a clear understanding that this behavior isn't legitimate—i.e. if a user is caught to have hacked Huggle with the express intent of circumventing the rollback restriction, they should be asked to stop. Mz7 (talk) 23:04, 28 December 2020 (UTC)
  • Hypothetical situation to think about as we decide this question: What if a non-Wikipedia editor creates a semi-automated tool that can do reverts very fast, as fast as or faster than Huggle. Let's say he does this because he needs it on a non-WMF project that uses Wikimedia software. Now let's say he publishes it in some form. Maybe he sells it, maybe he gives it away "free as in beer." Maybe he open-sources it. It doesn't matter. It's out there. If the solution we come up with doesn't at least acknowledge this possibility, then it is not well thought-out. "Acknowledging the possibility" may be as simple as "kicking the can down the road": saying "yeah, this could happen, if it does, we will address it at that time; in the meantime, we'll let the issue sit in our minds to percolate so we are ready to discuss it when the need actually arises." But the possibility should at least be acknowledged - that is, whether or not to "kick the decision down the road" when it comes to software that doesn't exist yet should be intentional and not the result of overlooking something. davidwr/(talk)/(contribs) 🎄 19:39, 27 December 2020 (UTC)
    I edit-conflicted with you, so my response covers a fair amount of "response" to your situation, but as a TLDR, it doesn't matter what they're using - if it's disruptive, we sanction them. If it's not, then it's not. Primefac (talk) 19:43, 27 December 2020 (UTC)
    I see it as a case-by-case approach where we discuss new tools as they arise. If the new tool offers functionality identical to Huggle, I would probably support restricting the tool to rollbackers as a community decision. My understanding of this discussion, however, is more about users who deliberately evade some kind of longstanding restriction on an existing tool, which I claim is easy to tell apart from someone who in good faith creates a new tool from scratch. The key here is intention. Are they intending to get around a longstanding community expectation, or are they legitimately intending to build a new tool to contribute to the encyclopedia? If it's the former, then I think they should be asked to stop and request the relevant permission first, which usually shouldn't be a big deal if they are truly trustworthy enough to use the tool responsibly. Mz7 (talk) 22:21, 27 December 2020 (UTC)
  • If a user is causing disruption, it doesn't matter if they are using AWB/Huggle/Twinkle or some "hacked" variant of it, they will be sanctioned. We've had editors that were using their own scripts to do high-speed editing; some were indeffed for disruption, others finally figured out that the community was serious and "slowed their roll" so to speak. In my mind, the specific tools don't matter, it's the implementation that is the problem. Saying something along the lines of "you can only use AWB and not a single other thing to do mass editing" is, in my mind, dumb, and likely the reason why WP:MEATBOT says nothing about specific tools. Primefac (talk) 19:43, 27 December 2020 (UTC)
This is still happening and still a massive problem, especially where the editors concerned are autopatrolled. FOARP (talk) 10:04, 3 April 2021 (UTC)
As Primefac said above, disruptive editing (automated or not) can be adequately dealt with by existing policies, as can the use of unauthorised bots or bot-like editing. I'm not sure amending BOTPOL will help at all. ƒirefly ( t · c ) 10:46, 3 April 2021 (UTC)

Cutting and pasting = "semi-automated content page creation", right?

I feel this is pretty obvious, but this discussion has highlighted that it is possibly less than entirely obvious: cutting/pasting a sentence and changing one word in it is "semi-automated content page creation". For this reason I'm going to make a small, bold edit to say as much. Please WP:BRD if you disagree. FOARP (talk) 09:48, 3 April 2021 (UTC)

That's rarely how mass creation happens, and that's not an example of "semi-automation". So I don't find the example very helpful, since this is meant to be a very general section and apply to all mass creations. WP:MEATBOT covers the rest, including mass copy-pasting. Headbomb {t · c · p · b} 11:08, 3 April 2021 (UTC)
The examples I've been looking at lately of mass-creation have all been of the cut-paste variety. It may be that there are many others that I haven't seen of course. E.g., the Iranian "village" case. In every case the creator denies completely that either MEATBOT or MASSCREATION apply to what was done, even in cases where the articles were being created at a rate more than 50 or even 100 per day and consisted of the same cut/pasted sentence with one word or two words changed. Were they right to say there was no requirement to seek consensus before doing that? FOARP (talk) 16:14, 3 April 2021 (UTC)
The purpose of MEATBOT, as I understand it, is to prevent wikilawyering in dispute resolution (eg ANI). Such that if the community thinks the activity appears bot-like it can be considered as such, regardless of whether a bot was actually used or not. So whether the creator denies they used a bot becomes somewhat moot. The specific means of automation (a bot changing the word or a person doing so by hand) also becomes moot. ProcrastinatingReader (talk) 16:51, 3 April 2021 (UTC)
"In every case the creator denies completely that either MEATBOT or MASSCREATION apply" and in every case, the creator is wrong about it. Headbomb {t · c · p · b} 17:00, 3 April 2021 (UTC)
I came here spurred by what I saw in the same two ANI threads. WP:MASSCREATE basically says don't use (semi)automated tools to create content pages without approval, and it appears that the emphasis is commonly perceived to fall on the automated part. In the Turkish villages ANI cases, for example, there was a sub-thread where people tried to figure out whether the creator had used automation, with the understanding that it's bad if he had, and kosher if he hadn't (Incidentally, I think that's upside down: it would have been infinitely better if he had used automation because that would have meant less room for human error). I believe that was barking up the wrong tree. What made this mass creation disruptive was not the presumed nature of the tools used, but the simple fact that it was an instance of mass creation. The problem with the resulting articles was that they were based on a rubbish source, and that – possibly – some might not be notable. None of that appears to have been helped by the fact that the mass creation proceeded at a slow pace over several weeks.
This obviously goes beyond the scope of the bot policy, but we should have something, somewhere, telling editors that if they want to create more than n articles of a single type, they should seek consensus first. – Uanfala (talk) 00:03, 6 April 2021 (UTC)

Substantive content

I've created an essay at Wikipedia:Substantive content about topics that don't include any actual content which could be discussed with regards to MASSCREATE. Crouch, Swale (talk) 17:09, 28 June 2021 (UTC)

Not marking bot edits as a "bot edit"

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


xaosflux Talk 10:47, 1 July 2021 (UTC)

I was asked on my bot's talk whether I could not mark some of my bot's edits as bot edits. Is there any policy or guidance or convention on when not to mark a bot's edit as a "bot edit"? Just want to make sure I won't get in trouble for not doing so. – wbm1058 (talk) 04:03, 1 July 2021 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Proposal relating to/modifying WP:MASSCREATE/WP:MEATBOT

  You are invited to join the discussion at Wikipedia talk:Notability § Adding one new thing to the current SNG text. {{u|Sdkb}}talk 05:57, 16 July 2021 (UTC)

I don't see a good place to put this in that discussion, so I'll reply here. Feel free to link this if appropriate.
As far as WP:MASSCREATE and WP:MEATBOT, I don't think we need much if any change here. Further guidance on what sorts of sourcing are valid for a mass creation task would be better in other policies or guidelines, with a "see also" added to WP:MASSCREATE. I also don't think the proposal there is redundant to WP:MASSCREATE: it purports to establish a consensus on what sort of sourcing and/or level of content is sufficient for a mass creation task, which the bot policy does not (and IMO should not) touch on. Anomie 12:05, 16 July 2021 (UTC)

Human editors "should not" make large-scale cosmetic edits

I have boldly changed the wording of the policy to read that human editors "should not" make large-scale cosmetic edits. Please feel free to revert. Enterprisey (talk!) 04:03, 7 January 2022 (UTC)

I've tightened the language, bringing the should together with the bot-like manner. Open to further tweaks. Headbomb {t · c · p · b} 05:34, 7 January 2022 (UTC)
Looks good, thanks. Enterprisey (talk!) 05:44, 7 January 2022 (UTC)

WP:MEATBOT

What's the proper forum for raising concerns about a user using semi-automated tools for an extended period of time at a high rate? ANI seems a bit harsh as they are not acting in a poor manner, but multiple editors have expressed concern on their talk page to the point that community consensus/guidance may be wise, even if it is to approve their conduct and cut down on talk page inquiries. Slywriter (talk) 21:01, 12 January 2022 (UTC)

Village Pump proposal or policy would get the most exposure. Make a proposal "Should a bot do this.." link to the bot and diffs and what it does. -- GreenC 21:10, 12 January 2022 (UTC)
Depends on how disruptive it is; ANI is right, if a single user's behavior seems disruptive and direct discussion is failing, ANI is the normal next step. — xaosflux Talk 23:52, 12 January 2022 (UTC)
ANI/VP can also have other options than blocks, like the removal of WP:AWB access if that's the problematic tool use. Headbomb {t · c · p · b} 01:07, 13 January 2022 (UTC)

I have raised a question at User talk:Ser Amantio di Nicolao on this matter. The edits are not disruptive, but they should be performed on an alternate account with the bot flag, to avoid flooding watchlists and to ensure BAG approval. I will wait to see the response — Martin (MSGJ · talk) 03:57, 24 February 2022 (UTC)

Is MEATBOT not relevant any more?

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


An editor has been running a big AWB task for about a week now, with over five thousand edits each day. This has led to some issues with people's watchlists, and even ended up at ANI, where the consensus was that they had done nothing wrong and that the watchlist flooding was a price worth paying.

But isn't that precisely what the bot flags were there for? I had earlier asked that editor to go through WP:BRFA for this task, but their response was that they weren't going to because the process was "dysfunctional".

Is this really so? Is bot approval only optional nowadays, or are there some limitations to the applicability of WP:MEATBOT in circumstances like that? – Uanfala (talk) 12:19, 21 April 2022 (UTC)

@Uanfala: In general someone flooding recent changes and watchlists may be engaging in Wikipedia:Disruptive editing; if a compromise can't be reached via discussion, you can list them at WP:ANI for administrator review. — xaosflux Talk 13:43, 21 April 2022 (UTC)
"consensus was that they had done nothing wrong and that the watchlist flooding was a price worth paying." This is the part that matters. If there is consensus that it's a productive use of AWB, it's all gravy. Headbomb {t · c · p · b} 14:08, 21 April 2022 (UTC)
So if someone starts doing an AWB task affecting, say, 70,000 articles, they're not required to get the bot flag? – Uanfala (talk) 14:18, 21 April 2022 (UTC)
Like most things, "it depends". If they are doing something like replacing all "xyz's" with "xyz'z" - it probably should be a bot task; if they are doing something like inserting article-specific variable data, it probably shouldn't be flagged bot. While "speed" is an important factor for what should be flagged as bot, if the edit should be subject to reduced editor review is more important (bot flagged edits should never be used to add controversial "facts" to articles). — xaosflux Talk 14:33, 21 April 2022 (UTC)
I think the only way we're going to avoid constant fighting about this is a bright line on when a BRFA is needed. Something like: (Semi-)automated tasks being run as more than a limited-scope one-off (so excluding things like massrollback) need to comply with WP:BOTPERF's limit of 1 edit per 10 seconds (allowed the same moderate amount of flexibility that bots are in that regard). Tasks anticipated to affect more than 2,500 pages per day (about a workday's worth of edits at 6 edits per minute) require BRFA. -- Tamzin[cetacean needed] (she/they) 14:33, 21 April 2022 (UTC)
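As an aside on mechanics: a minimum-interval limit like the WP:BOTPERF figure Tamzin cites is trivial for tool authors to honor client-side. A minimal sketch in Python (the class name and structure are mine, not any particular tool's; the 10-second default is BOTPERF's):

```python
import time

class EditThrottle:
    """Enforce a minimum interval between successive edits."""

    def __init__(self, min_interval: float = 10.0):
        self.min_interval = min_interval
        self._last_edit = 0.0  # monotonic timestamp of the previous edit

    def wait(self) -> None:
        """Sleep just long enough to honor the interval, then record the edit."""
        now = time.monotonic()
        remaining = self.min_interval - (now - self._last_edit)
        if remaining > 0:
            time.sleep(remaining)
        self._last_edit = time.monotonic()
```

A tool would call `throttle.wait()` immediately before each save, which caps it at 6 edits per minute regardless of how fast the operator clicks.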
I don't see constant fighting. I see one user who is not dropping a stick. Levivich 17:27, 21 April 2022 (UTC)
@Levivich: ... if you look at just this one conflict rather than the many others that have popped up at ANI over the years, often pitting our most productive editors against each other, then I suppose you're right. Given how much time you spend at ANI, I'm surprised you hadn't noticed this trend. I'm not taking Uanfala's side here in the dispute with BHG. I'm acknowledging a reality that we as a community keep fighting over this. -- Tamzin[cetacean needed] (she/they) 18:41, 21 April 2022 (UTC)
User:Uanfala *really* needs to drop this. Malcolmxl5 (talk) 17:46, 21 April 2022 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Regarding WP:BOTPERF

A few comments about WP:BOTPERF. I understand that recently Wbm ran a script purging a couple million pages on enwiki (per phab:T157670); a couple of sysadmins seem to be in the discussion and did not seem to object to this. Reviewing BOTPERF, I notice it (correctly) says performance is an issue for the sysadmins, not the community, and that some of its notes (like the one on editing speed) are disregarded by many large bots, as is the guidance on times of day.

I notice there's the sysadmins' policy at wikitech:Robot policy, and I would've suggested just linking to that and not imposing anything further locally (at least not for performance reasons), but then I notice that page has also been tagged as outdated with the note that some things there may no longer apply and it was written in 2009. Tbh I'd suspect, given the scale of Wikipedia in 2022 and the fact that it also gets a lot of random requests from unaffiliated bots, that the infrastructure can handle high requests and imposing these requirements on our project's bots isn't awfully worth it. Basically wondering if its provisions are still relevant today? ProcrastinatingReader (talk) 23:16, 10 May 2022 (UTC)

I don't think it should be scrapped, and the first two bullet points for example are very relevant to non-system performance factors. Some of it can likely be updated. — xaosflux Talk 23:23, 10 May 2022 (UTC)
The first two seem like community restrictions/considerations rather than performance ones. The first should probably be mentioned in the paragraph about bot trials, and the second maybe in WP:BOTFLAG. ProcrastinatingReader (talk) 23:32, 10 May 2022 (UTC)
I don't think anyone does the whole "slowly during peak hours" thing (correct me if I'm wrong though), and don't think it's that relevant. Using maxlag is definitely the better way to do it if the bots edits really need to be slowed, so I think that part can be scrapped.
I think edit rate limits make sense, but I guess not for performance reasons. It gives people time to spot errors, see that a bot task is happening and object if needed, and avoid spamming watchlists. Editing every 10 seconds is over 8,000 edits a day, which is a pretty decent rate. Not many tasks need to be faster than that. I think the current limits are pretty decent defaults - could be increased if current practice is more than that (I think many bots do 10 edits/minute?), but I don't think they should be removed - people shouldn't be doing 30-40 edits/minute unnecessarily.
Maybe rename the section to "Edit rate"/"Run rate", if that's the concern. Galobtter (pingó mió) 23:54, 10 May 2022 (UTC)
I agree that the section should be updated. The only time I have seen the website getting disrupted by a bot is wikitech:Incidents/2021-07-26 ruwikinews DynamicPageList. That was because of a bot importing ~100k pages per day and even then it was resolved in 30 minutes. If it needs such a high load to cause issues, I don't see how advising to slow down during peak hours is still relevant. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ (talk) 04:35, 11 May 2022 (UTC)
Skimming that report, it seems it was primarily due to the DynamicPageList extension (since disabled on ruwikinews). Without that extension, which enwiki does not have per Special:Version, the servers could've handled even that load it seems, and sysadmins didn't seem to require the shutting down of the bot.
To Galobtter, those two primarily, but also the restriction on fetching pages (requesting many individual pages is not permitted) doesn't seem too relevant today. The others also seem to exist for non-performance reasons (namely, for community harmony), so should be described as that I think. It doesn't seem like sysadmins actually require anything performance-related from bots (and otherwise I think it's just the UserAgent stuff they ask for)? Would be best to get sysadmin clarification though, as I'm just guessing. ProcrastinatingReader (talk) 11:18, 12 May 2022 (UTC)
With my sysadmin hat on, the advice about running at different speeds for peak hours vs. quiet time is super outdated and can go. In general the requirements for bots are 1) follow the user-agent policy 2) run with maxlag=5, 3) Action API requests should be made in series, not in parallel. The exception for #2 is for bots that must continue to edit as long as humans are editing, like anti-vandalism bots or my TFA Protector Bot. If you're doing something weird or unusual (like the purging mentioned earlier), it's always nice to flag it for sysadmins ahead of time in case there are concerns. The problem that happened on ruwikinews was that after an action had been identified as problematic (mass creation of articles used in DPL queries), the operator had been asked to tell sysadmins again if they do it in future - they didn't, which unsurprisingly led to the same result of a sitewide outage.
Edit speeds are a bit harder to give advice on, my personal opinion (not backed by science or numbers) is that running at ~10 edits/minute (6 seconds between edits rather than the recommended 10) is usually good enough, and if a bulk task takes a week or a month, so be it. If some bots started editing faster like 15epm or 30epm, no one would notice. If every bot started editing faster, I don't know what would happen. Legoktm (talk) 22:12, 12 May 2022 (UTC)
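The three sysadmin expectations described above (a descriptive User-Agent, maxlag=5 on every request, and serial rather than parallel API requests) can be sketched as follows. This is a minimal illustration, not any real bot framework's API; the helper names (with_maxlag, should_retry, run_serially) are hypothetical.

```python
import time

# Identify the bot per the User-Agent policy (address is illustrative).
USER_AGENT = "ExampleBot/1.0 (https://en.wikipedia.org/wiki/User:ExampleBot)"

def with_maxlag(params, maxlag=5):
    """Return a copy of the request params with maxlag set."""
    out = dict(params)
    out["maxlag"] = maxlag
    return out

def should_retry(response):
    """True if the API reported replication lag (error code 'maxlag').

    The convention is to wait and retry rather than keep hitting the servers.
    """
    return response.get("error", {}).get("code") == "maxlag"

def run_serially(requests_to_make, send, wait=5.0):
    """Issue API requests one at a time (in series), backing off when lagged.

    `send` is whatever function actually performs the HTTP request; it is
    kept abstract here so the sketch stays self-contained.
    """
    results = []
    for params in requests_to_make:
        while True:
            resp = send(with_maxlag(params))
            if not should_retry(resp):
                results.append(resp)
                break
            time.sleep(wait)  # let the replicas catch up before retrying
    return results
```

Anti-vandalism bots and the like, per the exception noted above, would simply skip the maxlag parameter.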
I think it would be sensible to have some way to run a bot at significantly higher speeds than currently recommended. For example, when I was doing my short description additions, after thousands of checked edits it would have been very convenient to ramp up the speed a few times so the 150-250k edits needed didn't take weeks. There are, however, significant benefits to starting out slowly, and I think that should continue to be the norm, even if it isn't for performance reasons but rather to help with finding and resolving potential issues. It is a lot easier to deal with 5 bad edits than 500. --Trialpears (talk) 12:04, 12 May 2022 (UTC)

For some context, some past incidents I remembered off the top of my head where sysadmins intervened: the infamous status bots, a single bot that was null editing so constantly that it amounted to 30% of all edits across all Wikimedia projects, a bot that was sending harmless yet bogus requests that were interfering with dashboards that tracked 4xx and 5xx errors, a bot that null edited every page on Wikipedia, exposing a MediaWiki bug!, but combined with some new Parsoid stuff being deployed caused a huge buildup in the job queue. Bots can definitely cause trouble, but it's usually the exception because bot ops are decent about self-regulating. Legoktm (talk) 01:45, 19 May 2022 (UTC)

Proposal

Based on the above discussion as well as an informal discussion with some other sysadmins, I propose replacing the entire WP:BOTPERF section with the following:


While editors generally should not worry about performance, bot operators should recognize that a bot making many requests, editing at a high speed or creating many pages can have a much greater impact and cause genuine problems. Bots are exempted from MediaWiki's normal rate limits and given higher API limits to enable more opportunities at the cost of more responsibility. System administrators expect communities to self-regulate bots but will inform users if issues do arise, and in such situations, their directives must be followed.

  • Bots must follow all applicable global policies, including the User-Agent policy and API etiquette guideline.
  • Bots must use maxlag=5 (see documentation) unless they are e.g. an anti-vandalism bot that should keep editing as long as humans are.
  • Bots should use a speed of about 10 actions/edits per minute. There may be other non-performance related reasons to run at slower speeds, such as clogging up watchlists or potential bugs that require mass-reversion of edits.

Bots should always strive to use the most efficient method to operate. When dealing with bulk data, see m:Research:Data and wikitech:Portal:Data Services for different options.

If you are doing something unusual or different and are unsure about the impact you might have, please feel free to ask system administrators for assistance or guidance.


Let me know what you think. Legoktm (talk) 01:45, 19 May 2022 (UTC)
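The "about 10 actions/edits per minute" guidance in the proposal amounts to spacing consecutive edits at least 60 / epm seconds apart, regardless of how long each edit itself took. A minimal sketch (the Throttle class is hypothetical, not any existing bot library's API; the injectable clock and sleeper exist only to make the sketch testable):

```python
import time

class Throttle:
    """Ensure consecutive actions are at least 60/epm seconds apart."""

    def __init__(self, edits_per_minute=10, clock=time.monotonic,
                 sleeper=time.sleep):
        self.interval = 60.0 / edits_per_minute  # 6 s at 10 epm
        self.clock = clock
        self.sleeper = sleeper
        self.last = None

    def wait(self):
        """Call immediately before each edit; sleeps off any remaining gap."""
        now = self.clock()
        if self.last is not None:
            remaining = self.interval - (now - self.last)
            if remaining > 0:
                self.sleeper(remaining)
                now = self.clock()  # re-read the clock after sleeping
        self.last = now
```

Under this scheme a bulk task self-limits to roughly 600 edits an hour at the default 10 epm, and raising the rate for an approved fast task is a one-argument change.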

I find that too big a deviation from the current guideline in subtle but significant ways (e.g. general considerations, trial bots, unflagged bots, etc...). I'd rather keep the current section and tweak the one or two bullets that need adjusting, rather than TNT the section. Headbomb {t · c · p · b} 02:40, 19 May 2022 (UTC)
@Headbomb: It was pointed out above that the parts about trial and unflagged bots aren't relevant to performance, they're for "community harmony" (as @ProcrastinatingReader put it). Probably the trial part can go into WP:BOTAPPROVAL and I don't know where unflagged bots should be discussed.
Which part are you referring to as "general considerations"? If you mean the intro paragraph, the substantive changes (as I intended them) are 1) calling out rapid page creation as a potentially problematic activity 2) explain what special permissions bots have that can lead to perf issues 3) state that sysadmins expect communities to self-regulate. I think all of those are justified changes. Legoktm (talk) 05:00, 19 May 2022 (UTC)
Why it's needed (greater effect/potential to be disruptive), urgency of the task is a factor, the downloading of pages, not making unnecessary requests, making use of dumps, etc...
I find little that needs to be tweaked outside of one or two bullets, which is basically that the guidance on when to make bot edits is superseded by the modern maxlag standards. Headbomb {t · c · p · b} 07:01, 19 May 2022 (UTC)
Do the first two sentences that contain "greater impact" and discuss specific permissions not cover the "why it's needed"? Based on yours and Anomie's feedback, I'll add a "community performance" section as well that covers stuff like urgency of tasks, flagged vs unflagged.
Re: "the downloading of pages, not making unnecessary requests, making use of dumps, etc.", I don't think it really makes sense to call these out anymore. It's probably more efficient to use insource/regex searches than to scan an entire dump. And we have APIs like Restbase that are intended to provide bulk content access. Are there usecases that I'm overlooking? But they're still indirectly mentioned anyways, both the Meta and Wikitech links prominently feature dumps as an option. And "use the most efficient method to operate" is the positive phrasing of not making unnecessary requests. Legoktm (talk) 05:31, 20 May 2022 (UTC)
I personally disagree with 10epm; AWB basically caps bot edits at 20epm these days and that's what I've been using since day 1 (when the general guidance was "max 30epm"). I don't see why we're cutting that by a third. Primefac (talk) 07:20, 19 May 2022 (UTC)
@Primefac: sorry, where does it say 30epm? For urgent stuff, it currently has "once every five seconds" aka 12epm. Legoktm (talk) 05:24, 20 May 2022 (UTC)
PrimeBOT 7's task gave explicit approval, it was discussed briefly in this discussion (and no one contested my statement "recommended 20epm"), 15epm was approved here (though 20 was discussed), and there are a handful of others. I will admit that most of these are either for short-term edits or "we must fix this immediately" mass edits, and there are plenty of discussions where 30epm (or even 8epm) is listed as "too high", but we either need to start enforcing these speed limits (which we apparently, currently, don't) or have them match the standard usage; I can't be the only botop running in the 15-20epm range on a regular basis. Primefac (talk) 07:49, 20 May 2022 (UTC)
Seems ok to me as far as it goes, but finding a home for the community-related performance recommendations (versus systems performance, which this is now focused on) would be good. As it is that's only hinted at now. Anomie 12:23, 19 May 2022 (UTC)
Ack, I think it can just be a second series of bullet points under the current list. I'll probably get to updating the current proposal after the hackathon. Legoktm (talk) 05:32, 20 May 2022 (UTC)

Where

Hey, @Headbomb. Any idea where the lack of opposition was? I saw the assertion, couldn't figure out if the surrounding links helped, decided to query. Valereee (talk) 23:12, 31 August 2022 (UTC)

See the link in 'This requirement initially applied to articles', as put in the edit summary of my revert. Headbomb {t · c · p · b} 23:16, 31 August 2022 (UTC)

pre-RfC mass-article creation discussion has begun

As part of the Conduct in deletion-related editing case, the Arbitration Committee decided to request community comments on issues related to mass nominations at Articles for Deletion in a discussion to be moderated and closed by editors appointed by the committee.

Workshopping for the first of two discussions (which focuses on mass article creation) has begun and feedback can be given at Wikipedia talk:Arbitration Committee/Requests for comment/Article creation at scale. As previously announced, Valereee and Xeno will be co-moderating these discussions.

For the Arbitration Committee, –MJLTalk 16:33, 2 September 2022 (UTC)

Discuss this at: Wikipedia talk:Arbitration Committee/Noticeboard § pre-RfC mass-article creation discussion has begun

RfC which may be of interest

Wikipedia:Arbitration Committee/Requests for comment/Article creation at scale Valereee (talk) 13:58, 4 October 2022 (UTC)

RFC implementation changes

@Headbomb: I don't understand why you are overriding consensus and imposing your preferred language over the RFC language here and here. Please self-revert. Levivich (talk) 21:21, 27 November 2022 (UTC)

Personally I still think that since the RFC has made mass creation not specific to bots/meatbots then the relevant text should be moved to some other policy page and WP:MASSCREATE retargeted there. We could still keep a reference and any bot-specific guidance here. Anomie 12:29, 28 November 2022 (UTC)
"overriding consensus" I'm doing no such thing. The RFC's wording is poor and I improved it. That's it. Headbomb {t · c · p · b} 12:43, 28 November 2022 (UTC)
I have to agree that Headbomb's suggested wording is an improvement in readability, without changing any of the meaning or context of the RfC consensus statement. Let's not get to the stage of violent agreement here. Loopy30 (talk) 13:17, 28 November 2022 (UTC)
Of course it changes the meaning of the text. "Have" is not the same thing as "be cited to", and, though this is pedantic, being required to follow GNG isn't the same thing as being exempt from GNG. The community spent months looking at several alternative versions, and this is the one that won. The RfC wording wasn't poor, these changes were not an improvement (and didn't accurately document consensus) and everyone had months to participate in the multiple RFCs. Anyway, HB's bold edits were reverted so he can start a new RfC to gain consensus if he wants to. Until then I've put it back. Levivich (talk) 13:41, 28 November 2022 (UTC)
These two versions have exactly the same meaning, and yes
  • All mass-created articles (except those not required to meet WP:GNG) must be cited to at least one source which would plausibly contribute to WP:GNG
is much poorer than
  • All mass-created articles that are required to meet WP:GNG must have at least one source which would plausibly contribute to WP:GNG.
"cited to" is completely ungrammatical, and the parenthetical is awkward for no reason. Headbomb {t · c · p · b} 15:42, 28 November 2022 (UTC)
"Cited" means WP:CITEd and is more specific and thus clearer than "have", which could be misinterpreted to mean "a GNG source exists". The point of the line is that the article should contain a citation to a GNG source, it's not enough that a source just exist, so we should keep that part clear. As to the rest, I disagree with you that the consensus text is poor or awkward, and I think it's better than your revision. We talked about this sentence for like six months, it was voted on twice, it's the result of a lot of compromise, and it doesn't seem like the dozens of editors supporting it thought it was poorly phrased. But hey, WP:CCC and all that, I'm not saying you can't propose changes, just that it should follow the usual procedure. Levivich (talk) 16:39, 28 November 2022 (UTC)
"The point of the line is that the article should contain a citation to a GNG source"
And that point is unchanged. Headbomb {t · c · p · b} 02:43, 29 November 2022 (UTC)
Changing "cited" to "have" made that point less clear. Levivich (talk) 03:13, 29 November 2022 (UTC)
I agree with Levivich; using "have" will result in some editors interpreting it as a requirement that the source must exist, rather than that the source must be provided in the article. BilledMammal (talk) 04:12, 2 December 2022 (UTC)
An article that doesn't have a source is one where no source is given. No one is talking theoretical sources here. (And you also still don't cite to a source. You cite sources.) Headbomb {t · c · p · b} 04:37, 2 December 2022 (UTC)

Bots directed to edit by other users: Responsibility for edit

Regarding WP:Bot policy § Bots directed to edit by other users -- WP:BOTACC says edits are the responsibility of the operator, and that's true, but some consideration needs to be given to the fact that: a) community consensus supports the operation of a task, as certified by BAG in the BRFA; b) that edits directed by another editor depend on the judgement of that other editor, this principle is outlined in 3. Competence: All users directing a bot must have the required skill and knowledge to ensure their actions are within community consensus.

So I'd like to propose a line be added in this section stating that, if a bot allows some kind of arbitrary actions and this behaviour has been deemed appropriate by community consensus/BRFA, then responsibility for the particular action taken passes to the editor directing the action, and not to the operator (barring technical bugs and so long as the action taken is the expected action). The context of this proposal for me is WP:RESPONDER-RFC but this isn't limited to that; it just feels strange to me for the operator to take responsibility for actions taken by authorised users, if the entire task has been authorised by the community and community consensus is deciding who is authorised to operate the task. So I think this clarification would be worthwhile: botops shouldn't be (and AFAIK currently are not) held responsible for actions by authorised users. ProcSock (talk) 02:50, 30 November 2022 (UTC)

Makes sense to me. GeneralNotability (talk) 03:43, 30 November 2022 (UTC)
I'm a bit wary of this proposal. We don't want to make it so that the operator of such a bot that's frequently misused can claim they don't have to take measures to stop people from misusing it because the responsibility is all on the misusers. We also don't want them to claim they have no responsibility to help in cleaning up after it. Anomie 13:00, 30 November 2022 (UTC)
Sounds a bit like Section 230... but I agree with Anomie; point 3 quoted above should be enough that if a user is not competent enough to be running (or otherwise using) the bot in question, then the bot operator should be making sure that editor is not using the bot. Primefac (talk) 13:04, 30 November 2022 (UTC)
What if the bot's access whitelist isn't decided by the bot operator? Aside from the hypothetical WP:RESPONDER-RFC, there are concrete examples of this like User:DannyS712 bot III's autopatrol tasks, where the list is at Wikipedia:New pages patrol/Redirect autopatrol list and it's a fully protected page managed by admins which the botop can't even edit. To a lesser extent there's also User:ProcBot/PurgeList which I let most people edit and don't really want to micromanage the addition of pages to that list, or play around with the purging frequency set by whoever added it, so long as it's not absurd or disruptive to resources.
I guess in the case of the redirect autopatrol list, if some bad redirects get autopatrolled, I wouldn't imagine Danny is responsible for cleaning that up. The access should be revoked by an admin and any cleanup decided by the community as appropriate. I guess I see these directed bots as a service, and (barring bugs or operation outside BRFA-approved parameters) the botop's responsibility for particular actions as no more than a Wikipedia's sysadmin's responsibility for the content of our articles. Mainly I think it limits the possibilities of bots if the edits of these bots are considered to be the same as if the bot operator made them on their own account, if that's what we mean by being responsible for the edit. I don't think we really want botops deciding which arbitrary actions are appropriate or not. ProcSock (talk) 14:52, 30 November 2022 (UTC)
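The "access list on a protected wiki page" pattern described above (the redirect autopatrol list, or ProcBot's purge list) generally means the bot re-reads the on-wiki list before acting, so that admins or the community, rather than the operator, control who is authorised. A hypothetical sketch of that gate (the parsing function and list format here are illustrative, not how any of the named bots actually work):

```python
import re

def parse_user_list(wikitext):
    """Extract usernames from list lines like '* [[User:Example]]'."""
    users = set()
    for line in wikitext.splitlines():
        m = re.match(r"\*\s*\[\[User:([^]|]+)", line.strip())
        if m:
            users.add(m.group(1).strip())
    return users

def act_if_authorised(requesting_user, wikitext, action):
    """Run `action` only when the requesting user is on the access list.

    `wikitext` would be fetched fresh from the (protected) list page, so
    removing a name on-wiki immediately revokes access without any code
    change by the operator.
    """
    if requesting_user in parse_user_list(wikitext):
        return action()
    return None  # refuse; access is managed entirely on-wiki
```

Under this design, responding to misuse is an on-wiki edit (removing the name) rather than an operator intervention, which is the division of responsibility being discussed here.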
In your hypotheticals, you have exactly described how the bot operator does take responsibility, so I do not think your last statement that's what we mean by being responsible for the edit is accurate; a botop can be "responsible" for an edit by undoing the type of edit that caused the issue in the first place; Danny might not be responsible for cleaning up bad redirects, but he can remove names from the AP list; you can remove inappropriate null edit runs if necessary (though as you say, this can be done if there is a problem found, and you do not need to micromanage). Primefac (talk) 09:03, 2 December 2022 (UTC)
In the case of Danny I don't think he can remove names from the AP list, since it's fully protected. He could add overrides in the codebase, of course, but I think the point of the fully protected page is that responsibility for that has been delegated to admins, and I imagine that in practice admins deal with problematic editors on that list and not Danny.
But in substance I'm largely happy with what you say, if we assume 'responsibility' means dealing with issues (as reported) by removing access to the tool? But here the botop isn't responsible for every edit on the bot's account as if it were their own, right? This interpretation is the one I don't like as much or think reflects the status quo too well. ProcSock (talk) 14:29, 2 December 2022 (UTC)
I mean, Smith609 doesn't have to fix issues brought on by Citation bot editing in the wrong places - the user that triggered the edit is given in the edit summary and they likely would be responsible for the cleanup. Smith609 would be responsible if the bot started going haywire and/or editing outside of its remit, though. Primefac (talk) 14:37, 2 December 2022 (UTC)
Citation bot may be a useful example. I recall several discussions at WP:BON and elsewhere over the years where people were being able to use it to make edits that some found problematic. IIRC some of those involved operator non-responsiveness to fixing problems, while others went in the other direction. Anomie 14:50, 3 December 2022 (UTC)
Citation bot cannot make arbitrary edits/actions though, right? It makes operator-defined actions on user-specified pages. It could just as easily run on the entire encyclopaedia, in which case it would make operator-defined actions on all pages.
I was referring specifically to the case where a bot makes an action which has some degree of variability depending on the parameters a user feeds into it. ProcrastinatingReader (talk) 09:35, 3 January 2023 (UTC)
Citation bot is a weird case at this point (and I was one of the ones saying the operator was inattentive). At this point he may be the listed operator but the clear maintainer today is AManWithNoPlan, who has push rights on the bot's repository. I don't know who has access to the toolforge configuration and whether the process is running these days though, I assume it's him. Izno (talk) 09:13, 20 January 2023 (UTC)
I do have access. AManWithNoPlan (talk) 15:33, 20 January 2023 (UTC)

Bot and operator inactivity - blocks

Right now our bot policy is fairly lenient on bot inactivity, requiring 2 years inactivity for both the bot and the operator. We have a few bots on the relevant list that aren't running and may never run again because they are blocked, or their operator is blocked, or both.

I think that a longterm block placed against either the bot account or its operator should result in removal of the bot permission. I think being blocked for a minimum of 1 year is a reasonable bound, but we can paint a bikeshed if we want. Izno (talk) 09:20, 20 January 2023 (UTC)

Why not just remove the operator activity requirement for simplicity? Two years of inactivity is more straightforward and sensible than carving out an exception for blocked bots with a different time duration. For what it's worth, this was brought up previously by BU Rob13 here: Wikipedia talk:Bot policy/Archive 25#Activity requirements. --MZMcBride (talk) 09:35, 20 January 2023 (UTC)
Agreed that operator activity should not factor into bot flag status. Additionally, I think if a bot has no active tasks, there is no reason for it to have the bot flag. For example, FlagBot 2 is a one-time run that has completed, and thus does not need the flag. This will also allow us to better track bots that are acting outwith their remit. Primefac (talk) 09:40, 20 January 2023 (UTC)
See my response on BOTN on why I don't think "bot inactive for two years" alone is reasonable: IznoBot didn't edit in either 2019 or 2020, and so I would have needed to request a new flag with BRFA 2 in 2021. Maybe if the timeframe on "only bot activity considered" were much longer, say 5-10ish years, or as BU Rob suggested there, allow operators to opt in to keeping the flag after notification.
Those rules with those caveats would also sweep up a whole bunch, but not get the currently blocked ones. I would not however support a short timeframe without the caveat. Small improvements please. :) Izno (talk) 09:48, 20 January 2023 (UTC)
It looks like @Xaosflux has previously done a 5-year-a-thon without explicit policy backing, also allowing operators to opt to retain the tools. I don't know if I would have done that allowance at 5 years, but that feels reasonable. Izno (talk) 09:50, 20 January 2023 (UTC)
@Izno the rough policy support for that was that bots can be reviewed and revoked if they no longer have consensus to run - it went smoothly last time (especially since any operator that wanted to remain was allowed). — xaosflux Talk 10:31, 20 January 2023 (UTC)
I think that's a real stretch of the relevant policy, but I was more interested in pointing to prior art that 5 years seems to have been a number used. Izno (talk) 00:28, 21 January 2023 (UTC)
I would have needed to request a new flag with BRFA 2 in 2021 - so? I know both xaosflux and I have Wikipedia:Bots/Requests for approval/Approved watchlisted, and if anything AWB bots take longer to activate than bot-flags (mainly because AnomieBOT gives us big bold warning notices if a bot doesn't have a flag). This is also something that should ideally be handled by BAG if/when a bot is approved (i.e. we/BAG should be cross-posting to BN, not the botop). Primefac (talk) 13:10, 20 January 2023 (UTC)
For 'crats adding the flags, my understanding has been that they watch the Approved page instead of anyone having to ping at BN. Anomie 15:26, 20 January 2023 (UTC)
Yup, there is rarely a delay there. But if it gets missed after a reasonable time the botop is welcome to ask at BN. Note: a count of these is listed in the stat box at BN as well. — xaosflux Talk 16:36, 20 January 2023 (UTC)
if anything AWB bots take longer to activate than bot-flags My bot is an AWB bot. So, y'know, you hit the biggest reason why. Then you're managing two places.
But besides that, it's just time consuming for everyone involved - op and crat - where I suspect that the improvement you think would come out of it (This will also allow us to better track bots that are acting outwith their remit.) isn't ultimately much of an improvement. Bots operating with no or questionable consensus are easy to identify from their operators' talk pages or BOTN if that escalates or AN(I) if there is not even an approval on file for the edits, whether or not they have a bot flag. Ultimately the biggest reason for having an inactivity policy at all is that the bot right comes with sboverride and rateunlimiter (or whatever the name is), which Anomie alludes to below.
I proposed the low hanging fruit I did because an indefinite block is an obvious consensus that the op, bot, or both, aren't welcome, which is de facto indication of lack of consensus. But I'm happy, as I said above, to have a 5-year inactivity removal, or a shorter removal of the operator requirement with a "please be gracious to the op" allowance. Maybe, for bots that are one-offs, they could instead get the flag with expiry at 5 years, so no-one even has to care a whole lot. I can go for that as well. Izno (talk) 00:26, 21 January 2023 (UTC)
Small FYI: I don't think bots come with sboverride (per Special:ListGroupRights), but I remember it has been proposed before. –Novem Linguae (talk) 00:34, 21 January 2023 (UTC)
  • I'm in support of a 5-year inactivity general rule, but don't want to get too picky on this; if an operator is around and their bot is inactive and wants to keep it in reserve, I'm not worried about it at all. Also, we enforce these sporadically - which seems to work rather well - I don't want to get in to a situation where we need to handle this like inactive admins, with monthly reviews or anything like that. — xaosflux Talk 13:34, 20 January 2023 (UTC)
    Agreed on all points. It looks like the last sweep was 2016, it's time for another. --MZMcBride (talk) 16:10, 20 January 2023 (UTC)
  • Meh. There's something to be said for removing unused rights to reduce the opportunity for a compromised account to cause problems. OTOH, I wouldn't see a problem with a bot op in good standing being able to get their bot's flag restored on request at BN without having to go through a new BRFA (but with a standard caution that they should make sure any still-approved tasks are still wanted, and should feel free to go the re-BRFA route if they want to be sure). Anomie 15:33, 20 January 2023 (UTC)
    @Anomie process-wise I think I'd rather them go to BRFA instead of BN on that -- but there is no need that BRFA needs to be onerous. — xaosflux Talk 15:35, 20 January 2023 (UTC)

RfC on COSMETICBOT

Please see RFC: Clarifications to WP:COSMETICBOT for fixing deprecated HTML tags. Legoktm (talk) 08:00, 8 February 2023 (UTC)

A mop for DYK-Tools-Bot?

I've been talking with @Theleekycauldron about adding a task to DYK-Tools-Bot which would require admin rights (move-protecting pages that are currently on the main page DYK section). I'm a firm believer in running with the minimum privileges required in case something goes haywire. So I'm thinking I should spin up a new DYK-Tools-Admin-Bot account and use that to run just the tasks that require admin rights.

Before I go down that path, is that a reasonable approach to take? Yeah, I know, lots of other steps in the approval process, but for now I'm just looking for a sanity check on the two accounts approach. -- RoySmith (talk) 01:21, 16 February 2023 (UTC)

Yep, TFA Protector Bot is a separate account for that reason. AnomieBOT has multiple accounts with different permission levels as well. Legoktm (talk) 03:16, 16 February 2023 (UTC)
Yes, that option is fine, you could put 2FA on it as well and use a limited access grant (which you won't use on the API, but just to lock it down more). — xaosflux Talk 12:11, 16 February 2023 (UTC)
Sigh. I tried to create DYKToolsAdminBot and got an error that the account name was blacklisted! ACC #330190 pending. It'll be interesting to see what the process looks like from the other side :-) -- RoySmith (talk) 17:58, 27 February 2023 (UTC)
OK, got that sorted... -- RoySmith (talk) 20:15, 27 February 2023 (UTC)

A proposal to move MASSCREATE out of this policy

See Wikipedia:Village pump (policy)#Alternative proposal: Move MASSCREATE out of BOTPOL. Anomie 12:19, 23 April 2023 (UTC)

Systematic mass edits to hidden category dates

WP:COSMETICBOT lists the "administration of the encyclopedia" as something that is not considered a cosmetic edit. But what about systematic mass edits (made by users, not bots) to adjust dates in hidden category templates such as {{Use American English}} and {{Use mdy dates}}? While they technically affect maintenance categories, they are not reader-facing, clog up watchlists, and are not quite the same as fixing errors like filling in a missing date. Would these be considered substantive or cosmetic? InfiniteNexus (talk) 20:31, 3 December 2023 (UTC)

Those would be substantive edits. However, they, like anything bot-related, are still subject to consensus. Headbomb {t · c · p · b} 21:26, 3 December 2023 (UTC)
The documentation at {{Use mdy dates}} is frequently misunderstood. It says that if you check an article and the dates all look fine, you should update the date in the template. I don't see that as a valuable edit unless people are systematically working their way through a backlog, but I am a committed gnome and 90+% of my edits are trivial in nature, so I tend not to complain unless people's edits are cosmetic, not actually fixing anything, and contrary to guidelines or documentation. – Jonesey95 (talk) 03:20, 4 December 2023 (UTC)
The specific edits I am referring to are those where users go through a bunch of draft articles and change the date in {{Use American English}} and {{Use mdy dates}} from last month to this month, without changing any of the references in the article (since drafts typically only have a few references). This achieves nothing other than clogging people's watchlists. Here is an example (this behavior isn't limited to one user, but they conducted the most recent batch of mass edits). InfiniteNexus (talk) 07:17, 7 December 2023 (UTC)
I agree that those edits seem useless, but I don't think it's really a bot policy issue, i.e. those edits are useless regardless of what scale they're done at. Legoktm (talk) 07:37, 7 December 2023 (UTC)
(edit conflict) Agreed. Primefac (talk) 07:38, 7 December 2023 (UTC)
I had been wondering whether this would be considered a violation of WP:COSMETICBOT so I had a policy I could point to when telling the user (and others) to stop. But if it isn't, then I guess I'll just have to ask "pretty please?" and hope they comply. InfiniteNexus (talk) 07:45, 7 December 2023 (UTC)
It's not technically a violation of WP:COSMETICBOT, but it is a likely violation of WP:BOTREQUIRE #2 (edits must be deemed useful), #3 (not consume resources unnecessarily, i.e. not pointlessly clog watchlists and edit histories), and possibly #4 (consensus).
Citation bot, for instance, updates broken DOI categories if they're more than 6 months old, rather than every month, to reduce that clogging. But there it also serves a purpose knowing that a broken DOI has been recently checked to still be broken. I don't know what purpose there is in saying "In January 2018, the article used DMY date formats" or "used British English". If DMY was the format then, it should still be the format today. Likewise for British English. I don't see the purpose of having those categories dated to begin with. Headbomb {t · c · p · b} 11:11, 7 December 2023 (UTC)
For {{use dmy dates}} and {{use mdy dates}}, the templates' documentation explains that the date is supposed to indicate when the article was last checked for consistency and suggests that the point of updating it is to facilitate re-checking articles periodically. OTOH, the docs for {{Use British English}} and {{Use American English}} (I haven't checked the other 20-ish country-English templates) do not indicate that the date should be updated despite similar logic potentially applying there. Anomie 12:52, 7 December 2023 (UTC)
Based mainly on the above (and also partly because this isn't solely a bot issue) I think it might be worth clarifying at some central location (VPP?) about how we really want these templates to be used. I do agree that a template saying "this page should be written in British English" (which for the record gives no visible indication of such) probably does not need to be dated. Who or when someone last checked the page is written in the correct variant is largely irrelevant, as the very next edit could theoretically go against that. Primefac (talk) 14:14, 7 December 2023 (UTC)
Changing the date on {{Use American English}} appears to be contrary to that template's current documentation, so the editor in question should be notified. Changing the date on {{use dmy dates}} is recommended by the documentation but is confusing and probably not necessary. Starting a discussion on that template's talk page (after reviewing the archives to see the confusion over the years) may be fruitful. – Jonesey95 (talk) 17:29, 7 December 2023 (UTC)
Well, this has gone quite beyond the scope of my initial comment... I should note that the effects of changing the date in those templates can be felt by users only if they have hidden categories turned on in their Preferences and can see one of the subcats of Category:Use American English, Category:Use mdy dates, etc. InfiniteNexus (talk) 19:19, 7 December 2023 (UTC)

There is no actual consensus behind this idea: For {{use dmy dates}} and {{use mdy dates}}, the templates' documentation explains that the date is supposed to indicate when the article was last checked for consistency and suggests that the point of updating it is to facilitate re-checking articles periodically. This is against MEATBOT and COSMETICBOT principles and is annoying the hell out of a lot of people for no constructive purpose. If there is an actual maintenance rationale to changing the date-stamp in the {{Use xxx dates}} template at all (I've yet to see anyone demonstrate this), then it could only be applicable when dates in the article have actually been found to be inconsistent and have been normalized to the same format again. Otherwise someone could literally set up a robotic process to check every single article on the system with such a template and update its timestamp for no reason, every single month, triggering pretty much every watchlist of every user, repeatedly, for absolutely no useful reason at all.

It's already a severe annoyance just with a handful of, uh, "devoted" users taking someone's one-off and ill-considered idea to put "when the article was last checked" in the /doc page, and running with it as license to futz around with at least thousands of timestamps for no constructive purpose. This kind of has elements of WP:NOT#GAME to it; it's like those pointless farming games where you check in over and over again to harvest meaningless virtual plants, all endlessly and to no purpose other than generating more e-plants to farm, repeating it all obsessively just to pass the time.

The template /doc needs to be changed to say "when dates were last changed in the article", or simply have the entire part about changing the template timestamp removed. There was actually value to something like {{Use DMY dates|July 2013}}, since it indicated when the date format was established, but we've now mostly lost this due to all this cosmetic-meatbot fiddling.  — SMcCandlish ¢ 😼  01:48, 19 December 2023 (UTC)

You may very well be correct about there not being consensus behind it, but it's not at all clear enough for me to be willing to take any action to enforce this supposed consensus. If there is a discussion that finds changing these dates to be against consensus and the problem continues I would have no problem removing AWB access or if necessary issue blocks. Before that happens though I don't believe there is much to be done.
I've long considered making a category for backlogs suitable for AWB. Such a category may help users move over to similar higher value tasks. --Trialpears (talk) 06:24, 19 December 2023 (UTC)
Wow, that's quite the rant complete with incorrect references to WP:MEATBOT and WP:COSMETICBOT. I don't know whether there's "consensus" behind what the doc states, but it's a clear fact that the doc does currently state it. If you want to establish whether consensus for it exists or not, a well-balanced RFC at a Village pump would be the way to go. Anomie 11:23, 19 December 2023 (UTC)

Clarifying WP:MEATBOT

This section needs to make it clear that the behaviors described in WP:COSMETICBOT also apply to human WP:MEATBOT editing, namely hitting everyone's watchlists over and over again for no good reason by making trivial, cosmetic, twiddling changes without also in the same edit doing something to improve the content in some way for the reader, or to fix something to comply with a policy or guideline, or to repair a technical problem, or to do something else otherwise substantive.

The consistent interpretation at ANI, etc., is that MEATBOT does include COSMETICBOT-style futzing around, and people have been restricted or warned repeatedly against doing things like just replacing redirects with piped links to the actual page name, adding or removing spaces that do not affect the page rendering, and so on. So MEATBOT needs to account for this consensus application, but it presently only addresses careless speed and failure to review semi-automated edits before saving them.  — SMcCandlish ¢ 😼  01:26, 19 December 2023 (UTC)

Here's the last sentence of WP:COSMETICBOT which makes clear that meat bots also should follow it: While this policy applies only to bots, human editors should also follow this guidance if making such changes in a bot-like manner. I do not believe there would be any backlash to you adding a reference to this consensus in the meatbot section as well. --Trialpears (talk) 01:44, 19 December 2023 (UTC)
Yes, the issue is that there is no mention of this at MEATBOT. Pretty much no one is going to look in COSMETICBOT for rules about human editing when there is a section for rules about human editing.  — SMcCandlish ¢ 😼  17:48, 19 December 2023 (UTC)
MEATBOT applies to all types of edits, it doesn't need to point to cosmetic bot specifically. If you're being accused of behaving like a bot, it doesn't matter if you are a bot or not, for purpose of dispute resolution knock it off until things are resolved. Headbomb {t · c · p · b} 01:31, 20 December 2023 (UTC)
What exactly needs to be "made clear"? I haven't seen anyone having an alternative interpretation. OTOH, I have seen you in the section just above misinterpreting what both of these sections actually mean. Anomie 11:28, 19 December 2023 (UTC)
I've definitely seen people having an alternative interpretation; several of them hit my watchlist on a daily basis, and I've been involved in a user-talk disputation about this stuff with one of them over the last day or so. What needs to be made clear is that COSMETICBOT cross-references MEATBOT by implication, with "human editors should also follow this guidance", but MEATBOT, which is where people look for what pertains to human editors' bot-like activity, makes no mention of it.  — SMcCandlish ¢ 😼  17:48, 19 December 2023 (UTC)
If you're referring to User talk:Tom.Reding#MEATBOT, you're misinterpreting WP:COSMETICBOT there too. As Tom.Reding noted, the edits you're complaining about there fall under the "administration of the encyclopedia", such as the maintenance of hidden categories used to track maintenance backlogs point. While you clearly disagree that that method of tracking that particular backlog is useful, it still falls under that bullet until a consensus discussion determines otherwise. This is not the place for that discussion.
As for WP:MEATBOT, there's a huge grey area where it comes to whether semi-automated edits need a BRFA or not as noted at WP:SEMIAUTOMATED. The point of WP:MEATBOT is more a special case of WP:DUCK, to cut off the "it's not a bot, I made each edit manually!" argument that was at one point derailing discussions about disruptive mass editing at ANI. Anomie 20:50, 19 December 2023 (UTC)
[sigh] This doesn't have anything of any kind to do with any discussion with Tom.Reding (which I don't even recall) or anyone else in particular. It has to do with having to say "See WP:MEATBOT and see also the human-editor provision in WP:COSMETICBOT". The only reason both policy sections have to be cited individually (when applicable) is lack of two-way cross-referencing. Anyone reading MEATBOT has no idea there is also pertinent material in COSMETICBOT and would never guess that, because the title of MEATBOT is "Bot-like editing", strongly implying that the only thing in the page about editing by humans is in that section, which of course is not true. This would be fixed by simply adding something like "Purely cosmetic changes performed by a human editor in a bot-like fashion may also be considered disruptive.", at the bottom of MEATBOT.  — SMcCandlish ¢ 😼  21:11, 8 January 2024 (UTC)
Done here, since no one objected to that simple cross-reference.  — SMcCandlish ¢ 😼  00:44, 13 January 2024 (UTC)

Does a bot require an authorized account if it doesn't make edits

I'm just curious, do you need permission to use an algorithm to comb through information on Wikipedia (like to find out how many times a word appears on Wikipedia, finding the pages that get edited the least, etc.)? Assuming that its code isn't on Wikipedia. I currently don't have the knowledge or skills to program something like that, but I'm still curious, and I might eventually have the ability to program that. Not a kitsune (talk) 15:26, 7 January 2024 (UTC)

@Not a kitsune in general you don't even need an account to just read pages. However, if you generate some sort of exceptionally high number of requests that cause disruption to the systems the system administrators may block your connection. If you want to do some very heavy mining you are likely going to be better off using a WP:DUMP that you can download and mine off-line - especially as your use case seems to be for looking at the "current version" of pages and not being particular if the page is slightly out of date. — xaosflux Talk 16:12, 7 January 2024 (UTC)
See also WP:EXEMPTBOT Headbomb {t · c · p · b} 23:37, 7 January 2024 (UTC)
thanks for answering my question. Not a kitsune (talk) 21:14, 17 January 2024 (UTC)
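For illustration, the offline dump-mining approach suggested above (counting word occurrences from a downloaded dump rather than hammering the live site) might look roughly like the following sketch. This is a minimal example, not a recommended tool: the dump filename is a placeholder, and a real pages-articles dump is many gigabytes, which is why the parsing is streamed rather than loaded into memory.

```python
import bz2
import re
import xml.etree.ElementTree as ET

def count_word(text, word):
    """Count whole-word, case-insensitive occurrences of `word` in wikitext."""
    return len(re.findall(r"\b" + re.escape(word) + r"\b", text, flags=re.IGNORECASE))

def count_in_dump(dump_path, word):
    """Stream a pages-articles XML dump and total occurrences of `word`.

    iterparse() walks the XML incrementally and each element is cleared
    as soon as it has been counted, so the dump is never held in memory.
    """
    total = 0
    with bz2.open(dump_path, "rb") as f:
        for _event, elem in ET.iterparse(f):
            # Compare the local tag name, ignoring the XML namespace prefix.
            if elem.tag.rsplit("}", 1)[-1] == "text" and elem.text:
                total += count_word(elem.text, word)
            elem.clear()
    return total

# Usage (placeholder filename; download a real dump from dumps.wikimedia.org):
#   count_in_dump("enwiki-latest-pages-articles.xml.bz2", "serendipity")
```

Since this only reads a downloaded file, it never touches the live site at all, which is exactly why no approval (or even an account) is needed.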

Require tracking maxlag

The policy currently does not mandate tracking the maxlag parameter. Wouldn't it make sense to have this tracking be an explicit requirement, considering that most bots will already have to follow it to be compliant with the API Etiquette? Sohom (talk) 22:24, 3 March 2024 (UTC)

Courtesy ping Novem Linguae :) Sohom (talk) 22:28, 3 March 2024 (UTC)
Ah, I didn't know this was in API etiquette. Interesting. I'm still mildly opposed, but let's let others weigh in. –Novem Linguae (talk) 22:33, 3 March 2024 (UTC)
I will preface by saying that I don't know exactly how the backend of AWB works, but if it doesn't track maxlag then we should not mandate its tracking because any AWB bot would automatically be violating it. Primefac (talk) 12:23, 4 March 2024 (UTC)
My understanding is that Pywikibot and AWB both already track maxlag (I might be wrong though). WP:JWB appears to not track the parameter though; maybe we can ask the maintainer to add support for it. Sohom (talk) 12:46, 4 March 2024 (UTC)
I note the reverted edit to this policy had changed "may" to "should", not "must" as implied by the paragraph here. The API Etiquette page also says "should". That stops short of a requirement, particularly if we're using plain English meanings rather than RFC 2119. Since we seldom directly review the code, and have no way to verify that the code posted is actually the code running or to check the parameters on API queries made, any actual requirement would be nearly unenforceable by us anyway.
As for "may" versus "should", again particularly since we're using plain English meanings rather than RFC 2119, I find myself without a strong opinion on the matter. "Should" seems fine to me, as long as people aren't going to try to misinterpret it as a requirement and start "attacking" bots they don't like over it. Anomie 06:44, 5 March 2024 (UTC)
It is my view that if you put "should" in a Wikipedia policy, that folks will interpret it as a requirement. –Novem Linguae (talk) 14:48, 5 March 2024 (UTC)
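For readers unfamiliar with the parameter under discussion: maxlag is a standard parameter on MediaWiki action API requests. When the database replication lag exceeds the given number of seconds, the server rejects the request with error code "maxlag" instead of running it, and a well-behaved client waits and retries. A minimal sketch of that convention follows; the helper names are illustrative, not from any particular bot framework, and do_request stands in for whatever HTTP client a bot actually uses.

```python
import time

def with_maxlag(params, maxlag=5):
    """Return a copy of action API params with the maxlag parameter set.

    maxlag=5 is the commonly recommended value: if replication lag
    exceeds 5 seconds, the server refuses the request so bots back off
    while the site is struggling.
    """
    out = dict(params)
    out["maxlag"] = str(maxlag)
    return out

def is_maxlag_error(response):
    """True if a decoded API response is a maxlag rejection."""
    return response.get("error", {}).get("code") == "maxlag"

def call_with_retries(do_request, params, retries=3, wait=5):
    """Call do_request(params) with maxlag set, sleeping and retrying
    whenever the server rejects the request due to lag."""
    for _attempt in range(retries):
        response = do_request(with_maxlag(params))
        if not is_maxlag_error(response):
            return response
        time.sleep(wait)
    raise RuntimeError("server remained lagged after %d attempts" % retries)
```

Note that maxlag only makes sense for unattended requests; interactive tools where a human waits on each response are generally advised not to set it.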

Bot policy questions

Have two, separate, questions about the bot policy that I've come across in the last 10 days that I can't find the answers to so far.

1. If a bot (and operator) have been inactive for 10+ years, and the bot has been deflagged, is that all that needs to occur, or does the bot also need to be preventatively blocked without prejudice?

2. Can bots (approved by another language wikipedia) operate here too as is, or do they additionally need en wiki approval? If it is not a one for all situation, how would one determine if they have en wiki approval?

The reasons I'm asking are: For the first case, I've seen a few bots that were blocked so they couldn't be hacked into and become destructive, but I think those were in slightly different situations than this one. And in this case, the operator (in good standing) just never returned. And for the second (unrelated) case, bot #2 has not done anything problematic, I just haven't encountered any other cases like this and being cautious.

I'm not naming case 1 or case 2's bot at this time, but can if there's no issue in doing so, and/or is needed to better answer either case. Thanks, Zinnober9 (talk) 16:21, 4 June 2024 (UTC)

1) I don't think we have a policy/guideline/norm that inactive bots need to be blocked. 2) I don't think global BRFAs are honored by enwiki, with an exception for updating interwiki links. We have some kind of opt out, so bot operators from other wikis need to go through the enwiki WP:BRFA process. I think there's more info at WP:INTERWIKIBOT and WP:GLOBALBOTS. –Novem Linguae (talk) 16:45, 4 June 2024 (UTC)
Blocks are only needed for bots editing without authorization. Approvals on other projects don't count here - however you can point to examples of a successful task on another project when applying here. Keep in mind we require approval for each task a bot is going to perform, not just for the account. — xaosflux Talk 17:02, 4 June 2024 (UTC)
1, That is what I figured, thank you both.
2. Xaosflux, Thank you. I wasn't asking from the operator point of view, as I have no bot and not planning on operating any. I had come across a bot account that had made a few edits here and was from another wiki. Their edits (here on en) have been primarily related to some articles in regard to images that were globally renamed (7 en wiki edits 2014-22), and more recently, has only been editing on two user's subpages (114 since Feb). One is the operator's, the other user I don't know their connection to the operator/bot. Those recent edits in userspace have been to create a list of various user's .js pages (one edit), and to keep and update a report list of recent en wiki draft to mainspace moves by any user (113 edits). Zinnober9 (talk) 20:43, 4 June 2024 (UTC)
Regarding userspace edits, Wikipedia:Bot policy#Valid operations without approval allows for bots to edit their own or their operator's userspace without approval (as long as the edit isn't otherwise disruptive). Editing other users' userspaces isn't allowed without approval under that exception though. Anomie 21:48, 4 June 2024 (UTC)
Approval by BRFA, or approval by the other user to edit in that user's userspace? It seems that the other user asked for the bot to generate the report. My feeling is that it would be better for the bot to create the report in its own userspace, keep things neat and tidy within the userspace boundaries and then have on the other user's page a transclusion of the new page "{{BotFooReport}}" from the bot's userspace, but since I'm not sure I can satisfy their questions at this point, the discussion is here. Zinnober9 (talk) 22:01, 5 June 2024 (UTC)
Approval by BRFA. Anomie 12:00, 6 June 2024 (UTC)
I think this rule is sometimes bent. See the bottom of Wikipedia:Bots/Requests for approval/NovemBot 7, for example. –Novem Linguae (talk) 04:34, 6 June 2024 (UTC)
I don't see the linked BRFA itself as bending the rule, it's exactly the kind of thing that should happen if someone wants a bot to edit outside of their own or the bot's userspace. The speedy approval is also fine if the BAGger is confident that a trial isn't needed to expose problems with the task. OTOH, I think @Primefac erred in that case by implying that that BRFA was unnecessary. Anomie 12:00, 6 June 2024 (UTC)
Eh... genuinely don't remember what prompted me to type that, but you're correct. I have amended the close. Primefac (talk) 14:06, 6 June 2024 (UTC)

I've created a script, Move+, that assists with closing requested moves and moving pages. One of its functionalities is to help clean up after moves by fixing mistargeted wikilinks, which I believe falls under WP:ASSISTED.

However, I've recently discovered that the number of links to be fixed can sometimes be very high, and following discussion with Ahecht and SilverLocust I now believe that it is too high for WP:ASSISTED. My plan is to only allow normal users to resolve these links when the number is sufficiently small, and otherwise require that a user be a bot, but I'm not sure where to draw the line. My initial thought is 1000 links?

Once I've implemented the limit I intend to create User:Platybot and go through WP:BRFA. BilledMammal (talk) 01:33, 4 July 2024 (UTC)

Could you give an example of what's intended to be fixed here? Maybe one with only a few links to fix, and one with a multicrapton of links? And how would this not go against WP:NOTBROKEN? Headbomb {t · c · p · b} 01:59, 4 July 2024 (UTC)
For example, if we decided that China was no longer the primary topic, and instead moved China (disambiguation) to that title, we would need to retarget every link currently to China to China (country), because otherwise they would take readers to the dab page, rather than the intended page.
I can't give real examples because those are fixed quickly, typically by approaches requiring more editor time than Move+ requires, but a hypothetical with a multicrapton would be the China example, while a hypothetical with only a few would be if we decided George Peabody Library should be a disambiguation page. BilledMammal (talk) 02:05, 4 July 2024 (UTC)
See Wikipedia:Cleaning up after a move#Fix mistargeted wikilinks. This is for moves where the old title becomes a disambiguation page, so the links have to be updated. That info page suggests WP:AWB or another semi-automated script. The other one I can think of – and the one I use – is DisamAssist. For either of those, you have to confirm each edit you make. Cf. WP:AWBRULES.
I'd suggest two reasons this feature of Move+ is automated editing rather than semi-automated: (1) the ratio between the number of buttons the user presses to the number of resulting edits is very high, and (2) a script can keep editing for more than a couple of seconds without your input.
In other words, I think 1000 would be way too high. I don't really even like rmCloser's ability to move several pages in a mass move. SilverLocust 💬 02:09, 4 July 2024 (UTC)
rmCloser doesn't really support moving several pages in a mass move, as it starts to fail when moving more than a few pages; it's part of the reason I created Move+.
However, there are some scripts that do genuinely support mass moves, such as User:Ahecht/Scripts/massmove.js, but I am willing to add limits to Move+ for this as well? BilledMammal (talk) 02:16, 4 July 2024 (UTC)
I was thinking about the mass move scripts after my reply and seeing Headbomb's. I think Headbomb's point about giving a higher limit based on user rights is a good one. Like how mass mover is limited to page movers and admins. Though I very much don't think pagemovers should have an unlimited amount. Perhaps admins > page movers > extended confirmed > autoconfirmed (along with giving a warning about how many links it will need to change). (I had never tried moving a bunch of pages with rmCloser, given my reluctance to automate things too much [edit: without a bot, that is]. I had looked at rmCloser and didn't see any numerical limit. So I assume that's just from the time limits that it has.) SilverLocust 💬 02:37, 4 July 2024 (UTC)
Currently, it only lists the number of links to update after starting. I'll put that on the todo list.
I'll definitely add limits to the number of pages permitted to be moved in one action by editors who don't have relevant advanced permissions; I'll watch the discussion to see if I should also add them, at a higher level, for page movers.
(Yep; rmCloser doesn't consider server side rate limits, and its actions start to be rejected because of that. Move+ doesn't consider it for one part of the process that has remained unchanged from rmCloser; it causes some issues, but I haven't got around to fixing it yet.) BilledMammal (talk) 03:09, 4 July 2024 (UTC)
Ah I see. Could the script check for pagemover/editconfirmed rights, or something similar (e.g. a whitelist of users, similar to AWB) and have the threshold depend on those rights? Headbomb {t · c · p · b} 02:10, 4 July 2024 (UTC)
It already checks for rights in deciding which page move options to offer, so it could easily check when considering the limit. A whitelist would be a little more work, but still simple enough to implement. BilledMammal (talk) 02:16, 4 July 2024 (UTC)
Just ideas. For users with a regular trust, I wouldn't go over 250. Maybe 100. For admins, or those with advanced trust, I'd say no limits. Headbomb {t · c · p · b} 02:33, 4 July 2024 (UTC)
That sounds reasonable for regular users. For admins and extended movers (I assume that is who you mean by advanced trust), I don't object to no limit, but I'll see what other editors think, or if they believe there is a point where it should be done on a bot account. BilledMammal (talk) 03:01, 4 July 2024 (UTC)
I agree it would be preferable by bot account, if only because you can suppress bot from the watchlist. Headbomb {t · c · p · b} 03:23, 4 July 2024 (UTC)
I've actually tried setting it as a bot edit for all users, but either I'm not calling the morebits function correctly, or accounts need to be flagged as a bot for that to work. I'm assuming the second?
Do you think a hard limit is a good idea then, or would you prefer a soft limit that recommends the use of a bot account? BilledMammal (talk) 03:34, 4 July 2024 (UTC)
Yeah, only bots have the bot edit flag. SilverLocust 💬 03:48, 4 July 2024 (UTC)
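To spell out the API-level mechanics behind that exchange: the bot parameter on an action=edit request asks the server to mark the edit as a bot edit (so watchlists and recent changes can hide it), but the server only honors it for accounts that actually hold the bot flag. The parameter names below are from the MediaWiki action API; the helper function itself is just an illustration.

```python
def edit_params(title, text, token, mark_bot=False):
    """Build parameters for a MediaWiki action=edit API request.

    bot=1 requests the bot flag on the resulting edit. For an account
    without the bot user right the server silently ignores it -- the
    edit still goes through, just unmarked -- which is why a script
    cannot flag its edits as bot edits when run from a regular account.
    """
    params = {
        "action": "edit",
        "title": title,
        "text": text,
        "token": token,
        "format": "json",
    }
    if mark_bot:
        params["bot"] = "1"
    return params
```

This is one practical reason the discussion above leans toward running large link-fixing batches from a flagged bot account: only then can the resulting flood of edits be suppressed from watchlists.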
A real example would be Talk:Australian#Requested_move_20_October_2022, in which there were 700-1000 links to process post-move. It was cleaned up within 24 hours with the help of other editors who picked the newly dabbed title up at the disambiguation wikiproject (DYK they have a report for dab pages with massive numbers of links going there?) – robertsky (talk) 02:38, 4 July 2024 (UTC)
(DYK they have a report for dab pages with massive numbers of links going there?)
I did not! (Although, generally I think manual review of every link is a waste of editor time - it would be better to create a script that finds ones likely to be problematic and presents only those for manual review. It's currently on my todo list for Move+, but I'll prioritize it - I've almost got "Add support for actioning technical requests" done, and once finished I'll do it. One issue I found in the past with dabassist is that it didn't immediately present the next text to review, so I'll make sure that is addressed - if there are other useful features this function could have please let me know, either at User talk:BilledMammal/Move+ or here.) BilledMammal (talk) 03:01, 4 July 2024 (UTC)
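For readers following along, the core text transformation behind this kind of link cleanup can be sketched in a few lines. This is a deliberately simplified illustration, not Move+'s or DisamAssist's actual code: it ignores first-letter case variants, links inside templates, and section anchors, all of which a real tool has to handle.

```python
import re

def retarget_links(wikitext, old, new):
    """Repoint [[old]] and [[old|label]] links at `new`, keeping the
    text displayed to the reader unchanged.

    [[China]]         -> [[China (country)|China]]
    [[China|Chinese]] -> [[China (country)|Chinese]]
    """
    def repl(match):
        # Preserve the visible label; for an unpiped link that is the
        # old title itself.
        label = match.group(1) if match.group(1) is not None else old
        return "[[%s|%s]]" % (new, label)

    # Match the exact old title, optionally followed by a pipe and label.
    pattern = r"\[\[%s(?:\|([^\]]*))?\]\]" % re.escape(old)
    return re.sub(pattern, repl, wikitext)
```

Because every link is rewritten mechanically like this, the human review the editors above are debating is mostly about whether the link should point at the new primary topic at all, not about the string substitution itself.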