Wikipedia:Village pump (technical)

From Wikipedia, the free encyclopedia

The technical section of the village pump is used to discuss technical issues about Wikipedia. Bug reports and feature requests should be made in Phabricator (see how to report a bug). Bugs with security implications should be reported differently (see how to report security bugs).

If you want to report a JavaScript error, please follow this guideline. Questions about MediaWiki in general should be posted at the MediaWiki support desk. Discussions are automatically archived after remaining inactive for 5 days.

Syncing user scripts from an external Git repository to Wikipedia


Hi all,

There are some common problems when developing user scripts:

  • While local development usually happens in a version control system, typically Git with continuous integration provided by sites like GitHub or Wikimedia GitLab, publishing a new version of a user script still requires a manual on-wiki edit to the user script page, which can be tedious.
  • Updates to user scripts are restricted to their owners. This creates a large bottleneck for projects maintained by multiple people, and can be especially problematic when a script owner leaves Wikipedia or goes on an extended wikibreak.

Many people, including myself, have encountered these problems. Here are some of the solutions that have emerged in the meantime (see also User:Novem Linguae/Essays/Linking GitHub to MediaWiki):

  1. Store a BotPassword/OAuth token of the owner account somewhere, and use it to make an edit whenever new code needs to be deployed (per CI results/manual approval/etc)
  2. Use a reverse proxy hosted on Toolforge, then import a remote script hosted on Wikimedia GitLab via mw.loader.load (see wikitech:Tool:Gitlab-content)

However, to me 1 feels unwieldy and suffers from the engineering effort required to link everything together, while 2 can have caching issues per the maintainer and is not as good as hosting the script on-wiki.

My proposal for how to resolve the problems above involves hosting an interface admin bot, and allowing user script authors to opt in to syncing their user script from a Git repository to Wikipedia using webhooks.

Any script wishing to be synced by the bot needs to be edited on-wiki (to serve as authorization) to include the following header at the top of the file:

// [[User:0xDeadbeef/usync]]: LINK_TO_REPO REF FILE_PATH
// so, for example:
// [[User:0xDeadbeef/usync]]: https://github.com/fee1-dead/usync refs/heads/main test.js
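
For concreteness, a hypothetical sketch (not the bot's actual code) of how such a header could be parsed; the function and field names are illustrative assumptions, and the header format follows the example above:

```javascript
// Hypothetical sketch of parsing the opt-in header shown above.
// The regex captures the three space-separated fields: repo URL, ref, file path.
const HEADER_RE = /^\/\/ \[\[User:0xDeadbeef\/usync\]\]: (\S+) (\S+) (\S+)/;

function parseSyncHeader(scriptSource) {
  // Only the first line is checked: the header must sit at the top of the file.
  const firstLine = scriptSource.split("\n", 1)[0];
  const m = firstLine.match(HEADER_RE);
  if (!m) return null; // no header means the script has not opted in
  return { repo: m[1], ref: m[2], path: m[3] };
}
```

Requiring the header to appear in the on-wiki revision itself is what makes the opt-in auditable: anyone can check the page to see whether, and from where, a script is synced.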

Here are some questions you may have:

  • Why is this being posted here?
    • Running this bot requires community discussion and approval. I'd like to see whether the community is willing to adopt this.
  • What are some benefits of this proposal?
    • Auditability. If this scheme were adopted, there would be an easy way to know whether a script is being automatically synced, and an easy way to get the list of all scripts being synced. Every edit summary links to the Git commit that created the edit.
    • Ease of use. It is very easy to set up a sync for a user script (just insert a header into the file and configure webhooks), and the format above is flexible, allowing the branch and file name to be configured. It removes the need for script developers to create BotPasswords or OAuth tokens.
    • Efficiency. Only webhooks trigger syncs. There is no unnecessary periodic sync being scheduled, nor are CI jobs required each time the script needs to be deployed.
  • What are some drawbacks of this proposal?
    • Security. Even though there are already ways to allow someone else or an automated process to edit your user script, as described above, allowing this bot makes it slightly easier, which could be seen as a security issue. My personal opinion is that this shouldn't matter much as long as you trust all of the developers whose scripts you use. This bot is aimed primarily at user scripts.
    • Centralization of trust. Because the bot holds interface administrator rights, it must be trusted not to go rogue. I have created a new bot account (User:DeadbeefBot II) with separate credentials; it will have 2FA enrolled, and the code will be open source and hosted on Toolforge.
  • What are some alternatives?
    • We can do nothing. This remains a pain point for user script developers, since syncing is hard to set up: it requires either careful CI configuration or a less reliable reverse proxy.
    • We can create a centralized external service (suggested by BryanDavis on Discord) that stores OAuth tokens and records which project files are synced with which titles. There would be a web interface allowing developers to enter their information to start automating syncs. However, this may not be as auditable, as edits would go through the bot owners' accounts and not a bot account. It is also less easy to use, as an owner-only OAuth token would need to be generated for each sync task.

Feel free to share what you think about this proposal. I'd also be happy to answer any questions or respond to potential concerns. beef [talk] 12:03, 23 May 2025 (UTC)[reply]

  • Note: This discussion is for the task of the BRFA that I opened some time ago. beef [talk] 12:16, 23 May 2025 (UTC)[reply]
    Am I reading this correctly that one of the methods you are proposing is to ask other users to give you their (bot)passwords? That is a horrible idea. — xaosflux Talk 12:25, 23 May 2025 (UTC)[reply]
    Yep. It will probably be stored on Toolforge's tooldb though. Preferably it would be an OAuth token that is only limited to editing the specific user script.
    I personally prefer having a single bot handle it. beef [talk] 12:30, 23 May 2025 (UTC)[reply]
    We explicitly tell our users never to share their authentication secrets with others; I can't possibly support processes that go against that. — xaosflux Talk 14:52, 23 May 2025 (UTC)[reply]
    If the bot receives community approval, then we won't need one that collects OAuth tokens. But according to WP:BOTMULTIOP it might be preferred to use OAuth instead of having a bot?
    A different question would be whether we should require all commits to be associated with a Wikipedia username. I personally don't see a need, but WP:BOTMULTIOP and the community might think otherwise. beef [talk] 15:01, 23 May 2025 (UTC)[reply]
    I think single bot with interface administrator is the way to go. –Novem Linguae (talk) 15:08, 23 May 2025 (UTC)[reply]
    Much more so this way, making on-wiki edits by impersonating other users has a whole host of problems. — xaosflux Talk 15:10, 23 May 2025 (UTC)[reply]
    I don't have a preference to either approach, but let's not confuse things here. No one's asking for passwords to be shared. OAuth tokens are not the same as passwords. Every time you make an edit through an OAuth tool (like Refill), you are sharing your OAuth tokens. This is very normal, and safe because OAuth-based edits are tagged and can be traced back to the application that did it. (Worth noting that owner-only OAuth credentials don't have such protections and indeed should not be shared.) – SD0001 (talk) 15:38, 23 May 2025 (UTC)[reply]
    This. I'm concerned that having people upload a BotPassword or owner-only OAuth token was even considered, when a "normal" OAuth token is so much more obviously the way to go for that option. Anomie 13:03, 24 May 2025 (UTC)[reply]
    Ah, yeah, that would be fine. I guess I wasn't thinking much about having a non-owner-only OAuth application. dbeef [talk] 10:17, 2 June 2025 (UTC)[reply]
I might just be a Luddite here, but I don't think using GitHub for on-wiki scripts is a good idea to begin with. First, I feel that the git model works when there is a "canonical" version of the source code (the main branch, say), that people can branch off of, re-merge into, etc. But the problem here is that a git repo for a MW user script can *never* be the canonical source code; the canonical source code is inherently what's on-wiki, since that's what affects users. There is an inherent disconnect between what's on-wiki and what's elsewhere, and the more we try to pretend that GitHub is the source of truth for a script, the bigger the problems with that disconnect will be. Personally, I've seen many problems caused by the confusion generated just when projects use git branches other than "main" for their canonical code; here, the canon isn't even on git at all. How would this bot handle changes made on-wiki that aren't in git (if it would handle those at all)?
Second, this doesn't solve the problem of "inactive maintainer makes it difficult to push changes to production", since a repo maintainer can disappear just as easily as a mediawiki user; it just adds an ability to diffuse it a little bit by adding multiple maintainers, at the cost of this inherent disconnect.
Third, and easiest to overcome, how does this bot handle attribution of authorship? Writ Keeper  13:36, 23 May 2025 (UTC)[reply]
"Source of truth" is a vague and subjective term. I would personally call the latest version the source of truth, which of course lives on GitHub. Wikipedia hosts the published version, which may not be from the default branch on GitHub (a dev branch for development, as the latest source of truth; the main branch for the published version).
But that's of course a personal preference. There are many, many people out there who use Git for version control and for development of user scripts. You may be fine with using MediaWiki as version control and primarily updating code on-wiki, but some of us have different workflows. It might be helpful to write unit tests and require them to pass before deploying. It might be helpful to use a preferred language that transpiles to JavaScript instead of writing JavaScript directly. Having this benefits these use cases.
It does solve the problem by allowing additional maintainers to be added. There's no native MediaWiki support for adding collaborators to a user script, so this can help with that, in addition to the benefits of a Git workflow.
Attribution is given by using the commit author's name in the edit summary. I'm sure user script developers can include a license header and all that to deal with the licensing part.
I think this thing should happen, and I think it will happen even if there is no community support for the bot to run; it will just involve the proposed Toolforge service that collects OAuth credentials. I sure hope that the bot proposal passes, but I'm fine with writing the extra code for the alternative too. I also want to think about whether I have enough energy to keep justifying why I think this would be a good bot task, when all the negative feedback I get is from people who won't use it. The automatic syncing has already occurred in one form or another. And personally, I want to be able to use TypeScript to write my next sophisticated user script project, and I want to add collaborators. beef [talk] 14:42, 23 May 2025 (UTC)[reply]
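
The attribution approach mentioned above (commit author credited in the edit summary) could look something like the following sketch; the summary format and field names are illustrative assumptions, not the bot's actual output:

```javascript
// Hypothetical sketch: build an edit summary that links each sync edit
// back to the Git commit that produced it, crediting the commit author.
function buildEditSummary(commit) {
  return (
    `Sync from ${commit.repoUrl}/commit/${commit.sha} ` +
    `(author: ${commit.authorName}): ${commit.message}`
  );
}
```

Keeping the commit URL in every summary is what makes the page history auditable: each on-wiki revision can be traced to exactly one upstream commit.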
So would this bot only be used for edits in userspace? Or also for gadgets in the MediaWiki namespace? Polygnotus (talk) 14:52, 23 May 2025 (UTC)[reply]
I would want to get approval for only userspace edits first. Extending it to gadgets is an even bigger stretch and less likely to get approved. beef [talk] 14:53, 23 May 2025 (UTC)[reply]
I also want to think about whether I have enough energy to keep justifying for why I think this would be a good bot task, when all the negative feedback I get are from people who won't use it: None of this happens in a vacuum. I commented on this because I've *already* had people complaining that I didn't submit a pull request on some GitHub repo when I responded to an intadmin edit request and implemented the change on-wiki--despite the fact that the GitHub repo was already several onwiki edits out of date before I made the change. We already have a process for multiple maintainers and code change requests; it's the intadmin edit request template. It's sub-optimal, for sure, but the solution to a sub-optimal process is not to create an entirely separate process to run in parallel. If development happens on GitHub, it doesn't affect anything unless it gets replicated onwiki. If development happens onwiki, it affects everyone regardless of what GitHub says. That's why I call the onwiki version the canonical source of truth--because that's the one that matters. I could see the benefit here if the bot also worked in reverse--if it were set up to automatically keep the main branch of the git repo in sync with the onwiki script. But as it is, I feel this will add more headache than it's worth. Sorry if that's tiring for you. Writ Keeper  15:03, 23 May 2025 (UTC)[reply]
If there is a critical fix, you can remove the header and the bot will stop syncing. That is by design. You can then ping the maintainers to incorporate the fix. I personally wouldn't mind giving committer access to my user scripts to every interface admin on this site.
A two-way sync involves storing authentication to the Git repo, and yeah, harder to implement. Everyone that uses this sync scheme will have all development activity on GitHub, with potentially occasional bug reporting happening at the talk page, so I don't see that much point in programming the sync the other way. beef [talk] 15:16, 23 May 2025 (UTC)[reply]
Everyone that uses this sync scheme will have all development activity on GitHub[citation needed] My whole point is that hasn't been my experience so far. Maybe I just caught an unusual case. Writ Keeper  15:25, 23 May 2025 (UTC)[reply]
If someone does choose to sync from Git to Wikipedia, then they must use the Git repo as their primary place for development. I cannot think of any case where people would have an onwiki version that is more up-to-date than the Git version, given that the idea of having it sync is based on the assumption that Git is used as the most up-to-date place. beef [talk] 03:29, 24 May 2025 (UTC)[reply]
We already have a process for multiple maintainers and code change requests; it's the intadmin edit request template. This seems like wishful thinking. It's just not true. I'm reminded of a time when a heavily used script broke and multiple interface admins refused to apply an unambiguous 1-line bug fix.
At best, edit requests get accepted for bug fixes, not for anything else. – SD0001 (talk) 16:26, 23 May 2025 (UTC)[reply]
That's true of almost all kinds of software on GitHub. By your logic, the canonical version of, say, MediaWiki itself is what actually runs on the production machines, not what's on GitHub. Similarly, for a library, the canon would be what's released to npm/PyPI, etc.
How would this bot handle changes made on-wiki that aren't in git (if it would handle those at all)? That's like asking: if a Wikimedia sysadmin shells into a production host and edits the code there, how is it reflected back to Gerrit? It isn't. That might sound non-ideal, but it isn't unprecedented. Already, most big gadgets including Twinkle, afc-helper, and xfdcloser are developed externally and deployed to wikis via automated scripts. Manual edits on-wiki aren't allowed, as they'll end up overwritten.
Second, ... It does solve that problem – a git repo can have multiple maintainers to avoid bus factor, unlike a user script which can only be edited by one single userspace owner (technically interface admins can edit as well, but on this project, we appear to have adopted a mentality that doing so is unethical or immoral).
Having said that, I personally don't use GitHub or Gitlab for any of my user scripts. But I respect the wishes of those who choose to do so. – SD0001 (talk) 15:05, 23 May 2025 (UTC)[reply]
I would argue there is a substantial difference between someone SSHing into a production host to make manual changes and the process of talk-page-int-admin-edit request, and the difference is that the latter *is* a process. But also, yes, to an extent I *would* argue that, from a holistic perspective, the code that is active in production and that users are seeing, interacting with, and using *is* the canonical version, and that what is in a code repo, main, develop, or otherwise, is only important to the extent that it reflects what's on the production machine. The reader or normal editor using a website feature doesn't care what's in the repo, they care what they're using, and they're going to be frustrated if that feature suddenly disappears, regardless of whether that's the fault of some bot overwriting code or some dev not committing their changes to the off-site repo or what have you. Writ Keeper  15:32, 23 May 2025 (UTC)[reply]
If I have to choose between two processes that can't co-exist, I'll choose the one that offers more benefits. A git-based workflow enables unit testing, transpilation, linting and better collaboration. It offers a change review interface that allows for placing comments on specific lines. As for talk page requests, refer to my comment above about how useful they are. – SD0001 (talk) 12:41, 24 May 2025 (UTC)[reply]
There are pros and cons. I talk about it in my essay User:Novem Linguae/Essays/Pros and cons of moving a gadget to a repo. Popular, complex gadgets are often the use case that benefits the most from a github repo. A github repo enables automated tests (CI), a ticket system, and a PR system, among other things. These benefits are well worth the slight downside of having to keep things in sync (deploying). And in fact this proposed bot is trying to fix this pain point of deploying/syncing. –Novem Linguae (talk) 15:16, 23 May 2025 (UTC)[reply]
@0xDeadbeef Don't know if you missed it in the Tech News above, but wikitech:Tool:Gitlab-content describes a new reverse proxy that allows user scripts to directly run code from gitlab. --Ahecht (TALK PAGE) 15:06, 23 May 2025 (UTC)[reply]
@Ahecht They mentioned Gitlab-content above. Search for remote script hosted on Wikimedia GitLab Polygnotus (talk) 15:07, 23 May 2025 (UTC)[reply]
I have talked to BDavis on Discord and he said he thinks having it synced to an on-wiki page is better than a reverse proxy. It's in the thread under the #technical channel on Discord. I originally thought that gitlab-content was going to be the ultimate solution but apparently not. And I had already written some code for this thing to happen, so I figured why not propose it. beef [talk] 15:09, 23 May 2025 (UTC)[reply]
  • An alternative that doesn't require any advanced permissions or raise impersonation issues is for the bot to just sync to itself. It could sync from anywhere upstream to User:Botname/sync/xxxx/scriptyyy.js. Then, any interested user could just import that script. — xaosflux Talk 15:16, 23 May 2025 (UTC)[reply]
    For gadgets, we already have a manual process - a bot that opens an edit request when an upstream repo wants to be loaded to the on-wiki one. That does allow us to ensure that changes are only made when we want them, and allows for local code review. For userscripts, users that want to do what this thread is about are already going to have to just trust the bot directly regardless. — xaosflux Talk 15:22, 23 May 2025 (UTC)[reply]
    That might be fine, but to me it is less preferable than the main proposal, because it would be harder to know who is maintaining what script. (I guess that wouldn't be the case if the xxxx refers to the user who asked for the script.) I'm also slightly lazy about adding a new proxy-script-creation system, too.
    A slight concern would be that the name could shift the responsibility of trust and maintaining the script to the bot instead of the actual maintainer. beef [talk] 15:24, 23 May 2025 (UTC)[reply]
    This would absolutely require that anyone's space that you were publishing to trusted the bot. By publishing a revision you would be responsible for the revision you publish. — xaosflux Talk 15:53, 23 May 2025 (UTC)[reply]
    The problem with this alternative approach is that it is just hard to manage.
    If I make a user script, it should be my own. Under a bot's userspace, you'd need a separate process for requesting creation and deletion.
    Also, this makes it harder for pre-existing scripts to be synced. People already using and developing a script at an existing location cannot choose to adopt a Git sync. And it makes it much harder for the person to disable syncing (compared to editing in your own userspace to remove the header). beef [talk] 03:32, 24 May 2025 (UTC)[reply]
  • Support. Deploying gadgets such as Twinkle and AFCH (using fragile and bespoke deploy scripts that have a lot of manual steps), and my user scripts (which I edit in VS Code then copy paste to onwiki) is a pain and not a good use of my time. Let's automate this. –Novem Linguae (talk) 15:24, 23 May 2025 (UTC)[reply]
  • I know this is not going to happen, but I consider it unfortunate that we have to do all these hacks. A more reasonable approach would be if there were a spot on Gerrit where script authors could put their gadget scripts (with CR expectations similar to on-wiki rather than normal Gerrit) and have them deployed with normal MediaWiki deployments. I guess there are all sorts of political issues preventing that, but it seems like it would be the best approach for everyone. Gadgets deserve to be first-class citizens in the Wikimedia code ecosystem. Bawolff (talk) 18:03, 23 May 2025 (UTC)[reply]
    We're a top-10 website in the world; I wouldn't call it "political" that we might be hesitant about loading executable code from an external commercial platform into our system without our review. — xaosflux Talk 23:47, 23 May 2025 (UTC)[reply]
    If the community wants to restrict the sync to only Wikimedia GitLab, there wouldn't be any objections on my part, though I don't see why we can't do GitHub as well. beef [talk] 03:37, 24 May 2025 (UTC)[reply]
    To clarify, I'm just saying that, in an ideal world, gadgets would be deployed as part of MediaWiki (i.e. they would ride the deployment train). It's weird that this stuff is being layered on top. I understand that there are political and historical reasons why this is not the case, but ideally gadgets would be treated the same as any other site JavaScript. Alas, that is not the world we are living in. Bawolff (talk) 23:55, 25 May 2025 (UTC)[reply]
    The train is slow and mediawiki developers have been known to argue with communities about practices. — xaosflux Talk 00:46, 26 May 2025 (UTC)[reply]
    Well, if gadgets rode the deployment train, they wouldn't exactly be gadgets, would they? They would be indistinguishable from JavaScript loaded by extensions. The point of gadgets was for them to be fully under community control. I think it's intentional they're managed on-wiki, although admittedly at that time JS development tended to be lightweight and the drawbacks of wiki-based editing may not have been a big deal. Making gadgets be part of an extension feels akin to making Community Configuration controlled via ops/mediawiki-config. – SD0001 (talk) 06:17, 30 May 2025 (UTC)[reply]
    There was at least one hackathon project in the past that proposed something like this, but I don't think it ever went anywhere. @Legoktm and I think either @Krinkle or @Catrope (I can't remember which unfortunately) worked on the idea of making a single extension to host the code for multiple gadgets during the Mexico City Wikimania hackathon. Oh my, that was 10 years ago now. Today I assume one of the main blockers to this idea would be finding a Foundation engineering team to claim ownership/sponsorship of the extension. -- BryanDavis (talk) 19:51, 29 May 2025 (UTC)[reply]
  • The only concern I have is that you should require the existing interface administrators be given write access to the repository on request. Otherwise this falls into the ballpark of me not personally seeing the value or using this myself but if other people think it's useful then more power to them. * Pppery * it has begun... 17:37, 25 May 2025 (UTC)[reply]
    It's not something I can require, because it involves people other than me. IAs can disable the sync by removing the sync line. I personally would give access to my repos to IAs upon request, but that's just me. dbeef [talk] 10:19, 2 June 2025 (UTC)[reply]
  • I'm highly supportive. I hope the default for devs of major scripts will become deployments from GitHub (the current ad hoc system is honestly pretty wild). Best, KevinL (aka L235 · t · c) 23:49, 27 May 2025 (UTC)[reply]

Edit Source (MiniEdit)


Currently, you can only edit the entire article.

I suggest making a mechanism that tracks individual paragraphs and displays a pencil "Edit Source" icon on the right.

You can see something similar in many web-mails (gmail.com).

Of course, the question immediately becomes what counts as a paragraph. This is not so important; several paragraphs could be combined into a larger one.

It's better than editing the entire article on 4 screens. Seregadu (talk) 04:40, 29 May 2025 (UTC)[reply]

Check out Wikipedia:MiniEdit. Remsense ‥  04:44, 29 May 2025 (UTC)[reply]
Yes, that's exactly what I'm talking about. It often happens that a user does not see some scripts on the page. I've disabled everything I can. And on Chromium 136, I don't see this pencil, like 99% of Wiki users. Seregadu (talk) 04:56, 29 May 2025 (UTC)[reply]
I'm not exactly sure what issue you're having, but this is a script you need to add! I don't think you've done that, at least not to User:Seregadu/common.js, where you would only have to copy one line to enable MiniEdit. If you need more help, don't hesitate to ask. Remsense ‥  04:59, 29 May 2025 (UTC)[reply]
I'm going to try adding the script now, but why not enable it for everyone?
This script only works on the user's side and does not create any load on the Wiki. Thanks for the script! Seregadu (talk) 05:01, 29 May 2025 (UTC)[reply]
Glad I could help! Honestly, it's always worth considering that most people aren't "power users" like you and me, and you can imagine little symbols showing up all the time being confusing or stressful for someone's grandma or a young child. Remsense ‥  05:03, 29 May 2025 (UTC)[reply]
This script edits only level 1. Very sad. Most discussions have 4 pages that are far from level 1. Seregadu (talk) 05:10, 29 May 2025 (UTC)[reply]
Because the mechanism used to edit something is the section, not the paragraph. When you click an edit button next to a paragraph and get the whole section, your user will be like "what happened?!"
And, to be honest, a button every paragraph would be a lot of clutter. Izno (talk) 05:10, 29 May 2025 (UTC)[reply]
No! Exactly every paragraph! After all, the pencil is already there and it works well. In this conversation, I can edit only the first 5 lines. Let's wait for your opinion when this conversation grows to 4 screens. Seregadu (talk) 05:14, 29 May 2025 (UTC)[reply]
I routinely edit pages much longer than that. Editing with paragraphs wouldn't be useful. Izno (talk) 05:41, 29 May 2025 (UTC)[reply]
What do you mean by 4 screens? And I definitely would not find this useful. Doug Weller talk 06:45, 29 May 2025 (UTC)[reply]

A second problem that I found just now! And where does the community see this? I'm not just adding empty lines, I'm testing the script. I see that it requires refreshing the browser cache after each text change. That's not normal; it's as if adding text removes the script from the browser cache. Seregadu (talk) 05:20, 29 May 2025 (UTC)[reply]

Yes, only level 1. This limitation of an otherwise good script makes it useless for serious lengthy discussions. Seregadu (talk) 05:26, 29 May 2025 (UTC)[reply]
Discussions usually shouldn't be edited, the tool is for editing articles. CMD (talk) 06:07, 29 May 2025 (UTC)[reply]
BTW the correct way to say "level 1" is "namespace 0" or just "article". See List of namespaces. Talk pages are namespace 1. -- GreenC 15:58, 29 May 2025 (UTC)[reply]
I finally tried this script just now; I wanted to edit my message. The script prompts me to edit the entire header of this page, not my message. This script doesn't work for me, neither in namespace 0, nor 1, nor 2. Seregadu (talk) 19:10, 29 May 2025 (UTC)[reply]
I couldn't find a link to my common.js page, nor a link to all the scripts useful to the user. You don't expect me to write them myself, do you? The obvious place, "Special pages", has nothing.
I think Wikipedia should structure useful links for the user inside their profile. Seregadu (talk) 10:25, 30 May 2025 (UTC)[reply]
@Seregadu: The link was given above, in Remsense's post of 04:59, 29 May 2025 (UTC); but it may also be found at:[reply]
So, it's already "inside his profile". --Redrose64 🌹 (talk) 10:16, 31 May 2025 (UTC)[reply]
Yes, I wasn't paying attention. I searched in the top menu and in the side menu, but not in my profile. I was not right. Yes, the script works for editing articles, but not discussions. And that's good too. Although it's strange for a Wiki to invent different text formats.
But you still haven't answered the question: why does a simple user, even one without knowledge of JS, not see a link to a library of useful scripts or styles? It is a pity if such a library exists but there is no link to it. Seregadu (talk) 16:51, 2 June 2025 (UTC)[reply]

Scary red main page banner in dark mode


This is what jumped out on the main page like an Orange bar of doom:

The English-language Wikipedia thanks its contributors for creating more than seven million articles! Learn how you can take part in the encyclopedia's continued improvement.

Light mode has a gentler desaturated yellow/orange. Can we have something less alarming than red? 174.138.213.2 (talk) 17:10, 29 May 2025 (UTC)[reply]

I've changed it to use the same style as the topbanner for now to reduce the jarring color clash; others should feel free to discuss additional improvements. — xaosflux Talk 17:20, 29 May 2025 (UTC)[reply]
I probably made a 'typo' at some point in the process of generating the dark mode colors. I've swapped it back to the original light mode and instated something a lot less red for dark. Izno (talk) 17:49, 29 May 2025 (UTC)[reply]
The template's documentation was accidentally deleted. I submitted an edit request. – Jonesey95 (talk) 02:00, 30 May 2025 (UTC)[reply]

Provenance tracking query params in the iPad app


Please remove the provenance tracker from the iPad app; it's a time-wasting misfeature that doesn't reflect Wikipedia's values. Jikybebna (talk) 19:50, 29 May 2025 (UTC)[reply]

We are not the WMF. Izno (talk) 20:08, 29 May 2025 (UTC)[reply]

Problem with Marker Position


Hi, I recently realized that all the templates I can find that do overlays, for example Template:Superimpose, do not show the correct positions if you switch to the mobile view; the overlay is shifted a bit. In the template example, the overlay moves from the center of Colorado to the south. Is this a known issue? Would anyone know how it could be fixed?

A side observation is that with Template:Location mark the position seems correct but it still shows weird. McBayne (talk) 22:33, 29 May 2025 (UTC)[reply]

This is basically not correctable. Izno (talk) 22:54, 29 May 2025 (UTC)[reply]
@McBayne This template (as well as {{superimpose2}}, {{overlay}}, and {{site plan}}, but not the various Location map templates) never sets a fixed line height. This has made all these designs dependent on a specific line height in the skin, and also on the font size of the page where they are used. Ideally, these would all have been designed with a line-height of 0, as well as taking into account the size of anything they lay on top of the base layer. This makes all of these things 'broken'. Honestly, the best way to correct this is to make a new template that fixes the problem, and phase out the older ones. —TheDJ (talk • contribs) 08:15, 30 May 2025 (UTC)[reply]
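
A minimal sketch of the direction described above, for a hypothetical replacement template; the class names are invented for illustration, and the exact rules a real template would need may differ:

```css
/* Illustrative only: zeroing the container's line height decouples the
   overlay geometry from the skin's font metrics, so markers no longer
   shift between skins or between desktop and mobile view. */
.overlay-container {
  position: relative;
  line-height: 0; /* layout no longer depends on the skin's line height */
}
.overlay-marker {
  position: absolute; /* offsets computed against the base image only */
}
```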
Looks like 1186 transclusions across the 4 templates. — Qwerfjkltalk 09:15, 30 May 2025 (UTC)[reply]
Thanks a lot for the quick response. How easy do you think a fix (with a new template) would be? How much would the templates need to change? --McBayne (talk) 11:14, 30 May 2025 (UTC)[reply]

Quarry (quarry.wmcloud.org) not working

[edit]

@Liz I doubt this is the best place to ask, but Quarry is not working at all. Pages take 5 minutes to load and it is impossible to submit a query. I am posting this on the pump in case anybody knows what is causing this or how to fix it... -1ctinus📝🗨 00:42, 30 May 2025 (UTC)[reply]

I noticed Quarry goes down every few days. If it happens again, you can use the alternative https://superset.wmcloud.org/sqllab/. – DreamRimmer 12:50, 31 May 2025 (UTC)[reply]

Editing references direct from the reflist

[edit]

Sometime fairly recently a change was made to VE that allows you to double-click on a reference in the {{reflist}} and edit it directly, as opposed to having to go track it down in the body of the article. I just want to say that this is wonderful, and a huge timesaver, and thank you to whoever made this happen. RoySmith (talk) 00:57, 30 May 2025 (UTC)[reply]

Oh yeah. Is there a userscript for non-VE? -- GreenC 04:15, 30 May 2025 (UTC)[reply]
GreenC, Factotum can do it, as can User:Ingenuity/ReferenceEditor.js, though there may be better options I don't know of. — Qwerfjkltalk 09:18, 30 May 2025 (UTC)[reply]
Oh nice. I looked at Factotum; the complexity of its options, and how many things it takes over, is kind of overwhelming, so I have yet to try it. I just installed ReferenceEditor and it's great, except that for some reason it is only able to edit a small proportion of citations. I can understand certain things, but it is unable to edit some perfectly formed, idiomatic CS1|2 citations. Maybe I need to spend time with Factotum to see what it can do. -- GreenC 15:44, 30 May 2025 (UTC)[reply]
I tried Factotum. It works better, though I wish it were a popup edit window like ReferenceEditor; still, it's a big help with citation maintenance. -- GreenC 16:00, 30 May 2025 (UTC)[reply]
Thanks for the thanks, I've passed it along to the team. It's rare and appreciated. Digging… that was phab:T54750 by the Editing team and specifically Esanders in gerrit:c/mediawiki/extensions/Cite/+/903311 (and the mountain of prior code/collaboration that it all requires!). HTH. Quiddity (WMF) (talk) 04:53, 30 May 2025 (UTC)[reply]
+1 on this! Such a useful feature. JackFromWisconsin (talk | contribs) 03:43, 2 June 2025 (UTC)[reply]

Talk page incorrectly displays as a redirect

[edit]

How do I restore Talk:The Love That Whirls (Diary of a Thinking Heart) to a "normal" talk page? The page displays as a redirect talk page, but the source text appears to be what it should be. For context: I created the article, and I asked the draft acceptor afterwards to display the article title's parenthetical in italics. That was fixed, but with the recent page moves and redirects, I'm left confused as to why this current situation is happening. Fundgy (talk) 01:34, 30 May 2025 (UTC)[reply]

The double-quotation-mark version can be accessed via an action link. I was not under the impression that double quotation marks would cause this kind of behavior (and it's apparently not a redirect, else we'd get the little note that says you've come from the other page). Izno (talk) 03:52, 30 May 2025 (UTC)[reply]
I see now that another editor adding {{italic title|all=yes}} to the article fixed this. I'm afraid I'm (basically) new to editing, and I just don't know what the course of action is here. Should the double-quoted version be deleted? My concern is just so that the "main" talk page stops showing redirect categories and "This redirect does not require a rating on Wikipedia's content assessment scale." Weird that it's essentially a redirect in name only. Fundgy (talk) 04:20, 30 May 2025 (UTC)[reply]
Not sure what page you're ultimately accessing, but I've deleted the ones that shouldn't exist. Izno (talk) 04:29, 30 May 2025 (UTC)[reply]
Thank you! Also, I just purged the page's cache, and everything seems to be in order now. Fundgy (talk) 04:38, 30 May 2025 (UTC)[reply]

JSTOR template has no page parameter?

[edit]

Why does Template:JSTOR not have a page parameter? Can that be added? If you put ?seq=5 in the URL, you should go to page 5: https://www.jstor.org/stable/25120881?seq=5 Thanks, Polygnotus (talk) 15:08, 30 May 2025 (UTC)[reply]

Template_talk:JSTOR#Template-protected_edit_request_on_30_May_2025 Polygnotus (talk) 15:13, 30 May 2025 (UTC)[reply]
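For illustration, a minimal sketch of the URL form being requested: the seq query parameter on a JSTOR "stable" URL selects a page of the scanned document, per the example above. The helper name here is hypothetical, not part of any existing template.

```python
# Hypothetical helper illustrating the URL form described above; "seq" is
# the JSTOR query parameter for jumping to a given page of a stable document.
def jstor_url(stable_id, page=None):
    url = f"https://www.jstor.org/stable/{stable_id}"
    if page is not None:
        url += f"?seq={page}"
    return url

print(jstor_url("25120881", 5))  # https://www.jstor.org/stable/25120881?seq=5
```

A page parameter on {{JSTOR}} would presumably append the same ?seq= suffix to the generated link.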

List of Wikipedians by country project

[edit]

Can somebody use a script or something to generate a big list of editors listed as participants in each of the Category:WikiProject Countries projects, or even Category:Wikipedians by WikiProject? List them as User talk:xxxxx, one per line, one after another, at User:Dr. Blofeld/Country WikiProject members. I need it for a message list for Wikipedia:The World Destubathon. It would take days to do even a few manually. ♦ Dr. Blofeld 16:46, 30 May 2025 (UTC)[reply]

 Done Polygnotus (talk) 17:32, 30 May 2025 (UTC)[reply]
Thanks, I've requested mass message rights.♦ Dr. Blofeld 11:24, 31 May 2025 (UTC)[reply]
Is there a way to find a list of the most active editors (those who've made the most substantial expansions) in science, technology, engineering, maths, medicine, and business articles, and in geography and city/village/region articles, in recent years, including good and featured article contributors etc.? I've been looking through the Science project members and it's difficult to find active editors! ♦ Dr. Blofeld 13:27, 31 May 2025 (UTC)[reply]
@Dr. Blofeld We don't have "most active editor in science/tech/engineering/maths/medicine" (the software does not keep track of that) but we do have Wikipedia:List of Wikipedians by number of edits (the top5k) and Wikipedia:List of Wikipedians by number of edits/5001–10000 and then you can filter out those who have been blocked/are inactive. Polygnotus (talk) 14:38, 31 May 2025 (UTC)[reply]
The database does record the project assessment and association of all pages (mw:Extension:PageAssessments#Database tables). So it's possible to get all pages tagged with the project, get for each page the number of edits by each editor, and then sum up the counts to get the editors with the most edits on that project in a given timeframe. We're going to add similar information (though from the "what are this user's projects" side rather than "what are this project's users") to XTools soon (we're doing a lot of stuff these days, so the change won't go live for a while). This would probably be a slow query and should be done in batches (such as: first 100 pages, 101-200, and so on). — Alien333 14:50, 31 May 2025 (UTC)[reply]
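The join-and-aggregate logic described above can be sketched offline. This toy uses an in-memory SQLite database with heavily simplified stand-ins for the page_assessments and revision tables (the real replica schemas have more columns and use actor IDs rather than user names, so treat the schema here as an assumption); real queries belong on Quarry.

```python
import sqlite3

# Toy stand-ins for the MediaWiki page_assessments and revision tables
# (real schemas differ: actor IDs instead of names, timestamps, etc.).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE page_assessments (pa_page_id INTEGER, pa_project TEXT);
CREATE TABLE revision (rev_page INTEGER, rev_user TEXT);
""")
cur.executemany("INSERT INTO page_assessments VALUES (?, ?)",
                [(1, "Science"), (2, "Science"), (3, "History")])
cur.executemany("INSERT INTO revision VALUES (?, ?)",
                [(1, "Alice"), (1, "Alice"), (2, "Bob"), (3, "Carol")])

# Pages tagged with the project, joined to their revisions; editors ranked
# by how many edits they made to those pages.
cur.execute("""
SELECT rev_user, COUNT(*) AS edits
FROM page_assessments
JOIN revision ON rev_page = pa_page_id
WHERE pa_project = 'Science'
GROUP BY rev_user
ORDER BY edits DESC
""")
rows = cur.fetchall()
print(rows)  # [('Alice', 2), ('Bob', 1)]
```

Carol's edit is excluded because her page is tagged with a different project; batching over page ID ranges would wrap a window around the WHERE clause.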
@Alien333 Interesting, is there something I can read about the improvements to XTools? Currently its technically possible but it would require so many API calls that it would be a bad idea. Polygnotus (talk) 14:55, 31 May 2025 (UTC)[reply]
A list of everything that's happening/planned is at phab:tag/xtools. Feel free to drop a task if you've got a suggestion. Stuff that's done and will 100% be in the next update is in the "Pending deployment" column. Changes that still need review are at [1].
It's perfectly doable in reasonable time, just not through the API. The go-to solution for such mass queries to the database that can still finish in reasonable time is Quarry. — Alien333 15:03, 31 May 2025 (UTC)[reply]
@Alien333 Thank you! Polygnotus (talk) 15:04, 31 May 2025 (UTC)[reply]
@Dr. Blofeld: well, I couldn't help fiddling with the idea. Turns out the query takes only a few minutes in the end.
The MySQL optimiser is a bit dumb, so it can't be one query: first you have to go to a fork of [2], change the project name on line 5, start it, and wait a few minutes; then you get a comma-separated string of user IDs. Then go to a fork of [3], replace line 4 with what you got in the previous step, and poof, you get the list of the 100 most active users in the given WikiProject, with those with the most edits first.
It's a bit of a mess, but it's probably still much faster than doing it by hand. — Alien333 20:09, 31 May 2025 (UTC)[reply]
Thanks both! Is there a way, Alien, that you could copy it into a wiki list? ♦ Dr. Blofeld 09:33, 1 June 2025 (UTC)[reply]
Quarry has a "Download data" button at the right that lets you download a CSV of the result; as there's only one value per row here, it gives the names one per line. — Alien333 09:48, 1 June 2025 (UTC)[reply]
I don't see why you don't just join actor (or actor_revision, which is a little faster since you're already joining revision anyway). Also, you don't need to go through the page table at all, since page_assessments.pa_page_id is already a page id and that's all you're using it for; the revision_userindex view is usually a pessimization unless you already have a set of actor ids you're looking for; you don't need to select COUNT(*) just so that you can order by it; and you're aware that you're throwing away the ordering in that second query, right? quarry:query/94218 does it in one step; quarry:history/94218/1013390/982681 for a version showing the edit counts. —Cryptic 21:13, 1 June 2025 (UTC)[reply]
I wasn't joining on actor because the MySQL optimiser is dumb and last time I checked it didn't use the index when doing the join, which meant it scanned the whole actor table and took ages. Maybe related to your other points, though.
You're 100% right about the join on page, and the other things you said; and no, I'd forgotten that the second query threw the ordering away.
I'm a bit rusty at SQL :). — Alien333 05:33, 2 June 2025 (UTC)[reply]

I can’t Log In!

[edit]

So, unfortunately, I was logged out of my account, and whenever I try to log in, the following message appears: “There seems to be a problem with your login session; this action has been canceled as a precaution against session hijacking.” It also mentions that it may be due to my cookie settings, but I can’t access those due to this exact problem. If anyone could help me, I’d be very thankful. BTW, my account is “Long-live-ALOPUS”. This may have something to do with my account completing one year, but I’m able to log in on other devices, just not my iPad. Could it be a problem on my side? I don’t think I forgot my password. Please help. 2405:201:550B:B035:B588:DBDC:3F72:E094 (talk) 11:21, 31 May 2025 (UTC)[reply]

Can you access https://auth.wikimedia.org? It redirects to https://www.wikimedia.org/. If the redirect works, then try deleting your cookies on the iPad. See [4]. If you don't want to delete data for all websites, then try wikimedia.org and wikipedia.org. PrimeHunter (talk) 12:47, 31 May 2025 (UTC)[reply]
Yes, I’m even signed in. The problem is only in the English Wikipedia. I’ll try deleting the cookies. Thank you for your help! 😄 2405:201:550B:B035:FC82:3345:E73B:F763 (talk) 14:13, 31 May 2025 (UTC)[reply]
I deleted the website data for Wikipedia on my iPad, but it still hasn’t worked... what should I do now?! 😞 2405:201:550B:B035:FC82:3345:E73B:F763 (talk) 14:28, 31 May 2025 (UTC)[reply]
It’s working on my other devices, but not on my iPad. What should I do?! 2405:201:550B:B035:64BD:3EBA:4565:5A6C (talk) 14:49, 31 May 2025 (UTC)[reply]
Login uses wikimedia.org. Did you delete the website data for both wikimedia.org and wikipedia.org? PrimeHunter (talk) 17:41, 31 May 2025 (UTC)[reply]
Yes. Even after two days, it’s not working on this device. 2405:201:550B:B035:9D0C:8D1C:83DD:3771 (talk) 03:58, 1 June 2025 (UTC)[reply]
I cleared all the website data in my settings, but it’s still not working. 2405:201:550B:B035:9D0C:8D1C:83DD:3771 (talk) 04:05, 1 June 2025 (UTC)[reply]
Check if the date and time are correct on your device. — xaosflux Talk 17:45, 31 May 2025 (UTC)[reply]
Yes, they are correct. 2405:201:550B:B035:9D0C:8D1C:83DD:3771 (talk) 03:59, 1 June 2025 (UTC)[reply]
Like, this is me on another iPad, but it’s not my main device. Long-live-ALOPUS (talk) 04:14, 1 June 2025 (UTC)[reply]
Try opening an incognito window (that's on Chrome; I think Safari and Firefox call it private browsing) and try to login there. If that works, that's a pretty good indication that you've still got some stale cookies that need removing. RoySmith (talk) 18:04, 1 June 2025 (UTC)[reply]
I use an old iPad (the first generation of the iPad Air), so it doesn’t have that feature. 2405:201:550B:B035:CD9E:1317:5009:A39B (talk) 07:07, 2 June 2025 (UTC)[reply]
That's a 12-year-old machine (counting from when it was introduced). The newest version of iOS it should support is iOS 12. iOS 12 comes with Safari 12, which most definitely has "Private Browsing". It is not unlikely that there is some sort of incompatibility between iOS 12 devices and the recent changes to the login methodology, as it was likely never tested. Have you tested other-language Wikipedias? What about https://en.wikivoyage.org? —TheDJ (talkcontribs) 09:27, 2 June 2025 (UTC)[reply]
Well, I don’t have that private browsing feature; I think there’s a content filter, that’s why. Also, yes, I’m able to log in to my Arabic and Hindi Wikipedia accounts (which are the same name as my English one), but not Wikivoyage. Also, I’m able to log in from other, non-permanent devices, so this is a problem in my iPad. 2405:201:550B:B035:CD9E:1317:5009:A39B (talk) 12:57, 2 June 2025 (UTC)[reply]

Unwanted box

[edit]

For some reason I'm now seeing a box at the top of every article page with Article Links Tools and Include URLs. All I did was update my common.js to allow mass messages here. ♦ Dr. Blofeld 11:24, 31 May 2025 (UTC)[reply]

@Dr. Blofeld: You also imported User:Polygnotus/Scripts/ListGenerator.js which makes that box. PrimeHunter (talk) 12:36, 31 May 2025 (UTC)[reply]
Ah OK, thanks! ♦ Dr. Blofeld 12:38, 31 May 2025 (UTC)[reply]

Category not retained in draft with AFC submission template

[edit]

Hello, I'm noticing an issue with Draft:Baba Mosque where manually added categories (such as Category:AfC draft submissions) are not retained or do not appear in the rendered page after saving, especially when the template is used.

Steps to reproduce:

1] Go to Draft:Baba Mosque

2] Add a category like (Category:AfC draft submissions)

3] Save the page — the category doesn’t appear

Is this suppression intentional due to the template? Or is there a technical issue at play? Thanks! Jesus isGreat7 ☾⋆ | Ping Me 11:50, 31 May 2025 (UTC)[reply]

This is a tracking category that is added automatically by the {{AfC submission}} template and does not render as text. – DreamRimmer 12:45, 31 May 2025 (UTC)[reply]
The draft uses {{Draft categories}} which deliberately only displays the categories at the location without actually adding the page to the categories. Don't change this. The categories would be added if they were outside {{Draft categories}} but don't do that. AfC categories should not be added manually. I have added {{AfC submission}} instead.[5] PrimeHunter (talk) 13:01, 31 May 2025 (UTC)[reply]

Request: example of markup for tickable checkboxes

[edit]

(Context) I would like to add a section to an article talk page which contains a list of checkboxes which I can tick and then save the section. Short of using 'pre' tag with '[ ]' and '[X]', is there a civilized way to do it? Gryllida (talk, e-mail) 12:50, 31 May 2025 (UTC)[reply]

@Gryllida: You could use {{Checkbox 2 (simple)}} or another template linked there. PrimeHunter (talk) 13:10, 31 May 2025 (UTC)[reply]
I just use
and don't really need to bother with a parameterized template whose name or parameters I can't remember. Mathglot (talk) 19:07, 1 June 2025 (UTC)[reply]

Gadget to make delete button more accessible?

[edit]

Is there some gadget that would modify Vector Legacy 2010 skin and

  • move the "delete" button out of the "more" panel to make it more accessible?
  • maybe move it that way only if a delete template is on the page?

Note: I know that I am not an admin on Wikipedia. I have admin rights on another, much smaller, MediaWiki wiki, where there is a backlog of many pages to be deleted. Currently I need to click to open a page, click to view the history and maybe investigate, click to unroll the panel, click delete, and confirm the delete. I would gladly simplify this process, as I will do it about 1,200 times or more. Mateusz Konieczny (talk) 14:25, 31 May 2025 (UTC)[reply]

@Mateusz Konieczny: Some of our deletion templates make a delete link which is only visible to administrators and has a prefilled reason. If you post a link to a page with a deletion template at your wiki then we can maybe help more. PrimeHunter (talk) 17:38, 31 May 2025 (UTC)[reply]
@PrimeHunter: https://wiki.openstreetmap.org/wiki/Template:Delete Mateusz Konieczny (talk) 21:08, 1 June 2025 (UTC)[reply]
@Mateusz Konieczny: Your wiki already has the required sysop-show code in Openstreetmap:MediaWiki:Common.css and Openstreetmap:MediaWiki:Group-sysop.css so it looks like you only have to edit the template with code like "Message for admins" in the source of {{Db-meta}}. PrimeHunter (talk) 21:51, 1 June 2025 (UTC)[reply]
@PrimeHunter: Thanks, I got it working! Mateusz Konieczny (talk) 00:55, 2 June 2025 (UTC)[reply]
Mateusz, after you post the link, if the button has a CSS class defined for it, you may be able to move it yourself, using custom code at your common.css page. If not, then probably a User script would do it. Mathglot (talk) 18:59, 1 June 2025 (UTC)[reply]

Finding raw text CN tags

[edit]

Quite a few articles contain a <sup>[''[[Wikipedia:Citation needed|citation needed]]'']</sup> tag, usually added via visual edit. These should be converted into standard cn tags. I fixed one at Special:Diff/1293275217, and a search for insource:"[[Wikipedia:Citation needed" returns 145 results. But when I use the same search term in JWB to create a list of articles to fix, it starts adding infinitely many articles to the list. What search term should I use to generate a correct list? CX Zoom[he/him] (let's talk • {CX}) 20:34, 31 May 2025 (UTC)[reply]

Try insource:"Citation needed" insource:/\[\[Wikipedia:Citation needed/i. The latter is a regex search; insource:"[[Wikipedia:Citation needed" skips all special characters. Ponor (talk) 20:40, 31 May 2025 (UTC)[reply]
The Special:Search filters the articles correctly with this search term. But, WP:JWB keeps adding every article with a cn tag in it, probably a bug with JWB? CX Zoom[he/him] (let's talk • {CX}) 20:47, 31 May 2025 (UTC)[reply]
@CX Zoom: Did you limit the JWB search to mainspace only? By default, it includes all namespaces. Still, "[[Wikipedia:Citation needed" does not do what you think it does; use regex for exact results. Ponor (talk) 20:52, 31 May 2025 (UTC)[reply]
Limiting by namespace works. Thank you! CX Zoom[he/him] (let's talk • {CX}) 20:58, 31 May 2025 (UTC)[reply]
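Once the list is generated, the actual replacement is a simple substitution. A sketch of it in Python follows; the pattern covers only the exact raw form quoted above, and real occurrences may vary in spacing or capitalisation, so treat the regex as an assumption rather than a complete fix.

```python
import re

# Matches the raw visual-editor output quoted above:
#   <sup>[''[[Wikipedia:Citation needed|citation needed]]'']</sup>
RAW_CN = re.compile(
    r"<sup>\[''\[\[Wikipedia:Citation needed\|citation needed\]\]''\]</sup>"
)

def fix_raw_cn(wikitext):
    # Replace each raw tag with the standard template.
    return RAW_CN.sub("{{Citation needed}}", wikitext)

sample = "Claim.<sup>[''[[Wikipedia:Citation needed|citation needed]]'']</sup>"
print(fix_raw_cn(sample))  # Claim.{{Citation needed}}
```

In JWB this would correspond to a regex find-and-replace rule with the same pattern and replacement.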

How to pass an article into Python code in AWB

[edit]

So, I have written a Python script that takes a file, does some operations on the text, and replaces the old text with the new text. Now, Wikipedia:AutoWikiBrowser/User manual#Tools allows external scripts, but I don't understand how to pass the article through the Python script. What additional code is needed for it? CX Zoom[he/him] (let's talk • {CX}) 22:50, 31 May 2025 (UTC)[reply]

I suggest asking this at Wikipedia talk:AutoWikiBrowser. RoySmith (talk) 20:32, 1 June 2025 (UTC)[reply]
(When you do, I suggest showing the code that you used to do so, or at the very least saying whether you used Pywikibot or hand-jammed things.) Izno (talk) 20:39, 1 June 2025 (UTC)[reply]
CX Zoom, to my understanding, you have a Python script read the content from a file and then write the changed content back to the file.
So you could set the "Program or script" field to the Python executable and pass the path to the Python script as an argument; then you'd have the script do something like:
import sys

filename = sys.argv[1]  # path of the file to process, supplied on the command line
with open(filename, "r") as file:
    content = file.read()
with open(filename, "w") as file:
    file.write(perform_changes(content))  # perform_changes() is your edit logic
— Qwerfjkltalk 11:04, 2 June 2025 (UTC)[reply]
@Qwerfjkl: The structure of the script is similar, and I understand the "Program or script" field. But I don't understand the "Arguments/Parameters" field. Do we enter the same value in both fields? CX Zoom[he/him] (let's talk • {CX}) 18:22, 2 June 2025 (UTC)[reply]
I am only guessing (and you should ask this at AWB), but Arguments/Parameters would be something to specify the name of the file to be processed (the variable "filename" in the above code). If any other arguments were required by the Python program, they would also be given there, similar to how you would run a program from a command line. Johnuniq (talk) 05:43, 3 June 2025 (UTC)[reply]
CX Zoom, as I said, you can put the path to the Python executable (python.exe) in the "Program or script" field, and the path to the Python script under "Arguments/Parameters"; this is equivalent to running python script.py in the terminal. — Qwerfjkltalk 15:11, 3 June 2025 (UTC)[reply]
Thank you very much everyone. Issue resolved now. CX Zoom[he/him] (let's talk • {CX}) 22:06, 3 June 2025 (UTC)[reply]

Is "Related changes" working properly? (example: Category:Use Malaysian English)

[edit]

Category:Use Malaysian English transcludes {{Parent monthly clean-up category}}. That template was modified on 31 May 2025, but when I click on "Related changes" in the sidebar of Category:Use Malaysian English, the resulting page says No changes during the given period match these criteria. I have been having a feeling that "Related changes" has not been working properly for a few months, but this is the first time that I have been able to find a concrete example. Am I misunderstanding what "Related changes" is supposed to show? I use it to try to figure out why a page that has not been modified in a while is suddenly showing a change of some kind (e.g. a new category or syntax error). – Jonesey95 (talk) 14:27, 2 June 2025 (UTC)[reply]

@Jonesey95: Related changes doesn't show changes to pages which are transcluded. It only shows changes to pages which are linked on the page or have a link to the page. See more at Help:Related changes. PrimeHunter (talk) 19:51, 2 June 2025 (UTC)[reply]
That is a helpful link. I see an explicit statement there: Changes to transcluded pages like templates are not listed, unless there is also a link to or from the page. Maybe it has just been a coincidence that clicking on "Related changes" has often worked for me in these situations. My question, then, is: if a page that has not been modified in a while is suddenly showing a change of some kind (e.g. a new category or syntax error), what is a good way to figure out what caused the change? I seem to remember a script that sorted "Pages transcluded onto the current version of this page" by modified date, which would probably work, but I found it difficult to live with: when I was looking for a specific template, I could never find it in the long list because the list was not alphabetized. – Jonesey95 (talk) 20:06, 2 June 2025 (UTC)[reply]
The script is User:Anomie/previewtemplatelastmod, but I also found it difficult to live with. I gave up using it because both the order and the added information were unwanted most of the time and made it harder to find templates of interest. @Anomie: It's a great script when I do want the changes it makes. I would love to reinstall it if I had to click something on an edit page to activate it. PrimeHunter (talk) 21:17, 2 June 2025 (UTC)[reply]

Script error from Module:Bracket

[edit]

2025 FIFA Club World Cup qualification seems to have script errors: Template:4TeamBracket-Info, Template:8TeamBracket-2Leg and Template:16TeamBracket-Info have not been added to Module:Bracket. Achmad Rachmani (talk) 15:05, 2 June 2025 (UTC)[reply]

@Achmad Rachmani ask @Ahecht: at Module talk:Bracket to add those in. Nthep (talk) 15:23, 2 June 2025 (UTC)[reply]
@Achmad Rachmani, @Nthep: I'm working on those now... --Ahecht (TALK PAGE) 15:35, 2 June 2025 (UTC)[reply]
cheers. Nthep (talk) 15:44, 2 June 2025 (UTC)[reply]
@Nthep @Achmad Rachmani:  Done --Ahecht (TALK PAGE) 16:20, 2 June 2025 (UTC)[reply]

Complex find and replace

[edit]

Could someone who is good at REGEX please enact the change described at species:Wikispecies:Village Pump#Template:VN - technical change needed?

Feel free to leave the results in my user space, if you're not able to edit a protected template on that project. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:35, 2 June 2025 (UTC)[reply]

Answered over there. – Jonesey95 (talk) 18:06, 2 June 2025 (UTC)[reply]
Likewise; thank you. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:37, 2 June 2025 (UTC)[reply]
Checkmark This section is resolved and can be archived. If you disagree, replace this template with your comment. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:38, 2 June 2025 (UTC)[reply]

Simple summaries: editor survey and 2-week mobile study

[edit]

Hi everyone! I'm writing on behalf of the Web Team. Over the past year, the team has been exploring ways to make the wikis more accessible to readers globally through different projects around content discovery. One of the ideas we've been discussing is the presentation of machine-generated, but editor-moderated, simple summaries for readers. These summaries take existing Wikipedia text and simplify it for interested readers. Readers will show interest by opting into the feature and clicking to open the summary on pages where it is available. As part of our exploration of this idea, in the next two weeks we will be launching:

1. An editor survey on the English, Spanish, French, and Japanese Wikipedias. This survey will ask editors about their preferences for generating, editing, and moderating summaries, as well as their thoughts on the project overall. We will use the data from this survey to propose the initial moderation workflows for a future version of a summary feature.

2. A two-week experiment on the mobile website. This experiment will allow a small set (10%) of readers to opt into and open pre-generated summaries on a set of articles for two weeks. After two weeks, we will turn the experiment off and use the data collected to determine whether users are interested in summaries and open them frequently, as well as whether summaries aid the overall experience.

After the completion of these two steps, we'll be publishing our results on the project page and reaching out to discuss whether to proceed with building this feature, and to provide some options for its associated workflows for editors. You are welcome to leave questions about the project here or on the project talk page. EBlackorby-WMF (talk) 18:20, 2 June 2025 (UTC)[reply]

  • Yuck. --MZMcBride (talk) 20:52, 2 June 2025 (UTC)[reply]
  • Yuck. —Cryptic 21:46, 2 June 2025 (UTC)[reply]
    Yuck. Also, this should probably be at VPPR or VPWMF. Cremastra (uc) 21:58, 2 June 2025 (UTC)[reply]
    @EBlackorby-WMF But seriously. I'm grinning with horror. Just because Google has rolled out its AI summaries doesn't mean we need to one-up them.
    I sincerely beg you not to test this, on mobile or anywhere else. This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries, which is what these are, although here the word "machine-generated" is used instead.
    You also say this has been "discussed", which is thoroughly laughable, as the "discussion" you link to has exactly one participant: the original poster, who is another WMF employee. Cremastra (uc) 22:04, 2 June 2025 (UTC)[reply]
  • What a coincidence! I had just read this article (https://www.theverge.com/news/676933/gmail-ai-summaries-workspace-android-ios) a day ago and wondered if there would be a similar feature on Wikipedia. As long as this machine/AI-generated summary feature is opt-in, I don't see any downsides to having it available for interested readers. The attention spans of the younger generations are shrinking, and some would rather read a short summary of the World War II article than a 13,033-word long article; this feature would be useful and beneficial for them. Some1 (talk) 22:43, 2 June 2025 (UTC)[reply]
    We can read the lead, which is a convenient, short summary written by real people. Cremastra (uc) 22:45, 2 June 2025 (UTC)[reply]
    Have you seen our leads lately? Izno (talk) 22:49, 2 June 2025 (UTC)[reply]
    All right, they're a reasonably short summary. In any case, even in articles with longer leads, like Romeo and Juliet, it is possible to skim over or ignore the parts that don't interest me and still extract valuable information. Cremastra (uc) 22:51, 2 June 2025 (UTC)[reply]
    AI-generated simple summary of Dopamine
    The lead of Romeo and Juliet isn't as long as the lead of World War II, which I'd linked. It seems like these AI-generated simple summaries are ~5 sentences long, which is much shorter (and more digestible) than the average lead of a (non-stub) article. Some1 (talk) 02:28, 3 June 2025 (UTC)[reply]
    Also, concerns about this feature should focus on the "This summary has not been checked for verifiable accuracy" part, not because "it's AI". Some1 (talk) 02:39, 3 June 2025 (UTC)[reply]
    The first paragraph is generally the digestible summary of the summary. This is enforced technologically in mobile views, which is where most of the views by the above-maligned younger generations are going to come from, as only the first paragraph is shown before the infobox. For World War II, that is six sentences. CMD (talk) 04:58, 3 June 2025 (UTC)[reply]
@EBlackorby-WMF Hi! As you can tell, your proposal does not align with what the community actually wants.
As you may or may not be aware, the WMF and the Wikipedia community have a very difficult and tense relationship.
It sounds like you guys already did a lot of work, without getting input from the community.
You link to this with the text "we've been discussing", but that must've been an internal WMF discussion, because no one responded to that post.
Perhaps the Movement Communications team forgot to actually communicate with the movement?
I recommend stopping, and in the future asking for feedback at a far far earlier stage (but of course I know you won't).
There are many people here who are happy to help you by saying why we dislike certain ideas. But you need to involve those people at an early stage (during brainstorming), otherwise it is difficult to change course and you've wasted a lot of time and energy.
The WMF as a whole makes this same mistake over and over and over again. If you want to hear all the downsides and problems with a proposal, you can ask me on my talkpage. Polygnotus (talk) 05:15, 3 June 2025 (UTC)[reply]
How can you tell that from 5 people responding? Have you run your own research into this? —TheDJ (talkcontribs) 14:01, 3 June 2025 (UTC)[reply]
@TheDJ Let's not argue for the sake of arguing. It might confuse them. This isn't a distro-war. Polygnotus (talk) 15:30, 3 June 2025 (UTC)[reply]
  • Keep AI out of Wikipedia. That is all. WMF staffers looking to pad their resumes with AI-related projects need to be looking for new employers. Carrite (talk) 16:01, 3 June 2025 (UTC)[reply]
  • I'm glad that the WMF is thinking about a solution to a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that they are useful, but too often produce misinformation not present in the article they "summarise". Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. —Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)[reply]
    But 99% of the AI that consumers knowingly interact with is trained on Wikipedia, so they don't need wikipedia.org for that. So the WMF is proposing making a shittier version of something that already exists. Polygnotus (talk) 16:49, 3 June 2025 (UTC)[reply]
    It would be good if we had our own version of it, where we could control what is shown and how it is shown, instead of having a for-profit company modify our content as they please with no way for anyone to do anything about it, and no free and open alternative. Matma Rex talk 17:39, 3 June 2025 (UTC)[reply]
    That appears to be based on a bunch of incorrect assumptions. It is not like a nuke, we don't need to have it just because others do.
    we could control what is shown and how it is shown Being able to set a system prompt is not control; you'd have to train your own model, which means either copyright violations on a massive scale or training a model exclusively on Wikipedia data, meaning it would be completely inferior to what is available.
    instead of having a for-profit company modify our content as they please with no way for anyone to do anything about it Have you read WP:REUSE? This is what you signed up for.
    and no free and open alternative What are you talking about? Anyone can download ollama. https://ollama.com/ The WMF does not have the money and brainpower required to develop a serious alternative to the existing models, and if they try that is a clear indication that they don't understand their role. But the screenshot says that the name of the model is Aya. Aya is a family of models by Cohere Labs. https://cohere.com/research/aya Which is a for profit company. Polygnotus (talk) 18:10, 3 June 2025 (UTC)[reply]
    @Polygnotus Your comparison to nuclear bombs seems out of proportion.
    Being able to set a system prompt is not control I don't mean a system prompt, I mean the user interface around the summary (see the mockup earlier in the thread, with ample warnings and notes, and compare it to the summary in Google or whatever else) and I mean the editorial control to hide or override these summaries if they're egregiously wrong, which I hope will be available if this experiment becomes a real feature.
    Have you read WP:REUSE I think it's a bit rude of you to imply I don't know what I'm talking about. Anyway, have you seen how the content is actually presented by the various LLM companies? They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it.
    What are you talking about? Anyone can download ollama Famously, everyone on the planet has a computer able to run large language models, and will not mind waiting several seconds or minutes for the results to come out. Oh, wait. Local models are only a viable alternative for a small group of people.
    I don't think you replied to what I said, only to things you imagined I said. I'm happy to argue for a bit, but please slow down. Matma Rex talk 21:01, 3 June 2025 (UTC)[reply]
    They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it. DuckDuckGo does these days. Izno (talk) 21:06, 3 June 2025 (UTC)[reply]
    @Matma Rex The nuke thing is a famous example of game theory, see Mutually Assured Destruction for more.
    I mean the user interface around the summary (see the mockup earlier in the thread, with ample warnings and notes, and compare it to the summary in Google or whatever else) and I mean the editorial control to hide or override these summaries if they're egregiously wrong, which I hope will be available if this experiment becomes a real feature. People do not read banners and warnings, see Banner blindness. You can never make a banner big enough to force people to read it. override these summaries if they're egregiously wrong Even the example they provided is already egregiously wrong; of course they will be. Having humans override them after the fact is not a reasonable solution to a giant problem.
    I don't think WP:REUSE is a very popular page, and there are tons of people who don't realize that basically anyone can copy anything from Wikipedia, and no one does anything about it, even if they do not follow the terms of the license.
    have you seen how the content is actually presented by the various LLM companies? They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it. Yes, I have, which is why my opinion is what it is.
    Local models are only a viable alternative for a small group of people. agreed. You talked about no free and open alternative which is why I mentioned Ollama.
    please slow down I mean if they really do this I think we've lost the war and I'll just leave Wikipedia. Or set up an alternative and then leave. Polygnotus (talk) 21:13, 3 June 2025 (UTC)[reply]
    What war? Who's against whom? And what does MAD have to do with this discussion? Do you think we're building Skynet here or something? I am baffled and at a loss as to how to reply to this. Matma Rex talk 21:18, 3 June 2025 (UTC)[reply]
    @Matma Rex
    You wrote what does MAD has to do with this discussion? in response to me writing The nuke thing is a famous example of game theory, see Mutually Assured Destruction for more. which was my response to Your comparison to nuclear bombs seems out of proportion in response to me writing It is not like a nuke, we don't need to have it just because others do.
    See how meta-conversations are near impossible on Wikipedia (and real life)? Polygnotus (talk) 21:24, 3 June 2025 (UTC)[reply]
    One of the reasons many prefer chatGPT to Wikipedia is that too large a share of our technical articles are way way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. —Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)[reply]
    @Femke You seem to ignore this comment where I explained that the WMF can't compete with AI companies whose core business is to develop AI models, the fact that a model trained exclusively on Wikipedia data would be far inferior to a model trained on a far far larger dataset, and the fact that they are using Aya. as long as we have good safeguards in place What do you mean? Polygnotus (talk) 18:37, 3 June 2025 (UTC)[reply]
    As in: moderation before something is put to readers, rather than after the fact. Which would in practice restrict the feature to high-priority technical articles, given that we have limited editor time for this. I don't know enough about the specifics of Aya to comment intelligently there. —Femke 🐦 (talk) 18:42, 3 June 2025 (UTC)[reply]
    @Femke I think you know that is not what the WMF is proposing. So your comments make no sense. we might try this They are not proposing that we try anything. They are proposing giving the most important screen real estate we have (the WP:LEAD) of every article to a for-profit company. Polygnotus (talk) 18:45, 3 June 2025 (UTC)[reply]
    In the comment above, they say that the moderator workflow is still to be determined. You're probably right they don't have a 'check first' workflow in mind, but if there is consensus to implement this (and it seems from this discussion so far that there probably isn't), I imagine the community would only be okay with this with extremely strong moderation in place. Like, the CMD example below is something that needs to be avoided at all costs.
    Perhaps it's time to start a WikiProject and some type of contest to fix the problem identified and ensure we write articles that people can actually understand. My Challenges seem not to work as an encouragement. —Femke 🐦 (talk) 18:52, 3 June 2025 (UTC)[reply]
    You think people are lining up to check the work of an AI model? Especially when summarizing complicated technical topics most people don't even understand? Polygnotus (talk) 18:58, 3 June 2025 (UTC)[reply]
    I think AGF applies here. — Qwerfjkltalk 18:52, 3 June 2025 (UTC)[reply]
    @Qwerfjkl What do you mean? No one believes it is malice, right? Polygnotus (talk) 18:53, 3 June 2025 (UTC)[reply]
    Well, hyperbolic, then. — Qwerfjkltalk 18:55, 3 June 2025 (UTC)[reply]
    ? Polygnotus (talk) 19:17, 3 June 2025 (UTC)[reply]
  • A truly ghastly idea. In other words: Yuck. Since all WMF proposals steamroller on despite what the actual community says, I hope I will at least see the survey and that—unlike some WMF surveys—it includes one or more options to answer "NO". Yngvadottir (talk) 17:02, 3 June 2025 (UTC)[reply]
    It sure looks like they are planning to ask casual readers who use the mobile app. And if you ask them, their answer would probably be "yes". But that doesn't mean that it is a good idea. And introducing AI summaries would probably lead to a fork and an exodus. I would honestly be shocked if AI is not the final straw in the relationship between the WMF and the community. Polygnotus (talk) 17:17, 3 June 2025 (UTC)[reply]
Laudable goal, but if it is to go through, it should be only if established editors, i.e. extended confirmed editors, decide if the generated summary can supersede the current lead, or decide that the generated content requires modifications before using. – robertsky (talk) 19:03, 3 June 2025 (UTC)[reply]
@Robertsky if the generated summary can supersede the current lead That is not what they are proposing at all... if established editors, i.e. extended confirmed editors, decide that is also not what they are proposing decide that the generated content requires modifications before using that is also not what they are proposing. Polygnotus (talk) 19:06, 3 June 2025 (UTC)[reply]
@Polygnotus, The lead is supposed to be the summary of the article. Why have another machine generated summary if the lead is doing the job? editor moderated is what they are proposing, and they asked for editors' preferences for generating, editing, and moderating summaries. So I am suggesting as such. – robertsky (talk) 19:17, 3 June 2025 (UTC)[reply]
@Robertsky Why have another machine generated summary if the lead is doing the job? Are you asking me that? That is the WMF's proposal, and I am saying it is a bad idea...
Look at the screenshot. It shows both the current lead and the AI summary that contains multiple errors.
You think people are lining up to check the work of an AI model? Especially when summarizing complicated technical topics most people don't even understand?
My brother in Zeus, Cohere Labs is worth billions. Do you want Wikipedia volunteers to work for them for free??? You do realize that AI companies hire people to do the work you seem to think should be done by unpaid volunteers?
https://time.com/6247678/openai-chatgpt-kenya-workers/ Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic Polygnotus (talk) 19:19, 3 June 2025 (UTC)[reply]
@Polygnotus, I am not disagreeing with you... 😉 – robertsky (talk) 19:29, 3 June 2025 (UTC)[reply]
Praise be to Zeus! Polygnotus (talk) 19:30, 3 June 2025 (UTC)[reply]
A note that the WMF has begun requesting responses to surveys via the QuickSurveys extension, so some of you (like me) may get a prompt inviting you to the survey if it's enabled. Some of the questions... aren't great if I'm honest. – Isochrone (talk) 20:45, 3 June 2025 (UTC)[reply]
@Isochrone How can we opt in? Can we get some screenshots? Polygnotus (talk) 20:49, 3 June 2025 (UTC)[reply]
https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq
Since the WMF is willing to be this sneaky, I don't think we should feel guilty if we fill in the survey a couple hundred times. Polygnotus (talk) 21:04, 3 June 2025 (UTC)[reply]
Whilst I am not against sharing the survey, let's not intentionally skew the results :) – Isochrone (talk) 21:05, 3 June 2025 (UTC)[reply]
Let's intentionally skew the results! The WMF intentionally skewed it by picking who to show it to; the community should skew the results to tell the WMF to stop trying to put AI in Wikipedia! Polygnotus (talk) 21:14, 3 June 2025 (UTC)[reply]
  • If this were to actually happen, some or many readers would just glance at the summary instead of reading the article. Since the summary will form the glancers' entire impression of the subject, it needs to be extremely accurate. I suspect it is often not. Even if editor moderation helps this problem, you may as well just go to Simple English Wikipedia and get the same thing but actually made by humans. doozy (talkcontribs)⫸ 20:54, 3 June 2025 (UTC)[reply]
  • Haven’t we been getting good press for being a more reliable alternative to AI summaries in search engines? If they’re getting the wrong answers, let’s not copy their homework. 3df (talk) 21:16, 3 June 2025 (UTC)[reply]
  • Oppose. We already have summaries of our encyclopedia articles: the lead sections of our encyclopedia articles are the summaries of the article. Also, Wikipedia is already a massive collection of great summaries, because writing an encyclopedia (tertiary source) is the skill of summarizing secondary sources such as newspapers and books. Also, our leads (summaries) are so good that Google and other search engines use them in our knowledge panels. Wikipedia and AI are in the same business (summarizing) and we humans at Wikipedia are better at it than AI. I see little good that can come from mixing in hallucinated AI summaries next to our high quality summaries, when we can just have our high quality summaries by themselves. –Novem Linguae (talk) 22:12, 3 June 2025 (UTC)[reply]

The Dopamine summary


I've now read the example image above, File:Dopamine simple Summary.png, which is the only example image given at mediawikiwiki:Reading/Web/Content Discovery Experiments/Simple Article Summaries. Here is the summary:

Dopamine is a neurotransmitter, a chemical messenger that carries signals between brain cells. It plays a vital role in several brain functions, including emotion, motivation, and movement. When we experience something enjoyable or receive a reward, our brain releases dopamine, creating a sense of pleasure and reinforcement. This neurotransmitter also helps us focus and stay motivated by influencing our behavior and thoughts. Dopamine imbalance has been associated with various disorders, such as depression and Parkinson's disease, highlighting its importance in maintaining overall brain health and function.

The first sentence is in the article. However, the second sentence mentions "emotion", a word that, while in a couple of reference titles, isn't in the article at all. The third sentence says "creating a sense of pleasure", but the article says "In popular culture and media, dopamine is often portrayed as the main chemical of pleasure, but the current opinion in pharmacology is that dopamine instead confers motivational salience", a contradiction. "This neurotransmitter also helps us focus and stay motivated by influencing our behavior and thoughts". Where is this even from? Focus isn't mentioned in the article at all, nor is influencing thoughts. As for the final sentence, depression is mentioned a single time in the article in what is almost an extended aside, and any summary would surely have picked some of the examples of disorders prominent enough to be actually in the lead.

So that's one of five sentences supported by the article. Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread LLM. What it definitely doesn't seem to be doing is taking existing article text and simplifying it. CMD (talk) 18:43, 3 June 2025 (UTC)[reply]

As someone who has tested a lot of AI models: no AI technology currently available to the public is reliably able to make an accurate summary of a complicated article. We may get there at some point, but we aren't there yet. Polygnotus (talk) 18:47, 3 June 2025 (UTC)[reply]

What we can do now


Notifying: @Polygnotus, Cryptic, MZMcBride, Some1, Izno, Chipmunkdavis, TheDJ, Johnuniq, Anomie, Carrite, Femke, Pppery, Jonesey95, Matma Rex, Qwerfjkl, Robertsky, Isochrone, Doozy, and 3df:

A two-week experiment on the mobile website seems to be the most immediate hazard; such an experiment would harm readers and negatively affect our reputation as a fairly reliable, non-AI source of information. Instead of freaking out, we should come up with some plan to persuade the WMF that this is not a good idea and to stop them from rolling this out at any level.

Should the Wikipedia community do something to prevent or protest this "experiment", and if yes, what can/should we do? Cremastra (uc) 21:25, 3 June 2025 (UTC)[reply]

@Cremastra We should blast this survey link to everyone and anyone, and have them fill it out. Start an RFC with it. Spread it on Discord and IRC and post it on Village Pumps et cetera.
https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq Polygnotus (talk) 21:28, 3 June 2025 (UTC)[reply]
I already filled out the survey through the usual method. People are welcome to fill out the survey but I don't think we should submit multiple responses each. Something like an open letter to the WMF would be more effective than screwing around with their data. Also, if in reality the survey is an overwhelming "no", intentionally skewing the results would compromise their legitimacy. Cremastra (uc) 21:30, 3 June 2025 (UTC)[reply]
@Cremastra The legitimacy the survey had was already zero, because they are intentionally choosing not to actually ask the community about it. Because we don't use surveys on Wikipedia, we use talkpages and RfCs and Village Pump discussions and the like. So the fact that they are intentionally evading our consensus building mechanisms makes that survey null and void already. Polygnotus (talk) 21:33, 3 June 2025 (UTC)[reply]
We could always cban a few WMF people for WP:IDHT in regard to the insertion of unreliable content. Just spitballing. Thebiguglyalien (talk) 🛸 21:34, 3 June 2025 (UTC)[reply]
I mean, there's nothing wrong with that policy-wise, if they did actually insist on it, but it might be a tad extreme. Cremastra (uc) 21:37, 3 June 2025 (UTC)[reply]
Yeah but now we can negotiate downward. Thebiguglyalien (talk) 🛸 21:39, 3 June 2025 (UTC)[reply]
In the world of community-WMF squabbling, our standard playbook includes an open letter (e.g. WP:OPENLETTER2024), an RfC with community consensus against whatever the WMF wants to do (e.g. WP:FR2022RFC) or in theory some kind of drastic protest like a unilateral blackout (proposed in 2024) or an editor strike. My preference in this case is an RfC to stop the silliness. If the WMF then explicitly overrides what is very clear community consensus, we're in new territory, but I think they're unlikely to go that far. Cremastra (uc) 21:36, 3 June 2025 (UTC)[reply]
@Cremastra Maybe you can start an RfC on a very visible place? Something like:

The WMF has started a survey to ask if we want to put an AI summary in every article's lead section.
https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq
Unsurprisingly, even the example they gave in their screenshot contains hallucinated AI nonsense.
Please voice your opinions! Polygnotus (talk) 21:39, 3 June 2025 (UTC)[reply]
I took the survey. Its questions are confusing, and watch out for the last question: the good-bad, agree-disagree direction for the response buttons is REVERSED. Sloppy survey design. – Jonesey95 (talk) 21:40, 3 June 2025 (UTC)[reply]
As I said at the top, I think our immediate concern should be the actual proposed experimentation, not the survey.
I was thinking something along the lines of
The WMF has proposed testing AI-generated summaries appended in front of article leads (example). Does the community approve of this use of AI, or is this inappropriate and contrary to Wikipedia's mission? Cremastra (uc) 21:42, 3 June 2025 (UTC)[reply]
They will use the survey as a weapon and pretend it gives them free rein to do whatever they want. A lot of people here will simply leave the second they see such an implementation of AI on a Wikipedia page, because that goes against everything we stand for. Getting those people back will be near impossible. Polygnotus (talk) 21:44, 3 June 2025 (UTC)[reply]
If the WMF feels like acting with impunity, they'll do so. There has been little to no response from the WMF on this page, which suggests to me they're just going to roll ahead with their fingers in their ears. Which as thebiguglyalien points out above, may remind you of a certain guideline. Cremastra (uc) 21:46, 3 June 2025 (UTC)[reply]
I am certain @EBlackorby-WMF: is not doing this because they are evil; I honestly believe these are good-faith people who do not understand what they are saying, and what the consequences of their words are.
If I say things like They are proposing giving the most important screen real estate we have (the WP:LEAD) of every article to a for-profit company. they haven't looked at it that way, because that is not how they think.
I do not think they should be banned/blocked, I think they should be educated. But we must stop them from doing more damage, one way or the other. Polygnotus (talk) 21:51, 3 June 2025 (UTC)[reply]
No one here thinks the WMF or any of their employees are "evil"; that is a ludicrous word to be using. If the WMF would respond to the feedback on this page (which is overwhelmingly against the proposal), it would reassure me and many others. The present state of silence is what worries me. Cremastra (uc) 21:53, 3 June 2025 (UTC)[reply]
No one here thinks the WMF or any of their employees are "evil" hahahhaha Polygnotus (talk) 21:54, 3 June 2025 (UTC)[reply]
Yes, some people here honestly think the WMF is evil. Seriously. I even had to defend them in the context of the ANI vs WMF court case thing. They were falsely accusing the WMF of throwing those editors under the bus and abandoning them. Complete nonsense of course. But yeah some people do harbor an irrational hatred against the WMF. Polygnotus (talk) 21:56, 3 June 2025 (UTC)[reply]
Probably the discussion you all want to be at is the currently-open WP:VPWMF#RfC: Adopting a community position on WMF AI development, which is totally coincidentally also listed on WP:CENT. Izno (talk) 21:40, 3 June 2025 (UTC)[reply]
@Izno Not really, since that is about AI development, something the WMF is incapable of doing. Polygnotus (talk) 21:42, 3 June 2025 (UTC)[reply]
@Polygnotus, Matma said it nicely earlier. Let me say it a little less nicely: Tone it down, now. You are being needlessly antagonistic and on top of that bludgeoning this discussion. Find something else to do for a while. Izno (talk) 21:55, 3 June 2025 (UTC)[reply]
@Izno That is indeed not very nice, and rather antagonistic. Polygnotus (talk) 22:00, 3 June 2025 (UTC)[reply]
I was under the impression that discussion was broader and of the type that spends three months hammering out a wording. This is focused on a quick response to a specific issue. Cremastra (uc) 21:43, 3 June 2025 (UTC)[reply]
Yes, I agree that's the impression, but I don't think you can demonstrate consensus to do anything about this without showing consensus in that discussion, or starting your own separate RFC. Izno (talk) 21:57, 3 June 2025 (UTC)[reply]
Can we use site CSS to suppress it? Nardog (talk) 22:33, 3 June 2025 (UTC)[reply]

I am just about the least qualified editor here, but I'd think spreading the survey and participating in the current AI development RfC should come before anything drastic. doozy (talkcontribs)⫸ 21:52, 3 June 2025 (UTC)[reply]

Tech News: 2025-23


MediaWiki message delivery 23:52, 2 June 2025 (UTC)[reply]

Chart extension?


I'm trying to figure out how to use the new Chart extension. As far as I can tell, your data has to exist in a page on Commons, in the Data namespace? Is that correct? RoySmith (talk) 00:45, 3 June 2025 (UTC)[reply]

Currently, yes. Izno (talk) 00:46, 3 June 2025 (UTC)[reply]
That's astonishing. Why? RoySmith (talk) 00:49, 3 June 2025 (UTC)[reply]
The FAQ says

Chart definitions will live on their own .chart pages on Commons, under the Data: namespace. We want to treat charts as a standalone content type, rather than just a part of an article. It will be easy to reuse the same chart across wikis, and beyond Wikimedia platforms by making them available as links. Editors who want to embed charts in an article will be able to do so with a short piece of wikitext, similar to including an image from Commons, all without needing to interact with complex templates.

We have heard comments that requiring the data come from Commons tabular data may not address some common data sourcing flows, like from MediaWiki APIs or Wikidata Query Service. While those sources are not the focus for this project, we want to ensure the extension is designed in a way that they can be supported in the future.

My memory in addition to that is that it was seen as a minimum viable product. The particular point has been a pain for other editors since the project got to the point of developing this new extension, see mw:Extension talk:Chart/Project#Data source and I suspect other conversations on that talk page. (And I've seen groaning elsewhere.) Izno (talk) 01:22, 3 June 2025 (UTC)[reply]
(And one of the other discussions is the new mw:Extension talk:Chart/Project#Past questions, not yet answered.) Izno (talk) 01:27, 3 June 2025 (UTC)[reply]
I want to use this to chart the sizes of the various queues that feed the WP:DYK system: number of nominations pending, number of approved hooks, etc. I'll have a bot that computes these things and updates the data once a day. I guess that falls into the "some common data sourcing flows" bucket. Logically, I would have that data page live somewhere near the rest of the DYK pages, like Wikipedia:Did you know/DYK hook count. Having to put it on Commons won't break anything, but it seems silly, confusing, and arbitrary. I'm all for getting a MVP out the door, but how does hard-wiring Commons:Data into the path for the source make things simpler on the developers? RoySmith (talk) 10:37, 3 June 2025 (UTC)[reply]
And, since this will involve a bot to write the data files, it will require that I go through the commons bot approval process, when I already have an approved bot on enwiki which could do the same thing with a lot less fuss. RoySmith (talk) 11:13, 3 June 2025 (UTC)[reply]
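For what it's worth, once the bot account side is sorted out, the daily edit itself is small. A rough pywikibot sketch — the page title, field names, and counts are hypothetical, and the tabular-data page keys (license/description/schema/data) follow the Commons Data: namespace format as I understand it:

```python
import json
from datetime import date

def build_tab_payload(rows):
    """Build the JSON body of a Commons Data:*.tab page (format as I understand it)."""
    return {
        "license": "CC0-1.0",  # tabular data pages are required to be CC0
        "description": {"en": "Daily DYK queue sizes (hypothetical example)"},
        "schema": {
            "fields": [
                {"name": "date", "type": "string"},
                {"name": "unapproved", "type": "number"},
                {"name": "approved", "type": "number"},
            ]
        },
        "data": rows,
    }

def append_todays_counts(unapproved, approved):
    # Requires a logged-in pywikibot configuration; run once a day from cron.
    import pywikibot
    site = pywikibot.Site("commons", "commons")
    page = pywikibot.Page(site, "Data:Sandbox/Example/DYK-counts.tab")  # hypothetical title
    rows = json.loads(page.text)["data"] if page.exists() else []
    rows.append([date.today().isoformat(), unapproved, approved])
    page.text = json.dumps(build_tab_payload(rows), indent=1)
    page.save(summary="Bot: append today's DYK queue counts")
```

A real bot would of course also need error handling and throttling; this just shows the read-append-write shape of the job.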
I don't even see how to run tests without polluting the global namespace. Normally I would start to play with something like this in my sandbox, but the current setup seems to make that impossible. RoySmith (talk) 13:15, 3 June 2025 (UTC)[reply]
RoySmith, it's really no big deal getting bot approval on Commons and once you have it you can do other things. The advantage of Commons is the data is accessible to Lua modules from any wiki. Thus your program can be copied to any wiki, without having to copy the data. Of course if the data is enwiki specific it wouldn't matter so much, but if the data for each wiki was kept under the same tree on Commons then conceivably someone could write a summary program that looks at all wikis' data, and that program would then be installable on any wiki. It's nice to have a universally accessible place to store data even though there is some initial setup to get bot approval. — GreenC 17:33, 3 June 2025 (UTC)[reply]
Hmmm. I asked on commons and was surprised (in a good way) to discover that I didn't actually need permission. And I've since figured out that I can do my testing in Data:Sandbox/RoySmith, which seems a bit weird, but OK, not a blocker, so I've been playing around with commons:Data:Sandbox/RoySmith/DYK-test.chart, which produces:
[Chart: "Unapproved Nominations" and "Approved Nominations" plotted by date]
so at least I'm making some progress. I still need to figure out some layout issues. And to really make this useful, I'll need @Snævar's Transforms module but that's not quite ready.
The sad part is using Prometheus would be so much easier, but apparently I'm not allowed to do that since it's reserved for production. Even easier would have been Graphite but that's not a thing any more. RoySmith (talk) 17:51, 3 June 2025 (UTC)[reply]
RoySmith, Nice. Glad to know about this. I want to graph User:SDZeroBot/Category counter. The issue with time series is that they grow forever while Commons has a file size limit. One can create new .tab files for each year, but the plumbing gets complicated on the back and front ends. — GreenC 21:49, 3 June 2025 (UTC)[reply]
That was one of the nice things about graphite. It would time-compress older data so it took up less space. You could get (for example) 5 second resolution for the most recent data points, but the really old data might be aggregated to one sample every hour.
I'm thinking I'll want to store maybe 10 parameters, one sample per day. So maybe 200 kB per year which is trivial. If you've got a lot more data, maybe not so trivial for your use case. RoySmith (talk) 21:58, 3 June 2025 (UTC)[reply]
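That estimate looks right as an order of magnitude. A quick back-of-envelope sketch, assuming ten numeric parameters per daily row serialized as plain JSON (the values themselves are made up):

```python
import json
from datetime import date, timedelta

# One row per day: an ISO date string plus 10 hypothetical numeric parameters.
start = date(2025, 1, 1)
rows = [
    [(start + timedelta(days=i)).isoformat()] + [12345.6] * 10
    for i in range(365)
]
size = len(json.dumps(rows))
print(size)  # a few tens of kB per year of row data; 200 kB is a safe upper bound
```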
@RoySmith Have you tested this in dark mode? Polygnotus (talk) 21:58, 3 June 2025 (UTC)[reply]
That's way, way, down on my list of things to worry about. RoySmith (talk) 21:59, 3 June 2025 (UTC)[reply]
OMG. I have written a module that takes input from Commons data. The system works very well. However, the data is generated by a bot which can easily write JSON. Editing the data manually would be totally stupid (too difficult, too easy to make mistakes, too hard for others to check). Conceivably there could be a wikipage where some simple formatted data was entered (wikitext) and a bot could periodically copy changes to Commons. But using Commons data would be laughably impractical without a workaround. Johnuniq (talk) 05:59, 3 June 2025 (UTC)[reply]
In about 2 weeks you could solve that with Chart transforms, see mw:Extension:Chart/Transforms. Snævar (talk) 09:32, 3 June 2025 (UTC)[reply]
"But using Commons data would be laughably impractical" I still don't get why that is so impractical. Is this because people don't want to go to Commons? Why not? All other media is primarily there as well. Is it because people don't understand JSON and only understand a simple key:value notation? —TheDJ (talkcontribs) 09:37, 3 June 2025 (UTC)[reply]
When I looked at it (long ago in May 2020), the only way a human could update a number was to edit the whole JSON file, I think. I didn't worry about it because GreenC provided a bot which did all the hard work of maintaining the numbers and writing them in the correct format. I might be missing something, but I clicked 'edit' at c:Data:Wikipedia statistics/data.tab and saw a hard-to-follow table. I could handle it because I would suck it into a proper editor, search for what I wanted, check that it was the right thing, and change it. I suspect most Wikipedians would be uncomfortable with something so unfamiliar. I haven't seen an example of data for a graph—perhaps that works out ok? Johnuniq (talk) 10:05, 3 June 2025 (UTC)[reply]
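For the "change one number" case, a tiny script is probably less error-prone than hand-editing the whole JSON blob. A sketch — the sample data is invented, and the license/schema/data keys follow the tabular-data format as I understand it:

```python
import json

# Hypothetical Data:*.tab page body.
SAMPLE = json.dumps({
    "license": "CC0-1.0",
    "description": {"en": "Example statistics"},
    "schema": {"fields": [
        {"name": "year", "type": "number"},
        {"name": "articles", "type": "number"},
    ]},
    "data": [[2019, 5900000], [2020, 6100000]],
})

def set_cell(tab_text, key, column, value):
    """Return new page text with one cell changed, matching rows by their first column."""
    tab = json.loads(tab_text)
    names = [f["name"] for f in tab["schema"]["fields"]]
    col = names.index(column)
    for row in tab["data"]:
        if row[0] == key:
            row[col] = value
    return json.dumps(tab, indent=1)

new_text = set_cell(SAMPLE, 2020, "articles", 6200000)
```

The edited text could then be pasted back (or saved by a bot), leaving the schema and license fields untouched.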
@Johnuniq You might want to enable "Tabular Import/Export" gadget in your Commons preferences. It adds buttons to .tab pages to import and export from and to csv and excel files. It's 8 years old, but it still seems to work, even though it could really use an update. —TheDJ (talkcontribs) 10:45, 3 June 2025 (UTC)[reply]
"Is it because people don't understand JSON and only understand a simple key:value notation"—That probably applies to the significant majority of contributors. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:22, 3 June 2025 (UTC)[reply]
I don't know. I'd expect anyone working with datasets to have basic knowledge of JSON these days. It is so ubiquitous. —TheDJ (talkcontribs) 13:59, 3 June 2025 (UTC)[reply]
Because most charts are single-use, despite what most developers might assume, and every single chart requires two different pages to be created. For example, each Demographics Wikipedia page (e.g. Demographics of India) has 2–4 charts on average. Given that there are about 200 of these alone, there would be 400–800 pages on Wikimedia Commons just for this single use case. Furthermore, none of these charts are legitimately used outside of Wikipedia; perhaps a different language edition would find them useful, but does that require two different files? Why not have the option of a single .chart file? It's easy to nest JSON, after all. Additionally, it is rather repetitive to create these files, so much so that I have a bot request on Wikimedia Commons just for this purpose. GalStar (talk) 22:44, 3 June 2025 (UTC)[reply]

Talk page glitch

I see no [reply] link after Jeanette's comment at Wikipedia:Teahouse#Reliable Sources list.

Is her sig the cause, or something else? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:18, 3 June 2025 (UTC)[reply]

I believe it's the comma after the month. In preview, changing "11:00, 3 June, 2025 (UTC)" to "11:00, 3 June 2025 (UTC)" causes it to hyperlink the timestamp, which is a sign it is then detected as a comment. Skynxnex (talk) 15:05, 3 June 2025 (UTC)[reply]
I think that's the issue as well. JeanetteMartin, in at least a few recent comments, your signature has used a non-standard date/time-stamp. Can you tell us more about what's going on? Many gadgets/tools/bots/scripts rely on uniform timestamps. Firefangledfeathers (talk / contribs) 15:57, 3 June 2025 (UTC)[reply]
JeanetteMartin made an odd double signature [9] where the second signature had a valid timestamp but she then deleted the second signature.[10] I guess the first signature was made manually. @JeanetteMartin: If you want a customized signature then use the signature field at Special:Preferences with a checkmark at "Treat the above as wiki markup". When you use the new topic tool or reply tool, your post is automatically signed with your signature in preferences. In other situations, sign with ~~~~. PrimeHunter (talk) 19:51, 3 June 2025 (UTC)[reply]

Time precision

Hi everyone,

When using the Wd module to grab dates from Wikidata, how does one change the precision of the returned data? More precisely, I am interested in returning just the year from what is usually a down-to-the-day date. Any ideas? Julius Schwarz (talk) 14:35, 3 June 2025 (UTC)[reply]

{{#time:Y|{{#invoke:Wd|...}}}} should work...? Izno (talk) 16:03, 3 June 2025 (UTC)[reply]
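A concrete form of that suggestion, assuming Module:Wd returns the date as plain text that {{#time}} can parse — the property ID P571 (inception) here is only an illustrative example:

```wikitext
{{#time:Y|{{#invoke:Wd|property|P571}}}}
```

If the module wraps the date in markup (links or microformat spans), #time may fail to parse it, in which case an unformatted output mode of the module would be needed.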

Handling information to pass to a bot

I've been working on a bot to generate .tab and .chart files from the {{Graph:Chart}} template. It started as a copy-paste thing, but at the moment it requires two inputs:

  • Name of the article
  • Names of each of the graphs

I have been trying to convert this into a true bot that doesn't require user intervention and was thinking that somehow using templates to mark graphs needing conversion as well as their names might be the best way forward. Is there any prior example of this that I could use as a template/scaffold?

Cheers, GalStar (talk) 18:26, 3 June 2025 (UTC)[reply]

If you were to follow this approach, perhaps a new parameter for the {{Graph:Chart}} template indicating an identifier to be used would be easiest? isaacl (talk) 21:37, 3 June 2025 (UTC)[reply]
Good point, I'll add a proposal to that talk page. GalStar (talk) 22:05, 3 June 2025 (UTC)[reply]
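A sketch of how such a parameter might look in article wikitext — `chart-id` is a hypothetical name for the proposed identifier parameter, not an existing {{Graph:Chart}} parameter, and the data values are made up:

```wikitext
{{Graph:Chart
| chart-id = Example article, population chart
| type = line
| x = 2000, 2010, 2020
| y = 10, 20, 30
}}
```

A bot could then scan for transclusions that set `chart-id` and generate the corresponding .tab and .chart pages on Commons without further user input.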

How to manage the width/height of charts

The old Graph extension used the {{Graph:Chart}} template, which had width and height parameters for this purpose. However, the new Chart extension says that "Currently, charts occupy the entire container width where you place them". This is highly undesirable. I tried putting them in a div and then styling that div, but with little success, as it causes unwanted scrollbars. Any ideas as to how to fix this? GalStar (talk) 22:09, 3 June 2025 (UTC)[reply]
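For reference, the div-wrapping attempt described above might look like the following. This is a sketch only: the page names are placeholders, the {{#chart:}} parser-function syntax reflects my reading of mw:Extension:Chart, and, as noted, this kind of styling can still produce unwanted scrollbars rather than cleanly constraining the chart:

```wikitext
<div style="max-width: 400px;">
{{#chart: Example.chart | data = Example.tab }}
</div>
```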