
Wikipedia:Village pump (policy)

The policy section of the village pump is intended for discussions about already-proposed policies and guidelines, as well as changes to existing ones. Discussions often begin on other pages and are subsequently moved or referenced here to ensure greater visibility and broader participation.
  • If you wish to propose something new that is not a policy or guideline, use Village pump (proposals). Alternatively, for drafting with a more focused group, consider starting the discussion on the talk page of a relevant WikiProject, the Manual of Style, or another relevant project page.
  • For questions about how to apply existing policies or guidelines, refer to one of the many Wikipedia:Noticeboards.
  • If you want to inquire about what the policy is on a specific topic, visit the Help desk or the Teahouse.
  • This is not the place to resolve disputes regarding the implementation of policies. For such cases, consult Wikipedia:Dispute resolution.
  • For proposals for new or amended speedy deletion criteria, use Wikipedia talk:Criteria for speedy deletion.

Please see this FAQ page for a list of frequently rejected or ignored proposals. Discussions are automatically archived after two weeks of inactivity.

Should WP:Demonstrate good faith include mention of AI-generated comments?


Using AI to write your comments in a discussion makes it difficult for others to assume that you are discussing in good faith, rather than trying to use AI to argue someone into exhaustion (see example of someone using AI in their replies "Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people"). More fundamentally, WP:AGF can't apply to the AI itself as AI lacks intentionality, and it is difficult for editors to assess how much of an AI-generated comment reflects the training of the AI vs. the actual thoughts of the editor.

Should WP:DGF be amended to include that using AI to generate your replies in a discussion runs counter to demonstrating good faith? Photos of Japan (talk) 00:23, 2 January 2025 (UTC)[reply]

No. As with all the other concurrent discussions (how many times do we actually need to discuss the exact same FUD and scaremongering?) the problem is not AI, but rather inappropriate use of AI. What we need to do is to (better) explain what we actually want to see in discussions, not vaguely defined bans of swathes of technology that, used properly, can aid communication. Thryduulf (talk) 01:23, 2 January 2025 (UTC)[reply]
Note that this topic is discussing using AI to generate replies, as opposed to using it as an aid (e.g. asking it to edit for grammar, or conciseness). As the above concurrent discussion demonstrates, users are already using AI to generate their replies in AfD, so it isn't scaremongering but an actual issue.
WP:DGF also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. Photos of Japan (talk) 01:32, 2 January 2025 (UTC)[reply]
And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - some such comments are AI-generated, some are not. This proposal would, just as all the other related ones, cause actual harm when editors falsely accuse others of using AI (and this will happen). Thryduulf (talk) 02:34, 2 January 2025 (UTC)[reply]
Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt, asking the chatbot to argue against that comment, and just posting the output here, that's a real problem and absolutely should not be acceptable. :bloodofox: (talk) 03:31, 2 January 2025 (UTC)[reply]
Thank you for the assumption of bad faith and demonstrating one of my points about the harm caused. Nobody is forcing you to engage with bad-faith comments, but whether something is or is not bad faith needs to be determined by its content not by its method of generation. Simply using an AI demonstrates neither good faith nor bad faith. Thryduulf (talk) 04:36, 2 January 2025 (UTC)[reply]
I don't see why we have any particular reason to suspect a respected and trustworthy editor of using AI. Cremastra (uc) 14:31, 2 January 2025 (UTC)[reply]
I'm one of those people who clarified the difference between AI-generated vs. edited, and such a difference could be made explicit with a note. Editors are already accusing others of using AI. Could you clarify how you think addressing AI in WP:DGF would cause actual harm? Photos of Japan (talk) 04:29, 2 January 2025 (UTC)[reply]
By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @Bloodofox has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). Thryduulf (talk) 04:33, 2 January 2025 (UTC)[reply]
I think bloodofox's comment was about "you" in the rhetorical sense, not "you" as in Thryduulf. jlwoodwa (talk) 11:06, 2 January 2025 (UTC)[reply]
Given your relentlessly pro-AI comments here, it seems that you'd be A-OK with just chatting with a group of chatbots here — or leaving the discussion to them. However, most of us clearly are not. In fact, I would immediately tell someone to get lost were it confirmed that indeed that is what is happening. I'm a human being and find the notion of wasting my time with chatbots on Wikipedia to be incredibly insulting and offensive. :bloodofox: (talk) 04:38, 2 January 2025 (UTC)[reply]
My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. Thryduulf (talk) 04:43, 2 January 2025 (UTC)[reply]
Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Wikipedia. Hey, why not just sell the site to Meta, am I right? :bloodofox: (talk) 04:53, 2 January 2025 (UTC)[reply]
I haven't been arguing for more generative AI on the site. I've been arguing against banning it on the grounds that such a ban would be unclear, unenforceable, wouldn't solve any problems (largely because whether something is AI or not is completely irrelevant to the matter at hand) but would instead cause harm. Some of the issues identified are actual problems, but AI is not the cause of them and banning AI won't fix them.
I'm not mocking anybody, nor am I advocating to let chatbots run rampant. I'm utterly confused why you think I might advocate for selling Wikipedia to Meta (or anyone else for that matter)? Are you actually reading anything I'm writing? You clearly are not understanding it. Thryduulf (talk) 05:01, 2 January 2025 (UTC)[reply]
So we're in 'everyone else is the problem, not me!' territory now? Perhaps try communicating in a different way because your responses here are looking very much like the typical AI apologetics one can encounter on just about any contemporary LinkedIn thread from your typical FAANG employee. :bloodofox: (talk) 05:13, 2 January 2025 (UTC)[reply]
No, this is not an "everyone else is the problem, not me" issue because most other people appear to be able to understand my arguments and respond to them appropriately. Not everybody agrees with them, but that's not an issue.
I'm not familiar with Linkedin threads (I don't use that platform) nor what a "FAANG employee" is (I've literally never heard the term before now) so I have no idea whether your characterisation is a compliment or a personal attack, but given your comments towards me and others you disagree with elsewhere I suspect it's closer to the latter.
AI is a tool. Just like any other tool it can be used in good faith or in bad faith, it can be used well and it can be used badly, it can be used in appropriate situations and it can be used in inappropriate situations, the results of using the tool can be good and the results of using the tool can be bad. Banning the tool inevitably bans the good results as well as the bad results but doesn't address the reasons why the results were good or bad and so does not resolve the actual issue that led to the bad outcomes. Thryduulf (talk) 12:09, 2 January 2025 (UTC)[reply]
In the context of generating comments to other users though, AI is much easier to use for bad faith than for good faith. LLMs don't understand Wikipedia's policies and norms, and so are hard to utilize to generate posts that productively address them. By contrast, bad actors can easily use LLMs to make low quality posts to waste people's time or wear them down.
In the context of generating images, or text for articles, it's easy to see how the vast majority of users using AI for those purposes are acting in good faith, as these are generally constructive tasks. Most people making bad faith changes to articles are either obvious vandals who won't bother to use AI because they'll be reverted soon anyway, or are trying to be subtle (POV-pushers), in which case they tend to want to carefully write their own text into the article.
It's true that AI "is just a tool", but when that tool is much easier to use for bad faith purposes (in the context of discussions) then it raises suspicions about why people are using it. Photos of Japan (talk) 22:44, 2 January 2025 (UTC)[reply]
LLMs don't understand Wikipedia's policies and norms They're not designed to "understand" them since the policies and norms were designed for human cognition. The fact that AI is used rampantly by people acting in bad faith on Wikipedia does not inherently condemn the AI. To me, it shows that it's too easy for vandals to access and do damage on Wikipedia. Unfortunately, the type of vetting required to prevent that at the source would also potentially require eliminating IP-editing, which won't happen. Duly signed, WaltClipper -(talk) 14:33, 15 January 2025 (UTC)[reply]
You mentioned "FUD". That acronym, "fear, uncertainty and doubt," is used in precisely two contexts: pro-AI propagadizing and persuading people who hold memecoin crypto to continue holding it. Since this discussion is not about memecoin crypto that would suggest you are using it in a pro-AI context. I will note, fear, uncertainty and doubt is not my problem with AI. Rather it's anger, aesthetic disgust and feeling disrespected when somebody makes me talk to their chatbot. Simonm223 (talk) 14:15, 14 January 2025 (UTC)[reply]
That acronym, "fear, uncertainty and doubt," is used in precisely two contexts is simply
FUD both predates AI by many decades (my father introduced me to the term in the context of the phrase "nobody got fired for buying IBM", and the context of that was mainframe computer systems in the 1980s if not earlier. FUD is also used in many, many more contexts that just those two you list, including examples by those opposing the use of AI on Wikipedia in these very discussions. Thryduulf (talk) 14:47, 14 January 2025 (UTC)[reply]
That acronym, "fear, uncertainty and doubt," is used in precisely two contexts is factually incorrect.
FUD both predates AI by many decades (indeed if you'd bothered to read the fear, uncertainty and doubt article you'd learn that the concept was first recorded in 1693, the exact formulation dates from at least the 1920s and the use of it in technology concepts originated in 1975 in the context of mainframe computer systems. That its use, eve in just AI contexts, is limited to pro-AI advocacy is ludicrous (even ignoring things like Roko's basilisk), examples can be found in these sprawling discussions from those opposing AI use on Wikipedia. Thryduulf (talk) 14:52, 14 January 2025 (UTC)[reply]
Not really – I agree with Thryduulf's arguments on this one. Using AI to help tweak or summarize or "enhance" replies is of course not bad faith – the person is trying hard. Maybe English is their second language. Even for replies 100% AI-generated the user may be an ESL speaker struggling to remember the right words (I always forget 90% of my French vocabulary when writing anything in French, for example). In this case, I don't think we should make a blanket assumption that using AI to generate comments is not showing good faith. Cremastra (uc) 02:35, 2 January 2025 (UTC)[reply]
  • Yes because generating walls of text is not good faith. People "touching up" their comments is also bad (for starters, if you lack the English competency to write your statements in the first place, you probably lack the competency to tell if your meaning has been preserved or not). Exactly what AGF should say needs work, but something needs to be said, and DGF is a good place to do it. XOR'easter (talk) 02:56, 2 January 2025 (UTC)[reply]
    Not all walls of text are generated by AI, not all AI generated comments are walls of text. Not everybody who uses AI to touch up their comments lacks the competencies you describe, not everybody who does lack those competencies uses AI. It is not always possible to tell which comments have been generated by AI and which have not. This proposal is not particularly relevant to the problems you describe. Thryduulf (talk) 03:01, 2 January 2025 (UTC)[reply]
Someone has to ask: Are you generating all of these pro-AI arguments using ChatGPT? It'd explain a lot. If so, I'll happily ignore any and all of your contributions, and I'd advise anyone else to do the same. We're not here to be flooded with LLM-derived responses. :bloodofox: (talk) 03:27, 2 January 2025 (UTC)[reply]
That you can't tell whether my comments are AI-generated or not is one of the fundamental problems with these proposals. For the record they aren't, nor are they pro-AI - they're simply anti throwing out babies with bathwater. Thryduulf (talk) 04:25, 2 January 2025 (UTC)[reply]
I'd say it also illustrates the serious danger: We can no longer be sure that we're even talking to other people here, which is probably the most notable shift in the history of Wikipedia. :bloodofox: (talk) 04:34, 2 January 2025 (UTC)[reply]
How is that a "serious danger"? If a comment makes a good point, why does it matter whether it was AI generated or not? If it doesn't make a good point, why does it matter if it was AI generated or not? How will these proposals resolve that "danger"? How will they be enforceable? Thryduulf (talk) 04:39, 2 January 2025 (UTC)[reply]
Wikipedia is made for people, by people, and I like most people will be incredibly offended to find that we're just playing some kind of LLM pong with a chatbot of your choice. You can't be serious. :bloodofox: (talk) 04:40, 2 January 2025 (UTC)[reply]
You are entitled to that philosophy, but that doesn't actually answer any of my questions. Thryduulf (talk) 04:45, 2 January 2025 (UTC)[reply]
"why does it matter if it was AI generated or not?"
Because it takes little effort to post a lengthy, low quality AI-generated post, and a lot of effort for human editors to write up replies debunking them.
"How will they be enforceable? "
WP:DGF isn't meant to be enforced. It's meant to explain to people how they can demonstrate good faith. Posting replies to people (who took the time to write them) that are obviously AI-generated harms the ability of those people to assume good faith. Photos of Japan (talk) 05:16, 2 January 2025 (UTC)[reply]
The linked "example of someone using AI in their replies" appears – to me – to be a non-AI-generated comment. I think I preferred the allegedly AI-generated comments from that user (example). The AI was at least superficially polite. WhatamIdoing (talk) 04:27, 2 January 2025 (UTC)[reply]
Obviously the person screaming in all caps that they use AI because they don't want to waste their time arguing is not using AI for that comment. Their first post calls for the article to be deleted for not "offering new insights or advancing scholarly understanding" and "merely" reiterating what other sources have written.
Yes, after a human had wasted their time explaining all the things wrong with its first post, then the bot was able to write a second post which looks ok. Except it only superficially looks ok, it doesn't actually accurately describe the articles. Photos of Japan (talk) 04:59, 2 January 2025 (UTC)[reply]
Multiple humans have demonstrated in these discussions that humans are equally capable of writing posts which superficially look OK but don't actually accurately relate to anything they are responding to. Thryduulf (talk) 05:03, 2 January 2025 (UTC)[reply]
But I can assume that everyone here is acting in good faith. I can't assume good faith in the globally-locked sock puppet spamming AfD discussions with low effort posts, whose bot is just saying whatever it can to argue for the deletion of political pages the editor doesn't like. Photos of Japan (talk) 05:09, 2 January 2025 (UTC)[reply]
True, but I think that has more to do with the "globally-locked sock puppet spamming AfD discussions" part than with the "some of it might be [AI-generated]" part. WhatamIdoing (talk) 07:54, 2 January 2025 (UTC)[reply]
All of which was discovered because of my suspicions from their inhuman and meaningless replies. "Reiteration isn't the problem; redundancy is," maybe sounds pithy in a vacuum, but this was written in reply to me stating that we aren't supposed to be doing OR but reiterating what the sources say.
"Your criticism feels overly prescriptive, as though you're evaluating this as an academic essay" also sounds good, until you realize that the bot is actually criticizing its own original post.
The fact that my suspicions about their good faith were ultimately validated only makes it even harder for me to assume good faith in users who sound like ChatGPT. Photos of Japan (talk) 08:33, 2 January 2025 (UTC)[reply]
I wonder if we need some other language here. I can understand feeling like this is a bad interaction. There's no sense that the person cares; there's no feeling like this is a true interaction. A contract lawyer would say that there's no meeting of the minds, and there can't be, because there's no mind in the AI, and the human copying from the AI doesn't seem to be interested in engaging their brain.
But... do you actually think they're doing this for the purpose of intentionally harming Wikipedia? Or could this be explained by other motivations? Never attribute to malice that which can be adequately explained by stupidity – or to anxiety, insecurity (will they hate me if I get my grammar wrong?), incompetence, negligence, or any number of other "understandable" (but still something WP:SHUN- and even block-worthy) reasons. WhatamIdoing (talk) 08:49, 2 January 2025 (UTC)[reply]
The user's talk page has a header at the top asking people not to template them because it is "impersonal and disrespectful", instead requesting "please take a moment to write a comment below in your own words"
Does this look like acting in good faith to you? Requesting other people write personalized responses to them while they respond with an LLM? Because it looks to me like they are trying to waste other people's time. Photos of Japan (talk) 09:35, 2 January 2025 (UTC)[reply]
Wikipedia:Assume good faith means that you assume people aren't deliberately screwing up on purpose. Humans are self-contradictory creatures. I generally do assume that someone who is being hypocritical hasn't noticed their contradictions yet. WhatamIdoing (talk) 07:54, 3 January 2025 (UTC)[reply]
"Being hypocritical" in the abstract isn't the problem, it's the fact that asking people to put effort into their comments, while putting in minimal effort into your own comments appears bad faith, especially when said person says they don't want to waste time writing comments to stupid people. The fact you are arguing AGF for this person is both astounding and disappointing. Photos of Japan (talk) 16:08, 3 January 2025 (UTC)[reply]
It feels like there is a lack of reciprocity in the interaction, even leaving aside the concern that the account is a block-evading sock.
But I wonder if you have read AGF recently. The first sentence is "Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful."
So we've got some of this (e.g., harmful actions). But do you really believe this person woke up in the morning and decided "My main goal for today is to deliberately hurt Wikipedia. I might not be successful, but I sure am going to try hard to reach my goal"? WhatamIdoing (talk) 23:17, 4 January 2025 (UTC)[reply]
Trying to hurt Wikipedia doesn't mean they have to literally think "I am trying to hurt Wikipedia", it can mean a range of things, such as "I am trying to troll Wikipedians". A person who thinks a cabal of editors is guarding an article page, and that they need to harass them off the site, may think they are improving Wikipedia, but at the least I wouldn't say that they are acting in good faith. Photos of Japan (talk) 23:27, 4 January 2025 (UTC)[reply]
Sure, I'd count that as a case of "trying to hurt Wikipedia-the-community". WhatamIdoing (talk) 06:10, 5 January 2025 (UTC)[reply]
  • The issues with AI in discussions is not related to good faith, which is narrowly defined to intent. CMD (talk) 04:45, 2 January 2025 (UTC)[reply]
    In my mind, they are related inasmuch as it is much more difficult for me to ascertain good faith if the words are eminently not written by the person I am speaking to in large part, but instead generated based on an unknown prompt in what is likely a small fraction of the expected time. To be frank, in many situations it is difficult to avoid the conclusion that the disparity in effort is being leveraged in something less than good faith. Remsense ‥  05:02, 2 January 2025 (UTC)[reply]
    Assume good faith, don't ascertain! LLM use can be deeply unhelpful for discussions and the potential for misuse is large, but in the most recent discussion I've been involved with where I observed an LLM post, it was responded to by an LLM post; I believe both users were doing this in good faith. CMD (talk) 05:07, 2 January 2025 (UTC)[reply]
    All I mean to say is it should be licit that unhelpful LLM use should be something that can be mentioned like any other unhelpful rhetorical pattern. Remsense ‥  05:09, 2 January 2025 (UTC)[reply]
    Sure, but WP:DGF doesn't mention any unhelpful rhetorical patterns. CMD (talk) 05:32, 2 January 2025 (UTC)[reply]
    The fact that everyone (myself included) defending "LLM use" says "use" rather than "generated", is a pretty clear sign that no one really wants to communicate with someone using "LLM generated" comments. We can argue about bans (not being proposed here), how to know if someone is using LLM, the nuances of "LLM use", etc., but at the very least we should be able to agree that there are concerns with LLM generated replies, and if we can agree that there are concerns then we should be able to agree that somewhere in policy we should be able to find a place to express those concerns. Photos of Japan (talk) 05:38, 2 January 2025 (UTC)[reply]
    ...or they could be saying "use" because "using LLMs" is shorter and more colloquial than "generating text with LLMs"? Gnomingstuff (talk) 06:19, 2 January 2025 (UTC)[reply]
    Seems unlikely when people justify their use for editing (which I also support), and not for generating replies on their behalf. Photos of Japan (talk) 06:23, 2 January 2025 (UTC)[reply]
    This is just semantics.
    For instance, I am OK with someone using a LLM to post a productive comment on a talk page. I am also OK with someone generating a reply with a LLM that is a productive comment to post to a talk page. I am not OK with someone generating text with an LLM to include in an article, and also not OK with someone using a LLM to contribute to an article.
    The only difference between these four sentences is that two of them are more annoying to type than the other two. Gnomingstuff (talk) 08:08, 2 January 2025 (UTC)[reply]
    Most people already assume good faith in those making productive contributions. In situations where good faith is more difficult to assume, would you trust someone who uses an LLM to generate all of their comments as much as someone who doesn't? Photos of Japan (talk) 09:11, 2 January 2025 (UTC)[reply]
    Given that LLM-use is completely irrelevant to the faith in which a user contributes, yes. Of course what amount that actually is may be anywhere between completely and none. Thryduulf (talk) 11:59, 2 January 2025 (UTC)[reply]
    LLM-use is relevant as it allows bad faith users to disrupt the encyclopedia with minimal effort. Such a user posted in this thread earlier, as well as started a disruptive thread here and posted here, all using AI. I had previously been involved in a debate with another sock puppet of theirs, but at that time they didn't use AI. Now it seems they are switching to using an LLM just to troll with minimal effort. Photos of Japan (talk) 21:44, 2 January 2025 (UTC)[reply]
    LLMs are a tool that can be used by good and bad faith users alike. Using an LLM tells you nothing about whether a user is contributing in good or bad faith. If somebody is trolling they can be, and should be, blocked for trolling regardless of the specifics of how they are trolling. Thryduulf (talk) 21:56, 2 January 2025 (UTC)[reply]
    A can of spray paint, a kitchen knife, etc., are tools that can be used for good or bad, but if you bring them some place where they have few good uses and many bad uses then people will be suspicious about why you brought them. You can't just assume that a tool in any context is equally harmless. Using AI to generate replies to other editors is more suspicious than using it to generate a picture exemplifying a fashion style, or a description of a physics concept. Photos of Japan (talk) 23:09, 2 January 2025 (UTC)[reply]
No -- whatever you think of LLMs, the reason they are so popular is that the people who use them earnestly believe they are useful. Claiming otherwise is divorced from reality. Even people who add hallucinated bullshit to articles are usually well-intentioned (if wrong). Gnomingstuff (talk) 06:17, 2 January 2025 (UTC)[reply]
It's rarely productive to get mad at someone on Wikipedia for any reason, but if someone uses an LLM and it screws up their comment they don't get any pass just because the LLM screwed up and not them. You are fully responsible for any LLM content you sign your name under. -- LWG talk 05:19, 1 February 2025 (UTC)[reply]
No. When someone publishes something under their own name, they are incorporating it as their own statement. Plagiarism from an AI or elsewhere is irrelevant to whether they are engaging in good faith. lethargilistic (talk) 17:29, 2 January 2025 (UTC)[reply]
  • Comment LLMs know a few tricks about logical fallacies and some general ways of arguing (rhetoric), but they are incredibly dumb at understanding the rules of Wikipedia. You can usually tell this because it looks like incredibly slick and professional prose, but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia. I would indef such users for lacking WP:CIR. tgeorgescu (talk) 17:39, 2 January 2025 (UTC)[reply]
    That guideline states "Sanctions such as blocks and bans are always considered a last resort where all other avenues of correcting problems have been tried and have failed." Gnomingstuff (talk) 19:44, 2 January 2025 (UTC)[reply]
    WP:CIR isn't a guideline, but an essay. Relevantly though it is being cited at this very moment in an ANI thread concerning a user who can't/won't communicate without an LLM. Photos of Japan (talk) 20:49, 2 January 2025 (UTC)[reply]
    I blocked that user as NOTHERE a few minutes ago after seeing them (using ChatGPT) make suggestions for text to live pagespace while their previous bad behaviors were under discussion. AGF is not a suicide pact. BusterD (talk) 20:56, 2 January 2025 (UTC)[reply]
    ... but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia: That problem existed with some humans even prior to LLMs. —Bagumba (talk) 02:53, 20 January 2025 (UTC)[reply]
  • No - Not a good or bad faith issue. PackMecEng (talk) 21:02, 2 January 2025 (UTC)[reply]
  • Yes Using a 3rd party service to contribute to the Wikipedia on your behalf is clearly bad-faith, analogous to paying someone to write your article. Zaathras (talk) 14:39, 3 January 2025 (UTC)[reply]
    Its a stretch to say that a newbie writing a comment using AI is automatically acting in bad faith and not here to build an encyclopedia. PackMecEng (talk) 16:55, 3 January 2025 (UTC)[reply]
    That's true, but this and other comments here show that not a few editors perceive it as bad-faith, rude, etc. I take that as an indication that we should tell people to avoid doing this when they have enough CLUE to read WP:AGF and are making an effort to show they're acting in good faith. Daß Wölf 23:06, 9 January 2025 (UTC)[reply]
  • Comment Large language model AIs like ChatGPT are in their infancy. The culture hasn't finished its initial reaction to them yet. I suggest that any proposal made here have an automatic expiration/required rediscussion date two years after closing. Darkfrog24 (talk) 22:42, 3 January 2025 (UTC)[reply]
  • No – It is a matter of how you use AI. I use Google translate to add trans-title parameters to citations, but I am careful to check for Google's output making for good English as well as reflecting the foreign title when it is a language I somewhat understand. I like to think that I am careful, and I do not pretend to be fluent in a language I am not familiar with, although I usually don't announce the source of such a translation. If an editor uses AI profligately and without understanding the material generated, then that is the sin; not AI itself. Dhtwiki (talk) 05:04, 5 January 2025 (UTC)[reply]
    There's a legal phrase, "when the exception swallows the rule", and I think we might be headed there with the recent LLM/AI discussions.
    We start off by saying "Let's completely ban it!" Then in discussion we add "Oh, except for this very reasonable thing... and that reasonable thing... and nobody actually meant this other reasonable thing..."
    The end result is that it's "completely banned" ...except for an apparent majority of uses. WhatamIdoing (talk) 06:34, 5 January 2025 (UTC)[reply]
    Do you want us to reply to you, because you are a human? Or are you just posting the output of an LLM without bothering to read anything yourself? DS (talk) 06:08, 7 January 2025 (UTC)[reply]
    Most likely you would reply because someone posted a valid comment and you are assuming they are acting in good faith and taking responsibility for what they post. To assume otherwise is kind of weird and not in line with general Wikipedia values. PackMecEng (talk) 15:19, 8 January 2025 (UTC)[reply]
  • No The OP seems to misunderstand WP:DGF which is not aimed at weak editors but instead exhorts stronger editors to lead by example. That section already seems to overload the primary point of WP:AGF and adding mention of AI would be quite inappropriate per WP:CREEP. Andrew🐉(talk) 23:11, 5 January 2025 (UTC)[reply]
  • No. Reading the current text of the section, adding text about AI would feel out-of-place for what the section is about. pythoncoder (talk | contribs) 05:56, 8 January 2025 (UTC)[reply]
  • No, this is not about good faith. Adumbrativus (talk) 11:14, 9 January 2025 (UTC)[reply]
  • Yes. AI use is not a demonstration of bad faith (in any case not every new good-faith editor is familiar with our AI policies), but it is equally not a "demonstration of good faith", which is what the WP:DGF section is about.
It seems some editors are missing the point and !voting as if every edit is either a demonstration of good faith or bad faith. Most interactions are neutral and so is most AI use, but I find it hard to imagine a situation where AI use would point away from unfamiliarity and incompetence (in the CIR sense), and it often (unintentionally) leads to a presumption of laziness and open disinterest. It makes perfect sense to recommend against it. Daß Wölf 22:56, 9 January 2025 (UTC)[reply]
Indeed most kinds of actions don't inherently demonstrate good or bad. The circumspect and neutral observation that AI use is not a demonstration of bad faith... but it is equally not a "demonstration of good faith", does not justify a proposal to one-sidedly say just half. And among all the actions that don't necessarily demonstrate good faith (and don't necessarily demonstrate bad faith either), it is not the purpose of "demonstrate good faith" and the broader guideline, to single out one kind of action to especially mention negatively. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)[reply]
  • Yes. Per Dass Wolf, though I would say passing off a completely AI-generated comment as your own anywhere is inherently bad-faith and one doesn't need to know Wiki policies to understand that. JoelleJay (talk) 23:30, 9 January 2025 (UTC)[reply]
  • Yes. Sure, LLMs may have utility somewhere, and it might be a crutch for people unfamiliar with English, but as I've said above in the other AI RfC, that's a competence issue. This is about comments eating up editor time, energy, about LLMs easily being used to ram through changes and poke at editors in good standing. I don't see a case wherein a prospective editor's command of policy and language is good enough to discuss with other editors while being bad enough to require LLM use. Iseult Δx talk to me 01:26, 10 January 2025 (UTC)[reply]
    Good faith is separate from competence. Trying to do good is separate from having skills and knowledge to achieve good results. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)[reply]
  • No - anyone using a washing machine to wash their clothes must be evil and inherently lazy. They cannot be trusted. ... Oh, sorry, wrong century. Regards, --Goldsztajn (talk) 01:31, 10 January 2025 (UTC)[reply]
    Using a washing machine still results in washed clothes. Using LLMs results in communication failures because the LLM-using party isn't fully engaging. Hydrangeans (she/her | talk | edits) 04:50, 27 January 2025 (UTC)[reply]
    And before there's a reply of 'the washing machine-using party isn't fully engaging in washing clothes'—washing clothes is a material process. The clothes get washed whether or not you pay attention to the suds and water. Communication is a social process. Users can't come to a meeting of the minds if some of the users outsource the 'thinking' to word salad-generators that can't think. Hydrangeans (she/her | talk | edits) 05:00, 27 January 2025 (UTC)[reply]
  • No - As long as a person understands (and knows) what they are talking about, we shouldn't discriminate against folks using generative AI tech for grammar fixes or minor flow improvements. Yes, AI can create walls of text, and make arguments not grounded in policy, but we could do that even without resorting to generative AI. Sohom (talk) 11:24, 13 January 2025 (UTC)[reply]
To expand on my point above. Completely AI generated comments (or articles) are obviously bad, but using AI should be thrown into the same cross-hairs as completely AI generated comments. Sohom (talk) 11:35, 13 January 2025 (UTC)[reply]
@Sohom Datta You mean shouldn't be thrown? I think that would make more sense given the context of your original !vote. Duly signed, WaltClipper -(talk) 14:08, 14 January 2025 (UTC)[reply]
  • No. Don't make any changes. It's not a good faith/bad faith issue. The 'yes' arguments are most unconvincing with very bizarre analogies to make their point. Here, I can make one too: "Don't edit with AI; you wouldn't shoot your neighbor's dog with a BB-gun, would you?" Duly signed, WaltClipper -(talk) 14:43, 13 January 2025 (UTC)[reply]
  • Yes. If I plug another user's comments into an LLM and ask it to generate a response, I am not participating in the project in good faith. By failing to meaningfully engage with the other user by reading their comments and making an effort to articulate myself, I'm treating the other user's time and energy frivolously. We should advise users that refraining from using LLMs is an important step toward demonstrating good faith. Hydrangeans (she/her | talk | edits) 04:55, 27 January 2025 (UTC)[reply]
  • Yes per Hydrangeans among others. Good faith editing requires engaging collaboratively with your human faculties. Posting an AI comment, on the other hand, strikes me as deeply unfair to those of us who try to engage substantively when there is disagreement. Let's not forget that editor time and energy and enthusiasm are our most important resources. If AI is not meaningfully contributing to our discussions (and I think there is good reason to believe it is not) then it is wasting these limited resources. I would therefore argue that using it is full-on WP:DISRUPTIVE if done persistently enough –– on par with e.g. WP:IDHT or WP:POINT –– but at the very least demonstrates an unwillingness to display good faith engagement. That should be codified in the guideline. Generalrelative (talk) 04:59, 28 January 2025 (UTC)[reply]
  • I appreciate your concern about the use of AI in discussions. It is important to be mindful of how AI is used, and to ensure that it is used in a way that is respectful of others.

I don't think that WP:DGF should be amended to specifically mention AI. However, I do think that it is important to be aware of the potential for AI to be used in a way that is not in good faith. When using AI, it is important to be transparent about it. Let others know that you are using AI, and explain how you are using it. This will help to build trust and ensure that others understand that you are not trying to deceive them. It is also important to be mindful of the limitations of AI. AI is not a perfect tool, and it can sometimes generate biased or inaccurate results. Be sure to review and edit any AI-generated content before you post it.

Finally, it is important to remember that AI is just a tool. It is up to you to use it in a way that is respectful and ethical.

It's easy to detect for most, can be pointed out as needed. No need to add an extra policy. JayCubby

  • Questions: While I would agree that AI may be used as a tool for good, such as leveling the field for those with certain disabilities, might it just as easily be used as a tool for disruption? What evidence exists that shows whether or not AI may be used to circumvent certain processes and requirements that make Wiki a positive collaboration of new ideas as opposed to a toxic competition of trite but effective logical fallacies? Cheers. DN (talk) 05:39, 27 January 2025 (UTC)[reply]
    AI can be used to engage positively, and it can also be used to engage negatively. Simply using AI is therefore not, in and of itself, an indication of good or bad faith. Anyone using AI to circumvent processes and requirements should be dealt with in the exact same way they would be if they circumvented those processes and requirements using any other means. Users who are not circumventing processes and requirements should not be sanctioned or discriminated against as if they were. Using a tool that others could theoretically use to cause harm or engage in bad faith does not mean that they are causing harm or engaging in bad faith. Thryduulf (talk) 08:05, 27 January 2025 (UTC)[reply]
    Well said. Thanks. DN (talk) 08:12, 27 January 2025 (UTC)[reply]
    As Hydrangeans explains above, an auto-answer tool means that the person is not engaging with the discussion. They either cannot or will not think about what others have written, and they are unable or unwilling to reply themselves. I can chat to an app if I want to spend time talking to a chatbot. Johnuniq (talk) 22:49, 27 January 2025 (UTC)[reply]
    And as I and others have repeatedly explained, that is completely irrelevant to this discussion. You can use AI in multiple different ways, some of which are productive contributions to Wikipedia, some of which are not. If someone is disruptively not engaging with discussion then they can already be sanctioned for doing so, what tools they are or are not using to do so could not be less relevant. Thryduulf (talk) 02:51, 28 January 2025 (UTC)[reply]
    This implies a discussion that is entirely between AI chatbots deserves the same attention and thought needed to close it, and can effect a consensus just as well, as one between humans, so long as its arguments are superficially reasonable and not disruptive. It implies that editors should expect and be comfortable with arguing with AI when they enter a discussion, and that they should not expect to engage with anyone who can actually comprehend them... JoelleJay (talk) 01:00, 28 January 2025 (UTC)[reply]
    That's a straw man argument, and if you've been following the discussion you should already know that. My comment implied absolutely none of what you claim it does. If you are not prepared to discuss what has actually been written then I am not going to waste more of my time replying to you in detail. Thryduulf (talk) 02:54, 28 January 2025 (UTC)[reply]
    It's not a strawman; it's an example that demonstrates, acutely, the flaws in your premise. Hydrangeans (she/her | talk | edits) 03:11, 28 January 2025 (UTC)[reply]
    If you think that demonstrates a flaw in the premise then you haven't understood the premise at all. Thryduulf (talk) 03:14, 28 January 2025 (UTC)[reply]
    I disagree. If you think it doesn't demonstrate a flaw, then you haven't understood the implications of your own position or the purpose of discussion on Wikipedia talk pages. Hydrangeans (she/her | talk | edits) 03:17, 28 January 2025 (UTC)[reply]
    I refuse to waste any more of my time on you. Thryduulf (talk) 04:31, 28 January 2025 (UTC)[reply]
    Both of the above users are correct. If we have to treat AI-generated posts in good faith the same as human posts, then a conversation of posts between users that is entirely generated by AI would have to be read by a closing admin and their consensus respected provided it didn't overtly defy policy. Photos of Japan (talk) 04:37, 28 January 2025 (UTC)[reply]
    You too have completely misunderstood. If someone is contributing in good faith, we treat their comments as having been left in good faith regardless of how they made them. If someone is contributing in bad faith we treat their comments as having been left in bad faith regardless of how they made them. Simply using AI is not an indication of whether someone is contributing in good or bad faith (it could be either). Thryduulf (talk) 00:17, 29 January 2025 (UTC)[reply]
    But we can't tell if the bot is acting in good or bad faith, because the bot lacks agency, which is the problem with comments that are generated by AI rather than merely assisted by AI. Photos of Japan (talk) 00:31, 29 January 2025 (UTC)[reply]
    But we can't tell if the bot is acting in good or bad faith, because the bot lacks agency exactly. It is the operator who acts in good or bad faith, and simply using a bot is not evidence of good faith or bad faith. What determines good or bad faith is the content not the method. Thryduulf (talk) 11:56, 29 January 2025 (UTC)[reply]
    But if the bot operator isn't generating their own comments, then their faith doesn't matter; the bot's does. Just like how if I hired someone to edit Wikipedia for me, what would matter is their faith. Photos of Japan (talk) 14:59, 30 January 2025 (UTC)[reply]
    A bot and AI can both be used in good faith and in bad faith. You can only tell which by looking at the contributions in their context, which is exactly the same as contributions made without the use of either. Thryduulf (talk) 23:12, 30 January 2025 (UTC)[reply]
    Not to go off topic, but do you object to any requirements on users for disclosure of use of AI generated responses and comments etc...? DN (talk) 02:07, 31 January 2025 (UTC)[reply]
    I'm not in favour of completely unenforceable requirements that would bring no benefits. Thryduulf (talk) 11:38, 31 January 2025 (UTC)[reply]
    Is it a demonstration of good faith to copy someone else's (let's say public domain and relevant) argument wholesale and paste it in a discussion with no attribution as if it was your original thoughts?
    Or how about passing off a novel mathematical proof generated by AI as if you wrote it by yourself? JoelleJay (talk) 02:51, 29 January 2025 (UTC)[reply]
    Specific examples of good or bad faith contributions are not relevant to this discussion. If you do not understand why this is then you haven't understood the basic premise of this discussion. Thryduulf (talk) 12:00, 29 January 2025 (UTC)[reply]
    If other actions where someone is deceptively appropriating, word-for-word, an entire argument they did not write, are intuitively "not good faith", then why would it be any different in this scenario? JoelleJay (talk) 16:57, 1 February 2025 (UTC)[reply]
    This discussion is explicitly about whether use of AI should be regarded as an indicator of bad faith. Someone deceptively appropriating, word-for-word, an entire argument they did not write is not editing in good faith. It is completely irrelevant whether they do this using AI or not. Nobody is disputing that some uses of AI are in bad faith - specific examples are neither relevant nor useful. For simply using AI to be regarded as an indicator of bad faith, all uses of AI must be in bad faith, which they are not (as multiple people have repeatedly explained).
    Everybody agrees that some people who edit using mobile phones do so in bad faith, but we don't regard simply using a mobile phone as evidence of editing in bad faith because some people who edit using mobile phones do so in good faith. Listing specific examples of bad faith use of mobile phones is completely irrelevant to a discussion about that. Replace "mobile phones" with "AI" and absolutely nothing changes. Thryduulf (talk) 18:18, 1 February 2025 (UTC)[reply]
    Except the mobile phone user is actually doing the writing. Hydrangeans (she/her | talk | edits) 19:39, 1 February 2025 (UTC)[reply]
    I know I must be sounding like a stuck record at this point, but there are only so many ways you can describe completely irrelevant things as completely irrelevant before that happens. The AI system is incapable of having faith, good or bad, in the same way that a mobile phone is incapable of having faith, good or bad. The faith comes from the person using the tool not from the tool itself. That faith can be either good or bad, but the tool someone uses does not and cannot tell you anything about that. Thryduulf (talk) 20:07, 1 February 2025 (UTC)[reply]
    That is a really good summary of the situation. Using a widely available and powerful tool does not mean you are acting in bad faith, it is all in how it is used. PackMecEng (talk) 02:00, 28 January 2025 (UTC)[reply]
    A tool merely being widely available and powerful doesn't mean it's suited to the purpose of participating in discussions on Wikipedia. By way of analogy, Infowars is/was widely available and powerful, in the sense of the exercise it influenced over certain Internet audiences, but its very character as a disinformation platform makes it unsuitable for citation on Wikipedia. LLMs are widely available and might be considered 'powerful' in the sense that they can manage a raw output of vaguely plausible-sounding text, but their very character as text prediction models—rather than actual, deliberated communication—make them unsuitable mechanisms for participating in Wikipedia discussions. Hydrangeans (she/her | talk | edits) 03:16, 28 January 2025 (UTC)[reply]
    Even if we assume your premise is true, that does not indicate that someone using an LLM (which come in a wide range of abilities and are only a subset of AI) is contributing in either good or bad faith. It is completely irrelevant to the faith in which they are contributing. Thryduulf (talk) 04:30, 28 January 2025 (UTC)[reply]
    But this isn't about whether you think it's a useful tool or not. This is about whether someone who uses one is automatically acting in bad faith. We can argue the merits and benefits of AI all day, and they certainly have their place, but nothing you said struck at the point of this discussion. PackMecEng (talk) 13:59, 28 January 2025 (UTC)[reply]
Yes. To echo someone here, no one signed up here to argue with bad AI chatbots. If you're a non-native speaker running your posts through ChatGPT for spelling and grammar, that's one thing, but wasting time bickering with AI slop is an insult. Hydronym89 (talk) 16:33, 28 January 2025 (UTC)[reply]
Your comment provides good examples of using AI in good and bad faith, thus demonstrating that simply using AI is not an indication of either. Thryduulf (talk) 00:18, 29 January 2025 (UTC)[reply]
Is that a fair comparison? I disagree that it is. Spelling and grammar checking doesn't seem to be what we are talking about.
The importance of context in which it is being used is, I think, the part that may be perceived as falling through the cracks in relation to AGF or DGF, but I agree there is a legitimate concern for AI being used to game the system in achieving goals that are inconsistent with being WP:HERE.
I think we all agree that time is a valuable commodity that should be respected, but not at the expense of others. Using a bot to fix grammar and punctuation is acceptable because it typically saves more time than it costs. Using AI to enable endless debates, even if both opponents are using it, seems like an awful waste of space, let alone the time it would cost admins that need to sort through it all. DN (talk) 01:16, 29 January 2025 (UTC)[reply]
Engaging in endless debates that waste the time of other editors is disruptive, but this is completely irrelevant to this discussion for two reasons. Firstly, someone engaging in this behaviour may be doing so in either good or bad faith: someone intentionally doing so is almost certainly WP:NOTHERE, and we regularly deal with such people. Other people sincerely believe that their arguments are improving Wikipedia and/or that the people they are arguing with are trying to harm it. This doesn't make it less disruptive but equally doesn't mean they are contributing in bad faith.
Secondly, this behaviour is completely independent of whether someone is using AI or not: some people engaging in this behaviour are using AI, and some are not. Some people who use AI engage in this behaviour, and some do not.
For the perfect illustration of this see the people in this discussion who are making extensive arguments in good faith, without using AI, while having not understood the premise of the discussion - despite this being explained to them multiple times. Thryduulf (talk) 12:13, 29 January 2025 (UTC)[reply]
Would you agree that using something like grammar and spellcheck is not the same as using AI (without informing other users) to produce comments and responses? DN (talk) 22:04, 29 January 2025 (UTC)[reply]
They are different uses of AI, but that's not relevant because neither use is, in and of itself, evidence of the faith in which the user is contributing. Thryduulf (talk) 22:14, 29 January 2025 (UTC)[reply]
You are conflating "evidence" with "proof". Using AI to entirely generate your comments is not "proof" of bad faith, but it definitely provides less "evidence" of good faith than writing out a comment yourself. Photos of Japan (talk) 03:02, 30 January 2025 (UTC)[reply]
No, it provides no evidence of good or bad faith at all. Thryduulf (talk) 12:54, 30 January 2025 (UTC)[reply]
  • No per WP:CREEP. After reading the current version of the section, it doesn't seem like the right place to say anything about AI. -- King of ♥ 01:05, 29 January 2025 (UTC)[reply]
  • Yes, with caveats this discussion seems to be spiraling into a discussion of several separate issues. I agree with Remsense and Simonm223 and others that using an LLM to generate your reply to a discussion is inappropriate on Wikipedia. Wikipedia runs on consensus, which requires communication between humans to arrive at a shared understanding. Putting in the effort to fully understand and respond to the other parties is an essential part of good-faith engagement in the consensus process. If I hired a human ghost writer to use my Wiki account to argue for my desired changes on a wiki article, that would be completely inappropriate, and using an AI to replace that hypothetical ghost writer doesn't make it any more acceptable. With that said, I understand this discussion to be about how to encourage editors to demonstrate good faith. Many of the people here on both sides seem to think we are discussing banning or encouraging LLM use, which is a different conversation. In the context of this discussion demonstrating good faith means disclosing LLM use and never using LLMs to generate replies to any contentious discussion. This is a subset of "articulating your honest motives" (since we can't trust the AI to accurately convey your motives behind your advocacy) and "avoidance of gaming the system" (since using an LLM in a contentious discussion opens up the concern that you might simply be using minimal effort to waste the time of those who disagree with you and win by exhaustion). I think it is appropriate to mention the pitfalls of LLM use in WP:DGF, though I do not at this time support an outright ban on its use. -- LWG talk 05:19, 1 February 2025 (UTC)[reply]
  • No. For the same reason I oppose blanket statements about bans of using AI elsewhere, it is not only a huge overreach but fundamentally impossible to enforce. I've seen a lot of talk around testing student work to see if it is AI, but that is impossible to do reliably. When movable type and the printing press began replacing scribes, the handwriting of scribes began to look like that of a printing press. As AI becomes more prominent, I imagine human writing will begin to look more AI-generated. People who use AI for things like helping them translate their native writing into English should not be punished if something leaks through that makes the use obvious. Like anywhere else on the Internet, I foresee any strict rules against the use of AI quickly being used in bad faith in heated arguments to accuse others of being a bot.
GeogSage (⚔Chat?⚔) 19:12, 2 February 2025 (UTC)[reply]

[tangent] If any of the people who have used LLMs/AI tools would be willing to do me a favor, please see the request at Wikipedia talk:Large language models#For an LLM tester. I think this (splitting a very long page – not an article – by date) is something that will be faster and more accurately done by a script than by a human. WhatamIdoing (talk) 18:25, 29 January 2025 (UTC)[reply]

  • Yes. The purpose of a discussion forum is for editors to engage with each other; fully AI-generated responses serve no purpose but to flood the zone and waste people's time, meaning they are, by definition, bad faith. Obviously this does not apply to light editing, but that's not what we're actually discussing; this is about fully AI-generated material, not about people using grammar and spellchecking software to clean up their own words. No one has come up with even the slightest rationale for why anyone would do so in good faith - all they've provided is vague "but it might be useful to someone somewhere, hypothetically" - which is, in fact, false, as their total inability to articulate any such case shows. And the fact that some people are determined to defend it regardless shows why we do in fact need a specific policy making clear that it is inappropriate. --Aquillion (talk) 19:08, 2 February 2025 (UTC)[reply]

Contacting/discussing organizations that fund Wikipedia editing


I have seen it asserted that contacting another editor's employer is always harassment and therefore grounds for an indefinite block without warning. I absolutely get why we take it seriously and 99% of the time this norm makes sense. (I'm using the term "norm" because I haven't seen it explicitly written in policy.)

In some cases there is a conflict between this norm and the ways in which we handle disruptive editing that is funded by organizations. There are many types of organizations that fund disruptive editing - paid editing consultants, corporations promoting themselves, and state propaganda departments, to name a few. Sometimes the disruption is borderline or unintentional. There have been, for instance, WMF-affiliated outreach projects that resulted in copyright violations or other crap being added to articles.

We regularly talk on-wiki and off-wiki about organizations that fund Wikipedia editing. Sometimes there is consensus that the organization should either stop funding Wikipedia editing or should significantly change the way they're going about it. Sometimes the WMF legal team sends cease-and-desist letters.

Now here's the rub: Some of these organizations employ Wikipedia editors. If a view is expressed that the organizations should stop the disruptive editing, it is foreseeable that an editor will lose a source of income. Is it harassment for an editor to say "Organization X should stop/modify what it's doing to Wikipedia?" at AN/I? Of course not. Is it harassment for an editor to express the same view in a social media post? I doubt we would see it that way unless it names a specific editor.

Yet we've got this norm that we absolutely must not contact any organization that pays a Wikipedia editor, because this is a violation of the harassment policy. Where this leads is a bizarre situation in which we are allowed to discuss our beef with a particular organization on AN/I but nobody is allowed to email the organization even to say, "Hey, we're having a public discussion about you."

I propose that if an organization is reasonably suspected to be funding Wikipedia editing, contacting the organization should not in and of itself be considered harassment. I ask that in this discussion, we not refer to real cases of alleged harassment, both to avoid bias-inducing emotional baggage and to prevent distress to those involved. Clayoquot (talk | contribs) 03:29, 22 January 2025 (UTC)[reply]

I'm not sure the posed question is actually the relevant one. Take as a given that Acme Co. is spamming Wikipedia. Sending Acme Co. a strongly worded letter to cut it out could potentially impact the employment of someone who edits Wikipedia, but is nonspecific as to who. I'd liken this to saying, "Amazon should be shut down." It will doubtless affect SOME Wikipedia editor, but it never targeted them. This should not be sanctioned.
The relevant question is whether you call out a specific editor in connection. If AcmeLover123 is suspected or known to be paid by Acme Co. to edit Wikipedia, care should be taken in how it's handled. Telling AcmeLover123, "I'm going to tell your boss to fire you because you're making them look bad" is pretty unambiguous WP:HARASSMENT, and has a chilling effect like WP:NLT. Thus, it should be sanctioned. On the other hand, sending Acme Co. that strongly worded letter and then going to WP:COIN to say, "Acme Co. has been spamming Wikipedia lately. I sent them a letter telling them to stop. AcmeLover123 has admitted to being in the employ of Acme Co." This seems to me to be reasonable. So I think just as WP:NLT has no red-line rule of "using these words means it's a legal threat", contacting an employer should likewise be considered on a case-by-case basis. EducatedRedneck (talk) 14:20, 28 January 2025 (UTC)[reply]
Even if a specific editor is named when contacting an employer, we should be looking at it on a case-by-case basis. My understanding is that in the events that have burned into our collective emotional memory, trolls contacted organizations that had nothing to do with their employee's volunteer Wikipedia activity. Contacting these employers was a gross violation of the volunteer's right to privacy.
Personally, if Acme Co was paying me to edit and someone had a sincere complaint about these edits that they wanted to bring to AN/I, I would actually much prefer them to bring that complaint to Acme Co first to give us a chance to correct the problem with dignity. If a post about an Acme Co-sponsored project on AN/I isn't a violation of privacy, I can't see why sending exactly the same content to Acme Co via less-public channels like email would be one. Whether a communication constitutes harassment depends on the content. Clayoquot (talk | contribs) 00:30, 30 January 2025 (UTC)[reply]
Yes, what you described is why I don't think anyone here thinks contacting an employer is categorically forbidden. Though my concerns are, as I mentioned above, less about privacy (though HEB's comments below are well-taken), and far more about the chilling effect similar to WP:NLT. If there's even a whiff of such a chilling effect, I think it's reasonable to treat it the same. If it's vague, a stern caution is appropriate. If it reads as a clear intimidation, there should be a swift indef until it is clearly and unambiguously stated that there was no attempt to target the editor. Even that is a little iffy; it'd be easy for someone to do the whole, "That's a nice job you have there. It'd be a shame if something happened to it" shtick, then immediately apologize and insist it was expressing concern. The intimidation and chilling effect could remain well after any nominal retraction. EducatedRedneck (talk) 15:00, 30 January 2025 (UTC)[reply]
I think the main problem is we won't have access to the email to evaluate it unless one of the off-wiki parties shares it... We won't even know an email was sent. For accountability and transparency reasons these interactions need to take place on-wiki if they take place at all. Horse Eye's Back (talk) 15:04, 30 January 2025 (UTC)[reply]
@Horse Eye's Back That's fair. I think because off-wiki communication is a black box, like you said, I figure we can't police that anyway, so there's no point in trying. The only thing we can police is mentioning it on-wiki. If I understand you right, your thinking is that there is a bright line against contacting an entity off-wiki about Wikipedia matters. It seems like that line extends beyond employers, too. (E.g., sending someone's mother an email saying, "Look what your (grown) child is doing to Wikipedia!")
I assume the bright line is trying to influence how they relate to Wikipedia. That is, emailing Acme Co. and saying, "Hey, your Wikipedia article doesn't have a picture of [$thing]. Can you release one under CC?" seems acceptable, but telling them, "Hey, someone has been editing your article in such-and-such a way. You should try to get them to stop." is firmly in the just-take-it-to-ANI territory. Am I getting that right? EducatedRedneck (talk) 15:29, 30 January 2025 (UTC)[reply]
More or less, for me the bright line is naming a specific editor or editors... However I would interpret "You should try to get them to stop." as an attempt at harassment by proxy, even with no name attached. Horse Eye's Back (talk) 15:38, 30 January 2025 (UTC)[reply]
I see. Okay, that makes sense to me. I'm sure there are WP:BEANS ways to try to game it, but at the very least it'd catch the low-hanging fruit of blatant intimidation. You've convinced me; thanks for taking the time to explain your reasoning to me. EducatedRedneck (talk) 10:49, 31 January 2025 (UTC)[reply]
Just in general you should not be attempting to unilaterally handle AN/I level issues off-wiki. That is entirely inappropriate. Horse Eye's Back (talk) 15:04, 30 January 2025 (UTC)[reply]

Another issue is that doing that can sometimes place another link or two in a wp:outing chain, and IMO avoiding that is of immense importance. The way that you posed the question, with the very high bar of "always", is probably not the most useful for the discussion. Also, a case like this almost always involves a concern about a particular editor or centers around edits made by a particular editor, which I think is a non-typical omission from your hypothetical example. Sincerely, North8000 (talk) 19:41, 22 January 2025 (UTC)[reply]

I'm not sure what you mean by placing a link in an outing chain. Can you explain this further? I used the very high bar of "always" because I have seen admins refer to it as an "always" or a "bright line" and this shuts down the conversation. Changing the norm from "is always harassment" to "is usually harassment" is exactly what I'm trying to do.
Organizations that fund disruptive editing often hire just one person to do it but I've also seen plenty of initiatives that involve money being distributed widely, sometimes in the form of giving perks to volunteers. If the organization is represented by only one editor then there is obviously a stronger argument that contacting the organization constitutes harassment. Clayoquot (talk | contribs) 06:44, 23 January 2025 (UTC)[reply]

What would be the encyclopedic purpose(s) of the communication with the company? You don't describe one and I'm having a hard time coming up with any. Horse Eye's Back (talk) 00:42, 30 January 2025 (UTC)[reply]

It would usually be to tell them that we have a policy or guideline that their project is violating. Clayoquot (talk | contribs) 01:07, 30 January 2025 (UTC)[reply]
And the encyclopedic purpose served by that would be? Also note that if there is no on-wiki discussion then there is no consensus that P+G are being violated, so you're not actually telling them that they're violating P+G you're only telling them at you as a single individual think that they are violating P+G. Horse Eye's Back (talk) 01:16, 30 January 2025 (UTC)[reply]
It serves the same encyclopedic purpose, and carries same level of authoritativeness, as you or I dropping a warning template on a user's talk page. Clayoquot (talk | contribs) 03:08, 30 January 2025 (UTC)[reply]
Those are not at all the same (remember you aren't proposing to email the person, you're proposing to email someone you think is their employer)... At this point I think you want a license to harass; what you're proposing is unaccountable vigilante justice, and the fact that you think anything you do off-wiki carries on-wiki authority is bizarre and disturbing. How else would you like to be able to harass other editors? Nailing a printed-out warning template to someone's front door? Showing up at their place of work in person? Horse Eye's Back (talk) 14:55, 30 January 2025 (UTC)[reply]
Wikivoyage dealt with an apparent case of corporate-authorized spammy editing (or spam-adjacent) in 2020, and I thought that contacting the corporate office (a hotel chain) was a reasonable thing to do.
Paid editing isn't forbidden there, but touting is. Articles started filling up with recommendations to use that particular hotel chain. Contacting the editor(s) directly didn't seem to make a difference. Sending an e-mail message to the marketing department to ask whether they happened to have anybody working on this, and to see if we could get them to do the useful things (e.g., updated telephone numbers) without the not-so-useful things seemed to eventually have the desired effect.
Also, just to be clear, while a private e-mail is one way to go about this, I understand that there's this thing called social media, and I have heard that publicly contacting @CompanyName is supposed to be a pretty reliable way to get the attention of a corporate marketing department. "Hey, @CompanyName, do you know anything about why someone keeps pasting copyrighted content about your company into Wikipedia?" is not "contacting someone's employer"; it's "addressing the likely source of the problem".
In terms of history, I'm aware of two cases that made many editors quite uncomfortable. Without going into too many details, and purely from potentially fallible memory:
  • A banned editor was disrupting Wikipedia from IP addresses controlled by the US government. There was discussion on wiki about reporting this to the relevant agency. The disruption stopped (for a while). Some editors thought that a report could result in the editor losing his job, but (a) AFAICT nobody knows if that happened, and (b) if you have a contract that says misusing government computers could result in losing your job, then choosing to disrupt Wikipedia at work = choosing to lose your job.
  • An editor figured out someone's undisclosed real-world identity and phoned her up at work (i.e., called to talk to the editor herself, not her boss). This was taken as a much bigger deal. A stranger phoning you up at work to argue with you about Wikipedia is much more personal and threatening than a note being dropped in a government agency's public complaint box.
I don't think that either of these are equivalent to telling a company that its marketing plan is causing problems. WhatamIdoing (talk) 05:12, 1 February 2025 (UTC)[reply]

General reliability discussions have failed at reducing discussion, have become locus of conflict with external parties, and should be curtailed

[edit]

The original WP:DAILYMAIL discussion, which set off these general reliability discussions in 2017, was supposed to reduce discussion about it, something which it obviously failed to do since we have had more than 20 different discussions about its reliability since then. Generally speaking, a review of WP:RSNP does not support the idea that general reliability discussions have reduced discussion about the reliability of sources either. Instead, we see that we have repeated discussions about the reliability of sources, even where their reliability was never seriously questioned. We have had a grand total of 22 separate discussions about the reliability of the BBC, for example, 10 of which have been held since 2018. We have repeated discussions about sources that are cited in relatively few articles (e.g., Jacobin).

Moreover, these discussions spark unnecessary conflict with off-wiki parties that harms the reputation of the project. Most recently we have had an unnecessary conflict with the Anti-Defamation League, sparked by a general reliability discussion about them, but the original Daily Mail discussion did this also. In neither case was usage of the source a general problem on Wikipedia in any way that has been lessened by deprecation - the sources were neither widely used, nor permitted by existing policy on using reliable sources to be used in a problematic way.

There is also some evidence, particularly from WP:PIA5, that some editors have sought to "claim scalps" by getting sources they are opposed to on ideological grounds 'banned' from Wikipedia. Comments in such discussions are often heavily influenced by people's impression of the bias of the source.

I think at the very least we need a WP:BEFORE-like requirement for these discussions, where the editors bringing the discussion have to show that the source is one whose reliability has serious consequences for content on Wikipedia, and that they have tried to resolve the matter in other ways. The recent discussion about Jacobin, triggered simply by a comment by a Jacobin writer on Reddit, would be an example of a discussion that would be stopped by such a requirement. FOARP (talk) 15:54, 22 January 2025 (UTC)[reply]

  • The purpose of this proposal is to reduce discussion of sources. I feel that evaluating the reliability of sources is the single most important thing that we as a community can do, and I don't want to reduce the amount of discussion about sources. So I would object to this.—S Marshall T/C 16:36, 22 January 2025 (UTC)[reply]
  • Yeah I would support anything to reduce the constant attempts to kill sources at RSN. It has become one of the busiest pages on all of Wikipedia, maybe even surpassing ANI. -- GreenC 19:36, 22 January 2025 (UTC)[reply]
  • Oddly enough, I am wondering why this discussion is here, and not at the RSN talk page (Wikipedia talk:Reliable sources/Noticeboard), as it now seems to be a process discussion (more BEFORE) for RSN? Alanscottwalker (talk) 22:41, 22 January 2025 (UTC)[reply]
    Dropped a notice both there and at WT:RSP but I think these are all reasonable venues to have the discussion at, so since it's here we may as well keep it here if people think there's any more to say. Alpha3031 (tc) 12:24, 27 January 2025 (UTC)[reply]
  • Some confusion about pages here, with some mentions of RSP actually referring to RSN. RSN is a type of "before" for RSP, and RSP is intended as a summary of repeated RSN discussions. One purpose of RSP is to put a lid on discussion of sources that have appeared at RSN too many times. This isn't always successful, but I don't see a proposal here to alleviate that. Few discussions are started at RSP; they are started at RSN and may or may not result in a listing or a change at RSP. Also, many of the sources listed at RSP got there due to a formal RfC at RSN, so they were already subject to RFCBEFORE (not always obeyed). I'm wondering how many listings at RSN are created due to an unresolved discussion on an article talk page—I predict it is quite a lot. Zerotalk 04:40, 23 January 2025 (UTC)[reply]
    “Not always obeyed” is putting it mildly. FOARP (talk) 06:47, 23 January 2025 (UTC)[reply]
  • I fully agree that we need a strict interpretation of RFCBEFORE for the big "deprecate this source" RfCs. It must be shown that 1. The source is widely used on Wikipedia. 2. Removal/replacement of the source (on individual articles) has been contested. 3. Talk page discussions on use of the source have been held and have not produced a clear consensus.
We really shouldn't be using RSP for cases where a source is used problematically a single-digit number of times and no-one actually disagrees that the source is unreliable – in that case it can just be removed/replaced, with prior consensus on article talk if needed. Toadspike [Talk] 11:42, 26 January 2025 (UTC)[reply]
The vast majority of discussions at RSN are editors asking for advice, many of which get overlooked due to other more contentious discussions. The header and edit notice already contain wording telling editors not to open RFCs unless there has been prior discussion (as with any new requirement there's no way to make editors obey it).
RSP is a different problem, for example look at the entry for Metro. Ten different discussions are linked and the source rated as unreliable, except if you read those discussions most mention The Metro only in passing. There is also the misconception that RSP is (or should be) a list of all sources. -- LCU ActivelyDisinterested «@» °∆t° 19:55, 26 January 2025 (UTC)[reply]
  • If our processes of ascertaining reliability have become a locus of conflict with external parties, I'd contend this is a good and healthy thing. If Wikipedia is achieving its neutrality goal it will not be presenting the propagandized perspective of "external parties" with enough power to worry Wikipedia at all. That we are now facing opposition from far-right groups like the Heritage Foundation demonstrates we are being somewhat successful at curtailing propaganda and bias. We should be leaning into this, not shrinking away. Simonm223 (talk) 13:01, 27 January 2025 (UTC)[reply]
    Really, we should be actively seeking out such conflicts, merely for the purposes of having them? Wikipedia is not an advocacy service.
    I don't understand why we are even having a discussion about the Heritage Foundation, because on any page where the question of "should we be using the output of a think-tank for statements of fact about anything except themselves in the voice of WP" came up, the outcome would inevitably be "no", so there's no actual need to make a blanket ban on using them for that purpose. FOARP (talk) 09:49, 31 January 2025 (UTC)[reply]
  • I agree with Simon223. Regarding "these discussions spark unnecessary conflict with parties off wiki that harm the reputation of the project". It takes two to have a conflict and Wikipedia is not a combatant. "reputation" shouldn't be a lever external partisan actors can pull to exert influence. They will never be satisfied. There are incompatible value systems. Wikipedia doesn't need to compromise its values for the sake of reputation. That would be harmful. And it doesn't need to pander to people susceptible to misinformation about Wikipedia. It can just focus on the task of building an encyclopedia according to its rules. Sean.hoyland (talk) 13:45, 27 January 2025 (UTC)[reply]
  • I do note that the vast majority of these disputes relate to the reliability of news outlets. Perhaps what is needed is better guidance on the reliability and appropriate use of such sources. Blueboar (talk) 14:26, 27 January 2025 (UTC)[reply]
  • I'd favour something stronger than "curtailed", such as "stopped" or "rolled back". But in 2019 RFC: Moratorium on "general reliability" RFCs failed. The closer (ToThAc) said most opposers' arguments "basically boil down to WP:CONTEXTMATTERS" which I rather thought was our (supporters') argument; however, we were a minority. Peter Gulutzan (talk) 18:35, 27 January 2025 (UTC)[reply]
    @Peter Gulutzan: I still stand by that closure. I think the real problems are that 1) the credibility of sources changes over time, 2) there may be additional factors the original RfC did not cover, or 3) the submitter failed to check RSPS or related pages. Such discussions are bound to be unavoidable regardless of context. ToThAc (talk) 18:45, 27 January 2025 (UTC)[reply]
  • The current Heritage discussion is a real problem and (if anyone ever dares close it) should make us rethink policy. But I think this proposal overlooks the real value of the RSP system, which is preventing ordinary discussions from ever reaching RSN. I see appeals to RSP all the time on talk pages and edit summaries, and they are usually successful at cutting off debate. RSN is active because editors correctly recognize that the system works and the consensuses reached there are very powerful. I do think that the pace of RFCs is much too fast. Some blame should be placed on the RSP format, which marks discussions as stale after 4 years. As there are now many hundreds of listings, necessarily there must be reconsiderations every week just to keep up.
I'm inclined to think that we should
1. Set 3 years as minimum and 5 as stale, and deny RFCs by default unless (A) 3 years have passed since the last discussion or (B) there's been a major development which requires us to reconsider. It's very rare for a source to slide subtly into unreliability. Generally there is a major shift in management or policy which is discussed in the press. Often RFCs start with only handwaving about what warrants a new discussion.
2. Split the RSP-feeder process off from the normal RSN, which should return to its old format. IMO the biggest problem with the constant political news RFCs is that they distract attention from editors who actually need help with a non-perennial source. GordonGlottal (talk) 16:26, 29 January 2025 (UTC)[reply]
I strongly disagree that the Heritage Foundation RfC requires us to rewrite our policies. And blanket strict moratoria on new RFCs lasting 36 months are significant overreach. Simonm223 (talk) 15:54, 30 January 2025 (UTC)[reply]
The issue with the Heritage Foundation RFC is that it has little to do with reliability. The problem is that editors wanted a technical solution to the threat that HF poses and think that blacklisting is the solution. But blacklisting states a requirement that the source be discussed at RSN, and RSN says that discussions should only be about reliability.
The discussion should have stayed at the village pump. The community should have been able to make a decision there without the unnecessary bureaucracy. Technically all comments in that RFC that aren't about reliability should be ignored, which would be ridiculous but required by rigidly sticking to process. -- LCU ActivelyDisinterested «@» °∆t° 13:42, 1 February 2025 (UTC)[reply]
  • "General reliability discussions have failed at reducing discussion" is neither provable or falsifiable yet its the core of your argument. You have no idea if thats true or not and pretending otherwise is just insulting the rest of us. What I would support along the lines of your argument is a more efficient way to speedily close discussions which are near repeats. Horse Eye's Back (talk) 15:58, 30 January 2025 (UTC)[reply]
    I would also agree that would be a benefit. In general speedy clerking is good for noticeboards. Simonm223 (talk) 16:06, 30 January 2025 (UTC)[reply]
    I think we also need to make it clear that taking something to the noticeboard for the explicit purpose of generating an additional discussion to meet the perennial sources listing criteria is gaming the system. Those are the only discussions I see that really piss me off. Horse Eye's Back (talk) 16:12, 30 January 2025 (UTC)[reply]
    I hear you. As it is most of those should just be closed as lacking WP:RFCBEFORE. Simonm223 (talk) 16:13, 30 January 2025 (UTC)[reply]
    There's a couple of these currently on the noticeboard. I'd happily just close them (rather than commenting 'Bad RFC'), but there's no policy reason for doing so at the moment that I'm aware of. Unless I've missed something that says RFCs without RFCBEFORE can just be closed.
    An effort not to WP:BITE would be needed though. Due to misconceptions about the RSP, inexperienced editors see that the reliable sources for their country aren't on the RSP and, thinking it's a general list of sources, want to get those sources added. Making the description of WP:RSP clearer could help clear up those misconceptions. -- LCU ActivelyDisinterested «@» °∆t° 13:32, 1 February 2025 (UTC)[reply]
    Failure to have a prior discussion is not grounds for closing an RFC, just like failure to do a WP:BEFORE search is not grounds for closing an AFD. Sometimes an RFC is necessary because you're on such a low-traffic page that you need the RFC system to draw attention to it.
    An RFC with no prior attempts at discussion doesn't happen very often, and we are not overwhelmed with RFCs in general (it used to be about three a day; now it's about two), so keeping this option open isn't hurting us. WhatamIdoing (talk) 23:37, 1 February 2025 (UTC)[reply]
    Yeah, that's what I thought. Honestly the issue isn't the RFCs; the issue comes from editors believing they need to add sources to the RSP. Every few months there's a new editor who sees that the sources in their country aren't listed and starts an RFC, mistakenly thinking that getting them on the RSP is necessary for the sources to be considered reliable. However, the fact that there's no agreement on whether a generally reliable source that has additional considerations should be yellow or green doesn't make me hopeful that much will change. -- LCU ActivelyDisinterested «@» °∆t° 03:27, 2 February 2025 (UTC)[reply]
  • These general reliability discussions most often refer to something like a newspaper, magazine, or website (which have lots of distinct articles/webpages) rather than something like a book, so I'll limit my discussion to the former. I frequently see editors starting general reliability discussions at the RSN without giving any examples of previous (specific WP text)+(source = specific news article/opinion article/webpage) combinations that call the newspaper's/magazine's/website's general reliability into question, and without introducing an example of this sort. Yes, when we use something from a newspaper/magazine/website, we should be paying attention to its overall "reputation for fact-checking and accuracy," but also WP:RSCONTEXT. I think it's a mistake to launch into an RSN discussion of whether a newspaper/magazine/website is GREL/GUNREL without first having discussions of (specific text)+(specific article/webpage) combinations for that newspaper/magazine/website. I agree with @FOARP's last paragraph. FactOrOpinion (talk) 17:14, 30 January 2025 (UTC)[reply]
    We need some way to differentiate between "reliable in general but not for, you know, just anything" and "reliable for", which is the kind of "That politician's tweet is reliable for what he said, even though it's not reliable in general." WhatamIdoing (talk) 05:18, 1 February 2025 (UTC)[reply]
  • Certainly, one may argue for applying RFCBEFORE more strictly. However, the premise that the general reliability concept has "obviously failed" at reducing discussion is incorrect; the simple counts presented here are not sufficient, for multiple reasons. Using the Daily Mail as an example:
Extended analysis of discussion-counting approach
  • We don't inherently care about the number of discussions, but whether the number decreased. We would need a comparison to the amount of discussion before the Daily Mail RfC. This is perhaps the easiest issue to correct, but the RSP list is not necessarily comprehensive (e.g. older discussions might be under-represented, due to being out of date or because they occurred before 2018 when RSP was created).
  • The number of discussions is much less relevant than the length of the discussions, which is a more accurate measurement for the amount of time and effort spent by editors. Even if discussions were initiated at the same rate, future discussions on the same topic are likely to be shorter.
  • Discussions subsequent to the original 2017 RfC (numbers 28 through 54 on the current RSP list) are not automatically or inherently futile. It's implied that they're simply reiterating the same subjects that were being debated before 2017, but reviewing them shows that this is clearly incorrect.
  • From my quick review, only 3 of the discussions (including the 2019 RfC) were primarily about restarting debate on the Daily Mail's overall reliability. This is an entirely reasonable number, given that a certain amount of re-evaluation is expected in order to determine whether consensus has changed. In other words, the original disputes were resolved and have largely remained resolved.
  • Instead, the largest group of discussions (including the 2020 RfC) involves clarifications and refinements of the general principle. In other words, after consensus was determined, editors moved on to discussing other topics in a way that productively built on the prior consensus, which reflects the normal Wikipedia process. Other types of discussions addressed the implementation mechanisms, questions from relatively inexperienced editors, etc. In addition, many of the discussions were quite short, which I would attribute at least in part to the existence of the pre-existing consensus.
  • RSP only counts discussions on RSN, whereas most discussions on the use of sources happen on individual articles. In fact, this is potentially where we would expect the most benefit. For example, there are 462 direct links to WP:DAILYMAIL from article talk pages, all of which indicate cases where the amount of discussion was potentially reduced. This doesn’t include discussions on user talk (507 links), discussions that used other redirects, or discussions that linked directly to RSP. It also doesn’t include discussions that were pre-empted entirely, by the edit filter or by knowledge of the existing consensus.
Beyond that, of course, reduction in repetitive discussion is not the only possible type of benefit. As determined by consensus, the removal of Daily Mail references since 2017 reflects a major improvement in the quality of our content. Perhaps that is assumed, but it is a major advantage that needs to be included in the cost-benefit analysis.
One thing I do agree with is that Wikipedia's reputation is a relevant factor to consider; our purpose is to serve the readers, and to do that we need them to trust us. It's conceivable that the benefits from classifying or reclassifying a particular source could be outweighed by the risk of igniting a controversy or appearing partisan, especially if a source is rarely used or if its disadvantages could be mitigated in other ways. (And assuming that the alternative isn't likely to alienate a different population that's even larger, etc.) However, there are relatively few sources where this is likely to be an issue, so I would be more likely to support an initiative that applies specifically to the relevant sources. Sunrise (talk) 10:04, 1 February 2025 (UTC)[reply]
I'm not sure that I agree with you that "We don't inherently care about the number of discussions, but whether the number decreased". Sometimes we really do care about how many times ____ gets revisited, because the fact that people are starting discussions indicates that they are uncertain. If you see something from DubiousWebsite.com, and you are dubious about it, and the notes at RSP confirm your initial impression, then you will not start a discussion. If, however, you discover that Fox News is listed, and Fox News happens to be a main source of your own (and your friends' and neighbors') news information, then you are likely to start a discussion because you believe it is wrong and, in good faith and with what you perceive to be Wikipedia's best interests at heart, you want to try to fix the mistake. WhatamIdoing (talk) 23:46, 1 February 2025 (UTC)[reply]
In general terms, yes, but I was speaking in the context of evaluating the effectiveness of an intervention. If N discussions occurred, that can indeed be a relevant issue, and you've given a reasonable argument to that effect. However, the argument that was made is "N discussions occurred, therefore the intervention had no effect at all", which isn't a valid line of reasoning because it doesn't tell us whether there was an improvement over the alternative. Another way to describe this is that the measurement has no control group.
The problem being highlighted by the bullet point you quoted isn't that we never care about discussion counts at all. Instead, the issue is that the count on its own has no meaning for the intervention's effectiveness, because the necessary comparison is missing. Furthermore, even if a correction is made, this is only the first of multiple reasons why the overall logic is insufficient, as I have described in the analysis. Sunrise (talk) 08:42, 2 February 2025 (UTC)[reply]

Primary sources vs Secondary sources

[edit]

The discussion above has spiralled out of control, and needs clarification. The discussion revolves around how to count episodes for TV series when a traditionally shorter episode (e.g., 30 minutes) is broadcast as a longer special (e.g., 60 minutes). The main point of contention is whether such episodes should count as one episode (since they aired as a single entity) or two episodes (reflecting production codes and industry norms).

The simple question is: when primary sources and secondary sources conflict, which do we use on Wikipedia?

  • The contentious article behind this discussion is at List of Good Luck Charlie episodes, in which Deadline, TVLine and The Futon Critic all state that the series has 100 episodes; this article from TFC, which is a direct copy of the press release from Disney Channel, also states that the series has "100 half-hour episodes".
  • The article has 97 episodes listed; the discrepancy is from three particular episodes that are all an hour long (in a traditionally half-hour long slot). These episodes each receive two production codes, indicating two episodes, but each aired as one singular, continuous release. An editor argues that the definition of an episode means that these count as a singular episode, and stands by these episodes being the important primary sources.
  • The discussion above discusses what an episode is. Should these be considered one episode (per the primary source of the episode), or two episodes (per the secondary sources provided)? This is where the primary conflict is.
  • Multiple editors have stated that the secondary sources refer to the production of the episodes, despite the secondary sources not using this word in any format, and that the primary sources therefore override the "incorrect" information of the secondary sources. Some editors have argued that there are 97 episodes, because that's what's listed in the article.
  • WP:CALC has been cited; Routine calculations do not count as original research, provided there is consensus among editors that the results of the calculations are correct, and a meaningful reflection of the sources. An editor argues that there is not the required consensus. WP:VPT was also cited.

Another example was provided at Abbott Elementary season 3#ep36.

    • The same editor arguing for the importance of the primary source stated that he would have listed this as one episode, despite a reliable source[1] stating that there are 14 episodes in the season.
  • WP:PSTS has been quoted multiple times:
    • Wikipedia articles usually rely on material from reliable secondary sources. Articles may make an analytic, evaluative, interpretive, or synthetic claim only if it has been published by a reliable secondary source.
    • While a primary source is generally the best source for its own contents, even over a summary of the primary source elsewhere, do not put undue weight on its contents.
    • Do not analyze, evaluate, interpret, or synthesize material found in a primary source yourself; instead, refer to reliable secondary sources that do so.
    • Other quotes from the editors arguing for the importance of primary over secondary include:
    • When a secondary source conflicts with a primary source we have an issue to be explained but when the primary source is something like the episodes themselves and what is in them and there is a conflict, we should go with the primary source.
    • We shouldn't be doing "is considered to be"s, we should be documenting what actually happened as shown by sources, the primary authoritative sources overriding conflicting secondary sources.
    • Yep, secondary sources are not perfect and when they conflict with authoritative primary sources such as released films and TV episodes we should go with what is in that primary source.

Having summarized this discussion, the question remains: when primary sources and secondary sources conflict, which do we use on Wikipedia?

  1. Primary, as the episodes are authoritative for factual information, such as runtime and presentation?
  2. Or secondary, which guide Wikipedia's content over primary interpretations?

-- Alex_21 TALK 22:22, 23 January 2025 (UTC)[reply]

  • As someone who has never watched Abbott Elementary, the example given at Abbott Elementary season 3#ep36 would be confusing to me. If we are going to say that something with one title, released as a single unit, is actually two episodes we should provide some sort of explanation for that. I would also not consider this source reliable for the claim that there were 14 episodes in the season. It was published three months before the season began to air; even if the unnamed sources were correct when it was written that the season was planned to have 14 episodes, plans can change. Caeciliusinhorto-public (talk) 10:13, 24 January 2025 (UTC)[reply]
    Here is an alternate source, after the premiere's release, that specifically states the finale episode as Episode 14. (Another) And what of your thoughts for the initial argument and contested article, where the sources were also posted after the multiple multi-part episode releases? -- Alex_21 TALK 10:48, 24 January 2025 (UTC)[reply]
    Vulture does say there were 14 episodes in that season, but it also repeatedly describes "Career Day" (episode 1/2 of season 3) in the singular as "the episode" in its review and never as "the episodes". Similarly IndieWire and Variety refer to "the supersized premiere episode, 'Career Day'" and "the mega-sized opener titled 'Career Day Part 1 & 2'" respectively, and treat it largely as a single episode in their reviews, though both acknowledge that it is divided into two parts.
    If reliable sources do all agree that the one-hour episodes are actually two episodes run back-to-back, then we should conform to what the sources say, but that is sufficiently unexpected (and even the sources are clearly not consistent in treating these all as two consecutive episodes) that we do need to at least explain that to our readers.
    In the case of Good Luck Charlie, while there clearly are sources saying that there were 100 episodes, none of them seem to say which episodes are considered to be two, and I would consider "despite airing under a single title in a single timeslot, this is two episodes" to be a claim which is likely to be challenged and thus require an inline citation per WP:V. I have searched and I am unable to find a source which supports the claim that e.g episode 3x07 "Special Delivery" is actually two episodes. Caeciliusinhorto-public (talk) 12:18, 24 January 2025 (UTC)[reply]
@Caeciliusinhorto-public: That's another excellent way of putting it. Plans change. Sources like Deadline Hollywood are definitely WP:RS, but they report on future information and don't really update to reflect what actually happened. How are sources like Deadline Hollywood supposed to know when two or more episodes are going to be merged for presentation? To use a couple of other examples, the first seasons for both School of Rock and Andi Mack were reported to have 13 episodes each by Deadline Hollywood and other sources. However, the pilot for School of Rock (101) never aired and thus the first season actually only had 12 episodes, while the last episode of Andi Mack's first season (113) was held over to air in the second season and turned into a special and thus the first season only had 12 episodes. Using School of Rock, for example, would we still insist on listing 13 episodes for the season and just make up an episode to fit with the narrative that the source said there are 13 episodes? No, of course not. It's certainly worth mentioning as prose in the Production section, such as: The first season was originally reported to have 13 episodes; however, only 12 episodes aired due to there being an unaired pilot. But in terms of the number of episodes for the first season, it would be 12, not 13. Amaury 22:04, 24 January 2025 (UTC)[reply]
And what of the sources published later, after the finale, as provided, in which the producer of the series still says that there are 14 episodes? Guidelines and policies (for example, secondary sources vs primary sources) can easily be confused; for example, claiming MOS:SEASON never applies because we have to quote a source verbatim even if it says "summer 2016", against Wikipedia guidelines. So, if we need to quote a source verbatim, then it is fully supported that there are 14 episodes in the AE season, or 100 episodes in the GLC series. None of the sources provided (100 episodes, 14 episodes) are future information. What would you do with this past information? -- Alex_21 TALK 23:56, 24 January 2025 (UTC)[reply]
Nevertheless, the question remains: does one editor's unsourced definition of an episode overrule the basic sourcing policies of Wikipedia? -- Alex_21 TALK 23:58, 24 January 2025 (UTC)[reply]
Usually we don't need to source the meaning of common English language words and concepts. The article at episode reflects common usage and conforms to this dictionary definition - "any installment of a serialized story or drama". Geraldo Perez (talk) 00:27, 25 January 2025 (UTC)[reply]
If a series had 94 half-hour episodes and three of one hour why not just say that? Phil Bridger (talk) 11:04, 24 January 2025 (UTC)[reply]
What would you propose be listed in the first column of the tables at List of Good Luck Charlie episodes, and in the infobox at Good Luck Charlie?
Contentious article aside, my question remains as to whether primary or secondary sources are what we based Wikipedia upon. -- Alex_21 TALK 11:11, 24 January 2025 (UTC)[reply]
  • If only we could divert all this thought and effort to contentious topics.
    Infoboxes cause a high proportion of Wikipedia disputes because they demand very short entries and therefore can't handle nuance. The solution is not to use the disputed parameter of the infobox.
    None of these sources are scholarly analysis or high quality journalism and they're merely repeating the publisher's information uncritically, so none of them are truly secondary in the intended meaning of the word.—S Marshall T/C 13:11, 24 January 2025 (UTC)[reply]
    Yes, secondary sources "contain analysis, evaluation, interpretation, or synthesis of the facts, evidence, concepts, and ideas taken from primary sources", that is correct. -- Alex_21 TALK 23:57, 24 January 2025 (UTC)[reply]
    I agree with S Marshall: if putting "the" number on it is contentious, then leave it out.
    Alternatively, add some text to address it directly. You could say something like "When a double-length special is broadcast, industry standards say that's technically two episodes.[1] Consequently, sources differ over whether 'The Amazing Double Special' should be counted as episode 13 and 'The Dénouement' as episode 14, or if 'The Amazing Double Special' is episodes 13 and 14 and 'The Dénouement' is episode 15. The table below uses natural counting [or the industry counting style; what matters is that you specify, not which one you choose] and thus labels it as episode 13 and the following one as episode 14 [or the other way around]."
    Wikipedia doesn't have to endorse one or the other as the True™ Episode Counting Style. Just educate the reader about the difference, and tell them which one the article is using. WhatamIdoing (talk) 23:54, 1 February 2025 (UTC)[reply]

Request for research input to inform policy proposals about banners & logos

[edit]

I am leading an initiative to review and make recommendations on updates to policies and procedures governing decisions to run project banners or make temporary logo changes. The initiative is focused on ensuring that project decisions to run a banner or temporarily change their logo in response to an “external” event (such as a development in the news or proposed legislation) are made based on criteria and values that are shared by the global Wikimedia community. The first phase of the initiative is research into past examples of relevant community discussions and decisions. If you have examples to contribute, please do so on the Meta-Wiki page. Thanks! --CRoslof (WMF) (talk) 00:04, 24 January 2025 (UTC)[reply]

@CRoslof (WMF): Was this initiative in the works before ar-wiki's action regarding Palestine, or was it prompted by that? voorts (talk/contributions) 02:03, 24 January 2025 (UTC)[reply]
@voorts: Planning for this initiative began several months ago. The banners and logo changes on Arabic Wikipedia were one factor in making this work a higher priority, but by no means the only factor. One of the key existing policies that relates to this topic is the Wikimedia Foundation Policy and Political Association Guideline. The current version of that policy is pretty old at this point, and we've found that it hasn't clearly answered all the questions about banners that have come up since it was last updated. We can also see how external trends, including those identified in the Foundation's annual plan, might result in an increase in community proposals to take action. Updating policies is one way to support decision-making on those possible proposals. CRoslof (WMF) (talk) 01:09, 25 January 2025 (UTC)[reply]

RfC: Amending ATD-R

[edit]

Should WP:ATD-R be amended as follows:

A page can be [[Wikipedia:BLANKANDREDIRECT|blanked and redirected]] if there is a suitable page to redirect to, and if the resulting redirect is not [[Wikipedia:R#DELETE|inappropriate]]. If the change is disputed via a [[Wikipedia:REVERT|reversion]], an attempt should be made to reach a [[Wikipedia:Consensus|consensus]] before blank-and-redirecting again. Suitable venues for doing so include the article's talk page and [[Wikipedia:Articles for deletion]].
+
A page can be [[Wikipedia:BLANKANDREDIRECT|blanked and redirected]] if there is a suitable page to redirect to, and if the resulting redirect is not [[Wikipedia:R#DELETE|inappropriate]]. If the change is disputed, such as by [[Wikipedia:REVERT|reversion]], an attempt should be made to reach a [[Wikipedia:Consensus|consensus]] before blank-and-redirecting again. The preferred venue for doing so is the appropriate [[WP:XFD|deletion discussion venue]] for the pre-redirect content, although sometimes the dispute may be resolved on the page's talk page.

Support (Amending ATD-R)

[edit]
  • As proposer. This reflects existing consensus and current practice. Blanking of article content should be discussed at AfD, not another venue. If someone contests a BLAR, they're contesting the fact that article content was removed, not that a redirect exists. The venue matters because different sets of editors patrol AfD and RfD. voorts (talk/contributions) 01:54, 24 January 2025 (UTC)[reply]
  • Summoned by bot. I broadly support this clarification. However, I think it could be made even clearer that, in lieu of an AfD, if a consensus on the talkpage emerges that it should be merged to another article, that suffices and reverting a BLAR doesn't change that consensus without good reason. As written, I worry that the interpretation will be "if it's contested, it must go to AfD". I'd recommend the following: This may be done through either a merge discussion on the talkpage that results in a clear consensus to merge. Alternatively, or if a clear consensus on the talkpage does not form, the article should be submitted through Articles for Deletion for a broader consensus to emerge. That said, I'm not so miffed with the proposed wording to oppose it. -bɜ:ʳkənhɪmez | me | talk to me! 02:35, 24 January 2025 (UTC)[reply]
    I don't see this proposal as precluding a merge discussion. voorts (talk/contributions) 02:46, 24 January 2025 (UTC)[reply]
    I don't either, but I see the wording of although sometimes the dispute may be resolved on the article's talk page closer to "if the person who contested/reverted agrees on the talk page, you don't need an AfD" rather than "if a consensus on the talk page is that the revert was wrong, an AfD is not needed". The second is what I see general consensus as, not the first. -bɜ:ʳkənhɪmez | me | talk to me! 02:53, 24 January 2025 (UTC)[reply]
  • I broadly support the idea: an AFD is going to get more eyes than an obscure talkpage, so I suspect it is the better venue in most cases. I'm also unsure how to work this nuance into the prose, but I suspect that in the rare cases where another forum would be better, such a forum might emerge anyway. CMD (talk) 03:28, 24 January 2025 (UTC)[reply]
  • Support per my extensive comments in the prior discussion. Thryduulf (talk) 11:15, 24 January 2025 (UTC)[reply]
  • Support, although I don't see much difference between the status quo and the proposed wording. Basically, the two options, AfD or the talk page, are just switched around. It doesn't address the concerns that in some cases RfD is or is not a valid option. Perhaps it needs a solid "yes" or "no" on that issue? If RfD is an option, then that should be expressed in the wording. And since according to editors some of these do wind up at RfD when they shouldn't, then maybe that should be made clear here in this policy's wording, as well. Specifically addressing the RfD issue in the wording of this policy might actually lead to positive change. P.I. Ellsworth , ed. put'er there 17:26, 24 January 2025 (UTC)[reply]
  • Support the change in wording to state the preference for AFD in the event of a conflict, because AFD is more likely to result in binding consensus than simply more talk. Robert McClenon (talk) 01:04, 25 January 2025 (UTC)[reply]
  • Support Per Thryduulf's reasoning in the antecedent discussion. Jclemens (talk) 04:45, 25 January 2025 (UTC)[reply]
  • Support. AfD can handle redirects, merges, DABifies...the gamut. This kind of discussion should be happening out in the open, where editors versed in notability guidelines are looking for discussions, rather than between two opposed editors on an article talk page (where I doubt resolution will be easily found anyways). Toadspike [Talk] 11:48, 26 January 2025 (UTC)[reply]
  • Support firstly, because by "blank and redirect" you're fundamentally saying that an article shouldn't exist at that title (presumably either because it's not notable, or it is notable but it's best covered at another location). WP:AFD is the best location to discuss this. Secondly, because this has been abused in the past. COVID-19 lab leak theory is one example; and when it finally reached AFD, there was a pretty strong consensus for an article to exist at that title, which settled a dispute that spanned months. There are several other examples; AFD has repeatedly proven to be the best settler of "blank and redirect" situations, and the best at avoiding the "low traffic talk page" issue. ProcrastinatingReader (talk) 18:52, 26 January 2025 (UTC)[reply]
  • Support, my concerns have been aired and I'm comfortable with using AfD as a primary venue for discussing any pages containing substantial article content. Utopes (talk / cont) 22:30, 29 January 2025 (UTC)[reply]

Oppose (Amending ATD-R)

[edit]
  • Oppose. The status quo reflects the nuances that Chipmunkdavis has vocalized. There are also other venues to consider: if the page is a template, WP:TFD would be better. If this is long-stable as a redirect, RfD is a better venue (as I've argued here, for example). -- Tavix (talk) 17:13, 24 January 2025 (UTC)[reply]
    The intent here is to address articles. Obviously TfD is the place to deal with templates and nobody is suggesting otherwise. voorts (talk/contributions) 17:28, 24 January 2025 (UTC)[reply]
    The section in question is about pages, not articles. If the proposed wording is adopted, it would suggest that WP:BLAR'd templates go to AfD. As I explained in the previous discussion, that's part of the reason why the proposed wording is problematic and why it was premature for an RfC on the matter. -- Tavix (talk) 17:35, 24 January 2025 (UTC)[reply]
    As a bit of workshopping, how about changing doing so to articles? -- Tavix (talk) 17:46, 24 January 2025 (UTC)[reply]
    Done. Pinging @Consarn, @Berchanhimez, @Chipmunkdavis, @Thryduulf, @Paine Ellsworth, @Tavix. voorts (talk/contributions) 22:51, 24 January 2025 (UTC)[reply]
    Gentle reminder to editor Voorts: as I'm subscribed to this RfC, there is no need to ping me. That's just an extra unnecessary step. P.I. Ellsworth , ed. put'er there 22:58, 24 January 2025 (UTC)[reply]
    Not everyone subscribes to every discussion. I regularly unsubscribe to RfCs after I !vote. voorts (talk/contributions) 22:59, 24 January 2025 (UTC)[reply]
    I don't. Just saving you some time and extra work. P.I. Ellsworth , ed. put'er there 23:03, 24 January 2025 (UTC)[reply]
    considering the above discussion, my vote hasn't really changed. this does feel incomplete, what with files and templates existing and all that, so that still feels undercooked (and now actively article-centric), hence my suggestion of either naming multiple venues or not naming any consarn (speak evil) (see evil) 23:28, 24 January 2025 (UTC)[reply]
    Agree. I'm beginning to understand those editors who said it was too soon for an RfC on these issues. While I've given this minuscule change my support (and still do), this very short paragraph could definitely be improved with broader guidance for up-and-coming generations. P.I. Ellsworth , ed. put'er there 23:38, 24 January 2025 (UTC)[reply]
    If you re-read the RFCBEFORE discussions, the dispute was over what to do with articles that have been BLARed. That's why this was written that way. I think it's obvious that when there's a dispute over a BLARed article, it should go to AfD, not RfD. I proposed this change because apparently some people don't think that's so obvious. Nobody has or is disputing that BLARed templates should go to TfD, files to FfD, or miscellany to MfD. And none of that needs to be spelled out here per WP:CREEP. voorts (talk/contributions) 00:17, 25 January 2025 (UTC)[reply]
    If you want to be fully inclusive, it could say something like "the appropriate deletion venue for the pre-redirect content" or "...the blanked content" or some such. I personally don't think that's necessary, but don't object if others disagree on that score. (To be explicit, neither the change that was made, nor a change along the lines of my first sentence, changes my support). Thryduulf (talk) 00:26, 25 January 2025 (UTC)[reply]
    Exactly. And my support hasn't changed as well. Goodness, I'm not saying this needs pages and pages of instruction, nor even sentence after sentence. I think us old(er) farts sometimes need to remember that less experienced editors don't necessarily know what we know. I think you've nailed the solution, Thryduulf! The only thing I would add is something short and specific about how RfD is seldom an appropriate venue and why. P.I. Ellsworth , ed. put'er there 00:35, 25 January 2025 (UTC)[reply]
    Done. Sorry if I came in a bit hot there. voorts (talk/contributions) 00:39, 25 January 2025 (UTC)[reply]
    Also, I think something about RfDs generally not being appropriate could replace the current footnote at the end of this paragraph. voorts (talk/contributions) 00:52, 25 January 2025 (UTC)[reply]
    @Voorts: That latest change moves me to the "strong oppose" category. Again, RfD is the proper venue when the status quo is a redirect. -- Tavix (talk) 01:00, 25 January 2025 (UTC)[reply]
    I'm going to back down a bit with an emphasis on the word "preferred". I agree that AfD is the preferred venue, but my main concern is if a redirect gets nominated for deletion at RfD and editors make purely jurisdictional arguments that it should go to AfD because there's article content in its history even though it's blatantly obvious the article content should be deleted. -- Tavix (talk) 01:22, 25 January 2025 (UTC)[reply]
    this is a big part of why incident 91724 could become a case study. "has history, needs afd" took priority over the fact that the history had nothing worth keeping, that the redirect had been stable as a blar for years, and that the folks at rfd (specifically the admins closing or relisting discussions on blars) have had zero issue for ages with blars being nominated and discussed there (with a lot of similar blars nominated around the same time as that one being closed with relatively little fuss, and blars nominated later being closed with no fuss), and at least three other details i'm missing
    as i said before, if a page was blanked relatively recently and someone can argue for there being something worth keeping in it, its own xfd is fine and dandy, but otherwise, it's better to just take it to rfd and leave the headache for them. despite what this may imply, they're no less capable of evaluating article content, be it stashed away in the edit history or proudly displayed in any given redirect's target consarn (speak evil) (see evil) 10:30, 25 January 2025 (UTC)[reply]
    As I've explained time and time again, it's primarily not about the capabilities of editors at RfD; it's about discoverability. When article content is discussed at AfD there are multiple systems in place that mean everybody interested or potentially interested knows that article content is being discussed; the same is not true when article content is discussed at RfD. Time since the BLAR is completely irrelevant. Thryduulf (talk) 10:39, 25 January 2025 (UTC)[reply]
    if you want to argue that watchlists, talk page notifs, and people's xfd logs aren't enough, that's fine by me, but i at best support also having delsort categories for rfd (though there might be some issues when bundling multiple redirects together, though that's nothing twinkle or massxfd can't fix), and at worst disagree because, respectfully, i don't have much evidence or hope of quake 2's biggest fans knowing what a strogg is. maybe quake 4, but its list of strogg was deleted with no issue (not even a relisting). see also quackifier, just under that discussion consarn (speak evil) (see evil) 11:03, 25 January 2025 (UTC)[reply]
    I would think NOTBURO/IAR would apply in those cases. voorts (talk/contributions) 02:41, 25 January 2025 (UTC)[reply]
    I would think that as well, but unfortunately that's not reality far too often. I can see this new wording being more ammo for process wonkery. -- Tavix (talk) 02:49, 25 January 2025 (UTC)[reply]
    Would a footnote clarifying that ameliorate your concerns? voorts (talk/contributions) 02:53, 25 January 2025 (UTC)[reply]
    Unless a note about RfD being appropriate in any cases makes it clear that it is strictly limited to (a) when the content would be speedily deleted if restored, or (b) there has been explicit consensus the content should not be an article (or template or whatever), it would move me into a strong oppose. This is not "process wonkery" but the fundamental spirit of the entire deletion process. Thryduulf (talk) 03:35, 25 January 2025 (UTC)[reply]
    ^Voorts, see what I mean? -- Tavix (talk) 03:43, 25 January 2025 (UTC)[reply]
    "See what I mean": this attitude is exactly why we are here. I've spent literal years explaining why I hold the position I do, and how it aligns with the letter and spirit of pretty much every relevant policy and guideline. It shouldn't even be controversial for "blatantly obvious the article content should be deleted" to mean "would be speedily deletable if restored", yet on this again a single-digit number of editors have spent years arguing that they know better. Thryduulf (talk) 03:56, 25 January 2025 (UTC)[reply]
    both sides are on single digits at the time of writing this, we just need 3 more supports to make it 10 lol
    ultimately, this has its own caveat(s). namely, with the csd not covering every possible scenario. regardless of whether or not it's intentional, it's not hard to look at something and go "this ain't it, chief". following this "process" to the letter would just add more steps to that, by restoring anything that doesn't explicitly fit a csd and dictating that it has to go to afd so it can get the boot there for the exact same reason consarn (speak evil) (see evil) 10:51, 25 January 2025 (UTC)[reply]
    Thanks. That alleviates my concerns. -- Tavix (talk) 23:45, 24 January 2025 (UTC)[reply]
  • oppose, though with the note that i support a different flavor of change. on top of the status quo issue pointed out by tavix (which i think we might need to set a period of time for, like a month or something), there's also the issue of the article content in question. if it's just unsourced, promotional, in-universe, and/or any other kind of fluff or cruft or whatever else, i see no need to worry about the content, as it's not worth keeping anyway (really, it might be better to just create a new article from scratch). if a blar, which has been stable as a redirect, did have sources, and those sources were considered reliable, then i believe restoring and sending to afd would be a viable option (see purple francis for an example). outside of that, i think if the blar is reverted early enough, afd would be the better option, but if not, then it'd be rfd
    for this reason, i'd rather have multiple venues named ("Suitable venues include Articles for Deletion, Redirects for Discussion, and Templates for Discussion"), no specific venue at all ("The dispute should be resolved in a fitting discussion venue"), or conditions for each venue (for which i won't suggest a wording because of the aforementioned status quo time issue) consarn (speak evil) (see evil) 17:50, 24 January 2025 (UTC)[reply]
  • Oppose. The proper initial venue for discussing this should be the talk page; only if agreement can't be reached informally there should it proceed to AfD. Espresso Addict (talk) 16:14, 27 January 2025 (UTC)[reply]
  • Oppose as written, to capture some nuances; there may be a situation where you want a BLAR to remain a redirect, but would rather retarget it. I can't imagine the solution there is to reverse the BLAR and discuss the different redirect location at AfD. Besides that, I think the intention is otherwise solid, as long as it's consistent in practice. Moving forward it would likely lead to many reversals of content BLAR'd 15+ years ago, but perhaps that's the intention; maybe only reverse the BLAR if you're seeking deletion of the page, at which point AfD becomes preferable? Article deletion to be left to AfD at that point? Utopes (talk / cont) 20:55, 27 January 2025 (UTC), moving to support, my concerns have been resolved and I'm happy to use AfD as a primary venue for discussing article content. Utopes (talk / cont) 22:29, 29 January 2025 (UTC)[reply]

Discussion (Amending ATD-R)

[edit]
  • not entirely sure i should vote, but i should probably mention this discussion in wt:redirect that preceded the one about atd-r, and i do think this rfc should affect that as well, but wouldn't be surprised if it required another one consarn (speak evil) (see evil) 12:38, 24 January 2025 (UTC)[reply]
  • I know it's not really in the scope of this discussion but to be perfectly honest, I'm not sure why BLAR is still a thing. It's a cliche, but it's a hidden mechanism for backdoor deletion that often causes arguments and edit wars. I think AfDs and talk-page merge proposals where consensus-building exists produce much better results. It makes sense for duplicate articles, but that is covered by A10's redirection clause. J947edits 03:23, 25 January 2025 (UTC)[reply]
    BLARs are perfectly fine when uncontroversial, duplicate articles are one example but bold merges are another (which A10 doesn't cover). Thryduulf (talk) 03:29, 25 January 2025 (UTC)[reply]
    It is my impression that BLARs often occur without intention of an accompanying merge. J947edits 03:35, 25 January 2025 (UTC)[reply]
    Yes because sometimes there's nothing to merge. voorts (talk/contributions) 16:01, 25 January 2025 (UTC)[reply]
    I didn't say, or intend to imply, that every BLAR is related to a merge. The best ones are generally where the target article covers the topic explicitly, either because content is merged, written or already exists. The worst ones are where the target is of little to no (obvious) relevance, contains no (obviously) relevant content and none is added. Obviously there are also ones that lie between the extremes. Any can be controversial, any can be uncontroversial. Thryduulf (talk) 18:20, 25 January 2025 (UTC)[reply]
    BLARs are preferable to deletion for content that is simply non-notable and does not run afoul of other G10/11/12-type issues. Jclemens (talk) 04:46, 25 January 2025 (UTC)[reply]
  • I'm happy to align to whatever consensus decides, but I'd like to discuss the implications because that aspect is not too clear to me. Does this mean that any time a redirect contains any history and deletion is sought, it should be restored and go to AfD? Currently there are some far-future redirects with ancient history; how would this amendment affect such titles? Utopes (talk / cont) 09:00, 29 January 2025 (UTC)[reply]
    see why i wanted that left to editor discretion (status quo, evaluation, chance of an rm or histmerge, etc.)? i trust in editors who aren't that wonk from rfd (cogsan? cornsam?) to see a pile of unsourced cruft tucked away in the history and go "i don't think this would get any keep votes in afd" consarn (speak evil) (see evil) 11:07, 29 January 2025 (UTC)[reply]
    No. This is about contested BLARs, not articles that were long ago BLARed where someone thinks the redirect should be deleted. voorts (talk/contributions) 12:42, 29 January 2025 (UTC)[reply]
    then it might depend. is its status as a blar the part that is being contested? if the title is being contested (hopefully assuming the pre-blar content is fine), would "move" be a fitting outcome outside of rm? is it being contested solely over meta-procedural stuff, as opposed to actually supporting or opposing its content? why are boots shaped like italy? was it stable as a redirect at the time of contest or not? does this account for its status as a blar being contested in an xfd venue (be it for restoring or blanking again)? it's a lot of questions i feel the current wording doesn't answer, when it very likely should. granted, what i suggested isn't much better, but shh
    going back to that one rfd i keep begrudgingly bringing up (i kinda hate it, but it's genuinely really useful), if this wording is interpreted literally, the blar was contested a few years prior and should thus be restored, regardless of the rationales being less than serviceable ("i worked hard on this" one time and... no reason the other), the pre-blar content being complete fancruft, and no one actually supporting the content in rfd consarn (speak evil) (see evil) 13:54, 29 January 2025 (UTC)[reply]
    Well that case you keep citing worked out as a NOTBURO situation, which this clarification would not override. There are obviously edge cases that not every policy is going to capture. IAR is a catch-all exception to every single policy on Wikipedia. The reason we have so much scope creep in PAGs is because editors insist on every exception being enumerated. voorts (talk/contributions) 14:51, 29 January 2025 (UTC)[reply]
    if an outcome (blar status is disputed in rfd, is closed as delete anyway) is common enough, i feel the situation goes from "iar good" to "rules not good", at which point i'd rather have the rules adapt. among other things, this is why i want a slightly more concrete time frame to establish a status quo (while i did suggest a month, that could also be too short), so that blars that aren't blatantly worth or not worth restoring after said time frame (for xfd or otherwise) won't be as much of a headache to deal with. of course, in cases where their usefulness or lack thereof isn't blatant, then i believe a discussion in its talk page or an xfd venue that isn't rfd would be the best option consarn (speak evil) (see evil) 17:05, 29 January 2025 (UTC)[reply]
    I think the idea that that redirect you mentioned had to go to AfD was incorrect. The issue was whether the redirect was appropriate, not whether the old article content should be kept. voorts (talk/contributions) 17:41, 29 January 2025 (UTC)[reply]
    sure took almost 2 months to get that sorted out lol consarn (speak evil) (see evil) 17:43, 29 January 2025 (UTC)[reply]
    Bad facts make bad law, as attorneys like to say. voorts (talk/contributions) 17:45, 29 January 2025 (UTC)[reply]
    Alright. @Voorts: in that case I think I agree. I.e., if somebody BLAR's a page, the best avenue to discuss its merits for inclusion on Wikipedia would be a place like AfD, where it is treated as the article it used to be, as the right eyes for content deletion will be present at AfD. To that end, this clarification is likely a good change to highlight this fact. I think where I might be struggling is the definition of "contesting a BLAR" and what that might look like in practice. To me, "deleting a long-BLAR'd redirect" is basically the same as "contesting the BLAR", I think?
    An example I'll go ahead and grab is 1900 Lincoln Blue Tigers football team from cat:raw. This is not a great redirect pointed at Lincoln Blue Tigers from my POV, and I'd like to see it resolved at some venue, if not resolved boldly. This page was BLAR'd in 2024, and I'll go ahead and notify Curb Safe Charmer who BLAR'd it. I think I'm inclined to undo the BLAR, not because I think the 1900 season is particularly notable, but because redirecting the 1900 season to the page about the Lincoln Blue Tigers doesn't really do much for the people who want to read about the 1900 season specifically. (Any other day I would do this boldly, but I want to seek clarification).
    But let's say this page was BLAR'd in 2004, as a longstanding redirect for 20 years. I think it's fair to say that as a redirect, this should be deleted. But this page has history as an article. So unless my interpretation is off, wouldn't the act of deleting a historied redirect that was long ago BLAR'd be equivalent to contesting the BLAR that turned the page into a redirect in the first place, regardless of the year? Utopes (talk / cont) 20:27, 29 January 2025 (UTC)[reply]
    I don't think so. In 2025, you're contesting that it's a good redirect from 2004, not contesting the removal of article content. If somebody actually thought the article should exist, that's one thing, but procedural objections based on RfD being an improper forum without actually thinking the subject needs an article is the kind of insistence on needless bureaucracy that NOTBURO is designed to address. voorts (talk/contributions) 20:59, 29 January 2025 (UTC)[reply]
    I see, thank you. WP:NOTBURO is absolutely vital to keep the cogs rolling, lol. Very often at RfD, there will be a "page with history" that holds up the process, all for the discussion to close with "restore and take to AfD". Cutting out the middle step and just restoring article content, without bothering with an RfD to say "restore and take to AfD", would make the process and all workflows a lot smoother. @Voorts:, from your own point of view: I'm very interested in doing something about 1900 Lincoln Blue Tigers football team, specifically, to remove a redirect from being at this title (I have no opinion as to whether or not an article should exist here instead). Given that I want to remove this redirect, do you think I should take it to RfD as the correct venue to get rid of it? (Personally speaking, I think undoing the BLAR is a lot more simple and painless, especially as I don't have a strong opinion on article removal, but if I absolutely didn't want an article here, would RfD still be the venue?) Utopes (talk / cont) 21:10, 29 January 2025 (UTC)[reply]
    I would take that to RfD. If the editor who created the article or someone else reversed the BLAR, I'd bring it to AfD. voorts (talk/contributions) 21:16, 29 January 2025 (UTC)[reply]
    Alright. I think we're getting somewhere. I feel like some editors may consider it problematic to delete a recently BLAR'd article at RfD under any circumstance. Like if Person A BLAR's a brand new article, and Person B takes it to RfD because they disagree with the existence of a redirect at the title and it gets deleted, then this could be considered a "bypassal of the AfD process". Whether or not it is or isn't, people have cited NOTBURO for deleting it. I was under the impression this proposal was trying to eliminate this outcome, i.e. to make sure that all pages with articles in their history should be discussed at AfD on their merits as articles instead of anywhere else. I've nommed redirects where people have said "take to AfD", and I've nommed articles where people have said "take to RfD". I've never had an AfD close as "wrong venue", but I've seen countless RfDs close in this way for any amount of history, regardless of the validity of there being a full-blown article at this title, only to be restored and unanimously deleted at AfD. I have a feeling 1900 Lincoln Blue Tigers football team would close in the same way, which is why I ask; it seems to me that restoring the article would just cut a lot of red tape if the page is going to end up at AfD eventually. Utopes (talk / cont) 21:36, 29 January 2025 (UTC)[reply]
    I think the paragraph under discussion here doesn't really speak to what should happen in the kind of scenario you're describing. The paragraph talks about "the change" (i.e., the blanking and redirecting) being "disputed", not about what happens when someone thinks a redirect ought not to exist. I agree with you that that's needless formalism/bureaucracy, but I think that changing the appropriate venue for those kinds of redirects would need a separate discussion. voorts (talk/contributions) 21:42, 29 January 2025 (UTC)[reply]
Fair enough, yeah. I'm just looking at the definition of "disputing/contesting a BLAR". For this situation, I think it could be reasoned that I am "disputing" the "conversion of this article into a redirect". Now, I don't really have a strong opinion on whether an article should or shouldn't exist, but because I don't think a redirect should be at this title in either situation, I feel like "dispute" of the edit might still be accurate? Even if it's not for the usual reason that most BLARs get disputed 😅. I just don't think BLAR'ing into a page where a particular season is not discussed is a great change. That's what I meant about "saying a redirect ought not to exist" possibly being equivalent to "disputing/disagreeing with the edit that turned this into a redirect to begin with". And if those things are equivalent, then would that make AfD the right location to discuss the history of this page as an article? That was where I was coming from; hopefully that makes sense lol. If it needs a separate discussion I can totally understand that as well. Utopes (talk / cont) 21:57, 29 January 2025 (UTC)[reply]
In the 1900 Blue Tigers case and others like it where you think that it should not be a redirect but have no opinion about the existence or otherwise of an article then simply restore the article. Making sure it's tagged for any relevant WikiProjects is a bonus but not essential. If someone disputes your action then a talk page discussion or AfD is the correct course of action for them to take. If they think the title should be a red link then AfD is the only correct venue. Thryduulf (talk) 22:08, 29 January 2025 (UTC)[reply]
Alright, thank you Thryduulf. That was kind of the vibe I was leaning towards as well, as AfD would be able to determine the merits of the page's existence as a subject matter. This all comes together because not too long ago I was criticized for restoring a page that contained an article in its history. In this discussion for Wikipedia:Articles for deletion/List of cultural icons of Canada, I received the following message regarding my BLAR-reversal: For the record, it's really quite silly and unnecessary to revert an ancient redirect from 2011 back into a bad article that existed for all of a day before being redirected, just so that you can force it through an AFD discussion — we also have the RFD process for unnecessary redirects, so why wasn't this just taken there instead of being "restored" into an article that the restorer wants immediately deleted? I feel like this is partially comparable to 1900 Lincoln Blue Tigers football team, as both of these existed for approx a day before the BLAR, but if restoring a 2024 article is necessary per Thryduulf while restoring a 2011 article is silly per Bearcat, I'm glad that this has the potential to be ironed out via this RfC, possibly. Utopes (talk / cont) 22:18, 29 January 2025 (UTC)[reply]
There are exactly two situations where an AfD is not required to delete article content:
  1. The content meets one or more criteria for speedy deletion
  2. The content is eligible to be PRODed
Bearcat's comment is simply wrong - RfD is not the correct venue for deleting article content, regardless of how old it is. Thryduulf (talk) 22:25, 29 January 2025 (UTC)[reply]
Understood. I'll keep that in mind for my future editing, and I'll move from the oppose to the support section of this RfC. Thank you for confirmation regarding these situations! Cheers, Utopes (talk / cont) 22:28, 29 January 2025 (UTC)[reply]
@Utopes: Note that this is simply Thryduulf's opinion and is not supported by policy (despite his vague waves to the contrary). Any redirect that has consensus to delete at RfD can be deleted. I see that you supported deletion of the redirect at Wikipedia:Redirects for discussion/Log/2024 September 17#List of Strogg in Quake II. Are you now saying that should have procedurally gone to AfD even though it was blatantly obvious that the article content is not suitable for Wikipedia? -- Tavix (talk) 22:36, 29 January 2025 (UTC)[reply]
I'm saying that AfD probably would have been the right location to discuss it at. Of course NOTBURO applies and it would've been deleted regardless, really, but if someone could go back in time, bringing that page to AfD instead of RfD seems like it would have been more of an ideal outcome. I would've !voted delete on either venue. Utopes (talk / cont) 22:39, 29 January 2025 (UTC)[reply]
@Utopes: Note that Tavix's comments are, despite their assertions to the contrary, only their opinion. It is notable that not once in the literal years of discussions, including this one, have they managed to show any policy that backs up this opinion. Content that is blatantly unsuitable for Wikipedia can be speedily deleted, everything that can't be is not blatantly unsuitable. Thryduulf (talk) 22:52, 29 January 2025 (UTC)[reply]
Here you go. Speedy deletion is a process that provides administrators with broad consensus to bypass deletion discussion, at their discretion. RfD is a deletion discussion venue for redirects, so it doesn't require speedy deletion for something that is a redirect to be deleted via RfD. Utopes recognizes there is a difference between "all redirects that have non-speediable article content must be restored and discussed at AfD" and "AfD is the preferred venue for pages with article content", so I'm satisfied with their response to my inquiry. -- Tavix (talk) 23:22, 29 January 2025 (UTC)[reply]
Quoting yourself in a discussion about policy does not show that your opinion is consistent with policy. Taking multiple different bits of policy and multiple separate facts, putting them all in a pot and claiming the result shows your opinion is supported by policy didn't do that in the discussion you quoted and doesn't do so now. You have correctly quoted what CSD is and what RfD is, but what you haven't done is acknowledge that when a BLARed article is nominated for deletion it is article content that will be deleted, and that article content nominated for deletion is discussed at AfD, not RfD. Thryduulf (talk) 02:40, 30 January 2025 (UTC)[reply]

Question About No Quorum Redirect

[edit]

I am confident that I can get a knowledgeable answer here quickly. There is a Deletion Review in progress, where the AFD was held in December 2023, and no one participated except the nominator, even after two relists. The closer then closed it as a Redirect, which was consistent with what the nominator had written. In Deletion Review, the appellant is saying that the article should be restored. I understand that in the case of a soft delete, the article should be restored to user or draft space on request, but in this case, the article is already present in the history. So: Does the appellant have a right to have the article restored, or should they submit it to AFC for review, or what? I don't care, but the appellant does care (of course). Robert McClenon (talk) 20:44, 24 January 2025 (UTC)[reply]

Without a second participant, an uncontested AfD is not a discussion and so there is no mandated outcome and the redirect in question can be undone by any editor in good standing, and can be then taken to AfD again by any editor objecting to it. Draft isn't typically mandated in policies, because it's a relatively new invention compared to our deletion policies and isn't referenced everywhere it might be relevant or helpful to specify. Jclemens (talk) 07:04, 25 January 2025 (UTC)[reply]
Thank you, User:Jclemens. Is there an uninvolved opinion also? Robert McClenon (talk) 18:16, 25 January 2025 (UTC)[reply]
Uninvolved opinion: While I agree with Jclemens that the DR appellant can simply revert the redirect within policy, I have not looked at this specific article and it likely makes more sense to restore to draftspace. I believe the appellant can do this themselves and does not need to go through a DR to copy the contents of the article from its history to draftspace. Alternatively, they can revert the BLAR and move to draftspace. The only difference is that if/when the article is moved back from draft to mainspace, a histmerge might be needed. Toadspike [Talk] 11:56, 26 January 2025 (UTC)[reply]
WP:NOQUORUM indicates that such a close should be treated as an expired WP:PROD, which states that restoration of prodded pages can be done via admin or via Requests for undeletion - there's no identified expectation/suggestion that prods should go to DRV. WP:SOFTDELETE states that such a deleted article "can be restored for any reason on request", ie: restoration to mainspace is an expected possibility. It also states that redirection is an option since BLAR can be used by any editor if there are no objections. Putting those together, it's reasonable for a restoration from redirect to be treated as a belated objection, and this can be done by any editor without seeking permission (though it would be nice if valid issues identified in the original AFD were fixed as part of the restoration to avoid a second AFD). ~Hydronium~Hydroxide~(Talk)~ 12:08, 26 January 2025 (UTC)[reply]

Psychological research

[edit]

In recent years, psychological research on social media users, and its undesirable side effects, has been discussed and criticized. Is there a specific policy on Wikipedia to protect users from covert psychological research? Arbabi second (talk) 00:22, 25 January 2025 (UTC)[reply]

For starters, try Wikipedia is not a laboratory and WP:Ethically researching Wikipedia. Robert McClenon (talk) 01:01, 25 January 2025 (UTC)[reply]
@Robert McClenon
That was helpful, thank you. Arbabi second (talk) 03:34, 26 January 2025 (UTC)[reply]
There are similarities and differences. With most social media, a corporation sets up a site to attract a community. The corporation wants to sell advertising to community members and gather the community members' personal data. The site doesn't have any other purpose. Wikipedia, on the other hand, has a clear purpose: we want to write an encyclopaedia together. Community members' personal data is not collected, except to the extent that we choose to share that data on our own userpages or by way of our contributions. Advertising is not sold, or at least, not by the WMF; some Wikipedians do try to sell pages to commercial interests but that's frowned upon.
We do rely on some of the legal protections meant for social media sites, which is important for a legal case currently in progress in India.
The fact that community members' personal data isn't collected, and isn't verified where someone does provide it, means that it's really hard to carry out many kinds of psychological research because you don't have enough information about the Wikipedians involved. Some Wikipedians have more than one account (legitimately or otherwise); some accounts are shared (always illegitimately). All a psychologist can really do is analyze Wikipedians as a group, and even then, people writing an encyclopaedia have modified their behaviour (hopefully towards encyclopaedia-writing) compared to how they'd behave on a regular social media site.
How could you devise a valid piece of research that targets particular users?—S Marshall T/C 16:38, 1 February 2025 (UTC)[reply]
@S Marshall
I agree with you. You misunderstood me, and it's not your fault. I meant more to protect potential victims of Wikipedia's Breaching experiment.
I am not the author of this essay. I wrote my opinion on the article's talk page here Arbabi second (talk) 04:59, 2 February 2025 (UTC)[reply]
Which breaching experiment specifically?—S Marshall T/C 08:58, 2 February 2025 (UTC)[reply]
@S Marshall
Sorry. I prefer to keep my suspicions to myself for now. As much as it is possible that such a thing exists. I'm not talking about random playfulness by a new user. I'm talking about calculated, organized activity. But my specific question is, what measures does Wikipedia have in place to deal with this kind of harmful activity? Arbabi second (talk) 11:38, 2 February 2025 (UTC)[reply]
I remember a long time ago when "research" like this was performed on Usenet. The remedy now is as it was then. P.I. Ellsworth , ed. put'er there 12:28, 2 February 2025 (UTC)[reply]
Non-specific suspicions, Arbabi second? Are you concern trolling?—S Marshall T/C 16:08, 2 February 2025 (UTC)[reply]
This question inspires more questions:
  • How do you differentiate between "psychological" and "non-psychological" research?
  • What's the standard for "covert"? "Unknown to everyone"? "Something I don't remember agreeing to"?
  • Is a covert study more likely to be harmful? Is disclosed/non-covert research more likely to be harmless? (Generally, the potential for harm is a reason given for disclosing it and requiring informed consent, so it seems likely to be the other way around.)
  • Who do you think would be doing this research?
  • Do you think that a document saying "Covert psychological research is naughty" would stop them?
  • What kind of research do you think they would do on wiki? How do you imagine that harming people?
  • Do you think that an A/B software test is "psychological research"?
WhatamIdoing (talk) 00:06, 2 February 2025 (UTC)[reply]
@WhatamIdoing
I am not the author of this essay. I wrote my opinion on the article's talk page here. But regarding your questions, I should say that I was initially referring to an organized Breaching experiment. Arbabi second (talk) 04:34, 2 February 2025 (UTC)[reply]
I'm only familiar with the regulations for research involving human subjects in the US. If you want to know relevant US law, you might start with this FAQ, especially the first section. Observational research on WP using public data (such as how articles change over time through edits, or what editors say on talk pages) is allowed and does not require informed consent. However, research like a "breaching experiment" that you referred to above would require informed consent. As Robert McClenon pointed you to, WP's policy is also that "research projects that are disruptive to the community or which negatively affect articles—even temporarily—are not allowed." Below, you say "Who would conduct this research? Universities, tech companies, and independent researchers may conduct psychological studies, sometimes without users’ awareness." I don't know how tech companies handle research involving human subjects, but universities have institutional review boards (IRBs) and do not allow research without consent except in situations that are exempt by law (e.g., "the observation of public behavior when the investigator(s) do not participate in the activities being observed"). Could someone nonetheless covertly start a breaching experiment involving editors? I don't see how to prevent it, though blocking policies would likely interrupt it. FactOrOpinion (talk) 17:23, 2 February 2025 (UTC)[reply]
@FactOrOpinion
Thank you for your attention and information. The topic of human research is important but very broad. I am most interested in Wikipedia's rules on this matter. My intention is to find and translate these rules into Persian. For example: that WP policy is that "research projects that are disruptive to society or negatively impact articles - even temporarily - are not allowed." On which page is it? If possible, please link to that page. Arbabi second (talk) 20:23, 2 February 2025 (UTC)[reply]
@Robert McClenon@S Marshall@WhatamIdoing
Difference between psychological and non-psychological research
Psychological research focuses on behavior, emotions, cognition, and social interactions, while non-psychological research may study technical data, usage patterns, or system efficiency.
Standard for "covert"
Covert research typically means a study conducted without participants’ knowledge or informed consent. Forgetting prior consent is different from not being informed at all.
Is covert research more likely to be harmful?
Yes, because lack of informed consent can lead to ethical concerns or psychological harm. Open research is usually subject to ethical oversight.
Who would conduct this research?
Universities, tech companies, and independent researchers may conduct psychological studies, sometimes without users’ awareness.
Would a policy against covert research be effective?
A formal policy could discourage such research, but enforcement and oversight would be necessary to prevent violations.
What kind of research could be done on Wikipedia?
Studies on user behavior, editing patterns, social interactions, and how information influences decision-making.
Is an A/B software test considered psychological research?
It depends. If it only tests interface improvements, then no. But if it examines users' perceptions, emotions, or behaviors without their knowledge, then it could be psychological research. Arbabi second (talk) 12:23, 2 February 2025 (UTC)[reply]

RfC: Should the explanation of “self-published” in WP:SPS be revised?

[edit]

 You are invited to join the discussion at Wikipedia_talk:Verifiability/SPS_RfC. FactOrOpinion (talk) 21:17, 26 January 2025 (UTC)[reply]

Loose Restrictions on Free Speech

[edit]

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


I believe Wikipedians should be able to hold right wing political opinions without huge discrimination against them. The site's policies are very much left wing and, due to that, Wikipedia should be more open to right wing opinions. The limits on what is allowed to be said here should be loosened. We should not listen to the 0.01% of people who are offended, otherwise Wikipedia would be an oligarchy. SimpleSubCubicGraph (talk) 04:01, 27 January 2025 (UTC)[reply]

I think people misunderstood what I meant here. I am not trying to promote an anarchist wikipedia, I am trying to allow more speech but not make Wikipedia a free speech forum (despite the name) I am trying to remove certain limitations that censor right wing opinions. SimpleSubCubicGraph (talk) 05:50, 27 January 2025 (UTC)[reply]
I did change my suggestion but the main point for this suggestion is that right wing opinions are discriminated against and censored on Wikipedia. This violates NPOV as left wing opinions are accepted but right wing opinions are not. SimpleSubCubicGraph (talk) 05:53, 27 January 2025 (UTC)[reply]
This is just disruptive at this point. EvergreenFir (talk) 06:01, 27 January 2025 (UTC)[reply]
@EvergreenFir I'm not trying to be disruptive. I read over Wikipedia's policies, I see a left wing bias in there that prevents religious and right wing people from expressing their opinions, and I'm trying to fix that. SimpleSubCubicGraph (talk) 06:16, 27 January 2025 (UTC)[reply]
We're not a venue meant to empower you (or anyone) in expressing your opinions. Remsense ‥  07:05, 27 January 2025 (UTC)[reply]
A distinction without a difference. We do not embrace free speech for its own sake, but to the degree it fosters building an encyclopedia. That is, explicitly, the point. Remsense ‥  07:03, 27 January 2025 (UTC)[reply]
@SimpleSubCubicGraph: I think a policy proposal needs to be much more concrete than what you've said here. Could you give a specific example of a "left-wing policy" Wikipedia currently has, and how you think it should be changed? jlwoodwa (talk) 06:02, 27 January 2025 (UTC)[reply]
@Jlwoodwa the ones on pronouns and incivility. It's very left wing to me and it goes against my morals and religion, and that's why I just want a site that is moderate, not liberal leaning. SimpleSubCubicGraph (talk) 06:14, 27 January 2025 (UTC)[reply]
That's still not a policy proposal. Here's an example of what I think meets the minimum level of concreteness:

I don't like how the Wikipedia:Article titles policy says to use common names. I think official names should always be used when they exist.

Do you see what I mean? jlwoodwa (talk) 06:34, 27 January 2025 (UTC)[reply]
Wikipedia's policies and guidelines are a product of community consensus. Everyone is free to make proposals and attempt to establish a new consensus. —Bagumba (talk) 08:57, 27 January 2025 (UTC)[reply]
It is entirely possible to be right-leaning and civil, just as it is possible to be left-leaning and uncivil. Incivility is fairly universally condemned as unproductive and unprofessional, and I would much prefer the former over the latter. Also, keep in mind that your definition of "liberal" as "left-wing" is not how most of the world uses that word. Based on your description of your beliefs, you sound like a liberal to me. Toadspike [Talk] 09:07, 27 January 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

The role of ChatGPT in Wikipedia

[edit]

Does ChatGPT play a role in Wikipedia's editorial and administrative affairs? To what extent is this role? If there is a policy, history, or notable case in this regard, please link to it. Arbabi second (talk) 17:29, 28 January 2025 (UTC)[reply]

This is not the right venue to post this topic; a better place would be the Teahouse. Regardless, WP:CHATGPT is a good starting point to learn about this. For the policy on using it in articles, see WP:RSPCHATGPT. Hope this helps! The 🏎 Corvette 🏍 ZR1(The Garage) 18:28, 28 January 2025 (UTC)[reply]
Not policy, guideline-ish. Gråbergs Gråa Sång (talk) 18:56, 28 January 2025 (UTC)[reply]
I agree the policy village pump isn't the right place to discuss general questions on ChatGPT's usage on Wikipedia, but just in case anyone's interested, there's a study interviewing Wikipedians about their LLM usage which I think should shed some light on how users here are currently using ChatGPT and the like. Photos of Japan (talk) 18:46, 28 January 2025 (UTC)[reply]
@Gråbergs Gråa Sång@Photos of Japan@The Corvette ZR1
It was very useful information but unfortunately not enough. Thank you anyway. Arbabi second (talk) 20:29, 28 January 2025 (UTC)[reply]
We aren't allowed to sign things created by others with our user name. I think using AI-generated content without explicit disclosure should fall under that, either in discussion or article space. Graywalls (talk) 07:19, 31 January 2025 (UTC)[reply]
If you're interested, we also have WP:WikiProject AI Cleanup/Resources that has a list of relevant resources and discussions about that topic! (And an archive of the project's discussions at WP:AINB) Chaotic Enby (talk · contribs) 11:15, 31 January 2025 (UTC)[reply]

Policy on use of interactive image maps

[edit]

There appears to be a slight conflict between MOS:ACCESSIBILITY and MOS:ICONS. The former says:

Do not use techniques that require interaction to provide information, such as tooltips or any other "hover" text. Abbreviations are exempt from these requirements, so the {{abbr}} template (a wrapper for the <abbr> element) may be used to indicate the long form of an abbreviation (including an acronym or initialism).

And makes ample reference to ensuring accessibility for screen readers. The latter says

Image maps should specify alt text for the main image and for each clickable area; see Image maps and {{English official language clickable map}} for examples.

And the linked template no longer uses an interactive image map; I'm uncertain whether that resulted from a single editor's change or a wider discussion. This feels like one of those small places where policy may have evolved, but as image maps are used so rarely there doesn't seem to be extremely clear guidance here. Good examples of this in action are Declaration of Independence (painting) and the monstrosity at Gale (crater)#Interactive_Mars_map. I'd personally interpret MOS:ACCESSIBILITY as dissuading image maps entirely, but that doesn't appear to be a clear policy directive. Warrenᚋᚐᚊᚔ 09:21, 29 January 2025 (UTC)[reply]
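(For reference, a minimal sketch of the markup MOS:ICONS appears to have in mind, using the standard ImageMap extension syntax; the file name, coordinates and labels below are placeholder assumptions, not an existing map:)

  <imagemap>
  # Image line: same parameters as [[File:...]], including alt text for the map as a whole
  File:Example regions map.png|400px|alt=Overview map of the three example regions
  # Area lines: shape, coordinates relative to the full-size image, then the target link;
  # the label after the pipe is what keyboard and screen-reader users should get for that clickable area
  rect 0 0 200 300 [[Region A|Region A (western half of the map)]]
  rect 200 0 400 300 [[Region B|Region B (eastern half of the map)]]
  default [[Example regions|Overview of the example regions]]
  desc bottom-left
  </imagemap>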

Is there any relevant distinction to be made here on which kind of device a user chooses to employ? Thanks. Martinevans123 (talk) 11:22, 29 January 2025 (UTC)[reply]
I can't imagine there isn't a policy somewhere that's basically "Don't break the mobile browsing experience". The problem with imagemaps is they don't scale nicely to different-sized devices; at some point there's a need for the size to stay fixed so the links map appropriately. This is why I sort of feel there may be a policy gap here, since several things would imply "don't use imagemaps" but we also have explicit guidance on how to use them. Warrenᚋᚐᚊᚔ 11:26, 29 January 2025 (UTC)[reply]
Ah thanks. So editors/ readers who habitually use only desktop or laptop devices may not ever realise there's a problem? Martinevans123 (talk) 11:31, 29 January 2025 (UTC)[reply]
Or even readers who use more recent phones. It's easy to forget that a high end iPhone/Android device may have a much higher resolution screen than the vast majority of phones globally. Even if it renders properly, the individual click points in an imagemap can get so compressed that they're not interactable. This puts us in a situation of populating articles with navigational elements that can only be utilized a: on desktop and b: by sighted users. Warrenᚋᚐᚊᚔ 11:34, 29 January 2025 (UTC)[reply]
The mobile interface is different. Would it be better to simply disable those kinds of images for mobile users (and maybe replace them with some kind of advice/apology), instead of taking them away for all users? Perhaps that's too difficult. Thanks. Martinevans123 (talk) 17:03, 29 January 2025 (UTC)[reply]
There's a way to do that with Template:If mobile, but that's apparently deprecated, so it seems like this policy overrides it, which seems like an even further call to avoid using imagemaps (without being exactly clear enough to be a policy guideline on imagemaps). Warrenᚋᚐᚊᚔ 17:12, 29 January 2025 (UTC)[reply]
It's possible to navigate between imagemap links using the Tab key. Hence it seems likely they are rendered by screen readers as if they are a sequence of image links, explaining why MOS:ICONS requires alt text to be specified for each clickable area. I suspect MOS:NOTOOLTIPS is intended to apply to mouse-only interactive elements such as tooltips, rather than Tab-interactive elements such as wikilinks and image links, in which case the two policies are mutually consistent.
Your point about mobile browsers is a good one. WMF's Principal Web Software Engineer briefly commented on this back in 2017 in T181756 and Template talk:Features and memorials on Mars#c-Jon_(WMF)-2017-11-30T22:50:00.000Z-Template not mobile friendly, suggesting wrapping the element [image map] with a scrollable container so as not to break mobile readability. Another possible approach would be to add custom CSS via WP:TemplateStyles. Template:Calculator#Best practices suggests [using] media queries with template styles to adjust the display depending on how wide the screen is, though to pursue this option, I think we'd need to call in someone with more expertise than me. Preimage (talk) 14:14, 31 January 2025 (UTC)[reply]
The problem with that, though, is if you appropriately scale an imagemap with more than a couple of clickable elements for mobile screens, you've basically rendered it unusable just by virtue of the size of fingers and screens. Not sure that's a policy problem, but it is a problem. Warrenᚋᚐᚊᚔ 14:17, 31 January 2025 (UTC)[reply]

Adding the undisclosed use of AI to post a wall of text into discussions as disruptive editing

[edit]

I think participating in discussion processes like AfD and consensus building by flooding them with a "wall of text" response generated by AI should be added to disruptive editing. Those kinds of responses are generated quickly with low effort in AI, but consume considerable time to read and process. Graywalls (talk) 07:22, 31 January 2025 (UTC)[reply]

Courtesy link to the above section § Should WP:Demonstrate good faith include mention of AI-generated comments? where a similar proposal is being discussed. Chaotic Enby (talk · contribs) 11:28, 31 January 2025 (UTC)[reply]
Use a chatbot to generate a reply; that way the two AIs can just talk with each other and it doesn't waste editors' time. -- LCU ActivelyDisinterested «@» °∆t° 14:51, 31 January 2025 (UTC)[reply]
Flooding discussions with walls of text is disruptive regardless of whether the walls are AI-generated or human-written. Equally-sized walls of equal relevance to the discussion are equally disruptive regardless of the method used to write them. Thryduulf (talk) 06:54, 1 February 2025 (UTC)[reply]
I agree that equal-sized text of equal quality is equally disruptive (that's tautologically true); however, there's a difference in the effort required to create them. Human-generated walls of disruptive text are limited by the time and effort the disruptive human is willing to put in, which means that as long as our community continues to maintain a sufficiently healthy proportion of constructive vs disruptive editors, the problem is manageable. The problem with LLM wallspam is that it takes effort to process and respond to disruptive discussion, and a very small number of disruptive editors using LLMs can consume a very large amount of human bandwidth dealing with them. AI doesn't help the constructive response much, since even if you are using AI to summarize the disruptive text wall, and AI to craft your response to the disruptive text wall, you still need to put in the effort to internalize the content of the wall of text, decide if it is disruptive, and then craft a prompt for your own LLM to use in the response. All of which results in a situation where it takes a disproportionate amount of effort to respond to the disruption compared to the effort it takes to produce it. If LLM use is disclosed that would help the issue, but personally I would prefer that the other person I am communicating with simply send me the prompt they put into their LLM and let me use the LLM to elaborate and clarify it myself if I feel that is helpful. -- LWG talk 05:25, 2 February 2025 (UTC)[reply]
How much time and effort the author puts into writing a text wall is irrelevant, what matters is how much time and effort it takes other people to read it. It makes absolutely no difference to this whether it was written by a human or an AI. If someone writes text walls very quickly, they will simply get to the point where people advise them about it (and take action if necessary) sooner. Thryduulf (talk) 12:59, 2 February 2025 (UTC)[reply]
I agree with what matters is how much time and effort it takes other people to read it. The example cited below of the discussion at Wikipedia:Articles_for_deletion/Ribu is a good example of the cost to the project of low-content discussion - it takes a significant amount of time to even determine that the contribution is low-quality, so it's not always a viable option to ignore it. I'm coming to this conversation from a perspective of someone who has in the past spent a lot of effort engaging with newer editors who come here in good faith in the sense that they want to build a better encyclopedia, but who lack understanding of our community norms around consensus building and tend to view POV issues as a battleground. One option is to simply ignore such people's comments, revert their contributions, and wait for them to get frustrated and leave or do something bad enough that they get blocked. But if we make that our default stance towards problem editors the pool of active editors will continue to decline, which hurts the long-term health of the project. To give these editors a chance to develop into useful contributors requires wading through, understanding, and replying to a lot of low-quality comments, and if these comments are AI generated then I'm just wasting my time. -- LWG talk 20:44, 2 February 2025 (UTC)[reply]
  • Support: ban prompt-lawyering and treat it like sockpuppetry. It's obvious that widespread access to generative AI allows editors to flood discussions with prompt-generated text, and this is no doubt happening already on the site: it's just too easy to do. While it appears some editors here are keen to downplay the very real danger here (what's that about?), we need a statement noting that this is not OK and is a form of not only wikilawyering but outright abuse. When it can be identified, this needs to be treated just as severely as sockpuppetry. :bloodofox: (talk) 07:11, 1 February 2025 (UTC)[reply]
    I've seen so many human-generated walls of text that were repetitive and failed to move discussion forward through new analysis or points. Personally I feel the community needs to deal with this problem, no matter how the text was created. isaacl (talk) 17:38, 1 February 2025 (UTC)[reply]
    Exactly this. The problem is the wall, not how the wall was built. Thryduulf (talk) 19:03, 1 February 2025 (UTC)[reply]
    One solution is AI, which can summarize human generated walls of text. "Summarize the following in 2 sentences". -- GreenC 19:23, 1 February 2025 (UTC)[reply]
    AI summary of this section: The discussion thread debates whether AI-generated "walls of text" in Wikipedia discussions should be considered disruptive editing. While some argue for treating AI-generated content like sockpuppetry, others point out that human-generated walls of text can be equally problematic, suggesting the focus should be on addressing lengthy, unproductive contributions regardless of their source GreenC 19:25, 1 February 2025 (UTC)[reply]
    True, but we shouldn't force everyone to rely on AI writing (potentially inaccurate) summaries if they wish to meaningfully participate in a discussion. Chaotic Enby (talk · contribs) 20:38, 1 February 2025 (UTC)[reply]
    +1 Hydrangeans (she/her | talk | edits) 04:40, 2 February 2025 (UTC)[reply]
    AI summation is a tool you can use, or not, it's your choice to generate and consume it. Obviously posting AI summation is not appropriate unless solicited. -- GreenC 05:03, 2 February 2025 (UTC)[reply]
    Unless the community decides to delegate decision-making to an AI program, repeated redundant verbose comments can swamp Wikipedia's current discussion format, and unnecessarily prolong discussion. This results in participants losing focus and no longer engaging, which makes it harder to build a true consensus. The problem is not trying to understand such comments, but how they slow down progress. isaacl (talk) 22:52, 1 February 2025 (UTC)[reply]
    There is WP:TEXTWALL ("The rush you feel in your veins as you type it"). It has varieties of walls of text. Maybe a new section for AI. -- GreenC 05:21, 2 February 2025 (UTC)[reply]
Oppose (although nothing is being proposed, but whatever). My opinion is unchanged from the last time this wall-of-text-producing topic came up. Gnomingstuff (talk) 22:49, 1 February 2025 (UTC)[reply]
Maybe we should consider a temporary moratorium on proposals to ban uses of AI. Just 30 days? It's the same thing over and over: "I'm worried that someone might use LLM to generate replies that don't represent their real thoughts. Please, let's have a rule that says they're bad, even though it's unenforceable and will result in false accusations." WhatamIdoing (talk) 00:20, 2 February 2025 (UTC)[reply]
Yes. Perhaps followed by a requirement that all future proposals explicitly state how they differ from the previous ones that have been rejected or failed to reach consensus, how/why the proposer believes that the differences will overcome the objections and/or why those objections do not apply to their proposal, and why AI needs to be called out specifically. This last point is poorly worded; I'm thinking of a requirement to explain why e.g. AI walls of text are a different problem from non-AI walls of text, such that we need a policy/guideline/whatever for AI walls of text specifically rather than walls of text in general. Thryduulf (talk) 06:52, 2 February 2025 (UTC)[reply]
The most recent major RfCs on generative AI were closed with strong consensuses supporting restrictions. We aren't going to put a moratorium on such discussions just because you and Thryduulf ardently opposed their outcomes. JoelleJay (talk) 17:59, 2 February 2025 (UTC)[reply]
Agree that such behavior is disruptive if it is in fact happening. I haven't personally seen it, but I think it is reasonable for the community to set expectations before the problem behavior becomes widespread. I would like the community to 1) encourage transparency in the use of LLMs to write content, and 2) recognize that it is unreasonable and disruptive to expect the other party to put a lot more effort into comprehending and replying to your comments than you put into creating them. If all you did to contribute to the discussion is spend 30 seconds putting the prompt "summarize the reasons to keep/delete this article" into your LLM of choice, then I shouldn't be expected to do more in response than spend 30 seconds saying "looks like they have an opinion about this, but couldn't be bothered to articulate it themselves." As Thryduulf has pointed out, this principle extends beyond LLMs: any low-effort, low-quality contribution to the discussion merits a similarly minimal response; however, the extreme ease of generating responses with LLMs and the difficulty of quickly identifying their use make them of special concern. -- LWG talk 05:34, 2 February 2025 (UTC)[reply]
I've previously written about being respectful of other editors, which includes being respectful of the time of others, such as making a concerted effort to be up-to-date on discussions when making a comment, copy editing one's remarks to be concise, avoiding comments that aren't germane to the discussion at hand, being understanding if no one responds to your inquiries, and considering how your actions affect the time spent by others on Wikipedia. Focusing on the time spent writing a comment is a distraction from the real problem of poor communication. I don't want editors to argue that their comments deserve a response because they spent a lot of time writing them. isaacl (talk) 06:16, 2 February 2025 (UTC)[reply]
Oh I definitely don't want to imply that comments deserve a response because of the time spent writing them. But I'm coming at this from the perspective of someone who has spent a lot of hours over the years responding to comments that don't deserve a response, because failing to respond will either result in escalating anger and continued disruption, or will drive editors away from the project. We can say "good riddance" to such editors, but a lot of us weren't the most consensus and culture savvy in our earlier days as editors, and we already struggle with editor retention and development. If someone is writing human-generated textwalls but is redeemable, I'd like to engage with them and try to mentor them into a better understanding of our culture, but I'm wasting my time if I'm doing that with LLMs, and it takes a lot of effort from me to tell the difference. Hence why I feel that the use of LLMs is acceptable, but should be disclosed. -- LWG talk 20:44, 2 February 2025 (UTC)[reply]
This is one example: Wikipedia:Articles_for_deletion/Ribu. I believe signing your user name to a comment that was written by someone else (including AI) without attribution shouldn't be allowed in the first place. Graywalls (talk) 06:56, 2 February 2025 (UTC)[reply]
Oh it is definitely happening. Here's just the most recent example that I personally collapsed. JoelleJay (talk) 18:03, 2 February 2025 (UTC)[reply]
Thanks to Graywalls and JoelleJay for the examples. I agree that that behavior is a drain on valuable project resources. -- LWG talk 20:44, 2 February 2025 (UTC)[reply]
Seems a variation of WP:TEXTWALL. There is already an established AN/I practice for this, refuse to read it and ask for something shorter. CMD (talk) 11:26, 2 February 2025 (UTC)[reply]
We recently got strong consensus (a super-majority) that it is within admins' and closers' discretion to discount, strike, or collapse obvious use of generative LLMs; it makes perfect sense to reflect this in DE policy. JoelleJay (talk) 17:27, 2 February 2025 (UTC)[reply]
That makes sense, although I don't recall that consensus being directly related to walls of text. CMD (talk) 18:58, 2 February 2025 (UTC)[reply]

Looking at RfCs in AP areas, I see a lot of very new editors; maybe EC should be required

[edit]

I think that requiring EC in such RfCs is important - I've seen a lot of new editors on both sides of issues who clearly haven't much understanding of our policies and guidelines. Doug Weller talk 12:17, 31 January 2025 (UTC)[reply]

I would generally support such a measure at this time. We are certainly seeing a bunch of "chatter" from new and IP contributors. In any normal discussion, I'd say fine. But in CTs or meta discussions, requiring entry permissions for formal processes is not an unreasonable step in adding layers of protection to vital conversations. Generally speaking, contributors who have no stake in en.wiki are less concerned about its continued function than long-time contributors are. Any rando can hurl bombs with impunity. This impunity is not always great for civil disagreement. BusterD (talk) 13:57, 31 January 2025 (UTC)[reply]
That's too extreme imo. If these new editors are making non-policy based arguments, surely the RfC closer will take that into account when they make their close. Some1 (talk) 14:11, 31 January 2025 (UTC)[reply]
I'm with Some1 in thinking this is too extreme, especially since RfCs sometimes come about because new eyes saw something and mentioned it on the talk page. I do think that a more explicit statement about sticking to established policy would be helpful, but not simply making EC privileges even more fundamental to being able to use Wikipedia. I do wish that suggesting we simply ignore Wikipedia policy during RfCs were considered a policy violation that could warrant a minor sanction, unless it is very clearly a good-faith suggestion, though.
Is this an issue of RfCs disproportionately being junked up by new users making bad suggestions, or more of a general thing? Warrenᚋᚐᚊᚔ 14:22, 31 January 2025 (UTC)[reply]
You think that suggesting that we follow the long-standing official policy that Wikipedia:If a rule prevents you from improving or maintaining Wikipedia, ignore it. should be sanctionable? If so, you'll be the first in line for punishment, because you just suggested that we ignore that policy. WhatamIdoing (talk) 00:35, 2 February 2025 (UTC)[reply]
The EC system for ARBPIA talkpages is a mess. Banning people from participating, but leaving them all the technical tools to participate and then enforcing the ban by reverting their contributions, both takes up editor time and seems a deeply poor way to treat good-faith contributors. I would oppose this system being extended elsewhere in that way. It should only be considered if we first have agreed technical ways to manage it, for example holding all AP RfCs on EC-protected subpages with big labels informing editors of the situation at the top and bottom of the RfCs. CMD (talk) 14:36, 31 January 2025 (UTC)[reply]
Noting that, since a few days ago, editors don't have all the technical tools for them to participate anymore, as an edit filter disallows non-edit request posts. My bad, it looks like the edit filter is still being tested and doesn't block posts yet. Chaotic Enby (talk · contribs) 16:51, 31 January 2025 (UTC)[reply]
Thanks for the update; I wonder how it excepts edit requests. That said, this would block posts for the entire talkpage, wouldn't it, not just RfCs as is being proposed? CMD (talk) 01:32, 1 February 2025 (UTC)[reply]
This does affect the entire page, so a similar edit filter for RfCs would likely need the RfC itself to be transcluded from a separate page. For the edit request part, we "just" had to make a regex looking for every single redirect of {{edit protected}} and {{edit extended-protected}} (a lot!) Chaotic Enby (talk · contribs) 01:47, 1 February 2025 (UTC)[reply]
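(For illustration only: a minimal Python sketch of the kind of pattern such a filter might test against added text, under the assumption that the filter essentially checks whether the added wikitext contains one of the edit-request templates. The two template names below are just the ones mentioned above; the real list reportedly has to include every redirect of each, so it would be much longer, and the actual filter is written in the edit-filter rule language rather than Python.)

import re

# The two template names mentioned above; the actual filter reportedly has to
# enumerate every redirect of each, so the real list is much longer.
EDIT_REQUEST_TEMPLATES = ["edit protected", "edit extended-protected"]

# Match "{{edit protected|...}}" etc., case-insensitively, allowing whitespace
# after the opening braces.
pattern = re.compile(
    r"\{\{\s*(" + "|".join(re.escape(t) for t in EDIT_REQUEST_TEMPLATES) + r")\s*[|}]",
    re.IGNORECASE,
)

def is_edit_request(added_wikitext: str) -> bool:
    """True if the added wikitext contains one of the edit-request templates."""
    return bool(pattern.search(added_wikitext))

print(is_edit_request("{{Edit extended-protected|answered=no}} Please change X to Y."))  # True
print(is_edit_request("I disagree with this RfC."))                                      # False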
Just! Thanks for the work. CMD (talk) 01:51, 1 February 2025 (UTC)[reply]
Wait, you're actually planning to block people for using the Edit buttons? Even if they don't know what's going on? If you don't want non-EC folks participating on a page, then you really need to use page protection. Don't give them an Edit button and then block them for not noticing that they weren't supposed to use it. WhatamIdoing (talk) 00:37, 2 February 2025 (UTC)[reply]
This edit at Talk:Gulf of Mexico [2] is not unusual; see also [3] or [4]. I wish I could easily find out how many new editors there are there. — Preceding unsigned comment added by Doug Weller (talkcontribs) 14:52, 31 January 2025 (UTC)[reply]
New accounts whose sole (or essentially sole) purpose is to comment in contentious RFCs should be tagged with Template:Single-purpose account. Hemiauchenia (talk) 17:24, 31 January 2025 (UTC)[reply]
Well... I dunno about that. You shouldn't label someone as an SPA if they've only made one edit, because that's not sensible. We were all "single-purpose accounts" on our first edit. For example, your first four edits were about skunks.
Maybe we need two different SPA labels, one of which rather benignly says something like "Welcome to Wikipedia! If you have any questions, you can get answers at the Wikipedia:Teahouse" and the other says "This account has made more than n edits but has made few or no edits outside this topic area". WhatamIdoing (talk) 00:47, 2 February 2025 (UTC)[reply]
We can describe a temporal version of the latter now thanks to WP:ARBBER. CMD (talk) 02:16, 2 February 2025 (UTC)[reply]
So maybe retrofit the concept of an SPA to say that if you've made 11 edits, and "only" 7 are about American politics, then you're not an SPA? WhatamIdoing (talk) 02:22, 2 February 2025 (UTC)[reply]
The opposite, ARBBER expects no more than 3 of 11 edits to be about PIA. CMD (talk) 02:37, 2 February 2025 (UTC)[reply]

Guideline against use of AI images in BLPs and medical articles?

[edit]

I have recently seen AI-generated images be added to illustrate both BLPs (e.g. Laurence Boccolini, now removed) and medical articles (e.g. Legionella#Mechanism). While we don't have any clear-cut policy or guideline about these yet, they appear to be problematic. Illustrating a living person with an AI-generated image might misinform readers as to what that person actually looks like, while using AI in medical diagrams can lead to anatomical inaccuracies (such as the lung structure in the second image, where the pleura becomes a bronchiole twisting over the primary bronchi), or even medical misinformation. While a guideline against AI-generated images in general might be more debatable, do we at least have a consensus for a guideline against these two specific use cases?

To clarify, I am not including potentially relevant AI-generated images that only happen to include a living person (such as in Springfield pet-eating hoax), but exclusively those used to illustrate a living person in a WP:BLP context. Chaotic Enby (talk · contribs) 12:11, 30 December 2024 (UTC)[reply]

What about all biographies, including those of dead people? The lead image shouldn't be AI-generated for any biography. - Sebbog13 (talk) 12:17, 30 December 2024 (UTC)[reply]
Same with animals, organisms etc. - Sebbog13 (talk) 12:20, 30 December 2024 (UTC)[reply]
I personally am strongly against using AI in biographies and medical articles - as you highlighted above, AI is absolutely not reliable in generating accurate imagery and may contribute to medical or general misinformation. I would 100% support a proposal banning AI imagery from these kinds of articles - and a recommendation to not use such imagery other than in specific scenarios. jolielover♥talk 12:28, 30 December 2024 (UTC)[reply]
I'd prefer a guideline prohibiting the use of AI images full stop. There are too many potential issues with accuracy, honesty, copyright, etc. Has this already been proposed or discussed somewhere? – Joe (talk) 12:38, 30 December 2024 (UTC)[reply]
There hasn't been a full discussion yet, and we have a list of uses at Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts, but it could be good to deal with clear-cut cases like this (which are already a problem) first, as the wider discussion is less certain to reach the same level of consensus. Chaotic Enby (talk · contribs) 12:44, 30 December 2024 (UTC)[reply]
Discussions are going on at Wikipedia_talk:Biographies_of_living_persons#Proposed_addition_to_BLP_guidelines and somewhat at Wikipedia_talk:No_original_research#Editor-created_images_based_on_text_descriptions. I recommend workshopping an RfC question (or questions) then starting an RfC. Some1 (talk) 13:03, 30 December 2024 (UTC)[reply]
Oh, didn't catch the previous discussions! I'll take a look at them, thanks! Chaotic Enby (talk · contribs) 14:45, 30 December 2024 (UTC)[reply]
There is one very specific exception I would put to a very sensible blanket prohibition on using AI images to illustrate people, especially BLPs. That is where the person themselves is known to use that image, which I have encountered in Simon Ekpa. CMD (talk) 15:00, 30 December 2024 (UTC)[reply]
While the Ekpa portrait is just an upscale (and I'm not sure what positive value that has for us over its source; upscaling does not add accuracy, nor is it an artistic interpretation meant to reveal something about the source), this would be hard to translate to the general case. Many AI portraits would have copyright concerns, not just from the individual (who may have announced some appropriate release for it), but due to the fact that AI portraits can lean heavily on uncredited individual sources. --Nat Gertler (talk) 16:04, 30 December 2024 (UTC)[reply]
For the purposes of discussing whether to allow AI images at all, we should always assume that there exist AI images we can legally use to illustrate every topic. We cannot use those that are not legal (including, but not limited to, copyright violations) so they are irrelevant. An image generator trained exclusively on public domain and CC0 images (and any other licenses that explicitly allow derivative works without requiring attribution) would not be subject to any copyright restrictions (other than possibly by the prompter and/or generator's license terms, which are both easy to determine). Similarly, we should not base policy on the current state of the technology, but assume that the quality of its output will improve to the point that it is equal to that of a skilled human artist. Thryduulf (talk) 17:45, 30 December 2024 (UTC)[reply]
The issue is, either there are public domain/CC0 images of the person (in which case they can be used directly) or there aren't, in which case the AI is making up how a person looks. Chaotic Enby (talk · contribs) 20:00, 30 December 2024 (UTC)[reply]
We tend to use art representations either where no photographs are available (in which case, AI will also not have access to photographs) or where what we are showing is an artist's insight on how this person is perceived, which is not something that AI can give us. In any case, we don't have to build policy now around some theoretical AI in the future; we can deal with the current reality, and policy can be adjusted if things change in the future. And even that theoretical AI does make it more difficult to detect copyvio -- Nat Gertler (talk) 20:54, 30 December 2024 (UTC)[reply]
I wouldn't call it an upscale, given that whatever was done appears to have removed detail, but we use that image because it is specifically the edited version which was sent to VRT. CMD (talk) 10:15, 31 December 2024 (UTC)[reply]
Is there any clarification on using purely AI-generated images vs. using AI to edit or alter images? AI tools have been implemented in a lot of photo editing software, such as to identify objects and remove them, or generate missing content. The generative expand feature would appear to be unreliable (and it is), but I use it to fill in gaps of cloudless sky produced from stitching together photos for a panorama (I don't use it if there are clouds, or for starry skies, as it produces non-existent stars or unrealistic clouds). Photos of Japan (talk) 18:18, 30 December 2024 (UTC)[reply]
Yes, my proposal is only about AI-generated images, not AI-altered ones. That could in fact be a useful distinction to make if we want to workshop a RfC on the matter. Chaotic Enby (talk · contribs) 20:04, 30 December 2024 (UTC)[reply]
I'm not sure if we need a clear cut policy or guideline against them... I think we treat them the same way as we would treat an editor's kitchen table sketch of the same figure. Horse Eye's Back (talk) 18:40, 30 December 2024 (UTC)[reply]
For those wanting to ban AI images full stop, well, you are too late. Most professional image editing software, including the software in one's smartphone as well as desktop, uses AI somewhere. Noise reduction software uses AI to figure out what might be noise and what might be texture. Sharpening software uses AI to figure out what should be smooth and what might have a sharp detail it can invent. For example, a bird photo not sharp enough to capture feather detail will have feather texture imagined onto it. Same for hair. Or grass. Any image that has been cleaned up to remove litter or dust or spots will have the cleaned area AI generated based on its surroundings. The sky might be extended with AI. These examples are a bit different from a 100% imagined image created from a prompt. But probably not in a way that is useful as a rule.
I think we should treat AI generated images the same as any user-generated image. It might be a great diagram or it might be terrible. Remove it from the article if the latter, not because someone used AI. If the image claims to photographically represent something, we may judge whether the creator has manipulated the image too much to be acceptable. For example, using AI to remove a person in the background of an image taken of the BLP subject might be perfectly fine. People did that with traditional Photoshop/Lightroom techniques for years. Using AI to generate what claims to be a photo of a notable person is on dodgy ground wrt copyright. -- Colin°Talk 19:12, 30 December 2024 (UTC)[reply]
I'm talking about the case of using AI to generate a depiction of a living person, not using AI to alter details in the background. That is why I only talk about AI-generated images, not AI-altered images. Chaotic Enby (talk · contribs) 20:03, 30 December 2024 (UTC)[reply]
Regarding some sort of brightline ban on the use of any such image in anything article medical related: absolutely not. For example, if someone wanted to use AI tools as opposed to other tools to make an image such as this one (as used in the "medical" article Fluconazole) I don't see a problem, so long as it is accurate. Accurate models and illustrations are useful and that someone used AI assistance as opposed to a chisel and a rock is of no concern. — xaosflux Talk 19:26, 30 December 2024 (UTC)[reply]
I believe that the appropriateness of AI images depends on how they are used. In BLP and medical articles, such images are inappropriate, but it would be inappropriate to ban them completely across the site. By the same logic, if you want a full ban of AI, you are banning fire just because people can get burned, without considering cooking. JekyllTheFabulous (talk) 13:33, 31 December 2024 (UTC)[reply]
AI generated medical related image. No idea if this is accurate, but if it is I don't see what the problem would be compared to if this was made with ink and paper. — xaosflux Talk 00:13, 31 December 2024 (UTC)[reply]
I agree that AI-generated images should not be used in most cases. They essentially serve as misinformation. I also don't think that they're really comparable to drawings or sketches because AI-generation uses a level of photorealism that can easily trick the untrained eye into thinking it is real. Di (they-them) (talk) 20:46, 30 December 2024 (UTC)[reply]
AI doesn't need to be photorealistic though. I see two potential issues with AI. The first is images that might deceive the viewer into thinking they are photos, when they are not. The second is potential copyright issues. Outside of the copyright issues I don't see any unique concerns for an AI-generated image (that doesn't appear photorealistic). Any accuracy issues can be handled the same way a user who manually drew an image could be handled. Photos of Japan (talk) 21:46, 30 December 2024 (UTC)[reply]
AI-generated depictions of BLP subjects are often more "illustrative" than drawings/sketches of BLP subjects made by 'regular' editors like you and me. For example, compare the AI-generated image of Pope Francis and the user-created cartoon of Brigette Lundy-Paine. Neither image belongs on their respective bios, of course, but the AI-generated image is no more "misinformation" than the drawing. Some1 (talk) 00:05, 31 December 2024 (UTC)[reply]
I would argue the opposite: neither are made up, but the first one, because of its realism, might mislead readers into thinking that it is an actual photograph, while the second one is clearly a drawing. Which makes the first one less illustrative, as it carries potential for misinformation, despite being technically more detailed. Chaotic Enby (talk · contribs) 00:31, 31 December 2024 (UTC)[reply]
AI-generated images should always say "AI-generated image of [X]" in the image caption. No misleading readers that way. Some1 (talk) 00:36, 31 December 2024 (UTC)[reply]
Yes, and they don't always do it, and we don't have a guideline about this either. The issue is, many people have many different proposals on how to deal with AI content, meaning we always end up with "no consensus" and no guidelines on use at all, even if most people are against it. Chaotic Enby (talk · contribs) 00:40, 31 December 2024 (UTC)[reply]
always end up with "no consensus" and no guidelines on use at all, even if most people are against it Agreed. Even a simple proposal to have image captions note whether an image is AI-generated will have editors wikilawyer over the definition of 'AI-generated.' I take back my recommendation of starting an RfC; we can already predict how that RfC will end. Some1 (talk) 02:28, 31 December 2024 (UTC)[reply]
Of interest perhaps is this 2023 NOR noticeboard discussion on the use of drawn cartoon images in BLPs. Zaathras (talk) 22:38, 30 December 2024 (UTC)[reply]
We should absolutely not be including any AI images in anything that is meant to convey facts (with the obvious exception of an AI image illustrating the concept of an AI image). I also don't think we should be encouraging AI-altered images -- the line between "regular" photo enhancement and what we'd call "AI alteration" is blurry, but we shouldn't want AI edits for the same reason we wouldn't want fake Photoshop composites.
That said, I would assume good faith here: some of these images are probably being sourced from Commons, and Commons is dealing with a lot of undisclosed AI images. Gnomingstuff (talk) 23:31, 30 December 2024 (UTC)[reply]
Do you really mean to ban single images showing the way birds use their wings?
Why wouldn't we want "fake Photoshop composites"? A Composite photo can be very useful. I'd be sad if we banned c:Category:Chronophotographic photomontages. WhatamIdoing (talk) 06:40, 31 December 2024 (UTC)[reply]
Sorry, should have been more clear -- composites that present themselves as the real thing, basically what people would use deepfakes for now. Gnomingstuff (talk) 20:20, 31 December 2024 (UTC)[reply]
Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop through techniques like compositing. That line is that the diffusion model is reverse-engineering an image to match a text prompt from a pattern of semi-random static associated with similar text prompts. As such it's just automated glurge; at best it's only as good as the ability of the software to parse a text prompt and the ability of a prompter to draft sufficiently specific language. And absolutely none of that does anything to solve the "hallucination" problem. On the other hand, in photoshop, if I put in two layers both containing a bird on a transparent background, what I, the human making the image, see is what the software outputs. Simonm223 (talk) 18:03, 15 January 2025 (UTC)[reply]
"Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop": others do not. If you want to ban or restrict one but not the other then you need to explain how the difference can be reliably determined, and how one is materially different to the other in ways other than your personal opinion. Thryduulf (talk) 18:45, 15 January 2025 (UTC)[reply]
I don't think any guideline, let alone policy, would be beneficial and indeed on balance is more likely to be harmful. There are always only two questions that matter when determining whether we should use an image, and both are completely independent of whether the image is AI-generated or not:
  1. Can we use this image in this article? This depends on matters like copyright, fair use, whether the image depicts content that is legal for an organisation based in the United States to host, etc. Obviously if the answer is "no", then everything else is irrelevant, but as the law and WMF, Commons and en.wp policies stand today there exist some images in both categories we can use, and some images in both categories we cannot use.
  2. Does using this image in this article improve the article? This is relative to other options, one of which is always not using any image, but in many cases also involves considering alternative images that we can use. In the case of depictions of specific, non-hypothetical people or objects, one criterion we use to judge whether the image improves the article is whether it is an accurate representation of the subject. If it is not an accurate representation then it doesn't improve the article and thus should not be used, regardless of why it is inaccurate. If it is an accurate representation, then its use in the article will not be misrepresentative or misleading, regardless of whether it is or is not AI generated. It may or may not be the best option available, but if it is then it should be used regardless of whether it is or is not AI generated.
The potential harm I mentioned above is twofold. Firstly, Wikipedia is, by definition, harmed when an image exists that we could use to improve an article but we do not use it in that article. A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article. The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware. Thryduulf (talk) 00:52, 31 December 2024 (UTC)[reply]
I agree with almost the entirety of your post with a caveat on whether something "is an accurate representation". We can tell whether non-photorealistic images are accurate by assessing whether the image accurately conveys the idea of what it is depicting. Photos do more than convey an idea, they convey the actual look of something. With AI generated images that are photorealistic it is difficult to assess whether they accurately convey the look of something (the shading might be illogical in subtle ways, there could be an extra finger that goes unnoticed, a mole gets erased), but readers might be deceived by the photo-like presentation into thinking they are looking at an actual photographic depiction of the subject which could differ significantly from the actual subject in ways that go unnoticed. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)[reply]
"A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article." That's why I'm suggesting a guideline, not a policy. Guidelines are by design more flexible, and WP:IAR still does (and should) apply in edge cases.
"The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been." In that case, there is a licensing problem. AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
"Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware." In that case, it's mostly because of the ambiguity in wording: AI-edited images are very common, and are sometimes called "AI-generated", but here we should focus on actual prompt outputs, of the style "I asked a model to generate me an image of a BLP". Chaotic Enby (talk · contribs) 11:13, 31 December 2024 (UTC)[reply]
Simply not having a completely unnecessary policy or guideline is infinitely better than relying on IAR - especially as this would have to be ignored every time it is relevant. When the AI image is not the best option (which obviously includes all the times it's unsuitable or inaccurate), existing policies, guidelines, practice and frankly common sense mean it won't be used. This means the only time the guideline would be relevant is when an AI image is the best option, and as we obviously should be using the best option in all cases, we would need to ignore the guideline against using AI images.
"AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated." The key words here are "supposed to be" and "shouldn't"; editors absolutely will speculate that images are AI-generated and that the Commons labelling is incorrect. We are supposed to assume good faith, but this very discussion shows that when it comes to AI some editors simply do not do that.
Regarding your final point, that might be what you mean, but it is not what all other commenters mean when they want to exclude all AI images. Thryduulf (talk) 11:43, 31 December 2024 (UTC)[reply]
For your first point, the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image), but the model likely doesn't have any available image either and most likely just made it up. As my proposal is essentially limited to that (I don't include AI-edited images, only those that are purely generated by a model), I don't think there will be many cases where IAR would be needed.
Regarding your two other points, you are entirely correct, and while I am hoping for nuance on the AI issue, it is clear that some editors might not do that. For the record, I strongly disagree with a blanket ban of "AI images" (which includes both blatant "prompt in model" creations and a wide range of more subtle AI retouching tools) or anything like that. Chaotic Enby (talk · contribs) 11:49, 31 December 2024 (UTC)[reply]
"the guideline is mostly to take care of the 'prompt fed in model' BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image)." There are only two possible scenarios regarding verifiability:
  1. The image is an accurate representation and we can verify that (e.g. by reference to non-free photos).
    • Verifiability is no barrier to using the image, whether it is AI generated or not.
    • If it is the best image available, and editors agree using it is better than not having an image, then it should be used whether it is AI generated or not.
  2. The image is either not an accurate representation, or we cannot verify whether it is or is not an accurate representation
    • The only reasons we should ever use the image are:
      • It has been the subject of notable commentary and we are presenting it in that context.
      • The subject verifiably uses it as a representation of themselves (e.g. as an avatar or logo)
    This is already policy, whether the image is AI generated or not is completely irrelevant.
You will note that in no circumstance is it relevant whether the image is AI generated or not. Thryduulf (talk) 13:27, 31 December 2024 (UTC)[reply]
In your first scenario, there is the issue of an accurate AI-generated image misleading people into thinking it is an actual photograph of the person, especially as they are most often photorealistic. Even besides that, a mostly accurate representation can still introduce spurious details, and this can mislead readers as they do not know to what level it is actually accurate. This scenario doesn't really happen with drawings (which are clearly not photographs), and is very much a consequence of AI-generated photorealistic pictures being a thing.
In the second scenario, if we cannot verify that it is not an accurate representation, it can be hard to remove the image with policy-based reasons, which is why a guideline will again be helpful. Having a single guideline against fully AI-generated images takes care of all of these scenarios, instead of having to make new specific guidelines for each case that emerges because of them. Chaotic Enby (talk · contribs) 13:52, 31 December 2024 (UTC)[reply]
If the image is misleading or unverifiable it should not be used, regardless of why it is misleading or unverifiable. This is existing policy and we don't need anything specifically regarding AI to apply it - we just need consensus that the image is misleading or unverifiable. Whether it is or is not AI generated is completely irrelevant. Thryduulf (talk) 15:04, 31 December 2024 (UTC)[reply]
AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
I mean... yes, we should? At the very least Commons should go hunting for mislabeled images -- that's the whole point of license review. The thing is that things are absolutely swamped over there and there are hundreds of thousands of images waiting for review of some kind. Gnomingstuff (talk) 20:35, 31 December 2024 (UTC)[reply]
Yes, but that's a Commons thing. A guideline on English Wikipedia shouldn't decide of what is to be done on Commons. Chaotic Enby (talk · contribs) 20:37, 31 December 2024 (UTC)[reply]
I just mean that given the reality of the backlogs, there are going to be mislabeled images, and there are almost certainly going to be more of them over time. That's just how it is. We don't have control over that, but we do have control over what images go into articles, and if someone has legitimate concerns about an image being AI-generated, then they should be raising those. Gnomingstuff (talk) 20:45, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated images on Wikipedia. As others have highlighted above, this is not just a slippery slope but an outright downward spiral. We don't use AI-generated text and we shouldn't use AI-generated images: these aren't reliable and they're also WP:OR scraped from who knows what and where. Use only reliable material from reliable sources. As for the argument of 'software now has AI features', we all know that there's a huge difference between someone using a smoothing feature and someone generating an image from a prompt. :bloodofox: (talk) 03:12, 31 December 2024 (UTC)[reply]
    Reply, the section of WP:OR concerning images is WP:OI which states "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments". Using AI to generate an image only violates WP:OR if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images may be too simple to be copyrightable, for example AI-generated images of chemicals or mathematical structures potentially. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)[reply]
    Prompt-generated images are unquestionably a violation of WP:OR and WP:SYNTH: type in your description and you get an image scraping who knows what and from who knows where, often Wikipedia. Wikipedia isn't a WP:RS. Get real. :bloodofox: (talk) 23:35, 1 January 2025 (UTC)[reply]
    "Unquestionably"? Let me question that, @Bloodofox. ;-)
    If an editor were to use an AI-based image-generating service and the prompt is something like this:
    "I want a stacked bar chart that shows the number of games won and lost by FC Bayern Munich each year. Use the team colors, which are red #DC052D, blue #0066B2, and black #000000. The data is:
    • 2014–15: played 34 games, won 25, tied 4, lost 5
    • 2015–16: played 34 games, won 28, tied 4, lost 2
    • 2016–17: played 34 games, won 25, tied 7, lost 2
    • 2017–18: played 34 games, won 27, tied 3, lost 4
    • 2018–19: played 34 games, won 24, tied 6, lost 4
    • 2019–20: played 34 games, won 26, tied 4, lost 4
    • 2020–21: played 34 games, won 24, tied 6, lost 4
    • 2021–22: played 34 games, won 24, tied 5, lost 5
    • 2022–23: played 34 games, won 21, tied 8, lost 5
    • 2023–24: played 34 games, won 23, tied 3, lost 8"
    I would expect it to produce something that is not a violation of either OR in general or OR's SYNTH section specifically. What would you expect, and why do you think it would be okay for me to put that data into a spreadsheet and upload a screenshot of the resulting bar chart, but you don't think it would be okay for me to put that same data into a image generator, get the same thing, and upload that?
    We must not mistake the tools for the output. Hand-crafted bad output is bad. AI-generated good output is good. WhatamIdoing (talk) 01:58, 2 January 2025 (UTC)[reply]
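    (To make the comparison concrete: the same chart can be produced deterministically from that data with ordinary charting code rather than an image generator. A minimal sketch, assuming Python with matplotlib; the title, figure size, and output filename are illustrative.)

    import matplotlib.pyplot as plt

    # Data exactly as given in the comment above (games won / tied / lost per season).
    seasons = ["2014-15", "2015-16", "2016-17", "2017-18", "2018-19",
               "2019-20", "2020-21", "2021-22", "2022-23", "2023-24"]
    won  = [25, 28, 25, 27, 24, 26, 24, 24, 21, 23]
    tied = [ 4,  4,  7,  3,  6,  4,  6,  5,  8,  3]
    lost = [ 5,  2,  2,  4,  4,  4,  4,  5,  5,  8]

    # Stack the three outcomes per season, using the team colors from the prompt.
    fig, ax = plt.subplots(figsize=(8, 4))
    ax.bar(seasons, won, color="#DC052D", label="Won")
    ax.bar(seasons, tied, bottom=won, color="#0066B2", label="Tied")
    ax.bar(seasons, lost, bottom=[w + t for w, t in zip(won, tied)],
           color="#000000", label="Lost")
    ax.set_ylabel("Games")
    ax.set_title("FC Bayern Munich results by season")
    ax.legend()
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.savefig("bayern_results.svg")  # deterministic output, suitable for upload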
    Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, these sorts of 'but we can use it for graphs and charts' devil's advocate scenarios aren't helpful. We're discussing generating images of people, places, and objects here, and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH. As for the charts and graphs, there are any number of ways to produce these. :bloodofox: (talk) 03:07, 2 January 2025 (UTC)[reply]
    "We're discussing generating images of people, places, and objects here": the proposal contains no such limitation. "and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH": do you have a citation for that? Other people have explained better than I can how it is not necessarily true, and certainly not unquestionable. Thryduulf (talk) 03:14, 2 January 2025 (UTC)[reply]
    As you're well aware, these images are produced by scraping and synthesizing material from who knows what and where: it's ultimately pure WP:OR to produce these fake images and they're a straightforward product of synthesis of multiple sources (WP:SYNTH) - worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site exterior to articles on the topic of generative AI. Not only do I find this incredibly unethical, I believe it is intensely detrimental to Wikipedia, which is already a flailing and shrinking project. :bloodofox: (talk) 03:23, 2 January 2025 (UTC)[reply]
    So you think the lead image at Gisèle Pelicot is a SYNTH violation? Its (human) creator explicitly says "This is not done from one specific photo. As I usually do when I draw portraits of people that I can't see in person, I look at a lot of photos of them and then create my own rendition" in the image description, which sounds like "the product of synthesis of multiple sources" to me, and "these sources are unknown because" the images the artist looked at are not disclosed.
    A lot of my concern about blanket statements is the principle that what's sauce for the goose is sauce for the gander, too. If it's okay for a human to do something by hand, then it should be okay for a human using a semi-automated tool to do it, too.
    (Just in case you hadn't heard, the rumors that the editor base is shrinking have been false for over a decade now. Compared to when you created your account in mid-2005, we have about twice as many high-volume editors.) WhatamIdoing (talk) 06:47, 2 January 2025 (UTC)[reply]
    Review WP:SYNTH; your attempts at downplaying a prompt-generated image as "semi-automated" show the root of the problem: if you can't detect the difference between a human sketching from a reference and a machine scraping who-knows-what on the internet, you shouldn't be involved in this discussion. As for editor retention, this remains a serious problem on the site: while the site continues to grow (and becomes core fodder for AI-scraping) and becomes increasingly visible, editorial retention continues to drop. :bloodofox: (talk) 09:33, 2 January 2025 (UTC)[reply]
    Please scroll down below SYNTH to the next section titled "What is not original research" which begins with WP:OI, our policies on how images relate to OR. OR (including SYNTH) only applies to images with regards to if they illustrate "unpublished ideas or arguments". It does not matter, for instance, if you synthesize an original depiction of something, so long as the idea of that thing is not original. Photos of Japan (talk) 09:55, 2 January 2025 (UTC)[reply]
    Yes, which explicitly states:
    It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.
    Using a machine to generate a fake image of someone is far beyond "manipulation" and it is certainly "false". Clearly we need explicit policies on AI-generated images of people or we wouldn't be having this discussion, but this as it stands clearly also falls under WP:SYNTH: there is zero question that this is a result of "synthesis of published material", even if the AI won't list what it used. Ultimately it's just a synthesis of a bunch of published composite images of who-knows-what (or who-knows-who?) the AI has scraped together to produce a fake image of a person. :bloodofox: (talk) 10:07, 2 January 2025 (UTC)[reply]
    The latter images you describe should be SVG regardless. If there are models that can generate that, that seems totally fine since it can be semantically altered by hand. Any generation with photographic or "painterly" characteristics (e.g. generating something in the style of a painting or any other convention of visual art that communicates aesthetic particulars and not merely abstract visual particulars) seems totally unacceptable. Remsense ‥  07:00, 31 December 2024 (UTC)[reply]
    100 dots: 99 chocolate-colored dots and 1 baseball-shaped dot
    @Bloodofox, here's an image I created. It illustrates the concept of 1% in an article. I made this myself, by typing 100 emojis and taking a screenshot. Do you really mean to say that if I'd done this with an image-generating AI tool, using a prompt like "Give me 100 dots in a 10 by 10 grid. Make 99 a dark color and 1, randomly placed, look like a baseball" that it would be hopelessly tainted, because AI is always bad? Or does your strongly worded statement mean something more moderate?
    I'd worry about photos of people (including dead people). I'd worry about photos of specific or unique objects that have to be accurate or they're worse than worthless (e.g., artwork, landmarks, maps). But I'm not worried about simple graphs and charts like this one, and I'm not worried about ordinary, everyday objects. If you want to use AI to generate a photorealistic image of a cookie, or a spoon, and the output you get genuinely looks like those objects, I'm not actually going to worry about it. WhatamIdoing (talk) 06:57, 31 December 2024 (UTC)[reply]
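    (As an aside on the 1% illustration: that sort of image is fully reproducible without either an emoji screenshot or an image generator. A minimal Python sketch; the emoji choices and the fixed random seed are just illustrative.)

    import random

    # Build a 10 x 10 grid: 99 "chocolate-colored" dots and 1 randomly placed
    # baseball, mirroring the 1% image described above.
    random.seed(1)  # fixed seed so the placement is reproducible
    cells = ["\N{LARGE BROWN CIRCLE}"] * 99 + ["\N{BASEBALL}"]
    random.shuffle(cells)

    for row in range(10):
        print("".join(cells[row * 10:(row + 1) * 10]))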
    As you know, Wikipedia has the unique factor of being entirely volunteer-run. Wikipedia has fewer and fewer editors and, long-term, we're seeing plummeting birth rates in areas where most Wikipedia editors do exist. I wouldn't expect a wave of new ones aimed at keeping the site free of bullshit in the near future.
    In addition, the Wikimedia Foundation's harebrained continued effort to turn the site into its political cash machine is no doubt also not helping, harming the site's public perception and leading to fewer new editors.
    Over the course of decades (I've been here for around 20 years), it seems clear that the site will be negatively impacted by all this, especially in the face of generative AI.
    As a long-time editor who has frequently stumbled upon intense WP:PROFRINGE content, fended off armies of outside actors looking to shape the site into their ideological image (some of whom have sent me more than a few death threats), and who has identified large amounts of politically-motivated nonsense explicitly designed to fool non-experts in areas I know intimately well (such as folklore and historical linguistics topics), I think it need be said that the use of generative AI for content is especially dangerous because of its capabilities of fooling Wikipedia readers and Wikipedia editors alike.
    Wikipedia is written by people for people. We need to draw a line in the sand to keep from being flooded by increasingly accessible hoax-machines.
    A blanket ban on generative AI resolves this issue or at least hands us another tool with which to attempt to fight back. We don't need what few editors we have here wasting what little time they can give the project checking over an ocean of AI-generated slop: we need more material from reliable sources and better tools to fend off bad actors usable by our shrinking editor base (anyone at the Wikimedia Foundation listening?), not more waves of generative AI garbage. :bloodofox: (talk) 07:40, 31 December 2024 (UTC)[reply]
    A blanket ban doesn't actually resolve most of the issues though, and introduces new ones. Bad usages of AI can already be dealt with by existing policy, and malicious users will ignore a blanket ban anyways. Meanwhile, a blanket ban would harm many legitimate usages for AI. For instance, the majority of professional translators (at least Japanese to English) incorporate AI (or similar tools) into their workflow to speed up translations. Just imagine a professional translator who uses AI to help generate rough drafts of foreign language Wikipedia articles, before reviewing and correcting them, and another editor learning of this and mass reverting them for breaking the blanket ban, and ultimately causing them to leave. Many authors (particularly with carpal tunnel) use AI now to control their voice-to-text (you can train the AI on how you want character names spelled, the formatting of dialogue and other text, etc.). A Wikipedia editor could train an AI to convert their voice into Wikipedia-formatted text. AI is subtly incorporated now into spell-checkers, grammar-checkers, photo editors, etc., in ways many people are not aware of. A blanket AI ban has the potential to cause many issues for a lot of people, without actually being that effective at dealing with malicious users. Photos of Japan (talk) 08:26, 31 December 2024 (UTC)[reply]
    I think this is the least convincing one I've seen here yet: It contains the ol' 'there are AI features in programs now' while also attempting to invoke accessibility and a little bit of 'we must have machines to translate!'.
    As a translator myself, I can only say: Oh please. Generative AI is notoriously terrible at translating and that's not likely to change, ever, beyond a very, very basic level. Due to the complexities of communication and little matters like nuance, all machine-translated material must be thoroughly checked and modified by, yes, human translators, who often encounter it spitting out complete bullshit scraped from who-knows-where (often Wikipedia itself).
    I get that this topic attracts a lot of 'but what if generative AI is better than humans?' from the utopian tech crowd but the reality is that anyone who needs a machine to invent text and visuals for whatever reason simply shouldn't be using it on Wikipedia.
    Either you, a human being, can contribute to the project or you can't. Slapping a bunch of machine-generated (generative AI) visuals and text (much of it ultimately coming from Wikipedia in the first place!) onto the site isn't some kind of human substitute; it's just machine-regurgitated slop and is not helping the project.
    If people can't be confident that Wikipedia is made by humans, for humans, then the project is finally on its way out. :bloodofox: (talk) 09:55, 31 December 2024 (UTC)[reply]
    I don't know how up to date you are on the current state of translation, but:
    In a previous State of the industry report for freelance translators, the word on TMs and CAT tools was to take them as "a given." A high percentage of translators use at least one CAT tool, and reports on the increased productivity and efficiency that can accompany their use are solid enough to indicate that, unless the kind of translation work you do by its very nature excludes the use of a CAT tool, you should be using one.
    Over three thousand full-time professional translators from around the world responded to the surveys, which were broken into a survey for CAT tool users and one for those who do not use any CAT tool at all.
    88% of respondents use at least one CAT tool for at least some of their translation tasks.
    Of those using CAT tools, 83% use a CAT tool for most or all of their translation work.
    Mind you, traditionally CAT tools didn't use AI, but many do now, which only adds to potential sources of confusion in a blanket ban of AI. Photos of Japan (talk) 17:26, 31 December 2024 (UTC)[reply]
    You're barking up the wrong tree with the pro-generative AI propaganda in response to me. I think we're all quite aware that generative AI tool integration is now common and that there's also a big effort to replace human translators (and anything that can be "written") with machine-generated text. I'm also keenly aware that generative AI is absolutely horrible at translation and all of it must be thoroughly checked by humans, as you would be if you were a translator yourself. :bloodofox: (talk) 22:20, 31 December 2024 (UTC)[reply]
    "all machine translated material must be thoroughly checked and modified by, yes, human translators"
    You are just agreeing with me here.
    "if you’re just trying to convey factual information in another language that machine translation engines handle well, AI/MT with a human reviewer can be a great option. -American Translation Society
    There are translators (particularly with non-creative works) who are using these tools to shift more towards reviewing. It should be up to them to decide what they think is the most efficient method for them. Photos of Japan (talk) 06:48, 1 January 2025 (UTC)[reply]
    And any translator who wants to use generative AI to attempt to translate can do so off the site. We're not here to check it for them. I strongly support a total ban on any generative AI used on the site exterior to articles on generative AI. :bloodofox: (talk) 11:09, 1 January 2025 (UTC)[reply]
    I wonder what you mean by "on the site". The question here is "Is it okay for an editor to go to a completely different website, generate an image all by themselves, upload it to Commons, and put it in a Wikipedia article?" The question here is not "Shall we put AI-generating buttons on Wikipedia's own website?" WhatamIdoing (talk) 02:27, 2 January 2025 (UTC)[reply]
    I'm talking about users slapping machine-translated and/or machine-generated nonsense all over the site, only for us to have to go behind and not only check it but correct it. It takes users minutes to do this and it's already happening. It's the same for images. There are very few of us who volunteer here and our numbers are growing fewer. We need to be spending our time improving the site rather than opening the gate as wide as possible for a flood of AI-generated/rendered garbage. The site has enough problems that compound every day rather than having to fend off users armed with hoax machines at every corner. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)[reply]
    Sure, we're all opposed to "nonsense", but my question is: What about when the machine happens to generate something that is not "nonsense"?
    I have some worries about AI content. I worry, for example, that they'll corrupt our sources. I worry that List of scholarly publishing stings will get dramatically longer, and also that even more undetected, unconfessed, unretracted papers will get published and believed to be true and trustworthy. I worry that academia will go back to a model in which personal connections are more important, because you really can't trust what's published. I worry that scientific journals will start refusing to publish research unless it comes from someone employed by a trusted institution, that is willing to put its reputation on the line by saying they have directly verified that the work described in the paper was actually performed to their standards, thus scuttling the citizen science movement and excluding people whose institutions are upset with them for other reasons (Oh, you thought you'd take a job elsewhere? Well, we refuse to certify the work you did for the last three years...).
    But I'm not worried about a Wikipedia editor saying "Hey AI, give me a diagram of swingset" or "Make a chart for me out of the data I'm going to give you". In fact, if someone wants to pull the numbers out of Template:Wikipedia editor graph (100 per month), feed it to an AI, and replace the template's contents with an AI-generated image (until they finally fix the Graphs extension), I'd consider that helpful. WhatamIdoing (talk) 07:09, 2 January 2025 (UTC)[reply]
    Translators are not using generative AI for translation; the applicability of LLMs to regular translation is still in its infancy, and regardless it will not involve adding any generative faculties to its output, since that is the exact opposite of what translation is supposed to do. JoelleJay (talk) 02:57, 2 January 2025 (UTC)[reply]
    "Translators are not using generative AI for translation": this entirely depends on what you mean by "generative". There are at least three contradictory understandings of the term in this one thread alone. Thryduulf (talk) 03:06, 2 January 2025 (UTC)[reply]
    Please, you can just go through the entire process with a simple prompt command now. The results are typically shit but you can generate a ton of it quickly, which is perfect for flooding a site like this one — especially without a strong policy against it. I've found myself cleaning up tons of AI-generated crap (and, yes, rendered) stuff here and elsewhere, and now I'm even seeing AI-generated responses to my own comments. It's beyond ridiculous. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)[reply]
  • Ban AI-generated images from all articles, AI anything from BLP and medical articles is the position that seems like it would permit all instances where there are plausible defenses that AI use does not fabricate or destroy facts intended to be communicated in the context of the article. That scrutiny is stricter with BLP and medical articles in general, and the restriction should be stricter to match. Remsense ‥  06:53, 31 December 2024 (UTC)[reply]
    @Remsense, please see my comment immediately above. (We had an edit conflict.) Do you really mean "anything" and everything? Even a simple chart? WhatamIdoing (talk) 07:00, 31 December 2024 (UTC)[reply]
    I think my previous comment is operative: almost anything we can see AI used programmatically to generate should be SVG, not raster—even if it means we are embedding raster images in SVG to generate examples like the above. I do not know if there are models that can generate SVG, but if there are I happily state I have no problem with that. I think I'm at risk of seeming downright paranoid—but understanding how errors can propagate and go unnoticed in practice, if we're to trust a black box, we need to at least be able to check what the black box has done on a direct structural level. Remsense ‥  07:02, 31 December 2024 (UTC)[reply]
    A quick web search indicates that there are generative AI programs that create SVG files. WhatamIdoing (talk) 07:16, 31 December 2024 (UTC)[reply]
    Makes perfect sense that there would be. Again, maybe I come off like a paranoid lunatic, but I really need either the ability to check what the thing is doing, or the ability to check and correct exactly what a black box has done. (In my estimation, if you want to know what procedures a person has done, theoretically you can ask them to get a fairly satisfactory result, and the pre-AI algorithms used in image manipulation are canonical and more or less transparent. Acknowledging human error etc., with AI there is not even the theoretical promise that one can be given a truthful account of how it decided to do what it did.) Remsense ‥  07:18, 31 December 2024 (UTC)[reply]
    Like everyone said, there should be a de facto ban on using AI images in Wikipedia articles. They are effectively fake images pretending to be real, so they are out of step with the values of Wikipedia.--♦IanMacM♦ (talk to me) 08:20, 31 December 2024 (UTC)[reply]
    Except, not everybody has said that, because the majority of those of us who have refrained from hyperbole have pointed out that not all AI images are "fake images pretending to be real" (and those few that are can already be removed under existing policy). You might like to try actually reading the discussion before commenting further. Thryduulf (talk) 10:24, 31 December 2024 (UTC)[reply]
    @Remsense, exactly how much "ability to check what the thing is doing" do you need to be able to do, when the image shows 99 dots and 1 baseball, to illustrate the concept of 1%? If the image above said {{pd-algorithm}} instead of {{cc-by-sa-4.0}}, would you remove it from the article, because you just can't be sure that it shows 1%? WhatamIdoing (talk) 02:33, 2 January 2025 (UTC)[reply]
    The above is a useful example to an extent, but it is a toy example. I really do think it is required in general when we aren't dealing with media we ourselves are generating. Remsense ‥  04:43, 2 January 2025 (UTC)[reply]
    How do we differentiate in policy between a "toy example" (that really would be used in an article) and "real" examples? Is it just that if I upload it, then you know me, and assume I've been responsible? WhatamIdoing (talk) 07:13, 2 January 2025 (UTC)[reply]
    There definitely exist generative AI for SVG files. Here's an example: I used generative AI in Adobe Illustrator to generate the SVG gear in File:Pinwheel scheduling.svg (from Pinwheel scheduling) before drawing by hand the more informative parts of the image. The gear drawing is not great (a real gear would have uniform tooth shape) but maybe the shading is better than I would have done by hand, giving an appearance of dimensionality and surface material while remaining deliberately stylized. Is that the sort of thing everyone here is trying to forbid?
    I can definitely see a case for forbidding AI-generated photorealistic images, especially of BLPs, but that's different from human oversight of AI in the generation of schematic images such as this one. —David Eppstein (talk) 01:15, 1 January 2025 (UTC)[reply]
    I'd include BDPs, too. I had to get a few AI-generated images of allegedly Haitian presidents deleted a while ago. The "paintings" were 100% fake, right down to the deformed medals on their military uniforms. An AI-generated "generic person" would be okay for some purposes. For a few purposes (e.g., illustrations of Obesity) it could even be preferable to have a fake "person" than a real one. But for individual/named people, it would be best not to have anything unless it definitely looks like the named person. WhatamIdoing (talk) 07:35, 2 January 2025 (UTC)[reply]
  • I put it to you that our decision on this requires nuance. It's obviously insane to allow AI-generated images of, for example, Donald Trump, and it's obviously insane to ban AI-generated images from, for example, artificial intelligence art or Théâtre D'opéra Spatial.—S Marshall T/C 11:21, 31 December 2024 (UTC)[reply]
    Of course, that's why I'm only looking at specific cases and refrain from proposing a blanket ban on generative AI. Regarding Donald Trump, we do have one AI-generated image of him that is reasonable to allow (in Springfield pet-eating hoax), as the image itself was the subject of relevant commentary. Of course, this is different from using an AI-generated image to illustrate Donald Trump himself, which is what my proposal would recommend against. Chaotic Enby (talk · contribs) 11:32, 31 December 2024 (UTC)[reply]
    That's certainly true, but others are adopting much more extreme positions than you are, and it was the more extreme views that I wished to challenge.—S Marshall T/C 11:34, 31 December 2024 (UTC)[reply]
    Thanks for the (very reasoned) addition, I just wanted to make my original proposal clear. Chaotic Enby (talk · contribs) 11:43, 31 December 2024 (UTC)[reply]
  • Going off WAID's example above, perhaps we should be trying to restrict the use of AI where image accuracy/precision is essential, as it would be for BLP and medical info, among other cases, but in cases where we are talking about generic or abstract concepts, like the 1% image, its use is reasonable. I would still say we should strongly prefer an image made by a human with high control of the output, but when accuracy is not as important as just the visualization, it's reasonable to turn to AI to help. Masem (t) 15:12, 31 December 2024 (UTC)[reply]
  • Support total ban of AI imagery - There are probable copyright problems and veracity problems with anything coming out of a machine. In a world of manipulated reality, Wikipedia will be increasingly respected for holding a hard line against synthetic imagery. Carrite (talk) 15:39, 31 December 2024 (UTC)[reply]
    For both issues, AI vs not AI is irrelevant. For copyright, if the image is a copyvio we can't use it regardless of whether it is AI or not AI; if it's not a copyvio then that's not a reason to use or not use the image. If the image is not verifiably accurate then we already can (and should) exclude it, regardless of whether it is AI or not AI. For more detail see the extensive discussion above you've either not read or ignored. Thryduulf (talk) 16:34, 31 December 2024 (UTC)[reply]
  • Yes, we absolutely should ban the use of AI-generated images in these subjects (and beyond, but that's outside the scope of this discussion). AI should not be used to make up a simulation of a living person. It does not actually depict the person and may introduce errors or flaws that don't actually exist. The picture does not depict the real person because it is quite simply fake.
    Even worse would be using AI to develop medical images in articles in any way. The possibility for error there is unacceptable. Yes, humans make errors too, but there, there is a) someone with the responsibility to fix it and b) someone conscious who actually made the picture, rather than a black box that spat it out after looking at similar training data. Cremastra 🎄 uc 🎄 20:08, 31 December 2024 (UTC)[reply]
    It's incredibly disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion but are rather responding with what appears to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Wikipedia is not going to be taken over by AI, AI is not out to subvert Wikipedia, and we already can (and do) remove (and more commonly not add in the first place) false and misleading information/images. Thryduulf (talk) 20:31, 31 December 2024 (UTC)[reply]
    So what benefit does allowing AI images bring? We shouldn't be forced to decide these on a case-by-case basis.
    I'm sorry to dishearten you, but I still respectfully disagree with you. And I don't think this is "scaremongering" (although I admit that if it was, I would of course claim it wasn't). Cremastra 🎄 uc 🎄 21:02, 31 December 2024 (UTC) Cremastra 🎄 uc 🎄 20:56, 31 December 2024 (UTC)[reply]
    Determining what benefits any image brings to Wikipedia can only be done on a case-by-case basis. It is literally impossible to know whether any image improves the encyclopaedia without knowing the context of which portion of what article it would illustrate, and what alternative images are and are not available for that same spot.
    The benefit of allowing AI images is that when an AI image is the best option for a given article we use it. We gain absolutely nothing by prohibiting using the best image available, indeed doing so would actively harm the project without bringing any benefits. AI images that are misleading, inaccurate or any of the other negative things any image can be are never the best option and so are never used - we don't need any policies or guidelines to tell us that. Thryduulf (talk) 21:43, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated text or images in articles, except in contexts where the AI-generated content is itself the subject of discussion (in a specific or general sense). Generative AI is fundamentally at odds with Wikipedia's mission of providing reliable information, because of its propensity to distort reality or make up information out of whole cloth. It has no place in our encyclopedia. pythoncoder (talk | contribs) 21:34, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated images except in ABOUTSELF contexts. This is especially a problem given the preeminence Google gives to Wikipedia images in its image search. JoelleJay (talk) 22:49, 31 December 2024 (UTC)[reply]
  • Ban across the board, except in articles which are actually about AI-generated imagery or the tools used to create them, or the image itself is the subject of substantial commentary within the article for some reason. Even in those cases, clearly indicating that the image is AI-generated should be required. Seraphimblade Talk to me 00:29, 1 January 2025 (UTC)[reply]
  • Oppose blanket bans that would forbid the use of AI assistance in creating diagrams or other deliberately stylized content. Also oppose blanket bans that would forbid AI illustrations in articles about AI illustrations. I am not opposed to banning photorealistic AI-generated images in non-AI-generation contexts or banning AI-generated images from BLPs unless the image itself is specifically relevant to the subject of the BLP. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)[reply]
[Image, right: Lachlan Macquarie?]
  • Oppose blanket bans AI is just a new buzzword; for example, Apple phones now include "Apple Intelligence" as a standard feature. Does this mean that photographs taken using Apple phones will be inadmissible? That would be silly because legacy technologies are already rife with issues of accuracy and verification. For example, there's an image on the main page right now (right). This purports to be a particular person ("The Father of Australia") but, if you check the image description, you find that it may have been his brother and even the attribution to the artist is uncertain. AI features may help in exposing such existing weaknesses in our image use and so we should be free to use them in an intelligent way. Andrew🐉(talk) 08:03, 1 January 2025 (UTC)[reply]
    So, you expect the AI, notoriously trained on Wikipedia (and whatever else is floating around on the internet), to correct Wikipedia where humans have failed... using the data it scraped from Wikipedia (and who knows where else)? :bloodofox: (talk) 11:12, 1 January 2025 (UTC)[reply]
    I tried using the Deep Research option of Gemini to assess the attribution of the Macquarie portrait. Its stated methodology seemed quite respectable and sensible.
The Opie Portrait of Lachlan Macquarie: An Examination of its Attribution: Methodology

To thoroughly investigate the attribution of the Opie portrait of Lachlan Macquarie, a comprehensive research process was undertaken. This involved several key steps:

  1. Gathering information on the Opie portrait: This included details about its history, provenance, and any available information on its cost.
  2. Reviewing scholarly articles and publications: This step focused on finding academic discussions specifically addressing the attribution of the portrait to John Opie.
  3. Collecting expert opinions: Statements and opinions from art experts and historians were gathered to understand the range of perspectives on the certainty of the attribution.
  4. Examining historical documents and records: This involved searching for any records that could shed light on the portrait's origins and authenticity, such as Macquarie's personal journals or contemporary accounts.
  5. Exploring scientific and technical analyses: Information was sought on any scientific or technical analyses conducted on the portrait, such as pigment analysis or canvas dating, to determine its authenticity.
  6. Comparing the portrait to other Opie works: This step involved analyzing the style and technique of the Opie portrait in comparison to other known portraits by Opie to identify similarities and differences.
  • It was quite transparent in listing and citing the sources that it used for its analysis. These included the Wikipedia image but if one didn't want that included, it would be easy to exclude it.
    So, AIs don't have to be inscrutable black boxes. They can have programmatic parameters like the existing bots and scripts that we use routinely on Wikipedia. Such power tools seem needed to deal with the large image backlogs that we have on Commons. Perhaps they could help by providing captions and categories where these don't exist.
    Andrew🐉(talk) 09:09, 2 January 2025 (UTC)[reply]
    They don't have to be black boxes but they are by design: they exist in a legally dubious area and thus hide what they're scraping to avoid further legal problems. That's no secret. We know for example that Wikipedia is a core data set for likely most AIs today. They also notoriously and quite confidently spit out a lie ("hallucinate") and frequently spit out total nonsense. Add to that that they're restricted to whatever is floating around on the internet or whatever other data set they've been fed (usually just more internet), and many specialist topics, like texts on ancient history and even standard reference works, are not accessible on the internet (despite Google's efforts). :bloodofox: (talk) 09:39, 2 January 2025 (UTC)[reply]
    While its stated methodology seems sensible, there's no evidence that it actually followed that methodology. The bullet points are pretty vague, and are pretty much the default methodologies used to examine actual historical works. Chaotic Enby (talk · contribs) 17:40, 2 January 2025 (UTC)[reply]
    Yes, there's evidence. As I stated above, the analysis is transparent and cites the sources that it used. And these all seem to check out rather than being invented. So, this level of AI goes beyond the first generation of LLM and addresses some of their weaknesses. I suppose that image generation is likewise being developed and improved and so we shouldn't rush to judgement while the technology is undergoing rapid development. Andrew🐉(talk) 17:28, 4 January 2025 (UTC)[reply]
  • Oppose blanket ban: best of luck to editors here who hope to be able to ban an entirely undefined and largely undetectable procedure. The term 'AI' as commonly used is no more than a buzzword - what exactly would be banned? And how does it improve the encyclopedia to encourage editors to object to images not simply because they are inaccurate, or inappropriate for the article, but because they subjectively look too good? Will the image creator be quizzed on Commons about the tools they used? Will creators who are transparent about what they have created have their images deleted while those who keep silent don’t? Honestly, this whole discussion is going to seem hopelessly outdated within a year at most. It’s like when early calculators were banned in exams because they were ‘cheating’, forcing students to use slide rules. MichaelMaggs (talk) 12:52, 1 January 2025 (UTC)[reply]
    I am genuinely confused as to why this has turned into a discussion about a blanket ban, even though the original proposal exclusively focused on AI-generated images (the kind that is generated by an AI model from a prompt, which are already tagged on Commons, not regular images with AI enhancement or tools being used) and only in specific contexts. Not sure where the "subjectively look too good" thing even comes from, honestly. Chaotic Enby (talk · contribs) 12:58, 1 January 2025 (UTC)[reply]
    That just shows how ill-defined the whole area is. It seems you restrict the term 'AI-generated' to mean "images generated solely(?) from a text prompt". The question posed above has no such restriction. What a buzzword means is largely in the mind of the reader, of course, but to me and I think to many, 'AI-generated' means generated by AI. MichaelMaggs (talk) 13:15, 1 January 2025 (UTC)[reply]
    I used the text prompt example because that is the most common way to have an AI model generate an image, but I recognize that I should've clarified it better. There is definitely a distinction between an image being generated by AI (like the Laurence Boccolini example below) and an image being altered or retouched by AI (which includes many features integrated in smartphones today). I don't think it's a "buzzword" to say that there is a meaningful difference between an image being made up by an AI model and a preexisting image being altered in some way, and I am surprised that many people understand "AI-generated" as including the latter. Chaotic Enby (talk · contribs) 15:24, 1 January 2025 (UTC)[reply]
  • Oppose as unenforceable. I just want you to imagine enforcing this policy against people who have not violated it. All this will do is allow Wikipedians who primarily contribute via text to accuse artists of using AI because they don't like the results to get their contributions taken down. I understand the impulse to oppose AI on principle, but the labor and aesthetic issues don't actually have anything to do with Wikipedia. If there is not actually a problem with the content conveyed by the image—for example, if the illustrator intentionally corrected any hallucinations—then someone objecting over AI is not discussing page content. If the image was not even made with AI, they are hallucinating based on prejudices that are irrelevant to the image. The bottom line is that images should be judged on their content, not how they were made. Besides all the policy-driven stuff, if Wikipedia's response to the creation of AI imaging tools is to crack down on all artistic contributions to Wikipedia (which seems to be the inevitable direction of these discussions), what does that say? Categorical bans of this kind are ill-advised and anti-illustrator. lethargilistic (talk) 15:41, 1 January 2025 (UTC)[reply]
    And the same applies to photography, of course. If in my photo of a garden I notice there is a distracting piece of paper on the lawn, nobody would worry if I used the old-style clone-stamp tool to remove it in Photoshop, adding new grass in its place (I'm assuming here that I don't change details of the actual landscape in any way). Now, though, Photoshop uses AI to achieve essentially the same result while making it simpler for the user. A large proportion of all processed photos will have at least some similar but essentially undetectable "generated AI" content, even if only a small area of grass. There is simply no way to enforce the proposed policy, short of banning all high-quality photography – which requires post-processing by design, and in which similar encyclopedically non-problematic edits are commonplace. MichaelMaggs (talk) 17:39, 1 January 2025 (UTC)[reply]
    Before anyone objects that my example is not "an image generated from a text prompt", note that there's no mention of such a restriction in the proposal we are discussing. Even if there were, it makes no difference. Photoshop can already generate photo-realistic areas from a text prompt. If such use is non-misleading and essentially undetectable, it's fine; if it changes the image in such a way as to make it misleading, inaccurate or non-encyclopedic in any way it can be challenged on that basis. MichaelMaggs (talk) 17:58, 1 January 2025 (UTC)[reply]
    As I said previously, the text prompt is just an example, not a restriction of the proposal. The point is that you talk about editing an existing image (which is what you talk about, as you say if it changes the image), while I am talking about creating an image ex nihilo, which is what "generating" means. Chaotic Enby (talk · contribs) 18:05, 1 January 2025 (UTC)[reply]
    I'm talking about a photograph with AI-generated areas within it. This is commonplace, and is targeted by the proposal. Categorical bans of the type suggested are indeed ill-advised. MichaelMaggs (talk) 18:16, 1 January 2025 (UTC)[reply]
    Even if the ban is unenforceable, there are many editors who will choose to use AI images if they are allowed and just as cheerfully skip them if they are not allowed. That would mean the only people posting AI images are those who choose to break the rule and/or don't know about it. That would probably add up to many AI images not used. Darkfrog24 (talk) 22:51, 3 January 2025 (UTC)[reply]
  • Support blanket ban because "AI" is a fundamentally unethical technology based on the exploitation of labor, the wanton destruction of the planetary environment, and the subversion of every value that an encyclopedia should stand for. ABOUTSELF-type exceptions for "AI" output that has already been generated might be permissible, in order to document the cursed time in which we live, but those exceptions are going to be rare. How many examples of Shrimp Jesus slop do we need? XOR'easter (talk) 23:30, 1 January 2025 (UTC)[reply]
  • Support blanket ban - Primarily because of the "poisoning the well"/"dead internet" issues created by it. FOARP (talk) 14:30, 2 January 2025 (UTC)[reply]
  • Support a blanket ban to assure some control over AI-creep in Wikipedia. And per discussion. Randy Kryn (talk) 10:50, 3 January 2025 (UTC)[reply]
  • Support that WP:POLICY applies to images: images should be verifiable, neutral, and absent of original research. AI is just the latest quickest way to produce images that are original, unverifiable, and potentially biased. Is anyone in their right mind saying that we allow people to game our rules on WP:OR and WP:V by using images instead of text? Shooterwalker (talk) 17:04, 3 January 2025 (UTC)[reply]
    As an aside on this: in some cases Commons is being treated as a way of side-stepping WP:NOR and other restrictions. Stuff that would get deleted if it were written content on WP gets into WP as images posted on Commons. The worst examples are those conflict maps that are created from a bunch of Twitter posts (eg the Syrian civil war one). AI-generated imagery is another field where that appears to be happening. FOARP (talk) 10:43, 4 January 2025 (UTC)[reply]
  • Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. I support an exception for when the article is about the image itself and that image is notable, such as the photograph of the black-and-blue/gold-and-white dress in The Dress and/or examples of AI images in articles in which they are relevant. E.g. "here is what a hallucination is: count the fingers." Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]
  • First, I think any guidance should avoid referring to specific technology, as that changes rapidly and is used for many different purposes. Second, assuming that the image in question has a suitable copyright status for use on Wikipedia, the key question is whether or not the reliability of the image has been established. If the intent of the image is to display 100 dots with 99 having the same appearance and 1 with a different appearance, then ordinary math skills are sufficient and so any Wikipedia editor can evaluate the reliability without performing original research. If the intent is to depict a likeness of a specific person, then there needs to be reliable sources indicating that the image is sufficiently accurate. This is the same for actual photographs, re-touched ones, drawings, hedcuts, and so forth. Typically this can be established by a reliable source using that image with a corresponding description or context. isaacl (talk) 17:59, 4 January 2025 (UTC)[reply]
  • Support Blanket Ban on AI generated imagery per most of the discussion above. It's a very slippery slope. I might consider a very narrow exception for an AI generated image of a person that was specifically authorized or commissioned by the subject. -Ad Orientem (talk) 02:45, 5 January 2025 (UTC)[reply]
  • Oppose blanket ban It is far too early to take an absolutist position, particularly when the potential is enormous. Wikipedia is already an image desert, and to reject something that is only at the cusp of development is unwise. scope_creepTalk 20:11, 5 January 2025 (UTC)[reply]
  • Support blanket ban on AI-generated images except in ABOUTSELF contexts. An encyclopedia should not be using fake images. I do not believe that further nuance is necessary. LEPRICAVARK (talk) 22:44, 5 January 2025 (UTC)[reply]
  • Support blanket ban as the general guideline, as accuracy, personal rights, and intellectual rights issues are very weighty, here (as is disclosure to the reader). (I could see perhaps supporting adoption of a sub-guideline for ways to come to a broad consensus in individual use cases (carve-outs, except for BLPs) which address all the weighty issues on an individual use basis -- but that needs to be drafted and agreed to, and there is no good reason to wait to adopt the general ban in the meantime). Alanscottwalker (talk) 15:32, 8 January 2025 (UTC)[reply]
[Image: Which parts of this photo are real?]
  • Support indefinite blanket ban except ABOUTSELF and simple abstract examples (such as the image of 99 dots above). In addition to all the issues raised above, including copyvio and creator consent issues, in cases of photorealistic images it may never be obvious to all readers exactly which elements of the image are guesswork. The cormorant picture at the head of the section reminded me of the first video of a horse in gallop, in 1878. Had AI been trained on paintings of horses instead of actual videos and used to "improve" said videos, we would've ended up with serious delusions about the horse's gait. We don't know what questions -- scientific or otherwise -- photography will be used to settle in the coming years, but we do know that consumer-grade photo AI has already been trained to intentionally fake detail to draw sales, such as on photos of the Moon[6][7]. I think it's unrealistic to require contributors to take photos with expensive cameras or specially-made apps, but Wikipedia should act to limit its exposure to this kind of technology as far as is feasible. Daß Wölf 20:57, 9 January 2025 (UTC)[reply]
  • Support at least some sort of recommendation against the use of AI-generated imagery in non-AI contexts, except obviously where the topic of the article is specifically related to AI-generated imagery (Generative artificial intelligence, Springfield pet-eating hoax, AI slop, etc.). At the very least the consensus below about BLPs should be extended to all historical biographies, as all the examples I've seen (see WP:AIIMAGE) fail WP:IMAGERELEVANCE (failing to add anything to the sourced text) and serve only to mislead the reader. We include images for a reason, not just for decoration. I'm also reminded of the essay WP:PORTRAIT, and the distinction it makes between notable depictions of historical people (which can be useful to illustrate articles) and non-notable fictional portraits which in its (imo well argued) view have no legitimate encyclopedic function whatsoever. Cakelot1 ☞️ talk 14:36, 14 January 2025 (UTC)[reply]
    Anything that fails WP:IMAGERELEVANCE can be, should be, and is, excluded from use already, likewise any images which have no legitimate encyclopedic function whatsoever. This applies to AI and non-AI images equally and identically. Just as we don't have or need a policy or guideline specifically saying don't use irrelevant or otherwise non-encyclopaedic watercolour images in articles, we don't need any policy or guideline specifically calling out AI - because it would (as you demonstrate) need to carve out exceptions for when its use is relevant. Thryduulf (talk) 14:45, 14 January 2025 (UTC)[reply]
    That would be an easy change; just add a sentence like "AI-generated images of individual people are primarily decorative and should not be used". We should probably do that no matter what else is decided. WhatamIdoing (talk) 23:24, 14 January 2025 (UTC)[reply]
    Except that is both not true and irrelevant. Some AI-generated images of individual people are primarily decorative, but not all of them. If an image is purely decorative it shouldn't be used, regardless of whether it is AI-generated or not. Thryduulf (talk) 13:43, 15 January 2025 (UTC)[reply]
    Can you give an example of an AI-generated image of an individual person that is (a) not primarily decorative and also (b) not copied from the person's social media/own publications, and that (c) at least some editors think would be a good idea?
    "Hey, AI, please give me a realistic-looking photo of this person who died in the 12th century" is not it. "Hey, AI, we have no freely licensed photos of this celebrity, so please give me a line-art caricature" is not it. What is? WhatamIdoing (talk) 17:50, 15 January 2025 (UTC)[reply]
    Criteria (b) and (c) were not part of the statement I was responding to, and make it a very significantly different assertion. I will assume that you are not making motte-and-bailey arguments in bad faith, but the frequent fallacious argumentation in these AI discussions is getting tiresome.
    Even with the additional criteria it is still irrelevant - if no editor thinks an image is a good idea, then it won't be used in an article regardless of why they don't think it's a good idea. If some editors think an individual image is a good idea then it's obviously potentially encyclopaedic and needs to be judged on its merits (whether it is AI-generated is completely irrelevant to its encyclopaedic value). An image that the subject uses on their social media/own publications to identify themselves (for example as an avatar) is the perfect example of the type of image which is frequently used in articles about that individual. Thryduulf (talk) 18:56, 15 January 2025 (UTC)[reply]
  • This was archived despite significant participation on the topic of whether AI-generated images should be used at all on Wikipedia. I believe a consensus has been/can be achieved here and should be closed, so I have unarchived it. JoelleJay (talk) 17:37, 2 February 2025 (UTC)[reply]

BLPs

[edit]

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Are AI-generated images (generated via text prompts, see also: text-to-image model) okay to use to depict BLP subjects? The Laurence Boccolini example was mentioned in the opening paragraph. The image was created using Grok / Aurora, a text-to-image model developed by xAI, to generate images... As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts.
[Image: AI-generated image of Laurence Boccolini]
Some1 (talk) 12:34, 31 December 2024 (UTC)[reply]
[Image: AI-generated cartoon portrait of Germán Larrea Mota-Velasco]

03:58, January 3, 2025: Note that these images can be either photorealistic in style (such as the Laurence Boccolini example) or non-photorealistic in style (see the Germán Larrea Mota-Velasco example, which was generated using DALL-E, another text-to-image model).

Some1 (talk) 11:10, 3 January 2025 (UTC)[reply]

notified: Wikipedia talk:Biographies of living persons, Wikipedia talk:No original research, Wikipedia talk:Manual of Style/Images, Template:Centralized discussion -- Some1 (talk) 11:27, 2 January 2025 (UTC)[reply]

  • No. I don't think they are at all, as, despite looking photorealistic, they are essentially just speculation about what the person might look like. A photorealistic image conveys the look of something up to the details, and giving a false impression of what the person looks like (or, at best, just guesswork) is actively counterproductive. (Edit 21:59, 31 December 2024 (UTC): clarified bolded !vote since everyone else did it) Chaotic Enby (talk · contribs) 12:46, 31 December 2024 (UTC)[reply]
    That AI generated image looks like Dick Cheney wearing a Laurence Boccolini suit. ScottishFinnishRadish (talk) 12:50, 31 December 2024 (UTC)[reply]
    There are plenty of non-free images of Laurence Boccolini with which this image can be compared. Assuming at least most of those are accurate representations of them (I've never heard of them before and have no other frame of reference) the image above is similar to but not an accurate representation of them (most obviously but probably least significantly, in none of the available images are they wearing that design of glasses). This means the image should not be used to identify them unless they use it to identify themselves. It should not be used elsewhere in the article unless it has been the subject of notable commentary. That it is an AI image makes absolutely no difference to any of this. Thryduulf (talk) 16:45, 31 December 2024 (UTC)[reply]
  • No. Well, that was easy.
    They are fake images; they do not actually depict the person. They depict an AI-generated simulation of a person that may be inaccurate. Cremastra 🎄 uc 🎄 20:00, 31 December 2024 (UTC)[reply]
    Even if the subject uses the image to identify themselves, the image is still fake. Cremastra (uc) 19:17, 2 January 2025 (UTC)[reply]
  • No, with the caveat that it's mostly on the grounds that we don't have enough information and when it comes to BLP we are required to exercise caution. If at some point in the future AI-generated photorealistic simulacrums of living people become mainstream with major newspapers and academic publishers it would be fair to revisit any restrictions, but on this I strongly believe that we should follow, not lead. Horse Eye's Back (talk) 20:37, 31 December 2024 (UTC)[reply]
  • No. The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. pythoncoder (talk | contribs) 21:30, 31 December 2024 (UTC)[reply]
  • No except perhaps, maybe, if the subject explicitly is already using that image to represent themselves. But mostly no. -Kj cheetham (talk) 21:32, 31 December 2024 (UTC)[reply]
  • Yes, when that image is an accurate representation and better than any available alternative, used by the subject to represent themselves, or the subject of notable commentary. However, as these are the exact requirements to use any image to represent a BLP subject this is already policy. Thryduulf (talk) 21:46, 31 December 2024 (UTC)[reply]
    How well can we determine how accurate a representation it is? Looking at the example above, I'd argue that the real Laurence Boccolini has a somewhat rounder/pointier chin, a wider mouth, and possibly different eye wrinkles, although the latter probably depends quite a lot on the facial expression.
    How accurate a representation a photorealistic AI image is is ultimately a matter of editor opinion. Cremastra 🎄 uc 🎄 21:54, 31 December 2024 (UTC)[reply]
    "How well can we determine how accurate a representation it is?" In exactly the same way that we can determine whether a human-crafted image is an accurate representation. How accurate a representation any image is is ultimately a matter of editor opinion. Whether an image is AI or not is irrelevant. I agree the example image above is not sufficiently accurate, but we wouldn't ban photoshopped images because one example was not deemed accurate enough, because we are rational people who understand that one example is not representative of an entire class of images - at least when the subject is something other than AI. Thryduulf (talk) 23:54, 31 December 2024 (UTC)[reply]
    I think except in a few exceptional circumstances of actual complex restorations, human photoshopping is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing and merely enhancing an image, by a person who is aware, while they are making changes, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra 🎄 uc 🎄 00:14, 1 January 2025 (UTC)[reply]
    I'm guessing your filter bubble doesn't include Facetune and their notorious Filter (social media)#Beauty filter problems. WhatamIdoing (talk) 02:46, 2 January 2025 (UTC)[reply]
    A photo of a person can be connected to a specific time, place, and subject that existed. It can be compared to other images sharing one or more of those properties. A photo that was PhotoShopped is still either a generally faithful reproduction of a scene that existed, or has significant alterations that can still be attributed to a human or at least to a specific algorithm, e.g. filters. The artistic license of a painting can still be attributed to a human and doesn't run much risk of being misidentified as real, unless it's by Chuck Close et al. An AI-generated image cannot be connected to a particular scene that ever existed and cannot be attributable to a human's artistic license (and there is legal precedent that such images are not copyrightable to the prompter specifically because of this). Individual errors in a human-generated artwork are far more predictable, understandable, identifiable, traceable... than those in AI-generated images. We have innate assumptions when we encounter real images or artwork that are just not transferable. These are meaningful differences to the vast majority of people: according to a Getty poll, 87% of respondents want AI-generated art to at least be transparent, and 98% consider authentic images "pivotal in establishing trust".
    And even if you disagree with all that, can you not see the larger problem of AI images on Wikipedia getting propagated into generative AI corpora? JoelleJay (talk) 04:20, 2 January 2025 (UTC)[reply]
    I agree that our old assumptions don't hold true. I think the world will need new assumptions. We will probably have those in place in another decade or so.
    I think we're Wikipedia:Here to build an encyclopedia, not here to protect AI engines from ingesting AI-generated artwork. Figuring out what they should ingest is their problem, not mine. WhatamIdoing (talk) 07:40, 2 January 2025 (UTC)[reply]
  • Absolutely no fake/AI images of people, photorealistic or otherwise. How is this even a question? These images are fake. Readers need to be able to trust Wikipedia, not navigate around whatever junk someone has created with a prompt and presented as somehow representative. This includes text. :bloodofox: (talk) 22:24, 31 December 2024 (UTC)[reply]
  • No except for edge cases (mostly, if the image itself is notable enough to go into the article). Gnomingstuff (talk) 22:31, 31 December 2024 (UTC)[reply]
  • Absolutely not, except for ABOUTSELF. "They're fine if they're accurate enough" is an obscenely naive stance. JoelleJay (talk) 23:06, 31 December 2024 (UTC)[reply]
  • No with no exceptions. Carrite (talk) 23:54, 31 December 2024 (UTC)[reply]
  • No. We don't permit falsifications in BLPs. Seraphimblade Talk to me 00:30, 1 January 2025 (UTC)[reply]
    For the requested clarification by Some1, no AI-generated images (except when the image itself is specifically discussed in the article, and even then it should not be the lead image and it should be clearly indicated that the image is AI-generated), no drawings, no nothing of that sort. Actual photographs of the subject, nothing else. Articles are not required to have images at all; no image whatsoever is preferable to something which is not an image of the person. Seraphimblade Talk to me 05:42, 3 January 2025 (UTC)[reply]
  • No, but with exceptions. I could imagine a case where a specific AI-generated image has some direct relevance to the notability of the subject of a BLP. In such cases, it should be included, if it could be properly licensed. But I do oppose AI-generated images as portraits of BLP subjects. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)[reply]
    Since I was pinged on this point: when I wrote "I do oppose AI-generated images as portraits", I meant exactly that, including all AI-generated images, such as those in a sketchy or artistic style, not just the photorealistic ones. I am not opposed to certain uses of AI-generated images in BLPs when they are not the main portrait of the subject, for instance in diagrams (not depicting the subject) to illustrate some concept pioneered by the subject, or in case someone becomes famous for being the subject of an AI-generated image. —David Eppstein (talk) 05:41, 3 January 2025 (UTC)[reply]
  • No, and no exceptions or do-overs. Better to have no images (or Stone-Age style cave paintings) than Frankenstein images, no matter how accurate or artistic. Akin to shopped manipulated photographs, they should have no room (or room service) at the WikiInn. Randy Kryn (talk) 01:34, 1 January 2025 (UTC)[reply]
    Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. Photograph manipulation) or excluding those that are not misleading or inaccurate. AI images are no different. Thryduulf (talk) 02:57, 1 January 2025 (UTC)[reply]
    Assuming we know. Assuming it's material. The infobox image in – and the only extant photo of – Blind Lemon Jefferson was "photoshopped" by a marketing team, maybe half a century before Adobe Photoshop was created. They wanted to show him wearing a necktie. I don't think that this level of manipulation is actually a problem. WhatamIdoing (talk) 07:44, 2 January 2025 (UTC)[reply]
  • Yes, so long as it is an accurate representation. Hawkeye7 (discuss) 03:40, 1 January 2025 (UTC)[reply]
  • No not for BLPs. Traumnovelle (talk) 04:15, 1 January 2025 (UTC)[reply]
  • No Not at all relevant for pictures of people, as the accuracy is not enough and can misrepresent. Also (and I'm shocked as it seems no one has mentioned this), what about copyright issues? Who holds the copyright for an AI-generated image? The user who wrote the prompt? The creator(s) of the AI model? The creator(s) of the images in the database that the AI used to create the images? It sounds to me like such a clusterfuck of copyright issues that I don't understand how this is even a discussion. --SuperJew (talk) 07:10, 1 January 2025 (UTC)[reply]
    Under the US law / copyright office, machine-generated images including those by AI cannot be copyrighted. That also means that AI images aren't treated as derivative works.
    What is still under legal concern is whether the use of bodies of copyrighted works, without any approval or license from the copyright holders, to train AI models is fair use or not. There are multiple court cases where this is the primary challenge, and none have reached a decision yet. Assuming the courts rule that there was no fair use, that would either require the entity that owns the AI to pay fines and ongoing licensing costs, or delete their trained model to start afresh with freely licensed works, but in either case, that would not impact how we'd use any resulting AI image from a copyright standpoint. — Masem (t) 14:29, 1 January 2025 (UTC)[reply]
  • No, I'm in agreeance with Seraphimblade here. Whether we like it or not, the usage of a portrait on an article implies that it's just that, a portrait. It's incredibly disingenuous to users to represent an AI generated photo as truth. Doawk7 (talk) 09:32, 1 January 2025 (UTC)[reply]
    So you just said a portrait can be used because wikipedia tells you it's a portrait, and thus not a real photo. Can't AI be exactly the same? As long as we tell readers it is an AI representation? Heck, most AI looks closer to the real thing than any portrait. Fyunck(click) (talk) 10:07, 2 January 2025 (UTC)[reply]
    To clarify, I didn't mean "portrait" as in "painting," I meant it as "photo of person."
    However, I really want to stick to what you say at the end there: "Heck, most AI looks closer to the real thing than any portrait."
    That's exactly the problem: by looking close to the "real thing" it misleads users into believing a non-existent source of truth.

    Per the wording of the RfC of "depict BLP subjects," I don't think there would be any valid case to utilize AI images. I hold a strong No. Doawk7 (talk) 04:15, 3 January 2025 (UTC)[reply]
  • No. We should not use AI-generated images for situations like this, they are basically just guesswork by a machine as Quark said and they can misinform readers as to what a person looks like. Plus, there's a big grey area regarding copyright. For an AI generator to know what somebody looks like, it has to have photos of that person in its dataset, so it's very possible that they can be considered derivative works or copyright violations. Using an AI image (derivative work) to get around the fact that we have no free images is just fair use with extra steps. Di (they-them) (talk) 19:33, 1 January 2025 (UTC)[reply]
    [Image, right: Gisèle Pelicot?]
  • Maybe There was a prominent BLP image which we displayed on the main page recently. (right) This made me uneasy because it was an artistic impression created from photographs rather than life. And it was "colored digitally". Functionally, this seems to be exactly the same sort of thing as the Laurence Boccolini composite. The issue should not be whether there's a particular technology label involved but whether such creative composites and artists' impressions are acceptable as better than nothing. Andrew🐉(talk) 08:30, 1 January 2025 (UTC)[reply]
    Except it is clear to everyone that the illustration to the right is a sketch, a human rendition, while in the photorealistic image above, it is less clear. Cremastra (uc) 14:18, 1 January 2025 (UTC)[reply]
    Except it says right below it "AI-generated image of Laurence Boccolini." How much more clear can it be when it say point-blank "AI-generated image." Fyunck(click) (talk) 10:12, 2 January 2025 (UTC)[reply]
    Commons descriptions do not appear on our articles. CMD (talk) 10:28, 2 January 2025 (UTC)[reply]
    People taking a quick glance at an infobox image that looks pretty like a photograph are not going to scrutinize commons tagging. Cremastra (uc) 14:15, 2 January 2025 (UTC)[reply]
    Keep in mind that many AIs can produce works that match various styles, not just photographic quality. It is still possible for AI to produce something that looks like a watercolor or sketched drawing. — Masem (t) 14:33, 1 January 2025 (UTC)[reply]
    Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts for some examples). Cremastra (uc) 14:37, 1 January 2025 (UTC)[reply]
    Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —David Eppstein (talk) 20:06, 1 January 2025 (UTC)[reply]
    Same thing I wrote above, but for "photoshopping" read "drawing": (Bold added for emphasis)
    ...human [illustration] is not going to change or distort a person's appearance in the same way an AI image would. [Drawings] done by a [competent] person who is paying attention to what they are doing [...] by person who is aware, while they are making [the drawing], that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra (uc) 20:56, 1 January 2025 (UTC)[reply]
    @Cremastra then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing that AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? Thryduulf (talk) 22:12, 1 January 2025 (UTC)[reply]
    I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above: The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. Cremastra (uc) 00:16, 2 January 2025 (UTC)[reply]
    Once again your actual problem is not AI, but with misleading images. Which can be, and are, already a violation of policy. Thryduulf (talk) 01:17, 2 January 2025 (UTC)[reply]
    I think all AI-generated images, except simple diagrams as WhatamIdoing points out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. Cremastra (uc) 02:30, 2 January 2025 (UTC)[reply]
    To clarify, I'm willing to make an exception in this proposal for very simple geometric diagrams. Cremastra (uc) 02:38, 2 January 2025 (UTC)[reply]
    Despite the fact that not all AI-generated images are misleading, not all misleading images are AI-generated and it is not always possible to tell whether an image is or is not AI-generated? Thryduulf (talk) 02:58, 2 January 2025 (UTC)[reply]
    Enforcement is a separate issue. Whether or not all (or the vast majority) of AI images are misleading is the subject of this dispute.
    I'm not going to mistreat the horse further, as we've each made our points and understand where the other stands. Cremastra (uc) 15:30, 2 January 2025 (UTC)[reply]
    Even "simple diagrams" are not clear-cut. The process of AI-generating any image, no matter how simple, is still very complex and can easily follow any number of different paths to meet the prompt constraints. These paths through embedding space are black boxes and the likelihood they converge on the same output is going to vary wildly depending on the degrees of freedom in the prompt, the dimensionality of the embedding space, token corpus size, etc. The only thing the user can really change, other than switching between models, is the prompt, and at some point constructing a prompt that is guaranteed to yield the same result 100% of the time becomes a Borgesian exercise. This is in contrast with non-generative AI diagram-rendering software that follow very fixed, reproducible, known paths. JoelleJay (talk) 04:44, 2 January 2025 (UTC)[reply]
    Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. Thryduulf (talk) 04:48, 2 January 2025 (UTC)[reply]
    If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is and confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. JoelleJay (talk) 05:40, 2 January 2025 (UTC)[reply]
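    To make the contrast concrete, a reproducible, non-generative rendering path might look like the following sketch. Matplotlib is used here purely as a stand-in for GUI tools such as GraphPad, and the data values are invented for illustration: the same script plus the same data always yields the same figure, and every plotted point can be traced back to the input.

        # Reproducible, non-generative figure rendering: each point is placed
        # exactly where the input data dictates, so the output can be verified
        # against (and reconstructed from) the raw numbers.
        import matplotlib
        matplotlib.use("Agg")  # render straight to a file, no display needed
        import matplotlib.pyplot as plt

        x = [1, 2, 3, 4, 5]              # invented example data
        y = [2.1, 4.0, 6.2, 7.9, 10.1]

        fig, ax = plt.subplots()
        ax.scatter(x, y)
        ax.set_xlabel("dose")
        ax.set_ylabel("response")
        fig.savefig("scatter.png", dpi=150)  # identical output on every run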
    If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". WhatamIdoing (talk) 07:47, 2 January 2025 (UTC)[reply]
    The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. Photos of Japan (talk) 07:56, 2 January 2025 (UTC)[reply]
    Yes, but since "the laboriousness of verifying the accuracy of the image" is exactly what she doesn't want to undertake for 18,000 dots, then I think that's very relevant. WhatamIdoing (talk) 07:58, 2 January 2025 (UTC)[reply]
    And where is that cutoff supposed to be? 1000 dots? A single straight line? An atomic diagram? What is "simple" to someone unfamiliar with a topic may be more complex.
    And I don't want to count 100 dots either! JoelleJay (talk) 17:43, 2 January 2025 (UTC)[reply]
    Maybe you don't. But I know for certain that you can count 10 across, 10 down, and multiply those two numbers to get 100. That's what I did when I made the image, after all. WhatamIdoing (talk) 07:44, 3 January 2025 (UTC)[reply]
  • Comment: when you Google search someone (at least from the Chrome browser), often the link to the Wikipedia article includes a thumbnail of the lead photo as a preview. Even if the photo is labelled as an AI image in the article, people looking at the thumbnail from Google would be misled (if the image is chosen for the preview). Photos of Japan (talk) 09:39, 1 January 2025 (UTC)[reply]
    This is why we should not use inaccurate images, regardless of how the image was created. It has absolutely nothing to do with AI. Thryduulf (talk) 11:39, 1 January 2025 (UTC)[reply]
  • Already opposed a blanket ban: It's unclear to me why we have a separate BLP subsection, as BLPs are already included in the main section above. Anyway, I expressed my views there. MichaelMaggs (talk)
    Some editors might oppose a blanket ban on all AI-generated images, while at the same time, are against using AI-generated images (created by using text prompts/text-to-image models) to depict living people. Some1 (talk) 14:32, 1 January 2025 (UTC)[reply]
  • No For at least now, let's not let the problems of AI intrude into BLP articles which need to have the highest level of scrutiny to protect the person represented. Other areas on WP may benefit from AI image use, but let's keep it far out of BLP at this point. --Masem (t) 14:35, 1 January 2025 (UTC)[reply]
  • I am not a fan of “banning” AI images completely… but I agree that BLPs require special handling. I look at AI imagery as being akin to a computer generated painting. In a BLP, we allow paintings of the subject, but we prefer photos over paintings (if available). So… we should prefer photos over AI imagery.
    That said, AI imagery is getting good enough that it can be mistaken for a photo… so… If an AI generated image is the only option (ie there is no photo available), then the caption should clearly indicate that we are using an AI generated image. And that image should be replaced as soon as possible with an actual photograph. Blueboar (talk) 14:56, 1 January 2025 (UTC)[reply]
    The issue with the latter is that Wikipedia images get picked up by Google and other search engines, where the caption isn't there anymore to add the context that a photorealistic image was AI-generated. Chaotic Enby (talk · contribs) 15:27, 1 January 2025 (UTC)[reply]
    We're here to build an encyclopedia, not to protect commercial search engine companies.
    I think my view aligns with Blueboar's (except that I find no firm preference for photos over classical portrait paintings): We shouldn't have inaccurate AI images of people (living or dead). But the day appears to be coming when AI will generate accurate ones, or at least ones that are close enough to accurate that we can't tell the difference unless the uploader voluntarily discloses that information. Once we can no longer tell the difference, what's the point in banning them? Images need to look like the thing being depicted. When we put a photorealistic image in an article, we could be said to be implicitly claiming that the image looks like whatever's being depicted. We are not necessarily warranting that the image was created through a specific process, but the image really does need to look like the subject. WhatamIdoing (talk) 03:12, 2 January 2025 (UTC)[reply]
    You are presuming that sufficient accuracy will prevent us from knowing whether someone is uploading an AI photo, but that is not the case. For instance, if someone uploads large amounts of "photos" of famous people, and can't account for how they got them (e.g. can't give a source where they scraped them from, or dates or any Exif metadata at all for when they were taken), then it will still be obvious that they are likely using AI. Photos of Japan (talk) 17:38, 3 January 2025 (UTC)[reply]
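    As a sketch of the kind of metadata check being described (using the Pillow library; the file name is hypothetical), listing whatever Exif fields an upload carries takes only a few lines; an image with no capture date, camera model, or other Exif data at all is the sort of case being flagged above.

        # List the Exif metadata of an image file, if any, with Pillow.
        from PIL import Image
        from PIL.ExifTags import TAGS

        img = Image.open("upload.jpg")  # hypothetical file name
        exif = img.getexif()
        if not exif:
            print("No Exif metadata present")
        for tag_id, value in exif.items():
            print(TAGS.get(tag_id, tag_id), value)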
    As another editor pointed out in their comment, there's the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet, especially on a site such as Wikipedia and especially on their own biography. WP:BLP says the bios must be written conservatively and with regard for the subject's privacy. Some1 (talk) 18:37, 3 January 2025 (UTC)[reply]
    Once we can no longer tell the difference, what's the point in banning them? Sounds like a wolf's in sheep's clothing to me. Just because the surface appeal of fake pictures gets better, doesn't mean we should let the horse in. Cremastra (uc) 18:47, 3 January 2025 (UTC)[reply]
    If there are no appropriately-licensed images of a person, then by definition any AI-generated image of them will be either a copyright infringement or a complete fantasy. JoelleJay (talk) 04:48, 2 January 2025 (UTC)[reply]
    Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant: If an image is a copyvio we can't use it and it is irrelevant why it is a copyvio. If an image is a "complete fantasy" then it is exactly as unusable as a complete fantasy generated by non-AI means, so again AI is irrelevant. I've had to explain this multiple times in this discussion, so read that for more detail and note the lack of refutation. Thryduulf (talk) 04:52, 2 January 2025 (UTC)[reply]
    But we can assume good faith that a human isn't blatantly copying something. We can't assume that from an LLM like Stability AI which has been shown to even copy the watermark from Getty's images. Photos of Japan (talk) 05:50, 2 January 2025 (UTC)[reply]
    Ooooh, I'm not sure that we can assume that humans aren't blatantly copying something. We can assume that they meant to be helpful, but that's not quite the same thing. WhatamIdoing (talk) 07:48, 2 January 2025 (UTC)[reply]
  • Oppose. Yes. I echo my comments from the other day regarding BLP illustrations:

    What this conversation is really circling around is banning entire skillsets from contributing to Wikipedia merely because some of us are afraid of AI images and some others of us want to engineer a convenient, half-baked, policy-level "consensus" to point to when they delete quality images from Wikipedia. [...] Every time someone generates text based on a source, they are doing some acceptable level of interpretation to extract facts or rephrase it around copyright law, and I don't think illustrations should be considered so severely differently as to justify a categorical ban. For instance, the Gisele Pelicot portrait is based on non-free photos of her. Once the illustration exists, it is trivial to compare it to non-free images to determine if it is an appropriate likeness, which it is. That's no different than judging contributed text's compliance with fact and copyright by referring to the source. It shouldn't be treated differently just because most Wikipedians contribute via text.
    Additionally, [when I say "entire skillsets," I am not] referring to interpretive skillsets that synthesize new information like, random example, statistical analysis. Excluding those from Wikipedia is current practice and not controversial. Meanwhile, I think the ability to create images is more fundamental than that. It's not (inherently) synthesizing new information. A portrait of a person (alongside the other examples in this thread) contains verifiable information. It is current practice to allow them to fill the gaps where non-free photos can't. That should continue. Honestly, it should expand.

    lethargilistic (talk) 15:41, 1 January 2025 (UTC)[reply]
    Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Wikipedia has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Wikipedia via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Wikipedia. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Wikipedia. lethargilistic (talk) 15:59, 1 January 2025 (UTC)[reply]
    By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using AI-generated images (generated via text prompts, see also: text-to-image model) to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. Some1 (talk) 16:09, 1 January 2025 (UTC)[reply]
    I've changed it to "yes" to reflect the reversed question. I think all of this is related because there is no coherent distinguishing point; AI can be used to create images in a variety of styles. These discussions have shown that a policy of banning AI images will be used against non-AI images of all kinds, so I think it's important to say these kinds of things now. lethargilistic (talk) 16:29, 1 January 2025 (UTC)[reply]
    Photorealistic images scraped from who knows where from who knows what sources are without question simply fake photographs and also clear WP:OR and outright WP:SYNTH. There's no two ways about it. Articles do not require images: An article with some Frankenstein-ed image scraped from who knows what, where, and when that you "created" from a prompt is not an improvement over having no image at all. If we can't provide a quality image (like something you didn't cook up from a prompt) then people can find quality, non-fake images elsewhere. :bloodofox: (talk) 23:39, 1 January 2025 (UTC)[reply]
    I really encourage you to read the discussion I linked before because it is on the WP:NOR talk page. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles require. It is about improvements to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. lethargilistic (talk) 03:21, 2 January 2025 (UTC)[reply]
    Images like these are absolutely nothing more than synthesis in the purest sense of the word and are clearly a violation of WP:SYNTH: Again, you have no idea what data was used to generate these images and you're going to have a very hard time convincing anyone to describe them as anything other than outright fakes.
    A reminder that WP:SYNTH shuts down attempts at manipulation of images ("It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.") and generating a photorealistic image (from who knows what!) is far beyond that.
    Fake images of people do not improve our articles in any way and only erode reader trust. What's next, an argument for the fake sources LLMs also love to "hallucinate"? :bloodofox: (talk) 03:37, 2 January 2025 (UTC)[reply]
    So, if you review the first sentence of SYNTH, you'll see it has no special relevance to this discussion: Do not combine material from multiple sources to state or imply a conclusion not explicitly stated by any of the sources.. My primary example has been a picture of a person; what a person looks like is verifiable by comparing the image to non-free images that cannot be used on Wikipedia. If the image resembles the person, it is not SYNTH. An illustration of a person created and intended to look like that person is not a manipulation. The training data used to make the AI is irrelevant to whether the image in fact resembles the person. You should also review WP:NOTSYNTH because SYNTH is not a policy; NOR is the policy: If a putative SYNTH doesn't constitute original research, then it doesn't constitute SYNTH. Additionally, not all synthesis is even SYNTH. A categorical rule against AI cannot be justified by SYNTH because it does not categorically apply to all use cases of AI. To do so would be illogical on top of ill-advised. lethargilistic (talk) 08:08, 2 January 2025 (UTC)[reply]
    "training data used to make the AI is irrelevant" — spoken like a true AI evangelist! Sorry, 'good enough' photorealism is still just synthetic slop, a fake image presented as real of a human being. A fake image of someone generated from who-knows-what that 'resembles' an article's subject is about as WP:SYNTH as it gets. Yikes. As for the attempts to pass of prompt-generated photorealistic fakes of people as somehow the same as someone's illustration, you're completely wasting your time. :bloodofox: (talk) 09:44, 2 January 2025 (UTC)[reply]
    NOR is a content policy and SYNTH is content guidance within NOR. Because you have admitted that this is not about the content for you, NOR and SYNTH are irrelevant to your argument, which boils down to WP:IDONTLIKEIT and, now, inaccurate personal attacks. Continuing this discussion between us would be pointless. lethargilistic (talk) 09:52, 2 January 2025 (UTC)[reply]
    This is in fact entirely about content (why the hell else would I bother?) but it is true that I also dismissed your pro-AI 'it's just like a human drawing a picture!' as outright nonsense a while back. Good luck convincing anyone else with that line - it didn't work here. :bloodofox: (talk) 09:59, 2 January 2025 (UTC)[reply]
  • Maybe: there is an implicit assumption with this RFC that an AI generated image would be photorealistic. There hasn't been any discussion of an AI generated sketch. If you asked an AI to generate a sketch (that clearly looked like a sketch, similar to the Gisèle Pelicot example) then I would potentially be ok with it. Photos of Japan (talk) 18:14, 1 January 2025 (UTC)[reply]
    That's an interesting thought to consider. At the same time, I worry about (well-intentioned) editors inundating image-less BLP articles with AI-generated images in the style of cartoons/sketches (if only photorealistic ones are prohibited) etc. At least requiring a human to draw/paint/whatever creates a barrier to entry; these AI-generated images can be created in under a minute using these text-to-image models. Editors are already wary about human-created cartoon portraits (see the NORN discussion), now they'll be tasked with dealing with AI-generated ones in BLP articles. Some1 (talk) 20:28, 1 January 2025 (UTC)[reply]
    It sounds like your problem is not with AI but with cartoon/sketch images in BLP articles, so AI is once again completely irrelevant. Thryduulf (talk) 22:14, 1 January 2025 (UTC)[reply]
    That is a good concern you brought up. There is a possibility of the spamming of low-quality AI-generated images which would be laborious to discuss on a case-by-case basis but easy to generate. At the same time, though, that is only a possibility, not yet an actuality, and WP:CREEP states that new policies should address current problems rather than hypothetical concerns. Photos of Japan (talk) 22:16, 1 January 2025 (UTC)[reply]
  • Easy no for me. I am not against the use of AI images wholesale, but I do think that using AI to represent an existent thing such as a person or a place is too far. Even a tag wouldn't be enough for me. Cessaune [talk] 19:05, 1 January 2025 (UTC)[reply]
  • No obviously, per previous discussions about cartoonish drawn images in BLPs. Same issue here as there, it is essentially original research and misrepresentation of a living person's likeness. Zaathras (talk) 22:19, 1 January 2025 (UTC)[reply]
  • No to photorealistic, no to cartoonish... this is not a hard choice. The idea that "this has nothing to do with AI" when "AI" magnifies the problem to stupendous proportions is just not tenable. XOR'easter (talk) 23:36, 1 January 2025 (UTC)[reply]
    While AI might "amplify" the thing you dislike, that does not make AI the problem. The problem is whatever underlying thing is being amplified. Thryduulf (talk) 01:16, 2 January 2025 (UTC)[reply]
    The thing that amplifies the problem is necessarily a problem. XOR'easter (talk) 02:57, 2 January 2025 (UTC)[reply]
    That is arguable, but banning the amplifier does not do anything to solve the problem. In this case, banning the amplifier would cause multiple other problems that nobody supporting this proposal has even attempted to address, let alone mitigate. Thryduulf (talk) 03:04, 2 January 2025 (UTC)[reply]
  • No for all people, per Chaotic Enby. Nikkimaria (talk) 03:23, 2 January 2025 (UTC) Add: no to any AI-generated images, whether photorealistic or not. Nikkimaria (talk) 04:00, 3 January 2025 (UTC)[reply]
  • No - We should not be hosting faked images (except as notable fakes). We should also not be hosting copyvios ("Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant" is just totally wrong - we should be steering clear of copyvios, and if the issue is unsettled then we shouldn't use them until it is).
  • If people upload faked images to WP or Commons the response should be as it is now. The fact that fakes are becoming harder to detect simply from looking at them hardly affects this - we simply confirm when the picture was supposed to have been taken and examine the plausibility of it from there. FOARP (talk) 14:39, 2 January 2025 (UTC)[reply]
    we should be steering clear of copyvio we do - if an image is a copyright violation it gets deleted, regardless of why it is a copyright violation. What we do not do is ban using images that are not copyright violations because they are copyright violations. Currently the WMF lawyers and all the people on Commons who know more about copyright than I do say that at least some AI images are legally acceptable for us to host and use. If you want to argue that, then go ahead, but it is not relevant to this discussion.
    if people upload faked images [...] the response should be as it is now in other words you are saying that the problem is faked images not AI, and that current policies are entirely adequate to deal with the problem of faked images. So we don't need any specific rules for AI images - especially given that not all AI images are fakes. Thryduulf (talk) 15:14, 2 January 2025 (UTC)[reply]
    The idea that current policies are entirely adequate is like saying that a lab shouldn't have specific rules about wearing eye protection when it already has a poster hanging on the wall that says "don't hurt yourself". XOR'easter (talk) 18:36, 2 January 2025 (UTC)[reply]
    I rely on one of those rotating shaft warnings up in my workshop at home. I figure if that doesn't keep me safe, nothing will. ScottishFinnishRadish (talk) 18:41, 2 January 2025 (UTC)[reply]
    "in other words you are saying that the problem is faked images not AI" - AI generated images *are* fakes. This is merely confirming that for the avoidance of doubt.
    "at least some AI images are legally acceptable for us" - Until they decide which ones that isn't much help. FOARP (talk) 19:05, 2 January 2025 (UTC)[reply]
    Yes – what FOARP said. AI-generated images are fakes and are misleading. Cremastra (uc) 19:15, 2 January 2025 (UTC)[reply]
    Those specific rules exist because generic warnings have proven not to be sufficient. Nobody has presented any evidence that the current policies are not sufficient, indeed quite the contrary. Thryduulf (talk) 19:05, 2 January 2025 (UTC)[reply]
  • No! This would be a massive can of worms; perhaps, however, we wish to cause problems in the new year. JuxtaposedJacob (talk) | :) | he/him | 15:00, 2 January 2025 (UTC)[reply]
    Noting that I think that no AI-generated images are acceptable in BLP articles, regardless of whether they are photorealistic or not. JuxtaposedJacob (talk) | :) | he/him | 15:40, 3 January 2025 (UTC)[reply]
  • No, unless the AI image has encyclopedic significance beyond "depicts a notable person". AI images, if created by editors for the purpose of inclusion in Wikipedia, convey little reliable information about the person they depict, and the ways in which the model works are opaque enough to most people as to raise verifiability concerns. ModernDayTrilobite (talkcontribs) 15:25, 2 January 2025 (UTC)[reply]
    To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict[ing] a notable person"? Thryduulf (talk) 15:54, 2 January 2025 (UTC)[reply]
    If the subject uses the image for self-identification, I'd be fine with it - I think that'd be analogous to situations such as "cartoonist represented by a stylized self-portrait", which definitely has some precedent in articles like Al Capp. I agree with your second sentence as well; if there's notable discussion around a particular AI image, I think it would be reasonable to include that image on Wikipedia. ModernDayTrilobite (talkcontribs) 19:13, 2 January 2025 (UTC)[reply]
  • No, with obvious exceptions, including if the subject themself uses the image as their representation, or if the image is notable itself. Not including the lack of a free alternative; if there is no free alternative... where did the AI find data to build an image... non-free too. Not including images generated by WP editors (that's kind of original research...) - Nabla (talk) 18:02, 2 January 2025 (UTC)
  • Maybe I think the question is unfair as it is illustrated with what appears to be a photo of the subject but isn't. People are then getting upset that they've been misled. As others note, there are copyright concerns with AI reproducing copyrighted works that in turn make an image that is potentially legally unusable. But that is more a matter for Commons than for Wikipedia. As many have noted, a sketch or painting never claims to be an accurate depiction of a person, and I don't care if that sketch or painting was done by hand or an AI prompt. I strongly ask Some1 to abort the RFC. You've asked people to give a yes/no vote to what is a more complex issue. A further problem with the example used is the unfortunate prejudice on Wikipedia against user-generated content. While the text-generated AI of today is crude and random, there will come a point where many professionally published photos illustrating subjects, including people, are AI generated. Even today, your smartphone can create a groupshot where everyone is smiling and looking at the camera. It was "trained" on the 50 images it quickly took and responded to the built-in "text prompt" of "create a montage of these photos such that everyone is smiling and looking at the camera". This vote is a knee-jerk reaction to content that is best addressed by some other measure (such as that it is a misleading image). And a good example of asking people to vote way too early, when the issues haven't been thought out -- Colin°Talk 18:17, 2 January 2025 (UTC)[reply]
  • No This would very likely set a dangerous precedent. The only exception I think should be if the image itself is notable. If we move forward with AI images, especially for BLPs, it would only open up a whole slew of regulations and RfCs to keep them in check. Better no image than some digital multiverse version of someone that is "basically" them but not really. Not to mention the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet. Tepkunset (talk) 18:31, 2 January 2025 (UTC)[reply]
  • No. LLMs don't generate answers, they generate things that look like answers, but aren't; a lot of the time, that's good enough, but sometimes it very much isn't. It's the same issue for text-to-image models: they don't generate photos of people, they generate things that look like photos. Using them on BLPs is unacceptable. DS (talk) 19:30, 2 January 2025 (UTC)[reply]
  • No. I would be pissed if the top picture of me on Google was AI-generated. I just don't think it's moral for living people. The exceptions given above by others are okay, such as if the subject uses the picture themselves or if the picture is notable (with context given). win8x (talk) 19:56, 2 January 2025 (UTC)[reply]
  • No. Uploading alone, although mostly a Commons issue, would already be a problem to me and may have personality rights issues. Illustrating an article with a fake photo (or drawing) of a living person, even if it is labeled as such, would not be acceptable. For example, it could end up being shown by search engines or when hovering over a Wikipedia link, without the disclaimer. ~ ToBeFree (talk) 23:54, 2 January 2025 (UTC)[reply]
  • I was going to say no... but we allow paintings as portraits in BLPs. What's so different between an AI generated image, and a painting? Arguments above say the depiction may not be accurate, but the same is true of some paintings, right? (and conversely, not true of other paintings) ProcrastinatingReader (talk) 00:48, 3 January 2025 (UTC)[reply]
    A painting is clearly a painting; as such, the viewer knows that it is not an accurate representation of a particular reality. An AI-generated image made to look exactly like a photo, looks like a photo but is not.
    DS (talk) 02:44, 3 January 2025 (UTC)[reply]
    Not all paintings are clearly paintings. Not all AI-generated images are made to look like photographs. Not all AI-generated images made to look like photos do actually look like photos. This proposal makes no distinction. Thryduulf (talk) 02:55, 3 January 2025 (UTC)[reply]
    Not to mention, hyper-realism is a style an artist may use in virtually any medium. Colored pencils can be used to make extremely realistic portraits. If Wikipedia would accept an analog substitute like a painting, there's no reason Wikipedia shouldn't accept an equivalent painting made with digital tools, and there's no reason Wikipedia shouldn't accept an equivalent painting made with AI. That is, one where any obvious defects have been edited out and what remains is a straightforward picture of the subject. lethargilistic (talk) 03:45, 3 January 2025 (UTC)[reply]
    For the record (and for any media watching), while I personally find it fascinating that a few editors here are spending a substantial amount of time (in the face of an overwhelming 'absolutely not' consensus no less) attempting to convince others that computer-generated (that is, faked) photos of human article subjects are somehow a good thing, I also find it interesting that these editors seem to express absolutely no concern for the intensely negative reaction they're already seeing from their fellow editors and seem totally unconcerned about the inevitable trust drop we'd experience from Wikipedia readers when they would encounter fake photos on our BLP articles especially. :bloodofox: (talk) 03:54, 3 January 2025 (UTC)[reply]
    Wikipedia's reputation would not be affected positively or negatively by expanding the current-albeit-sparse use of illustrations to depict subjects that do not have available pictures. In all my writing about this over the last few days, you are the only one who has said anything negative about me as a person or, really, my arguments themselves. As loath as I am to cite it, WP:AGF means assuming that people you disagree with are not trying to hurt Wikipedia. Thryduulf, I, and others have explained in detail why we think our ultimate ideas are explicit benefits to Wikipedia and why our opposition to these immediate proposals comes from a desire to prevent harm to Wikipedia. I suggest taking a break to reflect on that, matey. lethargilistic (talk) 04:09, 3 January 2025 (UTC)[reply]
    Look, I don't know if you've been living under a rock or what for the past few years but the reality is that people hate AI images and dumping a ton of AI/fake images on Wikipedia, a place people go for real information and often trust, inevitably leads to a huge trust issue, something Wikipedia is increasingly suffering from already. This is especially a problem when they're intended to represent living people (!). I'll leave it to you to dig up the bazillion controversies that have arisen and continue to arise since companies worldwide have discovered that they can now replace human artists with 'AI art' produced by "prompt engineers" but you can't possibly expect us to ignore that reality when discussing these matters. :bloodofox: (talk) 04:55, 3 January 2025 (UTC)[reply]
    Those trust issues are born from the publication of hallucinated information. I have only said that it should be OK to use an image on Wikipedia when it contains only verifiable information, which is the same standard we apply to text. That standard is and ought to be applied independently of the way the initial version of an image was created. lethargilistic (talk) 06:10, 3 January 2025 (UTC)[reply]
    To my eye, the distinction between AI images and paintings here is less a question of medium and more of verifiability: the paintings we use (or at least the ones I can remember) are significant paintings that have been acknowledged in sources as being reasonable representations of a given person. By contrast, a purpose-generated AI image would be more akin to me painting a portrait of somebody here and now and trying to stick that on their article. The image could be a faithful representation (unlikely, given my lack of painting skills, but let's not get lost in the metaphor), but if my painting hasn't been discussed anywhere besides Wikipedia, then it's potentially OR or UNDUE to enshrine it in mainspace as an encyclopedic image. ModernDayTrilobite (talkcontribs) 05:57, 3 January 2025 (UTC)[reply]
    An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Wikipedia. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically not OR. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Wikipedia would have very few images. lethargilistic (talk) 06:18, 3 January 2025 (UTC)[reply]
    Verifiable how? Only by the editor themselves comparing to a real photo (which was probably used by the LLM to create the image…).
    These things are fakes. The analysis stops there. FOARP (talk) 10:48, 4 January 2025 (UTC)[reply]
    Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Wikipedia is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Wikipedia because their images that have not been previously published are not verifiable. That is a double-standard that privileges the contributions of text-editors simply because most users are text-editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened—The opposite happened, and images are treated as verifiable based on their contents just like text because that's a common sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently for years. In the related NOR thread, they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that those decisions were consensus. The motivated reasoning of these discussions has been as blatant as that.
    At the bottom of this dispute, I take issue with trying to alter the rules in a way that creates a new double-standard within verifiability that applies to all images but not text. That's especially upsetting when (despite my and others' best efforts) so many of us are still focusing SOLELY on their hatred for AI rather than considering the obvious second-order consequences for user-generated images as a whole.
    Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake." The issue has always been verifiability, not provenance or falsity. Sometimes, IMO, that has led to disaster and Wikipedia saying things I know to be factually untrue despite the contents of reliable sources. But that is the policy. We compare the contents of Wikipedia to reliable sources, and the contents of Wikipedia are considered verifiable if they cohere.
    I ask again: If Wikipedia's response to the creation of AI imaging tools is to crack down on all artistic contributions to Wikipedia (which seems to be the inevitable direction of these discussions), what does that say? If our negative response to AI tools is to limit what humans can do on Wikipedia, what does that say? Are we taking a stand for human achievements, or is this a very heated discussion of cutting off our nose to save our face? lethargilistic (talk) 23:31, 4 January 2025 (UTC)[reply]
    "Verifiable by comparing them to a reliable source" - comparing two images and saying that one looks like the other is not "verifying" anything. The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing.
    "Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake."" - Try presenting a paraphrasing as a quotation and see what happens.
    "Imagine it's 2002 and Wikipedia is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures..." - This basically happened, and is the origin of WP:NOTGALLERY. Wikipedia is not a host for original works. FOARP (talk) 22:01, 6 January 2025 (UTC)[reply]
    Comparing two images and saying that one looks like the other is not "verifying" anything. Comparing text to text in a reliable source is literally the same thing.
    The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing. No it isn't. The text equivalent is writing a sentence in an article and putting a ref tag on it. Perhaps there is room for improving the referencing of images in the sense that they should offer example comparisons to make. But an image created by a person is not unverifiable simply because it is user-generated. It is not somehow more unverifiable simply because it is created in a lifelike style.
    Try presenting a paraphrasing as a quotation and see what happens. Besides what I just said, nobody is even presenting these images as equatable to quotations. People in this thread have simply been calling them "fake" of their own initiative; the uploaders have not asserted that these are literal photographs to my knowledge. The uploaders of illustrations obviously did not make that claim either. (And, if the contents of the image is a copyvio, that is a separate issue entirely.)
    This basically happened, and is the origin of WP:NOTGALLERY. That is not the same thing. User-generated images that illustrate the subject are not prohibited by WP:NOTGALLERY. Wikipedia is a host of encyclopedic content, and user-generated images can have encyclopedic content. lethargilistic (talk) 02:41, 7 January 2025 (UTC)[reply]
    Images are way more complex than text. Trying to compare them in the same way is a very dangerous simplification. Cremastra (uc) 02:44, 7 January 2025 (UTC)[reply]
    Assume only non-free images exist of a person. An illustrator refers to those non-free images and produces a painting. From that painting, you see a person who looks like the person in the non-free photographs. The image is verified as resembling the person. That is a simplification, but to call it "dangerous" is disingenuous at best. The process for challenging the image is clear. Someone who wants to challenge the veracity of the image would just need to point to details that do not align. For instance, "he does not typically have blue hair" or "he does not have a scar." That is what we already do, and it does not come up much because it would be weird to deliberately draw an image that looks nothing like the person. Additionally, someone who does not like the image for aesthetic reasons rather than encyclopedic ones always has the option of sourcing a photograph some other way like permission, fair use, or taking a new one themself. This is not an intractable problem. lethargilistic (talk) 02:57, 7 January 2025 (UTC)[reply]
    So a photorealistic AI-generated image would be considered acceptable until someone identifies a "big enough" difference? How is that anything close to ethical? A portrait that's got an extra mole or slightly wider nose bridge or lacks a scar is still not an image of the person regardless of whether random Wikipedia editors notice. And while I don't think user-generated non-photorealistic images should ever be used on biographies either, at least those can be traced back to a human who is ultimately responsible for the depiction, who can point to the particular non-free images they used as references, and isn't liable to average out details across all time periods of the subject. And that's not even taking into account the copyright issues. JoelleJay (talk) 22:52, 7 January 2025 (UTC)[reply]
    +1 to what JoelleJay said. The problem is that AI-generated images are simulations trying to match existing images, sometimes, yes, with an impressive degree of accuracy. But they will always be inferior to a human-drawn painting that's trying to depict the person. We're a human encyclopedia, and we're built by humans doing human things and sometimes with human errors. Cremastra (uc) 23:18, 7 January 2025 (UTC)[reply]
    You can't just raise this to an "ethical" issue by saying the word "ethical." You also can't just invoke copyright without articulating an actual copyright issue; we are not discussing copyvio. Everyone agrees that a photo with an actual copyvio in it is subject to that policy.
    But to address your actual point: Any image—any photo—beneath the resolution necessary to depict the mole would be missing the mole. Even with photography, we are never talking about science-fiction images that perfectly depict every facet of a person in an objective sense. We are talking about equipment that creates an approximation of reality. The same is true of illustrations and AI imagery.
    Finally, a human being is responsible for the contents of the image because a human is selecting it and is responsible for correcting any errors. The result is an image that someone is choosing to use because they believe it is an appropriate likeness. We should acknowledge that human decision and evaluate it naturally—Is it an appropriate likeness? lethargilistic (talk) 10:20, 8 January 2025 (UTC)[reply]
    (Second comment because I'm on my phone.) I realize I should also respond to this in terms of additive information. What people look like is not static in the way your comment implies. Is it inappropriate to use a photo because they had a zit on the day it was taken? Not necessarily. Is an image inappropriate because it is taken at a bad angle that makes them look fat? Judging by the prolific ComicCon photographs (where people seem to make a game of choosing the worst-looking options; seriously, it's really bad), not necessarily. Scars and bruises exist and then often heal over time. The standard for whether an image with "extra" details is acceptable would still be based on whether it comports acceptably with other images; we literally do what you have capriciously described as "unethical" and supplement it with our compassionate desire to not deliberately embarrass BLPs. (The ComicCon images aside, I guess.) So, no, I would not be a fan of using images that add prominent scars where the subject is not generally known to have one, but that is just an unverifiable fact that does not belong in a Wikipedia image. Simple as. lethargilistic (talk) 10:32, 8 January 2025 (UTC)[reply]
    We don't evaluate the reliability of a source solely by comparing it to other sources. For example, there is an ongoing discussion at the baseball WikiProject talk page about the reliability of a certain web site. It lists no authors nor any information on its editorial control policy, so we're not able to evaluate its reliability. The reliability of all content being used as a source, including images, needs to be considered in terms of its provenance. isaacl (talk) 23:11, 7 January 2025 (UTC)[reply]
  • Can you note in your !vote whether AI-generated images (generated via text prompts/text-to-image models) that are not photo-realistic / hyper-realistic in style are okay to use to depict BLP subjects? For example, see the image to the right, which was added then removed from his article:
    AI-generated cartoon portrait of Germán Larrea Mota-Velasco by DALL-E
    Pinging people who !voted No above: User:Chaotic Enby, User:Cremastra, User:Horse Eye's Back, User:Pythoncoder, User:Kj cheetham, User:Bloodofox, User:Gnomingstuff, User:JoelleJay, User:Carrite, User:Seraphimblade, User:David Eppstein, User:Randy Kryn, User:Traumnovelle, User:SuperJew, User:Doawk7, User:Di (they-them), User:Masem, User:Cessaune, User:Zaathras, User:XOR'easter, User:Nikkimaria, User:FOARP, User:JuxtaposedJacob, User:ModernDayTrilobite, User:Nabla, User:Tepkunset, User:DragonflySixtyseven, User:Win8x, User:ToBeFree --- Some1 (talk) 03:55, 3 January 2025 (UTC)[reply]
    Still no, I thought I was clear on that but we should not be using AI-generated images in articles for anything besides representing the concept of AI-generated images, or if an AI-generated image is notable or irreplaceable in its own right -- e.g., a musician uses AI to make an album cover.
    (this isn't even a good example, it looks more like Steve Bannon)
    Gnomingstuff (talk) 04:07, 3 January 2025 (UTC)[reply]
    Was I unclear? No to all of them. XOR'easter (talk) 04:13, 3 January 2025 (UTC)[reply]
    Still no, because carving out that type of exception will just lead to arguments down the line about whether a given image is too realistic. pythoncoder (talk | contribs) 04:24, 3 January 2025 (UTC)[reply]
    I still think no. My opposition isn't just to the fact that AI images are misinformation, but also that they essentially serve as a loophole for getting around Enwiki's image use policy. To know what somebody looks like, an AI generator needs to have images of that person in its dataset, and it draws on those images to generate a derivative work. If we have no free images of somebody and we use AI to make one, that's just using a fair use copyrighted image but removed by one step. The image use policy prohibits us from using fair use images for BLPs so I don't think we should entertain this loophole. If we do end up allowing AI images in BLPs, that just disqualifies the rationale of not allowing fair use in the first place. Di (they-them) (talk) 04:40, 3 January 2025 (UTC)[reply]
    No those are not okay, as this will just cause arguments from people saying a picture is obviously AI-generated, and that it is therefore appropriate. As I mentioned above, there are some exceptions to this, which Gnomingstuff perfectly describes. Fake sketches/cartoons are not appropriate and provide little encyclopedic value. win8x (talk) 05:27, 3 January 2025 (UTC)[reply]
    No to this as well, with the same carveout for individual images that have received notable discussion. Non-photorealistic AI images are going to be no more verifiable than photorealistic ones, and on top of that will often be lower-quality as images. ModernDayTrilobite (talkcontribs) 05:44, 3 January 2025 (UTC)[reply]
    Thanks for the ping, yes I can, the answer is no. ~ ToBeFree (talk) 07:31, 3 January 2025 (UTC)[reply]
    No, and that image should be deleted before anyone places it into a mainspace article. Changing the RfC intro long after its inception seems a second bite at an apple that's not aged well. Randy Kryn (talk) 09:28, 3 January 2025 (UTC)[reply]
    The RfC question has not been changed; another editor was complaining that the RfC question did not make a distinction between photorealistic/non-photorealistic AI-generated images, so I had to add a note to the intro and ping the editors who'd !voted No to clarify things. It has only been 3 days; there's still 27 more days to go. Some1 (talk) 11:18, 3 January 2025 (UTC)[reply]
    Also answering No to this one per all the arguments above. "It has only been 3 days" is not a good reason to change the RfC question, especially since many people have already !voted and the "30 days" is mostly indicative rather than an actual deadline for a RfC. Chaotic Enby (talk · contribs) 14:52, 3 January 2025 (UTC)[reply]
    The RfC question hasn't been changed; see my response to Zaathras below. Some1 (talk) 15:42, 3 January 2025 (UTC)[reply]
    No, that's even a worse possible approach. — Masem (t) 13:24, 3 January 2025 (UTC)[reply]
    No. We're the human encyclopedia. We should have images drawn or taken by real humans who are trying to depict the subject, not by machines trying to simulate an image. Besides, the given example is horribly drawn. Cremastra (uc) 15:03, 3 January 2025 (UTC)[reply]
    I like these even less than the photorealistic ones... This falls into the same basket for me: if we wouldn't let a random editor who drew this at home using conventional tools add it to the article, why would we let a random editor who drew this at home using AI tools add it to the article? (and just to be clear the AI generated image of Germán Larrea Mota-Velasco is not recognizable as such) Horse Eye's Back (talk) 16:06, 3 January 2025 (UTC)[reply]
    I said *NO*. FOARP (talk) 10:37, 4 January 2025 (UTC)[reply]
    No Having such images, as said above, means the AI had to use copyrighted pictures to create them, and we shouldn't use them. --SuperJew (talk) 01:12, 5 January 2025 (UTC)[reply]
    Still no. If for no other reason than that it's a bad precedent. As others have said, if we make one exception, it will just lead to arguments in the future about whether something is "realistic" or not. I also don't see why we would need cartoon/illustrated-looking AI pictures of people in BLPs. Tepkunset (talk) 20:43, 6 January 2025 (UTC)[reply]
  • Absolutely not. These images are based on whatever the AI could find on the internet, with little to no regard for copyright. Wikipedia is better than this. Retswerb (talk) 10:16, 3 January 2025 (UTC)[reply]
  • Comment The RfC question should not have been fiddled with, esp. for such a minor argument that the complainant could have simply included in their own vote. I have no need to re-confirm my own entry. Zaathras (talk) 14:33, 3 January 2025 (UTC)[reply]
    The RfC question hasn't been modified; I've only added a note (03:58, January 3, 2025) clarifying that these images can either be photorealistic in style or non-photorealistic in style. I pinged all the !No voters to make them aware. I could remove the Note if people prefer that I do (but the original RfC question is the exact same [8] as it is now, so I don't think the addition of the Note makes a whole ton of difference). Some1 (talk) 15:29, 3 January 2025 (UTC)[reply]
  • No At this point it feels redundant, but I'll just add to the horde of responses in the negative. I don't think we can fully appreciate the issues that this would cause. The potential problems and headaches far outweigh whatever little benefit might come from AI images for BLPs. pillowcrow 21:34, 3 January 2025 (UTC)[reply]
  • Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]
  • No. Wikipedia is made by and for humans. I don't want to become Google. Adding an AI-generated image to a page whose topic isn't about generative AI makes me feel insulted. SWinxy (talk) 00:03, 4 January 2025 (UTC)[reply]
  • No. Generative AI may have its place, and it may even have a place on Wikipedia in some form, but that place isn't in BLPs. There's no reason to use images of someone that do not exist over a real picture, or even something like a sketch, drawing, or painting. Even in the absence of pictures or human-drawn/painted images, I don't support using AI-generated images; they're not really pictures of the person, after all, so I can't support using them on articles of people. Using nothing would genuinely be a better choice than generated images. SmittenGalaxy | talk! 01:07, 4 January 2025 (UTC)[reply]
  • No due to reasons of copyright (AI harvests copyrighted material) and verifiability. Gamaliel (talk) 18:12, 4 January 2025 (UTC)[reply]
  • No. Even if you are willing to ignore the inherently fraught nature of using AI-generated anything in relation to BLP subjects, there is simply little to no benefit that could possibly come from trying something like this. There's no guarantee the images will actually look like the person in question, and therefore there's no actual context or information that the image is providing the reader. What a baffling proposal. Ithinkiplaygames (talk) 19:53, 4 January 2025 (UTC)[reply]
    There's no guarantee the images will actually look like the person in question there is no guarantee any image will look like the person in question. When an image is not a good likeness, regardless of why, we don't use it. When an image is a good likeness we consider using it. Whether an image is AI-generated or not is completely independent of whether it is a good likeness. There are also reasons other than identification why images are used on BLP articles. Thryduulf (talk) 20:39, 4 January 2025 (UTC)[reply]
  • Foreseeably there may come a time when people's official portraits are AI-enhanced. That time might not be very far in the future. Do we want an exception for official portraits?—S Marshall T/C 01:17, 5 January 2025 (UTC)[reply]
    This subsection is about purely AI-generated works, not about AI-enhanced ones. Chaotic Enby (talk · contribs) 01:23, 5 January 2025 (UTC)[reply]
  • No. Per Cremastra, "We should have images drawn or taken by real humans who are trying to depict the subject," - User:RossEvans19 (talk) 02:12, 5 January 2025 (UTC)[reply]
  • Yes, depending on specific case. One can use drawings by artists, even such as caricature. The latter is an intentional distortion, one could say an intentional misinformation. Still, such images are legitimate on many pages. Or consider numerous images of Jesus. How reliable are they? I am not saying we must deliberately use AI images on all pages, but they may be fine in some cases. Now, speaking on "medical articles"... One might actually use the AI generated images of certain biological objects like proteins or organelles. Of course a qualified editorial judgement is always needed to decide if they would improve a specific page (frequently they would not), but making a blanket ban would be unacceptable, in my opinion. For example, the images of protein models generated by AlphaFold would be fine. The AI-generated images of biological membranes I saw? I would say no. It depends. My very best wishes (talk) 02:50, 5 January 2025 (UTC)[reply]
    This is complicated of course. For example, there are tools that make an image of a person that (mis)represents him as someone much better and clever than he really is in life. That should be forbidden as an advertisement. This is a whole new world, but I do not think that a blanket rejection would be appropriate. My very best wishes (talk) 03:19, 5 January 2025 (UTC)[reply]
  • No, I think there's legal and ethical issues here, especially with the current state of AI. Clovermoss🍀 (talk) 03:38, 5 January 2025 (UTC)[reply]
  • No: Obviously, we shouldn't be using AI images to represent anyone. Lazman321 (talk) 05:31, 5 January 2025 (UTC)[reply]
  • No Too risky for BLP's. Besides if people want AI generated content over editor made content, we should make it clear they are in the wrong place, and readers should be given no doubt as to our integrity, sincerity and effort to give them our best, not a program's. Alanscottwalker (talk) 14:51, 5 January 2025 (UTC)[reply]
  • No, as AI's grasp on the Internet takes hold stronger and stronger, it's important Wikipedia, as the online encyclopedia it sets out to be, remains factual and real. Using AI images on Wiki would likely do more harm than good, further thinning the boundaries between what's real and what's not. – zmbro (talk) (cont) 16:52, 5 January 2025 (UTC)[reply]
  • No, not at the moment. I think it will be hard to avoid portraits that have been enhanced by AI, as it has already been ongoing for a number of years and there is no way to avoid it, but I don't want arbitrarily generated AI portraits of any type. scope_creepTalk 20:19, 5 January 2025 (UTC)[reply]
  • No for natural images (e.g. photos of people). Generative AI by itself is not a reliable source for facts. In principle, generating images of people and directly sticking them in articles is no different than generating text and directly sticking it in articles. In practice, however, generating images is worse: Text can at least be discussed, edited, and improved afterwards. In contrast, we have significantly less policy and fewer rigorous methods of discussing how AI-generated images of natural objects should be improved (e.g. "make his face slightly more oblong, it's not close enough yet"). Discussion will devolve into hunches and gut feelings about the fidelity of images, all of which essentially fall under WP:OR. spintheer (talk) 20:37, 5 January 2025 (UTC)[reply]
  • No I'm appalled that even a small minority of editors would support such an idea. We have enough credibility issues already; using AI-generated images to represent real people is not something that a real encyclopedia should even consider. LEPRICAVARK (talk) 22:26, 5 January 2025 (UTC)[reply]
  • No I understand the comparison to using illustrations in BLP articles, but I've always viewed that as less preferable to no picture in all honesty. Images of a person are typically presented in context, such as a performer on stage, or a politician's official portrait, and I feel like there would be too many edge cases to consider in terms of making it clear that the photo is AI generated and isn't representative of anything that the person specifically did, but is rather an approximation. Tpdwkouaa (talk) 06:50, 6 January 2025 (UTC)[reply]
  • No - Too often the images resemble caricatures. Real caricatures may be included in articles if the caricature (e.g., political cartoon) had significant coverage and is attributed to the artist. Otherwise, representations of living persons should be real representations taken with photographic equipment. Robert McClenon (talk) 02:31, 7 January 2025 (UTC)[reply]
    So you will be arguing for the removal of the lead images at Banksy, CGP Grey, etc. then? Thryduulf (talk) 06:10, 7 January 2025 (UTC)[reply]
    At this point you're making bad-faith "BY YOUR LOGIC" arguments. You're better than that. Don't do it. DS (talk) 19:18, 7 January 2025 (UTC)[reply]
  • Strong no per bloodofox. —Nythar (💬-🍀) 03:32, 7 January 2025 (UTC)[reply]
  • No for AI-generated BLP images Mrfoogles (talk) 21:40, 7 January 2025 (UTC)[reply]
  • No - Not only is this effectively guesswork that usually includes unnatural artefacts, but worse, it is also based on unattributed work of photographers who didn't release their work into public domain. I don't care if it is an open legal loophole somewhere, IMO even doing away with the fair use restriction on BLPs would be morally less wrong. I suspect people on whose work LLMs in question were trained would also take less offense to that option. Daß Wölf 23:25, 7 January 2025 (UTC)[reply]
  • NoWP:NFC says that Non-free content should not be used when a freely licensed file that serves the same purpose can reasonably be expected to be uploaded, as is the case for almost all portraits of living people. While AI images may not be considered copyrightable, it could still be a copyright violation if the output resembles other, copyrighted images, pushing the image towards NFC. At the very least, I feel the use of non-free content to generate AI images violates the spirit of the NFC policy. (I'm assuming copyrighted images of a person are used to generate an AI portrait of them; if free images of that person were used, we should just use those images, and if no images of the person were used, how on Earth would we trust the output?) RunningTiger123 (talk) 02:43, 8 January 2025 (UTC)[reply]
  • No, AI images should not be permitted on Wikipedia at all. Stifle (talk) 11:27, 8 January 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Expiration date?

[edit]

"AI," as the term is currently used, is very new. It feels like large language models and the type of image generators under discussion just got here in 2024. (Yes, I know it was a little earlier.) The culture hasn't completed its initial response to them yet. Right now, these images do more harm than good, but that may change. Either we'll come up with a better way of spotting hallucinations or the machines will hallucinate less. Their copyright status also seems unstable. I suggest that any ban decided upon here have some expiration date or required rediscussion date. Two years feels about right to me, but the important thing would be that the ban has a number on it. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]

  • No need for any end-date. If there comes a point where consensus on this changes, then we can change any ban then. FOARP (talk) 05:27, 5 January 2025 (UTC)[reply]
  • An end date is a positive suggestion. Consensus systems like Wikipedia's are vulnerable to half-baked precedential decisions being treated as inviolate. With respect, this conversation does not inspire confidence that this policy proposal's consequences are well-understood at this time. If Wikipedia goes in this direction, it should be labeled as primarily reactionary and open to review at a later date. lethargilistic (talk) 10:22, 5 January 2025 (UTC)[reply]
  • Agree with FOARP, no need for an end date. If something significantly changes (e.g. reliable sources/news outlets such as the New York Times, BBC, AP, etc. start using text-to-image models to generate images of living people for their own articles) then this topic can be revisited later. Editors will have to go through the usual process of starting a new discussion/proposal when that time comes. Some1 (talk) 11:39, 5 January 2025 (UTC)[reply]
    Seeing as this discussion has not touched at all on what other organizations may or may not do, it would not be accurate to describe any consensus derived from this conversation in terms of what other organizations may or may not be doing. That is, there has been no consensus that we ought to be looking to the New York Times as an example. Doing so would be inadvisable for several reasons. For one, they have sued an AI company over semi-related issues and they have teams explicitly working on what the future of AI in news ought to look like, so they have some investment in what the future of AI looks like and they are explicitly trying to shape its norms. For another, if they did start to use AI in a way that may be controversial, they would have no positive reason to disclose that and many disincentives. They are not a neutral signal on this issue. Wikipedia should decide for itself, preferably doing so while not disrupting the ability of people to continue creating user-generated images. lethargilistic (talk) 03:07, 6 January 2025 (UTC)[reply]
  • WP:Consensus can change on an indefinite basis, if something changes. An arbitrary sunset date doesn't seem much use. CMD (talk) 03:15, 6 January 2025 (UTC)[reply]
  • No need per others. Additionally, if practices change, it doesn't mean editors will decide to follow new practices. As for the technology, it seems the situation has been fairly stable for the past two years: we can detect some fakes and hallucinations immediately (many more in the past), but certainly not all retouched elements and all generated photos available right now, even if there were a readily accessible tool or app that enabled ordinary people to reliably do so.
Throughout history, art forgeries have been fairly reliably detected, but rarely quickly. Relatedly, I don't see why the situation with AI images would change in the next 24 months or any similar time period. Daß Wölf 22:17, 9 January 2025 (UTC)[reply]