


Re: Hallucinated citations are polluting the scientific literature. What can be done?

From phoenix <j63840576@gmail.com>
Newsgroups sci.edu, comp.ai.philosophy, alt.books, alt.politics.media, rec.arts.books, misc.writing
Subject Re: Hallucinated citations are polluting the scientific literature. What can be done?
Date 2026-05-12 06:44 -0600
Message-ID <n6gll4Fs3ppU1@mid.individual.net> (permalink)
References <20260512.024503.fea93242@mixmin.net> <eb950l1ibvjm1povpc1s3q808nlro4bitq@4ax.com>




Steve Hayes wrote:
> On Tue, 12 May 2026 02:45:03 +0100, Lawfare Review
> <noreply@mixmin.net> wrote:
> 
>> Earlier this year, computer scientist Guillaume Cabanac received a
>> notification from Google Scholar that one of his publications had
>> been cited in a paper published in the International Dental Journal.
>> That was unexpected, because his research on spotting fabricated
>> papers doesn’t typically intersect with dentistry. “I was very
>> surprised to see that I couldn’t recognize my own reference,”
>> says Cabanac, who is based at the University of Toulouse in France.
>>
>> https://www.nature.com/articles/d41586-026-00969-z
> 
> A friend of mine got an LLM bot (Claude) to write an academic paper,
> which he then sent to me. I read it as I would if I had been asked to
> do a peer review for a journal, and he gave that feedback to the bot,
> and sent the revised paper to a journal, which, unsurprisingly (to me
> at any rate), rejected it.

Were you just hung over that day or did you do a shoddy job because you 
look down on LLMs? If you had reviewed it better, you know it would have 
been accepted. You probably saw the guy carrying a blanket around and 
judged him as a bad person and thus gave a bad review. Or you were just 
hung over. Next time do better, okay?

> He also got Claude to produce a bibliography of works relevant to the
> topic, which did turn out to be quite useful, though it did need
> careful checking for the avoidance of hallucinations, as described
> above. It gave several bogus URLs. It appeared that such bots could be
> a useful supplement to (but not a replacement for) the work of
> reference librarians.
> 
> I then asked my friend to test Claude's generative ability with
> fiction -- got him to submit to it the first two chapters of an
> unpublished novel I had written, and complete it, so I could then
> compare the result with what I had actually written. The first couple
> of chapters it produced were quite entertaining, but after that it
> began to go off the rails. It was a children's book, and it began to
> have child characters talking and behaving like adults. The plot was
> thin, and turned on a complex legal point that was difficult for
> adults to follow, and would probably have bored any child reader out
> of their skull. It had the setting switching back and forth between
> spring and autumn, with lyrical descriptions of spring blossoms in
> one chapter, and falling leaves the next.
> 
> My friend fed my comments to Claude and has sent me back a revised
> text. I haven't read it yet.
> 
> LLM bots can be useful tools but they are not AI, and are not
> reliable. They are not intelligent or sentient (though the programmers
> of some of them try to make them appear so). They do not "understand"
> what they are fed, or what they spit out. Their use in education
> should be limited to what they are good at, and one needs education
> apart from LLM bots to be able to discern what they are good at and
> what they are not good at. If students use them to write essays, they
> will not learn that discernment.
> 
> My friend who submitted the journal article to Claude is a nuclear
> physicist, but the article he got it to write was in my field, not
> his, and he rather naively trusted what Claude spat out. If it had
> been in his field, his bullshit detectors would have been better
> equipped to deal with it.
> 
> In the age of so-called AI, educators need to give serious thought to
> better ways of honing students' bullshit detectors.

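The bogus-URL problem Steve describes is at least partly machine-checkable before any librarian time gets spent on it. Here's a minimal sketch (function name and regexes are my own, purely illustrative) that flags syntactically broken URLs and DOIs in a bibliography entry. A clean result does NOT mean the reference is real -- it only catches the obviously fake ones; actually verifying a reference means resolving the DOI or fetching the URL, which this deliberately doesn't do:

```python
import re
from urllib.parse import urlparse

# DOIs start with "10.", a 4-9 digit registrant code, then a suffix.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def screen_reference(ref):
    """Return a list of red flags found in one bibliography entry.

    Purely syntactic checks: malformed URLs and malformed DOIs.
    An empty list means "not obviously bogus", nothing stronger.
    """
    flags = []
    for url in re.findall(r"https?://\S+", ref):
        parsed = urlparse(url.rstrip(".,;)"))
        # A real hostname has at least one dot (e.g. nature.com).
        if not parsed.netloc or "." not in parsed.netloc:
            flags.append("malformed URL: " + url)
    for doi in re.findall(r"doi:\s*(\S+)", ref, re.IGNORECASE):
        if not DOI_RE.match(doi.rstrip(".,;)")):
            flags.append("malformed DOI: " + doi)
    return flags
```

Running every entry of an LLM-produced bibliography through something like this is the cheap first pass; the surviving entries still need the careful human checking Steve describes.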

-- 
War in the east
War in the west
War up north
War down south
War War


