
Re: Chatbots produce bogus citations

From vallor <vallor@vallor.earth>
Subject Re: Chatbots produce bogus citations
Newsgroups comp.ai.shells
References <1qbffa5.1adff801ndrrcbN%snipeco.2@gmail.com> <y4OdM.3421888$iS99.3297960@fx16.iad> <1qbnhi9.3p966e1nlu1yjN%snipeco.2@gmail.com> <ue6eM.2302478$gGD7.405554@fx11.iad> <1qbnyg1.11uoqwl1ybtsjsN%snipeco.2@gmail.com>
Message-ID <9s8gM.975$tol1.585@fx09.iad> (permalink)
Organization blocknews - www.blocknews.net
Date 2023-06-07 23:32 +0000



On Thu, 1 Jun 2023 20:45:36 +0100, Sn!pe wrote:

> vallor <vallor@vallor.earth> wrote:
> 
>> On Thu, 1 Jun 2023 14:33:13 +0100, Sn!pe wrote:
>> 
>> > vallor <vallor@vallor.earth> wrote:
>> > 
>> >> On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:
>> >> 
>> >> > Lawyer admits using AI for research after citing 'bogus' cases
>> >> > from ChatGPT.
>> > 
>> > <https://tinyurl.com/yntupbe4>
>> >  
>> >> Poor example of "don't trust, do verify".
>> >> 
>> >> 
>> > Hence my earlier (unchallenged) point that ChatGPT does not provide
>> > citations.
>> > 
>> > 
>> It does provide citations sometimes.  And sometimes, those citations
>> are bogus, which is what happened to our hero the cyberlawyer...
>> 
>> 
> You'd think that an educated person like a lawyer would check.
> I wonder how many naïve people would bother, rather than just accept the
> results as facts.  I hear that some people even believe what they read
> in the papers or see on TV  (strange but true).
> 
> [...]


So is that the last word on these AI shells?  I still
use them, even though the novelty has worn off a bit.  (I'd
still be much more interested if they were "answer machines"
instead of "say what sounds good" machines. :)

I don't trust them, but do verify them -- and I recommend
others do the same.
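(For anyone who wants to automate a first pass on "do verify": here is a
minimal sketch, assuming the chatbot's output cites U.S. cases in standard
reporter form.  It only checks that a string is *shaped* like a citation --
the lawyer's bogus cases were plausibly formatted, so passing this test
proves nothing; real verification still means looking the case up in an
actual docket database.)

```python
import re

# Matches common U.S. reporter citations such as "123 F.3d 456",
# "410 U.S. 113", or "999 F. Supp. 2d 100".  A hit means the text
# merely *looks* like a citation; a fabricated case can pass this
# check, so treat it as a garbage filter, not verification.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                     # volume number
    r"(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)"
    r"\s+\d{1,4}\b"                                     # first page
)

def looks_like_citation(text: str) -> bool:
    """Return True if the text contains something shaped like a citation."""
    return bool(CITATION_RE.search(text))
```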

(Still, is it not amusing to see what it comes up with on its own?)

-- 
-v



Thread

Lawyer admits using AI for research after citing 'bogus' cases from ChatGPT snipeco.2@gmail.com (Sn!pe) - 2023-05-28 06:17 +0100
  Re: Lawyer admits using AI for research after citing 'bogus' cases from ChatGPT Nic <Nic@none.net> - 2023-05-28 08:29 -0400
  Re: Lawyer admits using AI for research after citing 'bogus' cases from ChatGPT Blue-Maned_Hawk <bluemanedhawk@gmail.com> - 2023-05-28 15:53 -0400
  Re: Lawyer admits using AI for research after citing 'bogus' cases from ChatGPT vallor <vallor@vallor.earth> - 2023-05-31 20:27 +0000
    Re: Lawyer admits using AI for research after citing 'bogus' cases from ChatGPT snipeco.2@gmail.com (Sn!pe) - 2023-06-01 14:33 +0100
        Chatbots produce bogus citations (was: Re: Lawyer admits using AI for research after citing 'bogus' cases from ChatGPT) vallor <vallor@vallor.earth> - 2023-06-01 19:23 +0000
        Re: Chatbots produce bogus citations snipeco.2@gmail.com (Sn!pe) - 2023-06-01 20:45 +0100
          Re: Chatbots produce bogus citations vallor <vallor@vallor.earth> - 2023-06-07 23:32 +0000
            Re: Chatbots produce bogus citations Andy Burns <usenet@andyburns.uk> - 2023-06-08 06:07 +0100
              Re: Chatbots produce bogus citations snipeco.2@gmail.com (Sn!pe) - 2023-06-08 10:37 +0100
              Re: Chatbots produce bogus citations Nic <Nic@none.net> - 2023-06-08 10:46 -0400
