Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

From olcott <polcott2@gmail.com>
Newsgroups comp.theory, sci.logic, comp.ai.philosophy
Subject Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question
Date 2023-06-18 13:47 -0500
Organization A noiseless patient Spider
Message-ID <u6njgi$1nnnq$1@dont-email.me>
References (14 earlier) <5FGjM.3718$a0G8.2055@fx34.iad> <u6ndou$1n25p$2@dont-email.me> <tpHjM.1403$JLp4.393@fx46.iad> <u6nh17$1ne5g$1@dont-email.me> <0WHjM.9605$8fUf.6382@fx16.iad>



On 6/18/2023 1:20 PM, Richard Damon wrote:
> On 6/18/23 2:05 PM, olcott wrote:
>> On 6/18/2023 12:46 PM, Richard Damon wrote:
>>> On 6/18/23 1:09 PM, olcott wrote:
>>>> On 6/18/2023 11:54 AM, Richard Damon wrote:
>>>>> On 6/18/23 12:41 PM, olcott wrote:
>>>>>> On 6/18/2023 11:31 AM, Richard Damon wrote:
>>>>>>> On 6/18/23 10:32 AM, olcott wrote:
>>>>>>>> On 6/18/2023 7:02 AM, Richard Damon wrote:
>>>>>>>>> On 6/17/23 11:10 PM, olcott wrote:
>>>>>>>>>> On 6/17/2023 9:57 PM, Richard Damon wrote:
>>>>>>>>>>> On 6/17/23 10:29 PM, olcott wrote:
>>>>>>>>>>>> On 6/17/2023 8:31 PM, Richard Damon wrote:
>>>>>>>>>>>>> On 6/17/23 7:58 PM, olcott wrote:
>>>>>>>>>>>>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>>>>>>>>>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>>>>>>>>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Except that the Halting Problem isn't a 
>>>>>>>>>>>>>>>>>> "Self-Contradictory" Question, so
>>>>>>>>>>>>>>>>>> the answer doesn't apply.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> That's an interesting point that would often catch 
>>>>>>>>>>>>>>>>> students out. And
>>>>>>>>>>>>>>>>> the reason /why/ it catches so many out eventually led 
>>>>>>>>>>>>>>>>> me to stop using
>>>>>>>>>>>>>>>>> the proof-by-contradiction argument in my classes.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The thing is, it looks so very much like a 
>>>>>>>>>>>>>>>>> self-contradicting question
>>>>>>>>>>>>>>>>> is being asked.  The students think they can see it 
>>>>>>>>>>>>>>>>> right there in the
>>>>>>>>>>>>>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Of course, they are wrong.  The code is /not/ there. 
>>>>>>>>>>>>>>>>> The code calls a
>>>>>>>>>>>>>>>>> function that does not exist, so "it" (the constructed 
>>>>>>>>>>>>>>>>> code, the whole
>>>>>>>>>>>>>>>>> program) does not exist either.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The fact that it's code, and the students are almost 
>>>>>>>>>>>>>>>>> all programmers and
>>>>>>>>>>>>>>>>> not mathematicians, makes it worse.  A mathematician 
>>>>>>>>>>>>>>>>> seeing "let p be
>>>>>>>>>>>>>>>>> the largest prime" does not assume that such a p 
>>>>>>>>>>>>>>>>> exists. So when a
>>>>>>>>>>>>>>>>> prime number p' > p is constructed from p, this is not 
>>>>>>>>>>>>>>>>> seen as a
>>>>>>>>>>>>>>>>> "self-contradictory number" because neither p nor p' 
>>>>>>>>>>>>>>>>> exist. But the
>>>>>>>>>>>>>>>>> halting theorem is even more deceptive for programmers, 
>>>>>>>>>>>>>>>>> because the
>>>>>>>>>>>>>>>>> desired function, H (or whatever), appears to be so 
>>>>>>>>>>>>>>>>> well defined -- much
>>>>>>>>>>>>>>>>> more well-defined than "the largest prime".  We have an 
>>>>>>>>>>>>>>>>> exact
>>>>>>>>>>>>>>>>> specification for it, mapping arguments to returned 
>>>>>>>>>>>>>>>>> values. It's just
>>>>>>>>>>>>>>>>> software engineering to write such things (they 
>>>>>>>>>>>>>>>>> erroneously assume).
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> These sorts of proof can always be re-worded so as to 
>>>>>>>>>>>>>>>>> avoid the initial
>>>>>>>>>>>>>>>>> assumption.  For example, we can start "let p be any 
>>>>>>>>>>>>>>>>> prime", and from p
>>>>>>>>>>>>>>>>> we construct a prime p' > p.  And for halting, we can 
>>>>>>>>>>>>>>>>> start "let H be
>>>>>>>>>>>>>>>>> any subroutine of two arguments always returning true 
>>>>>>>>>>>>>>>>> or false". Now,
>>>>>>>>>>>>>>>>> all the objects /do/ exist.  In the first case, the 
>>>>>>>>>>>>>>>>> construction shows
>>>>>>>>>>>>>>>>> that no prime is the largest, and in the second it 
>>>>>>>>>>>>>>>>> shows that no
>>>>>>>>>>>>>>>>> subroutine computes the halting function.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This issue led to another change.  In the last couple 
>>>>>>>>>>>>>>>>> of years, I would
>>>>>>>>>>>>>>>>> start the course by setting Post's correspondence 
>>>>>>>>>>>>>>>>> problem as if it were
>>>>>>>>>>>>>>>>> just a fun programming challenge.  As the days passed 
>>>>>>>>>>>>>>>>> (and the course
>>>>>>>>>>>>>>>>> got into more and more serious material) it would start 
>>>>>>>>>>>>>>>>> to become clear
>>>>>>>>>>>>>>>>> that this was no ordinary programming challenge.  Many 
>>>>>>>>>>>>>>>>> students started
>>>>>>>>>>>>>>>>> to suspect that, despite the trivial sounding 
>>>>>>>>>>>>>>>>> specification, no program
>>>>>>>>>>>>>>>>> could do the job.  I always felt a bit uneasy doing 
>>>>>>>>>>>>>>>>> this, as if I was
>>>>>>>>>>>>>>>>> not being 100% honest, but it was a very useful 
>>>>>>>>>>>>>>>>> learning experience for
>>>>>>>>>>>>>>>>> most.
>>>>>>>>>>>>>>>>>
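To make the reworded argument above concrete, here is a minimal C
sketch. The names H, D and ptr, and the 0/1 return convention, are
illustrative assumptions rather than code quoted from anyone in this
thread; H is only declared, not defined, because the argument is about
*any* subroutine of that shape.

typedef void (*ptr)();   /* an arbitrary program, passed by address     */

int H(ptr x, ptr y);     /* let H be any subroutine of two arguments
                            that always returns 1 ("x(y) halts") or
                            0 ("x(y) does not halt")                    */

void D(ptr x)            /* D is constructed from the given H           */
{
    if (H(x, x))         /* if H predicts that D(D) halts ...           */
        for (;;) ;       /* ... then D(D) runs forever                  */
                         /* if H predicts that D(D) does not halt ...   */
}                        /* ... then D(D) returns, i.e. halts           */

/* Whatever fixed value H returns for the arguments (D, D), the
   behavior of D(D) is the opposite of that prediction, so no
   subroutine of this shape computes the halting function. Nothing
   self-contradictory is assumed to exist: every such H exists, and
   for each one the corresponding D is an ordinary program that H
   gets wrong.                                                         */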
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>>>>>>>>>>     You ask someone (we'll call him "Jack") to give a 
>>>>>>>>>>>>>>>> truthful
>>>>>>>>>>>>>>>>     yes/no answer to the following question:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>     Will Jack's answer to this question be no?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>     Jack can't possibly give a correct yes/no answer to 
>>>>>>>>>>>>>>>> the question.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> It is an easily verified fact that when Jack's question is
>>>>>>>>>>>>>>>> posed to Jack, this question is self-contradictory for Jack
>>>>>>>>>>>>>>>> or anyone else having a pathological relationship to the
>>>>>>>>>>>>>>>> question.
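The reason the question has no correct answer for Jack can be checked
mechanically. The short C program below is purely illustrative (the
variable names are assumptions, not anything from the thread): an
answer is correct exactly when its truth value matches the fact it
asserts, namely whether the answer actually given is "no", and no
Boolean value satisfies that.

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
    /* The question: "Will Jack's answer to this question be no?"
       An answer is correct when answer == (the answer given is "no"). */
    for (int answer = 0; answer <= 1; answer++) {
        bool answer_is_no = (answer == 0);
        bool correct      = (answer == (int)answer_is_no);
        printf("Jack answers %-3s -> correct? %s\n",
               answer ? "yes" : "no", correct ? "yes" : "no");
    }
    /* Both lines print "correct? no": neither "yes" nor "no" is a
       correct answer when it is Jack himself who must give it.        */
    return 0;
}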
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But the problem is "Jack" here is assumed to be a 
>>>>>>>>>>>>>>> volitional being.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> H is not, it is a program, so before we even ask H what 
>>>>>>>>>>>>>>> will happen, the answer has been fixed by the definition 
>>>>>>>>>>>>>>> of the code of H.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> It is also clear that when a question has no yes or no answer
>>>>>>>>>>>>>> because it is self-contradictory, this question is aptly
>>>>>>>>>>>>>> classified as incorrect.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>> And the actual question DOES have a yes or no answer. In this
>>>>>>>>>>>>> case, since H(D,D) says 0 (non-Halting), the actual answer to
>>>>>>>>>>>>> the question "does D(D) Halt?" is YES.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You just confuse yourself by trying to imagine a program 
>>>>>>>>>>>>>>> that can somehow change itself "at will".
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> It is incorrect to say that a question is not 
>>>>>>>>>>>>>>>> self-contradictory on the
>>>>>>>>>>>>>>>> basis that it is not self-contradictory in some 
>>>>>>>>>>>>>>>> contexts. If a question
>>>>>>>>>>>>>>>> is self-contradictory in some contexts then in these 
>>>>>>>>>>>>>>>> contexts it is an
>>>>>>>>>>>>>>>> incorrect question.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> In what context is "Does the Machine D(D) Halt When run" 
>>>>>>>>>>>>>>> become self-contradictory?
>>>>>>>>>>>>>> When this question is posed to machine H.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jack could be asked the question:
>>>>>>>>>>>>>> Will Jack answer "no" to this question?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> For Jack it is self-contradictory; for others that are not
>>>>>>>>>>>>>> Jack it is not self-contradictory. Context changes the
>>>>>>>>>>>>>> semantics.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> But you are missing the difference. A Decider is a fixed 
>>>>>>>>>>>>> piece of code, so its answer has always been fixed to this 
>>>>>>>>>>>>> question since it has been designed. Thus what it will say 
>>>>>>>>>>>>> isn't a variable that can lead to the self-contradiction
>>>>>>>>>>>>> cycle, but a fixed result that will either be correct or 
>>>>>>>>>>>>> incorrect.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Every input to a Turing machine decider such that both 
>>>>>>>>>>>> Boolean return
>>>>>>>>>>>> values are incorrect is an incorrect input.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Except it isn't. The problem is you are looking at two 
>>>>>>>>>>> different machines and two different inputs.
>>>>>>>>>>>
>>>>>>>>>> If no one can possibly say what correct return value any H<n>
>>>>>>>>>> having a pathological relationship to its input D<n> could
>>>>>>>>>> provide, then that is proof that D<n> is an invalid input for
>>>>>>>>>> H<n>, in the same way that any self-contradictory question is an
>>>>>>>>>> incorrect question.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> But you have the wrong Question. The Question is "Does D(D)
>>>>>>>>> Halt?", and that HAS a correct answer: since your H(D,D) returns
>>>>>>>>> 0, the answer is that D(D) does Halt, and thus H was wrong.
>>>>>>>>>
>>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>>>>>     yes/no answer to the following question:
>>>>>>>>
>>>>>>>>     Will Jack's answer to this question be no?
>>>>>>>>
>>>>>>>> For Jack the question is self-contradictory; for others that
>>>>>>>> are not Jack it is not self-contradictory.
>>>>>>>>
>>>>>>>> The context (of who is asked) changes the semantics.
>>>>>>>>
>>>>>>>> Every question that lacks a correct yes/no answer because
>>>>>>>> the question is self-contradictory is an incorrect question.
>>>>>>>>
>>>>>>>> If you are not a mere Troll you will agree with this.
>>>>>>>>
>>>>>>>
>>>>>>> But the ACTUAL QUESTION DOES have a correct answer.
>>>>>> The actual question posed to Jack has no correct answer.
>>>>>> The actual question posed to anyone else is a semantically
>>>>>> different question even though the words are the same.
>>>>>>
>>>>>
>>>>> But the question to Jack isn't the question you are actually saying
>>>>> doesn't have an answer.
>>>>>
>>>> The question posed to Jack does not have an answer because, within
>>>> the context in which the question is posed to Jack, it is
>>>> self-contradictory. You can ignore that context matters, yet that is
>>>> not a rebuttal.
>>>>
>>>
>>> Right, but that has ZERO bearing on the Halting Problem,
>> That is great; we made excellent progress on this.
>>
>> Once ChatGPT understood that Jack's question is self-contradictory for
>> Jack, it was also able to understand the following isomorphism:
>>
>> For every H<n> on pathological input D<n> both Boolean return values 
>> from H<n> are incorrect for D<n> proving that D<n> is isomorphic to a 
>> self-contradictory question for every H<n>.
>>
> 
> No, because a given H<n> can only give one result, 
Some of these H<n>/D<n> pairs are identical except for the return
value from H<n>. In both of these cases the return value is incorrect.

Since I have just defined the set of every halting problem {decider /
input} pair that can possibly exist in any universe, there is no rebuttal
of the form: what about this element of the set?
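A minimal sketch of two adjacent elements of that set, under the
illustrative assumption (not taken from anyone else's code) that the
members differ only in the hard-coded return value n: H_n returns n for
every input, D_n is built on H_n in the usual way, and in each pairing
the value H_n(D_n, D_n) disagrees with what D_n(D_n) actually does.

#include <stdio.h>

typedef void (*ptr)();

/* H_0 and H_1 are identical except for the hard-coded return value.   */
int H_0(ptr x, ptr y) { (void)x; (void)y; return 0; }  /* "does not halt" */
int H_1(ptr x, ptr y) { (void)x; (void)y; return 1; }  /* "halts"         */

/* Each D_n does the opposite of what its own H_n predicts about D_n(D_n). */
void D_0(ptr x) { if (H_0(x, x)) for (;;) ; }  /* H_0 says 0, D_0(D_0) halts */
void D_1(ptr x) { if (H_1(x, x)) for (;;) ; }  /* H_1 says 1, D_1(D_1) loops */

int main(void)
{
    /* n = 0: run it and watch it halt, contradicting H_0's 0.          */
    D_0((ptr)D_0);
    printf("H_0(D_0,D_0) == 0, but D_0(D_0) just halted -> 0 is wrong\n");

    /* n = 1: D_1(D_1) is not called here because it would loop forever;
       by construction the if (H_1(...)) branch is taken, so H_1's 1
       ("halts") is also the wrong answer about D_1(D_1).               */
    printf("H_1(D_1,D_1) == 1, but D_1(D_1) never halts -> 1 is wrong\n");
    return 0;
}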



-- 
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
