
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

From Jeff Barnett <jbb@notatt.com>
Newsgroups comp.theory, sci.logic, comp.ai.philosophy
Subject Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question
Date 2023-06-17 16:03 -0600
Organization A noiseless patient Spider
Message-ID <u6lak5$1ck00$1@dont-email.me>
References <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad> <871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>



On 6/17/2023 3:46 PM, olcott wrote:
> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>> Richard Damon <Richard@Damon-Family.org> writes:
>>
>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>> Question, so the answer doesn't apply.
>>
>> That's an interesting point that would often catch students out.  And
>> the reason /why/ it catches so many out eventually led me to stop using
>> the proof-by-contradiction argument in my classes.
>>
>> The thing is, it looks so very much like a self-contradicting question
>> is being asked.  The students think they can see it right there in the
>> constructed code: "if H says I halt, I don't halt!".
>>
>> Of course, they are wrong.  The code is /not/ there.  The code calls a
>> function that does not exist, so "it" (the constructed code, the whole
>> program) does not exist either.
>>
>> The fact that it's code, and the students are almost all programmers and
>> not mathematicians, makes it worse.  A mathematician seeing "let p be
>> the largest prime" does not assume that such a p exists.  So when a
>> prime number p' > p is constructed from p, this is not seen as a
>> "self-contradictory number" because neither p nor p' exist.  But the
>> halting theorem is even more deceptive for programmers, because the
>> desired function, H (or whatever), appears to be so well defined -- much
>> more well-defined than "the largest prime".  We have an exact
>> specification for it, mapping arguments to returned values.  It's just
>> software engineering to write such things (they erroneously assume).
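
A minimal sketch of the mathematician's construction, in Python,
assuming the classical p! + 1 trick (the function name is an
illustrative choice, not anything from the post):

    # From any prime p we can build a prime p' > p, so "the largest
    # prime" never existed in the first place.
    from math import factorial

    def larger_prime(p: int) -> int:
        n = factorial(p) + 1   # divisible by no integer in 2..p
        d = 2
        while n % d:           # smallest divisor > 1 of n is prime
            d += 1
        return d               # ...and here it must exceed p

    print(larger_prime(5))     # 11, since 5! + 1 = 121 = 11 * 11
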
>>
>> These sorts of proof can always be re-worded so as to avoid the initial
>> assumption.  For example, we can start "let p be any prime", and from p
>> we construct a prime p' > p.  And for halting, we can start "let H be
>> any subroutine of two arguments always returning true or false".  Now,
>> all the objects /do/ exist.  In the first case, the construction shows
>> that no prime is the largest, and in the second it shows that no
>> subroutine computes the halting function.
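
A minimal sketch of that reworded argument, in Python, which also
renders the code the students think they can see (the names make_D
and D and the always-True candidate H are illustrative assumptions):

    # Take ANY candidate subroutine H(program, argument) -> bool and
    # build a program D that H misclassifies; hence no such
    # subroutine computes the halting function.
    def make_D(H):
        def D():
            if H(D, D):        # if H says D halts on itself...
                while True:    # ...then D loops forever,
                    pass
            return             # ...otherwise D halts.
        return D

    def H(program, argument):  # one arbitrary candidate decider:
        return True            # it always answers "halts"

    D = make_D(H)
    print(H(D, D))  # True -- yet calling D() would loop forever, so
                    # this H is wrong about D; the same construction
                    # refutes every candidate H.
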
>>
>> This issue led to another change.  In the last couple of years, I would
>> start the course by setting Post's correspondence problem as if it were
>> just a fun programming challenge.  As the days passed (and the course
>> got into more and more serious material) it would start to become clear
>> that this was no ordinary programming challenge.  Many students started
>> to suspect that, despite the trivial sounding specification, no program
>> could do the job.  I always felt a bit uneasy doing this, as if I was
>> not being 100% honest, but it was a very useful learning experience for
>> most.
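
For concreteness, a minimal sketch of that challenge in Python.
Post's correspondence problem: given string pairs tops[i]/bots[i],
find a nonempty index sequence making the two concatenations equal.
The breadth-first search and its max_len guard are illustrative
choices, not part of the problem statement:

    from collections import deque

    def pcp_search(tops, bots, max_len=12):
        queue = deque([i] for i in range(len(tops)))
        while queue:
            seq = queue.popleft()
            a = "".join(tops[i] for i in seq)
            b = "".join(bots[i] for i in seq)
            if a == b:
                return seq
            # extend only while one side is a prefix of the other
            if len(seq) < max_len and (a.startswith(b) or b.startswith(a)):
                queue.extend(seq + [i] for i in range(len(tops)))
        return None

    # A classic solvable instance: [2, 1, 2, 0] makes both sides
    # read "bbaabbbaa".
    print(pcp_search(["a", "ab", "bba"], ["baa", "aa", "bb"]))

Without a bound like max_len, the search may run forever on a "no"
instance, which is exactly why this is no ordinary programming
challenge.
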
>>
> 
> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>     You ask someone (we'll call him "Jack") to give a truthful
>     yes/no answer to the following question:
> 
>     Will Jack's answer to this question be no?
> 
>     Jack can't possibly give a correct yes/no answer to the question.
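
A short enumeration, in Python, makes the verification explicit (a
sketch; encoding the answers as strings is an illustrative choice):

    # Try both answers Jack could give to "Will Jack's answer to
    # this question be no?" and check each for truthfulness.
    for jacks_answer in ("yes", "no"):
        truthful_answer = "yes" if jacks_answer == "no" else "no"
        verdict = "correct" if jacks_answer == truthful_answer else "incorrect"
        print(f"Jack answers {jacks_answer!r}: {verdict}")
    # Both answers come out incorrect: there is no correct yes/no
    # answer for Jack, though an outside observer can answer once
    # Jack has spoken.
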
> 
> It is an easily verified fact that when Jack's question is posed to
> Jack, the question is self-contradictory for Jack or anyone else
> having a pathological relationship to the question.
> 
> It is also clear that when a question has no yes or no answer
> because it is self-contradictory, that question is aptly classified
> as incorrect.
> 
> It is incorrect to say that a question is not self-contradictory on the
> basis that it is not self-contradictory in some contexts. If a question
> is self-contradictory in some contexts then in these contexts it is an
> incorrect question.
> 
> When we clearly understand the truth of this, then and only then do
> we have the means to overcome the enormous inertia of the [received
> view], the conventional wisdom regarding decision problems that are
> only undecidable because of pathological relationships.
> 
> Because of the brilliant work of Daryl McCullough we can see the actual
> reality behind decision problems that are undecidable because of their
> pathological relationships.
> 
> It only took ChatGPT a few hours and 60 pages of dialogue
> to understand and agree with this.
> https://www.liarparadox.org/ChatGPT_HP.pdf
> 
> ChatGPT:
>    "Therefore, based on the understanding that self-contradictory
>     questions lack a correct answer and are deemed incorrect, one could
>     argue that the halting problem's pathological input D can be
>     categorized as an incorrect question when posed to the halting
>     decider H."
Ben was describing an improved approach to teaching some theoretical 
results to CS pupils. Those pupils were assumed to have some grounding 
in practical aspects such as programming, and at least a small interest 
and competence in basic mathematics. You seem not to have been there 
when god handed out those basic components of a human brain. You are 
neither the exception nor the rule; just an arrogant dumb fuck.

By the way, we have noticed that you haven't played the big "C" card 
recently. Is this 1) an immaculate cure, 2) you putting on your big boy 
pants and taking responsibility for your own sorry life and mind, or 3) 
the time when you try to wiggle out of a past sequence of lies? We've 
seen all but variation 2 in past interactions. The curious want to know 
the real skinny, so speak up!
-- 
Jeff Barnett



Thread

ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 00:54 -0500
  Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 08:09 -0400
    Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 11:59 -0500
      Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 13:43 -0400
        Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 13:23 -0500
          Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 16:27 -0400
    Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Ben Bacarisse <ben.usenet@bsb.me.uk> - 2023-06-17 22:09 +0100
      Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 16:46 -0500
        Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Jeff Barnett <jbb@notatt.com> - 2023-06-17 16:03 -0600
          Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 19:18 -0400
            Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 18:44 -0500
              Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 21:46 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 21:35 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 23:03 -0400
        Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 19:13 -0400
          Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 18:58 -0500
            Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 21:31 -0400
              Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 21:29 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 22:57 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 22:10 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 08:02 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 09:32 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 12:31 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 11:41 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 12:54 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 12:09 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 13:46 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 13:05 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 14:20 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 13:30 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 14:43 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 13:47 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 15:19 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 14:26 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 16:10 -0400
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 18:43 -0500
                Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 19:59 -0400
                Does input D have semantic property S or is input D [BAD INPUT]? olcott <polcott2@gmail.com> - 2023-06-18 22:31 -0500
                Re: Does input D have semantic property S or is input D [BAD INPUT]? Richard Damon <Richard@Damon-Family.org> - 2023-06-19 07:38 -0400
                Re: Does input D have semantic property S or is input D [BAD INPUT]? olcott <polcott2@gmail.com> - 2023-06-19 09:30 -0500
                Re: Does input D have semantic property S or is input D [BAD INPUT]? Richard Damon <Richard@Damon-Family.org> - 2023-06-19 20:45 -0400
                Re: Does input D have semantic property S or is input D [BAD INPUT]? olcott <polcott2@gmail.com> - 2023-06-19 22:57 -0500
                Re: Does input D have semantic property S or is input D [BAD INPUT]? Don Stockbauer <donstockbauer@hotmail.com> - 2023-06-20 00:33 -0700
                ChatGPT discussion (was: Re: Does input D have semantic property S or is input D [BAD INPUT]? vallor <vallor@vallor.earth> - 2023-06-20 11:16 +0000
                Re: Does input D have semantic property S or is input D [BAD INPUT]? Richard Damon <Richard@Damon-Family.org> - 2023-06-20 07:19 -0400
                Re: Does input D have semantic property S or is input D [BAD INPUT]? olcott <polcott2@gmail.com> - 2023-06-20 10:09 -0500
                Re: Does input D have semantic property S or is input D [BAD INPUT]? Richard Damon <Richard@Damon-Family.org> - 2023-06-20 11:48 -0400
  Ben Bacarisse specifically targets my posts to discourage honest dialogue olcott <polcott2@gmail.com> - 2023-06-20 10:06 -0500
    Re: Ben Bacarisse specifically targets my posts to discourage honest dialogue Richard Damon <Richard@Damon-Family.org> - 2023-06-20 11:48 -0400
      Re: dishonest subject lines Ben Bacarisse <ben.usenet@bsb.me.uk> - 2023-06-20 17:02 +0100
        Ben Bacarisse specifically targets my posts to discourage honest dialogue olcott <polcott2@gmail.com> - 2023-06-20 12:25 -0500
  Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] olcott <polcott2@gmail.com> - 2023-06-20 14:57 -0500
    Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] Richard Damon <Richard@Damon-Family.org> - 2023-06-20 16:34 -0400
      Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] olcott <polcott2@gmail.com> - 2023-06-20 15:42 -0500
        Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] Richard Damon <Richard@Damon-Family.org> - 2023-06-20 16:52 -0400
          Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] olcott <polcott2@gmail.com> - 2023-06-20 16:39 -0500
            Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] Richard Damon <Richard@Damon-Family.org> - 2023-06-20 17:53 -0400
              Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] olcott <polcott2@gmail.com> - 2023-06-20 17:07 -0500
                Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] Richard Damon <Richard@Damon-Family.org> - 2023-06-20 18:52 -0400
  Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts] olcott <polcott2@gmail.com> - 2023-06-20 14:59 -0500
  Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts] olcott <polcott2@gmail.com> - 2023-06-20 15:00 -0500
  ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question vallor <vallor@cultnix.org> - 2023-06-21 19:10 +0000
    Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question vallor <vallor@vallor.earth> - 2023-06-21 19:23 +0000
    Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-21 14:59 -0500
      Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-21 19:01 -0400
        Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-21 19:40 -0500
          Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-21 22:47 -0400
            Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-21 21:58 -0500
              Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-22 07:26 -0400
                Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-22 09:18 -0500
                Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-22 21:06 -0400
