| Subject | Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question |
|---|---|
| Newsgroups | comp.theory, sci.logic, comp.ai.philosophy |
| References | (19 earlier) <u6njgi$1nnnq$1@dont-email.me> <NMIjM.3721$a0G8.1033@fx34.iad> <u6nlol$1nnnq$4@dont-email.me> <dxJjM.614$L836.450@fx47.iad> <u6o4r4$1q1sr$2@dont-email.me> |
| From | Richard Damon <Richard@Damon-Family.org> |
| Message-ID | <lTMjM.289$_%y4.154@fx48.iad> |
| Organization | Forte - www.forteinc.com |
| Date | 2023-06-18 19:59 -0400 |
Cross-posted to 3 groups.
On 6/18/23 7:43 PM, olcott wrote:
> On 6/18/2023 3:10 PM, Richard Damon wrote:
>> On 6/18/23 3:26 PM, olcott wrote:
>>> On 6/18/2023 2:19 PM, Richard Damon wrote:
>>>> On 6/18/23 2:47 PM, olcott wrote:
>>>>> On 6/18/2023 1:20 PM, Richard Damon wrote:
>>>>>> On 6/18/23 2:05 PM, olcott wrote:
>>>>>>> On 6/18/2023 12:46 PM, Richard Damon wrote:
>>>>>>>> On 6/18/23 1:09 PM, olcott wrote:
>>>>>>>>> On 6/18/2023 11:54 AM, Richard Damon wrote:
>>>>>>>>>> On 6/18/23 12:41 PM, olcott wrote:
>>>>>>>>>>> On 6/18/2023 11:31 AM, Richard Damon wrote:
>>>>>>>>>>>> On 6/18/23 10:32 AM, olcott wrote:
>>>>>>>>>>>>> On 6/18/2023 7:02 AM, Richard Damon wrote:
>>>>>>>>>>>>>> On 6/17/23 11:10 PM, olcott wrote:
>>>>>>>>>>>>>>> On 6/17/2023 9:57 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>> On 6/17/23 10:29 PM, olcott wrote:
>>>>>>>>>>>>>>>>> On 6/17/2023 8:31 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>> On 6/17/23 7:58 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>>>>>>>>>>>>>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>>>>>>>>>>>>>>>>>> Except that the Halting Problem isn't a "Self-Contradictory" Question, so the answer doesn't apply.

>>>>>>>>>>>>>>>>>>>>>> That's an interesting point that would often catch students out. And the reason /why/ it catches so many out eventually led me to stop using the proof-by-contradiction argument in my classes.

>>>>>>>>>>>>>>>>>>>>>> The thing is, it looks so very much like a self-contradicting question is being asked. The students think they can see it right there in the constructed code: "if H says I halt, I don't halt!".

>>>>>>>>>>>>>>>>>>>>>> Of course, they are wrong. The code is /not/ there. The code calls a function that does not exist, so "it" (the constructed code, the whole program) does not exist either.

>>>>>>>>>>>>>>>>>>>>>> The fact that it's code, and the students are almost all programmers and not mathematicians, makes it worse. A mathematician seeing "let p be the largest prime" does not assume that such a p exists. So when a prime number p' > p is constructed from p, this is not seen as a "self-contradictory number" because neither p nor p' exist. But the halting theorem is even more deceptive for programmers, because the desired function, H (or whatever), appears to be so well defined -- much more well-defined than "the largest prime". We have an exact specification for it, mapping arguments to returned values. It's just software engineering to write such things (they erroneously assume).

>>>>>>>>>>>>>>>>>>>>>> These sorts of proof can always be re-worded so as to avoid the initial assumption. For example, we can start "let p be any prime", and from p we construct a prime p' > p. And for halting, we can start "let H be any subroutine of two arguments always returning true or false". Now, all the objects /do/ exist. In the first case, the construction shows that no prime is the largest, and in the second it shows that no subroutine computes the halting function.

>>>>>>>>>>>>>>>>>>>>>> This issue led to another change. In the last couple of years, I would start the course by setting Post's correspondence problem as if it were just a fun programming challenge. As the days passed (and the course got into more and more serious material) it would start to become clear that this was no ordinary programming challenge. Many students started to suspect that, despite the trivial sounding specification, no program could do the job. I always felt a bit uneasy doing this, as if I was not being 100% honest, but it was a very useful learning experience for most.

>>>>>>>>>>>>>>>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>>>>>>>>>>>>>>> You ask someone (we'll call him "Jack") to give a truthful yes/no answer to the following question:
>>>>>>>>>>>>>>>>>>>>> Will Jack's answer to this question be no?
>>>>>>>>>>>>>>>>>>>>> Jack can't possibly give a correct yes/no answer to the question.

>>>>>>>>>>>>>>>>>>>>> It is an easily verified fact that when Jack's question is posed to Jack that this question is self-contradictory for Jack or anyone else having a pathological relationship to the question.

>>>>>>>>>>>>>>>>>>>> But the problem is "Jack" here is assumed to be a volitional being.

>>>>>>>>>>>>>>>>>>>> H is not, it is a program, so before we even ask H what will happen, the answer has been fixed by the definition of the code of H.

>>>>>>>>>>>>>>>>>>>>> It is also clear that when a question has no yes or no answer because it is self-contradictory that this question is aptly classified as incorrect.

>>>>>>>>>>>>>>>>>>>> And the actual question DOES have a yes or no answer, in this case, since H(D,D) says 0 (non-Halting) the actual answer to the question "does D(D) Halt?" is YES.

>>>>>>>>>>>>>>>>>>>> You just confuse yourself by trying to imagine a program that can somehow change itself "at will".

>>>>>>>>>>>>>>>>>>>>> It is incorrect to say that a question is not self-contradictory on the basis that it is not self-contradictory in some contexts. If a question is self-contradictory in some contexts then in these contexts it is an incorrect question.

>>>>>>>>>>>>>>>>>>>> In what context does "Does the Machine D(D) Halt When run" become self-contradictory?

>>>>>>>>>>>>>>>>>>> When this question is posed to machine H.

>>>>>>>>>>>>>>>>>>> Jack could be asked the question:
>>>>>>>>>>>>>>>>>>> Will Jack answer "no" to this question?

>>>>>>>>>>>>>>>>>>> For Jack it is self-contradictory; for others that are not Jack it is not self-contradictory. Context changes the semantics.

>>>>>>>>>>>>>>>>>> But you are missing the difference. A Decider is a fixed piece of code, so its answer has always been fixed to this question since it has been designed. Thus what it will say isn't a variable that can lead to the self-contradiction cycle, but a fixed result that will either be correct or incorrect.

>>>>>>>>>>>>>>>>> Every input to a Turing machine decider such that both Boolean return values are incorrect is an incorrect input.

>>>>>>>>>>>>>>>> Except it isn't. The problem is you are looking at two different machines and two different inputs.

>>>>>>>>>>>>>>> If no one can possibly correctly answer what the correct return value that any H<n> having a pathological relationship to its input D<n> could possibly provide then that is proof that D<n> is an invalid input for H<n> in the same way that any self-contradictory question is an incorrect question.

>>>>>>>>>>>>>> But you have the wrong Question. The Question is "Does D(D) Halt?", and that HAS a correct answer, since your H(D,D) returns 0, the answer is that D(D) does Halt, and thus H was wrong.

>>>>>>>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>>>>>>> You ask someone (we'll call him "Jack") to give a truthful yes/no answer to the following question:
>>>>>>>>>>>>> Will Jack's answer to this question be no?

>>>>>>>>>>>>> For Jack the question is self-contradictory; for others that are not Jack it is not self-contradictory.

>>>>>>>>>>>>> The context (of who is asked) changes the semantics.

>>>>>>>>>>>>> Every question that lacks a correct yes/no answer because the question is self-contradictory is an incorrect question.

>>>>>>>>>>>>> If you are not a mere Troll you will agree with this.

>>>>>>>>>>>> But the ACTUAL QUESTION DOES have a correct answer.

>>>>>>>>>>> The actual question posed to Jack has no correct answer. The actual question posed to anyone else is a semantically different question even though the words are the same.

>>>>>>>>>> But the question to Jack isn't the question you are actually saying doesn't have an answer.

>>>>>>>>> The question posed to Jack does not have an answer because within the context that the question is posed to Jack it is self-contradictory. You can ignore that context matters yet that is not any rebuttal.

>>>>>>>> Right, but that has ZERO bearing on the Halting Problem,

>>>>>>> That is great we made excellent progress on this.

>>>>>>> When ChatGPT understood that Jack's question is self-contradictory for Jack then it was also able to understand the following isomorphism:

>>>>>>> For every H<n> on pathological input D<n> both Boolean return values from H<n> are incorrect for D<n> proving that D<n> is isomorphic to a self-contradictory question for every H<n>.

>>>>>> No, because a given H<n> can only give one result,

>>>>> Some of the elements of H<n>/D<n> are identical except for the return value from H. In both of these cases the return value is incorrect.

>>>> Nope, can't be.

>>> The only difference between otherwise identical pairs of pairs H<n>/D<n> and H<m>/D<m> is the single integer values of 0/1 within H<n> and H<m> respectively thus proving that both True and False are the wrong return value for the identical finite string pairs D<n>/D<m>.

>> So they are different programs. Different is different. Almost the same is not the same.

>> Unless you are claiming that 1 is the same as 0, they are different.

>> So, your claim is based on a LIE, or you are admitting you are insane.

> The key difference with my work that is a true innovation in this field is that H specifically recognizes self-contradictory inputs and rejects them.

> *Termination Analyzer H prevents Denial of Service attacks*
> https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_prevents_Denial_of_Service_attacks

Except the input isn't self-contradictory, since the input can't exist until H is defined, and once H is defined, the input has definite behavior, so there is no self-contradiction possible, only error.

Since the H that you are analyzing isn't actually a program yet (its behavior has not been fixed), the point where you hit your contradiction is just in the DESIGN phase, showing that no H that meets the requirements can be built, proving the theorem you claim to be refuting, showing yourself to be a LIAR.

You are just showing you don't understand what a program actually is.
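For readers trying to follow the argument, here is a minimal, self-contained C sketch of the construction the thread keeps circling. It is nobody's actual code from the thread: the body of H below is a placeholder assumption (it always answers 0, "does not halt"), and the two-argument H(D,D) discussed above is simplified to a one-argument H(D) so the example compiles. It only illustrates the point made above: once the code of H is fixed, D is a fixed program with definite behavior, and whichever fixed verdict H returns comes out wrong for D.

```c
/* Minimal sketch (not anyone's actual code) of the H/D construction
 * discussed above. H is a placeholder "halt decider"; its fixed verdict
 * is an assumption chosen only for illustration. */
#include <stdio.h>

int D(void);                 /* forward declaration of the "pathological" program D */

/* Placeholder decider, specialized here to the single input D.
 * Returns 1 to claim "D() halts", 0 to claim "D() does not halt". */
int H(int (*p)(void))
{
    (void)p;                 /* a real decider would analyze p's code */
    return 0;                /* assumed fixed verdict: "does not halt" */
}

/* The "pathological" input: built to do the opposite of H's verdict. */
int D(void)
{
    if (H(D))                /* if H says "D halts" ... */
        for (;;) { }         /* ... loop forever */
    return 0;                /* otherwise halt immediately */
}

int main(void)
{
    printf("H's verdict on D: %d (0 = does not halt)\n", H(D));
    printf("D() returned %d, i.e. D in fact halts, so the fixed verdict was wrong\n", D());
    return 0;
}
```

Swapping in any other fixed body for H (Ben Bacarisse's "let H be any subroutine of two arguments always returning true or false" framing) just flips which branch D takes, so each candidate decider is wrong on its corresponding D; on that account the construction shows that no such subroutine computes the halting function, rather than posing a self-contradictory question.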
ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 00:54 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 08:09 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 11:59 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 13:43 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 13:23 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 16:27 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Ben Bacarisse <ben.usenet@bsb.me.uk> - 2023-06-17 22:09 +0100
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 16:46 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Jeff Barnett <jbb@notatt.com> - 2023-06-17 16:03 -0600
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 19:18 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 18:44 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 21:46 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 21:35 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 23:03 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 19:13 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 18:58 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 21:31 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 21:29 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-17 22:57 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-17 22:10 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 08:02 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 09:32 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 12:31 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 11:41 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 12:54 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 12:09 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 13:46 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 13:05 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 14:20 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 13:30 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 14:43 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 13:47 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 15:19 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 14:26 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 16:10 -0400
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-18 18:43 -0500
Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-18 19:59 -0400
Does input D have semantic property S or is input D [BAD INPUT]? olcott <polcott2@gmail.com> - 2023-06-18 22:31 -0500
Re: Does input D have semantic property S or is input D [BAD INPUT]? Richard Damon <Richard@Damon-Family.org> - 2023-06-19 07:38 -0400
Re: Does input D have semantic property S or is input D [BAD INPUT]? olcott <polcott2@gmail.com> - 2023-06-19 09:30 -0500
Re: Does input D have semantic property S or is input D [BAD INPUT]? Richard Damon <Richard@Damon-Family.org> - 2023-06-19 20:45 -0400
Re: Does input D have semantic property S or is input D [BAD INPUT]? olcott <polcott2@gmail.com> - 2023-06-19 22:57 -0500
Re: Does input D have semantic property S or is input D [BAD INPUT]? Don Stockbauer <donstockbauer@hotmail.com> - 2023-06-20 00:33 -0700
ChatGPT discussion (was: Re: Does input D have semantic property S or is input D [BAD INPUT]? vallor <vallor@vallor.earth> - 2023-06-20 11:16 +0000
Re: Does input D have semantic property S or is input D [BAD INPUT]? Richard Damon <Richard@Damon-Family.org> - 2023-06-20 07:19 -0400
Re: Does input D have semantic property S or is input D [BAD INPUT]? olcott <polcott2@gmail.com> - 2023-06-20 10:09 -0500
Re: Does input D have semantic property S or is input D [BAD INPUT]? Richard Damon <Richard@Damon-Family.org> - 2023-06-20 11:48 -0400
Ben Bacarisse specifically targets my posts to discourage honest dialogue olcott <polcott2@gmail.com> - 2023-06-20 10:06 -0500
Re: Ben Bacarisse specifically targets my posts to discourage honest dialogue Richard Damon <Richard@Damon-Family.org> - 2023-06-20 11:48 -0400
Re: dishonest subject lines Ben Bacarisse <ben.usenet@bsb.me.uk> - 2023-06-20 17:02 +0100
Ben Bacarisse specifically targets my posts to discourage honest dialogue olcott <polcott2@gmail.com> - 2023-06-20 12:25 -0500
Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] olcott <polcott2@gmail.com> - 2023-06-20 14:57 -0500
Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] Richard Damon <Richard@Damon-Family.org> - 2023-06-20 16:34 -0400
Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] olcott <polcott2@gmail.com> - 2023-06-20 15:42 -0500
Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] Richard Damon <Richard@Damon-Family.org> - 2023-06-20 16:52 -0400
Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] olcott <polcott2@gmail.com> - 2023-06-20 16:39 -0500
Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] Richard Damon <Richard@Damon-Family.org> - 2023-06-20 17:53 -0400
Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] olcott <polcott2@gmail.com> - 2023-06-20 17:07 -0500
Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue] Richard Damon <Richard@Damon-Family.org> - 2023-06-20 18:52 -0400
Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts] olcott <polcott2@gmail.com> - 2023-06-20 14:59 -0500
Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts] olcott <polcott2@gmail.com> - 2023-06-20 15:00 -0500
ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question vallor <vallor@cultnix.org> - 2023-06-21 19:10 +0000
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question vallor <vallor@vallor.earth> - 2023-06-21 19:23 +0000
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-21 14:59 -0500
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-21 19:01 -0400
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-21 19:40 -0500
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-21 22:47 -0400
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-21 21:58 -0500
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-22 07:26 -0400
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question olcott <polcott2@gmail.com> - 2023-06-22 09:18 -0500
Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question Richard Damon <Richard@Damon-Family.org> - 2023-06-22 21:06 -0400