
Re: Does input D have semantic property S or is input D [BAD INPUT]?

From olcott <polcott2@gmail.com>
Newsgroups sci.logic, comp.theory, comp.ai.philosophy
Subject Re: Does input D have semantic property S or is input D [BAD INPUT]?
Date 2023-06-19 22:57 -0500
Organization A noiseless patient Spider
Message-ID <u6r83f$2bliv$1@dont-email.me>
References (23 earlier) <TcPjM.7724$4_6d.3325@fx06.iad> <u6oi69$1vh00$1@dont-email.me> <V6XjM.9621$8fUf.906@fx16.iad> <u6popm$23c2e$1@dont-email.me> <lE6kM.5961$zcM5.4010@fx11.iad>




On 6/19/2023 7:45 PM, Richard Damon wrote:
> On 6/19/23 10:30 AM, olcott wrote:
>> On 6/19/2023 6:38 AM, Richard Damon wrote:
>>> On 6/18/23 11:31 PM, olcott wrote:
>>>> On 6/18/2023 9:38 PM, Richard Damon wrote:
>>>>> On 6/18/23 9:43 PM, olcott wrote:
>>>>>> On 6/18/2023 8:29 PM, Richard Damon wrote:
>>>>>>> On 6/18/23 8:59 PM, olcott wrote:
>>>>>>>> On 6/18/2023 7:01 PM, Richard Damon wrote:
>>>>>>>>> On 6/18/23 7:41 PM, olcott wrote:
>>>>>>>>>> On 6/18/2023 1:56 PM, Fritz Feldhase wrote:
>>>>>>>>>>> On Sunday, June 18, 2023 at 8:09:51 PM UTC+2, olcott wrote 
>>>>>>>>>>> <nonsense>
>>>>>>>>>>>
>>>>>>>>>>> A possible "practical solution" for an actual "halt decider" 
>>>>>>>>>>> might be something I will call a semi-halt-decider here.
>>>>>>>>>>>
>>>>>>>>>>> The latter allows for 3 answers (return values) when called:
>>>>>>>>>>>
>>>>>>>>>>> H(P, d) -> 1 "P(d) halts"
>>>>>>>>>>> H(P, d) -> -1 "P(d) doesn't halt."
>>>>>>>>>>> H(P, d) -> 0 "Don't know/can't tell if P(d) halts or not"
>>>>>>>>>>>
>>>>>>>>>>> Such a semi-halt-decider might be able to determine _the
>>>>>>>>>>> correct_ answer (1, -1) for a big class of cases. On the
>>>>>>>>>>> other hand, it would always have the possibility to "give up"
>>>>>>>>>>> (for certain cases) and answer with 0: "Don't know/can't tell"
>>>>>>>>>>> (and this way be able to avoid INCORRECT ANSWERS concerning
>>>>>>>>>>> the actual behavior of P(d)).
>>>>>>>>>>>
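Fritz's three-valued interface can be sketched in C as follows. This is only an illustration of the interface, not anyone's actual decider; the toy subject programs and the decide-by-recognized-address shortcut are assumptions introduced here:

```c
typedef int (*ptr)(int);

/* Two toy subject programs whose behavior is obvious. */
static int halts_fast(int d)    { return d + 1; }              /* always halts */
static int loops_forever(int d) { for (;;) {} return 0; }      /* never halts  */

/* Semi-halt-decider in Fritz's sense:
     1  means "P(d) halts"
    -1  means "P(d) doesn't halt"
     0  means "don't know / can't tell"
   It decides only the programs it recognizes and gives up on
   everything else, so it never gives an INCORRECT answer about
   the actual behavior of P(d). */
static int H(ptr P, int d)
{
    if (P == halts_fast)    return  1;
    if (P == loops_forever) return -1;
    return 0;  /* give up rather than answer wrongly */
}
```

The point of the third value is exactly the "give up" escape hatch: correctness is preserved by shrinking the set of inputs the decider commits itself on.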
>>>>>>>>>>
>>>>>>>>>> The key difference with my work that is a true innovation in 
>>>>>>>>>> this field
>>>>>>>>>> is that H doesn't simply give up. H specifically recognizes self-
>>>>>>>>>> contradictory inputs and rejects them.
>>>>>>>>>>
>>>>>>>>>> *Termination Analyzer H prevents Denial of Service attacks*
>>>>>>>>>> https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_prevents_Denial_of_Service_attacks
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Except the input isn't self-contradictory, since the input
>>>>>>>>> can't exist until H is defined, and once H is defined, the
>>>>>>>>> input has definite behavior, so there is no self-contradiction
>>>>>>>>> possible, only error.
>>>>>>>> If I ask you what correct (yes or no) answer Jack could reply
>>>>>>>> with, exactly why can't you answer this?
>>>>>>>
>>>>>>> He has no answer that is correct, but that doesn't matter and is
>>>>>>> just you falling into the fallacy of the Red Herring.
>>>>>>>
>>>>>> // The following is written in C
>>>>>> //
>>>>>> typedef int (*ptr)(); // pointer to int function
>>>>>> int H(ptr x, ptr y);  // uses x86 emulator to simulate its input
>>>>>>
>>>>>> int D(ptr x)
>>>>>> {
>>>>>>   int Halt_Status = H(x, x);
>>>>>>   if (Halt_Status)
>>>>>>     HERE: goto HERE;
>>>>>>   return Halt_Status;
>>>>>> }
>>>>>>
>>>>>> void main()
>>>>>> {
>>>>>>   H(D,D);
>>>>>> }
>>>>>>
>>>>>> Since the above H is an unspecified wildcard you are free to 
>>>>>> encode it
>>>>>> in any one of an infinite number of different ways and return any
>>>>>> Boolean value that you want.
>>>>>
>>>>> Nope, D isn't a PROGRAM until H is DEFINED. 
>>>> That is why I triple dog dare you to define it or acknowledge that no
>>>> such program can possibly be defined because the input D to any
>>>> corresponding H is isomorphic to Jack's question posed to Jack.
>>>
>>> SO, you AGREE that a "Correct Halt Decider", as defined by the 
>>> Halting Problem, can't exist.
>>>
>>
>> I don't agree that your understanding of the halting problem is correct.
>> H is required to report on the actual behavior that it actually sees.
> 
> Where does THAT come from? It may only be ABLE to do so, but the 
> REQUIREMENT is the behavior of the actual machine.
> 
> You seem to have trouble with the English Language.
> 
> Please show me any reputable reference that says you get to disregard 
> the ACTUAL REQUIREMENTS because you can't see what you need to.
> 
>>
>> You and others are requiring H to report on behavior that it does not
>> see. You have also already admitted that when H reports on this
>> behavior that it does not see, this changes that behavior, making its
>> report incorrect.
> 
> Yes, because that is what the requirements say. The requirements are 
> what the requirements say, because that is the requirements needed to 
> solve the mathematical problems that a Halt Decider is hoped to be able 
> to help with.
> 

When the requirements are self-contradictory, they are incorrect.

>>
>> Within the false hypothesis that H is incorrect to report that its input
>> does not halt, the only alternative is to change the meaning of what H
>> reports. When H becomes a [BAD INPUT] decider no one can correctly say
>> that H is wrong. This also refutes Rice, which is more important than
>> solving the halting problem because it has a much broader scope.
> 
> That isn't a "false hypothesis", it is a stated requirement.
> 
> Since D(D) Halts, by the definition of the problem, H, to be correct, 
> must report Halting.
> 
> Remember:
> In computability theory, the halting problem is the problem of 
> determining, from a description of an arbitrary computer program and an 
> input, whether the program will finish running, or continue to run forever.
> 
> Thus the thing to look at is the PROGRAM itself and its behavior. 
> DEFINITION.
> 

When the requirements are self-contradictory, they are incorrect.
When the bible says that God <is> and God <has> wrath, the bible lies.

>>
>> Termination Analyzer H determines the semantic property of
>> [GOOD INPUT] meaning that input D halts <and>
> 
> Since the machine represented by the input does Halt, that condition is 
> satisfied.
> 
> Note your bad terminology: "Inputs" are just data, and don't actually DO 
> anything. They can have "syntactic properties", but not "Behavior". They 
> can represent something that does have behavior, and from the definition 
> above, that is the machine they represent, NOT H's (partial) simulation 
> of them.
> 

Simply ignoring that a question is self-contradictory doesn't make it 
not self-contradictory.

>>
>> [BAD INPUT] meaning
>> (a) input D doesn't halt <or>
>> (b) D has a pathological relationship to H. This means that D calls H 
>> and does the opposite of the Boolean value that H returns.
> 
> Which your H never actually confirms. Your H will also call an HH that 
> does what H says to be pathological too, so you fail on this side.
> 
>>
>>> It is easy to make D a program, just define some H, any H, then D is 
>>> a valid program, and will either Halt or not. D's validity as a 
>>> program is NOT dependent on H getting the right answer. Thus an H 
>>> that just immediately returns 0 makes D a valid program.
>>>
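Richard's point in the preceding paragraph can be made concrete: once any particular H is picked, D becomes a complete program with definite behavior. A minimal sketch, using the arbitrary "immediately return 0" instantiation he names (this H is one hypothetical choice, not the decider under debate):

```c
typedef int (*ptr)();  /* as in the snippet quoted above */

/* One arbitrary instantiation of H: ignore the input, return 0. */
int H(ptr x, ptr y) { return 0; }

/* D exactly as quoted above */
int D(ptr x)
{
    int Halt_Status = H(x, x);
    if (Halt_Status)
        HERE: goto HERE;   /* would loop forever had H reported "halts" */
    return Halt_Status;
}

/* With this H fixed, D(D) skips the loop and returns 0, i.e. D(D)
   halts -- the opposite of what this H reported. */
```

Any other total H likewise fixes D's behavior; D only lacks definite behavior while H is left as an unspecified wildcard.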
>>
>> H correctly determines that D has the semantic property of [BAD INPUT]
>> making Denial of Service (DoS) attack detector H correct to reject D.
> 
> Which isn't a criterion for a Halt Decider, and as I just explained 
> above, you don't actually detect the pathological relationship, just 
> that D calls H.
> 

Instead it refutes Rice's theorem.

>>
>>
>>>>
>>>> Once we acknowledge that the halting problem input to H is an incorrect
>>>> question to H then we can understand that this incorrect question is
>>>> aptly reframed into the correct question:
>>>
>>> Why is it "Incorrect"? The fact that H can't give the right answer is 
>>> a problem with H, not with the input.
>>>
>>
>> Then the problem with Jack's question is Jack not the fact that Jack's
>> question is self-contradictory for Jack. Jack is simply too stupid to
>> give a correct yes or no answer to a self-contradictory question. We all
>> know that Jack's question has a correct answer, yet Jack is simply too
>> stupid to decide between yes and no.
> 
> The problem with "Jack's Question" is it asks about something that 
> doesn't have a correct answer NOW.
> 

Sure it does. You ask three people:
(a) Bill says Jack will say yes
(b) John says that Jack will say no
(c) Harry says Jack will say nothing, or something besides yes or no
One of them is right.

Because our imaginary Jack is fictional, Harry was right.

>>
>>> The definition of a "Valid Input" for H, is that it represents a 
>>> Program and its input. This call sequence does that, so the input is 
>>> valid.
>>>
>>
>> A syntactically valid input is not the same as a semantically valid
>> input. Any input that makes both Boolean return values the wrong answer
>> is a semantically invalid input.
> 
> Nope, it is a PROGRAM, thus it is VALID. If you try to define it as not 
> valid, you are just admitting that H isn't a "Halt Decider" by the 
> definition of Computation Theory.
> 

Saying that it is valid because it is a program simply ignores bugs and 
indicates you know hardly anything about programming.

> You clearly don't understand what you are talking about.
> 
>>
>>>>
>>>> Does input D halt on its input [GOOD INPUT] or is D [BAD INPUT] that
>>>> either fails to halt or defines a pathological relationship to H.
>>>
>>> And D DOES halt on its input, since it will "call" H(D,D), which your 
>>> H has been defined so that it will return 0 from that call.
>>>
>>
>> Which is a correct return value for the semantic property of [BAD INPUT].
> 
> But makes D(D) Halt, so it is the wrong answer for a Halt Decider.
> 

Not at all. 0 means that D fails to halt or that D does the opposite of
whatever Boolean value H returns.

> You are just admitting that you have been lying about working on the 
> Halting Problem of Computation Theory, the one described by the Linz 
> paper you quote.
> 

When I point out that the conventional halting problem is
self-contradictory, this is the actual halting problem that I am
referring to.

> Fine, everything you have said thus becomes a LIE.
> 
>>
>>> There is nothing "BAD" about a D that doesn't halt, 
>>
>> Sure everyone knows that Denial of Service attacks are great. My
>> hospital loved it when they had no access to patient records for several
>> days.
> 
> Except the only DOS was to the Decider. If they just ran the program, it 
> would have ended just fine.
> 
> You just don't understand the problem you are talking about and thus you 
> keep lying about it. You can't use the "honest mistake" excuse, as the 
> errors have been pointed out, but you refuse to correct yourself.
> 
>>
>>> that just means it is an input that H needs to "reject" (return the 
>>> "Non-Halting" value for). There is also nothing "Bad" about the 
>>> "pathological" relationship between D and H, as that is just part of 
>>> "Any Program".
>>>
>>
>> Yes that is true everyone loves successful Denial of Service attacks.
>> If there was a DoS detector that could correctly reject every
>> [malevolent input] people would really hate that. They love successful
>> DoS attacks.
> 
> But this isn't the DOS detector problem, that allows false positives. 
> This is the accurate Halt Decider problem, which H fails at.
> 
> You are just admitting that you have been LYING for years about what you 
> are working on.
> 
>>
>>> Remember, if you change H to be the Hn, non-aborting version of it, 
>>> and then make the Dn from that Hn, we find that Dn(Dn) will not halt, 
>>> so Hn should have returned 0, but it just never returns an answer, 
>>> showing that *H* is a defective machine, not meeting its requirements.
>>>
>>
>> When H reports on the semantic property of [BAD INPUT] the labels could
>> be switched to account for all of the people that love successful Denial
>> of Service attacks. Only inputs that allow DoS attacks are construed as
>> [GOOD INPUTS]. Inputs that simply halt are now called [BAD INPUTS].
>>
>> H still correctly decides a semantic property of D, thus H still refutes
>> Rice.
> 
> 
> Nope. You can't refute Rice by saying that a machine gets one input right.
> 
> FALLACY of proof by example
> 
> You are just proving your logic system is full of fallacies.
> 
>>
>>>>
>>>> This does overcome Rice's theorem for at least the reduction of Rice's
>>>> theorem to the halting problem.
>>>>
>>>> Does input D have semantic property S or is input D [BAD INPUT]?
>>>>
>>>
>>> No, because Rice's theorem asks whether the input has Semantic Property 
>>> S, and a "pathological relationship" isn't considered a "BAD INPUT".
>>>
>>
>> That is the only reason that Rice has not been overcome. No one ever
>> thought of a way to exclude the [BAD INPUTS] that make semantic
>> properties undecidable. Once we do exclude [BAD INPUTS], semantic
>> properties are decidable.
> 
> But your H doesn't successfully decide your property, as the DD that 
> does what H says is called "Bad input" when it doesn't meet the criteria 
> you have defined.
> 
>>
>>> ALL PROGRAMS means ALL PROGRAMS, not all the ones I can handle.
>>>
>>
>> H correctly determines the semantic property of [BAD INPUT]. Prior to my
>> work no H could ever correctly determine any semantic property. That H
>> does correctly determine at least a single semantic property, when Rice
>> claims that no H can ever determine any semantic property, refutes Rice.
>>
> 
> Nope, H gets DD wrong.
> 
>>> IF you want to try to define a Semantic Property S that somehow 
>>> includes this pathology in its criteria, you need to FORMALLY define 
>>> what you mean by it. You also need to show that the property is still 
>>> wholly Semantic, and that you haven't given yourself a Syntactic 
>>> property.
>>>
>>
>> When-so-ever any input to any decider calls this decider with an input
>> that does the opposite of whatever Boolean value that this decider
>> returns this input <is> a pathological input. My H has been able to do
>> that for more than two years.
> 
> But it fails on DD, so it still fails.
> 
>>
>> My system also works with embedded copies of deciders yet this makes the
>> code much more difficult to understand so I didn't implement it.
> 
> I don't think it does. I think you don't understand the nature of that 
> problem.
> 
>>
>>> You also then need to show that you can get the correct answer for 
>>> ALL inputs. The Achilles Heel for a Halt Decider might not be the 
>>> Achilles Heel for your new decider, so just because you handle it, 
>>> doesn't mean you have PROVEN that you can answer that property.
>>
>> H does correctly refute Rice's theorem for the halting problem's
>> pathological input. This is much more success than anyone else has ever
>> achieved. Once this success is acknowledged a well funded large team of
>> experts can work on extending my ideas.
>>
> 
> Nope. Remember, by YOUR definition of Pathological, your H fails for 
> DD(DD) as described above.

-- 
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
