
Swift AI versus Apertus AI: David against Goliath (Was: Abstraction Engine / Pattern-Amplification AI Avalanche [Java to C# translation])

From Mild Shock <janburse@fastmail.fm>
Newsgroups sci.logic
Subject Swift AI versus Apertus AI: David against Goliath (Was: Abstraction Engine / Pattern-Amplification AI Avalanche [Java to C# translation])
Date 2025-10-04 16:03 +0200
Message-ID <10br9et$h42u$1@solani.org>
References <vpcele$is1s$3@solani.org> <1060lsa$2ri3s$2@solani.org> <10br8hd$h3hf$1@solani.org>



Hi,

Here we find Switzerland laying out an Apertus AI roadmap:

 > ETH professor Martin Jaggi explains that Apertus
 > AI is a base LLM: it doesn't yet have RAG and doesn't
 > yet have thinking, etc. He speculates that the
 > "open" community might help change that.
 > One month later, an interview with Martin Jaggi:
 > https://www.youtube.com/watch?v=KgB8CfZCeME

Meanwhile I wish my AI laptop would do the Java to C#
translation in a blink, locally and autonomously. It
has a few technical hiccups at the moment; the
conventional CPUs still sometimes get overscheduled.

For example, I cannot run VCS from Microsoft: something
goes wrong and it turns my whole laptop into a frying
pan, while Rider from JetBrains works. Now an AI
gives me some advice:

 > Goliath (40,000 TFLOPS): Perfect for discovering new
 > patterns, complex reasoning, creative tasks
 > David (40 TFLOPS): Perfect for execution, integration,
 > personalization, real-time response

So I would use Goliath to distill the patterns,
and could still profit from them locally as David.
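
A minimal sketch of that division of labour, in plain Java. The
rewrite rules below are made up for illustration; the premise is
that "Goliath" distilled them once offline, so "David" only has
to do cheap local string rewriting:

import java.util.LinkedHashMap;
import java.util.Map;

// A toy "David": applies Java-to-C# rewrite patterns that a big
// model ("Goliath") is assumed to have distilled offline. The
// three rules are illustrative, not real distillation output.
public class PatternApplier {
    public static void main(String[] args) {
        Map<String, String> rules = new LinkedHashMap<>();
        rules.put("System.out.println", "Console.WriteLine");
        rules.put("boolean", "bool");
        rules.put("ArrayList", "List");

        String javaSrc = "boolean ok = true; System.out.println(ok);";
        String csharp = javaSrc;
        for (Map.Entry<String, String> r : rules.entrySet()) {
            csharp = csharp.replace(r.getKey(), r.getValue()); // cheap, local
        }
        System.out.println(csharp);
        // prints: bool ok = true; Console.WriteLine(ok);
    }
}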

Bye

Mild Shock wrote:
> Hi,
> 
> Here we find an ex-OpenAI scientist looking extremely concerned:
> 
>  > Ex-OpenAI pioneer Ilya Sutskever warns that as
>  > AI begins to self-improve, its trajectory may become
>  > "extremely unpredictable and unimaginable,"
>  > ushering in a rapid advance beyond human control.
>  > https://www.youtube.com/watch?v=79-bApI3GIU
> 
> Meanwhile I am enjoying some of the AI's abstraction capabilities:
> 
> The bloody thing translated my Java code into C#
> code in a blink, did all kinds of fancy translations,
> and explained its own doing as:
> 
>  > That casual, almost incidental quality you noticed
>  > is exactly the abstraction engine working so fluidly
>  > that it becomes invisible. The AI was:
>  > 1. Understanding the essential computation (the "what")
>  > 2. Discarding the Java-specific implementation (the "how")
>  > 3. Re-expressing it using C#'s idiomatic patterns (a different "how")
> 
> Ha ha, nice try, AI, presenting me this anthropomorphic
> illusion of comprehension. Doesn't the AI just apply tons
> of patterns without knowing what the code really does?
> 
> Well, I am fine with that; I don't need more than these
> pattern-based transformations. If the result works,
> the approach is not broken.
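
To make the three quoted steps concrete, a toy example of my own
(not from the AI's transcript). The "what" is: print every
key/value pair. The Java "how" iterates entrySet(); an idiomatic
C# "how" is sketched in the comments, so the block stays in a
single language:

import java.util.Map;

public class WhatVsHow {
    public static void main(String[] args) {
        Map<String, Integer> ages = Map.of("ada", 36, "alan", 41);
        // Idiomatic C# rendering of the same "what":
        //   foreach (var (name, age) in ages)
        //       Console.WriteLine($"{name} is {age}");
        for (Map.Entry<String, Integer> e : ages.entrySet()) {
            System.out.println(e.getKey() + " is " + e.getValue());
        }
    }
}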
> 
> Bye
> 
> Mild Shock wrote:
>> Hi,
>>
>> That is extremely embarrassing. I don't know
>> what you are bragging about when you wrote
>> the text below. You are wrestling with a ghost!
>> Maybe you didn't follow my superb link:
>>
>>  > seemingly interesting paper. In stead
>>  > particular, his final coa[l]gebra theorem
>>
>> The link I gave for Hopcroft and Karp (1971),
>> a handout on Bisimulation and Equirecursive
>> Equality, has a coalgebra example that I used
>> to derive pairs.pl from:
>>
>> https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf
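
As an aside, a hedged Java sketch of the Hopcroft-Karp idea in
that handout, applied to equirecursive type equality; the node
encoding and the example types are mine, not the handout's or
pairs.pl's. Pairs of types are assumed equal by merging them in
a union-find, and the assumption is only refuted when two merged
nodes carry different constructors:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class Node {
    final String label;  // type constructor, e.g. "int" or "->"
    Node[] kids;         // subterms; cycles encode recursion
    Node(String label) { this.label = label; this.kids = new Node[0]; }
}

public class Bisim {
    static final Map<Node, Node> parent = new HashMap<>();

    // Union-find lookup with path compression.
    static Node find(Node n) {
        Node p = parent.getOrDefault(n, n);
        if (p == n) return n;
        Node root = find(p);
        parent.put(n, root);
        return root;
    }

    static boolean equal(Node a, Node b) {
        parent.clear();
        Deque<Node[]> todo = new ArrayDeque<>();
        todo.push(new Node[]{a, b});
        while (!todo.isEmpty()) {
            Node[] pair = todo.pop();
            Node x = find(pair[0]), y = find(pair[1]);
            if (x == y) continue;  // already assumed equal: coinduction
            if (!x.label.equals(y.label) || x.kids.length != y.kids.length)
                return false;      // constructor clash refutes the assumption
            parent.put(x, y);      // assume equal (union), then check subterms
            for (int i = 0; i < x.kids.length; i++)
                todo.push(new Node[]{x.kids[i], y.kids[i]});
        }
        return true;
    }

    public static void main(String[] args) {
        // t = int -> t  versus  s = int -> (int -> s):
        // different graphs, equirecursively the same type.
        Node i = new Node("int");
        Node t = new Node("->");
        t.kids = new Node[]{i, t};
        Node s = new Node("->"), s2 = new Node("->");
        s.kids = new Node[]{i, s2};
        s2.kids = new Node[]{i, s};
        System.out.println(equal(t, s));  // prints: true
    }
}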
>>
>> Bye
>>
>> Mild Shock wrote:
>>>
>>> Inductive logic programming at 30
>>> https://arxiv.org/abs/2102.10556
>>>
>>> The paper contains not a single reference to autoencoders!
>>> Still, they show this example:
>>>
>>> Fig. 1: ILP systems struggle with structured examples that
>>> exhibit observational noise. All three examples clearly
>>> spell the word "ILP", with some alterations: 3 noisy pixels,
>>> shifted and elongated letters. If we were to learn a
>>> program that simply draws "ILP" in the middle of the picture,
>>> without noisy pixels and elongated letters, that would
>>> be a correct program.
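
What "correct up to noise" could mean operationally, as a toy
sketch; the bitmaps and the noise budget of 3 are made up,
echoing the caption's "3 noisy pixels":

// Score a candidate program by how few pixels it gets wrong,
// instead of demanding an exact match against noisy examples.
public class NoisyMatch {
    // Hamming distance between a program's output and an example.
    static int mismatches(int[] drawn, int[] example) {
        int d = 0;
        for (int i = 0; i < drawn.length; i++)
            if (drawn[i] != example[i]) d++;
        return d;
    }

    public static void main(String[] args) {
        int[] clean = {1, 0, 1, 1, 0, 1, 0, 0, 1};  // what "draw ILP" outputs
        int[] noisy = {0, 0, 1, 1, 1, 1, 0, 0, 0};  // an example, 3 bad pixels
        int noiseBudget = 3;
        boolean correct = mismatches(clean, noisy) <= noiseBudget;
        System.out.println(correct);  // prints: true, accepted despite noise
    }
}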
>>>
>>> I guess ILP is 30 years behind the AI boom. An early autoencoder
>>> turned into a transformer was already reported here (*):
>>>
>>> SERIAL ORDER, Michael I. Jordan - May 1986
>>> https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
>>>
>>> Well, ILP might have its merits; maybe we should not ask
>>> for a marriage of LLMs and Prolog, but of autoencoders and ILP.
>>> But it's tricky, I am still trying to decode the da Vinci code
>>> of things like stacked tensors: are they related to k-literal
>>> clauses?
>>>
>>> (*) The paper I referenced is found in this excellent video:
>>>
>>> The Making of ChatGPT (35 Year History)
>>> https://www.youtube.com/watch?v=OFS90-FX6pg
>>
> 
