Path: csiph.com!fu-berlin.de!uni-berlin.de!individual.net!not-for-mail
From: rbowman
Newsgroups: comp.os.linux.misc,alt.folklore.computers
Subject: Re: naughty Python
Date: 31 Dec 2025 19:03:14 GMT
Lines: 38
Message-ID:
References: <10i8usb$2oo2c$3@dont-email.me> <10icd30$2ck7$1@gal.iecc.com> <10idu04$7inn$1@dont-email.me> <10if4lo$jr1h$1@dont-email.me> <6decndo7ib2Df8z0nZ2dnZfqn_adnZ2d@giganews.com> <10iu02q$1029n$12@dont-email.me> <10iu3g7$11u10$3@dont-email.me> <10iutjt$1c0aq$2@dont-email.me> <6I-cnbjTB7jSssj0nZ2dnZfqn_ednZ2d@giganews.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Trace: individual.net 7vITwnJGCa9K02mEO7leNgtqAzUzFlgXz6NKVEsuLVNcb5MDlE
Cancel-Lock: sha1:WZEqqmyDhLovo0/Kh7pAtnGaDXk= sha256:LKdZbAcW+5l0w4adahIlBrd+EueI+7On8RtJMzlSvmg=
User-Agent: Pan/0.162 (Pokrosvk)
Xref: csiph.com comp.os.linux.misc:80183 alt.folklore.computers:232946

On Wed, 31 Dec 2025 09:12:30 -0500, c186282 wrote:

> As for practical NNs ... the physical layer can be several things.
> You can emulate the required bits of a neuron in software. The holy
> grail is very tiny dedicated components the size of on-chip
> transistors that ultimately do much more.
>
> About 10 years ago the "memristor" was seen as a step in that
> direction, but frankly I haven't seen them used in much of anything
> practical. Memristor-LIKE characteristics will be desired in any
> all-in-one solution.

I haven't followed the neuromorphic approaches. It gets a bit out of
hand for fiddling around at home.

https://en.wikipedia.org/wiki/SpiNNaker

I've been reading Trappenberg's book

https://www.amazon.com/gp/product/B09TB9YH9M/

which also gets a bit deep. He digs into the Hodgkin–Huxley model. There
is a lot going on: the selective ion permeability of the cell membrane
creating a potential, neurotransmitters like GABA and dopamine, and the
spike timing. It's not easy to model.

https://www.amazon.com/dp/0262181231

is the text we used in the '80s before the onset of another AI winter.
Rumelhart's backpropagation solved some of the earlier problems going
back to the perceptron. Improvements have been made in activation
functions, simulated annealing to get out of local minima, and so forth,
but the concepts in a 2025 NN tutorial are familiar.

The nice part of PyTorch, TensorFlow, and the other libraries is that
you can patch together high-level functions without getting into the
nuts and bolts of calculating gradient descent and all that grunt work.
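If you want to poke at what Trappenberg is describing, here is a rough
forward-Euler sketch of the single-compartment Hodgkin–Huxley equations
in Python. The rate functions and constants are the usual squid-axon
textbook values written down from memory, so check them against the book
before trusting the numbers.

import numpy as np

# Textbook Hodgkin-Huxley constants: voltage in mV, time in ms,
# conductances in mS/cm^2, capacitance in uF/cm^2.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates for the m, h, n gates.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                      # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting state
I_ext = 10.0                            # injected current, uA/cm^2

trace = []
for _ in range(int(T / dt)):
    # ionic currents through the three conductances
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # forward Euler step on the membrane equation and the gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    trace.append(V)

print("peak membrane potential: %.1f mV" % max(trace))

That's just one compartment with a constant current injection. The GABA
and dopamine side of it is another layer entirely.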
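As for the libraries doing the grunt work, this is roughly what a PyTorch
toy looks like. The XOR data and the layer sizes are made up for
illustration; backward() and step() are where autograd and the optimizer
do all the gradient arithmetic you would otherwise crank through by hand.

import torch
from torch import nn

# Toy problem: XOR, the classic thing a bare perceptron can't learn.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# Patch together high-level pieces; no hand-written derivatives anywhere.
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()    # autograd computes the gradients
    opt.step()         # optimizer applies the update

print(model(X).detach().round())    # usually lands on 0, 1, 1, 0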