Comments on: What is the “forward-forward” algorithm, Geoffrey Hinton’s new AI technique? https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/?utm_source=rss&utm_medium=rss&utm_campaign=forward-forward-algorithm-geoffrey-hinton Technology solving problems... and creating new ones Mon, 20 Mar 2023 15:17:45 +0000

By: Jason https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/comment-page-1/#comment-35607 Mon, 20 Mar 2023 15:17:45 +0000 https://bdtechtalks.com/?p=15397#comment-35607 This touches on building neurosymbolic computing with FF, on hardware that has evolved to treat causal, temporal input as pervasive. I think that once epistemic neural networks (ENNs) are trained with FF, we will find that data behaves as a Markov time series in accuracy models: train one year at a time, say 1960 through 1969, and check whether the model trained through 1969 best predicts the news events of 1970 (sketched at the end of this comment). That suggests data is temporal-spatial, and that learning which aims to be biological (for example, a blood-brain interface using IBM's Corelet software to mediate between biological neuronal development and an artificial neural domain, mapping from a to b) still lacks a data time domain for evolutionary learning. It also bears on mortal computation, in which hardware and software are mutually evolutionary: think of neurosynaptic cores used for vision processing that could tie object weights to the dates at which the objects were embedded. Ultimately, shifting from Boolean logic to compute models and energy modalities of hardware specific to a language for a biology-emulating process requires a neural-compute revolution, one that plans hardware and software in parity, like two branches of one biological tree. Whatever the substrate, quantum or neurosynaptic, we will need better parameters for the time/space domain, and we have yet to properly explore backpropagation through structure versus backpropagation through time, where one is spatial and the other is time-ordered.

To fully understand the analogous model of how humans sense and encode memories in biological neural evolution, FF needs some domain encoded in the training data that unifies the origin of the data with its date of creation. Slightly off topic, but the recent Stable Diffusion technology may yield better results when a name like Jesus is used to represent a character: the name is widely represented in training data, yet no time-ordered sequence of social data can train the origin of Jesus across 2,000-plus years of references. Unless it is removed in pre-training, the name takes on a vague frequency-over-time character, suggesting that underlying religious concepts in society have epistemic roots and that the learned weights become deep-belief-driven: an author may himself be religious and believe in Jesus, yet write a novel whose character he never explicitly gives a deep religious context. Weight adjustment by a BP method in this setting could be read as time flowing backward, in the sense that memory is the embedding of learned events, and knowledge in learning has to come with referential domains of rating good and bad as judgment, where awareness equates to consciousness. So how can the negative/positive weighting in FF derive instance training under a mortal-computation doctrine that joins mutually constructed hardware and software into the duality of a single learner, i.e., an FF algorithm?
I think data scientists need to look at time-ordered accuracy models in training. It is an absurd notion, but research into aged light could conceivably provide a metric for debating how data can have a date of origin. Absurd, yes, but to move toward an analog AI space where the medium is a biological substrate (grey matter, or cellular substrates used to model unique life from synthetic biology), which is where FF would in my opinion best be developed, we need to see that data evolution has a time/space domain as intrinsic to it as the laws of conservation are to energy. What does this mean? Simply that training which sees data created after a given date (yesterday, for example) cannot treat it, in weighting, as data from 1845; the weight should attach not just to the text but to an embedded value for the date of origin. It suggests that BP training erroneously sorts data out of the context of its space/time origin, i.e., a duality of structure and time, and that a middle step toward FF may be a hardware/software problem for neurosymbolic computing: how to blend BPTT and BPTS (backpropagation through time and backpropagation through structure) using a blood-brain-interface language whose syntax maps to weights. That matters for the mobility and mobilization of synthetic life, which demands sense-encoding of neurovestibular signals, a common-coding issue when training robots to have a synthetic inferior parietal cortex for goal reasoning and location data.

Most importantly, when we encode memories in human biology we still do not know, at a scale of 10^16, whether the location data of neurons is mapped at a genetic-recall level to physically content-addressable memory, a mind map implying a get/set feature of neurons: while activation calls up memories as knowledge to plan a forward-forward decision, there may also be a genetic, universal weight/location map stating that a memory recall goes to a location plus a weight. I think Hinton is right about knowledge in FF, but are we excluding the BP pass that adjusts in favor of a sort of biologically contiguous learner who relies on some intrinsic epigenetic model? If so, are we looking at new substrate mediums that encode experiential data, such that a datum is born when it is made and can evolve only if it has the qualia of some grounded base paradigm, i.e., it can be good or bad? If so, this is a metaphysical debate about social perception of good and bad, a moral story of AI seeking morality. Then we have to have bad? What a bummer…
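To make the year-by-year training idea above concrete, here is a rough sketch of that kind of rolling evaluation in Python (the yearly datasets below are random stand-ins, and the scikit-learn classifier is just one convenient choice; any model with fit/score would do):

# Rolling ("train through year N, test on year N+1") evaluation sketch.
# yearly_X[year], yearly_y[year] are stand-in feature/label arrays for that year.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
years = list(range(1960, 1971))
yearly_X = {y: rng.normal(size=(100, 8)) for y in years}     # placeholder features
yearly_y = {y: rng.integers(0, 2, size=100) for y in years}  # placeholder labels

for test_year in years[1:]:
    train_years = [y for y in years if y < test_year]
    X_train = np.vstack([yearly_X[y] for y in train_years])
    y_train = np.concatenate([yearly_y[y] for y in train_years])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    acc = model.score(yearly_X[test_year], yearly_y[test_year])
    print(f"trained through {test_year - 1}, accuracy on {test_year}: {acc:.2f}")

Each pass trains only on years that precede the test year, so no example is weighted as if it existed before its date of origin, which is the constraint described above.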

]]>
By: Rox https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/comment-page-1/#comment-35474 Tue, 28 Feb 2023 17:36:23 +0000 https://bdtechtalks.com/?p=15397#comment-35474 Fitting NNs to real-world problem datasets is much more cumbersome and trouble-prone, with a lot of hyperparameter guessing. Yes, on paper NNs can learn nonlinear, multi-parameter problems; but on real-world problems, with what accuracy, and at the cost of how much competent manpower and time? As long as NN structure is chosen by guesswork, there is as much luck as science involved. We are all doing data analysis, and everything depends on the particular data, not only on the NNs: either their developers are wrong or they are working from wrong assumptions.
NNs are only tools to help with data processing and analysis, and for the most part they are hard to use.

]]>
By: DQBO https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/comment-page-1/#comment-34702 Sat, 31 Dec 2022 17:30:43 +0000 https://bdtechtalks.com/?p=15397#comment-34702 In reply to Rebel Science.

Any form of discovery exercise can be phrased as an optimization problem (and therefore has an objective function). So optimizing an objective function is not inherently non-generalizing.

]]>
By: DQBO https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/comment-page-1/#comment-34701 Sat, 31 Dec 2022 17:28:55 +0000 https://bdtechtalks.com/?p=15397#comment-34701 In reply to Rebel Science.

You misrepresented the whole field of ML based on an arbitrary definition of intelligence. What’s more, you seem to believe systematic generalization is somehow a fundamentally different approach to AI. Hint: systematic generalization refers to a learning algorithm’s ability to extrapolate learned behavior to unseen situations that are distinct but semantically similar to its training data.

Optimizing an objective function is not inherently against the idea of generalization. It's just that the optimization step is not the whole picture. An intelligent agent must also develop its own preferences for sampling new data from the environment. It is true that backprop is not efficient for real-time applications such as reinforcement learning (and hopefully there will be more specialized optimization algorithms in the future), but it is not fundamentally obstructive. What you can optimize with one algorithm, you can optimize with another, as long as both can navigate the same loss landscapes.
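As a toy illustration of that last point (a made-up quadratic loss, nothing from the article), two very different optimizers reach the same minimum of the same landscape:

# Two unrelated optimizers navigating the same loss landscape f(w) = (w - 3)^2.
import random

def loss(w):
    return (w - 3.0) ** 2

# 1) Gradient descent, using the analytic gradient 2 * (w - 3).
w = 0.0
for _ in range(200):
    w -= 0.1 * 2.0 * (w - 3.0)
print("gradient descent:", round(w, 3))

# 2) Gradient-free random search on the same loss.
random.seed(0)
w = 0.0
for _ in range(5000):
    candidate = w + random.uniform(-0.1, 0.1)
    if loss(candidate) < loss(w):
        w = candidate
print("random search:  ", round(w, 3))

Both land near w = 3; the choice of optimizer changes the efficiency of the search, not what is reachable in this landscape.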

]]>
By: Rebel Science https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/comment-page-1/#comment-34419 Tue, 20 Dec 2022 18:50:28 +0000 https://bdtechtalks.com/?p=15397#comment-34419 In reply to Nitin Malik.

Hinton is clueless about the brain, I’m sorry to say. The cortex uses massive feedback pathways for both learning and top-down attention purposes. However, feedback in the brain does not propagate error signals for gradient learning a la DL. It generates success signals using a winner-take-all mechanism. The signals are used not to modify weights, but to strengthen synaptic connections until they become permanent. It’s called STDP, a form of Hebbian learning.
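For readers who have not met the term, a minimal pair-based STDP update looks roughly like this (a textbook-style sketch with illustrative constants, not anything taken from Hinton's paper):

# Pair-based STDP sketch: the weight change depends only on the relative timing
# of pre- and post-synaptic spikes, not on any propagated error signal.
import math

TAU = 20.0       # time constant in ms (illustrative)
A_PLUS = 0.01    # potentiation amplitude (illustrative)
A_MINUS = 0.012  # depression amplitude (illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, times in ms."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: strengthen (LTP)
        return A_PLUS * math.exp(-dt / TAU)
    if dt < 0:    # post fires before pre: weaken (LTD)
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (60, 72)]:  # toy spike pairs
    w += stdp_dw(t_pre, t_post)
print(round(w, 4))

The update uses only local spike timing, which is the point being made here: nothing resembling a backward error pass is required.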

Furthermore, the brain does not optimize objective functions, a learning approach that is inherently and hopelessly non-generalizing. The brain discovers context, which is temporal at its core. Spike timing is essential to context. Thus generalized intelligence is context-bound. The work of Hinton and the rest of the DL community, while valuable to automation applications, is irrelevant to AGI. One man’s opinion, of course.

]]>
By: Nitin Malik https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/comment-page-1/#comment-34418 Tue, 20 Dec 2022 18:13:21 +0000 https://bdtechtalks.com/?p=15397#comment-34418 Prof. Geoffrey Hinton did not create the backpropagation (BP) algorithm, but his work certainly popularized it.

Feedforward is a simplified architecture assumed in place of the brain's actual, more complex architecture, which forms feedback (as opposed to feedforward) connections and loop structures.

In the real world, the precise details of the system are rarely known, and BP won't work if the activation function is not differentiable. The alternative forward-forward algorithm proposed by Geoffrey Hinton can, it seems at the moment, be applied only to a small subset of problems, as it has generalisation and scaling-up issues. Hopefully these will be resolved soon.
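For context, the core of FF as the article describes it: each layer is trained locally so that a "goodness" score (the sum of squared activations) is high for positive data and low for negative data, with no backward pass through the rest of the network. A minimal single-layer sketch, with illustrative sizes, threshold and learning rate:

# One forward-forward layer trained locally: push goodness (sum of squared
# activations) above a threshold for positive data and below it for negative data.
import numpy as np

rng = np.random.default_rng(0)
IN, OUT, THETA, LR = 16, 32, 2.0, 0.03
W = rng.normal(scale=0.1, size=(IN, OUT))

def layer_step(x, positive):
    h = np.maximum(x @ W, 0.0)                    # ReLU activations
    goodness = np.sum(h ** 2, axis=1)             # per-example goodness
    sign = 1.0 if positive else -1.0
    z = np.clip(sign * (goodness - THETA), -50.0, 50.0)
    p = 1.0 / (1.0 + np.exp(-z))                  # prob. assigned to the correct label
    grad_h = (sign * (1.0 - p))[:, None] * 2.0 * h
    grad_W = x.T @ (grad_h * (h > 0))             # local gradient of log p, this layer only
    return goodness.mean(), grad_W

x_pos = rng.normal(size=(64, IN)) + 1.0           # stand-in positive data
x_neg = rng.normal(size=(64, IN)) - 1.0           # stand-in negative data
for _ in range(100):
    for x, positive in ((x_pos, True), (x_neg, False)):
        _, grad_W = layer_step(x, positive)
        W += LR * grad_W / len(x)                 # gradient ascent on log p
print(layer_step(x_pos, True)[0], layer_step(x_neg, False)[0])

After training, the printed mean goodness should come out clearly higher for the positive batch than for the negative one, and nothing in the update ever crosses layer boundaries.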

]]>
By: Rebel Science https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/comment-page-1/#comment-34415 Mon, 19 Dec 2022 19:58:17 +0000 https://bdtechtalks.com/?p=15397#comment-34415 Interesting article. I’m sorry but I don’t see how FF can be an advance toward biologically plausible neural networks. Like conventional DL, FF does not generalize. Curve fitting is not generalization. Generalization is the ability of an intelligent system to perceive any object or pattern without recognizing it. An Amazon Indian, for example, can instantly perceive a bicycle even if he has never seen one before. He can instantly see its 3D shape, size, borders, colors, its various parts, its position relative to other objects, whether it is symmetrical, opaque, transparent or partially occluding, etc. He can perceive all these things because his brain has the ability to generalize. Moreover, his perception of the bicycle is automatically invariant to transformations in his visual field. Edges remain sharp and, if the bicycle is moved to another location or falls on the ground, he remains cognizant of the fact that he is still observing the same object after the transformation.

By contrast, with either FF or DL, perception is impossible without recognition, i.e., without prior learned representations of the objects to be perceived. Automatic universal invariance is also non-existent. This is a fatal flaw if AGI is the goal.

Back-propagation is merely a symptom of a much bigger problem in mainstream AI: the notion that learning consists of optimizing an objective function. Function optimization is the opposite of generalization. The brain can perceive anything without optimization. I believe that AGI research should focus exclusively on systematic generalization. That’s where almost all the research money should go in my opinion.

In conclusion, I’m afraid that Geoffrey Hinton is just spinning his wheels. Same with the rest of the DL community. Fortunately, a few researchers are working on systematic generalization. The best times are still ahead of us.

]]>