• 30 Posts
  • 668 Comments
Joined 2 years ago
Cake day: August 4th, 2023

  • I don’t think so. Think about the training process. All the nodes in those neural networks were trained together on the same sets of data. While certain subsets of nodes may be more heavily responsible for the word “pineapple”, for example, those subsets cannot function outside the context of the whole network.

    Now, what if you had an LLM that hallucinates, and you trained a second neural network whose only job was to fact-check the output of the first? Those would be two neural networks, trained on separate data for separate purposes, working in tandem to provide accurate responses.
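    Something like this, as a rough sketch (both “networks” here are just stand-in stubs with made-up signatures, not any real model or API):

    ```python
    # Toy sketch of the "two networks in tandem" idea: one network drafts,
    # a second network fact-checks, and only trusted drafts get through.
    # Everything here is hypothetical; real models would replace the stubs.

    from typing import Callable

    def answer_with_fact_check(
        prompt: str,
        generate: Callable[[str], str],       # network 1: produces a draft answer
        verify: Callable[[str, str], float],  # network 2: scores the draft, 0..1
        threshold: float = 0.8,
        max_retries: int = 3,
    ) -> str:
        """Draft with one network; accept only drafts the second network trusts."""
        draft = generate(prompt)
        for _ in range(max_retries):
            if verify(prompt, draft) >= threshold:
                return draft
            # Ask the generator to try again, conditioned on the rejected draft.
            draft = generate(f"{prompt}\n\nPrevious draft was rejected:\n{draft}")
        return "I'm not confident enough to answer that."

    # Toy usage with stub "networks":
    gen = lambda p: "Paris is the capital of France."
    ver = lambda p, d: 0.95 if "Paris" in d else 0.1
    print(answer_with_fact_check("What is the capital of France?", gen, ver))
    ```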

  • You jest, but only parts of the brain are responsible for decoding auditory signals; it doesn’t require weighted input from the whole brain to process them. In theory, one could surgically remove Broca’s and Wernicke’s areas, and they would function completely independently of the rest of the brain.

    My understanding is that you cannot take the region of ChatGPT that identifies plant leaves, say, and separate it from the rest of the neural network without destroying its function. In this sense, the entire set of billions of weights is required for everything ChatGPT does.

    The brain, by contrast, has many separate components that function independently of one another, each a fully functional neural network in its own right, all working in tandem.
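    To illustrate the contrast (a toy sketch, with made-up module names standing in for brain regions):

    ```python
    # Each "region" is a self-contained model that works on its own, and a
    # larger system just wires their outputs together. The names and logic
    # are illustrative placeholders, not real neuroscience or real models.

    class AuditoryDecoder:
        """Stands in for something like Wernicke's area: sound -> words."""
        def decode(self, audio: bytes) -> str:
            return "decoded words"  # placeholder inference

    class SpeechPlanner:
        """Stands in for something like Broca's area: meaning -> utterance."""
        def plan(self, meaning: str) -> str:
            return f"utterance for: {meaning}"  # placeholder inference

    # Each module is fully functional in isolation...
    print(AuditoryDecoder().decode(b"\x00\x01"))

    # ...and a composite system is just the modules in tandem. A monolithic
    # LLM has no seam like this: you can't pull out the weights that "do"
    # one task and run them separately.
    def respond(audio: bytes) -> str:
        return SpeechPlanner().plan(AuditoryDecoder().decode(audio))
    ```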

  • I think it is inevitable. The main flaw I see in current methodology, from a lay perspective, is trying to make one neural network that does everything. Our own brains are composed of multiple neural networks with different jobs interacting with each other, so I assume AGI will require the same approach.

    For example: we are currently struggling with LLM hallucinations. What could reduce them? A separate fact-checking neural network.
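    Roughly what I mean, as a toy sketch (the specialists and the keyword router are stand-ins; a real system would presumably learn the routing rather than match keywords):

    ```python
    # Loose sketch of "several specialist networks interacting": independent
    # modules with different jobs, plus a router that decides which one runs.
    # All names and logic here are hypothetical stubs.

    from typing import Callable, Dict

    def build_system(
        specialists: Dict[str, Callable[[str], str]],
        route: Callable[[str], str],
    ) -> Callable[[str], str]:
        """Compose independent networks: a router picks which specialist answers."""
        def answer(prompt: str) -> str:
            return specialists[route(prompt)](prompt)
        return answer

    # Toy usage with stub specialists:
    system = build_system(
        specialists={
            "math":  lambda p: "42",
            "facts": lambda p: "Looked that up for you.",
        },
        route=lambda p: "math" if any(c.isdigit() for c in p) else "facts",
    )
    print(system("What is 6 * 7?"))
    ```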

    Please keep in mind that my opinion is almost worthless, but you asked.