Social Question

Shuttle128's avatar

Do the Problem of Induction and the Problem of Universals stem from the same underlying mechanism?

Asked by Shuttle128 (2991 points) November 3rd, 2009
18 responses
“Great Question” (2 points)

The Problem of Universals can be explained by contrasting the two viewpoints held. One view is that universals (qualities or characteristics that objects can have) are real; that is, they exist separately from the individual instances that exhibit them. The other is that we cannot justify this claim and that universals do not exist: these qualities can only exist within actual instances of them.

The Problem of Induction deals with the justification of induction (a form of inference in which generalizations are drawn from particular instances). The problem is that any attempt to justify induction as logical must itself use induction, and this circular reasoning cannot justify it. Because we move from particulars to generalizations, inductive conclusions are underdetermined by the instances: we cannot observe all objects that a generalization subsumes, so infinitely many different generalizations can follow from the same finite evidence.
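
To make that underdetermination point concrete, here is a minimal sketch (the data points and the family of hypotheses are invented for illustration): infinitely many hypotheses agree perfectly with three observations yet disagree about everything unobserved.

```python
import numpy as np

# Three observed instances (x, y) -- the finite "evidence".
xs = np.array([0.0, 1.0, 2.0])
ys = xs ** 2                      # the observations happen to follow y = x^2

# Every hypothesis x^2 + k*x*(x-1)*(x-2) agrees with ALL of the evidence
# (the extra term vanishes at x = 0, 1, 2), yet each one makes a
# different prediction about the unobserved point x = 3.
def hypothesis(k):
    return lambda x: x**2 + k * x * (x - 1) * (x - 2)

for k in (0, 1, -2, 10):
    h = hypothesis(k)
    assert np.allclose(h(xs), ys)            # fits the evidence exactly
    print(f"k = {k:>3}: predicts f(3) = {h(3.0)}")
```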

Neural network models capture a great deal of how our brains store and recall information. Our brains do three very important things that may explain why we use, but cannot justify, Universals and Induction.

1) Our brains store data from similar experiences in similar places. This may not seem like a big deal, but it’s important. The data we observe is stored under a system that naturally classifies it by its observable qualities.

2) Our brains can interpret incomplete data. In neural networks, when enough instances of an input have trained the network (like your brain learning from observations), incomplete inputs (ones that resemble the earlier instances but do not contain all of the same inputs) will produce a very similar result to the original instances. This shows that when the observed qualities of an object coincide with the qualities of another object, there is an inherent link between them: the shared qualities link the two physically through their shared neural connections. (A minimal sketch of this pattern completion follows the list.)

3) The brain learns by repeated experience of certain inputs. The more an experience is repeated, the “better” the brain is trained to give an output. As more and more observations of qualities are made, the brain embeds these qualities as structures used in connecting observations.
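
Here is a minimal Hopfield-style sketch of points 2 and 3 (the patterns are invented, and this is a toy model rather than a claim about actual brain wiring): Hebbian training strengthens connections between co-active units, and a corrupted version of a stored pattern then settles back to the original.

```python
import numpy as np

# A tiny Hopfield-style associative memory. Patterns are +1/-1 vectors;
# "training" is Hebbian: units that are active together strengthen their
# mutual connection (point 3 above).
patterns = np.array([
    [ 1,  1,  1,  1, -1, -1, -1, -1],   # invented "experience" A
    [ 1, -1,  1, -1,  1, -1,  1, -1],   # invented "experience" B
])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                   # no self-connections

def recall(state, steps=5):
    """Let the network settle toward the nearest stored pattern."""
    s = state.astype(float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

# An "incomplete input" (point 2): experience A with two units corrupted.
partial = np.array([-1, -1, 1, 1, -1, -1, -1, -1])
print(recall(partial))                   # -> [ 1.  1.  1.  1. -1. -1. -1. -1.]
```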

From these three premises we can see that induction appears to happen simply due to neural network processes in the brain. When many instances of observation occur with similar qualities, they create similar outputs. Your brain has modeled what it expects to be the outcome of specific instances. This is exactly what induction is!

The qualities of things that we observe behave similarly. These repeated observations of similar qualities create models of those qualities in our brains. This means we can think about these qualities without observing individual instances, which leads us to believe that they are universal.


Answers

virtualist's avatar

@Shuttle128 So you are saying that our brain can be accurately modeled by Solomonoff Induction, in a practical sense?

We know the brain can mislead us (dangerous optical illusions), or we can mislead it (e.g., prolonged wearing of inversion eyeglass lenses).

In what dangerous ways could an AI be misled by the Solomonoff induction scheme at its core?

the100thmonkey's avatar

I’m at work now, so don’t have access to my PC, but there’s an interesting article on sciencedaily.com about this.

I’ll post it later when I find the link.

mattbrowne's avatar

Well, first of all, both the brain and a PC are universal (Turing) machines. If a problem is computable, they can solve it in principle; doing it efficiently is another matter. The problem of induction, as far as I know, belongs to the field of proof theory, which is part of meta-mathematics. To solve the problem of induction in general you need some kind of Hilbert’s program, which unfortunately doesn’t exist. Therefore I don’t think the neural network processes in the brain could act as a kind of Hilbert program. But maybe I misunderstood the problem you’re trying to solve.

Shuttle128's avatar

@mattbrowne I’m not saying that anything solves the Problem of Induction, what I’m getting at is that the neural network topology of the brain is the underlying cause of the intuition that induction is reliable and that universals exist. If I thought this solved the problem of Induction I’d have probably stated that in bold somewhere. Just because our brains perform induction on a regular basis does not mean it can be justified by this fact.

@virtualist From what I’ve read and understand, the brain works much like the Hopfield and Perceptron types of neural networks in that it varies the strengths of connections based on usage and can retrieve similar results from similar inputs. I’m not sure what Solomonoff Induction has to do with how the brain actually functions. I’m just stating that induction is quite possibly a side effect of the way the neural network functions.
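
For what it’s worth, a minimal perceptron sketch of “varying strengths based on usage” (the features and labels are made up): each time an output is wrong the connection strengths are nudged, so repeated exposure trains the mapping and a similar-but-new input yields a similar output.

```python
import numpy as np

# Toy perceptron: connection strengths (weights) change with usage, and
# similar inputs end up producing similar outputs.
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
y = np.array([1, 1, -1, -1])             # desired outputs

w = np.zeros(4)                          # strengths start flat
for epoch in range(10):                  # repetition = re-training
    for xi, yi in zip(X, y):
        if np.sign(w @ xi) != yi:        # wrong output -> adjust strengths
            w += yi * xi                 # classic perceptron update

# A similar-but-unseen input is pulled to the same output as its neighbors:
print(np.sign(w @ np.array([1, 1, 1, 0.])))   # -> 1.0
```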

the100thmonkey's avatar

I can’t find it any more :|

It was an article about basic research to find out what goes on in the brain when people are categorising an animal – something which Plato would acknowledge as invoking universals in order to categorise an example.

Apparently, areas in the brain become active that are also involved in decision-making.

I’ll keep looking.

Shuttle128's avatar

@the100thmonkey Sounds very interesting. I originally came across this idea thinking about the implications of a neural network brain on classification. The idea that Forms were really just an outcome of neural network topology came out of that line of thinking.

nebule's avatar

can you give me until tomorrow before anyone gives any really good answers??? I’m a bit drunk right now xxxxxxxxxxxxx

nebule's avatar

(of course there are always @RealEyesRealizeRealLies and @mattbrowne; they will give you GREAT answers)

RealEyesRealizeRealLies's avatar

First of all, this is a very interesting question, and I like the way you tie it together.

No time for another debate currently, so please just take my comments as a different perspective. I hope to offer support for your theory. I like it, but take issue with some of the terminology.

I have a big problem with the way you describe Neural Networks, and of course, it’s a spin-off from our old debates. Again, you seem to think that “data” is everywhere, and that we “read” it somehow. I disagree.

Line by line to illustrate our differences…

@Shuttle128 said:
“The data we observe is stored under a system that naturally classifies it under its observable qualities.”

What is a natural classification? Classification can only take place with codified sentient descriptions.

As well, we don’t “observe data” unless there is a code present to reference data. Observable phenomenon is not “data”. It’s just observable phenomenon, and any data about it is authored by observers to describe that phenomenon. That’s why two different observers will author different Information about the same observable phenomenon. If the phenomenon itself was actually data, then the observers would always read the same Info, and never have room for different perspectives.

The medium is not the message. The phenomenon is not data.

@Shuttle128 said:
“Our brains can interpret incomplete data.”

Own-lee wenn thair izz kode. But when describing data-less observable phenomenon, we author descriptions relative to our ability to observe it. Only the description produces data, not the phenomenon. Beyond that, we theorize. The new Hubble Space Telescope will provide greater observations, which will allow more detailed descriptions. The new descriptions will support or refute the previous theories.

@Shuttle128 said:
“In neural networks when enough instances of an input have trained the network (like your brain learning from observations)…”

Observation alone is pure awareness. No data exists until codified Information is authored. I’m not convinced that observation alone is to be considered as “learning”. I prefer to think that learning (knowledge) is dependent upon Information.

Pure awareness through observation can produce emotion, but without codified Information, no knowledge is available.

Pure awareness through observation seems more akin to cause and reaction. Loud noise causes fear reaction. Cool breeze causes goose bump reaction. Threatened children cause anger reaction. No real thinking or knowledge is required for cause/reaction relationships. Nothing is really “learned” in these instances. To learn something, these instances must be described.

Cause/reaction is completely different from thought/action. Thought/action requires codified Information. Cause/reaction does not.

@Shuttle128 said:
“…when enough instances of an input have trained the network…”
“…incomplete inputs… will produce a very similar result to the original instances.”
“The brain learns by repeated experience of certain inputs.”

Though I agree in principle that “the more an experience is repeated the ‘better’ the brain is trained”… my problem is with the word “input”. I understand you’re just using established terminology. I believe this terminology is erroneous, and prevents science from advancing.

Input requires a transmitter, code, and Information to produce data. Since observable phenomenon has no transmitter, code, or Information, it cannot possibly input anything.

But in principle, I do agree with you that “As more and more observations of qualities are made the brain embeds these qualities as structures used in connecting observations.”

@Shuttle128 said:
“When many instances of observation occur with similar qualities they create similar outputs.”

Again, I agree in principle. But what output can occur without codified Information?

@Shuttle128 said:
“The qualities of things that we observe behave similarly. These repeated observations of similar qualities create models of these qualities in our brains.”

That could very well be the case, as no actual thought/action is required for the neural networks to form the model. Just the pure awareness of observable phenomenon would seem to do the trick. Akin to background processing? Might help explain phobic reactions.

@Shuttle128 said:
“This means we can think about these qualities without observing individual instances…”

We can’t think about anything without a code to think the thought upon. Pure experiential awareness is not necessarily “thinking”.

Consider this scenario of the first observable sunrise. Pure experiential awareness is upon us, and nothing more. It may be the “cause” for a “reaction” of worship, happiness, fear… But thinking doesn’t occur until we describe the observation with code… bright, round, yellow, warm, big

Upon that description, a thought is formed by authoring codified Information to produce data.

@Shuttle128 said:
“…which leads us to believe that these qualities are universal.”

That could be. The inductive stepchild of abstract reasoning?

RealEyesRealizeRealLies's avatar

BTW… we love questions like this over at FrostCloud.

http://www.frostcloud.com/forum/

nebule's avatar

thanks @RealEyesRealizeRealLies I might see you there

RealEyesRealizeRealLies's avatar

@lynneblundell

Well come on over then. It’s where all the mean folk hang out. Fierce Theist / Atheist debates. Hell, even the Theists argue with each other like a bunch of lunatics. It’s delicious!

I am QuinticNon over there.

Shuttle128's avatar

@RealEyesRealizeRealLies Sorry for the ‘book’ again, but I think it was necessary to try to clarify my position.

It’s rather hard to talk about the computation methods of an object that does not compute data. Yes, technically the sensory perceptions are not really data, but the states of these sensory perceptions are analogous to the term data when speaking of computing. Thanks to our previous encounters I do have a good understanding of what constitutes true data. I used the term only because the system of our brain truly works as a neural network. Whether the sensory perceptions are true data or not does not change the operations that our brains perform on them. Experiential knowledge of the world is stored and computed in the same way that a neural network would store and compute data, so I see very little problem with using the term data in this manner. I chose the word data to make the analogies between brain and computation easier to draw. You’ll have to give me a break; I’m working on the philosophical, physical, and computational levels all at the same time, and it’s rather hard to connect all three without making some simplifications for brevity’s sake.

A natural classification is an arrangement of neurons that places similar experiential knowledge within certain neural pathways. Basically, when something is experienced, ‘redness’ for example, it activates pathways in the neural network. These pathways, when stimulated many times, create a stored representation (model) of the stimulation that we can recall separately from its experience, due to its relations to other observed or non-observed properties. When we observe something that contains ‘redness’, the observation of the item in question stimulates certain neuronal pathways, one (or many) of which is attributed to the observation of ‘redness.’ This explains why we think of ‘redness’ when we think of a Coke can: since the ‘redness’ pathway is invoked every time we observe a Coke can, the two are linked. Because the two concepts are physically linked in the brain’s structure, when we think about a Coke can we can imagine ‘redness’ as one of its qualities. The same could be said of ‘can-ness’, but I think I’ve made my point. When we experience things that are similar to Coke cans (perhaps a Pepsi can), they stimulate already established pathways (such as ‘can-ness’) that subsume the new observation, while also establishing new pathway arrangements relating to other observed qualities (such as ‘blueness’). The repeated observation of these similar qualities creates models in the brain that generalize observed events or qualities naturally by subsuming new observations under established pathways.
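
A minimal sketch of this subsumption idea (the features and pathway weights are invented for illustration): a new observation excites whichever established pathways its qualities overlap with, so a Pepsi can lights up ‘can-ness’ without touching ‘redness’.

```python
import numpy as np

# Invented feature order: [red, blue, cylindrical, metallic, sweet-smell].
# Established "pathways" are weight vectors built up by many past
# observations (the numbers here are illustrative, not measured).
pathways = {
    "redness":  np.array([1.0, 0.0, 0.1, 0.1, 0.2]),
    "can-ness": np.array([0.2, 0.2, 1.0, 1.0, 0.3]),
}

def activations(observation):
    """How strongly a new observation excites each established pathway."""
    return {name: round(float(w @ observation), 2)
            for name, w in pathways.items()}

coke_can  = np.array([1, 0, 1, 1, 1.0])
pepsi_can = np.array([0, 1, 1, 1, 1.0])  # no redness, but shares can-ness

print(activations(coke_can))    # {'redness': 1.4, 'can-ness': 2.5}
print(activations(pepsi_can))   # {'redness': 0.4, 'can-ness': 2.5}
```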

When we experience incomplete observations of these events, the paths that were excited by the complete observations are still partially excited. This excitement invokes a partial experience of the qualities attributed to the complete sensation, even when we do not fully experience it. This allows us to predict what observations we may expect based on incomplete sensations. When we do observe these predicted sensations, the perceptual connection between the two is reinforced. This is exactly like Hume’s view on causality: we view these coincident perceptions as causal simply due to induction at the physical level of the brain. For things that always happen coincidentally, there is only one outcome to the initial perception that the brain has experienced. The weaker the coincidental connection, the less we believe an inductive argument is confirmed. This explains why we believe that increasing the number of observed instances of something increases the likelihood that it is always this way.
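
A minimal Hebbian sketch of this “induction at the physical level” (the stimuli and numbers are invented): links between co-occurring perceptions strengthen in proportion to how often they coincide, so the strength of a learned expectation tracks the number of confirming observations.

```python
import numpy as np

# Hebbian co-occurrence: perceptions that coincide strengthen their link,
# and the link's strength tracks how often they have coincided.
# Units (invented): 0 = thunder, 1 = lightning, 2 = rain, 3 = rainbow.
n_units = 4
W = np.zeros((n_units, n_units))
RATE = 0.1

def observe(active):
    """Strengthen connections between all co-active perceptions."""
    s = np.zeros(n_units)
    s[list(active)] = 1.0
    W[:] += RATE * np.outer(s, s)
    np.fill_diagonal(W, 0)

for _ in range(20):
    observe({0, 1})              # thunder and lightning coincide often
observe({1, 3})                  # lightning with a rainbow, just once

# A partial input ("lightning" alone) now excites "thunder" far more
# strongly than "rainbow" -- expectation tracks observation counts:
lightning = np.array([0.0, 1.0, 0.0, 0.0])
print(W @ lightning)             # -> [2.   0.   0.   0.1]
```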

I don’t see what you have against calling experiential knowledge data (except that you don’t seem to accept transmitterless information). You’ve said that information is a product of mind, yet we can see very clearly that the brain makes classifications. Since the mind is a product of the brain’s configuration, the generalization of qualities and the linking of observed phenomena within the brain appear to create information. It may very well be that all of the information in our minds is a direct result of experiential knowledge. At one point we did not have language, and we have language now; therefore there must have been some point along the way where society developed language. If the pre-language period was based solely on experiential knowledge and there was no transmitter of language, then language must have been a product of experiential knowledge within the brain. If language is a product of experiential knowledge, then either code was spontaneously created or information already existed within experiential knowledge.

Things such as bright, round, yellow, and warm are observable and can be stored as experiential knowledge. These qualia are subsumed under specific neural pathways and can be generalized. Once the brain can perform these generalizations, combinations of these individual qualia can be observed. Obviously the brain is more complicated than I have shown here; there are millions of feedback control loops that allow higher thought processes to be built upon these foundational neural pathways of experience. Based on the development of the brain and its higher thinking abilities, I don’t believe thought or information is possible without these fundamental experiences.

RealEyesRealizeRealLies's avatar

Thanks for clarifying your position on this. We are truly experiencing a semantics issue, which obviously supports your other claim about meaning, transmitters, and receivers. I have more to say on that, but will leave it alone for now.

Oh boy, where to go from here?

Let’s isolate our differences on the term “experience”. That might clear the air on other issues. It seems what I call “Pure Experiential Awareness” is taken by you to be “Experiential Knowledge”. Is this correct? Is that why you hope to call it “data” when I reject that proposition?

If so, I might suggest that our two chosen terms are actually pinpointing the very hinge of when “awareness” becomes “knowledge”… Are we crossing the very bridge between the two?

The “Experiential Knowledge” that you speak of sounds like what Schrödinger called “negative entropy”. Brillouin modified it to “negentropy” and Szent-Györgyi called it “syntropy”. Unfortunately “syntropy” never really caught on, but I prefer that term best of all. Information Theory utilizes the term negentropy, so I’ll have to stick with that. Negentropy is our friend. My understanding of it is that it’s not so much a form of usable Information, but rather considered as extra Information, one that we may not even be aware we have.

Negentropy requires a normal (Gaussian) distribution to exist. Your description of a Natural Classification, “an arrangement of neurons that places similar experiential knowledge within certain neural pathways”, sounds eerily similar to a Gaussian distribution. If this is so, then when you say, “This can allow us to predict what observations we may expect based on incomplete sensations”, I would propose that to be a quantity of Negentropy.
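
(For reference, Information Theory defines negentropy against a Gaussian baseline: J(X) = H(X_gauss) − H(X), the entropy deficit relative to a Gaussian of the same variance. Here is a minimal sketch using a classic moment-based approximation; the sampled distributions are arbitrary examples.)

```python
import numpy as np

rng = np.random.default_rng(1)

def negentropy_estimate(x):
    """Moment-based approximation of negentropy (Jones & Sibson):
    J(x) ~ E[x^3]^2 / 12 + kurtosis(x)^2 / 48, for x standardized
    to zero mean and unit variance."""
    x = (x - x.mean()) / x.std()
    skew_term = (x**3).mean() ** 2 / 12.0
    kurt_term = ((x**4).mean() - 3.0) ** 2 / 48.0
    return skew_term + kurt_term

samples = {
    "gaussian": rng.normal(size=100_000),
    "uniform":  rng.uniform(-1, 1, size=100_000),
    "laplace":  rng.laplace(size=100_000),
}

# A Gaussian scores ~0; anything more structured carries "extra
# information" beyond what its variance alone accounts for.
for name, x in samples.items():
    print(f"{name:8s}: {negentropy_estimate(x):.4f}")
```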

Let’s start from this and see if we indeed can build a bridge between “Pure Experiential Awareness” and “Experiential Knowledge”.

Hope this helps.

Shuttle128's avatar

I take experiential knowledge to be the storage and recall of experience based observations. After reading about qualia and knowledge a while back, I was made aware that qualia must be a form of knowledge. Without the actual experience of qualia a person cannot say that they have all knowledge of ‘redness’ for example. They would be missing the experience of redness even if they knew every detail of how photons and the electromagnetic spectrum behaved. The same could be said for ‘circleness’ or any number of classifications. From this and my understanding of neural networks it appeared that all things we experience behave the same way. The way the brain works, it seems that all knowledge stems from our experience and classification of these experiences. The abstraction our brain performs while experiencing creates experiential knowledge. I suppose I call this data because there doesn’t appear to be any division between experiential knowledge and information based knowledge in the way it is codified in the brain.

To me, there is no fundamental difference between ‘cause—reaction’ and ‘thought—action.’ Every thought we have is ultimately the result of some cause, be it the previous state of the brain or some outside perturbation.

I always thought syntropy sounded better myself too, though I’ll use negentropy as well. The way I see negentropy is that it is a consequence of the rules imposed by living systems: simply a way of creating a state of order by exporting entropy from the system. I do agree with Schrödinger that negentropy seems to be a very good indicator of life, but as we’ve seen with man-made devices, negentropy does not necessarily mean life, nor does it necessarily mean information. I would say that the brain’s classification would be creating negentropy, since it expends energy to increase the order of a lesser-ordered system. However, through the expenditure of additional energy later, the brain can recall the states saved by the negentropy present. I would call this knowledge.

RealEyesRealizeRealLies's avatar

“it seems that all knowledge stems from our experience and classification of these experiences.”

Our differences are at the “and”.

What you call “experiential knowledge” is what I refer to as “pure experiential awareness”. Both, I take it, are simple reactions to stimuli based upon our sensory equipment. But you take it a step further to also include “classification”. And there is where we differ.

Classification requires Code. And although we may be quite aware of an object that has cause/effected our senses, by no means does that suggest any thinking has occurred. Thinking occurs at the point of Codified Description (classification). And that is the line I draw between cause/effect and thought/action.

From his birth, I used to take my son to see his grandmother twice a week. As he approached 7–8 months, he would start to look out the car window during the trip. He began to notice a giant blowup green dinosaur sitting in the parking lot of a used car dealer that we passed regularly. His jibber/jabber would increase as we passed the big green blob. Later as his vocabulary improved, he would become very excited and start to scream out “Thahhh”. He was trying to say “that”, and pretty soon he did… though it sounded more like “dat”.

As he grew, he gradually learned concepts and words for big, green, round, square, scary, and raurwwwrrr. He used these words and concepts to communicate his thoughts about the big green dinosaur to me on our drive. It got so bad at times that I often thought of taking a different route.

But the point is, his senses made him aware of the object, and his language skills allowed him to actually think about the object. He’s 13 now, and we just completed his science project on Bounce Effect. He also has limited knowledge of polymers and rubber compounds, and is also quite aware of what a used car dealership and fake cartoon dinosaur promotion balloons are. I have every confidence that if we were to see that old green dinosaur today, that his expanded vocabulary would allow him a much deeper thought capacity for that object.

If he pursues marketing, promotions, or some scientific textile career, I’m sure his ability to think about that old balloon would be expanded even further, and specifically because of his expanded vocabulary.

Shuttle128's avatar

I agree with just about everything you said, and I really enjoyed the explanation you gave.

What I think is that language allows more fluent and standardized classification of observations and abstract concepts. Language certainly allows a much better grasp of and capacity for thinking and expressing those thoughts. When language is learned, the natural classification that your brain performs is accelerated due to the connections we can make between language and real world entities. Since language is all about abstraction we can think about and communicate abstract things much more effectively.

Upon studying a little bit about feral children, I saw that not being socialized causes severe limitations in the ability to think abstractly. This may very well be because they didn’t learn language. However, I wouldn’t go so far as to say that they aren’t thinking, just that they are arrested in their ability to.
