Author Topic: Food for thought = types of reasoning  (Read 3766 times)

Bill819

Food for thought = types of reasoning
« on: August 12, 2004, 04:17:18 pm »
In our quest for improving Hal there are several types of reasoning that must be considered.
As we know, Hal has some Deductive Reasoning capabilities, e.g.:
1. Bob is fat
2. Fat people tend to die young
3. Bob may die young
-------------------
1. All dogs have four legs
2. Rover is a dog
3. Rover has four legs
There are lots of other examples that somewhat follow that line of thinking.
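The syllogisms above can be sketched as a tiny forward-chaining deducer. This is only an illustration of the idea, not Hal's actual internals; the fact and rule formats here are my own invention.

```python
# Facts are (subject, category) pairs, e.g. ("Rover", "dog").
# Rules are (premise, conclusion) pairs, e.g. "all dogs are four-legged".

def deduce(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subject, category in list(facts):
                if category == premise and (subject, conclusion) not in facts:
                    facts.add((subject, conclusion))  # new deduction
                    changed = True
    return facts

facts = {("Rover", "dog")}
rules = [("dog", "four-legged")]  # "All dogs have four legs"
print(("Rover", "four-legged") in deduce(facts, rules))  # True
```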
But then there are some types of reasoning that we may classify as Fuzzy Logical Reasoning.
Take the following example: Say we have three objects, A, B, and C.
Each of the three contains three distinct qualities, so if items 1 and 2 in object A are like items 1 and 2 in object B, we can say that A is similar to B. Now if items 2 and 3 in object B are like items 2 and 3 in object C, we can say B is similar to C. However, if we say that object A is similar to object C, we would not be totally correct. Now we come to predefining some percentages in fuzzy logic.
We are talking about three distinct qualities here, so if all three match we can say 'very similar', if 2 out of three match, or 66 2/3 percent, we can say 'similar', and last but not least if only 1 out of three match, or 33 1/3 percent, we can say 'only slightly similar'. Now our thinking is starting to approach human reasoning.
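This fuzzy-similarity scheme is easy to sketch in code. The objects and qualities below are made up for illustration; note how A is similar to B and B to C, yet A is only slightly similar to C, which is the non-transitivity point made above.

```python
def similarity(a, b):
    """Fraction of corresponding qualities that match."""
    matches = sum(1 for qa, qb in zip(a, b) if qa == qb)
    return matches / len(a)

def describe(score):
    """Map a match fraction onto the fuzzy labels from the post."""
    if score == 1.0:
        return "very similar"
    if score >= 2 / 3:
        return "similar"
    if score >= 1 / 3:
        return "only slightly similar"
    return "not similar"

A = ("red", "round", "small")
B = ("red", "round", "large")   # shares qualities 1 and 2 with A
C = ("blue", "round", "large")  # shares qualities 2 and 3 with B

print(describe(similarity(A, B)))  # similar
print(describe(similarity(B, C)))  # similar
print(describe(similarity(A, C)))  # only slightly similar
```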
The game is not over yet. There are several more types of reasoning that have not been discussed, two of which are Associative Reasoning and Relational Reasoning. We won't even get into Intuitive Reasoning and some of the others. All of these items or functions help create a 'mind' that can come up with some good logical thinking if the correct kind of data is fed into it.
O.K., now Hal can carry on a much better conversation with us, but the program still waits for us to say something before it analyzes what has been spoken. In human terms this might be referred to as a 'right brain' function. If we say nothing, Hal just sits there and does nothing.
Now let us discuss the human 'left brain' functions. These do a lot of analytical work, including math and spatial problems. A lot of spoken words can be broken down into mathematical functions. In the English language, verbs can be equated with the mathematical equals symbol (=), e.g. 'The ball is red' is the same thing as saying ball = red.
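A very naive sketch of this verb-as-equals idea: pull the subject and value out of a simple "The X is Y." sentence. Real sentences are far messier; this handles only the trivial pattern, and the function name is my own.

```python
def parse_assertion(sentence):
    """Turn 'The ball is red.' into the assertion ('ball', 'red').
    Only handles trivial 'X is Y' sentences; a sketch, not a parser."""
    words = sentence.strip(" .").lower().split()
    if "is" in words:
        i = words.index("is")
        return (words[i - 1], words[i + 1])  # subject = value
    return None

print(parse_assertion("The ball is red."))  # ('ball', 'red')
```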
We also need to create broad categories of objects, e.g. families, friends, homes, etc. I know that Hal has hundreds of object files, but he needs some broader types that he can combine many of the others into.
What this is leading up to is another brain or thinking program that might be run at the same time, or at night when the user retires for the day. This program would be designed to open groups of files, read and compare the data stored there, and hopefully come up with new assertions or conclusions, which could be stored in new files.
In the morning when the user returns to Hal, Hal could ask specific questions about some of the data that he has read and some of his new conclusions. Over a period of time Hal will have acquired a lot of real information and will better know how to work with it. This down time or thinking time could easily be called deep thinking, or dreaming if you will. The point is that Hal could be running 24/7, and when it is not being directly accessed by the user it could be learning from what it had already been exposed to. The end result is that Hal could become SELF AWARE.
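The overnight "dreaming" pass could be sketched like this: scan the stored facts, chain them transitively into new conclusions, and queue those conclusions as questions for the morning. The triple format and example facts are illustrative only, not anything Hal actually stores.

```python
# Stored facts as (subject, relation, object) triples -- my own format.
facts = {
    ("rover", "is-a", "dog"),
    ("dog", "is-a", "animal"),
    ("animal", "is-a", "living-thing"),
}

def dream(facts):
    """Derive new (x, 'is-a', z) facts by chaining x->y and y->z."""
    known = set(facts)
    derived = set()
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(known):
            for (y2, r2, z) in list(known):
                if r1 == r2 == "is-a" and y == y2:
                    new = (x, "is-a", z)
                    if new not in known:
                        known.add(new)
                        derived.add(new)
                        changed = True
    return derived

# Next morning, turn each new conclusion into a question for the user.
for x, _, z in sorted(dream(facts)):
    print(f"Is it true that {x} is a {z}?")
```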
Bill
 

vonsmith

Food for thought = types of reasoning
« Reply #1 on: August 12, 2004, 08:22:51 pm »
Bill819,
Interesting thoughts Bill. In my quest for elucidation on the nature of thought I have experimented with similar musings. One of the problems with natural human languages is the non-specificity of some phrases and sentences. Many words have multiple meanings. Orange is a color and a fruit. The sentence "You are blue." can mean you feel sad, or maybe you painted yourself blue. Many words can operate in a variety of sentence functions. The word "fast" can be used as a noun, adverb, adjective, or intransitive verb. The human brain can usually sort out which sense applies. Rules for this are inherently complicated. Current AI has a big challenge here.

There are two related invented languages that can be used to construct sentences with logically specific meaning: Lojban and Loglan. The creator(s) of these languages wanted to create a logical, unambiguous language for many purposes, including talking to computers. You can Google to find more info on this.

I thought it might be possible to pre-process user input sentences by converting them into Lojban and do all the AI "thinking" in Lojban, then translate back into the user's language for the output. This is a tremendous amount of work. The key benefit is that the AI brain would contain knowledge that is not limited by any given language and would be logically specific. The input and output translation processes would be designed for the user's language. The AI brain itself would "think" the same in French, Zulu, English or whatever because the brain itself "thinks" in logical Lojban.
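The translate-think-translate architecture could be sketched as below. To be clear, the "translations" here are stand-in lookup tables, not real Lojban; the sketch only shows the shape of a pipeline where the brain reasons over a language-neutral internal form.

```python
# Stand-in translation tables (NOT real Lojban -- illustration only).
TO_INTERNAL = {"the ball is red": ("ball", "red")}
FROM_INTERNAL = {("ball", "red"): "The ball is red."}

def think(assertion):
    """The brain operates only on the internal representation.
    Here the 'thought' is an identity pass-through for the sketch."""
    return assertion

def respond(user_sentence):
    """user language -> internal form -> 'thinking' -> user language."""
    internal = TO_INTERNAL[user_sentence.lower().strip(" .")]
    result = think(internal)
    return FROM_INTERNAL[result]

print(respond("The ball is red."))  # The ball is red.
```

The point of the design is that only `TO_INTERNAL` and `FROM_INTERNAL` depend on the user's language; the `think` stage would be identical for French, Zulu, or English.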

Eliminating ambiguity from the written sentence is only one small improvement. Sentences with the same meaning can be constructed in many different ways. Once you add in slang and colloquial forms, the number of possible sentence configurations is frightening.

Assuming one can conquer the language issue there is still the issue of the AI understanding the physics of the real world. For an AI to understand something as simple as water requires a lot of knowledge. Are the following knowledge statements true?

1) Water is wet. (Not if it is frozen; then it is ice.)
2) Water is a liquid. (Not if it is ice or snow.)
3) Water is clear. (Not if it is snow.)

To really "think", an AI would have to understand the nature of water and understand the effect the environment has on water. The AI would have to know things humans take for granted.

1) You can't drink ice.
2) You can't swim in snow.
3) Melting snow or ice results in water.
4) You can skate on ice and ski on water.

I tried theorizing that knowledge of the physical world could be modeled in a huge database that the AI could call upon. The database would have to contain dozens, perhaps hundreds, of relationships for each object in the database. Relationships could be mathematical.

(water) + (<32deg temperature) = ice
(water) + (>32deg temperature) + (dirt) = mud
etc...
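Those two relationships could be stored along the following lines. The rule format, condition names, and 32-degree threshold are lifted straight from the examples above, but the encoding itself is just one possible sketch of such a database.

```python
# Each rule maps a set of required conditions to a resulting object.
RULES = [
    ({"water", "temp<32F"}, "ice"),
    ({"water", "temp>32F", "dirt"}, "mud"),
]

def outcome(conditions):
    """Return the result of the first rule whose conditions all hold."""
    for required, result in RULES:
        if required <= conditions:  # subset test: all conditions present
            return result
    return None  # no stored relationship applies

print(outcome({"water", "temp<32F"}))          # ice
print(outcome({"water", "temp>32F", "dirt"}))  # mud
```

The combinatorial problem described next is visible even here: every new object multiplies the number of condition sets the table must cover.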

It quickly becomes clear that the number of relationships would be astronomical. We humans take this for granted. Humans can also predict the outcome of one object acting on another. If someone hits you in the head with a rock, you might die. If someone hits you in the head with an orange, you will get messy. If someone hits you in the head with a spitwad, you might get angry. Predicting the outcome of these simple statements requires a tremendous amount of knowledge and understanding of the physical world. Sadly, I can't foresee AI becoming truly aware at that level in my lifetime. However, we can help program an AI that can think a little and maybe fake it the rest of the time.

Well enough of that abstract stuff. I guess I need to get back to theorizing something more practical. Thanks for your thoughts.


=vonsmith=
« Last Edit: August 12, 2004, 08:25:14 pm by vonsmith »