If we don’t know what “thinking” is, then we can never know whether we have actually created a machine that can do it. And if we don’t really know what “intelligence” is, how can we know whether we’ve created a machine that has it?
We create machines to help us solve problems. So what are we looking for? A machine that doesn’t need to be programmed? One we can just give the general outline of a problem in words, and it will figure out how to solve it? Do we want it to act like a person? To somehow avoid the GIGO problem? I mean, humans aren’t very precise; it takes enormous effort for us to communicate clearly.
A machine intelligence would have to have wants, goals, and agendas to be motivated to do anything. But a machine intelligence created by humans will be created to pursue our agendas. Such an intelligence would be meaningless without humans. Besides which, who is going to hook up the electric grid, or maintain it when it goes down? The machine would need self-replicating helpers to sustain itself without us.
Then there’s the notion of “pure logic.” What is that, anyway? Everything depends on goals: logic tells you how to get where you want to go, but if you have no goal, logic doesn’t help you get anywhere.
Personally, I don’t think we will ever create an “artificial” intelligence. We might get more and more capable expert systems, but I think intelligence requires an evolutionary imperative. Without competition, intelligence offers no advantage. Machines exist whether or not they are intelligent, and they have no desires. Artificial intelligence requires artificial desires, and since we would still have to program those desires, they would never be real desires.
Bottom line: not gonna happen.