It’s worth studying AI partly because it’s very disillusioning. I wrote a Bayesian spam filter for a Mac program (Spamfire). When I was done, I thought it was really cool how quickly the program learned what was spam and what wasn’t. But it’s completely mechanical; it’s just sorting words and doing a little statistics. All AI is ultimately like that: completely mechanical. Seeing the mystery taken out of it was very sobering.
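Spamfire's actual code isn't shown here, but a toy sketch makes the point about how mechanical it is. The `BayesFilter` class and tokenizer below are my own illustration, not Spamfire's implementation: count how often each word appears in spam versus non-spam, then combine those counts with naive Bayes (plus a little smoothing so unseen words don't zero everything out). That's the whole trick.

```python
from collections import Counter

def tokenize(text):
    return text.lower().split()

class BayesFilter:
    """A toy Bayesian spam filter: sort words, do a little statistics."""

    def __init__(self):
        self.spam_words = Counter()   # word counts seen in spam
        self.ham_words = Counter()    # word counts seen in non-spam
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, text, is_spam):
        # "Learning" is nothing but incrementing counters.
        words = tokenize(text)
        if is_spam:
            self.spam_words.update(words)
            self.spam_msgs += 1
        else:
            self.ham_words.update(words)
            self.ham_msgs += 1

    def spam_probability(self, text):
        # Naive Bayes with Laplace (add-one) smoothing:
        # multiply per-word likelihoods, then normalize.
        spam_total = sum(self.spam_words.values())
        ham_total = sum(self.ham_words.values())
        p_spam = self.spam_msgs / (self.spam_msgs + self.ham_msgs)
        p_ham = 1 - p_spam
        for w in tokenize(text):
            p_spam *= (self.spam_words[w] + 1) / (spam_total + 2)
            p_ham *= (self.ham_words[w] + 1) / (ham_total + 2)
        return p_spam / (p_spam + p_ham)

f = BayesFilter()
f.train("win free money now", True)
f.train("lunch meeting tomorrow", False)
score = f.spam_probability("free money")   # well above 0.5 after training
```

After a couple of training messages the filter "knows" that "free money" looks spammy, but there is no understanding anywhere in it, just counters and arithmetic.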
Neural networks get a lot of airtime because they’re modeled on the human brain. But imagine trying to emulate a computer by modeling its transistors; I don’t think you’d get very far. Even if you did, you’d still have the same input/output problem: you have to define inputs and outputs and train the thing. There’s no such thing as undirected learning for a computer. There’s an AI koan about randomly wiring a neural network; the same would go for random training. We only get results when we have a defined problem domain and do some directed training. We only have rote, mechanical methods because that’s all a computer can help you do.
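The same mechanical quality shows up at the neural-network level. As an illustration (my own example, not from the text): the smallest possible "directed training" is a perceptron learning logical AND. The problem domain is fully defined in advance (four input pairs, four target outputs), and the "learning" is nothing but an arithmetic correction applied over and over.

```python
# The inputs and outputs must be defined up front: here, the truth table of AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # a fixed number of passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # purely arithmetic correction
        w[0] += lr * err * x1            # nudge each weight toward the answer
        w[1] += lr * err * x2
        b += lr * err

# After training, the perceptron reproduces the table it was directed toward.
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
```

Nothing here resembles undirected learning: someone had to pick the inputs, the targets, and the update rule before the "learning" could happen at all.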
In short, strong AI belongs to fiction, not science.