First, a note about the Turing test: it is not meant as a direct test for consciousness (which I take it is what you mean by the term “self-awareness,” though serious philosophers would distinguish the two). Daniel Dennett has discussed this at length in various places and formats. One of the thought experiments he uses is called the “great cities test.”
This test picks out a set of criteria that are indicative—but not determinative—of a great city. Such a test could be fooled by someone stupid and rich enough to waste resources securing the indicators while neglecting the things that typically accompany them, but doing so would simply not be worth the cost. So if we know that a city passes the great cities test, we have no grounds for doubting that the city is great.
Similarly, the Turing test focuses on something that is indicative—but not determinative—of consciousness. If a machine can behave in a certain way (in this case, if it can imitate unrestricted human conversation) as convincingly as a human being, then we have no grounds for doubting that it is conscious. This is, after all, one of the key ways in which we come to believe that other people are conscious beings.
Note, then, that the Turing test is not meant to define consciousness or characterize it in any way. All it does is give us a method for detecting conscious entities. The claim Turing makes is exceedingly modest. He is not even saying that the test separates conscious beings from non-conscious beings; for all he has said, there may be conscious beings who cannot pass it. He restricts himself to the claim that any entity that does pass the test must be presumed to be conscious.
Second, it seems to me that the linked article sets up a straw man by taking the functionalist paradigm of consciousness to stand or fall with Ray Kurzweil’s specific views. The broader project of artificial intelligence research can survive every single one of the points made therein, even if those points, were they true, would cost Kurzweil his bet with Mitch Kapor. That is, the so-called “singularitarians” are not coextensive with the functionalists, so consciousness could still be an information process—though I don’t know anyone who would say there is anything “mere” about it—even if Kurzweil’s particular model proves to be flawed.