Note: In the thread mentioned above, I made the observation that “The average person in today’s high-tech society will process in one day the same amount of information the average person in Elizabethan times would process in a lifetime.”
@RealEyesRealizeRealLies replied, asking if I thought it was related to Moore’s Law.
I don’t necessarily see this as an application of Moore’s Law, unless one sees it in relation to some other futurist theories, such as Ray Kurzweil’s Technological Singularity or Terence McKenna’s Novelty Theory.
All three theories are similar in that they identify a pattern in past events and, holding certain assumptions constant, extrapolate that pattern forward to a particular point in the future.
Moore’s Law was originally an observation about the number of transistors on an integrated circuit: that number has been increasing exponentially, doubling roughly every two years, and the expectation is that it will continue to do so until some point in the future.
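To put rough numbers on that, here’s a quick back-of-the-envelope sketch in Python. The two-year doubling period and the starting point of about 2,300 transistors in 1971 (the Intel 4004) are just illustrative anchors, not an exact historical fit:

```python
# Back-of-the-envelope Moore's Law sketch. The two-year doubling period
# and the 1971 / 2,300-transistor starting point (the Intel 4004) are
# illustrative assumptions, not a precise fit to chip history.
def transistor_count(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected count: base_count * 2^((year - base_year) / doubling_years)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2031):
    print(f"{year}: ~{transistor_count(year):,.0f} transistors")
```

Run it and the counts go from thousands to trillions in sixty years; that compounding is the whole engine behind the statistic in my original observation.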
Although Moore’s Law deals with computing power, it says nothing about artificial intelligence. That is where Kurzweil’s Technological Singularity comes in. The theory is that once AI is a reality, it will advance to the point where an AI machine can design and build a more advanced AI, each generation improving on the last, with the pace of improvement increasing exponentially until it reaches a technological-evolutionary point known as “the singularity”. Current speculation, based on Kurzweil’s calculations, places this point somewhere in the 2040s.
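To see why that kind of runaway improvement can converge on a finite date rather than just growing forever, here’s a minimal sketch with entirely made-up numbers (the start year, the 10-year first gap, and the per-generation speedup are my inventions, not Kurzweil’s model): if each AI generation designs its successor in half the time the previous one needed, the gaps form a geometric series with a finite sum.

```python
# Illustrative only: the start year, first gap, and 2x per-generation
# speedup are invented numbers, not Kurzweil's actual model.
start_year = 2025   # hypothetical "first self-improving AI" date
gap = 10.0          # years for generation 1 to design generation 2
speedup = 0.5       # each generation needs half the time of the last

year = start_year
for generation in range(2, 9):
    year += gap
    print(f"generation {generation} arrives around {year:.2f}")
    gap *= speedup

# The gaps 10 + 5 + 2.5 + ... form a geometric series summing to
# 10 / (1 - 0.5) = 20 years, so no generation arrives after 2045:
# infinitely many improvement steps get squeezed into a finite window.
```

With these toy numbers the limit happens to land near Kurzweil’s mid-2040s estimate, but the point is the shape of the math, not the date.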
The Novelty Theory, also known as Timewave Zero, states that every great change in the history of the universe has been the result of “novelty”: something that never existed before. Some examples: technology where there was none; economy where there was none; agriculture where there was none; life where none existed before; organic compounds where none existed. These novelties, according to McKenna, are occurring at an increasing rate, and even the rate of increase is increasing.
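“Even the rate of increase is increasing” just means the growth is more than exponential in feel: the rate goes up, and the jumps between successive rates go up too. A toy illustration with invented numbers (this is not McKenna’s actual Timewave function): suppose each novelty arrives after a gap 20% shorter than the last.

```python
# Toy model only: the 20% shrink factor is invented; this is not
# McKenna's Timewave Zero function. Each "novelty" arrives after a
# gap 20% shorter than the last, so the rate of novelty (1/gap)
# grows, and the jumps between successive rates grow as well.
gap, gaps = 100.0, []
for _ in range(6):
    gaps.append(gap)
    gap *= 0.8   # each interval is 20% shorter than the previous one

rates = [1 / g for g in gaps]                       # novelties per unit time
jumps = [b - a for a, b in zip(rates, rates[1:])]   # rate of increase
print("rates:", [round(r, 4) for r in rates])
print("jumps:", [round(j, 5) for j in jumps])
# Both lists increase: the rate is rising, and its rate of rise is rising.
```

Notice the family resemblance to the singularity sketch above: shrinking gaps between events are exactly what pulls an “end point” into finite time.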