PaKii94 wrote:Well, theoretically, if we were able to create a legitimate AI, it wouldn't have the biological constraints that humans have (speed and capacity of the nervous system, energy requirements, sleep requirements, etc.), so it wouldn't need millions of years to evolve. Think of how many repetitions over minutes/hours/days children need to learn simple math concepts. For an AI (with enough power), those repetitions would happen in unfathomable numbers in fractions of a second. Now give it hours/days/years to learn and advance.
There's a pretty huge gap between "needing millions of years to develop" and "instantaneous god-powered AI that can do anything," isn't there?
And the AI is bound by physical constraints on how much processing power it has. The assumption that it can become infinitely smart and infinitely capable without building infinitely better hardware doesn't follow. An improved algorithm can only do so much without improved horsepower to run it.
Moore's law is falling apart, and its continuing for another 50 years is one of the fundamental underpinnings of this research. We have hit the physical limits of current processor design and are now in a period of stagnation, far from the exponential growth required to reach the processing power predicted to be necessary to be "as smart as a human".
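To put a number on just how much that exponential assumption carries, here's a rough Python sketch (the classic ~18-month doubling period is an assumption, and actual scaling has already slowed well below it):

# What uninterrupted Moore's-law doubling would imply.
DOUBLING_PERIOD_YEARS = 1.5  # classic Moore's-law figure (assumed)

def moores_law_factor(years):
    """Multiplicative growth in transistor count after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (10, 25, 50):
    print(f"{horizon} years -> ~{moores_law_factor(horizon):.1e}x")
# 50 years of doubling means a ~1e10x increase -- the kind of growth
# the singularity timeline quietly depends on, and the kind we've lost.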
This isn't to say that in the future (be it 1, 10, 100, or 1000 years) we won't have the ability to create a smarter-than-human AI that can improve itself. I'm just saying that the idea that this smarter-than-human AI reaches a singularity and becomes godlike in a matter of instants, because it hits some point where it can improve itself faster and faster ad infinitum, is unlikely.
It assumes that an algorithm simply CAN be improved by this infinite amount, rather than that the algorithm may very quickly reach a point of maximum efficiency, after which further gains only come from more hardware cycles. That is an extremely poor assumption (one I would say borders on ridiculous), and we already know of hard algorithmic ceilings, as the sketch below shows.
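Comparison-based sorting is a concrete example of such a ceiling: it's proven that no comparison sort can beat roughly n*log2(n) comparisons, so once you're on a near-optimal algorithm, the only speedup left is hardware. A minimal illustration in Python (the timings are illustrative, not benchmarks):

import math, random, time

def bubble_sort(xs):
    """Deliberately naive O(n^2) sort, for contrast."""
    xs = xs[:]
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

n = 5000
data = [random.random() for _ in range(n)]

t0 = time.perf_counter(); bubble_sort(data); t1 = time.perf_counter()
sorted(data); t2 = time.perf_counter()  # Timsort: within a constant of optimal

print(f"naive O(n^2) sort:  {t1 - t0:.3f}s")
print(f"near-optimal sort:  {t2 - t1:.3f}s")
print(f"lower bound: any comparison sort needs ~{n * math.log2(n):,.0f} comparisons")
# Bubble sort -> Timsort is the 'improve the algorithm' win. Past that,
# the lower bound says only faster hardware buys more speed.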
If you took the most efficient programming deemed possible today and tried to run it on a 20 MHz computer from the 1980s, do you think you'd be able to run modern software simply by improving the algorithm? Or do you think we required the orders-of-magnitude improvement in processing power?
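On clock speed alone, ignoring the even larger gains from cores, caches, and instructions-per-cycle, the gap is already huge. Back-of-the-envelope arithmetic (the ~4 GHz figure is an assumed typical modern core):

# 1980s machine vs. a modern CPU, clock speed only.
old_hz = 20e6  # 20 MHz, late-1980s class machine
new_hz = 4e9   # ~4 GHz modern core (assumed)

print(f"clock-only speedup: {new_hz / old_hz:.0f}x")  # -> 200x
# Add ~10x from IPC and 8+ cores and the practical gap is on the order
# of 10,000x -- no amount of algorithmic cleverness closes that alone.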