
Why Computers Won't Make Themselves Smarter

We both fear and yearn for "the singularity." Yet it's unlikely ever to happen.



In the eleventh century, St. Anselm of Canterbury proposed an argument for God's existence that went roughly like this: God is, by definition, the greatest being we can imagine; a God who doesn't exist is clearly not as great as a God who does; ergo, God must exist. This is known as the ontological argument, and it has persuaded enough people that it is still being debated nearly a thousand years later. Its critics object that the argument simply defines a being into existence, which is not how definitions work.


God isn't the only being that people have tried to define into existence. In 1965, the mathematician Irving John Good wrote, "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever":


Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.


The notion of an intelligence explosion was revived in 1993 by the author and computer scientist Vernor Vinge, who coined the term "the singularity" for it, and the idea has since gained prominence among technologists and philosophers. Books such as Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies," Max Tegmark's "Life 3.0: Being Human in the Age of Artificial Intelligence," and Stuart Russell's "Human Compatible: Artificial Intelligence and the Problem of Control" all describe scenarios of "recursive self-improvement," in which an artificial-intelligence program repeatedly designs a better version of itself.