
Why Computers Won’t Make Themselves Smarter


TLDR

‘Technological singularity’ describes the hypothetical point at which computers achieve a rate of self-improvement that is both uncontrollable and irreversible, resulting in unforeseeable changes to civilization. The article examines this idea and whether it is actually feasible.
In the case of humans, this would imply that a certain IQ grants the ability to reach the next IQ: a person with an IQ of 100 would know how to solve the problem of increasing it to 120, and so on. One could argue that this ability only appears beyond some threshold (say an IQ of 300), but even then there is no evidence that the process is recursive. Chiang's analogy is the compiler. A typical software program must go through a compiler, which translates written code into computer-readable instructions. If you had a slow compiler (C1), you could write the source for a better one and run it through C1 to create an executable (C2). You could then use C2 to compile its own source again, producing a fully optimized C3. But this is the end of the road: compiling that source once more with C3 simply yields C3 again. The process reaches a fixed point of efficiency and speed, not open-ended recursive optimization.
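A minimal sketch of that fixed point (mine, not the article's), assuming we model a compiler binary by two invented numbers: how good the code it emits is, and how well it was itself compiled. The class name Compiler, its compile method, and the quality values are all hypothetical illustration.

```python
# Illustrative sketch (not from the article): a compiler binary modelled by
# the quality of the code it EMITS and how well it ITSELF was compiled.
# Self-compilation reaches a fixed point instead of improving forever.

class Compiler:
    def __init__(self, codegen_quality, own_speed):
        self.codegen_quality = codegen_quality  # quality of binaries it produces
        self.own_speed = own_speed              # set by whichever compiler built it

    def compile(self, source_codegen_quality):
        # The output runs as well as THIS compiler can make it, and will emit
        # code only as good as its SOURCE describes.
        return Compiler(codegen_quality=source_codegen_quality,
                        own_speed=self.codegen_quality)

# C1: a slow existing compiler that emits mediocre code.
c1 = Compiler(codegen_quality=1, own_speed=1)

# Source for a better optimizing compiler (quality 2).
better_source = 2

c2 = c1.compile(better_source)   # emits quality-2 code, but still runs slowly
c3 = c2.compile(better_source)   # emits quality-2 code and runs at quality-2 speed
c4 = c3.compile(better_source)   # identical to C3: the fixed point

print(c2.codegen_quality, c2.own_speed)  # 2 1
print(c3.codegen_quality, c3.own_speed)  # 2 2
print(c4.codegen_quality, c4.own_speed)  # 2 2 -> no further improvement
```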
Recursive improvement relies on the work of a compiler, since that is what builds the new systems. Current compilers can only optimize for what they know: some are built for specific languages and domains, but no general-purpose compiler can both compile anything and improve on itself. Even the best AI programs we have learn from patterns and inputs that have been defined or computed; writing an AI program without knowledge of its inputs or the correct responses would be close to impossible. Optimization is desirable, but achieving general-purpose optimization requires a truly ‘human’ understanding of what that even means.
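A similar toy sketch (again mine, not the article's) of the point about defined inputs and correct responses: the training loop below can only improve relative to labelled examples and a loss chosen by a person. The function name train, the data, and the learning rate are all invented for illustration.

```python
# Illustrative sketch (not from the article): a one-parameter model trained by
# gradient descent. "Improvement" is defined only relative to labelled examples
# and a human-chosen loss; remove the labels and there is nothing to optimize.

def train(examples, steps=100, lr=0.1):
    w = 0.0
    for _ in range(steps):
        for x, y in examples:          # y is the "correct response" we must already know
            pred = w * x
            grad = 2 * (pred - y) * x  # gradient of squared error: the human-defined "better"
            w -= lr * grad
    return w

# Labelled data: the system can only get "better" at reproducing these answers.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(train(data))  # converges toward 2.0, the relationship defined by the labels
```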
Chiang finishes with a comparison to human civilization as a whole. We have come a long way thanks to technologies and tools: we have used simple, primitive tools to create more complex, special-purpose ones. We rely on the work of past civilizations, groups of individuals identifying context-specific problems and the tools that could solve them. Why would the same not be true for computers? How could an AI optimize itself in isolation, without an understanding of context or an opinion about what is best? A network of optimized computers could ‘act’ as a general intelligence explosion, but not as a singularity.
