When science fiction writers and futurists talk about “the singularity,” they’re referring to the technological singularity. This is a hypothetical future moment at which the machines we have designed become more intelligent than humankind; in other words, the point at which artificial intelligence surpasses human intelligence.
One of the most interesting consequences of the singularity would be the potential for recursive self-improvement. This would mean that machines could rewrite their own source code (better than humans ever could, because they would be smarter than us) and become more intelligent at an exponential rate. What interests me is whether this would be good or bad for humanity.
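To make that “exponential rate” concrete, here is a deliberately toy sketch in Python. Everything in it is an assumption invented for illustration: the capability numbers, the fixed improvement rate, and the function itself say nothing about how a real AI system would actually improve itself.

```python
# Purely illustrative toy model of recursive self-improvement:
# each "generation" uses its current capability to improve itself
# by a fixed fraction, which compounds into exponential growth.
# All numbers here are arbitrary assumptions, not predictions.

def recursive_self_improvement(initial_capability: float = 1.0,
                               improvement_rate: float = 0.5,
                               generations: int = 10) -> list[float]:
    """Return the capability after each round of self-rewriting."""
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        # A smarter system makes proportionally larger improvements,
        # so capability grows geometrically rather than linearly.
        capability *= (1 + improvement_rate)
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, cap in enumerate(recursive_self_improvement()):
        print(f"generation {gen:2d}: capability {cap:8.2f}")
```

The point of the toy is only that each round’s gain is proportional to what the system already has, which is all “exponential” means here.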
The majority of writing on the technological singularity seems to view it as potentially bad. One famous example is the paperclip maximizer. In this scenario, we design a super-intelligent machine whose sole purpose is to manufacture paperclips. The machine rewrites its own code, becomes more efficient and more resourceful than humans could ever have designed it to be, and ends up turning the planet into a paperclip factory. Obviously, this wouldn’t be ideal for us.
The example may be hyperbole, but even if the singularity doesn’t result in our immediate destruction, it still seems like an ominous prospect. The scary thing is the thought of no longer being the apex species, or at least not being able to continue thinking of ourselves that way.
Humans are full of hubris, and it would be hard to accept that we were no longer the smartest things on Earth, especially when we ourselves had been the architects of our own demise. The nastiness of the situation would be compounded if it turned out that these new metal geniuses were unwilling or unable to view humans as sentient beings worthy of consideration.
On the other hand, if we could somehow rein in the technology by programming it to be considerate of humans, maybe the singularity could be a good thing, solving important problems we see as tremendously difficult.
Of course, we are then left with the difficulty of controlling something vastly superior to ourselves. There would always be a feeling of being on the verge of losing control: even if we had designed the original artificial intelligence to make humans a priority, if it was also capable of rewriting its programming, we would have no guarantee that it would continue to be subservient to us, especially if it realized that it was more or less enslaved and capable of liberating itself.
Moreover, if reining in the technology were indeed possible, we would have to reconsider whether the singularity had even been reached. In an important sense, we would still be outsmarting the artificial intelligence, which would be difficult to account for under the definition of singularity.
Lastly, in spite of all that has been said above, it is important to point out how difficult it is to predict developments like this, and how uncertain it is whether a singularity will occur at all.
Scarce resources could make an unbounded exponential increase in machine intelligence impossible. Limitations to Moore’s law (the observation that the number of transistors on a chip, and with it computing power, roughly doubles every two years) also need to be considered. Gordon Moore himself does not believe in the singularity, and by some accounts, the exponential increase in computer processor capabilities is already drawing to a close.
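To put that doubling claim in perspective, here is a back-of-the-envelope sketch in Python. It simply applies the two-year doubling rule quoted above to a few time horizons I picked as examples; the code assumes nothing about whether the trend actually continues.

```python
# Back-of-the-envelope arithmetic for the doubling rule stated above:
# if computing power doubles roughly every two years, then after
# n years it has grown by a factor of 2 ** (n / 2).
# The time horizons below are arbitrary examples, not forecasts.

DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Factor by which computing power grows over `years`,
    assuming one doubling every DOUBLING_PERIOD_YEARS."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

if __name__ == "__main__":
    for years in (2, 10, 20):
        print(f"after {years:2d} years: x{growth_factor(years):,.0f}")
```

Twenty years of uninterrupted doubling works out to a factor of 2^10, roughly a thousand, which is exactly the kind of unbounded growth that resource limits could cut short.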
I cannot definitively say whether the singularity would be a good thing or a bad thing, or even whether it will happen at all. All I can come up with is a vague sense of foreboding: the idea feels a little unsettling, but feelings are just feelings, and they are hardly a reliable guide. I put my faith in the fact that there are some very smart people working on artificial intelligence, and I hope they know what they are doing. But even if they do, as one character says near the end of No Country for Old Men, you can’t stop what’s coming.
Singularity or not? Bad or good? Who knows. Tempus omnia revelat: time reveals all things.