Spec Tech/Future Culture – Ethical to Life Saving

Inventor and engineer Ray Kurzweil has described the goal behind his inventing as beating death and "living forever." He knew early on that he wanted to be an inventor, founding his first company at the age of 18. Bill Gates has called him the best person he knows at predicting the future of artificial intelligence. I think it's important to examine the factors behind his motivations, and not just his futuristic mindset.

As a child, he would sit at the dinner table with his family, and their discussions would always turn to new ideas, inventions, and other insights of that sort. Kurzweil's view of human life is that death is a profound tragedy and that there is nothing good about it. His father's death affected him so deeply that his pain transformed into his biggest idea yet. Kurzweil always had the mindset of finding ideas to overcome challenges, but when he proposed the "Singularity," it raised the question of how exactly that could be done.

The Singularity is, in essence, a future in which technology evolves so rapidly that humans won't be able to keep up unless they artificially enhance their own intelligence. To meet this expectation, the Exponential Curve must be met head on. The Exponential Curve describes the accelerating growth of information technology. Human evolution is itself a good example of an exponential curve: the process accelerated from basic microorganisms all the way to the creation of a human being in physical flesh.
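To make the idea of the Exponential Curve concrete, here is a minimal Python sketch of my own (an illustration, not Kurzweil's actual model) comparing linear growth with exponential, doubling growth over the same number of steps. The specific numbers are arbitrary; the point is how quickly doubling outruns adding.

```python
# Toy comparison of linear vs. exponential (doubling) growth.
# Illustration only: the starting values and step counts are arbitrary.

def linear_growth(start: int, step: int, generations: int) -> list[int]:
    """Add a fixed amount each generation."""
    values = [start]
    for _ in range(generations):
        values.append(values[-1] + step)
    return values

def exponential_growth(start: int, factor: int, generations: int) -> list[int]:
    """Multiply by a fixed factor (e.g. double) each generation."""
    values = [start]
    for _ in range(generations):
        values.append(values[-1] * factor)
    return values

if __name__ == "__main__":
    gens = 30
    linear = linear_growth(start=1, step=1, generations=gens)
    doubling = exponential_growth(start=1, factor=2, generations=gens)
    print(f"After {gens} steps, linear growth reaches {linear[-1]},")
    print(f"while doubling reaches {doubling[-1]:,}.")
```

After 30 steps the linear sequence has only reached 31, while the doubling sequence has passed one billion, which is the shape of the curve Kurzweil is betting on.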

So, with those goals in mind, Kurzweil does answer, idealistically, how he plans to achieve immortality. He believes that GNR (Genetics, Nanotechnology, and Robotics) could overcome the biology of disease and death. Genetic reprogramming of our biology, nanotechnology (machines at a tiny scale), and robotics together with A.I. itself (machines matching human intelligence) are the three pillars of his solution. Eventually, he wants to create machines that stop us humans from aging, though of course not without enhancing ourselves first. Yet is it overly optimistic to expect immortality within our lifetime, or is it just plain dangerous to imagine it?

Let's say we do reach the era of A.I.; what if we are eventually replaced by it? A.I. and robotics would essentially have the same level of intelligence as we humans do, so how could this not be dangerous to our survival as a species? First off, they can take away our jobs, which is not so far from happening. With work such as medical procedures and toll collection already being taken over by robotics, who's to say this won't extend to every job that once required manual labor?

In this video from the Idea Channel, the question is raised of how ethical it is for us as humans to depend on A.I. Putting a limit on progress is something still unexplored; the idea of futuristic robots doing all of our jobs for us (including keeping us alive) is all we are focused on at the moment. If we can harness those ideas for the future, perhaps we can regulate its creation so that we enter the realm of A.I. safely, yet I think this is something creators either haven't considered or would rather not think about.

Wherever the concept of A.I. goes, we cannot become reckless in creating it. Whether it's Kurzweil's idea of living forever or our basic use of A.I. to do the hard labor that humans do right now, we cannot rule out the fact that creating such intelligence can come with great consequences. That should be what we explore next.