Convergence Paper
Introduction to Nanotechnology
Dr. James Smith
CONVERGENCE
Introduction
In the nanotech world, the concept of convergence originated with a 2001
conference organized by the U.S. National Science Foundation and the U.S.
Department of Commerce titled Nanotechnology, Biotechnology, Information
Technology and Cognitive Science (NBIC): Converging Technologies for Improving
Human Performance.
It was a recognition that the research being conducted in a number of
disciplines was coming closer together around new developments at the nanoscale.
Nanoscience was seen as a revolutionary way of thinking about, and solving,
problems at the molecular level in a variety of fields; it sits at the intersection
between classical physics, which governs the world that we see and interact with,
and quantum mechanics, which governs interactions at the atomic level. The
conference's named fields of biotechnology, information technology, and cognitive
science are obviously not comprehensive. Several other fields are also being
fundamentally affected by nanoscience, such as materials science, chemistry,
MEMS, and synthetic biology.
The focus of the conference was a deliberate effort to attract funding for
basic research projects.1 Nanotechnology was used as the centerpiece because its
perceived potential upside is almost limitless. The organizers also deliberately
emphasized the subtitle, Converging Technologies for Improving Human
Performance, believing that improved human performance would be attractive to
various stakeholders, including the federal government, which could provide the
necessary funding for basic research. The emphasis was also an attempt to
distance the field from the negative publicity that already surrounded the grey goo
nanobot scenario.2
Ray Kurzweil believes that this convergence, combined with ongoing medical
advances, will extend human life, perhaps indefinitely. Medical advances in the
diagnosis and treatment of diseases have already significantly extended the human
lifespan, but not to the degree that Kurzweil anticipates. The average American
male born in 1900 had a life expectancy of 48 years; born in 2000, he had a life
expectancy of 74 years, an increase of over 50%.2 It is no longer uncommon for
people to live well into their 90s. Kurzweil foresees a day in which worn-out parts
can simply be replaced and life continued indefinitely.
Kurzweil uses the term singularity rather than convergence. It is a
reference to the black hole of physics: the event horizon is the boundary beyond
which all matter will be pulled into the black hole and from which nothing can
escape. Since no information is available from beyond that boundary, we don't
know what happens inside a black hole. Ray Kurzweil's concept of the singularity is
similar: once the merger of human and machine intelligence is achieved, new
developments will occur so quickly, and in such unforeseen ways, that we cannot
see what the future will be from our side of the divide.
Ray Kurzweil is not a science fiction writer. He has a lifetime of scientific
achievements; in fact, Inc. magazine called him "the rightful heir to Thomas
Edison." Among his early developments was pioneering speech-recognition
software that allowed computers to understand the spoken word, work that helped
lay the foundation for Siri, Google Now, Alexa, and other computer-based
assistants. He is currently a Director of Engineering at Google, leading a team
developing machine intelligence and natural language understanding.
We don't have to accept Kurzweil's vision of the future (which has been
disputed by many and is counter-balanced by those who foresee a dystopian
outcome if artificial intelligence is achieved). However, we are currently making
strides in both artificial intelligence and the potential for improved human
intelligence, which may lead to some version of his cyborg combination of human
and machine intelligence.
Artificial Intelligence
Alan Turing proposed the Turing Test in 1950: if a person could hold a
natural-language conversation with a machine and was unable to determine
whether they were talking to a machine or another human, we would have
achieved artificial intelligence. In 1950! Computers existed only in what we would
consider a very primitive form, and he was already thinking about the possibility of
artificial intelligence rivaling human intelligence at some point.
Since then, there has been an ongoing debate about how smart machines
can be made. One supposedly impossible goal was achieved in 1997, when the IBM
computer Deep Blue defeated the then-reigning world chess champion, Garry
Kasparov, under tournament conditions. Once accomplished, the feat was
dismissed by skeptics as involving only a game with defined rules and a
(technically) limited number of moves, one particularly suited to massive
computational power.
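The skeptics' point, that a rule-bound game yields to raw computation, can be illustrated with a minimal game-tree search. This is a generic minimax sketch over a toy tree, not Deep Blue's actual (far more sophisticated) algorithm:

```python
# Minimal minimax search over a two-player game tree.
# A "state" is a nested list: leaves carry a score for the maximizing
# player; interior nodes list the states reachable in one move.

def minimax(state, maximizing):
    """Return the best achievable score from `state` with perfect play."""
    if isinstance(state, (int, float)):   # leaf: a terminal evaluation
        return state
    scores = [minimax(child, not maximizing) for child in state]
    return max(scores) if maximizing else min(scores)

# Toy tree: the maximizer moves first; the opponent then picks the
# worst leaf for us, so the best the maximizer can force is 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3
```

With defined rules and a finite move set, the entire question reduces to searching this tree faster and deeper, which is exactly where machines excel.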
Computers are far better than humans at computation and at maintaining
and retrieving accurate information. Recent studies of the brain suggest that our
memory behaves as if it had a single read/write head:5 any time we access a
memory, we may also modify it. For example, when we see a person we know, we
automatically update our memory of that person's image. That is why, when we
see someone every day, we may not notice changes that happen slowly over time,
yet can be shocked by the changed appearance of someone we haven't seen in
several years. It is also a reason why our memories can be notoriously unreliable.
On the other hand, humans are much better than computers at
understanding and communicating in natural language. We use a large number of
clues, including context, a common culture, visual input, facial expressions, known
speech patterns of the person we are talking to, and the emphasis placed on the
words to extract meaning from what is spoken (which, if interpreted literally, can
often be quite nonsensical). For a long time, programmers thought that
programming computers to understand natural language was impossible.
In 2011, Watson, Deep Blue's successor, defeated Ken Jennings and Brad
Rutter (two former champions) on Jeopardy!. The show is designed to test
contestants' ability to remember a large number of facts and to make logical, and
sometimes whimsical, connections. According to IBM, Watson was designed to
apply "advanced natural language processing, information retrieval, knowledge
representation, automated reasoning and machine learning technologies to the
field of open domain question answering."
Watson is now being used at Memorial Sloan Kettering Cancer Center to
provide feedback to doctors and nurses about the choice and implementation of
various courses of cancer treatment. IBM has also announced plans to invest
$100 million over a 10-year period to use Watson to help African countries address
development problems, beginning with healthcare and education.
Artificial intelligence does not need to be limited to a single, high-powered
computer. In a speech at a Google event in 2014, Kurzweil said that Google was
developing new search software capable of understanding text and providing a fully
reasoned response to a natural-language question, not just a list of potentially
helpful websites. Eighteen months later, glimpses of those systems are becoming
available. On your phone! Google may become the basis for our first interactions
with a computer system that feel like communicating with another human.
Between Watson and the natural language computer assistants currently
becoming available, it would seem that the Turing Test may be within reach.
It may not matter whether a new species (an artificial intelligence, a
self-conscious machine, a robot) is created or a hybrid between humans and
machines develops. If we reach a point where we can naturally communicate a
problem to a computer and rely on it to analyze the problem and give us a
reasoned solution, or to work side by side with us in reviewing the best course of
action, we will have greatly expanded our capabilities.
Communication with the Brain
In order to take advantage of developments in artificial intelligence to
improve the capabilities of humans, the computer-human interface must be
improved. Initially, we interfaced by speaking the computer's language, through
punch cards or a text-based interface such as DOS. The adoption of icons to
simplify a large number of common instructions was an improvement, but more
recently we have focused on getting computers to understand us in our language.
However, the interface is still extremely slow. What is needed is direct
communication with the brain. Is that possible?
In May 2012, MIT announced the implanting of a chip directly into a woman's
brain that permitted her to control a robotic arm. Neurological impulses were sent
to a computer that interpreted them to drive the robotic arm. This technology has
subsequently developed sufficiently to permit the direct control of prosthetic
devices. It is possible because the brain is electrical: both inputs and outputs are
information coded in electrical impulses.
In June 2014, Ohio State University Wexner Medical Center announced that it
had implanted a chip in the brain of a patient, giving him control over an arm that
had been paralyzed for four years. This was a major development: moving from
control of a mechanical device to control of the patient's own limb.
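The decoding step these systems rely on can be sketched, under heavy simplifying assumptions, as a linear map from neural firing rates to movement velocity, fit by least squares. The synthetic data below stands in for real recordings; actual clinical decoders (often Kalman-filter based) are far more elaborate:

```python
import numpy as np

# Hypothetical sketch of a linear neural decoder: firing rates from a
# population of neurons are mapped to a 2-D arm/cursor velocity.
rng = np.random.default_rng(0)

n_neurons, n_samples = 20, 500
true_weights = rng.normal(size=(n_neurons, 2))   # unknown tuning of each neuron

# Simulated spike counts and the velocities they (noisily) encode.
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder by ordinary least squares:
#   weights ≈ argmin_W || rates @ W - velocity ||
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

decoded = rates @ weights
error = np.abs(decoded - velocity).mean()
print(f"mean decoding error: {error:.3f}")
```

The same idea runs in reverse for sensory feedback: a measured quantity is encoded into stimulation patterns delivered to the cortex, closing the loop.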
These developments deal with getting information from the brain. What
about the other direction? Can we deliver information directly to the brain that can
be understood? On September 11, 2015 DARPA (Defense Advanced Research
Projects Agency) announced that it had successfully connected a prosthetic hand to
both the motor cortex and the sensory cortex of the brain, giving the person control
over a prosthetic device with real time sensory feedback. DARPA also has other
current projects designed to create closed-loop direct interface to the brain 6.
Clinical work has also been done in the area of restoring eyesight. Trials to
date have enabled blind people to see light sources and high-contrast edges.
Sheila Nirenberg of Cornell University, a recipient of a MacArthur Foundation
"genius grant," has developed algorithms that mirror the coded information that
the eyes send to the brain.7 The front-end of the visual system can be replaced
by photographic technology (which has been expanded to use more than the visual
spectrum to collect information) and, if we know the code, this information can be
communicated directly to the brain. She has successfully tested the system in
animals and is working on human applications.
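The idea of "image in, neural code out" can be sketched with a textbook linear-nonlinear-Poisson (LNP) model: filter the image the way a retinal cell would, rectify, and emit stochastic spikes. This is a generic illustration under simplifying assumptions, not Nirenberg's actual encoder; the filter shape and gain below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def center_surround_filter(size=9, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians filter, a classic model of a retinal cell's
    receptive field: excited by light at its center, inhibited around it."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

def encode(image_patch, gain=50.0):
    """Map a 9x9 image patch to a spike count: filter -> rectify -> Poisson."""
    drive = np.sum(center_surround_filter() * image_patch)
    rate = gain * max(drive, 0.0)   # rectifying nonlinearity
    return rng.poisson(rate)        # stochastic spike count

bright_spot = np.zeros((9, 9))
bright_spot[4, 4] = 1.0             # a point of light at the center

print(encode(bright_spot), encode(np.zeros((9, 9))))  # dark field gives 0 spikes
```

A prosthetic built on this principle would run such an encoder on camera input and deliver the resulting spike patterns to the surviving downstream cells, bypassing the damaged photoreceptors.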
All of these are first steps; they don't necessarily work well or smoothly in
all circumstances. However, we are developing ways to send information directly
to the brain and to use neurological impulses to get instructions directly from the
brain. How long will it be before we consider the interfaces we currently use with
computers insufferably slow and archaic? Or, alternatively, before we implant a
chip in a person's head, with memory and computational power directly accessible
by the brain?
Conclusion
As one author has put it, when does all of this change from "wow" to "yuck"?8
We already know some limits in other areas. People are leery of genetically
modified foods. While we are comfortable with gene therapies that address
gene-based diseases, are we ready for designer babies? Is it all right to clone
humans? Is it acceptable to use artificially grown replacement organs, even if
grown from harvested human cells? Is it better or worse to use organs harvested
from animals?
Nanotechnology presents its own issues. Will we accept implants or drugs
that increase our physical and mental abilities? If so, will we create a privileged
class that has access to that technology and is bigger, stronger, faster, and smarter
than those who do not? How do we treat the lesser, non-improved humans? Are
their rights not as important? Can extending the human lifespan be balanced
against people becoming jaded and losing interest in life? Will we really improve
human performance in fundamental ways? If so, will we continue to be human, or
become some combination of human and machine? Which advances are
improvements?