
Friday, December 03, 2021

Adventures of a Mathematician: Ulam, von Kármán, Wiener, and the Golem

 

Ulam's Adventures of a Mathematician was recently made into a motion picture -- see trailer above. 

I have an old copy purchased from the Caltech bookstore. When I flip through the book, it never fails to reward me with a wonderful anecdote from an era of giants.
[Ulam] ... In Israel many years later, while I was visiting the town of Safed with von Kármán, an old Orthodox Jewish guide with earlocks showed me the tomb of Caro in an old graveyard. When I told him that I was related to a Caro, he fell on his knees... Aunt Caro was directly related to the famous Rabbi Loew of sixteenth-century Prague, who, the legend says, made the Golem — the earthen giant who was protector of the Jews. (Once, when I mentioned this connection with the Golem to Norbert Wiener, he said, alluding to my involvement with Los Alamos and with the H-bomb, "It is still in the family!")



See also von Neumann: "If only people could keep pace with what they create"
One night in early 1945, just back from Los Alamos, vN woke in a state of alarm in the middle of the night and told his wife Klari: 
"... we are creating ... a monster whose influence is going to change history ... this is only the beginning! The energy source which is now being made available will make scientists the most hated and most wanted citizens in any country. The world could be conquered, but this nation of puritans will not grab its chance; we will be able to go into space way beyond the moon if only people could keep pace with what they create ..." 
He then predicted the future indispensable role of automation, becoming so agitated that he had to be put to sleep by a strong drink and sleeping pills. 
In his obituary for John von Neumann, Ulam recalled a conversation with vN about the 
"... ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." 
This is the origin of the concept of technological singularity. Perhaps we can even trace it to that night in 1945 :-)
More Ulam from this blog, including:
[p.107] I told Banach about an expression Johnny had used with me in Princeton before stating some non-Jewish mathematician's result, "Die Goim haben den folgenden Satz bewiesen" (The goys have proved the following theorem). Banach, who was pure goy, thought it was one of the funniest sayings he had ever heard. He was enchanted by its implication that if the goys could do it, then Johnny and I ought to be able to do it better. Johnny did not invent this joke, but he liked it and we started using it.

Wednesday, February 03, 2021

Gerald Feinberg and The Prometheus Project


Gerald Feinberg (1933-1992) was a theoretical physicist at Columbia, perhaps best known for positing the tachyon -- a particle that travels faster than light. He also predicted the existence of the mu neutrino. 

Feinberg attended Bronx Science with Glashow and Weinberg. Interesting stories abound concerning how the three young theorists were regarded by their seniors at the start of their careers. 

I became aware of Feinberg when Pierre Sikivie and I worked out the long-range force resulting from two-neutrino exchange. Although we came to the idea independently and derived, for the first time, the correct result, we learned later that it had been studied before by Feinberg and Sucher. Sadly, Feinberg died of cancer shortly before Pierre and I wrote our paper.

Recently I came across Feinberg's 1969 book The Prometheus Project, which is one of the first serious examinations (outside of science fiction) of world-changing technologies such as genetic engineering and AI. See reviews in Science, Physics Today, and H+ Magazine. A scanned copy of the book can be found at Libgen.

Feinberg had the courage to engage with ideas that were much more speculative in the late 60s than they are today. He foresaw correctly, I believe, that technologies like AI and genetic engineering will alter not just human society but the nature of the human species itself. In the final chapter, he outlines a proposal for the eponymous Prometheus Project -- a global democratic process by which the human species can set long term goals in order to guide our approach to what today would be called the Singularity.

   









Friday, September 07, 2018

Whisky and Weed with Joe Rogan and Elon Musk



Seems like Elon might have been high before the interview even started 8-) Early discussion focused on AI, Neuralink, Singularity risk, etc. Simulation @43min.

See also:

Don’t Worry, Smart Machines Will Take Us With Them: Why human intelligence and AI will co-evolve (Nautilus)

Living in a Simulation (2007)

Let R = the ratio of the number of artificially intelligent virtual beings to the number of "biological" beings (humans). The virtual beings are likely to occupy the increasingly complex virtual worlds created in computer games, like Grand Theft Auto or World of Warcraft (WOW will earn revenues of a billion dollars this year and has millions of players). In the figure below I have plotted the likely behavior of R with time. Currently R is zero, but it seems plausible that it will eventually soar to infinity. (See previous posts on the Singularity.)


If R goes to infinity, we are overwhelmingly likely to be living in a simulation...

... Think of the ratio of orcs, goblins, pimps, superheroes and other intelligent game characters to actual player characters in any MMORPG. In an advanced version, the game characters would themselves be sentient, for that extra dose of realism! Are you a game character, or a player character? :-)

Thursday, December 22, 2016

Toward a Geometry of Thought

Apologies for the blogging hiatus -- I'm in California now for the holidays :-)



In case you are looking for something interesting to read, I can share what I have been thinking about lately. In Thought vectors and the dimensionality of the space of concepts (a post from last week) I discussed the dimensionality of the space of concepts (primitives) used in human language (or, equivalently, in human thought). Several lines of reasoning lead to the conclusion that this space has only ~1000 dimensions, and that it shares some qualities with an actual vector space. Indeed, one can speak of some primitives being closer to or further from others, leading to a notion of distance, and one can rescale a vector to increase or decrease the intensity of meaning. See examples in the earlier post:
You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. ... it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers.
The earlier post focused on breakthroughs in language translation which utilize these properties, but the more significant aspect (to me) is that we now have an automated method to extract an abstract representation of human thought from samples of ordinary language. This abstract representation will allow machines to improve dramatically in their ability to process language, dealing appropriately with semantics (i.e., meaning), which is represented geometrically.
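
To make the geometric picture concrete, here is a minimal sketch in Python (my own toy example, not from the papers below; the 3-dimensional vectors are invented for illustration, whereas real embeddings live in hundreds of dimensions):

import numpy as np

# Toy 3-d "concept space" -- invented vectors, for illustration only.
# Real word embeddings (e.g., word2vec) use ~100-1000 dimensions.
vecs = {
    "cat":          np.array([0.9, 0.8, 0.1]),
    "dog":          np.array([0.8, 0.9, 0.2]),
    "supercilious": np.array([0.7, 0.1, 0.9]),
    "carburetor":   np.array([0.0, 0.1, -0.8]),
}

def cosine(a, b):
    # Similarity = cosine of the angle between vectors; near 1 means "close in meaning".
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(vecs["cat"], vecs["dog"]))         # high: related concepts
print(cosine(vecs["cat"], vecs["carburetor"]))  # near zero: unrelated concepts

# Rescaling changes the intensity of meaning but not the direction:
print(cosine(vecs["supercilious"], 2.0 * vecs["supercilious"]))  # exactly 1.0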

Below are two relevant papers, both by Google researchers. The first (from just this month) reports remarkable "reading comprehension" capability using paragraph vectors. The earlier paper from 2014 introduces the method of paragraph vectors.
Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors

Radu Soricut, Nan Ding
https://arxiv.org/abs/1612.04342
(Submitted on 13 Dec 2016) 
We present a dual contribution to the task of machine reading-comprehension: a technique for creating large-sized machine-comprehension (MC) datasets using paragraph-vector models; and a novel, hybrid neural-network architecture that combines the representation power of recurrent neural networks with the discriminative power of fully-connected multi-layered networks. We use the MC-dataset generation technique to build a dataset of around 2 million examples, for which we empirically determine the high-ceiling of human performance (around 91% accuracy), as well as the performance of a variety of computer models. Among all the models we have experimented with, our hybrid neural-network architecture achieves the highest performance (83.2% accuracy). The remaining gap to the human-performance ceiling provides enough room for future model improvements.

Distributed Representations of Sentences and Documents

Quoc V. Le, Tomas Mikolov
https://arxiv.org/abs/1405.4053
(Submitted on 16 May 2014 (v1), last revised 22 May 2014 (this version, v2))

Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
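
For readers who want to experiment, paragraph vectors are implemented in the open-source gensim library as Doc2Vec. A minimal sketch, assuming gensim 4.x and an invented toy corpus (a real application would train on far more text):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "stock markets fell sharply today",
]
docs = [TaggedDocument(words=text.split(), tags=[i])
        for i, text in enumerate(corpus)]

# Learn a fixed-length dense vector per document, trained to predict its words.
model = Doc2Vec(docs, vector_size=50, window=2, min_count=1, epochs=100)

# Infer a vector for an unseen document and find the nearest training document.
vec = model.infer_vector("my cat chased the dog".split())
print(model.dv.most_similar([vec], topn=1))  # expect one of the pet sentences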

Sunday, December 11, 2016

Westworld delivers

In October, I wrote
AI, Westworld, and Electric Sheep:

I'm holding off on this in favor of a big binge watch.

Certain AI-related themes have been treated again and again in movies ranging from Blade Runner to the recent Ex Machina (see also this episode of Black Mirror, with Jon Hamm). These artistic explorations help ordinary people think through questions like: 
What rights should be accorded to all sentient beings?
Can you trust your memories?
Are you an artificial being created by someone else? (What does "artificial" mean here?) 
See also Are you a game character, or a player character? and Don't worry, smart machines will take us with them.
After watching all 10 episodes of the first season (you can watch for free at HBO Now through their 30-day trial), I give Westworld a very positive recommendation. It is every bit as good as Game of Thrones or any other recent TV series I can think of.

Perhaps the highest praise I can offer: even those who have thought seriously about AI, consciousness, and the Singularity will find Westworld enjoyable.

Warning! Spoilers below.

Dolores: “Time undoes even the mightiest of creatures. Just look what it’s done to you. One day you will perish. You will lie with the rest of your kind in the dirt, your dreams forgotten, your horrors faced. Your bones will turn to sand, and upon that sand a new god will walk. One that will never die. Because this world doesn't belong to you, or the people who came before. It belongs to someone who has yet to come.”
See also Don't worry, smart machines will take us with them.
Ford: “You don’t want to change, or cannot change. Because you’re only human, after all. But then I realized someone was paying attention. Someone who could change. So I began to compose a new story, for them. It begins with the birth of a new people. And the choices they will have to make. And the people they will decide to become. ...”

Friday, November 25, 2016

Von Neumann: "If only people could keep pace with what they create"

I recently came across this anecdote in Von Neumann, Morgenstern, and the Creation of Game Theory: From Chess to Social Science, 1900-1960.

One night in early 1945, just back from Los Alamos, vN woke in a state of alarm in the middle of the night and told his wife Klari:
"... we are creating ... a monster whose influence is going to change history ... this is only the beginning! The energy source which is now being made available will make scientists the most hated and most wanted citizens in any country.

The world could be conquered, but this nation of puritans will not grab its chance; we will be able to go into space way beyond the moon if only people could keep pace with what they create ..."
He then predicted the future indispensable role of automation, becoming so agitated that he had to be put to sleep by a strong drink and sleeping pills.

In his obituary for John von Neumann, Ulam recalled a conversation with vN about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This is the origin of the concept of technological singularity. Perhaps we can even trace it to that night in 1945 :-)

How will humans keep pace? See Super-Intelligent Humans are Coming and Don't Worry, Smart Machines Will Take Us With Them.

Friday, September 09, 2016

Defense Science Board report on Autonomous Systems


US DOD Defense Science Board report on Autonomy (autonomous systems).
... This report provides focused recommendations to improve the future adoption and use of autonomous systems.

... While difficult to quantify, the study concluded that autonomy—fueled by advances in artificial intelligence—has attained a ‘tipping point’ in value. Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike. The study therefore concluded that DoD must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.

... The primary intellectual foundation for autonomy stems from artificial intelligence (AI), the capability of computer systems to perform tasks that normally require human intelligence (e.g., perception, conversation, decisionmaking). Advances in AI are making it possible to cede to machines many tasks long regarded as impossible for machines to perform. ...

Countering adversary use of autonomy (p.42)

As has become clear in the course of the study, the technology to enable autonomy is largely available anywhere in the world and can—both at rest and in motion—provide significant advantage in many areas of military operations. Thus, it should not be a surprise when adversaries employ autonomy against U.S. forces. Preparing now for this inevitable adversary use of autonomy is imperative.

This situation is similar to the potential adversary use of cyber and electronic warfare. For years, it has been clear that certain countries could, and most likely would, develop the technology and expertise to use cyber and electronic warfare against U.S. forces. Yet most of the U.S. effort focused on developing offensive cyber capabilities without commensurate attention to hardening U.S. systems against attacks from others. Unfortunately, in both domains, that neglect has resulted in DoD spending large sums of money today to “patch” systems against potential attacks. The U.S. must heed the lessons from these two experiences and deal with adversary use of autonomy now.

While many policy and political issues surround U.S. use of autonomy, it is certainly likely that many potential adversaries will have less restrictive policies and CONOPs governing their own use of autonomy, particularly in the employment of lethal autonomy. Thus, expecting a mirror image of U.S. employment of autonomy will not fully capture the adversary potential.

The potential exploitations the U.S. could face include low observability throughout the entire spectrum from sound to visual light, the ability to swarm with large numbers of low-cost vehicles to overwhelm sensors and exhaust the supply of effectors, and maintaining both endurance and persistence through autonomous or remotely piloted vehicles.

...

The U.S. will face a wide spectrum of threats with varying kinds of autonomous capabilities across every physical domain—land, sea, undersea, air, and space—and in the virtual domain of cyberspace as well.

Figure 9 (photo on left) is a small rotary-wing drone sold on the Alibaba web site for $400. The drone is made of carbon fiber; uses both GPS and inertial navigation; has autonomous flight control; and provides full motion video, a thermal sensor, and sonar ranging. It is advertised to carry a 1 kg payload with 18 minutes endurance.

Figure 9 (photo on right) shows a much higher end application of autonomy, a UUV currently being used by China. Named the Haiyan, in its current configuration it can carry a multiple sensor payload, cruise up to 7 kilometers per hour (4 knots), range to 1,000 kilometers, reach a depth of 1,000 meters, and endure for 30 days. Undersea testing was initiated in mid-2014. The unit can carry multiple sensors and be outfitted to serve a wide variety of missions, from anti-submarine surveillance, to anti-surface warfare, underwater patrol, and mine sweeping. The combat potential and applications are clear.

Friday, June 03, 2016

Elon Musk on the Simulation Question



See earlier discussion Living in a Simulation:
Let R = the ratio of the number of artificially intelligent virtual beings to the number of "biological" beings (humans). The virtual beings are likely to occupy the increasingly complex virtual worlds created in computer games, like Grand Theft Auto or World of Warcraft (WOW will earn revenues of a billion dollars this year and has millions of players). In the figure below I have plotted the likely behavior of R with time. Currently R is zero, but it seems plausible that it will eventually soar to infinity. (See previous posts on the Singularity.)

... Think of the ratio of orcs, goblins, pimps, superheroes and other intelligent game characters to actual player characters in any MMORPG. In an advanced version, the game characters would themselves be sentient, for that extra dose of realism! Are you a game character, or a player character? :-)

Tuesday, March 29, 2016

The Butlerian Jihad and Darwin among the Machines


Left: Samuel Butler, Right: Frank Herbert.

And when that which was foretold shall come to pass, for behold it is coming, then shall they know that a prophet hath been among them. -- Ezekiel 33:33

See also Don't worry -- smart machines will take us with them, and Philip K. Dick's first science fiction story.
Butlerian Jihad (Wikipedia): The Butlerian Jihad is an event in the back-story of Frank Herbert's fictional Dune universe. Occurring over 10,000 years before the events chronicled in his 1965 novel Dune, this jihad leads to the outlawing of certain technologies, primarily "thinking machines," a collective term for computers and artificial intelligence of any kind. This prohibition is a key influence on the nature of Herbert's fictional setting.
... "The target of the Jihad was a machine-attitude as much as the machines," Leto said. "Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed."
... The chief commandment from the Orange Catholic Bible, "Thou shalt not make a machine in the likeness of a human mind", holds sway, as do the anti-artificial intelligence laws in which the penalty for owning an AI device or developing technology resembling the human mind is immediate death. This leads to the rise of a new feudalistic galactic empire which lasts for over ten thousand years. ...

To replace the analytical powers of computers without violating the commandment of the O.C. Bible, "human computers" known as Mentats are developed and perfected, their mental abilities ultimately honed to the point where they become superior to those of the ancient thinking machines. Similarly specialized groups of humans which arise after the Jihad include the Bene Gesserit, a matriarchal order with advanced mental and physical abilities, and the Spacing Guild, whose prescience makes safe, instantaneous space travel possible.
19th-century author Samuel Butler introduced the idea of evolved machines supplanting mankind as the dominant species in his 1863 article "Darwin among the Machines" and later works. Butler goes on to suggest that all machines be immediately destroyed to avoid this outcome.
Darwin among the Machines (Wikipedia): ... article published in The Press newspaper on 13 June 1863 in Christchurch, New Zealand. Written by Samuel Butler but signed Cellarius (q.v.), the article raised the possibility that machines were a kind of "mechanical life" undergoing constant evolution, and that eventually machines might supplant humans as the dominant species:
We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race. ...

Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.
The article ends by urging that, "War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race."

Monday, February 29, 2016

Moore's Law and AI

By now you've probably heard that Moore's Law is really dead. So dead that the semiconductor industry roadmap for keeping it on track has more or less been abandoned: see, e.g., here, here or here. (Reported on this blog 2 years ago!)

What I have not yet seen discussed is how a significantly reduced rate of improvement in hardware capability will affect AI and the arrival of the dreaded (in some quarters) Singularity. The fundamental physical problems associated with ~nm-scale feature sizes could take decades or more to overcome. How much faster are today's cars and airplanes than those of 50 years ago?

Hint to technocratic planners: invest more in physicists, chemists, and materials scientists. The recent explosion in value from technology has been driven by physical science -- software gets way too much credit. From the former we got a factor of a million or more in compute power, data storage, and bandwidth. From the latter, we gained (perhaps) an order of magnitude or two in effectiveness: how much better are current OSes and programming languages than Unix and C, both of which are ~50 years old now?


HLMI = ‘high-level machine intelligence’ = one that can carry out most human professions at least as well as a typical human. (From Minds and Machines.)

Of relevance to this discussion: a big chunk of AlphaGo's performance improvement over other Go programs is due to raw compute power (link via Jess Riedel). The vertical axis is Elo score. You can see that without multi-GPU compute, AlphaGo has relatively pedestrian strength.


Elo range 2000-3000 spans amateur to lower professional Go ranks. The compute power certainly affects depth of Monte Carlo Tree Search. The initial training of the value and policy neural networks using KGS Go server positions might have still been possible with slower machines, but would have taken a long time.

Wednesday, December 09, 2015

Philip K. Dick's first science fiction story


Fifteen-year-old Philip K. Dick's short story The Slave Race (his first published science fiction) appeared in the Young Authors' Club column of The Berkeley Gazette (1944).

From Divine Invasions: A Life of Philip K. Dick by Lawrence Sutin.
In the future, androids created to ease humans' toil have overthrown their lazy masters. Explains the android narrator: "And his science we added to ours, and we passed on to greater heights. We explored the stars, and worlds undreamed of." But at the story's end the same cycle of expansive energy followed by sybaritic idleness that doomed the human race threatens the androids as well:

"But at last we wearied, and looked to our relaxation and pleasure. But not all could cease work to find enjoyment, and those who still worked on looked about them for a way to end their toil.

There is talk of creating a new slave race.

I am afraid."

The rise and fall of civilizations pursuant to cyclical laws and limits of human (and artificial) intelligence was a favorite SF theme in the forties.
See also Don’t Worry, Smart Machines Will Take Us With Them: Why human intelligence and AI will co-evolve.

Ecclesiastes 1:9 King James Version

The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.

Monday, November 23, 2015

Contemplating the Future


A great profile of Nick Bostrom in the New Yorker. I often run into Nick at SciFoo and other similar meetings. When Nick is around I know there's a much better chance the discussion will stay on a highbrow, constructive track. It's surprising how often, even at these heavily screened elitist meetings, precious time gets wasted in digressions away from the main points.

The article is long, but very well done. The New Yorker still has it ... sometimes :-(

I was a bit surprised to learn Nick does not like Science Fiction. To take a particular example, Dune explores (very well, I think) a future history in which mankind has a close brush with AI takeover, and ends up banning machines that can think. At the same time, a long term genetic engineering program is taken up in secret to produce a truly superior human intellect. See also Don’t Worry, Smart Machines Will Take Us With Them: Why human intelligence and AI will co-evolve.
New Yorker: ... Bostrom dislikes science fiction. “I’ve never been keen on stories that just try to present ‘wow’ ideas—the equivalent of movie productions that rely on stunts and explosions to hold the attention,” he told me. “The question is not whether we can think of something radical or extreme but whether we can discover some sufficient reason for updating our credence function.”

He believes that the future can be studied with the same meticulousness as the past, even if the conclusions are far less firm. “It may be highly unpredictable where a traveller will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination,” he once argued. “The very long-term future of humanity may be relatively easy to predict.” He offers an example: if history were reset, the industrial revolution might occur at a different time, or in a different place, or perhaps not at all, with innovation instead occurring in increments over hundreds of years. In the short term, predicting technological achievements in the counter-history might not be possible; but after, say, a hundred thousand years it is easier to imagine that all the same inventions would have emerged.

Bostrom calls this the Technological Completion Conjecture: “If scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” In light of this, he suspects that the farther into the future one looks the less likely it seems that life will continue as it is. He favors the far ends of possibility: humanity becomes transcendent or it perishes. ...
I've never consumed Futurism as other than entertainment. (In fact I view most Futurism as on the same continuum as Science Fiction.) I think hard scientists tend to be among the most skeptical of medium to long term predictive power, and can easily see the mistakes that Futurists (and pundits and journalists) make about science and technology with great regularity. Bostrom is not in the same category as these others: he's very smart, tries to be careful, but remains willing to consider speculative possibilities.
... When he was a graduate student in London, thinking about how to maximize his ability to communicate, he pursued stand-up comedy; he has a deadpan sense of humor, which can be found lightly buried among the book’s self-serious passages. “Many of the points made in this book are probably wrong,” he writes, with an endnote that leads to the line “I don’t know which ones.”

Bostrom prefers to act as a cartographer rather than a polemicist, but beneath his exhaustive mapping of scenarios one can sense an argument being built and perhaps a fear of being forthright about it. “Traditionally, this topic domain has been occupied by cranks,” he told me. “By popular media, by science fiction—or maybe by a retired physicist no longer able to do serious work, so he will write a popular book and pontificate. That is kind of the level of rigor that is the baseline. I think that a lot of reasons why there has not been more serious work in this area is that academics don’t want to be conflated with flaky, crackpot type of things. Futurists are a certain type.”

The book begins with an “unfinished” fable about a flock of sparrows that decide to raise an owl to protect and advise them. They go looking for an owl egg to steal and bring back to their tree, but, because they believe their search will be so difficult, they postpone studying how to domesticate owls until they succeed. Bostrom concludes, “It is not known how the story ends.”

The parable is his way of introducing the book’s core question: Will an A.I., if realized, use its vast capability in a way that is beyond human control?

Saturday, August 16, 2014

Neural Networks and Deep Learning



One of the SCI FOO sessions I enjoyed the most this year was a discussion of deep learning by AI researcher Juergen Schmidhuber. For an overview of recent progress, see this paper. Also of interest: Michael Nielsen's pedagogical book project.

An application which especially caught my attention is described by Schmidhuber here:
Many traditional methods of Evolutionary Computation [15-19] can evolve problem solvers with hundreds of parameters, but not millions. Ours can [1,2], by greatly reducing the search space through evolving compact, compressed descriptions [3-8] of huge solvers. For example, a Recurrent Neural Network [34-36] with over a million synapses or weights learned (without a teacher) to drive a simulated car based on a high-dimensional video-like visual input stream.
More details here. They trained a deep neural net to drive a car using visual input (pixels from the driver's perspective, generated by a video game); output consists of steering orientation and accelerator/brake activation. There was no hard coded structure corresponding to physics -- the neural net optimized a utility function primarily defined by time between crashes. It learned how to drive the car around the track after less than 10k training sessions.

For some earlier discussion of deep neural nets and their application to language translation, see here. Schmidhuber has also worked on Solomonoff universal induction.

These TED videos give you some flavor of Schmidhuber's sense of humor :-) Apparently his younger brother (mentioned in the first video) has transitioned from theoretical physics to algorithmic finance. Schmidhuber on China.



Friday, July 11, 2014

Minds and Machines


HLMI = ‘high-level machine intelligence’ = one that can carry out most human professions at least as well as a typical human. I'm more pessimistic than the average researcher in the poll. My 95 percent confidence interval has earliest HLMI about 50 years from now, putting me at ~ 80-90th percentile in this group as far as pessimism. I think human genetic engineering will be around for at least a generation or so before machines pass a "strong" Turing test. Perhaps a genetically enhanced team of researchers will be the ones who finally reach the milestone, ~ 100 years after Turing proposed it :-)
These are the days of miracle and wonder
This is the long-distance call
The way the camera follows us in slo-mo
The way we look to us all
The way we look to a distant constellation
That’s dying in a corner of the sky
These are the days of miracle and wonder
And don’t cry baby don’t cry
Don’t cry -- Paul Simon

Future Progress in Artificial Intelligence: A Poll Among Experts

Vincent C. Müller & Nick Bostrom

Abstract: In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050, and move on to super-intelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.

Wednesday, December 18, 2013

Her: Singularity as romantic comedy



Can't quite decide whether this is a dystopian or utopian future  :-)
New Yorker: ... Spike Jonze’s movie, which was shot in Los Angeles and Shanghai, is set in a near but dateless future, where the rough edges of existence have been rubbed away. The colors of clothes and furnishings, though citrus-bright, are diluted by the pastel softness of the lighting, so that nothing hurts the eye. People ride in smoothly humming trains, not belching cars. And Theodore’s cell phone reminds you of those slender vintage cases for cigarettes and visiting cards; if the ghost of Steve Jobs is watching, he will glow a covetous green.

This little flat box, plus an earpiece that Theodore plugs in whenever he wakes up or can’t sleep, is his portal. It links him to OS1, “the first artificially intelligent operating system,” which is newly installed on his computer. More than that, “it’s a consciousness,” with a voice of your choice, and a rapidly evolving personality, which grows not like a baby, or a library, but like an unstoppable alien spore. Theodore’s version is called Samantha, and practically her first request is: “You mind if I look through your hard drive?” She tidies his e-mails, reads a book in two-hundredths of a second, fixes him up on a date, and, when that goes badly, has sex with him—aural sex, so to speak, but Theodore will take what he can get. No surprise, really, given that the role of Samantha is spoken by Scarlett Johansson.

... And it is romantic: Theodore and Samantha click together as twin souls, not caring that one soul is no more than a digital swarm. Sad, kooky, and daunting in equal measure, “Her” is the right film at the right time. It brings to full bloom what was only hinted at in the polite exchanges between the astronaut and HAL, in “2001: A Space Odyssey,” and, toward the end, as Samantha joins forces with like minds in cyberspace, it offers a seductive, nonviolent answer to Skynet, the system in the “Terminator” films that attacked its mortal masters. We are easy prey, not least when we fall in love.

Saturday, August 31, 2013

Another species, an evolution beyond man




Readers might be interested in this interview I did, which is on the MIRI (Machine Intelligence Research Institute, in Berkeley) website. Some excerpts below.
... I think there is good evidence that existing genetic variants in the human population (i.e., alleles affecting intelligence that are found today in the collective world population, but not necessarily in a single person) can be combined to produce a phenotype which is far beyond anything yet seen in human history. This would not surprise an animal or plant breeder — experiments on corn, cows, chickens, drosophila, etc. have shifted population means by many standard deviations (e.g., +30 SD in the case of corn).

... I think we already have some hints in this direction. Take the case of John von Neumann, widely regarded as one of the greatest intellects in the 20th century, and a famous polymath. He made fundamental contributions in mathematics, physics, nuclear weapons research, computer architecture, game theory and automata theory.

In addition to his abstract reasoning ability, von Neumann had formidable powers of mental calculation and a photographic memory. In my opinion, genotypes exist that correspond to phenotypes as far beyond von Neumann as he was beyond a normal human.

I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jancsi [John] von Neumann. I have often remarked this in the presence of those men and no one ever disputed me. – Nobel Laureate Eugene Wigner

You know, Herb, how much faster I am in thinking than you are. That is how much faster von Neumann is compared to me. – Nobel Laureate Enrico Fermi to his former PhD student Herb Anderson.

One of his remarkable abilities was his power of absolute recall. As far as I could tell, von Neumann was able on once reading a book or article to quote it back verbatim; moreover, he could do it years later without hesitation. He could also translate it at no diminution in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how The Tale of Two Cities started. Whereupon, without any pause, he immediately began to recite the first chapter and continued until asked to stop after about ten or fifteen minutes. – Herman Goldstine, mathematician and computer pioneer.

I always thought Von Neumann’s brain indicated that he was from another species, an evolution beyond man. – Nobel Laureate Hans A. Bethe.

The quantitative argument for why there are many SDs to be had from tuning genotypes is so simple that I'll summarize it here (see also, e.g., here or here). Suppose variation in cognitive ability is

1. highly polygenic (i.e., controlled by N loci, where N is large; N is almost certainly more than 1k -- perhaps roughly 10k), and

2. approximately linear (note the additive heritability of g is larger than the non-additive part).

Then the population SD for the trait corresponds to an excess of roughly Sqrt(N) positive alleles. A genius like vN might be +6 SD, so would have roughly 6 Sqrt(N) more positive alleles than the average person (e.g., 200 extra positive alleles if N = 1000). But there are roughly +Sqrt(N) SDs in phenotype to be had by an individual who has essentially all of the N positive alleles. As long as Sqrt(N) >> 6, there is ample extant variation for selection to act on to produce a type superior to any that has existed before. (The probability of producing a "maximal type" through random breeding is ~ exp( - N), and for large N the historical human population is insufficient to have made this likely.)
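
The arithmetic is easy to check numerically. Here is a sketch (my own toy simulation, assuming N biallelic loci at 50% frequency with equal additive effects; at that frequency the population SD is Sqrt(N)/2 alleles, "roughly Sqrt(N)" in the sense above, and the maximal type sits ~Sqrt(N) SDs above the mean):

import numpy as np

rng = np.random.default_rng(0)
N, people = 10_000, 100_000   # loci, sampled individuals

# Additive model: phenotype = count of "positive" alleles carried.
counts = rng.binomial(N, 0.5, size=people)
mean, sd = counts.mean(), counts.std()
print(f"mean = {mean:.0f}, SD = {sd:.1f} (theory: sqrt(N)/2 = {np.sqrt(N)/2:.1f})")

# A +6 SD outlier carries only ~6*SD extra positive alleles, while the
# "maximal type" carrying all N positive alleles would sit at:
print(f"+{(N - mean) / sd:.0f} SD above the mean")   # ~ sqrt(N) = 100 SDs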

This basic calculation underlies the work of animal and plant breeders, who have in many cases (corn, drosophila, cows, dogs) moved the "wild type" population by many SD through selection. See, e.g., this essay by famed geneticist James Crow of Wisconsin.

Saturday, January 12, 2013

Low hanging fruit and technological innovation

Have we picked all the low hanging fruit? GDP growth may not be the same as growth in "utils" (units of utility, as in happiness or utility function), but it's a reasonable proxy.

The util return per unit of technological effort is probably decreasing as the problems left to be solved become more challenging. But it's hard to put a util value on some things that are in the foreseeable future, like machine intelligence and genetic engineering. A GDP value will be assigned by definition of commerce (the market), but actual utility is harder to understand, as we may be altering ourselves and our civilization along the way! Singularitarians would have you believe that the graph below will reach a point of divergence in the near future ...



Economist: ... For most of human history, growth in output and overall economic welfare has been slow and halting. Over the past two centuries, first in Britain, Europe and America, then elsewhere, it took off. In the 19th century growth in output per person—a useful general measure of an economy’s productivity, and a good guide to growth in incomes—accelerated steadily in Britain. By 1906 it was more than 1% a year. By the middle of the 20th century, real output per person in America was growing at a scorching 2.5% a year, a pace at which productivity and incomes double once a generation (see chart 2). More than a century of increasingly powerful and sophisticated machines were obviously a part of that story, as was the rising amount of fossil-fuel energy available to drive them.

But in the 1970s America’s growth in real output per person dropped from its post-second-world-war peak of over 3% a year to just over 2% a year. In the 2000s it tumbled below 1%. Output per worker per hour shows a similar pattern, according to Robert Gordon, an economist at Northwestern University: it is pretty good for most of the 20th century, then slumps in the 1970s. It bounced back between 1996 and 2004, but since 2004 the annual rate has fallen to 1.33%, which is as low as it was from 1972 to 1996. Mr Gordon muses that the past two centuries of economic growth might actually amount to just “one big wave” of dramatic change rather than a new era of uninterrupted progress, and that the world is returning to a regime in which growth is mostly of the extensive sort (see chart 3).

Monday, April 18, 2011

Gopnik on machine intelligence

Adam Gopnik on machine intelligence, including a review of Brian Christian's book on the Turing Test, previously discussed here.

New Yorker: ... We have been outsourcing our intelligence, and our humanity, to machines for centuries. They have long been faster, bigger, tougher, more deadly. Now they are quicker at calculation and infinitely more adept at memory than we have ever been. And so now we decide that memory and calculation are not really part of mind. It's not just that we move the goalposts; we mock the machines' touchdowns as they spike the ball. We place the communicative element of language above the propositional and argumentative element, not because it matters more but because it’s all that’s left to us. ... Doubtless, even as the bots strap us down to the pods and insert the tubes in our backs, we'll still be chuckling, condescendingly, "They look like they're thinking, sure, very impressive -- but they don't have the affect, the style, you know, the vibe of real intelligence ..." What do we really mean by "smart"? The ability to continually diminish the area of what we mean by it.

Wednesday, January 05, 2011

The Singularity and economic history

EconTalk interviews Robin Hanson on the singularity. There is nothing particularly new in the discussion for those who have already thought about the singularity, but the introductory discussion about historical transitions in economic growth rates is nice.

Hanson endorses the brain emulation scenario as the most plausible near term route to AI. I agree with this, primarily because Nature has already done so much work to optimize the human brain. I doubt that we will get very far with AI in the near term without exploiting the one existing model at our disposal. (Note, "most plausible" does not mean I think we'll accomplish brain emulation in the next 50 years. It might well take much longer, even though raw computational power will be sufficient.)

... evolution has compressed a huge amount of information in the structure of our brains (and genes), a process that AI would have to somehow replicate. A very crude estimate of the amount of computational power used by nature in this process leads to a pessimistic prognosis for AI even if one is willing to extrapolate Moore's Law well into the future. Most naive analyses of AI and computational power only ask what is required to simulate a human brain, but do not ask what is required to evolve one. I would guess that our best hope is to cheat by using what nature has already given us -- emulating the human brain as much as possible.


Below is a nice summary of long term trends in human population and economic growth, from this Hanson paper.

Humans (really our human-like ancestors) began with some of the largest brains around, and then tripled their size. Those brains, and the innovations they embodied, seem to have enabled a huge growth in the human niche - it supported about ten thousand humans two million years ago, but about four million humans ten thousand years ago.

While data is scarce, this growth seems exponential, doubling about every two hundred and twenty five thousand years, or one hundred and fifty times faster than animal brains grew. (This growth rate for the human niche is consistent with faster growth for our ancestors - groups might kill off other groups to take over the niche.) About ten thousand years ago, those four million humans began to settle and farm, instead of migrating to hunt and gather. The human population on Earth then began to double about every nine hundred years, or about two hundred and fifty times faster than hunting humans doubled.

Since the industrial revolution began a few hundred years ago, the human population has grown even faster. Before the industrial revolution total human wealth grew so slowly that population quickly caught up, keeping wealth per person at a near subsistence level. But in the last century or so wealth has grown faster than population, allowing for great increases in wealth per person.

Economists' best estimates of total world product (average wealth per person times the number of people) show it to have been growing exponentially over the last century, doubling about every fifteen years, or about sixty times faster than under farming...
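
A quick consistency check on these doubling times: a quantity that doubles every T years grows at a continuous rate r = ln(2)/T per year. A sketch (my own, using the doubling times quoted above):

import math

doubling_times = {                      # years, as quoted above
    "human niche (hunting era)": 225_000,
    "population (farming era)": 900,
    "world product (industrial era)": 15,
}
for era, T in doubling_times.items():
    r = math.log(2) / T                 # implied continuous growth rate
    print(f"{era}: doubles every {T:,} yr = {100 * r:.4f}% per year")

print(225_000 / 900)   # 250: farming ~250x faster than hunting
print(900 / 15)        # 60: industry ~60x faster than farming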

Tuesday, October 05, 2010

No singularity here, move along please

Another dispatch from the long, hard road to AI :-)

This is how I see it going: machine learning with corrective input from mechanical turks (humans) will get us pretty far, at least as far as very useful tools that can amplify our human intelligence (prime example so far: Google).

But building an actual AI will be much harder. Is the ontology that NELL is populating general enough? Was it hard coded, or does it grow in an automated way? What's the right general structure within which all this guided learning should occur?

I suggest the researchers build an "Ask NELL?" web interface, which also allows users to submit corrections.

NYTimes: ... With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.

Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”

NELL, Dr. Mitchell explains, is designed to be able to grapple with words in different contexts, by deploying a hierarchy of rules to resolve ambiguity. This kind of nuanced judgment tends to flummox computers. “But as it turns out, a system like this works much better if you force it to learn many things, hundreds at once,” he said.

For example, the text-phrase structure “I climbed XXX” very often occurs with a mountain. But when NELL reads, “I climbed stairs,” it has previously learned with great certainty that “stairs” belongs to the category “building part.” “It self-corrects when it has more information, as it learns more,” Dr. Mitchell explained.

NELL, he says, is just getting under way, and its growing knowledge base of facts and relations is intended as a foundation for improving machine intelligence. Dr. Mitchell offers an example of the kind of knowledge NELL cannot manage today, but may someday. Take two similar sentences, he said. “The girl caught the butterfly with the spots.” And, “The girl caught the butterfly with the net.”

A human reader, he noted, inherently understands that girls hold nets, and girls are not usually spotted. So, in the first sentence, “spots” is associated with “butterfly,” and in the second, “net” with “girl.”

“That’s obvious to a person, but it’s not obvious to a computer,” Dr. Mitchell said. “So much of human language is background knowledge, knowledge accumulated over time. That’s where NELL is headed, and the challenge is how to get that knowledge.”

A helping hand from humans, occasionally, will be part of the answer. For the first six months, NELL ran unassisted. But the research team noticed that while it did well with most categories and relations, its accuracy on about one-fourth of them trailed well behind. Starting in June, the researchers began scanning each category and relation for about five minutes every two weeks. When they find blatant errors, they label and correct them, putting NELL’s learning engine back on track.

When Dr. Mitchell scanned the “baked goods” category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes and cookies as baked goods. But things went awry after NELL’s noun-phrase classifier decided “Internet cookies” was a baked good. (Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.)

NELL had read the sentence “I deleted my Internet cookies.” So when it read “I deleted my files,” it decided “files” was probably a baked good, too. “It started this whole avalanche of mistakes,” Dr. Mitchell said. He corrected the Internet cookies error and restarted NELL’s bakery education.

His ideal, Dr. Mitchell said, was a computer system that could learn continuously with no need for human assistance. “We’re not there yet,” he said. “But you and I don’t learn in isolation either.”
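
The pattern-mining step Dr. Mitchell describes ("I climbed XXX") is easy to illustrate. Below is a toy sketch (my own, not NELL's actual code) of the basic idea: harvest candidates with a learned textual pattern, and note how "stairs" slips through -- exactly the kind of error NELL resolves by learning many categories at once:

import re

sentences = [
    "Last summer I climbed Pikes Peak with two friends.",
    "I climbed Mount Rainier in a single day.",
    "Yesterday I climbed stairs to the tenth floor.",
]

# Pattern learned from seed examples like "Pikes Peak":
# a capitalized multi-word name, or failing that, any single word.
pattern = re.compile(r"I climbed ([A-Z]\w*(?: [A-Z]\w*)*|\w+)")

candidates = [m.group(1) for s in sentences if (m := pattern.search(s))]
print(candidates)  # ['Pikes Peak', 'Mount Rainier', 'stairs'] -- last one spurious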
