Tuesday, October 6, 2009

Reflections and Inspirations

This post is the conclusion of the Singularity Summit series, which began with this post.

I enjoyed and was impressed by several of the presentations.  I'll focus on those and say what research or actions they inspire me to do.  Overall, I'm really glad I went to the conference.  It was energizing and interesting.

Michael Nielsen's Mass Collaboration in Science makes me think there are some fun activities one could do with little commitment while still contributing to a real scientific result.  Galaxy Zoo might be fun if you want to pass the time with visual pattern recognition.  I'd like to find out what the Polymath project is, since it's got a cool name and I like math.  I wonder what else is out there?

Gregory Benford was a good advertisement for the products of his company, Genescient.  He said he had more energy, and indeed he was energetic.  He looks pretty good for 68.  But mostly I liked that he said he wanted to live longer, but wasn't willing to do the calorie restriction thing.  At least we agree on that point.  His company's products don't treat disease, so they will be marketed as nutritional supplements.  Hopefully they will be affordable.  It would be nice to take a pill and reverse aging.  Better than taking Kurzweil's 250 pills a day, too.

I keep thinking how compelling Jurgen Schmidhuber's simple theory of beauty is.  Linking pleasure and curiosity to the ability to further compress data is cool in a nerdy way, and is similar in its simplicity to my theory of humor.  (Jokes set up an expectation in our heads that causes us to briefly interpret the punchline one way, but then we internally switch to another interpretation as we "get" the joke, and this sudden switch is pleasurable.)  Schmidhuber's theory about converging history may also help explain why time seems to speed up as we age (and why the Singularity may be late).


I'd like to be able to see the wonders a Singularity could bring.  If I really want that, then I should be trying to live longer, and working to help bring about the Singularity in a very safe way.

As for how I can do this: I really like the casually intelligent way Ben Goertzel talks, and I'd like to work with him.  His company, Novamente, is local to Boston, but I don't think I'm qualified to work there.  I may want to try contributing to his open source AI project OpenCog/OpenCogPrime.  This could help me get a job, help bring the Singularity, and let me institute some safety measures in this project, just in case it succeeds.

I'd also like to start working to live longer, but without the deprivation of calorie restriction.  I'd like to get Kurzweil's new book, Transcend, since it discusses several longevity strategies.

Sunday, October 4, 2009

Sunday morning speakers

Gary Wolf: “Petaflop macroscope” is a terrible, nerdy name for what turns out to be mostly iPhone apps to collect personal data.  Besides collecting data on yourself, you can help do collaborative science.  QuantifiedSelf.org.

Michael Nielsen: Mass Collaboration in Science.  Successful examples of mass collaboration: Linux, Wikipedia.  The Galaxy Zoo project allows anyone to be an astronomer by classifying galaxies by type in difficult-to-read photographs.  Anyone can be a scientist mining bioinformatic data.  Polymath project.  The most successful projects support traditional expert activities like publishing papers.  Projects that don't support these career-building activities are seen as a waste of time by experts.

Gregory Benford, UC-Irvine, physicist & science fiction author.  His company Genescient is about longevity.  They use artificial selection of fruit flies as a supercomputer to determine which genes most affect aging.  This information can then be used in humans, since we have many similar genes.  These genes affect various chemical pathways, and we can manipulate these pathways with drugs.  The company has already found substances that allow normal (not longevity-selected) animals to enjoy the same benefits of long, healthy life, and even reverse aging.

Brad Templeton, EFF (Electronic Frontier Foundation).  “The Finger of AI: Automated Electrical Vehicles and Oil Independence”.  www.robocars.net.  Human drivers suck.  6 million accidents a year in the USA, 1.8 million with injuries.  Driving doesn't require human-level intelligence, but rather horse-level, or even locust-level, intelligence.  (Locusts are able to move in large swarms without hitting each other or obstacles.)

The X-prize foundation produces innovation competitions, and Templeton proposes a competition where NASCAR drivers compete with robocars to see who is the safest in avoiding (fake) pedestrians. 

Is this Brad Templeton the same one who moderated rec.humor.funny?

Ray Kurzweil himself

Ray Kurzweil himself closed the first day of the conference.  He didn’t really provide any new information, instead commenting on the things he had heard that day from the other presenters.  He did this very well, with insight and humor.  Of course he was warmly welcomed by the crowd, who had waited until 6:30 in the evening to hear him.  I’m glad I stuck it out.  His speaking manner gave me more confidence in his ideas, since he was able to very casually ad-lib about some complex topics in a way that showed he understood them all, including their interrelations.

William Dickens

William Dickens of Northeastern University is somewhat of a rarity at this conference: an economist.  However, he only talked about classical economics for the last five seconds of his talk.   The rest was about the surprising observation that IQ scores have been going up around the world for about the past two decades.  There is some debate about what these tests measure, but the tests and sub-tests most closely correlated with on-the-fly solving of novel problems are the ones where scores have increased most dramatically.  Long story short, he believes that the ability of environment (as opposed to genetics) to influence intelligence has been historically underestimated.  These modern times demand more intelligence of us, so we encourage these abilities with training and practice.  This multiplies any built-in abilities we may have had, and is producing real gains.  Recognizing that this is happening, and working well, can lead us to more effectively and proactively enhance our intelligence through social environmental means.

Ed Boyden

Ed Boyden from MIT talked about “synthetic neurobiology”, which is a fancy way of saying we’re going to put electronics in our brains.  Or we already do if we’re one of the hundreds of thousands of people living with an artificial neural implant for treatment of deafness, blindness, Parkinson’s disease, or Tourette’s syndrome. 

In addition to battery-powered electronic implants (with downloadable software, and maybe computer viruses), brain augmentation can happen through trans-cranial magnetic stimulation, which is getting more precise, or something called two-photon microscopy.  From what I can tell, this technology involves pumping laser light into certain molecules in hopes of getting the molecules to absorb two photons simultaneously, which causes the molecule to re-emit a single photon of higher energy than either of the two input photons.  I must have missed the part where he explained how this could augment a person’s brain function.  Maybe it has something to do with the compounds that make neurons sensitive to certain wavelengths of light.  Boyden is part of a start-up “neurotech” firm called EOS Neurosciences, dedicated to commercializing the technology to confer photosensitivity to neurons not normally responsive to light. 
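The energy bookkeeping behind that "two photons in, one higher-energy photon out" description is easy to check with E = hc/λ.  Here's a quick sketch (the 1000 nm infrared wavelength is just an illustrative number I picked, not one from the talk):

```python
H = 6.626e-34  # Planck's constant, J*s
C = 2.998e8    # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of one photon: E = h*c / wavelength, in joules."""
    return H * C / wavelength_m

ir = 1000e-9                      # one infrared photon at 1000 nm
combined = 2 * photon_energy(ir)  # two photons absorbed simultaneously
# Wavelength of a single photon carrying that combined energy:
equivalent_nm = H * C / combined * 1e9
print(equivalent_nm)  # ~500 nm: visible green light from invisible infrared
```

So absorbing two infrared photons at once is energetically like absorbing one photon at half the wavelength, which is why the technique can trigger visible-light-sensitive molecules deep in tissue that infrared alone would pass through.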

Boyden said (not in so many words) that electrode (or fiber optic) implantation will suffer from the same “tyranny of numbers” problem that led electronics engineers to invent the integrated circuit.  The IC lets us connect up a much larger number of components because we can fabricate them right next to each other, from the same material.  Could an “IC” approach help neural prostheses?  It’s a little harder than the electronics case since one of the circuit components (the neuron) is given to us and can’t be changed.

Jurgen Schmidhuber

Jurgen Schmidhuber presented the simple yet fascinating idea that we get all of our pleasure from finding better ways to compress the information we see.  We do so by detecting patterns in the information, which allows us to compress it by describing it at a higher level.  Google: “artificial curiosity”, “theory of beauty”, and “converging history”.  The last term describes his theory that everyone sees history accelerating towards their own time.  He showed how similar evidence could have convinced a Ray Kurzweil that the singularity was near in the 16th century.  There’s never been a shortage of crazy people claiming “The End is Near”.  This is why so many people think the Singularity is near.  It’s a bias from living in this time.  So far nobody has had the guts to respond to that criticism.  Today’s Kurzweil is supposed to address critics tomorrow, so we’ll see if he talks about Schmidhuber’s theories.
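The "compression progress" idea can be sketched in a few lines of code.  This is my own toy illustration, not Schmidhuber's formulation: the intrinsic reward for a new observation is how many bits the updated model saves when re-encoding the data it had already seen.

```python
import math
from collections import Counter

def codelength(data, counts):
    """Bits to encode each byte of `data` under a Laplace-smoothed
    unigram model given by the byte frequencies in `counts`."""
    total = sum(counts.values())
    vocab = 256  # byte alphabet
    return sum(-math.log2((counts[b] + 1) / (total + vocab)) for b in data)

def compression_progress(history, new_obs):
    """Intrinsic reward: bits saved re-encoding `history` after the
    model is updated with `new_obs`."""
    before = codelength(history, Counter(history))
    after = codelength(history, Counter(history + new_obs))
    return before - after  # positive => old data became more compressible

history = b"abababababab"
print(compression_progress(history, b"abababab"))  # pattern reinforced: positive
print(compression_progress(history, b"xyzq"))      # unrelated noise: negative
```

An observation that reinforces a pattern earns positive reward (it's "interesting"), while unrelated noise earns a negative one (it's "boring") -- which matches the talk's claim that curiosity pulls us toward data we can learn to compress better.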

David Chalmers

David Chalmers, the consciousness guy, spoke of how important simulation is in ensuring that strong AI is possible & controllable.  He used a very formal logical argument to show that 1) we can create AI with intelligence equal to our own, 2) AI can create AI+ (AI with greater-than-human intelligence), and 3) AI+ can create AI++ (super intelligence).  It was all very plausible except for the assumption that AI+ or AI++ is even possible to construct, using any technique at all.  Maybe human-level intelligence is all that’s possible.  Chalmers argues that we should isolate our AI experiments in closed simulated worlds, and avoid talking to them or giving them information.  This will keep them from learning too much about us and taking over our world.  This is the most concrete proposal I’ve heard for having a safe, beneficial intelligence explosion. 

In general, Chalmers thinks the Singularity will take centuries to arrive, rather than decades.  He also thinks it will be more productive to create super intelligence through a “dumb” path like simulated evolution.  This makes sense to me as the easiest way to create strong AI, and answers critics who say designing AI or AI+ is inherently beyond our capabilities. 

Ultimately, the Singularity and intelligence explosion will happen.  We can choose to upload ourselves into the “simulated” world of AI++ and then enhance ourselves to try to keep up.  Chalmers argues that gradual uploading will work and even preserve our sense of self.  It’s worth a shot, since the alternative is extinction.

Random Observations - Saturday

Funniest presenters: Ned Seeman, NYU and Jurgen Schmidhuber, IDSIA (but Jurgen was funny in a very German way).

Phone of choice among these techie futurists?  You guessed it, the iPhone. 

We are like patterns of water in a flowing stream.  While the water moves through quickly, the pattern persists.   The pattern must be us, since we feel the same sense of identity even if all of our molecules at this time were not part of us at an earlier time.

Saturday, October 3, 2009

Is the brain way more complicated than most brain scientists think?

Stuart Hameroff’s talk was unexpectedly interesting.  He talked about how gap junctions and microtubules make neural operation in our brains much more complex than the synaptic model implies.  Gamma synchrony, microtubule memory.  Makes the computational requirements for consciousness more demanding than other singularity proponents have been assuming.

First talks - AGI

What is AGI (Artificial General Intelligence)? This new term for AI seeks to differentiate itself from the old-style "narrow AI" that can only accomplish specific tasks rather than being generally intelligent.

10:00 am -- A couple of the talks this morning were about whole brain emulation, which could lead to a way to back up or transfer our intelligence to a new substrate, such as a computer.  Some of the most interesting questions about this are the ethical ones.  Does a conscious emulated intelligence (assuming for the moment such a thing is possible) have legal rights, such as informed consent for experiments? 

11:10 -- Ben Goertzel seems to have his act together around AI.  He runs Novamente (http://www.novamente.net/) and OpenCog (http://www.opencog.org/wiki/The_Open_Cognition_Project), so he’s doing things for real, and not in academia.  But still, how is it different from AI as I learned it in college in the ’80s?

Friday, October 2, 2009

What is the Singularity, and why does it matter?

I know I won’t get any sympathy from most of you, but I’m not usually awake at this hour (8:05 am). Today I’m on the Acela Express, already speeding between Providence and Stamford, CT. They say it reaches a top speed of 150 mph, but it only does so briefly, because our country is too shortsighted to invest in a decent set of tracks on the nation’s busiest rail corridor. The regular train takes only 30 minutes longer to travel between Boston and NYC. Regardless, I should be at Penn Station by 10:45 am.

After checking in at the Americana Inn, I plan to take in a museum or two today. The conference starts bright and early on Saturday, so I don’t plan to stay up late.

I was hoping to finish Kurzweil’s book, The Singularity Is Near, but I won’t have time. The book’s subtitle is ‘When Humans Transcend Biology’. The Singularity has also been described as the era when humans merge with their technology. Most important in my mind, however, is the notion that the Singularity is a point where technology will advance at such a rate that it will feed on itself and appear to progress almost infinitely fast. This will happen because of the confluence of several forces: the lower cost and higher power of computing; the advent of AGI (Artificial General Intelligence, to distinguish it from so-called Narrow AI, such as chess-playing computers or financial trading programs); the fruition of the Human Genome Project in new medicines extending our lives; and nanotechnology transforming everything from manufacturing to medicine.

I really want to believe in the bright, limitless future being painted by most of the presenters at the conference. However, I am a little skeptical: is this just another flying-car prediction from the middle of the last century, dusted off for this one? This latest round is a little different, however, with much greater emphasis on evidence-based predictions. The thrust of Kurzweil’s book is that technology always has and always will advance exponentially. Supposedly we’re now entering the ‘knee of the curve’, where technology acceleration becomes so rapid that the Singularity will happen, after which all bets are off.

I’m also a little concerned with the short shrift given to ethics and safety in the realms of AGI and nanotechnology. There’s some talk of it, but I’d like to see every speaker address these topics with more than patronizing assurances. As in Donald Fagen’s ‘I.G.Y.’, we need to be wary of a future with “A just machine to make big decisions, programmed by fellows with compassion and vision.” Fagen’s lyric subtly implies that an invention is only as smart as its creator. But what about when the computers are programmed by other computers? How will we even know what’s going on, let alone control it?

I will also be shopping for new career paths here. This is based on the realization that whether the Singularity comes in 2030 or never, my job is likely to be outsourced to a computer before I’m ready to retire. Even narrow AI could do what I do for a living (design and construct software). I’m intrigued by making the programs that will make future programs, but such a worker inherently seeks to make himself obsolete. What jobs will never be outsourced? If you assume that moderately skilled and intelligent robots will be around in the next 20 years, few jobs are safe. Will humans become obsolete? Are The Matrix and The Terminator accurate predictions after all? What can we do to ensure AGI, robotics, and nanotechnology remain beneficial to humans? Or at least beneficial to what humans are evolving into? Because that’s the real story here: evolution by natural selection is coming to a close. From now on, evolution by intelligent artificial selection will leave natural selection in the dust. What will you do when the future arrives? Happily go with the flow, possibly transforming yourself radically? Or be one of the biological Luddites whose bloodlines eventually peter out?

“Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us… [S]oon we must look deep within ourselves and decide what we wish to become.” -- E.O. Wilson, Consilience: The Unity of Knowledge, 1998.