A Space Elevator in 7
Lifeboat News: The Blog, Mon, 31 May 2010

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog (see also Software and the Singularity and AI and Driverless Cars). Here are the sections on the space elevator. I hope you find these pages food for thought, and I appreciate any feedback.


A Space Elevator in 7

Midnight, July 20, 1969; a chiaroscuro of harsh contrasts appears on the television screen. One of the shadows moves. It is the leg of astronaut Edwin Aldrin, photographed by Neil Armstrong. Men are walking on the moon. We watch spellbound. The earth watches. Seven hundred million people are riveted to their radios and television screens on that July night in 1969. What can you do with the moon? No one knew. Still, a feeling in the gut told us that this was the greatest moment in the history of life. We were leaving the planet. Our feet had stirred the dust of an alien world.

—Robert Jastrow, Journey to the Stars

Management is doing things right, Leadership is doing the right things!

—Peter Drucker

SpaceShipOne was the first privately funded craft to go into space, and it set a number of important “firsts”: the first privately funded craft to exceed Mach 2 and Mach 3, the first privately funded manned spacecraft to exceed 100 km altitude, and the first privately funded reusable spacecraft. The project is estimated to have cost $25 million and was built by 25 people. It now hangs in the Smithsonian because it serves no commercial purpose, and because getting into space is no longer the challenge; the expense is.

In the 21st century, more cooperation, better software, and nanotechnology will bring profound benefits to our world, and we will put the Baby Boomers to shame. I focus only on information technology in this book, but materials sciences will be one of the biggest tasks occupying our minds in the 21st century and many futurists say that nanotech is the next (and last?) big challenge after infotech.

I’d like to end this book with one more big idea: how we can jump-start the nanotechnology revolution and use it to colonize space. Space, perhaps more than any other endeavor, has the ability to harness our imagination and give everyone hope for the future. When man is exploring new horizons, there is a swagger in his step.

Colonizing space will change man’s perspective. Hoarding is a very natural instinct. If you give a well-fed dog a bone, he will bury it to save it for a leaner day. Every animal hoards. Humans hoard money, jewelry, clothes, friends, art, credit, books, music, movies, stamps, beer bottles, baseball statistics, etc. We become very attached to these hoards. Whether we are fighting over $5,000 or $5,000,000, the emotions have the exact same intensity.

When we feel crammed onto this pale blue dot, we forget that any resource we could possibly want is out there in incomparably big numbers. If we allocate the resources merely of our solar system to all 6 billion people equally, then this is what we each get:

Resource                  Amount per person
Hydrogen                  34,000 billion tons
Iron                      834 billion tons
Silicates (sand, glass)   834 billion tons
Oxygen                    34 billion tons
Carbon                    34 billion tons
Energy production         64 trillion kilowatt-hours

Even if we confine ourselves only to the resources of this planet, we have far more than we could ever need. This simple understanding is a prerequisite for a more optimistic and charitable society, which has characterized eras of great progress. Unfortunately, NASA’s current plans are far from adding that swagger.

If NASA follows through on its 2004 vision to retire the Space Shuttle and go back to rockets, and go to the moon again, this is NASA’s own imagery of what we will be looking at on DrudgeReport.com in 2020.

Our astronauts will still be pissing in their space suits in 2020.

According to NASA, the above is what we will see in 2020, but if you squint your eyes, it looks just like 1969:

All this was done without things we would call computers.

Only a government bureaucracy can make so little progress in 50 years and consider it business as usual. There are many documented cases of large government organizations plagued by failures of imagination, yet no one considers that the rocket-scientist bureaucrats at NASA might be plagued by the same affliction. This is especially ironic because the current NASA Administrator, Michael Griffin, has admitted that many of its past efforts were failures:

  • The Space Shuttle, designed in the 1970s, is considered a failure because it is unreliable, expensive, and small. It costs $20,000 per pound of payload to put into low-earth orbit (LEO), a mere few hundred miles up.
  • The International Space Station (ISS) is small, and only 200 miles away, where gravity is 88% of that at sea level. It is not self-sustaining and doesn’t get us any closer to putting people on the moon or Mars. (Moving at 17,000 miles per hour, it falls around the Earth fast enough to stay in the same orbit.) America alone spent $100 billion on this boondoggle.

The key to any organization’s ultimate success, from NASA to any private enterprise, is that there are leaders at the top with vision. NASA’s mistakes were not that it was built by the government, but that the leaders placed the wrong bets. Microsoft, by contrast, succeeded because Bill Gates made many smart bets. NASA’s current goal is “flags and footprints”, but their goal should be to make it cheap to do those things, a completely different objective.1

I don’t support redesigning the Space Shuttle, but I also don’t believe that anyone at NASA has seriously considered building a next-generation reusable spacecraft. NASA is basing its decision to move back to rockets primarily on the failures of the first Space Shuttle, an idea similar to looking at the first car ever built and concluding that cars won’t work.

Unfortunately, NASA is now going back to technology even more primitive than the Space Shuttle. The “consensus” in the aerospace industry today is that rockets are the future. Rockets might be in our future, but they are also in the past. The state of the art in rocket research is to make rockets 15% more efficient. Rocket research is incremental today because the fundamental chemistry and physics haven’t changed since the first launches in the mid-20th century.

Chemical rockets are a mistake because the fuel that propels them upward is inefficient. They have a low “specific impulse”, which means it takes lots of fuel to accelerate the payload, and even more fuel to accelerate that fuel! As the impressive scenes of shuttle launches show, the current technology is not at all efficient; rockets typically contain 6% payload and 94% overhead. (Jet engines are 15 times more efficient than rockets, but they don’t work without atmospheric oxygen.)
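The fuel-needed-to-accelerate-the-fuel problem is captured by the Tsiolkovsky rocket equation. Here is a minimal sketch of it (my own illustration, not from the book), plugging in a rough 9.4 km/s delta-v to reach low-earth orbit and a hydrogen/oxygen-class exhaust velocity; both figures are assumptions for the sake of the example:

```python
import math

def propellant_fraction(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: the fraction of liftoff mass
    that must be propellant to achieve the given delta-v."""
    return 1.0 - math.exp(-delta_v / exhaust_velocity)

DELTA_V_LEO = 9_400.0   # m/s, rough delta-v to reach low-earth orbit (assumed)
CHEMICAL_VE = 4_400.0   # m/s, ~450 s specific impulse, hydrogen/oxygen class

print(f"chemical rocket: {propellant_fraction(DELTA_V_LEO, CHEMICAL_VE):.0%} propellant")
# about 88% propellant, in the same ballpark as the 94% overhead cited above
```

The remaining tenth or so of the liftoff mass still has to cover tanks, engines, and structure, which is why the payload itself ends up at only a few percent.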

If you want to know why we have not been back to the moon for decades, here is an analogy:

What would taking delivery of this car cost you?
A Californian buys a car made in Japan.
The car is shipped in its own car carrier.
The car is off-loaded in the port of Los Angeles.
The freighter is then sunk.

The latest in propulsion technology is electrical ion drives, which accelerate atoms 20 times faster than chemical rockets, which means you need much less fuel. The inefficiency of our current chemical rockets is what is preventing man from colonizing space. Our simple modern rockets might be cheaper than our complicated old Space Shuttle, but it will still cost thousands of dollars per pound to get to LEO, a fancy acronym for 200 miles away. Working on chemical rockets today is the technological equivalent of polishing a dusty turd, yet this is what our esteemed NASA is doing.


The Space Elevator

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

—Arthur C. Clarke RIP, 1962

The best way to predict the future is to invent it. The future is not laid out on a track. It is something that we can decide, and to the extent that we do not violate any known laws of the universe, we can probably make it work the way that we want to.

—Alan Kay

A NASA depiction of the space elevator. A space elevator will make it hundreds of times cheaper to put a pound into space. It is an efficiency difference comparable to that between the horse and the locomotive.

One of the best ideas for getting back into space cheaply has been kicking around NASA’s research labs for years:

Scale picture of the space elevator relative to the size of Earth. The moon is 30 Earth-diameters away, but once you are at GEO, it requires relatively little energy to get to the moon, or anywhere else.

A space elevator is a 65,000-mile tether upon which we can launch things into space in a slow, safe, and cheap way.

The climbers don’t even need to carry their energy with them: solar panels can provide the power as they climb. All this means you need much less fuel. Everything is fully reusable, so once you have built such a system, daily launches are easy.

The first elevator’s climbers will travel into space at just a few hundred miles per hour — a very safe speed. Building a device which can survive the acceleration and jostling is a large part of the expense of putting things into space today. This technology will make it hundreds, and eventually thousands of times cheaper to put things, and eventually people, into space.
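Some back-of-the-envelope arithmetic (mine, not from any official design) shows what “a few hundred miles per hour” means in practice, assuming a 200 mph climber and the standard geosynchronous altitude of 22,236 miles:

```python
GEO_ALTITUDE_MI = 22_236    # geosynchronous altitude above the surface, miles
RIBBON_LENGTH_MI = 65_000   # full tether length from the text, miles
CLIMB_SPEED_MPH = 200       # assumed reading of "a few hundred miles per hour"

print(f"to GEO:            {GEO_ALTITUDE_MI / CLIMB_SPEED_MPH / 24:.1f} days")
print(f"to the ribbon end: {RIBBON_LENGTH_MI / CLIMB_SPEED_MPH / 24:.1f} days")
# roughly 4.6 days to GEO and 13.5 days to the far end
```

Days-long trips are perfectly acceptable for cargo, which is the first elevator’s job.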

A space elevator might sound like science fiction, but like many of the ideas of science fiction, it is a fantasy that makes economic sense. You needn’t trust my opinion on whether a space elevator is feasible, but consider that NASA has never even officially weighed in on the topic, which is itself a sign they haven’t given it serious consideration.

This all may sound like science fiction, but compared to the technology of the 1960s, when mankind first embarked on a trip to the moon, a space elevator is simple for our modern world to build. In fact, if you took a cellphone back to the Apollo scientists, they’d treat it like a supercomputer and have teams of engineers huddled over it 24 hours a day. With only the addition of the computing technology of one cellphone, we might have shaved a year off the date of the first moon landing.

Carbon Nanotubes

Nanotubes are sheets of carbon atoms, arranged in hexagons, rolled into tubes. Graphic created by Michael Ströck.

We have every technological capability necessary to build a space elevator with one exception: carbon nanotubes (CNT). To adapt a line from Thomas Edison, a space elevator is 1% inspiration, and 99% perspiration.

Carbon nanotubes are extremely strong and light, with a theoretical strength of three million kilograms per square centimeter; a bundle the size of a few hairs can lift a car. The theoretical strength of nanotubes is far greater than what we would need for our space elevator; current baseline designs specify a paper-thin, 3-foot-wide ribbon. These seemingly flimsy dimensions would be strong enough to support their own weight, and the 10-ton climbers using the elevator.
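To get a feel for those numbers, here is a rough sketch (my own arithmetic, with an assumed “paper-thin” thickness of about 0.1 mm, and using the theoretical strength quoted above; real materials would be derated far below this):

```python
STRENGTH_KG_PER_CM2 = 3_000_000   # theoretical CNT strength quoted above
WIDTH_CM = 3 * 30.48              # the 3-foot-wide ribbon, in centimeters
THICKNESS_CM = 0.01               # assumed "paper-thin": about 0.1 mm

cross_section_cm2 = WIDTH_CM * THICKNESS_CM
capacity_tons = STRENGTH_KG_PER_CM2 * cross_section_cm2 / 1000

print(f"cross-section:    {cross_section_cm2:.2f} cm^2")
print(f"theoretical load: {capacity_tons:,.0f} metric tons")
# under these assumptions, roughly 2,700 tons of theoretical capacity,
# against 10-ton climbers plus the ribbon's own weight
```

Even after dividing by a large safety factor, and by the gap between theoretical and manufacturable strength, the margin over a 10-ton climber is what makes such a flimsy-sounding design plausible.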

The nanotubes we need for our space elevator are the perfect place to start the nanotechnology revolution because, unlike biological nanotechnology research, which uses hundreds of different atoms in extremely complicated structures, nanotubes have a trivial design.

The best way to attack a big problem like nanotechnology is to first attack a small part of it, like carbon nanotubes. A “Manhattan Project” on general nanotechnology does not make sense because it is too unfocused a problem, but such an effort might make sense for nanotubes. Or, it might simply require the existing industrial expertise of a company like Intel. Intel is already experimenting with nanotubes inside computer chips because metal loses the ability to conduct electricity at very small diameters. But no one has asked them if they could build mile-long ropes.

The US government has increased investments in nanotechnology recently, but we aren’t seeing many results. From space elevator expert Brad Edwards:

There’s what’s called the National Nanotechnology Initiative. When I looked into it, the budget was a billion dollars. But when you look closer at it, it is split up between a dozen agencies, and within each agency it’s split again into a dozen different areas, much of it ends up as $100,000 grants. We looked into it with regards to carbon nanotube composites, and it appeared that about thirty million dollars was going into high-strength materials — and a lot of that was being spent internally in a lot of the agencies; in the end there’s only a couple of million dollars out of the billion-dollar budget going into something that would be useful to us. The money doesn’t have focus, and it’s spread out to include everything. You get a little bit of effort in a thousand different places. A lot of the budget is spent on one entity trying to play catch-up with whoever is leading. Instead of funding the leader, they’re funding someone else internally to catch up.

Again, here is a problem similar to the one we find in software today: people playing catch-up rather than working together. I don’t know what nanotechnology scientists do every day, but it sounds like they would do well to follow in the footsteps of our free software pioneers and start cooperating.

The widespread production of nanotubes could be the start of a nanotechnology revolution. And the space elevator, the killer app of nanotubes, will enable the colonization of space.

Why?

William Bradford, speaking in 1630 of the founding of the Plymouth Bay Colony, said that all great and honorable actions are accompanied with great difficulties, and both must be enterprised and overcome with answerable courage.

There is no strife, no prejudice, no national conflict in outer space as yet. Its hazards are hostile to us all. Its conquest deserves the best of all mankind, and its opportunity for peaceful cooperation may never come again. But why, some say, the moon? Why choose this as our goal? And they may well ask why climb the highest mountain? Why, 35 years ago, fly the Atlantic? Why does Rice play Texas?

We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.

It is for these reasons that I regard the decision last year to shift our efforts in space from low to high gear as among the most important decisions that will be made during my incumbency in the office of the Presidency.

In the last 24 hours we have seen facilities now being created for the greatest and most complex exploration in man’s history. We have felt the ground shake and the air shattered by the testing of a Saturn C-1 booster rocket, many times as powerful as the Atlas which launched John Glenn, generating power equivalent to 10,000 automobiles with their accelerators on the floor. We have seen the site where five F-1 rocket engines, each one as powerful as all eight engines of the Saturn combined, will be clustered together to make the advanced Saturn missile, assembled in a new building to be built at Cape Canaveral as tall as a 48 story structure, as wide as a city block, and as long as two lengths of this field.

The growth of our science and education will be enriched by new knowledge of our universe and environment, by new techniques of learning and mapping and observation, by new tools and computers for industry, medicine, the home as well as the school.

I do not say that we should or will go unprotected against the hostile misuse of space any more than we go unprotected against the hostile use of land or sea, but I do say that space can be explored and mastered without feeding the fires of war, without repeating the mistakes that man has made in extending his writ around this globe of ours.

We have given this program a high national priority — even though I realize that this is in some measure an act of faith and vision, for we do not now know what benefits await us. But if I were to say, my fellow citizens, that we shall send to the moon, 240,000 miles away from the control station in Houston, a giant rocket more than 300 feet tall, the length of this football field, made of new metal alloys, some of which have not yet been invented, capable of standing heat and stresses several times more than have ever been experienced, fitted together with a precision better than the finest watch, carrying all the equipment needed for propulsion, guidance, control, communications, food and survival, on an untried mission, to an unknown celestial body, and then return it safely to earth, re-entering the atmosphere at speeds of over 25,000 miles per hour, causing heat about half that of the temperature of the sun — almost as hot as it is here today — and do all this, and do it right, and do it first before this decade is out — then we must be bold.

John F. Kennedy, September 12, 1962

Lunar Lander at the top of a rocket. Rockets are expensive and impose significant design constraints on space-faring cargo.

NASA has 18,000 employees and a $17 billion budget. Even with a fraction of those resources, their ability to oversee the design, handle mission control, and work with many partners is more than equal to this task.

If NASA doesn’t build the space elevator, someone else might, and it would change almost everything about how NASA does things today. NASA’s tiny (15-foot-wide) new Orion spacecraft, which was built to return us to the moon, was designed to fit atop a rocket and return the astronauts to Earth with a 25,000-mph thud, just like in the Apollo days. Without the constraints a rocket imposes, NASA’s spaceship to get us back to the moon would have a very different design. NASA would need to throw away a lot of the R&D they are now doing if a space elevator were built.

Another reason the space elevator makes sense is that it would get the various scientists at NASA to work together on a big, shared goal. NASA has recently sent robots to Mars to dig two-inch holes in the dirt. That type of experience is similar to the skills necessary to build the robotic climbers that would climb the elevator, putting those scientists to use on a greater purpose.

Space debris is a looming hazard, and a threat to the ribbon:

Map of space debris. The US Strategic Command monitors 10,000 large objects to prevent them from being misinterpreted as a hostile missile. China blew up a satellite in January 2007, which created 35,000 pieces of debris larger than 1 centimeter.

The space elevator provides both a motive, and a means to launch things into space to remove the debris. (The first elevator will need to be designed with an ability to move around to avoid debris!)

Once you have built your first space elevator, the cost of building the second one drops dramatically. A space elevator will eventually make it $10 per pound to put something into space. This will open many doors for scientists and engineers around the globe: bigger and better observatories, a spaceport at GEO, and so forth.

Surprisingly, one of the biggest incentives for space exploration is likely to be tourism. From Hawaii to Africa to Las Vegas, the primary revenue in many exotic places is tourism. We will go to the stars because man is driven to explore and see new things.

Space is an extremely harsh place, which is why it is such a miracle that there is life on Earth to begin with. The moon is too small to have an atmosphere, but we can terraform Mars to create one, and make it safe from radiation and pleasant to visit. This will also teach us a lot about climate change, and in fact, until we have terraformed Mars, I am going to assume the global warming alarmists don’t really know what they are talking about yet.2 One of the lessons in engineering is that you don’t know how something works until you’ve done it once.

Terraforming Mars may sound like a silly idea today, but it is simply another engineering task.3 I worked in several different groups at Microsoft, and even though the set of algorithms surrounding databases are completely different from those for text engines, they are all engineering problems and the approach is the same: break a problem down and analyze each piece. (One of the interesting lessons I learned at Microsoft was the difference between real life and standardized tests. In a standardized test, if a question looks hard, you should skip it and move on so as not to waste precious time. At Microsoft, we would skip past the easy problems and focus our time on the hard ones.)

Engineering teaches you that there are an infinite number of ways to attack a problem, each with various trade-offs; it might take 1,000 years to terraform Mars if we were to send one ton of material, but only 20 years if we could send 1,000 tons of material. Whatever we finally end up doing, the first humans to visit Mars will be happy that we turned it green for them. This is another way our generation can make its mark.

A space elevator is a doable mega-project, but there is no progress beyond a few books and conferences because the very small number of people on this planet who are capable of initiating this project are not aware of the feasibility of the technology.

Brad Edwards, one of the world’s experts on the space elevator, has a PhD and a decade of experience designing satellites at Los Alamos National Labs, and yet he has told me that he is unable to get into the doors of leadership at NASA, or the Gates Foundation, etc. No one who has the authority to organize this understands that a space elevator is doable.

Glenn Reynolds has blogged about the space elevator on his very influential Instapundit.com, yet a national dialog about this topic has not yet happened, and NASA is just marching ahead with its expensive, dim ideas. My book is an additional plea: one more time, and with feeling!

How and When

It does not follow from the separation of planning and doing in the analysis of work that the planner and the doer should be two different people. It does not follow that the industrial world should be divided into two classes of people: a few who decide what is to be done, design the job, set the pace, rhythm and motions, and order others about; and the many who do what and as they are told.

—Peter Drucker

There are many interesting details surrounding a space elevator; for those who want to dig deeper, I recommend The Space Elevator, co-authored by Brad Edwards.

The size of the first elevator is one of the biggest questions to resolve. If you were going to lay fiber optic cables across the Atlantic Ocean, you’d set aside a ton of bandwidth capacity. Likewise, the most important metric for our first space elevator is its size. I believe at least 100 tons per day is a worthy requirement; otherwise the humans will revert to form and start hoarding the cargo space.

The one other limitation with current designs is that they assume climbers which travel hundreds of miles per hour. This is a fine speed for cargo, but it means that it will take days to get into orbit. If we want to send humans into space in an elevator, we need to build climbers which can travel at least 10,000 miles per hour. While this seems ridiculously fast, if you accelerate to this speed over a period of minutes, it will not be jarring. Perhaps this should be the challenge for version two if they can’t get it done the first time.
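A quick check (my arithmetic, not the book’s) of whether 10,000 mph really can be reached gently, for a few assumed ramp-up times:

```python
TARGET_MPH = 10_000
MPH_TO_MS = 0.44704   # miles per hour to meters per second
G = 9.81              # m/s^2, standard gravity

top_speed = TARGET_MPH * MPH_TO_MS   # about 4,470 m/s
for minutes in (2, 5, 10):
    accel_g = top_speed / (minutes * 60) / G
    print(f"{minutes:2d}-minute ramp-up: {accel_g:.2f} g")
# a 10-minute ramp-up comes out to roughly 0.76 g, which is indeed gentle
```

Passengers would feel that acceleration on top of Earth’s gravity, so even the total for the 10-minute case stays under 2 g.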

The conventional wisdom amongst those who think it is even possible is that it will take between 20 and 50 years to build a space elevator. However, anyone who makes such predictions doesn’t understand that engineering is a fungible commodity. I can only presume they have never had the privilege of working with a team of 100 people who accomplish as much in 3 days as you will in a year. Two people will, in general, accomplish something twice as fast as one person.4 How can you say something will unequivocally take a certain amount of time when you don’t specify how many resources it will require or how many people you plan to assign to the task?

Furthermore, predictions are usually way off. If you asked someone how long it would take unpaid volunteers to make Wikipedia as big as the Encyclopedia Britannica, no one would have guessed the correct answer of two and a half years. From creating a space elevator to world domination by Linux, anything can happen in far less time than we think is possible if everyone simply steps up to play their part. The way to be a part of the future is to invent it, by unleashing our scientific and creative energy towards big, shared goals. Wikipedia, as our encyclopedia, was an inspiration to millions of people, and so the resources have come piling in. The way to get help is to create a vision that inspires people. In a period of 75 years, man went from using horses and wagons to landing on the moon. Why should it take 20 years to build something that is 99% doable today?

Many of the components of a space elevator are simple enough that college kids are building prototype elevators in their free time. The Elevator:2010 contest is sponsored by NASA, and while these contests have generated excitement and interest in the press, the students are building toys, much like a radio-controlled airplane is a toy compared to a Boeing airliner.

I believe we could have a space elevator built in 7 years. If you divvy up five years of work per person, and add in a year to ramp up and test, you can see how seven years is quite reasonable. Man landed on the moon 7 years after Kennedy’s speech, exactly as he ordained, because dates can be self-fulfilling prophecies. It allows everyone to measure themselves against their goals, and determine if they need additional resources. If we decided we needed an elevator because our civilization had a threat of extermination, one could be built in a very short amount of time.

If the design of the hardware and the software were done in a public fashion, others could take the intermediate efforts and test them and improve them, therefore saving further engineering time. Perhaps NASA could come up with hundreds of truly useful research projects for college kids to help out on instead of encouraging them to build toys. There is a lot of software to be written and that can be started now.

The Unknown Unknown is the nanotubes, but nearly all the other pieces can be built without having any access to them. We will only need them wound into a big spool on the launch date.

I can imagine that any effort like this would get caught up in a tremendous amount of international political wrangling that could easily add years onto the project. We should not let this happen, and we should remind each other that the space elevator is just the railroad car to space; the exciting stuff is the cargo inside and the possibilities out there. A space elevator is not a zero-sum endeavor: it would enable lots of other big projects that are totally unfeasible currently. It would let the various international space agencies that have money, but no great purpose, work together on a large, shared goal, and as a side effect it would strengthen international relations.5


1 The Europeans aren’t providing great leadership either. One of the big investments of their space agencies, besides the ISS, is to build a duplicate GPS satellite constellation, which they are doing primarily because of anti-Americanism! Too bad they don’t realize that their emotions are causing them to re-implement 35-year-old technology, instead of spending that $5 billion on a truly new advancement. Cloning GPS in 2013: quite an achievement, Europe!

2 Carbon is not a pollutant, and it is valuable. It is 18% of the mass of the human body, but only 0.03% of the mass of the Earth. If carbon were more widespread, diamonds would be cheaper. Driving very fast cars is the best way to unlock the carbon we need. Anyone who thinks we are running out of energy doesn’t understand the algebra in E = mc2.

3 Mars’ moon Phobos is only 3,700 miles above Mars, and if we create an atmosphere, it will slow down and crash. We will need to find a place to crash the fragments; I suggest one of the largest canyons we can find. We could put them next to a cross dipped in urine and call it the largest man-made art.

4 Fred Brooks’ The Mythical Man-Month argues that adding engineers late to a project makes a project later, but ramp-up time is just noise in the management of an engineering project. Also, wikis, search engines, and other technologies invented since his book have lowered the overhead of collaboration.

5 Perhaps the Europeans could build the station at GEO. Russia could build the shuttle craft to move cargo between the space elevator and the moon. The Middle East could provide an electrical grid for the moon. China could take on the problem of cleaning up the orbital space debris and build the first moon base. Africa could attack the problem of terraforming Mars, etc.

AI and Driverless Cars
Lifeboat News: The Blog, Thu, 27 May 2010

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here are several more sections on AI topics. I hope you find these pages food for thought, and I appreciate any feedback.


The future is open source everything.

—Linus Torvalds

That knowledge has become the resource, rather than a resource, is what makes our society post-capitalist.

—Peter Drucker, 1993

Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.1 Some say free software doesn’t work in theory, but it does work in practice. In truth, it “works” in proportion to the number of people who are working together, and their collective efficiency.

In early drafts of this book, I had positioned this chapter after the one explaining economic and legal issues around free software. However, I now believe it is important to discuss artificial intelligence separately and first, because AI is the holy grail of computing, and the reason we haven’t solved AI is that there are no free software codebases that have gained critical mass. Far more than enough people are out there, but they are usually working in teams of one or two, or on proprietary codebases.

Deep Blue has been Deep-Sixed

Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.

—Alan Kay, computer scientist

The source code for IBM’s Deep Blue, the first chess machine to beat then-reigning World Champion Garry Kasparov, was built by a team of about five people. That code has been languishing in a vault at IBM ever since because it was not created under a license that would enable further use by anyone, even though IBM is not attempting to make money from the code or using it for anything.

The second best chess engine in the world, Deep Junior, is also not free, and is therefore being worked on by a very small team. If we have only small teams of people attacking AI, or writing code and then locking it away, we are not going to make progress any time soon towards truly smart software.

Today’s chess computers have no true AI in them; they simply play moves, and then use human-created analysis to measure the result. If you were to go tweak the computer’s value for how much a queen is worth compared to a pawn, the machine would start losing and wouldn’t even understand why. It comes off as intelligent only because it has very smart chess experts programming the computer precisely how to analyze moves, and to rate the relative importance of pieces and their locations, etc.
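Today’s engines rest on exactly this kind of hand-tuned evaluation. Here is a minimal sketch of the idea; the piece values below are conventional textbook numbers, not Deep Blue’s actual (unreleased) parameters:

```python
# A minimal sketch of a hand-tuned chess evaluation, as described above.
# The piece values are standard textbook conventions; a real engine adds
# many more human-crafted terms (piece placement, king safety, mobility).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(board):
    """Score a position as (white material - black material).

    `board` is a string of piece letters: uppercase = white,
    lowercase = black. Kings are ignored (they are never captured).
    """
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# White has a queen and a pawn (10) against black's rook (5).
print(material_score("QPr"))  # prints 5
```

Tweak `PIECE_VALUES["Q"]` down to 2 and the engine would happily trade its queen for a rook, losing game after game without any sense that the numbers, not the opponent, are the problem.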

Deep Blue could analyze two hundred million positions per second, compared to grandmasters who can analyze only 3 positions per second. Who is to say where that code might be today if chess AI aficionados around the world had been hacking on it for the last 10 years?

DARPA Grand Challenge

Proprietary software developers have the advantages money provides; free software developers need to make advantages for each other. I hope some day we will have a large collection of free libraries that have no parallel available to proprietary software, providing useful modules to serve as building blocks in new free software, and adding up to a major advantage for further free software development. What does society need? It needs information that is truly available to its citizens—for example, programs that people can read, fix, adapt, and improve, not just operate. But what software owners typically deliver is a black box that we can’t study or change.

—Richard Stallman

The hardest computing challenges we face are man-made: language, roads and spam. Take, for instance, robot-driven cars. We could do this without a vision system, and modify every road on the planet by adding driving rails or other guides for robot-driven cars, but it is much cheaper and safer to build software for cars to travel on roads as they exist today — a chaotic mess.

At the annual American Association for the Advancement of Science (AAAS) conference in February 2007, the “consensus” among the scientists was that we will have driverless cars by 2030. This prediction is meaningless because those working on the problem are not working together, just as those working on the best chess software are not working together. Furthermore, as American cancer researcher Sidney Farber said, “Any man who predicts a date for discovery is no longer a scientist.”

Today, Lexus has a car that can parallel park itself, but its vision system needs only a very vague idea of the obstacles around it to accomplish this task. The challenge of building a robot-driven car rests in creating a vision system that makes sense of painted lines, freeway signs, and the other obstacles on the road, including dirtbags not following “the rules”.

The Defense Advanced Research Projects Agency (DARPA), which, unlike Al Gore, really invented the Internet, has sponsored several contests to build robot-driven vehicles:


Stanley, Stanford University’s winning entry for the 2005 challenge. It might not run over a Stop sign, but it wouldn’t know to stop.

Like the parallel parking scenario, the DARPA Grand Challenge of 2004 required only a simple vision system. Competing cars traveled over a mostly empty dirt road and were given a detailed series of map points. Even so, many of the cars didn’t finish, or performed poorly. There is an expression in engineering, “garbage in, garbage out”: if a car sees poorly, it drives poorly.

What was disappointing about the first challenge was that an enormous amount of software was written to operate these vehicles yet none of it has been released (especially the vision system) for others to review, comment on, improve, etc. I visited Stanford’s Stanley website and could find no link to the source code, or even information such as the programming language it was written in.

Some might wonder why people should work together in a contest, but if all the cars used rubber tires, Intel processors and the Linux kernel, would you say they were not competing? It is a race, with the fastest hardware and driving style winning in the end. By working together on some of the software, engineers can focus more on the hardware, which is the fun stuff.

The following is a description of the computer vision pipeline required to successfully operate a driverless car. Whereas Stanley’s entire software team involved only 12 part-time people, the vision software alone is a problem so complicated it will take an effort comparable in complexity to the Linux kernel to build it:

Image acquisition: Converting sensor inputs from 2 or more cameras, radar, heat, etc. into a 3-dimensional image sequence

Pre-processing: Noise reduction, contrast enhancement

Feature extraction: lines, edges, shape, motion

Detection/Segmentation: Find portions of the images that need further analysis (highway signs)

High-level processing: Data verification, text recognition, object analysis and categorization

The 5 stages of an image recognition pipeline.
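The five stages above chain together into one processing pipeline. The skeleton below is only an illustration of that structure; every stage body is a placeholder assumption, not code from Stanley or any other entry:

```python
# A skeleton of the five-stage vision pipeline listed above. Each stage
# body is a stand-in; in a real system, each stage is a large subsystem.

def acquire(sensor_frames):
    """Image acquisition: fuse camera/radar/heat inputs into one frame."""
    return {"frame": sensor_frames}

def preprocess(image):
    """Pre-processing: noise reduction and contrast enhancement."""
    image["cleaned"] = True
    return image

def extract_features(image):
    """Feature extraction: lines, edges, shape, motion."""
    image["features"] = ["lines", "edges"]
    return image

def segment(image):
    """Detection/segmentation: find regions needing further analysis."""
    image["regions"] = ["highway sign"]
    return image

def analyze(image):
    """High-level processing: verification, text recognition, categorization."""
    return {"objects": image["regions"], "features": image["features"]}

def vision_pipeline(sensor_frames):
    image = acquire(sensor_frames)
    for stage in (preprocess, extract_features, segment, analyze):
        image = stage(image)
    return image

result = vision_pipeline(["camera", "radar"])
print(result["objects"])  # prints ['highway sign']
```

The point of the structure is modularity: if the stage interfaces were shared and open, a hundred teams could each improve one stage without coordinating on the rest.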

A lot of software needs to be written in support of such a system:


The vision pipeline is the hardest part of creating a robot-driven car, but even such diagnostic software is non-trivial.

In 2007, there was a new DARPA Urban challenge. This is a sample of the information given to the contestants:


It is easier and safer to program a car to recognize a Stop sign than it is to point out the location of all of them.

Constructing a vision pipeline that can drive in an urban environment presents a much harder software problem. However, if you look at the vision requirements needed to solve the Urban Challenge, it is clear that recognizing shapes and motion is all that is required, and those are the same requirements that existed in the 2004 challenge! Yet even in the 2007 contest, there was no more sharing than in the previous one.

Once we develop the vision system, everything else is technically easy. Video games contain computer-controlled drivers that can race you while shooting and swearing at you. Their trick is that they already have detailed information about all of the objects in their simulated world.

After we’ve built a vision system, there are still many fun challenges to tackle: preparing for Congressional hearings to argue that these cars should have a speed limit controlled by the computer, or telling your car not to drive aggressively and spill your champagne, or testing and building confidence in such a system.2

Eventually, our roads will get smart. Once we have traffic information, we can have computers efficiently route vehicles around any congestion. A study found that traffic jams cost the average large city $1 billion a year.
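Routing around congestion is, at bottom, a shortest-path computation over a road graph whose edge weights reflect current travel times. A minimal sketch using Dijkstra’s algorithm (the road graph and travel times below are made-up numbers for illustration):

```python
import heapq

# A minimal sketch of congestion-aware routing: Dijkstra's shortest-path
# algorithm over a road graph whose edge weights are travel times (in
# minutes) that already include current congestion.

def fastest_route(graph, start, goal):
    """Return (total_minutes, path) for the quickest route from start to goal."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return float("inf"), []

# The direct A->B freeway is jammed (30 minutes), so the router detours
# through C instead (5 + 10 = 15 minutes).
roads = {
    "A": {"B": 30, "C": 5},
    "C": {"B": 10},
}
print(fastest_route(roads, "A", "B"))  # prints (15, ['A', 'C', 'B'])
```

With live congestion data feeding the edge weights, the same few lines of logic reroute every car in a city; the hard part is collecting the data, not the algorithm.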

No organization today, including Microsoft and Google, contains hundreds of computer vision experts. Do you think GM would be gutsy enough to fund a team of 100 vision experts even if they thought they could corner this market?

There are enough people worldwide working on the vision problem right now. If we could pool their efforts into one codebase, written in a modern programming language, we could have robot-driven cars in five years. It is not a matter of invention, it is a matter of engineering.

1 One website documents 60 pieces of source code that perform Fourier transformations, which is an important software building block. The situation is the same for neural networks, computer vision, and many other advanced technologies.

2 There are various privacy issues inherent in robot-driven cars. When computers know their location, it becomes easy to build a “black box” that would record all this information and even transmit it to the government. We need to make sure that machines owned by a person stay under that person’s control, and do not become controlled by the government without a court order and a compelling burden of proof.

https://spanish.lifeboat.com/blog/2010/05/ai-and-driverless-cars/feed 11
Nuclear Winter and Fire and Reducing Fire Risks to Cities https://spanish.lifeboat.com/blog/2010/05/nuclear-winter-and-fire-and-reducing-fire-risks-to-cities https://spanish.lifeboat.com/blog/2010/05/nuclear-winter-and-fire-and-reducing-fire-risks-to-cities#comments Sun, 02 May 2010 16:24:46 +0000 http://lifeboat.com/blog/?p=890 This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and on email exchanges with Prof. Alan Robock and Prof. Brian Toon, who wrote the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen)
2. Prove that when enough cities in a sufficient area have big fires, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait fires)
3. Prove that the condition persists and affects climate as per the models (others have questioned that, but this issue is not addressed here)

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that the cities will be targeted and will burn in massive firestorms. Alan Robock indicated that they only included fire based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper, the stated assumptions are:

assuming each fire would burn the same area that actually did burn in Hiroshima and assuming an amount of burnable material per person based on various studies.

The implicit assumption is that all buildings react the way the buildings in Hiroshima reacted on that day.

Therefore, the results of Hiroshima are assumed in the Nuclear Winter models.
* 27 days without rain
* with breakfast burners that overturned in the blast and set fires
* mostly wood and paper buildings
* Hiroshima had a firestorm and burned five times more than Nagasaki. Nagasaki was not a notably fire-resistant city: it had the same wood and paper buildings and a high population density.
Recommendations
Build only with non-combustible materials (cement, brick that is made fire-resistant, or specially treated wood). Make the roofs, floors, and shingles non-combustible. Add fire retardants to any high-volume material that could become fuel. Look at city planning to ensure less fire risk for the city. Have a plan for putting out city-wide fires (such as controlled flooding from dams that are already near cities).



Fire and Details
If there are not multiple citywide firestorms, then there is no trigger for nuclear winter, and the later modeling (which is still uncertain) would not even need to be considered.
- The materials of the houses in India and Pakistan do not appear to be right for firestorms (mostly burnt brick and mud)

Firestorms have always required at least 50% of buildings to be ignited. A high percentage of built-up area is also necessary.
Most of Hiroshima’s fires started from overturned breakfast charcoal braziers.

References that provide the basis of my case:
Problems of Fire in Nuclear Warfare, Jerome Hill, RAND, 1961
- Nagasaki had no firestorm

Hiroshima actual burned area: 4.4 square miles

Nagasaki actual burned area: 0.9 square miles

Maximum theoretical burned area: 13.5 square miles

India’s building census indicates that most of the buildings are made from burnt brick and mud, which are non-combustible.
Exploratory Analysis of Fire Storms, Stanford Research Institute, 1965

The frequency of rain in India or Pakistan is important because of the effect of recent rain on burning of cities.

Quantitative: seasonal rainfall patterns for India and Pakistan

Fire Factors

“Mass Fires and Fire Behavior”, which studies fires in cities and other places, indicates that combustible roofing material is a major cause of fire spreading in cities. This document also discusses how to reduce fire risks and lessen the spreading of fires: the usual vegetation management commonly mentioned for Los Angeles fires, non-combustible roofing materials, and gaps between buildings. These anti-fire measures would apply to lessening fires ignited by nuclear bombs too.
http://www.fs.fed.us/psw/publications/documents/psw_rp019/psw_rp019_part1.pdf
http://www.fs.fed.us/psw/publications/documents/psw_rp019/psw_rp019_part2.pdf
http://www.fs.fed.us/psw/publications/documents/psw_rp019/psw_rp019_part3.pdf

Steps to Reduce Damage

Simple civil defence and building improvements would reduce damage from nuclear war and from hurricanes and earthquakes. Haiti and New Orleans show the importance of building better to greatly reduce problems.

Hurriquake nails, exterior paint that reflects heat, roofing material that does not burn, and other cheap retrofits would reduce the damage radius.

Further improvement for buildings
Fighting Fires with Water From Behind Dams

There are also some basic counter measures against city wide fire.

More than 2,000 dams near population centers in the United States are in need of repair, according to statistics released this month by the Association of State Dam Safety Officials. There are more dams near population centers that do not need repair. The water could be released in an emergency to fight any large area of fire more effectively.

https://spanish.lifeboat.com/blog/2010/05/nuclear-winter-and-fire-and-reducing-fire-risks-to-cities/feed 4