Keith Curtis – Lifeboat News: The Blog — Safeguarding Humanity
https://lifeboat.com/blog

Software Wars, The Movie, Free download
https://spanish.lifeboat.com/blog/2020/07/software-wars-the-movie-free-download
Thu, 02 Jul 2020 20:25:36 +0000

Software Wars is a 70-minute documentary about the ongoing battle between proprietary and free and open-source software. The more we share scientific information, the faster we can solve the challenges of the future. It also discusses biology and the space elevator.

Here is the feature trailer:

For now, you can watch the movie for free or download it via BitTorrent here: https://video.detroitquaranteam.com/videos/watch/07696431&#4…ac9c7d22b1

PyTorch Should Be Copyleft
https://spanish.lifeboat.com/blog/2018/04/pytorch-should-be-copyleft
Wed, 04 Apr 2018 15:33:32 +0000

Neural networks have started to take off since AlexNet in 2012. We don’t have to call it a software war, but there’s a competition for mindshare and community contributors in neural networks.

Of course, AI needs more than a neural network library: it needs configuration hyperparameters, training datasets, trained models, test environments, and more.

Most people have heard of Google’s TensorFlow, which was released at the end of 2015, but there’s an active codebase called PyTorch which is easier to understand, less of a black box, and more dynamic. TensorFlow does have solutions for some of those limitations (such as TensorFlow Fold and TensorFlow Eager), but these new capabilities make much of TensorFlow’s other machinery and complexity redundant. Google built a high-performance system for doing static computation graphs before realizing that most people want dynamic graphs. Doh!
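To make the static-versus-dynamic distinction concrete, here is a toy sketch in plain Python (not the real TensorFlow or PyTorch APIs): a static framework makes you declare the whole computation as data before running it, while a define-by-run framework just executes ordinary code, so data-dependent control flow comes for free.

```python
# Toy illustration only: neither of these is actual TensorFlow or PyTorch code.

def run_static(graph, inputs):
    """Execute a pre-declared graph: a list of (op, arg_names, out_name)."""
    env = dict(inputs)
    for op, args, out in graph:
        env[out] = op(*(env[a] for a in args))
    return env

# Static style: the whole computation is declared up front as data,
# so data-dependent branching is awkward to express.
graph = [
    (lambda a, b: a + b, ("x", "y"), "s"),
    (lambda s: s * 2,    ("s",),     "out"),
]
static_result = run_static(graph, {"x": 3, "y": 4})["out"]  # 14

# Dynamic ("define-by-run") style: the graph is just the trace of normal
# execution, so loops and if-statements over runtime values come for free.
def run_dynamic(x, y):
    s = x + y
    if s > 5:   # data-dependent control flow, no special graph ops needed
        s = s * 2
    return s

dynamic_result = run_dynamic(3, 4)  # 14
```

The dynamic version is ordinary code you can step through in a debugger, which is a big part of why PyTorch feels less like a black box.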

And how much do you trust Google, anyway?

PyTorch was created by people from the Idiap Research Institute in Switzerland, who went on to Facebook and Google. Doh!

I posted a bug report on the PyTorch license, asking for a copyleft one: https://github.com/pytorch/pytorch/issues/5270

I think you should consider a copyleft license. I realize it’s a pain to change the license, but it never gets easier. I read the license and it’s mostly a disclaimer and a warning. There’s nothing in there about protecting the freedom of the users.

There are lots of projects with lax licenses that are successful, so maybe it will work out okay, but the Linux kernel took off because of the copyleft license. It nudges people to give back.

Lax licenses let companies take advantage of the individual contributors. I don’t understand how someone who believes in free software also believes letting big companies turn it back into proprietary software is fine.

I realize lawyers might like that, and proprietary software companies might want it, but this group is more than just those people. It’s great you’ve got 100s of contributors already, but if you know the way corporations work, you should be pushing for copyleft.

My bug was closed within 8 hours with the following response from a Facebook employee:

we’ve definitely thought about this in the past. We have no plans of changing our license.

The bug was closed but I could keep commenting:

When you say “we”, are you talking about Facebook or the random smaller contributors? Given you work for a large company, I hope you realize you could be biased. At the same time, you should know the way large corporations work even better. You won’t be there forever. Copyleft is stronger protection for the software and the users, do you disagree?

When you say “thought”, have you written any of it down with a link you can post for archival purposes? That way if others come along, they’ll have a good answer. I may quote your non-defense of your lax license in my writings if you don’t mind, but I’d prefer if you gave me a bit more.

I just spent several minutes looking for a discussion of the PyTorch license, and came up with nothing except another bug report closed with a similar short answer.

Your last dismissive answer could motivate people to create a copyleft fork!

I got one more response:

We = the authors of the project.

“thought” = this is a topic that came up in the past, we discussed it among ourselves. I don’t have it written down, we don’t plan to have it written down.

I wrote one more response:

I don’t know any of these names:
https://www.openhub.net/p/pytorch/contributors

I don’t know who the authors are of this project, and how much is big companies versus academics and small contributors, how much interest there is in making a copyleft version, etc.

BTW, relicensing would get you plenty of news articles. It’s also tough because Facebook doesn’t have the same reputation as the FSF or EFF for protecting users’ freedom. The TensorFlow license is also lax, so you don’t have that competitive advantage.

To some copyleft is a disadvantage, but it made a difference in the Linux case, and you would hope to have your work be relevant for that long, without a bunch of proprietary re-implementations that people are charged for springing up over time. The lax license could also slow software innovation, because everyone is mostly improving their secret code on top.

LibreOffice was able to convince a lot of people that a copyleft license was better than the OpenOffice scheme, but I don’t know what people here think. One interesting data point would be to find out what percent of the patches and other work are by small contributors.
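As a sketch of how that data point could be gathered, here is a small function that tallies commit author emails by domain; the corporate domain list is hypothetical, and in practice the emails would come from `git log --format=%ae` run on the PyTorch repository.

```python
from collections import Counter

# Hypothetical example list of "big company" domains; a real analysis
# would use a curated mapping of domains to employers.
CORPORATE_DOMAINS = {"fb.com", "google.com", "microsoft.com"}

def contributor_breakdown(author_emails):
    """Return (corporate_commits, other_commits) from a list of author emails."""
    counts = Counter(e.rsplit("@", 1)[-1].lower() for e in author_emails)
    corporate = sum(n for d, n in counts.items() if d in CORPORATE_DOMAINS)
    return corporate, sum(counts.values()) - corporate

# Fabricated sample input, standing in for real `git log` output:
corp, other = contributor_breakdown(
    ["alice@fb.com", "bob@google.com", "carol@example.edu", "dan@gmail.com"]
)
# corp == 2, other == 2
```

This undercounts corporate work (people commit from personal addresses), but even a rough split would inform the relicensing debate.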

Anyway, you’ve got a cool project, and I wish you the best, partially because I don’t trust Google. Tensorflow is just some sample code for others to play with while they advance the state of the art and keep 95% proprietary. It also seems they made a few mistakes in the design and now will carry baggage.

There is a deep learning software war going on. It’s kind of interesting to almost be on the side of Facebook (wink).

It’s a shame that copyleft seems to be losing mindshare. If the contributors who like copyleft lit some torches, and created a fork, or threatened to, it could get the attention of the large corporations and convince them to relicense rather than risk the inefficiencies, bad press, slower progress and loss of relevance. Forks are a bad thing, but copyleft can prevent future forks, and prevent people from taking but not giving back.

Whether a PyTorch fork makes sense depends on a number of factors. The LibreOffice fork was created because people were unhappy about how Sun and then Oracle were working with the community, etc. If the only thing wrong with PyTorch is the lax license, it might become successful without needing the copyleft nudge, but how much do you trust Facebook and Google to do the right thing long-term?

I wish PyTorch used the AGPL license. Most neural networks are run on servers today; they are hardly used on the Linux desktop. Data is central to AI, and that can of course stay owned by Facebook and its users. The ImageNet dataset created a revolution in computer vision, so let’s not forget that open datasets can be useful.

A license like the GPL wouldn’t even apply to Facebook because the code runs on servers, but it would make a difference in other places where PyTorch could be used. You’d think Facebook could have just agreed to use a GPL or LGPL license, and silently laugh as they know the users don’t run their AI software.

Few people run Linux kernels remotely so the GPL is good enough for it. Perhaps it isn’t worth making a change to the PyTorch license unless they switch to AGPL. Or maybe that’s a good opening bid for those with torches and pitchforks.

I posted a link to this on the Facebook Machine Learning group, and my post was deleted and I was banned from the group!

I posted a link to the Google Deep Learning group and got some interesting responses. One person said that copyleft is inhibiting. I replied that if keeping free software free is inhibiting, there isn’t a word to describe the inhibitions with proprietary software!

One of the things I notice is that even though many people understand and prefer copyleft, they often encourage a lax license because they assume that is what everyone else wants.

People inside Facebook and Google know the pressure to write proprietary code better than those outside. They should be pushing for copyleft the most! On Reddit, someone suggested the MPL license. It does seem like another reasonable compromise, similar to the LGPL.

Crowdfunding campaign for Software Wars, the movie
https://spanish.lifeboat.com/blog/2012/12/crowdfunding-campaign-for-software-wars-the-movie
Thu, 13 Dec 2012 04:26:52 +0000

Watch this video on YouTube.

I’d like to announce the start of the Indiegogo.com campaign for Software Wars, the movie. It is called Software Wars, but it also talks about biotechnology, the space elevator and other futuristic topics. This movie takes many of the ideas I’ve posted here and puts them into video form. It will be understandable to normal people but interesting to people like us. I would appreciate the support of Lifeboat for this project.

Response to the Global Futures 2045 Video
https://spanish.lifeboat.com/blog/2012/06/response-to-the-global-futures-2045-video
Fri, 01 Jun 2012 21:36:49 +0000

I have just watched this video by Global Futures 2045.

This is my list of things I disagree with:

It starts with scary words about how every crisis comes faster and faster. However, this is untrue. Many countries have been running deficits for decades; the financial crisis is no surprise. The reason the US has such high energy costs goes back to government decisions made in the 1970s. And many things that used to be crises no longer happen, like the Black Plague. We have big problems, but we’ve also got many resources we’ve built up over the centuries to help. Many of the challenges we face are political and social, not technical.

We will never fall into a new Dark Ages. The biggest problem is that we aren’t advancing as fast as we could and many are still starving, sick, etc. However, it has always been this way. The 20th century was very brutal! But we are advancing and it is mostly known threats like WMDs which could cause a disaster. In the main, the world is getting safer every day as we better understand it.

We aren’t going to build a new human. It is more like a Renaissance. Those who lost limbs will get increasingly better robotic ones, but they will still be humans. The best reason to build a robotic arm is to attach it to a human.

The video had a collectivist and authoritarian perspective when it said:

“The world’s community and leaders should encourage mankind instead of wasting resources on solving momentary problems.”

This sentence needs to be deconstructed:

1. Government acts via force. Government’s job is to maintain civil order, so having it also out there “encouraging” everyone to never waste resources is creepy. Do you want your policeman to also be your nanny? Here is a quote from C.S. Lewis:

“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”

2. It is wrong to think government is the solution to our problems. Most of the problems that exist today like the Greek Debt Crisis, and the US housing crisis were caused by governments trying to do too much.

3. There is no such thing as the world’s leaders. There is the UN, which doesn’t act in a humanitarian crisis until after everyone is dead. In any case, we don’t need the governments to act. We built Wikipedia.

4. “Managing resources” is a codeword for socialism. If their goal is to help with the development of new technologies, then the task of managing existing resources is totally unrelated. If your job is to build robots, then your job is not also to worry about whether the water and air are dirty. Any scientist who talks about managing resources is actually a politician. Here is a quote from Friedrich Hayek:

“The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design. Before the obvious economic failure of Eastern European socialism, it was widely thought that a centrally planned economy would deliver not only “social justice” but also a more efficient use of economic resources. This notion appears eminently sensible at first glance. But it proves to overlook the fact that the totality of resources that one could employ in such a plan is simply not knowable to anybody, and therefore can hardly be centrally controlled.”

5. We should let individuals decide what to spend their resources on. People don’t only invest in momentary things. People build houses. In fact, if you are looking for an excuse to drink, being poor because you live in a country with 70% taxes is a good one.

The idea of tasking government with finding the solutions, doing all the futuristic research, and shoving new products down our throats is wrong and dangerous. We want individuals, and collections of them (corporations), to do it, because they will best put it to use in ways that actually improve our lives. Everything is voluntary, which encourages good customer relationships. Money will flow toward the products people actually care about, instead of what some mastermind bureaucrat thinks we should spend it on. There are many historical examples of how government doesn’t innovate as well as the private sector: the French telephone system, Cuba, expensive corn-based ethanol, the International Space Station, healthcare. The free market is imperfect, but it leads to the fastest technological and social progress, for the reasons Friedrich Hayek has explained. A lot of government research today is wasted because it never gets put to use commercially. There are many things that can be done to make the private sector more vibrant, and many ways government can do a better job, and all that evidence should be a warning not to use governments to endorse programs with the goal of social justice. NASA has done great things, but only because it existed in a modern society that made them possible.

They come up with a nice list of things that humanity can do, but they haven’t listed that one of the most important first steps is more Linux. We aren’t going to get cool and smart robots, etc. without a lot of good free software first.

The video says:

“What we need is not just another technological revolution, but a new civilization paradigm, we need philosophy and ideology, new ethics, new culture, new psychology.”

It minimizes the technology aspect when this is the hard work by disparate scientists that will bring us the most benefits.

It is true that we need to refine our understandings of many things, but we are not starting over, just evolving. Anyone who thinks we need to start over doesn’t realize what we’ve already built and all the smart people who’ve come before. The basis of good morals from thousands of years ago still apply. It will just be extended to deal with new situations, like cloning. The general rules of math, science, and biology will remain. In many cases, we are going back to the past. The Linux and free software movement is simply returning computer software to the hundreds of years-old tradition of science. Sometimes the idea has already been discovered, but it isn’t widely used yet. It is a social problem, not a technical one.

The repeated use of the word “new” makes this video feel like propaganda. Cults try to get people to reset their perspective into a new world, and to convince them that only they have the answers. This video comes off as a sales pitch with them as the solution to our problems, ignoring that it will take millions of people. Their lists of technologies are random: some of these problems we could have solved years ago, and some we can’t solve for decades, and they mix both kinds of examples. It seems they do not know what is coming next, given how disorganized they are. They also pick multiple related words and so repeat themselves. Repetition is used to create an emotional impact, another trait of propaganda.

The thing about innovation and the future is that it is surprising. Many futurists get things wrong. If these guys really had the answers, they’d have invented it and made money on it. And compared to some of the tasks, we are like cavemen.

Technology evolves in a stepwise fashion, and so looking at it as some clear end results on some day in the future is wrong.

For another example: the video makes it sound like going beyond Earth and then beyond the Solar System is a two-step process when in fact it is many steps, and the journey is the reward. If they were that smart, they’d endorse the space elevator which is the only cheap way to get out there, and we can do it in 10 years.

The video suggests that humanity doesn’t have a masterplan, when I just explained that you couldn’t make one.

It also suggests that individuals are afraid of change, when in fact, that is a trait characteristic of governments as well. The government class has known for decades that Social Security is going bankrupt, but they’d rather criticize anyone who wants to reform it rather than fix the underlying problem. This video is again trying to urge collectivism with its criticism of the “mistakes” people make. The video is very arrogant at how it looks down at “the masses.” This is another common characteristic of collectivism.

Here is the first description of their contribution:

“We integrate the latest discoveries and developments from the sciences: physics, energetics, aeronautics, bio-engineering, nanotechnology, neurology, cybernetics, cognitive science.”

That sentence is laughable because it is an impossible task. To understand all of the latest advances would involve talking with millions of scientists. If they are doing all this integration work, what have they produced? They want everyone to join up today, work to be specified later.

The challenge for nuclear power is not the science; it is the lawyers, who outlawed new plants in the 1970s and have basically halted all advancement in building safer and better ones. Some of these challenges are mostly political, not scientific. We need engineers in corporations like GE, supervised by governments, building safer and cleaner nuclear power.

If you wanted to create all of what they offer, you’d have to hire a million different people. If you were building the pyramids, you could get by with most of your workers having one skill, the ability to move heavy things around. However, the topics they list are so big and complicated, I don’t think you could build an organization that could understand it all, let alone build it.

They mention freedom and speak in egalitarian terms, but this is contradicted by their earlier words. In their world, we will all be happy worker bees, working “optimally” for their collective. Beware of masterminds offering to efficiently manage your resources.

I support discussion and debate. I am all for think-tanks and other institutions that hire scientists. However, those that lobby government to act on their behalf are scary. I don’t want every scientist lobbying the government to institute their pet plan, no matter how good it sounds. The government will get so overwhelmed that it won’t be able to do its actual job. The roles of the US Federal government are very limited and generally revolve around an army and a currency. Social welfare is supposed to be handled by the states.

Some of their ideas cannot be turned into laws by the US Congress because they don’t have this authority — the States do. Obamacare is likely to be ruled unconstitutional, and their ideas are potentially much more intrusive towards individual liberty. It would require a Constitutional Amendment, which would never pass and we don’t need.

They offer a social network where scientists can plug in and figure out what they need to do. This could also be considered an actual concrete example of something they are working on. However, there are already social networks where people are advancing the future. SourceForge.net is the biggest community of programmers. There is also Github.com with 1,000,000 projects. Sage has a community advancing the state of mathematics.

If they want to create their own new community solving some aspect, that is great, especially if they have money. But the idea that they are going to make it all happen is impossible. And it will never replace all the other great communities that already exist. Even science happens on Facebook, when people chat about their work.

If they want to add value, they need to specialize. Perhaps they will come up with millions of dollars and can do research in specific areas. However, their fundamental research would very likely get used in ways they never imagined by other people. The more fundamental the research, the less any one team can possibly take advantage of all aspects of the discovery.

They say there is some research lab they’ve got working on cybernetics. However, they don’t demonstrate any results. I don’t imagine they can be that much ahead of the rest of the world, which provides them the technology they use to do their work. Imagine a competitor to Henry Ford: could he really have built a much better car given the technology available at the time? My response to anyone who claims some advancement is: turn it into a demo or useful product and sell it. All this video offers as evidence here is CGI, which any artist can make.

I support the idea of flying cars. First we need driverless cars and cheaper energy. Unless they are a car or airplane company, I don’t see what this organization will have to do with that task. I have nothing against futuristic videos, but they don’t make clear what their involvement is, and instances of ambiguity should be noted.

They are wrong when they say we won’t understand consciousness until 2030, because we already understand it at some level today. Neural networks have been around for decades. IBM’s Jeopardy-playing Watson was a good recent example. However, it is proprietary, so not much will come of that particular example. Fortunately, Watson was built on lots of free software, and the community will get there. Google is very proprietary with their AI work. Wolfram Alpha is also proprietary. Etc. We’ve got enough technical people for an amazing world if we can just get them to work together in free software and Python.

The video’s last sentence suggests that spiritual self-development is the new possibility. But people can work on that today. And again, enlightenment is not a destination but a journey.

We are a generation away from immortality unless things greatly change. I think about LibreOffice, cars that drive themselves, and the space elevator, but faster progress in biology is also possible if people will follow the free software model. The Microsoft-style proprietary development model has infected many fields.

OpenOffice / LibreOffice & A Warning For Futurists
https://spanish.lifeboat.com/blog/2012/05/openoffice-libreoffice-a-warning-for-futurists
Fri, 25 May 2012 21:18:05 +0000

I spend most of my time thinking about software, and occasionally I come across issues that are relevant to futurists. I wrote my book about the future of software in OpenOffice, and needed many of its features. It might not be the only writing / spreadsheet / diagramming / presentation, etc. tool in your toolbox, but it is a worthy one. OpenDocument Format (ODF) is the best open standard for these sorts of scenarios, and LibreOffice is currently the premier tool for handling that format. I suspect many of the readers of Lifeboat have a variant installed, but don’t know much of the details of what is going on.

The OpenOffice situation has been a mess for many years. Sun didn’t foster a community of developers around their work; in fact, they didn’t listen to the community when it told them what to do. So about 18 months ago, after Oracle purchased Sun and made the situation worse, the LibreOffice fork was created with most of the best outside developers. LibreOffice quickly became the version embraced by the Linux community, as many of the outside developers were funded by the Linux distros themselves. After realizing their mess and watching LibreOffice take off within the free software community, Oracle decided to fire all 50 of their engineers and hand the trademark and a copy of the code over to IBM / Apache.

Now it would be natural to imagine that this should be handed over to LibreOffice, and have all interested parties join up with this effort. But that is not what is happening. There are employees out there whose job it is to help Linux, but they are actually hurting it. You can read more details on a Linux blog article I wrote here. I also post this message as a reminder about how working together efficiently is critical to have faster progress on complicated things.

GC Lingua Franca(s)
https://spanish.lifeboat.com/blog/2011/02/gc-lingua-francas
Tue, 08 Feb 2011 16:09:10 +0000

This is an email to the Linux kernel mailing list, but it relates to futurism topics so I post a copy here as well.
———
Science doesn’t always proceed at the speed of thought. It often proceeds at sociological or even demographic speed. — John Tooby

Open Letter to the LKML;

If we were already talking to our computers, etc. as we should be, I wouldn’t feel a need to write this to you. Given current rates of adoption, Linux still seems a generation away from being the priceless piece of free software useful to every child and PhD. This army your kernel enables has millions of people, but they often lose to smaller proprietary armies, because they are working inefficiently. My mail one year ago (http://keithcu.com/wordpress/?p=272) listed the biggest work items, but I realize now I should have focused on one. In a sentence, I have discovered that we need GC lingua franca(s). (http://www.merriam-webster.com/dictionary/lingua%20franca)

Every Linux success builds momentum, but the desktop serves as a powerful daily reminder of the scientific tradition. Many software PhDs publish papers but not source, like Microsoft. I attended a human genomics conference and found that the biotech world is filled with proprietary software. IBM’s Jeopardy-playing Watson is proprietary, like Deep Blue was. This topic is not discussed in any of the news articles, as if the license does not matter. I find widespread fear of having ideas stolen in the software industry, and proprietary licenses encourage this. We need to get these paranoid programmers, hunched in the shadows, scribbled secrets clutched in their fists, working together, for any of them to succeed. Desktop world domination is not necessary, but it is sufficient to get robotic chauffeurs and butlers. Windows is not the biggest problem; it is the proprietary licensing model that has infected computing, and science.

There is, unsurprisingly, a consensus among kernel programmers that usermode is “a mess” today, which suggests there is a flaw in the Linux desktop programming paradigm. Consider the vast cosmic expanse of XML libraries in a Linux distribution. Like computer vision (http://www.cs.cmu.edu/~cil/v-source.html), there are not yet clear places for knowledge to accumulate. It is a shame that the kernel is so far ahead of most of the rest of user mode.

The most popular free computer vision codebase is OpenCV, but it is time-consuming to integrate because it defines an entire world in C++ down to the matrix class. Because C/C++ didn’t define a matrix, nor provide code, countless groups have created their own. It is easier to build your own computer vision library using standard classes that do math, I/O, and graphics, than to integrate OpenCV. Getting productive in that codebase is months of work and people want to see results before then. Building it is a chore, and they have lost users because of that. Progress in the OpenCV core is very slow because the barriers to entry are high. OpenCV has some machine learning code, but they would be better delegating that out to others. They are now doing CUDA optimizations they could get from elsewhere. They also have 3 Python wrappers and several other wrappers as well; many groups spend more time working on wrappers than the underlying code. Using the wrappers is fine if you only want to call the software, but if you want to improve OpenCV then the programming environment instantly becomes radically different and more complicated.
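To illustrate the point that standard math and container classes go a long way, here is a minimal box blur written against plain nested lists, with no library-specific matrix class. A real library would of course need optimized arrays, but the programming model is the part that matters here.

```python
# A basic computer-vision operation (3x3 box blur) using only plain
# nested lists and arithmetic: no custom matrix class required.

def box_blur(image):
    """Average each interior pixel with its 8 neighbours.

    `image` is a list of rows of numbers; border pixels are left unchanged.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so borders stay intact
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(
                image[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            ) / 9.0
    return out

blurred = box_blur([
    [0, 0, 0],
    [0, 9, 0],
    [0, 0, 0],
])
# The centre pixel becomes 1.0 (the 9 averaged over its 3x3 neighbourhood).
```

A newcomer can read and modify this in minutes; the complaint above is that doing the same inside OpenCV’s C++ world takes months.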

There is a team working on Strong AI called OpenCog, a C++ codebase created in 2001. They are evolving slowly, as they do not have a constant stream of demos. They don’t see that their codebase is a small amount of world-changing ideas buried in engineering baggage like the STL. Their GC language for small pieces is Scheme, an unpopular GC language in the FOSS community; some in their group recommend Erlang. The OpenCog team looks at their core of C++, and over to OpenCV’s core of C++, and concludes the situation is fine. One of the biggest features of ROS (the Robot OS), according to its documentation, is a re-implementation of RPC in C++, which is not what robotics was missing. I’ve emailed various groups, and all know of GC, but they are afraid of any decrease in performance, and they do not think they will ever save time. The transition from brooms to vacuum cleaners was disruptive, but we managed.

C/C++ makes it harder to share code amongst disparate scientists than a GC language. It doesn’t matter if there are lots of XML parsers or RSS readers, but it does matter if we don’t have an official computer vision codebase. This is not against any codebase or language, only for free software lingua franca(s) in certain places to enable faster knowledge accumulation. Even language researchers can improve and create variants of a common language, and tools can output it from other domains like math. Agreeing on a standard still gives us an uncountably infinite number of things to disagree over.

Because the kernel is written in C, you’ve strongly influenced the rest of community. C is fully acceptable for a mature kernel like Linux, but many concepts aren’t so clear in user mode. What is the UI of OpenOffice where speech input is the primary means of control? Many scientists don’t understand the difference between the stack and the heap. Software isn’t buildable if those with the necessary expertise can’t use the tools they are given.

C is a flawed language for user mode because it is missing GC, which was invented a decade earlier, and C++ added as much as it took away, as each feature came with an added cost of complexity. C++ compilers converting to C was a good idea, but being a superset was not. C/C++ never died in user mode because there are now so many GC replacements that the choice paralyzes many into inaction; there seems to be no clear place to go. Microsoft doesn’t have this confusion, as their language, as of 2001, is C#. Microsoft is steadily moving to C#, but it is 10x easier to port a codebase like MySQL than SQL Server, which has an operating system inside. C# is taking over at the edges first, where innovation happens anyway. There is a competitive aspect to this.

Lots of free software technologies have multiple C/C++ implementations, because it is often easier to re-write than to share, plus an implementation in each GC language. We might not all agree on the solution, so let’s start by agreeing on the problem. A good example of GC’s value is how a Mac port can go from weeks to hours. GC also prevents code from using memory after freeing it, freeing it twice, and so on, and therefore makes user code far less likely to corrupt memory. If everyone in user mode were still writing in assembly language, you would obviously be concerned. If Git had been built in 98% Python and 2% C, it would have become easier to use faster, found ways to speed up Python, and set a good example. It doesn’t matter now, but it was an opportunity in 2005.

You can “leak” memory in GC, but that just means that you are still holding a reference. GC requires the system to have a fuller understanding of the code, which enables features like reflection. It is helpful to consider that GC is a step-up for programming like C was to assembly language. In Lisp the binary was the source code — Lisp is free by default. The Baby Boomer generation didn’t bring the tradition of science to computers, and the biggest legacy of this generation is if we remember it. Boomers gave us proprietary software, C, C++, Java, and the bankrupt welfare state. Lisp and GC were created / discovered by John McCarthy, a mathematician of the WW II greatest generation. He wrote that computers of 1974 were fast enough to do Strong AI. There were plenty of people working on it back then, but not in a group big enough to achieve critical mass. If they had, we’d know their names. If our scientists had been working together in free software and Lisp in 1959, the technology we would have developed by today would seem magical to us. The good news is that we have more scientists than we need.
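The point about GC “leaks” can be made concrete in a few lines of Python (a sketch; `Buffer` and `cache` are invented names): an object stays alive exactly as long as something still references it, and the `weakref` module lets us watch the collector reclaim it once the last reference is dropped.

```python
import gc
import weakref

class Buffer:
    """Stand-in for a large allocation."""

cache = []

def process():
    buf = Buffer()
    cache.append(buf)          # a lingering reference: the GC "leak"
    return weakref.ref(buf)

ref = process()
gc.collect()
assert ref() is not None       # still reachable through the cache

cache.clear()                  # drop the last reference...
gc.collect()
assert ref() is None           # ...and the collector reclaims it
```

There is no way to reach a freed object here, so the use-after-free and double-free failure modes of C/C++ simply cannot be expressed.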

There are a number of good languages, and it doesn’t matter too much which one is chosen, but it seems the Python family (Cython / PyPy) requires the least amount of work to get what we need, as it has the most extensive libraries: http://scipy.org/Topical_Software. I don’t argue that the Python language and implementation are perfect, only good enough, like how the shapes of the letters of the English language are good enough. Choosing and agreeing on a lingua franca will increase the results for the same amount of effort. No one has to understand the big picture; they just have to do their work in a place where knowledge can easily accumulate. A GC lingua franca isn’t a silver bullet, but it is the bottom piece of a solid science foundation and a powerful form of social engineering.

The most important thing is to get lingua franca(s) in key fields like computer vision and Strong AI. However, we should also consider a lingua franca for the Linux desktop. This will help, but not solve, the situation of the mass of Linux apps feeling dis-integrated. The Linux desktop is a lot harder because code here is 100x bigger than computer vision, and there is a lot of C/C++ in FOSS user mode today. In fact it seems hopeless to me, and I’m an optimist. It doesn’t matter; every team can move at a different pace. Many groups might not be able to finish a port for 5 years, but agreeing on a goal is more than half of the battle. The little groups can adopt it most quickly.

There are a lot of lurkers around codebases who want to contribute but don’t want to spend months getting up to speed on countless tedious things like learning a new error handling scheme. They would be happy to jump into a port as a way to get into a codebase. Unfortunately, many groups don’t encourage these efforts as they feel so busy. Many think today’s hardware is too slow, and that running any slower would doom the effort; they are impervious to the doublings and forget that algorithm performance matters most. A GC system may add a one-time cost of 5–20%, but it has the potential to be faster, and it gives people more time to work on performance. There are also real-time, incremental, and NUMA-aware collectors. The ultimate in performance is taking advantage of parallelism in specialized hardware like GPUs, and a GC language can handle that because it supports arbitrary bitfields.

Science moves at demographic speed when knowledge is not being reused among the existing scientists. A lingua franca makes more sense as more adopt it. That is why I send this message to the main address of the free software mothership. The kernel provides code and leadership; you have influence and the responsibility to lead the rest, who are like wandering ants. If I were Linus, I would threaten to quit Linux and get people going on AI. ;-) There are many things you could do, but I mostly want to bring this to your attention. Thank you for reading this.

I am posting a copy of this open letter on my blog as well (http://keithcu.com/wordpress/?p=1691). Reading the LKML for more than one week could be classified as torture under the Geneva conventions.

]]>
https://spanish.lifeboat.com/blog/2011/02/gc-lingua-francas/feed 2
Open Letter to Ray Kurzweil https://spanish.lifeboat.com/blog/2010/10/open-letter-to-ray-kurzweil https://spanish.lifeboat.com/blog/2010/10/open-letter-to-ray-kurzweil#comments Mon, 25 Oct 2010 11:47:34 +0000 http://lifeboat.com/blog/?p=1281 Dear Ray;

I’ve written a book about the future of software. While writing it, I came to the conclusion that your dates are way off. I talk mostly about free software and Linux, but it has implications for how we can get driverless cars and other amazing things faster. I believe that we could have had all the benefits of the singularity years ago if we had done things like start Wikipedia in 1991 instead of 2001. There is no technology in 2001 that we didn’t have in 1991; it was simply a matter of starting an effort that allowed people to work together.

Proprietary software and a lack of cooperation among our software scientists have been terrible for the computer industry and the world, and this has implications for every aspect of science. Free software is better for the free market than proprietary software, and there are many opportunities for programmers to make money using and writing free software. I often use the analogy that law libraries are filled with millions of freely available documents, and no one claims this has decreased the motivation to become a lawyer. In fact, lawyers would say that it would be impossible to do their jobs without all of these resources.

My book is a full description of the issues but I’ve also written some posts on this blog, and this is probably the one most relevant for you to read: https://spanish.lifeboat.com/blog/2010/06/h-conference-and-faster-singularity

Once you understand this, you can apply your fame towards getting more people to use free software and Python. The reason so many know Linus Torvalds’s name is because he released his code as GPL, which is a license whose viral nature encourages people to work together. Proprietary software makes as much sense as a proprietary Wikipedia.

I would be happy to discuss any of this further.

Regards,

-Keith
—————–
Response from Ray Kurzweil 11/3/2010:

I agree with you that open source software is a vital part of our world, allowing everyone to contribute. Ultimately software will provide everything we need when we can turn software entities into physical products with desktop nanofactories (there is already a vibrant 3D printer industry, and the scale of key features is shrinking by a factor of a hundred in 3D volume each decade). It will also provide the keys to health and greatly extended longevity as we reprogram the outdated software of life. I believe we will achieve the original goals of communism (“from each according to their ability, to each according to their need”), which forced collectivism failed so miserably to achieve. We will do this through a combination of the open source movement and the law of accelerating returns (which states that the price-performance and capacity of all information technologies grows exponentially over time). But proprietary software has an important role to play as well. Why do you think it persists? If open source forms of information met all of our needs, why would people still purchase proprietary forms of information? There is open source music, but people still download music from iTunes, and so on. Ultimately the economy will be dominated by forms of information that have value, and these two sources of information, open source and proprietary, will coexist.
———
Response back from Keith:
Free versus proprietary isn’t a question of whether only certain things have value. A Linux DVD has 10 billion dollars’ worth of software on it. Proprietary software exists for a similar reason that ignorance and starvation exist: a lack of better systems. The best thing my former employer Microsoft has going for it is ignorance about the benefits of free software. Free software gets better only as more people use it. Proprietary software is an inferior development model and anathema to science because it hinders people’s ability to work together. It has infected many corporations, and I’ve found that even PhDs who work for public institutions often write proprietary software.

Here is a paragraph from my writings I will copy here:

I start the AI chapter of my book with the following question: Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia? Which one will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

We’ve known approximately what a neural network should look like for many decades. We need “places” for people to work together to hash out the details. A free software repository provides such a place. We need free software, and for people to work in “official” free software repositories.

“Open source forms of information,” I have found, is a separate topic from the software issue. Software always reads, modifies, and writes data, state which lives beyond the execution of the software, and there can be an interesting discussion about the licenses of that data. But movies and music aren’t science, so the licensing question doesn’t matter for most of them. Someone can only sell or give away a song after the software is written and on their computer in the first place. Some of this content can be free and some can be protected, and this is an interesting question, but it is mostly a separate topic. The important things to share are scientific knowledge and software.

It is true that software always needs data to be useful: configuration parameters, test files, documentation, etc. A computer vision engine will have lots of data, even though most of it is used only for testing purposes and little used at runtime. (Perhaps it has learned the letters of the alphabet, state which it caches between executions.) Software begets data, and data begets software; people write code to analyze the Wikipedia corpus. But you can’t truly have a discussion of sharing information unless you’ve got a shared codebase in the first place.

I agree that proprietary software is and should be allowed in a free market. If someone wants to sell something useful that another person finds value in and wants to pay for, I have no problem with that. But free software is a better development model and we should be encouraging / demanding it. I’ll end with a quote from Linus Torvalds:

Science may take a few hundred years to figure out how the world works, but it does actually get there, exactly because people can build on each others’ knowledge, and it evolves over time. In contrast, witchcraft/alchemy may be about smart people, but the knowledge body never “accumulates” anywhere. It might be passed down to an apprentice, but the hiding of information basically means that it can never really become any better than what a single person/company can understand.
And that’s exactly the same issue with open source (free) vs proprietary products. The proprietary people can design something that is smart, but it eventually becomes too complicated for a single entity (even a large company) to really understand and drive, and the company politics and the goals of that company will always limit it.

The world is screwed because while we have things like Wikipedia and Linux, we don’t have places for computer vision and lots of other scientific knowledge to accumulate. To get driverless cars, we don’t need any more hardware, we don’t need any more programmers, we just need 100 scientists to work together in SciPy and GPL ASAP!

Regards,

-Keith

]]>
https://spanish.lifeboat.com/blog/2010/10/open-letter-to-ray-kurzweil/feed 8
The problems in our world aren’t technical, but social https://spanish.lifeboat.com/blog/2010/09/the-problems-in-our-world-arent-technical-but-social https://spanish.lifeboat.com/blog/2010/09/the-problems-in-our-world-arent-technical-but-social#comments Sun, 26 Sep 2010 15:38:24 +0000 http://lifeboat.com/blog/?p=1246 If the WW II generation was The Greatest Generation, the Baby Boomers were The Worst. My former boss Bill Gates is a Baby Boomer. And while he has the potential to do a lot for the world by giving away his money to other people (for them to do something they wouldn’t otherwise do), after studying Wikipedia and Linux, I see that the proprietary development model Gates’s generation adopted has stifled the progress of technology they should have provided to us. The reason we don’t have robot-driven cars and other futuristic stuff is that proprietary software became the dominant model.

I start the AI chapter of my book with the following question: Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia? Which one will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones.

Simply put, there is no computer vision codebase with critical mass.

We can blame the Baby Boomers for making proprietary software the dominant model. We can also blame them for outlawing nuclear power, never drilling in ANWR despite decades of discussion, never fixing Social Security, destroying the K-12 education system, handing us a near-bankrupt welfare state, and many of the other long-term problems that have existed in this country for decades that they did not fix, and the new ones they created.

It is our generation that will invent the future, as we incorporate more free software, more cooperation amongst our scientists, and free markets into society. The boomer generation got the collectivism part, but they failed on the free software and the freedom from government.

My book describes why free software is critical to faster technological development, and it ends with some pages on why our generation needs to build a space elevator. I believe that in addition to driverless cars and curing cancer, building a space elevator, getting going on nanotechnology, and terraforming Mars are also within reach. Wikipedia surpassed Encyclopedia Britannica in 2.5 years. The problems in our world are not technical, but social. Let’s step up. We can make much of it happen a lot faster than we think.

]]>
https://spanish.lifeboat.com/blog/2010/09/the-problems-in-our-world-arent-technical-but-social/feed 3
More on a Space Elevator in <7 https://spanish.lifeboat.com/blog/2010/08/more-on-a-space-elevator-in-7 https://spanish.lifeboat.com/blog/2010/08/more-on-a-space-elevator-in-7#comments Wed, 18 Aug 2010 05:13:49 +0000 http://lifeboat.com/blog/?p=1181 I gave the following speech at the Space Elevator Conference.

——

“Waste anything but time.”

—Motto of the NASA Apollo missions

The consensus amongst those of us who think it is even possible to build a space elevator is that it will take more than 20 years. But how can you say how long it will take to do something until you specify how many resources it will require and how many people you’ve assigned to the task?

For the first part of this speech, let’s pretend we can make the nanotubes and focus on the remaining 99%. When analyzing a task you generally know how to do, it is best to take a top-down approach. If you are painting a room, you would divide this task into the prep, the actual painting, and the cleanup, and then organize the work in each one of those phases.

In my former life at Microsoft, I learned to appreciate the power of educated and focused large-scale teams as the best tool for beating the competition. With a 1,000-person team, 1 man-year of work is accomplished every 2 hours. Work is generally fungible, so a 20-year project could definitely use more people and go faster.
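That rate is just arithmetic. A back-of-the-envelope check, assuming roughly 2,000 working hours per man-year (about 50 weeks of 40 hours):

```python
hours_per_man_year = 2_000      # assumption: ~50 weeks x 40 hours
team_size = 1_000

# How long the whole team takes to produce one man-year of output:
hours_per_man_year_of_output = hours_per_man_year / team_size
assert hours_per_man_year_of_output == 2.0
```

The estimate ignores coordination overhead, which is exactly why the surrounding discussion of dependencies and leadership matters.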

The goal in a project is for everyone to always be moving ahead full speed and to finish on the same day. What slips schedules is when you have people with dependencies on each other. If one person needs something from another to do their work, you have the potential for that person to go idle and to slip the entire project.

You can prevent that from happening with strong leadership. In the recent BP oil spill, Louisiana tried to get permission to build berms, but the EPA and the other agencies took a long time to analyze the environmental impact. The federal bureaucracy with all of its technology moved slower than lifeless oil floating in the ocean. A good leader can cut through red tape and bring in outside assets to unblock a situation.

The various big pieces of the space elevator have clear boundaries. Those building the solar panels need to work with the climber team only to come up with a way to attach the panels. The physical shape of the climber has little impact on the anchor station; the primary issues there are the throughput in tons per day and the process for loading a climber. Even mission control looks at the pieces as black boxes. Mega-projects can be broken down into efforts with clear boundaries, which means that in general, once commenced, everyone should be able to work in parallel.

The robotic climber is one of the most complicated pieces of hardware that the space elevator needs, and it has many of the same requirements as one of Seattle’s Boeing airplanes: both will move a few hundred miles per hour, and have to deal with difficult changes in temperature, pressure, and radiation.

Boeing is at least on a 7-year timeframe with its 787, compared to NASA, which seems to take decades to do anything. The goal is to be the quality of NASA, but faster than the speed of Boeing. Engineering is about humans and their computers, and both can be improved.

At least some of the 787’s delays were not technically related, as the local papers documented months of labor disputes. Boeing is also working more closely with its suppliers over the Internet than ever before, and learning how to do this.

Man landed on the moon 7 years after Kennedy’s speech, exactly as he ordained, because dates can be self-fulfilling prophecies. A date allows everyone to measure their work against their plan and determine if they need additional resources. If you give out a few years of work per person, and allow time for ramp-up and test, then about 7 years is quite reasonable. Long timelines encourage procrastination. If you want something to happen more slowly, you can always find ways to make that happen.

It is cheaper to get loans for shorter terms, so it is cheaper to build something in 7 years than in 20. A 20-year plan is almost a guaranteed way to get a “no” answer. Even the U.S. Congress doesn’t think more than a few months ahead.

Boeing has the requisite technical skills, and they have 160,000 employees, so we could use them as a baseline of an estimate on how many people it would take. Here is what those 160,000 people work on:

Boeing Projects

2018 Bomber

737 Airborne Early Warning and Control (AEW&C)

737 AEW&C Peace Eagle

737 AEW&C Wedgetail

767 Airborne Warning and Control System (AWACS)

A-10 Thunderbolt II

A160 Hummingbird

AC-130U Gunship

Aegis SM-3

Airborne Early Warning and Control

AGM 86-C Conventional Air-Launched Cruise Missile (CALCM)

AH-64 Apache

AV-8B Harrier II Plus

Airborne Battle Management (ABM)

Airborne Warning and Control System (AWACS)

Airlift and Tankers (A&T)

Advanced Global Services & Support

Advanced Tanker

Air Force One

Airborne Battle Management (ABM)

Airborne Laser Test Bed (ALTB)

Ares I Crew Launch Vehicle

Arrow Interceptor

Avenger

B-1B Lancer

B-2 Spirit

B-52 Stratofortress

BattleScape

Boeing 376 Fleet

Boeing 601 Fleet

Boeing 702 Fleet

Boeing 702MP Spacecraft

Boeing Australia

Boeing Launch Services

Boeing Military Aircraft

Boeing Satellites

Brigade Combat Team Modernization (BCTM)

Brimstone Precision Guided Missile

C-17 Globemaster III

C-130 Avionics Modernization Program

C-32A Executive Transport

C-40A Clipper Military Transport

C-40B Special-Mission Aircraft

C-40C Operational Support and Team Travel Aircraft

Canard Rotor/Wing

CH-46E Sea Knight

CH-47D/F Chinook

Cargo Mission Contract (CMC)

Checkout, Assembly & Payload Processing Services (CAPPS)

Combat Survivor Evader Locator (CSEL)

Commercial/Civil Satellite Programs

Constellation/Ares I Crew Launch Vehicle

Conventional Air-Launched Cruise Missile (CALCM)

Cyber and Information Solutions

DataMaster

Defense & Government Services

Delta II

Delta IV

Directed Energy Systems (DES)

DIRECTV 1, 2, 3

DIRECTV 10, 11, 12

DRT

E-3 AWACS

E-4B Advanced Airborne Command Post

E-6 Tacamo

EA-18G Airborne Electronic Attack Aircraft

Engineering & Logistics Services

F-15E Strike Eagle

F-15K — Republic of Korea

F/A-18 Hornet

F/A-18E/F Super Hornet

F-22 Raptor

F/A-18E/F Integrated Readiness Support Teaming (FIRST)

Family of Advanced Beyond Line-of-Sight Terminals (FAB-T)

Global Broadcast Service (GBS)

Global Services & Support

Global Positioning System

Global Positioning System (GPS) IIF

Global Security Systems

GSA

GOES N-P

Ground-based Midcourse Defense (GMD) System

Harpoon

Harrier

Hornet

I&SS Mission Systems

Insitu

Integrated Logistics

Integrated Weapons System Support Program

Intelligence and Security Systems

Intelsat

International Space Station (ISS)

Iridium

Intelligence, Surveillance, Reconnaissance (ISR) Services

Joint Direct Attack Munition (JDAM)

Joint Effects-Based Command and Control (JEBC2)

Joint Helmet-Mounted Cueing System (JHMCS)

Joint Recovery and Distribution System (JRaDS)

Joint Tactical Radio System Ground Mobile Radios (JTRS GMR)

KC-10 Extender

KC-135 Stratotanker

KC-767 Advanced Tanker

Lancer

Laser & Electro-Optical Systems (LEOS)

Laser Joint Direct Attack Munition (LJDAM)

Leasat

MH-47E/G Special Operations Chinook

Maintenance, Modifications & Upgrades

Measat-3

Military Satellite Systems

Milstar II

Mission Operations

Mission Systems

Military Satellite Systems

Network and Space Systems

Network and Tactical Systems

Network Centric Operations

NSS-8

Orbital Express

P-8

Patriot Advanced Capability-3 (PAC-3)

Peace Eagle

Phantom Works

Raptor

Rotorcraft Systems

SQS

ScanEagle

Sea Knight

Sea Launch

SBInet

SkyTerra

Small Diameter Bomb (SDB)

SoftPlotter

SOSCOE

Space and Intelligence Systems

Space Based Space Surveillance (SBSS) System

Space Exploration

Space Flight Awareness

Space Shuttle

SPACEWAY 1, 2 North

Special Operations Chinook

Spectrolab

Spirit

St. Louis Flight Operations

Standoff Land Attack Missile Expanded Response SLAM ER

Strategic Missile & Defense Systems

Strategic Missile Systems

Stratofortress

Super Hornet

Supply Chain Services

T-45 Training System

Tacamo

TACSAT I

Tanker

Thuraya-2, 3

Training Support Center

Training Systems and Services

Transformational Wideband Communication Capabilities for the Warfighter

UH-46D Sea Knight

UHF Follow-On

Unmanned Airborne Systems

Unmanned Little Bird

V-22 Osprey

VSOC Sentinel

Wedgetail

Wideband Global SATCOM (WGS)

X-37B Orbital Test Vehicle

X-51 WaveRider

XM Satellite Radio

XM-3, 4

XSS Micro-Satellite

The news in Seattle was how Boeing’s 787 was continually being delayed, but they are involved in so many military and space efforts, it is surprising they find any time at all to work on their Dreamliner!

Boeing is working on roughly 150 projects, so they have about 1,100 people per project. Averages hide a lot of variation, so let’s assume a space elevator is 10 times bigger than the average project. That gives you 11,000 people. If you knew the size of the teams at Boeing, something which is not public information, you could refine the estimate further. An 11,000-person team would be a sight to behold.
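The estimate works out as follows (a sketch; the 10x multiplier is the assumption stated above, not a measured figure):

```python
employees = 160_000
projects = 150

per_project = employees / projects      # ~1,067, call it ~1,100
multiplier = 10                         # assumption: elevator is 10x average
estimate = 1_100 * multiplier

assert 1_000 < per_project < 1_100
assert estimate == 11_000
```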

If we landed on the moon 7 years after Kennedy told us we would, and if Boeing can build the 787 in 7 years, then we can build the rest of the space elevator in 7 years. It is just a matter of having enough of the right people. So 11,000 people for about 7 years is a first estimate. But this is the 21st century, and we landed on the moon 40 years ago.

Software

Software is my training, and what I will turn to now. Ford Motor Company made an ad that said before they build a car, they build it inside a computer. If you are satisfied with the design inside a computer, you are ready to start production. What is true for a car is even more true for an airplane, and there is a lot of software involved in designing, testing, running and maintaining an airplane, and I’ve had the chance to talk to some Boeing engineers in my years in Seattle. It would not be surprising if the majority of engineers at Boeing knew how to program, and that software is a large part of Boeing’s investments. On the Wikipedia page for the 787, their (proprietary) software is mentioned several times as being a reason for delays.

Setting aside the space elevator, the key to faster technological progress is the more widespread use of free software in all aspects of science. For example, I believe there are more than enough computer vision PhDs, but there are 200+ different codebases and countless proprietary ones. Simply put, there is no computer vision codebase with critical mass, and this problem exists for a number of problem domains. The lessons of Wikipedia have not been learned.

We are not lacking hardware. Computers today can do billions of additions per second. If you could do 32-bit addition in your head in one second, it would take you 30 years to do the billion that your computer can do in that second.

While a brain is different from a computer in that it works in parallel, such parallelization only makes the work happen faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on the computers of today, which can do only one thing at a time, but at a rate of billions per second. A 1-gigahertz processor can do 1,000 different operations on a million pieces of data in one second. With such speed, you don’t even need multiple processors. Even so, more parallelism is coming via GPUs.
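Both of the figures above check out with a little arithmetic:

```python
additions = 1_000_000_000                 # what a computer does in one second
seconds_per_year = 365.25 * 24 * 3600     # ~31.6 million

# Doing one addition per second in your head, around the clock:
years_by_hand = additions / seconds_per_year
assert 31 < years_by_hand < 32            # roughly 30 years, as claimed

# A 1 GHz processor: 1,000 operations on a million pieces of data.
ops = 1_000 * 1_000_000
assert ops == additions                   # exactly one second of work
```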

I have written a book that has ideas on how to write better software faster. Today, too many programmers of this world have not adopted free software and modern programming languages. I cannot speak for the shortest amount of time it would take to build the hardware for the space elevator, but I can speak a little bit about the software. Software is interesting because it seems there is no limit on the number of people who can work together.

Linux’s first release in 1991 was built by one programmer and had 10,000 lines of code. It is now 1,000 times bigger and has 1,000 times as many people working on it. Software is something like Wikipedia, which started with a handful of contributors but now has millions of people who have made contributions. I grabbed a random article on Wikipedia: it was 5,000 words, a decent hunk of intellectual property, about as long as this speech, which is half over. It had 1,500 revisions and 923 contributors. Each person noticed something different; not every change is perfect, but newer changes further polish the work, and it usually heads in the right direction, evolving towards a good state. A corollary of this point is Eric Raymond’s line that given enough eyeballs, all bugs are shallow.

Leonardo da Vinci said: “Art is never finished, only abandoned.” This is true of software as well, because both are perfectible to an arbitrary degree. Every programmer has had a feeling in his gut that if he had more resources, he could do more things. Software is different from Wikipedia, but I have found that problems in software, assuming you have the right expertise, can generally be broken up into arbitrarily small tasks. Every interesting problem can be expressed as a functional interface and a graph of code that someone else can maintain.

Some think that the AI problems are so hard that it isn’t a matter of writing code, it is a matter of coming up with the breakthroughs on a chalkboard. But people can generally agree at a high level how the software for solving many problems will work and there has been code for all manner of interesting AI kicking around for decades.

What we never built, and still don’t have, are places where lots of people come together to hash out the details, which is a lot closer to Wikipedia than it first appears. Software advances in a steady, stepwise fashion, which is why we need free software licenses: to incorporate all the incremental advancements that each random scientist is making. Even if you believe we need more scientific breakthroughs, it should be clear that things like robust computer vision are complicated enough that you would want hundreds of people working together on the vision pipeline. So, while we are waiting for those “breakthroughs,” let’s just get the 100 people together.

A big part of the problem is that C and C++ have not been retired. These languages make it hard for programmers to work together, even when they want to. There are all sorts of inefficiencies of time, from learning the arcane rules of these ungainly languages (especially C++) to the fact that libraries often use their own utility classes, synchronization primitives, error handling schemes, etc.

It is easier to write a specialized and custom computer vision library in C/C++ than to integrate OpenCV, the most popular free computer vision engine. OpenCV defines an entire world, down to the matrix class, so it cannot just plug into whatever code you already have, and it takes months to get familiar with everything. Most people just want to work. To facilitate cooperation, I recommend Python: usable by PhDs and 8-year-olds alike, and a productive, free, reliable, and rich language. Linux and Python are a big part of what we need. That gives us a huge and growing baseline, but we have to choose to use it.

[Screenshot: a fluid analysis of an internal combustion engine, built with the Python science library SciPy, which can also do neural networks and computer vision.]
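As a tiny illustration of SciPy reaching into computer vision (a sketch using `scipy.ndimage`; the 8x8 test image is invented for the example), a Sobel filter picks out the vertical edge in a synthetic image in a few lines:

```python
import numpy as np
from scipy import ndimage

# Synthetic 8x8 "image": dark left half, bright right half.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A Sobel filter along the columns responds strongly at the vertical edge.
edges = ndimage.sobel(image, axis=1)

assert abs(edges[4, 3]) > 0    # strong response at the boundary...
assert edges[4, 0] == 0        # ...and none in the flat interior
```

The same library stack scales from toy examples like this to the fluid analysis in the screenshot, which is the point: one shared baseline for many fields.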

We might come up with a better language one day, but Python is good enough. The problem in software today is not a lack of hardware, or the technical challenge of writing code; it is the social challenge of making sure we are all working together productively. If we fix this, the future will arrive very fast. Another similarity between Wikipedia, free software, and the space elevator is that all are cheaper than their alternatives.

So given all this technology at our disposal, we should be able to build this elevator in less than 7 years. Few would have predicted that it would take the unpaid volunteers of Wikipedia only 2.5 years to surpass Encyclopedia Britannica. Anything can happen in far less time than we think is possible if everyone steps up today to play their part. The way to be a part of the future is to invent it. We need to focus our scientific and creative energy towards big, shared goals. Wikipedia, as the world’s encyclopedia, is a useful and inspiring tool, and so people have come pouring in.

Future software advancements like cars that drive themselves will trigger a new perspective on whether we can build a space elevator. My backup plan for hitching a ride on the space elevator is to encourage people to build robot-driven cars first. Today, I’m trying the reverse approach.

The way to get help for a project is to create a vision that inspires others, but it would also be helpful if we got ten billion dollars. If the US can afford a $1.4 trillion deficit, we can afford a space elevator.

There are already millions of people working in the free software movement today, so in a sense there are already millions of people working on the space elevator. If we had people with the right skills working on it, we could start writing the actual software for the space elevator. We could in principle write all of the software in advance, just as Boeing and Ford do, which would further shrink the estimates.

Unfortunately, writing all the software now is theoretically possible but not practical. The problem is that a lot of what we need are device drivers. There are many ways to design the cargo door of the climber and the various steps of opening it. The software that controls the opening and closing of that door is a device driver: a state machine that coordinates all the smaller pieces of hardware. You can even think of mission control as the software that orders all of the hardware pieces around; it is a meta-device-driver, so it can’t be written yet either. We are therefore mostly stuck if we try to write too much of the software now, but there are a few things we can do.
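The door controller described above can be sketched as a small state machine. This is a purely hypothetical illustration, not a real climber design; the states and events here are made up.

```python
# Hypothetical sketch of a cargo-door device driver as a state machine.
# States and events are illustrative, not from any actual climber design.

TRANSITIONS = {
    ("closed", "open_cmd"): "opening",
    ("opening", "motion_done"): "open",
    ("open", "close_cmd"): "closing",
    ("closing", "motion_done"): "closed",
}

class CargoDoor:
    def __init__(self):
        self.state = "closed"

    def handle(self, event):
        """Advance the state machine; ignore events that do not apply."""
        key = (self.state, event)
        if key in TRANSITIONS:
            self.state = TRANSITIONS[key]
        return self.state
```

The point of the state-machine style is that every legal transition is listed in one table, so you can reason about (and test) the driver without any hardware attached.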

We could work on hardware designs. The hard part about discussing a design aspect of the climber at a conference like this is that there is no canonical design or team. Today, much interesting intellectual property besides software is locked up.

Free data is also important, however. Wikipedia has 2.6 million lines of code to edit and display the encyclopedia, but the encyclopedia itself is gigabytes of data. Different projects have different ratios, but software is useless without data. Everything Boeing does is proprietary today; we should fix that for the space elevator to encourage faster progress. If we all agree on free software and free formats as a baseline, people can work together. One big challenge is that there is no free replacement for SolidWorks.

Even today, not everything that Boeing has locked up is innovative and strategic. They use standard military encryption algorithms, which are public and free. Much of software is boring infrastructure code.

With free software and free formats, we can build the space elevator most quickly. So while the bad news is that much of the required software effort will be device drivers, the good news is that there are some software things we can work on today.

Dave Lang’s work on tethers is very useful, and it could use a team of people working with him to port it from Fortran to Python. Dave started, but he didn’t know Python and the interop tools well enough to make progress. It would also be nice to get some people with supercomputers analyzing ribbon designs and ways to bootstrap and repair a ribbon. NASA has the people, but they don’t have this as their job.
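As a hint of what a Python port could start from, here is a toy tether model: point masses joined by springs, advanced with a semi-implicit Euler step. The masses, stiffness, rest length, and time step are placeholders I made up for illustration, not values from Dave Lang’s Fortran code.

```python
# Toy 1-D tether: point masses connected by springs, semi-implicit Euler.
# All parameters are illustrative placeholders, not real ribbon values.

def step(positions, velocities, mass=1.0, k=10.0, dt=0.01):
    """Advance a chain of masses one time step; the first mass is a fixed anchor."""
    n = len(positions)
    forces = [0.0] * n
    for i in range(n - 1):
        # Hooke's law on the segment between mass i and i+1 (rest length 1.0).
        stretch = (positions[i + 1] - positions[i]) - 1.0
        f = k * stretch
        forces[i] += f
        forces[i + 1] -= f
    new_v = [velocities[i] + forces[i] / mass * dt for i in range(n)]
    new_x = [positions[i] + new_v[i] * dt for i in range(n)]
    new_x[0], new_v[0] = positions[0], 0.0  # the anchor does not move
    return new_x, new_v
```

A real port would carry over the actual tether equations, but even a skeleton like this gives contributors something concrete to test against.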

I am hosting spaceelevatorwiki.com on my server and plan on handing it over to ISEC; it could serve as a place to coordinate various kinds of software and other R&D. If we could get some people working, it would push others to get going; nobody wants to be the only worker on a project. Even a few million dollars can be useful to jumpstart software efforts.

The Last 1%: Carbon Nanotubes


Okay, so now on to the carbon nanotubes. This is not my area of expertise so it will be short. I am satisfied to make the case that a space elevator is 99% doable in less than 7 years and leave the resolution of the last 1% for another day. To adapt a line from Thomas Edison: success at building a space elevator is 99% perspiration and 1% inspiration.

Many futurists believe that nanotechnology is the next big challenge after information technology. When analyzing a system you know how to build, it is best to work top-down; but when trying to do something new, you work your way up. When learning to cook, you start with an egg, not filet mignon. A good way to attack a big problem like nanotechnology is to first attack a small part of it, like carbon nanotubes. A Manhattan Project on general nanotechnology is too big and unfocused a problem; protein folding is by itself a Manhattan Project!

A carbon nanotube is a simple and useful nanoscale structure and could be a great way to launch atomically precise manufacturing. The ribbon still needs some science: design work dealing with friction, damage, and decay, but that work can be done today on supercomputers. There are people at NASA who have the expertise and equipment, but they don’t have this as a goal. One of the points Kennedy made is that sending a man to the moon served as a goal to “organize and measure.”

One concern is that a lot of money is being spent on nanotube manufacturing research, but it is doled out in amounts of $100K. I am not convinced that such small investments can bring any major new advancements.

Nanotubes might require the existing industrial expertise of a company like Intel. We all know that NASA has not seriously considered building a space elevator, and similarly, I think that no one at Intel has considered the benefits of creating the world’s best nanotube threads. They are already experimenting with nanotubes inside computer chips, because metal loses the ability to conduct electricity at very small diameters, but they aren’t producing them as an independent product for purchase now.

Intel is working at the 35 nm scale today, which is a long way from the 2 nm nanotube scale. But Intel’s only goals today are faster and cheaper. Intel can fit 11 of their Atom processors on the surface area of a penny. Such a powerful processor is small enough for iPhone-sized devices, let alone laptops, which are their actual market.

Size is just a side-battle in their goals of more speed and lower production cost. So perhaps Intel would build a nanotube fabrication plant that looks nothing like what they are trying to do today:


Intel Itanium Processor

The first nanotube threads will likely not be good enough for the space elevator, but Intel learns how to build a better and smaller chip in the process of designing and building its current chip. So after they build this manufacturing plant, they could sell their product while they build the next one. Who knows how many of these iterations it would take, or what might speed the progress up.

Brad Edwards tells me that with one-inch fibers, you can spin arbitrarily long carbon nanotube threads using the textile processes we’ve been following for centuries. Carbon nanotubes were discovered in 1991, and growing fibers in an oven and spinning them into threads is something we could have done back then. Hexcel, one of the world’s leaders in carbon fiber, is afraid to invest in carbon nanotubes even though it is the company in the world closest to being able to produce them. I have discovered in software that progress is about constantly adding new features that enable new scenarios; software is therefore constantly about generalizing. From where I sit, carbon fiber and carbon nanotubes are nearly the same thing! Even if it required new investments, Hexcel should be able to do this faster, better, and cheaper than anyone else, and it should have the most customers lined up for the new product. Yet the company that should be leading in this market is paralyzed into inaction by fear of failure. There is a moral obligation to innovate.

In conclusion, there is a new generation of kids maturing, known as the Millennials. Their perspective is unique because they’ve been using YouTube and Google for as long as they can remember. They expect to get an answer to any question they pose in 100 milliseconds on their phone. The fact that Social Security is bankrupt is not acceptable to them. E=mc2 is sufficient proof that nuclear power is a good idea. If you tell them you’ve got a 20-year plan, they will reply that you don’t know what you are doing yet and need to develop better plans. To them, waiting 20 years for a space elevator makes as much sense as waiting 30 minutes at the gas station. And they are right: they don’t need to change their perspective; the rest of us need to change ours.

I’m not a Millennial; I’m a Generation X’er, and we are the ones building it. But I’m a software person. It would take 10,000 of my first computer to match the capacity of an iPhone. I see today’s hardware as magic, so I believe someone can conjure up high-quality nanotube rope if enough resources are invested. It might not be good enough for the elevator, but it could be a revenue-generating business. In Kennedy’s Rice speech, he mentioned that the Apollo program needed “new metal alloys” that hadn’t yet been invented. He didn’t think it would be a problem back then, and we shouldn’t be 100% convinced it is one now either.

The International Space Station is a tin can in space; we can do a lot better. A space elevator is a railway to space. Scramjets, space tethers, rockets, reusable launch vehicles: none of them are the way. Perhaps the Europeans could build the station at GEO. Russia could build the shuttle craft to move cargo between the space elevator and the moon. The Middle East could provide an electrical grid for the moon. China could take on the problem of cleaning up the orbital space debris and build the first moon base. Africa could design the means to terraform Mars, etc. This could all be done in parallel with the space elevator construction. We went to the moon 40 years ago, and the space elevator is our generation’s moon mission. Let’s do as Kennedy exhorted and “be bold.”

There are legal issues to consider. But when this project commences, we need to tell the bureaucrats to get out of the way. We should also approach the global warming crowd and tell them that even better than living in rice paddies and driving electric rickshaws, the best way to help comrade mother earth is with a space elevator. Colonizing space will change man’s perspective. When we feel crammed onto this pale blue dot, we forget that any resource we could possibly want is out there in incomparably big numbers. This simple understanding is a prerequisite for a more optimistic and charitable society, which has characterized eras of great progress.

We have given this program a high national priority — even though I realize that this is in some measure an act of faith and vision, for we do not now know what benefits await us. But if I were to say, my fellow citizens, that we shall send to the moon, 240,000 miles away from the control station in Houston, a giant rocket more than 300 feet tall, the length of this football field, made of new metal alloys, some of which have not yet been invented, capable of standing heat and stresses several times more than have ever been experienced, fitted together with a precision better than the finest watch, carrying all the equipment needed for propulsion, guidance, control, communications, food and survival, on an untried mission, to an unknown celestial body, and then return it safely to earth, re-entering the atmosphere at speeds of over 25,000 miles per hour, causing heat about half that of the temperature of the sun …, and do all this, and do it right, and do it first before this decade is out — then we must be bold.

John F. Kennedy, 1962

]]>
https://spanish.lifeboat.com/blog/2010/08/more-on-a-space-elevator-in-7/feed 15
H+ Conference and the Singularity Faster https://spanish.lifeboat.com/blog/2010/06/h-conference-and-faster-singularity https://spanish.lifeboat.com/blog/2010/06/h-conference-and-faster-singularity#comments Fri, 11 Jun 2010 10:17:50 +0000 http://lifeboat.com/blog/?p=1012

We can only see a short distance ahead, but we can see plenty there that needs to be done.
—Alan Turing

As a programmer, I look at events like the H+ Conference this weekend in a particular way. I see all of their problems as software: not just the code for AI and friendly AI, but also that for DNA manipulation. It seems that the biggest challenge for the futurist movement is to focus less on writing English and more on getting the programmers working together productively.

I start the AI chapter of my book with the following question: imagine 1,000 people, broken into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be better? This sounds like a silly analogy in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

Some think that these problems are so hard that it isn’t a matter of writing code; it is a matter of coming up with breakthroughs on a chalkboard. But people can generally agree at a high level on how the software for solving many problems will work. Code for doing OCR, neural networks, and much more has been kicking around for years. The biggest challenge right now is getting people together to hash out the details, which is a lot closer to Wikipedia than it first appears. Software advances in a steady, stepwise fashion, which is why we need free software licenses: to incorporate all the incremental advancements that each scientist is making. Advances must eventually be expressed in software (and data) so they can be executed by a computer. Even if you believe we need certain scientific breakthroughs, it should be clear that things like robust computer vision are complicated enough that you would want hundreds of people working together on the vision pipeline. So, while we are waiting for those breakthroughs, let’s get 100 people together!
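To show how small the well-understood building blocks are, here is a toy neural network in plain Python: a single perceptron learning the logical AND function. The learning rate and epoch count are arbitrary illustrative choices, and a real system would use a library, but the core idea fits in a dozen lines.

```python
# Toy perceptron learning logical AND, in pure standard-library Python.
# Learning rate and epoch count are arbitrary illustrative choices.

def train_perceptron(samples, epochs=20, rate=0.1):
    """Return (weights, bias) fit to (inputs, target) samples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                 # perceptron update rule
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The chalkboard work for this was done decades ago; what remains is the Wikipedia-style work of combining such pieces into shared, robust codebases.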

There is an additional problem: C and C++ have not been retired. These languages make it hard for programmers to work together, even if they want to. There are all sorts of taxes on time, from learning the arcane rules of these ungainly languages to the fact that libraries often use their own string classes, synchronization primitives, error-handling schemes, etc. In many cases, it is easier to write a specialized, custom computer vision library in C/C++ than to integrate something like OpenCV, which does everything itself, down to the matrix class. The pieces for building your own computer vision library (graphics, I/O, math, etc.) are in good shape, but the computer vision itself is not, which is why we haven’t moved beyond that stage! Another problem with C/C++ is that they do not have garbage collection, which is necessary, though not sufficient, for reliable code.

A SciPy-based computational fluid dynamic (CFD) visualization of a combustion chamber.

I think scientific programmers should move to Python and build on SciPy. Python is a modern free language, and it has quietly built up an extremely complete set of libraries for everything from gaming to scientific computing. Specifically, its SciPy library with various scikit extensions is a solid baseline patiently waiting for more people to work on all sorts of futuristic problems. (It is true that Python and SciPy both have issues. One of Python’s biggest is that the default implementation is interpreted, but several workarounds are being built [Cython, PyPy, Unladen Swallow, and others]. SciPy’s biggest challenge is how to be expansive without being duplicative. It is massively easier to merge English articles in Wikipedia that discuss the same topic than to do the equivalent in code. We need to share data in addition to code, but we need to share code first.)

Some think the singularity is a hardware problem that won’t be solved for a number of years. I believe the benefits inherent in the singularity will arrive as soon as our software becomes “smart,” and we don’t need to wait for any further Moore’s law progress for that to happen. In fact, we could have built intelligent machines and cured cancer years ago. The problems right now are much more social than technical.

We can only see a short distance ahead, but we can see plenty there that needs to be done.

—Alan Turing

]]>
https://spanish.lifeboat.com/blog/2010/06/h-conference-and-faster-singularity/feed 6