
Recent revelations about NASA’s Eagleworks EmDrive caused a sensation on the internet over whether interstellar propulsion is possible. The naysayers pointed to shoddy engineering and impossible physics, while the ayes pointed to the physics of Alcubierre-type warp drives based on General Relativity.

So what is it? Are warp drives feasible? The answer is both yes and no. Allow me to explain.

The empirical evidence of the Michelson-Morley experiment of 1887, formalized as the Lorentz-FitzGerald Transformations (LFT) proposed by FitzGerald in 1889 and by Lorentz in 1892, shows beyond a shadow of doubt that nothing can move with a velocity greater than the velocity of light. In 1905 Einstein derived the LFT from first principles as the basis for the Special Theory of Relativity (STR).

So if nothing can travel faster than light, why does the Alcubierre-type warp drive matter? The late Prof. Morris Kline explained in his book, Mathematics: The Loss of Certainty, that mathematics has become so powerful that it can now be used to prove anything, hence the loss of certainty in the value of these mathematical models. The antidote is to stay close to the empirical evidence.

My good friend Dr. Andrew Beckwith (Prof., Chongqing University, China) explains that there are axiomatic problems with Alcubierre-type warp drive theory. Basically, the implied axioms (the starting assumptions of the mathematics) require a multiverse, i.e. multiple universes, but the mathematics is based on a single universe. So even though the mathematics appears to be sound, its axioms contradict it. As Dr. Beckwith puts it, “reductio ad absurdum”. For now, this unfortunately means that there is no such thing as a valid warp drive theory. LFT prevents it.

For a discussion of other problems in physical theories, please see my peer-reviewed 2013 paper “New Evidence, Conditions, Instruments & Experiments for Gravitational Theories”, published in the Journal of Modern Physics. In it I explain how General Relativity can be used to propose some very strange ideas; therefore, claiming that something is consistent with General Relativity does not always lead to sensible outcomes.

The question we should be asking is not “can we travel faster than light (FTL)?” but “how do we bypass LFT?” That is, our focus should not be on how to travel but on how to effect destination arrival.

Let us take one step back. Since Einstein, physicists have been working on a theory of everything (TOE). Logic dictates that a true TOE must be able to derive from first principles why conservation of mass-energy and conservation of momentum hold. If a theory cannot, it cannot be a TOE. Unfortunately, all existing TOE candidates take these conservation laws as starting axioms, and therefore are not true TOEs. The importance of this requirement is that if we cannot explain why conservation of momentum holds, as Einstein did with LFT, how do we know how to apply it when developing interstellar propulsion engines? Yes, we have to be that picky, or else we will be throwing millions if not billions of dollars of funding into something that probably won’t work in practice.

Is a new physics required to achieve interstellar propulsion? Does a new physics exist?

In 2007, after extensive numerical modeling, I discovered the massless formula for gravitational acceleration, g=τc^2, where τ (tau) is the change in the time dilation transformation (the dimensionless LFT factor) divided by the change in distance. (The error in the modeled gravitational acceleration is less than 6 parts per million.) This proves that mass is not required for gravitational theories and falsifies the RSQ (Relativity, String & Quantum) theories of gravity. There are two important consequences of this finding: (1) we now have a new propulsion equation, and (2) legacy physics cannot deliver.
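
For readers who want to check the arithmetic of g=τc^2, here is a minimal numerical sketch in Python. It assumes the standard weak-field (Schwarzschild) time dilation factor for Earth, which is textbook General Relativity rather than the author’s model, and shows that the change in the dimensionless time dilation per unit distance, multiplied by c^2, does reproduce the familiar surface gravity:

```python
import math

# Physical constants (SI units)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of Earth, kg
c = 2.998e8     # speed of light, m/s
r = 6.371e6     # mean radius of Earth, m

def time_dilation(radius):
    """Dimensionless Schwarzschild time dilation factor at a given radius."""
    return math.sqrt(1.0 - 2.0 * G * M / (radius * c**2))

# tau: change in the time dilation factor per unit distance.
# A central difference over +/- 10 km keeps floating-point noise manageable,
# since the factor differs from 1 by only ~7e-10 at Earth's surface.
h = 1.0e4
tau = (time_dilation(r + h) - time_dilation(r - h)) / (2.0 * h)

print(f"g = tau * c^2   : {tau * c**2:.2f} m/s^2")   # ~9.82
print(f"Newtonian GM/r^2: {G * M / r**2:.2f} m/s^2")  # ~9.82
```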

But gravity modification per g=τc^2 is still based on motion, and therefore constrained by LFT. That is, gravity modification cannot provide for interstellar propulsion. For that we require a different approach: the new physics.

At least from the perspective of propulsion physics, a theoretical approach behind a single formula, g=τc^2, would not satisfy the legacy physics community that a new physics is warranted or even exists. Therefore, based on my 16 years of research involving extensive numerical modeling with the known empirical data, in 2014 I wrote six papers laying down the foundations of this new physics:

1. “A Universal Approach to Forces”: There is a 4th approach to forces that is not based on Relativity, String or Quantum (RSQ) theories.
2. “The Variable Isotopic Gravitational Constant”: The Gravitational Constant G is not a constant and is independent of mass; therefore gravity modification without particle physics is feasible.
3. “A Non Standard Model Nucleon/Nuclei Structure”: Falsifies the Standard Model and proposes Variable Electric Permittivity (VEP) matter.
4. “Replacing Schrödinger”: Proposes that the Schrödinger wave function is a good but not an exact model.
5. “Particle Structure”: Proposes that the Standard Model be replaced with the Component Standard Model.
6. “Spectrum Independence”: Proposes that photons are spectrum independent, and how to accelerate nanowire technology development.

This work, published under the title Super Physics for Super Technologies, is available for all to review, critique and test for validity. (A non-intellectual, emotional gut response is not valid criticism.) That is, the new physics does exist. The relevant outcome for interstellar propulsion is that subspace exists, and that this is how Nature implements probabilities. Note that neither quantum nor string theories ask how Nature implements probabilities, and therefore they are unable to provide an answer. The proof of subspace can be found in how the photon’s electromagnetic energy is conserved inside the photon.

Subspace is probabilistic and therefore does not have a time dimension. In other words, destination arrival is not constrained by LFT and motion-based travel, but is effected by probabilistic localization. We therefore have to figure out navigation in subspace, i.e. vectoring and modulation. Vectoring is the ability to determine direction; modulation is the ability to determine distance. This approach is new and has enormous potential to be realized, as it is not constrained by LFT.

Yes, interstellar propulsion is feasible, but not with the warp drives we understand today. As of 2012, there were only about 50 of us on this planet working, or who had worked, on the gravity modification and interstellar propulsion challenge.

So the question is not, whether gravity modification or interstellar propulsion is feasible, but will we be the first nation to invent this future?

(Originally published in the Huffington Post)

Companies looking to launch satellites into space typically spend anywhere from $10–50 million per launch, but thanks to 3D printing, those costs are set to drop in a big way.

For $4.9 million, businesses can use RocketLab to send small satellites into orbit. The firm’s engine, called the Rutherford, is powered by an electric motor and is the first oxygen and hydrocarbon engine to use 3D printing for all its primary components. The New Zealand company is set to begin test flights this year and aims to launch weekly commercial operations next year. Read more

Cryptocurrency aficionados have been discussing Bitcoin’s limitations ever since the blockchain buzz hit the street. Geeks toss around ideas for clearing transactions faster, resisting potential attacks, rewarding miners after the last coin is mined, and supporting anonymity (or the opposite, if you lean toward the altcoin dark side). There are many areas in which Bitcoin could be improved, or made more conducive to one camp or another.

John Carroll, a distinguished Penn State professor, believes that Bitcoin may eventually be marginalized due to its early arrival. He believes that its limitations will eventually be overcome by newer “altcoins”, presumably with improved mechanisms.

So, does progress in any of these areas threaten the reigning champ? It’s unlikely…

More than any other individual, Andreas Antonopoulos is the face of Bitcoin. We discussed this very issue in the outer lobby of the MIT Bitcoin Expo, at which he was keynote speaker (March 2015). Then we discussed it again when I hosted his presentation at The Bitcoin Event in New York (also in March). He clearly and succinctly explained to me why it is unlikely that an altcoin will replace Bitcoin as the dominant, and eventually surviving, cryptocurrency.

It is not simply that Bitcoin was first or derived from Satoshi’s original paper, although this clearly established precedent, propelled it into the media, and ignited a grassroots industry. More importantly, Bitcoin is unlikely to be surpassed by an altcoin because:

  1. Bitcoin is open source. It is difficult enough for skeptics to trust an open source protocol in the first place. Users, businesses, banks, exchanges and governments may eventually trust a distributed, open source movement. After all, math is more trustworthy and less transient than governments. Math cannot inflate itself, bend to political winds, or print future generations into debt if it is tied to a cap. But it is unlikely that these same skeptics will allow an inventor with a proprietary mechanism to take custody of their wealth, or one in which the content of all wallets cannot be traced back to the origin.
  2. If we accept #1 (that a viable contender must be open source and either public or freely licensed), then Bitcoin developers and wallet vendors are free to incorporate the best protocols and enhancements from the alt-developers. These can gradually be folded into Bitcoin and adopted by consensus. This is what Gavin and the current developers at Bitcoin Core do. They protect, enhance, extend, and promote. Looked at another way: when a feature or enhancement is blessed, and 3 or 4 of the leading 7 wallets honor it, it becomes part of Bitcoin.

Bitcoin has achieved a two-sided network effect, just like Acrobat PDF. Unseating an entrenched two-sided network requires disruptive technology and implementation with clear benefits. But in the case of a widely distributed, trusted and universally adopted tool (such as a public-use monetary instrument), a contender must be open source. The Cryptocurrency Standards Association, The Bitcoin Foundation and the leading wallet vendors have always been open and eager to incorporate the best open source ideas into Bitcoin.

Even if Bitcoin were replaced by an altcoin or by “Bitcoin 2.0”, it is likely that the public would migrate to the enhanced coin only if it were tied to the original equity corpus of earned and mined coins from the Bitcoin era. That is, we all know that Satoshi may have thousands of original Bitcoins, but few among us would tolerate (a) losing all of our Bitcoin value, or (b) rewarding a blockchain wannabe who declares that his coins are worth more than the grassroots legacy of vested millions that came before.

Consider Prof. Carroll’s analogy: “Who will use an acoustic string telephone when he could access a mobile phone?” A more accurate analogy is the evolution of the 32-year-old AMPS phone network (the first widely deployed cell phone network). In 1983, the original phones were analogue and limited to 400 channels. Like their non-cellular predecessors, user equipment was bulky: components were split between the trunk and under the seat, with a corded handset. They lacked GPS, LTE and many signaling features that we now take for granted. Yet carriers, equipment manufacturers and users were never forced to throw away their equipment and start over. The network grew, adapted, and yielded incentives for incremental user-equipment upgrades.

With all due respect to the distinguished Penn State professor, John Carroll, I stand with Andreas. Bitcoin needn’t relinquish the throne. It is evolving!

Philip Raymond is Co-Chair of The Cryptocurrency Standards Association and CEO of Vanquish Labs.
This is his first article for Lifeboat Foundation

Related: Stellar & Ripple: Pretender to Bitcoin throne?

Game-changing technologies can be a waste of money or a competitive advantage. It depends on the technology and the organization.

It seems like the term “game-changing” gets tossed around a lot lately. This is particularly true with respect to new technologies. But what does the term mean, what are the implications, and how can you measure it?

With regard to what it means, I like the Macmillan Dictionary definition of game-changing: “completely changing the way that something is done, thought about, or made.” I like this definition because it captures the transformational nature of what springs to mind when I hear the term. A game-changer should be just what it says: not just a whole new ball game, but a whole new type of game entirely.

Every industry is unique, and what is a game-changer for one might be only a minor disruption or improvement for another. For example, the internal combustion engine was a game-changer for the transportation industry. It was important for the asphalt industry too, but less of a game-changer there, since its impact arrived through the secondary effect of increased demand for paved roads.

Just as every industry is unique, so is every organization. In order to prosper in a dynamic environment, an organization must be able to evaluate how a particular technology will affect its strategic goals, as well as its current operations. For this to happen, an organization’s leadership must have a clear understanding of itself and the environment in which it is operating. While this seems obvious, for large complex organizations, it may not be as easy as it sounds.

In addition to organizational awareness, leadership must have the inclination and ability to run scenarios of how the organization would be affected by the candidate game-changer. These scenarios provide the ability to peek a little into the future, and enable leadership to examine different aspects of the potential game-changer’s immediate and secondary impacts.

Now, there are a lot of potential game-changers out there, and it is probably not possible to run a full evaluation on all of them. This is where an initial screening comes in useful. An initial screen might ask: is it realistic, actionable, and scalable? Realistic means: does it appear feasible from a technical and financial standpoint? Actionable means: does this seem like something that can actually be produced? Scalable means: will the infrastructure support rapid adoption? If a potentially transformational technology passes this initial screening, then its impact on the organization should be thoroughly evaluated.
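
To make the screen concrete, here is a minimal sketch of those three questions as a checklist in Python. The criteria come from the paragraph above; the class and field names are my own illustrative choices, not an established framework:

```python
from dataclasses import dataclass

@dataclass
class InitialScreen:
    """Three-question initial screen for a candidate game-changing technology."""
    name: str
    realistic: bool   # feasible from a technical and financial standpoint?
    actionable: bool  # can it actually be produced?
    scalable: bool    # will the infrastructure support rapid adoption?

    def passes(self) -> bool:
        """Only technologies passing all three questions get a full evaluation."""
        return self.realistic and self.actionable and self.scalable

# The augmented reality example from the next section, scored as in the text.
ar = InitialScreen("augmented reality", realistic=True, actionable=True, scalable=True)
print(f"{ar.name}: {'full evaluation warranted' if ar.passes() else 'screen out'}")
```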

Let’s run an example with augmented reality as the technology and a space launch services company. Despite the (temporary?) demise of Google Glass, augmented reality certainly seems to have the potential to be transformational. It literally changes how we can look at the world! Is it realistic? I would say yes, the technology is almost there, as evidenced by Google Glass and Microsoft HoloLens. Is it actionable? Again, yes. Google Glass was indeed produced. Is it scalable? The infrastructure seems available to support widespread adoption, but the market readiness is a bit of an issue. So yes, but perhaps with qualifications.

With the initial screening done, let’s look at the organizational impact. A space launch company’s leadership knows that due to the unforgiving nature of spaceflight, reliability has to be high. They also know that they need to keep costs low in order to be competitive. Inspection of parts and assembly is expensive but necessary in order to maintain high reliability. With this abbreviated information as the organizational background, it’s time to look at scenarios. This is the “What if?” part of the process. Taking into account the known process areas of the company and the known and projected capabilities of the technology in question, ask “What would happen if we applied this technology?” Don’t forget to look for second-order effects as well.

One obvious scenario for the space launch company would be to examine what would happen if augmented reality were used in the inspection and verification process. One could imagine an assembly worker equipped with augmented reality glasses seeing the supply chain history of every part being worked on, perhaps even receiving expert guidance from an artificial intelligence during assembly. The immediate effect would be reduced inspection time, which equates to cost savings and increased reliability. A second-order effect could be greater market share due to a stronger competitive advantage.

The bottom line of this hypothetical example is that for the space launch company, augmented reality stands a good chance of greatly improving how it does business. It would be a game-changer in at least one area of operations, but wouldn’t completely rewrite all the rules.

As the company runs additional scenarios and visualizes the potential, it can determine whether this technology is something to watch and wait on, to adopt early, or perhaps to invest in directly to bring it along a little faster.

The key to all of this is that organizations have to be vigilant in knowing what new technologies and capabilities are on the horizon, and proactive in evaluating how they will be affected by them. If something can be done, it will be done, and if one organization doesn’t use it to create a competitive advantage, rest assured its competitors will.

I read all the news about the latest “failure” of SpaceX’s Falcon 9 to land on an autonomous spaceport drone ship, aka barge. I view these as trials on the way to success. Here’s why.

1. Grasshopper Successes: The two videos below show the early landing trials, aka Grasshopper, from several heights between 250m and 1,000m.

The lessons here are:

a) Pinpoint landing of a 1st stage rocket is technologically feasible.

b) This 1st stage rocket has to attain zero vertical velocity at a minimum of 250m above the barge.

Video of 250m test

Video of 1,000m test

2. Falcon 9 1st stage crash landing — 1st attempt: SpaceX tells us that the failure was due to a hard landing (see video below), but at 0:03 into the video one can see that the 1st stage had already tilted substantially before it hit the deck, i.e. the tilt was not caused by hitting the deck.

The lessons here:

a) A wobble — a dynamic instability — occurs before landing.

b) The guidance systems are unable to cope with this new wobble.

Video of 1st attempt

3. Falcon 9 1st stage crash landing — 2nd attempt: The video of the second attempt (below) confirms that a wobble is indeed introduced before the stabilization fins are deployed. Further, the deployment exacerbates the wobble, and the guidance system is unable to handle this exacerbated wobble.

The lessons here:

a) The 1st stage’s vertical velocity needs to be zero by at least 250m above the deck.

b) The stabilization fins need to be redesigned so that their deployment does not exacerbate the wobble.

c) Like SpaceShipOne’s shuttlecock approach, the 1st stage’s upper fins need to be deployed before the lower fins are.

d) Upgrade the landing guidance system to account for more severe wobbles.

If, at a minimum, SpaceX achieves zero vertical velocity at 250m before deploying the landing gear, it will be successful. The other recommendations are good to have.
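
To put rough numbers on why zeroing the vertical velocity at 250m matters, here is a minimal kinematics sketch. The figures are illustrative only (simple constant-gravity kinematics, not SpaceX data): a stage that is still falling at 250m has only seconds before the deck, while one that arrives there at zero velocity can buy itself a long, slow final descent in which to damp a wobble:

```python
import math

g0 = 9.81   # gravitational acceleration, m/s^2
h = 250.0   # altitude at which vertical velocity is zeroed, m

# Worst case: the stage arrives at 250 m at zero velocity but then
# descends with no net thrust. Free fall from 250 m:
v_impact = math.sqrt(2.0 * g0 * h)
t_fall = math.sqrt(2.0 * h / g0)
print(f"Free fall from {h:.0f} m: {v_impact:.0f} m/s impact after {t_fall:.1f} s")

# Controlled case: thrust nearly balances gravity and the stage descends
# at a steady 10 m/s (an assumed, illustrative rate).
v_descent = 10.0
t_descent = h / v_descent
print(f"Descent at {v_descent:.0f} m/s: {t_descent:.0f} s available to correct any wobble")
```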

I expect SpaceX to be successful by their 3rd try.

Tyler Hayes | Fast Company


“The inherently robotic system begs to be humanized and explained. The first question Taylor had to ask himself was if what Tocchini was attempting was even possible. Could he translate design intention into an algorithm that was always producing new and relevant results—something that satisfied a broad range of needs and desires?” Read more

— Wired

[Image: An autonomous car from Delphi drives on Treasure Island in preparation for a cross-country trip from San Francisco to New York City.]

An autonomous car just drove across the country.

Nine days after leaving San Francisco, a blue car packed with tech from a company you’ve probably never heard of rolled into New York City after crossing 15 states and 3,400 miles to make history. The car did 99 percent of the driving on its own, yielding to the carbon-based life form behind the wheel only when it was time to leave the highway and hit city streets.

This amazing feat, by the automotive supplier Delphi, underscores the great leaps this technology has taken in recent years, and just how close it is to becoming a part of our lives. Yes, many regulatory and legislative questions must be answered, and it remains to be seen whether consumers are ready to cede control of their cars, but the hardware is, without doubt, up to the task. Read More

One of the things that I’ve always liked about Star Trek is the concept of a galaxy-spanning civilization. I would expect that before we ever get to that point, we will have a civilization that spans our solar system. Having a solar-system-spanning civilization has many advantages. It would give us access to resources many times greater than what is found here on Earth. It also provides the opportunity for civilization to expand, and in a worst-case scenario, helps ensure the survival of humanity.

Millions of people living in spacious, environmentally controlled cities on planetary surfaces and in rotating cylinders in free space, with industry that extends from Mercury to the comets, is to me a grand vision worthy of an ambitious civilization. But trying to make that vision a reality will be difficult. The International Space Station has the capacity to house just six people and cost approximately $100B to put in place. With a little simple division, that works out to about $17B per inhabitant! Using that admittedly crude figure, it would cost $17 trillion to build a 1,000-person habitat in Earth orbit. Clearly, the approach we used to build the ISS will not work for building a solar system civilization!
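
The back-of-the-envelope arithmetic is easy to reproduce; here is a minimal sketch using the round numbers quoted above:

```python
iss_cost = 100e9      # approximate ISS cost, dollars
iss_capacity = 6      # permanent crew capacity

cost_per_inhabitant = iss_cost / iss_capacity
print(f"Cost per inhabitant: ${cost_per_inhabitant / 1e9:.1f}B")  # ~$16.7B

habitat_population = 1000
habitat_cost = cost_per_inhabitant * habitat_population
print(f"1,000-person habitat at ISS rates: ${habitat_cost / 1e12:.1f}T")  # ~$16.7T
```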

The ISS model relies on building everything on Earth and launching it into space. A different model, championed by Dr. Philip Metzger, would develop industrial capacity in space, using resources close to home, such as from the Moon. This has the potential to greatly reduce the cost of building and maintaining systems in space. But how do we develop that industrial capacity? Remember, we can’t afford to launch and house thousands of workers from Earth. The answer, it would seem, is advanced robotics and advanced manufacturing.

But is even this possible? The good news is that advanced robotics and advanced manufacturing are already being rapidly developed here on Earth. The driver for this development is economics, not space. These new tools will still have to be modified to work in the harsh environment of space, and with resources that differ from what is commonly used here on Earth. While learning to adapt those technologies to the Moon and elsewhere in the solar system is not trivial, it is certainly better than having to develop them from scratch!

Advanced robots are already having a very positive impact on our economy and play a significant role in what is referred to as next shoring. Next shoring is the move to bring manufacturing closer to the customer. For U.S. customers, that means the return of manufacturing to the U.S. This only makes business sense if the manufacturing costs are as low or lower here, than elsewhere. It is evident that the labor share of manufacturing is lower in those countries that have a high degree of automation. We are getting closer and closer to the cost of materials and transportation being the main drivers in manufactured goods. This is just what we would like to see happen for our solar system civilization. Developing materials and transportation systems from local resources through automation will drive down the cost of opening up the solar system.

While this is great news for developing space, there are implications for our terrestrial civilization. Remember, the push for automation in manufacturing isn’t driven by a space need; it is driven by Earthly economics. That means getting to market faster with a better product at a lower cost. The implication for our terrestrial civilization is twofold. First, manufacturing corporations see greater productivity per employee. This makes sense in that you can increase productivity by adding robots while maintaining or even reducing the workforce. This equates to greater profits and happier shareholders! The downside of that increased productivity, however, is a reduction in job growth, if not an outright decline. The very technologies that will make a solar system civilization possible may very well stifle the job market here.

Are people worried about this? You bet they are! The issue became apparent at the 2015 SXSW Festival in Texas, where a “stop the robots” protest (a staged marketing stunt, as reported in io9) occurred. Much to the surprise of the organizers, the protest generated far more attention than they had planned and was picked up by multiple news organizations. It would seem that this interest is driven by very real personal concerns people have about losing jobs to robots.

Now this is where things get interesting. Economics is driving the development of advanced robots. These advanced robots enable space development but can potentially hurt the job market. From a systems viewpoint, we cannot champion the arrival of advanced robots as a boon for both industry and space development, without considering the potential downside.

People need jobs, and not surprisingly, they particularly like important work. We need to welcome the benefits that advanced robotics bring and at the same time, we need to be proactive in stepping up to the challenges that come along with the benefits. There are lots of ways to address this issue, but it is apparent from the concern shown at SXSW, that this is a conversation that needs to begin sooner rather than later.

We do live in a changing world, and it is changing at an ever-increasing pace. As leaders, we can choose either to react to the changes being brought on by a growing robotic workforce, or to lead the transformation of our institutions to successfully accommodate that change. If we choose to lead, the first step, as identified by Professor John Kotter, is a sense of urgency. That sense seems already to be building. The next steps are to develop a vision for ways to accommodate the necessary changes, and to build the coalitions necessary to implement it. It might be wise to start looking at these steps now.

Like any complex problem, workforce disruption will most likely require multiple approaches to address. With broad discussion, planning and leadership now, we can mitigate the downside issues while enabling a strengthened economy and a bright future among the stars.

Mark

Title: Super Physics for Super Technologies
Subtitle: Replacing Bohr, Heisenberg, Schrödinger & Einstein
Author: Benjamin T Solomon
Paperback: 154 pages
Publisher: Propulsion Physics, Inc. (March 19, 2015)
ISBN-10: 1508948011
ISBN-13: 978-1508948018
Language: English

Publisher’s Link: Super Physics for Super Technologies
Amazon’s Link: Super Physics for Super Technologies

Reviewer’s comments: “Benjamin is the second researcher I have met who has tried to consider a nonsingular cosmology. The first was Christi Stoica, whom I met in 2010.”
Andrew Beckwith PhD

The Objective: This book, Super Physics for Super Technologies, proposes that a new physics exists. The findings are based on 16 years of extensive numerical modeling with empirical data, and are therefore both testable and irrefutable.

The Need: In 2012, Prof. Nemiroff, using Hubble photographs, showed that quantum foam cannot exist. In 2013, Solomon showed that both exotic matter and strings cannot exist. In 2015, the Kavli Foundation, with Prof. Efstathiou, Prof. Pryke and Prof. Steinhardt, discussed the issues raised by the Planck Space Telescope’s finding of a Universe that is significantly simpler than our theories. Hence the need for a new physics.

The Benefits: Replacing the Schrödinger wave function with a simpler probabilistic wave function results in a new electron shell model, based on the Rydberg equation, that gives results exactly matching quantum mechanics. This leads to a new Standard Model and to the unification of photon shielding, transmission and invisibility as the same phenomenon. Solomon’s inference is that any current or future stealth technology can be neutralized.
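
For context, the Rydberg equation that the new shell model is anchored to is standard spectroscopy. This minimal sketch uses the ordinary textbook constants (not the book’s model) to reproduce hydrogen’s shell energies and a well-known spectral line:

```python
R_inf = 1.0973731568e7   # Rydberg constant, 1/m
E_1 = 13.605693          # hydrogen ground-state binding energy, eV

def shell_energy_ev(n):
    """Energy of hydrogen shell n from the Rydberg relation E_n = -E_1 / n^2."""
    return -E_1 / n**2

def transition_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
    inv_wavelength = R_inf * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_wavelength

for n in (1, 2, 3):
    print(f"E_{n} = {shell_energy_ev(n):7.3f} eV")
print(f"Lyman-alpha (2 -> 1): {transition_wavelength_nm(2, 1):.1f} nm")  # ~121.5 nm
```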

The Possibilities: Is it possible to rewrite physics and revise the old, cherished masters? This work is based on extensive numerical modeling of known empirical data, and theorizing from it. Therefore, the answer must be YES.

Acknowledgements: I would like to thank Lockheed for nominating me to the position of Committee Member, Nuclear and Future Flight Propulsion Technical Committee, American Institute of Aeronautics & Astronautics (AIAA).