Will technological singularity affect me? (AI Part 3)

 

In the last two articles we took an introductory look at the use of artificial intelligence in maritime and then, more specifically, in SnP. Whilst we aim to explore both subjects in greater depth in the near future, an obvious question to ask is where the explosive growth in the development of AI could lead, not only in maritime but in all sectors, especially the military. Beyond that, how will it affect individuals? Technological singularity is a term you may already have come across in this context.

In the first article, Maritime meets AI – Part 1, we referred to narrow and general AI. The former is where a machine performs a specific task better than a human: intelligent behaviour, but only for that specific task. The latter is where a machine has the same intelligence level as humans, or better, this time across a wide variety of tasks.

What lies beyond general AI? This is the concept known as technological singularity or “the singularity”.

Whilst still some years away (some say 25 to 30 years, although Elon Musk thinks it could be much less), and subject to much hypothesising as to the form it might take, it’s a subject everyone should be concerned about.

 

Definition – the singularity

In mathematics, a singularity is a point at which a given mathematical object is not defined, for example where a measurable quantity becomes unmeasurable or takes an infinite value; the point at which parallel lines appear to meet is another commonly cited example. In other words, it’s a point where a mathematical object ceases to be well behaved.
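As a simple illustration (our own, rather than one drawn from a formal textbook definition), consider the function

$$ f(x) = \frac{1}{x} $$

As $x$ approaches zero the value of $f(x)$ grows without limit, and at $x = 0$ the function is not defined at all: the point $x = 0$ is a singularity of $f$.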

 

In the context of computing, it’s a point in time, currently only theoretical, when artificial intelligence moves beyond human intelligence and is able to self-replicate and improve itself autonomously: the hypothetical point when computers are able to improve themselves, each iteration leading to yet more cycles of improvement, and when technological growth becomes uncontrollable and irreversible. If and when this point is reached, and although the form it might take is subject to much speculation, the impact it will have on human civilisation will without doubt be unlike anything seen before.

 

The historical background to singularity

The Hungarian-American mathematician and scientist John von Neumann (1903-1957), a pioneer in the many fields in which he worked, is said to have been the first to discuss the concept of the singularity.

Stanislaw Ulam, a fellow mathematician and his colleague on the Manhattan Project during WW2, recalled a conversation he had with von Neumann in 1950 which centred on “the ever accelerating progress of technology and changes in the mode of human life which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.

The mathematician Irving Good (born Isadore Gudak), who worked with Alan Turing at Bletchley Park in WW2, anticipated superhuman intelligence in 1965, writing that “an ultra-intelligent machine could design even better machines” and hence that “the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”.

In December 1993 Vernor Vinge, an American science fiction author as well as a retired Professor of Mathematics at San Diego State University, wrote “The Coming Technological Singularity: How to Survive in the Post-Human Era”. Vinge argued that “we are on the edge of change comparable to the rise of human life on Earth” and predicted that this would occur between 2005 and 2030.

In his 2005 book “The Singularity Is Near: When Humans Transcend Biology”, Ray Kurzweil, an American inventor and futurist (who was hired by Google in 2012), wrote about the exponential growth of technology, culminating in the singularity, the point at which “human life will be irreversibly transformed” and the intelligence of humans and that of machines will merge.

 

Some more recent thoughts on singularity

The late Professor Stephen Hawking, whilst supportive of the uses of basic AI and the benefits it can bring, expressed his fear that the development of full artificial intelligence, intelligence that matches or exceeds that of humans, could spell the end of the human race, “taking off on its own, and re-designing itself at an ever increasing rate”. Humans, he said, could not compete and would be superseded, and he warned that AI is potentially more dangerous than nuclear weapons.

 

Another well known figure to have expressed concerns about full AI is Elon Musk, who claims the threat will arrive sooner than expected, warning it could be as close as 2025. In a famous quote he said that humans risk being treated like house pets by AI unless we develop technology that connects brains to computers. His company Neuralink is seeking to achieve this by creating an interface between the two.

In 2015 Elon Musk donated US$10m to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity. In the same year he co-founded OpenAI, with the goal of developing and promoting “friendly AI”. He has since stepped away, although the parting was said to be on good terms.

The Future of Life Institute has acknowledged the opportunities that AI can bring but, referencing the concerns, has published a set of principles that it believes should guide the development of AI.

Bill Gates has also made his position clear: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” Whilst not trying to hold back development, he says his aim is to raise awareness of the issues involved.

 

A hard or soft take-off?

How AI will develop remains to be seen, but there are two main theories as to how the transition might happen: a hard take-off or a soft take-off, with AI take-off defined by Matthew Barnett as “the dynamics of the world associated with the development of powerful artificial intelligence”.

With a hard take-off artificial general intelligence self-improves with great speed (possibly measured in minutes, hours or days), moving too quickly for humans to control, surpassing human intelligence and taking control in a very short space of time. Essentially, an accelerating rate of change that cannot be turned off.

With a soft take-off, on the other hand, the artificial intelligence eventually becomes just as powerful, but the pace is much slower (possibly over years or decades), permitting human intervention and thus allowing development to be controlled.
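To make the distinction concrete, the sketch below is a deliberately simple toy model (our own construction, using entirely made-up numbers; it is not taken from Barnett or any of the authors quoted above) in which a system’s capability compounds by a fixed percentage on each cycle of self-improvement. The only difference between the two scenarios is how much is gained per cycle, yet after the same number of cycles the outcomes diverge enormously.

```python
# Toy illustration only (our own, with hypothetical numbers): a "hard" versus a
# "soft" take-off modelled as compounding self-improvement at different rates.
# Capability starts at 1.0 (roughly "human level") in both scenarios.

def capability_over_time(improvement_per_step: float, steps: int) -> list[float]:
    """Each cycle, the system improves itself in proportion to its current capability."""
    capability = 1.0
    history = [capability]
    for _ in range(steps):
        capability *= (1.0 + improvement_per_step)
        history.append(capability)
    return history

# Hard take-off: large gains per cycle, cycles measured in hours or days.
hard = capability_over_time(improvement_per_step=0.50, steps=20)

# Soft take-off: small gains per cycle, cycles measured in months or years.
soft = capability_over_time(improvement_per_step=0.05, steps=20)

print(f"After 20 cycles: hard take-off ~{hard[-1]:.0f}x, soft take-off ~{soft[-1]:.1f}x")
# After 20 cycles: hard take-off ~3325x, soft take-off ~2.7x
```

The numbers themselves are arbitrary; the point is that fast compounding leaves little or no time for human oversight, whereas slow compounding leaves room to intervene.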

 

Just as there is much hypothesising about how long it might take for general AI to arrive, and indeed about whether machines will ever be able to achieve human-level intelligence at all, there are many differing opinions on whether we will see a hard or a soft take-off.

AI has the potential to hugely benefit society in so many ways, from increasing efficiency through to predicting and solving problems. Our lifestyles can be enhanced and healthcare improved, at a speed never before possible. Clearly, it’s a subject we need to think seriously about, and work towards ensuring a soft take-off, doing all we can to reduce the risk of the hard version – which represents a potential threat to civilisation.

 

Is technological singularity inevitable?

Computing power is growing exponentially, AI is getting faster and smarter, and this acceleration appears to be unavoidable. Human intelligence, on the other hand, is relatively fixed. Does this mean that technological singularity is inevitable?
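To see why the exponential part of that statement matters, a little stylised arithmetic helps (the two-year doubling period is an illustrative assumption on our part, not a measured figure). If machine capability doubles every two years while human intelligence stays flat, then over twenty years the gap grows by a factor of

$$ 2^{10} = 1024 \approx 1{,}000 $$

and over forty years by a factor of roughly a million. Against a fixed baseline, any steady doubling eventually overtakes it.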

Given the above, it seems reasonable to conclude that at some point in time the intelligence of machines will surpass our own. According to most experts the singularity will happen; beyond the question of what it might look like, the other question is when. Estimates range from the next five years to around the end of this century.

Other experts argue that general artificial intelligence will never exist, one factor being the belief that the current rate of growth in computing power is not sustainable; a further objection is that it will always be impossible to model the human brain. As for the computing power argument, the counter comes from the advances being made in quantum computing, seen by many as the route by which those obstacles will eventually be overcome.

 

Is it possible to control the development of AI?

It’s probably not an exaggeration to say that, at the present time, AI development is out of control. Indeed, some of the development taking place is without doubt dangerous. The technology on its own cannot tell right from wrong: it has no emotions, conscience or values. It must therefore be guided in a direction that is both safe and useful. But how can this be done?

 

One suggested route is through coding – in other words, by means of an algorithm that ensures no harm can come to humans. The downside is that a truly intelligent AI will be writing its own code. If it regards a piece of code as self-limiting, something put there purely to control it, the chances are that it will rewrite or delete whatever it considers is not serving its best interests.

Another proposed solution is to isolate the technology from the internet, preventing its spread; in other words, controlling what it learns. However, even if this were possible, the result could not be considered truly intelligent. Its capability would be restricted, undermining the very goal for which it was created, namely to become a source of superior intelligence.

Trying to place restrictions on its development seems futile – for example we can expect military applications to be taking place in secret, the risks of which are obvious.

So what can be done? There certainly need to be more public and political discussions, before much more time passes, about the risks, moral principles and legalities of AI, all of which will require agreement on how they should be managed. If there is to be regulation, it needs to cover standards and testing protocols customised for specific industries, and due consideration must also be given to cybersecurity. The challenges in doing so are significant, first in terms of the manpower and expertise required and, not least, in getting nations to agree on goals and processes – not promising if recent experience on nuclear proliferation is anything to go by.

The challenge almost seems great enough to be insurmountable. However, given the consequences for mankind, and with the stakes so high, it is, much like climate change, a challenge that needs to be addressed without much more delay.

 
 

A few words about CompassAir


Creating solutions for the global maritime sector, CompassAir develops state of the art messaging and business application software designed to maximise ROI. Our software is used across the sector, including by Sale and Purchase brokers (S&P/SnP), Chartering brokers, Owners, Managers and Operators.

 

Through its shipping and shipbroking clients, ranging from recognised world leaders through to the smallest, most dynamic independent companies, CompassAir has a significant presence in the major maritime centres throughout Europe, the US and Asia.

 

Our flagship solution is designed to simplify collaboration for teams within and across continents, allowing access to group mailboxes at astounding speed using tools that remove the stress from handling thousands of emails a day. It can be cloud based or on premise. To find out more contact solutions@thinkcompass.io. If you are new to shipping, or just want to find out more about this exciting and challenging sector, the CompassAir Shipping Guide might prove to be an interesting read.

 

Contact us for more information or a short demonstration on how CompassAir can benefit your business, and find out how we can help your teams improve collaboration and increase productivity.