Professor Brian Cox discusses aliens and the risks of tech advances

In an entertaining discussion at VMworld, Brian Cox raised questions over why technologically advanced societies don’t exist elsewhere in the universe

Discussing the idea of whether it is possible to build a general-purpose artificial intelligence (AI) that is as intelligent as people, Professor Brian Cox drew a parallel with one of the most challenging ideas in cosmology – the Fermi Paradox, which asks the question: “Where are the aliens?”

Speaking at a panel discussion during the VMworld conference in Barcelona, Cox said: “There are two trillion galaxies in the universe, so it is incredible that there isn’t life out there. Most physicists think it will be impossible to travel between galaxies, but we estimate that one in 20 stars in our own Milky Way has Earth-like planets. Why don’t we see any evidence of life?”

Cox said scientists have predicted that, within 50 years, a technological civilisation will be able to develop machines that can replicate themselves. “If this replication is already happening across the universe,” he said, “then why don’t we see any signs of these machines?”

Cox said the Milky Way has 400 billion stars and has existed for 13.5 billion years, whereas intelligent life on Earth has existed for only about a million years. Given the age of the universe, and the high probability that intelligent life should exist somewhere out there, evidence of an alien civilisation should be detectable. “Imagine what we could do in a million years,” he said. “Why have we not seen interstellar travel?”
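Combining the figures quoted above – roughly 400 billion stars in the Milky Way, with an estimated one in 20 hosting Earth-like planets – gives a sense of the scale behind the paradox. The back-of-the-envelope calculation (using only the numbers reported in this article) can be sketched as:

```python
# Figures as quoted in the article (estimates, not precise measurements).
stars_in_milky_way = 400e9     # roughly 400 billion stars
earthlike_fraction = 1 / 20    # one in 20 stars with Earth-like planets

# Implied number of Earth-like planets in our galaxy alone.
earthlike_planets = stars_in_milky_way * earthlike_fraction
print(f"~{earthlike_planets:.0e} Earth-like planets")  # about 20 billion
```

Even with such a large number of candidate worlds, we see no evidence of technological civilisations – which is precisely the tension the Fermi Paradox describes.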

The question, for Cox, is whether a technologically advanced civilisation can be sustained, or whether it would ultimately destroy itself – which may be why there is no evidence of such civilisations elsewhere in the universe. “Maybe the challenges of knowledge such as the acquisition of nuclear weapons and AI means that civilisations don’t last long enough,” he said.

Read more about AI and ethics

  • The House of Lords report on AI and the UK economy and society came out this week, with the guardedly bullish title: “AI in the UK: ready, willing and able?”
  • For AI to improve our lives, it needs to reflect the real world – but regulating algorithms to behave as we would like them to risks introducing an unreality that makes them ineffective.

During the debate, Cox discussed a meeting he had had with former Google chief Eric Schmidt. “The issue of allowing algorithms to make decisions has a moral component,” he said. “Does the [autonomous] car protect the driver or the pedestrian? You can code a car to protect the driver at all costs, but AI controls require us to face philosophical dilemmas that we don’t normally face. You can’t just neatly hand over AI to do the stuff you don’t want to do.”

Returning to general-purpose AI, Cox said: “Consciousness is a hard problem because we don’t know how it works. But, in principle, there is no reason why we can’t build an artificial brain.”
