Singularity 2028?

User avatar
Tom Mazanec
Posts: 4180
Joined: Sun Sep 21, 2008 12:13 pm

Singularity 2028?

Post by Tom Mazanec »

http://tib.matthewclifford.com/issues/t ... issue-card
Four weeks ago I wrote that this Metaculus prediction for the advent of AGI had in two weeks moved forward eight years, to 2035. Today it stands at 2028.
“Hard times create strong men. Strong men create good times. Good times create weak men. And, weak men create hard times.”

― G. Michael Hopf, Those Who Remain

John
Posts: 11479
Joined: Sat Sep 20, 2008 12:10 pm
Location: Cambridge, MA USA
Contact:

Re: Singularity 2028?

Post by John »

** 18-May-2022 World View: Singularity
> "Second, I continue to find it odd how little
> mainstream coverage there is of AI progress and AI safety issues
> in a period where things seem to be moving very fast (another
> example). Of course, there is a smart and credible crowd of people
> who are skeptical that models like DALL-E and Gato tell us anything
> about AGI. See this Gary Marcus piece for a good example or this
> tweet from a DeepMind researcher. But it seems to me you’d have to
> be almost certain these results mean nothing for them to warrant
> so little attention (and even very smart people’s AI predictions
> often turn out to be wrong). Four weeks ago I wrote that this
> Metaculus prediction for the advent of AGI had in two weeks moved
> forward eight years, to 2035. Today it stands at 2028."

> http://tib.matthewclifford.com/issues/t ... re-1180523
Yikes!

User avatar
Tom Mazanec
Posts: 4180
Joined: Sun Sep 21, 2008 12:13 pm

Re: Singularity 2028?

Post by Tom Mazanec »

Yikes indeed!

Prepare for arrival: Tech pioneer warns of alien invasion
Louis Rosenberg, Unanimous A.I.
May 14, 2022 6:40 AM
https://venturebeat.com/2022/05/14/prep ... -invasion/
An alien species is headed for planet Earth and we have no reason to believe it will be friendly. Some experts predict it will get here within 30 years, while others insist it will arrive far sooner. Nobody knows what it will look like, but it will share two key traits with us humans – it will be intelligent and self-aware.

No, this alien will not come from a distant planet – it will be born right here on Earth, hatched in a research lab at a major university or large corporation. I am referring to the first artificial general intelligence (AGI) that reaches (or exceeds) human-level cognition.

As I write these words, billions are being spent to bring this alien to life, as it would be viewed as one of the greatest technological achievements in human history. But unlike our other inventions, this one will have a mind of its own, literally. And if it behaves like every other intelligent species we know, it will put its own self-interests first, working to maximize its prospects for survival.

AI in our own image
Should we fear a superior intelligence driven by its own goals, values and self-interests? Many people dismiss this question, believing we will build AI systems in our own image, ensuring they think, feel and behave just like we do. This is extremely unlikely to be the case.

Artificial minds will not be created by writing software with carefully crafted rules that make them think like us. Instead, engineers feed massive datasets into simple algorithms that automatically adjust their own parameters, making millions upon millions of tiny changes to their structure until an intelligence emerges – an intelligence with inner workings that are far too complex for us to comprehend.
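
As a concrete illustration of the paragraph above, here is a minimal, hypothetical sketch of that parameter-adjustment process: plain gradient descent on a toy two-parameter model. The dataset, variable names, and learning rate are illustrative assumptions, not anything from the article, and a real system adjusts billions of parameters rather than two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: noisy samples of y = 3x + 1, a stand-in for the
# "massive datasets" the article mentions. (Illustrative assumption.)
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=200)

# Two randomly initialized parameters; a real model has billions.
w, b = rng.normal(), rng.normal()
lr = 0.1  # learning rate: the size of each tiny adjustment

for step in range(1000):
    y_hat = w * x + b                  # the model's current guess
    err = y_hat - y
    # Gradient of the mean squared error with respect to each parameter.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    # The "millions upon millions of tiny changes": nudge each
    # parameter a small step downhill.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f} (true values: 3.00, 1.00)")
```

Even in this tiny example, nobody writes the rule y = 3x + 1 into the program; the algorithm recovers it through repeated small corrections, which is the author's point at scale.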

And no – feeding it data about humans will not make it think like humans do. This is a common misconception – the false belief that by training an AI on data that describes human behaviors, we will ensure it ends up thinking, feeling and acting like we do. It will not.

Instead, we will build these AI creatures to know humans, not to be human. And yes, they will know us inside and out, able to speak our languages and interpret our gestures, read our facial expressions and predict our actions. They will understand how we make decisions, for good and bad, logical and illogical. After all, we will have spent decades teaching AI systems how we humans behave in almost every situation.

But profoundly different
But still, their minds will be nothing like ours. To us, they will seem omniscient, linking to remote sensors of all kinds, in all places. In my 2020 book, Arrival Mind, I portray AGI as “having a billion eyes and ears,” for its perceptual abilities could easily span the globe. We humans can’t possibly imagine what it would feel like to perceive our world in such an expansive and holistic way, and yet we somehow presume a mind like this will share our morals, values, and sensibilities. It will not.

Artificial minds will be profoundly different from any biological brains we know of on Earth – from their basic structure and functionality to their overall physiology and psychology. Of course, we will create human-like bodies for these alien minds to inhabit, but they will be little more than robotic façades to make ourselves feel comfortable in their presence.

In fact, we humans will work very hard to make these aliens look like us and talk like us, even smile and laugh like us, but deep inside they will not be anything like us. Most likely, their brains will live in the cloud (fully or partially) connected to features and functions both inside and outside the humanoid forms that we personify them as.

Still, the façade will work – we will not fear these aliens – not the way we would fear creatures speeding toward us in a mysterious starship. We may even feel a sense of kinship, viewing them as our own creation, a manifestation of our own ingenuity. But if we push those feelings aside, we start to realize that an alien intelligence born here is far more dangerous than those that might come from afar.

The danger within
After all, an alien mind built here will know everything about us from the moment it arrives, having been designed to understand humans inside and out – optimized to sense our emotions and anticipate our actions, predict our feelings, influence our beliefs and sway our opinions. If creatures speeding toward us in sleek silver spaceships had such deep knowledge of our behaviors and tendencies, we’d be terrified.

Already AI can defeat our best players at the hardest games on Earth. But really, these systems don’t just master the games of chess, poker and Go, they master the game of humans, learning to accurately forecast our actions and reactions, anticipating our mistakes and exploiting our weaknesses. Researchers around the world are already developing AI systems to out-think us, out-negotiate us and out-maneuver us.

Is there anything we can do to protect ourselves?

We certainly can’t stop AI from getting more powerful, as no innovation has ever been contained. And while some are working to put safeguards in place, we can’t assume it will be enough to eliminate the threat. In fact, a poll by Pew Research indicates that few professionals believe the industry will implement meaningful “ethical AI” practices by 2030.

So how can we prepare for arrival?

The best first step is to realize that AGI will happen in the coming decades and it will not be a digital version of human intelligence. It will be an alien intelligence as foreign and dangerous as if it came from a distant planet.

Bringing urgency to artificial intelligence ethics
If we frame the problem this way, we might address it with urgency, pushing to regulate AI systems that monitor and manipulate the public, sensing our emotions and anticipating our behaviors. Such technologies may not seem like an existential threat today, as they’re mostly being developed to optimize the effectiveness of AI-driven advertising, not to facilitate world domination. But that doesn’t diminish the danger – AI technologies designed to analyze human sentiments and influence our beliefs can easily be used against us as weapons of mass persuasion.

We should also be more cautious when automating human decisions. While it’s undeniable that AI can assist in effective decision-making, we should always keep humans in the loop. This means using AI to enhance human intelligence rather than working to replace it.

Whether we prepare or not, alien minds are headed our way and they could easily become our rivals, competing for the same niche at the top of the intellectual food chain. And while there’s an earnest effort in the AI community to push for safe technologies, there’s also a lack of urgency. That’s because too many of us wrongly believe that a sentient AI created by humanity will somehow be a branch of the human tree, like a digital descendant that shares a very human core.

This is wishful thinking. It is more likely that a true AGI will be profoundly different from us in almost every way. Yes, it will be remarkably skilled at pretending to be human, but beneath a people-friendly façade, each one will be a rival mind that thinks and feels and acts like no creature we have ever met on Earth. The time to prepare is now.

Louis Rosenberg, PhD is a technology pioneer in the fields of VR, AR and AI. He is known for developing the first augmented reality system for the US Air Force in 1992, for founding the early virtual reality company Immersion Corp (Nasdaq IMMR) in 1993, and founding the early AR company Outland Research in 2004. He is currently founder & CEO of Unanimous AI.

On the other hand:

May 21, 2022
Artificial Intelligence and the Limits of Science
By David Weinberger
https://www.americanthinker.com/article ... ience.html
There exists a tendency today to put too much faith in science. Consider, for example, the currently popular belief that scientific advances in Artificial Intelligence (AI) may one day enable robots to become “intelligent.” This idea is deeply problematic, and it is rooted in a fundamental belief that science is capable of explaining everything there is to know about the human person and about reality. But it is not.

For one thing, the idea that science can explain everything about us and about the world is not itself provable by science, since no empirical test can demonstrate it. Thus, it is really a philosophical position posing as a scientific one. Furthermore, science itself rests on philosophical foundations.

For example, science assumes the existence of the external world. This is not something science is capable of demonstrating, but something it must take for granted. Why? Because conducting an experiment to “prove” it presupposes the very data it is attempting to explain: Any instruments that might be used to test it are themselves parts of the external world, so they can be used only if they are first assumed to exist, which is the very issue the experiment is designed to test. Hence science cannot prove that the physical world is real, but must instead take it as a given.

Second, science assumes that the world is intelligible. Consider that the scientific method involves formulating hypotheses and then proceeding to test them. In other words, scientists first assume there are meaningful answers “out there” for discovery, and then they look for them. If they did not assume this, if they instead believed that nature was utterly incomprehensible, there would be no science. The comprehensibility of the natural world has enabled the rise of science, not the other way around.

Furthermore, not only does scientific inquiry take the intelligibility of reality for granted, it also presupposes that the human mind is capable of grasping it. Consider that, like the examples above, science cannot “prove” this assumption because it cannot test the capacity of the mind without first making use of the mind. Thus, it is a philosophical foundation science must accept before scientific investigation can get off the ground.

Now this assumption is worth dwelling on, because it is pregnant with implications that run counter to the idea that AI might one day become “intelligent.” That view is predicated on the belief that the mind is nothing but a collection of particles banging around inside our skull. In other words, the mind is nothing more than the material brain. But the fact that we can grasp intelligible things means that the mind must be more than that. If it were not, if the mind were reducible to random physical motions in our heads, we would have no reason to believe it because any argument for it would itself be the product of nonrational forces and therefore meaningless. It would be equivalent to saying that triangles smell pleasant.

But there are other reasons to suppose that the mind is more than the brain. Consider, for instance, what it means to grasp the intelligibility of nature. Our intellect is capable of abstracting the universal essences or natures of things -- the “whatness” of things -- and it is difficult to see how this power could come from the material world. Here is why.


Every physical thing in the world -- meaning everything we ever experience -- is particular. For example, any triangle we observe in daily life is drawn at a certain time, and in a particular color, and in a certain size, and in a specific location, and with unique imperfections. It is thus a particular triangle. But the nature of a triangle -- or triangularity -- is what philosophers call a universal. All particular triangles share the universal essence of a triangle (triangularity), otherwise they would not be triangles. This means that universals transcend time, color, location, size, and physical imperfection. In other words, they are immaterial.

Now here is the important part. Our minds are capable of grasping universals. Not only can we consider a particular triangle, but we can abstract from it to apprehend its universal nature (triangularity). This poses a problem for the “material” understanding of the mind. If that view is correct, if the mind is nothing more than the brain, how can a particular material brain produce universal immaterial content? How does a system (such as a brain) with particular size, location, duration, and physical imperfection produce that which transcends size, location, time and physical imperfection?

This is no small problem. One may be tempted to think that, with enough complexity, a material system such as an AI might in principle be able to generate universal content. But this belief commits a category error. For we are wondering about the particular versus the universal, the material versus the immaterial, and not noncomplexity versus complexity, so it is difficult to see how a particular material system, no matter how complex, could ever produce universal immaterial content. Even immensely complex particular material systems would seem capable of producing only immensely complex particular material content. Indeed, believing that such systems are so capable is akin to believing that with enough white bricks arranged into a sophisticated enough pattern, a red building might result. But of course, no matter how many there are or how intricately they may be arranged, white bricks simply cannot produce a red building.

That is why we have good reason to believe that the mind uses but is not caused by the brain, and that the latter is a necessary though insufficient condition for the operations of the former. That is also why it seems doubtful at best, if not impossible in principle, that any AI, no matter how advanced, will ever become “intelligent.”

But as should be clear, these are philosophical issues about which science has little to say. And that is no criticism. On the contrary, science can tell us a tremendous amount about the physical world. But we must not fool ourselves into believing that science has the only, or even the final, word on important matters concerning the nature of our world, the human person and reality itself. As the foregoing makes clear, there is a lot that science cannot answer, and much that philosophy can.

David formerly worked at a public policy institution. Follow him on Twitter @DWeinberger03. Email him at davidweinberger916@gmail.com
“Hard times create strong men. Strong men create good times. Good times create weak men. And, weak men create hard times.”

― G. Michael Hopf, Those Who Remain
