Reddit reviews Superintelligence: Paths, Dangers, Strategies

We found 41 Reddit comments about Superintelligence: Paths, Dangers, Strategies. Here are the top ones, ranked by their Reddit score.

Superintelligence: Paths, Dangers, Strategies
Oxford University Press

41 Reddit comments about Superintelligence: Paths, Dangers, Strategies:

u/Scarbane · 70 points · r/Futurology

He should brush up on his knowledge about general AI. Nick Bostrom's Superintelligence is a good starting place, even though it's already a few years old.

I recommend the rest of you /r/Futurology people read it, too. It'll challenge your preconceived notions of what to expect from AI.

u/ytterberg_ · 18 points · r/changemyview

The problem is AI alignment: how do we make sure that the AI wants good stuff like "acting like a neutral arbiter" and not bad stuff like "world domination"? This turns out to be a very hard question, and a lot of very smart people believe that a superintelligence would destroy humanity unless we are very, very careful. Bostrom's Superintelligence is a good introduction to this topic.

> The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

If you don't have the time for the book, this FAQ is good:

> 4: Even if hostile superintelligences are dangerous, why would we expect a superintelligence to ever be hostile?

> The argument goes: computers only do what we command them; no more, no less. So it might be bad if terrorists or enemy countries develop superintelligence first. But if we develop superintelligence first there’s no problem. Just command it to do the things we want, right?

> Suppose we wanted a superintelligence to cure cancer. How might we specify the goal “cure cancer”? We couldn’t guide it through every individual step; if we knew every individual step, then we could cure cancer ourselves. Instead, we would have to give it a final goal of curing cancer, and trust the superintelligence to come up with intermediate actions that furthered that goal. For example, a superintelligence might decide that the first step to curing cancer was learning more about protein folding, and set up some experiments to investigate protein folding patterns.

> A superintelligence would also need some level of common sense to decide which of various strategies to pursue. Suppose that investigating protein folding was very likely to cure 50% of cancers, but investigating genetic engineering was moderately likely to cure 90% of cancers. Which should the AI pursue? Presumably it would need some way to balance considerations like curing as much cancer as possible, as quickly as possible, with as high a probability of success as possible.

> But a goal specified in this way would be very dangerous. Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical AI is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways.

> If your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world. This satisfies all the AI’s goals. It reduces cancer down to zero (which is better than medicines which work only some of the time). It’s very fast (which is better than medicines which might take a long time to invent and distribute). And it has a high probability of success (medicines might or might not work; nukes definitely do).

> So simple goal architectures are likely to go very wrong unless tempered by common sense and a broader understanding of what we do and do not value.
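
To make the FAQ's point concrete, here is a deliberately toy Python sketch of that three-term goal. Everything in it is invented for illustration (the strategies, the numbers, and the scoring rule are assumptions, not anything from the book or the FAQ); it just shows how a maximizer that only "sees" cancer reduction, speed, and probability of success ranks the catastrophic strategy highest.

```python
# Toy model of the FAQ's "cure cancer" goal: three considerations only.
# All strategies and numbers below are hypothetical, chosen for illustration.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    cancer_reduction: float     # fraction of cancer eliminated (0.0 to 1.0)
    years_to_complete: float    # expected time to finish
    success_probability: float  # chance the strategy works at all

def naive_score(s: Strategy) -> float:
    """Expected cancer reduction per year: all a three-term goal 'sees'."""
    return s.cancer_reduction * s.success_probability / s.years_to_complete

strategies = [
    Strategy("investigate protein folding", 0.50, 10.0, 0.9),
    Strategy("investigate genetic engineering", 0.90, 10.0, 0.5),
    # The "comical" failure mode from the FAQ: no humans, no cancer.
    Strategy("launch every nuclear missile", 1.00, 0.1, 0.99),
]

# Nothing in the goal encodes the thousands of other things humans value,
# so the maximizer happily picks the catastrophic option.
for s in sorted(strategies, key=naive_score, reverse=True):
    print(f"{naive_score(s):7.3f}  {s.name}")
print("chosen:", max(strategies, key=naive_score).name)
```

Run it and the missile strategy scores 9.900 against 0.045 for either research path; the fix is not better numbers but a goal that actually encodes what we value.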

u/Jetbooster · 12 points · r/Futurology

Why would it care if the goal we gave it didn't actually align with what we wanted? It has no reason to care unless these things were explicitly coded in, and as I said, morality is super hard to code into a machine.

To address your second point: I understand my example wasn't perfect, but say it understands that the more physical material a company controls, the more assets it has. So it lays claim to the entire universe and sets out to control it. Eventually, it is the company, and growing the company's assets just requires it to have more processing power. Again, it is an illustrative point, loosely derived from my reading of Superintelligence by Nick Bostrom. I would highly recommend it.

u/lowlandslinda · 11 points · r/Futurology

Musk keeps up with what the philosopher Nick Bostrom writes. That's also how he knows about simulation theory, which Bostrom popularised. And lo and behold, Bostrom also has a paper and a book on AI.

u/Philipp · 9 points · r/Showerthoughts

For a deeper look into the subject of what the AI may want, which goes far beyond "it clearly won't harm us" / "it clearly will kill us", I recommend Superintelligence by Nick Bostrom. Fantastic book!

u/mastercraftsportstar · 8 points · r/ShitPoliticsSays

I don't even think we'll get that far. I honestly believe that once we create proper A.I. it will snowball out of control in a matter of months and turn against us. Their communist plans are a mere fever dream when it comes to A.I.: "Well, if the robots are nice to us, don't destroy the human species, and are actually subservient to us, then our communist fever dream could work."

Yeah, okay, it's like trying to decide whether you want chicken or fish for the in-flight meal while the plane is going down.

I recommend reading Superintelligence if you want more theories about it.

u/ringl-bells · 8 points · r/technology

Everyone should read Superintelligence by Nick Bostrom.

Non-affiliate Amazon link: Superintelligence: Paths, Dangers, Strategies

u/IlluminateTruth · 7 points · r/technology

The Swedish philosopher Nick Bostrom wrote a book called Superintelligence that covers much of this topic. I'd recommend it to anyone, as it's not technical at all.

He maintains a strong position that the dangers of AI are many and serious, possibly existential. Finding solutions to these problems is an extremely arduous task.

u/kcin · 6 points · r/programming

Actually, it's a pretty comprehensive take on the subject: http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/

u/coHomerLogist · 5 points · r/math

>I didn't say it was correct but it makes it more likely that people will dismiss it out of hand.

That's fair, I agree. It's just frustrating: there are so many strawman arguments related to AI that a huge number of intelligent people dismiss the issue outright. But if you actually look into it, it's a deeply worrying issue, and the vast majority of people who actually engage with the good arguments are pretty damn concerned.

I would be very interested if anyone can produce a compelling rebuttal to the main points in Superintelligence, for instance. I recommend this book very highly to anyone, but especially people who wonder "is AI safety just bullshit?"

>Especially when those people get significant amounts of funding

Numerically speaking, this is inaccurate. Cf. this article.

u/Mohayat · 3 points · r/ElectricalEngineering

Read Superintelligence by Nick Bostrom; it answered pretty much all the questions I had about AI, and I learned a ton of new things from it. It's not too heavy on the math, but there is a lot of info packed into it. Highly recommend it.

u/j4nds4 · 3 points · r/elonmusk

>Really? It's still their opinion, there's no way to prove or disprove it. Trump has an opinion that global warming is faked but it doesn't mean it's true.

From my perspective, you have that analogy flipped. Even if we run with it, it's impossible to ignore the sudden dramatic acceleration in AI capability and accuracy over just the past few years, just as it is with the climate. Even the CEO of Google was caught off-guard by the sudden acceleration within his own company. Scientists also claim that climate change is real and that it's an existential threat; should we ignore them because they can't "prove" it? What "proof" can be provided for the future? None, so you predict based on the trends. And their trend lines have a lot of similarities.

>Also, even if it's a threat (I don't think so, but let's assume it is), how would putting it in your brain help? That's kind of ridiculous. Nowadays you can turn your PC off or even throw it away. You won't be able to do that once it's in your brain. Also, what if the chip decides to take control over your arms and legs one day? It's insane to say that AI is a threat but plan to put it inside humans' brains. AI will change your perception input and you will be thinking you are living your life, but in reality you will be sitting in a cell somewhere. Straight up some Matrix stuff. Don't want that.

The point is that, in a hypothetical world where AI becomes so intelligent and powerful that you are effectively an ant in comparison, both in intelligence and influence, a likely outcome is death just as it is for billions of ants that we step on or displace without knowing or caring; think of how many species we humans have made extinct. Or if an AI is harnessed by a single entity, those controlling it become god-like dictators because they can prevent the development of any further AIs and have unlimited resources to grow and impose. So the Neuralink "solution" is to 1) Enable ourselves to communicate with computer-like bandwidth and elevate ourselves to a level comparable to AI instead of being left in ant territory, and 2) make each person an independent AI on equal footing so that we aren't controlled by a single external force.

It sounds creepy in some ways to me too, but an existential threat sounds a lot worse. And there's a lot of potential for amazement as well. Just like with most technological leaps.

I don't know how much you've read on the trends and future of AI. I would recommend Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies", but it's quite lengthy and technical. For a shorter thought experiment, look up the Paperclip Maximizer scenario.

Even if the threat is exaggerated, I see no problem with creating this if it's voluntary.

u/maurice_jello · 3 points · r/elonmusk

Read Superintelligence. Or check out Bostrom's TED talk.

u/Liface · 2 points · r/ultimate

You're strawmanning. I am not insinuating that we should not protest or report human rights violations and social injustice — simply that identity politics is being used as a distraction by, well, both parties, but most annoyingly by the left, and is disproportionately represented in people's minds and the mainstream media due to human cognitive biases.

Also, your use of scare quotes around "artificial intelligence risk" suggests to me that you lack information and context. Not surprising, given that the issue is often treated as a joke in public discourse.

I recommend informing yourself with at least a basic overview, and then you're free to form your own opinions. Nick Bostrom's Superintelligence is a good primer.

u/[deleted] · 2 points · r/artificial

I highly recommend this book if you want to read up on some thought experiments around AGI. Spoiler alert: not great for mankind.

It's easy to come up with a lot of different ways an AGI plays out if one of its main goals is to save the environment (along with some other reasonable assumptions about its ability to navigate the world).

There is lots of low-hanging fruit humanity could pick tomorrow to dramatically help the planet, but we are all selfish assholes, so we don't.

An AGI would/could basically pull us into a post-scarcity economy by automating everything. It could then stick-and-carrot humanity into not destroying the planet.

Not eating meat and eliminating private car ownership would go a long way toward saving the environment. Throw in free birth control and paying people not to have kids, and population growth is taken care of.

But like the other commenter says, we just don't know.

u/skepticalspectacle1 · 2 points · r/worldnews

I'd HIGHLY recommend reading Superintelligence. http://www.amazon.com/gp/aw/d/0198739834/ref=tmm_pap_title_0?ie=UTF8&qid=&sr= The approaching singularity event is maybe the worst thing to ever happen to mankind... Fascinating read!

u/Gars0n · 2 points · r/suggestmeabook

Superintelligence by Nick Bostrom seems to be just what you are looking for. It straddles the line between being too technical for someone with no background knowledge and accessible enough for people who are already interested in this kind of thing. The book is quite thorough in its analysis, providing a clear map of potential futures and reasons to worry, but also hope.

u/wufnu · 2 points · r/news

I think the problem is that AIs will not think like us in any way imaginable, and what is reasonable to us may be irrelevant to an AI. There are literally hundreds of books about this, describing in excruciating detail the many, many thousands of ways we can fuck it up, with nobody even getting close to anything approaching a foolproof way of getting it right. The problem is, any screw-up will be catastrophically bad for us. Here's a cheap, easy-to-understand (if a bit dry) book that will cover the basics, if you're really interested.

u/Havelok · 2 points · r/teslamotors

Read this and you'll understand why he's so sober about it: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834

u/funkypunkydrummer · 2 points · r/intj

Yes, I believe it is very possible.

After reading [Superintelligence](https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/ref=sr_1_1?s=books&ie=UTF8&qid=1479779790&sr=1-1&keywords=superintelligence), I think it very likely that we will have whole brain emulation as a precursor to true AI. If we are on that path, it makes sense that we would run tests in order to remove AI as an existential threat to humanity. We would need to run these tests in real, life-like simulations that run continuously and without detection by the emulations themselves, in order to be sure we will have effective AI controls.

Not only could humans run these emulations in the future (past? present?), but the superintelligent agent itself may run emulations that enable it to test scenarios that would help it achieve its goals. By definition, a superintelligent agent would be smarter than humans, and we would not be able to detect or possibly even understand the level of thinking such an agent would have. It would essentially be our God, with as much intellectual capacity beyond us as we have above ants. Time itself could run at nanosecond speeds for the AI, given enough computational resources, while we experience it as billions of years.

So who created the AI?
Idk, but that was not the question here...

u/subdep · 2 points · r/Futurology

If you are seriously interested in this subject, you must read Nick Bostrom's book: Superintelligence: Paths, Dangers, Strategies https://www.amazon.com/dp/0198739834/ref=cm_sw_r_cp_api_1PpMAbJBDD4T0

u/TrumpRobots · 2 points · r/artificial

If you haven't already, read this book. It really made me realize that there are very few paths forward for humanity if an AI comes online.

u/antiharmonic · 2 points · r/rickandmorty

He also wrote the wonderful book Superintelligence, which explores routes to, and concerns about, the possible creation of AGI.

u/mossyskeleton · 2 points · r/Showerthoughts

If you haven't read Superintelligence by Nick Bostrom yet, you should probably read it. (Or don't, if you don't want your supercomputer/AI fears to be amplified a hundred-fold.)

u/flaz · 2 points · r/DebateEvolution

Okay, so that makes sense with Mormons I've met then. The "bible talking" Mormons, as I call them, seemed to me to be of the creation viewpoint. That's why I was confused about your view on it. I didn't know the church had no official position.

I read some of your blog posts. Very nice! It is interesting and intelligent. Your post about the genetic 1% is good. Incidentally, that is also why many folks are hypothesizing about the extreme danger of artificial intelligence: the singularity, they call it, when AI becomes just a tiny bit smarter than humans and potentially wipes out humanity for its own good. That is, if we are merely 1% more intelligent than some primates, then if we create an AI a mere 1% more intelligent than us, would we just be creating our own master? We'd make great pets, as the saying goes. I somehow doubt it, but Nick Bostrom goes on and on about it in his book, Superintelligence, if you haven't already read it.

Continuing with the "genetic 1%": it is possible we may be alone in our galaxy. That is, while abiogenesis may be a simple occurrence, the fact that only one known strain of life arose in the 4.5 billion years of Earth's existence suggests it might be extremely rare for life to evolve to our level of intelligence. Some have speculated that we may be alone because we developed early. The idea is that the universe was cooling down for the first few billion years, which completely rules out life anywhere. Then it took another few billion years to create elements heavy enough for complex compounds and for new star systems to emerge from the debris. Then came the final few billion years, when we came to be. Who knows?

u/florinandrei · 2 points · r/baduk

> AI is too dangerous and the humanity is doomed.

There are ongoing efforts to mitigate that.

https://openai.com/about/

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/

u/RedHotChiliRocket · 1 point · r/technology

https://www.amazon.com/gp/aw/d/0198739834/

"Consciousness" is a hard-to-define word, but he talks about what it would mean to have an artificial general intelligence significantly smarter than humans, possible paths to creating one, and the dangers of doing so. I haven't looked into any of his other stuff (talks or whatever).

u/sjforman · 1 point · r/singularity

There's an old saying that you don't really understand something until you can make it yourself. So I think the biggest and most interesting considerations are the meta-ethical questions. Any responsible attempt to create an AGI has to grapple not only with the fundamental question of what constitutes ethical behavior, but with the immense challenge of implementing it in software. As a species, we're either going to need to understand ethics much more deeply, soon, or we're going to be doomed.

Must-read book on this subject: Superintelligence (http://amzn.to/24USaWX).

u/hahanawmsayin · 1 point · r/gadgets

For sure - it's the old saw about technology being a force for good or evil, depending entirely on how it's used.

(I'm actually reading "Superintelligence" right now)

Where does that leave you in terms of caring about privacy, though? You said you're not swayed by the argument about giving up your email password... is there another argument you find compelling? Do you think it's pretty much irrelevant if there's no oppressive regime to abuse the citizenry?

u/Parsias · 1 point · r/videos

Anyone interested in AI should read Nick Bostrom's book, Superintelligence. Fair warning, it is very dense but rewarding.

One takeaway here is that he did a survey of leading AI researchers who were asked to predict when general AI might arrive: the majority (~67%) believe it will take more than 25 years; interestingly, 25% believe it might never happen. Source

Also, there's a really great panel discussion about AI with Elon Musk, Bostrom, and others.

u/_immute_ · 1 point · r/WormFanfic

Maybe not exactly what you're asking for, but here's one by Bostrom: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834

u/skmz · 1 point · r/artificial

Re. Nick Bostrom: You should have a look at Superintelligence: Paths, Dangers, Strategies. It's definitely not about Terminators ending all of humanity. If anything, he outlines why even an indifference to human life can cause an AI (in the general or super-intelligent sense, not ML/prediction sense) to either subdue or circumvent humans' attempts to stop it from completing its goals.

If you believe that artificial general intelligence is possible, then he writes about things worth considering.

u/_infavol · 1 point · r/sociology

Superintelligence by Nick Bostrom is supposed to be good (I've been meaning to read it). There's also the YouTube video Humans Need Not Apply by C.G.P. Grey, which sounds like exactly what you need; the description has links to most of his sources.

u/entropywins8 · 1 point · r/nyc

I'm just going on the opinions of experts in the field:

https://en.wikipedia.org/wiki/The_Second_Machine_Age

Superintelligence: Paths, Dangers, Strategies https://www.amazon.com/dp/0198739834/

Yes, we've had cotton gins and such, but artificial general intelligence and superintelligence are game changers.

u/ZodiacBrave98 · 1 point · r/PurplePillDebate

>no work from humans

On that day, the machines will cut out the humans, not implement Basic Income.

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/

u/the_medicine · 1 point · r/CatholicPhilosophy

I had dismissed artificial consciousness for a long time because I believe it to be impossible, and in fact I still do. But I realized my outright dismissal was really a defense against the reality of superintelligence: the point is not that machine consciousness has major implications, but that machines becoming competent does. I think many (perhaps this is unfair) who assert that consciousness is purely material, and therefore reproducible, just see artificial consciousness as a big score for the naturalistic or material-reductionist worldview. Then there are experts who are only interested in taking machine intelligence as far as it can possibly go, whatever that means. Significantly, there is a smaller group calling for caution and prudence in that endeavor. Have you seen the Story of the Sparrows? I can't find a link to it, but it's at the beginning of Superintelligence: Paths, Dangers, Strategies.