Reddit reviews Life 3.0: Being Human in the Age of Artificial Intelligence

We found 8 Reddit comments about Life 3.0: Being Human in the Age of Artificial Intelligence. Here are the top ones, ranked by their Reddit score.

8 Reddit comments about Life 3.0: Being Human in the Age of Artificial Intelligence:

u/artifex0 · 6 points · r/slatestarcodex

A couple of good books that take on this question in detail are:

Superintelligence, by Nick Bostrom, who is a philosophy professor at Oxford, and

Life 3.0, by Max Tegmark, from MIT

The short of it is: we may be able to keep a superintelligent AI whose motivations are not aligned with our own under control by restricting its access to the outside world. However, a superintelligent AI can, by definition, outsmart us, and may be able to figure out a way to weasel out of any restrictions we put on it. The consequences of that could be very bad.

Therefore, it would be much safer for us to figure out how to design AI with motivations fundamentally aligned with our own.

This is a problem that researchers should probably start thinking seriously about now, since superintelligent AI development may turn into an arms race, and organizations may cut corners on safety unless there's already a body of work on the subject. To that end, Tegmark has been organizing science conferences on AI alignment, and organizations like MIRI are funding papers.

u/MisanthropicScott · 5 points · r/atheism

> What is to truly do?

Dunno. But, I've been retired for a while now and am seriously enjoying it!

I went to a lecture and book signing for Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark, but haven't read the book yet. It explores this topic in depth.

We need to, at a minimum:

  1. Make sure that the robots' goals are aligned with ours. We wouldn't want, for example, a society led by robots who were negative utilitarians and decided to reduce human suffering by eliminating humans. Well, most of us wouldn't want that. Me? Sometimes.

  2. Make sure that the wealth generated by the huge productivity of robots is considered a societal resource, not something that belongs only to the owners of the corporations that own them. With no human jobs, we will need, at the very least, some form of a universal basic income.

  3. Stigmatize autonomous killing robots the way we do chemical and germ warfare. We definitely do not want a world that contains slaughterbots.

There were many more points on the topic made by Max Tegmark at the lecture. I expect there are even more in the book. When my wife is done with it, I'll start reading it.

u/banduzo · 2 points · r/Screenwriting

It's on my reading list, but I've heard good things about https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598. It's a non-fiction book that looks at the effects AI will have on the world. I'm also writing about an AI, and I hope this book helps my understanding as well.

Beyond that and the suggestions I've seen below, Westworld is another show with AI in it to check out.

u/olangalactica · 2 points · r/artificial

There are a lot of possible outcomes. One of them would be our extinction, yes. Not because AGI is evil, but because it may be misaligned with our goals.

Check out Life 3.0 by Max Tegmark:
https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598

and the YouTube channel by Robert Miles:
https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg

u/z57 · 1 point · r/sciencefiction

Read this book, or listen to it on Audible.

The short story presented at the start of the book is compelling.

https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598

u/hack-man · 1 point · r/Futurology

As a short-term goal, I would say creating AGI, which should lead to the technological singularity. I like to believe that once that happens, what we have created (and what has self-improved) will be "smart enough" to solve things that will (at that point, not now) be "trivial" for it: climate change, poverty, war, free energy, etc.

I started reading Life 3.0 14 months ago (I switched from reading the book to listening to the audiobook a couple of months in). I'm deliberately going through it slowly (and often going back to re-read a little before where I left off) so I can savor it.

I would love it if everything turned out as awesome as the future that book paints for humanity.

Post-singularity, the possibilities are (nearly) endless: colonize Mars and several moons, maybe build a few O'Neill cylinders, and then spread throughout the galaxy (either in person or by sending out robot ships while we all relax in our own VR worlds).

u/SrecaJ · 1 point · r/magicleap

>so as long as they dont have any biological part in them and they have no actual feelings

Lol. What makes you think you have actual feelings? I think therefore I am. It thinks therefore it is, and it will think quicker and better. It will have a larger brain and more room for improvement.

>I think its just farfetched to think that a robot AI woman made for acting as your partner is ever going to get out of control and use a gun to shoot stuff, they will make sure that it will be a safe tech obviously, its going to be their number one priority, to make sure such a thing isnt going to ever happen with an AI robot.

You can't box in an AI like that. You can make it safe enough for market, but sooner or later one is going to go rogue and one is all it takes.

>I think robots are not to be afraid of at all, the real threat to humanity is humanity itself...

How many AIs have you built? How much do you know about the topic? I'm glad you're more of an expert than idiots like Elon Musk and Stephen Hawking. AGI is ridiculously dangerous, and using that tech for a sex toy is beyond stupid and irresponsible. Then again, we're more than likely to get AGI over the next couple of decades, and this Reddit post will stay up that long. So when some abused AGI goes back and starts looking for who to blame... well, I wouldn't want to be you, dude...
Here is a book to read: https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598

Look man, if you're unhappy, Openwater will give you a matrix you can be anything in:
https://www.reddit.com/r/openwaterBCI/
Dev kits this year, consumer product the next. As soon as they get enough data (and they will), they'll be running a full-blown matrix. Smell, touch, everything... No need to endanger humanity by messing with stuff you really shouldn't mess with. AGI is better off doing more significant things.

u/AnythingApplied · 1 point · r/AskScienceDiscussion

Yes, most scientists (but not all) do believe in an AI singularity, and when polled, AI researchers have a median prediction of it occurring within 45 years.

The idea is that once you've created an AI smarter than us (or at least better at AI programming than us), it will be able to program a better AI than we can. Since we were able to program it, and it is better at programming AIs than we are, it will be able to program an AI better than itself. You would then have iterative generations, each one smarter than the previous.

One thing to note, however, is that this won't be infinitely smart. Physics puts some upper limits on how much information can be processed, how quickly, and with how much heat and entropy. That said, those limits are huge, and scientists don't know how much smarter than us it will end up being. But even the idea that it could be just a little bit smarter than us, and that you could network a bunch of those slightly-smarter-than-us brains together, is pretty scary.
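As a rough illustration of that dynamic, here is a toy sketch (not from the book; the starting capability, improvement factor, and physical ceiling are made-up numbers, since nobody knows the real values):

```python
# Toy model of recursive self-improvement running into a physical ceiling.
# All numbers are made up purely for illustration.

capability = 1.0          # generation 0: roughly our level at AI programming
improvement_factor = 1.5  # each generation builds a successor this much better
physical_cap = 1_000.0    # stand-in for whatever limit physics ultimately imposes

generation = 0
while capability < physical_cap:
    generation += 1
    capability = min(capability * improvement_factor, physical_cap)
    print(f"generation {generation}: capability {capability:,.1f}")

# Growth is explosive at first, then flattens at the cap:
# "vastly smarter than us" is not the same as "infinitely smart".
```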

Scientists also don't agree on how much of a risk this poses to humanity, but most believe it is a risk that needs to be taken very, very seriously. Many also believe it is a risk that, when taken that seriously, can be properly managed. Look at how successful Bezos and other billionaires are. An AI like this could absolutely run the world if it wanted to. And forget about shutting it down: it would be smart enough not to do anything that would scare us into shutting it down until it had protected itself against that possibility.

Where the world ends up after the AI singularity depends so much on the goals of that initial AI superintelligence.

For more information on AIs, check out this Computerphile video. That researcher has about 10 videos on Computerphile on the same subject. If you want a really in-depth view of the state of AI superintelligence, I'd recommend Life 3.0, which is by an AI researcher who has been organizing AI safety conferences and working with Elon Musk and others to fund AI researchers' work. It discusses the different types of scenarios we could end up with and asks interesting moral questions about where we want to end up. For example, do we want a world where nobody has to work, or would that lead to a lack of fulfillment in people's lives? Would we want an AI that interferes only minimally and mainly functions to prevent malicious AIs from emerging? Or one that pushes the frontiers of science for us?