Top products from r/Futurology

We found 907 product mentions on r/Futurology. We ranked the 106 resulting products by the number of redditors who mentioned them. Here are the top 20.


Top comments that mention products on r/Futurology:

u/cybrbeast · 19 pointsr/Futurology

This was originally posted as an image but was deleted for what is, IMO, an irrelevant reason in this case: picture posts are not allowed, even though this was all about the text. We had an interesting discussion going: http://www.reddit.com/r/Futurology/comments/2mh0y1/elon_musks_deleted_edge_comment_from_yesterday_on/

I'll just post my relevant contributions to the original to maybe get things started.



---------------------------

And it's not like he's saying this based on an opinion formed after a thorough online study, like you or I could do. No, he has access to the real state of the art:

> Musk was an early investor in AI firm DeepMind, which was later acquired by Google, and in March made an investment in San Francisco-based Vicarious, another company working to improve machine intelligence.

> Speaking to US news channel CNBC, Musk explained that his investments were, "not from the standpoint of actually trying to make any investment return… I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there."

Also, I love that Elon isn't afraid to speak his mind like this. I think it might well be PR or the boards of his companies that reined him in here. He is so open and honest in television interviews, too; too bad he didn't speak those words there.

----------------------------

I'm currently reading Superintelligence, which is mentioned in the article and by Musk. One unstoppable scenario Bostrom describes is an AI that seems to function perfectly and is super friendly and helpful.

However, on the side it's developing micro-factories which can assemble themselves from a specifically coded string of DNA (this is already possible to a limited extent). These factories then use their coded instructions to multiply and spread, and then start building enormous amounts of nanobots.

Once critical mass and spread are reached, they could instantly wipe out humanity through some kind of poison/infection. The AI isn't physical, but the only thing it needs in this case is to place an order with a DNA printing service (they exist) and have it mailed to someone it has manipulated into adding water and nutrients and releasing the DNA nanofactory.

If the AI explodes in intelligence as predicted in some scenarios, this could be set up within weeks or months of it becoming aware. We would have nearly no chance of catching it in time. Bostrom gives the caveat that this is only one viable scenario he could dream up; a superintelligence should by definition be able to come up with much more ingenious methods.

u/apocalypsemachine · 5 pointsr/Futurology

Most of my stuff is going to focus around consciousness and AI.

BOOKS

Ray Kurzweil - How to Create a Mind - Ray gives an intro to neuroscience and suggests ways we might build intelligent machines. This is a fun and easy book to read.

Ray Kurzweil - TRANSCEND - Ray and Dr. Terry Grossman tell you how to live long enough to live forever. This is a very inspirational book.

*I'd skip Kurzweil's older books. The newer ones largely cover the stuff in the older ones anyhow.

Jeff Hawkins - On Intelligence - Engineer and Neuroscientist, Jeff Hawkins, presents a comprehensive theory of intelligence in the neocortex. He goes on to explain how we can build intelligent machines and how they might change the world. He takes a more grounded, but equally interesting, approach to AI than Kurzweil.

Stanislas Dehaene - Consciousness and the Brain - Someone just recommended this book to me so I have not had a chance to read the whole thing. It explains new methods researchers are using to understand what consciousness is.

ONLINE ARTICLES

George Dvorsky - Animal Uplift - We can do more than improve our own minds and create intelligent machines. We can improve the minds of animals! But should we?

David Shultz - Least Conscious Unit - A short story that explores several philosophical ideas about consciousness. The ending may make you question what is real.

Stanford Encyclopedia of Philosophy - Consciousness - The most well known philosophical ideas about consciousness.

VIDEOS

Socrates - Singularity Weblog - This guy interviews the people who are making the technology of tomorrow, today. He's interviewed the CEO of D-Wave, Ray Kurzweil, Michio Kaku, and tons of less well known but equally interesting people.

David Chalmers - Simulation and the Singularity at The Singularity Summit 2009 - Respected Philosopher, David Chalmers, talks about different approaches to AI and a little about what might be on the other side of the singularity.

Ben Goertzel - Singularity or Bust - Mathematician and computer Scientist, Ben Goertzel, goes to China to create Artificial General Intelligence funded by the Chinese Government. Unfortunately they cut the program.



PROGRAMMING

Daniel Shiffman - The Nature of Code - After reading How to Create a Mind you will probably want to get started with a neural network (or Hidden Markov model) of your own. This is your hello world (see the sketch after this list for a rough idea of what one looks like). If you get past this and the math is too hard, use this

Encog - A neural network API written in your favorite language

OpenCV - Face and object recognition made easy(ish).
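
As a rough illustration of the kind of "hello world" the Shiffman entry above refers to, here is a minimal sketch in Python (assuming only numpy; the XOR task, layer size, and learning rate are arbitrary illustration choices, not taken from The Nature of Code, Encog, or OpenCV):

```python
import numpy as np

# Tiny "hello world" network: learn XOR with one hidden layer and plain backprop.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)

    d_out = (out - y) * out * (1 - out)      # backward pass (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

If the by-hand math here is the part that gets too hard, that is exactly where a library like Encog (or OpenCV's machine-learning module) takes over.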

u/ItsAConspiracy · 2 pointsr/Futurology

My suggestion is to opensource it under the GPL. That would mean people can use your GPL code in commercial enterprises, but they can't resell it as commercial software without paying for a license.

By opensourcing it, people can verify your claims and help you improve the software. You don't have to worry about languishing as an unknown, or taking venture capital and perhaps ultimately losing control of your invention in a sale or IPO. Scientists can use it to help advance knowledge, without paying the large license fees that a commercial owner might charge. People will find all sorts of uses for it that you never imagined. Some of them will pay you substantial money to let them turn it into specialized commercial products, others will pay you large consulting fees to help them apply the GPL version to their own problems.

You could also write a book on how it all works, how you figured it out, the history of your company, etc. If you're not a writer you could team up with one. Kurzweil and Jeff Hawkins have both published some pretty popular books like this, and there are others about non-AGI software projects (eg. Linux, Doom). If the system is successful enough to really make an impact, I bet you could get a bestseller.

Regarding friendliness, it's a hard problem that you're probably not going to solve on your own. Nor is any large commercial firm likely to solve it on their own; in fact they'll probably ignore the whole problem and just pursue quarterly profits. So it's best to get it out in the open, so people can work on making it friendly while the hardware is still weak enough to limit the AGI's capabilities.

This would probably be the ideal situation from a human survival point of view. If someone were to figure out AGI after the hardware is more powerful than the human brain, we'd face a hard takeoff scenario with one unstoppable AGI that's not necessarily friendly. With the software in a lot of hands while we're still waiting for Moore's Law to catch up to the brain, we get a much more gradual approach, we can work together on getting there safely, and when AGI does get smarter than us there will be lots of them with lots of different motivations. None of them will be able to turn us all into paperclips, because doing that would interfere with the others and they won't allow it.

u/BroGinoGGibroni · 1 pointr/Futurology

wow, yeah, 10 years is closer than 50, that's for sure. If you are right, that is something to be very excited about. Just think of the possibilities. Can I ask where you get the estimate of 10 years? I am fairly uneducated on the subject, and admittedly I haven't even read the book about it, so I am hesitant to even mention it, but I am familiar with Ray Kurzweil and his theories about the Singularity (basically when man and machine combine, hence "redefining" what it means to be human). I found his recent comments on nano-bots in our brains making us "God-like" intriguing to say the least, and if we will ever be able to lie back, close our eyes, and experience some sort of virtual reality, it just makes sense to me that the most likely time for that is when we have super-intelligent nano-bots inside our brains manipulating the way they work. I, personally, can't wait for this to happen, but I also know that I will be very apprehensive when it comes down to willfully injecting into my body millions and millions of nano-bots that have been specially designed to 'hijack' my brain and make it work better. I think I will probably wait 10 years or so after people start doing it, maybe longer.

Here is Ray Kurzweil's book I was referring to that I really want to read: The Singularity Is Near: When Humans Transcend Biology

EDIT: I forgot to mention why I really brought up the singularity-- Mr. Kurzweil initially predicted that the singularity would occur sometime before 2030 (aka in the 2020's), but I believe he has now modified that to say that it will occur in the 2030's. Either way, that is not far away, and, being a pretty tech-savvy person myself (I pay attention to a thing or two) I think the 2030's is a reasonable estimate for something like this, but, as I mentioned earlier, I think it is the ethics of such a thing that will slow down true VR development (see: how the world responded to cloning)

double EDIT: just another thought (albeit quite a tangent)-- once a true singularity has been achieved (if ever?), 'transplanting' our consciousnesses into another body all of a sudden becomes quite a bit less sci-fi and altogether a more realistic possibility...

u/solarpoweredbiscuit · -2 pointsr/Futurology

Welcome to futurology. No offense intended, but I think it would be in the best interests of you and our community if you would lurk more before posting to this sub. Subreddits exist for a reason and are each separate communities, best served by people who have an understanding of them.

Now, the reason I asked you about technological unemployment is because universal basic income, as it is discussed on this subreddit, is intrinsically tied to the rise in jobs lost due to automation. People here view basic income at least as a stopgap measure to prevent societal upheaval during a time when wealth is becoming increasingly concentrated at the top due to automation technology.

If you want to get a better understanding of this issue, I recommend this talk by Brynjolfsson and McAfee (authors of The Second Machine Age, probably the best book on this issue).

u/Ignate · 2 pointsr/Futurology

Superintelligence

Good book.

I think of the human mind as a very specific intelligence designed to meet the demands of a natural life. A tailor made intelligence that is ultra specific seems like an incredibly difficult thing to recreate. I wouldn't be surprised if after AGI was created, it proved that our brains are both works of art, and only useful in specific areas.

They say a philosopher is comparable to a dog standing on its hind legs and trying to walk. Our brains are not set up to think about big problems and big solutions. Our brains are very specific. So, certainly, we shouldn't be using them as a model to build AGI.

As far as self awareness, I don't think we understand what that is. I think the seed AI's we have are already self-aware. They just have a very basic drive which is entirely reactionary. We input, it outputs.

It's not that if we connect enough dots it'll suddenly come alive like Pinocchio. Rather, it will gradually wake up as the overall program becomes more complex.

u/Leninmb · 1 pointr/Futurology

I was actually thinking about this a few days ago in relation to my dog. Having read The Singularity Is Near by Ray Kurzweil, there are a few sections devoted to uploading the brain and using technology to augment brain capabilities. What it boils down to is that the truly unique things about our brain are 'past memories', 'emotions', and 'personality'. Everything else in the brain is just stuff that regulates our bodies and processes information.

If we take the personality, memories, and emotions of my dog, and improve on the other parts of the brain by adding better memory, speech recognition, etc., then we might just be able to create another biological species that rivals our intelligence.

We are already making the assumption that technology will make humans more advanced; the same thing should eventually apply to all other biological animals as well. (Except mosquitos, of course.)

u/federicopistono · 133 pointsr/Futurology

That's a good question. I believe the answer is split in two parts.

Optimism. I consider myself a rational optimist. I know that things can go very bad (and oftentimes they do), but research in neuroscience suggests over and over that the way we look at the world greatly influences the outcome of our actions and those of the people around us. This of course has nothing to do with any quantum-woo bullshit; it's simply a recognition that if you feel hopeless, scared, and defeated, you are less likely to come up with solutions to whatever problem you are facing than when you are open to the possibilities.

Also, we are objectively getting better at almost everything (see the book by my good friend Peter Diamandis, Abundance: The Future Is Better Than You Think): better health, less violence, fewer wars, etc. This is a fact often overlooked and underplayed by the pessimists and by the environmentalist community. However, there are two things that are getting progressively worse: wealth inequality and environmental degradation. This is a fact often overlooked and underplayed by the techno-optimists and by the Singularity crowd. I stand right in the middle: I see the opportunities as well as the perils, and I try to think of solutions accordingly.

Achievement. I honestly have no way of knowing if humanity will achieve the goals that I propose. All I can do is strive to make it happen, and inspire others to do the same. Since it's not an impossible goal, merely a very difficult one, it's not a delusional state of mind. It's simply a rational optimist approach. By having this attitude I'm increasing the probability of achieving the goal, and even if I contribute to a mere 1 part in 10 thousand, the collective effort of others like me has more chances of succeeding.

u/dolphonebubleine · 5 pointsr/Futurology

I don't know who is doing PR for this book but they are amazing. It's not a good book.

My review on Amazon:

> The most interesting thing about this book is how Bostrom managed to write so much while saying so little. Seriously, there is very little depth. He presents an idea out of nowhere, says a little about it, and then says [more research needs to be done]. He does this throughout the entire book. I give it two stars because, while extremely diluted, he does present an interesting idea every now and then.

Read this or this or this instead.

u/SUOfficial · 21 pointsr/Futurology

This is SO important. We should be doing this faster than China.

One path to superintelligence, alongside artificial intelligence itself, is breeding and gene editing. Selecting for genetic intelligence could lead to rapid advances in human intelligence. In 'Superintelligence: Paths, Dangers, Strategies', the most recent book by Oxford professor Nick Bostrom, as well as his paper 'Embryo Selection for Cognitive Enhancement', the case is made that even simple gains in IQ from selecting certain embryos for genetic attributes, or, as in this case, breeding for them, could have a staggering payoff in terms of raw intelligence.

u/FeepingCreature · 1 pointr/Futurology

The premise of AI safety is roughly "we're dominating the planet because we're smarter than our surroundings" + "AI capable of self-improvement can get a lot smarter really fast" + "it's surprisingly hard to tell an AI what to do in a way that does not transparently result in it wiping out everything we value once it has crushingly superior intelligence." The long form of the argument can be read in Bostrom's book on the topic.

u/bostoniaa · 4 pointsr/Futurology

Can I convince you the future will be perfect? No.
Are all of your concerns legitimate? Absolutely.

However, as NewFuturist said, this is only the latest in a long line of periods in which people thought that the world was ending. While there is a sort of deep, visceral sense that our problems are more serious than those of other times, we must examine that belief to see if it's really true. Personally, although I do believe that humanity is facing its most difficult period ever, we also have the most amazing tools to defeat it.

If I could recommend one book to convince you that we at least have a shot, please read Abundance by Peter Diamandis. It is a wonderful book that breaks down the challenges humanity faces one by one. You can see that there is significant progress being made in all of the areas where humanity is in trouble. We can't know for sure if we're going to make it, but personally, I believe it's a very real possibility. That's why I've decided to make a career out of this stuff.

Step 1: Watch this

http://vimeo.com/34984088

Step 2: Read this

http://www.amazon.com/Abundance-Future-Better-Than-Think/dp/1451614217

Step 3: Post here. Tell me what you thought.

u/blank89 · 3 pointsr/Futurology

If you mean strong AI, there are many pathways for how we could get there. 15 years is probably a bit shorter than most expert estimates for mind scanning or evolution based AI. This book, which discusses different methods, will be available in the states soon:
http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_1?ie=UTF8&qid=1406007274&sr=8-1&keywords=superintelligence

> We went from the horse and buggy to landing on the moon in 80 years

Past events are not necessarily good indicators of future events. In this case, faster computers are a mechanism for bringing about AI faster. How much faster we get in how much time will probably be the influencing factor in all this. There is quite a bit of uncertainty surrounding whether that will be post-silicon or not. We don't have post-silicon computing up and running yet.

The other factor may be incentive. Maybe specific purpose AI will meet all such demand for the next 20 years, and nobody will have any incentive to create strong AI. This is especially true given the risks of creating strong AI (both to the world and to the organization or individual who creates the AI).

u/sturle · 4 pointsr/Futurology

That is not how it is going to happen.

The rift will not be among countries, but between classes inside each country. The filthy rich will benefit from this. The specialized white-collar workers of the upper middle class will become richer. The lower middle class will disappear. There will be a huge lower class with no work. It will destabilize countries without systems to deal with this.

If you want to read a good version of this, find a copy of Kurt Vonnegut's Player Piano.

u/BenInEden · 2 pointsr/Futurology

A couple of books that come to mind that do this are 2312 by Kim Stanley Robinson and, to a lesser degree, A Deepness in the Sky by Vernor Vinge. 2312 is kind of boring, since Robinson does world-building at the expense of storyline and character development, but it is IMO one of the most robust and coherent pictures of the future I've ever read in sci-fi. Vinge's book is more balanced and thus more entertaining. Both of them are mostly hard science books; that is, they don't break the laws of physics per se. Great reads.



u/adam_dorr · 5 pointsr/Futurology

This is the latest sector report from RethinkX, the think tank founded by entrepreneur and technology theorist Tony Seba who literally wrote the book on the Clean Disruption of energy and transportation.

A few highlights of our findings from the report:

Industry Impacts

  • By 2030, the number of cows in the U.S. will have fallen by 50%. Production volumes of the U.S. beef and dairy industries and their suppliers will be cut by more than half.
  • By 2030, the market for ground beef by volume will have shrunk by 70%, the steak market by 30% and the dairy market by almost 90%. The markets for other cow products (leather, collagen, etc.) are likely to decline by more than 90%. In total, demand for cow products will fall by 70%.
  • By 2030, the U.S. dairy and cattle industries will have collapsed, leaving only local specialty farms in operation.
  • By 2035, demand for cow products will fall by 80%-90% and U.S. beef and dairy industry (and their suppliers) revenues, at current prices, will be down nearly 90%.
  • Farmland values will collapse by 40%-80%.
  • The volume of crops needed to feed cattle in the U.S. will fall by 50%, from 155 million tons in 2018 to 80 million tons in 2030, causing cattle feed production revenues, at current prices, to fall by more than 50%, from $60 billion in 2019 to less than $30 billion in 2030.
  • All other livestock industries, including fisheries, will follow cattle and suffer similar disruptions, while the knock-on effects throughout the value chain will be severe.

Food Cost Savings

  • The cost of modern foods and products will be at least 50% and as much as 80% lower than the animal products they replace, which will translate into substantially lower prices and increased disposable incomes. The average U.S. family will save more than $1,200 a year in food costs, keeping an additional $100bn a year in Americans’ pockets by 2030.

Jobs Lost and Gained

  • Half of the 1.2 million jobs in U.S. beef and dairy production (including supply chain), along with their associated industries, will be lost by 2030, climbing toward 90% by 2035.
  • The emerging U.S. modern foods industry will create at least 700,000 jobs by 2030 and up to 1 million jobs by 2035.

Land Use and Environmental Impacts

  • Modern foods will be far more efficient than animal-derived products: Up to 100 times more land efficient, 10-25 times more feedstock efficient, 20 times more time efficient, and 10 times more water efficient than industrial livestock. They will also produce an order of magnitude less waste.
  • By 2035, 60% of the land currently used for livestock and feed production will be freed for other uses. These 485 million acres equate to 13 times the size of Iowa, an area almost the size of the Louisiana Purchase. If all this land were dedicated to maximize carbon sequestration, all current sources of U.S. greenhouse gas emissions could be fully offset by 2035.
  • U.S. greenhouse gas emissions from cattle will drop by 60% by 2030, on course to nearly 80% by 2035. Even when the modern food production that replaces animal agriculture is included, net emissions from the sector as a whole will decline by 45% by 2030, on course to 65% by 2035.
  • Water consumption in cattle production and associated feed cropland irrigation will fall by 50% by 2030, on course to 75% by 2035. Even when the modern food production that replaces animal agriculture is included, net water consumption in the sector as a whole will decline by 35% by 2030, on course to 60% by 2035.
  • Oil demand from the U.S. agriculture industry (currently 150 million barrels of oil equivalent a year) will fall by at least 50% by 2030.

Health & Food Security

  • Nutritional benefits could have a profound impact on health, particularly on conditions such as heart disease, obesity, cancer, and diabetes that are estimated to cost the U.S. $1.7 trillion each year. The way these foods are produced should also ensure a sharp reduction in foodborne illness.
  • The modern food system will be decentralized and therefore more stable and resilient, thereby increasing food security.

Geopolitical Implications

  • Trade relations and geopolitics will shift due to a decentralized food production system.
  • Any country will be able to capture the opportunities associated with a global industry worth hundreds of billions of dollars.
u/Jetbooster · 12 pointsr/Futurology

Why would it care if the goal we gave it didn't actually align with what we wanted? It has no reason to care, unless these things were explicitly coded in, and as I said, morality is super hard to code into a machine.

To address your second point, I understand my example wasn't perfect, but say it understands that the more physical material a company controls, the more assets it has. So it lays claim to the entire universe and sets out to control it. Eventually, it is the company, and growing the company's assets just requires it to have more processing power. Again, it is an illustrative point, loosely derived from my reading of Superintelligence by Nick Bostrom. I would highly recommend it.

u/starkprod · 7 pointsr/Futurology

I suggest reading up on Nick Bostrom's Superintelligence. The whole book is interesting (but dry as drywall), but mostly the first few chapters apply here. The technological advances, and the problems on the way to the singularity, that he paints are very well structured and well reasoned. For that reason, a lot of the things I see in this chart feel unlikely or unrealistic (though Kurzweil might know his stuff super well). There are things like mind-machine interfaces or mind uploading that have some disturbingly complex issues when you start delving into the practical details, making them an unviable or inefficient path towards the singularity, or at least an "AI-complete" problem if they are just a parallel technology. While that doesn't disprove the whole chart, I feel that this chart paints a very vague and broad timeline that tries to capture all of the current "wouldn't this be awesome?" techs and place them on a "believable" timeline. All in all, he might be correct, but stating more things and 'kiiind of hitting the target' for them (feel free to insert the Texas sharpshooter fallacy here) doesn't make all your future predictions true. I'm taking this with a major grain of salt.

u/lfancypantsl · 1 pointr/Futurology

Give this a read. This isn't some crackpot, this is Google's director of engineering. I'm not saying it contradicts what you are saying.

>I doubt we'd [have] anything like a true AI in 20 or so years

Is pretty close to his timetable too, but honestly even getting close to that computational power is well over what is needed to drive a car.

u/DesertCamo · 3 pointsr/Futurology

I found this book great for a solution that could replace our current economic and political systems:

http://www.amazon.com/Open-Source-Everything-Manifesto-Transparency-Truth/dp/1583944435/ref=sr_1_1?s=books&ie=UTF8&qid=1406124471&sr=1-1&keywords=steele+open+source

This book is great as well. It is Ray Kurzweil explaining how the human brain functions as he attempts to reverse engineer it for Google in order to create an AI.

http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=sr_1_1?s=books&ie=UTF8&qid=1406124597&sr=1-1&keywords=kurzweil

u/linuxjava · 2 pointsr/Futurology

While all his books are great. He talks a lot about exponential growth in "The Age of Spiritual Machines: When Computers Exceed Human Intelligence" and "The Singularity Is Near: When Humans Transcend Biology"

His most recent book, "How to Create a Mind" is also a must read.

u/PirateNinjaa · 1 pointr/Futurology

the movie "Her" was a good example in a lot of ways, the book 2312 has a lot of awesome possibilities in it as well.

u/jeremyhoward · 1 pointr/Futurology

I agree macroeconomic prediction has a poor track record. I believe this is because it generally tries to extrapolate from past trends, rather than looking at the first principles causality - e.g. macroeconomics through extrapolation could not have predicted the impact of the internet, but looking at the underlying capability of the technology could (and did).

I think we're already seeing service sector jobs being obsoleted. See http://www.amazon.com/The-Second-Machine-Age-Technologies/dp/0393239357 for examples and data backing this up.

Job sectors relying primarily on perception will be the first and hardest hit, since perception is what computers are most rapidly improving at thanks to deep learning.

u/greggers23 · -1 pointsr/Futurology

Few will read this but I highly recommend reading 8 steps to colonize the Galaxy.

The Millennial Project: Colonizing the Galaxy in Eight Easy Steps https://www.amazon.com/dp/0316771635/ref=cm_sw_r_cp_apa_ZWAPBbT92H2SG

u/[deleted] · 1 pointr/Futurology

Based on vague hints from a trusted person with clearance at DARPA they already are and have been for quite some time. But aside from the wild speculation, what really hammered home the staggering gravity of the situation for me was this superb book by Nick Bostrom. If you're at all interested in this sort of thing I'd highly recommend it.

u/notsointelligent · 2 pointsr/Futurology

I've read a few. My interest is AI. Of them all I'll recommend two:

  • Consciousness and the Brain
  • On Intelligence


    edit - sigh, I am now unable to reply to people who have replied to me. Would love to talk about neuroscience, consciousness, and AI, but I guess we'll meet on another sub.
u/HalfAlligator · 1 pointr/Futurology

I don't quite buy the "I work in A.I. so I have a special perspective" idea. People couldn't fathom what the internet would be in the early 1990s, and they were I.T. professionals. I understand there is a huge variety of A.I. research, but I think the worry is with the kind of A.I. that learns to enhance itself in a general sense faster than we can. Forget purpose-built, focused A.I. and think more "general" intelligence. Very hard to implement, but in principle it's possible. It need not be sentient... that is basically irrelevant... it's the intelligence explosion and who controls it that matters.

Maybe read this: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

u/steamywords · 13 pointsr/Futurology

This does nothing to address the difficulty of the control issue. He's basically just saying we'll figure it out before we get AI, don't worry about it.

Superintelligence actually spells out why control is so hard. None of those points are touched on even generally. He's Director of Engineering at Google, which actually created an AI ethics board because an AI company they bought was afraid that the tech could lead to the end of the human species, yet none of that is even briefly mentioned.

There is very good reason to be cautious around developing an intellect that can match ours, never mind rapidly exceed it. I don't see the necessity for repeated calls to let our guard down.

u/stupidpart · 2 pointsr/Futurology

Doesn't anyone remember this? Posted here, on /r/futurology, three weeks ago. It was about this book. Based on Musk's recommendation I read the book. This article is basically what Bostrom says in his book. But I don't believe Bostrom, because his basic premise is that AI will be completely stupid (like a non-AI computer program) but also smart enough to do anything it wants. Like it will just be an amazing toaster, and none of the AI used to make it superintelligent will be applied to its goal system. His opinions are bullshit.

u/draknir · 1 pointr/Futurology

False. You are demonstrating that you are not familiar with the field. There are many possible approaches to programming an AI. One example is full scale brain emulation, in which we begin by modelling the entirety of a human brain down to each and every last neuron. Given sufficient computing power (probably this demands a quantum computer) it is possible to run this brain simulation under different test conditions, allowing it to evolve with different values. This is only one possible method. If you want to read about some of the alternatives, I highly recommend this book: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

u/Scarbane · 70 pointsr/Futurology

He should brush up on his knowledge about general AI. Nick Bostrom's Superintelligence is a good starting place, even though it's already a few years old.

I recommend the rest of you /r/Futurology people read it, too. It'll challenge your preconceived notions of what to expect from AI.

u/Philipp · 1 pointr/Futurology

You clearly thought a lot about this. Now you may benefit from treating yourself to Nick Bostrom's book Superintelligence. It might blow your mind by opening up wholly new territory for you on this discussion.

u/Mindrust · 4 pointsr/Futurology

>Is such a machine possible?

Yes, it's called a molecular assembler. It's the holy grail of nanotechnology.

Eric Drexler (the guy who popularized the idea in the 80s and 90s) has a new book on this subject, titled Radical Abundance. You should check it out.

u/elborghesan · 1 pointr/Futurology

An interesting read should be Superintelligence. I've only just bought it, but it seems promising from the reviews.

u/GreyRobb · 1 pointr/Futurology

Read 2312, by Kim Stanley Robinson. Great read about a future where humans have colonized the solar system.
http://www.amazon.com/2312-Kim-Stanley-Robinson/dp/0316098124

u/IAMARobotBeepBoop · 8 pointsr/Futurology

Kim Stanley Robinson's novel 2312 covers some of these themes, if anyone is interested.

u/madp1atypus · 3 pointsr/Futurology

I wonder how many people in this thread would enjoy reading Robert Zubrin's work. He laid out a solid plan over two decades ago with existing tech, for those interested.

u/samsdeadfishclub · 1 pointr/Futurology

Well I'm not sure that's entirely true:

http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=asap_bc?ie=UTF8

At the very least, he's trying to understand the brain.

u/rojobuffalo · 6 pointsr/Futurology

He is amazingly articulate on this subject, probably more so than anyone. I really enjoyed his book Superintelligence.

u/gonzoblair · 2 pointsr/Futurology

I've been enjoying this book, 'Abundance', lately. It's a detailed examination of how technologies might lead us to a post-scarcity society.
http://www.amazon.com/dp/1451614217

u/Simcurious · 6 pointsr/Futurology

I'm reading Drexler's new book Radical Abundance (2013); it's quite good, except perhaps for the chapters dedicated to the difference between science and engineering, which I thought were too long.

http://www.amazon.com/Radical-Abundance-Revolution-Nanotechnology-Civilization/dp/1610391136

He counters many of the claims made by his detractors, explaining how it's often just a case of misunderstanding.

u/alnino2005 · 1 pointr/Futurology

Why does that statement not hold up? Check out Superintelligence. Specialized machine learning is not the same as strong generalized AI.

u/happybadger · 6 pointsr/Futurology

History isn't a gradual incline though. There's this book I'm reading called The Second Machine Age, a really good read by the way, which has this graph in its early pages. Now, I don't know how they source that information and am not claiming it to be fully accurate, but as an illustrative example it shows that total societal upheaval is a pretty rapid thing that compounds earlier periods of change. What we think of as tremendous periods critical to social development were as short in duration as the span between Pokemon Red and Pokemon Black, at least in this era, which is the only one you can really form comparisons in, since the dissemination of information is so radically different from what it was prior to the industrial age.

Two decades is a lot when your society is literate and informed.

u/JAFO_JAFO · 1 pointr/Futurology

I'm not so sure...
There is an interesting assertion from Tony Seba regarding fossil fuels and nuclear going obsolete. He's saying that solar & battery are technologies, so they will continue to drop in price as they have done for the last 30 years, while the cost of fossil fuels will stay static or rise. When the cost of solar & battery drops below 6c/kWh, it will be cheaper for many people to produce their own power than to buy off the grid, because the cost of delivering electricity over the grid is around 6c/kWh. There is still a place for utility generation, and its main method of production will be solar & battery.
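
To make the arithmetic behind that crossover concrete, here is a minimal sketch (not from Seba's book; the starting cost and decline rate are made-up illustration values) of how a steadily falling technology cost eventually undercuts a roughly static grid-delivery cost:

```python
# Illustrative only: starting cost and decline rate are hypothetical,
# not figures from Clean Disruption. The point is just that a cost falling
# a fixed percentage each year eventually crosses a flat ~6 c/kWh
# grid-delivery cost, after which self-generation is cheaper than the grid.

solar_plus_battery = 12.0   # c/kWh today (assumed)
grid_delivery = 6.0         # c/kWh, assumed roughly static
annual_decline = 0.12       # 12% cost decline per year (assumed)

year = 2016
while solar_plus_battery >= grid_delivery:
    solar_plus_battery *= 1 - annual_decline
    year += 1

print(f"crossover around {year}: {solar_plus_battery:.1f} c/kWh vs {grid_delivery} c/kWh delivery")
```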

His book [Clean Disruption of Energy and Transportation: How Silicon Valley Will Make Oil, Nuclear, Natural Gas, Coal, Electric Utilities and Conventional Cars Obsolete by 2030] (http://www.amazon.com/Clean-Disruption-Energy-Transportation-Conventional/dp/0692210539?ie=UTF8&keywords=clean%20disruption&qid=1462589361&ref_=sr_1_1&sr=8-1 ) discusses this in detail.

Here is a [Short presentation] (https://www.youtube.com/watch?v=0L0JAnACdyc) and
a [long presentation] (https://www.youtube.com/watch?v=Kxryv2XrnqM) .

Useful: nuclear has had a negative learning curve for the past 30 years - the more we research and deploy the technology, the more expensive it gets. Not sure if thorium or other new technologies are going to change this, or if they can do so before solar eclipses them completely. Tony lays it out here.

u/rumblestiltsken · 3 pointsr/Futurology

The Second Machine Age, by McAfee and Brynjolfsson.

The Zero Marginal Cost Society by Rifkin.

Ending Aging by De Grey.

Although I would personally argue that you get a good understanding of their material from the numerous talks they give, and learning the background science is probably more important for assessing their claims than simply reading their books.

u/tbiko · 40 pointsr/Futurology

The Case for Mars is a great read, often recommended on reddit. Published in 1996 but sadly a still relevant proposal for a low cost manned Mars mission using currently available rockets and tech. At the low end it was estimated to be $20B in 1996 dollars ($30B now). It details why NASA departments lobby for far more expensive tech that needs developing to justify their existence and boost their department.

The proposal in the article is for $19.5B annual funding.

If you don't want to read a whole book there is good info at the Mars Direct website or the wiki.

Does anyone know if this type of plan has any current traction?

u/RippyZ · 27 pointsr/Futurology

So is this https://www.amazon.com/Sensodyne-Repair-Protect-Whitening-Toothpaste/dp/B008VPSTOA a good buy then?

How good is this stuff? Like, on a scale from "my teeth have wear from a life of chewing things, but I usually brush" to "teeth that would make someone with trypophobia pass out", where would it sit?

u/spyderskill · 2 pointsr/Futurology

This picture is from the book The Millennial Project: Colonizing the Galaxy in Eight Easy Steps by Marshall T. Savage. Some of the calculations are wrong, but it is an interesting read. But you don't have to take my word for it.

u/lowlandslinda · 11 pointsr/Futurology

Musk keeps up with what philosopher Nick Bostrom writes. That's the same reason he knows about simulation theory, which was also popularised by Bostrom. And lo and behold, Bostrom also has a paper and a book on AI.

u/dmkerr · 9 pointsr/Futurology

The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee, is a popular approach to the topic, but they are both academics and their research is quite sound.

u/DisconsolateBro · 3 pointsr/Futurology

>Given what Musk does with other technologies, he is by no means a luddite or a technophobe. He's seen something that's disturbing. Given the guy's track record, it's probably worth investigating

I agree. There's also a point to be made that one of the recent books Musk mentioned he read in a few interviews (and was acknowledged by the author in, too) was this http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

I started reading it a few nights ago. It's painting an interesting picture about the future of AI. I'm looking forward to finishing it to discuss further

u/PutinButtplug · 2 pointsr/Futurology

Funny that you mention MIT.
http://www.amazon.com/The-Second-Machine-Age-Technologies/dp/0393239357

Both guys are from MIT, and all my arguments are from this book.
You should read it.

u/thatguyworks · 1 pointr/Futurology

"2312". Everyone should read Kim Stanley Robinson's "2312". Now!

u/grumpieroldman · 43 pointsr/Futurology

The blueprint for this effort appears to cost $14.
Diamond Age

u/DokuHimora · 1 pointr/Futurology

Actually it does. Read this book and you'll see we could have already established a base there years ago: http://www.amazon.com/The-Case-Mars-Settle-Planet/dp/145160811X

u/bombula · 5 pointsr/Futurology

> I don't think Drexler really gets this.

I assure you he does. Read his new book, Radical Abundance.

u/magnafix · 1 pointr/Futurology

I just finished 2312 which paints a pretty interesting projection of the next three centuries.

You should read the book, but the portion somewhat relevant to this discussion posits that capitalism is pushed to the fringes of luxury and niche goods and services, because everyone receives the basic necessities of food, housing, and clothing. Unfortunately, as humanity settles the rest of the solar system, Earth gets culturally left behind, too entrenched in the nationalism and classism of its history.

u/Hyperion1144 · 1 pointr/Futurology

That's not a "hyperloop." It's called a Mass Driver, and it is a trope of sci-fi for decades and also is extensively discussed in the Millennial Project by Marshall Savage.

If you think you are depressed by the Trump administration now, read this book and leave yourself feeling like you want to eat a shotgun blast over the things we should be doing and aren't.

The Millennial Project: Colonizing the Galaxy in Eight Easy Steps

IMHO nobody should call themselves a futurist until they have read this book.

u/bluehands · 2 pointsr/Futurology

There are huge swaths of the AI community that think this could be a real issue. A recent book goes into how this could be an issue and what we may be able to do about it.

All technology has dangers contained within it, but AI is one of the most credible threats that could take us out as a species, beyond our control.

u/lukeprog · 6 pointsr/Futurology

For a more detailed analysis of the "AIs stealing human jobs" situation, see Race Against the Machine.

AIs will continue to take jobs from less-educated workers and create a smaller number of jobs for highly educated people. So unless we plan to do a much better job of educating people, the net effect will be tons of jobs lost to AI.

I have a wide probability distribution over the year of the first creation of superhuman AI. The mode of that distribution is 2060, conditioning on no global catastrophes (e.g. from superviruses) before then.

u/subdep · 2 pointsr/Futurology

Anybody who is seriously interested in this subject, you must read Nick Bostrom’s book: Superintelligence: Paths, Dangers, Strategies https://www.amazon.com/dp/0198739834/ref=cm_sw_r_cp_api_1PpMAbJBDD4T0

u/ckcollab · -2 pointsr/Futurology

What about the 5 years part? That the next 5 years of technological advancements will be like the last 25? I didn't say that in the next 5 years unemployment will be out of control... check out the book The Singularity Is Near to learn more about that. Every technological advancement builds off the last and helps the rest, i.e. self-driving cars in the future making everything cheaper -> cheaper to do all kinds of scientific/medical research.

I can see how that was confusing, probably more like 20 years from now we'll absolutely have to have basic income or negative income tax. We'll have more people than ever with fewer jobs than ever!

u/TooOld4Reddit · 1 pointr/Futurology

If he were trying to understand the brain, instead of explaining it in a way that matches his assumptions, he would stand on the shoulders of the neuroscientists who are writing on the topic. But I get it: it's difficult to describe the brain as a computer when, in a brain, processing and storage are the same thing.

> http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=asap_bc?ie=UTF8

u/ph3l0n · 12 pointsr/Futurology

For those looking for this product: https://www.amazon.com/Sensodyne-Repair-Protect-Whitening-Toothpaste/dp/B008VPSTOA

All these comments and not one person linked the damn item.

u/peppaz · 1 pointr/Futurology

How can you be right or wrong about something that doesn't exist yet? I recommend this book.

https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889

u/fenrir849 · 1 pointr/Futurology

$20? I think you may be looking at the wrong seller. Here is the one I order on Amazon; the only downside is that it takes 2 months to arrive, so I buy them in bulk.

u/1_________________11 · 12 pointsr/Futurology

Just gonna drop this gem here. http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

Doesn't have to be Skynet-level smart to fuck shit up. Also, once it's self-modifying, it's a whole other ballgame.

u/ideophobic · 3 pointsr/Futurology

Lets do some math.

My son was born this year. With an average life span of 76 years, he should most likely die by 2090. But I will also make the assumption that in the years between 2014 and 2090 we will find ways to extend the average life span a bit, let's say by 30 years. So now his average life span is 106 years and the "death year" is extended to 2120.

But between 2090 and 2120 science will continue to advance, and we will probably have a life expectancy of 136 years by then, which now makes his death year 2150. And so forth, until science finds a way to keep him alive forever. Even if it takes the better part of a century, some of the younger people will still be beyond the cutoff.

Now, if you actually talk to real scientists who have studied this in much more detail, they are saying that this will not take a century, and should take just a few decades to achieve "escape velocity" for immortality. There is a book written about this, and about how in the next few decades we will unlock the mysteries of aging: The Singularity Is Near: When Humans Transcend Biology.
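
Written out as a loop, the arithmetic above looks like this (a minimal sketch using the numbers from this comment; the flat 30-year gain per re-projection is the commenter's own assumption, not a figure from the book):

```python
# "Escape velocity" arithmetic from the comment above, made explicit.
# Assumption: each time the previously projected death year arrives,
# medicine has added another 30 years of life expectancy.
birth_year = 2014
life_expectancy = 76        # years, at birth
gain_per_step = 30          # years added per re-projection (assumed)

death_year = birth_year + life_expectancy
for _ in range(5):
    print(f"projected death year: {death_year} (life expectancy {life_expectancy})")
    life_expectancy += gain_per_step
    death_year = birth_year + life_expectancy
# 2090 -> 2120 -> 2150 -> ...: as long as expectancy grows at least as fast
# as the time between re-projections, the projected death year keeps receding.
```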

u/Paul_Revere_Warns · 5 pointsr/Futurology

You can learn about Drexler's explanation of what Robert is basing his predictions off of in Engines of Creation, or his newer book Radical Abundance. Additionally, some way less digestible stuff can be found on Robert Freitas' website. I think this video is the only thing I've really understood when it comes to his work and findings. Ray Kurzweil is also very accessible but a lot of people are skeptical about him because of things unrelated to his rational predictions.

Here's a back-and-forth between Drexler and Richard Smalley, an accomplished chemist who criticises Drexler's vision of nanotechnology. I find it important to understand the criticism leveled against nanotechnology, and in my opinion the criticism from Smalley is paper thin. He is constantly conceding to Drexler, until he has to end his last response with some nonsense about children being afraid of what he's saying. I haven't come across a truly substantial argument against the possibility of manipulating matter at the scale Drexler describes, with nanofactories and fleets of medical nanobots, but I hope whatever criticism there is helps the technology become more substantial in our lives.