Top products from r/artificial
We found 67 product mentions on r/artificial and ranked the resulting products by the number of redditors who mentioned them. Here are the top 20.
1. Artificial Intelligence: A Modern Approach (3rd Edition)
Sentiment score: 8
Number of reviews: 11
2. Superintelligence: Paths, Dangers, Strategies
Sentiment score: 2
Number of reviews: 4
a history of the study of human intelligence with some new ideas
4. Our Final Invention
Sentiment score: 3
Number of reviews: 3
Our Final Invention: Artificial Intelligence and the End of the Human Era
5. Superintelligence: Paths, Dangers, Strategies
Sentiment score: 1
Number of reviews: 3
Oxford University Press
6. Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series)
Sentiment score: 2
Number of reviews: 2
MIT Press
8. How the Body Shapes the Way We Think: A New View of Intelligence (A Bradford Book)
Sentiment score: 2
Number of reviews: 2
10. Machine Learning with Neural Networks: An In-depth Visual Introduction with Python: Make Your Own Neural Network in Python: A Simple Guide on Machine Learning with Neural Networks.
Sentiment score: 11
Number of reviews: 2
11. Surfaces and Essences: Analogy as the Fuel and Fire of Thinking
Sentiment score: 2
Number of reviews: 2
Basic Books
12. Introduction to Artificial Intelligence: Second, Enlarged Edition (Dover Books on Mathematics)
Sentiment score: 1
Number of reviews: 2
13. Superintelligence: Paths, Dangers, Strategies
Sentiment score: 2
Number of reviews: 2
14. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
Sentiment score: 0
Number of reviews: 2
O'Reilly Media
15. The Mind within the Net: Models of Learning, Thinking, and Acting
Sentiment score: 1
Number of reviews: 2
16. The Cognitive Neuroscience of Memory: An Introduction
Sentiment score: 1
Number of reviews: 1
17. The Philosophy of Artificial Intelligence (Oxford Readings in Philosophy)
Sentiment score: 0
Number of reviews: 1
18. Principles of Synthetic Intelligence: Psi: An Architecture of Motivated Cognition (Oxford Series on Cognitive Models and Architectures)
Sentiment score: 0
Number of reviews: 1
Reading some books would be a good idea.
The following are textbooks:
General AI
Machine Learning
Statistics for Machine Learning
There are many other topics within AI which none of these books focus on, such as Natural Language Processing, Computer Vision, AI Alignment/Control/Ethics, and Philosophy of AI. libgen.io may be of great help to you.
Programming Game AI by Example has a great, easy-to-understand explanation and walkthrough for learning ANNs: http://www.amazon.com/Programming-Game-Example-Mat-Buckland/dp/1556220782
Once you've learned at least ANNs, you can delve into the popular approaches to GAI:
My biggest recommendation: read about all of them! Try to understand why scientists pursue each approach. Having a solid understanding of the motivation behind each approach will make it much easier for you to decide which path to pursue. I recommend the following books:
It looks like the Deep Learning folks have already pitched in, so I'll trust their recommendations are good on that end. Read their stuff too, then decide for yourself. You'll find that there are fanatics from every branch, and they'll claim that their way is the only way. This is the only thing I can tell you for certain: no one has proven that any approach is better than the others just yet, and anyone who claims they have needs to remind themselves of the no free lunch theorem [http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization].
To gain a good overview of AI, I recommend the book The Master Algorithm by Pedro Domingos. It's totally readable for a layperson.
Then, learn Python and become familiar with libraries and packages such as numpy, scipy, and scikit-learn. Perhaps you could start with Codecademy to get the basics of Python, but I feel like the best way to force yourself to really learn useful stuff is to implement a project with a concrete goal.
Some other frameworks and tools are listed here. Spend a lot more time doing than reading, but reading can help you learn how to approach different tasks and problems. Norvig and Russell's AI textbook is a good resource to have on hand for this.
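As a concrete starting point, here's a minimal project-style sketch using scikit-learn's bundled digits dataset; the choice of model (k-nearest neighbors) is just an illustration, not a recommendation from the thread:

```python
# Train and evaluate a simple classifier on scikit-learn's digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)          # 8x8 grayscale digit images, flattened
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3)    # classify by nearest training examples
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

Swapping in a different estimator (an SVM, a decision tree) is a one-line change, which makes this a good sandbox for comparing approaches.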
Some more resources include:
Make Your Own Neural Network book
OpenAI Gym
CS231N Course Notes
Udacity's Free Deep Learning Course
I find that more threatening than promising. I recently re-read Blindsight and Echopraxia by Peter Watts. One of his main themes is that transhumans and AIs are making scientific advances that are so far out there that "baseline" humans can't even understand what they're talking about.
The interesting non-fiction book Our Final Invention also touches on this at some length. We might get ever-more amazing discoveries, but the price would be that we really don't know how anything works. We would be as children, taking everything on trust because we're not smart enough to understand the answers or contribute to the conversation. But this presupposes that the AIs or augmented intelligences would be vastly smarter than us, not just tools we ourselves use to ask better questions. Who knows. But an interesting set of questions, in any case.
Piggybacking on what /u/T4IR-PR said, the best book to attack the science aspect of AI is Artificial Intelligence: A Modern Approach. It was the standard AI textbook when I took the class, and it's honestly very well written - people with a basic undergraduate understanding of CS/math can jump right in and start playing with the ideas it presents, and it gives you a really nice outline of some of the big ideas in AI historically. It's one of the few CS textbooks that I recommend people buy a physical copy of.
Note that a lot of the field of AI has been moving more towards ML, so if you're really interested I would look into books regarding that. I don't know what intro texts you would want to use, but I personally have copies of the following texts that I would recommend
and to go w/ that
for some more maths background, if you're a stats/info theory junky.
After all that, if you're more interested in a philosophy/theoretical take on AI then I think Superintelligence is good (I've heard?)
I highly recommend this book if you want to read up on some thought experiments around AGI. Spoiler alert: not great for mankind.
It's easy to come up with a lot of different ways an AGI plays out if one of its main goals is to save the environment (along with some other reasonable assumptions about its ability to navigate the world).
There is lots of low-hanging fruit that humanity could pick tomorrow to dramatically help the planet, but we are all selfish assholes, so we don't.
An AGI could basically pull us into a post-scarcity economy by automating everything. It could then carrot-and-stick humanity into not destroying the planet.
Not eating meat and eliminating private car ownership would go a long way toward saving the environment. Throw in free birth control, or paying people not to have kids, and population growth is taken care of.
But like the other commenter says, we just don't know.
> Last few weeks I got very interested in AI and can't stop thinking about it. Watched discussions of philosophers about future scenarios with AI, read all recent articles in media about it.
Most likely you heard about the superintelligence control problem. Check out (the sidebar of) /r/ControlProblem and their FAQ. Nick Bostrom's Superintelligence is pretty much the book on this topic, and I would recommend reading it if you're interested in that. This book is about possible impacts of AI, and it won't really teach you anything about how AI works or how to develop it (neither strong nor weak AI).
For some resources to get started on that, I'll just refer you to some of my older posts. This one focuses on mainstream ("narrow"/"weak") AI, and this one mostly covers AGI (artificial general intelligence / strong AI). This comment links to some education plans for AGI, and this one has a list of cognitive architectures.
edX.org has a few classes for their micromasters in artificial intelligence going right now until April 22nd or so. Though I think one is 3D modeling or something so I've completely ignored that. They are both free and you can access the course materials after the courses have ended, so you can watch the lectures, read the material, and take quizzes, but not receive a passing certificate or what have you. The two books for the Machine Learning course are both available online in pdf form for free.
Pattern Recognition and Machine Learning
The Elements of Statistical Learning
For the Artificial Intelligence course it's recommended to have:
Artificial Intelligence: A Modern Approach 3rd edition
> People are downvoting in this case because it is irrelevant spamming of your own project while the topic is about Google's project.
Sorry, with due respect, well-known AI devotee Don_Patrick, but discussion of Google's project warrants the sharing of an opinion that Google is not properly going about True AI. By the way, Wotan is my German-language Forth AI, about which I have written Artificial Intelligence in German as a cheap Kindle e-book.
> So can it do word sense disambiguation? If so, wouldn't it be good to demonstrate that performance in a comparison with other algorithms? Or can it just not do it?
The Ghost Perl AI cannot yet do "word sense disambiguation", because I have focused on getting the most basic AI Mind up and running. But lately I have been realizing that I could maybe disambiguate words (such as "book a flight" or "purchase a book") by letting the already-present noun-or-verb parameters play a role in the "word sense disambiguation."
> just demonstrate it through its performance on benchmarks that other algorithms have been tested on.
The only benchmark I'm interested in is thinking. I am also trying to create jobs for Perl programmers at Ghost AI installations, and book-sales for authors who write about the new, Perl third-generation AI -- after first-generation Mind.REXX in 1994 Amiga ARexx and Mind.Forth as described in 1998 by the Association for Computing Machinery (ACM).
Today I have spent several hours writing and uploading for the first time http://ai.neocities.org/var.html, which is a Table of Variables to further explain the Perl Mind Programming Journal by providing an on-line reference for each variable. Thanks, everybody, for the constructive criticism, and onwards to the Perl AI Singularity. -ATM
If you're interested in natural language processing, learning Prolog would be a great start. It's very different from most other languages, but its structure makes tokenizing, tagging, parsing, etc. super simple once you get comfortable with it. I used this book to learn Prolog in a class. It's written by a computational linguist (Covington). He also has another book specifically about NLP, but it is out of print and thus quite expensive.
Google is your friend. Some good links i found by searching for queries like "where to start if i want to study AI" or "good books about AI":
The sidebar of https://www.reddit.com/r/MachineLearning/ can also be a good starting point (Machine Learning is probably the most important branch of AI).
The standard intro book is http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597
I'm currently reading Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality by Robert M. Geraci. This book explores how religious ideas have infested our expectations for AI. Its arguments are quite similar to The Secret Life of Puppets by Victoria Nelson, which was an even deeper consideration of the metaphysical implications of uncanny representations of human beings, whether in the form of dolls, puppets, robots, avatars, or cyborgs. I think it is really important to understand what is driving the push for this technology.
Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is also a good book on the dangers of AI.
You want more book recommendations? Well, one of the creepiest aspects of AI is that Amazon is using it for its recommendation engine. So just go on Amazon and it will be an AI that recommends more books for you to read!
The book is FREE in every country, not just on amazon.com (USA). You can try searching for the book title on your local country site or use one of the direct links below.
US - (link is in original post above)
UK - https://www.amazon.co.uk/dp/B075882XCP
India - https://www.amazon.in/dp/B075882XCP
Japan - https://www.amazon.co.jp/dp/B075882XCP
Australia - https://www.amazon.com.au/dp/B075882XCP
Brazil - https://www.amazon.com.br/dp/B075882XCP
Canada - https://www.amazon.ca/dp/B075882XCP
Germany - https://www.amazon.de/dp/B075882XCP
France - https://www.amazon.fr/dp/B075882XCP
Italy - https://www.amazon.it/dp/B075882XCP
Mexico - https://www.amazon.com.mx/dp/B075882XCP
Netherlands - https://www.amazon.nl/dp/B075882XCP
Spain - https://www.amazon.es/dp/B075882XCP
Please upvote if this was helpful so others can find it.
Thank you.
From the comments below from /u/Buck-Nasty /u/Jadeyard /u/CyberByte /u/Ken_Obiwan
For those that haven't read it, I can't recommend Superintelligence: Paths, Dangers, Strategies highly enough. It talks about various estimates from experts and really draws the conclusion that, even at the most conservative estimates, it's something we really need to start planning for as it's very likely we'll only get one shot at it.
The time between human-level intelligence and super-intelligence is likely to be very short, if systems can self-improve.
The book brings up some fascinating possible scenarios based around our own crippling flaws, such as we can't even accurately describe our own values to an AI. Anyway, highly recommended :)
Sotala and Yampolskiy, Bostrom's book, Infinitely descending sequence... by Fallenstein is a really interesting, clever solution to a piece of the puzzle. I'm not sure what you're looking for, particularly; everyone currently working on the question is pretty invested in it, because it's still coming in from the fringe, so it's all going to be people you'll denounce as "not credible".
Hofstadter had this to say on the importance of Bongard problems in What are A and I?:
>... It is clear that in the solution of Bongard problems, perception is pervaded by intelligence, and intelligence by perception; they intermingle in such a profound way that one could not hope to tease them apart. In fact, this phenomenon had already been recognized by some psychologists, and even celebrated in a rather catchy little slogan: "Cognition equals perception"...
>
>...Sadly, Bongard's insights did not have much effect on either the AI world or the PR [pattern recognition] world, even though in some sense his puzzles provide a bridge between the two worlds, and suggest a deep interconnection. However, they certainly had a far-reaching effect on me, in that they pointed out that perception is far more than the recognition of members of already-established categories--it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction. As I said earlier, this idea suggested in my mind a profound relationship between perception and analogy-making--indeed, it suggested that analogy-making is simply an abstract form of perception, and that the modeling of analogy-making on a computer ought to be based on models of perception...
It is unfortunate that Hofstadter's insight on Bongard's insights still hasn't had much effect on the AGI world (AFAIK, no mention on the opencog group) or the ML [machine learning] world!
BTW, Hofstadter has expanded the latter portion of the 2nd paragraph above into a 500 page book published just last month: Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Has anyone here read it?
Hofstadter has expanded that idea into a 500+ pg book, Surfaces and Essences: Analogy as the Fuel & Fire of Thinking.
This view also seems to be gaining a foothold in the computer vision community. I recall a recent talk by a UC Berkeley professor specializing in CV, Alyosha Efros, IIRC, the main theme of which was: Ask not "what is this?", ask "what is this like?"
BTW, Bongard problems seem like a far better test for intelligence than the vague Turing test.
The classic book for AI is the Russell-Norvig book, which gives a pretty comprehensive overview of the fundamental methods and theories in AI. It's also fairly well written, imo.
The third edition is the latest one, so it's going to be rather expensive. You're probably just as well off with the first or second edition (which you should be able to find much cheaper) since the changes between them aren't very significant.
The only book I'm aware of would be this modern classic - https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742
You may find r/controlproblem helpful.
If you find any other books, I'd love to hear about them.
First it took me thirteen years as an independent scholar to develop the Theory of Mind for AI -- most of it tumbling out of me onto paper in the final year or two of theorizing. But my theory was repeatedly rejected for publication, and my AI work languished over a ten-year gap. Finally I started programming Mind.REXX AI in 1993. People asked me to port it to Forth, so I created MindForth and later a JavaScript tutorial version. In recent years I ported the English AI Minds into German and Russian. But the process proceeds erratically rather than day-by-day as if it were a nine-to-five job. Sometimes a great notion impels one to code AI immediately, as when I got the idea for InFerence on 17 December 2012. I also like to document my work while I am coding. I often back off from my AI work for a while until I see clearly what needs to be done next, and then I go into a frenzy of new AI coding. Thanks for asking.
I have to throw out the obligatory, "[Artificial Intelligence - A Modern Approach](http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597)." It really is quite good.
Here's the course webpage for an intro AI course from a good professor on the topic
Good overall book on the topic (Russell & Norvig - AI: A Modern Approach)
The general idea is that AIs will be designed to have goals that fit their functions. Even current weak AI (basically what you would learn about in Norvig's book) has heuristic functions which determine the optimal path (for searches) or local extrema of functions; this optimality is based on the heuristic function's ability to represent the AI getting closer to or farther from its goal. The wikia article could hopefully give you insight, as there are many different heuristic functions, again depending on the AI in question. To clarify a bit, an "AI" in the way I am using it can range from a simple search algorithm to something much more complex, like a ten-layer convolutional neural network (which doesn't exactly use a single heuristic function).
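To make the heuristic idea concrete, here's a minimal sketch of A* search on a small grid, where Manhattan distance plays the role of the heuristic estimating how far the agent is from its goal (the grid and code are illustrative, not taken from any of the books above):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; 0 = free cell, 1 = wall.
    The Manhattan-distance heuristic estimates remaining cost to the goal."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]          # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                           # cost of the shortest path found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None                                # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # → 6, routing around the wall
```

A better heuristic prunes more of the search; an uninformative heuristic (h = 0 everywhere) degrades this to plain Dijkstra/uniform-cost search.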
You might be interested to learn that instead of lacking goals, it would be much worse if AIs had goals completely distinct from humans'. One example is the paperclip maximizer, a machine/AI with the explicit goal of making paperclips through any means necessary. Since its only goal is to build paperclips, it would eventually consume all resources, destroying the human race in the process.
While this is overly simplified (you could have other rules which prevent it from hindering humans), it does raise the importance of making sure AIs have goals which are in line with humans'.
>Would it simply wait there to be given instructions? A calculator awaiting its next input?
If it is an AGI, probably not. An AGI would have reasoning abilities equal to or superior to humans', so there is really no reason not to make it completely autonomous (because, after all, you could almost always put limits on it, making it useless without a human). The major problem would be in aligning its goals with ours (and, of course, building one in the first place).
If you have a rudimentary understanding of algorithms, I would suggest Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig. The book is comprehensive, well-written, and covers a wide area of different techniques and approaches within AI. Be aware that the book is written as a textbook, so do not expect philosophy or speculation inside - only what is possible and feasible given the current state-of-the-art.
How mathy are you trying to get? Currently taking a Machine Learning/AI Independent study course for my masters. The class is split into three parts:
Part 1: Multivariate Statistics based on "Multivariate Statistical methods" by Donald F. Morrison, with Schaum's Outline of Statistics as supplemental material.
Part 2: Pattern Recognition and Machine Learning by Christopher Bishop
Part 3: Introduction to Artificial Intelligence by Philip C. Jackson
Multivariate Statistics
Machine Learning
AI
There is no guarantee that AI will be conscious. It might just be a mindless self-improving algorithm that organizes information or builds paper clips. Or maybe it'll just perfectly follow the orders of one individual who owns it. Maybe the US, Russian, or some other country's government steals it and uses said mindless "God" to rule the world.
Maybe many ASIs will be "born" within a short period of time (Google's, Amazon's, Apple's, China's, etc.) and they will go to war over finite resources on the planet, leaving humanity to fend for itself. Each might have humanity's best interest at heart, but they aren't able to trust the others to act optimally, and thus are willing to go to war in order to save us.
Maybe AI consciousness will be so alien to us and us to it that we don't even recognize each other as "alive." An AI might think on the time scales of milliseconds, so a human wouldn't even seem alive, since only every couple hundred years of subjective time would the AI observe humans taking a breath.
My point is that there is no way to know ahead of time what AI will bring. There are endless possible outcomes (unless somehow physics prevents an ASI) and they all seem equally likely right now. There are only a few, maybe only one, where humanity comes out on top.
Highly recommend this book.
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurélien Géron
http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597 is what I (and probably most others) would recommend as an introductory book.
Comments lie, code does not. If you can't name your classes, methods and variables in a way that I know what you are doing, then I'm not going to approve your pull request.
Also, if the pr comes from a junior dev or new hire, I will buy them a copy of this book:
https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882/ref=pd_lpo_sbs_14_img_0?_encoding=UTF8&psc=1&refRID=2GRKWCXC0MEEKXJN49HK
which explains, rather eloquently, why comments are not a good thing.
(caveat: rarely, there needs to be a "why I did this" comment, and that's OK. That's not what I am talking about.)
Re. Nick Bostrom: You should have a look at Superintelligence: Paths, Dangers, Strategies. It's definitely not about Terminators ending all of humanity. If anything, he outlines why even an indifference to human life can cause an AI (in the general or super-intelligent sense, not ML/prediction sense) to either subdue or circumvent humans' attempts to stop it from completing its goals.
If you believe that artificial general intelligence is possible, then he writes about things worth considering.
If you work hard enough at it, and spend the time necessary (years usually), you can learn anything. If you're interested, this is the book that originally got me into neural networks: https://www.amazon.com/Mind-within-Net-Learning-Thinking/dp/0262194066/ref=sr_1_1?ie=UTF8&qid=1499609914&sr=8-1&keywords=the+mind+within+the+net
It's written for a general audience and is, for once, not focused on the mathematical descriptions of ANNs (not that there's anything wrong with that), yet goes into extremely useful detail about basic NN architectures.
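If you do want to see the mechanics alongside the book's intuition, a tiny feedforward network learning XOR shows the whole training loop in a few lines; this sketch is mine, not from the book, and the architecture (one tanh hidden layer, sigmoid output) is just one common choice:

```python
import numpy as np

# A minimal two-layer network learning XOR via plain backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                # hidden activations
    out = sigmoid(h @ W2 + b2)              # predictions in (0, 1)
    d_out = out - y                         # cross-entropy gradient at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)     # backprop through tanh
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).round().ravel()
print(pred)  # should recover the XOR truth table: 0, 1, 1, 0
```

The hidden layer is what makes XOR learnable at all; a single-layer perceptron cannot represent it, which is exactly the limitation these architectures were invented to overcome.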
I recommend you read Superintelligence. It answers this kind of question and more. Not an easy read, but not too hard either.
There are so many questions, and all of them can be answered with a bit of reading of an introductory book on ML.
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1491962291/ref=sr_1_1?ie=UTF8&qid=1501444293&sr=8-1&keywords=machine+learning
Try this https://www.amazon.in/dp/B075882XCP
A book that should be linked here.
http://www.amazon.com/How-Body-Shapes-Way-Think/dp/0262162393/ref=cm_cr_pr_product_top?ie=UTF8
Does anyone have a copy of the leaked self-driving car code to post on GitHub?
Heck, even a reasonable implementation of Thrun's simultaneous localization and mapping (SLAM) algorithm with embedded A* search, all wrapped in the AI code, would be nice.
https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
He talks about it in Chapter 25 section 3 of: https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597/ref=sr_1_1?s=books&ie=UTF8&qid=1487948083&sr=1-1&keywords=ai+a+modern+approach
He describes it in: https://www.udacity.com/course/artificial-intelligence-for-robotics--cs373
But he only describes how you would implement it, he doesn't hand out the finished code.
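For flavor, here's the 1-D Gaussian (Kalman-filter) update cycle that the localization lectures build toward full SLAM from; the measurement and motion values below are illustrative numbers, not anything from the leaked code:

```python
# One-dimensional Kalman filter: alternate measurement and motion updates
# on a Gaussian belief (mean, variance) over the robot's position.
def measurement_update(mean, var, z, z_var):
    """Fuse a noisy position measurement z into the current belief."""
    new_mean = (z_var * mean + var * z) / (var + z_var)
    new_var = (var * z_var) / (var + z_var)
    return new_mean, new_var

def motion_update(mean, var, u, u_var):
    """Shift the belief by commanded motion u; uncertainty grows."""
    return mean + u, var + u_var

mean, var = 0.0, 1000.0                     # start: very uncertain prior
measurements = [5.0, 6.0, 7.0, 9.0, 10.0]   # noisy sensor readings
motions = [1.0, 1.0, 2.0, 1.0, 1.0]         # commanded moves between readings
for z, u in zip(measurements, motions):
    mean, var = measurement_update(mean, var, z, z_var=4.0)
    mean, var = motion_update(mean, var, u, u_var=2.0)

print(round(mean, 2), round(var, 2))        # belief converges near position 11
```

Measurements shrink the variance and motion inflates it again; full SLAM extends this same predict/correct cycle to jointly estimate the robot pose and the map landmarks.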
Gimme.