Top products from r/MachineLearning

We found 72 comments mentioning products on r/MachineLearning and ranked the 222 resulting products by the number of redditors who mentioned them. Here are the top 20.

Top comments that mention products on r/MachineLearning:

u/schmook · 6 pointsr/MachineLearning

Imagine you have a dataset without labels, but you want to solve a supervised problem with it, so you're going to try to collect labels. Let's say they are pictures of dogs and cats and you want to create labels to classify them.

One thing you could do is the following process:

  1. Get a picture from your dataset.
  2. Show it to a human and ask if it's a cat or a dog.
  3. If the person says it's a cat or dog, mark it as a cat or dog.
  4. Repeat.

    (I'm ignoring problems like pictures that are difficult to classify or lazy or adversarial humans giving you noisy labels)

    That's one way to do it, but is it the most efficient way? Imagine all your pictures are from only 10 cats and 10 dogs. Suppose they are sorted by individual. When you label the first picture, you get some information about the problem of classifying cats and dogs. When you label another picture of the same cat, you gain less information. When you label the 1238th picture from the same cat you probably get almost no information at all. So, to optimize your time, you should probably label pictures from other individuals before you get to the 1238th picture.

    How do you learn to do that in a principled way?

    Active Learning is a task where, instead of first labeling the data and then learning a model, you do both simultaneously: at each step you have a way to ask the model which example you should manually label next for it to learn the most. You can then stop when you're satisfied with the results.

    You could think of it as a reinforcement learning task where the reward is how much you'll learn for each label you acquire.

    The reason I, as a Bayesian, like active learning is that there's a very old literature in Bayesian inference about what's called Experiment Design.

    Experiment Design is the following problem: suppose I have a model of some physical system, and I want to take measurements to gain information about the model's parameters. Those measurements typically have control variables that I must set, right? What settings for those controls will, if I measure at them, give me the most information about the parameters?

    As an example: suppose I have an electric motor, and I know that its angular speed depends only on the voltage applied to its terminals. And I happen to have a good model for it: the speed grows linearly with voltage up to a given value, and then it stays constant. This model has two parameters: the slope of the linear part and the point where the speed saturates. The first looks easy to determine; the second is a lot more difficult. I'm going to measure the angular speed at a bunch of different voltages to determine those two parameters. The set of voltages I measure at is my control variable. So Experiment Design is a set of techniques to tell me which voltages to measure at to learn the most about the values of the parameters.

    I could do Bayesian Iterated Experiment Design. I have an initial prior distribution over the parameters, and use it to find the best voltage to measure at. I then use the measured angular velocity to update my distribution over the parameters, and use this new distribution to determine the next voltage to measure at, and so on.

    How do I determine the next voltage to measure at? I need a loss function of some kind. One possible loss function is the expected increase in the accuracy of my physical model if I measure the angular velocity at a voltage V and use the result as a new point to fit the model. Another is how much I expect the entropy of my distribution over the parameters to decrease after measuring at V (the conditional mutual information between the parameters and the measurement at V).
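
To make the iterated procedure concrete, here's a minimal numerical sketch of a single design step for the motor example. Everything specific in it (the piecewise-linear model w = a * min(V, Vc), the noise level, the grid prior, the voltage ranges) is my own assumption for illustration, not something from the original comment:

```python
import numpy as np

# Assumed model: angular speed w = a * min(V, Vc), Gaussian measurement
# noise with known sigma, and a discrete grid prior over the two
# parameters (slope a, saturation voltage Vc).

sigma = 0.5                                  # assumed known noise level
a_grid = np.linspace(0.5, 2.0, 40)           # candidate slopes
vc_grid = np.linspace(2.0, 10.0, 40)         # candidate saturation voltages
A, VC = np.meshgrid(a_grid, vc_grid)
posterior = np.ones_like(A) / A.size         # start from a flat prior

def predict(V):
    # Model prediction for every parameter combination on the grid.
    return A * np.minimum(V, VC)

def predictive_entropy(V, ys=np.linspace(-2.0, 25.0, 300)):
    # Entropy of the marginal predictive distribution of a measurement at V.
    # The noise entropy H(y | a, Vc) is constant for fixed sigma, so the V
    # that maximizes this marginal entropy also maximizes the mutual
    # information between the parameters and the measurement.
    means = predict(V).ravel()
    w = posterior.ravel()
    dens = (w[:, None] * np.exp(-(ys[None, :] - means[:, None]) ** 2
                                / (2 * sigma ** 2))).sum(axis=0)
    dens /= dens.sum()
    return -np.sum(dens * np.log(dens + 1e-12))

def update(V, y_obs):
    # Bayes update of the grid posterior after measuring y_obs at voltage V.
    global posterior
    posterior = posterior * np.exp(-(y_obs - predict(V)) ** 2 / (2 * sigma ** 2))
    posterior /= posterior.sum()

candidates = np.linspace(0.5, 12.0, 24)
best_V = max(candidates, key=predictive_entropy)
print("next voltage to measure at:", best_V)
# After actually measuring, fold the result in and repeat:
# update(best_V, y_obs=measured_speed)
```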

    Active Learning is just iterated experiment design for building datasets. The control variable is which example to label next and the loss function is the negative expected increase in the performance of the model.

    So, now your procedure could be:

  1. Start with:
    • a model to predict whether a picture is a cat or a dog. It's probably a shit model.
    • a dataset of unlabeled pictures
    • a function that takes your model and a new unlabeled example and spits out an expected reward if you label that example
  2. Do:
    1. For each example in your current unlabeled set, calculate the reward (see the sketch below).
    2. Choose the example that has the biggest reward and label it.
    3. Continue until you're happy with the performance.
  3. ????
  4. Profit
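
Here's a minimal sketch of that loop in Python with scikit-learn. The reward here is the simplest common choice, predictive entropy (uncertainty sampling), and the dataset, model, and stopping rule are all stand-ins of mine, not the commenter's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data; the "oracle" is just the held-back true label.
X, y_true = make_classification(n_samples=500, n_features=10, random_state=0)

# Seed the labeled pool with a few examples of each class.
labeled = list(np.where(y_true == 0)[0][:5]) + list(np.where(y_true == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in labeled]
model = LogisticRegression(max_iter=1000)

for step in range(20):                    # or: continue until you're happy
    model.fit(X[labeled], y_true[labeled])
    # 1. For each example in the unlabeled set, calculate the reward
    #    (here: predictive entropy, i.e. uncertainty sampling).
    proba = model.predict_proba(X[unlabeled])
    reward = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    # 2. Choose the example with the biggest reward and "label" it.
    pick = unlabeled[int(np.argmax(reward))]
    labeled.append(pick)
    unlabeled.remove(pick)

print("accuracy after active labeling:", model.score(X, y_true))
```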

    Or you could be a lot more clever than that and use proper reinforcement learning algorithms. Or you could be even more clever and use "model-independent" (not really...) rewards like the mutual information, so that you don't over-optimize the resulting data set for a single choice of model.

    I bet you have a lot of concerns about how to do this properly: how to avoid overfitting, how to keep proper train/validation/holdout sets for cross-validation, etc. Those are all valid concerns for which there are answers, but this is the gist of the procedure.

    You could do Active Learning and iterated experiment design without ever hearing about Bayesian inference. It's just that those problems are natural to frame in terms of Bayesian inference and information theory.

    About the jargon: there's no way to understand it without studying Bayesian inference and machine learning from this Bayesian perspective. I suggest a few books:

  • Information Theory, Inference, and Learning Algorithms by David MacKay - you can get the pdf or epub for free at this link.

    It's a pretty good introduction to information theory and Bayesian inference, and how they relate to machine learning. The machine learning part might be too introductory if you already know and use ML.

  • Bayesian Reasoning and Machine Learning by David Barber - for which you can also get a free pdf here

    Some people don't like this book, and I can see why, but if you want to learn how Bayesians think about ML, I think it's the most comprehensive book.

  • Probability Theory, the Logic of Science by E. T. Jaynes. Free pdf of the first few chapters here.

    More of a philosophical book. This is a good book to understand what Bayesians find so awesome about Bayesian inference, and how they think about problems. It's not a book to take too seriously, though. Jaynes was a very idiosyncratic thinker, and the tone of some of the later chapters is very argumentative and defensive. Some would even say borderline crackpot. Read the chapter about plausible reasoning, and if that doesn't make you say "Oh, that's kind of interesting...", then never mind. You'll never be convinced of this Bayesian crap.

u/RhoTheory · 33 pointsr/MachineLearning

Grad school for machine learning is pretty vague, so here are some general resources I think would be good for an incoming CS grad student or an undergraduate CS researcher with a focus on deep learning. In my opinion, the courses you mentioned should be a sufficient foundation for diving into deep learning, but these resources cover some foundational material as well.

  • Kaggle is for machine learning in general. It provides datasets and hardware. It has some nice tutorials and you can look at what other people did.
  • Google has an online crash course on Machine Learning.
  • Hands-On Machine Learning with Scikit-learn and Tensorflow is a great book for diving into machine learning with little background. The O'Reilly books tend to be pretty good.
  • MIT Intro to Deep Learning provides a good theoretical basis for deep learning specifically.
  • MIT Intro to AI. This is my favorite online lecture series of all time. It provides a solid foundation in all the common methods for AI, from neural nets to support vector machines and the like.
  • Tensorflow is a common framework for deep learning and provides good tutorials.
  • Scikit-learn is a framework for machine learning in Python. It'd be a good idea to familiarize yourself with it and the algorithms it provides; a minimal example of its workflow follows this list. The link is to a bunch of examples.
  • Stanford's deep learning tutorial provides a more mathematical approach to deep learning than the others I've mentioned; basic vector calc, linear algebra, and stats should be enough to handle it.
  • 3Blue1Brown is a math youtuber that animates visual intuitions behind many rather high-level concepts. He has a short series on the math of neural networks.
  • If you are going to be dealing with hardware for machine learning at all, this paper is the gold standard for everything you'd need to know. Actually, even if you aren't dealing with the hardware, I'd recommend you look at the sections on software. It is fairly high-level, however, so don't be discouraged if you don't get some of it.
  • Chris Olah's Blog is amazing. His posts vary from explanations of complex topics very intuitively to actual research papers. I recommend "Neural Networks, Manifolds, and Topology".
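
As a small taste of the scikit-learn workflow mentioned in the list above, here's a minimal sketch (the dataset and model are arbitrary choices of mine); the fit/score pattern it shows carries over to nearly every estimator in the library:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a toy dataset, split it, fit a model, evaluate it.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```
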
u/brational · 2 pointsr/MachineLearning

I was in your shoes not long ago, though with a much different background and job situation.

> I guess maybe my question boils down to do I need to at some point go to grad school?

Yes but don't worry about grad school right now. It's expensive and you'll do better with it once you've been working in the real world. Try and get work to pay for it too.

>I'm not against it, but would rather learn on my own and make it that way, is that feasible?

Yes, you can start using ML techniques at work without formal training. Don't let that stop you. Get a good book - I use Kevin Murphy's, and I also have a copy of ESL (Elements of Statistical Learning) on my desk from the work library (it's a free pdf online, though).

ML is a broad and growing field, so if you have the mindset that you need to cover it all before you start using it, you'll be sticking thumbs up your ass for a few years.

A better approach is to start from your specific data. As you're probably familiar with from using SQL, standard textbook techniques or something from a research paper rarely apply exactly to what you're working with, so it's almost better to approach your problem directly. Explore the data, look at the data, study the data (in a stats fashion), and then look into what an intelligent program could do to analyze it better. In the meantime, you can study more general ML topics after work.

u/[deleted] · 6 pointsr/MachineLearning

May I ask how you are beginning to skim the surface of ML? If you're reading methods papers or something, I could see how you could start to feel like it was all really esoteric. There are a lot of more applied journals and conferences out there, even for specific fields like biology. Maybe something in your field would be a good entry point?

There are tons of ML methods that are super generalizable - not at all overly specific. At my work (biotech), people use off-the-shelf computer vision algorithms (segmentation, registration, etc.) all the time. They use clustering and classifiers as well; classifiers in particular are super easy to use off the shelf. A lot of these tools have been incorporated into statisticians' bags of tricks. Certain areas of ML really do feel like "new stats" to me.

Bayesian networks are another one that is pretty broadly applicable and sees a lot of use in computational biology, e.g. inferring gene regulatory networks, modelling genetic diversity, etc. There are bioinformatics books out there that are chock-full of ML-flavored algorithms; this one is a classic: http://www.amazon.com/Biological-Sequence-Analysis-Probabilistic-Proteins/dp/0521629713 though I'm not sure it'd be quite up your alley for synthetic & systems bio.

I googled and found a couple of conferences - it might be worth skimming the proceedings:

http://mlsb.cc/

http://www.eccb14.org/program/workshops/mlsb

u/Neutran · 2 pointsr/MachineLearning

Count me in!
I really want to read through this book: "https://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262193981" by Richard Sutton, as well as a few other classical ML books, like Christopher Bishop's and Kevin Murphy's.

I know many concepts already, but I've never studied them in a systematic manner (e.g. following a 1000-page book from end to end). I hear from multiple friends that it's super beneficial in the long run to build a strong mathematical/statistical foundation.
My current mode of "googling here and there" might work in the short term, but it won't help me invent new algorithms or improve the state of the art.

u/PostmodernistWoof · 3 pointsr/MachineLearning

+1 for top-down learning approaches. There's so much work going on to democratize the use of ML techniques in general software development that, depending on where you want to go, there's little need to start with the classic theory.

IMHO, the classic ML literature suffers a bit from decades of theorists who never had the computing resources (or the data) to make big practical advances, and it tends to be overly dense and mathematical because that's what they spent their time on.

But really it depends on your goals. Which category do you fall into?

  1. Get a PhD in math, study computer science, get a job as a data scientist at Google (or equivalent) and spend your days reading papers and doing cutting-edge research in the field.

  2. Learn classic and modern ML techniques to apply in your day to day software development work where you have a job title other than "data scientist".

  3. You've heard about Deep Learning and AlphaGo etc. and want to play around with these things and learn more about them without necessarily having a professional goal in mind.

    For #1 the Super Harsh Guide is, well, super harsh, but it has good links for the bottom-up mathematical approach to the whole thing.

    For #2 you should probably start looking at the classic ML techniques as well as the trendy Deep Learning stuff. You might enjoy:

    https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1491962291

    as a place to start and immediately start playing around with stuff.

    For #3 any of the TensorFlow getting started tutorials are good, along with all of Martin Görner's machine learning/deep learning/TensorFlow "without a PhD" videos on YouTube. Here's one of the more recent ones:

    https://www.youtube.com/watch?v=vaL1I2BD_xY
u/TheMiamiWhale · 1 pointr/MachineLearning

It really depends on your comfort and familiarity with the topics. If you've seen analysis before you can probably skip Rudin. If you've seen some functional analysis, you can skip the functional analysis book. Convex Optimization can be read in tandem with ESL, and is probably the most important of the three.

Per my other comment, if your goal is to really understand the material, it's important that you understand all the math, at least well enough to read it. Unless you want to do research, you don't need to be able to reproduce all the proofs (though trying to is a good way to gauge your depth of understanding). In terms of bang for your buck, ESL and Convex Optimization are probably the two I'd focus on. Another great book is Deep Learning, which is extremely approachable with a modest math background, IMO.

u/andreyboytsov · 1 pointr/MachineLearning

The classic Russell & Norvig textbook is definitely worth reading. It starts from the basics and goes to quite advanced topics:
http://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597/
Udacity has an AI class that follows some chapters of that book.

Murphy's textbook builds ML from the ground up, starting from basics of probability theory:
http://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020/
(I see, it was already recommended)

Coursera has a whole machine learning specialization (Python) and the famous ML class by Andrew Ng (Matlab).

I hope it helps. Good luck!

u/blindConjecture · 3 pointsr/MachineLearning

That was a phenomenal article. Extremely long (just like every piece of writing associated with Hofstadter), but excellent nonetheless. I'm admittedly sympathetic to Hofstadter's ideas, not least because of my combined math/cognitive science background.

There was a quote by Stuart Russell, who helped write the book on modern AI, that really stood out to me, and I think expresses a lot of my own issue with the current state of AI:

“A lot of the stuff going on is not very ambitious... In machine learning, one of the big steps that happened in the mid-’80s was to say, ‘Look, here’s some real data—can I get my program to predict accurately on parts of the data that I haven’t yet provided to it?’ What you see now in machine learning is that people see that as the only task.”

This is one of the reasons I've started becoming very interested in ontology engineering. The hyperspecialization of today's AI algorithms is what makes them so powerful, but it's also the biggest hindrance to making larger, more generalizable AI systems. What the field is going to need to get past its current "expert systems" phase is a more robust language through which to represent and share the information encoded in our countless disparate AI systems. \end rant

u/upulbandara · 2 pointsr/MachineLearning

I think it is completely possible. I'm an ML engineer with an M.Sc. in Computer Science. These days there are so many avenues (MOOCs, Kaggle, and books) for learning ML, but I believe the best approach would be:

  1. Buy a good machine learning book. My favorite one is Pattern Recognition and Machine Learning by Christopher M. Bishop. URL: https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738
  2. When you read the book, implement the ML algorithms using Python (or R, Julia, etc.)
  3. Pick a few ML-related projects that are completely outside your comfort zone (for example, a toy version of Tensorflow) and somehow complete them.
  4. Create a Github account and push your projects/artifacts.
u/BullockHouse · 95 pointsr/MachineLearning

I mean, only if you think we're tens of thousands of years away from powerful, general-purpose software agents. If you survey actual experts, they're pretty uncertain (and vulnerable to framing effects), but in general they think less than a century is pretty plausible.

So it's closer to somebody looking at the foundational research in nuclear physics and going "hey guys, this is going to be a real fucking problem at some point."

Which is pretty much what Einstein did (and started the Manhattan project and a pretty significant intelligence operation against the development of a German nuclear weapon).

EDIT: Also, if anyone's interested, the same blogger made a rundown of the opinions of luminaries in the field on AI risk in general. Opinions seem to be split, but there are plenty of bright people who know their shit who take the topic seriously. For those who aren't familiar with the topic and think everyone's just watched too much bad sci-fi, I recommend Bostrom.

u/latent_z · 1 pointr/MachineLearning

I would draw a distinction between "complex" algorithms/methods and simple/basic ones. You seem to be at a stage where it's better to set aside all the complex methods and focus on the simple, basic ones: simple because they don't require a lot of mathematical knowledge, and basic because further theory is built upon them. This excludes, for now, all the recently published literature.

I suggest you get one book that will ease this process, such as Bishop's. Just start with the basics of maximum likelihood and posterior inference with simple Gaussians (a tiny sketch follows below). I assure you that this is basic, in the sense that you will recognize and use this piece of knowledge in most advanced papers. Mixtures of Gaussians and the EM algorithm are also a basic topic, as are neural networks (the simple sigmoid fully-connected kind).

Just make sure that you know these three topics extremely well and learning the rest will be slightly easier.
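
To make the first of those topics concrete, here's a tiny sketch with made-up numbers: the closed-form maximum-likelihood estimates for a Gaussian, and the conjugate posterior update for its mean assuming the variance is known:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=50)   # pretend observations

# Maximum likelihood: closed form for a Gaussian.
mu_ml = data.mean()
var_ml = data.var()                              # MLE uses 1/N, not 1/(N-1)

# Posterior inference for the mean: N(mu0, tau0^2) prior, known variance.
mu0, tau0, sigma = 0.0, 10.0, 1.5
n = len(data)
tau_post = 1.0 / np.sqrt(n / sigma**2 + 1.0 / tau0**2)
mu_post = tau_post**2 * (data.sum() / sigma**2 + mu0 / tau0**2)

print(f"MLE:       mu={mu_ml:.3f}, var={var_ml:.3f}")
print(f"Posterior: mu={mu_post:.3f} +/- {tau_post:.3f}")
```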

BTW, this is a post for /r/MLQuestions or /r/learnmachinelearning

u/LazyAnt_ · 11 pointsr/MachineLearning

I wouldn't say it's about Neuroscience, but it covers ML/AI. The Master Algorithm is a really good book. It can also serve as an introduction to a ton of different AI algorithms, from clustering to neural networks. It's short and easy to read, I highly recommend it.

u/noman2561 · 2 pointsr/MachineLearning

Well, I do research in pattern recognition and computer vision, so I'll try to answer this. An image is a grid of sensor readings. Each reading from a sensor is called a pixel, which is the feature vector for that location in the image plane. Features based on spectral characteristics, spatial characteristics, and even motion characteristics (in video) may be derived from the original input (the reading from the sensor). Transformations are applied to the input that consider different aspects of the pixel's spectral components ([R, G, B] - tristimulus), and a number of methods exploit spatial correlation too. These features are then used in ML systems as part of the feature vector ([T1, T2, T3, F1, F2, F3, F4, ...]); a small sketch of this construction follows at the end of this comment. As far as books, I learned filtering methods using

"Two-Dimensional Signal and Image Processing" -Lim

I learned pattern recognition using

"Pattern Recognition" -Theodoridis and Koutroumbas

and

"Pattern Recognition and Machine Learning" -Bishop

The last one approaches the subject from more of a CS side but doesn't go as in-depth. The field of CV/PR is pretty large and includes a lot of methods that aren't covered in these books. I would recommend using OpenCV or Matlab to handle images. My personal preference is Python, but C++ and Matlab are both close seconds.
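
Here's the small sketch of the per-pixel feature-vector construction promised above; the particular transforms (mean intensity, image gradients) are just illustrative choices of mine:

```python
import numpy as np

img = np.random.rand(64, 64, 3)                 # stand-in for a sensor image

r, g, b = img[..., 0], img[..., 1], img[..., 2]
intensity = (r + g + b) / 3.0                   # a simple spectral transform
gy, gx = np.gradient(intensity)                 # spatial features: local gradients

# Stack everything into one feature vector per pixel:
# [R, G, B, intensity, grad_x, grad_y]
features = np.dstack([img, intensity, gx, gy]).reshape(-1, 6)
print(features.shape)                           # (4096, 6): one row per pixel
```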

u/LmpPst · 2 pointsr/MachineLearning

If you want something super beginner-friendly, Data Smart by John Foreman is probably the best. It isn't free, and it is very basic.

http://www.amazon.com/Data-Smart-Science-Transform-Information/dp/111866146X

u/majordyson · 29 pointsr/MachineLearning

Having done an MEng at Oxford where I dabbled in ML, the 3 key texts that came up as references in a lot of lectures were these:

Pattern Recognition and Machine Learning (Information Science and Statistics) https://www.amazon.co.uk/dp/0387310738/ref=cm_sw_r_cp_apa_i_TZGnDb24TFV9M

Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning Series) https://www.amazon.co.uk/dp/0262018020/ref=cm_sw_r_cp_apa_i_g1GnDb5VTRRP9

(Pretty sure Murphy was one of our lecturers actually?)

Bayesian Reasoning and Machine Learning https://www.amazon.co.uk/dp/0521518148/ref=cm_sw_r_cp_apa_i_81GnDbV7YQ2WJ

There were ofc others, and plenty of other sources and references too, but you can't go buying dozens of textbooks, not least cuz they would repeat the same things.
If you need some general maths reading too, then pretty much all the useful (non-specialist) maths we used over 4 years is in this:
Advanced Engineering Mathematics https://www.amazon.co.uk/dp/0470646136/ref=cm_sw_r_cp_apa_i_B5GnDbNST8HZR

u/IborkedyourGPU · -2 pointsr/MachineLearning

I kind of see your point, but I don't completely agree. As I said already, I know something about active research in this field: enough, as a matter of fact, to be able to read these books:

https://www.amazon.com/Understanding-Machine-Learning-Theory-Algorithms/dp/1107057132
https://www.amazon.com/Foundations-Machine-Learning-Adaptive-Computation/dp/0262039400/
https://www.amazon.com/High-Dimensional-Probability-Introduction-Applications-Probabilistic/dp/1108415199/

However, like most researchers, I mostly focus on my specific subfield of machine learning. And every now and then I'd like to read something about my job that doesn't feel like work (even a professional football player may want to kick a ball for fun every now and then 😉). So I was looking for a general overview of machine learning that isn't too dumbed down according to experts (otherwise I wouldn't have fun reading it), but that at the same time isn't a huge reference textbook. After all, this would just be a leisure read; it shouldn't become work after work.

That's why I asked here, rather than on r/LearnMachineLearning. However, if other users also feel I should ask there, I will reconsider.

u/illogical_vectors · 2 pointsr/MachineLearning

The Udacity machine learning track you've probably seen is actually wonderful. It does a good job of scaling from entry level (even down to basic data analysis) up to DNNs. They charge for the nano-degree, but you can access all of the lectures without it.

As far as reading papers, I would actually recommend against it at this point. They're extremely narrow unless you're actually doing research into new techniques, and if you're mostly looking to build a portfolio for employers, they're not a good place to start. If you're looking for a reading source, Bishop's Pattern Recognition and Machine Learning is one of my favorites.

u/videoj · 2 pointsr/MachineLearning

O'Reilly has published a number of practical machine learning books such as Programming Collective Intelligence: Building Smart Web 2.0 Applications and Natural Language Processing with Python that you might find good starting points.

u/yudlejoza · 2 pointsr/MachineLearning

Here's my radical idea that might feel over-the-top and some here might disagree but I feel strongly about it:

In order to be a grad student in any 'mathematical science', it's highly recommended (by me) that you have the mathematical maturity of a graduated math major. That also means you have to think of yourself as two people: a mathematician, and a mathematical scientist (a machine learner, in your case).

AFAICT, your weekends, winter break, and next summer are jam-packed if you prefer self-study. Or, if you prefer classes, you get things done in fall and spring.

Step 0 (prereqs): You should be comfortable with high-school math, plus calculus. Keep a calculus text handy (Stewart, old edition okay, or Thomas-Finney 9th edition) and read it, and solve some problem sets, if you need to review.

Step 0b: While you're doing this, forget about machine learning, and don't rush through this stuff. If you get stuck, seek help/discussion instead of moving on (I mean, do move on and attempt other problems, but don't forget to come back and get unstuck). As a reminder, math is learnt by doing, not just reading. Resources:

  • math subreddit
  • math.stackexchange.com
  • math on irc.freenode.net

  • the math department of your college (don't forget that!)


    Here are two possible routes, one minimal, one less-minimal:

    Minimal

  • Get good with proofs/math-thinking. Texts: One of Velleman or Houston (followed by Polya if you get a chance).
  • Elementary real analysis. Texts: One of Spivak (3rd edition is more popular), Ross, Burkill, Abbott. (If you're up for two texts, then Spivak plus one of the other three).


    Less-minimal:

  • Two algebras (linear, abstract)
  • Two analyses (real, complex)
  • One or both of geometry, and topology.


    NOTE: this is pure math. I'm not aware of what additional material you'd need for machine-learning/statistical math. Therefore I'd suggest skipping the less-minimal route.
u/SnOrfys · 2 pointsr/MachineLearning

Data Smart

Whole book uses excel; introduces R near the end; very little math.

But learn the theory (I like ISLR); you'll be better for it and will screw up much less.

u/pete0273 · 1 pointr/MachineLearning

It's only $72 on Amazon. It's mathematical, but without following the theorem-proof style of math writing.

The first 1/3 of the book is a review of Linear Algebra, Probability, Numerical Computing, and Machine Learning.

The middle 1/3 of the book is tried-and-true neural nets (feedforward, convolutional, and recurrent). It also covers optimization and regularization.

The final 1/3 of the book is bleeding edge research (autoencoders, adversarial nets, Boltzmann machines, etc.).

The book does a great job of foreshadowing. In chapters 4-5 it frames problems with the algorithms being covered, and mentions how methods from the final 1/3 of the book are solving them.

https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/

u/Kiuhnm · 2 pointsr/MachineLearning

Anyway, a great book that's often recommended is All of Statistics. You can't go wrong with that book.

u/bluecoffee · 8 pointsr/MachineLearning

If you're having to ask this, it means you haven't read enough textbooks for reading papers to make sense.

What I mean is that to make sense of most research papers you need a certain level of familiarity with the field, and the best way to achieve that familiarity is by reading textbooks. Thing is, once you've read those textbooks, you'll have acquired enough familiarity with the field to identify which papers you should focus on studying.

Now go read MLAPP cover to cover.

u/dfmtr · 2 pointsr/MachineLearning

You can read through a machine learning textbook (Alpaydin's and Bishop's books are solid), and make sure you can follow the derivations. Key concepts in linear algebra and statistics are usually in the appendices, and Wikipedia is pretty good for more basic stuff you might be missing.

u/cavedave · 0 pointsr/MachineLearning

This isn't a job posting. I am posting this for a discussion raised by a website I have no connection with.

Firstly, these are interesting ideas and seem ideal for blockchain-based business models.

Secondly, I think the question at the end about whether these jobs suit men or women is a good one.

Thirdly, on a Weapons of Math Destruction level, what does it mean to do jobs affecting people's lives that involve only math and never meeting the people?

I posted this to start a discussion about these particular ideas and the concept of interaction-free jobs, and I'd like to hear your opinion.

u/hurtja · 1 pointr/MachineLearning

I would start with reading.

For Neural Networks, I'd do:

  1. Deep Learning (Adaptive Computation and Machine Learning series) https://www.amazon.com/dp/0262035618/ref=cm_sw_r_cp_apa_i_nC11CbNXV2WRE

  2. Neural Networks and Learning Machines (3rd Edition) https://www.amazon.com/dp/0131471392/ref=cm_sw_r_cp_apa_i_OB11Cb24V2TBE

    For an overview of NNs, fuzzy logic systems, and evolutionary algorithms, I recommend:

    Fundamentals of Computational Intelligence: Neural Networks, Fuzzy Systems, and Evolutionary Computation (IEEE Press Series on Computational Intelligence) https://www.amazon.com/dp/1119214343/ref=cm_sw_r_cp_apa_i_zD11CbWRS95XY
u/gtani · 6 pointsr/MachineLearning

Don't worry - you've demonstrated the ability to figure out whatever you need to get hired; you should worry more about finding a place to live. Probably you should buy one of those shirts that says "Keep calm and carry on". You could cram on Java performance tuning or kernel methods or Hadoop or whatever and still be handed a project that doesn't use it. Here are some "curricula", free books, etc.:

http://web.archive.org/web/20101102120728/http://measuringmeasures.com/blog/2010/3/12/learning-about-machine-learning-2nd-ed.html

http://blog.zipfianacademy.com/post/46864003608/a-practical-intro-to-data-science

http://metaoptimize.com/qa/questions/186/good-freely-available-textbooks-on-machine-learning

http://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/product-reviews/0262018020/ (first review)

--------



http://people.seas.harvard.edu/~mgelbart/glossary.html

http://www.quora.com/Machine-Learning

http://www.quora.com/Machine-Learning-Applications

u/Ken_Obiwan · 6 pointsr/MachineLearning

What worries me is that this advance happened 10 years earlier than it was supposed to. And the DeepMind guys think they could have human-level AI within a few decades.

In other words, it looks like human-level AIs may be something we encounter significantly sooner than we do "overpopulation on Mars", to quote Andrew Ng. I hope Ng is at least considering reading Superintelligence or signing the FLI AI Safety research letter.

u/bbsome · 2 pointsr/MachineLearning

Depends what your goal is. Since you have a good background, I would not suggest any stats book or deep learning first. Read through Probability Theory: The Logic of Science and then go for Bishop's Pattern Recognition or Barber's Bayesian Reasoning and ML. If you understand the first book and one of the latter two, I think you are ready for anything.

u/fisat · 8 pointsr/MachineLearning

Read Hands on Machine Learning with Scikit-learn and Tensorflow. This book is awesome.

https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1491962291

u/thecity2 · 3 pointsr/MachineLearning

I would recommend The Elements of Statistical Learning (the "ESL" book) for someone with your level of knowledge (they have an easier intro book, "ISL", but it seems you could probably head straight for this one):

http://www.amazon.com/Elements-Statistical-Learning-Prediction-Statistics/dp/0387848576/ref=sr_1_1?ie=UTF8&qid=1463088042&sr=8-1&keywords=elements+of+statistical+learning

u/banermatt · 1 pointr/MachineLearning

If you want to learn the algorithms by programming them, there's Programming Collective Intelligence, which is really good. It really helped me to see the algorithms at work and thereby understand them deeply.

u/sparsecoder · 7 pointsr/MachineLearning

You might find Wasserman's All of Statistics useful:
http://www.amazon.com/All-Statistics-Statistical-Inference-Springer/dp/0387402721/

It's a very concise yet broad introductory statistics text with a slant towards data mining/machine learning.

u/ANONYMOUSACCOUNTLOL · 2 pointsr/MachineLearning

May I suggest doing a search in r/statistics and r/MachineLearning for learning-foundation books for ML? I think that'll turn up more than enough hits to point you in the right direction.

I always talk up the one I used, which I liked:
http://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738

u/stewedRobot · 5 pointsr/MachineLearning

I'd grab beautifulsoup + scikit-learn + pandas from continuum.io (they're part of the standard Anaconda download), launch Spyder, and follow through this:
http://sebastianraschka.com/Articles/2014_naive_bayes_1.html

You can get a RAKE impl here too : https://github.com/aneesha/RAKE

Doing recommendations on the web like that is covered in an accessible way in "Programming Collective Intelligence".
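
For a sense of what the linked naive Bayes material boils down to in scikit-learn, here's a minimal sketch with toy data of my own invention:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up corpus: 1 = spam, 0 = not spam.
docs = ["free money offer", "meeting at noon", "cheap money fast", "lunch meeting today"]
labels = [1, 0, 1, 0]

# Turn text into word-count vectors, then fit a multinomial naive Bayes model.
vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = MultinomialNB().fit(X, labels)

# Classify a new message.
print(clf.predict(vec.transform(["free offer today"])))
```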

u/c3534l · 3 pointsr/MachineLearning

The first one isn't too far off: Amazon link to a book

Granted, it's not distributed, but I read that book given its high rating, and the author really jumps through hoops trying to figure out how you'd do k-means in a spreadsheet without macros or anything.

u/PLLOOOOOP · 3 pointsr/MachineLearning

Is this the Bishop book you guys are talking about?

u/DoorsofPerceptron · 10 pointsr/MachineLearning

For a maths heavy book, I'd go with Bishop's Pattern recognition and Machine Learning.

Check out the reviews here: http://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738

u/Eurchus · 1 pointr/MachineLearning

Sutton and Barto wrote the standard text in reinforcement learning.

Here is the AlphaGo paper in Nature.

The FAQ has a list of resources for learning ML including links to Hinton's Coursera course on neural nets.

u/VelveteenAmbush · 9 pointsr/MachineLearning

> I can't help but cringe every time he assumes that self-improvement is so easy for machines so that once it becomes possible at all, AI skyrockets into superintelligence in a matter of weeks.

He doesn't assume it, he concludes it after discussing the topic in depth.

Pages 75-94 of his book. Preview available via Amazon.

u/madebyollin · 10 pointsr/MachineLearning

The Bostrom book is the go-to reference for the sort of AI risk arguments that Musk and others endorse. Elon has previously linked to this WaitButWhy post summarizing the argument from the book, so I would read that if you're curious.

(Not that I agree with any of it, but linking since you asked)

u/tshadley · 2 pointsr/MachineLearning

Check out the reviews of his 2016 blockchain book on Amazon. I'd say there's a pattern here.

"It's horrible and filled with unexplained terminology"

"The author uses Word Salad liberally, throwing around terms and marketing boilerplate, while defining very little."

"This really feels like it was rushed out to cash in on the current hype. I was very disappointed with this."

u/fbhc · 1 pointr/MachineLearning

It is unfortunate that anyone ever took him seriously to begin with. But it is also clear that his audience is built primarily of those with little to no background in computer science, or those looking for a quick and easy way into a domain they believe they can make lots of money from. His audience has proven to be either gullible or just the epitome of the internet age, where people are more interested in headlines than actual content.

https://www.amazon.com/Decentralized-Applications-Harnessing-Blockchain-Technology/dp/1491924543

Siraj wrote a book that can be considered a prelude to much of this. The difference is that the book was read, and heavily ripped apart, by people who actually know what they are talking about, and who can objectively criticize his work.

It makes sense that Siraj drifted into the YouTube space. Want to learn a programming language? Real engineers don't watch YouTube tutorials. Want to learn Machine Learning concepts? Real engineers don't seek out a collection of buzzwords.

This industry requires work. Siraj is a marketer.

u/PoulMadsen · 3 pointsr/MachineLearning

I don't work in genomics specifically, but we do a lot of next-generation sequencing. I am a biologist with an interest in machine learning, so let me try to summarize where people in biology use it:

Microarrays: Cancer research in particular uses this, but basically every biology discipline has some applications. What you get is thousands of signal intensities per sample, each representing the expression of a gene, and what you are interested in is finding genes that behave differently from sample to sample. This is an example of a high-dimensionality problem, where the number of features is much larger than the number of samples. If you want some idea of how much work has been done in this area, take a look at this [list](http://www.geneontology.org/GO.tools.microarray.shtml). You can find more or less all kinds of statistical methods here. As a biologist, I should probably mention that I believe microarrays have problems with reproducibility that no amount of data analysis will solve.

Gene prediction: This is a typical genomics problem in which we are given a long DNA sequence and told to identify the genes in it. Genes have some telltale signs, but these can be located with slight differences from each other and might be completely absent. Also, genes in eukaryotes are interrupted by so-called introns that are not part of the coding sequence (this story is a lot longer in reality). Poisson statistics on DNA words (length-k subsequences of DNA) is the classical way of finding overrepresented DNA features; a toy sketch of counting such words follows below. Newer techniques use HMMs and conditional random fields, as machine-learning-oriented as it gets. [This](http://www.amazon.com/Biological-Sequence-Analysis-Probabilistic-Proteins/dp/0521629713) is a modern classic in all things sequence-related.
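
As a toy illustration of the "DNA words" idea (the sequence and k here are made up):

```python
from collections import Counter

def kmer_counts(seq, k):
    # Slide a window of width k over the sequence and tally each word;
    # overrepresented words can then be compared against a background model.
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

seq = "ATGCGATGACCTGATGCGATG"
print(kmer_counts(seq, 3).most_common(3))
```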

Phylogeny: This is another of bioinformatics' major contributions to modern science. Given some model of how evolution changes the composition of a sequence, we are interested in figuring out how organisms/proteins/genes are related and in building trees that show these relations.

Next-generation sequencing: We can now generate much more data than we can process, and we need ways of filtering it, since the machines can be inaccurate. We also need methods to cluster sequences within specific thresholds.

Sequence searching: This is a major topic. One of the most cited papers in the history of science is the one that announced BLAST. Machine learning is not as widely used here yet, but it probably will be if something faster than the traditional alignment algorithms comes up.

This was just a short and incomplete overview; if you have specific questions, I would be happy to answer.