Reddit reviews Probabilistic Graphical Models: Principles and Techniques (Adaptive Computation and Machine Learning series)

We found 4 Reddit comments about Probabilistic Graphical Models: Principles and Techniques (Adaptive Computation and Machine Learning series). Here are the top ones, ranked by their Reddit score.

4 Reddit comments about Probabilistic Graphical Models: Principles and Techniques (Adaptive Computation and Machine Learning series):

u/bayesmarkovgauss · 6 points · r/compsci

Factor graphs are just a different way of drawing Markov random fields, aka undirected probabilistic graphical models. The definitive resource on this subject is Daphne Koller's book:

http://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193/ref=cm_lmf_tit_1

I've also heard good things about the late Sam Roweis' lecture notes:
http://www.cs.nyu.edu/~roweis/csc412-2004/lectures.html

It looks like the Jan 12 notes are the most relevant, though you'd probably want to read the preceding ones too.
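
To make the first point concrete (that a factor graph is just a redrawing of an undirected model), here's a minimal Python sketch with made-up pairwise potentials over three binary variables A, B, C; the unnormalized joint comes out the same whether you read the model as an MRF or as a bipartite factor graph.

    import itertools

    # A tiny undirected model over three binary variables A, B, C with
    # pairwise potentials on the edges (A, B) and (B, C). The potential
    # values are arbitrary, purely for illustration.
    phi_ab = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
    phi_bc = {(0, 0): 2.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0}

    def mrf_score(a, b, c):
        # MRF view: unnormalized probability = product of clique potentials.
        return phi_ab[(a, b)] * phi_bc[(b, c)]

    # Factor-graph view: a bipartite structure of variable nodes and factor
    # nodes; each factor stores its scope and its table.
    factors = [
        {"scope": ("A", "B"), "table": phi_ab},
        {"scope": ("B", "C"), "table": phi_bc},
    ]

    def factor_graph_score(assignment):
        score = 1.0
        for f in factors:
            key = tuple(assignment[v] for v in f["scope"])
            score *= f["table"][key]
        return score

    # Same unnormalized joint either way.
    for a, b, c in itertools.product([0, 1], repeat=3):
        assert mrf_score(a, b, c) == factor_graph_score({"A": a, "B": b, "C": c})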

As for what motivates them:

Graphical models are efficient representations of probability distributions over many variables. They exploit the fact that most variables depend on each other only indirectly, through the influence of other variables. Modeling the distribution with fewer direct interactions means the model takes up less space in memory, inference runs faster, and less data is needed to learn good parameters.
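
As a rough back-of-the-envelope sketch (the numbers here are illustrative, not taken from the book): for n binary variables, a full joint table needs 2^n - 1 free parameters, while a chain-structured model that captures only the direct dependencies needs far fewer.

    # For n binary variables, compare a full joint table against a
    # chain-structured Bayesian network X1 -> X2 -> ... -> Xn.
    n = 10

    # Full joint: one probability per outcome, minus 1 for normalization.
    full_joint_params = 2 ** n - 1

    # Chain: P(X1) needs 1 parameter, each P(Xi | Xi-1) needs 2.
    chain_params = 1 + 2 * (n - 1)

    print(full_joint_params)  # 1023
    print(chain_params)       # 19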

Undirected graphical models are useful when the interactions in your model are not clearly causal. For example, neighboring pixels in an image are highly correlated, but the value of one pixel does not cause the other. This is in contrast to diseases and symptoms, where it is fairly easy to model the interactions as the disease causing each symptom with some probability.
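
Here's a toy sketch of that pixel example, assuming simple Ising-style pairwise potentials with a made-up smoothness weight beta; it just shows that configurations whose neighboring pixels agree get a higher unnormalized score, with no direction of causation anywhere in the model.

    import numpy as np

    # Toy pairwise MRF over a binary image: neighboring pixels that agree
    # contribute exp(beta) to the unnormalized probability, pixels that
    # disagree contribute exp(0). beta is a made-up smoothness strength.
    beta = 1.0

    def unnormalized_log_prob(image):
        img = np.asarray(image)
        agree = 0
        agree += np.sum(img[:, :-1] == img[:, 1:])   # horizontal neighbor pairs
        agree += np.sum(img[:-1, :] == img[1:, :])   # vertical neighbor pairs
        return beta * agree  # log of the product of pairwise potentials

    smooth = [[0, 0, 0], [0, 0, 0], [1, 1, 1]]
    noisy  = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
    print(unnormalized_log_prob(smooth) > unnormalized_log_prob(noisy))  # True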

Let me know if you have any more specific questions.

u/Kiuhnm · 5 points · r/MachineLearning

Take the online course by Andrew Ng and then read Python Machine Learning.

If you then become really serious about Machine Learning, read, in this order,

  1. Machine Learning: A Probabilistic Perspective
  2. Probabilistic Graphical Models: Principles and Techniques
  3. Deep Learning

u/Broseidon241 · 2 points · r/datascience

I did this, but I came to data science in the final year of my PhD, when I got a job at a startup. I started with R, then SQL, then Python. I currently work in data science, moving internal ML products into production settings. I also do research, and knowing how to conduct proper trials is great if the company you work for gives you freedom in how something you've built is rolled out. I can also blend my degree with ML, e.g. designing batteries of questions to identify 'good fit' candidates for a given role: I combine the battery results with future performance data and continually refine the question set and improve the model. I'm also a good fit for UX and dabble in that. The combined skill set will give you the ability to produce value in many different ways.

The things that helped me most were:

  • Early on, Programming for Everybody - very gentle intro, and well taught.

  • Andrew Ng's machine learning course.
  • SQLzoo.
  • The Introduction to Statistical Learning course and book, then, later, The Elements of Statistical Learning.
  • Buying big fat books about the things I wanted to learn, and working through them (e.g., Probabilistic Graphical Models, Pattern Recognition).
  • Coding algorithms from scratch, starting with linear regression and working my way up to DNNs and RNNs (see the sketch after this list). Do it in R, then Python, then Scala if you're ambitious.
  • Doing the Kaggle intro competitions in R and then translating them to Python - Titanic, the census dataset, etc. - and using a variety of approaches for each (e.g., xgboost, sklearn, tensorflow).
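
As a starting point for the from-scratch bullet above, here's a minimal sketch of linear regression trained with batch gradient descent on synthetic data (the data, learning rate, and iteration count are made up for illustration):

    import numpy as np

    # Synthetic regression problem: y = X @ true_w + true_b + noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    true_w, true_b = np.array([2.0, -1.0, 0.5]), 4.0
    y = X @ true_w + true_b + 0.1 * rng.normal(size=200)

    # Batch gradient descent on mean squared error.
    w, b, lr = np.zeros(3), 0.0, 0.1
    for _ in range(500):
        err = X @ w + b - y
        w -= lr * (X.T @ err) / len(y)   # gradient w.r.t. the weights
        b -= lr * err.mean()             # gradient w.r.t. the intercept

    print(np.round(w, 2), round(b, 2))   # should be close to [2. -1. 0.5] and 4.0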

It can be overwhelming, but don't worry. Do one course to completion, with that as your only goal. Then do the next. Then work on a Kaggle thing. Then work through a book. One thing at a time - you might get anxious, or be uncertain, or want to do multiple things at once, but just start with one thing and focus on that and that alone. You'll get where you want to go.

I also brushed up on my linear algebra and probability using MIT's open courses and Khan Academy.

Beyond all this, I found that learning a lot about ML/AI really expanded my thinking about how human beings work and gave me a new and better lens through which to view behaviour and psych research/theories. Very much recommend it to all psychologists.

Good luck!