Friday, April 24, 2009

Interactive Review Qn added to the homework--post your answer as a comment to this thread on the blog

Folks:
 I added an extra question to the last homework

===========
 [Mandatory] [Answer to this question *must* also be posted on the
            class blog as a comment to my post]. List up to five non-trivial ideas you were
            able to appreciate during the course of this
            semester. (These cannot be of the "I thought Bayes Nets were Groovy"
            variety--and have to include a sentence of
            justification).
=============

Your answer to this should be posted to the blog as a comment on *this* thread. (Due by the same time--last class)

The collection of your answers will serve as a form of interactive review of the semester.


Rao


28 comments:

  1. 1) A* search. I think a lot of people will probably mention this, but it's so useful I'll mention it again. The cost of a node is the cost to get from the start to that node plus an estimate of the cost to get from there to a solution. All open nodes sit in a priority queue, so the node with the least estimated total cost is selected for expansion. It uses a fundamental data structure, it's recursive, and it's intuitive. I won't forget this. (A minimal sketch of the idea appears after this comment.)

    2) Learning how to not give up on very hard problems. In other classes we learn about the traveling salesman problem or the knapsack problem and classify them as hard: here's the best algorithm, now we're done. In this class, almost everything we look at is exponentially complex or worse, in time or space. But we still try to find approaches that give us some chance of making progress. It's fine to be able to classify a problem as hard, but it's much better to say: even though this is really hard, the best approach is depth-first search to minimize space requirements, alpha-beta pruning doubles the depth we can check in a given time, and so on...

    3) The idea of programming using first-order logic. Then, given a knowledge base, we can ask questions and get answers. This is amazing because it's so general. The simple Prolog interpreter was really impressive. The same code base was used to solve two completely different problems--apt-pet and family-rules. It makes you want to start looking for knowledge bases to encode, just so you can see it solve new problems.

    4) Forward chaining and backward chaining in first-order logic, and progression and regression in planning. In general, the idea that you can start from what you know and try to reach what you want to prove, versus starting from what you want to prove and seeing if you can get back to what you know. Regression seemed to be the obviously better choice until we got to impossible states. I didn't see that coming, and I can't express intuitively why it happens. But I want to remember there are two approaches to deciding if we can prove something, and there are pluses and minuses for each.

    5) The relationship between diagnostic and causal probability is pretty amazing. I can memorize the formula (Bayes' rule) and see how to derive it from the product rule, but I can't seem to wrap my head around why it actually works. It just doesn't seem like it should. To give a concrete example from the slides:
    A is Anthrax; Rn is Runny Nose
    P(A|Rn) = P(Rn|A) P(A) / P(Rn)

    That's a useful technique to know!
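    A minimal sketch of the A* idea described in 1) above, in Python rather than the Lisp we used for the projects. The graph interface here is made up purely for illustration: successors(state) yields (next_state, step_cost) pairs, and h(state) is an admissible heuristic.

        import heapq, itertools

        def astar(start, goal, successors, h):
            """A* search: rank open nodes by f = g (cost so far) + h (estimate to goal)."""
            counter = itertools.count()          # tie-breaker so the heap never compares states
            frontier = [(h(start), next(counter), 0, start, [start])]
            best_g = {}
            while frontier:
                f, _, g, state, path = heapq.heappop(frontier)
                if state == goal:
                    return path, g
                if best_g.get(state, float("inf")) <= g:
                    continue
                best_g[state] = g
                for nxt, cost in successors(state):
                    heapq.heappush(frontier,
                                   (g + cost + h(nxt), next(counter), g + cost, nxt, path + [nxt]))
            return None, float("inf")

    With h(state) = 0 for every state this degenerates into uninformed uniform-cost search; a better admissible h expands far fewer nodes.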

  2. 1) A* Search: it is one thing that I really enjoyed actually working through step by step. It was very easy to understand and it made a lot of sense. The use of heuristics in A* search was also something new, but really easy to understand from a searching standpoint.

    2) Bayes nets: they gave me a whole new way of looking at relationships and how things fit together probability-wise. I learn through visuals, so this section really helped me understand the concept quite quickly. It was also fun to look at how these examples could be mapped to real life, or to the Simpsons' lives.

    3) The simple Prolog project was something that really surprised me, with the link between modus ponens and Prolog; I never knew that connection existed. It was also a fun and frustrating project to work on, but it was fascinating that what we were coding was really close to the Prolog concepts. Seeing how "Prolog" code works by tracing our own code was a new experience, because we never really got into how Prolog actually works in the classes I have taken.

    4) Game trees: it was fun to sit there and learn this concept. It was also interesting to hear about Deep Blue--that it did its planning by brute force instead of through a better heuristic, as we would have predicted in class.

    5) Refutation was also a really good concept to have learned. I'm not too good at theorem proving, so this gave me a new way of proving certain theorems. I also liked that it is a step-by-step process that is very visual, so it was easy to follow and understand.

  3. 1) Prior to this class, I had a tough time understanding the concept of heuristics. Now, after this course, I laugh at my foolishness. A* search is one of those easy-to-implement and effective algorithms.

    2) Learning Bayes' rule as a way to convert causal probabilities to diagnostic probabilities is more insightful and explains its usefulness. (A small worked example follows this comment.)

    3) Working with the Springfield Bayes network was very useful and made the intricate details of a Bayes network much clearer.

    4) Resolution theorem proving for FOPC, and its similarity to the Prolog interpreter, makes complicated things simple to understand.

    5) I can now model my own plans as MDPs and associate a gamma with every plan I make, using the discounted reward structure.
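    A quick numeric illustration of the causal-to-diagnostic conversion from 2), using the anthrax/runny-nose example in comment 1. The numbers are made up purely for the arithmetic:

        # Bayes' rule: P(A|Rn) = P(Rn|A) * P(A) / P(Rn)
        p_rn_given_a = 0.9       # causal: anthrax almost always produces a runny nose
        p_a          = 1e-6      # prior: anthrax is extremely rare
        p_rn         = 0.1       # runny noses are common (colds, allergies, ...)

        p_a_given_rn = p_rn_given_a * p_a / p_rn
        print(p_a_given_rn)      # 9e-06 -- the diagnostic probability stays tiny

    Even though the causal probability is high, the diagnostic one is dominated by the tiny prior, which is why a runny nose is not evidence of anthrax.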

  4. 1. I noticed that several economic principles were explored during the class, such as cost-benefit analysis and low vs. high time preference. As someone with a strong interest in economics, I thought the application in a realm other than the standard capital & labor allocation sphere was interesting.

    2. I was slightly surprised that most of the advances in AI seem to be based on performing bigger searches using more powerful hardware, rather than on writing algorithms that more closely emulate human reasoning. I think that's probably because the basic philosophy of computer hardware has not changed; machines simply get “bigger” (by getting smaller), faster, and cheaper.

    3. I found the application of heuristics to the different techniques interesting. It parallels the concept in other sciences (and reality) that taking the time to pause and consider a situation can make finding a solution much easier.

    4. The fact that regression back from the goal state through states that support it does not find a solution faster than a progressive search surprised me so much that I’ve spent time I should have spent on the class looking at websites to understand it more thoroughly.

    5. I wouldn’t use the term “groovy” (perhaps "tubular"), but I did enjoy the Bayes Nets topic, especially how they can model environments using causal relationships, diagnostic relationships, or both, and then use that model to show either type of relationship.

  5. So I just tried to post all this but when I hit submit it just disappeared. So you can thank Blogger if my answers seem brief this time.
    1. My previous AI experience involves a class where the focus was on thinking and acting humanly as opposed to acting rationally. So I can think of my other class as “old-school AI” and this one as “cutting-edge AI”, well, at least as cutting edge as you can be playing tic-tac-toe.
    2. I have worked with Bayesian networks a lot before this class, but I have a newfound respect for the work involved after being forced to do some computation by hand in the homework. But beyond that, recognizing the work that would have been required without the Markov condition makes me really appreciate the theoretical work behind Bayes nets.
    3. A* search was the one concept I can say I really understand. It helped me put search algorithms in perspective since I’d been exposed to some of the others before. Which brings me to:
    4. Heuristics – it was cool to see how much heuristics can help solve tough problems. Before I thought heuristics were just parameters in algorithms that people gave arbitrary values to when they couldn’t learn a better one.
    5. As much as it kills me to admit it, I have a new appreciation for LISP. This realization happened when I thought about the projects and had no desire to implement them in an object-oriented language at all.

  6. 1. Heuristics
    Before this semester, I had never heard of heuristics. By knowing an admissible estimate of the cost from the current state in a search to the goal state, both the search time and the search space can be reduced.

    2. Constraint Satisfaction Problems
    The main concept I remember from CSPs is about the domains of variables and the strategy for assigning values to those variables. That is, “pick the most constrained variable and assign it the least constraining value.” This strategy really makes sense, and is what most people might naturally try to do when solving a CSP like Sudoku, but I don’t think most people would be able to articulate their decision process in such a way. And without some concrete process for solving a CSP, you can’t write an algorithm to do it. (A small sketch of this ordering appears after this comment.)

    3. LISP
    Although it is not content from the textbook, one of the most useful things I’ve learned in this class has been Lisp. All of the projects have given me hours and hours of programming experience. I didn’t like Lisp at first, but after using it so much I have seen quite a few ways in which Lisp can do a lot without many lines of code.

    4. 8-Puzzle Solving Time, h=0
    The most unexpected thing I’ve seen this semester had to do with solving time for the 8-puzzle. After implementing the code to solve the 8-puzzle, we had to run some test cases and record the time taken to solve them. First, I used h=0 (breadth-first search). There was one case that required something like 5 moves to solve, which took about a minute. Then there was another case that required only 1 more move to solve, but it took about 45 minutes. To me, this really stressed the importance of heuristics: always trying to find a better way to solve a problem and trying to avoid unnecessary/duplicate calculations.

    5. Regression
    Given a start state and a goal, it’s natural to try to solve the problem progressively from start to goal. Regression is another way to consider solving a problem. I haven’t yet seen a specific example of a problem that can be solved faster with regression than progression, but just knowing that it is an option gives us another tool to use to solve problems.
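    A rough sketch of the variable/value ordering from 2. above, assuming a hypothetical CSP interface: domains maps each variable to its candidate values, and constraints is a list of functions ok(var, value, assignment) that return False when a value conflicts with the partial assignment.

        def backtrack(assignment, variables, domains, constraints):
            """Backtracking search: most constrained variable first (MRV),
            least constraining value first (LCV)."""
            if len(assignment) == len(variables):
                return assignment

            def legal_values(v, asg):
                return [x for x in domains[v] if all(ok(v, x, asg) for ok in constraints)]

            unassigned = [v for v in variables if v not in assignment]
            # MRV: pick the unassigned variable with the fewest remaining legal values
            var = min(unassigned, key=lambda v: len(legal_values(v, assignment)))

            # LCV: prefer the value that leaves the most options open for the other variables
            def elimination(value):
                trial = {**assignment, var: value}
                return -sum(len(legal_values(o, trial)) for o in unassigned if o != var)

            for value in sorted(legal_values(var, assignment), key=elimination):
                result = backtrack({**assignment, var: value}, variables, domains, constraints)
                if result is not None:
                    return result
            return None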

  7. 1. Intelligent Agent Design
    The idea that there are 4 different areas of AI a person can venture into. We only went through Acting Rationally, but the other areas were interesting as well.

    2. A* Search
    We all learn the basic searches; compared to Quicksort, this was the second most interesting algorithm I have seen. Using heuristics to improve the search was also quite amazing.

    3. Variable Elimination
    The idea that we can eliminate variables brought me back to my internship, where we optimized differential equations by not having to calculate all the variables. This is the same kind of idea.

    4. Planning Regression
    In my mind I do regression-type planning a lot: starting with the goal and trying to find the initial state we need to be in. Knowing how to do this with logic will help me a lot.

    5. Planning MDP
    I always liked the idea of multiple states affecting each other and how to determine what is the best route given a policy. It reminds me of the program Life.

  8. 1. The discussion of the fact that the most difficult problems in CSP lie in a narrow band on the spectrum of problems with different ratios of the number of constraints to the number of variables is very thought-provoking and surprising. Moreover, this phenomenon is common in related problems, and a line of research is pursued to calculate the range of ratios in which the most difficult problems lie. This demonstrates that, though certain problems lie in the NP-hard class, many instances may be computationally tractable, as the truly intractable problems occupy a small area in the spectrum. In fact, most of the problems encountered in AI are NP-hard; the task of AI research is to design effective heuristics to solve particular instances of them.

    2. I had been taught the simulated annealing algorithm a couple of times before, but the use of the shaking-plate metaphor in this class is very intuitive and helps in understanding the underlying mechanism of simulated annealing. It explains why we need a high temperature initially and then reduce the temperature gradually, which is very clear from the shaking plate. (A small sketch of the algorithm follows this comment.)

    3. The discussion of the discounting factor in MDPs, using the metaphor of a PhD student and a high-school student who quit and went to work, is very interesting and helps to connect this property of MDPs to real-world scenarios.

    4. The example used to illustrate the representational capabilities of directed and undirected graphical models is very representative and intuitive. This helps to understand the intrinsic capabilities of these two models, which shows that they occupy different regions of the entire space of possible probability distributions.

    5. The discussions about the advantages and disadvantages of progression and regression in state-space planning, in terms of the size of the search space and the termination criteria, illustrate the essential differences between these two approaches and their properties. In essence, both approaches have advantages and disadvantages, and there is a never-ending debate in this regard.
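    A bare-bones version of the shaking-plate idea from point 2, with the energy and neighbor functions left to the caller (they are placeholders here, not anything from the lectures):

        import math, random

        def simulated_annealing(state, energy, neighbor, t0=10.0, cooling=0.995, steps=5000):
            """Minimise energy(state). High temperature early = hard shaking,
            so uphill moves are accepted often and local minima can be escaped;
            gradual cooling lets the state settle into a good configuration."""
            t, best = t0, state
            for _ in range(steps):
                nxt = neighbor(state)
                dE = energy(nxt) - energy(state)
                if dE < 0 or random.random() < math.exp(-dE / t):
                    state = nxt
                    if energy(state) < energy(best):
                        best = state
                t *= cooling
            return best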

  9. a. The effects of heuristics on a search. I thought it was interesting to see how different heuristics could cause a search to be more optimal.

    b. How Bayes networks can solve just about any real-life problem based on statistics alone. I thought it was interesting how a tool for medical diagnosis using Bayes networks can be more accurate than doctors.

    c. Coming into this course I wasn’t able to appreciate logic problems at all. After applying them to different situations in AI, I gained a new appreciation for them.

    d. Learning how the basic properties of AI apply to every action in the world. From the taxi driver problem to playing a game of chess.

    e. I appreciated how the program Deep Blue functioned to beat the world's best chess player. I never knew how it was able to outthink and outperform a human at a human's game.

  10. 1. I really enjoyed the initial discussion involving the four different areas: thinking rationally, thinking humanly, acting rationally, and acting humanly. Although it's a bit sad that our own line of thinking and acting is so inefficient.

    2. A* search was a new concept to me, and it's pretty impressive how drastically different the search can perform by swapping out heuristics.

    3. The CSP sudoku project was pretty fun, even if it's stripped any desire I have to do sudoku puzzles in the future. I knew it would be a problem involving satisfying the constraints involved with sudoku before this project, but before taking the class I wouldn't have known to use heuristics to come to a solution quicker. Between this and the A* search, I've gained a strong appreciation for the power of heuristic values in searching.

    4. It was cool to learn how chess programs beat humans, i.e., by planning out the next many moves and choosing the move that leads to the highest value. Also, the very recent discussion about how, if two agents playing chess have both mapped out all possible actions and proceed optimally, then it's a pointless game was interesting to me. Therefore, it seems like the game only becomes interesting because of the egos or reputations of the agents involved, but I suppose that's a topic for another class.

    5. I've (finally) started enjoying programming in lisp. Probably the biggest hurdle for me was learning what functions were available to use that would make our lives easier (Thank you, dolist). Looking back, I think it would have made the transition to lisp programming easier if we had a handout with some of the most commonly used / useful lisp functions. Anyway, now that we've done a few projects, I consider myself a fairly capable lisp programmer.

  11. I will try to relate my ideas about the 5 things I took from the course with relation to the applications for it in real life...

    1) I saw the idea of A* search being used in a small robotics class here at ASU. They used an A*-type idea for path correction, to help a robot find its way around a maze. The nearest goal was not the exit but the farthest wall. The path in the maze was bumpy. Say the robot was knocked off course by 30 degrees: they recalculated the distance by stopping, pivoting, and sensing. The pivot angle which produced the nearest distance was the straight-line path. It was so simple that it took hardly any memory. Neat!

    2) Autonomous path planning. When I first heard the idea, I was a little skeptical because of the typical explosion in branching factor. But after hearing so much about it, and finally seeing http://photojournal.jpl.nasa.gov/catalog/PIA10213, I decided to mention it here. This kind of search offers more: coupled with smart pruning applied to propositions that may occur in real life, graph planning can be very useful compared to blind search.

    3) Everything seems to converge and connect. No matter what search algorithm I use, the basic ideas of A*/CSP apply to many search techniques. The search methods may change, but they all basically seem to do very similar things to make their lives easier. For example, in propositional theorem proving we tended to use the fail-first idea of choosing the proposition with the smallest number of subgoals--an idea from the MRV heuristic in CSPs.

    4) Adversarial search and game theory. The thing that took me aback was the simplicity of the idea. It was so simple, yet it worked remarkably well. A simple alpha-beta pruning step goes a long way toward reducing the search space.

    5) Possible techniques to put logic on top of relational databases. Although knowledge representation was not discussed in the lectures, the idea of using propositions as data may yield a better solution for finding the information we need. Extracting a database of facts as Horn clauses and then using a query engine over it (which is a complete and sound inference process) was very interesting. I have read about syntactic, feature-based fact extraction, and this can definitely be useful for semantic or pragmatic reasoning. Say, for example, your queries on "mummies of egypt" on such a DB revealed 5 results, one of which essentially said that King Tut was discovered in 1935 by person X, while another result says that X was killed by a curse. The two could be combined to infer that X discovering King Tut's grave implies he died of a curse. The 5 results come from syntactic extraction, but the last inference can come from FOL-based reasoning. (A tiny Horn-clause sketch of this idea follows.)
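    A toy propositional version of that query-engine idea, with the King Tut example recast as Horn clauses (the atom names are made up for illustration). The engine proves a goal by finding a clause whose head matches and recursively proving its body:

        # Horn-clause KB: (head, [body]); a fact is a clause with an empty body.
        kb = [
            ("x_discovered_tut_grave", []),                    # from one extracted result
            ("x_died_of_curse", ["x_discovered_tut_grave"]),   # from combining the results
        ]

        def prove(goal, kb):
            """Backward chaining over propositional Horn clauses
            (no cycle check -- fine for this acyclic toy KB)."""
            return any(head == goal and all(prove(b, kb) for b in body)
                       for head, body in kb)

        print(prove("x_died_of_curse", kb))   # True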

  12. 1. I found out that AI involves a lot of searching instead of just using learning for everything (I was hoping for 75% of the class to be learning).
    2. The idea that simply working to improve algorithms isn't always the easiest way; sometimes more computing power is the answer.

  13. 1) The world we live in is inaccessible, stochastic, sequential, dynamic, and continuous. The worst case scenario. On second thought, maybe I don't really appreciate this idea. No wonder intelligent life is so hard to find.

    2) Phase transition in 3-SAT. This idea was pretty interesting. The hard problems are not distributed throughout, but rather lie right near the 4.3 clause/symbol ratio. Stray a tiny bit in either direction, and the problems become easy again.

    3) Heuristics in A* search. Even extremely simple heuristics can produce huge gains. The "number of misplaced tiles" heuristic for the 8-puzzle is an extremely easy calculation, yet it improves the performance of A* search enormously compared to a breadth-first search. (A sketch of this heuristic appears after this comment.)

    4) The slide on sphexishness was really intriguing to me. These digger wasps appear to have devised an elaborate plan to capture a cricket and bury it in their nest to feed their hatchlings. However, even a slight change to this plan can potentially cause the wasp to loop infinitely (assuming there is a scientist with enough time and boredom to keep moving the cricket). In this case, it seems that nature is actually the true intelligent entity, having created this plan over time and encoded it into the wasp's genes. Unfortunately, nature takes an extremely long time to adapt its plan, and would likely be unable to prevent the extinction of this species if faced with an army of bored scientists.

    5) Going back to heuristics, it seemed like in almost all cases there was a trade-off: spending more time computing the heuristic in turn reduces the time needed to search for a solution. The extreme case is solving the whole problem to get a perfect heuristic, but obviously that gains nothing. The real goal is to find the "sweet spot" which minimizes the total time taken to solve the problem. That is the hard part.
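    The misplaced-tiles heuristic from 3) really is just a few lines. The states below are flat 3x3 tuples with 0 as the blank, and the particular tiles are only an example:

        def misplaced_tiles(state, goal):
            """Count tiles that are not where the goal wants them (blank excluded).
            Cheap to compute and admissible, yet a big improvement over h = 0."""
            return sum(1 for tile, target in zip(state, goal)
                       if tile != 0 and tile != target)

        goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
        state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
        print(misplaced_tiles(state, goal))   # 2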

  14. 1) The idea that Artificial Intelligence can be thought of as a fancy name for searching at first took me by surprise. I suppose that entering the class I was expecting some magical way in which we make software act "rationally", but the reality is that it is nothing more than searching for a goal.

    2) I genuinely enjoyed the idea of heuristic functions; it was one of the easier concepts for me to grasp, but I believe heuristics are incredibly beneficial. It was remarkable how a well-thought-out heuristic can greatly reduce the search time and space of a problem.

    3) Even though MDPs were difficult for me to fully grasp, I enjoyed learning how to model agents that are able to help in the real world. Throughout the lectures on MDPs I saw a clear connection between what we were learning and how it could apply in the real world.

    4) This may not be part of the curriculum of the course, but one of the best things about the class was the loose presentation style of Rao. I enjoyed the fact that he kept referencing the Simpsons and, to a somewhat lesser extent, Seinfeld. This class ended up being one of the funnest classes for me.

    5) Going back to the beginning of the semester I liked the homework problem we had to do regarding coming up with 8 different environments which were different in regards to stochastic/deterministic, observable/partially, etc. It made me begin to look at the world a little differently which is always a good thing.

  15. 1. Heuristics: I liked the concept of heuristics to improve search. Simple heuristics can give a great performance improvement. Also, learning that heuristics can easily be defined by relaxing some of the constraints has helped clear up my previous understanding of heuristics.

    2. Constraint Satisfaction Problems: I appreciate the idea of solving a CSP using A* search. It was a realization that many problems (or games) could be cast as CSPs and solved. Solving Sudoku as a CSP cleared up a lot of concepts.

    3. First-order Logic: I really liked having a simple notation like first-order logic that is expressive enough to represent general knowledge.

    4. MDPs: The concept of discounting reward was interesting. We usually use discounting when making decisions, and being able to incorporate it into planning brought the concept of planning closer to the way we actually plan. (A small value-iteration sketch using a discount factor follows this comment.)

    5. Game Trees: Although we used game trees only for simple games such as tic-tac-toe, I can now imagine how computer games think.
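    A small value-iteration sketch showing where the discount gamma from 4. enters. The MDP interface here is hypothetical: transition(s, a) returns (probability, next_state) pairs and reward(s) is the immediate reward.

        def value_iteration(states, actions, transition, reward, gamma=0.9, iters=100):
            """A reward k steps away is worth gamma**k of an immediate one,
            which is exactly the discounted-reward trade-off."""
            V = {s: 0.0 for s in states}
            for _ in range(iters):
                V = {s: reward(s) + gamma * max(
                          sum(p * V[s2] for p, s2 in transition(s, a)) for a in actions)
                     for s in states}
            return V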

  16. 1. I loved the initial discussions about agent programs and the factors that go into them. It was interesting to consider the different situations (e.g. accessible or not) and how they must be considered in the agent.

    2. I enjoyed the discussions on search algorithms, which was odd because I normally don't like talking about algorithms. But requiring optimality and completeness while considering time and space complexity put a neat spin on it that wasn't there before.

    3. Heuristics were fascinating as a way to take complex problems and make them feasible. This has applications across many fields inside and outside of computer science.

    4. Bayes nets turned out to be a very handy way of computing probability. Having a few CPT's is far superior to a complete joint distribution across a handful of variables. Also, being able to determine logically that one node is independent of another simplified it even more.

    5. MDP's were a good way to cap off the semester. The idea of weighing the possible pros and cons of a given action and selecting a policy that is most advantageous to the agent seems to me to be the whole point of the course.

  17. 1. Heuristics
    When I first heard the term "heuristics", I had no idea what it meant. Now, after taking this course, I feel like I have a good handle on heuristics, how they work, and when they are useful. In my other courses, such as Cognitive Systems and Intelligent Agents, the term "heuristics" is thrown around a lot, and I feel like this course has given me an advantage in those courses. I now understand when a heuristic is admissible and when the algorithms that use it are optimal and complete (very useful to know).

    2. Constraint Satisfaction Problems
    Project 2, the Sudoku puzzle, was very difficult for me. The main concept that I pulled away from CSPs is that you “pick the most constrained variable and assign it the least constraining value.” This concept is very confusing at first, seemingly contradictory, but understanding how it finds the best choice for a given set of variables is very rewarding.

    3. A* Search
    A* search was one of the most interesting topics covered in this class. The main idea behind A* search, combining a heuristic estimate with the actual cost incurred so far, was a new concept for me to try to understand. Previously I had little understanding of how search algorithms worked, but A* search made the most sense to me. In addition, search algorithms in general (depth-first, beam search, etc.) were very fascinating to learn. It was interesting to see how search algorithms could be mixed or combined to produce better, more efficient algorithms.

    4. Lisp
    I might have had about 2 hours' worth of experience with Lisp prior to this course. Now, after spending hour upon hour in the lab working on Projects 1, 2, & 4, I can say that I understand and appreciate the foundations and syntax of Lisp. What I learned more than anything about Lisp, however, is that before you start writing a function--check to see if it already exists :)

    5. Bayes Nets
    I thought I understood probabilities pretty well before taking this course, but I was wrong. I had a general misunderstanding of how probabilities worked, but the "Simpson" problem that was given in the homework really cleared that up for me. The tool that we used to compute the Bayes Net was also very fun and educational :)

    I really enjoyed this class and was able to learn a lot. I feel like it was difficult but rewarding, and it definitely feels like a culmination of everything I learned over my 5 years (wow!) at ASU. Professor Rao has been a great professor and a fun lecturer. Here's hoping that I pass his class and that he doesn't have to see me here again next year :)

  18. A* search is definitely at the top of the list because it is such an important search in the field of AI. It is simple and elegant as well as optimal and has great capabilities for generalization with heuristics.

    Bayes networks are definitely on the list as well because they are so easy to use and set up. The evaluation of Bayes nets and the underlying mechanism was more interesting than merely using them, however, and understanding this will surely be useful in future applications.

    Adversarial game search is another topic I really enjoyed, because it takes into account the other players in the game and tries to outsmart them by looking deeper, with the fun possibility of trying to gauge the opponent's ability to see traps.

    Planning I would put near the top as well because of the intricacies of progression and regression searches and how regression is not as good as it seems but still very useful. Planning is also prevalent in the community and has many uses.

    Reasoning with uncertainty was another topic that I really enjoyed, because it brought in searches through sets of states (belief states) as opposed to single states. This opens the possibility of making an agent that can cope with a non-deterministic world as well as a non-observable one, and it was interesting how the agent can still solve problems in such a world with many of the same features of search as in deterministic/observable worlds.

  19. The variety of techniques discussed in class, especially the following (with some bias, because most of these topics are tied to the projects I have done), made me feel the elegance and applicability of these techniques.

    1. A* and heuristics for general problems really made me think of any real-world problem in terms of search trees and the heuristics associated with the problem. The project really made me internalize those ideas and become confident in applying them to any domain. For example, as in "Six Degrees of Kevin Bacon", we can use A* search to find the relevance between two actors by linking the movie stars they have acted with, chaining to the other actor through co-stars of co-stars in any of the movies they have acted in. The heuristic function would order the nodes (which are co-stars) according to some heuristic (involving age, the genre of movies the actor acts in, or something like that). The point worth mentioning here is the heuristic function. The proofs of admissibility and optimality of A*, and of why it will perform better than any known algorithm, made me realize the elegance of simplicity.

    2. Constraint satisfaction problems and solving Sudoku are really worth mentioning here. Almost all puzzle games can be specified as constraint satisfaction problems, with the constraints propagated, to be solved by a CSP solver. Forward checking, MRV, and LCV are worth mentioning here. I found this applicable to a small interesting puzzle game called hex-a-hop, where you are given an initial state and you have to reach a final state (there can be many final states), and each stage has a lot of constraints which you have to pass through.

    3. Encoding real-world problems in first-order logic, and the concept of a knowledge base, is another important idea I learned from this class. Inference in first-order logic gives us confidence to develop expert systems for any possible domain--like a simple troubleshooting system for software that you write, or for any appliance. The project again helped us understand its importance and elegance.

    4. Bayesian networks, and considering the world probabilistically, give us a very practical and human-like probabilistic reasoning approach to real-world problems. Rather than predicting single outcomes, we can give confidence levels for outcomes. The idea also applies to non-deterministic games, which depend on rolling dice. And not only there: it is applicable to almost all decision making and inference in the real world. A card game can be treated probabilistically, and probabilistic inference over it would give us the probability of losing a bet or the worthiness of taking a risk.

    5. "Programming the AI way": the agent design, and observability, stochasticity, dynamic-ness were discussed in the initial stage which was useful through out and to think the world interms of designing an intelligent agent which can serve a particular purpose. Apart from providing a vague interpretation of how to look at designing intelligent beings, it gives us a perspective of generating problem solvers in general. This rather makes a computer science engineer, create programs that aren't conventional or a mathematician, attack it mathematically, but, it makes us build the agent based on intuition and ingenuity.

  20. 1. A* Search, and search in general (the types of problems it can be used for). Now that I've learned it, it seems trivial and obvious, but before this class I wouldn't have been able to come up with anything close on my own. It can solve so many hard problems in such a simple way.

    2. Planning is similarly awesome, especially game planning. I wouldn't have known where to start before this class, but now I think about all planning in a whole new way.

    3. Bayes nets – correlation vs causation. The impact of knowing causation is huge, but the fact that learning causation is so hard sheds light on how even humans learn.

    4. (really 3.5) Bayes theorem. The equation is just basic algebra, yet the implications of it reach many aspects of life. Combined with Bayes nets, we can easily solve probabilities for very complicated chains of events!

    5. Theorem proving and encoding information. Isolating logic so clearly changed my view of intelligence. Before I didn't really see the difference between logical reasoning and everything else that makes us intelligent. The fact that such a simple computer algorithm can do logical reasoning extremely fast (at least compared to humans) and perfectly has tremendous potential. I suppose the only real trick is encoding information, but I hadn't realized that was the hard part until this class.

  21. In particular:
    • I really liked the "medicate without killing" example, which clarified the whole notion of an observable but non-deterministic environment very satisfactorily.
    • Before taking this class, I neither understood nor bothered to learn how tree traversal and the various tree searches (like BFS, DFS, A*) work, but now I am no longer scared of these concepts, which at one time were taboo for me.
    • Solving the Sudoku puzzle as a constraint-satisfaction problem was really fun. It gave us a change from the conventional examples like the coloring and cryptarithmetic problems.
    • The explanation of the existential and universal quantifiers through the example “According to a recent CTC study, 90% of the men surveyed said they will marry the same woman.” and the Mahabharata gave a whole new way of looking at day-to-day things from an AI perspective. After taking this course, I have also started trying to connect a novel or a religious epic with present-day AI concepts.
    • The Springfield Bayes network project was really interesting, and the whole concept of Bayes networks became clear with this project. I liked the likelihood weighting topic the most in this whole area. Another thing: I had never watched The Simpsons before, but I am thinking of starting.

    In General,
    • All the projects complemented the coursework in a really effective way. In fact, some of the concepts became clear in my mind only after doing the projects.
    • The "Today's Thought" section (the first slide, with some nice thought) was really good. I particularly liked the one on Abraham Lincoln and Charles Darwin and the one with the thought "Most people have more than the average number of feet".

  22. -A* search was interesting. The whole idea of adding heuristics to a search to decide which path to take was a cool idea. I also like that you have to compromise between searching and computing the heuristic in order to achieve best results.

    -Bayes nets seem to have a very practical application in the real world. I like that you can derive the probabilities of causation based on some set of effects.

    -I really would have liked to get more into the learning topics.

    -I felt that the homeworks and projects directly applied the theories learned in class. I felt that every question was directly relevant to the course material.

  23. There are many things that I appreciated about the class:
    - I enjoyed how we were able to learn about the algorithms and apply them in the projects. In many classes we brush over them but never utilize them. That wasn't true in this class.
    - The real-world examples used to explain the heuristics and the algorithms gave us a very good perspective on how we use them on a daily basis.
    - The CSPs were fun to work with. They helped us see how something we do intuitively needs a little more thought when being converted into a program.
    - Bayes nets and their applications were fun to learn, especially with the Simpsons examples.
    - I liked understanding alpha-beta pruning. It seemed like a very practical way of cutting down costs that would be needlessly incurred otherwise.

  24. 1.] I enjoyed the discussion on planning and contrasting of progression versus regression search. This was the first time that I had a class that dealt with these topics in this detail.
    2.] The second thing I appreciate is the way we made a connection between deterministic search strategies (A*, etc.) and non-deterministic ones (MDPs). Seeing both of them in a common light helped me in creating the big picture.
    3.] The third good thing was the discussion of the min-max algorithm and alpha-beta pruning. It added to my understanding of the EM algorithm in general and also gave a formal notion to the usual game-playing strategies we use. (A short alpha-beta sketch appears after this comment.)
    4.] I also liked the discussion of the various kinds of search at the start of the class. It was fun to start with the most basic kinds of search, BFS and DFS, and build up gradually, trying to remove as many shortcomings as possible.
    5.] Finally I liked the discussion on probabilistic reasoning and Bayes networks. It strengthened my fundamentals on probability and introduced a new tool that can deal with the usually cumbersome representation of joint distributions in an efficient way.
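    A compact sketch of min-max with alpha-beta pruning as mentioned in 3.]. Here moves(state) and evaluate(state) are placeholders for a real game's move generator and evaluation function:

        def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
            """Prune a branch as soon as it is provably worse than an option
            already available higher up the tree."""
            successors = moves(state)
            if depth == 0 or not successors:
                return evaluate(state)
            if maximizing:
                value = float("-inf")
                for child in successors:
                    value = max(value, alphabeta(child, depth - 1, alpha, beta, False, moves, evaluate))
                    alpha = max(alpha, value)
                    if alpha >= beta:        # the minimizer will never allow this branch
                        break
                return value
            value = float("inf")
            for child in successors:
                value = min(value, alphabeta(child, depth - 1, alpha, beta, True, moves, evaluate))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value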

  25. 5 topics I enjoyed this semester
    1. The first thing I really enjoyed about the class was the different way of looking at things. Having to try and think like an inanimate object was very interesting. I enjoyed the class about the search space of the vacuums.
    2. I found the idea of heuristics very useful. This is the first time I have been introduced to them, and they are very helpful.
    3. I liked Bayes Networks. It was very interesting to see that they could be applied to different fields such as the medical field.
    4. I enjoyed learning all the different methods of search. Even in our algorithms class, we only got a basic knowledge of different searches. I now know a lot more techniques.
    5. My favorite project was the Sudoku project. I found CSP very interesting and I liked seeing how effective the heuristics were on solving the puzzles.

  26. 1. The idea of problem-solving as search: the algorithms we have used for class projects are very different from the way humans solve problems, and it was interesting to learn more about how computers "think"

    2. The wide range of applications for AI. It was interesting to see just how many things are affected/improved by AI, that we often take for granted.

    3. A* Search was something I had heard about before, so learning how it works (and implementing it and its variations) was definitely a highlight of the class for me. It was fascinating to see how simple tweaks to a search algorithm (like LCV, MRV) can make a very noticeable impact on running time and efficiency.

    4. I thought Bayes nets were groovy. The amount of information that can be encoded in them, and the utility they offer in performing inference, has made me start to see them in other places as well. I like the idea that I can potentially predict, with some accuracy, events that before would have seemed "completely random".

    5. The challenge AI offers: this has easily been my most challenging course this semester, but also one of the most interesting ones. Never before have I taken a course and heard "We would talk more about this topic, but the papers haven't been written yet." The class has been fascinating and rewarding, and I really enjoyed being able to craft programs that solve, in seconds, problems that might take me hours.

  27. 1) I think the thing that I appreciated about this class was that it gave me a chance to see yet another programming language. I hated Lisp, but learned to start to like it as the semester went on.

    2) I also liked the A* Search because it was what showed me what AI was really about... informed search and such.

  28. -The introduction of heuristics with A* search easily left the biggest mark. I can see why A* is used for the first hands-on experience with AI programming: a very straightforward problem-solving agent that can demonstrate great improvements with only minor changes in its guess for the next move.
    -I've never had anything against lisp or scheme, but emacs on the other hand... Well, it introduced me to another programming environment.
    -Discussing the probabilities of the agent's actions with MDPs and how to optimize those actions.
    -And even though we never used it, I enjoyed reading about genetic programming (much less sexy once you know what it's doing, but still cool) and a few of the discussions at the back of the book about the prospects for artificial intelligence, as to whether AI could think or could only *simulate* thinking.

