****************MDPs

1. What happens to MDPs if the underlying dynamics are "deterministic"? Can we still talk about value and policy iteration? Do we still have non-stationary policies for finite-horizon deterministic MDPs?

2. We talked about how the infinite-horizon case is easier than the finite-horizon case, and said, in passing, that this is one case where "infinite" is easier to handle. Consider the following two cases:

2.1. CSPs where the variables are real valued, and constraints between variables are expressed as linear inequalities. Clearly, the number of potential assignments to the CSP variables is infinite. What do you think the complexity of finding a satisfying assignment of the variables will be? (Recall that discrete-variable CSP is NP-complete.) If the complexity is "low" by AI standards, what can you do to increase it? (Hint: consider modifying the constraints.)

2.2. Planning problems where (some of) the variables are real valued. Actions have effects that can increment and decrement the variables by arbitrary amounts. What do you think the complexity of planning will be in this case? (Recall that discrete-variable planning is PSPACE-complete, i.e., it is among the hardest problems that can be solved with polynomial space.)
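The setting in 2.1 can be made concrete: a real-valued CSP whose constraints are all linear inequalities is a linear-arithmetic feasibility problem. Here is a minimal sketch (the constraint encoding and both example systems are my own, not from the post) that decides feasibility by Fourier-Motzkin variable elimination:

```python
# A real-valued CSP with linear-inequality constraints, decided by
# Fourier-Motzkin elimination. A constraint (coeffs, b) encodes
# sum(coeffs[i] * x[i]) <= b.  (Encoding and examples are invented here.)

def eliminate(constraints, var):
    """Project away variable `var`, keeping constraints on the rest."""
    lowers, uppers, rest = [], [], []
    for coeffs, b in constraints:
        c = coeffs[var]
        if c > 0:
            uppers.append((coeffs, b))    # gives an upper bound on x[var]
        elif c < 0:
            lowers.append((coeffs, b))    # gives a lower bound on x[var]
        else:
            rest.append((coeffs, b))
    # Every (lower, upper) pair combines into a constraint without x[var].
    for lc, lb in lowers:
        for uc, ub in uppers:
            sl, su = uc[var], -lc[var]    # positive scale factors
            coeffs = [sl * a + su * c2 for a, c2 in zip(lc, uc)]
            rest.append((coeffs, sl * lb + su * ub))
    return rest

def feasible(constraints, nvars):
    for var in range(nvars):
        constraints = eliminate(constraints, var)
    # With all variables eliminated, each constraint reads 0 <= b.
    return all(b >= 0 for _, b in constraints)

# x + y <= 4, x >= 1, y >= 1  -> satisfiable
print(feasible([([1, 1], 4), ([-1, 0], -1), ([0, -1], -1)], 2))  # True
# x >= 3 and x <= 2           -> unsatisfiable
print(feasible([([-1, 0], -3), ([1, 0], 2)], 2))                 # False
```

Fourier-Motzkin is chosen here only because it fits in a few lines; it is not the most efficient way to decide such systems.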

3. The MDPs we considered until now are called "fully observable"--in that during execution, the agent can tell which state it is in (thus the policy only needs to map states to actions). What happens if the domain is only "partially observable"?

Note that this means the agent may not know which unique state it is in, but does know the "probability distribution" over the possible states it could be in. When the agent does an action, the effect of the action is to modify this distribution into a new distribution over states (with some states becoming more likely and others less so).

Notice that the "distribution" is fully observable although the underlying states aren't.

So, one idea would be to consider the distributions themselves as the states. What happens to the complexity of MDPs then? How is this connected to MDPs over "continuous" state spaces?

[Notice that the above still works for the special case of a fully observable domain. The initial distribution will be a delta function--with probability 1.0 for a single state, and 0.0 for all others. Doing an action converts it into another delta function--with the 1.0 on a different state.]
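The belief-update idea in question 3 can be sketched in a few lines. A sketch follows, with transition probabilities T[s][a][s'] that are invented for illustration: an action maps the current distribution over states to a new one, and the delta-function special case behaves as described above when the action is deterministic.

```python
# Belief-state update: an action maps a distribution over states to a new
# distribution, using (assumed) transition probabilities T[s][a][s'].
def update_belief(belief, action, T):
    new_belief = {s: 0.0 for s in belief}
    for s, p in belief.items():
        for s2, q in T[s][action].items():
            new_belief[s2] += p * q
    return new_belief

# Two states; "move" succeeds with probability 0.8 from A; "jump" is deterministic.
T = {"A": {"move": {"A": 0.2, "B": 0.8}, "jump": {"B": 1.0}},
     "B": {"move": {"B": 1.0}, "jump": {"B": 1.0}}}

b = {"A": 1.0, "B": 0.0}             # delta function: we know we are in A
# A deterministic action keeps the belief a delta function...
print(update_belief(b, "jump", T))   # {'A': 0.0, 'B': 1.0}
# ...while a stochastic one spreads it out:
print(update_belief(b, "move", T))   # {'A': 0.2, 'B': 0.8}
```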

*****************Learning

Qn 0. [George Costanza qn] Consider two learners that are trying to solve the same classification problem with two classes (+ and -). L1 seems to be averaging about 50% accuracy on the test cases while L2 seems to be averaging 25% accuracy. Which learner is good? Why is this called the George Costanza question? ;-)
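For a sense of the numbers in Qn 0 (the learner and data below are invented for illustration): with only two classes, a classifier with accuracy p can be mechanically converted into one with accuracy 1 - p.

```python
# With two classes, inverting a classifier's output turns accuracy p into 1 - p.
def invert(clf):
    return lambda x: "-" if clf(x) == "+" else "+"

truth = {0: "+", 1: "-", 2: "+", 3: "-"}

# A hypothetical learner that is right only on input 0: 25% accuracy.
def bad(x):
    return truth[x] if x == 0 else ("-" if truth[x] == "+" else "+")

def accuracy(clf):
    return sum(clf(x) == y for x, y in truth.items()) / len(truth)

print(accuracy(bad))          # 0.25
print(accuracy(invert(bad)))  # 0.75
```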

Qn 1. Consider a scenario where the training-set examples have been labelled by a slightly drunk teacher--and thus some of them have wrong labels (e.g., +ve instances wrongly labelled as -ve). Of course, for the learning to be doable, the percentage of these mislabelled instances should be quite small. We have two learners, L1 and L2. L1 seems to be 100% correct on the *training* examples. L2 seems to be 90% correct on the training examples. Which learner is likely to do well on test cases?

Qn 2. Compression involves using the pattern in the data to reduce the storage requirements of the data. One way of doing this would be to find the rule underlying the data, and keep the rule and throw the data out. Viewed this way, compression and learning seem one and the same. After all, learning too seems to take the training examples, find a hypothesis ("pattern"/"rule") consistent with the examples, and use that hypothesis instead of the training examples. What, if any, differences do you see between Compression and Learning?
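One way to poke at Qn 2 concretely (the dataset and threshold rule below are made up): store a rule consistent with the labels instead of the labels themselves, and notice where the analogy with compression bends.

```python
# "Compressing" a labelled dataset down to a single threshold rule.
data = [(x, "+" if x > 10 else "-") for x in range(21)]   # 21 labelled points

def rule(x):                     # the "hypothesis" kept in place of the data
    return "+" if x > 10 else "-"

# Like lossless compression, the rule reconstructs every stored label:
print(all(rule(x) == y for x, y in data))   # True
# Unlike compression, it also commits to labels for inputs never stored:
print(rule(1000))                           # +
```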

Qn 3. We said that most human learning happens in the context of prior knowledge. Can we view prior knowledge as a form of bias?

In particular, can you say that our prior knowledge helps us focus on certain hypotheses as against other ones in explaining the data?

That is all for now.

Rao

1) You can still have value and policy iteration. When actions were non-deterministic, we looked at the expected utility of the state. When actions are deterministic, we no longer have to sum over the various possible states an action could take us to. We just do Util(s) = Reward(s) + Util(s'), where s' is the best state we can get to from s. It seems very similar to Dijkstra's shortest path. I don't think we have non-stationary policies any more. We can always tell the exact utility our actions will give us, so we always take the next step that hurts us the least.
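The commenter's observation can be sketched directly (the toy graph, rewards, and discount below are my own, not from the post): with deterministic actions, the expectation in the Bellman backup collapses to a max over successor states.

```python
# Value iteration for a deterministic MDP: U(s) = R(s) + gamma * max over
# successors, with no sum over outcome probabilities.  (Toy problem invented.)
R = {"s0": 0.0, "s1": 0.0, "g": 10.0}
succ = {"s0": ["s1"], "s1": ["g", "s0"], "g": []}
gamma = 0.9

U = {s: 0.0 for s in R}
for _ in range(50):
    U = {s: R[s] + gamma * max((U[s2] for s2 in succ[s]), default=0.0)
         for s in R}

# Utility rises along the path toward the rewarding state, shortest-path style.
print(U["g"] > U["s1"] > U["s0"])   # True
```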

Q1: The drunk teacher example:

ReplyDeleteI think testing n the training set should always be 100% accurate because it is after all learnt. Even though the lables are all wrong the learner should be able to learn the wrong data but thigs would only go wrong on a test data set not a training set.

Q2: I think compression and learning are the same thing when it comes to increasing the entropy of the information content they encode. Both techniques cannot learn beyond a certain limit, given indirectly by information theory. They both lose some data and have similar performance given a certain amount of training data.

Q3: I think Bayesian inference techniques in learning, as opposed to frequentist approaches, help us form a bias. Say you had an unfair coin and you knew nothing about it: you would toss it a couple of times, and if by chance heads and tails came up an almost equal number of times, you would not be able to infer its unfairness. But if we had previously sampled the coin many times and seen that it was indeed unfair, that experience would show us that it is unfair, and we would use the coin warily.
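The coin story can be put in Bayesian terms with a Beta prior (all the numbers here are invented): the same handful of fresh tosses leads to very different conclusions depending on prior experience.

```python
# Posterior mean of a coin's heads-probability under a Beta prior,
# after observing `heads` and `tails` new tosses.
def posterior_mean(heads, tails, prior_heads=1, prior_tails=1):
    return (prior_heads + heads) / (prior_heads + heads + prior_tails + tails)

# Six fresh tosses come up 3 heads, 3 tails -- the coin looks fair.
# With a flat prior (no previous experience):
print(posterior_mean(3, 3))                                  # 0.5
# With a strong prior from 100 earlier tosses (90 heads, 10 tails):
print(posterior_mean(3, 3, prior_heads=90, prior_tails=10))  # ~0.877
```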
