Thursday, January 22, 2009

Thinking Cap topic 1:

[[Folks:
 By now I hope you have all had enough time to get yourselves signed up to the class blog. As I said, participation is "required" in this class. Participation involves
doing the assigned readings, being attentive in class, and, most importantly, taking part in the class blog discussions.

While any discussions you want to have on the blog are welcome, I will occasionally throw in topics I want you to discuss. For historical reasons, we will call them thinking-cap topics. Here is the first discussion topic for your edification.

As for the quantity vs. quality of your comments, I suggest you go by the Woody Allen quote below for guidance.. ;-)
  --Rao ]]


Here are some of the things on which I would like to see discussion/comments from the class. Please add your thoughts as comments to this blog post. Also, please check any comments already posted to see if your viewpoint has already been expressed (remember--this is not a graded homework, but rather a discussion forum).


1. Explain what you understand by the assertion in the class that it is often not the hardest environments but rather the medium-hard ones that pose the most challenges to the agent designer (e.g., stochastic is harder in this sense than non-deterministic; multi-agent is harder than much-too-many-agents; partially accessible/observable is harder than fully non-observable).

2. We said that accessibility of the environment can be connected to the limitations of sensing in that what is accessible to one agent may well be inaccessible/partially accessible to another. Can you actually think of cases where partial accessibility of the environment has nothing to do with sensor limitations of the agent?


3. Optimality--given that most "human agents" are anything but provably optimal, does it make sense for us to focus on optimality of our agent algorithms? Also, if you have more than one optimality objective ( e.g., cost of travel and time of travel), what should be the goal of an algorithm that aims to get "optimal" solutions?

4. Prior Knowledge--does it make sense to consider agent architectures where prior knowledge and representing and reasoning with it play such central roles (in particular, wouldn't it be enough to just say that everything important is already encoded in the percept sequence)? Also, is it easy to compare the "amount" of knowledge that different agents start with?

5. Environment vs. Agent complexity--One big issue in agent design is that an agent may have very strong limitations on its memory and computational resources. A desirable property of an agent architecture would be that we can instantiate it for any <agent, environment> pair, no matter how complex the environment and how simplistic the agent. Comment on whether or not this property holds for the architectures we saw. Also, check out "Simon's Ant" on the web and see why it is related to this question.


6. Anything else from the first two classes that you want to hold forth on.

Rao


----------------

"
The question is have I learned anything about life. Only that human beings are divided into mind and body. The mind embraces all the nobler aspirations, like poetry and philosophy, but the body has all the fun. The important thing, I think, is not to be bitter... if it turns out that there IS a God, I don't think that He's evil. I think that the worst you can say about Him is that basically He's an underachiever. After all, there are worse things in life than death. If you've ever spent an evening with an insurance salesman, you know what I'm talking about. The key is to not think of death as an end, but as more of a very effective way to cut down on your expenses. Regarding love, heh, what can you say? It's not the quantity of your sexual relations that counts. It's the quality. On the other hand if the quantity drops below once every eight months, I would definitely look into it. Well, that's about it for me folks. Goodbye. "
                ---Boris in Love & Death (1975 http://us.imdb.com/title/tt0073312/ )

25 comments:

  1. As far as #1 goes, I would say that non-deterministic environments are easier, because you know that your actions don't matter, so you don't have to design an algorithm to choose an optimal action.

    For instance, in a non-deterministic world, just because you checked your blind spot 1 second ago and it was empty doesn't mean nothing is there now. At any moment a car can *poof* appear and/or *poof* disappear from that location. So don't bother to write code to check your blind spot. Just write code to deal with the fact that when you change lanes you will occasionally hit another car.

    On the other hand, in a deterministic world, you can see where cars are at time t and calculate with 100% certainty whether or not a car will be in your blind spot later, at t + 1.

    In a stochastic world, you have a probability curve representing the likelihood of a car being in your blind spot after you last checked. So you really have to write code to check your blind spot and write code for the occasional times when that other car beats the odds and gets into your blind spot before you change lanes...

    I think similar reasoning applies to the others. A billion agents interacting? Well, there's only so much you can do. Just a few agents interacting? The agent that makes the best predictions about others' actions will win.
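    A small sketch of the stochastic blind-spot case described above (the arrival rate and risk threshold are invented numbers, purely for illustration): the longer it has been since the last check, the higher the chance a car has slipped in, so the agent re-checks before committing to the lane change.

    import math

    # Invented model: cars enter the blind spot roughly as a Poisson process with
    # rate `rate` per second, so P(at least one car) = 1 - exp(-rate * t).
    def p_car_in_blind_spot(seconds_since_last_check, rate=0.2):
        return 1.0 - math.exp(-rate * seconds_since_last_check)

    def should_recheck(seconds_since_last_check, risk_threshold=0.05):
        # Re-check the blind spot whenever the chance of a surprise car is too high.
        return p_car_in_blind_spot(seconds_since_last_check) > risk_threshold

    print(should_recheck(1.0))   # True: 1 - exp(-0.2) is about 0.18, above the 0.05 threshold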

  2. 1. If the environment is deterministic, the agent has all the knowledge it needs to take the correct action. The agent is left with no choice but to take a random decision if the environment is non-deterministic. However, when the environment is stochastic (or, similarly, partially observable), it becomes a learning task, and then the performance of the agent depends on the design of the learning algorithm.

    4. Even the simplest agent architecture has a lookup table, which is its prior knowledge. As the complexity of the environment and goals increases, it becomes necessary to have prior knowledge, and it is even more important to represent and reason with it in an optimal way.
    Talking about comparison, quantifying knowledge is still a research topic for human intelligence. So it has to be difficult for AI.

    5. A simple reflex agent does not store previous states in its memory, whereas a reflex agent with state does.
    Simon's ant, which walks its way along the beach, acts only on its current perception of the environment. The trajectory it leaves behind is not stored by the ant, but it seems complex to the eyes of an observer.

  3. Sanil as well as Cameron seem to imply that a non-deterministic action means *anything* at all can happen, and thus suggest that you can never do better than just selecting some random action.

    It is best to think of nondeterminism as: action A takes a state s to one of the states {s1, s2, ..., sn}, where the set doesn't necessarily have to be all possible states of the world.

    If I have a coin that may possibly be biased, and I ask you what will happen when I toss the coin, you will say that one of heads or tails will come up--but you can't say with what probability. You won't, for example, say "who knows--anything can happen--the world might come to an end".

    With this kind of "bounded indeterminacy", it is in fact possible for the agent to be able to pick actions that will still lead it to its goal.
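    A minimal sketch of this kind of bounded indeterminacy (the states, actions, and transition table below are made up for illustration): an action maps a state to a *set* of possible successors, and a plan is good only if it reaches the goal no matter which outcome occurs at each step.

    # Non-deterministic transition model: (state, action) -> set of possible next states.
    # The states and actions here are invented purely for illustration.
    TRANSITIONS = {
        ("start", "toss-coin"): {"heads", "tails"},   # bounded: one of two outcomes, not "anything"
        ("heads", "declare"):   {"done"},
        ("tails", "declare"):   {"done"},
    }

    def guaranteed_to_reach(state, plan, goal):
        """True if executing `plan` from `state` reaches `goal` no matter which
        of the possible outcomes occurs at each step."""
        if not plan:
            return state == goal
        action, rest = plan[0], plan[1:]
        successors = TRANSITIONS.get((state, action), set())
        if not successors:                    # action not applicable in this state
            return False
        return all(guaranteed_to_reach(s, rest, goal) for s in successors)

    print(guaranteed_to_reach("start", ["toss-coin", "declare"], "done"))   # True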

  4. This comment has been removed by the author.

  5. 1. A stochastic environment could present a more difficult challenge for AI because, with fewer possible outcomes, it is more feasible to predict what will happen and to make intelligent decisions that take those predictions into account. In a non-deterministic environment, it is really only necessary to build an entirely reactive AI, since predictions cannot reasonably be made.

    2. I have been unable to come up with a case where partial accessibility of the environment has nothing to do with the sensory limitations of the agent. The environment exists fully no matter what, and it is simply a matter of being able to gather as much information as you can, in spite of limitations such as (im)mobility, viewing angle, light wavelengths that can be sensed, and more.

    3. As far as optimality goes, I think it definitely still makes sense to focus on this. Humans are often inefficient because of ignorance or inexperience, and this is where learning comes in. Just as humans can improve methods as they learn more, an AI should ideally be able to learn to perform a task in a better way if given the opportunity.
    And as for conflicting optimality objectives, I think in general it is reasonable to expend more of the resource that the agent has in the greatest amount (e.g., for the scenario above, if we have billions of dollars, the main focus would be time). It would also be necessary to consider cases where the amount that can be spent is limited - say if you need $500 after the flight to book a hotel - but this is more planning ahead than optimizing resources.

  6. Chris:

    Re: your point 2, you are mostly right, but do think back to your physics classes..

    rao

  7. Rao--re point 2: Are you referring to Heisenberg's uncertainty principle? As in, an AI for the Large Hadron Collider could know either the exact position or the exact velocity of a particle, but not both?

  8. 3. Although optimality is often hard to achieve, striving for it usually leads to better results than not striving for it. There's a good golf analogy: when you're getting ready to make your swing, are you aiming for the green, or are you aiming for the hole? Of course, don't confuse this with perfectionism, which can often lead to never completing your goal. You also have to weigh the cost of achieving optimality vs. the benefits gained. If you have multiple objectives, you have to weigh them all and aim to optimize overall based on priority. For cost vs. time of travel, if time needs 100% priority, book your private jet. If cost needs 100% priority, walk. Otherwise, decide on an acceptable ratio.

    4. Isn't this how we, as humans, act? Prior knowledge, gained either through our own experience or the experience of others, affects all of our decisions. Of course, the problem for an agent is being overloaded with information, but it's still useful to provide prior knowledge for reasoning, in addition to self-reasoning based on the given percepts. Feeding off prior knowledge can also help overcome the shortcomings of today's agent percepts. Take, for instance, an agent that bluffs in poker. The agent can play the percentages and bet sizes, but otherwise has no way today to read players. If, however, an agent had knowledge of how a player played against certain cards with certain bets, a stronger bluff could be made.

  9. Specifically, almost all inaccessibility can be--at least at a philosophical level--attributed to a lack of good enough sensors on the agent's part. The only exception we know of comes from Heisenberg's uncertainty principle--which basically says that you can *never* measure both position and velocity accurately at the same time.

    Not that this is going to be a huge issue for Roomba the vacuum-cleaner robot--but it does make an interesting philosophical point..

    In practice, if no other agent currently has sensors to measure certain properties of the environment, then we will just assume that it is the environment that is inaccessible (rather than the agent's sensors that are limited).

    rao

  10. 1. It is similar to your example in number 3: cost of travel vs. time of travel. It is easy to design an agent that finds the cheapest travel, and easy to design an agent that finds the shortest travel time. The complexity comes in when you need results somewhere in between; thus the agent needs more input about the type of results expected.

    3. (Branching off from above). The goal of an algorithm that aims to get "optimal" solutions might have to depend on some hard coded restrictions (e.g. cost < 500 and time < 2 hours). The algorithm might have to "settle" for a result that only partially satisfies each optimality requirement. The question is, at what point are the requirements satisfied 'enough'?
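    One hedged way to picture that trade-off (the options, limits, and weights below are invented for illustration): filter out the options that violate the hard requirements, then "settle" by scoring whatever is left with a weighted sum.

    # Hypothetical travel options: (name, cost in dollars, time in hours).
    options = [("bus", 40, 9.0), ("train", 120, 5.5), ("flight", 450, 2.5), ("jet", 4000, 1.5)]

    MAX_COST, MAX_TIME = 500, 6.0     # hard requirements (invented numbers)
    W_COST, W_TIME = 1.0, 60.0        # weights: how many dollars an hour is "worth" to this user

    feasible = [o for o in options if o[1] < MAX_COST and o[2] < MAX_TIME]
    if feasible:
        best = min(feasible, key=lambda o: W_COST * o[1] + W_TIME * o[2])
    else:
        best = None                   # nothing meets every requirement: something must be relaxed
    print(best)                       # ('train', 120, 5.5) with these particular weights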

  11. A few posts have implied that non-deterministic worlds are random and/or actions don't really matter. If that is true, then there would be no point in acting in a truly non-deterministic environment. Furthermore, there would be no point in even designing an agent, since my lack of designing an agent would have just as much probability of solving the problem as any design. It seems highly improbable that in the physical world an environment could exist that changes independently of all agents' actions, although I have played a few video games where no matter what I did the character always died.

    As to the Uncertainty tangent that has sprung up: this would only apply to interactions on the sub-atomic level, where the particles used to interact with the environment are significant compared to the particles being observed. Maybe it's the engineer in me, but I don't think the energy transferred to a two ton car by a laser being used to ping for distance will be significant enough to make the resulting location or velocity calculations incorrect.

  12. Re: jmajors
    If I understand non-determinism correctly, it is not simply that the world is so random that your agent's actions don't matter, but instead that you cannot predict (the probability is unknown) which outcome your agent's action will produce, out of a bounded set of possibilities.

    e.g., if your agent controls a game of Pacman, it can make Pacman go up, down, left, or right. You can't predict what it will do because it's non-deterministic, but your agent isn't going to make the ghosts disappear.

    Of course, if I'm wrong please correct me; I want to make sure I understand this correctly.

  13. Nick:

    Jason and you are both saying the same thing--that non-deterministic doesn't necessarily mean "unbounded indeterminacy"--and your take is correct. See my comment to Sanil and Cameron in this thread.

    rao

  14. With item 1, there is the issue that in environments that are non-deterministic, stochastic, etc., there is almost an understanding that the expected performance is very unlikely. This will inevitably shorten the time and effort the designer of the agent will put in; as Cameron said, there is no need to make the effort to calculate every possibility. This also means that no agent can be considered better or worse if an environment is non-deterministic. In such an environment, no action can really be considered more rational than another agent's actions, since theoretically each action has some probability of leading to a disaster.
    As for prior knowledge, I do not think it would be a good idea at all to say that everything that is important is already encoded in the percept sequence, for many reasons. First of all, for this to be possible, the designer would have to know everything that is important in the environment beforehand, which is impossible. Let's look at the taxi-driver agent as the example. If we said that all of the important information was encoded into the percept sequence by the designer, then that designer would have had to know the road conditions at all times. But the environment is dynamic, so this would not be possible. In fact, it is crucial that an agent have the ability to learn to some degree in order to perform in a dynamic environment.

    Angel Avila

  15. 2.] Going along the lines of the discussion on this point, I think that a star millions of light-years away from Earth presents a relevant example too. Because of our (present) inability to travel faster than light, it is not possible to observe the current state of the star by any means. Please correct me if I am wrong.

    3.] One method of selecting which of several conflicting objectives to maximize, in an optimal agent, can be that of relying on external feedback. In the beginning the agent can start by maximizing the statistically most preferred objective and, based on feedback from the external environment, can switch to another objective or continue maximizing the present one. The statistics about conflicting objectives can be provided as prior knowledge and enhanced by learning. E.g., in the case of the taxi-driving agent, one can start by optimizing the cost even if that means trading off time, because that approach works in most cases. But in the case of a fare that is in a hurry to reach somewhere, one is bound to get feedback, and this can be used to switch to optimizing the time.

    4.] Percept sequences can only encode the current state of the environment, but it is prior knowledge (in the form of experience encoded by learning, or hardwired responses) that helps determine the best option to choose when dealing with a given environment state, assuming that most agents are goal-based or utility-based agents. As for comparing knowledge amongst agents, one criterion that can be applied is testing how well two agents can generalize when put into a completely new situation.

    5.] Lookup-based and simple reflex-based agents can (at least theoretically) be said to have this property: if one can, in theory, come up with a lookup table for the former or a set of condition-action rules for the latter, then we can instantiate these two kinds of agents for any environment. The problem starts when we have to consider the questions "how does the world evolve?" and "what do my actions do?". Both of these questions require the agent to model details about the environment, and the complexity of that model increases with the complexity of the environment. As pointed out by Sanil, Simon's ant is a simple reflex-based agent; in a sense, a biological extension of the dear "Roomba the vacuum cleaner robot" of CSE 471. :-)
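    For concreteness, a tiny sketch of the lookup/reflex distinction discussed above (the percept format and rules are invented simplifications of the vacuum world, not code from the textbook):

    # Simple reflex agent: condition-action rules over the *current* percept only.
    # Percept format (location, dirty) is an invented simplification of the vacuum world.
    def reflex_vacuum_agent(percept):
        location, dirty = percept
        if dirty:
            return "Suck"
        return "Right" if location == "A" else "Left"

    # A reflex agent *with state* additionally remembers something about past percepts,
    # e.g. which squares it has already found clean, so it can decide when it is done.
    class StatefulVacuumAgent:
        def __init__(self):
            self.seen_clean = set()

        def act(self, percept):
            location, dirty = percept
            if dirty:
                return "Suck"
            self.seen_clean.add(location)
            if self.seen_clean == {"A", "B"}:
                return "NoOp"    # internal state, not the current percept, tells it to stop
            return "Right" if location == "A" else "Left"

    print(reflex_vacuum_agent(("A", True)))   # Suck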

  16. 3. I think it is perfectly reasonable to aim for optimality in our agent algorithms, despite it being difficult to prove human optimality. After all, isn't the hardest part of judging human performance the fact that we don't understand/know the algorithms behind human behavior? Optimizing a computer agent is possible because we know the environments (generally), we know the agents, and we know the code behind the agents. Granted, tradeoffs are going to be made. In the case of the taxi example, a simple statistical analysis will yield the optimal solution for a multi-factored goal (i.e., cost and travel time). An agent should aim for a compromise between all goals, and act accordingly. I think there is a distinct difference between finding the maximum result for all goals and finding the optimal one.

  17. 6. It seems like in the book and in our discussions an agent's goal is always obtained from an outside source, the agent designer. Is it possible to create an agent that formulates its own goals? Would it even make sense? It seems like that characteristic is what has allowed humans to come as far as they have, but it also seems like it could easily backfire. Thoughts?

  18. Doug - I think that some basic goals are programmed into humans, such as survival and reproduction. From these, further goals can then be formulated to help achieve those more basic goals, such as building a shelter, hunting, gathering, finding a mate... Formulating these subsequent goals from more basic goals probably involves exploration of the environment, experiment, and memory. Say I see a cat catch a mouse and eat it. Now what if I try the same thing? What does that do for my state? It would obviously feed me; I take note and remember for the future that this helped me survive, as I am no longer hungry (or am less hungry, as a mouse isn't much :D). This may have some sort of cascading effect that eventually leads to figuring out better methods of feeding when joined with other observations, like "larger animals eat larger animals, so I should hunt something larger, which might yield more food."

    Now, where did those basic goals of survival, reproduction, etc. come from? That's probably a product of randomness and evolution through selection for fitness, whose selective pressures included survival until reproduction. These selective pressures seem to be our agent designer, as they have provided goals. All in all, I think it's definitely possible to create an agent that formulates its own goals that both make sense and improve the agent, but some basic ground rules (selective pressures) need to exist before anything can come from it. Note that the selective pressures can be dynamic, producing endless variation and constant change.

    5. Environment versus Agent Complexity
    For the architectures that we saw, it appears that pretty much any agent can be joined with any environment, and as long as the rules that the agent follows still make sense in the environment, the agent can still work effectively. While there obviously needs to be a limit somewhere (the agent needs to be able to think, act, perceive, or some combination of those), complex action can be derived from a very simple rule set. If the environment is sufficiently complex and the rules that govern the agent pertain to the environment, then the actions depend on the rules, which depend on the environment. The complex actions of the agent can then keep the agent functioning effectively in the complex environment. Simon's ant definitely pertains to this, as the ant, a (relatively) simple creature, lives in a highly complex world: ours. Its goal was simple, and yet the complex environment created complex action by the ant, which allowed the ant to survive in the world. I assume the ant survived and met its goals because there are many ants alive today.

  19. Doug:

    That's the kind of thinking that will give us Skynet if we're not careful.

    I guess you have to consider the source of "human goals". Say I have a goal of graduating. You could argue that graduating is actually just the optimal action I chose to accomplish my pre-programmed goal of survival.

    On the other side, though, say I decide to set a goal to clean my desk. Having a clean desk wasn't my goal before, unless I set a certain threshold before I set the goal. It doesn't contribute to a higher-level goal, like graduating does, so it's not an action I chose to achieve a previous goal. I simply decided to clean it.

    I guess what I'm getting at is an agent won't set a new goal unless it contributes to a current goal. For example, Skynet may have a goal of eliminating all threats, and set a sub-goal of destroying all humanity because it sees all humans as threats.


    3.) I think that, given an appropriate algorithm, it's logical to expect an AI to determine an optimal solution. Think back to when you were learning calculus: "Farmer Joe has 100 feet of wire fence to build a pen. What should the dimensions of the pen be to maximize its area?" An AI can calculate this very quickly, and should be expected to. A taxi AI should be able to use an optimization formula that is a function of cost, time, and whatever other factors.
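    (A quick sketch of that fence calculation, just to make the point concrete; a coarse brute-force search stands in for the calculus.)

    # Maximize the area of a rectangular pen with 100 feet of fence:
    # perimeter 2*(w + h) = 100, so h = 50 - w and area(w) = w * (50 - w).
    # Searching widths in 0.1-foot steps is enough to recover the 25 x 25 answer.
    best_w = max((w / 10 for w in range(0, 501)), key=lambda w: w * (50 - w))
    print(best_w, best_w * (50 - best_w))   # 25.0 625.0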

    4.) Maybe I'm missing the boat on prior knowledge, but it makes total sense to me. Your taxi AI could have prior knowledge that between 4:00 and 6:00 PM it's a bad idea to take I-10 out of Phoenix. In fact, now that I think about it, how could the taxi function without some prior knowledge in the form of a map? If I get in the cab and tell it to take me to 35th Ave and Camelback, it can't use percepts to know what I'm talking about. In that sense, then, you could compare one taxi's map to another's.

    That is kind of a special case, though. I'm not sure what prior knowledge you could give to a roomba.

  20. The thread by Doug/Alex/Ryan is interesting. For what it's worth, here are a few points I can add:

    -- The point about an externally imposed performance metric is meant for artificial intelligent agents--it doesn't strictly have to apply to humans. The reason for it, of course, is that the agent (and its designer) can't just conveniently claim that whatever happened to them was what they actually intended. (E.g., if the agent falls into a ditch, it could then claim that it "meant" to fall into the ditch.)

    Of course, from Bush's Iraq to Aesop's fox, humans are quite capable of post-facto rationalization of their actions.
    In making intelligent agents we don't necessarily have to make them human-like in all ways (for example, humans are not required to save other humans when their own lives are in danger--and yet we might want to program that into our robots, as Asimov clearly suggested).

    =========
    A separate point is something that both Alex and Ryan pointed out very well. At least some of the goal-setting capability of humans comes in aid of supporting basic goals (and urges) that have been programmed into them by evolution.

    If *no* goals are pre-programmed, then Camus' "Myth of Sisyphus" conundrum--which says: "There is but one truly philosophical problem, and that is suicide."--comes to center stage with a vengeance.

    Rao

    P.S.: As long as we are having these outré discussions, you might also wonder when and whether the agents designed by us start having rights. This is of course a theme that fiction has explored--most recently Spielberg's movie AI--in some form or another. The following links connect to a Science Cafe discussion on this topic that I took part in a year or so back:

    [Flyer] http://rakaposhi.eas.asu.edu/nov07cafe.htm

    [Audio] http://rakaposhi.eas.asu.edu/science-cafe-11-16-07.WAV

  21. 3 - Optimality
    Several have commented on the give-and-take required in optimizing multiple goals (private jet vs. walking) and on the consequences of letting the agent set sub-goals (Skynet). I've heard of satisficing, which strives for an adequate solution rather than an optimal one. It is said that humans satisfice rather than optimize, and this sounds reasonable to me. I doubt that I've taken the cheapest flights for my last several trips, but the cost, the schedule, and how much I like Southwest over United mean my solutions are adequate for me. I suppose it depends on the application of the agent whether this is a reasonable goal for the architecture.

  22. Mike uses the word "satisficing"--it is a Nobel word (not noble, but Nobel!). The only Nobel prize ever awarded to a computer science professor went to the guy who studied decision-making at the Milwaukee municipal corporation and concluded that humans are "satisficers" in their decision-making. You have to find out who this guy was (hint: he also had an ant).

    The tricky part, however, is: how does one "formalize" satisficing? With optimality, you can talk about whether a decision is optimal or not, and if not, how far from optimal it is. With satisficing, it seems like a wishy-washy goal--every decision you could ever make, however atrociously bad, might be considered satisficing. So most formal analyses are based on optimality. [And as I will repeat a couple of times before the semester ends, the *true* difference between a computer scientist and a programmer is that the former knows what it takes to gain optimality, and gives it up reluctantly in many cases, while the latter doesn't know and doesn't care.]

    [There *is* a way of formalizing satisficing that realizes that we resort to satisficing because the computational cost of reaching optimum is too high. So, instead of optimizing just the solution, we try to optimize the sum of computational cost to find the solution + the quality of the solution. This however turns out to be quite hard in most cases--since you never really know whether you could have improved the quality of the solution by thinking/computing a bit more. We will talk about what are called "Anytime computations" which make a heuristic solution for this type of optimization feasible (see the juxtaposition of opposites in the previous sentence that makes it almost-oxymoronic..)]
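    A bare-bones sketch of such an anytime computation (the improvement and quality functions below are placeholders I am inventing): keep a best-so-far solution that can be returned whenever further deliberation stops being worth its cost.

    import time

    def anytime_optimize(initial_solution, improve, quality, deadline_seconds):
        """Keep improving a solution until time runs out, always holding a usable
        best-so-far answer. `improve` and `quality` are placeholder callables."""
        best = initial_solution
        start = time.time()
        while time.time() - start < deadline_seconds:
            candidate = improve(best)
            if quality(candidate) > quality(best):
                best = candidate
        return best   # interrupt at the deadline and you still have an answer

    # Toy stand-in: hill-climb toward x = 10 with quality -(x - 10)^2.
    print(anytime_optimize(0.0, lambda x: x + 0.1, lambda x: -(x - 10) ** 2, 0.01))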

    Rao

  23. Shawn:

    The interesting point about plausible reasoning (of which "reasoning with uncertainty" is one type, with non-monotonic/default logics being the other) is that it allows an agent with limited computational and memory powers to still cope with vastly complex worlds and actions. In the case above, if the agent *has to* enumerate every possible state that can in fact result, then it may be completely infeasible to even get started. Instead, the representation you summarized above allows it to clump all its ignorance (we don't know some of the things that can happen) and laziness (we don't feel like enumerating all of them) into "other stuff can happen 5% of the time".

    This allows the agent to come up with plans that can--in theory--work at least in 95% of the time (not 95% of the worlds, but 95% of the time).

    [Now, of course, if part of that 5% probability mass corresponds to a *really really* bad world state--e.g., armageddon--then you will pay pretty dearly. For example, sub-prime loans do work out okay in the most obvious cases--those that capture 95% of the probability; lurking in the 5% probability mass is the armageddon that resulted in the current "Crecession" ;-)]
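    (A two-line illustration of why that lurking 5% matters, with invented payoffs:)

    # Invented payoffs: the plan gains 10 in the 95% of cases it handles,
    # but the lumped-together 5% "other stuff" hides a catastrophic outcome.
    expected_value = 0.95 * 10 + 0.05 * (-10000)
    print(expected_value)   # -490.5: optimizing for the 95% alone can still be a losing bet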

    Rao

  24. 1. There has already been a lot of good discussion on this, so I'll just add one thing. In my eyes, what makes the medium-hard environment more difficult for the agent designer depends on the goal of the agent. If your agent's goal is only to observe everything it possibly can in the area around it, ignoring the correctness and completeness of observations, I think a partially observable environment would be no more difficult than a fully observable one. However, if you set goals beyond the basic act of observation, such as walking out of the room, then you will have a tougher time preparing the agent to respond to unknown obstacles that cannot be observed.

    3. This is the difference between Acting Humanly and Acting Rationally. If we forced ourselves to stick with suboptimal designs for the sake of imitation, we'd still be out there flapping our hands with giant wings attached to them. Also, I have yet to see the bird that can fly at Mach 2. =)

    4. I think it would be worthwhile to consider agent architectures where prior knowledge is the main focus point. Consider the automated checkouts at a grocery store. All it needs to do is guide the user through the process, scan the items, and make appropriate decisions regarding the items scanned, and in this case prior knowledge is key. If something happens and the machine cannot respond appropriately, it doesn't attempt to analyze the situation. It merely calls the attendant who will come help.

    Continuing Ryan's taxi example, I agree that a lot of prior knowledge is necessary for a taxi driver to be able to navigate the city, and if the taxi driver has to respond to a new situation, they would only have their prior knowledge and reasoning capabilities to fall back on and it should be enough to get them through all but the most ridiculous of possibilities in this day and age. I think, though, that prior knowledge needs to be updated with the times or it can only take you so far. Who knows where we'll be in the year 2050? Maybe we'll all be flying solar-powered hovercars with adjustable altitude, in which case having the prior knowledge of a taxicab driver from 2009 would be insufficient.

