# Solving Problems by Searching (AIMA Chapter 3)


• ## Intelligent behavior can be generated by having a look-up table or reactive policy that tells the agent what to do in every circumstance, but:

• Such a table or policy is difficult to build
• All contingencies must be anticipated

• ## Given:

• An initial state of the world
• A set of possible actions or operators that can be performed.
• A goal test that can be applied to a single state of the world to determine if it is a goal state.
• ## Find:

• A solution stated as a path of states and operators that shows how to transform the initial state into one that satisfies the goal test.

• ## A problem can be defined formally by five components:

• The initial state that the agent starts in
• A description of the possible actions available to the agent -> Actions(s)
• A description of what each action does; the transition model -> Result(s,a)
• Together, the initial state, actions, and transition model implicitly define the state space of the problem—the set of all states reachable from the initial state by any sequence of actions. -> may be infinite
• The goal test, which determines whether a given state is a goal state.
• A path cost function that assigns a numeric cost to each path.
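The five components above can be sketched as a small Python class. The road fragment, city names, and the `RouteProblem` interface below are illustrative (loosely based on the Romania map commonly used with this material), not a definitive implementation:

```python
# A minimal sketch of the five-component problem definition, using a
# fragment of a road map as the example domain (distances are illustrative).
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Bucharest": {},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial            # 1. the initial state
        self.goal = goal

    def actions(self, s):                 # 2. Actions(s): cities reachable from s
        return list(ROADS[s])

    def result(self, s, a):               # 3. transition model Result(s, a):
        return a                          #    driving toward city a puts us in a

    def goal_test(self, s):               # 4. the goal test
        return s == self.goal

    def step_cost(self, s, a):            # 5. step cost; path cost sums these
        return ROADS[s][a]

def path_cost(problem, path):
    """Sum the step costs along a path of states."""
    return sum(problem.step_cost(s, a) for s, a in zip(path, path[1:]))

problem = RouteProblem("Arad", "Bucharest")
print(path_cost(problem, ["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 450
```

The state space (all cities reachable from Arad) is defined only implicitly, by repeatedly applying `actions` and `result` from the initial state.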

• ## Real world is absurdly complex

• state space must be abstracted for problem solving
• The process of removing detail from a representation is called abstraction.

• ## (Abstract) action = complex combination of real actions

• e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.

• ## (Abstract) solution =

• set of real paths that are solutions in the real world

• ## Path cost: a function that assigns a cost to a path, typically by summing the cost of the individual actions in the path.

• May want to find minimum cost solution.

• ## Basic idea:

• offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states)

• ## Strategies are evaluated along the following dimensions:

• completeness: does it always find a solution if one exists?
• time complexity: how long does it take to find the solution?
• space complexity: maximum number of nodes in memory
• optimality: does it always find a least-cost solution?
• ## Time and space complexity are measured in terms of

• b: maximum branching factor of the search tree
• d: depth of the least-cost solution
• m: maximum depth of the state space (may be ∞)

• ## Uninformed (blind, exhaustive, brute-force) search strategies use only the information available in the problem definition and do not guide the search with any additional information about the problem.

• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search

• ## Implemented by adding new nodes to the end of the queue (FIFO queue):

• GENERAL-SEARCH(problem, ENQUEUE-AT-END)
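A minimal sketch of this enqueue-at-end scheme, i.e., breadth-first search (the toy graph and function names are illustrative):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """GENERAL-SEARCH with ENQUEUE-AT-END: the frontier is a FIFO queue,
    so nodes are expanded in order of increasing depth."""
    if start == goal:
        return [start]
    frontier = deque([[start]])            # FIFO queue of paths
    reached = {start}
    while frontier:
        path = frontier.popleft()          # dequeue from the front
        for child in graph[path[-1]]:
            if child not in reached:
                if child == goal:
                    return path + [child]
                reached.add(child)
                frontier.append(path + [child])   # enqueue at the end
    return None

# Toy state space: the shallowest route from A to G is found first.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
print(breadth_first_search(graph, "A", "G"))  # ['A', 'C', 'G']
```

Because new nodes always go to the back of the queue, all nodes at depth k are expanded before any node at depth k+1.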

• ## Complete? No: fails in infinite-depth spaces, spaces with loops

• Modify to avoid repeated states along path
• → complete in finite spaces
• ## Time? O(b^m): terrible if m is much larger than d

• but if solutions are dense, may be much faster than breadth-first

• ## Optimal? No

• Not guaranteed optimal since can find deeper solution before shallower ones explored.

• ## For b = 10, d = 5,

• N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
• N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
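The node counts above follow from two short sums: depth-limited search generates each level once, while iterative deepening regenerates level i on each of the (d + 1 − i) iterations that reach it. A quick check (function names are mine):

```python
# Node counts for branching factor b and solution depth d.
def n_dls(b, d):
    """Depth-limited search to depth d: each level generated once."""
    return sum(b**i for i in range(d + 1))

def n_ids(b, d):
    """Iterative deepening: level i is regenerated (d + 1 - i) times."""
    return sum((d + 1 - i) * b**i for i in range(d + 1))

print(n_dls(10, 5))  # 111111
print(n_ids(10, 5))  # 123456
```

The overhead of iterative deepening is small because the deepest level dominates the total: here it costs only about 11% more nodes than a single depth-limited pass.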

• ## Three methods for reducing repeated work in order of effectiveness and computational overhead:

• Do not follow self-loops (remove successors back to the same state).
• Do not create paths with cycles (remove successors already on the path back to the root). O(d) overhead.
• Do not generate any state that was already generated. Requires storing all generated states (O(b^d) space) and searching them (usually using a hash-table for efficiency).
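A sketch of the third method: graph search keeps every generated state in a hash set, so each state is generated at most once (the toy graph and function name are illustrative):

```python
# Depth-first graph search with full duplicate detection.
def depth_first_graph_search(graph, start, goal):
    frontier = [[start]]                 # LIFO stack of paths
    generated = {start}                  # hash set of all generated states
    while frontier:
        path = frontier.pop()
        if path[-1] == goal:
            return path
        for child in graph[path[-1]]:
            if child not in generated:   # O(1) expected-time membership test
                generated.add(child)
                frontier.append(path + [child])
    return None

# Without the set, the cycle A <-> B would make the search loop forever.
graph = {"A": ["B"], "B": ["A", "G"], "G": []}
print(depth_first_graph_search(graph, "A", "G"))  # ['A', 'B', 'G']
```

This trades the O(b^d) memory for immunity to repeated states; the cheaper second method (checking only the path back to the root) avoids cycles but can still re-explore a state reached by two different paths.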

• ## A heuristic h(n) must be zero if node n represents a goal state.

• Example: Straight-line distance from current location to the goal location in a road navigation problem.
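The straight-line-distance example can be made concrete with a few lines of Python (the coordinates and names below are made up for illustration):

```python
import math

# Straight-line distance heuristic for road navigation: no road between
# two points can be shorter than the straight line, and the estimate is
# zero exactly at the goal state.
COORDS = {"A": (0, 0), "B": (3, 4), "G": (6, 8)}

def h_sld(state, goal="G"):
    (x1, y1), (x2, y2) = COORDS[state], COORDS[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(h_sld("G"))  # 0.0  (zero at the goal, as required)
print(h_sld("A"))  # 10.0
```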
• ## Many search problems are NP-complete so in the worst case still have exponential time complexity; however a good heuristic can:

• Find a solution for an average problem efficiently.
• Find a reasonably good but not optimal solution efficiently.

• ## Idea: use an evaluation function f(n) for each node

• estimate of "desirability"
• Expand most desirable unexpanded node

• ## Special cases:

• greedy best-first search
• A* search

• ## Complete? Yes (unless there are infinitely many nodes with f ≤ f(G) )

• A* is complete as long as
• Branching factor is always finite
• Every operator adds cost at least ε > 0
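A minimal A* sketch using a priority queue ordered by f(n) = g(n) + h(n). The graph, edge costs, and heuristic table are made-up illustration values; h is admissible (it never overestimates the remaining cost):

```python
import heapq

GRAPH = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 1}, "G": {}}
H = {"S": 4, "A": 5, "B": 1, "G": 0}   # admissible heuristic estimates

def astar(start, goal):
    # Frontier entries are (f, g, state, path); heapq pops the lowest f first.
    frontier = [(H[start], 0, start, [start])]
    best_g = {start: 0}                 # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for child, cost in GRAPH[state].items():
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + H[child], g2, child, path + [child]))
    return None

print(astar("S", "G"))  # (['S', 'B', 'G'], 5)
```

Note that the direct-looking route S→A→G costs 6, but A* returns S→B→G at cost 5: the heuristic steers expansion toward B before A's subtree is explored.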

• ## For 8-puzzle:

• A tile can move from square A to B if A is adjacent to B and B is blank
• (a) A tile can move from square A to B if A is adjacent to B.
• (b) A tile can move from square A to B if B is blank.
• (c) A tile can move from square A to B.
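Relaxations (a) and (c) yield the two standard 8-puzzle heuristics: Manhattan distance and misplaced-tile count. A sketch, with states given as 9-tuples read left-to-right, top-to-bottom and 0 as the blank (this encoding is my choice, not from the text):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced(state):
    """Relaxation (c): a tile can move anywhere, so the relaxed solution
    cost is the number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != g and s != 0)

def manhattan(state):
    """Relaxation (a): a tile can slide onto any adjacent square, so the
    relaxed cost is the sum of each tile's Manhattan distance to its goal."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

state = (1, 2, 3, 4, 5, 6, 7, 0, 8)   # one move away from the goal
print(misplaced(state), manhattan(state))  # 1 1
```

Both are admissible because a solution to the relaxed problem can never cost more than a solution to the original; Manhattan distance dominates misplaced-tiles and usually prunes far more of the search.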

• ## Contingency

• Nondeterministic: Suck may dirty a clean carpet
• Partially observable: location, dirt at current location.
• Percept: [L, Clean], i.e., start in #5 or #7 Solution? [Right, if dirt then Suck]
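The contingent solution [Right, if dirt then Suck] branches on a percept rather than being a fixed action sequence. A sketch, using a made-up minimal model of the two-cell vacuum world (cells "L" and "R", dirt as a set of dirty cells):

```python
def execute(state):
    """Run the conditional plan [Right, if dirt then Suck] from a state
    (location, set-of-dirty-cells). Works for both start states #5 and #7,
    which differ only in whether the right cell is dirty."""
    loc, dirt = state
    loc = "R"                    # Right
    if loc in dirt:              # if dirt (perceived at the new location)
        dirt = dirt - {loc}      # then Suck
    return loc, dirt

print(execute(("L", {"R"})))   # ('R', set())  -- right cell was dirty
print(execute(("L", set())))   # ('R', set())  -- right cell was clean
```

The same plan reaches a clean goal state from either candidate start state, which is exactly what makes it a solution to the contingency problem.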

• ## Implementation:

• fringe is a FIFO queue, i.e., new successors go at end

• ## Implementation:

• fringe = LIFO queue, i.e., put successors at front


• ## Deterministic, fully observable → single-state problem

• Agent knows exactly which state it will be in; solution is a sequence
• ## Non-observable → sensorless problem (conformant problem)

• Agent may have no idea where it is; solution is a sequence
• ## Nondeterministic and/or partially observable → contingency problem

• percepts provide new information about current state
• often interleave search and execution

• ## Contour i contains all nodes with f = fᵢ, where fᵢ < fᵢ₊₁
