CptS 440 / 540 Artificial Intelligence: Search
Search
Search permeates all of AI. What choices are we searching through?
- Problem solving: action combinations (move 1, then move 3, then move 2, ...)
- Natural language: ways to map words to parts of speech
- Computer vision: ways to map features to object model
- Machine learning: possible concepts that fit examples seen so far
- Motion planning: sequence of moves to reach goal destination
An intelligent agent is trying to find a set or sequence of actions to achieve a goal. This is a goal-based agent.
Problem-solving Agent
SimpleProblemSolvingAgent(percept)
   state = UpdateState(state, percept)
   if sequence is empty then
      goal = FormulateGoal(state)
      problem = FormulateProblem(state, goal)
      sequence = Search(problem)
   action = First(sequence)
   sequence = Rest(sequence)
   return action
Assumptions
- Static or dynamic? Environment is static
- Fully or partially observable? Environment is fully observable
- Discrete or continuous? Environment is discrete
- Deterministic or stochastic? Environment is deterministic
- Episodic or sequential? Environment is sequential
- Single agent or multiple agent? Environment is single agent
Search Example
Formulate goal: be in Bucharest.
Formulate problem: states are cities; operators drive between pairs of cities.
Find solution: find a sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest) that leads from the current state to a state meeting the goal condition.
Search Space Definitions
State: a description of a possible state of the world; includes all features of the world that are pertinent to the problem
Initial state: description of all pertinent aspects of the state in which the agent starts the search
Goal test: conditions the agent is trying to meet (e.g., have $1M)
Goal state: any state which meets the goal condition (Thursday, have $1M, live in NYC; Friday, have $1M, live in Valparaiso)
Action: function that maps (transitions) from one state to another
Search Space Definitions
Problem formulation: describe a general problem as a search problem
Solution: sequence of actions that transitions the world from the initial state to a goal state
Solution cost (additive): sum of the cost of operators (alternatives: sum of distances, number of steps, etc.)
Search: process of looking for a solution; a search algorithm takes a problem as input and returns a solution; we are searching through a space of possible states
Execution: process of executing the sequence of actions (the solution)
Problem FormulationA search problem is defined by theInitial state (e.g., Arad)Operators (e.g., Arad -> Zerind, Arad -> Sibiu, etc.)Goal test (e.g., at Bucharest)Solution cost (e.g., path cost)
Example Problems – Eight Puzzle
States: tile locations
Initial state: one specific tile configuration
Operators: move blank tile left, right, up, or down
Goal: tiles are numbered from one to eight around the square
Path cost: cost of 1 per move (solution cost is the number of moves, i.e., the path length)
Eight puzzle applet
Example Problems – Robot Assembly
States: real-valued coordinates of robot joint angles and parts of the object to be assembled
Operators: rotation of joint angles
Goal test: complete assembly
Path cost: time to complete assembly
Example Problems – Towers of Hanoi
States: combinations of poles and disks
Operators: move disk x from pole y to pole z, subject to constraints:
- cannot move disk on top of smaller disk
- cannot move disk if other disks on top
Goal test: disks from largest (at bottom) to smallest on goal pole
Path cost: 1 per move
Towers of Hanoi applet
Example Problems – Rubik’s CubeStates: list of colors for each cell on each faceInitial state: one specific cube configurationOperators:  rotate row x or column y on face z direction aGoal:  configuration has only one color on each facePath cost:  1 per moveRubik’s cube applet
Example Problems – Eight QueensStates: locations of 8 queens on chess boardInitial state: one specific queens configurationOperators:  move queen x to row y and column zGoal:  no queen can attack another (cannot be in same row, column, or diagonal)Path cost:  0 per moveEight queens applet
Example Problems – Missionaries and Cannibals
States: number of missionaries, cannibals, and boat on near river bank
Initial state: all objects on near river bank
Operators: move boat with x missionaries and y cannibals to other side of river, subject to constraints:
- no more cannibals than missionaries on either river bank or in boat
- boat holds at most m occupants
Goal: all objects on far river bank
Path cost: 1 per river crossing
Missionaries and cannibals applet
Example Problems – Water Jug
States: contents of 4-gallon jug and 3-gallon jug
Initial state: (0,0)
Operators:
- fill jug x from faucet
- pour contents of jug x into jug y until y full
- dump contents of jug x down drain
Goal: (2,n)
Path cost: 1 per fill
Saving the world, Part I
Saving the world, Part II
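The water-jug formulation above is small enough to solve directly. Here is a minimal Python sketch, assuming the 4- and 3-gallon jugs from the slide and a breadth-first search over (a, b) contents; `water_jug_bfs` and its helper names are illustrative, not from the slides:

```python
from collections import deque

def water_jug_bfs(goal=2, cap_a=4, cap_b=3):
    """BFS over (a, b) jug contents; returns a list of states from (0,0) to a goal."""
    def successors(a, b):
        yield (cap_a, b)                  # fill jug A from faucet
        yield (a, cap_b)                  # fill jug B from faucet
        yield (0, b)                      # dump jug A down drain
        yield (a, 0)                      # dump jug B down drain
        pour = min(a, cap_b - b)          # pour A into B until B full
        yield (a - pour, b + pour)
        pour = min(b, cap_a - a)          # pour B into A until A full
        yield (a + pour, b - pour)

    start = (0, 0)
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        a, b = path[-1]
        if a == goal:                     # goal test: (2, n)
            return path
        for s in successors(a, b):
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])
    return None
```

Because every operator costs 1, BFS returns a minimum-length crossing sequence here.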
Sample Search ProblemsGraph coloringProtein foldingGame playingAirline travelProving algebraic equalitiesRobot motion planning
Visualize Search Space as a TreeStates are nodesActions are edgesInitial state is rootSolution is path from root to goal nodeEdges sometimes have associated costsStates resulting from operator are children
Search Problem Example (as a tree)
Search Function – Uninformed Searches
Open = initial state                       // open list is all generated states
                                           // that have not been “expanded”
While open not empty                       // one iteration of search algorithm
   state = First(open)                     // current state is first state in open
   Pop(open)                               // remove new current state from open
   if Goal(state)                          // test current state for goal condition
      return “succeed”                     // search is complete
   // else expand the current state by generating children and
   // reorder open list per search strategy
   else open = QueueFunction(open, Expand(state))
Return “fail”
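The loop above can be sketched in Python, with the queueing function as the only point of variation between strategies; `tree_search`, `bfs_queue`, and `dfs_queue` are illustrative names, not from the slides:

```python
def tree_search(initial, goal, expand, queueing_fn):
    """Generic uninformed search: queueing_fn decides how children join the open list."""
    open_list = [initial]
    while open_list:
        state = open_list.pop(0)          # current state = first state on open list
        if goal(state):
            return state                  # "succeed"
        open_list = queueing_fn(open_list, expand(state))
    return None                           # "fail"

# BFS: children go to the end of the open list (FIFO queue)
bfs_queue = lambda open_list, children: open_list + children
# DFS: children go to the front of the open list (LIFO stack)
dfs_queue = lambda open_list, children: children + open_list
```

Swapping `bfs_queue` for `dfs_queue` changes level-by-level search into deepest-first search without touching the loop.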
Search StrategiesSearch strategies differ only in QueuingFunctionFeatures by which to compare search strategiesCompleteness (always find solution)Cost of search (time and space)Cost of solution, optimal solutionMake use of knowledge of the domain“uninformed search” vs. “informed search”
Breadth-First SearchGenerate children of a state, QueueingFn adds the children to the end of the open list Level-by-level search Order in which children are inserted on open list is arbitraryIn tree, assume children are considered left-to-right unless specified differentlyNumber of children is “branching factor” b
b = 2Example treesSearch algorithms appletBFS Examples
Analysis
Assume goal node at level d with constant branching factor b
Time complexity (measured in #nodes generated):
1 (1st level) + b (2nd level) + b^2 (3rd level) + … + b^d (goal level) + (b^(d+1) - b) = O(b^(d+1))
This assumes goal on far right of level
Space complexity: at most majority of nodes at level d + majority of nodes at level d+1 = O(b^(d+1))
Exponential time and space
Features
- Simple to implement
- Complete
- Finds shortest solution (not necessarily least-cost unless all operators have equal cost)
Analysis: see what happens with b = 10, expanding 10,000 nodes/second at 1,000 bytes/node
Depth-First SearchQueueingFn adds the children to the front of the open list BFS emulates FIFO queueDFS emulates LIFO stackNet effectFollow leftmost path to bottom, then backtrackExpand deepest node first
Example treesDFS Examples
AnalysisTime complexityIn the worst case, search entire space
Goal may be at level d but tree may continue to level m, m>=d
O(b^m)
Particularly bad if tree is infinitely deep
Space complexity
Only need to save one set of children at each level
1 + b + b + … + b (m levels total) = O(bm), i.e., linear in depth
For the previous example, DFS requires 118 KB instead of 10 petabytes for d = 12 (10 billion times less)
Drawbacks
- May not always find a solution
- Solution is not necessarily shortest or least cost
Benefits
- If many solutions, may find one quickly (quickly moves to depth d)
- Simple to implement
- Space often the bigger constraint, so more usable than BFS for large problems
Comparison of Search Techniques
Avoiding Repeated StatesCan we do it?Do not return to parent or grandparent stateIn 8 puzzle, do not move up right after downDo not create solution paths with cyclesDo not generate repeated states (need to store and check potentially large number of states)
Maze ExampleStates are cells in a mazeMove N, E, S, or WWhat would BFS do (expand E, then N, W, S)?What would DFS do?What if order changed to N, E, S, W and loops are prevented?
Uniform Cost Search (Branch & Bound)
QueueingFn is SortByCostSoFar
Cost from root to current node n is g(n): add operator costs along path
First goal found is least-cost solution
Space & time can be exponential because large subtrees with inexpensive steps may be explored before useful paths with costly steps
If costs are equal, time and space are O(b^d); otherwise, complexity is related to the cost of the optimal solution
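The sort-by-g(n) open list above is naturally a priority queue. A sketch in Python; the `best_g` table that skips costlier re-entries to a state is a common addition, not stated on the slide:

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """UCS: open list sorted by g(n), cost from root; neighbors(n) yields (cost, child)."""
    frontier = [(0, start, [start])]          # entries are (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path                    # first goal popped is least-cost
        for step_cost, child in neighbors(state):
            new_g = g + step_cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None
```

On a graph where the cheap-looking first step leads to an expensive path, UCS still pops the least-cost goal first.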
UCS Example
UCS ExampleOpen list:  C
UCS ExampleOpen list:  B(2) T(1) O(3) E(2) P(5)
UCS ExampleOpen list:  T(1) B(2) E(2) O(3) P(5)
UCS ExampleOpen list:  B(2) E(2) O(3) P(5)
UCS ExampleOpen list:  E(2) O(3) P(5)
UCS ExampleOpen list:  E(2) O(3) A(3) S(5) P(5) R(6)
UCS ExampleOpen list:  O(3) A(3) S(5) P(5) R(6)
UCS ExampleOpen list:  O(3) A(3) S(5) P(5) R(6) G(10)
UCS ExampleOpen list:  A(3) S(5) P(5) R(6) G(10)
UCS ExampleOpen list:  A(3) I(4) S(5) N(5) P(5) R(6) G(10)
UCS ExampleOpen list:  I(4) P(5) S(5) N(5) R(6) G(10)
UCS ExampleOpen list:  P(5) S(5) N(5) R(6) Z(6) G(10)
UCS ExampleOpen list:  S(5) N(5) R(6) Z(6) F(6) D(8) G(10) L(10)
UCS ExampleOpen list:  N(5) R(6) Z(6) F(6) D(8) G(10) L(10)
UCS ExampleOpen list:  Z(6) F(6) D(8) G(10) L(10)
UCS ExampleOpen list:  F(6) D(8) G(10) L(10)
UCS Example
Comparison of Search Techniques
Iterative Deepening SearchDFS with depth boundQueuingFn is enqueue at front as with DFSExpand(state) only returns children such that depth(child) <= thresholdThis prevents search from going down infinite pathFirst threshold is 1If do not find solution, increment threshold and repeat
Examples
Analysis
What about the repeated work?
Time complexity (number of generated nodes):
[b] + [b + b^2] + … + [b + b^2 + … + b^d]
= (d)b + (d-1)b^2 + … + (1)b^d
= O(b^d)
Repeated work is approximately 1/b of total work: negligible
Example:  b=10, d=5
N(BFS) = 1,111,100
N(IDS) = 123,450FeaturesShortest solution, not necessarily least costIs there a better way to decide threshold?  (IDA*)
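The depth-bounded DFS plus increasing threshold described above can be sketched directly; `depth_limited` and `iterative_deepening` are illustrative names, and the `max_depth` cap is an addition so the loop cannot run forever:

```python
def depth_limited(state, goal, expand, limit):
    """DFS that never expands children below the depth threshold."""
    if goal(state):
        return [state]
    if limit == 0:
        return None
    for child in expand(state):
        result = depth_limited(child, goal, expand, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(start, goal, expand, max_depth=50):
    """Repeat depth-limited DFS, incrementing the threshold until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, expand, limit)
        if result is not None:
            return result
    return None
```

Because each iteration is a bounded DFS, the space cost stays linear while the returned solution is the shallowest one, matching the slide's "shortest solution, not necessarily least cost."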
Comparison of Search Techniques
Bidirectional Search
Search forward from initial state to goal AND backward from goal state to initial state
Can prune many options
Considerations: which goal state(s) to use; how to determine when the searches overlap; which search to use for each direction
Here, two BFS searches
Time and space is O(b^(d/2))
Informed Searches
Best-first search, hill climbing, beam search, A*, IDA*, RBFS, SMA*
New terms: heuristics, optimal solution, informedness, hill climbing problems, admissibility
New parameters:
- g(n) = cost so far from initial state to state n
- h(n) = estimated cost (distance) from state n to closest goal
h(n) is our heuristic
- Robot path planning: h(n) could be Euclidean distance
- 8 puzzle: h(n) could be #tiles out of place
Search algorithms which use h(n) to guide search are heuristic search algorithms
Best-First SearchQueueingFn is sort-by-hBest-first search only as good as heuristicExample heuristic for 8 puzzle:           Manhattan Distance
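The Manhattan-distance heuristic mentioned above can be computed in a few lines. A sketch, assuming 8-puzzle states are 9-tuples read row by row on the 3x3 board with 0 as the blank (`manhattan` is an illustrative name):

```python
def manhattan(state, goal):
    """h(n) = sum of Manhattan distances of each tile from its goal cell.
    The blank (0) is not counted; states are 9-tuples, row-major on a 3x3 board."""
    h = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        h += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return h
```

A best-first search would sort its open list by this value; note it is admissible, since every move shifts one tile by one cell.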
Example
Example
Example
Example
Example
Example
Example
Example
Example
Comparison of Search Techniques
Hill Climbing (Greedy Search)QueueingFn is sort-by-hOnly keep lowest-h state on open listBest-first search is tentativeHill climbing is irrevocableFeaturesMuch fasterLess memoryDependent upon h(n)If bad h(n), may prune away all goalsNot complete
Example
Example
Hill Climbing Issues
Also referred to as gradient descent
Foothill problem / local maxima / local minima: can be solved with random walk or more steps
Other problems: ridges, plateaus
[figure: search landscape showing global maximum and local maxima]
Comparison of Search Techniques
Beam SearchQueueingFn is sort-by-hOnly keep best (lowest-h) n nodes on open listn is the “beam width”n = 1, Hill climbingn = infinity, Best first search
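The beam discipline above (sort by h, keep only the best n nodes) can be sketched as follows; `beam_search` and its parameter names are illustrative, not from the slides:

```python
def beam_search(start, goal, expand, h, width):
    """Keep only the best (lowest-h) `width` nodes on the open list each round.
    width=1 behaves like hill climbing; an unbounded width is best-first search."""
    open_list = [start]
    while open_list:
        state = open_list.pop(0)
        if goal(state):
            return state
        open_list.extend(expand(state))
        open_list.sort(key=h)
        open_list = open_list[:width]      # prune everything outside the beam
    return None
```

Like hill climbing, the pruning makes the search incomplete: a goal can be discarded if its ancestors fall outside the beam.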
Example
Example
Example
Example
Example
Example
Example
Example
Example
Comparison of Search Techniques
A*QueueingFn is sort-by-ff(n) = g(n) + h(n)Note that UCS and Best-first both improve searchUCS keeps solution cost lowBest-first helps find solution quicklyA* combines these approaches
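Sorting the open list by f(n) = g(n) + h(n) as described above gives the following sketch; as with the UCS sketch, the `best_g` table that skips costlier re-entries is a common addition, not stated on the slide:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A*: open list sorted by f(n) = g(n) + h(n); optimal when h is admissible.
    neighbors(n) yields (step_cost, child) pairs."""
    frontier = [(h(start), 0, start, [start])]    # entries are (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for step_cost, child in neighbors(state):
            new_g = g + step_cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h(child), new_g, child, path + [child]))
    return None
```

With h = 0 this degenerates to UCS; with g ignored it degenerates to best-first search, which is exactly the combination the slide describes.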
Power of fIf heuristic function is wrong it eitheroverestimates (guesses too high)underestimates (guesses too low)Overestimating is worse than underestimatingA* returns optimal solution if h(n) is admissibleheuristic function is admissible if never overestimates true cost to nearest goalif search finds optimal solution using admissible heuristic, the search is admissible
Overestimating
[figure: search tree rooted at A, with a heuristic value at each node and a cost on each edge]
Solution costs: ABF = 9, ADI = 8 (optimal)
Open list: A (15), B (9), F (9)
The heuristic overestimates the remaining cost along the ADI path, so the search expands ABF first and misses the optimal solution
ExampleA* applied to 8 puzzleA* search applet
Example
Example
Example
Example
Example
Example
Example
Example
Optimality of A*
Suppose a suboptimal goal G2 is on the open list
Let n be an unexpanded node on the smallest-cost path to optimal goal G1
f(G2) = g(G2)   since h(G2) = 0
       > g(G1)   since G2 is suboptimal
       >= f(n)   since h is admissible
Since f(G2) > f(n), A* will never select G2 for expansion
Comparison of Search Techniques
IDA*Series of Depth-First SearchesLike Iterative Deepening Search, exceptUse A* cost threshold instead of depth thresholdEnsures optimal solutionQueuingFnenqueues at front if f(child) <= thresholdThresholdh(root) first iterationSubsequent iterationsf(min_child)min_child is the cut off child with the minimum f valueIncrease always includes at least one new nodeMakes sure search never looks beyond optimal cost solution
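The threshold scheme above (start at h(root), then raise to the minimum f value that was cut off) can be sketched recursively; the path-membership check that avoids trivial cycles is an addition, and all names are illustrative:

```python
def ida_star(start, goal, neighbors, h):
    """IDA*: depth-first search bounded by an f-cost threshold, raised each
    iteration to the smallest f value that exceeded the previous threshold."""
    def dfs(state, g, threshold, path):
        f = g + h(state)
        if f > threshold:
            return f, None                    # cut off; report f for next threshold
        if state == goal:
            return f, path
        minimum = float('inf')
        for step_cost, child in neighbors(state):
            if child not in path:             # avoid cycling back along this path
                t, found = dfs(child, g + step_cost, threshold, path + [child])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    threshold = h(start)                      # first threshold is h(root)
    while True:
        t, found = dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None                       # no goal reachable
        threshold = t                         # f of the minimum cut-off child
```

Because the threshold only ever rises to an f value that was actually generated, the search never looks beyond the optimal solution cost.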
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Example
Analysis
Some redundant search: small amount compared to the work done on the last iteration
Dangerous if h(n) is continuous-valued or if values are very close: if threshold = 21.1 and the next value is 21.2, each iteration probably includes only 1 new node
Time complexity is O(b^m)
Space complexity is O(m)
Comparison of Search Techniques
RBFSRecursive Best First SearchLinear space variant of A*Perform A* search but discard subtrees when perform recursionKeep track of alternative (next best) subtreeExpand subtree until f value greater than boundUpdate f values before (from parent)                and after (from descendant) recursive call
Algorithm
// Input is current node and f limit
// Returns goal node or failure, plus updated limit
RBFS(n, limit)
   if Goal(n)
      return n
   children = Expand(n)
   if children empty
      return failure, infinity
   for each c in children
      f[c] = max(g(c) + h(c), f[n])            // update f[c] based on parent
   repeat
      best = child with smallest f value
      if f[best] > limit
         return failure, f[best]
      alternative = second-lowest f value among children
      newlimit = min(limit, alternative)
      result, f[best] = RBFS(best, newlimit)   // update f[best] based on descendant
      if result is not failure
         return result
Example
Example
Example
Example
Example
Example
Analysis
Optimal if h(n) is admissible
Space is O(bm)
Features
- Potentially exponential time in cost of solution
- More efficient than IDA*
- Keeps more information than IDA* and benefits from storing it
SMA*Simplified Memory-Bounded A* SearchPerform A* searchWhen memory is fullDiscard worst leaf (largest f(n) value)Back value of discarded node to parentOptimal if solution fits in memory
Example
Let MaxNodes = 3
- Initially B and G are added to the open list, then we hit the max
- B has the larger f value, so discard B but save f(B)=15 at parent A
- Add H, but f(H)=18; H is not a goal and we cannot go deeper, so set f(H)=infinity and save it at G
- Generate next child I with f(I)=24, the bigger child of A; we have now seen all children of G, so reset f(G)=24
- Regenerate B and child C; C is not a goal, so f(C) is reset to infinity
- Generate second child D with f(D)=24, backing up the value to its ancestors
- D is a goal node, so the search terminates
Heuristic FunctionsQ:  Given that we will only use heuristic functions that do not overestimate, what type of heuristic functions (among these) perform best?A: Those that produce higher h(n) values.
ReasonsHigher h value means closer to actual distanceAny node n on open list withf(n) < f*(goal)will be selected for expansion by A*This means if a lot of nodes have a low underestimate (lower than actual optimum cost)All of them will be expandedResults in increased search time and space
InformednessIf h1 and h2 are both admissible andFor all x, h1(x) > h2(x), then h1 “dominates” h2Can also say h1 is “more informed” than h2Exampleh1(x):h2(x):  Euclidean distanceh2 dominates h1
Effect on Search CostIf h2(n) >= h1(n) for all n (both are admissible)then h2 dominates h1 and is better for searchTypical search costsd=14, IDS expands 3,473,941 nodesA* with h1 expands 539 nodesA* with h2 expands 113 nodesd=24, IDS expands ~54,000,000,000 nodesA* with h1 expands 39,135 nodesA* with h2 expands 1,641 nodes
Which of these heuristics are admissible? Which are more informed?
- h1(n) = #tiles in wrong position
- h2(n) = sum of Manhattan distances between each tile and its goal location
- h3(n) = 0
- h4(n) = 1
- h5(n) = min(2, h*[n])
- h6(n) = Manhattan distance for blank tile
- h7(n) = max(2, h*[n])
Generating Heuristic FunctionsGenerate heuristic for simpler (relaxed) problemRelaxed problem has fewer restrictions Eight puzzle where multiple tiles can be in the same spot Cost of optimal solution to relaxed problem is an admissible heuristic for the original problem Learn heuristic from experience
Iterative Improvement AlgorithmsHill climbingSimulated annealingGenetic algorithms
Iterative Improvement AlgorithmsFor many optimization problems,                                solution path is irrelevantJust want to reach goal stateState space / search spaceSet of “complete” configurationsWant to find optimal configuration                                        (or at least one that satisfies goal constraints)For these cases, use iterative improvement algorithmKeep a single current stateTry to improve itConstant memory
ExampleTraveling salesmanStart with any complete tourOperator: Perform pairwise exchanges
ExampleN-queensPut n queens on an n × n board with no two queens on the same row, column, or diagonalOperator: Move queen to reduce #conflicts
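The n-queens operator above (move a queen to reduce #conflicts) can be sketched as steepest-descent hill climbing with random restarts when stuck on a local minimum; the restart loop is an addition beyond plain hill climbing, and all names are illustrative:

```python
import random

def conflicts(board):
    """#pairs of queens attacking each other; board[col] = row of the queen in that column."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def hill_climb_queens(n, seed=0):
    """Repeatedly move one queen within its column to wherever most reduces
    conflicts; restart from a random board when no move improves."""
    rng = random.Random(seed)
    while True:                                   # restart loop
        board = [rng.randrange(n) for _ in range(n)]
        while True:
            current = conflicts(board)
            if current == 0:
                return board                      # goal: no attacking pair
            best_move, best_val = None, current
            for col in range(n):
                original = board[col]
                for row in range(n):
                    if row == original:
                        continue
                    board[col] = row
                    val = conflicts(board)
                    if val < best_val:
                        best_move, best_val = (col, row), val
                board[col] = original
            if best_move is None:
                break                             # local minimum: restart
            board[best_move[0]] = best_move[1]
```

This illustrates the iterative-improvement idea: only the complete configuration matters, not the path that produced it, so memory stays constant.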
Hill Climbing (gradient ascent/descent)“Like climbing Mount Everest in thick fog with amnesia”
Local Beam SearchKeep k states instead of 1Choose top k of all successorsProblemMany times all k states end up on same local hillChoose k successors RANDOMLYBias toward good onesSimilar to natural selection
Simulated AnnealingPure hill climbing is not complete, but pure random search is inefficient. Simulated annealing offers a compromise. Inspired by annealing process of gradually cooling a liquid until it changes to a low-energy state. Very similar to hill climbing, except include a user-defined temperature schedule. When temperature is “high”, allow some random moves. When temperature “cools”, reduce probability of random move. If T is decreased slowly enough, guaranteed to reach best state.
Algorithm
function SimulatedAnnealing(problem, schedule)   // returns solution state
   current = MakeNode(Initial-State(problem))
   for t = 1 to infinity
      T = schedule[t]
      if T = 0
         return current
      next = randomly-selected child of current
      ΔE = Value[next] - Value[current]
      if ΔE > 0
         current = next                          // always accept a better state
      else current = next with probability e^(ΔE/T)
Simulated annealing applet
Traveling salesman simulated annealing applet
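The algorithm above translates almost line for line into Python; here is a minimal sketch assuming we maximize `value(state)` and accept worse moves with probability e^(ΔE/T), with all names illustrative:

```python
import math
import random

def simulated_annealing(start, value, neighbor, schedule, seed=0):
    """Maximize value(state); neighbor(state, rng) proposes a random child.
    Worse moves are accepted with probability exp(dE / T)."""
    rng = random.Random(seed)
    current = start
    for t in range(len(schedule)):
        T = schedule[t]
        if T == 0:
            return current                 # temperature exhausted: return state
        nxt = neighbor(current, rng)
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt                  # accept better moves always,
    return current                         # worse ones with probability e^(dE/T)
```

At high T, exp(dE/T) is close to 1 even for bad moves (near-random walk); as T cools, the acceptance probability of bad moves shrinks toward pure hill climbing, which is the compromise the slide describes.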
Genetic Algorithms
What is a Genetic Algorithm (GA)?
An adaptation procedure based on the mechanics of natural genetics and natural selection
GAs have 2 essential components:
- Survival of the fittest
- Recombination
Representation:
- Chromosome = string
- Gene = single bit or single subsequence in string, represents 1 attribute
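The two components above (survival of the fittest, recombination) can be sketched as a tiny GA over bit-string chromosomes; the tournament selection, one-point crossover, and mutation rate are our illustrative choices, not prescribed by the slide:

```python
import random

def genetic_algorithm(fitness, length, pop_size=20, generations=60, p_mut=0.1, seed=0):
    """Minimal GA: tournament selection (survival of the fittest),
    one-point crossover (recombination), and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g  # bit-flip mutation
                     for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# OneMax toy problem: fitness is simply the number of 1 bits in the chromosome
best = genetic_algorithm(sum, length=12)
```

Each bit plays the role of a gene, and the whole bit string is the chromosome, matching the representation described above.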

More Related Content

PDF
Hill Climbing Algorithm in Artificial Intelligence
PPT
AI Lecture 3 (solving problems by searching)
PPTX
Reinforcement learning
PDF
Ai lab manual
PPTX
Hill climbing algorithm
PPTX
Artificial Intelligence Searching Techniques
PPTX
AI_Session 6 Iterative deepening Depth-first and bidirectional search.pptx
PPT
Divide and conquer
Hill Climbing Algorithm in Artificial Intelligence
AI Lecture 3 (solving problems by searching)
Reinforcement learning
Ai lab manual
Hill climbing algorithm
Artificial Intelligence Searching Techniques
AI_Session 6 Iterative deepening Depth-first and bidirectional search.pptx
Divide and conquer

What's hot (20)

PPT
Composite transformations
PDF
Artificial Intelligence - Hill climbing.
PPTX
Search Algorithms in AI.pptx
PPT
Heuristic Search Techniques {Artificial Intelligence}
PPTX
knowledge representation in artificial intelligence
PDF
Run time storage
PPTX
Problem solving agents
PPTX
AI: AI & Problem Solving
PDF
First Order Logic resolution
PPTX
Knowledge based agents
PPTX
PPTX
Knowledge representation in AI
PPT
Informed search (heuristics)
PDF
Multi-agent systems
PPTX
constraint satisfaction problems.pptx
PDF
I. Alpha-Beta Pruning in ai
PDF
5.-Knowledge-Representation-in-AI_010824.pdf
PPTX
8 queen problem
PDF
Symbol table in compiler Design
PPT
Rule Based System
Composite transformations
Artificial Intelligence - Hill climbing.
Search Algorithms in AI.pptx
Heuristic Search Techniques {Artificial Intelligence}
knowledge representation in artificial intelligence
Run time storage
Problem solving agents
AI: AI & Problem Solving
First Order Logic resolution
Knowledge based agents
Knowledge representation in AI
Informed search (heuristics)
Multi-agent systems
constraint satisfaction problems.pptx
I. Alpha-Beta Pruning in ai
5.-Knowledge-Representation-in-AI_010824.pdf
8 queen problem
Symbol table in compiler Design
Rule Based System
Ad

Viewers also liked (6)

PPT
Uniformed tree searching
PDF
16890 unit 2 heuristic search techniques
PPTX
Adversarial search with Game Playing
PPT
Game Playing in Artificial Intelligence
PPT
Iterative deepening search
PPT
Intro to-iterative-deepening
Uniformed tree searching
16890 unit 2 heuristic search techniques
Adversarial search with Game Playing
Game Playing in Artificial Intelligence
Iterative deepening search
Intro to-iterative-deepening
Ad

Similar to CptS 440 / 540 Artificial Intelligence (20)

PPTX
PPT
02-solving-problems-by-searching-(us).ppt
PPTX
PPTX
AI UNIT 2 PPT AI UNIT 2 PPT AI UNIT 2 PPT.pptx
PPT
Rai practical presentations.
PPTX
Lecture 3 Problem Solving.pptx
PPT
Jarrar.lecture notes.aai.2011s.ch3.uniformedsearch
PPTX
AI_Lec2.pptx dive in to ai hahahahahahah
PPTX
3. ArtificialSolving problems by searching.pptx
PPTX
chapter 3 Problem Solving using searching.pptx
PPTX
Chapter 3.pptx
PPT
state-spaces29Sep06.ppt
PPT
Chapter3 Search
PPTX
SP14 CS188 Lecture 2 -- Uninformed Search.pptx
PPT
1 blind search
PPTX
Artificial intelligence(04)
PDF
Lecture 3 problem solving
PPT
Lecture 2
PPTX
CS767_Lecture_02.pptx
02-solving-problems-by-searching-(us).ppt
AI UNIT 2 PPT AI UNIT 2 PPT AI UNIT 2 PPT.pptx
Rai practical presentations.
Lecture 3 Problem Solving.pptx
Jarrar.lecture notes.aai.2011s.ch3.uniformedsearch
AI_Lec2.pptx dive in to ai hahahahahahah
3. ArtificialSolving problems by searching.pptx
chapter 3 Problem Solving using searching.pptx
Chapter 3.pptx
state-spaces29Sep06.ppt
Chapter3 Search
SP14 CS188 Lecture 2 -- Uninformed Search.pptx
1 blind search
Artificial intelligence(04)
Lecture 3 problem solving
Lecture 2
CS767_Lecture_02.pptx

More from butest (20)

PDF
EL MODELO DE NEGOCIO DE YOUTUBE
DOC
1. MPEG I.B.P frame之不同
PDF
LESSONS FROM THE MICHAEL JACKSON TRIAL
PPT
Timeline: The Life of Michael Jackson
DOCX
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
PDF
LESSONS FROM THE MICHAEL JACKSON TRIAL
PPTX
Com 380, Summer II
PPT
PPT
DOCX
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
DOC
MICHAEL JACKSON.doc
PPTX
Social Networks: Twitter Facebook SL - Slide 1
PPT
Facebook
DOCX
Executive Summary Hare Chevrolet is a General Motors dealership ...
DOC
Welcome to the Dougherty County Public Library's Facebook and ...
DOC
NEWS ANNOUNCEMENT
DOC
C-2100 Ultra Zoom.doc
DOC
MAC Printing on ITS Printers.doc.doc
DOC
Mac OS X Guide.doc
DOC
hier
DOC
WEB DESIGN!
EL MODELO DE NEGOCIO DE YOUTUBE
1. MPEG I.B.P frame之不同
LESSONS FROM THE MICHAEL JACKSON TRIAL
Timeline: The Life of Michael Jackson
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
LESSONS FROM THE MICHAEL JACKSON TRIAL
Com 380, Summer II
PPT
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
MICHAEL JACKSON.doc
Social Networks: Twitter Facebook SL - Slide 1
Facebook
Executive Summary Hare Chevrolet is a General Motors dealership ...
Welcome to the Dougherty County Public Library's Facebook and ...
NEWS ANNOUNCEMENT
C-2100 Ultra Zoom.doc
MAC Printing on ITS Printers.doc.doc
Mac OS X Guide.doc
hier
WEB DESIGN!

CptS 440 / 540 Artificial Intelligence

  • 1. CptS 440 / 540Artificial IntelligenceSearch
  • 2. SearchSearch permeates all of AIWhat choices are we searching through?Problem solvingAction combinations (move 1, then move 3, then move 2...) Natural language Ways to map words to parts of speech Computer vision Ways to map features to object model Machine learning Possible concepts that fit examples seen so far Motion planning Sequence of moves to reach goal destinationAn intelligent agent is trying to find a set or sequence of actions to achieve a goalThis is a goal-based agent
  • 3. Problem-solving AgentSimpleProblemSolvingAgent(percept) state = UpdateState(state, percept) if sequence is empty then goal = FormulateGoal(state) problem = FormulateProblem(state, g) sequence = Search(problem) action = First(sequence) sequence = Rest(sequence) Return action
  • 5. AssumptionsStatic or dynamic?Fully or partially observable?Environment is fully observable
  • 6. AssumptionsStatic or dynamic?Fully or partially observable?Discrete or continuous?Environment is discrete
  • 7. AssumptionsStatic or dynamic?Fully or partially observable?Discrete or continuous?Deterministic or stochastic?Environment is deterministic
  • 8. AssumptionsStatic or dynamic?Fully or partially observable?Discrete or continuous?Deterministic or stochastic?Episodic or sequential?Environment is sequential
  • 9. AssumptionsStatic or dynamic?Fully or partially observable?Discrete or continuous?Deterministic or stochastic?Episodic or sequential?Single agent or multiple agent?
  • 10. AssumptionsStatic or dynamic?Fully or partially observable?Discrete or continuous?Deterministic or stochastic?Episodic or sequential?Single agent or multiple agent?
  • 11. Search ExampleFormulate goal: Be in Bucharest.Formulate problem: states are cities, operators drive between pairs of citiesFind solution: Find a sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest) that leads from the current state to a state meeting the goal condition
  • 12. Search Space DefinitionsStateA description of a possible state of the worldIncludes all features of the world that are pertinent to the problemInitial stateDescription of all pertinent aspects of the state in which the agent starts the searchGoal testConditions the agent is trying to meet (e.g., have $1M)Goal stateAny state which meets the goal conditionThursday, have $1M, live in NYCFriday, have $1M, live in ValparaisoActionFunction that maps (transitions) from one state to another
  • 13. Search Space DefinitionsProblem formulationDescribe a general problem as a search problemSolutionSequence of actions that transitions the world from the initial state to a goal stateSolution cost (additive)Sum of the cost of operatorsAlternative: sum of distances, number of steps, etc.SearchProcess of looking for a solutionSearch algorithm takes problem as input and returns solutionWe are searching through a space of possible statesExecutionProcess of executing sequence of actions (solution)
  • 14. Problem FormulationA search problem is defined by theInitial state (e.g., Arad)Operators (e.g., Arad -> Zerind, Arad -> Sibiu, etc.)Goal test (e.g., at Bucharest)Solution cost (e.g., path cost)
  • 15. Example Problems – Eight PuzzleStates: tile locationsInitial state: one specific tile configurationOperators: move blank tile left, right, up, or downGoal: tiles are numbered from one to eight around the squarePath cost: cost of 1 per move (solution cost same as number of most or path length)Eight puzzle applet
  • 16. Example Problems – Robot AssemblyStates: real-valued coordinates of robot joint angles
  • 17. parts of the object to be assembledOperators: rotation of joint anglesGoal test: complete assemblyPath cost: time to complete assembly
  • 18. Example Problems – Towers of HanoiStates: combinations of poles and disksOperators: move disk x from pole y to pole z subject to constraints cannot move disk on top of smaller disk
  • 19. cannot move disk if other disks on topGoal test: disks from largest (at bottom) to smallest on goal polePath cost: 1 per moveTowers of Hanoi applet
  • 20. Example Problems – Rubik’s CubeStates: list of colors for each cell on each faceInitial state: one specific cube configurationOperators: rotate row x or column y on face z direction aGoal: configuration has only one color on each facePath cost: 1 per moveRubik’s cube applet
  • 21. Example Problems – Eight QueensStates: locations of 8 queens on chess boardInitial state: one specific queens configurationOperators: move queen x to row y and column zGoal: no queen can attack another (cannot be in same row, column, or diagonal)Path cost: 0 per moveEight queens applet
  • 22. Example Problems – Missionaries and CannibalsStates: number of missionaries, cannibals, and boat on near river bankInitial state: all objects on near river bankOperators: move boat with x missionaries and y cannibals to other side of river no more cannibals than missionaries on either river bank or in boat
  • 23. boat holds at most m occupantsGoal: all objects on far river bankPath cost: 1 per river crossingMissionaries and cannibals applet
  • 24. Example Problems –Water JugStates: Contents of 4-gallon jug and 3-gallon jugInitial state: (0,0)Operators:fill jug x from faucet
  • 25. pour contents of jug x in jug y until y full
  • 26. dump contents of jug x down drainGoal: (2,n)Path cost: 1 per fillSaving the world, Part ISaving the world, Part II
  • 27. Sample Search ProblemsGraph coloringProtein foldingGame playingAirline travelProving algebraic equalitiesRobot motion planning
  • 28. Visualize Search Space as a TreeStates are nodesActions are edgesInitial state is rootSolution is path from root to goal nodeEdges sometimes have associated costsStates resulting from operator are children
  • 30. Search Function –Uninformed SearchesOpen = initial state // open list is all generated states // that have not been “expanded”While open not empty // one iteration of search algorithm state = First(open) // current state is first state in open Pop(open) // remove new current state from open if Goal(state) // test current state for goal condition return “succeed” // search is complete // else expand the current state by // generating children and // reorder open list per search strategy else open = QueueFunction(open, Expand(state))Return “fail”
  • 31. Search Strategies
    Search strategies differ only in the QueuingFunction
    Features by which to compare search strategies:
      Completeness (always finds a solution if one exists)
      Cost of search (time and space)
      Cost of solution; is the solution optimal?
      Use of domain knowledge: "uninformed search" vs. "informed search"
  • 32. Breadth-First Search
    QueueingFn adds the children of a state to the end of the open list
    Level-by-level search
    Order in which children are inserted on open list is arbitrary
      In trees, assume children are considered left-to-right unless specified otherwise
    Number of children is the "branching factor" b
  • 33. b = 2
    Example trees
    Search algorithms applet
    BFS Examples
  • 34. Analysis
    Assume goal node at level d with constant branching factor b
    Time complexity (measured in #nodes generated):
      1 (1st level) + b (2nd level) + b^2 (3rd level) + … + b^d (goal level) + (b^(d+1) – b) = O(b^(d+1))
      This assumes goal is on far right of its level
    Space complexity:
      At worst, most of the nodes at level d plus most of the nodes at level d+1 = O(b^(d+1))
  • 35. Features
    Exponential time and space
    Simple to implement
  • 37. Finds shortest solution (not necessarily least-cost unless all operators have equal cost)
    Analysis: see what happens with b = 10
      expand 10,000 nodes/second
      1,000 bytes/node
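Breadth-first search with the repeated-state check discussed later can be sketched as follows; `neighbors` is an assumed function mapping a state to its children.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search returning a path with the fewest edges."""
    frontier = deque([[start]])         # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()       # shallowest path first
        if path[-1] == goal:
            return path
        for child in neighbors(path[-1]):
            if child not in visited:    # avoid repeated states
                visited.add(child)
                frontier.append(path + [child])
    return None
```

Because the queue is FIFO, paths are examined level by level, which is why the first goal found has the fewest edges.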
  • 38. Depth-First Search
    QueueingFn adds the children to the front of the open list
    BFS emulates a FIFO queue; DFS emulates a LIFO stack
    Net effect:
      Follow leftmost path to bottom, then backtrack
      Expand deepest node first
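A minimal depth-first sketch, with a depth limit added as a guard against the infinite-path problem analyzed below (the `limit` parameter is an assumption, not part of plain DFS):

```python
def dfs(start, goal, neighbors, limit=50):
    """Depth-first search: children go on the FRONT of the open list,
    so the search follows the leftmost path to the bottom, then backtracks."""
    open_list = [[start]]               # open list of paths, used as a stack
    while open_list:
        path = open_list.pop(0)
        if path[-1] == goal:
            return path
        if len(path) <= limit:          # depth bound guards infinite paths
            children = [path + [c] for c in neighbors(path[-1])
                        if c not in path]          # no cycles in a path
            open_list = children + open_list       # front of open list
    return None
```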
  • 40. Analysis
    Time complexity: in the worst case, search the entire space
  • 41. Goal may be at level d but tree may continue to level m, m >= d
  • 42. O(b^m)
  • 43. Particularly bad if tree is infinitely deep
    Space complexity: only need to save one set of children at each level
  • 44. 1 + b + b + … + b (m levels total) = O(bm), i.e., linear in m
  • 45. For the previous example, DFS requires 118 KB instead of 10 petabytes for d = 12 (10 billion times less)
    Features:
      May not always find a solution
  • 46. Solution is not necessarily shortest or least-cost
  • 47. If there are many solutions, may find one quickly (quickly moves to depth d)
  • 49. Space is often the bigger constraint, so DFS is more usable than BFS for large problems
    Comparison of Search Techniques
  • 50. Avoiding Repeated States
    Can we do it?
      Do not return to parent or grandparent state
        In 8 puzzle, do not move up right after down
      Do not create solution paths with cycles
      Do not generate repeated states (need to store and check a potentially large number of states)
  • 51. Maze Example
    States are cells in a maze
    Move N, E, S, or W
    What would BFS do (expand E, then N, W, S)?
    What would DFS do?
    What if the order changed to N, E, S, W and loops are prevented?
  • 52. Uniform Cost Search (Branch & Bound)
    QueueingFn is SortByCostSoFar
    Cost from root to current node n is g(n)
      Add operator costs along the path
    First goal found is the least-cost solution
    Space & time can be exponential because large subtrees with inexpensive steps may be explored before useful paths with costly steps
    If all step costs are equal, time and space are O(b^d); otherwise, complexity is related to the cost of the optimal solution
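Sorting the open list by cost-so-far is usually done with a priority queue. A minimal sketch, where `successors(state)` is an assumed function yielding `(child, step_cost)` pairs:

```python
import heapq

def uniform_cost(start, goal, successors):
    """Uniform cost search: open list ordered by g(n), cost from the root.
    The first goal popped off the queue is the least-cost solution."""
    frontier = [(0, start, [start])]          # (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for child, cost in successors(state):
            if g + cost < best_g.get(child, float('inf')):
                best_g[child] = g + cost      # cheaper route to child found
                heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None
```

The `best_g` table is the repeated-state check from slide 50, adapted to keep only the cheapest known route to each state.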
  • 55. UCS Example. Open list: B(2) T(1) O(3) E(2) P(5)
  • 56. UCS Example. Open list: T(1) B(2) E(2) O(3) P(5)
  • 57. UCS Example. Open list: B(2) E(2) O(3) P(5)
  • 58. UCS Example. Open list: E(2) O(3) P(5)
  • 59. UCS Example. Open list: E(2) O(3) A(3) S(5) P(5) R(6)
  • 60. UCS Example. Open list: O(3) A(3) S(5) P(5) R(6)
  • 61. UCS Example. Open list: O(3) A(3) S(5) P(5) R(6) G(10)
  • 62. UCS Example. Open list: A(3) S(5) P(5) R(6) G(10)
  • 63. UCS Example. Open list: A(3) I(4) S(5) N(5) P(5) R(6) G(10)
  • 64. UCS Example. Open list: I(4) P(5) S(5) N(5) R(6) G(10)
  • 65. UCS Example. Open list: P(5) S(5) N(5) R(6) Z(6) G(10)
  • 66. UCS Example. Open list: S(5) N(5) R(6) Z(6) F(6) D(8) G(10) L(10)
  • 67. UCS Example. Open list: N(5) R(6) Z(6) F(6) D(8) G(10) L(10)
  • 68. UCS Example. Open list: Z(6) F(6) D(8) G(10) L(10)
  • 69. UCS Example. Open list: F(6) D(8) G(10) L(10)
  • 71. Comparison of Search Techniques
  • 72. Iterative Deepening Search
    DFS with a depth bound
      QueuingFn is enqueue-at-front, as with DFS
      Expand(state) only returns children such that depth(child) <= threshold
      This prevents the search from going down an infinite path
    First threshold is 1
      If no solution is found, increment the threshold and repeat
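The bounded-DFS-in-a-loop idea can be sketched as follows (the recursive helper and the `max_depth` safety cap are assumptions for the illustration):

```python
def depth_limited(state, goal, neighbors, limit):
    """DFS that refuses to expand below the depth threshold."""
    if state == goal:
        return [state]
    if limit == 0:
        return None                       # hit the depth bound
    for child in neighbors(state):
        result = depth_limited(child, goal, neighbors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(start, goal, neighbors, max_depth=20):
    """Repeat depth-limited DFS with threshold 1, 2, 3, ..."""
    for limit in range(1, max_depth + 1):
        result = depth_limited(start, goal, neighbors, limit)
        if result is not None:
            return result
    return None
```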
  • 74. Analysis
    What about the repeated work?
    Time complexity (number of generated nodes):
      [b] + [b + b^2] + … + [b + b^2 + … + b^d]
  • 75. = (d)b + (d-1)b^2 + … + (1)b^d
  • 76. = O(b^d)
    Analysis: repeated work is approximately 1/b of total work, which is negligible
  • 79. N(IDS) = 123,450
    Features:
      Shortest solution, not necessarily least cost
      Is there a better way to decide the threshold? (IDA*)
  • 80. Comparison of Search Techniques
  • 81. Bidirectional Search
    Search forward from initial state to goal AND backward from goal state to initial state
    Can prune many options
    Considerations:
      Which goal state(s) to use
      How to determine when the searches overlap
      Which search to use for each direction (here, two BFS searches)
    Time and space are O(b^(d/2))
  • 82. Informed Searches
    Best-first search, Hill climbing, Beam search, A*, IDA*, RBFS, SMA*
    New terms: heuristics, optimal solution, informedness, hill climbing problems, admissibility
    New parameters:
      g(n) = cost so far from the initial state to state n
      h(n) = estimated cost (distance) from state n to the closest goal
      h(n) is our heuristic
        Robot path planning: h(n) could be Euclidean distance
        8 puzzle: h(n) could be #tiles out of place
    Search algorithms that use h(n) to guide search are heuristic search algorithms
  • 83. Best-First Search
    QueueingFn is sort-by-h
    Best-first search is only as good as its heuristic
      Example heuristic for 8 puzzle: Manhattan distance
  • 93. Comparison of Search Techniques
  • 94. Hill Climbing (Greedy Search)
    QueueingFn is sort-by-h; only keep the lowest-h state on the open list
    Best-first search is tentative; hill climbing is irrevocable
    Features:
      Much faster, less memory
      Dependent upon h(n); if h(n) is bad, may prune away all goals
      Not complete
  • 97. Hill Climbing Issues
    Also referred to as gradient descent
    Foothill problem / local maxima / local minima
      Can be addressed with random walks or more steps
    Other problems: ridges, plateaus
    (figure: landscape showing the global maximum and local maxima)
  • 98. Comparison of Search Techniques
  • 99. Beam Search
    QueueingFn is sort-by-h
    Only keep the best (lowest-h) n nodes on the open list
    n is the "beam width"
      n = 1: hill climbing
      n = infinity: best-first search
  • 109. Comparison of Search Techniques
  • 110. A*
    QueueingFn is sort-by-f, where f(n) = g(n) + h(n)
    Note that UCS and best-first each improve search in one way:
      UCS keeps the solution cost low
      Best-first helps find a solution quickly
    A* combines these approaches
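A* can be sketched by changing the UCS priority from g(n) to f(n) = g(n) + h(n); `successors` and `h` are assumed parameters for the illustration.

```python
import heapq

def astar(start, goal, successors, h):
    """A*: open list sorted by f(n) = g(n) + h(n).
    successors(state) yields (child, step_cost) pairs; h is the heuristic."""
    frontier = [(h(start), 0, start, [start])]     # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for child, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None
```

With h(n) = 0 for all n this degenerates to uniform cost search, matching the claim that A* generalizes UCS.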
  • 111. Power of f
    If the heuristic function is wrong, it either
      overestimates (guesses too high), or
      underestimates (guesses too low)
    Overestimating is worse than underestimating
    A* returns an optimal solution if h(n) is admissible
      A heuristic function is admissible if it never overestimates the true cost to the nearest goal
      If a search finds an optimal solution whenever given an admissible heuristic, the search itself is called admissible
  • 112. Overestimating
    (figure: search tree whose heuristic values overestimate)
    Solution costs: ABF = 9, ADI = 8
    Open list: A(15) B(9) F(9)
    Missed the optimal solution
  • 113. Example
    A* applied to 8 puzzle
    A* search applet
  • 122. Optimality of A*
    Suppose a suboptimal goal G2 is on the open list
    Let n be an unexpanded node on the smallest-cost path to the optimal goal G1
    f(G2) = g(G2)    since h(G2) = 0
          > g(G1)    since G2 is suboptimal
          >= f(n)    since h is admissible
    Since f(G2) > f(n), A* will never select G2 for expansion before n
  • 123. Comparison of Search Techniques
  • 124. IDA*
    Series of depth-first searches
    Like Iterative Deepening Search, except:
      Use an A* cost threshold instead of a depth threshold
      Ensures an optimal solution
    QueuingFn enqueues at front if f(child) <= threshold
    Threshold:
      h(root) on the first iteration
      On subsequent iterations, f(min_child), where min_child is the cut-off child with the minimum f value
      Increasing the threshold this way always includes at least one new node, and ensures the search never looks beyond the optimal-cost solution
  • 153. Analysis
    Some redundant search, but a small amount compared to the work done on the last iteration
    Dangerous if h(n) values are continuous or very close together
      If the threshold = 21.1 and the next value is 21.2, each iteration probably includes only 1 new node
    Time complexity is O(b^m)
    Space complexity is O(m)
  • 154. Comparison of Search Techniques
  • 155. RBFS
    Recursive Best-First Search
    Linear-space variant of A*
    Perform A* search but discard subtrees when performing recursion
    Keep track of the alternative (next-best) subtree
    Expand a subtree until its f value exceeds the bound
    Update f values before (from parent) and after (from descendant) the recursive call
  • 156. Algorithm
    // Input is current node n and f limit
    // Returns goal node or failure, plus updated limit
    RBFS(n, limit)
        if Goal(n) then return n
        children = Expand(n)
        if children empty then return failure, infinity
        for each c in children
            f[c] = max(g(c) + h(c), f[n])            // update f[c] based on parent
        repeat
            best = child with smallest f value
            if f[best] > limit then return failure, f[best]
            alternative = second-lowest f value among children
            newlimit = min(limit, alternative)
            result, f[best] = RBFS(best, newlimit)   // update f[best] based on descendant
            if result != failure then return result
  • 163. Analysis
    Optimal if h(n) is admissible
    Space is O(bm)
    Features:
      Potentially exponential time in the cost of the solution
      More efficient than IDA*
      Keeps more information than IDA* and benefits from storing it
  • 164. SMA*
    Simplified Memory-Bounded A* Search
    Perform A* search; when memory is full:
      Discard the worst leaf (largest f(n) value)
      Back up the value of the discarded node to its parent
    Optimal if the solution fits in memory
  • 165. Example
    Let MaxNodes = 3
    Initially B and G are added to the open list, then we hit the maximum
    B has the larger f value, so discard B but save f(B) = 15 at its parent A
    Add H, but f(H) = 18; H is not a goal and we cannot go deeper, so set f(H) = infinity and save that at G
    Generate the next child I with f(I) = 24, the larger child of A; we have now seen all children of G, so reset f(G) = 24
    Regenerate B and its child C; C is not a goal, so f(C) is reset to infinity
    Generate the second child D with f(D) = 24, backing the value up to its ancestors
    D is a goal node, so the search terminates
  • 166. Heuristic Functions
    Q: Given that we will only use heuristic functions that do not overestimate, what type of heuristic functions (among these) perform best?
    A: Those that produce higher h(n) values.
  • 167. Reasons
    A higher h value means an estimate closer to the actual distance
    Any node n on the open list with f(n) < f*(goal) will be selected for expansion by A*
    So if many nodes have a low underestimate (much lower than the actual optimum cost), all of them will be expanded
      This increases search time and space
  • 168. Informedness
    If h1 and h2 are both admissible and, for all x, h1(x) >= h2(x), then h1 "dominates" h2
      Can also say h1 is "more informed" than h2
    Example:
      h1(x): (shown on the slide)
      h2(x): Euclidean distance
      h2 dominates h1
  • 169. Effect on Search Cost
    If h2(n) >= h1(n) for all n (both admissible), then h2 dominates h1 and is better for search
    Typical search costs:
      d = 14: IDS expands 3,473,941 nodes; A* with h1 expands 539 nodes; A* with h2 expands 113 nodes
      d = 24: IDS expands ~54,000,000,000 nodes; A* with h1 expands 39,135 nodes; A* with h2 expands 1,641 nodes
  • 170. Which of these heuristics are admissible? Which are more informed?
    h1(n) = #tiles in wrong position
    h2(n) = sum of Manhattan distances between each tile and its goal location
    h3(n) = 0
    h4(n) = 1
    h5(n) = min(2, h*(n))
    h6(n) = Manhattan distance for the blank tile
    h7(n) = max(2, h*(n))
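The two admissible 8-puzzle heuristics h1 and h2 can be sketched directly; here a state is assumed to be a 9-tuple read row by row, with 0 as the blank.

```python
def misplaced(state, goal):
    """h1: number of tiles (excluding the blank, 0) out of position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                       # the blank does not count
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Since every misplaced tile needs at least one move, and at least its Manhattan distance in moves, neither function overestimates; and h2 >= h1 on every state, which is why h2 dominates h1.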
  • 171. Generating Heuristic Functions
    Generate a heuristic for a simpler (relaxed) problem
      A relaxed problem has fewer restrictions
      Example: the eight puzzle where multiple tiles can occupy the same spot
      The cost of an optimal solution to the relaxed problem is an admissible heuristic for the original problem
    Alternatively, learn a heuristic from experience
  • 172. Iterative Improvement Algorithms
    Hill climbing
    Simulated annealing
    Genetic algorithms
  • 173. Iterative Improvement Algorithms
    For many optimization problems, the solution path is irrelevant; we just want to reach a goal state
    State space / search space: set of "complete" configurations
      Want to find an optimal configuration (or at least one that satisfies the goal constraints)
    For these cases, use an iterative improvement algorithm:
      Keep a single current state
      Try to improve it
      Constant memory
  • 174. Example: Traveling Salesman
    Start with any complete tour
    Operator: perform pairwise exchanges
  • 175. Example: N-Queens
    Put n queens on an n × n board with no two queens on the same row, column, or diagonal
    Operator: move a queen to reduce the number of conflicts
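The n-queens operator above lends itself to hill climbing. A minimal sketch with random restarts added to escape local minima (the restart loop and its `tries` parameter are assumptions, not part of the slide's formulation):

```python
import random

def conflicts(board):
    """Number of attacking queen pairs; board[c] is the row of column c's queen."""
    n = len(board)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if board[a] == board[b] or abs(board[a] - board[b]) == b - a)

def hill_climb(n, tries=300):
    """Restart hill climbing: repeatedly move one queen within its column
    to the placement that most reduces conflicts."""
    for _ in range(tries):
        board = [random.randrange(n) for _ in range(n)]
        while True:
            best = min((conflicts(board[:c] + [r] + board[c + 1:]), c, r)
                       for c in range(n) for r in range(n))
            if best[0] >= conflicts(board):
                break                          # local minimum reached
            board[best[1]] = best[2]
        if conflicts(board) == 0:
            return board                       # goal configuration
    return None
```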
  • 176. Hill Climbing (gradient ascent/descent)
    "Like climbing Mount Everest in thick fog with amnesia"
  • 178. Local Beam Search
    Keep k states instead of 1; choose the top k of all successors
    Problem: often all k states end up on the same local hill
    Fix: choose k successors RANDOMLY, biased toward good ones
      Similar to natural selection
  • 179. Simulated Annealing
    Pure hill climbing is not complete, but pure random search is inefficient; simulated annealing offers a compromise
    Inspired by the annealing process of gradually cooling a liquid until it reaches a low-energy state
    Very similar to hill climbing, except it includes a user-defined temperature schedule
      When the temperature is "high", allow some random moves
      When the temperature "cools", reduce the probability of a random move
    If T is decreased slowly enough, the algorithm is guaranteed to reach the best state
  • 180. Algorithm
    function SimulatedAnnealing(problem, schedule)    // returns solution state
        current = MakeNode(Initial-State(problem))
        for t = 1 to infinity
            T = schedule[t]
            if T = 0 then return current
            next = randomly-selected child of current
            ΔE = Value[next] - Value[current]
            if ΔE > 0 then current = next             // accept if better than current state
            else current = next with probability e^(ΔE/T)
    Simulated annealing applet
    Traveling salesman simulated annealing applet
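The pseudocode above translates almost line for line into Python; a minimal sketch where `value`, `neighbor`, and `schedule` are assumed caller-supplied functions.

```python
import math
import random

def simulated_annealing(state, value, neighbor, schedule):
    """SA sketch: always accept improving moves; accept a worsening move
    with probability e^(dE/T), where dE = value(next) - value(current) < 0."""
    current = state
    for t in range(1, 10**6):
        T = schedule(t)
        if T <= 0:
            return current              # frozen: stop and return current state
        nxt = neighbor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt               # bad moves allowed more often when T is high
    return current
```

As T falls, `math.exp(dE / T)` for a worsening move (dE < 0) approaches 0, so the procedure degenerates into pure hill climbing near the end of the schedule.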
  • 181. Genetic Algorithms
    What is a Genetic Algorithm (GA)?
      An adaptation procedure based on the mechanics of natural genetics and natural selection
    GAs have 2 essential components:
      Survival of the fittest
      Recombination
    Representation:
      Chromosome = string
      Gene = single bit or single subsequence in the string; represents 1 attribute
  • 183. Humans
    DNA is made up of 4 nucleic acids (a 2-bit code)
    46 chromosomes in humans, together containing ~3 billion units of DNA
    4^(3 billion) combinations
    Can random search find humans?
      Assume only 0.1% of the genome must be discovered: 3(10^6) nucleotides
      Assume a very short generation: 1 generation/second
      3.2(10^7) individuals per year, but 10^(1.8(10^6)) alternatives
      So on the order of 10^(1.8(10^6)) years to generate a human randomly
    Self-reproduction, self-repair, and adaptability are the rule in natural systems; they hardly exist in the artificial world
    Finding and adopting nature's approach to computational design should unlock many doors in science and engineering
  • 184. GAs Exhibit Search
    Each attempt a GA makes towards a solution is called a chromosome
      A sequence of information that can be interpreted as a possible solution
      Typically represented as a sequence of binary digits; each digit is a gene
    A GA maintains a collection or population of chromosomes
      Each chromosome in the population represents a different guess at the solution
  • 185. The GA Procedure
    Initialize a population (of solution guesses)
    Do (once for each generation):
      Evaluate each chromosome in the population using a fitness function
      Apply GA operators to the population to create a new population
    Finish when a solution is reached or the number of generations reaches an allowable maximum
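The procedure above can be sketched as a short Python loop. This is a minimal illustration; the population size, truncation selection, single-point crossover, and 5% per-bit mutation rate are all assumed parameters, not values from the slides.

```python
import random

def genetic_algorithm(fitness, length, pop_size=20, generations=200):
    """Minimal GA sketch over bit-string chromosomes."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        if fitness(scored[0]) == length:          # perfect match found
            return scored[0]
        parents = scored[:pop_size // 2]          # survival of the fittest
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            # mutation: flip each bit with small probability
            child = [bit ^ (random.random() < 0.05) for bit in child]
            pop.append(child)
    return max(pop, key=fitness)
```

The evaluate/select/recombine/mutate steps map directly onto the Reproduction, Crossover, and Mutation slides that follow.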
  • 187. Reproduction
    Select individuals x according to their fitness values f(x)
      Like beam search
    The fittest individuals survive (and possibly mate) for the next generation
  • 188. Crossover
    Select two parents
    Select a cross site
    Cut and splice pieces of one parent to those of the other
      Parents:  1 1 1 1 1    0 0 0 0 0
      Children: 1 1 0 0 0    0 0 1 1 1
  • 189. Mutation
    With small probability, randomly alter 1 bit
    Minor operator; an insurance policy against lost bits
    Pushes out of local minima
    Population:
      1 1 0 0 0 0
      1 0 1 0 0 0
      1 0 0 1 0 0
      0 1 0 0 0 0
    Goal: 0 1 1 1 1 1
    Mutation is needed to find the goal
  • 190. Example
    Solution = 0 0 1 0 1 0
    Fitness(x) = #digits that match the solution
      A) 0 1 0 1 0 1   Score: 1
      B) 1 1 1 1 0 1   Score: 1
      C) 0 1 1 0 1 1   Score: 3
      D) 1 0 1 1 0 0   Score: 3
    Recombine the top two twice
    Note: 64 possible combinations
  • 191. Example
    Solution = 0 0 1 0 1 0
    Parents: C) 0 1 1 0 1 1   D) 1 0 1 1 0 0
      E) 0 | 0 1 1 0 0   Score: 4
      F) 1 | 1 1 0 1 1   Score: 3
      G) 0 1 1 0 1 | 0   Score: 4
      H) 1 0 1 1 0 | 1   Score: 2
    Next generation:
    Parents: E) 0 0 1 1 0 0   F) 0 1 1 0 1 0
      G) 0 1 1 | 1 0 0   Score: 3
      H) 0 0 1 | 0 1 0   Score: 6
      I) 0 0 | 1 0 1 0   Score: 6
      J) 0 1 | 1 1 0 0   Score: 3
    DONE! Got it in 10 guesses.
  • 192. Issues
    How to select the original population?
    How to handle non-binary solution types?
    What should the size of the population be?
    What is the optimal mutation rate?
    How are mates picked for crossover?
    Can any chromosome appear more than once in a population?
    When should the GA halt?
    Local minima?
    Parallel algorithms?
  • 194. GAs for Optimization
    Traveling salesman problem
    Eaters
    Hierarchical GAs for game playing
  • 196. GAs for Graphic Animation
    Simulator
    Evolving Circles
    3D Animation
    Scientific American Frontiers
  • 197. Biased Roulette Wheel
    For each hypothesis, spin the roulette wheel to determine the guess
  • 198. Inversion
    Invert a selected subsequence
      1 0 | 1 1 0 | 1 1  ->  1 0 0 1 1 1 1
  • 199. Elitism
    Some of the best chromosomes from the previous generation replace some of the worst chromosomes from the current generation
  • 200. K-Point Crossover
    Pick k random splice points at which to cross over the parents
    Example, k = 3:
      1 1 | 1 1 1 | 1 1 | 1 1 1 1 1  ->  1 1 0 0 0 1 1 0 0 0 0 0
      0 0 | 0 0 0 | 0 0 | 0 0 0 0 0  ->  0 0 1 1 1 0 0 1 1 1 1 1
  • 201. Diversity Measure
    Fitness ignores diversity, so populations tend to become uniform
    Rank-space method:
      Sort population by the sum of fitness rank and diversity rank
      Diversity rank is the result of sorting by the function 1/d^2
  • 202. Classifier Systems
    GAs and load balancing
    SAMUEL

Editor's Notes

  • #152: h1(S) = 7, h2(S) = 2+3+3+2+4+2+0+2 = 18