CS 371: Introduction to Artificial Intelligence

Welcome!

- Todd Neller, Glatfelter 209, tneller@gettysburg.edu, 337-6643; see the Course Information Page
- CS 371 – “Introduction to Artificial Intelligence”
- Comparing programming tools to handyman tools
- Programmers can “machine” their own tools!
- The computer is a power tool for the mind.

Outline

- Course information
- What is AI?
- Agents
- Agent environments

Course Information

- http://cs.gettysburg.edu/~tneller/cs371/
  - Course information
  - Handouts, assignments, solutions
  - Other useful stuff

Our Text

- Stuart Russell and Peter Norvig, “Artificial Intelligence: A Modern Approach”
- Unified, agent-based approach

Grading

- NO MID-TERM, NO FINAL
- Approximately 10 short in-class surprise quizzes
- It is important to keep up with the material on a class-by-class basis
- You’re responsible for knowing the lecture and reading material by the beginning of the following class.

Grading (cont.)

- General principles:
  - Grading affects focus of attention
  - Credit should generally correspond to effort and time required
  - Study habits translate to work habits
  - Analogy to a physical trainer
- So…
  - ~80% – homework assignments
  - ~10% – in-class quizzes (unannounced)
  - ~10% – class participation

Class Participation

- Don’t sweat it: be here + willing to engage = full credit
- Some project design decisions will be made collaboratively in class
- Culture of cooperation

Honor Code

- Do not implicitly or explicitly represent the work of others as your own. “Give credit where credit is due.”
- Do your own work.*
- Assist each other in understanding general concepts, but leave the valuable problem-solving challenges to the individual.*

* Except for collaborative group programming projects, where details of collaboration will be given.

Homework Programming

- All done in Java unless otherwise specified.
- All submitted electronically.
- Make sure these paths are in your .cshrc file:
  - /usr/java1.2/bin
  - /Apps

Introduction to AI

- Many different fields, many different challenging problems…
  - Knowledge Representation & Reasoning / Expert Systems: lymph-node pathology diagnosis
  - Robotics & Machine Vision: autonomous driving outside Pittsburgh
  - Speech Recognition: the PEGASUS airline reservation system
  - Game-tree search: Deep Blue versus Kasparov
- … unified by the desire to construct “intelligent” artifacts

Headline: Squirrel Hits Cyclist

- Tragic squirrel anecdote
- How do we define “intelligence”?
- What would it mean for someone to make something that is “intelligent”?

What is AI?

- So what is AI? What does it mean for an artifact to be intelligent?

What is AI? (cont.)

- Dictionary definitions
- Deep Blue article
- Knowing math: a calculator? Mathematica? a math student?
- Are we just anthropomorphizing?

Four Views of AI

- Acting humanly – the Turing Test (Alan Turing, 1950)
- Thinking humanly – cognitive modeling, e.g. GPS (Newell & Simon, 1960)
- Thinking rationally – laws of thought, logic, Aristotle’s syllogisms
- Acting rationally – when there is no provably correct action but one must act under uncertainty: “acting so as to best achieve one’s goals, given one’s beliefs”

Definition of an Agent

- An agent is anything that perceives its environment through sensors and acts upon that environment through effectors.

Agent Concepts

- A “rational” or “ideal” agent “does the right thing.”
- For each possible percept sequence, it does whatever action is expected to maximize the performance measure, given the agent’s knowledge of its environment (built in or perceived).
- Is crossing the street without looking rational?
- Performance measure: when should it be evaluated?
- A system is “autonomous” to the extent that it thinks and acts on its own.

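One way to make the “maximize the performance measure” bullet concrete (my own notation, not the course’s): writing M for the performance measure and p_1, …, p_t for the percepts received so far, the ideal agent’s choice at time t is

    a_t^* = \arg\max_{a} \; \mathbb{E}\left[\, M \mid p_1, \dots, p_t,\ a \,\right]

that is, the action with the highest expected performance given everything the agent has perceived and knows.
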
P.A.G.E. Description

- Percepts, Actions, Goals, Environment – a checklist for specifying an agent and its task.

Skeleton Agent

- Uses memory to store the percept sequence
- What’s missing?
- The performance measure is not part of the program

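A minimal Java sketch of this skeleton (class and method names are illustrative, not from the course; percepts and actions are simplified to strings):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative skeleton agent: it remembers every percept it has seen
    // and maps the whole sequence to an action. Note that the performance
    // measure appears nowhere here; the environment applies it externally.
    public abstract class SkeletonAgent {
        // Memory: the complete percept sequence so far.
        private final List<String> percepts = new ArrayList<>();

        // Called once per step by the environment.
        public String run(String percept) {
            percepts.add(percept);          // update memory
            return chooseAction(percepts);  // decide from the full history
        }

        // Subclasses supply the mapping from percept sequence to action.
        protected abstract String chooseAction(List<String> perceptSequence);
    }
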
Table-Driven Agent

- A losing proposition:
  - The table is enormous (roughly 35^100 entries for chess alone)
  - Constructing the table is difficult
  - The table is fixed and cannot adapt to new environments
  - Learning the table would be difficult!

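For concreteness, a sketch of such an agent in Java (illustrative names; percepts and actions again simplified to strings). The program itself is trivial; the losing proposition is filling the table:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Illustrative table-driven agent: the "program" is a lookup keyed on
    // the entire percept sequence, so the table needs one entry for every
    // possible history (roughly 35^100 for chess, per the slide).
    public class TableDrivenAgent {
        private final List<String> percepts = new ArrayList<>();
        private final Map<List<String>, String> table;

        public TableDrivenAgent(Map<List<String>, String> table) {
            this.table = table;
        }

        public String run(String percept) {
            percepts.add(percept);
            // Any history missing from the table leaves the agent helpless.
            return table.getOrDefault(percepts, "NoOp");
        }
    }
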
Vacuum Cleaner World

- Actions: Turn left 90°, Turn right 90°, Suck dirt, Turn off
- Simplified: Go left, Go right, Suck dirt, Turn off
- Possible with a table-driven agent if simplified?
- What if there are many rooms and the world is not simplified?

Simple Reflex Agent

- Condition-action rules (e.g., if the car in front of me is braking, then initiate braking)
- Assumption: the correct decision can be made from the current percept alone

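As a sketch, here is a simple reflex agent for the simplified two-location vacuum world above (the location names A and B are my own assumption):

    // Illustrative simple reflex agent for a two-location vacuum world:
    // condition-action rules matched against the current percept only,
    // with no memory of past percepts.
    public class ReflexVacuumAgent {
        // Percept: current location and whether it is dirty.
        public String chooseAction(String location, boolean dirty) {
            if (dirty)                return "Suck dirt"; // rule: dirt means suck
            if (location.equals("A")) return "Go right";  // rule: in A, go right
            return "Go left";                             // rule: in B, go left
        }
    }
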
Reflex Agent with State

- Keeps a description of the current world state
- Updates the description from percepts and actions
- Example: Sphex wasp behavior

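A rough Java sketch of the idea (the state update and the rules are placeholder stubs of my own, just to show the structure):

    // Illustrative reflex agent with internal state: it maintains a
    // description of the world, updated from the last action and the
    // newest percept, and matches rules against that description.
    public class StatefulReflexAgent {
        private String worldState = "unknown"; // internal world description
        private String lastAction = "NoOp";

        public String run(String percept) {
            worldState = updateState(worldState, lastAction, percept);
            lastAction = matchRule(worldState);
            return lastAction;
        }

        // Model of how the world evolves, given what we did and now see.
        private String updateState(String state, String action, String percept) {
            return percept; // placeholder: trust the latest percept
        }

        // Condition-action rules over the internal state (toy rule).
        private String matchRule(String state) {
            return state.contains("dirty") ? "Suck dirt" : "Go right";
        }
    }
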
Goal-based Agent

- Considers what its actions will lead to and chooses actions that achieve its goals.

Utility-based Agent

- When goals alone cannot rank outcomes, a utility function scores states, and the agent acts to maximize utility.

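A skeletal Java illustration of the utility-based idea (the world model and utility function are stand-in stubs of my own):

    import java.util.List;

    // Illustrative utility-based agent: it predicts the state each action
    // would lead to and picks the action whose predicted state scores
    // highest. A goal-based agent is the special case where utility is
    // 1 for goal states and 0 otherwise.
    public class UtilityBasedAgent {
        public String chooseAction(String state, List<String> actions) {
            String best = "NoOp";
            double bestUtility = Double.NEGATIVE_INFINITY;
            for (String action : actions) {
                double u = utility(predict(state, action));
                if (u > bestUtility) {
                    bestUtility = u;
                    best = action;
                }
            }
            return best;
        }

        // World model: what state results from this action? (stub)
        private String predict(String state, String action) {
            return state + "," + action;
        }

        // How desirable is a state? (stub)
        private double utility(String state) {
            return -state.length();
        }
    }
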
Properties of Environments

- (In)accessible, (non)deterministic, (non)episodic, static/dynamic, discrete/continuous

Summary

- Agent – something that perceives and acts in an environment
- Ideal – “does the optimal thing given experience”
- Autonomous – acts on its own experience
- Reflex (with & without state) – immediately maps percepts → actions
- Goal-/utility-based – acts to achieve goals / maximize utility
- PAGE description

Summary (cont.)

- Environment characteristics:
  - Accessible vs. inaccessible
  - Deterministic vs. nondeterministic
  - Episodic vs. nonepisodic
  - Static vs. dynamic
  - Discrete vs. continuous
- As you approach each new problem, think of its PAGE description and associate it with others you have seen.