CS 371 - Introduction to Artificial Intelligence
Pre-Class and In-Class Activities

Due: The work of each class is due before the beginning of the next class.

To start, you will need to be working on a machine with Eclipse and a web browser (or will need to work remotely on our lab machines). Instructions for setting up your machine for remote work are on the page Tools for Working Remotely. If you have difficulty accessing or configuring such a machine, please contact me.
- CS 371 home page
- Tools for Working Remotely
- My Office Hours are listed on the course information page.
- Remember to do all readings and any assigned video exercises before each class.
- Colloquia attendance will still be required. Make a habit of checking your gettysburg.edu email at least daily to get talk announcements.
- Homework submission:
- Make sure you have the following in the same, single directory:
- a README file containing: the Honor Pledge, your names, your student ID numbers, and any answers to required written portions of an assignment
- any required .java source files. (There should be no "package" line at the top – use the "default package" by leaving the package field blank in the New… Class form.)
- Change to that directory in a BASH terminal window.
- Enter the command "submit371 hw1" (or "hwN" for the Nth assignment).
- Some seconds after doing so, you should see a file named "feedback.txt" appear in that same directory, providing confirmation of and feedback on your submission. You can read it with the command "less feedback.txt". In less, Enter moves forward one line, Space moves forward one page, "b" moves back one page, and "q" quits.
- Before class 1 ():
- Class 1 ():
- Before class 2 ():
- Review of class 1 uninformed search reading topics in more detail: (1:04:41)
- Assigned readings
- Watch and program along to these videos with new material: (36:24)
- Class 2 ():
- Questions and Answers for readings and videos
- Download "Suit Yourself Solo" problem specification and generator: SuitYourselfSoloGenerator.java
- Implement SearchNode subclass SuitYourselfSoloNode.java for the "Suit Yourself Solo" problem, with a constructor having the same parameters as the problem generator. Optimization hint: don't keep copying and updating the card deals; instead, let your changing state be an array with the number of cards remaining in each pile.
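The pile-count hint can be sketched as below. This illustrates only the state representation and successor generation, not the actual SearchNode API or the real "Suit Yourself Solo" move rules; the class name `PileStateDemo`, its `successors` method, and the toy move rule (remove one card from a non-empty pile) are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class PileStateDemo {
    // Generate successor states by decrementing each non-empty pile count.
    // Cloning a small int[] is far cheaper than re-copying whole card deals.
    static List<int[]> successors(int[] pileCounts) {
        List<int[]> children = new ArrayList<>();
        for (int i = 0; i < pileCounts.length; i++) {
            if (pileCounts[i] > 0) {
                int[] child = pileCounts.clone();
                child[i]--; // toy move rule for illustration only
                children.add(child);
            }
        }
        return children;
    }

    public static void main(String[] args) {
        int[] state = {3, 0, 2};
        System.out.println(successors(state).size()); // two non-empty piles
    }
}
```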
- Find the optimal (shortest) solution for "new SuitYourselfSoloNode(4, 13, 5, 0L);"
- Before class 3 ():
- Watch and program along with these videos:
- Complete IterativeDeepeningDepthFirstSearcher.java for use in Class 3.
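The control structure of iterative deepening can be sketched as below. This is not the course's IterativeDeepeningDepthFirstSearcher (which works over SearchNode); it runs over a hypothetical implicit binary tree where node n has children 2n+1 and 2n+2, just to show the pattern of repeated depth-limited DFS with a growing limit.

```java
public class IddfsSketch {
    // Depth-limited DFS: returns the depth at which goal is found, or -1.
    static int dls(int node, int goal, int limit, int depth) {
        if (node == goal) return depth;
        if (depth == limit) return -1; // cut off this branch
        for (int child = 2 * node + 1; child <= 2 * node + 2; child++) {
            int result = dls(child, goal, limit, depth + 1);
            if (result != -1) return result;
        }
        return -1;
    }

    // Iterative deepening: grow the depth limit until the goal is found,
    // guaranteeing a shallowest (optimal-depth) solution.
    static int iddfs(int goal, int maxLimit) {
        for (int limit = 0; limit <= maxLimit; limit++) {
            int result = dls(0, goal, limit, 0);
            if (result != -1) return result;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(iddfs(6, 10)); // node 6 is at depth 2: 0 -> 2 -> 6
    }
}
```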
- Class 3 ():
- HW1 due class 4 - Any questions?
- Create test code that runs IterativeDeepeningDepthFirstSearcher multiple times with randomly generated TilePuzzleNodes with size 4 and a number of shuffles that makes search take a long time (but with success).
- See the performance impact of the simplest form of repeated state detection that we covered in the videos.
- Try different forms of repeated state detection as time permits. We'll share our outcomes at the end of class.
- Before class 4 ():
- Class 4 ():
- Overview HW2
- Discuss potential admissible heuristics for HTilePuzzleNode
- Implement different admissible HTilePuzzleNode heuristics and compare performance
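One standard admissible heuristic to consider is Manhattan distance, sketched below. The board encoding (row-major int[] with 0 as the blank, goal placing tile t at index t) is an assumption for illustration; HTilePuzzleNode's actual representation may differ.

```java
public class ManhattanHeuristic {
    // Sum over all tiles (ignoring the blank) of |row - goalRow| + |col - goalCol|.
    // Each tile needs at least that many moves, and each move shifts one tile
    // one cell, so the sum never overestimates: the heuristic is admissible.
    static int h(int[] board, int size) {
        int sum = 0;
        for (int i = 0; i < board.length; i++) {
            int tile = board[i];
            if (tile == 0) continue; // blank contributes nothing
            sum += Math.abs(i / size - tile / size)   // row distance
                 + Math.abs(i % size - tile % size);  // column distance
        }
        return sum;
    }

    public static void main(String[] args) {
        // Goal state: heuristic is 0.
        System.out.println(h(new int[]{0, 1, 2, 3, 4, 5, 6, 7, 8}, 3));
        // Swap tiles 1 and 2: each is one cell from home, so h = 2.
        System.out.println(h(new int[]{0, 2, 1, 3, 4, 5, 6, 7, 8}, 3));
    }
}
```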
- Before Class 5 ():
- Class 5 ():
- Discuss representation of Preferred Group Formation problem.
- Implement challenge problem in groups: Preferred Group Formation (group-input.txt)
- Discuss tuning of Simulated Annealing for Preferred Group Formation problem.
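The two knobs most often tuned in simulated annealing are the acceptance rule and the cooling schedule, sketched below. The class name, parameter values, and geometric cooling rate are illustrative assumptions, not the course project's settings.

```java
public class AnnealingSketch {
    // Metropolis criterion for minimization: always accept improvements;
    // accept a worsening of size delta with probability exp(-delta / T).
    static boolean accept(double delta, double temperature, double uniformRandom) {
        if (delta <= 0) return true;
        return uniformRandom < Math.exp(-delta / temperature);
    }

    public static void main(String[] args) {
        double t = 10.0;
        for (int step = 0; step < 5; step++) {
            // As T cools, the chance of accepting a +1 worsening shrinks.
            System.out.printf("T=%.3f  p(accept +1)=%.3f%n", t, Math.exp(-1.0 / t));
            t *= 0.9; // geometric cooling: tune this rate and the initial temperature
        }
    }
}
```

Tuning typically means choosing the initial temperature high enough that early worsening moves are usually accepted, and the cooling rate slow enough that the search does not freeze into a poor local optimum.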
- Before class 6 ():
- Complete Preferred Group Formation problem implementation and optimize the given sample problem (group-input.txt). Be able to share your solution in class.
- Class 6 ():
- Check solutions for (group-input.txt).
- SLS Q&A
- Overview HW3
- Open-ended SLS problem: Suppose a dozen or more students want to have pizza ordered for their event, and you want to place the best pizza order for the group. Assume that pizza toppings can be varied by half-pizza. Students have many different preferences for (or against) toppings, and they also have different appetites. Design a system to get student preferences and place an order that will, in some sense, be best for the group.
- Before Class 7 ():
- Class 7 ():
- Alpha-beta pruning exercises in groups using a common exercise tree. Optional: You can generate your own practice exercises with GenAlphaBetaProblem.java.
- Group 1: A=3 B=2 C=4 D=2 E=4 F=2 G=4 H=3 I=4
- Group 2: A=4 B=2 C=4 D=2 E=1 F=3 G=2 H=1 I=5
- Group 3: A=4 B=5 C=5 D=3 E=5 F=2 G=1 H=4 I=1
- Group 4: A=3 B=5 C=3 D=3 E=1 F=2 G=2 H=1 I=1
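After working the exercises by hand, a compact alpha-beta sketch like the one below can check your answers. The tree shape here (max at the root, branching factor 3, depth 2, nine leaves given left to right) is an assumption for illustration and need not match the common exercise tree; the leaf values used are the classic textbook example, not a group's exercise.

```java
public class AlphaBetaSketch {
    static int leavesExamined; // tracks how much pruning saved

    static int alphaBeta(int[] leaves, int lo, int hi, int depth,
                         int alpha, int beta, boolean maxNode) {
        if (depth == 0) { leavesExamined++; return leaves[lo]; }
        int branch = (hi - lo) / 3;
        int best = maxNode ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (int i = 0; i < 3; i++) {
            int v = alphaBeta(leaves, lo + i * branch, lo + (i + 1) * branch,
                              depth - 1, alpha, beta, !maxNode);
            if (maxNode) { best = Math.max(best, v); alpha = Math.max(alpha, best); }
            else         { best = Math.min(best, v); beta  = Math.min(beta,  best); }
            if (alpha >= beta) break; // prune remaining siblings
        }
        return best;
    }

    public static void main(String[] args) {
        int[] leaves = {3, 12, 8, 2, 4, 6, 14, 5, 2};
        int value = alphaBeta(leaves, 0, 9, 2,
                              Integer.MIN_VALUE, Integer.MAX_VALUE, true);
        System.out.println(value + " with " + leavesExamined + " leaves examined");
    }
}
```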
- Overview HW4 and introduce code base.
- Play Mancala and discuss strategy.
- Before Class 8 ():
- Assigned readings
- Watch the following FYS 187-4 videos on heuristic evaluation of game-tree nodes:
- Class 8 ():
- Heuristic feature discussion: Connect 4
- In groups: Mancala play and heuristics brainstorming
- Download Ludii game system (platform-independent Java .jar file) or get Ludii-1.3.1.jar locally.
- Start the game program with the command "java -jar Ludii-1.3.1.jar" (or whichever more recent version exists), then File → Load Game (or Ctrl-L) → Games → board → sow → two rows → FairKalah.
- Play games and discuss possible features as you play.
Before class 9 ():
-
Class 9 ():
-
In-class discussion of assigned reading on time management.
- Insights from Todd W. Neller, Taylor C. Neller FairKalah: Towards Fair Mancala Play, Computers and Games: International Conference, CG 2022, Virtual Event, November 22-24, 2022, Revised Selected Papers, Springer Cham, 2022. https://doi.org/10.1007/978-3-031-34017-8
- If time, guided Mancala project work in assigned pairs.
- Before class 10 ():
- Class 10 ():
- Worked expectimax examples
- Discussion of utility, and the importance of utility definition.
- "The difference between utopic and dystopic visions of AI in sci-fi is in the definition of the utility function."
- "To exclude non-quantitative values from utility functions is to assign them a utility of 0."
- Parameterized Poker Squares play with Poker Squares play sheets
- Before class 11 ():
- Class 11 ():
- Parameterized Poker Squares abstraction
- Use of abstraction: abstract the problem, solve the abstracted problem, apply the abstract solution to the original problem
- Expectimax variants:
- Depth-limited
- Iterative-deepening
- Chance sampling
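The core expectimax computation can be sketched as below: a max node chooses among chance nodes, each averaging its outcomes by probability. The numbers are made up for illustration. The variants above modify this recursion: depth-limited cuts off with a heuristic evaluation, iterative deepening grows that cutoff, and chance sampling averages random outcome draws instead of enumerating them all.

```java
public class ExpectimaxSketch {
    // Expected value of a chance node: sum of probability * outcome value.
    static double chanceValue(double[] probs, double[] values) {
        double ev = 0;
        for (int i = 0; i < probs.length; i++) ev += probs[i] * values[i];
        return ev;
    }

    public static void main(String[] args) {
        // Action A: 50/50 between 10 and 0.  Action B: guaranteed 4.
        double a = chanceValue(new double[]{0.5, 0.5}, new double[]{10, 0});
        double b = chanceValue(new double[]{1.0}, new double[]{4});
        // The max node prefers A: expected value 5.0 beats 4.0, even though
        // A sometimes yields 0.
        System.out.println("max node value = " + Math.max(a, b));
    }
}
```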
- Connect Four play demonstration - install Eclipse project cs371-bandit-connect4.zip
- Run Connect4.java with settings: 6 Rows, 7 Columns, Black AI Player (unchecked), Red AI Player (checked), UCT Player (unchecked, then checked for the next game later in class), then Play
- n-Armed Bandit demonstration activity - Run NArmedBanditGame.java:
- Set random seed (y/n)? y
- Seed? 1234
- (1) Normal mean distribution, (2) Logarithmic mean distribution, or (3) Given mean distribution? 1
- Number of arms? 10
- Total number of pulls? 100
- Overview of exploration/exploitation dilemma, n-armed bandit problem, action selection algorithms, and Monte Carlo Tree Search
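One standard action selection rule, UCB1, can be sketched as below; it is also the selection rule behind UCT in Monte Carlo Tree Search. The class and variable names are illustrative assumptions, not the NArmedBanditGame code.

```java
public class Ucb1Sketch {
    // Pick the arm maximizing meanReward + c * sqrt(ln(totalPulls) / pulls[arm]).
    // The square-root term is an exploration bonus that shrinks as an arm is
    // pulled more; an unpulled arm gets infinite value, so every arm is tried once.
    static int selectArm(double[] totalReward, int[] pulls, int totalPulls, double c) {
        int best = 0;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (int arm = 0; arm < pulls.length; arm++) {
            double value = pulls[arm] == 0
                ? Double.POSITIVE_INFINITY
                : totalReward[arm] / pulls[arm]
                  + c * Math.sqrt(Math.log(totalPulls) / pulls[arm]);
            if (value > bestValue) { bestValue = value; best = arm; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Arm 1 has been pulled less, so its exploration bonus wins even
        // though arm 0 has the higher observed mean reward (0.9 vs. 2.0 here
        // favors arm 1 outright, and the bonus widens the gap).
        System.out.println(selectArm(new double[]{9.0, 2.0}, new int[]{10, 1}, 11, 2.0));
    }
}
```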
- Before class 12 ():
- Class 12 ():
- Partial presentation on the analysis of the dice game Pig (slides)
- Summarization of the dynamic programming (DP) pattern using project example code.
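The top-down DP pattern can be sketched as below: a recursive value definition plus a memo table so each subproblem is solved once. The dice-counting example is illustrative, not the project's example code.

```java
import java.util.HashMap;
import java.util.Map;

public class DpSketch {
    static final Map<String, Long> memo = new HashMap<>();

    // Ways to reach exactly 'target' as an ordered sum of 'dice' six-sided rolls.
    static long ways(int dice, int target) {
        if (dice == 0) return target == 0 ? 1 : 0;        // base case
        if (target < dice || target > 6 * dice) return 0; // unreachable totals
        String key = dice + "," + target;
        Long cached = memo.get(key);
        if (cached != null) return cached;                // reuse earlier work
        long total = 0;
        for (int face = 1; face <= 6; face++) total += ways(dice - 1, target - face);
        memo.put(key, total);
        return total;
    }

    public static void main(String[] args) {
        System.out.println(ways(2, 7)); // (1,6),(2,5),...,(6,1): six ways
    }
}
```

Without the memo table the recursion revisits the same (dice, target) pairs exponentially often; with it, work is linear in the number of distinct subproblems.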
- Overview of HW5 DP exercise.
- Before class 13 ():
- Class 13 ():
- Before class 14 ():
- Class 14 ():
- Complete Chutes and Ladders Solver.
- Download and complete class exercise HogSolverInClass.java
- Midterm exam expectations overview
- Before class 15 ():
- Study for midterm exam to be given in class 15.
- Class 15 ():
- Moodle midterm in class
- Midterm programming questions due before class 17.
- Before class 16 ():
- Assigned readings
- Start midterm programming exercises and bring any questions you have about them to class.
- Class 16 ():
- Before class 17 ():
- Class 17 ():
- Before class 18 ():
- Class 18 ():
- Q-Learning implementation for HW6
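The tabular Q-learning update at the heart of such a project can be sketched as: Q(s,a) += alpha * (r + gamma * max over a' of Q(s',a') - Q(s,a)). The state/action encodings and environment below are illustrative assumptions; HW6 defines its own.

```java
public class QLearningSketch {
    // Greedy value of a state: the best action value available from it.
    static double maxQ(double[][] q, int state) {
        double best = q[state][0];
        for (double v : q[state]) best = Math.max(best, v);
        return best;
    }

    // One temporal-difference update for transition (s, a, r, s'):
    // move Q(s,a) a fraction alpha toward the bootstrapped target
    // r + gamma * maxQ(s').
    static void update(double[][] q, int s, int a, double r, int sNext,
                       double alpha, double gamma) {
        q[s][a] += alpha * (r + gamma * maxQ(q, sNext) - q[s][a]);
    }

    public static void main(String[] args) {
        double[][] q = new double[2][2];  // 2 states x 2 actions, all zeros
        q[1][0] = 10.0;                   // the next state already looks valuable
        update(q, 0, 1, 1.0, 1, 0.5, 0.9);
        System.out.println(q[0][1]);      // 0.5 * (1 + 0.9 * 10 - 0) = 5.0
    }
}
```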
- Before class 19 ():
- Class 19 ():
- Machine Learning (ML) CS Core - Session 1:
- Before class 20 ():
- Class 20 ():
- Machine Learning (ML) CS Core - Session 2:
- Underfitting occurs when a machine learning model is too simple to capture the underlying patterns in the data, resulting in poor performance on both the training and testing datasets.
- Overfitting occurs when a machine learning model learns the training data too well, capturing noise and outliers instead of the underlying patterns, which results in excellent performance on the training set but poor generalization to unseen data.
- The Bias-Variance Tradeoff is the conflict between minimizing two sources of predictive model error:
- bias, error from the inadequacy of the predictive function that is chosen for the predictive model, and
- variance, error introduced by the model's sensitivity to small fluctuations in the training data.
- Bias and variance are often traded off by:
- Observing which model complexity settings best predict held-out testing data
- Penalizing model complexity via the loss/performance metric
- Common performance measures:
- Regression: mean absolute error, mean squared error, root mean squared error, R squared (coefficient of determination; linear regression only)
- Classification: accuracy, precision, recall, F1 score
- When significantly tuning model hyperparameters, partition the data into train/validation/test sets. Iteratively train models with different hyperparameters on the train set while measuring generalization performance on the validation set. Finally, evaluate the best hyperparameter model on the test set.
- It's important to understand what one's model has learned in order to guard against biased ML models.
- Before class 21 ():
- 4th Hour Robotics work
- Review prior class material
- Class 21 ():
- Machine Learning (ML) CS Core - Session 3: Part 1 of 2
- Data Preprocessing is important prerequisite work that extends beyond the goals of data wrangling (providing accurate, consistent, anomaly-free data) to additional steps that tailor the data to ML models:
- Handling Missing Values: Gaining understanding of the distribution of missing values and then either imputing (predicting) them or flagging them as missing
- Encoding Values: Assigning categorical values to one-hot encoded features, and assigning continuous values to discrete bins
- Normalizing/Standardizing Values: Transforming data values to same-magnitude ranges or normal distributions so as to avoid feature importance bias from features having different scales/distributions
- Feature engineering: Deriving new input features that can aid the ML model computation
- No-Free-Lunch Theorem: Without assumptions about the a priori distribution of target functions to be approximated, no machine learning algorithm can be claimed to be superior to another.
- ML is theoretically undecidable.
- Common sources of ML error include noise, bias, and variance.
- There are fundamental tradeoffs between the expressiveness (complexity) of a ML algorithm's hypothesis space (i.e. functions that can be learned), the amount of training data needed, and the expected performance of the ML algorithm.
- Q-Learning Project Demonstrations
- Before class 22 ():
- 4th Hour Robotics work
- Review prior class material
- Class 22 ():
- Machine Learning (ML) CS Core - Session 3: Part 2 of 2
- Q-Learning Project Demonstrations (cont.)
- Before class 23 ():
- 4th Hour Robotics work
- Review prior class material
- Class 23 ():
- Machine Learning (ML) CS Core - Session 4
- Before class 24 ():
- 4th Hour Robotics work
- Review prior class material
- Class 24 ():
- Before class 25 ():
- Class 25 ():
- Brief description of Propositional Logic representation and reasoning.
- Probabilistic Reasoning: Gibbs Sampling for Bayesian Network Reasoning
- Gibbs sampling project work
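The Gibbs sampling idea can be sketched on a tiny two-cause network (Rain and Sprinkler both influence WetGrass): observe WetGrass = true, then repeatedly resample each unobserved variable from its conditional given the others, and estimate the posterior from the visited states. The network, conditional probability values, and class name below are illustrative assumptions, not the course project.

```java
import java.util.Random;

public class GibbsSketch {
    // Illustrative CPTs: priors for Rain and Sprinkler, and P(Wet | S, R).
    static final double P_RAIN = 0.2, P_SPRINKLER = 0.5;
    static double pWet(boolean s, boolean r) {
        if (s && r) return 0.99;
        if (s) return 0.9;
        if (r) return 0.8;
        return 0.05;
    }

    // P(Rain=true | Sprinkler=s, Wet=true), normalized over Rain's two values.
    static double pRainGiven(boolean s) {
        double t = P_RAIN * pWet(s, true);
        double f = (1 - P_RAIN) * pWet(s, false);
        return t / (t + f);
    }

    // P(Sprinkler=true | Rain=r, Wet=true), normalized over Sprinkler's two values.
    static double pSprinklerGiven(boolean r) {
        double t = P_SPRINKLER * pWet(true, r);
        double f = (1 - P_SPRINKLER) * pWet(false, r);
        return t / (t + f);
    }

    // Estimate P(Rain=true | Wet=true) by Gibbs sampling the unobserved variables.
    static double estimateRainPosterior(int iterations, long seed) {
        Random rng = new Random(seed);
        boolean rain = false, sprinkler = false;
        int rainCount = 0;
        for (int i = 0; i < iterations; i++) {
            rain = rng.nextDouble() < pRainGiven(sprinkler);      // resample Rain
            sprinkler = rng.nextDouble() < pSprinklerGiven(rain); // resample Sprinkler
            if (rain) rainCount++;
        }
        return (double) rainCount / iterations;
    }

    public static void main(String[] args) {
        // Exact posterior by enumeration is about 0.32; the estimate should be close.
        System.out.println(estimateRainPosterior(200_000, 42));
    }
}
```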
- Before class 26 ():
- Make sure you're caught up on readings.
- Class 26 ():
- In-class applications of Gibbs sampling to demonstrate Bayesian network patterns of reasoning.
- Presentation on AI and Ethics