CS 391 Selected Topics: Game AI
Due at the beginning of class on Thursday 2/18.
Note: This work is to be done in groups of 2. Each group will submit one assignment. Although
you may divide the work, both team members should be able to present/describe
their partner's work upon request.
Using elements of FreeCell Solvers from Homework #3 (repository link to be
emailed Thursday afternoon 2/11), implement an improved FreeCell solver program
according to a well-defined metric.
In addition, create a data file containing the output of at least 500 games
played from random seeds (1-1000000), where each output line consists of the
seed number followed by either (1) the solution, written in the notation of the
FreeCell solutions site, or (2) "No solution found." if the search is
unsuccessful.
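As a minimal sketch of the required file format, the following assumes a
hypothetical solve(seed) that returns a list of moves (or None on failure) and
a hypothetical format_solution(moves) that renders those moves in the site's
notation; both are placeholders for your own code, not part of the assignment.

    import random

    def write_results(solve, format_solution, path="results.txt", n_games=500):
        # One line per seed: the seed number, then either the formatted
        # solution or the literal string "No solution found.".
        # solve(seed) -> list of moves, or None on failure (placeholder).
        # format_solution(moves) -> solution string in the notation of the
        # FreeCell solutions site (placeholder).
        seeds = sorted(random.sample(range(1, 1_000_001), n_games))
        with open(path, "w") as f:
            for seed in seeds:
                moves = solve(seed)
                if moves is None:
                    f.write(f"{seed} No solution found.\n")
                else:
                    f.write(f"{seed} {format_solution(moves)}\n")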
What does it mean to be an improved FreeCell solver program
according to a well-defined metric?
- First, here's an example of what this doesn't mean: submitting your HW3
work with no original improvements or modifications, declaring it an
improvement upon your HW3 submission, and calling it done. This would
be judged as poor work.
- Minimal satisfactory (C) work: Parameter tuning of a previous approach
that slightly improves performance.
- Very good work: Testing of multiple approaches to different components
(e.g. heuristic evaluation, search algorithms) that yields a significant
performance increase over your previous effort.
- Excellent work: A significant performance increase over the best previous work.
What should I choose as a metric?
- There are a number of reasonable choices. If you have any doubts
about your choice, please email me. Here are a few examples of decent
metrics (sketches of computing them follow this list):
- Percentage of seeds solved in under ___ seconds - This metric
rewards solid, fast solvers.
- Average computational time for seeds solved under ___ seconds -
First, let's assume that failed search attempts count as a full time
penalty. This metric also rewards solid, fast solvers, but is
nuanced to reward solvers that use much less than the allotted ___
seconds.
- Average solution length - This metric rewards solvers that find
short solutions. However, one would be wise to include in the
metric some large length penalty for failed solution attempts, to prevent
a solver from scoring well by "cherry-picking" the easiest problems while
ignoring a high percentage of failures.
- Real-time solution performance with next-move recommendations
required at one-second intervals (or some other such time interval).
- Interleaving search with necessary action introduces a new
consideration of reliable, real-time progress toward a solution
(see the second sketch after this list).
- Consider your metric carefully and critically. Choosing a poor
metric (e.g. completed seed searches per minute) can yield unintended
consequences when optimized (e.g. rapid search failure for many seeds per
minute). Can you think of a poor solver that would score well with
your metric? If so, choose another.
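To make the first three metric examples concrete, here is a minimal sketch.
The per-seed result layout (solved flag, seconds, solution length), the
cutoff values, and the failure penalties are illustrative assumptions you
would choose and justify yourself, not requirements.

    def pct_solved_within(results, cutoff=60.0):
        # Percentage of seeds solved in under `cutoff` seconds.
        # Each result is assumed to be a (solved, seconds, length) tuple.
        hits = sum(1 for solved, secs, _ in results if solved and secs < cutoff)
        return 100.0 * hits / len(results)

    def avg_time_with_penalty(results, cutoff=60.0):
        # Average time, counting any failure or timeout as the full cutoff.
        times = [secs if solved and secs < cutoff else cutoff
                 for solved, secs, _ in results]
        return sum(times) / len(times)

    def avg_length_with_penalty(results, fail_penalty=500):
        # Average solution length, with a large penalty for failures so
        # "cherry-picking" easy deals cannot score well.
        lengths = [length if solved else fail_penalty
                   for solved, _, length in results]
        return sum(lengths) / len(lengths)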
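For the real-time metric, here is a sketch of an interleaved search-and-act
loop. The searcher object (with improve() and best_move()) and the state
object (with is_solved() and apply()) are hypothetical stand-ins for your own
solver and game representation.

    import time

    def play_real_time(searcher, state, interval=1.0):
        # Interleave search with action: refine the plan until the per-move
        # deadline, then commit the current best recommended move.
        while not state.is_solved():
            deadline = time.monotonic() + interval
            searcher.improve(state, deadline)   # search until the deadline
            state = state.apply(searcher.best_move(state))
        return state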
How much should it improve? What do you mean by a "significant" improvement?
- This is where your metric and the documentation of
your efforts in a plain-text README file come into play.
Different metrics will have different significance. If our best
solvers solve 99% of deals in less than one minute, an improvement to 99.2%
doesn't tell us much for small seed sample sizes. If, however, you
raise the bar by measuring the number of deals solved within 10 seconds, and
you find that, by this metric, you improve performance by 20% over a large
number of seeds, such data strongly argues the case for a significant
improvement (a sketch of such a head-to-head comparison appears below).
- Part of the point is to gain experience in objective research
experimentation. Good empirical research in Computer Science generally
includes both innovative problem-solving and a data-supported case for the
significance of the problem-solving approach.
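One way to build that data-supported case is a head-to-head comparison of
your baseline and improved solvers on the same seed sample. The solver call
signature and the 10-second cutoff below are illustrative assumptions.

    def compare(baseline, improved, seeds, cutoff=10.0):
        # Percentage of deals each solver finishes within `cutoff` seconds,
        # measured on the same seed sample so the numbers are comparable.
        # Each solver is assumed to be a callable taking (seed, time_budget)
        # and returning True on success -- a placeholder for your own API.
        def pct(solver):
            wins = sum(1 for s in seeds if solver(s, cutoff))
            return 100.0 * wins / len(seeds)
        b, i = pct(baseline), pct(improved)
        print(f"baseline: {b:.1f}%  improved: {i:.1f}%  "
              f"gain: {i - b:.1f} points on {len(seeds)} seeds")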