This summer we have been working on research with Professor Neller, using machine learning to build an efficient solver for his game Birds of a Feather. Birds of a Feather is a solitaire game played on a four-by-four grid of standard playing cards. The goal is to consolidate the grid of cards into one single stack. To do this, you can move one card onto another if the two cards share a row or column and are either of the same suit or of the same or adjacent rank. These rules leave many deals of the puzzle solvable; however, some deals are unsolvable because one or more cards can never be consolidated with another card. A typical day in our lab consists of working at our computers in Glatfelter Hall. We usually work in Java, but for some of our probabilistic calculations we use R.
A basic brute-force search algorithm searches blindly: in Birds of a Feather, it checks whether the puzzle is solvable without any sense of direction. Our goal this summer has been to speed up the search by giving it direction, supplying the searcher with “strategies,” or heuristics. Imagine playing any game without a strategy in mind: every move would be made without considering the next, and success would rely solely on random chance. The brute-force search is just that. The heuristics, as you would imagine, improve the efficiency of our search algorithm by giving it a strategy.
Our heuristics attempt to lead the searcher toward states of the puzzle that are most likely solvable. The heuristic we have had the most success with this summer is what we call “total flockability.” In short, a card’s flockability is the number of other cards on the board it could be consolidated with, ignoring the row and column restrictions. Total flockability is the sum of the flockability of every card on the board. The idea is that board states with higher total flockability are more likely to be solvable, so when we search, the searcher prefers the board state with the highest total flockability. Many other heuristics are possible; this is just one example.
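To make this concrete, here is a minimal Java sketch of the flockability calculation as described above. It assumes cards are represented by rank and suit indices and that “adjacent rank” does not wrap around (so ace and king would not count as adjacent); the class and method names are ours for illustration, not from our actual codebase.

```java
// Hypothetical sketch of the flockability heuristic; cards are
// modeled as parallel rank/suit arrays. Names are illustrative only.
public class Flockability {

    /** True if two cards could ever be stacked together: same suit,
     *  or same/adjacent rank (ignoring row and column restrictions).
     *  Assumes ranks do not wrap around. */
    static boolean flockable(int rank1, int suit1, int rank2, int suit2) {
        return suit1 == suit2 || Math.abs(rank1 - rank2) <= 1;
    }

    /** Flockability of card i: how many other cards it could flock with. */
    static int flockability(int[] ranks, int[] suits, int i) {
        int count = 0;
        for (int j = 0; j < ranks.length; j++) {
            if (j != i && flockable(ranks[i], suits[i], ranks[j], suits[j])) {
                count++;
            }
        }
        return count;
    }

    /** Total flockability: the sum of every card's flockability. */
    static int totalFlockability(int[] ranks, int[] suits) {
        int total = 0;
        for (int i = 0; i < ranks.length; i++) {
            total += flockability(ranks, suits, i);
        }
        return total;
    }
}
```

A searcher comparing two candidate boards would simply prefer the one whose `totalFlockability` is larger.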
Another method we have used is pruning: determining that a branch of the search will be fruitless before actually searching it. One example is checking for “odd birds.” An odd bird is a card that cannot be consolidated with any other card, or, in other words, a card with a flockability of 0. If such a card is on the board, there is no possible way to consolidate into a single stack, so searching that board is fruitless. This saves a lot of wasted time: a blind search will descend into paths with no solution, while our search avoids them.
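As a sketch, the odd-bird check amounts to scanning the board for any card with flockability 0. The representation and names below are hypothetical, assuming the same rank/suit encoding as before.

```java
// Hypothetical sketch of the "odd bird" pruning test. A board is
// represented as parallel rank/suit arrays; names are ours for
// illustration, not from the actual research codebase.
public class OddBird {

    /** True if some card can never flock with any other card
     *  (same suit, or same/adjacent rank), which proves the board
     *  unsolvable, so the search can skip it entirely. */
    static boolean hasOddBird(int[] ranks, int[] suits) {
        for (int i = 0; i < ranks.length; i++) {
            boolean lonely = true;
            for (int j = 0; j < ranks.length; j++) {
                if (i != j && (suits[i] == suits[j]
                        || Math.abs(ranks[i] - ranks[j]) <= 1)) {
                    lonely = false;
                    break;
                }
            }
            if (lonely) return true;  // card i has flockability 0
        }
        return false;
    }
}
```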
We have not finished our work yet; we are currently developing a search that considers multiple heuristics through the use of statistics. Through statistical analysis, we can determine which heuristics are the most predictive of solvability. If each heuristic is assigned a weight from 0 to 1, their weighted sum can itself serve as a single heuristic. By enabling our searcher to consider several heuristics at once, it can solve a puzzle more quickly.
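The weighted-sum idea fits in a few lines of Java. The weights in this sketch are made-up placeholders; in practice they would come from the statistical analysis described above.

```java
// Hypothetical sketch of combining several heuristic values into one
// score via a weighted sum. Weights are placeholders for illustration;
// real weights would be fit from data on which features predict
// solvability.
public class WeightedHeuristic {

    /** Weighted sum of heuristic values; each weight lies in [0, 1]. */
    static double combined(double[] weights, double[] values) {
        double score = 0.0;
        for (int i = 0; i < weights.length; i++) {
            score += weights[i] * values[i];
        }
        return score;
    }
}
```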
This part of the description is basic and does not require a computer science background. The searcher takes the original board state, the cards dealt out in a 4-by-4 grid, and creates the state resulting from every possible move, or, in other words, a board for every single card you could move on your first move. First it checks whether any of those boards are already unsolvable. For each one that might still be solvable, it calculates a heuristic value. Then whichever board is most promising according to the heuristic, we expand: we find all of the boards that result from making every possible move from that board. This process repeats until we either find a solved state or run out of states to try. If we found one, the search is over and the board is solvable. If not, we have searched every possible state and conclude that the board is unsolvable.
This part of the description will most likely only make sense if you have a background in computer science. The searcher is a selective depth-first search that utilizes a queue, a stack, and a priority queue. First, a node is taken off the queue. If closed, a hash set, already contains this state, then we skip it, since searching it again would be redundant; otherwise, we add its string form to closed and continue. If this node is the goal, then we assign it to be the goal node and return true. Otherwise, we expand the node and prune the search by keeping only children that pass various tests, avoiding nodes that are guaranteed not to be the goal. Each child that is not pruned has its compareVal, a value used for sorting the priority queue, calculated, and is then added to the priority queue. We repeat this process as long as the queue is not empty.
Next we iterate through the priority queue, pushing each node onto the stack. The priority queue is ordered so that its first element is the last node that, according to the heuristic, we want to search. Therefore the last element is the most promising node; it is pushed onto the stack last, so it will be the first node searched. If the stack is not empty, we pop a node from it and offer it to the queue. If the stack is still not empty and the next node on it has a compareVal equal to the previous node’s, we add that one to the queue as well. We continue until there is no longer a tie or the number of nodes added to the queue equals numTies. If the queue is then empty, the search is over and we return false; otherwise, we start the algorithm from the beginning.
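The two paragraphs above can be condensed into one hedged Java sketch. The Node interface, stateString, and numTies here are stand-ins we invented for illustration, and several details of our real implementation are simplified away; this is a sketch of the control flow, not the actual codebase.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.Set;

// Condensed, hypothetical sketch of the selective depth-first search
// described above. Node, stateString, and numTies are illustrative
// stand-ins for the real classes.
public class SelectiveSearch {

    interface Node {
        boolean isGoal();
        boolean isPrunable();   // e.g. the board contains an odd bird
        double compareVal();    // heuristic value used for sorting
        List<Node> expand();    // all one-move successor states
        String stateString();   // canonical form for the closed set
    }

    Node goalNode = null;

    boolean search(Node root, int numTies) {
        Queue<Node> queue = new ArrayDeque<>();
        Set<String> closed = new HashSet<>();
        queue.offer(root);

        while (!queue.isEmpty()) {
            Node node = queue.poll();
            if (!closed.add(node.stateString())) continue; // redundant
            if (node.isGoal()) { goalNode = node; return true; }

            // Expand, prune, and order children: the priority queue
            // yields the least promising (smallest compareVal) first.
            PriorityQueue<Node> pq = new PriorityQueue<>(
                (a, b) -> Double.compare(a.compareVal(), b.compareVal()));
            for (Node child : node.expand()) {
                if (!child.isPrunable()) pq.offer(child);
            }

            // Push onto a stack so the most promising child ends up
            // on top and is therefore searched first.
            Deque<Node> stack = new ArrayDeque<>();
            while (!pq.isEmpty()) stack.push(pq.poll());

            // Take the top node, plus ties on compareVal, up to numTies.
            if (!stack.isEmpty()) {
                Node best = stack.pop();
                queue.offer(best);
                int added = 1;
                while (!stack.isEmpty() && added < numTies
                        && stack.peek().compareVal() == best.compareVal()) {
                    queue.offer(stack.pop());
                    added++;
                }
            }
        }
        return false; // every reachable state exhausted: unsolvable
    }
}
```

Note that only the best child (and its ties) is kept at each step, which is what makes the search “selective” rather than exhaustive along every sibling.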
But what are we doing other than playing the game? I wish we could just play the game all day in our lab, but we are doing even more interesting and challenging things than that. We focus on ways to solve the puzzle better: designing and exploring features, or behaviors within the game, that help us understand whether a particular puzzle is solvable or unsolvable and what makes it so. Designing these features can be challenging, and sometimes the best you can do is take a guess and hope that it works. So far, we have implemented around 10 features and investigated how they correlate with solvability: how much impact each has on a puzzle being solvable or unsolvable, measured in terms of probability. Measuring that impact is a heavy computational task; it amounts to simulating thousands of possible moves and analyzing how well we can solve the puzzle by following a certain feature or a combination of two or more features. We have accumulated more than a gigabyte of data so far from those simulations, and it is just getting bigger and bigger. We are nurturing the data we get just like our little baby, and for the time being all we can do is hope that our baby will love us back as it grows older.
This summer we’ve all had a great time learning more about the field of artificial intelligence through our research. It has definitely been a positive learning experience, and we hope to use the skills we’ve learned throughout the rest of our time here at Gettysburg and beyond.