For a detailed description of the puzzle see N-puzzle on Wikipedia
The C++ implementation uses the RANGE statement of StEAM. Every possible move is associated with one integer from 0 to 3. The puzzle is stored as an array of 9 elements. We use the VASSERT statement to check whether a solution has been reached. StEAM was able to solve all 8-puzzle instances that we provided.
The next table lists the instances we used.
No. | Instance          | Distance to solution
----+-------------------+---------------------
 1  | 2 0 5 1 7 4 3 6 8 |  9
 2  | 2 7 5 1 0 4 3 6 8 | 10
 3  | 2 7 5 0 1 4 3 6 8 | 11
 4  | 2 7 5 1 6 4 3 8 0 | 12
 5  | 2 7 5 1 6 0 3 8 4 | 13
 6  | 2 7 0 1 6 5 3 8 4 | 14
 7  | 2 0 7 1 6 5 3 8 4 | 15
 8  | 2 6 7 1 0 5 3 8 4 | 16
 9  | 2 6 7 1 5 0 3 8 4 | 17
10  | 2 6 7 1 5 4 3 8 0 | 18
11  | 2 6 7 1 5 4 3 0 8 | 19
You can download the sources as a .tar.gz file from here.
You can also download the igcc compiled binary from here.
All experiments were conducted on a dual-processor AMD Opteron machine with 4 GB RAM, 4 GB swap, and a 3-terabyte hard disk made available through NFS.
For the experiments, Breadth-First Search was used.
To show the performance improvements gained by the externalization implemented in StEAM-XXL, we ran the test cases with all three methods available in StEAM.
For the above experiments we employed full hashing.
This graph describes the internal memory usage for finding the optimal solution path in an 8-puzzle instance.
Here we see a significant difference in the memory consumption of the three methods. The memory usage of the internal StEAM increases drastically with harder instances, while in StEAM-XXL it remains almost constant. Note that in StEAM-XXL, the RAM requirement can be adjusted by changing the cache size.
In the next graph we report the time taken to find the solution path.
In this graph we see an increase in the time needed to find the solution with StEAM-XXL.
The performance of externalization can in fact be improved by using a better hash function. The experiments described below show the connection between hashing and externalization.
This graph shows how the usage of internal memory increases over time; we present it for instance 11.
Comparing the three methods for this instance shows how efficiently StEAM-XXL works: the consumption of internal memory is nearly linear during the whole search process.
To see the expansion speed, we present a graph that shows the number of generated states over time while solving instance 11.
The large number of potential duplicates generated by the hash function causes StEAM-XXL to slow down over time. As noted above, a better hash function would perform better.
During the experiments with StEAM-XXL we noticed that choosing a different hash function gives much better performance, so in the next section we present test results on the influence of the hash function on externalization.
StEAM can compute the hash value in two ways: partial hashing and full hashing.
Partial hashing is not able to solve all instances, since the number of distinct hash values it produces is limited. This is not a problem for full hashing, since it generates many more distinct hash values. Of course, we compare only instances that have been solved by both methods.
We do not compare memory consumption here, because it does not differ between the methods; hashing only influences the search speed.
To compare the two hash methods we show three graphs: one for the original implementation, one for collapse compression, and one for StEAM-XXL.
As we can see, full hashing is slower than partial hashing, but the graph for partial hashing rises faster than the one for full hashing.
This graph looks nearly the same as the one above. If we take a closer look at the partial hashing line, we see that it is even steeper than in the original implementation.
Here we see how important a good hash function is for externalization. While the graph for full hashing does not differ much from the two graphs above, using partial hashing slows down the expansion speed drastically.
The experiments show that our externalization method is able to reduce the memory consumption significantly. This way we are able to solve harder instances.
The other interesting observation is the strong influence of hashing on the search time. The results show that even a very fast externalization is worthless if the hash function computes only a few distinct hash values. While a more complex hash function spends more time on computing the hash value, this time is saved when checking for duplicates.
This phenomenon will also appear in the internal methods with bigger instances. Checking for duplicates is much faster there, but it still takes time.