With 25,000 QTM cosets proved to have a distance of 25 or less, we have shown that there are no positions that require 30 or more quarter turns to solve. All these sets were run on my personal machines, mostly on a single new i7 920 box.

These sets cover more than 4e16 of the total 4e19 cube positions, when inverses and symmetries are taken into account, and no new distance-26 position was found. This indicates that distance-26 positions are extremely rare; I conjecture the known one is the only distance-26 position.
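A back-of-envelope check of that coverage figure (my own sketch, not the author's computation): each coset of the Kociemba subgroup H = <U,D,L2,R2,F2,B2> contains |H| = 8!·8!·4!/2 positions, and each solved coset covers up to 48 symmetries times 2 (inverses) worth of equivalent cosets.

```python
import math

# Order of H = <U,D,L2,R2,F2,B2>: 8! corner permutations x 8! U/D-edge
# permutations x 4! slice-edge permutations, halved for parity.
h_order = (math.factorial(8) ** 2 * math.factorial(4)) // 2
print(h_order)  # 19508428800

# 25,000 solved cosets, each standing in for up to 96 equivalent cosets
# (48 symmetries x inverses) -- an upper bound, since some images coincide.
covered = 25_000 * 96 * h_order
total = 43_252_003_274_489_856_000  # all cube positions, ~4.3e19
print(f"{covered:.2e} of {total:.2e}")  # ~4.7e16, consistent with "more than 4e16"
```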

In order to take the next step, a proof of 28, I would need a couple of CPU years, or improvements in my program, my technique, or both. I will continue solving cosets and looking for additional opportunities.

I believe a proof of 20 HTM and 26 QTM (or a counterexample!) will probably happen within the next few years.

## Congrats!

Since determining the diameter of the cube group (FTM or QTM) appears to require an enormous amount of CPU time, it seems to me that to get to the answer in the shortest possible time, we need a SETI-style distributed approach. As I understand it, your program needs over 4 GB of RAM (or at least over 3.5 GB), which would seem to severely limit the number of computers out on the internet that would be able to run it.

So I was thinking: couldn't your program be modified so that, instead of being based upon the subgroup <U,D,L2,R2,F2,B2>, it was based upon <U,D,L2,R2> or even <U,D,L2>? Wouldn't that reduce the memory requirements by a factor of 6 or 12, so that the program could run with about 800 MB or 400 MB, respectively?
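A quick sanity check of those figures (my own sketch; it takes the stated 4 GB and the proposed reduction factors of 6 and 12 at face value, and assumes memory scales linearly with the order of the chosen subgroup, which is plausible for a one-bit-per-coset-element table but is not stated here):

```python
# Rough arithmetic behind the 800 MB / 400 MB estimates (assumption: memory
# is proportional to the subgroup's order).
ram_mb = 4096  # stated requirement for <U,D,L2,R2,F2,B2>, in MB
for factor in (6, 12):
    print(f"factor {factor}: ~{ram_mb / factor:.0f} MB")
# factor 6 gives ~683 MB (roughly the 800 MB quoted);
# factor 12 gives ~341 MB (roughly the 400 MB quoted).
```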

It appears to me that since <U,D,L2,R2> and <U,D,L2> are subgroups of <U,D,L2,R2,F2,B2>, the cosets of these smaller subgroups would simply partition each coset of <U,D,L2,R2,F2,B2> into separate subsets. So powerful computers could still process <U,D,L2,R2,F2,B2> cosets directly, while less powerful computers could split the work of processing such cosets.
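The refinement argument can be illustrated with a toy group (my own sketch; Z/12 stands in for the cube group, with nested subgroups standing in for the move subgroups): whenever K ≤ H ≤ G, every coset of K lies entirely inside exactly one coset of H, so the K-cosets split each H-coset into smaller, independent work units.

```python
# Toy illustration: cosets of a smaller subgroup K refine cosets of a
# larger subgroup H inside G. Here G = Z/12 under addition mod 12.
G = set(range(12))
H = {g for g in G if g % 2 == 0}  # stands in for <U,D,L2,R2,F2,B2>
K = {g for g in G if g % 4 == 0}  # stands in for a smaller subgroup, K <= H

h_cosets = {frozenset((g + h) % 12 for h in H) for g in G}
k_cosets = {frozenset((g + k) % 12 for k in K) for g in G}

for kc in k_cosets:
    # each K-coset is a subset of exactly one H-coset
    assert sum(kc <= hc for hc in h_cosets) == 1

print(len(h_cosets), len(k_cosets))  # 2 4 -- each H-coset splits into 2 K-cosets
```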

I realize these other subgroups have less symmetry than <U,D,L2,R2,F2,B2>, but I think the potential to have many more CPUs working on the problem would more than offset the inefficiencies of using the smaller subgroups.