Finite-choice logic programming
I recently came across finite-choice logic programming via this Lobste.rs comment. It’s an approach to logic programming distinct from the usual depth- or breadth-first search algorithms: instead, we bias towards immediate consequences, i.e. we start with those “choices” that only have a single option. That’s much more efficient than e.g. selecting choice points uniformly at random (which can send us down long branches); but they point out that it may fail to terminate.
I like the perspective it gives on the situation, but I wonder whether it would be useful to weight the choices exponentially (similar to Levin search), rather than always taking the immediate consequences. For example, we could arrange the choices in a tree like:
  ┌── A1
  │
──┼── A2
  │
  ├── A3
  │
  │  ┌── B1
  │  │
  └──┼── B2
     │
     │  ┌── C1
     │  │
     └──┤
        │
        └── …
Where the Ai all have the same number of options as each other, the Bi all have the same number of options as each other, and so on; and the Bi have more options than the Ai, the Ci have more options than the Bi, etc.
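As a sketch (all names hypothetical), one way to build such a tree is to nest the choice groups, ordered by increasing option count, so each group sits one level deeper than the last:

```python
def build_tree(groups):
    """Nest choice groups into a tree like the one above:
    [[A1, A2, A3], [B1, B2], [C1]] -> [A1, A2, A3, [B1, B2, [C1]]].
    Leaves are concrete choices; a nested list is a subtree."""
    tree = []
    for group in reversed(groups):
        # Each earlier (smaller) group gets the rest of the tree as
        # its final sibling, pushing later groups one level deeper.
        tree = group + ([tree] if tree else [])
    return tree

tree = build_tree([["A1", "A2", "A3"], ["B1", "B2"], ["C1"]])
# tree == ["A1", "A2", "A3", ["B1", "B2", ["C1"]]]
```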
To select which choice to make, we start at the root and pick a child uniformly at random: if we hit a leaf, that’s the choice we make next; otherwise we recurse. In the above example, A1, A2 and A3 will each be chosen 1/4 of the time; B1 and B2 will each be chosen 1/12 of the time; C1 will be chosen 1/24 of the time; and the remaining 1/24 of the time we’ll recurse down to something in the ‘…’ branch.
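A minimal sketch of this selection procedure, representing the tree as nested lists with leaves as strings (names hypothetical; a D1 group stands in for the elided ‘…’ branch), plus an exact check of the resulting leaf probabilities:

```python
import random
from fractions import Fraction

# The example tree, with a hypothetical D1 as a stand-in for '…':
# A1..A3 at the root, then B1/B2 one level down, then C1, then D1.
tree = ["A1", "A2", "A3", ["B1", "B2", ["C1", ["D1"]]]]

def sample(tree, rng=random):
    """Start at the root, pick a child uniformly at random, and
    recurse until we hit a leaf: that's the choice we make next."""
    node = tree
    while isinstance(node, list):
        node = rng.choice(node)
    return node

def leaf_probs(tree, p=Fraction(1)):
    """Exact probability of sample() returning each leaf."""
    probs = {}
    for child in tree:
        q = p / len(tree)  # children are chosen uniformly
        if isinstance(child, list):
            probs.update(leaf_probs(child, q))
        else:
            probs[child] = q
    return probs

probs = leaf_probs(tree)
# Each Ai gets 1/4; each Bi gets 1/12; C1 (and the '…' stand-in) 1/24.
```

This makes the exponential weighting explicit: each extra level of nesting divides a branch’s probability by that level’s branching factor.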
This may be a nice framework for a Gen a implementation, useful for property checking; and indeed for generating/enumerating expressions. It would be useful to compare it to equality graphs (e-graphs), like egglog, etc., to see whether this constraint framework can be used/extended to account for efficient handling of equational relations (e.g. via hash consing, etc.).