Work log 2016-07-13
Posted on 2016-07-13 by Chris Warburton
We could learn a compact representation of a generative model; this is already done with e.g. graphs.
For each symbol S(i) assign probabilities P(i, 0), P(i, 1), etc.
Initial model: P(S(i)) = P(i, 0)
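Roughly, as a Haskell sketch (the `Probs` table and `initialP` are made-up names, just to pin down the shape):

```haskell
import qualified Data.Map as Map

-- Each symbol i gets a row of probabilities P(i, 0), P(i, 1), ...
type Symbol = Int
type Probs  = Map.Map Symbol [Double]

-- Initial model: the probability assigned to S(i) is just P(i, 0).
initialP :: Probs -> Symbol -> Double
initialP ps i = case Map.lookup i ps of
                  Just (p0:_) -> p0
                  _           -> 0
```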
Expand:
λv(i) -> <term>
<term> <term>
case <term> of
<pat> -> <term>
...
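As a datatype this would be something like the following (constructor names are mine; `Pat` is a placeholder pattern type):

```haskell
-- Sketch of the expansion alternatives above
data Pat  = PVar Int | PCon String [Pat]

data Term = Var Int                  -- v(i)
          | Lam Term                 -- λv(i) -> <term>
          | App Term Term            -- <term> <term>
          | Case Term [(Pat, Term)]  -- case <term> of <pat> -> <term>; ...
```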
Not the most brilliant approach; how about a Church encoding?
~~λ s0 s1 s2 ... -> <term>~~
For n symbols:
term n = λ s1 s2 ... sn -> <expr n>
expr n = v(i), where 0 <= i < n
| λ -> <expr (S n)>
| <expr n> <expr n>
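A quick Haskell sketch of that grammar, tracking `n` with a separate scope check rather than in the types (names are mine):

```haskell
data Expr = V Int        -- v(i)
          | L Expr       -- λ -> <expr (S n)>: the body sees one more symbol
          | A Expr Expr  -- <expr n> <expr n>

-- wellScoped n e holds iff e is a valid <expr n>.
wellScoped :: Int -> Expr -> Bool
wellScoped n (V i)   = 0 <= i && i < n
wellScoped n (L b)   = wellScoped (n + 1) b
wellScoped n (A f x) = wellScoped n f && wellScoped n x

-- 'term n = λ s1 s2 ... sn -> <expr n>' then just wraps an Expr that is
-- wellScoped n.
```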
What can we compare recurrent clustering to?
- Non-recurrent clustering
- Simpler feature extraction/distance, e.g. n-grams, bag-of-words, tree kernels (a bag-of-words sketch is below)
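For the bag-of-words baseline, something as simple as this would do (tokenising terms into strings is assumed to happen elsewhere):

```haskell
import qualified Data.Map as Map

-- Count token occurrences; terms are assumed to be rendered to token lists.
type BagOfWords = Map.Map String Int

bagOfWords :: [String] -> BagOfWords
bagOfWords = Map.fromListWith (+) . map (\tok -> (tok, 1))

-- A crude distance: sum of absolute differences in counts (L1 distance).
bagDistance :: BagOfWords -> BagOfWords -> Int
bagDistance a b = sum (Map.elems (Map.unionWith (\x y -> abs (x - y)) a b))
```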
Ranking would be good too: select the N closest matches.
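Ranking on top of any of those distances is just sort-and-take (a sketch, assuming some `dist` function from the feature/clustering step):

```haskell
import Data.List (sortOn)

-- Keep the n candidates with the smallest distance to the query.
closestN :: Ord d => (a -> d) -> Int -> [a] -> [a]
closestN dist n = take n . sortOn dist
```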
We want to learn a rule of type 𝒫(Def) -> 𝒫(Def) (in general). It may be more practical to learn 𝒫(Def) -> Def -> [0, 1].
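As type signatures (`Def` is a stand-in for whatever representation of definitions we end up with; sets are approximated by lists):

```haskell
type Def = String  -- placeholder for an actual definition representation

-- General form: from a set of definitions to a set of definitions.
generalRule :: [Def] -> [Def]
generalRule = undefined  -- to be learned

-- Possibly more practical: score one candidate against a context set,
-- with the result interpreted as lying in [0, 1].
scoreDef :: [Def] -> Def -> Double
scoreDef = undefined  -- to be learned
```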