Work log 2016-09-02
Posted by Chris Warburton
Started writing an `mlspecBench` script. REALLY straightforward!
Some observations:
- When writing `quickspecBench` I tried to ensure eval-time checking of the script ‘plumbing’ by using attrsets of env var names, e.g. `foo > "${vars.FOO}"`; if the `FOO` var wasn’t in this set, we’d get an error at eval time (before execution). HOWEVER, that’s a) verbose and b) opt-in; it doesn’t check any direct usages, e.g. `foo > "$FOO"`, so we STILL need checks and fail-early behaviour. In which case, what’s the point? I removed this in favour of just using the vars directly. The realisation when writing `mlspecBench` was that these vars are mostly filename references, so there’s STILL a layer of indirection; we can just write those filenames directly, as they’re just as good an API as env var names (see the sketch after this list). Hence `quickspecBench` can probably be simplified.
- Running phases from a standalone script, like formatting, clustering, etc. forced some previously-encapsulated scripts to become exposed. It turns out `benchmarkOutputs` is the only place where many of these appear. The whole edifice it creates is probably worth throwing out once we have working scripts.
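To make the contrast concrete, here’s a rough sketch of the two styles. This is not the actual `quickspecBench`/`mlspecBench` code: `runClustering`, the variable names and the filenames are all made up for illustration.

```
with import <nixpkgs> {};  # just for writeScript

let
  # Checked style: every shell variable reference goes through this attrset,
  # so a typo like `vars.CLUSTRES` aborts during Nix evaluation, before
  # anything runs...
  vars = {
    ASTS     = "$ASTS";
    CLUSTERS = "$CLUSTERS";
  };

  checked = writeScript "cluster-checked" ''
    runClustering < "${vars.ASTS}" > "${vars.CLUSTERS}"
  '';

  # ...but a direct "$ASTS" in the same string gets no check at all, so the
  # safety is opt-in and the runtime checks are still needed.

  # Filename style: since most of these vars just name files, the scripts
  # can agree on the filenames themselves; they're just as good an API as
  # env var names, and need no checking machinery.
  direct = writeScript "cluster-direct" ''
    runClustering < asts.json > clusters.json
  '';
in { inherit checked direct; }
```

The checked style only pays off if every reference goes through `vars`, which is exactly the opt-in problem mentioned above.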
Some general thoughts:
Fractal feature extraction, e.g.
+-------+ +---+---+ +-+-+---+
| | | | | |.|.| |
| | | . | . | +-+-+ . |
| | | | | |.|.| |
| . | +---+---+ +-+-+---+ ...
| | | | | | | |
| | | . | . | | . | . |
| | | | | | | |
+-------+ +---+---+ +---+---+
Ideas in common with quad-/octrees. This only works if we don’t require a fixed-size input. Convolutional nets learn features; could they also learn sizes?
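Here’s a tiny sketch of the idea, sticking with Nix purely because that’s what the rest of this log is about (a real feature extractor wouldn’t be written in it); the grid, the recursion depth and using the mean as the ‘feature’ are all just stand-ins:

```
with builtins;
let
  # Take the n*n sub-grid whose top-left corner is at row r, column c
  subGrid = grid: r: c: n:
    genList (i: genList (j: elemAt (elemAt grid (r + i)) (c + j)) n) n;

  sum  = foldl' (x: y: x + y) 0;
  mean = grid: sum (map sum grid) * 1.0 / (length grid * length grid);

  # One feature (the mean) for the whole grid, then recurse into the four
  # quadrants, quadtree-style, down to the given depth. Assumes a square
  # grid with a power-of-two side.
  features = depth: grid:
    let n = length grid;
        h = n / 2;
    in [ (mean grid) ] ++
       (if depth == 0 || n == 1
           then []
           else concatLists (map (features (depth - 1)) [
                  (subGrid grid 0 0 h) (subGrid grid 0 h h)
                  (subGrid grid h 0 h) (subGrid grid h h h)
                ]));
in
  # 1 + 4 + 16 = 21 features for a 4x4 grid at depth 2
  features 2 [ [ 1 0 0 1 ]
               [ 0 1 1 0 ]
               [ 0 0 1 1 ]
               [ 1 1 0 0 ] ]
```

Each level of recursion adds features at a finer scale, which is where the quad-/octree resemblance comes from.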