Work log 2016-08-12
Posted by Chris Warburton
Evaluation has slowed due to something being built at eval time :( `formatted` was reading in `clustered` at eval time, so I’ve changed it. Eval time cut from 1000+ seconds (over 16 minutes) to ~120 seconds :)
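As a minimal sketch of the difference (with made-up stand-ins for the real `formatted` and `clustered` expressions): reading a derivation’s output during evaluation, e.g. with `builtins.readFile`, forces that derivation to be built before evaluation can finish, whereas passing it along as a build input defers all the work to build time.

```nix
with import <nixpkgs> {};

let
  # Stand-in for the expensive `clustered` data
  clustered = runCommand "clustered" {} ''
    echo "expensive clustering output" > "$out"
  '';
in {
  # Slow: readFile realises `clustered` during *evaluation*
  formattedSlow = runCommand "formatted" {} ''
    echo ${lib.escapeShellArg (builtins.readFile clustered)} > "$out"
  '';

  # Fast: `clustered` is just a dependency, built at build time
  formattedFast = runCommand "formatted" { inherit clustered; } ''
    cat "$clustered" > "$out"
  '';
}
```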
Easy way to see what’s causing expensive building: everything expensive (pretty much) should be benchmarked. Put an `exit 1` at the top of the benchmarking scripts and see what fails to evaluate.
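For example, a sketch of that trick with an invented benchmark derivation: with `exit 1` at the top, no benchmark build can succeed, so evaluating the whole expression should still be quick; any attribute that now fails to evaluate must be forcing a benchmark build at eval time.

```nix
with import <nixpkgs> {};

# Stand-in for one of the expensive benchmark derivations
runCommand "some-benchmark" {} ''
  exit 1  # bail out immediately; the real benchmark commands never run

  # ...expensive benchmarking commands would go here...
  touch "$out"
''
```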
Turns out `failed` was forcing some stuff.
TIP benchmarks are now generated at eval time, but that seems reasonable. I’ve moved their tests into a separate directory, with their own derivation, so we don’t have to run them every time (they’re more like integration tests than unit tests).
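A rough sketch of that layout, with invented names and the benchmark generation shown as a plain derivation for brevity: the tests become their own attribute, so they’re only built when explicitly requested rather than on every use of the benchmarks.

```nix
with import <nixpkgs> {};

rec {
  # Stand-in for the generated TIP benchmarks
  tipBenchmarks = runCommand "tip-benchmarks" {} ''
    mkdir "$out"
    echo "(assert true)" > "$out/benchmarks.smt2"
  '';

  # Slow, integration-style tests: only built when asked for explicitly,
  # e.g. `nix-build -A tests`
  tests = runCommand "tip-benchmark-tests" { inherit tipBenchmarks; } ''
    test -s "$tipBenchmarks/benchmarks.smt2"
    touch "$out"
  '';
}
```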
Adding PDFs to Hydra too.