Added case study on probabilistic programming and inference
Added the case study Inference, which contains the contents of my bachelor's thesis, *Probabilistic Programming and Programmable Inference in Effekt*.
The content includes the Slice Sampling and Metropolis-Hastings algorithms, as well as some examples showing how these algorithms can be applied.
The tests are failing since a check file is missing. You just need to create a file with the same name that ends in `.check` and contains the expected output. You can look at the other case studies to see how this works.
The expected output might be a bit difficult to state since it could rely on randomness. Is it possible to use a different (deterministic) source of randomness for the tests? For example, this very simplistic one:
```effekt
def linearCongruentialGenerator[R](seed: Int) { prog: => R / Random }: R = {
  // parameters from Numerical Recipes (https://en.wikipedia.org/wiki/Linear_congruential_generator)
  val a = 1664525
  val c = 1013904223
  val m = 4294967296
  var x: Int = seed;
  def next() = { x = (a * x + c).mod(m); x }
  try { prog() } with Random {
    resume(next().toDouble / m.toDouble)
  }
}
```
Thanks for your contribution, @sinaschaefer. In order for the tests to pass, we need to take care of two more things:

1. As you can see here, you need to annotate your extern definition of `pow` with our new "feature flag". Since we have multiple different backends, an unannotated extern definition is ambiguous, and the compiler cannot safely figure out which backend your extern definition works for. This may all sound horribly complicated, but in the end you just need to prepend `js` to the right-hand side of the definition:

   ```effekt
   extern pure def pow(x: Double, y: Double): Double = js "Math.pow(${x}, ${y})"
   ```

2. Now we can finally get the CI to run. Since your examples only target the JavaScript backend, we need to ignore your tests when testing the other backends (or add definitions for the other backends to the extern definition). For that, you need to add the line

   ```scala
   examplesDir / "casestudies" / "inference.effekt.md"
   ```

   to the files

   - https://github.com/effekt-lang/effekt/blob/master/effekt/jvm/src/test/scala/effekt/LLVMTests.scala
   - https://github.com/effekt-lang/effekt/blob/master/effekt/jvm/src/test/scala/effekt/ChezSchemeTests.scala
   - https://github.com/effekt-lang/effekt/blob/master/effekt/jvm/src/test/scala/effekt/MLTests.scala

   that is, to all backends for which your extern definitions do not work.
I hope this is helpful, and if not, do not hesitate to shoot us a message.
Edit: You also need to enclose variables in `${...}` in extern definitions.
@sinaschaefer, if you'd give me write permission to your fork, I can happily make these changes for you.
@sinaschaefer, it appears you gave me permissions for the wrong repository. I need them for this one: https://github.com/sinaschaefer/effekt (where the branch this PR is referencing is located). Thanks in advance!
@dvdvgt Oh sorry, now it should be right. Thanks!
@dvdvgt could you round the emitted values to make the tests go through?
It seems there's no simple solution for this. A generic `Emit[R]` effect is used for printing the results, so it is not known whether `R` is something that can be rounded before printing. Rounding where `do emit` is invoked does not work either because, there as well, the type of the thing being emitted is not known and cannot be assumed to be a `Double`.
I am inclined to just comment out the parts that cause the tests to fail due to the varying precision of doubles depending on the machine they are run on.
We could handle the `Emit` effect into a structure like `List[T]` instead of inspecting each value as it is emitted. This would make testing much easier anyway.
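To illustrate the idea (sketched in Python rather than Effekt, with made-up names; the real change would be an Effekt handler for `Emit`): collecting emitted values into a list first lets the test round them afterwards, independent of machine-specific double precision.

```python
def collecting(prog):
    # run `prog`, interpreting each emit as appending to a list
    # instead of printing immediately
    results = []
    prog(results.append)
    return results

def regression_demo(emit):
    # hypothetical stand-in for the case study's program;
    # emits doubles with floating-point noise
    emit(0.30000000000000004)
    emit(1.1000000000000001)

values = collecting(regression_demo)
rounded = [round(v, 6) for v in values]
assert rounded == [0.3, 1.1]
```

The test can then compare rounded values against a fixed `.check` file without caring how each platform formats raw doubles.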
What do you think @phischu ?
Merge it and revise it later.
I agree that we need to merge it soon -- but you know that we will never revise it, if we do not fix it now. Since @dvdvgt is already working on it, we could "quickly" fix it. Otherwise somebody in the future has to wrap their heads around it again and again just to enable these tests.
I did some inference myself and tried to explain most of the parts of the case study, as I found it hard to follow without pre-existing knowledge. The Metropolis-Hastings section might still need some more explanation, though.
I tried to add some formalisation to the linear regression example. I am not sure if my interpretation is correct, and I will happily discuss it.
It's great that this was finally merged! 💯
However... with #590 we now support effects like `Emit` better. Maybe you could still refactor the emit effect to a singleton operation?