vidma
it's due to this line: https://github.com/spark-notebook/spark-notebook/blob/1192673cddab617a837ccef09e82ff90c7b0f205/conf/scripts/init.sc#L4 — it's importing things like widgets etc. I agree it would be better not to override the popular `io` package, so maybe this line...
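For context, a minimal sketch of the shadowing problem (the names below are made up, not the actual `init.sc` contents): a wildcard import brought into notebook scope can shadow the standard `io` package, so a bare `io` no longer means `scala.io`.

```scala
object ShadowDemo {
  // Stand-in for what a notebook init script might define and import
  object fakeInit {
    object widgets {
      object io { val where = "widgets.io" } // clashes with scala.io
    }
  }

  import fakeInit.widgets._

  def main(args: Array[String]): Unit = {
    // The explicit wildcard import is in a nearer scope than the implicit
    // `scala._` import, so a bare `io` now resolves to fakeInit.widgets.io:
    println(io.where)
    // The standard package is still reachable via its fully qualified name:
    println(scala.io.Source.fromString("ok").mkString)
  }
}
```

This is why user code like `io.Source` can suddenly stop compiling inside a notebook while it works fine in a plain REPL.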
it can be fixed, but it's hard not to overlook something...
yeah, would be nice to fix it, but it might not be as simple. @andypetrella, ideas? First I'd try a dummy way: add the `@inline` keyword, like `@inline def tokenize(content:String):Seq[String]={`, but...
exactly, for the same reason I offered to try `@inline`: because the function (udf) needs to be sent to other machines, and when you reference external code created in the notebook...
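A hedged sketch of the idea (the `tokenize` signature comes from the comment above; the body here is made up): if the udf body references nothing outside its own parameters, the closure Spark ships to executors stays small and serializable, instead of dragging the enclosing notebook object along.

```scala
object TokenizeDemo {
  // Self-contained: no references to notebook-level state, so serializing
  // this function does not require serializing its enclosing context.
  def tokenize(content: String): Seq[String] =
    content.toLowerCase.split("\\W+").filter(_.nonEmpty).toSeq

  def main(args: Array[String]): Unit =
    println(tokenize("Hello, Spark notebook!"))
}
```

In a real notebook one would then wrap it as a udf (e.g. `org.apache.spark.sql.functions.udf(tokenize _)`); the key point is keeping the function body free of captured notebook state.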
3. will be fixed by #895
hmm, weird. the only config change was this: https://github.com/spark-notebook/spark-notebook/commit/b519c8ba45dc7809068d05fa6333b67f529075b9#diff-62ce533313cdad5202810504ca706009 which I think would produce even weirder errors. but I'll check whether this might have introduced any backwards incompatibilities... don't you see...
such a doc doesn't exist yet, I think. Here are a few quick notes on how it works; feel free to improve them: https://github.com/spark-notebook/spark-notebook/blob/master/docs/code_structure.md
also be sure to check both:
- logs in your browser console (with a notebook opened)
- logs of the spark-notebook server ...
well, see, this exception is raised in:
`at sun.nio.fs.UnixPath.relativize(UnixPath.java:416)`
so there's something wrong with the absolute dir where the notebooks are stored. so:
- check if the directory isn't weird...
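For reference, `Path.relativize` throws `IllegalArgumentException` when exactly one of the two paths has a root component (one absolute, one relative), which is one way a misconfigured notebooks directory could surface at that line. A minimal reproduction (the paths below are made up):

```scala
import java.nio.file.Paths

object RelativizeDemo {
  def main(args: Array[String]): Unit = {
    val absolute = Paths.get("/opt/spark-notebook/notebooks") // has a root
    val relative = Paths.get("notebooks/demo.snb")            // no root
    // Relativizing an absolute path against a relative one is illegal:
    try {
      absolute.relativize(relative)
      println("no exception")
    } catch {
      case _: IllegalArgumentException => println("IllegalArgumentException")
    }
  }
}
```

So a good first check is whether the configured notebooks dir and the paths the server computes from it are consistently absolute.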
yup, it seems that with this hadoop version (`sbt -D"hadoop.version"=2.6.0-cdh5.10.1 run -Dhttp.port=9007`) there's some connectivity problem (not yet sure where). one random bet could be changed versions in some common...