
AgePredictor example is not working on the Windows platform

Open · tomaszbozek opened this issue on Nov 29, 2017 · 0 comments

Relates to: https://issues.apache.org/jira/browse/SPARK-17810

I was trying to run the example from https://wiki.apache.org/tika/AgeDetectionParser on Windows 8.1, but it throws an exception:

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/examples/AgePredictor/spark-warehouse
	at org.apache.hadoop.fs.Path.initialize(Path.java:206)
	at org.apache.hadoop.fs.Path.<init>(Path.java:172)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
	at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
	at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
	at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
	at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
	at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:328)
	at edu.usc.irds.agepredictor.authorage.AgePredicterLocal.predictAge(AgePredicterLocal.java:108)
	at edu.usc.irds.agepredictor.authorage.AgePredicterLocal.main(AgePredicterLocal.java:141)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:C:/examples/AgePredictor/spark-warehouse
	at java.net.URI.checkPath(Unknown Source)
	at java.net.URI.<init>(Unknown Source)
	at org.apache.hadoop.fs.Path.initialize(Path.java:203)
	... 14 more

A simple workaround from https://issues.apache.org/jira/browse/SPARK-17810 is to override the Spark warehouse directory with the spark.sql.warehouse.dir option, e.g. by passing -Dspark.sql.warehouse.dir="file:/tmp/spark-warehouse" to the JVM.
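The same setting can also be applied programmatically when the SparkSession is built, which avoids having to pass a system property. This is only a minimal sketch, not the AgePredictor code itself; the class name and warehouse path below are illustrative assumptions.

```java
import org.apache.spark.sql.SparkSession;

public class WarehouseDirWorkaround {
    public static void main(String[] args) {
        // Point the SQL warehouse at an explicit file: URI so Hadoop's Path
        // does not treat the Windows drive-letter path as a relative URI.
        SparkSession spark = SparkSession.builder()
                .appName("AgePredictorExample")
                .master("local[*]")
                .config("spark.sql.warehouse.dir", "file:///tmp/spark-warehouse")
                .getOrCreate();

        // ... run the age prediction example as usual ...

        spark.stop();
    }
}
```

Either way, the key point is that the warehouse directory must be given as a well-formed absolute file: URI rather than the default derived from the Windows working directory.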
