
build failure

Open • kulak opened this issue 8 years ago • 2 comments

I followed the installation instructions and got the following result in the end:

```
fka_2.10-0.10.0.1.jar:na] at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:824) [zkclient-0.8.jar:na] at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71) [zkclient-0.8.jar:na]
2017-06-15 18:57:52.487 [ZkClient-EventThread-1172-127.0.0.1:2181] ERROR state.change.logger - Controller 0 epoch 4 initiated state change for partition [response_topic_27d58be1-3863-4293-b6d6-33e63b93cbc8,3] from OfflinePartition to OnlinePartition failed
kafka.common.NoReplicaOnlineException: No replica for partition [response_topic_27d58be1-3863-4293-b6d6-33e63b93cbc8,3] is alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
    at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75) ~[kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117) [kafka_2.10-0.10.0.1.jar:na] at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap.foreach(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771) [scala-library-2.10.6.jar:na] at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:335) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:166) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:824) [zkclient-0.8.jar:na] at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71) [zkclient-0.8.jar:na]
2017-06-15 18:57:55.527 [ZkClient-EventThread-1172-127.0.0.1:2181] ERROR state.change.logger - Controller 0 epoch 4 initiated state change for partition [request_topic,4] from OfflinePartition to OnlinePartition failed
kafka.common.NoReplicaOnlineException: No replica for partition [request_topic,4] is alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
    at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75) ~[kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117) [kafka_2.10-0.10.0.1.jar:na] at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap.foreach(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771) [scala-library-2.10.6.jar:na] at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:335) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:166) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:824) [zkclient-0.8.jar:na] at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71) [zkclient-0.8.jar:na]
2017-06-15 18:58:03.770 [ZkClient-EventThread-1172-127.0.0.1:2181] ERROR state.change.logger - Controller 0 epoch 4 initiated state change for partition [request_topic,0] from OfflinePartition to OnlinePartition failed
kafka.common.NoReplicaOnlineException: No replica for partition [request_topic,0] is alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
    at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75) ~[kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117) [kafka_2.10-0.10.0.1.jar:na] at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap.foreach(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771) [scala-library-2.10.6.jar:na] at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:335) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:166) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:824) [zkclient-0.8.jar:na] at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71) [zkclient-0.8.jar:na]
2017-06-15 18:58:09.716 [ZkClient-EventThread-1172-127.0.0.1:2181] ERROR state.change.logger - Controller 0 epoch 4 initiated state change for partition [response_topic_27d58be1-3863-4293-b6d6-33e63b93cbc8,1] from OfflinePartition to OnlinePartition failed
kafka.common.NoReplicaOnlineException: No replica for partition [response_topic_27d58be1-3863-4293-b6d6-33e63b93cbc8,1] is alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
    at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75) ~[kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117) [kafka_2.10-0.10.0.1.jar:na] at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39) [scala-library-2.10.6.jar:na] at scala.collection.mutable.HashMap.foreach(HashMap.scala:98) [scala-library-2.10.6.jar:na] at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771) [scala-library-2.10.6.jar:na] at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:335) [kafka_2.10-0.10.0.1.jar:na] at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:166) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231) [kafka_2.10-0.10.0.1.jar:na] at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141) [kafka_2.10-0.10.0.1.jar:na] at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:824) [zkclient-0.8.jar:na] at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71) [zkclient-0.8.jar:na]
2017-06-15 18:58:31.939 [SyncThread:0] WARN o.a.z.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 1551ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2017-06-15 18:58:41.034 [SyncThread:0] WARN o.a.z.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 1390ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2017-06-15 18:58:45.789 [NIOServerCxn.Factory:/127.0.0.1:2181] WARN o.a.zookeeper.server.NIOServerCnxn - caught end of stream exception
org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x15cae8ff1230014, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) ~[zookeeper-3.4.6.jar:3.4.6-1569965] at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [zookeeper-3.4.6.jar:3.4.6-1569965] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
2017-06-15 18:58:51.699 [SyncThread:0] WARN o.a.z.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 3056ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2017-06-15 18:58:53.173 [SyncThread:0] WARN o.a.z.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 1468ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2017-06-15 18:58:58.269 [NIOServerCxn.Factory:/127.0.0.1:2181] WARN o.a.zookeeper.server.NIOServerCnxn - caught end of stream exception
org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x15cae8ff1230015, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) ~[zookeeper-3.4.6.jar:3.4.6-1569965] at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [zookeeper-3.4.6.jar:3.4.6-1569965] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
2017-06-15 18:59:29.866 [SyncThread:0] WARN o.a.z.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 1822ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2017-06-15 19:00:05.370 [kafka-scheduler-7] ERROR kafka.utils.KafkaScheduler - Uncaught exception in scheduled task 'kafka-recovery-point-checkpoint'
java.lang.OutOfMemoryError: Java heap space
2017-06-15 19:00:05.370 [kafka-scheduler-7] ERROR kafka.utils.KafkaScheduler - Uncaught exception in scheduled task 'kafka-recovery-point-checkpoint'
java.lang.OutOfMemoryError: Java heap space
2017-06-15 19:00:05.371 [SyncThread:0] WARN o.a.z.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 6892ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2017-06-15 19:00:12.878 [SyncThread:0] WARN o.a.z.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 1728ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2017-06-15 19:00:17.411 [SyncThread:0] WARN o.a.z.server.persistence.FileTxnLog - fsync-ing the write ahead log in SyncThread:0 took 3123ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2017-06-15 19:00:18.800 [NIOServerCxn.Factory:/127.0.0.1:2181] WARN o.a.z.server.NIOServerCnxnFactory - Ignoring unexpected runtime exception
java.nio.channels.CancelledKeyException: null
    at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) ~[na:1.8.0_131] at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) ~[na:1.8.0_131] at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:187) ~[zookeeper-3.4.6.jar:3.4.6-1569965] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
2017-06-15 19:00:21.856 [NIOServerCxn.Factory:/127.0.0.1:2181] WARN o.a.zookeeper.server.NIOServerCnxn - caught end of stream exception
org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x15cae8ff1230019, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) ~[zookeeper-3.4.6.jar:3.4.6-1569965] at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [zookeeper-3.4.6.jar:3.4.6-1569965] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
2017-06-15 19:00:21.857 [kafka-scheduler-3] ERROR kafka.utils.KafkaScheduler - Uncaught exception in scheduled task 'highwatermark-checkpoint'
java.lang.OutOfMemoryError: Java heap space
2017-06-15 19:00:21.857 [kafka-scheduler-3] ERROR kafka.utils.KafkaScheduler - Uncaught exception in scheduled task 'highwatermark-checkpoint'
java.lang.OutOfMemoryError: Java heap space
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "kafka-network-thread-0-PLAINTEXT-1"
```

```
Results :

Failed tests:
  DeviceCommandResourceTest.should_get_empty_response_with_status_204_when_command_not_processed:93->AbstractResourceTest.performRequest:157 Expected: is <204> but: was <401>
  DeviceCommandResourceTest.should_get_response_with_status_200_and_updated_command_when_command_was_processed_and_waitTimeout_is_0:124->AbstractResourceTest.performRequest:157 Expected: is <204> but: was <401>
  DeviceCommandResourceTest.should_get_response_with_status_200_and_updated_command_when_command_was_processed_and_waitTimeout_is_0_and_polling_for_device:157->AbstractResourceTest.performRequest:157 Expected: is <204> but: was <401>
  DeviceNotificationResourceTest.should_get_response_with_status_200_and_notification_when_waitTimeout_is_0_and_polling_for_device:99->AbstractResourceTest.performRequest:157 Expected: is <204> but: was <401>
  DeviceResourceTest.should_save_device_as_admin:101->AbstractResourceTest.performRequest:157 Expected: is <200> but: was <401>
  DeviceResourceTest.should_save_device_with_key:64->AbstractResourceTest.performRequest:157 Expected: is <204> but: was <401>
  JwtTokenResourceTest.should_return_access_and_refresh_tokens_for_basic_authorized_user:91->AbstractResourceTest.performRequest:157 Expected: is <201> but: was <401>

Tests in error:
  JwtClientServiceTest.should_generate_jwt_token_with_access_type » IllegalState
  JwtClientServiceTest.should_generate_jwt_token_with_refresh_type » IllegalState
  JwtClientServiceTest.should_throw_MalformedJwtException_whet_pass_token_without_expiration_and_type » IllegalState

Tests run: 94, Failures: 7, Errors: 3, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] DeviceHive Java Server ............................. SUCCESS [ 5.471 s]
[INFO] DeviceHive Shim API Interfaces .................... SUCCESS [ 2.140 s]
[INFO] DeviceHive Common Module ........................... SUCCESS [ 0.772 s]
[INFO] DeviceHive Common Dao interfaces ................... SUCCESS [ 0.942 s]
[INFO] DeviceHive Test Utils .............................. SUCCESS [ 0.793 s]
[INFO] DeviceHive Dao RDBMS Implementation ................ SUCCESS [ 1.168 s]
[INFO] DeviceHive Dao Riak Implementation ................. SUCCESS [ 4.497 s]
[INFO] DeviceHive Shim Kafka Implementation ............... SUCCESS [ 12.182 s]
[INFO] DeviceHive Backend Logic ........................... SUCCESS [03:50 min]
[INFO] DeviceHive Frontend Logic .......................... FAILURE [58:18 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:02 h
[INFO] Finished at: 2017-06-15T19:04:16-07:00
[INFO] Final Memory: 52M/142M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on project devicehive-frontend: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test failed: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/sergei/projs/devicehive-java-server/devicehive-frontend && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Dport=43684 -Dzk.port=43295 -Dkafka.port=42205 -jar /home/sergei/projs/devicehive-java-server/devicehive-frontend/target/surefire/surefirebooter5792660160783658132.jar /home/sergei/projs/devicehive-java-server/devicehive-frontend/target/surefire/surefire8840312296434414001tmp /home/sergei/projs/devicehive-java-server/devicehive-frontend/target/surefire/surefire_2644889339193228243tmp
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn -rf :devicehive-frontend
```

kulak • Jun 16 '17 17:06
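The tail of that log is probably the most telling part: the embedded Kafka broker hits `java.lang.OutOfMemoryError: Java heap space` during the devicehive-frontend tests, and Surefire then reports "The forked VM terminated without properly saying goodbye", which is consistent with the test fork running out of memory rather than with a logic error in the tests themselves. A minimal sketch of one thing to try, assuming the embedded Kafka/ZooKeeper really do run inside the Surefire-forked JVM shown in the `[ERROR] Command was ...` line and that the project's pom does not already hard-code an `<argLine>` for maven-surefire-plugin:

```sh
# Hypothetical workaround, not a confirmed fix: give the forked test JVM more heap.
# -DargLine only takes effect if the surefire configuration in the pom does not override it.
mvn clean package -DargLine="-Xmx1024m"

# Note: MAVEN_OPTS only sizes Maven's own JVM, not the forked test JVM,
# so raising MAVEN_OPTS alone would likely not help here.
```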

Hi Kulak, could you please point us to the specific instructions you followed?

tmatvienko • Jun 19 '17 12:06

I have not worked with Java for a decade, so I just followed the instructions:

`mvn clean package`

kulak • Jun 27 '17 22:06
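For anyone landing here who just needs the server artifacts while the test failures are being investigated, the standard Maven switch for skipping the test phase applies to this build as well; it sidesteps the failing frontend tests (and the embedded Kafka/ZooKeeper they start) without fixing the underlying problem:

```sh
# Build all modules without running tests; purely a workaround, not a fix.
mvn clean package -DskipTests
```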