I'm a big data beginner. I just finished setting up Hadoop 3.2, Spark 3.2, and Hive 3.1.3, and on top of them deployed Kylin 4.0 to support ad-hoc queries over large data sets.
All of these services start up and run normally. But when I use Kylin to build a cube, the job fails with the error below, and I've been stuck on it for several days.
2022-06-02 13:21:45,823 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@4861a8cc{/,null,AVAILABLE,@Spark}
2022-06-02 13:21:45,825 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@79b5e296{/api,null,AVAILABLE,@Spark}
2022-06-02 13:21:45,826 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@658b15b3{/jobs/job/kill,null,AVAILABLE,@Spark}
2022-06-02 13:21:45,827 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@6a480863{/stages/stage/kill,null,AVAILABLE,@Spark}
2022-06-02 13:21:46,756 INFO [pool-1-thread-1] handler.ContextHandler : Started o.s.j.s.ServletContextHandler@7d14f5b7{/metrics/json,null,AVAILABLE,@Spark}
2022-06-02 13:21:47,434 ERROR [Thread-1] application.JobMonitor : Job failed the 1 times.
java.lang.VerifyError: class org.apache.spark.sql.catalyst.expressions.DictEncode overrides final method children.()Lscala/collection/Seq;
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.kylin.engine.spark.job.UdfManager.registerBuiltInFunc(UdfManager.scala:36)
at org.apache.kylin.engine.spark.job.UdfManager.<init>(UdfManager.scala:33)
at org.apache.kylin.engine.spark.job.UdfManager$.create(UdfManager.scala:95)
at org.apache.kylin.engine.spark.job.UdfManager.create(UdfManager.scala)
at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:298)
at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93)
at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2022-06-02 13:21:47,436 ERROR [Thread-1] application.JobWorkSpace : Job failed eventually. Reason: Error occurred when generate retry configuration.
java.util.NoSuchElementException: spark.executor.memory
at org.apache.spark.SparkConf.$anonfun$get$1(SparkConf.scala:245)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.SparkConf.get(SparkConf.scala:245)
at org.apache.spark.autoheal.ExceptionTerminator$.incMemory(ExceptionTerminator.scala:70)
at org.apache.spark.autoheal.ExceptionTerminator$.resolveException(ExceptionTerminator.scala:45)
at org.apache.spark.application.JobMonitor.handleResourceLack(JobMonitor.scala:53)
at org.apache.spark.application.JobMonitor$$anon$1.onReceive(JobMonitor.scala:33)
at org.apache.spark.scheduler.KylinJobEventLoop.$anonfun$onReceive$1(KylinJobEventLoop.scala:42)
at org.apache.spark.scheduler.KylinJobEventLoop.$anonfun$onReceive$1$adapted(KylinJobEventLoop.scala:42)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at org.apache.spark.scheduler.KylinJobEventLoop.onReceive(KylinJobEventLoop.scala:42)
at org.apache.spark.scheduler.KylinJobEventLoop.onReceive(KylinJobEventLoop.scala:29)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
2022-06-02 13:21:47,442 INFO [Thread-1] application.SparkApplication :
==========================[RESOURCE DETECT BEFORE CUBE]===============================
spark plans :
==========================[RESOURCE DETECT BEFORE CUBE]===============================
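If I'm reading the second error correctly, it is only a follow-up failure: after the VerifyError, JobMonitor tries to generate a retry configuration and calls SparkConf.get("spark.executor.memory"), and SparkConf.get throws java.util.NoSuchElementException when a key has no value and no default. Here is a small probe I wrote to convince myself of that (this is just my own test code, not anything from Kylin, and the "512m" default is made up):

import org.apache.spark.SparkConf

object SparkConfProbe {
  def main(args: Array[String]): Unit = {
    // Empty conf, the same situation as when spark.executor.memory was never set.
    val conf = new SparkConf(loadDefaults = false)
    println(conf.getOption("spark.executor.memory"))   // None
    println(conf.get("spark.executor.memory", "512m")) // falls back to the given default
    conf.get("spark.executor.memory")                  // throws java.util.NoSuchElementException
  }
}

So I think the NoSuchElementException just hides the real problem, and the VerifyError above is what I actually need to fix.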
I've read a lot of posts. None of them hit exactly this error, but similar problems are usually blamed on version mismatches. My versions seem to meet the requirements, though, and I've also tried adjusting them, but I still get the same error.
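From what I understand, a java.lang.VerifyError like this means the DictEncode class shipped with Kylin was compiled against a Spark where children() could still be overridden, but at runtime it gets loaded against a Spark where that method has become final, so it would still be a version/binary-compatibility issue even though every component starts fine on its own. Below is a little probe I run with the same Spark that Kylin uses, just to see which Spark version is actually on the classpath and whether children() is final there (picking TernaryExpression as the base class is my own assumption, not something taken from the Kylin source):

import java.lang.reflect.Modifier

object SparkCompatProbe {
  def main(args: Array[String]): Unit = {
    // Which Spark do the catalyst classes actually come from?
    println(s"Spark version on classpath: ${org.apache.spark.SPARK_VERSION}")

    // The VerifyError complains that DictEncode overrides a final children() method,
    // so check whether children() is final in the expression base class I assume it extends.
    val cls = Class.forName("org.apache.spark.sql.catalyst.expressions.TernaryExpression")
    val finalChildren = cls.getMethods.exists { m =>
      m.getName == "children" && Modifier.isFinal(m.getModifiers)
    }
    println(s"children() is final here: $finalChildren")
  }
}

If this prints true for the Spark that the Kylin build job actually uses, that would match the VerifyError, which is why I still suspect the Spark version even though all the services themselves run normally. Has anyone run into this and found which combination of versions actually works?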