A question about using #pyspark# in #pycharm# — how can this be solved?

While learning PySpark in PyCharm I hit the following error and could not figure it out after a long time. I am hoping a professional, or someone who has run into the same problem, can point me in the right direction. The source code is as follows:

# Word count
# 1. Build the execution environment entry object
from pyspark import SparkConf, SparkContext
import os
conf = SparkConf().setMaster("local[*]").setAppName("test_spark")
sc = SparkContext(conf=conf)
os.environ['PYSPARK_PYTHON'] = "D:/python310/python.exe"

# 2. Read the data file
rdd = sc.textFile("D:/hello.txt")

# 3. Extract all the words
word_rdd = rdd.map(lambda x: x.split(" "))

# 4. Convert every word into a 2-tuple: the word as key, 1 as value
word_rdd_with_one = word_rdd.map(lambda word: (word, 1))

# 5. Group by key and sum
result_rdd = word_rdd_with_one.reduceByKey(lambda a, b: a + b)

# 6. Print the result
print(result_rdd.collect())

The error output is as follows:

23/05/30 07:27:15 WARN Shell: Did not find winutils.exe: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/05/30 07:27:16 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/05/30 07:27:21 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\Users\����\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\worker.py", line 830, in main
  File "C:\Users\����\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\worker.py", line 820, in process
  File "C:\Users\����\venv\lib\site-packages\pyspark\rdd.py", line 5405, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\Users\����\venv\lib\site-packages\pyspark\rdd.py", line 5405, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\Users\����\venv\lib\site-packages\pyspark\rdd.py", line 828, in func
    return f(iterator)
  File "C:\Users\����\venv\lib\site-packages\pyspark\rdd.py", line 3964, in combineLocally
    merger.mergeValues(iterator)
  File "C:\Users\����\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\shuffle.py", line 258, in mergeValues
    d[k] = comb(d[k], v) if k in d else creator(v)
TypeError: unhashable type: 'list'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1211)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1217)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)
23/05/30 07:27:21 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (LAPTOP-RO67A6OV executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\Users\����\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\worker.py", line 830, in main
  File "C:\Users\����\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\worker.py", line 820, in process
  File "C:\Users\����\venv\lib\site-packages\pyspark\rdd.py", line 5405, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\Users\����\venv\lib\site-packages\pyspark\rdd.py", line 5405, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\Users\����\venv\lib\site-packages\pyspark\rdd.py", line 828, in func
    return f(iterator)
  File "C:\Users\����\venv\lib\site-packages\pyspark\rdd.py", line 3964, in combineLocally
    merger.mergeValues(iterator)
  File "C:\Users\����\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\shuffle.py", line 258, in mergeValues
    d[k] = comb(d[k], v) if k in d else creator(v)
TypeError: unhashable type: 'list'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1211)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1217)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)

23/05/30 07:27:21 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "D:\pyspark\练习案例.py", line 22, in <module>
    print(result_rdd.collect())
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 1814, in collect
    sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
  File "C:\Users\联想\venv\lib\site-packages\py4j\java_gateway.py", line 1322, in __call__
    return_value = get_return_value(
  File "C:\Users\联想\venv\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (LAPTOP-RO67A6OV executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\Users\联想\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\worker.py", line 830, in main
  File "C:\Users\联想\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\worker.py", line 820, in process
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 5405, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 5405, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 828, in func
    return f(iterator)
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 3964, in combineLocally
    merger.mergeValues(iterator)
  File "C:\Users\联想\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\shuffle.py", line 258, in mergeValues
    d[k] = comb(d[k], v) if k in d else creator(v)
TypeError: unhashable type: 'list'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1211)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1217)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2785)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2721)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2720)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2720)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1206)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1206)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1206)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2984)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2923)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2912)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:971)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2263)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2284)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2303)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2328)
    at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1019)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:405)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:1018)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\Users\联想\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\worker.py", line 830, in main
  File "C:\Users\联想\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\worker.py", line 820, in process
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 5405, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 5405, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 828, in func
    return f(iterator)
  File "C:\Users\联想\venv\lib\site-packages\pyspark\rdd.py", line 3964, in combineLocally
    merger.mergeValues(iterator)
  File "C:\Users\联想\venv\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\shuffle.py", line 258, in mergeValues
    d[k] = comb(d[k], v) if k in d else creator(v)
TypeError: unhashable type: 'list'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1211)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1217)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    ... 1 more

[Stage 0:>                                                          (0 + 1) / 2]
Process finished with exit code 1


The Java installation path is "D:\JDK\bin\java.exe".

  • The article "windows下安装pyspark及pycharm配置最完整详细教程" may already contain the answer you are looking for; it is worth a read.
  • Beyond that, the "pycharm配置pyspark环境" section of the blog post "pycharm配置pyspark环境" may also solve your problem; read the excerpt below carefully or jump to the source post.
  • This blog post is a fairly reliable reference (a script-based version of the same setup is sketched right after this list):
    https://blog.csdn.net/ringsuling/article/details/84448369
    Environment variables used in the configuration (the original author notes the setup was not yet fully working when these notes were written):

    # Spark installation location
    C:\file\spark_package\spark-2.4.4-bin-hadoop2.7
    Copy
    C:\file\spark_package\spark-2.4.4-bin-hadoop2.7\python\pyspark
    into
    C:\Users\Carry Wan\AppData\Local\Programs\Python\Python37-32\Lib\site-packages
    so that it ends up at
    C:\Users\Carry Wan\AppData\Local\Programs\Python\Python37-32\Lib\site-packages\pyspark

    SPARK_HOME   C:\file\spark_package\spark-2.4.4-bin-hadoop2.7
    PYTHONPATH   C:\file\spark_package\spark-2.4.4-bin-hadoop2.7\python

    pip install py4j

    JAVA_HOME    C:\Program Files\Java\jdk1.8.0_162   (JAVA_HOME should point at the JDK root directory, not its bin subdirectory)

    Add the following archives to PYTHONPATH (or to the PyCharm interpreter paths):
    C:\file\spark_package\spark-2.4.4-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip
    C:\file\spark_package\spark-2.4.4-bin-hadoop2.7\python\lib\pyspark.zip
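If you would rather keep this setup inside the script than in PyCharm's environment-variable dialog, the same variables can be set with os.environ before the SparkContext is created. This is only a sketch: the paths below are the example paths quoted from the blog above and must be replaced with your own installation locations.

import os
import sys

# Example paths quoted from the blog above -- replace with your own locations.
os.environ['SPARK_HOME'] = r"C:\file\spark_package\spark-2.4.4-bin-hadoop2.7"
os.environ['JAVA_HOME'] = r"C:\Program Files\Java\jdk1.8.0_162"
os.environ['PYSPARK_PYTHON'] = sys.executable  # run the workers with the current interpreter

# Make the bundled pyspark and py4j importable instead of copying them into site-packages.
spark_python = os.path.join(os.environ['SPARK_HOME'], "python")
sys.path.insert(0, spark_python)
sys.path.insert(0, os.path.join(spark_python, "lib", "py4j-0.10.7-src.zip"))

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("spark_env_check")
sc = SparkContext(conf=conf)
print(sc.version)  # prints the Spark version if the environment is wired up correctly
sc.stop()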
    
    
    
    

Two things stand out in the error output:
The first is the winutils.exe warning. It can be cleared by downloading winutils.exe and setting HADOOP_HOME (or hadoop.home.dir) to point at it, but it is only a warning and is not what aborts the job.
The second is the actual error inside the PythonRDD: TypeError: unhashable type: 'list'. rdd.map(lambda x: x.split(" ")) turns each line into a list of words, and a list cannot be used as a key by reduceByKey. Use a hashable key instead, for example by converting the list to a tuple or, better, by flattening it into individual word strings.
Try changing word_rdd = rdd.map(lambda x: x.split(" ")) to word_rdd = rdd.flatMap(lambda x: x.split(" ")); a corrected sketch follows below.
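A minimal corrected version of the word-count script, based on the two points above. It keeps the file path and interpreter path from the question (adjust them for your machine); the HADOOP_HOME line is optional and its path is only a placeholder for wherever winutils.exe is installed.

# Word count -- corrected sketch
import os
from pyspark import SparkConf, SparkContext

# Set the worker interpreter BEFORE creating the SparkContext so the Python
# workers are launched with it (path taken from the question).
os.environ['PYSPARK_PYTHON'] = "D:/python310/python.exe"
# Optional: silence the winutils.exe warning by pointing HADOOP_HOME at a
# directory that contains bin\winutils.exe (placeholder path, adjust as needed).
# os.environ['HADOOP_HOME'] = "D:/hadoop"

conf = SparkConf().setMaster("local[*]").setAppName("test_spark")
sc = SparkContext(conf=conf)

rdd = sc.textFile("D:/hello.txt")

# flatMap flattens each line's word list into individual word strings, so the
# keys that reach reduceByKey are hashable str objects instead of lists.
word_rdd = rdd.flatMap(lambda x: x.split(" "))
word_rdd_with_one = word_rdd.map(lambda word: (word, 1))
result_rdd = word_rdd_with_one.reduceByKey(lambda a, b: a + b)

print(result_rdd.collect())
sc.stop()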