Running a MapReduce program from Eclipse: a space in the Windows 10 user name breaks generation & reading of tmp files

The error output is as follows:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.10.0/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/slf4j/slf4j-simple/1.6.6/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2019-09-05 10:27:02,488 WARN  [main] impl.MetricsConfig (MetricsConfig.java:134) - Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties
2019-09-05 10:27:04,715 WARN  [main] mapreduce.JobResourceUploader (JobResourceUploader.java:147) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-09-05 10:27:04,743 WARN  [main] mapreduce.JobResourceUploader (JobResourceUploader.java:480) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2019-09-05 10:27:10,228 WARN  [pool-8-thread-1] impl.MetricsSystemImpl (MetricsSystemImpl.java:151) - JobTracker metrics system already initialized!
2019-09-05 10:27:10,326 WARN  [Thread-6] mapred.LocalJobRunner$Job (LocalJobRunner.java:590) - job_local64686135_0001
java.lang.Exception: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) ~[hadoop-mapreduce-client-common-3.1.2.jar:?]
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:559) [hadoop-mapreduce-client-common-3.1.2.jar:?]
Caused by: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:377) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:347) ~[hadoop-mapreduce-client-common-3.1.2.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_221]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_221]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_221]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_221]
    at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_221]
Caused by: java.io.FileNotFoundException: File D:/tmp/hadoop-William%20Scott/mapred/local/localRunner/icss/jobcache/job_local64686135_0001/attempt_local64686135_0001_m_000000_0/output/file.out.index does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:211) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:152) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:71) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:62) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:57) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.copyMapOutput(LocalFetcher.java:125) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.doCopy(LocalFetcher.java:103) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.run(LocalFetcher.java:86) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]

My current situation: Windows 10 is signed in with a Microsoft account, which automatically inserts a space between the first and last name, and changing the account is not very convenient. The Hadoop runtime environment lives on drive D, though not in the root directory.
Is there a way to make the local tmp files get generated somewhere else, or to rename the hadoop-William%20Scott folder?
Thanks.

“Is there a way to make the local tmp files get generated somewhere else?”
Yes, but merely editing the local core-site.xml file and adding

<property>
    <name>hadoop.tmp.dir</name>
    <value>X:/xxx/tmp/xxx</value>
</property>

has no effect on its own. You need to copy the modified file into the project's src folder (for a Maven project, src/main/resources); Maven will then load it automatically when the program runs.
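For reference, a complete file along those lines might look like the sketch below; the D:/tmp/hadoop value is only an example, the point being a local path with no spaces in it:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Redirect Hadoop's local temp files away from the default
         /tmp/hadoop-${user.name}, which picks up the space in the account name -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>D:/tmp/hadoop</value>
    </property>
</configuration>

Placed in src/main/resources, the file ends up on the runtime classpath, where Hadoop's Configuration picks up core-site.xml as a default resource.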

I had exactly the same problem:

Caused by: java.io.FileNotFoundException: D:/tmp/hadoop-Kerry%20Yang/mapred/local/localRunner/Kerry%20Yang/jobcache/job_local976890414_0001/attempt_local976890414_0001_m_000000_0/output/file.out.index

I tried the core-site.xml change suggested above, but it only fixed the first Kerry%20Yang; the second one would not go away no matter what.
After many attempts, it finally worked. I changed my approach: since no user is specified, the program insists on using the Windows account Kerry<space>Yang, so we just specify a user ourselves, with the user name kerry. The output then came out fine.

System.setProperty("HADOOP_USER_NAME", "kerry")
System.setProperty("user.name", "kerry")

https://blog.csdn.net/dslztx/article/details/46445725

static {
    // On Windows, configure the Hadoop runtime environment (user and temp dir)
    if (System.getProperty("os.name").toLowerCase().indexOf("windows") != -1) {
        // Set the user for remote HDFS operations to icss
        System.setProperty("HADOOP_USER_NAME", "icss");
        //System.setProperty("hadoop.home.dir", "D:/dev/hadoop-3.0.0");
        System.setProperty("hadoop.tmp.dir", "D:/tmp/act");
    }
}
Try setting its temporary cache directory like this.
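In case the system-property route still does not change the second path segment, another option (my own sketch, not from the posters above; the class name and path are placeholders) is to set hadoop.tmp.dir directly on the job's Configuration in the driver, which sidesteps any question of whether the system property gets picked up:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TmpDirDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Override the default /tmp/hadoop-${user.name} with a space-free path
        conf.set("hadoop.tmp.dir", "D:/tmp/act");
        Job job = Job.getInstance(conf, "my job");
        // ... configure mapper/reducer/input/output as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Either way, the goal is the same: make every component of the local temp path expand to something without spaces before the job starts.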