Hive reports an error after executing an INSERT command

Symptoms and background

Commands executed in Hive:

INSERT INTO TABLE hive_database.clustered_table VALUES
("001","user01","female",20,"Renshi"),
("002","user02","female",20,"Caishi"),
("003","user03","male",20,"Staff"),
("004","user04","male",20,"Manage"),
("005","user05","female",20,"Staff"),
("006","user06","male",20,"Caishi"),
("007","user07","female",20,"Staff"),
("008","user08","male",20,"Renshi");
select * from hive_database.clustered_table;
Run result and error output:
Query ID = hizuna_20221205083609_88e5a8d6-24c5-4bc7-bd97-c846b1670f7e
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks determined at compile time: 3
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1670199452986_0001, Tracking URL = http://hadoop102:8088/proxy/application_1670199452986_0001/
Kill Command = /opt/module/hadoop-3.1.3/bin/mapred job  -kill job_1670199452986_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2022-12-05 08:37:01,591 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1670199452986_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
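"Return code 2" from MapRedTask is a generic failure signal; the actual cause is usually only visible in the YARN container logs. A sketch of how to pull them for this job (the application ID is taken from the tracking URL in the output above; `yarn` must be on the PATH of a cluster node):

```shell
# Fetch the aggregated YARN logs for the failed application and
# surface lines that typically explain the failure (OOM, killed containers).
# APP_ID comes from the job output above.
APP_ID=application_1670199452986_0001
if command -v yarn >/dev/null 2>&1; then
  yarn logs -applicationId "$APP_ID" | grep -iE "error|exception|killed" | head -n 20
else
  echo "yarn not found on PATH; run this on a cluster node"
fi
```

With 0 mappers and 0 reducers launched, a common finding in these logs is that containers were killed for exceeding memory limits, which is consistent with the "increased memory" attempt below.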
My approach and what I tried

Increased memory

Desired result

The SELECT should display the inserted data.

Versions

hadoop 3.1.3
hive 3.1.2
no HBase

Solution:
Run the task with Hive in local mode.

  1. Temporary (current session only):
    set hive.exec.mode.local.auto=true;

  2. Permanent:
    Add the following property to hive-site.xml:
    <property>
      <name>hive.exec.mode.local.auto</name>
      <value>true</value>
    </property>

    Restart Hive after changing the configuration.
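Note that hive.exec.mode.local.auto only makes Hive *consider* local mode; a query actually runs locally only when it is small enough. The thresholds are governed by two further properties (the values shown are the Hive defaults), which can also be set in hive-site.xml if larger jobs should qualify:

```xml
<!-- Local mode is chosen only when total input size is below this (bytes; 128 MB) -->
<property>
  <name>hive.exec.mode.local.auto.inputbytes.max</name>
  <value>134217728</value>
</property>
<!-- ...and the number of input files is below this -->
<property>
  <name>hive.exec.mode.local.auto.input.files.max</name>
  <value>4</value>
</property>
```

An INSERT ... VALUES of eight rows falls well under these defaults, so local mode should apply without tuning them.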