Statements executed in Hive:
INSERT INTO TABLE hive_database.clustered_table VALUES
("001","user01","female",20,"Renshi"),
("002","user02","female",20,"Caishi"),
("003","user03","male",20,"Staff"),
("004","user04","male",20,"Manage"),
("005","user05","female",20,"Staff"),
("006","user06","male",20,"Caishi"),
("007","user07","female",20,"Staff"),
("008","user08","male",20,"Renshi");
select * from hive_database.clustered_table;
Query ID = hizuna_20221205083609_88e5a8d6-24c5-4bc7-bd97-c846b1670f7e
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks determined at compile time: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1670199452986_0001, Tracking URL = http://hadoop102:8088/proxy/application_1670199452986_0001/
Kill Command = /opt/module/hadoop-3.1.3/bin/mapred job -kill job_1670199452986_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2022-12-05 08:37:01,591 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1670199452986_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
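Return code 2 from MapRedTask is a generic wrapper error; the real cause is in the logs of the failed YARN application. Assuming the Hadoop CLI is on the PATH, the aggregated container logs for the job above can be pulled with:

```shell
# Fetch the container logs for the failed job
# (the application id comes from the Tracking URL in the Hive output above)
yarn logs -applicationId application_1670199452986_0001
```

On small or single-node setups a common cause is a container being killed for exceeding its memory limit, which is consistent with both workarounds below (more memory, or bypassing YARN via local mode).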
One workaround is to expand the available memory; after that, `select` displays the inserted data.
Environment:
Hadoop 3.1.3
Hive 3.1.2
No HBase installed
Solution:
Set Hive to run the job in local mode. For the current session:
set hive.exec.mode.local.auto=true;
To make the setting permanent, add the property to hive-site.xml:
<property>
    <name>hive.exec.mode.local.auto</name>
    <value>true</value>
</property>
Restart Hive after changing the configuration.
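Note that hive.exec.mode.local.auto only enables automatic local mode; Hive still decides per query whether the job is small enough to run locally. The decision is controlled by two further properties (the values shown are the Hive defaults, listed here only for illustration):

```sql
-- Run locally only if the total input size is under this many bytes (default 128 MB)
set hive.exec.mode.local.auto.inputbytes.max=134217728;
-- Run locally only if the query reads at most this many input files
set hive.exec.mode.local.auto.input.files.max=4;
```

An eight-row insert like the one above is far below both limits, so it executes as a local job and avoids the failing MapReduce submission.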