How do I fix "Caused by: java.io.IOException: Type mismatch in value from map" when doing a total sort in Hadoop?

1. The full error: Caused by: java.io.IOException: Type mismatch in value from map: expected com.atguigu.mapreduce.writableComparable.FlowBean, received org.apache.hadoop.io.Text
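For context: this message comes from the check that the map-side output collector runs on every `context.write` — the runtime class of each emitted key/value must exactly equal the configured map-output key/value class. A minimal stand-alone sketch of that check (a hypothetical `Collector` class written for illustration, not Hadoop's actual source):

```java
import java.io.IOException;

// Sketch of the type check performed when the mapper emits a record:
// the emitted key and value classes must match the classes configured
// for the map output, otherwise an IOException like the one above is thrown.
public class Collector {
    private final Class<?> keyClass;
    private final Class<?> valClass;

    public Collector(Class<?> keyClass, Class<?> valClass) {
        this.keyClass = keyClass;
        this.valClass = valClass;
    }

    public void collect(Object key, Object value) throws IOException {
        if (key.getClass() != keyClass) {
            throw new IOException("Type mismatch in key from map: expected "
                    + keyClass.getName() + ", received " + key.getClass().getName());
        }
        if (value.getClass() != valClass) {
            throw new IOException("Type mismatch in value from map: expected "
                    + valClass.getName() + ", received " + value.getClass().getName());
        }
    }
}
```

So the error means the mapper emitted a `Text` value while the job was configured to expect `FlowBean` values from the map phase.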

2. Mapper stage:
public class FlowMapper extends Mapper<LongWritable, Text, FlowBean, Text>
// write out
private FlowBean outK = new FlowBean();
private Text outV = new Text();
context.write(outK, outV);
The expected mapper output K-V types are FlowBean and Text.

3. Reducer stage:
public class FlowReducer extends Reducer<FlowBean, Text, Text, FlowBean>
The reducer input types are FlowBean and Text; the output types are Text and FlowBean.

4. Driver stage:
Set the mapper output K-V types:
job.setMapOutputKeyClass(FlowBean.class);
job.setOutputValueClass(Text.class);

//5. Set the final output K-V types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(FlowBean.class);
I don't understand why this raises the error "Type mismatch in value from map".

The full code is below:


mapper:
public class FlowMapper extends Mapper<LongWritable, Text, FlowBean, Text> {

    private FlowBean outK = new FlowBean();
    private Text outV = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        //1. Get one line of input
        String line = value.toString();

        //2. Split it
        String[] split = line.split("\t");

        //3. Populate the FlowBean
        outV.set(split[0]);
        outK.setUpFlow(Long.parseLong(split[1]));
        outK.setDownFlow(Long.parseLong(split[2]));
        outK.setSumFlow();

        //4. Write out
        context.write(outK, outV);
    }
}
Reducer:
public class FlowReducer extends Reducer<FlowBean, Text, Text, FlowBean> {
    @Override
    protected void reduce(FlowBean key, Iterable<Text> values, Context context) throws IOException, InterruptedException {

        // phone numbers (the values)
        for (Text value : values) {

            context.write(value, key);
        }
    }
}
driver:
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        //1. Get the job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        //2. Set the jar by class
        job.setJarByClass(FlowDriver.class);

        //3. Wire up the mapper and reducer
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);

        //4. Set the mapper output K-V types
        job.setMapOutputKeyClass(FlowBean.class);
        job.setOutputValueClass(Text.class);

        //5. Set the final output K-V types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        //6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path("C:\\FileSource\\hadoops\\output3"));
        FileOutputFormat.setOutputPath(job, new Path("C:\\FileSource\\hadoops\\output5"));

        //7. Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
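One thing worth double-checking (an observation on the posted code, not a confirmed diagnosis): step 4 calls job.setOutputValueClass(Text.class) rather than job.setMapOutputValueClass(Text.class). When setMapOutputValueClass is never called, the map-output value class falls back to the final output value class — which step 5 then sets to FlowBean — while the mapper emits Text values, producing exactly this mismatch. Under that assumption, the driver fragment would read:

```java
//4. Set the mapper output K-V types
job.setMapOutputKeyClass(FlowBean.class);
job.setMapOutputValueClass(Text.class);   // was: job.setOutputValueClass(Text.class)

//5. Set the final output K-V types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(FlowBean.class);
```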
