Flink SQL Client syncing Kafka to Hive: part files are generated, but Hive can't query the data.

Flink 1.13.2, one partition per minute. The part files have been written, but it seems the partitions are never committed to the Hive metastore, so nothing shows up when querying?

/user/hive/warehouse/event_track2/dt=2021-09-27/hr=19/mi=57

part-050b138c-65f7-43ff-813b-8891e3cab248-0-1
part-050b138c-65f7-43ff-813b-8891e3cab248-0-2

---------  The SQL is as follows:
USE CATALOG hive;
SET table.sql-dialect=default;
CREATE TABLE kafka_event_track2 (
  `distinct_id` STRING,
  `time` BIGINT,
  `event` STRING,
  `target` STRING,
  `project` STRING,
  `manufacturer` STRING,
  `model` STRING,
  `os` STRING,
  `os_version` STRING,
  `app_version` STRING,
  `wifi` BOOLEAN,
  `ip` STRING,
  `province` STRING,
  `city` STRING,
  `user_agent` STRING,
  `page_name` STRING,
  `url` STRING,
  `referer` STRING,
  `screen_width` INT,
  `screen_height` INT,
  ts_ltz AS TO_TIMESTAMP_LTZ(`time`, 3),
  WATERMARK FOR ts_ltz AS ts_ltz - INTERVAL '5' SECOND -- Define watermark on TIMESTAMP_LTZ column
) WITH (
  'connector' = 'kafka',
  'topic' = 'event_track',
  'properties.bootstrap.servers' = '10.219.8.107:9092,10.219.8.108:9092,10.219.8.109:9092',
  'properties.group.id' = 'event_track_group2_1',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json',
  'json.ignore-parse-errors' = 'true'
);
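
-- Optional sanity check (not part of the original script): preview ts_ltz and the
-- derived dt/hr/mi partition values before wiring up the INSERT further below, e.g.
-- SELECT `time`, ts_ltz,
--        DATE_FORMAT(ts_ltz, 'yyyy-MM-dd') AS dt,
--        DATE_FORMAT(ts_ltz, 'HH') AS hr,
--        DATE_FORMAT(ts_ltz, 'mm') AS mi
-- FROM kafka_event_track2;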

SET table.sql-dialect=hive;
CREATE TABLE event_track2 (
  `distinct_id` STRING,
  `time` BIGINT,
  `event` STRING,
  `target` STRING,
  `project` STRING,
  `manufacturer` STRING,
  `model` STRING,
  `os` STRING,
  `os_version` STRING,
  `app_version` STRING,
  `wifi` BOOLEAN,
  `ip` STRING,
  `province` STRING,
  `city` STRING,
  `user_agent` STRING,
  `page_name` STRING,
  `url` STRING,
  `referer` STRING,
  `screen_width` INT,
  `screen_height` INT
) PARTITIONED BY (dt STRING, hr STRING, mi STRING) STORED AS parquet TBLPROPERTIES (
  'partition.time-extractor.timestamp-pattern'='$dt $hr:$mi:00',
  'sink.partition-commit.trigger'='partition-time',
  'sink.partition-commit.delay'='0S',
  'sink.partition-commit.policy.kind'='metastore,success-file'
);

-- select * from event_track2;

SET table.sql-dialect=default;
 
INSERT INTO `event_track2`
SELECT  -- column order must match the sink table's declared order
  `distinct_id`, `time`, `event`, `target`, `project`, `manufacturer`, `model`,
  `os`, `os_version`, `app_version`, `wifi`, `ip`, `province`, `city`,
  `user_agent`, `page_name`, `url`, `referer`, `screen_width`, `screen_height`,
  DATE_FORMAT(ts_ltz, 'yyyy-MM-dd'), DATE_FORMAT(ts_ltz, 'HH'), DATE_FORMAT(ts_ltz, 'mm')
FROM kafka_event_track2;


Running a repair in Hive fixes it:

msck repair table table_name;
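
For this particular table that would be, for example (a sketch, run in Hive; SHOW PARTITIONS just confirms the metastore now sees the directories):

msck repair table event_track2;
show partitions event_track2;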

Please try changing the Hive table's storage format and then insert the data again.

I removed STORED AS parquet and used the default text format; it still doesn't work. The files are written to disk, but the Hive metadata just never gets synced.
Current Hive version: 1.1.

```yaml
# sql-client-defaults.yaml
configuration:
  execution.checkpointing.interval: 1000
catalogs:
  # A typical catalog definition looks like:
  - name: hive
    type: hive
    hive-conf-dir: /etc/hive/conf.cloudera.hive/
    default-database: default
```

In the end I removed this setting, falling back to the default process-time trigger, and it worked. No idea why!!!!

 'sink.partition-commit.trigger'='partition-time',
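
A plausible explanation (an assumption, not verified on this setup): when the watermark is defined on a TIMESTAMP_LTZ column, the partition-time trigger interprets the extracted partition time using 'sink.partition-commit.watermark-time-zone', which defaults to 'UTC' in Flink 1.13, while DATE_FORMAT(ts_ltz, ...) renders dt/hr/mi in the session time zone. With an 8-hour offset, each partition would only be committed about 8 hours late, which looks as if the commit never happens. A sketch that keeps the partition-time trigger, assuming the session time zone is Asia/Shanghai and that this setup accepts ALTER TABLE ... SET TBLPROPERTIES:

SET table.sql-dialect=hive;
-- Sketch, not the poster's verified fix: align the committer's watermark time zone
-- with the time zone used to render the dt/hr/mi values (assumed Asia/Shanghai here),
-- so the watermark can actually pass each partition's end time.
ALTER TABLE event_track2 SET TBLPROPERTIES (
  'sink.partition-commit.trigger'='partition-time',
  'sink.partition-commit.watermark-time-zone'='Asia/Shanghai'
);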