Querying a Hive table by its partition field works without quotes but fails with quotes. How can this be fixed?

Querying the Hive table by its partition field works without quotes, but fails once quotes are added.
The Hive log is as follows:

NoViableAltException(155@[941:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | createMaterializedViewStatement | dropViewStatement | dropMaterializedViewStatement | createFunctionStatement | createMacroStatement | dropFunctionStatement | reloadFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement | ( grantPrivileges )=> grantPrivileges | ( revokePrivileges )=> revokePrivileges | showGrants | showRoleGrants | showRolePrincipals | showRoles | grantRole | revokeRole | setRole | showCurrentRole | abortTransactionStatement | killQueryStatement | resourcePlanDdlStatements );])
    at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
    at org.antlr.runtime.DFA.predict(DFA.java:116)
    at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:4480)
    at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:2578)
    at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1433)
    at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:230)
    at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:79)
    at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:72)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:617)
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1870)
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1817)
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1812)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
    at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197)
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:262)
    at org.apache.hive.service.cli.operation.Operation.run(Operation.java:260)
    at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:575)
    at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:561)
    at sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
    at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
    at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
    at com.sun.proxy.$Proxy69.executeStatementAsync(Unknown Source)
    at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315)
    at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:566)
    at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1557)
    at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1542)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:647)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
FAILED: ParseException line 1:5 cannot recognize input near 'SHOW' 'INDEX' 'ON' in ddl statement

With quotes around the p_date partition value, p_date='20230130', the query fails with the error below:

[screenshot 1: Hive error output]

Without quotes, p_date=20230130, the query runs normally:

[screenshot 2: successful query result]

The partition column type was declared as string when the table was created: partitioned by (p_date string) stored as orc;

How can I make the query work both with and without quotes?
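
For reference, a minimal sketch of the two query forms at issue (the table name datax_job_log is taken from later in the thread; adjust as needed). With a string-typed partition column, the quoted form is the type-correct one; the unquoted form only succeeds through Hive's implicit type conversion on the comparison:

    -- quoted: string literal compared against the string partition column
    SELECT * FROM datax_job_log WHERE p_date = '20230130';

    -- unquoted: an integer literal, so Hive implicitly converts for the comparison
    SELECT * FROM datax_job_log WHERE p_date = 20230130;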

So you are partitioning on a column that also exists in the table? Without quotes the query reads the in-table column directly; with quotes it looks up the corresponding partition. Your partitions are most likely broken: the error means no matching partition was found. If even partitions starting with 20 cannot be located, your specific date certainly won't be either.
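
A quick way to check whether the partition actually exists in the metastore is SHOW PARTITIONS (table name assumed from later in the thread):

    -- list all partitions registered for the table
    SHOW PARTITIONS datax_job_log;

    -- or check the specific partition directly
    SHOW PARTITIONS datax_job_log PARTITION (p_date='20230130');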

The partition column type was declared as string when the table was created: partitioned by (p_date string) stored as orc;

Combining this with ChatGPT:
In Hive, quoting a queried field is usually meant to handle special characters or field names containing spaces. Here, though, the quoted query fails, which may mean the quotes are being used incorrectly in the statement and causing a parse exception.

To resolve this, you can try the following:

  1. Use backticks (`) instead of quotes (') to reference field names. In Hive, backticks escape field names that contain special characters or clash with reserved words.

    For example, to query a field named "partition", you can write:

    SELECT `partition` FROM table_name;
    
  2. Make sure field names are referenced correctly. If you use single or double quotes, Hive parses the token as a string literal rather than a field identifier.

    For example, the following query treats 'column_name' as a string literal rather than a field name:

    SELECT 'column_name' FROM table_name;
    

    whereas backticks reference the field name correctly (note that in Hive, double quotes also produce string literals by default, so they do not work as identifier quotes either):

    SELECT `column_name` FROM table_name;
    

Pick whichever approach fits your case, and make sure field names are referenced correctly in the query.
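
To make the distinction concrete, here is a small sketch applying the quoting styles to the partition column from this thread (table name assumed):

    SELECT `p_date` FROM datax_job_log;   -- backticks: returns the column's value
    SELECT 'p_date' FROM datax_job_log;   -- single quotes: returns the literal string p_date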

Check whether the field itself is the problem, and whether single vs. double quotes makes a difference.

It could also be a DBeaver issue; try the quoted query again from the Hive command line. Also, judging from the Hive log in the first screenshot, that error does not look like it was caused by this query.

Normally the query should work with or without quotes, and that log does not look right for this query either.

Run the following command to check the partition column type of your table:

desc datax_job_log;
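
If plain desc is not detailed enough, DESCRIBE FORMATTED also prints a "# Partition Information" section, where p_date should appear with type string:

    -- full table metadata, including the partition column section
    DESCRIBE FORMATTED datax_job_log;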

Index is not populated for 20

The metadata and the data are out of sync.
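
If partition directories were written to HDFS without being registered in the metastore, one common repair step, offered here as a suggestion rather than a confirmed fix for this case, is MSCK REPAIR:

    -- re-sync metastore partition entries with the directories on disk
    MSCK REPAIR TABLE datax_job_log;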
