YARN running-status problem

YARN in Cloudera won't restart.
The YARN running-status test is shown in the screenshot below.

[Screenshot: YARN service status in Cloudera Manager]


I restarted from the YARN instances page in Cloudera Manager; the restart reports success, but the problem then comes back.
The log keeps repeating the same startup messages:

2023-01-12 02:27:45,443 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NodeManager
STARTUP_MSG:   host = cdh4/192.168.0.104
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.0.0-cdh6.2.0
STARTUP_MSG:   classpath = /var/run/cloudera-sc.....
STARTUP_MSG:   build = http://github.com/cloudera/hadoop -r d1dff3d3a126da44e3458bbf148c3bc16ff55bd8; compiled by 'jenkins' on 2019-03-14T06:39Z
STARTUP_MSG:   java = 1.8.0_181
************************************************************/
2023-01-12 02:27:45,471 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: registered UNIX signal handlers for [TERM, HUP, INT]
2023-01-12 02:27:45,945 INFO org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService: Using state database at /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state for recovery
2023-01-12 02:27:45,971 INFO org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService$LeveldbLogger: Recovering log #394474
2023-01-12 02:27:45,971 INFO org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService$LeveldbLogger: Level-0 table #394476: started
2023-01-12 02:27:45,971 INFO org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService$LeveldbLogger: Level-0 table #394476: 0 bytes OK
2023-01-12 02:27:46,031 INFO org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService$LeveldbLogger: Delete type=0 #394474

2023-01-12 02:27:46,031 INFO org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService$LeveldbLogger: Delete type=3 #394472

2023-01-12 02:27:46,045 INFO org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService: Loaded NM state version info 1.2
2023-01-12 02:27:46,283 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.ResourcePluginManager: No Resource plugins found from configuration!
2023-01-12 02:27:46,283 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.ResourcePluginManager: Found Resource plugins from configuration: null
2023-01-12 02:27:46,316 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: Node Manager health check script is not available or doesn't have execute permission, so not starting the node health script runner.
2023-01-12 02:27:46,354 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher
2023-01-12 02:27:46,355 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher
2023-01-12 02:27:46,355 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$LocalizationEventHandlerWrapper
2023-01-12 02:27:46,356 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices
2023-01-12 02:27:46,356 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
2023-01-12 02:27:46,357 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher
2023-01-12 02:27:46,357 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerSchedulerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler
2023-01-12 02:27:46,373 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.ContainerManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl
2023-01-12 02:27:46,374 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.NodeManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.NodeManager
2023-01-12 02:27:46,416 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2023-01-12 02:27:46,474 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2023-01-12 02:27:46,474 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system started
2023-01-12 02:27:46,497 INFO org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2023-01-12 02:27:46,507 INFO org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2023-01-12 02:27:46,530 INFO org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl:  Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.ResourceCalculatorPlugin@1a45193b
2023-01-12 02:27:46,532 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService
2023-01-12 02:27:46,534 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService
2023-01-12 02:27:46,534 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: AMRMProxyService is disabled
2023-01-12 02:27:46,534 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: per directory file limit = 8192
2023-01-12 02:27:46,537 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2023-01-12 02:27:46,542 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker
2023-01-12 02:27:46,571 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Adding auxiliary service mapreduce_shuffle, "mapreduce_shuffle"
2023-01-12 02:27:46,781 INFO org.apache.spark.network.yarn.YarnShuffleService: Initializing YARN shuffle service for Spark
2023-01-12 02:27:46,781 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Adding auxiliary service spark_shuffle, "spark_shuffle"
2023-01-12 02:27:46,836 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Recovering log #34007
2023-01-12 02:27:46,836 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Level-0 table #34009: started
2023-01-12 02:27:46,864 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Level-0 table #34009: 145 bytes OK
2023-01-12 02:27:46,898 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Delete type=0 #34007

2023-01-12 02:27:46,898 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Delete type=3 #34005

2023-01-12 02:27:46,898 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Compacting 4@0 + 1@1 files
2023-01-12 02:27:46,917 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Generated table #34011: 1 keys, 145 bytes
2023-01-12 02:27:46,917 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Compacted 4@0 + 1@1 files => 145 bytes
2023-01-12 02:27:46,923 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: compacted to: files[ 0 1 0 0 0 0 0 ]
2023-01-12 02:27:46,923 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Delete type=2 #33998

2023-01-12 02:27:46,923 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Delete type=2 #34000

2023-01-12 02:27:46,923 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Delete type=2 #34003

2023-01-12 02:27:46,923 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Delete type=2 #34006

2023-01-12 02:27:46,924 INFO org.apache.spark.network.util.LevelDBProvider$LevelDBLogger: Delete type=2 #34009

2023-01-12 02:27:47,202 INFO org.apache.spark.network.yarn.YarnShuffleService: Started YARN shuffle service for Spark on port 7337. Authentication is not enabled.  Registered executor file is /var/lib/hadoop-yarn/yarn-nm-recovery/nm-aux-services/spark_shuffle/registeredExecutors.ldb
2023-01-12 02:27:47,202 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:  Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.ResourceCalculatorPlugin@127d7908
2023-01-12 02:27:47,202 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:  Using ResourceCalculatorProcessTree : null
2023-01-12 02:27:47,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Physical memory check enabled: true
2023-01-12 02:27:47,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Virtual memory check enabled: false
2023-01-12 02:27:47,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: ContainersMonitor enabled: true
2023-01-12 02:27:47,219 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService: rollingMonitorInterval is set as -1. The log rolling monitoring interval is disabled. The logs will be aggregated after this application is finished.

The YARN cluster currently has 6 nodes in total, but only 3 of them are up, so many jobs stay stuck and never make progress.

[Screenshot: YARN cluster node status]
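
To see which NodeManagers the ResourceManager actually considers alive, you can list node states from any host with the YARN gateway installed; a minimal sketch using the standard yarn CLI:

yarn node -list -all            # every node with its state (RUNNING, LOST, UNHEALTHY, ...)
yarn node -list -states LOST    # only the nodes the ResourceManager has lost contact with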

Your log seems pretty clear already. How much memory does cdh4/192.168.0.104 have?

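To answer the memory question, a quick check directly on cdh4 with standard Linux tools:

free -h                        # total / used / available RAM and swap
df -h /var/lib/hadoop-yarn     # free space on the disk holding the NM recovery state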

  • The article "yarn报错集" (a collection of YARN errors) may have the answer you are looking for; take a look.

If you run into problems restarting the YARN (Yet Another Resource Negotiator) service in Cloudera, it may be caused by one of the following:

Configuration problem: the YARN configuration files may be corrupted or incomplete, so the service cannot start.

Resource problem: the system may not have enough memory or disk space to start the YARN service.

Permission problem: the YARN service may not have sufficient permissions to start and run.

Process problem: another process may be occupying a port YARN needs, preventing startup.

Third-party dependency problem: a library or component that YARN depends on may be broken, preventing a normal start.

Suggestions: check the configuration files for errors; check system resource usage to make sure nothing is exhausted; verify that the YARN service has sufficient permissions; check port usage and stop any process holding a required port; then inspect the logs of YARN and its dependent services to find the root cause and fix it. A few of these checks are sketched below.
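
A minimal sketch of the resource and port checks above, using standard Linux tools (8040/8042 are the Hadoop NodeManager defaults and 7337 is the Spark shuffle port shown in the log; your ports may differ):

df -h                                      # disk space on all mounts
free -h                                    # available memory
ss -lntp | grep -E ':(8040|8042|7337)'     # is something already bound to the NM/shuffle ports?
ls -ld /var/lib/hadoop-yarn                # ownership and permissions of the NM state directory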

It may be a configuration problem: either the YARN configuration is wrong or the environment is. Suggested things to check:

yarn-site.xml
core-site.xml
hdfs-site.xml
mapred-site.xml
the Capacity Scheduler configuration in yarn-site.xml
the ResourceManager and NodeManager configuration

Also check whether the log directories are running out of space and whether the cluster has other problems, such as network issues or insufficient memory.
For reference only; I hope it helps, thanks.
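
Note that on a Cloudera Manager-managed cluster, the configuration a role actually runs with lives under the agent's process directory rather than /etc/hadoop/conf. A sketch for locating the newest NodeManager config on cdh4 (the path layout is the CM default and may vary):

NM_DIR=$(ls -dt /var/run/cloudera-scm-agent/process/*NODEMANAGER* | head -n 1)
grep -c "property" "$NM_DIR/yarn-site.xml"    # sanity check: the file exists and is non-empty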

Current main lines of analysis:
1. Check the cloudera-scm-server and cloudera-scm-agent log files for errors, for example:
tail -f /opt/cloudera-manager/cm-5.14.1/log/cloudera-scm-server/cloudera-scm-server.log
tail -f /opt/cloudera-manager/cm-5.14.1/log/cloudera-scm-agent/cloudera-scm-agent.log
2. Suspicion that the YARN configuration files in the Cloudera cluster are broken.
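
Instead of tailing, grepping the recent entries for failures can be faster; a sketch assuming a package-based install (on many CDH 6 hosts the CM logs live under /var/log instead of the /opt path above):

grep -E "ERROR|FATAL" /var/log/cloudera-scm-agent/cloudera-scm-agent.log | tail -n 20
grep -E "ERROR|FATAL" /var/log/cloudera-scm-server/cloudera-scm-server.log | tail -n 20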

The following answer is provided by ChatGPT; if anything is unclear, feel free to ask me follow-up questions:

[Screenshot: error message]


This error message indicates that YARN exited unexpectedly, and that the exit was caused by insufficient memory: an "OutOfMemory error" means the program could not keep running because it ran out of memory. Check the code and the system's memory configuration for leaks, or allocate more memory to YARN. You can also consult experts in the Hadoop community or open an issue on GitHub to get help with this problem.
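
A sketch for confirming an OOM on the node: read the maximum heap the NodeManager JVM was launched with, then search its log for OutOfMemoryError (the log path follows the usual CDH naming convention and is an assumption):

ps aux | grep '[N]odeManager' | grep -o -- '-Xmx[^ ]*'                        # current NM max heap
grep -i "OutOfMemoryError" /var/log/hadoop-yarn/*NODEMANAGER*.log.out | tail  # any OOMs logged?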

If YARN in Cloudera cannot be restarted, it may be due to one of the following:

A leftover YARN process is still holding resources, so the restart cannot go through. Run "ps aux | grep yarn" in a terminal to see which YARN processes are currently running. If the processes from before the restart never exited, kill them first (see the sketch after this answer).

The YARN configuration files are broken, so the restart fails. Check whether the settings in yarn-site.xml are correct.

YARN depends on other services, such as HDFS; if those services are unhealthy, YARN may fail to restart.

YARN is in an abnormal running state, which may make a restart impossible. It would help if the screenshot you provide showed more detail, but if YARN's state is Dead, its resources may have been exhausted, or the YARN node may have failed for some other reason.

Check the log files to determine why the restart fails, then work out a fix.
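
A minimal sketch of the stale-process check above (kill a process only after confirming its PID belongs to a leftover role, since Cloudera Manager respawns roles it still supervises):

ps aux | grep '[N]odeManager'     # the [N] bracket trick keeps grep from matching itself
# kill <pid>                      # try a graceful SIGTERM first
# kill -9 <pid>                   # last resort if the process ignores SIGTERM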

rollingMonitorInterval is set as -1. The log rolling monitoring interval is disabled. The logs will be aggregated after this application is finished
With -1 configured, rolling log aggregation is disabled, so logs are only aggregated once the application finishes.
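
The property behind this message is yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds (-1, the default, disables rolling). A sketch for checking what the NodeManager actually runs with, reusing the CM process directory located earlier (an assumption about your layout):

grep -A1 "roll-monitoring-interval" \
  "$(ls -dt /var/run/cloudera-scm-agent/process/*NODEMANAGER* | head -n 1)/yarn-site.xml"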

If YARN in Cloudera cannot be restarted, it may be caused by one of the following:

The YARN server may be short on disk space, so YARN cannot start.
Other processes or services on the YARN server may be competing with YARN for resources.
There may be errors in the YARN configuration files, so YARN cannot start normally.
The YARN log files may be full, so YARN cannot write its logs.
Check the causes above and apply the corresponding fixes; if the problem persists, try asking for help in the Cloudera community.

What about GC memory and storage space? Do other components, such as HDFS, show any problems?
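
A quick sketch to rule out HDFS-side problems, using the standard hdfs CLI:

hdfs dfsadmin -report | head -n 20    # capacity, remaining space, live/dead DataNodes
hdfs fsck / | tail -n 5               # overall filesystem health summary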

It may be that the process hits an error after the restart, which causes the log to keep printing the same messages repeatedly.

See whether this article solves your problem:
https://blog.csdn.net/yt438936731/article/details/106259660/