error reading control group stats #369
It is also triggered even when there are not many indexes.
There is probably something wrong in your environment: java.nio.file.NoSuchFileException: /sys/fs/cgroup/cpuacct/kubepods/burstable/pod2ef59b42-3959-438b-9998-484b472d3940/8d123c9540d8f5e2bace99a880f57bf1513b01a148a4932160e1e403dc399b34/cpuacct.usage
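That NoSuchFileException is simply a file read failing: the node-stats probe reads the single line in cpuacct.usage, and the file is gone by the time it tries. Below is a minimal, self-contained Java sketch of that kind of read; the path is copied from this report, and the handling is illustrative only, not Elasticsearch's actual OsProbe code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class CgroupReadSketch {
    public static void main(String[] args) {
        // Path taken verbatim from this report; the pod and container ids will
        // differ on any other node.
        Path usage = Paths.get("/sys/fs/cgroup/cpuacct/kubepods/burstable/"
                + "pod2ef59b42-3959-438b-9998-484b472d3940/"
                + "8d123c9540d8f5e2bace99a880f57bf1513b01a148a4932160e1e403dc399b34/"
                + "cpuacct.usage");
        try {
            // cpuacct.usage holds a single line: cumulative CPU time in nanoseconds.
            List<String> lines = Files.readAllLines(usage);
            System.out.println("cpuacct.usage = " + lines.get(0));
        } catch (NoSuchFileException e) {
            // This is the failure in the stack trace: the cgroup directory the
            // probe resolved no longer exists at the moment of the read.
            System.err.println("cgroup file missing: " + e.getFile());
        } catch (IOException e) {
            System.err.println("unexpected I/O error: " + e);
        }
    }
}
```

The exception itself is harmless (it is logged at DEBUG), but it means the cgroup path Elasticsearch resolved for the container did not exist at the moment the stats were collected.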
On 10 Sep 2020, at 05:01, tengzhuofei wrote:
Version: 6.8.4.10
CPU: 8 cores, memory: 26Gi
This error is occasionally triggered while a large number of indexes are being created:
2020-09-10 02:52:54,420 DEBUG [elasticsearch[10.42.4.18][management][T#1]] org.elasticsearch.monitor.os.OsProbe.getCgroup(OsProbe.java:540) error reading control group stats
java.nio.file.NoSuchFileException: /sys/fs/cgroup/cpuacct/kubepods/burstable/pod2ef59b42-3959-438b-9998-484b472d3940/8d123c9540d8f5e2bace99a880f57bf1513b01a148a4932160e1e403dc399b34/cpuacct.usage
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at java.nio.file.Files.newBufferedReader(Files.java:2784)
at java.nio.file.Files.readAllLines(Files.java:3202)
at java.nio.file.Files.readAllLines(Files.java:3242)
at org.elasticsearch.monitor.os.OsProbe.readSingleLine(OsProbe.java:220)
at org.elasticsearch.monitor.os.OsProbe.readSysFsCgroupCpuAcctCpuAcctUsage(OsProbe.java:309)
at org.elasticsearch.monitor.os.OsProbe.getCgroupCpuAcctUsageNanos(OsProbe.java:296)
at org.elasticsearch.monitor.os.OsProbe.getCgroup(OsProbe.java:515)
at org.elasticsearch.monitor.os.OsProbe.osStats(OsProbe.java:635)
at org.elasticsearch.monitor.os.OsService$OsStatsCache.refresh(OsService.java:68)
at org.elasticsearch.monitor.os.OsService$OsStatsCache.refresh(OsService.java:61)
at org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:54)
at org.elasticsearch.monitor.os.OsService.stats(OsService.java:58)
at org.elasticsearch.node.NodeService.stats(NodeService.java:110)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:74)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:39)
at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:138)
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:259)
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:255)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66)
at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1087)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
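To check whether the environment actually exposes the file the probe expects, a small diagnostic sketch follows. It assumes cgroup v1 (what Elasticsearch 6.8 reads) and that the paths listed in /proc/self/cgroup map directly under /sys/fs/cgroup, which is common on Kubernetes nodes of this vintage but not guaranteed; the class and approach are an illustration, not an official tool.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical diagnostic: run inside the Elasticsearch container to see which
// cpuacct cgroup the process is in and whether cpuacct.usage exists there.
public class CgroupCheck {
    public static void main(String[] args) throws IOException {
        // Each cgroup v1 line in /proc/self/cgroup is "hierarchy-id:controllers:path".
        for (String line : Files.readAllLines(Paths.get("/proc/self/cgroup"))) {
            String[] fields = line.split(":", 3);
            // Pick the entry whose controller list contains cpuacct.
            if (fields.length == 3 && ("," + fields[1] + ",").contains(",cpuacct,")) {
                Path usage = Paths.get("/sys/fs/cgroup/cpuacct" + fields[2], "cpuacct.usage");
                System.out.println("cpuacct path:  " + fields[2]);
                System.out.println("expected file: " + usage);
                System.out.println("exists now:    " + Files.exists(usage));
            }
        }
    }
}
```

If the file exists when you check but the DEBUG line still appears intermittently, the cgroup path is most likely disappearing and reappearing underneath the probe, which would match the occasional failures described above.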