GWJ6: Out Of Place Cacti Mac OS


This is on a Cacti 0.8.8 machine running RHEL6 as a VM. All graphs have stopped graphing, but it looks like data is getting recorded to the RRDs, at least at first glance.


One interesting thing is that some graphs stopped between 23:00 on Wednesday and 00:00 on Thursday, while others stopped at around 05:00 on Wednesday. Both of these would have been at times when I was not actively working on the machine.
Example:
Interface graph (graph_image_01.png, 38.11 KiB)
I've noticed the following items in the logs:

I looked at the code for spine.c, and it looks like this error is returned if it gets an EAGAIN from mysql, so the problem could be something with mysql, but I don't know what that problem would be. I did notice earlier that the mysqld process was chewing up a fair amount of CPU and memory, so I shut mysqld down and restarted it, with no significant change in behavior: mysqld would quickly chew up CPU and RAM again. For a brief period after restarting mysqld, I saw that some RRD files were getting updated. It doesn't look like the machine is starving for CPU or memory/vmem:

Code:
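As a rough sketch, the kind of checks that would back this up on a RHEL6 box (standard commands; using 'cactiuser' as the MySQL account below is an assumption, not something stated in the post):

Code:

# Overall CPU / memory / swap picture - is the box actually starving?
top -b -n 1 | head -20
free -m
vmstat 1 5

# Which processes are eating CPU and RAM (mysqld was the suspect earlier)
ps -eo pid,user,%cpu,%mem,rss,cmd --sort=-%cpu | head -10

# What MySQL is busy with - long-running or stuck queries show up here
mysql -u cactiuser -p -e 'SHOW FULL PROCESSLIST;'
mysql -u cactiuser -p -e "SHOW GLOBAL STATUS LIKE 'Threads_%';"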

I have the poller log dumped out to a file for each poller run, and that also shows some interesting information. Something appears to be (1) making poller.php very unhappy and (2) causing the poller run to try to exceed 300 seconds (a run normally completes in just a few seconds). poller.log shows things like this, in addition to thousands upon thousands of 'Waiting on 1 of 1 pollers.' messages and the occasional 'resource temporarily unavailable' message from rrd.php, presumably when it tries to write to an RRD file.
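A quick way to quantify that from the logs might be something like the following (the log file paths are assumptions; only the quoted message strings come from the post itself):

Code:

# Count the wait-loop messages in a given poller run's log
grep -c 'Waiting on 1 of 1 pollers' /var/log/cacti/poller.log

# Pull out the rrd.php errors
grep -i 'resource temporarily unavailable' /var/log/cacti/poller.log

# Cacti's own per-cycle runtime summary in cacti.log
grep 'SYSTEM STATS' /var/www/html/stats/log/cacti.log | tail -5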
An strace of the running poller.php process shows it waiting for something, until it gets killed by the next 5-minute poller run.
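For reference, attaching strace to the stuck poller the way described above could look like this (finding the PID with pgrep is an assumption about how the process shows up in the process table):

Code:

# Find the running poller.php and attach, following children, with timestamps
PID=$(pgrep -f 'poller.php' | head -1)
strace -p "$PID" -f -tt -s 256 -o /tmp/poller-strace.log   # Ctrl-C to detach

# Or just summarize which syscalls it is sitting in
strace -p "$PID" -f -c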
I activated the 'domains' and 'spikekill' plugins during the day on Wednesday, but I've deactivated both of them since, to eliminate them as variables while I work on this larger problem.
poller.php is running as 'cactiuser', and cactiuser owns all of the files in the cacti directory structure (/var/www/html/stats).
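A quick sanity check on ownership and writability of the RRD files, run as the poller's user, might be the following (the rra/ subdirectory is the standard Cacti layout and is an assumption here, as is having sudo available):

Code:

# Confirm who owns the cacti tree and the RRD files
ls -ld /var/www/html/stats
ls -l /var/www/html/stats/rra | head

# Verify cactiuser can actually create and remove a file where the RRDs live
sudo -u cactiuser touch /var/www/html/stats/rra/.write_test && \
sudo -u cactiuser rm /var/www/html/stats/rra/.write_test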
So... at this point, I'm just trying to get a handle on what's happening, and what I can do to fix it / keep it from happening again.