Understanding collectd and rrdtool
27 October 2017 · ~15 minutes · Misc
I recently had reason to collect metrics on a Linux server that was to be posted to an analysis service. My requirements were to collect minimum, maximum and average measurements of several key metrics. Shouldn’t be too difficult in this day and age, right?
collectd was the obvious choice, even if only by its name. It's a daemon that collects metrics using a modular system for data sources and outputs. It can easily measure CPU and memory usage, and thanks to its plugins, can get "deep" application-specific metrics such as MySQL database statistics.
The normal choice for the output plugin (which stores the metrics that are being collected) is rrdtool, short for round robin database tool. As its name suggests, it's designed to store a sequence of data with a fixed span; when it reaches the end of the span it goes back to the beginning and starts overwriting data. It's a perfectly optimised choice for metrics collection where you want to keep lots of samples of data for a specific timespan and throw away older data. Setting up these databases with fixed sample rates and timespans means you know in advance exactly how much disk space you will need, so you won't run into a problem where misconfigured metrics collection eventually fills up your hard disk.
However, rrdtool is not the easiest thing to understand, and the way that collectd integrates with it is not entirely straightforward either. rrdtool has a number of tutorials on its website which are worth reading to learn the principles, but the way that collectd uses rrdtool in its default configuration can still end up looking a bit strange.
Basics of an rrdtool database
Let's look at what an rrdtool database looks like, assuming that we're not creating the database ourselves but someone/something else is (namely collectd). We will discover that an rrdtool database is simply a file with a .rrd extension. We'll also discover a bit of a TLA soup inside the file.
A database will have some metadata associated with it, the most important being the step - this indicates the resolution of the data and is a number given in seconds. If the step is 10, this means that each primary data point is 10 seconds apart (we’ll define what “primary” means a bit later).
A DS is a data source. As you might expect, this is referring to a single metric of some kind, such as “amount of memory in use”, a quantity which is measurable and will (probably) change over time.
A CF is a consolidation function. When you have multiple values collected over a period of time, a consolidation function collapses them into a single number. The functions that we will consider here are minimum, maximum and average.
We also need to think about the data points, the individual values that are inserted into the database. A PDP is a primary data point - a single value representing a measurement taken at a specific point in time. Each PDP is collected at precise intervals - the step in the database metadata. A CDP is a consolidated data point - a value that is the result of a consolidation function applied over a number of other data points. For example, if your PDPs are collected every ten seconds and you consolidate 6 of them by taking the average, you get a CDP which represents a minute's worth of measurements.
An RRA is a round robin archive. Broadly speaking, you can consider one of these to be like a table in a classic database. It's a distinct space into which values - either PDPs or CDPs - can be inserted, each value taking up one row. An RRA is associated with a DS and a CF, and has a known size (length, measured in rows). Metadata on the RRA includes the number of PDPs per row: if an RRA is collecting PDPs then this will be 1 PDP per row; if it is for CDPs, then each CDP in a row will consolidate more than one PDP. (Every row in the RRA has the same PDPs-per-row value.)
So in a typical database you will have a step value which describes how often you are sampling, 10 seconds being a common value. You will have a DS which describes a particular metric - for example, “bytes of memory used”. You will then have at least one RRA collecting the PDPs. The resolution of these RRAs is the step size, 10 seconds in our example. The RRA will have a fixed number of rows, meaning that the RRA will contain data for a specific timespan. Once the RRA is filled, new values will cause the oldest values to be overwritten.
Our typical database may also have further RRAs containing CDPs. Say we want to have a dataset that covers 5-minute intervals instead of 10 second intervals. If we consolidate 30 PDPs (each 10 seconds apart), we end up with a CDP that covers 5 minutes. We can create RRAs that have a CF (minimum, average, etc.) and 30 PDPs-per-row.
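To make this concrete, here is a sketch of how such a database might be created by hand - the file name, DS name and heartbeat here are illustrative only, since collectd will be creating the real files for us later:
# One DS sampled every 10 seconds, one RRA of raw PDPs, and three RRAs of
# 5-minute CDPs (30 PDPs-per-row), each 1200 rows long.
# DS syntax:  DS:name:type:heartbeat:min:max
# RRA syntax: RRA:CF:xff:PDPs-per-row:rows
rrdtool create example.rrd --step 10 \
    DS:value:GAUGE:20:0:U \
    RRA:AVERAGE:0.5:1:1200 \
    RRA:MIN:0.5:30:1200 \
    RRA:MAX:0.5:30:1200 \
    RRA:AVERAGE:0.5:30:1200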
An important note
There are two very important points to note here:
Firstly, these parameters - the step size, the RRAs and their parameters, and so on - are set when you create the database and changing them later is not easy.
Secondly, when you query the database, you can only query the values that are in it. This may sound obvious, but basically it means that rrdtool will not perform any data manipulation for you. If you’re used to SQL databases where your query can include data manipulations as a natural and instinctive part of its syntax, then this may come as a surprise.
So what does this mean in practice? Following our earlier example, where we created RRAs of PDPs ten seconds apart, and RRAs of CDPs 5 minutes apart, we can query rrdtool for data with a resolution of ten seconds, or a resolution of 5 minutes. We cannot ask rrdtool to give us data with a 15 minute resolution because there is no RRA in our database which contains data with a 15 minute resolution. Even though it may be possible to derive this information, rrdtool will not derive this information for us.
Therefore, to get the queries we want without additional processing, you must decide ahead of time how you will be querying the database, and choose the parameters up-front that will allow you to realise those queries.
Bring in collectd
Having described what's needed for an rrdtool database, we now turn to collectd. We're not creating the rrdtool databases ourselves, collectd is - so we need to understand how and why collectd creates the databases the way it does.
If you start collectd with the default configuration (just adding the rrdtool plugin and some basic input plugins such as CPU and memory usage), it will create its rrdtool databases. Let’s take a look at the databases it creates.
Firstly, let’s quickly install it. If you are on a recent CentOS, you can probably install and start it with these commands:
yum install -y epel-release
yum install -y collectd collectd-rrdtool
chkconfig collectd on
service collectd start
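(On a systemd-based release, systemctl enable collectd followed by systemctl start collectd does the same job as the last two commands.) The stock configuration file usually loads a few input plugins already; a minimal sketch of the lines in /etc/collectd.conf that matter for this article - your distribution's defaults may differ - is:
# Input plugins providing the metrics...
LoadPlugin cpu
LoadPlugin memory
# ...and the output plugin that stores them as .rrd files
LoadPlugin rrdtool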
Using the command rrdtool info on one of the files that collectd creates, we can inspect how collectd created it. collectd will most likely be configured to place its files in /var/lib/collectd/rrd/HOSTNAME, and the default configuration should include memory metrics. Let's have a look at one of these files:
rrdtool info /var/lib/collectd/rrd/$(hostname --fqdn)/memory/memory-used.rrd
This command produces a lot of output. Let's look at it a bit at a time. Firstly, the whole-database metadata:
filename = "/var/lib/collectd/rrd/ip-172-31-26-177.eu-west-1.compute.internal/memory/memory-used.rrd"
rrd_version = "0003"
step = 10
last_update = 1509052180
header_size = 3496
This is mostly uninteresting, but we can see here the definition of the step size - 10 seconds - which underlies all the measurements in the database.
ds[value].index = 0
ds[value].type = "GAUGE"
ds[value].minimal_heartbeat = 20
ds[value].min = 0.0000000000e+00
ds[value].max = 2.8147497671e+14
ds[value].last_ds = "83779584"
ds[value].value = 0.0000000000e+00
ds[value].unknown_sec = 0
This is a definition of the DS. Again this is mostly uninteresting. We can see that the DS is called value, which will be referred to later.
rra[0].cf = "AVERAGE"
rra[0].rows = 1200
rra[0].cur_row = 1117
rra[0].pdp_per_row = 1
rra[0].xff = 1.0000000000e-01
rra[0].cdp_prep[0].value = NaN
rra[0].cdp_prep[0].unknown_datapoints = 0
Here we can see the first (or zeroth) RRA being defined. There are a few interesting things here. The CF field tells us that this RRA is storing averages; the pdp_per_row field is one, which means that this is a table of PDPs; and there are 1200 rows. With 10 seconds between PDPs, that's up to 12,000 seconds' worth of data, or 200 minutes.
The next two RRAs are the same but with CFs of MIN and MAX respectively.
The following RRA is this one:
rra[3].cf = "AVERAGE"
rra[3].rows = 1235
rra[3].cur_row = 404
rra[3].pdp_per_row = 7
rra[3].xff = 1.0000000000e-01
rra[3].cdp_prep[0].value = 1.6756736000e+08
rra[3].cdp_prep[0].unknown_datapoints = 0
This one has 7 PDPs-per-row, which means that each row covers a timespan of 70 seconds. It has 1235 rows. Those are rather odd numbers.
Let’s look further:
rra[6].cf = "AVERAGE"
rra[6].rows = 1210
rra[6].cur_row = 775
rra[6].pdp_per_row = 50
rra[6].xff = 1.0000000000e-01
rra[6].cdp_prep[0].value = 1.0893230080e+09
rra[6].cdp_prep[0].unknown_datapoints = 5
50 PDPs-per-row means 500 seconds, or 8⅓ minutes per row. And it has 1210 rows. Eh?
Skipping ahead to RRA number 9, there are 223 PDPs-per-row, meaning 37⅙ minutes per row, and 1202 rows. RRA number 12 has 2635 PDPs-per-row - about 7.32 hours per row - and 1201 rows.
These all sound like very peculiar resolutions if you want to query the individual values. However, that's not what collectd's default configuration is optimising for. The key thing is this: the collectd defaults are optimised for drawing graphs of metrics! Looked at this way, the numbers start making more sense - first let's calculate the total timespan of each RRA:
- 1200 rows * 1 PDP-per-row * 10 seconds between PDPs = 12,000 seconds, or 200 minutes
- 1235 rows * 7 PDPs-per-row * 10 seconds between PDPs = 86,450 seconds, or a fraction over 24 hours
- 1210 rows * 50 PDPs-per-row * 10 seconds between PDPs = 605,000 seconds, or a fraction over 7 days
- 1202 rows * 223 PDPs-per-row * 10 seconds between PDPs = 2,680,460 seconds, or a fraction over 31 days
- 1201 rows * 2635 PDPs-per-row * 10 seconds between PDPs = 31,646,350 seconds, or a fraction over 366 days
Now it makes more sense - using these RRAs, we can get high-resolution graphs of the most recent 200 minutes, plus graphs covering the last day, week, month and year. Each row corresponds to one pixel of width in the graph, so each graph is 1200 pixels wide or thereabouts. These databases are perfect for a server metrics dashboard!
The collectd configuration options are optimised for this purpose - there's no documented way to specify the rrdtool parameters directly. Instead there's a configuration option RRATimespan, which can be specified multiple times to define the timespan of each set of RRAs - the defaults are 3,600, 86,400, 604,800, 2,678,400 and 31,622,400. You can see these match the values we saw in the rrdtool info output (with the exception of the first one - it seems that the collectd rrdtool plugin applies a minimum number of rows). There's also an RRARows configuration option which defaults to 1200, the width of the graphs. collectd will then perform some calculations to determine the most appropriate rrdtool parameters. Presumably due to rounding, it may store slightly more than 1200 rows to make sure that the RRA contains at least the requested timespan.
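Out of interest, we can reproduce those numbers with a little bash arithmetic. This is a sketch based on my reading of the rrdtool info output above, not on collectd's actual source code - but it does produce all of the values we observed:
step=10 rrarows=1200
for timespan in 3600 86400 604800 2678400 31622400; do
    # PDPs-per-row: the requested timespan spread across RRARows rows (rounded down)
    pdp=$(( timespan / (rrarows * step) ))
    [ "$pdp" -lt 1 ] && pdp=1
    # Rows: enough to cover at least the requested timespan (rounded up)...
    rows=$(( (timespan + pdp * step - 1) / (pdp * step) ))
    # ...subject to what appears to be a minimum of RRARows rows
    [ "$rows" -lt "$rrarows" ] && rows=$rrarows
    echo "timespan=$timespan -> pdp_per_row=$pdp, rows=$rows"
done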
Taking control of the database
The collectd configuration options are fine if what you want to do is generate graphs over fixed time periods without particularly worrying about what time period each individual pixel represents. However, if you want the individual measurements to have a specific timespan, there's no documented collectd configuration option for this purpose. We have to think laterally and figure out what values we should put into the RRATimespan option to get the PDPs-per-row value that we really want.
Let's say we want an RRA set which covers one hour - 3,600 seconds - per row, and let's stick with 1200 rows per RRA. To get the total timespan of the file, multiply the time per row (3,600 seconds) by the number of rows (1200) to get 4,320,000 seconds per file. This is the number that we want to put into our RRATimespan configuration option.
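In other words, the RRATimespan to configure is simply the row duration you want multiplied by the number of rows (RRARows). As a quick sanity check in bash:
# seconds-per-row * RRARows = RRATimespan to configure
echo $((   60 * 1200 ))   # 72000   - one-minute rows
echo $(( 3600 * 1200 ))   # 4320000 - one-hour rows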
Putting it all together, here is my collectd rrdtool plugin configuration:
LoadPlugin rrdtool
<Plugin rrdtool>
DataDir "/var/lib/collectd/rrd"
CreateFilesAsync false
CacheTimeout 120
CacheFlush 900
WritesPerSecond 50
# The default settings are optimised for plotting time-series graphs over fixed
# time periods, but are not very helpful for simply asking "what is my average
# memory usage for the last hour?", so we define some new ones.
# The first one is an anomaly, as it seems that the rrd plugin enforces some
# minimums. The result is a time-series 200 minutes long with a granularity of 10s.
RRATimeSpan 3600
# This defines a time-series 20 hours long with a granularity of 1 minute.
RRATimeSpan 72000
# This defines a time-series 50 days long with a granularity of 1 hour.
RRATimeSpan 4320000
</Plugin>
As you can see from the comments, we're defining RRATimespans that produce a database whose rows have nice round durations, and we're not too worried about the total timespan stored (the opposite of collectd's default configuration).
Querying rrdtool
Now that we have collectd logging and storing data at the intervals we are interested in, we'll want to query that data. Again, this is not the easiest thing to do, but now that you've got a good grounding in rrdtool's principles and terminology, its documentation will be easier to understand. I'll also provide a script later which will do most of the work.
We now need to highlight another important point: if you are querying for a specific resolution then your start and end time must be aligned with the resolution. So if you are querying for your hourly metrics, then your start time and end time must be on the hour. If you are querying for 5-minute metrics, your start and end times must be on 5-minute boundaries.
This means that we might have to do some manipulations of the numbers to get the information that we need. We can do this entirely in bash, so we can come up with a script to do this for us.
Let’s say that we are using the collectd configuration above, and we want to get metrics for each minute in the last 15 minutes. This means our resolution is 60 seconds.
rrdtool natively works with UNIX timestamps, so the first thing we'll do is get the current time in UNIX time format and store it in a shell variable.
$ now=$( date +%s )
$ echo $now $( date -d @$now )
1509104513 Fri Oct 27 11:41:53 UTC 2017
So we want to query for the 15 minutes up till now, but we need to round back to one-minute boundaries. Let's define our end time as the last minute boundary before now. We can do this by subtracting now modulo the resolution (60 seconds):
$ end=$(( $now - ( $now % 60 ) ))
$ echo $end $( date -d @$end )
1509104460 Fri Oct 27 11:41:00 UTC 2017
Now we get our start time as 900 seconds - 15 minutes - before the end time:
$ start=$(( $end - 900 ))
$ echo $start $( date -d @$start )
1509103560 Fri Oct 27 11:26:00 UTC 2017
We're going to make one final tweak: add one second to start and subtract one second from end. The reason is that if we leave either on a data point boundary, rrdtool will return the data points on either side of that time point. Not only would we have two more rows than we expected, but the last one would be incomplete (since we're part way through that minute), so rrdtool would return NaN (or a similar placeholder value).
$ start=$(( $start + 1 ))
$ echo $start $( date -d @$start )
1509103561 Fri Oct 27 11:26:01 UTC 2017
$ end=$(( $end - 1 ))
$ echo $end $( date -d @$end )
1509104459 Fri Oct 27 11:40:59 UTC 2017
So now we can execute an rrdtool “export” command:
rrdtool xport --step 60 --start $start --end $end \
"DEF:dmin=memory/memory-used.rrd:value:MIN" \
"DEF:davg=memory/memory-used.rrd:value:AVERAGE" \
"DEF:dmax=memory/memory-used.rrd:value:MAX" \
"XPORT:dmin:Minimum" \
"XPORT:davg:Average" \
"XPORT:dmax:Maximum"
There are a few unusual parts to this command. The initial options make sense - we define the resolution we want to query, along with the desired start and end timestamps. Then we have a number of DEF elements: these say to read the memory/memory-used.rrd database file and extract the data streams for the MIN, AVERAGE and MAX CFs. Then we export that same data.
Here is the kind of result we will get:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xport>
<meta>
<start>1509103620</start>
<step>60</step>
<end>1509103620</end>
<rows>15</rows>
<columns>3</columns>
<legend>
<entry>Minimum</entry>
<entry>Average</entry>
<entry>Maximum</entry>
</legend>
</meta>
<data>
<row><t>1509103620</t><v>8.2034688000e+07</v><v>8.2124253867e+07</v><v>8.2254233600e+07</v></row>
<row><t>1509103680</t><v>8.2071552000e+07</v><v>8.2350080000e+07</v><v>8.2667110400e+07</v></row>
<row><t>1509103740</t><v>8.2618777600e+07</v><v>8.2764731733e+07</v><v>8.2898124800e+07</v></row>
<row><t>1509103800</t><v>8.2928435200e+07</v><v>8.3002572800e+07</v><v>8.3206144000e+07</v></row>
<row><t>1509103860</t><v>8.3149619200e+07</v><v>8.3296938667e+07</v><v>8.3417497600e+07</v></row>
<row><t>1509103920</t><v>7.8979891200e+07</v><v>7.9990647467e+07</v><v>8.2660556800e+07</v></row>
<row><t>1509103980</t><v>7.8917632000e+07</v><v>7.8930193067e+07</v><v>7.8948761600e+07</v></row>
<row><t>1509104040</t><v>7.8917632000e+07</v><v>7.8917632000e+07</v><v>7.8917632000e+07</v></row>
<row><t>1509104100</t><v>7.8917632000e+07</v><v>7.9137177600e+07</v><v>7.9872000000e+07</v></row>
<row><t>1509104160</t><v>7.8970880000e+07</v><v>7.8970880000e+07</v><v>7.8970880000e+07</v></row>
<row><t>1509104220</t><v>7.8961049600e+07</v><v>7.8968285867e+07</v><v>7.8970880000e+07</v></row>
<row><t>1509104280</t><v>7.8946304000e+07</v><v>7.9465267200e+07</v><v>8.1812684800e+07</v></row>
<row><t>1509104340</t><v>8.6452633600e+07</v><v>8.6746180267e+07</v><v>8.8073011200e+07</v></row>
<row><t>1509104400</t><v>8.6863872000e+07</v><v>8.8259515733e+07</v><v>8.8559616000e+07</v></row>
<row><t>1509104460</t><v>8.4087603200e+07</v><v>8.5427404800e+07</v><v>8.8543232000e+07</v></row>
</data>
</xport>
We can see here our 15 rows, encapsulated inside XML elements. For each row we can see the time point t and three values v, which the legend element tells us are minimum, average and maximum respectively.
To make it a bit easier to use, we can make a bash function that will do all the calculations and commands for us:
function rrdexport() {
    # Parse arguments
    local fn=$1; shift
    local res=$1; shift
    local samples=$1; shift
    if [ -z "$fn" ] || [ -z "$res" ] || [ -z "$samples" ]; then
        echo >&2 "bad args. Usage: rrdexport FILENAME RESOLUTION SAMPLES"
        return 1
    fi
    # Make collectd flush its cached data to disk
    kill -USR1 $( pidof collectd )
    sleep 1s
    # Formulate the query: round the end time down to a resolution boundary,
    # step back the requested number of samples, then nudge both ends off the
    # data point boundaries (see above)
    local now=$( date +%s )
    local end=$(( now - ( now % res ) ))
    local start=$(( end - ( res * samples ) ))
    start=$(( start + 1 ))
    end=$(( end - 1 ))
    echo "Range: from $( date -d @$start ) ($start) to $( date -d @$end ) ($end)"
    # Query
    rrdtool xport --step "$res" --start "$start" --end "$end" "$@" \
        "DEF:dmin=${fn}:value:MIN" \
        "DEF:davg=${fn}:value:AVERAGE" \
        "DEF:dmax=${fn}:value:MAX" \
        "XPORT:dmin:Minimum" \
        "XPORT:davg:Average" \
        "XPORT:dmax:Maximum"
}
This function takes three parameters:
- The filename of the .rrd database containing the metric you want to query. collectd will by default store its files in /var/lib/collectd/rrd/HOSTNAME; this directory contains one .rrd file per metric, grouped by resource type (for example, memory/memory-used.rrd).
- The resolution you seek, in seconds. With the collectd configuration above, 10, 60 and 3600 (10 seconds, 1 minute and 1 hour respectively) are valid. Using other values will almost certainly return a response, but not the one you are expecting.
- The number of samples to retrieve.
You may also specify additional parameters, which are passed through to rrdtool - for example, adding --json to the end of the command will select JSON output.
So we could replace our earlier example with simply:
rrdexport memory/memory-used.rrd 60 15
Or to extract hourly statistics for the last day:
rrdexport memory/memory-used.rrd 3600 24
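And since any extra arguments are passed straight through to rrdtool, the same hourly query can be returned as JSON instead:
rrdexport memory/memory-used.rrd 3600 24 --json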
I hope you find this information useful! Please leave a comment if there’s anything that’s not adequately explained.
About the author
Richard Downer is a software engineer turned cloud solutions architect, specialising in AWS, and based in Scotland. Richard's interest in technology extends to retro computing and amateur hardware hacking with Raspberry Pi and FPGA.