Data Collector Reference - Services
Cloud Insights
NetApp
November 23, 2020
This PDF was generated from https://docs.netapp.com/us-en/cloudinsights/concept_telegraf_display_options.html on November 23, 2020. Always check docs.netapp.com for the latest.
Table of Contents
Data Collector Reference - Services
  Telegraf Integration Data
  Node Data Collection
  ActiveMQ Data Collector
  Apache Data Collector
  Consul Data Collector
  Couchbase Data Collector
  CouchDB Data Collector
  Docker Data Collector
  Elasticsearch Data Collector
  Flink Data Collector
  Hadoop Data Collector
  HAProxy Data Collector
  JVM Data Collector
  Kafka Data Collector
  Kibana Data Collector
  Kubernetes Data Collector
  Memcached Data Collector
  MongoDB Data Collector
  MySQL Data Collector
  Netstat Data Collector
  Nginx Data Collector
  PostgreSQL Data Collector
  Puppet Agent Data Collector
  Redis Data Collector
Data Collector Reference - Services
Telegraf Integration Data
The Cloud Insights Telegraf data collectors are powerful tools for gathering source-reported metrics from your data sources.
Cloud Insights provides different transform options for metrics collected using Telegraf. When adding these metrics to a widget, you are presented with a drop-down offering the following transform choices:
None
Data is displayed as is, with no manipulation.
Rate
Current value divided by the time range since the previous observation.
Cumulative
The running sum of all previous values plus the current value.
Delta
The difference between the previous observation value and the current value.
Delta rate
Delta value divided by the time range since the previous observation.
Cumulative Rate
Cumulative value divided by the time range since the previous observation.
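As an illustration, the five transforms can be sketched in a few lines of Python. This is a hypothetical helper, not Cloud Insights code; observations are (timestamp, value) pairs as a Telegraf counter might report them.

```python
# Illustrative sketch of the five transform options (not Cloud Insights code).
# Observations are (timestamp_seconds, value) pairs.

def transform(observations, mode):
    """Apply one of the transform options to a series of observations."""
    out = []
    cumulative = 0.0
    prev_t, prev_v = None, None
    for t, v in observations:
        cumulative += v
        if mode == "none":
            out.append(v)
        elif mode == "cumulative":
            out.append(cumulative)
        elif prev_t is None:
            out.append(None)  # no previous observation yet
        else:
            dt = t - prev_t
            if mode == "rate":
                out.append(v / dt)
            elif mode == "delta":
                out.append(v - prev_v)
            elif mode == "delta_rate":
                out.append((v - prev_v) / dt)
            elif mode == "cumulative_rate":
                out.append(cumulative / dt)
        prev_t, prev_v = t, v
    return out

series = [(0, 10), (10, 30), (20, 50)]
rates = transform(series, "rate")  # → [None, 3.0, 5.0]
```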
Node Data Collection
Cloud Insights gathers metrics from the node on which you install an agent.
Installation
1. From Admin > Data Collectors, choose an operating system/platform. Note that installing any integration data collector (Kubernetes, Docker, Apache, etc.) will also configure node data collection.
2. Follow the instructions to configure the agent. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Objects and Counters
The following objects and their counters are collected as Node metrics:
Object: Node Filesystem
Identifiers: Node UUID, Device, Path, Type
Attributes: Node IP, Node Name, Node OS, Mode
Datapoints: Free, Inodes Free, Inodes Total, Inodes Used, Total, Used Total, Used
Object: Node Disk
Identifiers: Node UUID, Disk
Attributes: Node IP, Node Name, Node OS
Datapoints: IO Time Total, IOPS In Progress, Read Bytes (per sec), Read Time Total, Reads (per sec), Weighted IO Time Total, Write Bytes (per sec), Write Time Total, Writes (per sec), Current Disk Queue Length, Write Time, Read Time, IO Time
Object: Node CPU
Identifiers: Node UUID, CPU
Attributes: Node IP, Node Name, Node OS
Datapoints: System CPU Usage, User CPU Usage, Idle CPU Usage, Processor CPU Usage, Interrupt CPU Usage, DPC CPU Usage
Object: Node
Identifiers: Node UUID
Attributes: Node IP, Node Name, Node OS
Datapoints: Kernel Boot Time, Kernel Context Switches (per sec), Kernel Entropy Available, Kernel Interrupts (per sec), Kernel Processes Forked (per sec), Memory Active, Memory Available Total, Memory Available, Memory Buffered, Memory Cached, Memory Commit Limit, Memory Committed As, Memory Dirty, Memory Free, Memory High Free, Memory High Total, Memory Huge Page Size, Memory Huge Pages Free, Memory Huge Pages Total, Memory Low Free, Memory Low Total, Memory Mapped, Memory Page Tables, Memory Shared, Memory Slab, Memory Swap Cached, Memory Swap Free, Memory Swap Total, Memory Total, Memory Used Total, Memory Used, Memory Vmalloc Chunk, Memory Vmalloc Total, Memory Vmalloc Used, Memory Wired, Memory Writeback Total, Memory Writeback Tmp
Object: Node Network
Identifiers: Network Interface, Node UUID
Attributes: Node Name, Node IP, Node OS
Datapoints: Bytes Received, Bytes Sent, Packets Outbound Discarded, Packets Outbound Errors, Packets Received Discarded, Packets Received Errors, Packets Received, Packets Sent
Setup
Setup and Troubleshooting information can be found on the Configuring an Agent page.
MacOS Memory Usage
Cloud Insights (via Telegraf) and macOS report different numbers for memory usage. Both Telegraf and the Mac activity monitor use metrics gathered from vm_stat; however, the total memory usage is calculated differently for each.
Telegraf calculates Memory Used Total as follows:
Memory Used Total = Memory Total - Memory Available Total
Where Memory Available Total is derived from the sum of "Pages free" and "Pages inactive" in vm_stat.
The Mac activity monitor, on the other hand, calculates Memory Used as follows:
Memory Used = App Memory + Wired Memory + Compressed
Where:
• App Memory is derived from the difference between “Anonymous pages” and “Pages purgeable” in vm_stat,
• Wired Memory is derived from “Pages wired down” in vm_stat, and
• Compressed is derived from “Pages occupied by compressor” in vm_stat.
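The Telegraf-side calculation above can be sketched as follows. This is a minimal sketch, assuming a 4096-byte page size and an illustrative vm_stat-style sample (not real output from any particular Mac):

```python
# Sketch: derive Telegraf's "Memory Available Total" inputs from vm_stat-style
# output. The sample text and the 4096-byte page size are assumptions for
# illustration only.

import re

def parse_vm_stat(text, page_size=4096):
    """Return a dict mapping vm_stat page counters to byte values."""
    pages = {}
    for line in text.splitlines():
        m = re.match(r'"?([A-Za-z -]+)"?:\s+(\d+)\.?', line.strip())
        if m:
            pages[m.group(1).strip()] = int(m.group(2)) * page_size
    return pages

sample = """\
Pages free:                               100000.
Pages inactive:                            50000.
Pages wired down:                          80000.
"""

stats = parse_vm_stat(sample)
# Telegraf: Memory Available Total = "Pages free" + "Pages inactive"
available = stats["Pages free"] + stats["Pages inactive"]  # → 614400000 bytes
```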
ActiveMQ Data Collector
Cloud Insights uses this data collector to gather metrics from ActiveMQ.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose ActiveMQ.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the ActiveMQ documentation.
Objects and Counters
The following objects and their counters are collected:
Object: ActiveMQ Queue
Identifiers: Namespace, Queue, Port, Server
Attributes: Node Name, Node IP, Node UUID
Datapoints: Consumer Count, Dequeue Count, Enqueue Count, Queue Size
Troubleshooting
Additional information may be found on the Support page.
Apache Data Collector
This data collector allows collection of data from Apache servers in your environment.
Pre-requisites
• You must have your Apache HTTP Server set up and properly running
• You must have sudo or administrator permissions on your agent host/VM
• Typically, the Apache mod_status module is configured to expose a page at the '/server-status?auto' location of the Apache server. The ExtendedStatus option must be enabled in order to collect all available fields. For information about how to configure your server, see the Apache module documentation: https://httpd.apache.org/docs/2.4/mod/mod_status.html#enable
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Apache.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Telegraf’s plugin for Apache’s HTTP Server relies on the 'mod_status' module being enabled. When it is enabled, Apache’s HTTP Server exposes an HTML endpoint that can be viewed in your browser or scraped to extract the status of the Apache HTTP Server configuration.
Compatibility:
Configuration was developed against Apache’s HTTP Server version 2.4.38.
Enabling mod_status:
Enabling and exposing the 'mod_status' module involves two steps:
• Enabling module
• Exposing stats from module
Enabling module:
The loading of modules is controlled by the config file under '/usr/local/apache/conf/httpd.conf'. Edit the config file and uncomment the following lines:
LoadModule status_module modules/mod_status.so
Include conf/extra/httpd-info.conf
Exposing stats from module:
Exposing 'mod_status' is controlled by the config file under '/usr/local/apache2/conf/extra/httpd-info.conf'. Make sure that configuration file contains at least the following (other directives will also be there):
# Allow server status reports generated by mod_status,
# with the URL of http://servername/server-status
<Location /server-status>
    SetHandler server-status
</Location>

#
# ExtendedStatus controls whether Apache will generate "full" status
# information (ExtendedStatus On) or just basic information (ExtendedStatus
# Off) when the "server-status" handler is called. The default is Off.
#
ExtendedStatus On
For detailed instructions on the 'mod_status' module, see the Apache documentation.
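Besides the HTML page, mod_status also serves a machine-readable variant at '/server-status?auto', which returns plain "Key: value" lines; this is the form the Telegraf plugin scrapes. A minimal parsing sketch (the sample payload below is illustrative, not real server output; in practice you would fetch it from http://servername/server-status?auto):

```python
# Sketch: parse the machine-readable output of Apache's /server-status?auto
# endpoint. The sample payload is illustrative only.

def parse_server_status(payload):
    """Turn 'Key: value' lines into a dict, converting numbers where possible."""
    status = {}
    for line in payload.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        try:
            status[key.strip()] = float(value) if "." in value else int(value)
        except ValueError:
            status[key.strip()] = value  # e.g. the Scoreboard string
    return status

sample = """\
Total Accesses: 1024
Total kBytes: 2048
Uptime: 600
ReqPerSec: 1.70667
BusyWorkers: 3
IdleWorkers: 7
Scoreboard: _W_K......
"""

status = parse_server_status(sample)  # status["BusyWorkers"] → 3
```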
Objects and Counters
The following objects and their counters are collected:
Object: Apache
Identifiers: Namespace, Server
Attributes: Node IP, Node Name, Port, Parent Server Config Generation, Parent Server MPM Generation, Server Uptime, Is Stopping
Datapoints: Busy Workers, Bytes per Request, Bytes per Second, CPU Children System, CPU Children User, CPU Load, CPU System, CPU User, Asynchronous Connections Closing, Asynchronous Connections Keep Alive, Asynchronous Connections Writing, Connections Total, Duration per Request, Idle Workers, Load Average (last 1m), Load Average (last 15m), Load Average (last 5m), Processes, Requests per Second, Total Accesses, Total Duration, Total KBytes, Scoreboard Closing, Scoreboard DNS Lookups, Scoreboard Finishing, Scoreboard Idle Cleanup, Scoreboard Keep Alive, Scoreboard Logging, Scoreboard Open, Scoreboard Reading, Scoreboard Sending, Scoreboard Starting, Scoreboard Waiting
Troubleshooting
Additional information may be found on the Support page.
Consul Data Collector
Cloud Insights uses this data collector to gather metrics from Consul.
This topic is considered Preview documentation and is subject to change.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Consul.
If you haven’t configured an Agent for collection, you are prompted to install an agent in your environment.
If you have an agent already configured, select the appropriate Operating System or Platform and click Continue.
2. Follow the instructions in the Consul Configuration screen to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the Consul documentation.
Objects and Counters
The following objects and their counters are collected:
Object: Consul
Identifiers: Namespace, Check ID, Service Node
Attributes: Node IP, Node OS, Node UUID, Node Name, Service Name, Check Name, Service ID, Status
Datapoints: Critical, Passing, Warning
Troubleshooting
Additional information may be found on the Support page.
Couchbase Data Collector
Cloud Insights uses this data collector to gather metrics from Couchbase.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Couchbase.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the Couchbase documentation.
Objects and Counters
The following objects and their counters are collected:
Object: Couchbase Node
Identifiers: Namespace, Cluster, Couchbase Node Hostname
Attributes: Node Name, Node IP
Datapoints: Memory Free, Memory Total

Object: Couchbase Bucket
Identifiers: Namespace, Bucket, Cluster
Attributes: Node Name, Node IP
Datapoints: Data Used, Data Fetches, Disk Used, Item Count, Memory Used, Operations Per Second, Quota Used
Troubleshooting
Additional information may be found on the Support page.
CouchDB Data Collector
Cloud Insights uses this data collector to gather metrics from CouchDB.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose CouchDB.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the CouchDB documentation.
Objects and Counters
The following objects and their counters are collected:
Object: CouchDB
Identifiers: Namespace, Server
Attributes: Node Name, Node IP
Datapoints: Authentication Cache Hits, Authentication Cache Miss, Database Reads, Database Writes, Databases Open, Open OS Files, Max Request Time, Min Request Time, Httpd Request Methods Copy, Httpd Request Methods Delete, Httpd Request Methods Get, Httpd Request Methods Head, Httpd Request Methods Post, Httpd Request Methods Put, Status Codes 200, Status Codes 201, Status Codes 202, Status Codes 301, Status Codes 304, Status Codes 400, Status Codes 401, Status Codes 403, Status Codes 404, Status Codes 405, Status Codes 409, Status Codes 412, Status Codes 500
Troubleshooting
Additional information may be found on the Support page.
Docker Data Collector
Cloud Insights uses this data collector to gather metrics from Docker.
This topic is considered Preview documentation and is subject to change.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Docker.
If you haven’t configured an Agent for collection, you are prompted to install an agent in your environment.
If you have an agent already configured, select the appropriate Operating System or Platform and click Continue.
2. Follow the instructions in the Docker Configuration screen to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
The Telegraf input plugin for Docker collects metrics through a specified UNIX socket or a TCP endpoint.
Compatibility
Configuration was developed against Docker version 1.12.6.
Setting Up
Accessing Docker through a UNIX socket
If the Telegraf agent is running on bare metal, add the telegraf Unix user to the docker Unix group by running the following:
sudo usermod -aG docker telegraf
If the Telegraf agent is running within a Kubernetes pod, expose the Docker Unix socket by mapping the socket into the pod as a volume and then mounting that volume to /var/run/docker.sock. For example, add the following to the PodSpec:
volumes:
  ...
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: File
Then, add the following to the Container:
volumeMounts:
  ...
  - name: docker-sock
    mountPath: /var/run/docker.sock
Note that the Cloud Insights installer provided for the Kubernetes platform takes care of this mapping automatically.
Accessing Docker through a TCP endpoint
By default, Docker uses port 2375 for unencrypted access and port 2376 for encrypted access.
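As background on where counters such as "Usage Percent" come from: the Docker Engine API's stats endpoint reports cumulative CPU counters, and the calculation described in the Docker Engine API documentation turns them into a percentage by comparing deltas against the system-wide counter. A sketch with illustrative numbers (not real daemon output):

```python
# Sketch: CPU usage percent from a Docker stats sample, following the
# calculation described in the Docker Engine API documentation. The numbers
# in `sample` are illustrative assumptions, not real daemon output.

def cpu_usage_percent(stats):
    """Compute CPU percent from cpu_stats/precpu_stats of one stats sample."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (stats["cpu_stats"]["system_cpu_usage"]
                    - stats["precpu_stats"]["system_cpu_usage"])
    if system_delta <= 0:
        return 0.0
    return cpu_delta / system_delta * stats["cpu_stats"]["online_cpus"] * 100.0

sample = {
    "precpu_stats": {"cpu_usage": {"total_usage": 100_000_000},
                     "system_cpu_usage": 1_000_000_000},
    "cpu_stats": {"cpu_usage": {"total_usage": 200_000_000},
                  "system_cpu_usage": 2_000_000_000,
                  "online_cpus": 4},
}

percent = cpu_usage_percent(sample)  # → 40.0
```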
Objects and Counters
The following objects and their counters are collected:
Object: Docker Engine
Identifiers: Namespace, Docker Engine
Attributes: Node Name, Node IP, Node UUID, Node OS, Kubernetes Cluster, Docker Version, Unit
Datapoints: Memory, Containers, Containers Paused, Containers Running, Containers Stopped, CPUs, Go Routines, Images, Listener Events, Used File Descriptors, Data Available, Data Total, Data Used, Metadata Available, Metadata Total, Metadata Used, Pool Blocksize
Object: Docker Container
Identifiers: Namespace, Container Name, Docker Engine
Attributes: Kubernetes Container Hash, Kubernetes Container Ports, Kubernetes Container Restart Count, Kubernetes Container Termination Message Path, Kubernetes Container Termination Message Policy, Kubernetes Pod Termination Grace Period, Container Image, Container Status, Container Version, Node Name, Kubernetes Container Log Path, Kubernetes Container Name, Kubernetes Docker Type, Kubernetes Pod Name, Kubernetes Pod Namespace, Kubernetes Pod UID, Kubernetes Sandbox ID, Node IP, Node UUID, Docker Version, Kubernetes IO Config Seen, Kubernetes IO Config Source, OpenShift IO SCC, Kubernetes Description, Kubernetes Display Name, OpenShift Tags, Kompose Service, Pod Template Hash, Controller Revision Hash
Datapoints: Memory Active Anonymous, Memory Active File, Memory Cache, Memory Hierarchical Limit, Memory Inactive Anonymous, Memory Inactive File, Memory Limit, Memory Mapped File, Memory Max Usage, Memory Page Fault, Memory Page Major Fault, Memory Paged In, Memory Paged Out, Memory Resident Set Size, Memory Resident Set Size Huge, Memory Total Active Anonymous, Memory Total Active File, Memory Total Cache, Memory Total Inactive Anonymous, Memory Total Inactive File, Memory Total Mapped File, Memory Total Page Fault, Memory Total Page Major Fault, Memory Total Paged In, Memory Total Paged Out, Memory Total Resident Set Size, Memory Total Resident Set Size Huge, Memory Total Unevictable
Object: Docker Container Block IO
Identifiers: Namespace, Container Name, Device, Docker Engine
Attributes: Kubernetes Container Hash, Kubernetes Container Ports, Kubernetes Container Restart Count, Kubernetes Container Termination Message Path, Kubernetes Container Termination Message Policy, Kubernetes Pod Termination Grace Period, Container Image, Container Status, Container Version, Node Name, Kubernetes Container Log Path, Kubernetes Container Name, Kubernetes Docker Type, Kubernetes Pod Name, Kubernetes Pod Namespace, Kubernetes Pod UID, Kubernetes Sandbox ID, Node IP, Node UUID, Docker Version, Kubernetes Config Seen, Kubernetes Config Source, OpenShift SCC, Kubernetes Description, Kubernetes Display Name, OpenShift Tags, Schema Version, Pod Template Hash, Controller Revision Hash
Datapoints: IO Service Bytes Recursive Async, IO Service Bytes Recursive Read, IO Service Bytes Recursive Sync, IO Service Bytes Recursive Total, IO Service Bytes Recursive Write, IO Serviced Recursive Async, IO Serviced Recursive Read, IO Serviced Recursive Sync, IO Serviced Recursive Total, IO Serviced Recursive Write
Object: Docker Container Network
Identifiers: Namespace, Container Name, Network, Docker Engine
Attributes: Container Image, Container Status, Container Version, Node Name, Node IP, Node UUID, Node OS, K8s Cluster, Docker Version, Container ID
Datapoints: RX Dropped, RX Bytes, RX Errors, RX Packets, TX Dropped, TX Bytes, TX Errors, TX Packets
Object: Docker Container CPU
Identifiers: Namespace, Container Name, CPU, Docker Engine
Attributes: Kubernetes Container Hash, Kubernetes Container Ports, Kubernetes Container Restart Count, Kubernetes Container Termination Message Path, Kubernetes Container Termination Message Policy, Kubernetes Pod Termination Grace Period, Kubernetes Config Seen, Kubernetes Config Source, OpenShift SCC, Container Image, Container Status, Container Version, Node Name, Kubernetes Container Log Path, Kubernetes Container Name, Kubernetes Docker Type, Kubernetes Pod Name, Kubernetes Pod Namespace, Kubernetes Pod UID, Kubernetes Sandbox ID, Node IP, Node UUID, Node OS, Kubernetes Cluster, Docker Version, Kubernetes Description, Kubernetes Display Name, OpenShift Tags, Schema Version, Pod Template Hash
Datapoints: Throttling Periods, Throttling Throttled Periods, Throttling Throttled Time, Usage In Kernel Mode, Usage In User Mode, Usage Percent, Usage System, Usage Total
Troubleshooting
Problem: I do not see my Docker metrics in Cloud Insights after following the instructions on the configuration page.

Try this: Check the Telegraf agent logs to see if it reports the following error:

E! Error in plugin [inputs.docker]: Got permission denied while trying to connect to the Docker daemon socket

If it does, take the necessary steps to provide the Telegraf agent access to the Docker Unix socket as specified above.
Additional information may be found on the Support page.
Elasticsearch Data Collector
Cloud Insights uses this data collector to gather metrics from Elasticsearch.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Elasticsearch.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the Elasticsearch documentation.
Objects and Counters
The following objects and their counters are collected:
Object: Elasticsearch Cluster
Identifiers: Namespace, Cluster
Attributes: Node IP, Node Name, Cluster Status
Datapoints: Master Node Count, Total Node Count, Filesystem Data Available (bytes), Filesystem Data Free (bytes), Filesystem Data Total (bytes), JVM Threads, OS Allocated Processors, OS Available Processors, OS Mem Free (bytes), OS Mem Free, OS Mem Total (bytes), OS Mem Used (bytes), OS Mem Used, Process CPU, Indices Completion Size (bytes), Indices Count, Indices Docs Count, Indices Docs Deleted, Indices Field Data Evictions, Indices Field Data Memory Size (bytes), Indices Query Cache Count, Indices Cache Size, Indices Segments Count, Indices Segments Doc Values Memory (bytes), Indices Shards Index Primaries Avg, Indices Shards Index Primaries Max, Indices Shards Index Primaries Min, Indices Shards Index Replication Avg, Indices Shards Index Replication Max, Indices Shards Index
Object: Elasticsearch Node
Identifiers: Namespace, Cluster, ES Node ID, ES Node IP, ES Node
Attributes: Zone ID
Datapoints: Machine Learning Enabled, Machine Learning Memory, Machine Learning Max Open Jobs, X-Pack Installed, Breakers Accounting Estimated Size (bytes), Breakers Accounting Limit Size (bytes), Breakers Accounting Overhead, Breakers Accounting Tripped, Breakers Field Data Estimated Size (bytes), Breakers Field Data Limit Size (bytes), Breakers Field Data Overhead, Breakers Field Data Tripped, Breakers In-Flight Estimated Size (bytes), Breakers In-Flight Limit Size (bytes), Breakers In-Flight Overhead, Breakers In-Flight Tripped, Breakers Parent Estimated Size (bytes), Breakers Parent Limit Size (bytes), Breakers Parent Overhead, Breakers Parent Tripped, Breakers Request Estimated Size (bytes), Breakers Request Limit Size (bytes), Breakers Request Overhead
Troubleshooting
Additional information may be found on the Support page.
Flink Data Collector
Cloud Insights uses this data collector to gather metrics from Flink.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Flink.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
A full Flink deployment involves the following components:
• JobManager: The Flink master. Coordinates a series of TaskManagers (slaves). In a High Availability setup, the system will have more than one JobManager.
• TaskManager: The Flink slave. This is where Flink operators are executed.

The Flink plugin is based on Telegraf’s Jolokia plugin. To gather information from all Flink components, JMX needs to be configured and exposed via Jolokia on all components.
Compatibility
Configuration was developed against Flink version 1.7.0.
Setting Up
Jolokia Agent Jar
For all individual components, a version of the Jolokia agent jar file must be downloaded. The version tested against was Jolokia agent 1.6.0.
The instructions below assume the downloaded jar file (jolokia-jvm-1.6.0-agent.jar) is placed under '/opt/flink/lib/'.
JobManager
To configure the JobManager to expose the Jolokia API, set up the following environment variable on your nodes, then restart the JobManager:
export FLINK_ENV_JAVA_OPTS="-javaagent:/opt/flink/lib/jolokia-jvm-1.6.0-agent.jar=port=8778,host=0.0.0.0"
You can choose a different port for Jolokia (8778 here). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP must be accessible from the Telegraf plugin.
TaskManager
To configure the TaskManager(s) to expose the Jolokia API, set up the following environment variable on your nodes, then restart the TaskManager:
export FLINK_ENV_JAVA_OPTS="-javaagent:/opt/flink/lib/jolokia-jvm-1.6.0-agent.jar=port=8778,host=0.0.0.0"
You can choose a different port for Jolokia (8778 here). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP must be accessible from the Telegraf plugin.
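Once an agent is up, you can verify it responds by issuing a Jolokia "read" request. The sketch below only builds the GET-style request URL; the host is illustrative, the port is 8778 as configured above, and java.lang:type=Memory is the standard JVM memory MBean (Flink's own metric MBeans vary by deployment):

```python
# Sketch: build a Jolokia GET-style "read" URL for an agent exposed above.
# Host and MBean are illustrative assumptions; port 8778 matches the setup.

from urllib.parse import quote

def jolokia_read_url(host, mbean, attribute, port=8778):
    """URL for Jolokia's read operation: /jolokia/read/<mbean>/<attribute>."""
    return "http://{}:{}/jolokia/read/{}/{}".format(
        host, port, quote(mbean, safe=""), quote(attribute, safe=""))

url = jolokia_read_url("10.0.0.5", "java.lang:type=Memory", "HeapMemoryUsage")
```

In practice you would fetch this URL (for example with curl or urllib) and read the JSON "value" field from the response.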
Objects and Counters
The following objects and their counters are collected:
Object: Flink Task Manager
Identifiers: Cluster, Namespace, Server
Attributes: Node Name, Task Manager ID, Node IP
Datapoints: Network Available Memory Segments, Network Total Memory Segments, Garbage Collection PS MarkSweep Count, Garbage Collection PS MarkSweep Time, Garbage Collection PS Scavenge Count, Garbage Collection PS Scavenge Time, Heap Memory Committed, Heap Memory Init, Heap Memory Max, Heap Memory Used, Thread Count Daemon, Thread Count Peak, Thread Count, Thread Count Total Started
Object: Flink Job
Identifiers: Cluster, Namespace, Server, Job ID
Attributes: Node Name, Job Name, Node IP, Last Checkpoint External Path, Restarting Time
Datapoints: Downtime, Full Restarts, Last Checkpoint Alignment Buffered, Last Checkpoint Duration, Last Checkpoint Size, Number of Completed Checkpoints, Number of Failed Checkpoints, Number of in Progress Checkpoints, Number of Checkpoints, Uptime
Object: Flink Job Manager
Identifiers: Cluster, Namespace, Server
Attributes: Node Name, Node IP
Datapoints: Garbage Collection PS MarkSweep Count, Garbage Collection PS MarkSweep Time, Garbage Collection PS Scavenge Count, Garbage Collection PS Scavenge Time, Heap Memory Committed, Heap Memory Init, Heap Memory Max, Heap Memory Used, Number Registered Task Managers, Number Running Jobs, Task Slots Available, Task Slots Total, Thread Count Daemon, Thread Count Peak, Thread Count, Thread Count Total Started
Object: Flink Task
Identifiers: Cluster, Namespace, Job ID, Task ID
Attributes: Server, Node Name, Job Name, Sub Task Index, Task Attempt ID, Task Attempt Number, Task Name, Task Manager ID, Node IP, Current Input Watermark
Datapoints: Buffers In Pool Usage, Buffers In Queue Length, Buffers Out Pool Usage, Buffers Out Queue Length, Number Buffers In Local, Number Buffers In Local Per Second Count, Number Buffers In Local Per Second Rate, Number Buffers In Remote, Number Buffers In Remote Per Second Count, Number Buffers In Remote Per Second Rate, Number Buffers Out, Number Buffers Out Per Second Count, Number Buffers Out Per Second Rate, Number Bytes In Local, Number Bytes In Local Per Second Count, Number Bytes In Local Per Second Rate, Number Bytes In Remote, Number Bytes In Remote Per Second Count, Number Bytes In Remote Per Second Rate, Number Bytes Out, Number Bytes Out Per Second Count, Number Bytes Out Per Second Rate, Number Records In, Number Records In Per Second Count, Number Records In Per Second Rate
Object: Flink Task Operator
Identifiers: Cluster, Namespace, Job ID, Operator ID, Task ID
Attributes: Server, Node Name, Job Name, Operator Name, Sub Task Index, Task Attempt ID, Task Attempt Number, Task Name, Task Manager ID, Node IP
Datapoints: Current Input Watermark, Current Output Watermark, Number Records In, Number Records In Per Second Count, Number Records In Per Second Rate, Number Records Out, Number Records Out Per Second Count, Number Records Out Per Second Rate, Number Late Records Dropped, Assigned Partitions, Bytes Consumed Rate, Commit Latency Avg, Commit Latency Max, Commit Rate, Commits Failed, Commits Succeeded, Connection Close Rate, Connection Count, Connection Creation Rate, Count, Fetch Latency Avg, Fetch Latency Max, Fetch Rate, Fetch Size Avg, Fetch Size Max, Fetch Throttle Time Avg, Fetch Throttle Time Max, Heartbeat Rate, Incoming Byte Rate, IO Ratio, IO Time Avg (ns), IO Wait Ratio, IO Wait Time Avg (ns), Join Rate, Join Time Avg, Last Heartbeat Ago
Troubleshooting
Additional information may be found on the Support page.
Hadoop Data Collector
Cloud Insights uses this data collector to gather metrics from Hadoop.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Hadoop.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
A full Hadoop deployment involves the following components:
• NameNode: The Hadoop Distributed File System (HDFS) master. Coordinates a series of DataNodes (slaves).
• Secondary NameNode: A warm failover for the main NameNode. In Hadoop, promotion to NameNode does not occur automatically; the Secondary NameNode gathers information from the NameNode so that it is ready to be promoted when needed.
• DataNode: The HDFS slave. The actual owner of the data.
• ResourceManager: The compute master (YARN). Coordinates a series of NodeManagers (slaves).
• NodeManager: The compute slave. The actual location where applications run.
• JobHistoryServer: As the name suggests, serves the history of completed jobs.
The Hadoop plugin is based on Telegraf's Jolokia plugin. To gather information from all Hadoop components, JMX needs to be configured and exposed via Jolokia on all components.
Compatibility
Configuration was developed against Hadoop version 2.9.2.
Setting Up
Jolokia Agent Jar
For all individual components, a version of the Jolokia agent jar file must be downloaded. The version tested against was Jolokia agent 1.6.0. The instructions below assume that the downloaded jar file (jolokia-jvm-1.6.0-agent.jar) is placed under '/opt/hadoop/lib/'.
NameNode
To configure the NameNode to expose the Jolokia API, you can set up the following in <HADOOP_HOME>/etc/hadoop/hadoop-env.sh:
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS
-javaagent:/opt/hadoop/lib/jolokia-jvm-1.6.0-agent.jar=port=7800,host=0.0.0.0
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8000
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.password.file=$HADOOP_HOME/conf/jmxremote.password"

You can choose a different port for JMX (8000 above) and Jolokia (7800). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP needs to be accessible from the Telegraf plugin. You can use the option '-Dcom.sun.management.jmxremote.authenticate=false' if you don't want to authenticate. Use at your own risk.
Secondary NameNode
To configure the Secondary NameNode to expose the Jolokia API, you can set up the following in <HADOOP_HOME>/etc/hadoop/hadoop-env.sh:
export HADOOP_SECONDARYNAMENODE_OPTS="$HADOOP_SECONDARYNAMENODE_OPTS
-javaagent:/opt/hadoop/lib/jolokia-jvm-1.6.0-agent.jar=port=7802,host=0.0.0.0
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8002
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.password.file=$HADOOP_HOME/conf/jmxremote.password"

You can choose a different port for JMX (8002 above) and Jolokia (7802). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP needs to be accessible from the Telegraf plugin. You can use the option '-Dcom.sun.management.jmxremote.authenticate=false' if you don't want to authenticate. Use at your own risk.
DataNode
To configure the DataNodes to expose the Jolokia API, you can set up the following in <HADOOP_HOME>/etc/hadoop/hadoop-env.sh:
export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS
-javaagent:/opt/hadoop/lib/jolokia-jvm-1.6.0-agent.jar=port=7801,host=0.0.0.0
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8001
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.password.file=$HADOOP_HOME/conf/jmxremote.password"

You can choose a different port for JMX (8001 above) and Jolokia (7801). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP needs to be accessible from the Telegraf plugin. You can use the option '-Dcom.sun.management.jmxremote.authenticate=false' if you don't want to authenticate. Use at your own risk.
ResourceManager
To configure the ResourceManager to expose the Jolokia API, you can set up the following in <HADOOP_HOME>/etc/hadoop/hadoop-env.sh:
export YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS
-javaagent:/opt/hadoop/lib/jolokia-jvm-1.6.0-agent.jar=port=7803,host=0.0.0.0
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8003
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.password.file=$HADOOP_HOME/conf/jmxremote.password"

You can choose a different port for JMX (8003 above) and Jolokia (7803). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP needs to be accessible from the Telegraf plugin. You can use the option '-Dcom.sun.management.jmxremote.authenticate=false' if you don't want to authenticate. Use at your own risk.
NodeManager
To configure the NodeManagers to expose the Jolokia API, you can set up the following in <HADOOP_HOME>/etc/hadoop/hadoop-env.sh:
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS
-javaagent:/opt/hadoop/lib/jolokia-jvm-1.6.0-agent.jar=port=7804,host=0.0.0.0
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8004
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.password.file=$HADOOP_HOME/conf/jmxremote.password"

You can choose a different port for JMX (8004 above) and Jolokia (7804). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP needs to be accessible from the Telegraf plugin. You can use the option '-Dcom.sun.management.jmxremote.authenticate=false' if you don't want to authenticate. Use at your own risk.
JobHistoryServer
To configure the JobHistoryServer to expose the Jolokia API, you can set up the following in <HADOOP_HOME>/etc/hadoop/hadoop-env.sh:
export HADOOP_JOB_HISTORYSERVER_OPTS="$HADOOP_JOB_HISTORYSERVER_OPTS
-javaagent:/opt/hadoop/lib/jolokia-jvm-1.6.0-agent.jar=port=7805,host=0.0.0.0
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8005
-Dcom.sun.management.jmxremote.password.file=$HADOOP_HOME/conf/jmxremote.password"

You can choose a different port for JMX (8005 above) and Jolokia (7805). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP needs to be accessible from the Telegraf plugin. You can use the option '-Dcom.sun.management.jmxremote.authenticate=false' if you don't want to authenticate. Use at your own risk.
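Once each component exposes Jolokia as above, the Telegraf side polls those endpoints. As a sketch only (the port assignments mirror the examples above, localhost is an assumption, and the configuration Cloud Insights generates for you may differ), a Telegraf `jolokia2_agent` input covering all six components could look like:

```toml
# Sketch: Telegraf jolokia2_agent input polling the Jolokia ports
# configured in the examples above (7800-7805). Adjust hosts and ports
# to match your own deployment.
[[inputs.jolokia2_agent]]
  urls = [
    "http://localhost:7800/jolokia",  # NameNode
    "http://localhost:7801/jolokia",  # DataNode
    "http://localhost:7802/jolokia",  # Secondary NameNode
    "http://localhost:7803/jolokia",  # ResourceManager
    "http://localhost:7804/jolokia",  # NodeManager
    "http://localhost:7805/jolokia",  # JobHistoryServer
  ]
```

Each URL must be reachable from the host where the Telegraf agent runs, which is why the Jolokia `host=` binding above must not be limited to an interface the agent cannot reach.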
Objects and Counters
The following objects and their counters are collected:
Object: Hadoop Secondary NameNode
Identifiers: Cluster, Namespace, Server
Attributes: Node Name, Node IP, Compile Info, Version
Datapoints: GC Count, GC Copies Count, GC Marks Sweep Compact Count, GC Number Info Threshold Exceeded, GC Number Warning Threshold Exceeded, GC Time, GC Copy Time, GC Marks Sweep Compact Time, GC Total Extra Sleep Time, Logs Error Count, Logs Fatal Count, Logs Info Count, Logs Warn Count, Memory Heap Committed, Memory Heap Max, Memory Heap Used, Memory Max, Memory Non Heap Committed, Memory Non Heap Max, Memory Non Heap Used, Threads Blocked, Threads New, Threads Runnable, Threads Terminated, Threads Timed Waiting, Threads Waiting
Object: Hadoop NodeManager
Identifiers: Cluster, Namespace, Server
Attributes: Node Name, Node IP
Datapoints: Containers Allocated, Memory Allocated, Memory Allocated Opportunistic, Virtual Cores Allocated Opportunistic, Virtual Cores Allocated, Memory Available, Virtual Cores Available, Directories Bad Local, Directories Bad Log, Cache Size Before Clean, Container Launch Duration Avg Time, Container Launch Duration Number Of Operations, Containers Completed, Containers Failed, Containers Initing, Containers Killed, Containers Launched, Containers Reiniting, Containers Rolled Back on Failure, Containers Running, Disk Utilization Good Local Directories, Disk Utilization Good Log Directories, Bytes Deleted Private, Bytes Deleted Public, Containers Running Opportunistic, Bytes Deleted Total, Shuffle Connections, Shuffle Output Bytes, Shuffle Outputs Failed, Shuffle Outputs Ok, GC Count, GC Copies Count, GC Marks Sweep Compact Count, GC Number Info
Object: Hadoop ResourceManager
Identifiers: Cluster, Namespace, Server
Attributes: Node Name, Node IP
Datapoints: ApplicationMaster Launch Delay Avg, ApplicationMaster Launch Delay Number, ApplicationMaster Register Delay Avg, ApplicationMaster Register Delay Number, NodeManager Active Number, NodeManager Decommissioned Number, NodeManager Decommissioning Number, NodeManager Lost Number, NodeManager Rebooted Number, NodeManager Shutdown Number, NodeManager Healthy Number, NodeManager Memory Limit, NodeManager Virtual Cores Limit, Used Capacity, Active Applications, Active Users, Aggregate Containers Allocated, Aggregate Containers Preempted, Aggregate Containers Released, Aggregate Memory Seconds Preempted, Aggregate Node Local Containers Allocated, Aggregate Off Switch Containers Allocated, Aggregate Rack Local Containers Allocated, Aggregate Virtual Cores
Object: Hadoop DataNode
Identifiers: Cluster, Namespace, Server
Attributes: Node Name, Node IP, Cluster ID, Version
Datapoints: Transceiver Count, Transmits in Progress, Cache Capacity, Cache Used, Capacity, DFS Used, Estimated Capacity Lost Total, Last Volume Failure Rate, Blocks Number Cached, Blocks Number Failed to Cache, Blocks Number Failed to Uncache, Volumes Number Failed, Capacity Remaining, GC Count, GC Copies Count, GC Marks Sweep Compact Count, GC Number Info Threshold Exceeded, GC Number Warning Threshold Exceeded, GC Time, GC Copy Time, GC Marks Sweep Compact Time, GC Total Extra Sleep Time, Logs Error Count, Logs Fatal Count, Logs Info Count, Logs Warn Count, Memory Heap Committed, Memory Heap Max, Memory Heap Used, Memory Max, Memory Non Heap Committed, Memory Non Heap Max, Memory Non Heap Used
Object: Hadoop NameNode
Identifiers: Cluster, Namespace, Server
Attributes: Node Name, Node IP, Transaction ID Last Written, Time Since Last Loaded Edits, HA State, File System State, Block Pool ID, Cluster ID, Compile Info, Distinct Version Count, Version
Datapoints: Block Capacity, Blocks Total, Capacity Total, Capacity Used, Capacity Used Non DFS, Blocks Corrupt, Estimated Capacity Lost Total, Blocks Excess, Heartbeats Expired, Files Total, File System Lock Queue Length, Blocks Missing, Blocks Missing Replication with Factor One, Clients Active, Data Nodes Dead, Data Nodes Decommissioning Dead, Data Nodes Decommissioning Live, Data Nodes Decommissioning, Encryption Zones Number, Data Nodes Entering Maintenance, Files Under Construction, Data Nodes Dead in Maintenance, Data Nodes Live in Maintenance, Data Nodes Live, Storages Stale, Replication Pending Timeouts, Data Node Message Pending, Blocks Pending Deletion, Blocks Pending Replication
Object: Hadoop JobHistoryServer
Identifiers: Cluster, Namespace, Server
Attributes: Node Name, Node IP
Datapoints: GC Count, GC Copies Count, GC Marks Sweep Compact Count, GC Number Info Threshold Exceeded, GC Number Warning Threshold Exceeded, GC Time, GC Copy Time, GC Marks Sweep Compact Time, GC Total Extra Sleep Time, Logs Error Count, Logs Fatal Count, Logs Info Count, Logs Warn Count, Memory Heap Committed, Memory Heap Max, Memory Heap Used, Memory Max, Memory Non Heap Committed, Memory Non Heap Max, Memory Non Heap Used, Threads Blocked, Threads New, Threads Runnable, Threads Terminated, Threads Timed Waiting, Threads Waiting
Troubleshooting
Additional information may be found on the Support page.
HAProxy Data Collector
Cloud Insights uses this data collector to gather metrics from HAProxy.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose HAProxy.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Telegraf’s plugin for HAProxy relies on HAProxy Stats being enabled. This capability is built into HAProxy but is not enabled out of the box. When enabled, HAProxy exposes an HTML endpoint that can be viewed in your browser or scraped to extract the status of all HAProxy configurations.
Compatibility:
Configuration was developed against HAProxy version 1.9.4.
Setting Up:
To enable stats, edit your haproxy configuration file and add the following lines after the 'defaults' section, using your own user/password and/or haproxy URL:

stats enable
stats auth myuser:mypassword
stats uri /haproxy?stats
The following is a simplified example configuration file with stats enabled:
global
  daemon
  maxconn 256

defaults
  mode http
  stats enable
  stats uri /haproxy?stats
  stats auth myuser:mypassword
  timeout connect 5000ms
  timeout client 50000ms
  timeout server 50000ms

frontend http-in
  bind *:80
  default_backend servers

frontend http-in9080
  bind *:9080
  default_backend servers_2

backend servers
  server server1 10.128.0.55:8080 check ssl verify none
  server server2 10.128.0.56:8080 check ssl verify none

backend servers_2
  server server3 10.128.0.57:8080 check ssl verify none
  server server4 10.128.0.58:8080 check ssl verify none
For complete and up to date instructions, see the HAProxy documentation.
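On the Telegraf side, the haproxy input then points at the stats URI using the credentials from your haproxy configuration. A minimal sketch, assuming the example user, password, and stats URI shown above:

```toml
# Sketch: Telegraf haproxy input scraping the stats endpoint enabled above.
# Credentials and the URI must match your haproxy.cfg 'stats auth' and
# 'stats uri' settings; host and port are assumptions for illustration.
[[inputs.haproxy]]
  servers = ["http://myuser:mypassword@localhost:80/haproxy?stats"]
```

Listing multiple entries in `servers` lets a single agent scrape several HAProxy instances.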
Objects and Counters
The following objects and their counters are collected:
Object: HAProxy Frontend
Identifiers: Namespace, Address, Proxy
Attributes: Node IP, Node Name, Proxy ID, Mode, Process id, Sessions Rate Limit, Server id, Sessions Limit, Status
Datapoints: Bytes In, Bytes Out, Cache Hits, Cache Lookups, Compression Bytes Bypassed, Compression Bytes In, Compression Bytes Out, Compression Responses, Connection Rate, Connection Rate Max, Connections Total, Requests Denied by Connection Rule, Requests Denied by Security Concerns, Responses Denied by Security Concerns, Requests Denied by Session Rule, Requests Errors, Responses 1xx, Responses 2xx, Responses 3xx, Responses 4xx, Responses 5xx, Responses Other, Requests Intercepted, Sessions Rate, Sessions Rate Max, Requests Rate, Requests Rate Max, Requests Total, Sessions, Sessions Max, Sessions Total, Requests Rewrites
Object: HAProxy Server
Identifiers: Namespace, Address, Proxy, Server
Attributes: Node IP, Node Name, Check Time to Finish, Check Fall Configuration, Check Health Value, Check Rise Configuration, Check Status, Proxy ID, Last Change Time, Last Session Time, Mode, Process id, Server id, Status, Weight
Datapoints: Active Servers, Backup Servers, Bytes In, Bytes Out, Check Downs, Check Fails, Client Aborts, Connections, Connection Average Time, Downtime Total, Denied Responses, Connection Errors, Response Errors, Responses 1xx, Responses 2xx, Responses 3xx, Responses 4xx, Responses 5xx, Responses Other, Server Selected Total, Queue Current, Queue Max, Queue Average Time, Sessions per Second, Sessions per Second Max, Connection Reuse, Response Time Average, Sessions, Sessions Max, Server Transfer Aborts, Sessions Total, Sessions Total Time Average, Requests Redispatches, Requests Retries, Requests Rewrites
Object: HAProxy Backend
Identifiers: Namespace, Address, Proxy
Attributes: Node IP, Node Name, Proxy ID, Last Change Time, Last Session Time, Mode, Process id, Server id, Sessions Limit, Status, Weight
Datapoints: Active Servers, Backup Servers, Bytes In, Bytes Out, Cache Hits, Cache Lookups, Check Downs, Client Aborts, Compression Bytes Bypassed, Compression Bytes In, Compression Bytes Out, Compression Responses, Connections, Connection Average Time, Downtime Total, Requests Denied by Security Concerns, Responses Denied by Security Concerns, Connection Errors, Response Errors, Responses 1xx, Responses 2xx, Responses 3xx, Responses 4xx, Responses 5xx, Responses Other, Server Selected Total, Queue Current, Queue Max, Queue Average Time, Sessions per Second, Sessions per Second Max, Requests Total, Connection Reuse, Response Time Average, Sessions, Sessions Max, Server Transfer Aborts, Sessions Total, Sessions Total Time
Troubleshooting
Additional information may be found on the Support page.
JVM Data Collector
Cloud Insights uses this data collector to gather metrics from JVM.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose JVM.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the JVM documentation.
Objects and Counters
The following objects and their counters are collected:
Object: JVM
Identifiers: Namespace, JVM
Attributes: OS Architecture, OS Name, OS Version, Runtime Specification, Runtime Specification Vendor, Runtime Specification Version, Uptime, Runtime VM Name, Runtime VM Vendor, Runtime VM Version, Node Name, Node IP
Datapoints: Class Loaded, Class Loaded Total, Class Unloaded, Memory Heap Committed, Memory Heap Init, Memory Heap Used Max, Memory Heap Used, Memory Non Heap Committed, Memory Non Heap Init, Memory Non Heap Max, Memory Non Heap Used, Memory Objects Pending Finalization, OS Processors Available, OS Committed Virtual Memory Size, OS Free Physical Memory Size, OS Free Swap Space Size, OS Max File Descriptor Count, OS Open File Descriptors Count, OS Processor CPU Load, OS Processor CPU Time, OS System CPU Load, OS System Load Average, OS Total Physical Memory Size, OS Total Swap Space Size, Thread Daemon Count, Thread Peak Count, Thread Count, Thread Total Started Count, Garbage Collector Copy Collection Count, Garbage Collector Copy Collection Time, Garbage Collector Mark-sweep Collection Count
Troubleshooting
Additional information may be found on the Support page.
Kafka Data Collector
Cloud Insights uses this data collector to gather metrics from Kafka.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Kafka.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
The Kafka plugin is based on Telegraf's Jolokia plugin. To gather information from all Kafka brokers, JMX needs to be configured and exposed via Jolokia on all components.
Compatibility
Configuration was developed against Kafka version 0.11.0.2.
Setting up
All the instructions below assume your install location for Kafka is '/opt/kafka'. Adapt the instructions to reflect your own install location.
Jolokia Agent Jar
A version of the Jolokia agent jar file must be downloaded. The version tested against was Jolokia agent 1.6.0. The instructions below assume that the downloaded jar file (jolokia-jvm-1.6.0-agent.jar) is placed under '/opt/kafka/libs/'.
Kafka Brokers
To configure Kafka Brokers to expose the Jolokia API, you can add the following in <KAFKA_HOME>/bin/kafka-server-start.sh, just before the 'kafka-run-class.sh' call:
export JMX_PORT=9999
export RMI_HOSTNAME=`hostname -I`
export KAFKA_JMX_OPTS="-javaagent:/opt/kafka/libs/jolokia-jvm-1.6.0-agent.jar=port=8778,host=0.0.0.0
-Dcom.sun.management.jmxremote.password.file=/opt/kafka/config/jmxremote.password
-Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=$RMI_HOSTNAME
-Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
Note that the example above uses 'hostname -I' to set up the 'RMI_HOSTNAME' environment variable. On machines with multiple IPs, this will need to be adjusted to gather the IP you care about for RMI connections.

You can choose a different port for JMX (9999 above) and Jolokia (8778). If you have an internal IP to lock Jolokia onto, you can replace the "catch all" 0.0.0.0 with your own IP. Note that this IP needs to be accessible from the Telegraf plugin. You can use the option '-Dcom.sun.management.jmxremote.authenticate=false' if you don’t want to authenticate. Use at your own risk.
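On the Telegraf side, the broker's Jolokia endpoint configured above can then be polled. A minimal sketch, assuming the example Jolokia port 8778 and a broker local to the agent (both assumptions; the configuration Cloud Insights generates may differ):

```toml
# Sketch: Telegraf jolokia2_agent input polling the Kafka broker's Jolokia
# endpoint set up above. Adjust host and port if you changed them; add one
# URL per broker in the cluster.
[[inputs.jolokia2_agent]]
  urls = ["http://localhost:8778/jolokia"]
```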
Objects and Counters
The following objects and their counters are collected:
Object: Kafka Broker
Identifiers: Cluster, Namespace, Broker
Attributes: Node Name, Node IP
Datapoints: Replica Manager Fetcher Max Lag, Zookeeper Client Connections, Zookeeper Client Connections (15m rate), Zookeeper Client Connections (5m rate), Zookeeper Client Connections (mean rate), Zookeeper Client Connections (1m rate), Replica Manager Partition Count, Thread Count Daemon, Thread Count Peak, Thread Count Current, Thread Count Total Started, Offline Partitions, Produce Requests Total Time (50th Percentile), Produce Requests Total Time (75th Percentile), Produce Requests Total Time (95th Percentile), Produce Requests Total Time (98th Percentile), Produce Requests Total Time (999th Percentile), Produce Requests Total Time (99th Percentile), Produce Requests Total Time, Produce Requests Total Time Max, Produce Requests Total Time Mean, Produce Requests Total Time Min, Produce Requests Total Time Stddev, Replica Manager ISR Shrinks
Troubleshooting
Additional information may be found on the Support page.
Kibana Data Collector
Cloud Insights uses this data collector to gather metrics from Kibana.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Kibana.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the Kibana documentation.
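Telegraf's kibana input only needs the address of the Kibana server, plus credentials if security is enabled. A minimal sketch, assuming Kibana listens on its default port 5601 on the agent host (an assumption for illustration):

```toml
# Sketch: Telegraf kibana input. 5601 is Kibana's default port; uncomment
# the credentials only if your Kibana requires authentication.
[[inputs.kibana]]
  servers = ["http://localhost:5601"]
  ## username = "myuser"
  ## password = "mypassword"
```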
Objects and Counters
The following objects and their counters are collected:
Object: Kibana
Identifiers: Namespace, Address
Attributes: Node IP, Node Name, Version, Status
Datapoints: Concurrent Connections, Heap Max, Heap Used, Requests per Second, Response Time Average, Response Time Max, Uptime
Troubleshooting
Additional information may be found on the Support page.
Kubernetes Data Collector
Cloud Insights uses this data collector to gather Kubernetes Pod, Node, and Container metrics.
Installation
It is strongly recommended to deploy the Telegraf agent as a DaemonSet and a ReplicaSet within the Kubernetes environment itself, by selecting Kubernetes as the platform during agent installation.
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Kubernetes.
If you haven’t configured an Agent for collection, you are prompted to install an agent in your environment.

If you have an agent already configured, select the appropriate Operating System or Platform and click Continue.

2. Follow the instructions in the Kubernetes Configuration screen to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data. The example below shows the instructions for installing the Kubernetes data collector on the Kubernetes platform:

Note that there are two different config maps that need updating: telegraf-conf for the DaemonSet and telegraf-conf-rs for the ReplicaSet. The latter is used to obtain kube-state-metrics data.
Setup
The Telegraf input plugin for Kubernetes collects metrics through the /stats/summary endpoint of the kubelet REST API as well as the kube-state-metrics server (if it exists).
Compatibility
Configuration was developed against Kubernetes version 1.9.1.
Configuring an Agent to Collect Data from Kubernetes
For Kubernetes environments, Cloud Insights deploys the Telegraf agent as a DaemonSet and a ReplicaSet. The pods in which the agents run need to have access to the following:
• hostPath
• configMap
• secrets
These Kubernetes objects are automatically created as part of the Kubernetes agent install command provided in the Cloud Insights UI. Some variants of Kubernetes, such as OpenShift, implement an added level of security that may block access to these components. The SecurityContextConstraint is not created as part of the Kubernetes agent install command provided in the Cloud Insights UI, and must be created manually. Once created, restart the Telegraf pod(s).
apiVersion: v1
kind: SecurityContextConstraints
metadata:
  name: telegraf-hostaccess
  creationTimestamp:
  annotations:
    kubernetes.io/description: telegraf-hostaccess allows hostpath volume mounts for restricted SAs.
  labels:
    app: ci-telegraf
priority: 10
allowPrivilegedContainer: false
defaultAddCapabilities: []
requiredDropCapabilities: []
allowedCapabilities: []
allowedFlexVolumes: []
allowHostDirVolumePlugin: true
volumes:
- hostPath
- configMap
- secret
allowHostNetwork: false
allowHostPorts: false
allowHostPID: false
allowHostIPC: false
seLinuxContext:
  type: MustRunAs
runAsUser:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
fsGroup:
  type: RunAsAny
readOnlyRootFilesystem: false
users:
- system:serviceaccount:monitoring:telegraf-user
groups: []
Setting Up
For collecting Kubernetes metrics, the best practice is to deploy the Telegraf agent as a DaemonSet and a ReplicaSet within the Kubernetes environment of interest itself. The Cloud Insights agent install command does this if run on one of the Kubernetes nodes in the cluster. With the DaemonSet that is created, you can simply specify the "$HOSTIP" environment variable for <INSERT_KUBELET_ADDRESS> in the telegraf-conf ConfigMap.

The pods in which the Telegraf agents run need to have access to a valid Kubernetes access token file in order to use the required kubelet API. To configure a Telegraf agent running outside the Kubernetes cluster of interest, you must manually generate this Kubernetes access token file and configure the Telegraf agent to use it.
To manually generate the Kubernetes access token, run the following in a Bash shell:
SECRET=$(sudo kubectl --namespace kube-system describe sa default | grep Tokens | awk '{print $2}')
TOKEN=$(sudo kubectl --namespace kube-system describe secrets $SECRET | grep ^token | awk '{print $2}')
To verify that the access token works as needed, run the following to confirm the kubelet API is accessible:

curl -v -X GET -H "Authorization: Bearer $TOKEN" https://<KUBELET_ADDRESS>:<KUBELET_PORT>/stats/summary
To create the required access token file, run the following:
mkdir -p /var/run/secrets/kubernetes.io/serviceaccount/
echo -n $TOKEN | sudo tee /var/run/secrets/kubernetes.io/serviceaccount/token
By default, the Kubernetes input plugin configuration provided by Cloud Insights is set up to look for the required access token file in the location used above. After performing the steps above, restart the Telegraf agent for the changes to take effect.
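Putting these pieces together, the kubernetes input section of the Telegraf configuration ends up pointing at the kubelet address and the token file created above. The following is a sketch only: 10250 (the kubelet's default secure port) and the TLS setting are assumptions, and the configuration Cloud Insights generates for you may differ.

```toml
# Sketch: Telegraf kubernetes input using the access token file created above.
# <KUBELET_ADDRESS> is a placeholder, as in the verification curl command;
# inside the DaemonSet, "$HOSTIP" can be used instead. 10250 is the kubelet's
# default secure port (an assumption).
[[inputs.kubernetes]]
  url = "https://<KUBELET_ADDRESS>:10250"
  bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  ## Needed only when the kubelet serves a self-signed certificate.
  insecure_skip_verify = true
```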
Objects and Counters
The following objects and their counters are collected:
Object: Kubernetes Container
Identifiers: Namespace, Pod, Container, Cluster
Attributes: Kubernetes Node, Node Name, Node OS, Node UUID, Node IP
Datapoints: CPU Nanoseconds, CPU Usage Nanocores, Memory Major Page Faults, Memory Resident Set Size (RSS), Memory Working Set, Memory Page Faults, Memory Usage, Root Filesystem Available, Root Filesystem Capacity, Root Filesystem Used

Object: Kubernetes Node
Identifiers: Kubernetes Node, Cluster
Attributes: Node Name, Node OS, Node UUID, Node IP
Datapoints: CPU Usage Nanocores, CPU Usage Nanoseconds, Filesystem Available, Filesystem Total, Filesystem Used, Memory Available, Memory Usage, Memory Major Page Faults, Memory Page Faults, Memory Resident Set Size (RSS), Memory Working Set, Network RX Errors (per sec), Network RX Bytes (per sec), Network TX Errors (per sec), Network TX Bytes (per sec), Runtime Image Filesystem Available, Runtime Image Filesystem Used, Runtime Image Filesystem Capacity
Object: Kubernetes Pod
Identifiers: Namespace, Pod, Cluster
Attributes: Kubernetes Node, Node Name, Node IP, Node OS, Node UUID
Datapoints: Network TX Bytes (per sec), Network TX Errors (per sec), Network RX Bytes (per sec), Network RX Errors (per sec)

Object: Kubernetes Pod Volume
Identifiers: Volume, Pod, Cluster, Namespace
Attributes: Kubernetes Node, Node Name, Node UUID, Node IP, Node OS
Datapoints: Available, Capacity, Used

Object: Kubernetes System Container
Identifiers: System Container, Kubernetes Node, Cluster
Attributes: Node Name, Node IP, Node OS, Node UUID
Datapoints: CPU Usage Nanocores, CPU Usage Core Nanoseconds, Memory Major Page Faults, Memory Page Faults, Memory Resident Set Size (RSS), Memory Usage, Memory Working Set, Root Filesystem Available, Root Filesystem Capacity, Logs Filesystem Available, Logs Filesystem Capacity
Installing the kube-state-metrics server
When you install the kube-state-metrics server, you can enable collection of the following Kubernetes objects: StatefulSet, DaemonSet, Deployment, PV, PVC, ReplicaSet, Service, Namespace, Secret, ConfigMap, Pod Volume, and Ingress.
Use the following steps to install the kube-state-metrics server:
Steps
1. Create a temporary folder (for example, /tmp/kube-state-yaml-files/) and copy the .yaml files from https://github.com/kubernetes/kube-state-metrics/tree/master/examples/standard to this folder.
2. Run the following command to apply the .yaml files needed for installing kube-state-metrics:
kubectl apply -f /tmp/kube-state-yaml-files/
kube-state-metrics Counters
Use the following links to access information for the kube state metrics counters:
1. Cronjob Metrics
2. DaemonSet Metrics
3. Deployment Metrics
4. Endpoint Metrics
5. Horizontal Pod Autoscaler Metrics
6. Ingress Metrics
7. Job Metrics
8. LimitRange Metrics
9. Namespace Metrics
10. Node Metrics
11. Persistent Volume Metrics
12. Persistent Volume Claim Metrics
13. Pod Metrics
14. Pod Disruption Budget Metrics
15. ReplicaSet metrics
16. ReplicationController Metrics
Troubleshooting
Problem: I ran the Kubernetes agent installer command, but I do not see a Telegraf agent pod running via:
sudo kubectl --namespace monitoring get pods
Try this: Check if there were any errors deploying the DaemonSet:
sudo kubectl --namespace monitoring describe ds telegraf-ds
If there are errors related to SecurityContextConstraints, do the following:
1. Generate the Telegraf DaemonSet YAML:
sudo kubectl --namespace monitoring get ds telegraf-ds -o yaml > /tmp/telegraf-ds.yaml
2. Stop the Telegraf service:
sudo kubectl --namespace monitoring delete ds telegraf-ds
3. Create the necessary SecurityContextConstraint (see the "Configuring Agent to Collect Data" section).
4. Re-create the Telegraf DaemonSet.
Problem: I configured Telegraf to obtain information about my Kubernetes cluster, but I don’t see any information in Cloud Insights. I see "invalid header field value" errors in the Telegraf log file pertaining to the kubernetes input plugin I configured.
Try this: Ensure the referenced bearer_token file does not have a trailing newline. To verify, run the following command and confirm that it prints 0 (a trailing newline would make it print 1):
tail -c1 <bearer_token_file> | wc -l
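The trailing-newline check above can also be scripted. The sketch below (the helper name and file path are illustrative, not part of the product) verifies a bearer_token file and strips a trailing newline in place if one is found:

```python
from pathlib import Path

def strip_trailing_newline(path: str) -> bool:
    """Remove a trailing newline from the file, if present.
    Returns True if the file was modified."""
    p = Path(path)
    data = p.read_bytes()
    if data.endswith(b"\n"):
        # Drop trailing CR/LF bytes only; the token itself is untouched.
        p.write_bytes(data.rstrip(b"\r\n"))
        return True
    return False
```

Running it a second time returns False, confirming the file is already clean.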
Additional information may be found from the Support page.
Memcached Data Collector
Cloud Insights uses this data collector to gather metrics from Memcached.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Memcached.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a
different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the Memcached wiki.
Objects and Counters
The following objects and their counters are collected:
Object: Memcached
Identifiers: Namespace, Server
Attributes: Node IP, Node Name
Datapoints: Accepting Connections, Handled Authentication Requests, Failed Authentications, Bytes Used, Bytes Read (per sec), Bytes Written (per sec), CAS Badval, CAS Hits, CAS Misses, Flush Reqs (per sec), Get Reqs (per sec), Set Reqs (per sec), Touch Reqs (per sec), Connection Yields (per sec), Connection Structures, Open Connections, Current Stored Items, Decr Requests Hits (per sec), Decr Requests Misses (per sec), Delete Requests Hits (per sec), Delete Requests Misses (per sec), Items Evicted, Valid Evictions, Expired Items, Get Hits (per sec), Get Misses (per sec), Used Hash Bytes, Hash Is Expanding, Hash Power Level, Incr Requests Hits (per sec), Incr Requests Misses (per sec), Server Max Bytes, Listen Disabled Num, Reclaimed, Worker Threads Count, Total Opened
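These counters map onto fields of Memcached's plain-text `stats` command response (for example, Get Hits corresponds to the `get_hits` field), which is what the Telegraf memcached input reads. A minimal sketch of parsing such a response, using made-up sample values:

```python
def parse_memcached_stats(raw: str) -> dict:
    """Parse the plain-text response of memcached's 'stats' command
    into a {counter_name: value} dict."""
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        # Counter lines look like: STAT <name> <value>; "END" terminates.
        if len(parts) == 3 and parts[0] == "STAT":
            _, name, value = parts
            try:
                stats[name] = int(value)
            except ValueError:
                stats[name] = value  # some fields are non-numeric
    return stats
```

For example, a response containing `STAT get_hits 6242` yields `stats["get_hits"] == 6242`.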
Troubleshooting
Additional information may be found from the Support page.
MongoDB Data Collector
Cloud Insights uses this data collector to gather metrics from MongoDB.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose MongoDB.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the MongoDB documentation.
Objects and Counters
The following objects and their counters are collected:
Object: MongoDB
Identifiers: Namespace, Hostname

Object: MongoDB Database
Identifiers: Namespace, Hostname, Database name
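The Telegraf mongodb input gathers most of its datapoints from the output of MongoDB's `serverStatus` command, which returns a deeply nested document. A small sketch (function name and sample document are illustrative) of pulling a metric out of such a document by dotted path:

```python
def get_metric(doc: dict, dotted: str, default=None):
    """Fetch a nested field like 'connections.current' from a
    serverStatus-style document, returning default if absent."""
    cur = doc
    for key in dotted.split("."):
        if not isinstance(cur, dict) or key not in cur:
            return default
        cur = cur[key]
    return cur
```

This mirrors how a collector flattens nested status fields into individual counters.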
Troubleshooting
Information may be found from the Support page.
MySQL Data Collector
Cloud Insights uses this data collector to gather metrics from MySQL.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose MySQL.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Objects and Counters
The following objects and their counters are collected:
Object: MySQL
Identifiers: Namespace, MySQL Server
Attributes: Node IP, Node Name
Datapoints: Aborted Clients (per sec), Aborted Connects (per sec), RX Bytes (per sec), TX Bytes (per sec), Commands Admin (per sec), Commands Alter Event, Commands Alter Function, Commands Alter Instance, Commands Alter Procedure, Commands Alter Server, Commands Alter Table, Commands Alter Tablespace, Commands Alter User, Commands Analyze, Commands Assign To Keycache, Commands Begin, Commands Binlog, Commands Call Procedure, Commands Change DB, Commands Change Master, Commands Change Repl Filter, Commands Check, Commands Checksum, Commands Commit, Commands Create DB, Commands Create Event, Commands Create Function, Commands Create Index, Commands Create Procedure, Commands Create Server, Commands Create Table
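Datapoints marked "(per sec)" are derived from MySQL's cumulative status counters (for example, `Bytes_received` from SHOW GLOBAL STATUS) by differencing two consecutive samples. A hedged sketch of that rate calculation, with illustrative counter names:

```python
def per_second_rate(prev: dict, curr: dict, interval_s: float) -> dict:
    """Convert two samples of cumulative counters into per-second rates."""
    rates = {}
    for name, value in curr.items():
        if name in prev:
            delta = value - prev[name]
            # A counter reset (e.g. server restart) shows up as a negative
            # delta; report 0 rather than a negative rate.
            rates[name] = max(delta, 0) / interval_s
    return rates
```

For instance, two samples of `Bytes_received` taken 10 seconds apart yield the RX Bytes (per sec) value.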
Troubleshooting
Additional information may be found from the Support page.
Netstat Data Collector
Cloud Insights uses this data collector to gather Netstat metrics.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Netstat.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Objects and Counters
The following objects and their counters are collected:
Object: Netstat
Identifiers: Node UUID
Attributes: Node IP, Node Name
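The Telegraf netstat input reports counts of TCP connections grouped by state (established, listen, time_wait, and so on). A minimal sketch of that tally, run here over illustrative netstat-style output lines rather than a live system:

```python
from collections import Counter

def count_tcp_states(lines) -> Counter:
    """Tally TCP connection states from netstat-style output lines."""
    states = Counter()
    for line in lines:
        parts = line.split()
        # Matches both "tcp" and "tcp6" sockets; state is the last column.
        if parts and parts[0].startswith("tcp"):
            states[parts[-1]] += 1
    return states
```

The resulting counts correspond to the per-state connection metrics the collector emits.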
Troubleshooting
Additional information may be found from the Support page.
Nginx Data Collector
Cloud Insights uses this data collector to gather metrics from Nginx.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Nginx.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Nginx metric collection requires that Nginx http_stub_status_module be enabled.
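The http_stub_status_module is typically enabled with a small `location` block in the Nginx server configuration; the path and allowed address below are illustrative, and the bare `stub_status;` form assumes Nginx 1.7.5 or later (older versions use `stub_status on;`):

```nginx
server {
    listen 127.0.0.1:80;
    location /nginx_status {
        stub_status;        # expose the status page
        allow 127.0.0.1;    # restrict access to the local collection agent
        deny all;
    }
}
```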
Additional information may be found in the Nginx documentation.
Objects and Counters
The following objects and their counters are collected:
Object: Nginx
Identifiers: Namespace, Server
Attributes: Node IP, Node Name, Port
Datapoints: Accepts, Active, Handled, Reading, Requests, Waiting, Writing
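These seven datapoints correspond directly to the fields of the stub_status page, which is a small plain-text document. A sketch of parsing it (the sample text below is illustrative):

```python
import re

def parse_stub_status(text: str) -> dict:
    """Parse the plain-text page produced by nginx's stub_status module."""
    metrics = {}
    m = re.search(r"Active connections:\s+(\d+)", text)
    metrics["active"] = int(m.group(1))
    # The line after "server accepts handled requests" holds three counters.
    m = re.search(r"\n\s*(\d+)\s+(\d+)\s+(\d+)", text)
    metrics["accepts"], metrics["handled"], metrics["requests"] = map(int, m.groups())
    m = re.search(r"Reading:\s+(\d+)\s+Writing:\s+(\d+)\s+Waiting:\s+(\d+)", text)
    metrics["reading"], metrics["writing"], metrics["waiting"] = map(int, m.groups())
    return metrics
```

Applied to a page reading `Active connections: 291`, the function returns `{"active": 291, ...}` along with the accepts/handled/requests and reading/writing/waiting counters.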
Troubleshooting
Additional information may be found from the Support page.
PostgreSQL Data Collector
Cloud Insights uses this data collector to gather metrics from PostgreSQL.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose PostgreSQL.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Objects and Counters
The following objects and their counters are collected:
Object: PostgreSQL Server
Identifiers: Namespace, Database, Server
Attributes: Node Name, Node IP
Datapoints: Buffers Allocated, Buffers Backend, Buffers Backend File Sync, Buffers Checkpoint, Buffers Clean, Checkpoints Sync Time, Checkpoints Write Time, Checkpoints Requests, Checkpoints Timed, Max Written Clean

Object: PostgreSQL Database
Identifiers: Namespace, Database, Server
Attributes: Database OID, Node Name, Node IP
Datapoints: Blocks Read Time, Blocks Write Time, Blocks Hits, Blocks Reads, Conflicts, Deadlocks, Client Number, Temp Files Bytes, Temp Files Number, Rows Deleted, Rows Fetched, Rows Inserted, Rows Returned, Rows Updated, Transactions Committed, Transactions Rollbacked
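The database-level datapoints Blocks Hits and Blocks Reads come from PostgreSQL's `pg_stat_database` view (`blks_hit` and `blks_read`). A common derived quantity when dashboarding these counters is the buffer cache hit ratio; the small sketch below is illustrative, not part of the collector:

```python
def cache_hit_ratio(blks_hit: int, blks_read: int) -> float:
    """Fraction of block requests served from shared buffers:
    blks_hit / (blks_hit + blks_read), or 0.0 when there is no traffic."""
    total = blks_hit + blks_read
    return blks_hit / total if total else 0.0
```

A ratio near 1.0 indicates most reads are served from memory rather than disk.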
Troubleshooting
Additional information may be found from the Support page.
Puppet Agent Data Collector
Cloud Insights uses this data collector to gather metrics from Puppet Agent.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Puppet.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Setup
Information may be found in the Puppet documentation.
Objects and Counters
The following objects and their counters are collected:
Object: Puppet Agent
Identifiers: Namespace, Node UUID
Attributes: Node Name, Location, Node IP, Version Configstring, Version Puppet
Datapoints: Changes Total, Events Failure, Events Success, Events Total, Resources Changed, Resources Failed, Resources Failed To Restart, Resources Outofsync, Resources Restarted, Resources Scheduled, Resources Skipped, Resources Total, Time Anchor, Time Configretrieval, Time Cron, Time Exec, Time File, Time Filebucket, Time Lastrun, Time Package, Time Schedule, Time Service, Time Sshauthorizedkey, Time Total, Time User
Troubleshooting
Additional information may be found from the Support page.
Redis Data Collector
Cloud Insights uses this data collector to gather metrics from Redis. Redis is an open source, in-memory data structure store used as a database, cache, and message broker, supporting the following data structures: strings, hashes, lists, sets, and more.
Installation
1. From Admin > Data Collectors, click +Data Collector. Under Services, choose Redis.
Select the Operating System or Platform on which the Telegraf agent is installed.
2. If you haven’t already installed an Agent for collection, or you wish to install an Agent for a different Operating System or Platform, click Show Instructions to expand the Agent installation instructions.
3. Select the Agent Access Key for use with this data collector. You can add a new Agent Access Key by clicking the + Agent Access Key button. Best practice: Use a different Agent Access Key only when you want to group data collectors, for example, by OS/Platform.
4. Follow the configuration steps to configure the data collector. The instructions vary depending on the type of Operating System or Platform you are using to collect data.
Objects and Counters
The following objects and their counters are collected:
Object: Redis
Identifiers: Namespace, Server
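The Telegraf redis input derives its fields from the response of the Redis `INFO` command, which returns `key:value` lines grouped under `# Section` headers. A minimal sketch of parsing that format (sample values are illustrative):

```python
def parse_redis_info(raw: str) -> dict:
    """Parse 'key:value' lines from a redis INFO response,
    skipping '# Section' headers and blank lines."""
    info = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        info[key] = value
    return info
```

Values are kept as strings here; a real collector would coerce numeric fields to numbers.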
Troubleshooting
Additional information may be found from the Support page.
Copyright Information
Copyright © 2020 NetApp, Inc. All rights reserved. Printed in the U.S. No part of this document covered by copyright may be reproduced in any form or by any means - graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system - without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark Information
NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of NetApp, Inc. Other company and product names may be trademarks of their respective owners.