
Using Splunk Enterprise with VxRail Appliances and Isilon for Analysis of Machine Data

March 2017

Abstract

This solution guide describes a Dell EMC hyper-converged infrastructure VxRail Appliance solution that highlights flexible scaling options and tight integration with Splunk Enterprise for analyzing large quantities of machine data.

H15699


Copyright


The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA March 2017 Solution Guide H15699.

Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change without notice.


Contents

Chapter 1 Executive Summary
    Business case
    Solution overview
    Key results
    Audience
    We value your feedback!

Chapter 2 Solution Architecture
    Overview
    VxRail Appliance architecture
    Isilon
    VMware vSphere
    Splunk Enterprise

Chapter 3 Splunk Enterprise Deployment Design and Configuration
    Overview
    Compute design
    Network design
    Storage design
    Virtualization design
    Splunk Enterprise design

Chapter 4 Splunk Single Instance 50GB/day with 90-day Retention
    Overview
    Implementation
    Use case summary

Chapter 5 Splunk Multi-instance 500GB/day with 90-day Retention
    Overview
    Implementation
    Use case summary

Chapter 6 Splunk Multi-instance 1000GB/day with 90-day Retention
    Overview
    Implementation
    Use case summary

Chapter 7 Splunk Multi-instance 1000GB/day with > 90-day Retention
    Overview
    Implementation
    Use case summary

Chapter 8 Splunk Multi-instance 1000GB/day with > 90-day Retention and Indexer High Availability
    Overview
    Implementation
    Use case summary

Chapter 9 Validated Configurations for Splunk Enterprise
    Splunk-validated sizing configurations
    Scenario 1: One VxRail node for up to 50 GB/day with 90-day retention
    Scenario 2: Four VxRail nodes for up to 500 GB/day (distributed) or up to 250 GB/day (clustered) with 90-day retention
    Scenario 3: Seven VxRail nodes for up to 1 TB/day (distributed) with 90-day retention
    Scenario 4: Seven VxRail nodes with Isilon for up to 1 TB/day (clustered) with 7-day retention for hot/warm buckets and configurable retention for cold buckets
    Summary

Chapter 10 Conclusion
    Summary
    Findings
    Conclusion

Chapter 11 References
    Dell EMC documentation
    VMware documentation
    Splunk Enterprise documentation

Appendix A VxRail Appliance Scalability
    Overview
    Test scenario
    Test methodology
    Test results
    Summary

Chapter 1 Executive Summary

This chapter presents the following topics:

Business case
Solution overview
Key results
Audience
We value your feedback!


Business case

Machine data is one of the fastest-growing and most complex areas of Big Data. It is also one of the most valuable, containing a definitive record of events that can reveal information about user transactions, customer behavior, machine behavior, security threats, fraudulent activity, and more. Making use of this data, however, presents real challenges. Traditional data analysis, management, and monitoring solutions are not engineered to handle such high-volume, high-velocity, and highly diverse data.

Splunk Enterprise is the industry-leading platform for machine data. It gives you real-time visibility, insight, and understanding across your IT infrastructure and the applications and services that run on top of it. The Splunk platform also:

- Seamlessly blends metrics and events from both structured and unstructured data sources
- Collects and correlates multiple data sources to rapidly pinpoint service degradations and reduce mean time to resolution (MTTR)
- Monitors end-to-end infrastructure to detect anomalies and prevent problems in real time
- Delivers powerful visualizations to understand relationships, track trends, and accelerate investigations

Dell EMC™ and Splunk have partnered to provide a menu of standardized reference architectures for non-disruptive scalability and performance to aid an organization's digital transformation. Together, Dell EMC and Splunk combine the analytics provided by the Splunk ecosystem with the cost-effective, scalable, and flexible infrastructure of Dell EMC to deliver Operational Intelligence.

Solution overview

This solution demonstrates how Splunk Enterprise combined with the Dell EMC VxRail™ Appliance, Isilon™, and VMware virtualization software can easily, efficiently, and cost-effectively scale to support enterprise-level machine data analytics and real-time operational intelligence. VxRail is the only fully integrated, preconfigured, and pre-tested VMware hyper-converged infrastructure appliance family on the market. Based on VMware vSphere plus VMware vSAN and EMC software, VxRail delivers an all-in-one IT infrastructure transformation by leveraging a known and proven building block for the Software-Defined Data Center (SDDC).

This solution guide describes the design, deployment, and configuration of Splunk Enterprise on VxRail for five representative use cases covering a range of customer needs.

Table 1. Solution Use Cases

Daily ingest (GB/day)   Retention (days)     Equipment                Splunk deployment
50                      90                   1-node VxRail            Single/combined instance
500                     90                   4-node VxRail            Distributed
1000                    90                   7-node VxRail            Distributed
1000                    Greater than 90      7-node VxRail + Isilon   Distributed
1000                    Greater than 90      7-node VxRail + Isilon   Clustered
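As a rough illustration of how the daily-ingest and retention figures in Table 1 translate into storage, the sketch below applies Splunk's commonly cited rule of thumb that rawdata plus index files together occupy roughly half the raw ingest volume; the ~50% compression factor is an assumption, not a figure from this guide.

```python
def required_storage_gb(daily_ingest_gb: float, retention_days: int,
                        compression: float = 0.5) -> float:
    """Approximate on-disk storage for one copy of the indexed data.

    compression ~0.5 reflects the rule of thumb that rawdata plus
    index files together take about half the raw ingest volume.
    """
    return daily_ingest_gb * retention_days * compression

# Largest use case in Table 1: 1000 GB/day retained for 90 days
print(required_storage_gb(1000, 90))  # 45000.0 GB, i.e. ~45 TB before replication
```

Replication (vSAN FTT, or Splunk index clustering) multiplies this figure further, which is why the clustered scenarios pair VxRail with Isilon for cold data.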

In addition to covering these use cases, the solution includes background material describing all of the technologies that make the solution compelling: VxRail Appliance architecture, Isilon, VMware vSphere, and Splunk Enterprise.

Key results

This solution provides detailed information for evaluating the applicability of VxRail offerings for a Splunk implementation. Splunk has validated multiple use case configurations for VxRail that meet or exceed the performance of Splunk's documented reference hardware. Potential customers should be able to match almost any current needs with an approved configuration. Customers can also be confident that the VxRail product line, together with the flexibility of Splunk Enterprise configuration options, can be scaled out to handle future needs without extensive upgrades or expensive re-platforming.

Audience

This guide is intended for IT administrators, storage administrators, virtualization administrators, system administrators, IT managers, and those who evaluate, acquire, manage, maintain, or operate Splunk Enterprise environments.

We value your feedback!

Dell EMC and the authors of this document welcome your feedback on the solution and the solution documentation. Contact [email protected] with your comments.

Authors:

Dell EMC: Eric Wang, James Shen, Tao Guo, Phil Hummel, Reed Tucker

Splunk: Jenny Hollfelder


Chapter 2 Solution Architecture

This chapter presents the following topics:

Overview
VxRail Appliance architecture
Isilon
VMware vSphere
Splunk Enterprise


Overview

The following reference architecture describes a Dell EMC hyper-converged infrastructure VxRail Appliance with Isilon for a virtualized Splunk Enterprise environment. Dell EMC and Splunk jointly tested and validated this reference architecture to meet or exceed the performance of Splunk Enterprise running on Splunk's reference hardware.

The VxRail Appliance is a fully integrated, preconfigured, and pre-tested hyper-converged infrastructure appliance. Powered by industry-leading vSAN and vSphere software, the VxRail Appliance is the easiest and fastest way to streamline and extend a VMware environment while dramatically simplifying IT operations.

Figure 1 and Figure 2 show how we deployed two reference architectures representing Splunk instances as virtual machines on a VMware vSphere 6.0 cluster, following Splunk's documented virtualization best practices. In the storage layer, VxRail leverages VMware vSAN technology to build a vSAN datastore on groups of locally attached disks. This configuration provides rapid read and write disk I/O and low latency through the use of an all-flash array.

Figure 1 shows the four layers (application layer, virtualization layer, infrastructure layer, and virtual SAN layer) in this solution. In the application layer, there are four Splunk components: forwarder, indexer, search head, and master node. The VxRail cluster forms a Virtual SAN as storage, which is used to hold all virtual machines and the Splunk hot/warm and cold buckets on an all-flash array.


Figure 1. Splunk Enterprise on VxRail Appliance reference architecture

Figure 2 shows a reference architecture similar to Figure 1, with differences in the number of VxRail nodes and the location of Splunk buckets. vSAN is used to store all virtual machines and Splunk hot/warm buckets, while Isilon storage is used to store the Splunk cold buckets for long-term data retention.

Note: For an explanation of the hot/warm and cold bucket concept, refer to Splunk core architecture.


Figure 2. Splunk Enterprise on VxRail Appliance with Isilon reference architecture


Table 2 lists the hardware components in this solution.

Table 2. Hardware configuration

Component      Hardware
VxRail E460F   2 x Intel® Xeon® E5-2698 v4 processors @ 2.20 GHz per node
               384 GB (24 x 16 GB) or 512 GB (16 x 32 GB) RAM per node
               800 GB per disk group (1 or 2 disk groups)
               5.235 TB (3 x 1.92 TB SSD) or 20.94 TB (6 x 3.84 TB SSD) capacity per node**
               2 x 10 GbE SFP+ per node
Switch         Fabric interconnect
Isilon X410    2 x Intel® Xeon® 2.0 GHz processors per node
               128 GB RAM per node
               3.2 TB SSD storage
               64 TB HDD storage
               2 x 10 GbE SFP+ per node
               2 x 1 GbE per node

**Note: The net effective usable capacity of the VxRail cluster is half the raw capacity. This is due to the vSAN FTT=1 policy setting applied to each VM.
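The capacity note above can be checked with simple arithmetic: with FTT=1 under a mirroring (RAID-1) policy, vSAN keeps two copies of each object, so usable capacity is half of raw. The sketch below uses the larger per-node capacity from Table 2 and an illustrative seven-node cluster.

```python
def vsan_usable_tb(nodes: int, raw_tb_per_node: float, ftt: int = 1) -> float:
    """Usable vSAN capacity under a mirroring (RAID-1) policy.

    With FTT failures to tolerate, RAID-1 keeps FTT+1 copies of each
    object, so usable capacity is raw capacity divided by FTT+1.
    """
    return nodes * raw_tb_per_node / (ftt + 1)

# 7 nodes with the larger 20.94 TB per-node configuration from Table 2:
print(round(vsan_usable_tb(7, 20.94), 2))  # 73.29
```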

Table 3 lists the versions of software used in this solution.

Table 3. Software configuration

Software                        Version
Splunk Enterprise               6.5.0
Splunk Universal Forwarder      6.5.0
RedHat Linux 64-bit             6.7
VMware vSphere Enterprise       6.0 U2
VMware vCenter Server           6.0 U2
VMware Virtual SAN Enterprise   6.2
VMware vRealize Log Insight     3.3.1
VxRail Manager                  4.0
OneFS                           8.0.0.3

VxRail Appliance architecture

The VxRail Appliance offers the performance, capacity, and graphics capability needed to meet the infrastructure requirements of a small or medium-sized enterprise. The VxRail Appliance provides a simple, cost-effective, hyper-converged solution that solves your virtualization infrastructure challenges and supports a wide range of applications and workloads.

VxRail Appliances use VMware's vSAN software, which is fully integrated with vSphere and provides full-featured and cost-effective software-defined storage. vSAN implements a notably efficient architecture, built directly into the hypervisor. This architecture distinguishes vSAN from solutions that typically install a virtual storage appliance (VSA) that runs as a guest VM on each host. Embedding vSAN into the ESXi kernel layer has clear advantages in performance and memory requirements. It has very little impact on CPU utilization (less than 10 percent) and self-balances based on workload and resource availability. It presents storage as a familiar datastore construct and works seamlessly with other vSphere features such as VMware vSphere vMotion.

vSAN aggregates the locally attached disks of hosts in a vSphere cluster to create a pool of distributed shared storage. Capacity is easily scaled up by adding disks to the cluster and scaled out by adding ESXi hosts. This distributed shared storage provides the flexibility to start with a very small environment and scale it over time. Storage characteristics are configured using Storage Policy Based Management (SPBM), which allows object-level policies to be set and modified on the fly to control storage provisioning and day-to-day management of storage service-level agreements (SLAs).

vSAN is preconfigured when the VxRail system is first initialized and is managed through vCenter. The VxRail Appliance initialization process discovers the locally attached storage disks from each ESXi node in the cluster to create a distributed, shared-storage datastore. The amount of storage in the vSAN datastore is an aggregate of all of the capacity drives in the cluster.

VxRail provides an all-flash SSD configuration, which uses flash SSDs for both the caching tier and the capacity tier.

The VxRail Appliance uses a modular, distributed system architecture based on a 1U appliance with one node that scales linearly. Different options are available for compute, memory, and storage configurations to match any use case: choose from a range of next-generation processors and variable RAM, storage, and cache capacities for flexible CPU-to-RAM-to-storage ratios.

The VxRail Appliance is assembled with proven server-node hardware that has been integrated, tested, and validated as a complete solution by Dell EMC. The current generation of VxRail Appliance nodes uses Intel Xeon E5 processors. The Intel Xeon E5 processor is a multi-threaded, multi-core CPU designed to handle diverse workloads for cloud services, high-performance computing, and networking. The number of cores and memory capacity differ for each VxRail Appliance model.

VxRail is a self-contained infrastructure, but it is not a stand-alone environment; it is intended to connect to and integrate with the customer's existing data center network. The distributed cluster architecture allows independent nodes to work together as a single system, with the close coupling between nodes accomplished through IP network connectivity. Our implementation in this solution uses two customer-provided 10 GbE top-of-rack (ToR) switches to connect each node in the VxRail cluster.


In VxRail, network traffic is segregated using switch-based virtual LAN (VLAN) technology and vSphere Network I/O Control. A VxRail cluster supports four types of network traffic:

- Management: Management traffic connects the VMware vCenter Web Client, VxRail Manager, and other management interfaces, and provides communications between the management components and the ESXi nodes in the cluster. Either the default VLAN or a specific management VLAN can be used for management traffic.
- Virtual SAN: Data access for read and write activity, as well as for optimization and data rebuilds, is performed over the vSAN network. Low network latency is critical for this traffic, and a specific VLAN is required to isolate it.
- vMotion: vSphere vMotion is a vSphere feature that allows virtual machine mobility between nodes. A separate VLAN is used to isolate this traffic.
- Virtual Machine: Users access virtual machines and the services they provide over the VM networks. At least one VM VLAN is configured when the system is initially configured, and others may be defined as required.

Note: For detailed VxRail network configuration, refer to the Dell EMC VxRail Network Guide.

Isilon

The Isilon X-Series is a flexible and comprehensive storage product that provides large capacity and high performance. The VxRail Appliance supports Isilon storage.

Isilon storage uses intelligent software to scale data across a large number of commodity hardware units, enabling explosive growth in performance and capacity. The product's revolutionary storage architecture, the OneFS™ operating system (OS), offers a single clustered file system.

OneFS provides value by incorporating parallelism at a deep level of the OS: the system is distributed across multiple hardware units, and this parallelism allows OneFS to scale in every dimension as the infrastructure is expanded. By providing multiple redundancy levels, the system has no single point of failure. As a result, OneFS can grow to multi-petabyte scale while providing greater reliability than traditional systems.

OneFS runs on Isilon scale-out network-attached storage (NAS) hardware, ensuring that Isilon benefits from the ever-improving cost and efficiency curves of commodity hardware. OneFS allows you to add hardware to or remove hardware from the cluster at any time, and the data remains protected through hardware changes. This alleviates the cost and burden of data migrations and hardware refreshes.

VMware vSphere

VMware vSphere is a widely adopted virtualization platform. The technology increases server utilization so that a firm can consolidate its servers and spend less on hardware, administration, energy, and floor space. The vSphere platform enables its installations to respond to user requests reliably while giving administrators the tools to respond to their changing needs.


The components of particular importance in this solution are vSphere ESXi and vCenter.

VMware vSphere ESXi is a bare-metal hypervisor: it installs directly on a physical server and partitions that server into multiple virtual machines. An ESXi host refers to the physical server. vSphere ESXi hosts and their resources are pooled together into clusters that contain the CPU, memory, network, and storage resources available for allocation to virtual machines. Clusters scale up to a maximum of 64 hosts and can support thousands of virtual machines.

vCenter Server is management software that runs on a virtual or physical server to oversee multiple ESXi hypervisors as a single cluster. An administrator can interact directly with vCenter Server or use the vSphere Client to manage virtual machines from a browser window anywhere in the world. For example, the administrator can capture the detailed blueprint of a known, validated configuration, including networking, storage, and security settings, and then deploy that blueprint to multiple ESXi hosts.

Splunk Enterprise

Splunk Enterprise is a software platform that enables you to collect, index, and visualize machine-generated data gathered from different sources in your IT infrastructure. These sources include applications, networking devices, host and server logs, mobile devices, and more.

Splunk turns silos of data into operational insights and provides end-to-end visibility across your IT infrastructure to enable faster problem solving and informed, data-driven decisions.

Splunk core architecture

Figure 3 provides a graphic overview of the Splunk system architecture. A Splunk Enterprise instance can perform the role of a search head, an indexer, or both in the case of small deployments. When the daily ingest rate or search load exceeds the sizing recommendations for a combined-instance environment, Splunk Enterprise scales horizontally by adding additional indexers and search heads. For more information, refer to the Splunk Capacity Planning Manual.
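In a distributed deployment of this kind, forwarders are typically pointed at the indexer tier through outputs.conf. The following is a minimal sketch with hypothetical hostnames; this file is not shown in the guide, and 9997 is simply Splunk's conventional receiving port.

```ini
# outputs.conf on a universal forwarder (hostnames hypothetical).
# Events are automatically load-balanced across the listed indexers.
[tcpout]
defaultGroup = vxrail_indexers

[tcpout:vxrail_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
```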


Figure 3. Splunk architecture overview

When a Splunk Enterprise indexer receives data, the indexer parses the raw data into distinct events based on the timestamp of each event and writes them to the appropriate index. Splunk implements a form of storage tiering involving hot/warm and cold buckets of data to optimize performance for newly indexed data and to provide an option to keep older data for longer periods on higher-capacity storage.
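Timestamp-driven event parsing is controlled per sourcetype in props.conf. The sketch below uses a hypothetical sourcetype and log format (not from this guide) with standard Splunk settings:

```ini
# props.conf for a hypothetical sourcetype whose events start with
# a bracketed timestamp such as [2017-03-01 12:00:00]
[my:app:log]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
SHOULD_LINEMERGE = false
```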

Newly indexed data lands in a hot bucket, where it is actively read and written by Splunk. When the maximum number of hot buckets is reached, or when the size of the data in the hot buckets exceeds the specified threshold, a hot bucket is rolled to a warm bucket. Warm buckets reside on the same tier of storage as hot buckets; the only difference is that warm buckets are read-only. It is important that the storage identified for hot/warm data is your fastest storage tier, because it has the biggest impact on the performance of your Splunk Enterprise deployment.

When the maximum number of warm buckets is reached or the volume size is exceeded, data is rolled into a cold

bucket, which can optionally reside on another tier of storage. Cold data may reside on an

NFS mount if the latency is less than 5 ms (ideally) and not more than 200 ms. NAS

technologies offer an acceptable blend of performance and lower cost per TB, making

them a good choice for longer-term retention of cold data.

Data can also be archived or frozen, but such data is no longer searchable by Splunk

search heads. Manual user action is required to bring the data back into Splunk Enterprise

buckets to be searchable. While you might choose to use frozen buckets to meet

compliance retention requirements, this paper shows how Isilon’s massive scalability and

competitive cost of ownership can empower you to retain more data in the cold bucket,

where it remains searchable. Figure 4 provides more details about Splunk bucket

concepts.


Figure 4. Splunk index buckets
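Bucket locations and retention are controlled per index in indexes.conf. The sketch below generates an illustrative stanza; the cold path on an NFS mount and the retention value are assumptions for the example, not values taken from this guide.

```shell
# Sketch: generate an example indexes.conf stanza for tiered buckets.
# ASSUMPTION: the /mnt/isilon path is illustrative, not from this guide.
RETENTION_DAYS=90
FROZEN_SECS=$(( RETENTION_DAYS * 24 * 3600 ))   # 90 days = 7776000 seconds
cat > indexes.conf.example <<EOF
[main]
# hot/warm buckets: keep on the fastest (vSAN) tier
homePath   = \$SPLUNK_DB/defaultdb/db
# cold buckets: NFS mount on Isilon (illustrative path)
coldPath   = /mnt/isilon/splunk/colddb
thawedPath = \$SPLUNK_DB/defaultdb/thaweddb
# buckets older than this roll from cold to frozen (deleted unless archived)
frozenTimePeriodInSecs = ${FROZEN_SECS}
EOF
```

Splunk rolls cold buckets to frozen once they exceed frozenTimePeriodInSecs, so this single setting effectively defines the searchable retention window.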


Chapter 3 Splunk Enterprise Deployment Design and Configuration

This chapter presents the following topics:

Overview .............................................................................................................. 19

Compute design ................................................................................................. 19

Network design ................................................................................................... 20

Storage design .................................................................................................... 21

Virtualization design .......................................................................................... 24

Splunk Enterprise design .................................................................................. 24


Overview

This chapter provides details about the deployment, design, and configuration of Splunk

Enterprise on the Dell EMC hyper-converged infrastructure VxRail Appliance, from a

single instance starter kit to a scalable, distributed, cluster environment. This solution

covers five types of deployment for the different user scenarios:

Single Instance 50GB/day with 90-day Retention – A single instance that combines

indexing and search management functions

Multi-instance 500GB/day with 90-day Retention – One search head with two

indexers

Multi-instance 1000GB/day with 90-day Retention – One search head with five

indexers

Multi-instance 1000GB/day with > 90-day Retention – One search head with five

indexers, using Isilon to provide configurable retention for Splunk cold buckets

Multi-instance 1000GB/day with > 90-day Retention and Indexer High Availability –

One search head with five indexers, including a replication factor of 2 and a search

factor of 2, using Isilon to provide configurable retention for Splunk cold buckets
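As a rough plausibility check on the indexer counts above (a sketch using a commonly cited planning figure, not an official sizing rule), the daily ingest divided across the indexers should stay well below the per-indexer reference capacity:

```shell
# Back-of-envelope check: daily ingest per indexer for the largest deployments.
# ASSUMPTION: a few hundred GB/day per indexer is a commonly cited reference
# capacity; consult the Splunk Capacity Planning Manual for authoritative figures.
DAILY_GB=1000
INDEXERS=5
PER_INDEXER=$(( DAILY_GB / INDEXERS ))
echo "ingest per indexer: ${PER_INDEXER} GB/day"   # prints: ingest per indexer: 200 GB/day
```

At 200 GB/day per indexer, the design leaves headroom for search load and ingest spikes.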

Compute design

Table 4, Table 5, Table 6, and Table 7 show the details of the compute design for the five types of Splunk Enterprise deployment on VxRail Appliances.

Note: Splunk multi-instance 1000 GB/day with 90-day retention deployment and Splunk multi-

instance 1000 GB/day with > 90-day retention deployment use the same compute design listed in

Table 6.

Table 4. Single instance 50 GB/day with 90-day retention deployment on one VxRail node

Instance role Quantity Physical cores/vCPUs Memory

Single Instance combined search head and indexer

1 32/64 256 GB

Table 5. Multi-instance 500 GB/day with 90-day retention deployment on four VxRail nodes

Instance role Quantity Physical cores/vCPUs Memory

Search Head 1 32/64 256 GB

Indexer 2 32/64 256 GB

Admin Server 1 20/40 256 GB


Table 6. Multi-instance 1000 GB/day with 90-day retention deployment and multi-instance 1000 GB/day with > 90-day retention with Isilon on seven VxRail nodes

Instance role Quantity Physical cores/vCPUs Memory

Search Head 1 32/64 256 GB

Indexer 5 32/64 256 GB

Admin Server 1 20/40 256 GB

Table 7. Multi-instance 1000 GB/day with > 90-day retention and indexer high availability deployment with Isilon on seven VxRail nodes

Instance role Quantity Physical cores/vCPUs Memory

Search Head 1 32/64 256 GB

Indexer 5 32/64 256 GB

Admin Server 1 20/40 256 GB

Network design

The VxRail Appliance is delivered ready to deploy and attach to any 10 GbE network

infrastructure using IPv4 and IPv6. As a best practice, Dell EMC recommends using dual

TOR switches to eliminate the switch as a single point of failure. In this solution, we

designed the VxRail cluster’s network as follows:

Configure the two top-of-rack switches to provide 10 Gb Ethernet connectivity to the VxRail Appliance.

Use VLANs to logically group devices on different network segments or sub networks.

Use separate vSphere virtual distributed port groups to isolate the network communication for each network:

vSphere management network

vSphere vMotion network

vCenter network

vSAN management network

Splunk Enterprise network

Figure 5 shows the VxRail Appliance network design of this solution.


Figure 5. VxRail Appliance network design

Storage design

This section describes the storage design for the five types of Splunk Enterprise deployment on the VxRail cluster.

vSAN storage design

Table 8 shows the vSAN storage policies that are defined for the virtual machines. In this solution, we used two vSAN storage policies:

The default policy of vSAN, which is used for the home files and the OS Virtual

Machine Disk (VMDK) files of all virtual machines.

A newly created policy for the disk drive that keeps Splunk data on the Splunk

indexer virtual machines. The number of disk stripes per object is set to 10 to

distribute the data evenly across all VxRail nodes to improve the vSAN read/write

performance for large disk drives.

A vSAN storage policy with failures to tolerate (FTT) of 1 and failure tolerance

method of RAID-1 (mirroring) creates a full copy of the data. Because of this, twice

the capacity of the workload is required.


Table 8. Virtual SAN storage policy configuration

Policy name Rule sets Comments

Virtual SAN Default Storage Policy

Number of failures to tolerate = 1

Failure tolerance method = RAID-1

Disable object checksum = No

Force provisioning = No

IOPS limit for object = 0

Number of disk stripes per object = 1

Object space reservation (%) = 0

This is the Virtual SAN default policy. It is used for the VM home files and OS VMDK files of the virtual machines that are created in this solution.

Splunk-Data-Policy Number of failures to tolerate = 1

Failure tolerance method = RAID-1

Disable object checksum = No

Force provisioning = No

IOPS limit for object = 0

Number of disk stripes per object = 10

Object space reservation (%) = 0

This policy is used for Splunk indexer data storage.

Table 9 shows the vSAN storage design for Splunk in this solution.

Table 9. Virtual SAN storage design for Splunk

Deployment type

Instance role Quantity OS storage Indexer storage

Single Instance 50 GB/day with 90-day Retention

Single Instance combined search head and indexer

1 300 GB 3 TB

Multi-instance 500 GB/day with 90-day Retention

Search Head 1 300 GB 0

Indexer 2 300 GB 13.9 TB

Admin Server 1 150 GB 0

Multi-instance 1000 GB/day with 90-day Retention

Search Head 1 300 GB 0

Indexer 5 300 GB 10.8 TB

Admin Server 1 150 GB 0

Multi-instance 1000 GB/day with > 90-day Retention

Search Head 1 300 GB 0

Indexer 5 300 GB 2.1 TB

Admin Server 1 150 GB 0

Multi-instance 1000 GB/day with > 90-day Retention and Indexer High Availability

Search Head 1 300 GB 0

Indexer 5 300 GB 2.1 TB

Admin Server 1 150 GB 0
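The indexer storage figures in Table 9 can be sanity-checked with the common rule of thumb that indexed data on disk occupies roughly half of the raw ingest volume (about 15% compressed rawdata plus about 35% index files). This is a sketch, not the exact methodology used in this guide:

```shell
# Rough sizing sketch for the 1000 GB/day, 90-day retention deployment.
# ASSUMPTION: stored size ≈ 50% of raw ingest (a common Splunk rule of thumb).
DAILY_GB=1000; RETENTION_DAYS=90; INDEXERS=5
TOTAL_GB=$(( DAILY_GB * RETENTION_DAYS / 2 ))   # 45000 GB across the cluster
PER_INDEXER_GB=$(( TOTAL_GB / INDEXERS ))       # 9000 GB ≈ 9 TB per indexer
echo "per-indexer storage: ${PER_INDEXER_GB} GB"
```

The 10.8 TB per indexer provisioned in Table 9 for the 90-day case leaves headroom above this estimate for growth and filesystem overhead.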

Isilon storage design

In this solution, a four-node Isilon X410 cluster is used for the Splunk deployment to

provide configurable retention for cold buckets. The detailed configuration of Isilon nodes

and Isilon storage design for Splunk are shown in Table 10 and Table 11.

Table 10. Isilon node configuration

CPU CPU cores RAM SSD capacity HDD capacity Network

Two Intel Xeon Processors 2.0 GHz

8 cores 128 GB 3.2 TB 64 TB 2 x 10 GbE

2 x 1 GbE

Table 11. Isilon storage design for Splunk

Deployment Type Instance Role Quantity Indexer Cold Bucket Storage

Multi-instance 1000 GB/day with > 90-day Retention

Indexer 5 10.8 TB

Multi-instance 1000 GB/day with > 90-day Retention and Indexer High Availability

Indexer 5 10.8 TB

For the overall Isilon configuration, we followed these best practices:

Enabled SmartPools settings across all four Isilon nodes and use an SSD as L3

cache for metadata read acceleration

Enabled SmartConnect to provide automatic client connection load balancing and

failover capabilities

Enabled SmartCache for write performance

Optimized the data access pattern for concurrent access

Used 10 Gb/s external network for data connection

Increased network MTU to 9000 (Jumbo Frames)
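For the cold-bucket NFS mount itself, an illustrative fstab entry might look like the following. The SmartConnect zone name, export path, and mount options are assumptions for the sketch (Isilon documentation commonly suggests NFSv3 with large rsize/wsize values), not values taken from this guide:

```shell
# Sketch: an example fstab entry for mounting an Isilon export for cold buckets.
# ASSUMPTIONS: hostname, export path, and rsize/wsize values are all illustrative.
echo "isilon-sc.example.local:/ifs/splunk/cold /mnt/isilon/splunk nfs nfsvers=3,rw,hard,tcp,rsize=131072,wsize=524288 0 0" >> fstab.example
```

Whatever values you choose, verify the resulting mount meets the cold-bucket latency guidance above (less than 5 ms ideally, never more than 200 ms).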

Splunk and Dell EMC recommend that NFS storage, including Isilon, be used only for cold and frozen data, never for hot/warm buckets. For details about system requirements, see the Splunk Enterprise Installation Manual.


Virtualization design

VxRail delivers virtualization, compute, and storage in a scalable, easy-to-manage, hyper-converged infrastructure appliance. It deeply integrates VMware vSphere, an industry-leading virtualization platform, to provide application virtualization on a highly available, resilient, and efficient on-demand infrastructure.

Virtual machine configuration

For details about the configuration of the virtual machines that are used in this solution, refer to Table 4, Table 5, Table 6, and Table 7 in the Compute design section.

Virtualization configuration

This solution implements the following Dell EMC and VMware best practices to provide optimal performance for all Splunk Enterprise virtual machines running on the VxRail Appliance:

Create a vSphere HA cluster to provide a virtualized, high-availability Splunk Enterprise environment that is easy to use and cost-effective.

With a virtual Non-Uniform Memory Access (NUMA) topology, configure each virtual socket with no more virtual CPU cores than the number of physical cores per socket in the ESXi host.

Use a VMware Paravirtual SCSI controller to increase throughput with significant CPU utilization reduction in the SAN environment.

Use a VMware VMXNET3 network adapter to optimize network performance.

Use Thick Provision Eager Zeroed disk provisioning to optimize virtual disk performance.

Install VMware tools in the guest OS to improve VM performance.

Set the VM advanced parameter numa.vcpu.preferHT to "true" to enable hyper-threading with NUMA in ESXi.

For more information, refer to Performance Best Practices for VMware vSphere 6.0.
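The memory reservation and NUMA practices above correspond to VM advanced settings such as the following .vmx fragment. This is a sketch: in practice you would set these through the vSphere client rather than editing the file directly, and the reservation value shown assumes the 256 GB (262,144 MB) indexer VMs from Table 6:

```shell
# Sketch: illustrative advanced settings for a Splunk indexer VM.
# ASSUMPTION: 262144 MB matches the 256 GB VMs in this guide's compute design.
cat > vmx.example <<'EOF'
numa.vcpu.preferHT = "TRUE"
sched.mem.min = "262144"
EOF
```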

Splunk Enterprise design

Figure 6 shows the Splunk single instance 50 GB/day with 90-day retention deployment

design with a single Splunk Enterprise instance and a combined indexer and search head.

Note: In this solution, we use one forwarder to demonstrate the deployment process.


Figure 6. Splunk single instance 50 GB/day with 90-day retention deployment

Figure 7 shows the Splunk multi-instance 500 GB/day with 90-day retention deployment

with one search head and two indexers.

Note: In this solution, we use one forwarder to demonstrate the deployment process.


Figure 7. Splunk multi-instance 500 GB/day with 90-day retention deployment

Figure 8 shows the Splunk multi-instance 1000 GB/day with 90-day retention deployment

and Splunk multi-instance 1000 GB/day with > 90-day retention deployment with one

search head and five indexers. Using Isilon, the VxRail cluster can provide

configurable retention for Splunk cold buckets.

Note: In this solution, we use one forwarder to demonstrate the deployment process.


Figure 8. Splunk multi-instance 1000 GB/day with 90-day retention deployment and Splunk multi-instance 1000 GB/day with > 90-day retention deployment

Figure 9 shows the Splunk multi-instance 1000 GB/day with > 90-day retention and

indexer high availability deployment design with one search head and five indexers. Using

Isilon, the VxRail Appliances can provide configurable retention for Splunk cold buckets.

Note: In this solution, we use one forwarder to demonstrate the deployment process.


Figure 9. Splunk multi-instance 1000 GB/day with > 90-day retention and indexer high availability deployment

Splunk Enterprise Linux configuration

In this solution, we implement the following Linux configuration parameter settings to

provide optimal Splunk Enterprise performance:

Change tuned profile to virtual-host in RHEL 6.X. This profile decreases the

swappiness of virtual memory and enables more aggressive writeback of dirty

pages. It tunes the system settings for high throughput and low latency.

Disable Transparent Huge Pages (THP) to avoid the degradation of Splunk

Enterprise performance on RHEL 6.X. For more information, refer to Transparent

huge memory pages and Splunk performance.

Disable SELinux, so that enhanced system security does not add overhead to the

performance.

Increase the maximum number of open file descriptors and processes by

configuring ulimit to avoid the “Too Many Open Files” exception. Table 12 shows

the recommended values.

Table 12. Recommended ulimit values

System-wide resources    Ulimit invocation    Recommended minimum value
Open files               ulimit -n            8,192
User processes           ulimit -u            1,024
Data segment size        ulimit -d            1,073,741,824

Tune the kernel to optimize the network for high throughput over a 10 Gb Ethernet

by adding the following command string to /etc/sysctl.conf:

net.ipv4.tcp_timestamps=0

net.ipv4.tcp_sack=1

net.core.netdev_max_backlog=250000

net.core.rmem_max=4194304

net.core.wmem_max=4194304

net.core.rmem_default=4194304

net.core.wmem_default=4194304

net.core.optmem_max=4194304

net.ipv4.tcp_rmem=4096 87380 4194304

net.ipv4.tcp_wmem=4096 65536 4194304

net.ipv4.tcp_low_latency=1


Chapter 4 Splunk Single Instance 50 GB/day with 90-day Retention

This chapter presents the following topics:

Overview .............................................................................................................. 31

Implementation ................................................................................................... 31

Use case summary ............................................................................................. 38


Overview

In this chapter, we show the implementation of the Splunk single instance 50 GB/day with 90-day retention deployment on one VxRail node added to an existing VxRail cluster. A single Splunk Enterprise instance serves as both indexer and search head. We optimize the design for both high performance and data retention capability by using VxRail for all Splunk index buckets (hot/warm/cold).

Implementation

Table 13 lists the process flow for the Splunk single Instance 50 GB/day with 90-day

retention implementation on one VxRail node.

Table 13. Process flow for Splunk single Instance 50 GB/day with 90-day retention implementation

Step Action Description

1 Expanding VxRail cluster Add one VxRail node into the existing VxRail cluster

2 Setting up vSAN policy Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets

3 Creating Splunk VM template

Prepare the VM template that is used for indexer/search head and forwarder. Tune it according to Splunk’s recommendation

4 Deploying Splunk indexer/search head

Deploy indexer/search head instance that is based on the Splunk VM template

5 Deploying forwarder Deploy forwarder instance that is based on the Splunk VM template for validating implementation

6 Validating implementation Validate the implementation of Splunk

Expanding VxRail cluster

To begin the implementation, expand the existing VxRail cluster by adding one VxRail node to provide the dedicated resources for the Splunk Enterprise single instance deployment. This is a Dell EMC internal process. Contact your Dell EMC or partner representative when planning to expand your VxRail cluster.

Note: The VCSA root password must be the same as the password of administrator@vsphere.local. If the password was changed, change it back before adding the new node.

Setting up vSAN policy

Follow these steps to set up the vSAN policy for Splunk hot/warm and cold buckets:

1. Log in to the vCenter vSphere Web Client using the administrator account.

2. Navigate to Home > VM Storage Policies.

Figure 10. VM storage policies

3. Create New VM Storage Policy for Splunk hot/warm and cold buckets using

these settings:

Name: Splunk-Data-Policy

Description: Used for Splunk hot/warm bucket and cold buckets

Number of failures to tolerate: 1

Number of disk stripes per object: 10

Object space reservation (%): 0

Failure tolerance method: RAID-1

Figure 11. VM storage policy settings

Creating Splunk VM template

Follow these steps to create a VM template and tune it according to Splunk's recommendations. We will use the template to deploy a Splunk indexer/search head and a Splunk forwarder.

1. Log in to the vCenter client and deploy one VM with RHEL 6.7 OS.

2. Log in to the Linux VM deployed in step 1 using the root account.

3. Disable the firewall to allow Splunk instances on different hosts to communicate

with each other properly:

service iptables stop

chkconfig iptables off

4. Disable SELinux, so that enhanced system security does not add overhead to

Splunk’s performance:

vi /etc/selinux/config

SELINUX=disabled

5. Disable Transparent Huge Pages (THP) to avoid the degradation of Splunk

Enterprise performance on RHEL 6.X:

vi /etc/grub.conf

transparent_hugepage=never

6. Change the tuned profile to virtual-host in RHEL 6.X for high throughput and low

latency storage access:

yum install -y tuned

chkconfig tuned on

tuned-adm profile virtual-host

7. Tune the kernel to optimize the network for high throughput over a 10 Gb Ethernet

by adding the following command string to /etc/sysctl.conf:

vi /etc/sysctl.conf

net.ipv4.tcp_timestamps=0

net.ipv4.tcp_sack=1

net.core.netdev_max_backlog=250000

net.core.rmem_max=4194304

net.core.wmem_max=4194304

net.core.rmem_default=4194304

net.core.wmem_default=4194304

net.core.optmem_max=4194304

net.ipv4.tcp_rmem=4096 87380 4194304

net.ipv4.tcp_wmem=4096 65536 4194304

net.ipv4.tcp_low_latency=1

8. Increase the maximum number of open file descriptors and processes by

configuring ulimit to avoid the “Too Many Open Files” exception:

vi /etc/security/limits.conf

root - nofile 65536

root - nproc 65536

vi /etc/security/limits.d/90-nproc.conf

root - nofile 65536


root - nproc 65536

9. Remove the NIC's MAC address runtime mapping file:

rm -f /etc/udev/rules.d/70-persistent-net.rules

10. Shut down the server:

shutdown -P now

11. Export the Open Virtualization Format (OVF) template for the Splunk VM

template.
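Before exporting the template, a quick sanity check that the kernel settings from step 7 actually landed in the file can save a re-clone later. A minimal sketch (the function name is ours, not a Splunk or Dell EMC tool):

```shell
# Sketch: verify that a sysctl configuration file contains the values set in step 7.
check_sysctl_file() {
  grep -q '^net.ipv4.tcp_timestamps=0$' "$1" &&
  grep -q '^net.core.rmem_max=4194304$'  "$1" &&
  grep -q '^net.ipv4.tcp_low_latency=1$' "$1"
}
# On the template VM you would run: check_sysctl_file /etc/sysctl.conf
```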

Deploying Splunk single instance

Follow these steps to deploy one Splunk indexer/search head instance:

1. Log in to the vCenter vSphere Client and deploy one VM for indexer/search head

using the Splunk VM template.

2. Edit the virtual machine settings as follows:

Memory: 256 GB

CPUs: 64

Hard disk: 300 GB (OS Storage)

3. Reserve all guest memory (all locked)

4. Power on the VM and configure the IP and hostname.

5. Download and install Splunk Enterprise 6.5.0 on the VM to serve as the combined

indexer/search head by following these steps:

a. Change permissions on the installation package:

chmod 744 splunk-6.5.0-xxx-linux-2.6-x86_64.rpm

b. Run the following command to install the Splunk Enterprise RPM in the

default directory /opt/splunk:

rpm -i splunk-6.5.0-xxx-linux-2.6-x86_64.rpm

Note: To install Splunk in a different directory, use the --prefix flag.

rpm -i --prefix=/opt/new_directory splunk-6.5.0-xxx-linux-2.6-x86_64.rpm

6. Start Splunk Enterprise with --accept-license for the first time:

/opt/splunk/bin/splunk start --accept-license

7. Configure the Splunk Enterprise license on the web interface at https://<Splunk IP>:8000:

a. Log in with the default credential: admin/changeme

b. Navigate to Settings > Licensing.

c. Click Add license.


Figure 12. Adding a license

8. Set up the receiving port 9997:

/opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme

9. Remove NIC's MAC address runtime mapping file:

rm -f /etc/udev/rules.d/70-persistent-net.rules

10. Change allowRemoteLogin to “always” in the server.conf file:

vi /opt/splunk/etc/system/local/server.conf

[general]

allowRemoteLogin=always

11. Remove the file instance.cfg:

rm -f /opt/splunk/etc/instance.cfg

12. Export OVF template for indexer/search head VM template.

13. Mount a 3 TB disk for Indexer Storage by following these steps:

a. Edit the virtual machine settings as follows:

Hard disk: 3 TB (Indexer Storage)

b. Stop Splunk Enterprise:

/opt/splunk/bin/splunk stop

c. Make partitions:

fdisk /dev/sdb

d. Make file systems:

mkfs.ext4 /dev/sdb1

e. Mount to Splunk default database:

mount /dev/sdb1 /opt/splunk/var/lib/splunk/defaultdb

vi /etc/fstab

/dev/sdb1 /opt/splunk/var/lib/splunk/defaultdb ext4 defaults 1 1


Note: To use custom paths for the hot/warm and cold buckets, refer to Use multiple partitions for index data in the Splunk online document "Managing Indexers and Clusters of Indexers."

f. Start Splunk Enterprise:

/opt/splunk/bin/splunk start

14. Log in to the vCenter vSphere Web Client.

15. Navigate to Home > Hosts and Clusters > the indexer/search head VM >

Manage > Policies > Edit VM Storage Policies.

16. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.

Deploying forwarder

Follow these steps to deploy one Splunk universal forwarder:

1. Log in to the vCenter vSphere Client and deploy one VM for Forwarder using the

Splunk VM template.

2. Edit the virtual machine settings as follows:

Memory: 4 GB

CPUs: 4

Hard disk: 300 GB (OS Storage)

3. Reserve all guest memory (all locked).

4. Power on the VM and configure the IP and hostname.

5. Download and install Universal Forwarder 6.5.0 on the VM:

rpm -i splunkforwarder-6.5.0-xxx-linux-2.6-x86_64.rpm

6. Start the universal forwarder:

/opt/splunkforwarder/bin/splunk start --accept-license

7. Configure the data input on the forwarder:

/opt/splunkforwarder/bin/splunk add monitor /data

Note: The forwarder asks you to authenticate and begins monitoring the specified directory

immediately after you log in.

8. Restart the universal forwarder:

/opt/splunkforwarder/bin/splunk restart

9. Remove NIC's MAC address runtime mapping file:

rm -f /etc/udev/rules.d/70-persistent-net.rules

10. Remove file instance.cfg:

rm -f /opt/splunk/etc/instance.cfg


11. Export OVF template for Forwarder VM template.

12. Configure the universal forwarder to connect to the receiving indexer:

/opt/splunkforwarder/bin/splunk add forward-server deploy-indexer01.bigdata.emc.local:9997
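For reference, the add forward-server command persists its configuration in the forwarder's outputs.conf. The resulting stanza looks roughly like the sketch below; the group name is Splunk's default auto-load-balancing group, and the exact layout should be treated as illustrative:

```shell
# Sketch: approximate outputs.conf stanza written by 'add forward-server'.
cat > outputs.conf.example <<'EOF'
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = deploy-indexer01.bigdata.emc.local:9997
EOF
```

Adding more indexers to the server list later enables the forwarder to auto-load-balance across them.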

Validating the implementation

Follow these steps to validate the implementation of Splunk:

1. Verify the forward-server as shown in Figure 13:

/opt/splunkforwarder/bin/splunk list forward-server

Figure 13. Verification of forward server

2. Verify the indexer by following these steps:

a. Upload data to the forwarder as shown in Figure 14.

Figure 14. Uploading data to the forwarder

Note: Download tutorialdata.zip from Splunk Tutorial.

b. Search on the indexer as shown in Figure 15.

Figure 15. Searching on the indexer

3. Stop the forwarder and clean the index:

/opt/splunkforwarder/bin/splunk remove forward-server deploy-indexer01.bigdata.emc.local:9997

/opt/splunkforwarder/bin/splunk list forward-server

/opt/splunkforwarder/bin/splunk stop

/opt/splunkforwarder/bin/splunk clean eventdata -index main

4. Delete the forwarder VM.


Use case summary

In this use case, we added a VxRail node to an existing VxRail cluster to provide the compute and storage capacity required to deploy a Splunk single instance at 50 GB/day with 90-day retention. The implementation shows that the VxRail light starter kit makes Splunk deployment easy.


Chapter 5 Splunk Multi-instance 500 GB/day with 90-day Retention

This chapter presents the following topics:

Overview .............................................................................................................. 40

Implementation ................................................................................................... 40

Use case summary ............................................................................................. 47


Overview

In this chapter, we will show the Splunk multi-instance 500 GB/day with 90-day retention implementation on a four-node VxRail cluster, with increasing data volume and number of concurrent users.

Implementation

Table 14 lists the process flow for the Splunk multi-instance 500 GB/day with 90-day retention implementation on a four-node VxRail cluster.

Table 14. Process flow for Splunk multi-instance 500 GB/day with 90-day retention implementation

Step  Action                         Description
1     Implementing VxRail cluster    Implement a four-node VxRail cluster
2     Setting up vSAN policy         Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets
3     Deploying Splunk indexer       Deploy two indexers that are based on the Splunk indexer/search head template
4     Deploying Splunk search head   Deploy a search head that is based on the Splunk indexer/search head template and configure it with the two indexers
5     Deploying Splunk admin server  Deploy an admin server and configure the indexers and the search head into the cluster
6     Validating implementation      Validate the implementation of Splunk

To begin the implementation, implement a four-node VxRail cluster. This is a Dell EMC internal process. Contact your Dell EMC or partner representative when planning to implement your VxRail cluster.

For details of the procedure of setting up vSAN policy, refer to Setting up vSAN policy in Chapter 4.

Follow these steps to deploy two indexers.

1. Log in to the vCenter vSphere client.

2. Use the indexer/search head VM template to deploy one indexer VM.

3. Configure the IP and hostname.

4. Mount a 13.9 TB disk for Indexer Storage.

5. Log in to the vCenter vSphere Web Client.

6. Navigate to Home > Hosts and Clusters > the indexer/search head VM > Manage > Policies > Edit VM Storage Policies.

7. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.


8. Start Splunk Enterprise:

/opt/splunk/bin/splunk start

9. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-indexer0[1-2].bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname deploy-indexer0[1-2].bigdata.emc.local

10. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart
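The bracket notation in step 9 is shorthand: each indexer VM gets its own number (01 or 02). A minimal sketch that derives both names from a per-VM number; N is a hypothetical variable you set on each VM, and the commands are echoed for review rather than executed:

```shell
# Sketch: derive this indexer's Splunk names from its number.
# N is an assumed per-VM value; set it to 1 on indexer01, 2 on indexer02.
N=1
name="deploy-indexer0${N}.bigdata.emc.local"
echo "/opt/splunk/bin/splunk set servername ${name}"
echo "/opt/splunk/bin/splunk set default-hostname ${name}"
```

Drop the echoes to run the commands directly on each VM.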

Follow these steps to deploy a Splunk search head:

1. Log in to the vCenter vSphere client.

2. Use the indexer/search head VM template to deploy one search head VM.

3. Configure the IP and hostname.

4. Start Splunk Enterprise:

/opt/splunk/bin/splunk start

5. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-searchhead01.bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname deploy-searchhead01.bigdata.emc.local

6. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

7. Configure the indexer instances as search peers:

/opt/splunk/bin/splunk add search-server https://deploy-indexer01.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

/opt/splunk/bin/splunk add search-server https://deploy-indexer02.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

Follow these steps to deploy one admin server. The admin server is recommended for a Splunk distributed environment. The procedure for deploying an admin server is the same as for deploying an indexer cluster master, but it does not perform any index replication.

1. Use the Splunk VM template to deploy one VM for the cluster master.

2. Configure the IP and hostname of the VM.

3. Edit the virtual machine settings as follows:

Memory: 256 GB


CPUs: 40

4. Start Splunk Enterprise:

/opt/splunk/bin/splunk start

5. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-adminserver.bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname deploy-adminserver.bigdata.emc.local

6. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

7. Log in to the Splunk web server using the default credential admin/changeme.

8. Navigate to Settings > Indexer clustering.

9. Click Enable indexer clustering, as shown in Figure 16.

Figure 16. Enabling indexer clustering

10. Choose Master node, as shown in Figure 17.

Figure 17. Choose Master node

11. Configure the Replication Factor and Search Factor:


Replication Factor: 1

Search Factor: 1

Note: This causes the cluster to function purely as a coordinated set of Splunk Enterprise instances, without data replication. The cluster will not make any duplicate copies of the data, so you can keep storage size and processing overhead to a minimum.

12. Click Enable Master Node.

Figure 18 shows the message that is displayed.

Figure 18. Restarting Splunk after enabling the master node

13. Click Go to Server Controls and go to the Settings page from which you can initiate the restart.

Note: Do not restart the master while it is waiting for the peers to join the cluster. Otherwise, you must restart the peers a second time.

14. Log in to the Splunk web server of the indexers using the default credential admin/changeme.

15. Navigate to Settings > Indexer clustering.

16. Click Enable indexer clustering.

17. Choose Peer node, as shown in Figure 19.


Figure 19. Choosing peer node

18. Configure the Master URI and Peer replication port, as shown in Figure 20:

Master URI: https://<Admin Server IP>:8089

Peer replication port: 8080

Figure 20. Configuring Master URI and peer replication port

19. Click Enable peer node.

Figure 21 shows the message that is displayed.


Figure 21. Restarting Splunk

20. Click Go to Server Controls and restart the server.

21. Repeat step 14 to step 20 on all indexer VMs.

22. Log in to the Splunk web server of the search head using the default credential admin/changeme.

23. Navigate to Settings > Indexer clustering.

24. Click Enable indexer clustering.

25. Choose Search head node, as shown in Figure 22.


Figure 22. Choosing search head node

26. Configure the Master URI: https://<Admin Server IP>:8089, as shown in Figure 23.

Figure 23. Configuring the Master URI

27. Click Enable search head node.

Figure 24 shows the message that is displayed.


Figure 24. Restarting Splunk from Server Controls

28. Click Go to Server Controls and restart the server.

29. Navigate to Settings > Indexer clustering, as shown in Figure 25.

Figure 25. Completing the process.
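Enabling the master through the UI (steps 8 through 13) writes a clustering stanza to server.conf on the admin server. The following is a sketch of what that stanza might look like with the factors from step 11; the path and exact contents are assumptions, not captured from this environment:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf (sketch)
[clustering]
mode = master
replication_factor = 1
search_factor = 1
```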

Follow these steps to validate the implementation of Splunk:

1. Verify search peers on the search head:

a. Log in to the web server of the search head with default credentials.

b. Navigate to Settings > Distributed search.

c. Click Search peers and check the two indexers as shown in Figure 26.

Figure 26. Check the two indexers

2. Verify that five VMs are balanced among the four ESXi servers.

Use case summary

In this use case, we implemented a four-node VxRail cluster to deploy the Splunk multi-instance 500 GB/day with 90-day retention with one search head and two indexers. The implementation shows VxRail's flexibility and demonstrates that it is easy to scale a Splunk deployment by distributing Splunk Enterprise instances across multiple virtual machines.


Chapter 6 Splunk Multi-instance 1000 GB/day with 90-day Retention

This chapter presents the following topics:

Overview .............................................................................................................. 49

Implementation ................................................................................................... 49

Use case summary ............................................................................................. 51


Overview

In this chapter, we will show the Splunk multi-instance 1000 GB/day with 90-day retention implementation on a seven-node VxRail cluster.

Implementation

Table 15 lists the process flow for the Splunk multi-instance 1000 GB/day with 90-day retention implementation on a seven-node VxRail cluster.

Table 15. Process flow for Splunk multi-instance 1000 GB/day with 90-day retention implementation

Step  Action                         Description
1     Implementing VxRail cluster    Implement a seven-node VxRail cluster
2     Setting up vSAN policy         Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets
3     Deploying Splunk indexers      Deploy five indexer instances that are based on the indexer/search head VM template
4     Deploying Splunk search head   Deploy a search head instance that is based on the indexer/search head VM template
5     Deploying Splunk admin server  Deploy an admin server and configure the indexers and the search head into the cluster
6     Validating implementation      Validate the implementation of Splunk

To begin the implementation, implement a seven-node VxRail cluster. This is a Dell EMC internal process. Contact your Dell EMC or partner representative when planning to implement your VxRail cluster.

For details of the procedure of setting up vSAN policy, refer to Setting up vSAN policy in Chapter 4.

Follow these steps to deploy five indexers.

1. Log in to the vCenter vSphere client.

2. Use the indexer/search head VM template to deploy one indexer VM.

3. Configure the IP and hostname.

4. Mount a 10.8 TB disk for Indexer Storage.

5. Log in to the vCenter vSphere Web Client.

6. Navigate to Home > Hosts and Clusters > the indexer/search head VM > Manage > Policies > Edit VM Storage Policies.

7. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.

8. Start Splunk Enterprise:


/opt/splunk/bin/splunk start

9. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-indexer0[1-5].bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname deploy-indexer0[1-5].bigdata.emc.local

10. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

Follow these steps to deploy a Splunk search head:

1. Log in to the vCenter vSphere client.

2. Use the indexer/search head VM template to deploy one search head VM.

3. Configure the IP and hostname.

4. Start Splunk Enterprise:

/opt/splunk/bin/splunk start

5. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-searchhead01.bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname deploy-searchhead01.bigdata.emc.local

6. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

7. Configure the indexer instances as search peers:

/opt/splunk/bin/splunk add search-server https://deploy-indexer01.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

/opt/splunk/bin/splunk add search-server https://deploy-indexer02.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

/opt/splunk/bin/splunk add search-server https://deploy-indexer03.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

/opt/splunk/bin/splunk add search-server https://deploy-indexer04.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

/opt/splunk/bin/splunk add search-server https://deploy-indexer05.bigdata.emc.local:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
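The five add search-server commands in step 7 differ only in the indexer number, so they can be generated in a loop. This sketch assumes the same hostname pattern and default credentials as above; it echoes each command so you can review the list before piping it to sh:

```shell
# Generate the search-peer registration commands for indexer01 through indexer05.
# Review the output, then pipe it to sh (or drop the echo) to execute.
for i in 1 2 3 4 5; do
  echo "/opt/splunk/bin/splunk add search-server" \
       "https://deploy-indexer0${i}.bigdata.emc.local:8089" \
       "-auth admin:changeme -remoteUsername admin -remotePassword changeme"
done
```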


For details of the procedure of deploying the Splunk admin server, refer to Deploying Splunk admin server in Chapter 5.

Follow these steps to validate the implementation of Splunk:

1. Verify search peers on the search head:

a. Log in to the web server of the search head with default credentials.

b. Navigate to Settings > Distributed search.

c. Click Search peers and check the five indexers.

2. Verify that VMs are balanced among the seven ESXi servers.

Use case summary

In this use case, we implemented a seven-node VxRail cluster to deploy the Splunk multi-instance 1000 GB/day with 90-day retention with one search head and five indexers. The implementation shows that the VxRail Appliance can easily scale out for business growth. For details about VxRail Appliance scalability, refer to Appendix A: VxRail Appliance Scalability.


Chapter 7 Splunk Multi-instance 1000 GB/day with > 90-day Retention

This chapter presents the following topics:

Overview .............................................................................................................. 53

Implementation ................................................................................................... 53

Use case summary ............................................................................................. 61


Overview

In this chapter, we will show the Splunk multi-instance 1000 GB/day with > 90-day retention implementation on a seven-node VxRail cluster with Isilon. This procedure creates a distributed Splunk Enterprise environment featuring both high performance and large-capacity data retention, using VxRail for hot/warm buckets and Isilon for cold buckets.

Implementation

Table 16 lists the process flow for the Splunk multi-instance 1000 GB/day with > 90-day retention implementation on a seven-node VxRail cluster with Isilon.

Table 16. Process flow for Splunk multi-instance 1000 GB/day with > 90-day retention implementation

Step  Action                         Description
1     Implementing VxRail cluster    Implement a seven-node VxRail cluster
2     Setting up vSAN policy         Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets
3     Implementing Isilon            Prepare Isilon for VxRail with Isilon
4     Configuring Isilon             Configure Isilon NFS and add Isilon storage to VxRail
5     Deploying Splunk indexers      Deploy five indexer instances that are based on the indexer/search head VM template
6     Adding Isilon storage          Add disks from the SplunkCold data store to each indexer VM for the Splunk cold bucket
7     Deploying search head          Deploy a search head instance that is based on the search head VM template
8     Deploying Splunk admin server  Deploy an admin server and configure the indexers and the search head into the cluster
9     Deploying forwarder            Deploy a forwarder instance that is based on the forwarder VM template for validating the implementation
10    Validating implementation      Validate the implementation of Splunk

To begin the implementation, implement a seven-node VxRail cluster. This is a Dell EMC internal process. Contact your Dell EMC or partner representative when planning to implement your VxRail cluster.

For details of the procedure of setting up vSAN policy, refer to Setting up vSAN policy in Chapter 4.

Implementing an Isilon storage array is a Dell EMC internal process. Contact your Dell EMC representative when planning to set up your Isilon storage.


Follow these steps to configure Isilon NFS for the VxRail cluster:

1. Log in to the Isilon OneFS web service using the root account.

2. Navigate to Cluster Management > Network Configuration.

3. Click More > Add Subnet of groupnet0 to create a subnet, as shown in Figure 27.

Figure 27. Creating a subnet

4. Navigate to Access > Access Zones.

5. Click Create an access zone to create an access zone for Splunk, as shown in Figure 28.


Figure 28. Creating an access zone

6. Navigate to Cluster Management > Network Configuration.

7. Click More > Add Pool of subnet-10g to create an IP address pool, as shown in Figure 29.


Figure 29. Creating an IP address pool

8. Navigate to Protocols > UNIX Sharing (NFS) > NFS Exports.

9. Click Create Export to create an NFS export for Splunk, as shown in Figure 30.

Description: NFS Share for Splunk

Root Clients: IP addresses of all the ESXi servers in VxRail

Directory Paths: /ifs/data/splunk


Figure 30. Creating an NFS export

After completing the Isilon configuration, run the following procedure on each ESXi server to add Isilon NFS storage to VxRail.

1. Log in to the vCenter client using the administrator account.

2. Navigate to Home > Inventory > Hosts and Clusters > ESXi server > Configuration > Storage > Datastores.

3. Click Add Storage to add Isilon NFS storage as a data store, as shown in Figure 31:

Storage Type: Network File System

Server: <Isilon Smart Connect Zone Name for Splunk>

Folder: /ifs/data/splunk

Data store Name: SplunkCold

Figure 31. Adding Isilon NFS storage as a data store


Follow these steps to deploy five indexers.

1. Log in to the vCenter vSphere client.

2. Use the indexer/search head VM template to deploy one indexer VM.

3. Configure the IP and hostname.

4. Mount a 2.1 TB disk for Indexer Storage.

5. Log in to the vCenter vSphere Web Client.

6. Navigate to Home > Hosts and Clusters > the indexer/search head VM > Manage > Policies > Edit VM Storage Policies.

7. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.

8. Start Splunk Enterprise:

/opt/splunk/bin/splunk start

9. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername deploy-indexer0[1-5].bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname deploy-indexer0[1-5].bigdata.emc.local

10. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

Follow these steps to add disks from the SplunkCold data store on each indexer VM:

1. Log in to the vCenter client using the administrator account.

2. Click Indexer VM and Edit virtual machine settings.

3. Click Add Hardware to run the wizard:

Device Type: Hard Disk

Disk: Create a new virtual disk

Capacity/Disk Size: 10.8 TB

Location/Specify a data store or data store cluster: SplunkCold

Follow these steps to prepare Splunk cold buckets using Isilon disks on VMs.

1. Log in to the indexer using SSH.

2. Make a partition on the newly provisioned Isilon virtual disk:

fdisk /dev/sdc

3. Make a file system on the partition:

mkfs.ext4 /dev/sdc1

4. Mount the Isilon virtual disk to a separate mount point.

mkdir -p /data/isilon


mount /dev/sdc1 /data/isilon

vi /etc/fstab

/dev/sdc1 /data/isilon ext4 defaults 1 1

5. Create storage volume definitions for hot/warm and cold data to maximize storage utilization of your hot/warm tier before rolling to cold. Modify indexes.conf and set maxVolumeDataSizeMB to 80 percent of the total volume size. This reserves 20 percent for free space to ensure optimal filesystem performance, and rolls data from hot/warm to cold when maxVolumeDataSizeMB is reached.

vi /opt/splunk/etc/system/local/indexes.conf

#######################################################

# Volume for hot/warm buckets

#######################################################

[volume:primary]

path = /opt/splunk/var/lib/splunk

maxVolumeDataSizeMB = 1800000

#######################################################

# Volume for cold buckets

#######################################################

[volume:secondary]

path = /data/isilon

maxVolumeDataSizeMB = 8640000

6. Configure the homePath and coldPath for each index to ensure proper placement of hot/warm and cold indexed data:

vi /opt/splunk/etc/system/local/indexes.conf

[main]

homePath = volume:primary/defaultdb/db

coldPath = volume:secondary/defaultdb/colddb

7. Ensure that the directory for the coldPath exists.

mkdir -p /data/isilon/defaultdb/colddb

8. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart
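The 80 percent rule in step 5 can be checked with shell arithmetic. A sketch for the 10.8 TB cold volume, using decimal megabytes to match the values above:

```shell
# 10.8 TB cold volume expressed in decimal MB; reserve 20% free space.
vol_size_mb=10800000
max_volume_data_size_mb=$(( vol_size_mb * 80 / 100 ))
echo "$max_volume_data_size_mb"   # prints 8640000, the value used for volume:secondary
```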

For details about the deployment process of the Splunk search head, refer to Deploying Splunk search head in Chapter 6.

For details of the procedure of deploying a Splunk admin server, refer to Deploying Splunk admin server in Chapter 5.


Follow these steps to deploy a forwarder:

1. Log in to the vCenter vSphere client.

2. Use the forwarder VM template to deploy one forwarder VM.

3. Configure the IP and hostname.

4. Start Splunk forwarder:

/opt/splunkforwarder/bin/splunk start

5. Configure the forwarder to connect to the five indexers:

/opt/splunkforwarder/bin/splunk add forward-server deploy-indexer01.bigdata.emc.local:9997 -method autobalance

/opt/splunkforwarder/bin/splunk add forward-server deploy-indexer02.bigdata.emc.local:9997 -method autobalance

/opt/splunkforwarder/bin/splunk add forward-server deploy-indexer03.bigdata.emc.local:9997 -method autobalance

/opt/splunkforwarder/bin/splunk add forward-server deploy-indexer04.bigdata.emc.local:9997 -method autobalance

/opt/splunkforwarder/bin/splunk add forward-server deploy-indexer05.bigdata.emc.local:9997 -method autobalance

6. Verify the forward-server:

/opt/splunkforwarder/bin/splunk list forward-server
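For reference, the add forward-server commands in step 5 persist their settings to outputs.conf on the forwarder. The following is a sketch of the resulting load-balanced target group; the group name and exact file contents are assumptions, not captured from this environment:

```ini
# /opt/splunkforwarder/etc/system/local/outputs.conf (sketch)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = deploy-indexer01.bigdata.emc.local:9997, deploy-indexer02.bigdata.emc.local:9997, deploy-indexer03.bigdata.emc.local:9997, deploy-indexer04.bigdata.emc.local:9997, deploy-indexer05.bigdata.emc.local:9997
```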

Follow these steps to validate the implementation of Splunk:

1. Upload data to the forwarder.

2. Log in to the web interface of the search head.

3. Navigate to app Search & Reporting.

4. Run the command | dbinspect index=main to check the index location. The index is in the hot/warm bucket with the path $SPLUNK_HOME/var/lib/splunk/defaultdb/db.

5. Log in to the indexer using SSH.

6. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

7. Re-run the command | dbinspect index=main on the search head. The index is available in the Isilon cold bucket with the path /data/isilon/defaultdb/colddb.

8. Stop the forwarder and clean the index:

/opt/splunkforwarder/bin/splunk remove forward-server deploy-indexer01.bigdata.emc.local:9997

/opt/splunkforwarder/bin/splunk remove forward-server deploy-indexer02.bigdata.emc.local:9997

/opt/splunkforwarder/bin/splunk remove forward-server deploy-indexer03.bigdata.emc.local:9997

/opt/splunkforwarder/bin/splunk remove forward-server deploy-indexer04.bigdata.emc.local:9997

/opt/splunkforwarder/bin/splunk remove forward-server deploy-indexer05.bigdata.emc.local:9997

/opt/splunkforwarder/bin/splunk list forward-server

/opt/splunkforwarder/bin/splunk stop

/opt/splunkforwarder/bin/splunk clean eventdata -index main

9. Delete the forwarder VM.

10. Verify that the VMs are balanced among the seven ESXi servers.
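The bucket-location checks in steps 4 and 7 are easier to read if the dbinspect output is grouped per bucket; a sketch, assuming the standard state and path fields that dbinspect emits:

```
| dbinspect index=main | stats count by state, path
```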

Use case summary

This use case explains the design and implementation procedure for integrating Isilon into the Splunk multi-instance 1000 GB/day with > 90-day retention deployment. The solution enables a distributed Splunk Enterprise environment with high performance and large-capacity data retention, using VxRail for hot/warm buckets and Isilon for cold buckets.


Chapter 8 Splunk Multi-instance 1000 GB/day with > 90-day Retention and Indexer High Availability

This chapter presents the following topics:

Overview .............................................................................................................. 63

Implementation ................................................................................................... 63

Use case summary ............................................................................................. 75


Overview

In this chapter, we show the Splunk multi-instance 1000 GB/day with > 90-day retention and indexer high availability implementation on a seven-node VxRail cluster with Isilon. In this deployment, we build an indexer cluster: a group of indexers configured to replicate each other's data to ensure high availability. The indexer cluster design prevents data loss while keeping the data available for searching.

Implementation

In this implementation, we deploy a Splunk indexer cluster environment on the VxRail cluster with Isilon. If you are migrating from a distributed (non-clustered) Splunk Enterprise environment, refer to Migrate non-clustered indexers to a clustered environment in the Splunk online document Managing Indexers and Clusters of Indexers for the actual migration process.

Note: The buckets prior to the conversion are "standalone" buckets, so the cluster does not

replicate them. To migrate the legacy data to the cluster, contact Splunk Professional Services.

Table 17 lists the process flow for the Splunk multi-instance 1000 GB/day with > 90-day

retention and indexer high availability implementation on a seven-node VxRail cluster with

Isilon.

Table 17. Process flow for Splunk multi-instance 1000 GB/day with > 90-day retention and indexer high availability implementation

1. Implementing VxRail cluster: Implement a seven-node VxRail cluster.

2. Setting up vSAN policy: Prepare the vSAN policy that is used for Splunk disks, including hot/warm and cold buckets.

3. Implementing Isilon: Prepare Isilon for VxRail with Isilon.

4. Configuring Isilon: Configure Isilon NFS and add Isilon storage to VxRail.

5. Deploying indexer cluster master: Deploy one master instance for the indexer cluster.

6. Deploying peer nodes: Deploy five Splunk indexer instances and add them to the indexer cluster as peer nodes.

7. Adding Isilon storage: Add disks from the SplunkCold datastore to each indexer VM for the Splunk cold bucket.

8. Deploying search head: Deploy one search head instance for the indexer cluster.

9. Configuring master as forwarder: Configure the cluster master as a forwarder of the indexer cluster, connected directly to the peer nodes.

10. Deploying forwarder: Deploy one universal forwarder instance.

11. Configuring indexer discovery: Configure indexer discovery to connect the universal forwarder to the indexer cluster.

12. Validating implementation: Validate the Splunk implementation.


To begin the implementation, implement a seven-node VxRail cluster. This is a Dell EMC

internal process. Contact your Dell EMC or partner representative when planning to

implement your VxRail cluster.

For details of the procedure for setting up the vSAN policy, refer to Setting up vSAN policy in Chapter 4.

For details of the procedure for implementing Isilon, refer to Implementing Isilon in Chapter 7.

For details of the procedure for configuring Isilon, refer to Configuring Isilon in Chapter 7.

Follow these steps to deploy one cluster master for the indexer cluster. For more details

about this configuration, refer to Enable the indexer cluster master node in the Splunk

online document Managing Indexers and Clusters of Indexers.

1. Use the Splunk VM template to deploy one VM for the cluster master.

2. Configure the IP and hostname of the VM.

3. Edit the virtual machine settings as follows:

Memory: 256 GB

CPUs: 40

4. Start Splunk Enterprise:

/opt/splunk/bin/splunk start

5. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername cluster-master.bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname cluster-master.bigdata.emc.local

6. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

7. Log in to the Splunk web server using the default credentials admin/changeme.

8. Navigate to Settings > Indexer clustering.

9. Click Enable indexer clustering, as shown in Figure 32.

Figure 32. Enabling indexer clustering

10. Choose Master node, as shown in Figure 33.

Figure 33. Choose Master node

11. Configure the Replication Factor and Search Factor:

Replication Factor: 2

Search Factor: 2

Note: Choose an adequate replication factor and search factor for your environment. It is not advisable to increase them later, after the cluster contains significant amounts of data.

12. Click Enable Master Node.

Figure 34 shows the message that is displayed.

Figure 34. Restarting Splunk after enabling the master node

13. Click Go to Server Controls to go to the Settings page, from which you can initiate the restart.

Note: Do not restart the master while it is waiting for the peers to join the cluster. Otherwise, you must restart the peers a second time.
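The Enable Master Node UI steps above write the clustering settings into server.conf on the master. As a reference sketch (attribute names per the Splunk server.conf documentation; the pass4SymmKey value is a placeholder that must be set identically across the cluster):

```
# /opt/splunk/etc/system/local/server.conf on the cluster master
[clustering]
mode = master
replication_factor = 2
search_factor = 2
pass4SymmKey = my_cluster_secret
```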


Follow these steps to deploy the peer nodes of the indexer cluster. For further details

about this configuration, refer to Enable the peer nodes in the Splunk online document

Managing Indexers and Clusters of Indexers.

1. Use the Indexer/Search head VM template to deploy five VMs for the peer nodes.

2. Configure the IP and hostname of each VM.

3. Mount 2.1 TB of indexer storage from VxRail vSAN.

4. Log in to the vCenter vSphere Web Client.

5. Navigate to Home > Hosts and Clusters > the indexer/search head VM >

Manage > Policies > Edit VM Storage Policies.

6. Configure the storage policy to Splunk-Data-Policy for Indexer Storage disks.

7. Mount 10.8 TB indexer cold bucket storage from the Isilon NFS datastore. For

details, refer to Adding Isilon storage in Chapter 7.

8. Start Splunk Enterprise:

/opt/splunk/bin/splunk start

9. Configure a Splunk instance name:

/opt/splunk/bin/splunk set servername cluster-indexer01.bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname cluster-indexer01.bigdata.emc.local

10. Add a disk to the indexer VM using the SplunkCold datastore.

11. Mount the Isilon NFS disk to the Splunk cold bucket database path.

12. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

13. Log in to the Splunk web server using the default credentials admin/changeme.

14. Navigate to Settings > Indexer clustering.

15. Click Enable indexer clustering.

16. Choose Peer node, as shown in Figure 35.


Figure 35. Choosing peer node

17. Configure the Master URI and Peer replication port, as shown in Figure 36:

Master URI: https://<master IP>:8089

Peer replication port: 8080

Figure 36. Configuring Master URI and peer replication port

18. Click Enable peer node.

Figure 37 shows the message that is displayed.


Figure 37. Restarting Splunk

19. Click Go to Server Controls and restart the server.

Note: A warning message is displayed until at least two indexers (the replication factor) have joined the cluster, as shown in Figure 38.

Figure 38. Error message if adding fewer than <repFactor> indexers

20. Repeat this process on all five indexer VMs.
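Equivalently, the Enable peer node UI steps above record the peer settings in server.conf on each peer. A sketch, using the master address from this deployment and a placeholder cluster secret (mode = slave is the peer-mode value in this Splunk release, per the server.conf documentation):

```
# /opt/splunk/etc/system/local/server.conf on each peer node
[replication_port://8080]

[clustering]
mode = slave
master_uri = https://172.16.1.80:8089
pass4SymmKey = my_cluster_secret
```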

For details of the procedure of adding Isilon storage, refer to Adding Isilon storage in

Chapter 7.
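With the Isilon NFS disk mounted at the cold bucket database path, bucket placement is controlled by indexes.conf. A hedged sketch for the default index, assuming a hypothetical mount point of /mnt/splunkcold (in an indexer cluster, index settings like these are normally distributed to all peers from the master rather than edited locally on each peer):

```
# indexes.conf: hot/warm buckets stay on vSAN, cold buckets move to the Isilon NFS mount
[main]
homePath = $SPLUNK_DB/defaultdb/db
coldPath = /mnt/splunkcold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
```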

Follow these steps to deploy one search head in the indexer cluster. The cluster master

acts as one search head by default. For further details regarding this configuration, refer

to Enable the search head in the Splunk online document Managing Indexers and

Clusters of Indexers.

1. Use the Indexer/Search head VM template to deploy a VM for the search head.

2. Configure the IP and the hostname of the VM.

3. Start Splunk Enterprise:


/opt/splunk/bin/splunk start

4. Configure the Splunk instance name:

/opt/splunk/bin/splunk set servername cluster-searchhead.bigdata.emc.local

/opt/splunk/bin/splunk set default-hostname cluster-searchhead.bigdata.emc.local

5. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

6. Log in to the Splunk web server using the default credentials admin/changeme.

7. Navigate to Settings > Indexer clustering.

8. Click Enable indexer clustering.

9. Choose Search head node, as shown in Figure 39.

Figure 39. Choosing search head node


10. Configure the Master URI: https://<master IP>:8089, as shown in Figure 40.

Figure 40. Configuring the Master URI

11. Click Enable search head node.

Figure 41 shows the message that is displayed.

Figure 41. Restarting Splunk from Server Controls

12. Click Go to Server Controls and restart the server.

13. Navigate to Settings > Indexer clustering, as shown in Figure 42.

Figure 42. Completing the process

Note: Data replication can begin immediately with the default configuration. For details regarding

other configurations, refer to Prepare the peers for index replication in the Splunk online document

Managing Indexers and Clusters of Indexers.
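As with the master and the peers, the search head's cluster membership lands in server.conf. A reference sketch (placeholder secret; attribute names per the Splunk server.conf documentation):

```
# /opt/splunk/etc/system/local/server.conf on the search head
[clustering]
mode = searchhead
master_uri = https://172.16.1.80:8089
pass4SymmKey = my_cluster_secret
```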


Follow these steps to configure the master as a forwarder to the clustered indexers. The master forwards directly to the peer nodes, following the Splunk best practice Forward master node data to the indexer layer.

1. Log in to the master VM using SSH.

2. Create an outputs.conf file on the master:

vi /opt/splunk/etc/system/local/outputs.conf

# Turn off indexing on the master
[indexAndForward]
index = false

[tcpout]
defaultGroup = my_peers_nodes
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:my_peers_nodes]
server = 172.16.1.81:9997,172.16.1.82:9997,172.16.1.83:9997
autoLB = true

3. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

Follow these steps to deploy one universal forwarder, which is then connected to the peer nodes using indexer discovery in the next section.

1. Use the Forwarder VM template to deploy one forwarder VM.

2. Configure the IP and hostname of the VM.

3. Start Splunk forwarder:

/opt/splunkforwarder/bin/splunk start

4. Configure a Splunk instance name:

/opt/splunkforwarder/bin/splunk set servername cluster-forwarder.bigdata.emc.local

/opt/splunkforwarder/bin/splunk set default-hostname cluster-forwarder.bigdata.emc.local

5. Restart Splunk forwarder:

/opt/splunkforwarder/bin/splunk restart

There are several ways to get data into an indexer cluster; refer to Ways to get data into an indexer cluster in the Splunk online documentation. In this implementation, we use indexer discovery because of the advantages of the indexer discovery method.

Note: When the forwarder starts for the first time, it gets a list of peers from the master. However,

the list does not persist through a forwarder restart and the forwarder must ask for the list again.

Therefore, do not restart a forwarder while the master is down.


1. Make sure that the receiving port 9997 is open on each indexer by following these

steps:

a. Log in to the web server with the default credentials admin/changeme.

b. As shown in Figure 43, navigate to Settings > Forwarding and receiving >

Configure receiving.

Figure 43. Checking receiving port 9997

Note: When using indexer discovery, each peer node can have only one configured receiving port.

2. Enable indexer discovery on the master node by following these steps:

a. Log in to the master VM using SSH.

b. Add this stanza to the server.conf file:

vi /opt/splunk/etc/system/local/server.conf

[indexer_discovery]
pass4SymmKey = my_secret
polling_rate = 10
indexerWeightByDiskCapacity = true

c. Restart Splunk Enterprise:

/opt/splunk/bin/splunk restart

Note: The default polling_rate is 10. Refer to Adjust the frequency of polling in the Splunk online

document Managing Indexers and Clusters of Indexers for details.

Note: The default value of indexerWeightByDiskCapacity is false. Refer to Use weighted load balancing in the Splunk online document Managing Indexers and Clusters of Indexers for details.

3. Configure the forwarder to use indexer discovery by following these steps:

a. Log in to the forwarder VM using SSH.

b. Add these settings to the outputs.conf file:

vi /opt/splunkforwarder/etc/system/local/outputs.conf

[indexer_discovery:master1]
pass4SymmKey = my_secret
master_uri = https://172.16.1.80:8089

[tcpout:group1]
autoLBFrequency = 30
forceTimebasedAutoLB = true
indexerDiscovery = master1
useACK = true

[tcpout]
defaultGroup = group1

c. Restart the Splunk forwarder:

/opt/splunkforwarder/bin/splunk restart

Note: For further details regarding configuration of load balancing on the forwarder, refer to Set up

load balancing in the Splunk online document Forwarding Data.

Follow these steps to validate the Splunk implementation:

1. Validate the peer nodes on the master using these steps:

a. Log in to the web server of the master using the default credentials

admin/changeme.

b. Navigate to Settings > Indexer clustering.

c. Click the Peers tab to verify that the indexers are searchable.

2. Validate the search heads on the master using these steps:

a. Log in to the web server of the master using the default credentials

admin/changeme.

b. Navigate to Settings > Indexer clustering.

c. Click the Search Heads tab to verify that the search heads are up.

3. Validate the forwarder on the master using these steps:

a. Log in to the web server of the master using the default credentials

admin/changeme.

b. Navigate to Settings > Monitoring Console > Forwarders > Forwarders:

Instance.

c. Choose the forwarder in the Instance drop-down list. Click the name of the

forwarder under Status and Configuration to see the five receivers of the

forwarder.

d. Choose the master in the Instance drop-down list and click the master in

Status and Configuration to see the five receivers of the master.

4. Validate forwarders on the indexers by repeating step 3 for each indexer.

5. Validate the forward servers on the forwarder using these steps:

a. Log in to the forwarder VM using SSH.

b. Run this command to verify the five indexers as the forward servers:

/opt/splunkforwarder/bin/splunk list forward-server


Note: When load balancing is enabled, the active forward connections rotate among the indexers in the cluster.

6. Validate the cluster master on the search head using these steps:

a. Log in to the web server of the search head using the default credentials

admin/changeme.

b. Navigate to Settings > Indexer clustering.

c. Verify that the cluster is searchable in the Cluster searched list, as shown in

Figure 44.

Figure 44. Locate the cluster in the Cluster Searched list

7. Validate the indexing using these steps:

a. Upload data to the forwarder, as shown in Figure 45.

Figure 45. Uploading data to the forwarder

Note: Download Prices.csv.zip from the Splunk Tutorial.

b. Run a search from the search head, as shown in Figure 46.


Figure 46. Searching from the search head

8. Verify that the VMs are balanced among the seven ESXi servers.

Use case summary

In this use case, we implemented a seven-node VxRail cluster to deploy the Splunk multi-instance 1000 GB/day with > 90-day retention and indexer high availability solution, with one search head and one indexer cluster consisting of one master and five peer nodes. The implementation shows VxRail's flexibility and demonstrates that it is easy to deploy a Splunk indexer cluster across VxRail Appliances for high data availability.


Chapter 9 Validated Configurations for Splunk Enterprise

This chapter presents the following topics:

Splunk-validated sizing configurations

Scenario 1: One VxRail node for up to 50 GB/day with 90-day retention

Scenario 2: Four VxRail nodes for up to 500 GB/day (distributed) or up to 250 GB/day (clustered) with 90-day retention

Scenario 3: Seven VxRail nodes for up to 1 TB/day (distributed) with 90-day retention

Scenario 4: Seven VxRail nodes with Isilon for up to 1 TB/day (clustered) with 7-day retention for hot/warm buckets and configurable retention for cold buckets

Summary


Splunk-validated sizing configurations

Splunk validated the following configurations for Dell EMC to meet or exceed the

performance of Splunk’s documented reference hardware:

Scenario 1: One VxRail node for up to 50 GB/day with 90-day retention

Scenario 2: Four VxRail nodes for up to 500 GB/day (distributed) or up to 250

GB/day (clustered) with 90-day retention

Scenario 3: Seven VxRail nodes for up to 1 TB/day (distributed) with 90-day

retention

Scenario 4: Seven VxRail nodes with Isilon for up to 1 TB/day with 7-day retention

for hot/warm buckets and configurable retention for cold buckets

These configurations represent typical uses in the current marketplace.

Chapter 2 lists the attributes of the VxRail Appliance. Table 18 describes the physical

characteristics of the VxRail All-Flash Appliances tested by Splunk in the four scenarios.

For the different scenarios, we use the VxRail E460F Appliance with different memory and

disk group configurations to provide a cost-optimized, highly available infrastructure

solution. Keep in mind that with the FTT=1 policy setting for every VM, the net usable

capacity per VM is half of the raw capacity.

Table 18. VxRail Appliance specifications: All-flash nodes

VxRail E460F:

Processor cores (per node): 40
Processors (per node): 2 x Intel Xeon Processor E5-2698 v4 @ 2.20 GHz
Memory/RAM (per node): 384 GB (24 x 16 GB) or 512 GB (16 x 32 GB)
Caching SSD (per node): 800 GB per disk group (1 or 2 disk groups)
Storage, raw (per node): 5.235 TB (3 x 1.92 TB SSD) or 20.94 TB (6 x 3.84 TB SSD)
Minimum nodes per cluster: 3
Maximum nodes per cluster: 64
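The capacity figures in these scenarios follow from two simple rules: vSAN FTT=1 mirroring halves raw capacity, and index storage grows with ingest rate, retention period, and replication factor. The sketch below illustrates the arithmetic; the ~50% compression ratio is a common Splunk sizing rule of thumb rather than a figure from this guide, and the guide's published numbers additionally subtract vSAN overhead and reserved free space.

```python
def usable_capacity_tb(raw_tb, ftt=1):
    """Net usable vSAN capacity: RAID-1 mirroring keeps ftt + 1 copies of each object."""
    return raw_tb / (ftt + 1)

def index_storage_tb(daily_ingest_gb, retention_days,
                     compression=0.5, replication_factor=1):
    """Rule-of-thumb index storage: ingest x retention x compression,
    multiplied by the replication factor in a clustered deployment."""
    return daily_ingest_gb * retention_days * compression * replication_factor / 1000.0

# A 6 x 3.84 TB node (20.94 TB raw) nets 10.47 TB under FTT=1; the guide's
# 10.1 TB figure additionally subtracts vSAN overhead and reserved slack.
print(usable_capacity_tb(20.94))                         # 10.47

# Distributed: 1 TB/day for 90 days at ~50% compression -> 45 TB of index storage.
print(index_storage_tb(1000, 90))                        # 45.0

# Clustered with replication factor 2: the same data needs twice the storage,
# which is why scenario 2 halves the clustered ingest rating (500 -> 250 GB/day).
print(index_storage_tb(250, 90, replication_factor=2))   # 22.5
```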

Splunk implemented these best practices in designing the configurations that were used in the four scenarios:

When hyper-threading is enabled, allocate vCPUs equivalent to the number of physical cores (for example, for 32 physical cores, allocate 64 vCPUs).

Splunk Enterprise is resource-intensive. For best performance, do not overcommit vCPU or memory for Splunk instances.

VMware requires resources for the management VMs. These resources have been taken into consideration for all sizing recommendations.


Splunk Enterprise deployments that are distributed or clustered require a Cluster

Master to manage the indexing tier. This instance is referred to as an Admin Server.

The Admin Server can also be used as a License Master for Splunk Enterprise.

Scenario 1: One VxRail node for up to 50 GB/day with 90-day retention

This scenario describes the configuration of a Splunk Enterprise single instance

deployment on one VxRail node that can index up to 50 GB/day data with 90-day

retention.

In this scenario, the customer already has an existing VxRail cluster. One new VxRail

node is added to the cluster to provide the dedicated resource for Splunk Enterprise single

instance deployment. To satisfy the vSAN storage policy with failures to tolerate (FTT) of

1 and failure tolerance method of RAID-1 (mirroring), the existing VxRail cluster should

have enough storage capacity available to create the mirror copy on another node. To

tolerate the failure of this node, the existing VxRail cluster should have enough spare

resources (compute and memory) to fail over the Splunk VM to another node. Table 19

and Table 20 show the details of the hardware configuration and deployment

configuration.

Table 19. Hardware configuration in scenario 1

VxRail model: VxRail E460F
Specification: 40 x 2.2 GHz cores; 384 GB (24 x 16 GB) RAM; 1 disk group with 800 GB cache SSD; 5.235 TB (3 x 1.92 TB SSD) raw capacity
Number of nodes required: 1
Storage required: 3.3 TB** (includes space for OS and 20% reserved for free space)

**Note: An additional 3.3 TB of raw storage capacity is required on an existing node in the cluster for the mirror copy of data required by the vSAN FTT=1 (mirroring) policy.

Table 20. Deployment configuration of single-instance deployment in scenario 1

Instance role                                        Quantity  Physical cores/vCPUs  Memory  OS storage  Indexer storage
Single instance (combined search head and indexer)   1         32/64                 256 GB  300 GB      3 TB


Scenario 2: Four VxRail nodes for up to 500 GB/day (distributed) or up to 250 GB/day (clustered) with 90-day retention

This scenario describes the two configurations of Splunk Enterprise distributed

deployment and clustered indexer deployment on four VxRail nodes.

The distributed deployment can index up to 500 GB/day of data with 90-day retention. The clustered indexer deployment can index up to 250 GB/day of data with 90-day retention, with a replication factor of 2 and a search factor of 2.

Table 21 and Table 22 show the details of the hardware configuration and deployment

configuration.

Table 21. Hardware configuration in scenario 2

VxRail model: VxRail E460F
Specification (per node): 40 x 2.2 GHz cores; 512 GB (16 x 32 GB) RAM; 2 disk groups, each with an 800 GB cache SSD; 20.94 TB (6 x 3.84 TB SSD) raw capacity; 10.1 TB effective usable capacity
Number of nodes required: 4
VxRail cluster: 83.8 TB raw capacity; 40.3 TB effective usable capacity
Storage required: 27.8 TB (includes space for OS and 20% reserved for free space)

Note: The net effective usable capacity of the VxRail node and cluster is approximately half of the raw capacity. This is due to the vSAN FTT=1 policy setting applied to each VM.

Table 22. Deployment configuration of distributed deployment and clustered indexer deployment in scenario 2

Instance role   Quantity  Physical cores/vCPUs  Memory  OS storage  Indexer storage
Search Head     1         32/64                 256 GB  300 GB      0
Indexer         2         32/64                 256 GB  300 GB      13.9 TB
Admin Server    1         20/40                 256 GB  150 GB      0


Scenario 3: Seven VxRail nodes for up to 1 TB/day (distributed) with 90-day retention

This scenario describes the configuration of Splunk Enterprise distributed deployment on

seven VxRail nodes that can index up to 1 TB/day of data with 90-day retention.

The details of the hardware configuration and deployment configuration are shown in

Table 23 and Table 24.

Table 23. Hardware configuration in scenario 3

VxRail model: VxRail E460F
Specification (per node): 40 x 2.2 GHz cores; 384 GB (24 x 16 GB) RAM; 2 disk groups, each with an 800 GB cache SSD; 20.94 TB (6 x 3.84 TB SSD) raw capacity; 10.1 TB effective usable capacity
Number of nodes required: 7
VxRail cluster: 146.6 TB raw capacity; 70.7 TB effective usable capacity
Storage required: 56.4 TB (includes space for OS and 20% reserved for free space)

Note: The net effective usable capacity of the VxRail node and cluster is approximately half of the raw capacity. This is due to the vSAN FTT=1 policy setting applied to each VM.


Table 24. Deployment configuration of distributed deployment in scenario 3

Instance role   Quantity  Physical cores/vCPUs  Memory  OS storage  Indexer storage
Search Head     1         32/64                 256 GB  300 GB      0
Indexer         5         32/64                 256 GB  300 GB      10.8 TB
Admin Server    1         20/40                 256 GB  150 GB      0

Scenario 4: Seven VxRail nodes with Isilon for up to 1 TB/day (clustered) with 7-day retention for hot/warm buckets and configurable retention for cold buckets

This scenario describes the configuration of Splunk Enterprise clustered indexer

deployment on seven VxRail nodes with Isilon. This configuration is capable of indexing

up to 1 TB per day of data with 7-day retention for hot/warm buckets and configurable retention for cold buckets, with a replication factor of 2 and a search factor of 2.

Table 25 and Table 26 show the details of the hardware configuration and deployment

configuration.

For configuration guidance about Isilon Scale-Out storage, refer to the EMC Isilon Scale-

Out storage and VMware vSphere Sizing Guide.

Table 25. Hardware configuration of scenario 4

VxRail model: VxRail E460F
Specification (per node): 40 x 2.2 GHz cores; 384 GB (24 x 16 GB) RAM; 1 disk group with an 800 GB cache SSD; 5.235 TB (3 x 1.92 TB SSD) raw capacity; 2.3 TB effective usable capacity
Number of nodes required: 7
VxRail cluster: 36.6 TB raw capacity; 16.1 TB effective usable capacity
Storage required: 12.5 TB (includes space for OS and 20% reserved for free space)

Note: The net effective usable capacity of the VxRail node and cluster is approximately half of the raw capacity. This is due to the vSAN FTT=1 policy setting applied to each VM.


Table 26. Clustered indexer deployment configuration of scenario 4

Instance role   Quantity  Physical cores/vCPUs  Memory  OS storage  Indexer storage
Search Head     1         32/64                 256 GB  300 GB      0
Indexer         5         32/64                 256 GB  300 GB      2.1 TB
Admin Server    1         20/40                 256 GB  150 GB      0

Summary

The configuration flexibility of Splunk Enterprise software, together with the modular scale-out features of the VxRail platform, provides an integrated technology solution for analyzing machine-generated big data across a wide range of data ingestion rates and customer use case scenarios. The depth of the partnership between Splunk and Dell EMC has produced a set of jointly tested and validated systems that customers can implement with confidence. These systems meet current needs and scale flexibly when the need arises.


Chapter 10 Conclusion

This chapter presents the following topics:

Summary

Findings

Conclusion


Summary

All businesses must be able to increase their analytics capability in order to lower operational expense and improve customer experiences, and most enterprises cannot afford to risk success on homegrown solutions. Splunk, in partnership with Dell EMC, offers a documented set of proven solutions, with detailed deployment and implementation guidance, that operate and scale to customer needs ranging from small businesses and departmental deployments up to full-scale medium-enterprise deployments. This approach provides a low-risk, fast time-to-value, fully supported option for machine-generated data analytics.

Findings

The ongoing partnership between Splunk and Dell EMC makes investing in new or expanded machine data analytics less risky and more cost-effective for businesses of all sizes. The Splunk validated system configurations for Dell EMC VxRail described in this document, together with the detailed configuration and implementation guidance, give prospective customers the information they need to match the right investment in equipment and people skills, and to commit confidently to a wide range of use case goals. Chapter 11 provides resources for additional background research.

Conclusion

With the explosive growth of IT data center technologies, the scope of IT challenges continues to become more disparate and complex. Big Data analytics, specifically the analysis of machine data, can help businesses of all sizes drive critical decisions, reduce costs, and maximize operational efficiency to overcome these challenges.

The flexible design of the Splunk Enterprise platform and the Dell EMC VxRail platform provides end-to-end visibility that serves many different use cases and scaling needs. The solutions described in this document are widely applicable and cost-effective, and they provide varied implementation and support options for machine-generated Big Data analytics.


Chapter 11 References

This chapter presents the following topics:

Dell EMC documentation

VMware documentation

Splunk Enterprise documentation


Dell EMC documentation

The following documentation on EMC.com or EMC Online Support provides additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your Dell EMC representative.

EMC Isilon Scale-Out Storage and VMware vSphere Sizing Guide

Dell EMC VxRail Network Guide

VMware documentation

The following documentation on the VMware website provides additional and relevant information:

VMware Virtual SAN 6.0 Performance

Performance Best Practices for VMware vSphere 6.0

Splunk Enterprise documentation

The following documentation on the Splunk documentation website provides additional and relevant information:

Splunk Installation Manual

Splunk Capacity Planning Manual

Transparent huge memory pages and Splunk performance

Managing Indexers and Clusters of Indexers

Distributed Search

Forwarding Data


Appendix A VxRail Appliance Scalability

This appendix presents the following topics:

Overview

Test scenario

Test methodology

Test results

Summary


Overview

The Dell EMC hyper-converged infrastructure VxRail Appliance is a clustered node architecture that brings together compute, storage, and virtualization in modular building blocks that scale linearly. Based on VMware vSphere and Virtual SAN software, the VxRail Appliance allows you to start small and grow, scaling capacity and performance easily and non-disruptively. This appendix demonstrates how VMware Virtual SAN performs and scales on a VxRail Appliance cluster.

Test scenario

Iometer is an open-source benchmark tool that measures the I/O operations per second (IOPS) delivered by single and clustered systems. To validate the linear scalability of VMware Virtual SAN on a VxRail Appliance, we used Iometer to run five I/O workloads that represent five typical real-world use cases:

1. All Read Workload (4 KB): Each Iometer thread performs random read access with a 4 KB I/O size across the entire volume. This workload determines the maximum random read IOPS that a storage solution can deliver.

2. Mixed Read/Write Workload (4 KB): Each Iometer thread performs mixed read/write access with a 70 percent/30 percent ratio. All accesses are random with a 4 KB I/O size. This workload represents a real-world commercial application with a small I/O size.

3. Mixed Read/Write Workload (32 KB): Each Iometer thread performs mixed read/write access with a 70 percent/30 percent ratio. All accesses are random with a 32 KB I/O size. This workload represents a real-world commercial application with a large I/O size.

4. Sequential Read Workload: Each Iometer worker thread performs sequential read access with a 256 KB I/O size. This workload represents streaming read operations, such as reading a video stream from a storage solution.

5. Sequential Write Workload: Each Iometer worker thread performs sequential write access with a 256 KB I/O size. This workload represents streaming write operations, such as copying bulk data to a storage solution.
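The five profiles above, together with their outstanding-I/O settings from Table 29, can be captured in a small data structure, for example to drive a scripted benchmark harness. This is an illustrative sketch, not the Iometer configuration files used in the validated test.

```python
# Iometer-style workload profiles from the test scenario (illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkloadProfile:
    io_size_kb: int      # I/O size in KB
    read_pct: int        # read percentage; the remainder is writes
    sequential: bool     # False = 100% random access
    outstanding_io: int  # outstanding I/Os (queue depth) per worker

PROFILES = {
    "all_read_4k":    WorkloadProfile(4,   100, False, 16),
    "mixed_4k":       WorkloadProfile(4,    70, False, 4),
    "mixed_32k":      WorkloadProfile(32,   70, False, 2),
    "seq_read_256k":  WorkloadProfile(256, 100, True,  8),
    "seq_write_256k": WorkloadProfile(256,   0, True,  8),
}

# Per Table 30: sequential workloads report throughput (MB/s),
# random workloads report IOPS (and latency).
def primary_metric(name: str) -> str:
    return "MB/s" if PROFILES[name].sequential else "IOPS"
```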

Test methodology

In this test, we ran the Iometer workloads on a VxRail Appliance hybrid-configuration cluster to show the linear scalability of the VxRail cluster. (The hybrid configuration does not reflect the higher-performing all-flash VxRail configurations used in the scenarios in this guide.) The cluster was scaled from four nodes to six nodes, and then to eight nodes. We created one virtual machine per node, with 10 VMDKs and 10 Iometer workers per virtual machine. Each VM was configured with three PVSCSI controllers: one for the OS disk and the other two shared equally by the data disks. One Iometer I/O worker thread handled each VMDK independently, and all tests were run only after every VMDK had been written to at least once, to prevent zeroed returns on reads.
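For reference, the layout above implies the following totals at each cluster size; this is a simple arithmetic sketch of the methodology, not additional test data.

```python
# Totals implied by the test layout: 1 VM per node, 10 VMDKs per VM,
# and 1 Iometer worker thread per VMDK.
VMS_PER_NODE = 1
VMDKS_PER_VM = 10
WORKERS_PER_VMDK = 1

def cluster_totals(nodes):
    vms = nodes * VMS_PER_NODE
    vmdks = vms * VMDKS_PER_VM
    workers = vmdks * WORKERS_PER_VMDK
    return {"nodes": nodes, "vms": vms, "vmdks": vmdks, "workers": workers}

for n in (4, 6, 8):  # the three cluster sizes tested
    print(cluster_totals(n))
```

So the offered load grows in step with the cluster: 40 worker threads at four nodes, 60 at six, and 80 at eight.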

Table 27 shows the hardware configuration of each node for the VxRail Appliance hybrid configuration in this test.


Table 27. VxRail Appliance hybrid node configuration

CPU: 2 x Intel Xeon Processor E5-2660 v3, 2.6 GHz
CPU cores: 20 Hyper-Threaded (HT) cores
RAM: 256 GB
Caching SSD: 1 x 186.31 GB
Raw storage HDDs: 5 x 1.08 TB
Network: 2 x 10 GbE SFP+

Table 28 shows the virtual machine configuration used in this test.

Table 28. Virtual machine configuration

vCPUs: 4
RAM: 4 GB
Data storage: 10 x 9 GB eager-zeroed-thick VMDKs
OS: RHEL 6.7
Iometer version: 1.1.0

Table 29 shows the Iometer workload profiles used in this test.

Table 29. Iometer workload profiles

Workload profile   I/O size   Read/write ratio      Random/sequential ratio   Outstanding I/O
All Read 4 KB      4 KB       100% read             100% random               16
Mixed 4 KB         4 KB       70% read, 30% write   100% random               4
Mixed 32 KB        32 KB      70% read, 30% write   100% random               2
Sequential Read    256 KB     100% read             100% sequential           8
Sequential Write   256 KB     100% write            100% sequential           8

Table 30 shows the performance metrics collected in this test.

Table 30. Performance metrics

Workload profile   Metrics
All Read 4 KB      IOPS, latency
Mixed 4 KB         IOPS, latency
Mixed 32 KB        IOPS, latency
Sequential Read    Throughput (MB/second)
Sequential Write   Throughput (MB/second)


Test results

Figure 47, Figure 48, and Figure 49 show the vSAN scalability test results for IOPS and

latency for the All Read 4 KB workload, Mixed Read/Write 4 KB workload, and Mixed

Read/Write 32 KB workload. Figure 50 shows the scalability test result for throughput of

the sequential Read/Write 256 KB workloads. The testing results show that vSAN is close-

to-linear scalability.
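One way to read "close to linear" quantitatively is to compute a scaling efficiency against the four-node baseline: a value near 1.0 at each cluster size indicates linear scaling. The function below is a generic sketch; the numbers in the example are hypothetical placeholders, not measurements from Figures 47 through 50.

```python
# Scaling efficiency relative to a baseline cluster size. An efficiency
# of 1.0 means throughput grew exactly in proportion to node count.
def scaling_efficiency(baseline_nodes, baseline_iops, nodes, iops):
    expected = baseline_iops * (nodes / baseline_nodes)
    return iops / expected

# Hypothetical example (placeholder numbers, not test data): a cluster
# that delivers 100,000 IOPS at 4 nodes and 190,000 IOPS at 8 nodes
# achieves 95% of perfectly linear scaling.
print(scaling_efficiency(4, 100_000, 8, 190_000))
```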

Figure 47. Iometer All Read 4 KB test results

Figure 48. Iometer Mixed Read/Write 4 KB test results


Figure 49. Iometer Mixed Read/Write 32 KB test results

Figure 50. Iometer Sequential Read/Write 256 KB test results

Summary

This test shows that VMware Virtual SAN on a VxRail Appliance hybrid configuration scales close to linearly, giving vSAN users the flexibility to start with a small cluster and increase the size of the cluster when required.