SAP HANA – An In-Memory Computing Engine

Uploaded by roys-palnati-s on 27-Jan-2016.

Page 1: Sap Hana

SAP HANA – An In-Memory Computing Engine

Prepared by Nilesh Ahir

Page 2: Sap Hana

Author Bio

Nilesh Ahir completed his master's in Software Systems from BITS Pilani. He has a total of 8 years of SAP experience. He is an IBM Certified IT Specialist and has been working as an SAP NW BI Package Solution Consultant for IBM India for the last 4+ years. Prior to this he worked with Intel India. He has experience in ABAP, BW 3.5 / BI 7.0 and data mining, and has worked on non-SAP technologies such as NLS, TIBCO and Web Services.

Page 3: Sap Hana

Document Control

Version: 1.0
Change Summary: Initial Draft
Authored On: 30 Sept 2012
Authored By: Nilesh Ahir

Page 4: Sap Hana

Table of Contents

Introduction ........................................... 5
In-Memory computing .................................... 6
Evolution of SAP HANA .................................. 6
Hardware Technology Innovations ........................ 7
    Storage Price Trend ................................ 8
    CPU Capacity Trend ................................. 8
Software Technology Innovations ........................ 10
    Row and Column Store ............................... 11
    Choosing Row or Column Store ....................... 12
    Compression Techniques ............................. 13
    Choosing Compression Technique ..................... 15
SAP HANA Combines Software and Hardware ................ 15
SAP HANA: Landscape .................................... 17
Access and Read Times for Disk and Main Memory ......... 19
HANA Architecture ...................................... 23
    ICE ................................................ 27
    Data loads into HANA ............................... 29
    Data Modeling with HANA ............................ 32
    HANA Studio ........................................ 33
    Reporting with HANA ................................ 34
    HANA Data Base Administration ...................... 35
HANA Sizing ............................................ 36
SAP BW on HANA (Sizing) ................................ 37
SAP BI Road Map with HANA .............................. 40
Future with SAP HANA ................................... 46
Related Content ........................................ 49

Page 5: Sap Hana

Introduction

Why do we require HANA?

We live in a competitive world, and organizations are using the information available to them to stay ahead of the race. Hence they make use of information generated from a wide variety of sources, both traditional and non-traditional.

Traditional sources are systems like ERP used for running the business, whereas non-traditional sources are e-mails, social networking sites, etc. However, information is growing exponentially. The cascading effect is a dramatic fall in performance, because the OLAP technologies in use for the last few decades cannot handle this huge and growing amount of structured and unstructured data. One day the existing technologies will fail to handle and process the growing data; this is called the Information Explosion.

There are three main factors used to study this concept: Volume, Velocity and Variety.

Variety: To stay ahead of the race, companies have to make effective use of information from traditional and non-traditional sources. Almost 80% of data will be unstructured.

Velocity: The rate at which data is growing; according to IDC, the world's digital content will double every 18 months.

Volume: The total volume or mass of data; as per 'The Economist', we created 150 exabytes of data in 2005, versus 1,200 exabytes in 2011.

The solution to this problem is in-memory computing, which can process huge amounts of data for analysis in a very short time. In-memory computing makes an OLAP system so fast that reports churning millions of records can be generated in a few seconds. It also makes OLAP analysis real-time, a long-standing dream of business that has now come true. SAP HANA is based on this technology.

Page 6: Sap Hana

In-Memory computing

In-memory computing is a technology that processes massive quantities of real-time data in the main memory of the server to provide immediate results from analyses and transactions.

Evolution of SAP HANA

SAP HANA is a new-generation product from SAP that will change the data warehousing world. HANA stands for High-Performance Analytic Appliance and is built on in-memory computing. Today this solution is feasible because of innovations in software and hardware technologies.

Page 7: Sap Hana

Hardware Technology Innovations

Page 8: Sap Hana

Storage Price Trend

The chart below clearly shows a drastic decrease in the price of storage over the last few decades. For example, the price of 1 MB of disk space dropped below US $0.01 in 2001, a rapid decrease compared to a cost of more than US $250 in 1970. Similarly, the price of 1 MB of main memory dropped to US $0.01 in 2010, whereas it was US $10,000 in 1980. Thus the cost/size ratio for disks as well as main memory has decreased exponentially.
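The quoted price points imply roughly constant annual decline rates, which a quick calculation confirms (the figures are the ones given above; treat the computed rates as rough averages):

```python
def annual_price_change(start_price, end_price, years):
    """Compound annual rate of change between two price points."""
    return (end_price / start_price) ** (1.0 / years) - 1.0

# Price points quoted in the text above (US $ per MB).
disk = annual_price_change(250.0, 0.01, 2001 - 1970)    # roughly -28% per year
ram = annual_price_change(10000.0, 0.01, 2010 - 1980)   # roughly -37% per year

print(f"Disk price change: {disk:+.1%} per year")
print(f"RAM price change:  {ram:+.1%} per year")
```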

CPU Capacity Trend

Moore’s Law states that the number of transistors on a single chip is doubled approximately every two years.

In reality, the performance of Central Processing Units (CPUs) doubles every 20 months on average. The achievement computer architects have managed is not only faster transistors, which yield increased clock speeds, but also an increased number of transistors per unit of chip area, which became cheaper due to efficient production methods and decreased material consumption. This leads to higher performance for roughly the same manufacturing cost. For example, in 1971 a processor consisted of 2,300 transistors, whereas in 2006 it consisted of about 1.7 billion transistors at approximately the same price. Not only does the increased number of transistors play a role in the performance gain, but also more efficient circuitry: a performance gain of up to a factor of two per core has been achieved from one generation to the next while the number of transistors remained constant.

Page 9: Sap Hana

In 2001, IBM introduced the first processor on one chip that was able to compute multiple threads at the same time independently. The IBM POWER4 was built for the high-end server market and was part of IBM's Regatta servers. In 2002, Intel introduced its proprietary hyper-threading technology, which optimizes processor utilization by providing thread-level parallelism on a single core. With hyper-threading, multiple logical processors with duplicated architectural state are created from a single physical processor. Several tasks can be executed virtually in parallel, thereby increasing processor utilization. Still, the tasks are not truly executed in parallel, because the execution resources are shared and only instructions of different tasks that are compatible regarding resource usage can be executed in a single processing step. Hyper-threading is applicable to single-core as well as multi-core processors.

Multi-core processors were introduced in 2005, starting with two cores on one chip, for example Advanced Micro Devices' (AMD) Athlon 64 X2. At its developer forum in autumn 2006, Intel presented a prototype for an 80-core processor, while IBM introduced the Cell Broadband Engine with ten cores in the same year. In 2008, Tilera introduced its TILE64, a multi-core processor for the high-end embedded-systems market that consists of 64 cores. 3Leaf offers a product based on the HyperTransport architecture [30] with 192 cores. In the future, higher numbers of cores are anticipated on a single chip: in 2008, Tilera predicted a chip with 4,096 cores by 2017 for the embedded-systems market, and Sun estimated that servers would feature 32 and up to 128 cores by 2018.

Page 10: Sap Hana

Software Technology Innovations

In the last few decades we have made remarkable progress in software technologies. This includes the effective use of row-store and column-store databases, new compression techniques for efficient use of storage space, and so on.

Page 11: Sap Hana

Row and Column Store

The record-based structure is the most common choice for data warehouse applications. In this structure, data is stored in

physical records using the common physical location of data values as the logical connection across all data points of the

individual record. For example, to find a customer address, the system must first locate the customer using an indexed

value such as customer number, then scan across the record to the position for address. As a result, the smallest

addressable unit of storage is an entire record, and all physical I/O functions will always move complete records or sets of

complete records.

The newer column structure has gained interest as the indexing and data transfer problems associated with record

structures have proved problematic for analytics applications. The column-based DBMS stores all of the values from one

column of a table in a contiguous data set. This allows the reading and/or writing of parts of records. It conserves I/O

bandwidth by transferring only the values that may be used in the query. Since most data warehouse applications use only

a few columns from a table during a typical single access, the resulting bandwidth savings can be substantial.

The picture below shows the way data is physically stored at the database level in the row and column stores.
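The two layouts can be illustrated with a toy example in plain Python (an illustration of the principle, not HANA's actual storage format): fetching one record is natural in the row layout, while aggregating one column only touches that column's contiguous values in the column layout.

```python
# The same table held in the two physical layouts described above.
# Row store: one record is one unit of storage.
row_store = [
    (1, "Marc", "Sales", 50000),
    (2, "Merry", "Marketing", 55000),
    (3, "Robert", "Sales", 48000),
]

# Column store: all values of one column form one contiguous data set.
column_store = {
    "id": [1, 2, 3],
    "name": ["Marc", "Merry", "Robert"],
    "dept": ["Sales", "Marketing", "Sales"],
    "salary": [50000, 55000, 48000],
}

# OLTP-style access: fetch an entire record -- natural in the row store.
record = row_store[1]

# OLAP-style access: aggregate a single column.  The row store scans
# every complete record; the column store reads only the salary values.
total_row = sum(r[3] for r in row_store)
total_col = sum(column_store["salary"])
assert total_row == total_col == 153000
```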

Page 12: Sap Hana

Choosing Row or Column Store

Row-store databases are widely used, generally for ERP systems where we are interested in the entire record rather than only a few elements of it, whereas column-store databases are effective when we want to select only a few columns and not the entire row. Thus the column store has the potential to boost the performance of systems that process huge amounts of data and involve some sort of aggregation, for example an OLAP system, where most reports are generated by projecting a few columns of an InfoProvider and aggregating with respect to some dimension.

The potential of both types of database is now known, and this knowledge is used to design new high-performance OLAP systems like SAP HANA. The picture below clearly shows that for the first type of query the row store is faster and more effective than the column store, while for aggregation queries, where only a few columns are projected, the column store is faster.

Page 13: Sap Hana

Compression Techniques

Run-length encoding (RLE) is useful for data with long runs of repeated values, which typically occur on sorted columns with a small number of distinct values. Consider the following sequence of values:

- {1,1,1,2,2,3,3,3,3,3,3,3,3,3,3,. . . }

By counting the number of repetitions, we can code such a sequence as

{value, numRepetitions} pairs. The sequence above could be represented as:

- {{1,3},{2,2},{3,10}}

Delta encoding involves storing the difference between two adjacent values. Delta coding is useful when we have a sequence of values where the differences between successive values can be coded in fewer bits than the values themselves.

Consider the sequence of values:

- {1200, 1400, 1700, 8000}

The differences between the values are:

- {200, 300, 6300}

To rebuild the sequence, however, we need to know the initial value. Therefore, we code the sequence as the initial value

followed by the set of differences between adjacent values:

- {1200,{200,300,6300}}.
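Both encodings can be sketched in a few lines of Python, using the sequences from the text:

```python
def rle_encode(values):
    """Code a sequence as {value, numRepetitions} pairs."""
    pairs = []
    for v in values:
        if pairs and pairs[-1][0] == v:
            pairs[-1][1] += 1          # extend the current run
        else:
            pairs.append([v, 1])       # start a new run
    return [tuple(p) for p in pairs]

def delta_encode(values):
    """Initial value followed by differences between adjacent values."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    return (values[0], diffs)

def delta_decode(first, diffs):
    out = [first]
    for d in diffs:
        out.append(out[-1] + d)
    return out

assert rle_encode([1, 1, 1, 2, 2] + [3] * 10) == [(1, 3), (2, 2), (3, 10)]
assert delta_encode([1200, 1400, 1700, 8000]) == (1200, [200, 300, 6300])
assert delta_decode(1200, [200, 300, 6300]) == [1200, 1400, 1700, 8000]
```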

Dictionary encoding involves replacing large, frequently occurring values with small integers.

Example

Original data

Employee Department

Marc Sales

Merry Marketing

Robert Sales

Suzane Sales

Suresh Marketing

Ganesh Administration

: :

Page 14: Sap Hana

Encoding for Department

Department Encoded Value

Sales 1

Marketing 2

Administration 3

Data after implementing the Dictionary Encoding technique for department

Employee Department

Marc 1

Merry 2

Robert 1

Suzane 1

Suresh 2

Ganesh 3

: :

The compression ratio will depend on the size of the data field and its cardinality.
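The department example above can be reproduced in a few lines: the encoded column stores only small integers, while the dictionary holds each distinct string once (a sketch of the idea, not HANA's dictionary format):

```python
def dictionary_encode(column):
    """Replace each distinct value with a small integer key."""
    dictionary = {}            # value -> encoded integer
    encoded = []
    for value in column:
        if value not in dictionary:
            dictionary[value] = len(dictionary) + 1
        encoded.append(dictionary[value])
    return dictionary, encoded

departments = ["Sales", "Marketing", "Sales", "Sales",
               "Marketing", "Administration"]
dictionary, encoded = dictionary_encode(departments)

assert dictionary == {"Sales": 1, "Marketing": 2, "Administration": 3}
assert encoded == [1, 2, 1, 1, 2, 3]
```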

Null suppression works by suppressing leading zeros in the data. It is mostly used when we have a random sequence of small integers.

LZO (Lempel-Ziv-Oberhumer) is a modification of the original Lempel-Ziv ("LZ77") dictionary coding algorithm.

Page 15: Sap Hana

Choosing Compression Technique

SAP HANA Combines Software and Hardware

SAP HANA is a flexible, multipurpose, data-source-agnostic in-memory appliance that combines SAP software components optimized on hardware provided and delivered by SAP's leading hardware partners. SAP HANA enables organizations to analyze business operations based on large amounts of detailed information in real time. Individuals can create very flexible analytical models without affecting backend enterprise applications or databases.

SAP HANA allows:

• Accelerated BI scenarios on any data source
• Better operational planning, simulation and forecasting
• Fast analysis and better decision making on accelerated SAP ERP transactional data
• Better storage, search and ad-hoc analysis of very large data volumes

Page 16: Sap Hana
Page 17: Sap Hana

SAP HANA: Landscape

SAP BW will use the SAP HANA in-memory database, and data loads will happen with the help of Data Services or the Sybase Replication Server. For SAP source systems other than SAP BW, the Sybase Replication Server is recommended, whereas for SAP BW and other 3rd-party systems, Data Services is recommended for data extraction.

Page 18: Sap Hana

With an in-memory database (IMDB), it is possible to churn millions of records in a few seconds because of the tight coupling within the IMDB. Today's applications run on an application server with an underlying database holding the data.

Thus, whenever a query is fired, the request goes to the database and a small chunk of relevant data is transferred to the main memory of the application server, where it is processed as per the logic written in the program. Almost 80-85% of the time is consumed in fetching data from the database. This bottleneck is removed in an IMDB.

Page 19: Sap Hana

Access and Read Times for Disk and Main Memory

Using the information given in the chart above, reading 1 MB of data sequentially with an IMDB takes about 250,100 ns, whereas without an IMDB it takes about 35,250,100 ns. Thus, with an IMDB, performance may increase by roughly 140 times.
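The two totals can be reproduced from commonly quoted latency figures (these per-step numbers are an assumption on my part, since the chart itself is not reproduced in this transcript):

```python
# Commonly quoted access/read latencies in nanoseconds (assumed here;
# the chart the text refers to is not reproduced in this transcript).
MEM_ACCESS = 100             # main-memory access
MEM_READ_1MB = 250_000       # read 1 MB sequentially from memory
DISK_SEEK = 5_000_000        # disk seek
DISK_READ_1MB = 30_000_000   # read 1 MB sequentially from disk

with_imdb = MEM_ACCESS + MEM_READ_1MB                # 250,100 ns
without_imdb = DISK_SEEK + DISK_READ_1MB + with_imdb # 35,250,100 ns

print(f"speed-up: ~{without_imdb / with_imdb:.0f}x")
```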

Page 20: Sap Hana
Page 21: Sap Hana
Page 22: Sap Hana

HANA Architecture

1. Connection and Session Management : This component is responsible for creating and managing sessions and

connections for the database clients. Once a session is established, clients can communicate with the SAP HANA database

using SQL statements.

2. Request Processing And Execution Control : The client requests are analyzed and executed by the set of components

in this block. The Request Parser analyses the client request and dispatches it to the responsible component. The

Execution Layer acts as the controller that invokes the different engines and routes intermediate results to the next

execution step.

Transaction Control statements are forwarded to the Transaction Manager, Data Definition statements are dispatched to

the Metadata Manager and Data Manipulation statements are forwarded to the Optimizer which creates an Optimized

Execution Plan that is subsequently forwarded to the execution layer.

- SQL Parser: It checks the syntax and semantics of the client SQL statements and generates the Logical

Execution Plan. Standard SQL statements are processed directly by the database engine.

- SQLScript: The SAP HANA database has its own scripting language named SQLScript that is designed to

enable optimizations and parallelization. SQLScript is a collection of extensions to SQL. SQLScript is based on

side effect free functions that operate on tables using SQL queries for set processing. The motivation for

SQLScript is to offload data-intensive application logic into the database.

Page 23: Sap Hana

- Multidimensional Expressions (MDX): It is a language for querying and manipulating the multidimensional

data stored in OLAP cubes.

- Planning Engine: It allows financial planning applications to execute basic planning operations in the database

layer. One such basic operation is to create a ‘new version of a dataset’ as a copy of an existing one while

applying filters and transformations.

- Calculation Engine : The SAP HANA database features such as SQLScript and Planning operations are

implemented using a common infrastructure called the Calculation Engine. The SQLScript, MDX, Planning

Model and Domain-Specific models are converted into Calculation Models. The Calc Engine creates Logical

Execution Plan for Calculation Models. The Calculation Engine will break up a model, for example some SQL

Script, into operations that can be processed in parallel. The engine also executes the user defined functions.

3. Transaction Manager: It coordinates database transactions, controls transactional isolation and keeps track of

running and closed transactions. When a transaction is committed or rolled back, the transaction manager informs

the involved engines about this event so they can execute necessary actions. The transaction manager also

cooperates with the persistence layer to achieve atomic and durable transactions.

4. Metadata Manager: Metadata can be accessed via the Metadata Manager. The SAP HANA database metadata

comprises a variety of objects, such as definitions of relational tables, columns, views, and indexes, definitions

of SQLScript functions and object store metadata. Metadata is stored in tables in row store. The SAP HANA

database features, such as transaction support and multi-version concurrency control, are also used for metadata

management.

5. Authorization Manager: It checks whether the user has the required privileges to execute the requested

operations. SAP HANA allows granting of privileges to users or roles. A privilege grants the right to perform a

specified operation (such as create, update, select or execute) on a specified object (for example a table, view or SQLScript function). The SAP HANA database supports Analytic Privileges that represent filters or

hierarchy drilldown limitations for analytic queries. Analytic privileges grant access to values with a certain

combination of dimension attributes. This is used to restrict access to a cube with some values of the dimensional

attributes.

6. Database Optimizer: It gets the Logical Execution Plan from the SQL Parser or the Calculation Engine as input

and generates the optimised Physical Execution Plan based on the database Statistics. The database optimizer will

determine the best plan for accessing row or column stores.

7. Database Executor : It executes the Physical Execution Plan to access the row and column stores and also

process all the intermediate results.

Page 24: Sap Hana

8. Row Store: It is the SAP HANA database row-based in-memory relational data engine. It is optimized for high write performance and is interfaced from the calculation/execution layer. Optimized write and read operations are possible due to storage separation, i.e. Transactional Version Memory and the Persisted Segment.

a. Transactional Version Memory contains temporary versions i.e. Recent versions of changed records.

This is required for Multi-Version Concurrency Control (MVCC). Write Operations mainly go into

Transactional Version Memory.

b. Persisted Segment contains data that may be seen by any ongoing active transaction, i.e. data that was committed before any active transaction was started.

c. Version Memory Consolidation moves the recent version of changed records from Transaction Version

Memory to Persisted Segment based on Commit ID. It also clears outdated record versions from

Transactional Version Memory. It can be considered as garbage collector for MVCC.

d. Segments contain the actual data (the content of row-store tables) in pages. Row-store tables are linked lists of memory pages. Pages are grouped in segments; the typical page size is 16 KB.

e. Page Manager is responsible for Memory allocation. It also keeps track of free/used pages.
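A very rough sketch of the write path in points a-c (illustrative only, nothing like HANA's actual implementation): writes land in transactional version memory, and consolidation moves versions committed before every active transaction into the persisted segment.

```python
# Illustrative-only sketch of the row-store storage separation above.
class RowStore:
    def __init__(self):
        self.persisted = {}   # persisted segment: key -> value
        self.versions = []    # transactional version memory:
                              #   (commit_id, key, value)

    def write(self, commit_id, key, value):
        # Write operations go into transactional version memory.
        self.versions.append((commit_id, key, value))

    def consolidate(self, min_active_commit_id):
        # Version memory consolidation: move versions committed before
        # every active transaction into the persisted segment, and clear
        # them from version memory (the "garbage collector" for MVCC).
        keep = []
        for commit_id, key, value in self.versions:
            if commit_id < min_active_commit_id:
                self.persisted[key] = value   # newest wins (list order)
            else:
                keep.append((commit_id, key, value))
        self.versions = keep

store = RowStore()
store.write(10, "k1", "a")
store.write(11, "k1", "b")
store.write(12, "k2", "c")
store.consolidate(min_active_commit_id=12)
assert store.persisted == {"k1": "b"}
assert store.versions == [(12, "k2", "c")]
```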

9. Column Store: It is the SAP HANA database column-based in-memory relational data engine. Parts of it originate from TREX (Text Retrieval and Extraction), i.e. SAP NetWeaver Search and Classification. For the SAP HANA database, this proven technology was further developed into a full relational column-based data store. It provides efficient data compression, is optimized for high read performance and is interfaced from the calculation/execution layer. Optimized read and write operations are possible due to storage separation, i.e. Main and Delta storage.

Page 25: Sap Hana

a. Main Storage contains the compressed data in memory for fast read.

b. Delta Storage is meant for fast write operation. The update is performed by inserting a new entry into the

delta storage.

c. Delta Merge is an asynchronous process that moves changes in the delta storage into the compressed, read-optimized main storage. Even during the merge operation, the columnar table is still available for read and write operations. To fulfil this requirement, a second delta and main storage are used internally.

d. During a read operation, data is always read from both the main and delta storages and the result sets are merged. The engine uses multi-version concurrency control (MVCC) to ensure consistent read operations.
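The main/delta separation in points a-d can be sketched as follows (again purely illustrative; sorting stands in for the compressed, read-optimized representation):

```python
# Illustrative sketch of one column with main and delta storage.
class Column:
    def __init__(self):
        self.main = []    # read-optimized storage (here: kept sorted)
        self.delta = []   # write-optimized storage: appends only

    def insert(self, value):
        # Updates/inserts are performed by appending to the delta storage.
        self.delta.append(value)

    def read(self):
        # Reads always merge the result from main and delta storage.
        return sorted(self.main + self.delta)

    def delta_merge(self):
        # Asynchronous in HANA; here a simple synchronous rebuild of main.
        self.main = sorted(self.main + self.delta)
        self.delta = []

col = Column()
for v in [7, 3, 9]:
    col.insert(v)
assert col.read() == [3, 7, 9]
col.delta_merge()
col.insert(1)
assert col.main == [3, 7, 9] and col.delta == [1]
assert col.read() == [1, 3, 7, 9]
```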

10. Persistence Layer: It is responsible for durability and atomicity of transactions. It ensures that the database is

restored to the most recent committed state after a restart and that transactions are either completely executed or

completely undone. To achieve this goal in an efficient way the persistence layer uses a combination of write-

ahead logs, shadow paging and savepoints. The persistence layer offers interfaces for writing and reading data. It

also contains SAP HANA's logger that manages the transaction log. Log entries can be written implicitly by the

persistence layer when data is written via the persistence interface or explicitly by using a log interface.

Page 26: Sap Hana

ICE

The SAP in-memory computing engine (formerly Business Analytic Engine (BAE )) is the core ‘engine’ for SAP’s next

generation high-performance in-memory solutions. It leverages technologies such as in-memory computing, columnar

databases, massively parallel processing (MPP), and data compression, to allow organizations to instantly explore and

analyze large volumes of transactional and analytical data – from across the enterprise – in “real real-time”.

Page 27: Sap Hana

The SAP in-memory computing engine delivers the following capabilities:

• Single database with native support for row and columnar data stores, providing full ACID (atomicity, consistency,

isolation, durability) transactional capabilities.

• Powerful and flexible data calculation engine.

• SQL and MDX interfaces.

• Unified information modeling design environment.

• Data repository to persist views of business information

• Data integration capabilities for accessing SAP (BW, ERP, etc.) and non-SAP data sources.

• Integrated lifecycle management capabilities.

Page 28: Sap Hana

Data loads into HANA

To load data into the HANA database, we can use the Modeler component of HANA Studio to design and trigger the ETL jobs using Data Services, the Replication Server or SLT.

Page management in HANA takes care of disk-space allocation for the new data flowing in, and the Logger keeps track of logging and captures the relevant information.

For recovery purposes the disk storage is used: the log information backup is kept in the Log volumes, whereas the actual business data backup is stored in the Data volumes.

Trigger-Based Replication

The Trigger-Based Data Replication Using SAP Landscape Transformation (SLT) Replicator is based on capturing database changes at a high level of abstraction in the source ERP system. This method of replication has the benefit of being database-independent, and it can also parallelize database changes on multiple tables or by segmenting large table changes.

Page 29: Sap Hana

ETL-Based Replication

The Extraction-Transformation-Load (ETL) Based Data Replication uses SAP BusinessObjects Data Services to specify

and load the relevant business data in defined periods of time from an ERP system into the IMDB. You can reuse the ERP

application logic by reading extractors or utilizing SAP function modules. In addition, the ETL-based method offers

options for the integration of 3rd party data providers.

Log-Based Replication

The Transaction Log Based Data Replication Using Sybase Replication is based on capturing table changes from low-level database log files. This method is database-dependent.

Page 30: Sap Hana

Capability Matrix

Page 31: Sap Hana

Data Modeling with HANA

Page 32: Sap Hana

HANA Studio

Page 33: Sap Hana

Reporting with HANA

Page 34: Sap Hana

HANA Data Base Administration

Page 35: Sap Hana

HANA Sizing

RAM Sizing:

• The source data footprint is not the total size of the database holding the source data but the size of the data in the tables we would like to hold in the IMDB.
• We multiply by 2 because in an IMDB almost 50% of the main memory space is required for processing the data in programs.
• We divide by 5 because this is the average compression we can achieve using the compression techniques.

Disk Sizing:

• The disk size for the persistence layer used for backup and recovery will be four times the size of the RAM calculated above.
• The disk size for the logs will be equal to the size of the RAM.
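As described above, the rule of thumb amounts to RAM = (source data footprint × 2) / 5, persistence disk = 4 × RAM, and log disk = 1 × RAM. A worked example (the 1,000 GB footprint is just an illustration):

```python
def hana_sizing(source_footprint_gb):
    """Rule-of-thumb sizing as described in the text above."""
    ram = source_footprint_gb * 2 / 5   # x2 working space, /5 compression
    return {
        "ram_gb": ram,
        "persistence_disk_gb": 4 * ram,  # backup and recovery
        "log_disk_gb": ram,
    }

sizes = hana_sizing(1000)               # e.g. a 1,000 GB source footprint
assert sizes == {"ram_gb": 400.0, "persistence_disk_gb": 1600.0,
                 "log_disk_gb": 400.0}
```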

Page 36: Sap Hana

SAP BW on HANA (Sizing)

RAM Sizing

Formula 1:

• The initial 50 GB reduction can be ignored for business cases where terabytes of data will be stored and handled.
• We multiply by 2 for the additional memory space needed for program execution during data processing and churning.
• We divide by 4 because data will be stored in the IMDB in compressed form using various compression techniques.
• An additional 90 GB is required for the BW-specific data.

Formula 2:

• The column table footprint is the size of the uncompressed source data that you would like to store in the column store of the IMDB (for example, transaction data), whereas the row table footprint is the size of the uncompressed source data for the row store of the IMDB (for example, certain master data).
• The column-store data size will be huge, and data churning and reporting will happen mainly on this data set; therefore we multiply by 2.
• For the column store a higher compression ratio is possible, i.e. up to 4:1; therefore we divide by 4.

Page 37: Sap Hana

• For the row store we can achieve only a 1.5:1 compression ratio; therefore we divide the row-store RAM-size component by 1.5.
• Also, an additional 50 GB is needed for BW-specific data.
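The two formulas, as I read the description above, can be written out as follows (the example footprints are illustrative; consult the official SAP sizing notes for the authoritative formulas):

```python
def bw_ram_formula_1(source_footprint_gb):
    # (footprint - 50 GB), x2 for processing, /4 compression, +90 GB BW data
    return max(source_footprint_gb - 50, 0) * 2 / 4 + 90

def bw_ram_formula_2(column_footprint_gb, row_footprint_gb):
    # Column store: x2 for churning/reporting, 4:1 compression.
    # Row store: 1.5:1 compression only.  +50 GB for BW-specific data.
    return column_footprint_gb * 2 / 4 + row_footprint_gb / 1.5 + 50

# Illustrative footprints: 2,000 GB of transaction data in the column
# store and 150 GB of master data in the row store.
ram = bw_ram_formula_2(column_footprint_gb=2000, row_footprint_gb=150)
assert ram == 1150                      # 1000 + 100 + 50 GB
assert bw_ram_formula_1(1050) == 590    # (1000 * 2 / 4) + 90 GB
```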

Disk Sizing

• The disk size for the persistence layer used for backup and recovery will be four times the size of the RAM calculated above.
• The disk size for the logs will be equal to the size of the RAM.

Page 38: Sap Hana
Page 39: Sap Hana

SAP BI Road Map with HANA

In today's world we see multiple region-wise BW implementations providing analytical capability on small sets of data for decision making, with an EDW for global reporting and decision making.

Page 40: Sap Hana

In the first step of a HANA implementation, the local instances of BW will be powered by the HANA database, just to be on the safe side, and the EDW will be powered by BWA so that the overall reporting performance can be improved.

Page 41: Sap Hana

In the second step of the roadmap, we merge the multiple local instances into a single box, and both local and global reporting will happen through the EDW. In short: the EDW running on top of HANA.

Page 42: Sap Hana

In the third step, new applications will be ported on top of HANA.

Page 43: Sap Hana

The futuristic goal is to run all SAP applications on the HANA database so that data is generated only once and used by multiple applications across the landscape.

Page 44: Sap Hana
Page 45: Sap Hana

Future with SAP HANA

Enterprise Performance In-Memory Circle (EPIC)

The combination of the technologies mentioned above finally enables an iterative link between the instant analysis of

data, the prediction of business trends, and the execution of business decisions without delays. How can companies

take advantage of in-memory applications to improve the efficiency and profitability of their business? We predict

that this break-through innovation will lead to fundamentally improved business processes, better decision making,

and new performance standards for enterprise applications across industries and organizational hierarchies. We are

convinced that in-memory technology is a catalyst for innovation, and the enabler for a level of information quality

that has not been possible until now. In-memory enterprise data management provides the necessary equipment to

excel in a future where businesses face ever-growing demands from customers, partners, and shareholders. With

billions of users and a hundred times as many sensors and devices on the Internet, the amount of data we are

Page 46: Sap Hana

confronted with is growing exponentially. Being able to quickly extract business relevant information not only

provides unique opportunities for businesses; it will be a critical differentiator in future competitive markets.

With in-memory technology, companies can fully leverage massive amounts of data to create strategic advantage.

Operational business data can be interactively analyzed and queried without support from the IT department, opening

up completely new scenarios and opportunities.

Consider financial accounting, where data needs to be frequently aggregated for reporting on a daily, weekly,

monthly, or annual basis. With in-memory data management, the necessary filtering and aggregation can happen in

real time. Accounting can be done anytime and in an ad-hoc manner. Financial applications will not only be

significantly faster, they will also be less complex and easier to use. Every user of the system will be able to directly

analyze massive amounts of data. New data is available for analysis as soon as it is entered into the operational

system. Simulations, forecasts, and what-if scenarios can be done on demand, anytime and anywhere. What took days

or weeks in traditional disk-based systems can now happen in the blink of an eye. Users of in-memory enterprise

systems will be more productive and responsive.

This new innovation will create new opportunities and improvements across all industries. Below are a few examples:

• Daily Operations: Gain real-time insight into daily revenue, margin, and labor expenses.

• Competitive Pricing: Intuitively explore the impact of competition on product pricing to instantly understand the impact on profit contribution.

• Risk Management: Immediately identify high-risk areas across multiple products and services and run what-if

scenario analyses on the fly.

• Brand and Category Performance: Evaluate the distribution and revenue performance of brands and product

categories by customer, region, and channel at any time.

• Product Lifecycle and Cost Management: Get immediate insight into yield performance versus customer demand.

• Inventory Management: Optimize inventory and reduce out-of-stocks based on live business events.

• Financial Asset Management: Gain a more up-to-date picture of financial markets to manage exposure to currencies,

equities, derivatives, and other instruments.

• Real-Time Warranty and Defect Analysis: Get live insight into defective products to identify deviation in production

processes or handling.

Page 47: Sap Hana

In summary, we foresee in-memory technology triggering improvements in the following three interrelated strategic areas:

• Reduced Total Cost of Ownership: With our in-memory data management concepts, the required analytical

capabilities are directly incorporated into the operational enterprise systems. Dedicated analytical systems are a thing

of the past. Enterprise systems will become less complex and easier to maintain, resulting in less hardware

maintenance and IT resource requirements.

• Innovative Applications: In-memory data management combines high-volume transactions with analytics in the

operational system. Planning, forecasting, pricing optimization, and other processes can be dramatically improved and

supported with new applications that were not possible before.

• Better and Faster Decisions: In-memory enterprise systems allow quick and easy access to information that decision

makers need, providing them with new ways to look at the business. Simulation, what-if analyses, and planning can

be performed interactively on operational data. Relevant information is instantly accessible and the reliance on IT

resources is reduced. Collaboration within and across organizations is simplified and fostered. This can lead to a much

more dynamic management style where problems can be dealt with as they happen.

Page 48: Sap Hana

Related Content

http://scn.sap.com/community/hana-in-memory