Technique/SAP HANA | 2012. 7. 5. 21:45

As I mentioned in a recent blog about the In-memory Computing Conference 2012 (IMCC 2012), I found the presentation from Xiaoqun Clever (Corporate Officer, SAP AG; Senior Vice President, TIP Design & New Applications; President of SAP Labs China) entitled “Extreme Applications – neuartige Lösungen mit der SAP In-memory Technologie“ interesting but didn’t go into details. In this blog, I want to describe what I found so interesting about the presentation.

 

The first slide that I found intriguing was the one that describes the various scenarios for HANA. You can visualize the evolution of HANA over time as moving from left to right: Accelerator -> Database -> Platform. The first thing that I noticed is that “Cloud” is not involved at all. Indeed, it is only mentioned in a single slide later in the presentation - it almost doesn’t appear to play a major role in such HANA-related endeavors.

 

image001.jpg

 

The use case with HANA as a Database and the involvement of NetWeaver components would also cover the current intentions regarding the NetWeaver Cloud offering where HANA will be present as one possible data source. In this area, I will be curious to see what sort of functionality overlap (user management, etc) will exist between the HANA Platform and the NetWeaver Cloud as well as how the two environments will integrate with one another.

 

A later slide describing the scenario “HANA als Platform” is actually titled “Native HANA Applikationen”. The applications listed on the slide are either OnPremise or OnDemand (usually associated with the HANA AppCloud) applications. I started to consider what the portrayal of these applications as being “native” meant for the relationship between HANA and the Cloud. To be truthful, I’m starting to get the impression that we are slowly seeing a merging of the OnPremise and OnDemand worlds in terms of HANA runtime environments. Native applications might be able to run in both worlds since the underlying platform is fundamentally the same. Thus, other considerations (costs involved, customer presence, etc) might be important in making decisions regarding where such applications are hosted.

image002.jpg

 

If we take this assumption forward a few years, you might think that a HANA-Platform-based Business Suite running as a native HANA application (and thus, perhaps, easily available in an OnDemand setting) might be an option, but another slide in the same presentation shows that a HANA-based Business Suite would be distinct from native HANA Apps.

 

image003.jpg

 

What I liked about the presentation is that it reinforced my understanding of the differences between scenarios involving NetWeaver functionality and those based on native HANA functionality. What I don’t fully understand is the potential use of native functionality in scenarios where NetWeaver functionality (for example, NetWeaver Cloud) is used. Is this possible? Planned? As things evolve, I’ll be waiting to see if the scenarios with HANA as a Database can hold their own against the HANA as a Platform scenarios.



http://scn.sap.com/community/cloud-computing-and-on-demand-solutions/blog/2012/07/05/do-native-hana-applications-imply-a-future-without-a-cloud-onpremise-deployment-distinction?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 31. 09:42

Secure Sockets Layer (SSL) with HANA and BI4 Feature Pack 3 requires configuration on both the HANA server and the BI4 server. The following steps show how to configure SSL using OpenSSL and a certificate obtained from a Certificate Authority (CA).

 

OpenSSL Configuration

 

This blog covers the OpenSSL Crypto Library; however, HANA can also be configured using the SAP Crypto Library.

 

Confirm that OpenSSL is installed

 

shell> rpm -qa | grep -i openssl

openssl-0.9.8h-30.34.1

libopenssl0_9_8-32bit-0.9.8h-30.34.1

openssl-certs-0.9.8h-27.1.30

libopenssl0_9_8-0.9.8h-30.34.1

 

Confirm that OpenSSL is 64-bit

 

shell> file /usr/bin/openssl

openssl: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), stripped

 

Confirm there is a symlink to the libssl.so file

 

ssl_5.png

 

If not, create one as the root user

 

shell> ln -s /usr/lib64/libssl.so.0.9.8 /usr/lib64/libssl.so

 

SSL Certificates

 

This blog won’t go into the details of how SSL works, but in generic terms you’ll need to create a Certificate Signing Request (CSR) on the HANA server and send it to a CA. In return, the CA will give you a Signed Certificate and a copy of their Root CA Certificate. These then need to be set up with HANA and with the BI4 JDBC and ODBC drivers.

 

Creating the Certificate Signing Request

 

shell> openssl req -new -nodes -newkey rsa:2048 -keyout Server_Key.key -out Server_Req.csr -days 365

 

Fill out the requested information according to your company:

 

-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

 

This will create two files

 

  • Key: Server_Key.key
  • CSR: Server_Req.csr

 

The CSR needs to be sent to the CA, which in turn will give you a signed certificate and their Root CA Certificate.

 

Convert the Root CA Certificate to PEM

 

The Root CA Certificate may come in DER format (.cer extension), but HANA requires the certificate in PEM format. Therefore, we need to convert it using the following command

 

shell> openssl x509 -inform der -in CA_Cert.cer -out CA_Cert.pem
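
To double-check the conversion (an optional step, not part of the original instructions), you can print the converted certificate and confirm that OpenSSL parses it as PEM:

shell> openssl x509 -in CA_Cert.pem -text -noout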

 

HANA SSL Configuration

 

Copy both the Signed Certificate and the Root CA Certificate to the HANA server. For HANA SSL to work, we need to create two files:

 

  • key.pem
  • trust.pem

 

The key.pem key store file contains the certificate chain, which includes your server's key (Server_Key.key), the Signed Certificate returned by the CA, and the Root CA Certificate. The trust.pem trust store file contains only the Root CA Certificate.

 

Create the key.pem and trust.pem trust stores

 

key.pem

 

shell> cat Server_Cert.pem Server_Key.key CA_Cert.pem > key.pem

 

trust.pem

 

shell> cp CA_Cert.pem trust.pem
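
As an optional sanity check, you can verify the signed server certificate (Server_Cert.pem, as used in the cat command above) against the trust store before configuring HANA; openssl should report OK if the chain is consistent:

shell> openssl verify -CAfile trust.pem Server_Cert.pem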

 

Copy the files to the user's home directory

 

In the user's home directory, create a .ssl directory and place both the key.pem and trust.pem files there.

 

ssl_6.png
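
A minimal sketch of this step, assuming the files were generated in /tmp and that the home directory in question belongs to the HANA operating system user (e.g. <sid>adm):

shell> mkdir -p ~/.ssl

shell> cp /tmp/key.pem /tmp/trust.pem ~/.ssl/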

 

Configure the certificates in HANA

 

Once the key.pem and trust.pem files have been created they need to be configured in HANA.

 

In HANA Studio go to

 

  • Administration
  • Configuration tab
  • Expand indexserver.ini
  • Expand communication
  • Configure the entries related to SSL (a sketch of these entries follows below)

 

ssl_!.png
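
As a rough illustration only, the communication section typically ends up with entries along these lines for an OpenSSL-based setup. The exact parameter names, defaults, and paths can differ between HANA revisions, so verify them against the SAP HANA Security Guide for your release:

[communication]
sslcryptoprovider = openssl
sslkeystore = /usr/sap/<SID>/home/.ssl/key.pem
ssltruststore = /usr/sap/<SID>/home/.ssl/trust.pem
sslvalidatecertificate = false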

 

Stop and start HANA to pick up the SSL configuration (a quick port check follows below)

 

  • HDB stop
  • HDB start
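
After the restart, and before retesting from the client tools, you can confirm from the operating system that the indexserver SQL port is listening again; the port below assumes instance 00 (the pattern is 3<instance>15):

shell> netstat -an | grep 30015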

 

HANA Studio Configuration

When setting up the connection to HANA, check the option 'Connect using SSL', as seen below.

ssl_7.png

 

To confirm the connection is using SSL, look for the lock icon on the server icon, as seen below.

 

ssl_8.png

 

BI4 Feature Pack 3 SSL Configuration

 

SSL in BI4 needs to be configured for the HANA connectivity you plan to use.

 

JDBC Configuration

 

For JDBC SSL configuration, we’ll need to add the trust.pem trust store to the Java Key Store (JKS) using the keytool utility provided by the JDK/JRE.  This is done via the command line.  Change the paths for your own configuration:

 

Add trust.pem to the JKS

 

C:\Documents and Settings\Administrator>"C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\jre\bin\keytool.exe" -importcert -keystore "C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\jre\lib\security\cacerts" -alias HANA -file trust.pem

 

You will be prompted for the keystore password.  The default password is: changeit

 

When prompted to 'Trust this certificate', enter yes. The alias can be any value; however, it must be unique in the keystore.

 

Confirm that your certificate has been added to the keystore

 

C:\Documents and Settings\Administrator>"C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\jre\bin\keytool.exe" -list -keystore "C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\jre\lib\security\cacerts" -alias HANA

 

If successful, you will see trustedCertEntry in the output, as below

 

ssl_11.png

 

Information Design Tool  (IDT) Configuration

 

In IDT, the connection will need to be set up with the JDBC driver property encrypt=true so that the connection uses SSL when connecting to HANA.

 

idt.png
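
For reference, this is the same property you would append when building a HANA JDBC URL by hand outside of IDT; the host is a placeholder and port 30015 assumes instance 00:

jdbc:sap://<hana_host>:30015/?encrypt=true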

 

ODBC Configuration

 

Once the HANA client driver has been installed, you can set up an ODBC connection for HANA. To connect via SSL, check the box 'Connect using SSL', as below:

ssl_2.png

 

If you added any 'Special property settings', they won't be displayed in the driver configuration. To view them, launch the Windows Registry Editor and go to the key below (the key can also be dumped from the command line, as shown below):

 

  • HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\<Your Data Source Name>

 

ssl_4.png
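
If you prefer the command line to the Registry Editor, the same key can be dumped with reg query; the data source name below is a placeholder for your own DSN:

C:\> reg query "HKLM\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\<Your Data Source Name>"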

 

Installing the CA Root Certificate

 

Depending on which CA you get the certificate signed by, you may run into SSL errors. For example, in Crystal you may see this error:

 

cr1.png

 

To resolve this, install the CA Root Certificate so that it is trusted by the server.

 

  • Copy the CA Root Certificate to the machine where the error is coming from

 

  • Double click on the certificate and click 'Install Certificate'

cr5.png

  • Click next

 

cr4.png

  • Select the first option and click next

 

cr3.png

  • Click finish

cr6.png

 

Confirming if SSL is being used

 

Using a tool like Wireshark, the communication between the server and the client can be traced to verify that SSL is being used, as seen below.

 

ssl_10.png
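
As an aside (and an assumption about a typical setup with instance 00, i.e. SQL port 30015), a Wireshark display filter along these lines is a quick way to isolate the relevant traffic; in Wireshark releases of that era the SSL/TLS dissector is addressed as "ssl":

ssl && tcp.port == 30015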




http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/30/ssl-with-hana-and-bi4-feature-pack-3?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 29. 21:32

Two major improvements related to Enterprise Data Warehousing that can be achieved by migrating SAP NetWeaver BW to the SAP HANA database platform are (1) the dramatic improvement in data load times for DataStore Objects, the data containers within BW, and (2) significantly boosted performance when reporting from them. With the latest BI Content releases, SAP NetWeaver 7.30 BI Content 7.37 SP 01 or SAP NetWeaver 7.31 BI Content 7.47 SP 01, a large fraction (about two thirds) of the DataStore Objects delivered so far with BI Content have now been prepared for HANA optimization.

 

When you copy these HANA-prepared DataStore Objects (DSOs) from the delivered version to the active version, they are created automatically as SAP HANA-optimized DataStore Objects, side-stepping the manual conversion step from standard to HANA-optimized DSO. The delivery of DataStore Objects prepared for HANA optimization marks the next step towards reducing the total cost of administration, following the migration of all relevant data flows to SAP BW 7.x technology, which is an important prerequisite for HANA optimization of the involved DataStore Objects (migrated data flows were delivered with SAP NetWeaver 7.30 BI Content 7.36 SP 02 or SAP NetWeaver 7.31 BI Content 7.46 SP 02). If DataStore Objects already contain data, you cannot avoid the manual conversion step to HANA optimization. Therefore, you benefit most from the HANA-prepared DataStore Objects when you copy new data flows from the delivered to the active version in your BW system powered by SAP HANA.

 

You can benefit from automatic creation of HANA-optimized DataStore Objects during BI Content activation if you are using (1) SAP HANA database as of release SAP HANA 1.0 Support Package 03 and (2) SAP NetWeaver BW 7.3, Support Package Stack 07 or SAP NetWeaver BW 7.3, including Enhancement Package 1, Support Package Stack 04.

 

Find the complete list of all 1025 HANA-optimized DataStore Objects attached to note 1708668.




http://scn.sap.com/community/data-warehousing/business-content-and-extractors/blog/2012/05/29/shipment-of-bi-content-for-sap-netweaver-bw-powered-by-sap-hana-datastore-objects-are-now-prepared-for-sap-hana-optimization?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 23. 17:38

Hello All,

 

As we get new versions of HANA, it is useful to know what changes are coming in with the latest versions.

 

Here I have listed the new and changed features in the HANA Studio SPS04 version. You can also get these details from the SAP HANA Modeling Guide available on the SAP Service Marketplace (SMP).

 

Loading data from flat files (new)

You use this functionality to upload data from flat files, available in a client file system, to the SAP HANA database. The supported file types are .csv, .xls, and .xlsx.

This approach is very handy for uploading your content into the HANA database. For more details, refer to http://scn.sap.com/docs/DOC-27960

 

Exporting objects using SAP Support Mode (new)

You use this functionality to export an object, along with other associated objects and data, for SAP support purposes. This option should only be used when requested by SAP support. It helps SAP debug issues reported for particular views.

 

Input parameters (new)

Used to provide input for parameters within stored procedures, which are evaluated when the procedure is executed. You use input parameters as placeholders during currency conversion and in formulas such as calculated measures and calculated attributes.

 

Import/export server and client (changed)

• Exporting objects using Delivery Units (earlier known as Server Export):

Function to export all packages that make up a delivery unit and the relevant objects contained therein, to the client or to the SAP HANA server filesystem.

 

• Exporting objects using Developer Mode (earlier known as Client Export):

Function to export individual objects to a directory on your client computer. This mode of export should only be used in exceptional cases, since this does not cover all aspects of an object, for example, translatable texts are not copied.

 

• Importing objects using Delivery Unit (earlier known as Server Import):

Function to import objects (grouped into a delivery unit) from the server or client location available in the form of .tgz file.

 

• Importing objects using Developer Mode (earlier known as Client Import):

Function to import objects from a client location to your SAP HANA modeling environment.


Variables (changed)

Variables can be assigned to a specific attribute and are used for filtering via the WHERE clause. At runtime, you can provide different values for the variable to view the corresponding set of attribute data. You can apply variables to attributes of analytic and calculation views.

 

Hierarchy enhancements (changed)

Now you can create a hierarchy between the attributes of a calculation view in addition to attribute views. The Hierarchy node is available in the Output pane of the Attribute and Calculation view.

 

Activating Objects (changed)

You activate objects available in your workspace to expose the objects for reporting and analysis.

Based on your requirement, you can perform the following actions on objects:

• Activate

• Redeploy

• Cascade Activate

• Revert to Active

 

Auto Documentation (changed)

Now the report also captures the cross references, data foundation joins, and logical view joins.

 

Calculation view (changed)

• Aggregation node: used to summarize data of a group of rows by calculating values in a column

• Multidimensional reporting: if this property is disabled you can create a calculation view without any measure, and the view is not available for reporting purposes

• Union node datatype: you can choose to add unmapped columns that just have constant mapping and a data type

• Column view now available as data source

• Filter expression: you can edit a filter applied on aggregation and projection view attributes using filter expressions from the output pane that offers more conditions to be used in the filter including AND, OR, and NOT

 

Autocomplete function in SQL Editor (changed)

The autocomplete function (Ctrl+Space) in the SQL Editor shows a list of available functions.

 

Hope this helps.

 

Rgds,

Murali

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 17. 21:35

A post that opens with the rather provocative line "HANA is a great way to waste a lot of money."
Honestly, though, the sites adopting HANA right now as simply the next version of BW are, in my view, doing exactly that: "throwing money away." (At least domestically; I can't speak for overseas.)

If a HANA adoption is run like a version upgrade project, that is how it will end up. The same goes for new implementations: if the people who have been doing BW simply keep their existing MDM modeling without rethinking anything, the result will be the same.

For me, the arrival of HANA means the list of things I need to study keeps growing,
though I also suspect it could just be muddled through halfheartedly.

Anyway, here is a blog on SDN written about exactly that point. I agree with it.


-------------------------------------------------------------------------------------------------------------

HANA is a great way to waste a lot of money.

Yes, I'm serious here.

 

If you decide to implement this new platform at your site but just copy your coding 1:1 to HANA, you're going to waste money.

If you buy HANA and don't re-engineer your solutions, you're going to waste money.

If there is no change in how data is processed and consumed with the introduction of HANA to your shop, then you're wasting money. And you're wasting an opportunity here.

 

Moving to HANA is disruptive.

It does mean throwing the solutions that are used today overboard.

That's where the pain lies, where real costs appear - the kind of costs that don't show up on any price list.

 

So why take the pain, why invest so much?

Because this is the chance to enable a renovation of your company IT.

To change how users perceive working with data and to enable them to do things with your corporate data far more clever than what was thinkable with your old system.

That's the opportunity to be better than your competition and better than you are today.

That's a chance for becoming a better company.

 

This does mean, that your users need to say goodbye to their beloved super long and wide excel list reports.

To fully gain the advantages HANA can provide, the consumers of the data also need to be led to grow towards better ways of working with data.

Just upgrading to the new Office version to support even more rows in a spreadsheet won't do. It never did.

 

This does mean, your developers need to re-evaluate what they understand of databases.

They have to start over and re-write their fancy utility scripts that had been so useful on the old platform.

And more importantly: they need to re-think what users of their systems should be enabled to do.

Developers will need to give up their lordship of corporate IT. This is going to be a bitter pill to swallow.

 

Just adding the HANA box to your server cabinet is not the silver bullet for your data processing issues. But seizing this disruptive moment to provide your users with new applications and new approaches to data is what gets you there.

Once again, leaving the 'comfort zone' is what will provide the real gain.

 

So don't waste your money, don't waste your time, and by all means stop creating the same boring systems you have created since you've been in IT.

Instead, start over and do something new to become the better company you can be.



------------------------------------------------------------------------------------------------------------- 

http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/17/still-no-silver-bullet?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 11. 15:38




INTRODUCTION

 

Ever since SAP-HANA was announced a couple of years back, I've been following the discussions and developments around the in-memory database space. In October 2011, Oracle CEO Larry Ellison introduced Oracle Exalytics to compete with SAP-HANA. After reading white papers on both SAP-HANA and Oracle Exalytics, it was obvious they were different. Comparing SAP-HANA and Oracle Exalytics is like comparing apples to oranges.

 

On May 8, 2012 I tweeted:

 

SAP Mentor David Hull responded:

 

I was a bit surprised to learn that most people don't have a clue about the difference between Exalytics and SAP-HANA. The difference looked obvious to me. I realized either I was missing something or they were. So I decided to write this blog. And since this blog compares SAP products with Oracle products, I've decided to use Oracle DB instead of the generic term RDBMS.

 

First I'll discuss the similarity between SAP-BW, Oracle-Exalytics and SAP-HANA. At a very high level, they look similar as shown in the picture below:

Similarity.png

 

As shown, BW application sits on top of a database, Oracle or SAP-HANA. And the application helps the user find right data. The similarity ends there.

 

Let us now review how Oracle-Exalytics compares with SAP-BW with the Business Warehouse Accelerator (BWA): as you can see below, there appears to be a one-to-one match between the components of SAP-BW and Exalytics.

 

 

New_BWA_EXA.png

Steps    | SAP-BW                                            | Exalytics                                                  | Comments
1 and 1a | Data found in BWA and returned to the user        | Data found in Adaptive Data Mart and returned to the user  |
2 and 2a | Data found in OLAP Cache and returned to the user | Data found in Intelligent Cache and returned to the user   | This means data was not found in BWA or Adaptive Data Mart
3 and 3a | Data found in Aggregates and returned to the user | Data found in Essbase Cubes and returned to the user       | This means data was not found in (1) Adaptive Data Mart or BWA and (2) OLAP Cache or Intelligent Cache
4 and 4a | Data found in Cubes and returned to the user      |                                                            | Not sure if Essbase supports aggregates; however, Oracle supports materialized views; I assume this is similar to SAP-BW's aggregates

 

 

 

The diagram below shows why the Exalytics vs. SAP-HANA comparison is an apples-to-oranges comparison. In Exalytics, the information users need gets pre-created at a certain level of granularity. One of the best practices in the BW/DW world is to create aggregates upfront to get acceptable response times.

 

In SAP-HANA, however, aggregates are created on the fly; data in SAP-HANA resides in raw form, and depending on what users need, the application performs the aggregation at runtime and displays the information on the user's screen. This helps the users perform analysis near real-time and more quickly.

 

 

New_EXA_HANA.png

Based on the diagrams shown above, Exalytics, it seems, is comparable to SAP's six-year-old BWA technology.

 

SUMMARY

 

Based on discussions above, the diagram below compares all three products SAP-BW with BWA, Exalytics and SAP-HANA.

 

New_BWA_EXA_HANA.png

 

                                           Note: I didn't connect Disk to HANA DB because it is primarily used for persistence.

 

I wanted to keep this blog simple, so I didn't include a lot of details. Depending on your questions and thoughts, I'm planning to either update this blog or write a new one.







http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/11/bwa-exalytics-and-sap-hana?utm_source=twitterfeed&utm_medium=twitter



Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 8. 10:42

HANA vs. Exalytics: an Analyst's View

Posted by David Dobrin in Blog on May 6, 2012 10:00:28 PM
Introduction

 

Some people at SAP have asked me to comment on the "HANA vs. Exalytics" controversy from an analyst's point of view.  It's interesting to compare, so I'm happy to go along.  In this piece, I'll try to take you through my thinking about the two products.  I can see that Vishal Sikka just posted a point-by-point commentary on Oracle's claims, so I won't try to go him one better.  Instead, what I'll try to do is make the comparison as simple and non-techie friendly as I can.  Note that SAP did not commission this post, nor did it ask to edit it.

 

Exalytics

 

To begin with, let's start with something that every analyst knows:  Oracle does a lot of what in the retail trade are called "knockoffs."

 

They think this is good business, and I think they're right.  The pattern is simple.  When someone creates a product and proves a market, Oracle (or, for that matter, SAP) creates a quite similar product.  So, when VMWare (and others) prove Vsphere, Oracle creates Oracle VM; when Red Hat builds a business model around Linux, Oracle creates "Oracle Linux with Unbreakable Enterprise Kernel."

 

We analysts know this because it's good business for us, too.  The knockoffs--oh, let's be kind and call them OFPs for "Oracle Followon Products"--are usually feature-compatible with the originals, but have something, some edge, which (Oracle claims) makes them better.

 

People get confused by these claims, and sometimes, when they get confused, they call us.

 

Like any analyst, I've gotten some of these calls, and I've looked at a couple of the OFPs in detail.  Going in, I usually expect that Oracle is offering pretty much what you see at Penneys or Target, an acceptable substitute for people who don't want to or can't pay a premium, want to limit the number of vendors they work with, aren't placing great demands on the product, etc., etc.

 

I think this, because it's what one would expect in any market.  After all, if you buy an umbrella from a sidewalk vendor when it starts to rain, you're happy to get it, it's a good thing, and it's serviceable, but you don't expect it to last through the ages.

 

Admittedly, software is much more confusing than umbrellas. With a $3 umbrella, you know what you're getting.  With an OFP (or a follow-on product from any company), you don't necessarily know what you're getting.  Maybe the new product is a significant improvement over what's gone before. If the software industry were as transparent as the sidewalk umbrella industry, maybe it would be clear.  But as it is, you have to dig down pretty deep to figure it all out. And then you may still be in trouble, because when you get "down in the weeds," as my customers have sometimes accused me of being, you might be right, but you might also fail to be persuasive.

 

Which brings me to the current controversy.  To me, it has a familiar ring. SAP releases HANA, an in-memory database appliance.  Now Oracle has released Exalytics, an in-memory database appliance.  And I'm getting phone calls.

 

HANA:  the Features that Matter

 

I'm going to try to walk you through the differences here, while avoiding getting down in the weeds. This is going to involve some analogies, as you'll see.  If you find these unpersuasive, feel free to contact me.

 

To do this, I'm going to have to step back from phrases like "in-memory" and "analytics," because now both SAP and Oracle are using this language. I'll look instead at the underlying problem that "in-memory" and "analytics" are trying to solve.

 

This problem is really a pair of problems.  Problem 1.  The traditional row-oriented database is great at getting data in, not so good at getting data out.  Problem 2.  The various "analytics databases," which were designed to solve Problem 1--including, but not limited to the column-oriented database that SAP uses--are great at getting data out, not so good at getting data in.

 

What you'd really like is a column-oriented (analytics) database that is good at getting data in, or else a row-oriented database that is good at getting data out.

 

HANA addresses this problem in a really interesting way.  It is a database that can be treated as either row-oriented or column-oriented.  (There is literally a software switch that you can throw.)  So, if you want to do the very fast and flexible analytic reporting that column-oriented databases are designed to do, you throw the switch and run the reports.  And if you want to do the transaction processing that row-oriented databases are designed to do, you throw the switch back.

 

Underneath, it's the same data; what the switch throws is your mode of access to it.

 

In extolling this to me, my old analyst colleague, Adam Thier, now an executive at SAP, said, "In effect, it's a trans-analytic database."  (This is, I'm sure, not official SAP speak.  But it works for me.)  How do they make the database "trans-analytic?"  Well, this is where you get down into the weeds pretty quickly.  Effectively, they use the in-memory capabilities to do the caching and reindexing much more quickly than would have been possible before memory prices fell.

 

There's one other big problem that the in-memory processing solves.  In traditional SQL databases, the only kind of operation you can perform is a SQL operation, which is basically going to be manipulation of rows and fields in rows.  The problem with this is that sometimes you'd like to perform statistical functions on the data:  do a regression analysis, etc., etc.  In a traditional database, though, you're kind of stymied; statistical analysis in a SQL database is complicated and difficult.

 

In HANA, "business functions" (what marketers call statistical analysis routines) are built into the database.  So if you want to do a forecast, you can just run the appropriate statistical function.  It's nowhere near as cumbersome as it would be in a pure SQL database.  And it's very, very fast;  I have personally seen performance improvements of three orders of magnitude.

 

Exalytics:  the Features that Matter

 

Now when I point out that HANA is both row-oriented (for transactions) and column-oriented (so that it can be a good analytics database) and then I point out that it has business functions built-in, I am not yet making any claim about the relative merits of HANA and Exalytics.

 

Why?  Well, it turns out that Exalytics, too, lets you enter data into a row-oriented database and allows you to do reporting on the data from an analytics database.  And in Exalytics, too, you have a business function library.

 

But the way it's done is different.

 

In Exalytics, the transactional, row-oriented capabilities come from an in-memory database (the old TimesTen product that Oracle bought a little more than a decade ago).  The analytics capabilities come from Essbase (which Oracle bought about 5 years ago), and the business function library is an implementation of the open-source R statistical programming language.

 

So what, Oracle would argue. It has the features that matter.  And, Oracle would argue, it also has an edge, something that makes this combination of databases clearly better.  What makes it better, according to Oracle? In Exalytics, you're getting databases and function libraries that are tested, tried, and true.  TimesTen has been at the heart of Salesforce.com since its inception.  Essbase is at the heart of Hyperion, which is used by much of the Global 2000.  And R is used at every university in the country.

 

Confused?  Well, you should be.  That's when you call the analyst.

 

HANA vs. Exalytics

 

So what is the difference between the two, and does it matter?  If you are a really serious database dweeb, you'll catch it right away:

 

In HANA, all the data is stored in one place. In Exalytics, the data is stored in different places.

 

So, in HANA, if you want to report on data, you throw a switch.  In Exalytics, you extract the data from the Times10 database, transform it, and load it into the Essbase database.  In HANA, if you want to run a statistical program and store the results, you run the program and store the results.  In Exalytics, you extract the data from, say, Times10, push it into an area where R can operate on it, run the program, then push the data back into Times10.

 

So why is that a big deal?  Again, if you're a database dweeb, you just kind of get it.  (In doing research for this article, I asked one of those dweeb types about this, and I got your basic shrug-and-roll-of-eye.)

 

I'm not that quick. But I think I sort of get what their objection is.  Moving data takes time.  Since the databases involved are not perfectly compatible, one needs to transform the data as well as move it. (Essbase, notoriously, doesn't handle special characters, or at least didn't use to.)  Because it's different data in each database, one has to manage the timing, and one has to manage the versions.  When you're moving really massive amounts of data around (multi-terabytes), you have to worry about space.  (The 1TB Exalytics machine only has 300 GB of actual memory space, I believe.)

 

One thing you can say for Oracle.  They understand these objections, and in their marketing literature, they do what they can to deprecate them.  "Exalytics," Oracle says, "has Infiniband pipes" that presumably make data flow quickly between the databases, and "unified management tools," that presumably allow you to keep track of the data. Yes, there may be some issues related to having to move the data around.  But Oracle tries to focus you on the "tried and true" argument. You don't need to worry about having to move the data between containers, not when each of the containers is so good, so proven, and has so much infrastructure already there, ready to go.

 

As long as the multiple databases are in one box, it's OK, they're arguing, especially when our (Oracle's) tools are better and more reliable.

 

Still confused?  Not if you're a database dweeb, obviously.  Otherwise, I can see that you might be.  And I can even imagine that you're a little irritated. "Here this article has been going on for several hundred lines," I can hear you saying, "and you still haven't explained the differences in a way that's easy to understand."

 

HANA:  the Design Idea

 

So how can you think of HANA vs. Exalytics in a way that makes the difference between all-in-one-place and all-in-one-box-with-Infiniband-pipes-connecting-stuff completely clear?  It seems to me, the right way, is to look at the design idea that's operating in each.

 

Here, I think, there is a very clear difference.  In TimesTen or Essbase or other traditional databases, the design idea is roughly as follows: if you want to process data, move it inside engines designed for that kind of processing. Yes, there's a cost. You might have to do some processing to get the data in, and it takes some time.  But those costs are minor, because once you get it into the container, you get a whole lot of processing that you just couldn't get otherwise.

 

This is a very normal, common design idea.  You saw much the same idea operating in the power tools I used one summer about forty years ago, when I was helping out a carpenter.  His tools were big and expensive and powerful--drill presses and table saws and such like--and they were all the sort of thing where you brought the work to the tool. So if you were building, say, a kitchen, you'd do measuring at the site, then go back to the shop and make what you needed.

 

In HANA, there's a different design idea:  Don't move the data.  Do the work where the data is.  In a sense, it's very much the same idea that now operates in modern carpentry.  Today, the son of the guy I worked for drives up in a truck, unloads a portable table saw and a battery-powered drill, and does everything on site and it's all easier, more convenient, more flexible, and more reliable.

 

So why is bringing the tools to the site so much better in the case of data processing (as well as carpentry?)  Well, you get more flexibility in what you do and you get to do it a lot faster.

 

To show you what I mean, let me give you an example.  I'll start with a demo I saw a couple of years ago of a relatively light-weight in-memory BI tool.

 

The salesperson/demo guy was pretty dweeby, and he traveled a lot.  So he had downloaded all the wait times at every security gate in every airport in America from the TSA web site.  In the demo, he'd say, "Let's say you're in a cab.  You can fire up the database and a graph of the wait-times at each security checkpoint.  So now you can tell which checkpoint to get out at."

 

The idea was great, and so were the visualization tools.   But at the end of the day, there were definite limitations to what he was doing.  Because the system is basically just drawing data out of the database, using SQL, all he was getting was a list of wait times, which were a little difficult to deal with.  What one would really want is the probability that a delay would occur at each of the checkpoints, based on time of day and a couple of other things.  But that wasn't available, not from this system, not in a cab.

 

Perhaps even worse, he wasn't really working with real-time data. If you're sitting in the cab, what you really want to be working with is recent data, but he didn't have that data; his system couldn't really handle an RSS feed.

 

Now, consider what HANA's far more extensive capabilities do for that example.  First of all, in HANA, data can be imported pretty much continuously.  So if he had an RSS feed going, he could be sure the database was up-to-date.  Second, in HANA, he could use the business functions to do some statistical analysis of the gate delay times.  So instead of columns of times, he could get a single, simple output containing the probability of a delay at each checkpoint.  He can do everything he might want to do in one place.  And this gives him better and more reliable information.

 

So What Makes It Better?

 

Bear with me.  The core difference between HANA and Exalytics is that in HANA, all the data is in one place.  Is that a material difference?  Well, to some people it will be; to some people, it won't be.  As an analyst, I get to hold off and say, "We'll see."

 

Thus far, though, I think the indications are that it is material.  Here's why.

 

When I see a new design idea--and I think it's safe to say that HANA embodies one of those--I like to apply two tests.  Is it simplifying?  And is it fruitful?

 

Back when I was teaching, I used to illustrate this test with the following story:

 

A hundred years ago or so, cars didn't have batteries or electrical systems.  Each of the things now done by the electrical system were thought of as entirely separate functions that were performed in entirely different ways.  To start the car, you used a hand crank.  To illuminate the road in front of the car, you used oil lanterns mounted where the car lights are now.

 

Then along came a new design idea: batteries and wires.  This idea passed both tests with flying colors.  It was simplifying.  You could do lots of different things (starting the car, lighting up the road) with the same apparatus, in an easier and more straightforward way (starting the car or operating the lights from the dashboard).  But it was also fruitful.  Once you had electricity, you could do entirely new things with that same idea, like power a heater motor or operate automatic door locks.

 

So what about HANA?  Simplifying and fruitful?  Well, let's try to compare it with Exalytics. Simplifying?  Admittedly, it's a little mind-bending to be thinking about both rows and columns at the same time.  But when you think about how much simpler it is conceptually to have all the data in one database and think about the complications involved when you have to move data to a new area in order to do other operations on it, it certainly seems simplifying.

 

And fruitful?

 

Believe it or not, it took me a while to figure this one out, but Exalytics really helped me along.  The "Aha!" came when I started comparing the business function library in HANA to the "Advanced Visualization" that Oracle was providing.  When it came to statistics, they were pretty much one-to-one; the HANA developers very self-consciously tried to incorporate the in-database equivalents of the standard statistical functions, and Oracle very self-consciously gave you access to the R function library.

 

But the business function library also does…ta da…business functions, things like depreciation or a year-on-year calculation.  Advanced Visualization doesn't. 

 

This is important not because HANA's business function library has more features than R, but because HANA is using the same design idea (the Business Function Library) to enrich various kinds of database capabilities.  On the analytics side, they're using the statistical functions to enrich analytics capabilities.  On the transaction side, they're using the depreciation calculations to enrich the transaction capabilities.  For either, they're using the same basic enrichment mechanism.

 

And that's what Oracle would find hard to match, I think. Sure, they can write depreciation calculation functionality; they've been doing that for years.  But to have that work seamlessly with the Times10 database, my guess is that they'd have to create a new data storage area in Exalytics, with new pipes and changes in the management tools.

 

Will HANA Have Legs?

 

So what happens when you have two competing design ideas and one is simpler and more fruitful than the other?

 

Let me return to my automobile analogy.

 

Put yourself back a hundred years or so and imagine that some automobile manufacturer or other, caught short by a car with a new electrical system, decides to come to market ASAP with a beautiful hand-made car that does everything that new battery car does, only with proven technology.  It has crisp, brass oil lanterns, mahogany cranks, and a picture of a smiling chauffeur standing next to the car in the magazine ad.

 

The subtext of the ad is roughly as follows. "Why would you want a whole new system, with lots and lots of brand-new failure points, when we have everything they have.  Look, they've got light; we've got light, but ours is reliable and proven.  They've got a starter; we've got a starter, but ours is beautiful, reliable, and proven, one that any chauffeur can operate."

 

I can see that people might well believe them, at least for a while.  But at some point, everybody figures out that the guys with the electrical system have the right design idea.  Maybe it happens when the next version comes out with a heater motor and an interior light.  Maybe it happens when you realize that the chauffeur has gone the way of the farrier. But whenever it happens, you realize that the oil lantern and the crank will eventually fall by the wayside.

 

About David Dobrin

 

I run a small analyst firm in Cambridge, Massachusetts, that does strategy consulting in most areas of enterprise applications.  I am not a database expert, but for the past year, I have been doing a lot of work with SAP related to HANA, so I'm reasonably familiar with it.  I don't work with Oracle, but I know a fair amount about both the Times 10 database and the Essbase database, because I covered both Salesforce (which uses Times 10) and Hyperion (Essbase) for many years.

 

SAP is a current customer of B2B Analysts, Inc., the firm I run.

  

 









https://www.experiencesaphana.com/community/blogs/blog/2012/05/06/hana-vs-exalytics-an-analysts-view 

Posted by AgnesKim
Technique/SAP HANA | 2011. 11. 29. 13:33

HANA optimized planning with BW-IP
Uwe Fischer (SAP AG)
Posted on Nov. 28, 2011 03:34 AM in Enterprise Data Warehousing/Business Warehouse, In-Memory Business Data Management

URL: http://www.sdn.sap.com/irj/sdn/ip

 
 

In-memory technology has been a success story within SAP NetWeaver BW for many years. The introduction of the SAP NetWeaver BW Accelerator provided a new level of reporting performance and became a role model for others. Since then there has been a strong desire to significantly accelerate planning use cases with this technology, too. The value proposition can be summarized as:

  • Improved plan quality (allow more simulations cycles)
  • Improved user experience (provide better response time)
  • Improved plan accuracy (process higher data volume) 

In general, it is the mass data operations that benefit the most from in-memory technology: in reporting it is naturally the aggregation, in planning the disaggregation. But within planning there are many more mass data operations, and every planning function is a candidate.

However, for a significant performance benefit, mass data operations need to stay within the data layer completely, including data read, calculations and write-back. BWA did not provide the durability of an ACID-compliant database and as such was a secondary store that could not manage written-back data. With SAP HANA the same technology now becomes available with full ACID compliance.

To understand what it means to have the complete operations in HANA, let us look at the processing on a classical database (the width of the arrows describes the volume of data transferred):

First the data is read into a local cache in the application tier. There it is exposed to the plan session, which is used to feed both the BEx query for the end user and the calculations in planning functions or the disaggregation in the query. The calculations are tightly bound to several other components: the metadata of the plan application, constraints such as characteristic combinations that the calculations must not violate, and the delta buffer that contains the pending changes. These buffered deltas, together with the locally cached data, feed the plan session again. Finally, the deltas in the buffer are written back to the database upon a save command. With classical databases, all of this is handled in the application tier.

With HANA-optimized planning, all steps (data read, calculations and write-back) are done completely in HANA. The components remain the same:

The plan session orchestrates the data flow between the physical data indexes and the consuming BEx query, the calculations in planning functions, or the disaggregation in the query. The data is read via projections at the level of aggregation demanded. The calculations are applied and the result is written back into a delta buffer within HANA, which is then the subject of further data requests. With this, all mass data operations remain within HANA, and only query-relevant data and metadata are exchanged between the application tier and HANA, leading to a significant reduction of IO costs. In addition, the columnar storage and parallel processing provide superior performance.

As a great benefit of this design, the complete user experience remains untouched. This is true for the end-user clients (e.g. the BEx suite, Advanced Analysis for Office) as well as for the modeling UI (ABAP planning modeler) and all existing BW-IP models, i.e. there is no need to migrate BW-IP scenarios to run on HANA. Adjustments might be considered, though, to optimize the HANA usage, since not the complete BW-IP feature set can be executed in HANA today (see note 1637199). The other way around, all capabilities offered in HANA are available in the ABAP runtime as well. This allows toggling between two operation modes of BW Integrated Planning on HANA:

Coming from an existing BW 7.x installation (A), the upgrade comprises a simple upgrade to BW 7.30 SP5 on the existing database and a subsequent database migration to HANA. Here BW-IP leverages the SQL interface of HANA, leading to superior read performance, while plan calculations are still executed in ABAP (B). Their execution in HANA can be enabled by activating the Planning Applications Kit, switched on via a simple flag (see note 1637199) (C). The Planning Applications Kit leverages the calculation and planning engines built into HANA to process the plan calculations with the best possible performance. This way the Planning Applications Kit combines the feature-rich capabilities of BW-IP with the superior performance of SAP HANA.

Finally let me summarize the relation between BW-IP and the planning applications kit (PAK).

 

                    | BW-IP     | Planning applications kit (PAK)
End user UI         | identical | identical
Modeling tools      | identical | identical
Feature set         | identical | identical
Full HANA optimized | no        | partially 1)
Further investment  | no        | yes
License             | no        | yes 2)

 

1) SAP NetWeaver BW 7.30 SP5
2) License required for SAP BusinessObjects Planning and Consolidation, version for SAP NetWeaver

 

 

Uwe Fischer is a development manager for the SAP BW analytic server.


Comments
  • License
    2011-11-28 10:15:34 Ethan JEWETT

    This blog was really clear and helpful, but it ends with a bombshell. Am I reading this right? If you want to use the "in-memory" planning engine for BW IP you need to purchase a license for BPC version for Netweaver?
    • License
      2011-11-28 11:59:38 Henrique Pinto (SAP Employee)

      Yeah, the text is ambiguous. "License for BPC" can be interpreted as:


      1) you need to license BPC in order to use PAK;


      2) you need to license BPC if PAK is going to be used for BPC frontend.


      2) is highly unlikely, though, since BPC frontend is not mentioned anywhere in the blog, and the existing communication talk about BPC over HANA only on early 2013.


      Additionally, if you read the aforementioned note (https://service.sap.com/sap/support/notes/1637199), it states:


      "Use of the ABAP Planning Applications Kit requires a license for the following SAP functionality: 'SAP BusinessObjects Planning and Consolidation, version for SAP NetWeaver'."


      which could also be interpreted either as 1) or 2) above.


      As mentioned, I personally interpreted it as 1).


      BR,
      Henrique.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/27521%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/SAP HANA | 2011. 11. 29. 13:30

Dealing with R and HANA
Alvaro Tejada Galindo (SAP Labs)
Posted on Nov. 28, 2011 05:03 PM in Analytics, Beyond SAP, In-Memory Business Data Management, Open Source

 
 

First things first... what's "R"? Simply put, it is a programming language and software environment for statistical computing and graphics. More information can be found here: R on Wikipedia

 

I have coded in many programming languages, some of them very commercial and some of them little known, but I have to say that, of all of them, "R" is one of the weirdest and most awesome languages I have ever played with... and it has an amazing repository of custom add-ons.

 

If you have read the HANA Pocketbook you will have noticed that there's a reference to "R" on page 59. Now, that kind of integration between "R" and HANA hasn't been developed yet, but that doesn't mean we can't get our hands dirty doing some research and development.

 

What I did for this example was simply to display the information from my Analytic View in HANA and export it as a CSV file. From there, it's easy to import it into "R" and start doing some nice things. (The idea is that we should be able to code "R" straight in the HANA environment... or at least that's how I think it's going to be...)

 

image

 

image

 

image

 

The first example that we're going to build in "R" is a simple pie chart, using the information from the FORCURAM and CARRNAME fields.

 

image

 

In this example, we're basically reading the CSV file, including the header, and doing an aggregation of the two fields we want to work with. After that, it's just a matter of passing the values and the names and calling the pie function.
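
Since the code itself was only shown as a screenshot, here is a rough reconstruction in R of the steps just described; the CSV file name is made up, while FORCURAM and CARRNAME are the fields mentioned above:

# read the exported Analytic View data, including the header row
flights <- read.csv("analytic_view_export.csv", header = TRUE)

# aggregate the amounts (FORCURAM) by carrier name (CARRNAME)
totals <- aggregate(flights$FORCURAM, by = list(CARRNAME = flights$CARRNAME), FUN = sum)

# pass the values and the names and draw the pie chart
pie(totals$x, labels = totals$CARRNAME, main = "FORCURAM by CARRNAME")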

 

image

 

The next example is a little bit more complex... and uses a custom package called wordcloud (Word Clouds).

 

image

 

Here, we have to load the required libraries, read the CSV file, do the aggregation, create a matrix with the aggregation values, sort the matrix, create a new vector, get its length, create an array containing the names, and finally assign the values and call the wordcloud graphic method...
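
Again, the actual code was in the screenshot, so the following is only a sketch of that sequence under the same assumptions as before (hypothetical CSV export, wordcloud package from CRAN); it condenses the matrix and vector handling into a sorted data frame:

# load the required libraries
library(wordcloud)
library(RColorBrewer)

# read the CSV file and aggregate FORCURAM by CARRNAME
flights <- read.csv("analytic_view_export.csv", header = TRUE)
totals <- aggregate(flights$FORCURAM, by = list(CARRNAME = flights$CARRNAME), FUN = sum)

# sort by the aggregated values, then pass names and values to wordcloud()
totals <- totals[order(-totals$x), ]
wordcloud(words = totals$CARRNAME, freq = totals$x,
          colors = brewer.pal(8, "Dark2"), random.order = FALSE)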

 

image

 

Hope you like it...and stay tuned for more "R"...

 

Alvaro Tejada Galindo is a Development Expert, Scripting Languages Geek, programming books author, Geek Comics author and SAP Mentor alumnus.



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/27548%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/SAP HANA | 2011. 11. 29. 13:28

How to Best Leverage SAP BusinessObjects BI 4.0 on SAP HANA 1.0 - Webinar Presentation

Patrice Le Bihan, Presentation (PDF, 2 MB), 19 October 2011
  •  Overview

In this session you will learn how to best integrate SAP BusinessObjects 4.0 with SAP HANA 1.0. We will cover the capabilities and implementation options for not only reporting but also data modeling, security and other deployment considerations.

Posted by AgnesKim