Technique/SAP BO2013. 1. 10. 13:35

dashboard_html5.png

Figure 1: Select Preview in Mobile to generate the temporary files

 

With SAP BusinessObjects Dashboards 4.0 SP5 (Xcelsius 2011), you have the ability to preview and publish an HTML5 version rather than Flash. However, I am not as interested in publishing to the platform for consumption on mobile devices as I am in retrieving the raw HTML5 files.

 

While poking around, I discovered that you can grab the previewed version of your dashboard in HTML5 format by searching your temporary files. The location of the dashboard preview is available in a randomized folder under:

 

C:\Users\<username>\AppData\Local\Temp\

 

Search your Temp directory and sub-directories for a file called dashboard.html. Open the directory of the file and you'll find the associated files as well. Save them off and test to your heart's delight.
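If you'd rather script the grab than hunt through the folders manually, here is a small sketch of my own (not from the original post): it finds the randomized preview folder by searching for dashboard.html and copies the whole folder out. It assumes a Unix-style shell (e.g. Git Bash or Cygwin on Windows), and the `grab_dashboard_preview` helper name is made up for illustration.

```shell
# Hypothetical helper: locate the HTML5 preview via dashboard.html and copy
# the whole randomized folder out before exiting preview mode.
grab_dashboard_preview() {
  temp_dir="$1"   # e.g. "$LOCALAPPDATA/Temp" under Git Bash
  dest="$2"       # where to save the files
  html=$(find "$temp_dir" -name dashboard.html 2>/dev/null | head -n 1)
  if [ -n "$html" ]; then
    mkdir -p "$dest"
    cp -R "$(dirname "$html")/." "$dest"   # dashboard.html plus its associated files
    echo "copied"
  else
    echo "not-found"   # the files are cleaned up as soon as you leave preview
  fi
}
```

For example, `grab_dashboard_preview "$LOCALAPPDATA/Temp" ./dashboard_backup` while the preview is still open.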

 

dashboard_html5_output.png

Figure 2: Search for the dashboard.html file in your temporary directory to get the files for your HTML5 Dashboard

 

Note: You must be in preview mode in Dashboards for the files to be present. As soon as you exit the preview, the files are cleaned up.

 

dashboard_html5_ie.png

Figure 3: Look Ma! A non-Flash Dashboard in Internet Explorer

 

Now you have all of the files, presumably, to test out an HTML5 version of your dashboard on the desktop and other devices without needing to publish to the platform for mobile devices.

 

Check out Timo's blog post on other topics regarding HTML5 Dashboards: Making A Mobile HTML5 Dashboard: Follow-Up Q&A

 

Also, check out Tammy's blog post for a good overview of the capabilities and requirements too: Mobilizing Your Dashboards: ASUG Webcast

 

You can follow me on Twitter: @jwarbington



http://scn.sap.com/community/bi-dashboards/blog/2012/11/24/how-to-get-the-html5-version-of-your-dashboard-without-publishing-to-mobile

Posted by AgnesKim
Technique/Others 2012. 7. 8. 09:55

RSAP, Rook and ERP

Posted by Alvaro Tejada Galindo  in Scripting Languages on Jul 6, 2012 4:39:03 AM

As I wrote in my blog Analytics with SAP and R (Windows version), we can use RSAP to connect to our ERP system and play with the data.

 

This time I wanted, of course, to keep exploring the capabilities of RSAP, but using something else. As everybody knows, I love micro-frameworks, and R is no exception...gladly, Rook came to the rescue...

 

Rook is a simple web server that runs locally and allows us to do some really nice things...enough talk...let's go to the source code...

 

RSAP_Rook.R

library("RSAP")
require("Rook")
library("reshape")   # colsplit() used below comes from the reshape package

setwd("C:/Blag/R_Scripts")

conn <- RSAPConnect("sap.yml")
parms <- list('DELIMITER' = ';',
              'FIELDS' = list(FIELDNAME = list('CARRID', 'CARRNAME')),
              'QUERY_TABLE' = 'SCARR')
res <- RSAPInvoke(conn, "RFC_READ_TABLE", parms)
#RSAPClose(conn)   # keep the connection open; it is reused below
scarr <- res$DATA
flds <- sub("\\s+$", "", res$FIELDS$FIELDNAME)   # trim trailing blanks from field names
scarr <- data.frame(colsplit(scarr$WA, ";", names = flds))

 

parms <- list('DELIMITER' = ';',
              'FIELDS' = list(FIELDNAME = list('CITYFROM')),
              'QUERY_TABLE' = 'SPFLI')
res <- RSAPInvoke(conn, "RFC_READ_TABLE", parms)
#RSAPClose(conn)
spfli <- res$DATA
flds <- sub("\\s+$", "", res$FIELDS$FIELDNAME)
spfli <- data.frame(colsplit(spfli$WA, ";", names = flds))
spfli <- unique(spfli)

 

get_data <- function(p_carrid, p_cityfrom){
  parms <- list('DELIMITER' = ';',
                'FIELDS' = list(FIELDNAME = list('CITYTO', 'FLTIME')),
                'OPTIONS' = list(TEXT = list(p_carrid, p_cityfrom)),
                'QUERY_TABLE' = 'SPFLI')
  res <- RSAPInvoke(conn, "RFC_READ_TABLE", parms)
  #RSAPClose(conn)   # left open so repeated form submissions keep working
  spfli <- res$DATA
  flds <- sub("\\s+$", "", res$FIELDS$FIELDNAME)
  if(length(spfli$WA) > 0){
    spfli <- data.frame(colsplit(spfli$WA, ";", names = flds))
  }
  return(spfli)
}

 

newapp <- function(env){
  req <- Rook::Request$new(env)
  res <- Rook::Response$new()
  res$write('<form method="POST">\n')
  res$write('<div align="center"><table><tr>')
  res$write('<td>Select a carrier: <select name=CARRID>')
  for(i in 1:length(scarr$CARRID)) {
    res$write(sprintf('<OPTION VALUE=%s>%s</OPTION>', scarr$CARRID[i], scarr$CARRNAME[i]))
  }
  res$write('</select></td><td>')
  res$write('Select a city: <select name=CITYFROM>')
  for(i in 1:length(spfli$CITYFROM)) {
    res$write(sprintf('<OPTION VALUE=%s>%s</OPTION>', spfli$CITYFROM[i], spfli$CITYFROM[i]))
  }
  res$write('</select></td>')
  res$write('<td><input type="submit" value="Get Flights"></td>')
  res$write('</tr></table></div>')
  res$write('</form>')

  if (!is.null(req$POST())) {
    p_carrid <- req$POST()[["CARRID"]]
    p_cityfrom <- req$POST()[["CITYFROM"]]
    flights_from <- paste('Distance in Flights from ', p_cityfrom, sep = '')

    # build the RFC_READ_TABLE OPTIONS (WHERE clause) strings
    p_carrid <- paste('CARRID = \'', p_carrid, '\'', sep = '')
    p_cityfrom <- paste('AND CITYFROM = \'', p_cityfrom, '\'', sep = '')

    spfli <- get_data(p_carrid, p_cityfrom)

    if(length(spfli$CITYTO) > 0){
      png("Flights.png", width = 800, height = 500)
      plot(spfli$FLTIME, type = "n", axes = FALSE, ann = FALSE)
      lines(spfli$FLTIME, col = "blue")
      points(spfli$FLTIME, pch = 21, bg = "lightcyan", cex = 1.25)
      box()
      xy <- length(spfli$CITYTO)
      axis(2, col.axis = "blue", las = 1)
      axis(1, at = 1:xy, lab = spfli$CITYTO, col.axis = "purple")
      title(main = flights_from, col.main = "red", font.main = 4)
      dev.off()
      res$write("<div align='center'>")
      res$write(paste("<img src='", server$full_url("pic"), "/", "Flights.png'", "/>", sep = ""))
      res$write("</div>")
    }else{
      res$write("<p>No data to select...</p>")
    }
  }
  res$finish()
}

 

server = Rhttpd$new()
server$add(app = newapp, name = "Flights")
server$add(app = File$new("C:/Blag/R_Scripts"), name = "pic")
server$start()
server$browse("Flights")
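For completeness, the `sap.yml` file read by `RSAPConnect` holds the connection parameters. A minimal sketch with placeholder values; the key names are as I recall them from the RSAP package, so treat them as an assumption and check the RSAP README for your version:

```yaml
# sap.yml - RSAP connection parameters (placeholder values, assumed key names)
ashost: sapserver.example.com   # application server host
sysnr: "00"                     # system number
client: "001"
user: developer
passwd: secret
lang: EN
```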

 

This is the result...

 

RSAP_Rook_001.png

RSAP_Rook_002.png

RSAP_Rook_003.png

RSAP_Rook_004.png

 

As you can see, we get the data from SAP to fill both SELECTs and then run the query. We generate a PNG graphic showing the distance from the chosen City From to each City To, and then reference it from our web page to show it on the screen.

 

As you can see, RSAP gives us a lot of opportunities we can take advantage of by simply applying some effort and imagination. Hope this boosts your R interest!



http://scn.sap.com/community/scripting-languages/blog/2012/07/06/rsap-rook-and-erp

Posted by AgnesKim
Technique/SAP HANA2012. 7. 5. 21:45

As I mentioned in a recent blog about the In-memory Computing Conference 2012 (IMCC 2012) event, I found the presentation from Xiaoqun Clever (Corporate Officer, SAP AG; Senior Vice President, TIP Design & New Applications; President of SAP Labs China) entitled “Extreme Applications – neuartige Lösungen mit der SAP In-memory Technologie“ (“Extreme Applications – novel solutions with SAP in-memory technology”) interesting, but didn’t go into details. In this blog, I want to describe what I found so interesting about the presentation.

 

The first slide that I found intriguing was the one that describes the various scenarios for HANA. You can visualize the evolution of HANA over time as moving from left to right: Accelerator -> Database -> Platform. The first thing that I noticed is that “Cloud” is not involved at all. Indeed, it is only mentioned in a single slide later in the presentation - it almost doesn’t appear to play a major role in such HANA-related endeavors.

 

image001.jpg

 

The use case with HANA as a Database and the involvement of NetWeaver components would also cover the current intentions regarding the NetWeaver Cloud offering where HANA will be present as one possible data source. In this area, I will be curious to see what sort of functionality overlap (user management, etc) will exist between the HANA Platform and the NetWeaver Cloud as well as how the two environments will integrate with one another.

 

A later slide describing the scenario “HANA als Platform” is actually titled “Native HANA Applikationen”. The applications listed on the slide are either OnPremise or OnDemand (usually associated with the HANA AppCloud) applications. I started to consider what the portrayal of these applications as being “native” meant for the relationship between HANA and the Cloud. To be truthful, I’m starting to get the impression that we are slowly seeing a merging of the OnPremise and OnDemand worlds in terms of HANA runtime environments. Native applications might be able to run in both worlds since the underlying platform is fundamentally the same. Thus, other considerations (costs involved, customer presence, etc) might be important in making decisions regarding where such applications are hosted.

image002.jpg

 

If we take this assumption forward a few years, you might think that a HANA-Platform-based Business Suite running as a native HANA application (and thus, perhaps, easily available in an OnDemand setting) might be an option, but another slide in the same presentation shows that a HANA-based Business Suite would be distinct from native HANA apps.

 

image003.jpg

 

What I liked about the presentation is that it reinforced my understanding of the differences between scenarios involving NetWeaver functionality and those based on native HANA functionality. What I don’t fully understand is the potential use of native functionality in scenarios where NetWeaver functionality (for example, NetWeaver Cloud) is used. Is this possible? Planned? As things evolve, I’ll be waiting to see if the scenarios with HANA as a Database can hold their own against HANA as a Platform scenarios.



http://scn.sap.com/community/cloud-computing-and-on-demand-solutions/blog/2012/07/05/do-native-hana-applications-imply-a-future-without-a-cloud-onpremise-deployment-distinction

Posted by AgnesKim
Technique/SAP BO2012. 6. 15. 02:04

Hello Everybody

This is my first blog on SCN, and I would like to set down some of the limitations I faced when trying to use Web Intelligence reporting in BI 4.0 with the new option of connecting directly to a BEx query via BICS. I had used .UNV universes in the 3.x environment, and below are some of the limitations that immediately became visible with the BICS approach.

Very important ones:

1>     BEx queries built using a sub-query technique: in this case, when “Allow External Access to this Query” is checked, the report can’t even be executed in BEx (or BEx Web), and therefore can’t be used on the BOBJ platform.

2>     Slicing and dicing dimensions in the report usually requires refreshing the report: this is due to the inherent nature of the database-delegated measure setting in the virtual universe generated at the back end at run time; the setting can’t be changed because of that. We probably have to live with this even if we don’t want the key figure calculation to be done by BW for every slice and dice in WebI.

3>     The following filtering techniques, which work for a .UNV universe, don’t work with BICS:

a.       Object-based filter (a filter comparing the same dimension)

b.      Sub query

c.       Feeding the output of one query to another

4>     Combined query techniques (Minus, Union, etc.) – the option for this doesn’t seem to be enabled either

5>     Drilling – while I absolutely love the way hierarchies perform with the new BICS option, good old drilling doesn’t work with BICS, and I think this is huge

 

Some others:

6>     Display attributes defined in BEx with a changed field description, ordering, etc. have no bearing on the WebI report. All display attributes defined at the back-end InfoObject level are available, which can be confusing for general users. This isn’t necessarily a defect, but depending on the type of end user it can cause confusion.

7>     Due to the inherent nature of the BICS approach, we can’t do the following... but I see this more as a different way of doing things than as a limitation:

a.       Can’t have predefined filters as with .UNV

b.      Can’t group dimensions into logical groups

Since I see WebI as the most important self-service and report-authoring tool for the business rather than IT, these can cause problems and pain points for the use of WebI with BEx/BICS down the line.

There are other smaller ones, like query stripping, etc. Let’s see if you have faced similar issues, and let’s have a healthy discussion about them.

Thanks

Arun




http://scn.sap.com/community/businessobjects-web-intelligence/blog/2012/06/14/some-limitation-in-web-intelligence-when-using-directly-with-bex-querybics-in-bi-40


Posted by AgnesKim
Technique/SAP HANA2012. 5. 31. 09:42

Secure Sockets Layer (SSL) with HANA and BI4 Feature Pack 3 requires configuration on both the HANA server and the BI4 server.  The following steps show how to configure SSL using OpenSSL and a certificate obtained from a Certificate Authority (CA).

 

OpenSSL Configuration

 

This blog covers the OpenSSL Crypto Library; however, HANA can also be configured using the SAP Crypto Library.

 

Confirm that OpenSSL is installed

 

shell> rpm -qa | grep -i openssl

openssl-0.9.8h-30.34.1

libopenssl0_9_8-32bit-0.9.8h-30.34.1

openssl-certs-0.9.8h-27.1.30

libopenssl0_9_8-0.9.8h-30.34.1

 

Confirm that OpenSSL is 64-bit

 

shell> file /usr/bin/openssl

openssl: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), stripped

 

Confirm there is a symlink to the libssl.so file

 

ssl_5.png

 

If not, create one as the root user

 

shell> ln -s /usr/lib64/libssl.so.0.9.8 /usr/lib64/libssl.so

 

SSL Certificates

 

This blog won’t go into the details of how SSL works, but in generic terms you’ll need to create a Certificate Signing Request (CSR) on the HANA server and send it to a CA.  In return, the CA will give you a Signed Certificate and a copy of their Root CA Certificate.  These then need to be set up with HANA and the BI4 JDBC and ODBC drivers.

 

Creating the Certificate Signing Request

 

shell> openssl req -new -nodes -newkey rsa:2048 -keyout Server_Key.key -out Server_Req.csr -days 365

 

Fill out the requested information according to your company:

 

-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

 

This will create two files

 

  • Key: Server_Key.key
  • CSR: Server_Req.csr

 

The CSR needs to be sent to the CA, which in turn will give you a signed certificate and their Root CA Certificate.

 

Convert the Root CA Certificate to PEM

 

The Root CA Certificate may come in DER format (.cer extension), but HANA requires the certificate in PEM format.  Therefore, we need to convert it using the command

 

shell> openssl x509 -inform der -in CA_Cert.cer -out CA_Cert.pem

 

HANA SSL Configuration

 

Copy both the Signed Certificate and the Root CA Certificate to the HANA server.  For HANA SSL to work, we need to create two files:

 

  • key.pem
  • trust.pem

 

The key.pem key store file contains the certificate chain, which includes your server’s key (Server_Key.key), the CA’s Signed Certificate, and the Root CA Certificate, whereas the trust.pem trust store file contains only the Root CA Certificate.

 

Create the key.pem and trust.pem trust stores

 

key.pem

 

shell> cat Server_Cert.pem Server_Key.key CA_Cert.pem > key.pem

 

trust.pem

 

shell> cp CA_Cert.pem trust.pem
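Before copying the files over, it can be worth sanity-checking that the signed certificate actually verifies against the Root CA. This little helper is my own suggestion (not part of the original steps) and assumes the file names used above:

```shell
# Hypothetical check: does the server certificate chain up to the Root CA?
# On success, openssl prints something like "Server_Cert.pem: OK".
verify_chain() {
  ca_file="$1"     # e.g. trust.pem (the Root CA certificate)
  cert_file="$2"   # e.g. Server_Cert.pem (the CA-signed server certificate)
  openssl verify -CAfile "$ca_file" "$cert_file"
}
```

shell> verify_chain trust.pem Server_Cert.pem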

 

Copy the files to the user's home directory

 

In the user's home directory, create a .ssl directory and place both the key.pem and trust.pem files there.

 

ssl_6.png

 

Configure the certificates in HANA

 

Once the key.pem and trust.pem files have been created they need to be configured in HANA.

 

In HANA Studio go to

 

  • Administration
  • Configuration tab
  • Expand indexserver.ini
  • Expand communication
  • Configure the entries related to SSL

 

ssl_!.png

 

Start and Stop HANA to pick up the SSL configuration

 

  • HDB stop
  • HDB start

 

HANA Studio Configuration

When setting up the connection to HANA, check the option 'Connect using SSL', as seen below.

ssl_7.png

 

To confirm the connection has SSL, look for the lock icon on the server icon, as seen below.

 

ssl_8.png

 

BI4 Feature Pack 3 SSL Configuration

 

SSL in BI4 needs to be configured for the HANA connectivity you plan to use.

 

JDBC Configuration

 

For JDBC SSL configuration, we’ll need to add the trust.pem trust store to the Java Key Store (JKS) using the keytool utility provided by the JDK/JRE.  This is done via the command line.  Change the paths for your own configuration:

 

Add trust.pem to the JKS

 

C:\Documents and Settings\Administrator>"C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\jre\bin\keytool.exe" -importcert -keystore "C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\jre\lib\security\cacerts" -alias HANA -file trust.pem

 

You will be prompted for the keystore password.  The default password is: changeit

 

When prompted to 'Trust this certificate?', enter yes.  The alias can be any value; however, it must be unique within the keystore.

 

Confirm that your certificate has been added to the keystore

 

C:\Documents and Settings\Administrator>"C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\jre\bin\keytool.exe" -list -keystore "C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win32_x86\jre\lib\security\cacerts" -alias HANA

 

If successful, you will see trustedCertEntry in the output, as below

 

ssl_11.png

 

Information Design Tool  (IDT) Configuration

 

In IDT, the connection needs to be set up with the JDBC driver property encrypt=true so that the connection uses SSL when connecting to HANA,

 

idt.png

 

ODBC Configuration

 

Once the HANA client driver has been installed, you can set up an ODBC connection for HANA.  To connect via SSL, check the box 'Connect using SSL', as below:

ssl_2.png

 

If you added any 'Special property settings', they won't be displayed in the driver configuration.  To view them, launch the Windows Registry Editor and go to the key:

 

  • HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\<Your Data Source Name>

 

ssl_4.png

 

Installing the CA Root Certificate

 

Depending on which CA signed your certificate, you may run into SSL errors.  For example, in Crystal you may see this error:

 

cr1.png

 

To resolve this, install the CA Root Certificate so that it is trusted by the server.

 

  • Copy the CA Root Certificate to the machine where the error is coming from

 

  • Double click on the certificate and click 'Install Certificate'

cr5.png

  • Click next

 

cr4.png

  • Select the first option and click next

 

cr3.png

  • Click finish

cr6.png

 

Confirming if SSL is being used

 

Using a tool like Wireshark, the communication between server and client can be traced, as seen below, to verify that SSL is being used.

 

ssl_10.png




http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/30/ssl-with-hana-and-bi4-feature-pack-3

Posted by AgnesKim
Technique/SAP HANA2012. 5. 29. 21:32

Two major improvements related to Enterprise Data Warehousing that can be achieved by migrating SAP NetWeaver BW to the SAP HANA database platform are (1) the dramatic improvement in data load times for DataStore Objects, the data containers within BW, and (2) significantly boosted performance when reporting from them. With the latest BI Content releases, SAP NetWeaver 7.30 BI Content 7.37 SP 01 or SAP NetWeaver 7.31 BI Content 7.47 SP 01, a large fraction (about two thirds) of the DataStore Objects delivered so far with BI Content have been prepared for HANA-optimization.

 

When you copy these HANA-prepared DataStore Objects (DSOs) from the delivered to the active version, they are created automatically as SAP HANA-optimized DataStore Objects, side-stepping the manual conversion step from standard to HANA-optimized DSO. The delivery of DataStore Objects prepared for HANA-optimization marks the next logical step towards reducing total cost of administration, after the migration of all relevant data flows to SAP BW 7.x technology; that migration is an important prerequisite for HANA-optimization of the involved DataStore Objects (migrated data flows were delivered with SAP NetWeaver 7.30 BI Content 7.36 SP 02 or SAP NetWeaver 7.31 BI Content 7.46 SP 02). If DataStore Objects contain data, you cannot avoid the manual conversion step to HANA-optimization. Therefore, you benefit most from the HANA-prepared DataStore Objects when you copy new data flows from the delivered to the active version in your BW system powered by SAP HANA.

 

You can benefit from automatic creation of HANA-optimized DataStore Objects during BI Content activation if you are using (1) SAP HANA database as of release SAP HANA 1.0 Support Package 03 and (2) SAP NetWeaver BW 7.3, Support Package Stack 07 or SAP NetWeaver BW 7.3, including Enhancement Package 1, Support Package Stack 04.

 

Find the complete list of all 1025 HANA-optimized DataStore Objects attached to note 1708668.




http://scn.sap.com/community/data-warehousing/business-content-and-extractors/blog/2012/05/29/shipment-of-bi-content-for-sap-netweaver-bw-powered-by-sap-hana-datastore-objects-are-now-prepared-for-sap-hana-optimization

Posted by AgnesKim
Technique/SAP HANA2012. 5. 23. 17:38

Hello All,

 

As we are getting new versions of HANA, it is important to know what changes are coming in with the latest versions.

 

Here I have listed the new and changed features in the HANA Studio SPS04 version. You can get these details from the SAP HANA Modeling Guide available on SMP as well.

 

Loading data from flat files (new)

You use this functionality to upload data from flat files, available on a client file system, to the SAP HANA database. The supported file types are .csv, .xls, and .xlsx.

This approach is very handy for uploading your content into the HANA database. For more details, refer to http://scn.sap.com/docs/DOC-27960

 

Exporting objects using SAP Support Mode (new)

You use this functionality to export an object, along with other associated objects and data, for SAP support purposes. This option should only be used when requested by SAP support; it helps SAP debug any issues reported for particular views.

 

Input parameters (new)

Used to provide input for the parameters within stored procedures, which are evaluated when the procedure is executed. You use input parameters as placeholders during currency conversion and in formulas like calculated measures, and calculated attributes.

 

Import/export server and client (changed)

• Exporting objects using Delivery Units (earlier known as Server Export):

Function to export all packages that make up a delivery unit and the relevant objects contained therein, to the client or to the SAP HANA server filesystem.

 

• Exporting objects using Developer Mode (earlier known as Client Export):

Function to export individual objects to a directory on your client computer. This mode of export should only be used in exceptional cases, since this does not cover all aspects of an object, for example, translatable texts are not copied.

 

• Importing objects using Delivery Unit (earlier known as Server Import):

Function to import objects (grouped into a delivery unit) from the server or client location available in the form of .tgz file.

 

• Importing objects using Developer Mode (earlier known as Client Import):

Function to import objects from a client location to your SAP HANA modeling environment.


Variables (changed)

Variables can be assigned specifically to an attribute, which is used for filtering via the where clause. At runtime, you can provide different values to the variable to view corresponding set of attribute data. You can apply variables on attributes of analytic and calculation views.

 

Hierarchy enhancements (changed)

Now you can create a hierarchy between the attributes of a calculation view in addition to attribute views. The Hierarchy node is available in the Output pane of the Attribute and Calculation view.

 

Activating Objects (changed)

You activate objects available in your workspace to expose the objects for reporting and analysis.

Based on your requirement, you can perform the following actions on objects:

• Activate

• Redeploy

• Cascade Activate

• Revert to Active

 

Auto Documentation (changed)

Now the report also captures the cross references, data foundation joins, and logical view joins.

 

Calculation view (changed)

• Aggregation node: used to summarize data of a group of rows by calculating values in a column

• Multidimensional reporting: if this property is disabled you can create a calculation view without any measure, and the view is not available for reporting purposes

• Union node datatype: you can choose to add unmapped columns that just have constant mapping and a data type

• Column view now available as data source

• Filter expression: you can edit a filter applied on aggregation and projection view attributes using filter expressions from the output pane that offers more conditions to be used in the filter including AND, OR, and NOT

 

Autocomplete function in SQL Editor (changed)

The autocomplete function (Ctrl+Space) in the SQL Editor shows a list of available functions.

 

Hope this helps.

 

Rgds,

Murali

Posted by AgnesKim
Technique/SAP BW2012. 5. 23. 17:36

SAP BW 7.3 Hybrid Provider

 

A Hybrid Provider consists of a DataStore Object and an InfoCube, with an automatically generated data flow in between.

• It combines historic data with the latest delta information when a query based on it is executed.

• The DSO can be connected to a real-time data acquisition DataSource/DTP.

• If the DataSource can provide appropriate delta information in direct access mode, a VirtualProvider can be used instead of the DSO.

There are two types of Hybrid Providers:

  1. Hybrid Providers based on direct access
  2. Hybrid Providers based on a DataStore object

 

Hybrid Providers based on Direct Access

 

A Hybrid Provider based on direct access is a combination of a VirtualProvider and an InfoCube. The benefit of this InfoProvider is that it provides access to real-time data without actually doing any real-time data acquisition.

At query runtime, the historic data is read from the InfoCube, while the near real-time, latest up-to-date data is read from the source system using the VirtualProvider.

 

Hybrid Providers based on a DataStore object

 

The Hybrid Provider based on a DSO is a combination of a DSO and an InfoCube. Once this Hybrid Provider is created and activated, the objects (DTP, transformation, and process chain) used for the data flow from the DSO to the InfoCube are created automatically.

 

You should use a Hybrid Provider based on a DSO as the InfoProvider in scenarios where data needs to be loaded using real-time data acquisition. The DTP for real-time data acquisition from a real-time-enabled DataSource to the DSO loads the data into the DSO in delta mode, and the daemon used for real-time data acquisition immediately activates the data. When this daemon is stopped, the data is loaded from the change log of the DSO into the InfoCube. The InfoCube acts as storage for the historic data from the DSO.

 

To make data held within a DSO available for reporting in BI7, a number of steps are needed: create the DSO, InfoCube, transformation/DTP, and MultiProvider, store it in a BWA, connect them all up, and then schedule and monitor the load jobs.

 

A Hybrid Provider takes a DSO and does all of this for you, removing substantial development and maintenance effort. Just load your data into a DSO, create a Hybrid Provider, and start reporting. You can even build your Hybrid Provider on a real-time data acquisition (RDA) DataSource, which could potentially provide near real-time reporting from a BWA.

 

A typical usage scenario could be that you want to extract your purchase orders from R/3 and make them available for reporting. Using a Hybrid Provider, as soon as the data is loaded into the DSO it becomes available for reporting, with all the benefits of an InfoCube and BWA.

 

Real-time Data Acquisition

 

Real-time data acquisition enables data to be updated in real time. As data is created in the source system, it is immediately updated in the PSA or the delta queue. Special real-time-enabled InfoPackages and DTPs are used to load the data into InfoProviders.

To load real-time data from a source system into SAP BW, the DataSource must be real-time enabled. Most standard DataSources are real-time enabled; however, we can also create a generic DataSource as real-time enabled.

 

Step by step process of creating Hybrid Provider:

Step 1: First, create an Init InfoPackage for the DataSource and schedule it, as shown in the screenshot below.

 

Untitled1.png

 

Step 2: After creating the Init InfoPackage, create an RDA InfoPackage.

 

Untitled2.png

 

Step 3: Now the DataSource is ready. We have to create a Hybrid InfoProvider combining the DSO and the InfoCube, so first we create an InfoArea.

 

Untitled3.png

 

Step 4: Go to the Data Flow screen, found in the left-hand panel of RSA1.

 

Untitled4.png

 

Step 5: Navigate to the InfoArea, right-click, and choose “Create Data Flow”.

 

Untitled5.png

 

Step 6: Drag and drop the DataSource icon from the sidebar onto the data flow, then right-click the icon and click “Use Existing Object” to select the DataSource.

 

Untitled6.png

 

Step 7: In the data flow panel, place the cursor on the DataSource and right-click “Show Direct Dataflow Before”. This automatically shows the relevant InfoPackages for the DataSource.

 

Untitled7.png

 

 

Step 8: Now remove the Init InfoPackage from the data flow; the flow will then look as shown below.

 

Untitled8.png

 

Step 9: Drag and drop a DSO from the side menu, right-click, and choose “Create”. Create a new DSO, assign the data and key fields, then save and activate it.

 

Step 10: Now drag and drop the Hybrid Provider from the sidebar, right-click, and choose “Create”. Create a new Hybrid Provider based on the DSO; the technical name of the provider is HYPD. Assign the previously created DSO to this Hybrid Provider.

 

Untitled9.png

 

While creating the Hybrid Provider, a warning appears indicating that the DSO can no longer be used as a standalone DSO; it will behave only as part of the Hybrid Provider. The data fields and key fields of the DSO are automatically included in the Hybrid Provider.

 

 

Step 11: Once the HybridProvider is created, the system generates an InfoCube underneath it. Note that the HybridProvider and the InfoCube inherit the description of the DSO, although you have the flexibility to give the HybridProvider a new name during creation.

 

 

Untitled10.png

 

 

Step 12: Click the "Complete Data Flow" icon, as shown below, so that the system automatically creates the transformation and the DTP for the data flow; then activate the flow.

 

Untitled12.png

 

 

Step 13: Once the transformation and the DTP are active, assign the RDA InfoPackage and the RDA DTP to an RDA daemon. Right-click the RDA InfoPackage and select "Assign RDA Daemon"; this takes you to the RDA monitor. Create a daemon with the Create button in the top-left corner and assign both objects to it.

 

Untitled13.png

 

 

Step 14: Create the RDA daemon. In the daemon settings, specify the technical number and a short description; the period defines the interval after which the daemon repeats its execution.

 

Untitled14.png

 

Both the InfoPackage and the DTP are now listed under the RDA daemon.
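Conceptually, the RDA daemon is a scheduler that wakes up once per configured period and runs its assigned jobs in order: first the InfoPackage (extraction from the source system) and then the DTP (update into the DSO). The sketch below models that behavior in Python purely for illustration; it is not SAP code, and the names `run_infopackage` and `run_dtp` are hypothetical stand-ins for the assigned objects.

```python
import time

def run_rda_daemon(period_seconds, jobs, cycles):
    """Simplified model of an RDA daemon: every `period_seconds` it
    executes each assigned job (InfoPackage extraction, then DTP) in order.
    `jobs` is a list of zero-argument callables; `cycles` bounds the loop
    so the sketch terminates (a real daemon runs until it is stopped)."""
    log = []
    for cycle in range(cycles):
        for job in jobs:
            log.append((cycle, job()))
        time.sleep(period_seconds)  # wait out the configured period
    return log

# Hypothetical stand-ins for the assigned InfoPackage and DTP:
def run_infopackage():
    return "infopackage: source -> PSA"

def run_dtp():
    return "dtp: PSA -> DSO"

log = run_rda_daemon(period_seconds=0, jobs=[run_infopackage, run_dtp], cycles=2)
```

Each cycle produces one extraction followed by one DTP run, which mirrors why both objects must be assigned to the same daemon in step 13.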

 

 

Step 15: Drill down to the InfoCube in the menu, click the DTP, and then click "Process Chain Maintenance". This opens a system-generated process chain containing the DTP from the DSO to the cube.

 

 

Untitled15.png

 

 

Step 16: Below is the process chain that the system created automatically.

 

Untitled16.png

 

 

Step 17: Go to transaction RSRDA (the RDA monitor). Run the daemon, and real-time data is updated from the source system into the DSO.

 

Untitled17.png

 

 

Untitled18.png

 

 

The new data loaded into the DSO is updated into the InfoCube once this process chain has run.

Below is a successful run of the process chain.

 

Untitled19.png

 

 

In this way, real-time data is updated from the source system into the BW system. Real-time data acquisition works much like delta extraction: whenever users create new data in the source system, it is automatically updated into the BW target system.
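The delta-like behavior can be pictured as a pointer that remembers the last record already extracted, so each daemon cycle picks up only what is new. The following is a minimal conceptual sketch in Python, not how BW implements its delta queues internally; the record layout and the id-based pointer are illustrative assumptions.

```python
def extract_delta(source_rows, pointer):
    """Return only records created after `pointer` (modeled here as a
    simple ascending record id), plus the advanced pointer for the
    next daemon cycle."""
    new_rows = [r for r in source_rows if r["id"] > pointer]
    new_pointer = max([r["id"] for r in new_rows], default=pointer)
    return new_rows, new_pointer

source = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
batch1, ptr = extract_delta(source, pointer=0)   # initial load: both rows
source.append({"id": 3, "amount": 30})           # a user creates new data
batch2, ptr = extract_delta(source, ptr)         # next cycle: only the new row
```

Because the pointer advances after every run, already-transferred records are never extracted twice, which is exactly the property that makes repeated daemon cycles cheap.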





http://scn.sap.com/community/data-warehousing/netweaver-bw/blog/2012/05/23/sap-bw-73-hybrid-provider?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA2012. 5. 17. 21:35

A post that opens with the rather provocative line "HANA is a Great Way to Waste a Lot of MONEY."
Frankly, my vote is that the places adopting HANA right now as the next version of BW are doing exactly that: "throwing money away." (At least domestically; I can't speak for overseas.)

If adopting HANA is run like a version-upgrade project, that is how it will end up. The same goes for new implementations: if the people who did the old BW simply reuse their old MDM modeling without rethinking it, the result is the same.

For me, the arrival of HANA means the list of things I need to study keeps growing,
though I suppose it's also possible to just muddle through somehow.

Anyway, here is an SDN blog written on exactly that point. I agree with it.


-------------------------------------------------------------------------------------------------------------

HANA is a great way to waste a lot of money.

Yes, I'm serious here.

 

If you decide to implement this new platform at your site but just copy your coding 1:1 to HANA, you're going to waste money.

If you buy HANA and don't re-engineer your solutions, you're going to waste money.

If there is no change in how data is processed and consumed with the introduction of HANA to your shop, then you're wasting money. And you're wasting an opportunity here.

 

Moving to HANA is disruptive.

It does mean throwing the solutions in use today overboard.

That's where the pain lies and where the real costs appear: the kind of costs that don't show up on any price list.

 

So why take the pain, why invest so much?

Because this is the chance to enable a renovation of your company IT.

To change how users perceive working with data, and to enable them to do far cleverer things with your corporate data than were thinkable with your old system.

That's the opportunity to be better than your competition and better than you are today.

That's a chance for becoming a better company.

 

This means that your users need to say goodbye to their beloved super-long and super-wide Excel list reports.

To fully gain the advantages HANA can provide, the consumers of the data also need to be led to grow towards better ways of working with data.

Just upgrading to the new Office version to support even more rows in a spreadsheet won't do. It never did.

 

This means your developers need to re-evaluate what they understand about databases.

They have to start over and rewrite the fancy utility scripts that had been so useful on the old platform.

And more importantly: they need to rethink what the users of their systems should be allowed and enabled to do.

Developers will need to give up their lordship over corporate IT. This is going to be a bitter pill to swallow.

 

Just adding a HANA box to your server cabinet is not the silver bullet for your data-processing issues. But seizing this disruptive moment to provide your users with new applications and new approaches to data is what will pay off.

Once again, leaving the 'comfort zone' is what will provide the real gain.

 

So don't waste your money, don't waste your time, and by all means stop creating the same boring systems you've created since you came to IT.

Instead, start over and do something new to become the better company you can be.



------------------------------------------------------------------------------------------------------------- 

http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/17/still-no-silver-bullet?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA2012. 5. 11. 15:38




INTRODUCTION

 

Ever since SAP-HANA was announced a couple of years back, I've been following the discussions and developments in the in-memory database space. In October 2011, Oracle CEO Larry Ellison introduced Oracle Exalytics to compete with SAP-HANA. After reading white papers on both SAP-HANA and Oracle Exalytics, it was obvious they were different: comparing SAP-HANA and Oracle Exalytics is like comparing apples to oranges.

 

On May 8, 2012 I tweeted:

 

SAP Mentor David Hull responded:

 

I was a bit surprised to learn that most people don't have a clue about the difference between Exalytics and SAP-HANA. The difference looked obvious to me, so I realized either I was missing something or they were, and I decided to write this blog. And since this blog compares SAP products with Oracle products, I've decided to use "Oracle DB" instead of the generic term RDBMS.

 

First I'll discuss the similarity between SAP-BW, Oracle Exalytics, and SAP-HANA. At a very high level, they look similar, as shown in the picture below:

Similarity.png

 

As shown, the BW application sits on top of a database, Oracle or SAP-HANA, and the application helps the user find the right data. The similarity ends there.

 

Let us now review how Oracle Exalytics compares with SAP-BW plus the Business Warehouse Accelerator (BWA). As you can see below, there appears to be a one-to-one match between the components of SAP-BW and Exalytics.

 

 

New_BWA_EXA.png

| Steps | SAP-BW | Exalytics | Comments |
| --- | --- | --- | --- |
| 1 & 1a | Data found in BWA and returned to the user | Data found in Adaptive Data Mart and returned to the user | |
| 2 & 2a | Data found in OLAP cache and returned to the user | Data found in Intelligent Cache and returned to the user | This means the data was not found in BWA or the Adaptive Data Mart |
| 3 & 3a | Data found in aggregates and returned to the user | Data found in Essbase cubes and returned to the user | This means the data was not found in (1) the Adaptive Data Mart or BWA and (2) the OLAP cache or Intelligent Cache |
| 4 & 4a | Data found in cubes and returned to the user | | Not sure if Essbase supports aggregates; however, Oracle supports materialized views, which I assume are similar to SAP-BW's aggregates |
 

 

 

The diagram below shows why the Exalytics vs. SAP-HANA comparison is like comparing apples to oranges. In Exalytics, the information users need is pre-created at a certain level of granularity; indeed, one of the best practices in the BW/DW world is to create aggregates upfront to get acceptable response times.

 

In SAP-HANA, however, aggregates are created on the fly: data resides in raw form, and depending on what users need, the application performs the aggregation at runtime and displays the information on the user's screen. This lets users perform analysis in near real time and more quickly.
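The distinction can be made concrete: a pre-aggregated system can only answer the questions its aggregates anticipated, while aggregating raw rows at query time can group by any column the user asks for. A minimal sketch over synthetic data (the column names and figures are invented for illustration):

```python
from collections import defaultdict

# Raw, unaggregated fact rows, as they would sit in an in-memory store.
raw_rows = [
    {"region": "EMEA", "product": "A", "sales": 100},
    {"region": "EMEA", "product": "B", "sales": 50},
    {"region": "APAC", "product": "A", "sales": 70},
]

def aggregate_on_the_fly(rows, group_by, measure):
    """Group raw rows by an arbitrary column at query time -- the
    HANA-style approach: no aggregate needs to exist beforehand."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[group_by]] += row[measure]
    return dict(totals)

# Any grouping works, even ones nobody anticipated when the data was loaded:
by_region = aggregate_on_the_fly(raw_rows, "region", "sales")
by_product = aggregate_on_the_fly(raw_rows, "product", "sales")
```

In the pre-aggregated model, `by_product` would simply be unanswerable unless a product-level aggregate had been built in advance; here both groupings come from the same raw rows.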

 

 

New_EXA_HANA.png

Based on the diagrams shown above, Exalytics appears comparable to SAP's six-year-old BWA technology.

 

SUMMARY

 

Based on the discussion above, the diagram below compares all three products: SAP-BW with BWA, Exalytics, and SAP-HANA.

 

New_BWA_EXA_HANA.png

 

Note: I didn't connect the disk to the HANA DB because it is used primarily for persistence.

 

I wanted to keep this blog simple, so I didn't include a lot of detail. Depending on your questions and thoughts, I plan to either update this blog or write a new one.







http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/11/bwa-exalytics-and-sap-hana?utm_source=twitterfeed&utm_medium=twitter



Posted by AgnesKim