Technique/SAP HANA | 2012. 5. 23. 17:38

Hello All,

 

As new versions of HANA keep arriving, it is important to know what changes are coming in with the latest releases.

 

Here I have listed the new and changed features in the HANA Studio SPS04 version. You can also get these details from the SAP HANA Modeling Guide available in SMP.

 

Loading data from flat files (new)

You use this functionality to upload data from flat files available on a client file system into the SAP HANA database. The supported file types are .csv, .xls, and .xlsx.

This approach is very handy for uploading your content into the HANA database. For more details, refer to http://scn.sap.com/docs/DOC-27960

 

Exporting objects using SAP Support Mode (new)

You use this functionality to export an object, along with other associated objects and data, for SAP support purposes. This option should only be used when requested by SAP support. It helps SAP debug from their side in case of any issues reported for particular views.

 

Input parameters (new)

Input parameters provide input for the parameters within stored procedures, which are evaluated when the procedure is executed. You use input parameters as placeholders during currency conversion and in formulas such as calculated measures and calculated attributes.
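As a minimal illustration of how such a parameter is supplied at execution time, the following hedged SQL sketch queries a column view with the PLACEHOLDER syntax; the package, view, and parameter names are hypothetical and not part of the original post:

SELECT "REGION", SUM("NET_AMOUNT") AS "AMOUNT_TARGET_CUR"
FROM "_SYS_BIC"."demo.models/AN_SALES"
     ('PLACEHOLDER' = ('$$P_TARGET_CURRENCY$$', 'USD'))  -- input parameter evaluated at runtime
GROUP BY "REGION";

Front-end tools typically generate this clause for you once the user answers the corresponding prompt.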

 

Import/export server and client (changed)

• Exporting objects using Delivery Units (earlier known as Server Export):

Function to export all packages that make up a delivery unit and the relevant objects contained therein, to the client or to the SAP HANA server filesystem.

 

• Exporting objects using Developer Mode (earlier known as Client Export):

Function to export individual objects to a directory on your client computer. This mode of export should only be used in exceptional cases, since it does not cover all aspects of an object; for example, translatable texts are not copied.

 

• Importing objects using Delivery Unit (earlier known as Server Import):

Function to import objects (grouped into a delivery unit) from the server or from a client location, available in the form of a .tgz file.

 

• Importing objects using Developer Mode (earlier known as Client Import):

Function to import objects from a client location to your SAP HANA modeling environment.


Variables (changed)

Variables can be assigned specifically to an attribute and are used for filtering via the WHERE clause. At runtime, you can provide different values to the variable to view the corresponding set of attribute data. You can apply variables to attributes of analytic and calculation views.
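Conceptually, the value a user enters for a variable simply becomes a WHERE condition on the attribute it is assigned to when the view is queried. A small hedged sketch (view and column names are hypothetical):

-- User answers the REGION variable prompt with 'EMEA'; the client applies it as a filter
SELECT "REGION", "CUSTOMER", SUM("REVENUE") AS "REVENUE"
FROM "_SYS_BIC"."demo.models/AN_SALES"
WHERE "REGION" = 'EMEA'
GROUP BY "REGION", "CUSTOMER";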

 

Hierarchy enhancements (changed)

Now you can create a hierarchy between the attributes of a calculation view in addition to attribute views. The Hierarchy node is available in the Output pane of the Attribute and Calculation view.

 

Activating Objects (changed)

You activate objects available in your workspace to expose the objects for reporting and analysis.

Based on your requirement, you can perform the following actions on objects:

• Activate

• Redeploy

• Cascade Activate

• Revert to Active

 

Auto Documentation (changed)

Now the report also captures the cross references, data foundation joins, and logical view joins.

 

Calculation view (changed)

• Aggregation node: used to summarize data of a group of rows by calculating values in a column

• Multidimensional reporting: if this property is disabled you can create a calculation view without any measure, and the view is not available for reporting purposes

• Union node datatype: you can choose to add unmapped columns that just have constant mapping and a data type

• Column view now available as data source

• Filter expression: you can edit a filter applied on aggregation and projection view attributes using filter expressions from the output pane, which offer more conditions to be used in the filter, including AND, OR, and NOT
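For example, a filter expression combining these operators might look roughly like the following (column names and values are made up for illustration, and the exact syntax should be checked in the expression editor):

("REGION" = 'EMEA' OR "REGION" = 'APJ') AND NOT ("ORDER_STATUS" = 'CANCELLED')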

 

Autocomplete function in SQL Editor (changed)

The autocomplete function (Ctrl+Space) in the SQL Editor shows a list of available functions.

 

Hope this helps.

 

Rgds,

Murali

Posted by AgnesKim
Technique/SAP BW | 2012. 5. 23. 17:36

SAP BW 7.3 Hybrid Provider

 

A Hybrid Provider consists of a DataStore object (DSO) and an InfoCube with an automatically generated data flow in between.

• It combines historic data with the latest delta information when a query based on it is executed.

• The DSO can be connected to a real-time data acquisition DataSource/DTP.

• If the DataSource can provide appropriate delta information in direct access mode, a VirtualProvider can be used instead of the DSO.

There are two types of Hybrid Providers:

  1. Hybrid Providers based on direct access.
  2. Hybrid Providers based on a DataStore object.

 

Hybrid Providers based on Direct Access

 

A Hybrid Provider based on direct access is a combination of a VirtualProvider and an InfoCube. The benefit of this InfoProvider is that it provides access to real-time data without actually doing any real-time data acquisition.

At query runtime, the historic data is read from the InfoCube, while the near real-time, up-to-date data from the source system is read using the VirtualProvider.

 

Hybrid Providers based on a DataStore object

 

The Hybrid Provider based on a DSO is a combination of a DSO and an InfoCube. Once this Hybrid Provider is created and activated, the objects used for the data flow from the DSO to the InfoCube (DTP, transformation and process chain) are created automatically.

 

A Hybrid Provider based on a DSO should be used as the InfoProvider in scenarios where data needs to be loaded using real-time data acquisition. The DTP for real-time data acquisition from a real-time enabled DataSource to the DSO loads the data into the DSO in delta mode. The daemon used for real-time data acquisition immediately activates the data. When the daemon is stopped, the data is loaded from the change log of the DSO into the InfoCube. The InfoCube acts as storage for the historic data from the DSO.

 

To make data held within a DSO available for reporting in BI 7, a number of steps are needed: create the DSO, InfoCube, transformation/DTP and MultiProvider, store the data in a BWA, connect them all up, and then schedule and monitor the load jobs.

 

A Hybrid Provider takes a DSO and does all of this for you, removing substantial development and maintenance effort. Just load your data into a DSO, create a Hybrid Provider and start reporting. You can even build your Hybrid Provider on a real-time data acquisition (RDA) DataSource, which could potentially provide near real-time reporting from a BWA.

 

A typical usage scenario could be that you want to extract your purchase orders from R/3 and make them available for reporting. Using a Hybrid Provider, as soon as the data is loaded into the DSO it becomes available for reporting, with all the benefits of an InfoCube and BWA.

 

Real-time Data Acquisition

 

Real-time data acquisition enables you to update data in real time. As the data is created in the source system, it is immediately updated in the PSA or the delta queue. Special real-time enabled InfoPackages and DTPs are used to load this data into InfoProviders.

In order to load real-time data from the source system to SAP BW, the DataSource must be real-time enabled. Most standard DataSources are real-time enabled; however, we can also create a generic DataSource as real-time enabled.

 

Step-by-step process of creating a Hybrid Provider:

Step 1: First create an init InfoPackage for the DataSource and schedule it, as shown in the screenshot below.

 

Untitled1.png

 

Step 2: After creating the init InfoPackage, create an RDA InfoPackage.

 

Untitled2.png

 

Step 3: Now the DataSource is ready. We have to create a Hybrid Provider combining a DSO and an InfoCube, so first create an InfoArea.

 

Untitled3.png

 

Step 4: Go to the Data Flow screen, which is in the left-hand panel of RSA1.

 

Untitled4.png

 

Step 5: Navigate to the InfoArea, right-click, and choose “Create Data Flow”.

 

Untitled5.png

 

Step 6: Drag and drop the DataSource icon from the sidebar available in the Data Flow screen, then right-click on the icon and choose “Use Existing Object” to select the DataSource.

 

Untitled6.png

 

Step 7: In the Data Flow panel, place the cursor on the DataSource, right-click, and choose “Show Direct Dataflow Before”. This automatically shows the relevant InfoPackages for the DataSource.

 

Untitled7.png

 

 

Step 8: Now remove the init InfoPackage from the data flow; the flow then looks as shown below.

 

Untitled8.png

 

Step 9: Now drag and drop a DSO from the side menu, right-click, and choose “Create”. Create a new DSO, assign the data and key fields, then save and activate it.

 

Step 10: Now drag and drop the Hybrid Provider from the sidebar, right-click, and choose “Create”. Create a new Hybrid Provider based on the DSO; the technical name of the provider is HYPD. Assign the previously created DSO to this Hybrid Provider.

 

Untitled9.png

 

While creating the Hybrid Provider, a warning appears indicating that the DSO can no longer be used as a standalone DSO; it will behave only as part of the Hybrid Provider. The data fields and the key fields of the DSO are automatically included in the Hybrid Provider.

 

 

Step 11: Once created, a system-created InfoCube is shown under the Hybrid Provider. Note that the Hybrid Provider and the InfoCube have the same description as the DSO; however, we have the flexibility to give the Hybrid Provider a new name while creating it.

 

 

Untitled10.png

 

 

Step 12: Now click on the Complete Data Flow icon, as shown below, so that the system automatically creates a DTP and transformation for the data flow, then activate the flow.

 

Untitled12.png

 

 

Step 13: Once the transformation and DTP are active, assign the RDA InfoPackage and the RDA DTP to an RDA daemon. Right-click on the RDA InfoPackage and select “Assign RDA Daemon”; this navigates to the RDA Monitor. Create a daemon using the Create button in the top-left corner and then assign both of them to the daemon.

 

Untitled13.png

 

 

Step 14: Create the RDA daemon. In the daemon settings, specify the technical number and a short description; the period specifies the interval after which the daemon repeats its execution.

 

Untitled14.png

 

We can see that both the InfoPackage and the DTP are listed under the RDA daemon.

 

 

Step 15: Now drill down to the InfoCube menu and click on the DTP, then click on “Process Chain Maintenance”. This opens a system-generated process chain containing the DTP from the DSO to the cube.

 

 

Untitled15.png

 

 

Step 16: Below is the process chain that is automatically created by the system.

 

Untitled16.png

 

 

Step 17: Go to transaction RSRDA (the RDA Monitor). Run the daemon, and the data is updated in real time from the source system to the DSO.

 

Untitled17.png

 

 

Untitled18.png

 

 

The new data updated in the DSO is loaded into the InfoCube after this process chain has run.

Below, the process chain has run successfully.

 

Untitled19.png

 

 

So we can update real-time data from the source system to the BW system. The real-time data update works similarly to delta functionality: whenever users create new data in the source system, it is automatically updated into the BW target system.





http://scn.sap.com/community/data-warehousing/netweaver-bw/blog/2012/05/23/sap-bw-73-hybrid-provider?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 17. 21:35

A post that opens with the rather provocative line "HANA is a Great Way to Waste a Lot of MONEY."
In fact, my vote is that the sites currently adopting HANA as simply the next version of BW are doing exactly that: "throwing money away." (At least domestically; I can't speak for abroad.)

The point is that this is what will happen if a HANA adoption is run like a version upgrade project. And for new implementations too, if the people who have always done BW simply reuse their existing MDM modeling without any rethinking, the result will be the same.

For me, the arrival of HANA means the list of things I need to study keeps growing,
though I also suspect it could all just be muddled through somehow, as usual.

Anyway, here is an SDN blog written about exactly that point. I agree with it.


-------------------------------------------------------------------------------------------------------------

HANA is a great way to waste a lot of money.

Yes, I'm serious here.

 

If you decide to implement this new platform at your site but just copy your coding 1:1 to HANA, you're going to waste money.

If you buy HANA and don't re-engineer your solutions, you're going to waste money.

If there is no change in how data is processed and consumed with the introduction of HANA to your shop, then you're wasting money. And you're wasting an opportunity here.

 

Moving to HANA is disruptive.

It does mean throwing the solutions that are used today overboard.

That's where the pain lies, where real costs appear - those kinds of costs that don't show up on any price list.

 

So why take the pain, why invest so much?

Because this is the chance to enable a renovation of your company IT.

To change how users perceive working with data and to enable them to do far more clever things with your corporate data than were thinkable with your old system.

That's the opportunity to be better than your competition and better than you are today.

That's a chance for becoming a better company.

 

This does mean that your users need to say goodbye to their beloved super long and wide Excel list reports.

To fully gain the advantages HANA can provide, the consumers of the data also need to be led to grow towards better ways to work with data.

Just upgrading to the new Office version to support even more rows in a spreadsheet won't do. It never did.

 

This does mean your developers need to re-evaluate what they understand about databases.

They have to start over and re-write their fancy utility scripts that had been so useful on the old platform.

And more importantly: they need to re-think what users of their systems should be allowed and enabled to do.

Developers will need to give up their lordship of corporate IT. This is going to be a bitter pill to swallow.

 

Just adding the HANA box to your server cabinet is not the silver bullet for your data processing issues. But taking up this disruptive moment and providing your users with new applications and approaches to data is what you gain.

Once again, leaving the 'comfort zone' is what will provide the real gain.

 

So don't waste your money, don't waste your time, and by all means stop creating the same boring systems you have created ever since you came to IT.

Instead, start over and do something new to become the better company you can be.



------------------------------------------------------------------------------------------------------------- 

http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/17/still-no-silver-bullet?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 11. 15:38




INTRODUCTION

 

Ever since SAP-HANA was announced a couple of years back, I've been following the discussions and developments around the in-memory database space. In Oct 2011, Oracle CEO Larry Ellison introduced Oracle Exalytics to compete with SAP-HANA. After reading white papers on both SAP-HANA and Oracle Exalytics, it was obvious they were different. Comparing SAP-HANA and Oracle Exalytics is like comparing apples to oranges.

 

On May 8, 2012 I tweeted:

 

SAP Mentor David Hull responded:

 

I was a bit surprised to learn that most don't have a clue about the difference between Exalytics and SAP-HANA. The difference looked obvious to me, so I realized either I was missing something or they were. That is why I decided to write this blog. And since this blog compares SAP products with Oracle products, I've decided to use Oracle DB instead of the generic term RDBMS.

 

First I'll discuss the similarity between SAP-BW, Oracle-Exalytics and SAP-HANA. At a very high level, they look similar as shown in the picture below:

Similarity.png

 

As shown, the BW application sits on top of a database, either Oracle or SAP-HANA, and the application helps the user find the right data. The similarity ends there.

 

Let us now review how Oracle Exalytics compares with SAP-BW with the Business Warehouse Accelerator (BWA). As you can see below, there appears to be a one-to-one match between the components of SAP-BW and Exalytics.

 

 

New_BWA_EXA.png

Steps | SAP-BW | Exalytics | Comments
1 and 1a | Data found in BWA and returned to the user | Data found in Adaptive Data Mart and returned to the user |
2 and 2a | Data found in OLAP Cache and returned to the user | Data found in Intelligent Cache and returned to the user | This means data was not found in BWA or Adaptive Data Mart
3 and 3a | Data found in Aggregates and returned to the user | Data found in Essbase Cubes and returned to the user | This means data was not found in 1) Adaptive Data Mart or BWA and 2) OLAP Cache or Intelligent Cache
4 and 4a | Data found in Cubes and returned to the user |  | Not sure if Essbase supports aggregates; however, Oracle supports materialized views; I assume this is similar to SAP-BW's aggregates.

 

 

 

The diagram below shows why the Exalytics vs. SAP-HANA comparison is like comparing apples to oranges. In Exalytics, the information users need gets pre-created at a certain level of granularity. One of the best practices in the BW/DW world is to create the aggregates upfront to get acceptable response times.

 

In SAP-HANA, however, aggregates are created on the fly; data in SAP-HANA resides in raw form, and depending on what users need, the application performs the aggregation at runtime and displays the information on the user's screen. This helps users perform analysis near real time and more quickly.
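As a simple, hypothetical illustration of on-the-fly aggregation (the schema, table, and column names are mine, not from the original post), the totals below are computed over raw line items at query time instead of being read from a pre-built aggregate:

-- Aggregation happens at runtime over un-aggregated rows
SELECT "REGION", "PRODUCT", SUM("NET_AMOUNT") AS "TOTAL_AMOUNT"
FROM "SALES"."ORDER_ITEMS"
WHERE "FISCAL_YEAR" = 2012
GROUP BY "REGION", "PRODUCT";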

 

 

New_EXA_HANA.png

Based on the diagrams shown above, it seems Exalytics is comparable to SAP's six-year-old BWA technology.

 

SUMMARY

 

Based on the discussion above, the diagram below compares all three products: SAP-BW with BWA, Exalytics, and SAP-HANA.

 

New_BWA_EXA_HANA.png

 

Note: I didn't connect the disk to the HANA DB because it is primarily used for persistence.

 

I wanted to keep this blog simple, so I didn't include a lot of details. Depending on your questions and thoughts, I'm planning to either update this blog or write a new one.







http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/11/bwa-exalytics-and-sap-hana?utm_source=twitterfeed&utm_medium=twitter



Posted by AgnesKim
Technique/SAP BW | 2012. 5. 11. 12:25

Explorer with BWA

Posted by AgnesKim
Technique/SAP BW | 2012. 5. 10. 21:21



It is quite a common requirement to load data from point A to point B in BW, while performing a lookup on a DSO to get a bunch of fields from there. 
This is usually implemented as follows: a SELECT statement in the transformation start routine picks up data from the DSO and fills an internal table, and an end routine (or field-level routines) populates the target fields by reading that internal table.


In keeping with the general BW 7.3 theme of automating common scenarios, a new transformation rule type has been introduced to do this automatically. 
To take this new feature out for a spin, I created a DSO loosely based on the 0FIAR_O03 DSO. My DSO had the key fields Company Code, Customer (0DEBITOR), Fiscal Period, Fiscal Variant, Accounting Doc No, Item No and Due Date Item No. It also had the data fields Credit Control Area, Debit/Credit Amount, Local Currency, Credit Limit and Currency.


I created a flat file DataSource, which did not contain any fields for Credit Limit and Currency. The objective was to derive these two fields in the transformation from the Credit Management Control Area Data DSO (0FIAR_O09). To begin with, this is what the transformation from the DataSource to the custom DSO looked like.

Tr1.png

To perform the lookup, first the key fields of the lookup DSO have to be identified. The key fields of the 0FIAR_O09 DSO are Credit Control Area and Customer Number (0C_CTR_AREA and 0DEBITOR). The lookup logic will search the 0FIAR_O09 DSO based on these two fields. In order to do this, the Credit Control Area and Customer from the DataSource should be mapped to the Credit Limit key figure in the target.  

The first step in the Rule Details is to specify the DSO from which the field values will be picked up – in this case, 0FIAR_O09. Next, the “IOAssgnmnt” column must be manually filled up with the names of the InfoObjects. It is important that ALL the key fields of the lookup DSO are specified.

Tr2.png


In a nutshell, the above screen tells the system to derive the value of 0CRED_LIMIT (the target field) from the 0FIAR_O09 DSO (the lookup DSO) based on the C_CTR_AREA and DEBITOR values coming in from the DataSource, which correspond to the 0C_CTR_AREA and 0DEBITOR InfoObjects of the lookup DSO.


The 0CURRENCY target field also needs to be similarly mapped. 
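A rough SQL analogy for what the generated rule does (this only illustrates the lookup semantics, not what BW actually executes; the source alias is hypothetical):

-- Each source record is enriched from 0FIAR_O09 using the full key of the lookup DSO
SELECT src."C_CTR_AREA",
       src."DEBITOR",
       lkp."CRED_LIMIT",   -- fills the target 0CRED_LIMIT
       lkp."CURRENCY"      -- fills the target 0CURRENCY
FROM SOURCE_PACKAGE AS src
LEFT JOIN "0FIAR_O09" AS lkp
  ON  lkp."C_CTR_AREA" = src."C_CTR_AREA"
  AND lkp."DEBITOR"    = src."DEBITOR";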

 

This is how the transformation looks after we're done. Observe the "DSO" icon which appears next to the Credit Limit and Currency in the target of the transformation.

TR3.png


Once this is done, run the DTP. The transformation will perform the lookup and populate the values. Activate the data when the load completes.
Now let's verify the data. The flat file contained the following values, which were loaded to the PSA. Observe that there is no Credit Limit data in this file.

 

TR4.png

In the 0FIAR_O09 DSO, the following values were present.

tr5.png

After the load, this is how the data in the DSO looks.

 

tr6.png
As the screenshot shows, the transformation rule has correctly picked up the Credit Limit from the 0FIAR_O09 DSO.


A few caveats are in order on this feature.

  • All the key fields of the lookup DSO should be specified. If a partial key is specified (for instance, if we had mapped only 0DEBITOR in the source fields of the transformation rule) the system will assign the value from the first record it finds in the lookup DSO
  • The InfoObject Assignment for the source fields should have exactly the same names as the corresponding InfoObjects in the lookup DSO. If the InfoObject in the lookup DSO was 0CRED_LIMIT and the target InfoObject of the transformation rule was 0VALUE_LC, this technique cannot be used as the InfoObjects differ
  • The target InfoObject will be filled from the value of the InfoObject having the same name in the lookup DSO. In other words, 0CRED_LIMIT is filled up based on the value of 0CRED_LIMIT in 0FIAR_O09. If 0CRED_LIMIT did not exist in the lookup DSO, the system will throw an error during transformation activation

 

Essentially, this feature is most useful if you have simple lookups, for instance getting field X from DSO Y based on lookup field Z and writing it out to field X of the target. However, it may not be the best solution if you have more complex requirements, which involve

  • Pulling multiple records from the lookup DSO and getting the first or the last found record in the set
  • A lookup DSO in which the field you want has a different name




Posted by AgnesKim
Technique/Other | 2012. 5. 10. 20:49

Customizing Logon Page on Portal 7.3

Posted by purav mehta in SAP NetWeaver Portal on May 10, 2012 12:59:15 PM

Please find below detailed steps for customizing the logon page on Portal 7.3.

 

1. Locate the WAR file

 

The first step is to get the WAR file delivered by SAP for the logon page, in order to customize it.

 

 

Copy the war file tc~sec~ume~logon~ui.war to your local machine from

 

<Installation drive>:\usr\sap\<SID>\J00\j2ee\cluster\apps\sap.com\com.sap.security.core.logon
\servlet_jsp\logon_ui_resources\tc~sec~ume~logon~ui.war

 

 

2. Import the WAR file

 

Next we have to import the WAR file into NWDS by going to: File --> Import --> Web --> WAR File

 

1.jpg

 

     Select the WAR file from the local system.

 

2.jpg

 

 

    As the EAR format can be deployed on the JEE server, a corresponding EAR project has to be created.

   For this, check the “Add project to an EAR” checkbox as above and specify a suitable name in “EAR project name”, based on the WAR project name.

 

   Click Finish to create both WAR and EAR projects.

 

 

 

3.jpg

 

    Expand the WAR project.

 

 

4.jpg

 

At this point you will notice errors in the project. To remove these errors follow the next step.

 

 

3. Add the required JAR file to remove the errors

 

 

     a. Next you need to locate the JAR file “tc~sec~ume~logon~logic_api.jar”, on which the WAR file is dependent, at the following location:

     <drive>\usr\sap\<SID>\J00\j2ee\cluster\apps\sap.com\com.sap.security.core.logon\servlet_jsp\logon_app\root\WEB-INF\lib

 

    

     Copy the “tc~sec~ume~logon~logic_api.jar” file to the WebContent\WEB-INF\lib folder of the WAR project in NWDS.

 

5.jpg

 

    b. This JAR file also has to be added to the build path of the WAR file.

         Right-click the WAR project and select Build Path --> Configure Build Path.

 

6.jpg

 

 

     c. Click on the Libraries tab, click on “Add External JARs”, select the JAR file “tc~sec~ume~logon~logic_api.jar” from the local system, and click “Add” to get the following screen:

 

7.jpg

 

Once done, you will notice that all the errors are gone!!

 

4. Make Changes to Layout

 

     a. Now it's time to start making the desired changes to the layout. In our example we are changing the branding image on the logon screen. We have copied the image “hearts.jpg” to the folder WebContent\layout.

 

 

 

8.jpg

 

The SAP-delivered image branding-image-portals.jpg has dimensions 290x360 px. If you select a bigger image, it will get truncated based on these dimensions. To change the dimensions, you need to edit the urBrandImage element in the CSS file:

 

 

urBrandImage{overflow:hidden;width:290px;height:360px}

 

 

 

b. After the changes have been made, we need to make sure that the WAR project is updated in the EAR project so that the latest changes are picked up. For this, right-click on the WAR project and select Java EE Tools --> Update EAR Libraries.

 

 

9.jpg

 

 

 

5. Configuring deployment descriptors

         

          Next we need to configure 2 deployment descriptors of the EAR application as below:

         

          a. application-j2ee-engine.xml

          b. application.xml

 

 

10.jpg

 

     a.  Configuring application-j2ee-engine.xml

 

 

 

        In the EAR, view the General tab of the file <project_name>/EARContent/META-INF/application-j2ee-engine.xml.

 

          i. Enter a provider name for your application. This is usually the domain name of the client.

             The provider name defines the namespace where your applications reside on the AS Java.

              If you enter “example.com”, the application deploys to the following path: <ASJava_Installation>/j2ee/cluster/apps/example.com/<project_name>

 

        ii. Next we need to add a reference to the standard application com.sap.security.core.logon.

               Choose References and choose + with the quick info text Add element.

 

         iii.  Choose Create new and enter the required data.

 

   

Reference Data for the Logon Application

Field Name | Data
Reference target | com.sap.security.core.logon
Reference type | hard
Reference target type | application
Provider name | sap.com

 

11.jpg

 

This will generate the XML in the background, which can be displayed in the SOURCE tab:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<application-j2ee-engine
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:noNamespaceSchemaLocation="application-j2ee-engine.xsd">
      <reference reference-type="hard">
            <reference-target provider-name="sap.com" target-type="application">com.sap.security.core.logon</reference-target>
      </reference>
      <provider-name>newLogon.com</provider-name>
</application-j2ee-engine>

 

  b. Configuring application.xml

 

In the EAR, edit the file <project_name>/EARContent/META-INF/application.xml and define the URL alias for your custom logon UI.

Double-click on application.xml and go to the Modules tab. Select the WAR file and enter the “Context Root” field, for example: new_logon

 

12.jpg

 

We have to provide this alias name later in NWA so please make a note of it.

 

 

 

6. Creating the deployable EAR file

 

     Next we need to create a deployable EAR file. For this, right-click on the EAR project and select Export --> SAP EAR file.

 

13.jpg

 

7. Deploying the EAR file

 

     Right-click on the EAR project and select Run As --> Run on Server.

     Enter the credentials of the server and the file will be deployed on the server with a success message.

     You might get an error screen in NWDS after deployment, as shown below; however, you can ignore it.

 

 

 

14.jpg

 

8. Configuring UME properties in NWA

 

     Navigate to the following URL to modify the UME properties through the NetWeaver Administrator:

     http://<host>:<port>/nwa/auth

 

     a. Change the property “Alias of the application for customizing login pages” (ume.logon.application.ui_resources_alias) to the custom application “new_logon”, which we mentioned previously in the Context Root of application.xml.

 

 

     b. Change the property “Path or URL to the branding image” (ume.logon.branding_image) to “layout/hearts.jpg”.

 

 

15.jpg

16.jpg

 

 

Hurray!!!  We have successfully customized the Logon Screen …

 

 

9. The next aim is to have custom text or a notice on the logon page.

 

Please add the following code after line 44 in the logon.jsp.

 

<!-- ********************************************* -->
<!-- disclaimer notice                             -->
    <tr>
      <td class="urLblStdNew">
        <span><b>Notice for All Users</b>
          <br><br>Paste your content here.
        </span>
      </td>
    </tr>
<!-- ********************************************* -->

 

Save the new values and restart the portal server.

 

 

17.jpg

 

 

18.jpg

 

This finishes (or rather begins) our journey with the customization of Logon page …. !!!




http://scn.sap.com/community/netweaver-portal/blog/2012/05/10/customizing-logon-page-on-portal-73?utm_source=twitterfeed&utm_medium=twitter

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 8. 10:42

HANA vs. Exalytics: an Analyst's View

Posted by David Dobrin in Blog on May 6, 2012 10:00:28 PM
Introduction

 

Some people at SAP have asked me to comment on the "HANA vs. Exalytics" controversy from an analyst's point of view.  It's interesting to compare, so I'm happy to go along.  In this piece, I'll try to take you through my thinking about the two products.  I can see that Vishal Sikka just posted a point-by-point commentary on Oracle's claims, so I won't try to go him one better.  Instead, what I'll try to do is make the comparison as simple and non-techie friendly as I can.  Note that SAP did not commission this post, nor did it ask to edit it.

 

Exalytics

 

To begin with, let's start with something that every analyst knows:  Oracle does a lot of what in the retail trade are called "knockoffs."

 

They think this is good business, and I think they're right.  The pattern is simple.  When someone creates a product and proves a market, Oracle (or, for that matter, SAP) creates a quite similar product.  So, when VMWare (and others) prove Vsphere, Oracle creates Oracle VM; when Red Hat builds a business model around Linux, Oracle creates "Oracle Linux with Unbreakable Enterprise Kernel."

 

We analysts know this because it's good business for us, too.  The knockoffs--oh, let's be kind and call them OFPs for "Oracle Followon Products"--are usually feature-compatible with the originals, but have something, some edge, which (Oracle claims) makes them better.

 

People get confused by these claims, and sometimes, when they get confused, they call us.

 

Like any analyst, I've gotten some of these calls, and I've looked at a couple of the OFPs in detail.  Going in, I usually expect that Oracle is offering pretty much what you see at Penneys or Target, an acceptable substitute for people who don't want to or can't pay a premium, want to limit the number of vendors they work with, aren't placing great demands on the product, etc., etc.

 

I think this, because it's what one would expect in any market.  After all, if you buy an umbrella from a sidewalk vendor when it starts to rain, you're happy to get it, it's a good thing, and it's serviceable, but you don't expect it to last through the ages.

 

Admittedly, software is much more confusing than umbrellas. With a $3 umbrella, you know what you're getting.  With an OFP (or a follow-on product from any company), you don't necessarily know what you're getting.  Maybe the new product is a significant improvement over what's gone before. If the software industry were as transparent as the sidewalk umbrella industry, maybe it would be clear.  But as it is, you have to dig down pretty deep to figure it all out. And then you may still be in trouble, because when you get "down in the weeds," as my customers have sometimes accused me of being, you might be right, but you might also fail to be persuasive.

 

Which brings me to the current controversy.  To me, it has a familiar ring. SAP releases HANA, an in-memory database appliance.  Now Oracle has released Exalytics, an in-memory database appliance.  And I'm getting phone calls.

 

HANA:  the Features that Matter

 

I'm going to try to walk you through the differences here, while avoiding getting down in the weeds. This is going to involve some analogies, as you'll see.  If you find these unpersuasive, feel free to contact me.

 

To do this, I'm going to have to step back from phrases like "in-memory" and "analytics," because now both SAP and Oracle are using this language. I'll look instead at the underlying problem that "in-memory" and "analytics" are trying to solve.

 

This problem is really a pair of problems.  Problem 1.  The traditional row-oriented database is great at getting data in, not so good at getting data out.  Problem 2.  The various "analytics databases," which were designed to solve Problem 1--including, but not limited to the column-oriented database that SAP uses--are great at getting data out, not so good at getting data in.

 

What you'd really like is a column-oriented (analytics) database that is good at getting data in, or else a row-oriented database that is good at getting data out.

 

HANA addresses this problem in a really interesting way.  It is a database that can be treated as either row-oriented or column-oriented.  (There is literally a software switch that you can throw.)  So, if you want to do the very fast and flexible analytic reporting that column-oriented databases are designed to do, you throw the switch and run the reports.  And if you want to do the transaction processing that row-oriented databases are designed to do, you throw the switch back.

 

Underneath, it's the same data; what the switch throws is your mode of access to it.

 

In extolling this to me, my old analyst colleague, Adam Thier, now an executive at SAP, said, "In effect, it's a trans-analytic database."  (This is, I'm sure, not official SAP speak.  But it works for me.)  How do they make the database "trans-analytic?"  Well, this is where you get down into the weeds pretty quickly.  Effectively, they use the in-memory capabilities to do the caching and reindexing much more quickly than would have been possible before memory prices fell.

 

There's one other big problem that the in-memory processing solves.  In traditional SQL databases, the only kind of operation you can perform is a SQL operation, which is basically going to be manipulation of rows and fields in rows.  The problem with this is that sometimes you'd like to perform statistical functions on the data:  do a regression analysis, etc., etc.  In a traditional database, though, you're kind of stymied; statistical analysis in a SQL database is complicated and difficult.

 

In HANA, "business functions" (what marketers call statistical analysis routines) are built into the database.  So if you want to do a forecast, you can just run the appropriate statistical function.  It's nowhere near as cumbersome as it would be in a pure SQL database.  And it's very, very fast;  I have personally seen performance improvements of three orders of magnitude.

 

Exalytics:  the Features that Matter

 

Now when I point out that HANA is both row-oriented (for transactions) and column-oriented (so that it can be a good analytics database) and then I point out that it has business functions built-in, I am not yet making any claim about the relative merits of HANA and Exalytics.

 

Why?  Well, it turns out that Exalytics, too, lets you enter data into a row-oriented database and allows you to do reporting on the data from an analytics database.  And in Exalytics, too, you have a business function library.

 

But the way it's done is different.

 

In Exalytics, the transactional, row-oriented capabilities come from an in-memory database (the old TimesTen product that Oracle bought a little more than a decade ago).  The analytics capabilities come from Essbase (which Oracle bought about 5 years ago), and the business function library is an implementation of the open-source R statistical programming language.

 

So what, Oracle would argue. It has the features that matter.  And, Oracle would argue, it also has an edge, something that makes this combination of databases clearly better.  What makes it better, according to Oracle? In Exalytics, you're getting databases and function libraries that are tested, tried, and true.  TimesTen has been at the heart of Salesforce.com since its inception.  Essbase is at the heart of Hyperion, which is used by much of the Global 2000.  And R is used at every university in the country.

 

Confused?  Well, you should be.  That's when you call the analyst.

 

HANA vs. Exalytics

 

So what is the difference between the two, and does it matter?  If you are a really serious database dweeb, you'll catch it right away:

 

In HANA, all the data is stored in one place. In Exalytics, the data is stored in different places.

 

So, in HANA, if you want to report on data, you throw a switch.  In Exalytics, you extract the data from the Times10 database, transform it, and load it into the Essbase database.  In HANA, if you want to run a statistical program and store the results, you run the program and store the results.  In Exalytics, you extract the data from, say, Times10, push it into an area where R can operate on it, run the program, then push the data back into Times10.

 

So why is that a big deal?  Again, if you're a database dweeb, you just kind of get it.  (In doing research for this article, I asked one of those dweeb types about this, and I got your basic shrug-and-roll-of-eye.)

 

I'm not that quick. But I think I sort of get what their objection is.  Moving data takes time.  Since the databases involved are not perfectly compatible, one needs to transform the data as well as move it. (Essbase, notoriously, doesn't handle special characters, or at least didn't use to.)  Because it's different data in each database, one has to manage the timing, and one has to manage the versions.  When you're moving really massive amounts of data around (multi-terabytes), you have to worry about space.  (The 1TB Exalytics machine only has 300 GB of actual memory space, I believe.)

 

One thing you can say for Oracle.  They understand these objections, and in their marketing literature, they do what they can to deprecate them.  "Exalytics," Oracle says, "has Infiniband pipes" that presumably make data flow quickly between the databases, and "unified management tools," that presumably allow you to keep track of the data. Yes, there may be some issues related to having to move the data around.  But Oracle tries to focus you on the "tried and true" argument. You don't need to worry about having to move the data between containers, not when each of the containers is so good, so proven, and has so much infrastructure already there, ready to go.

 

As long as the multiple databases are in one box, it's OK, they're arguing, especially when our (Oracle's) tools are better and more reliable.

 

Still confused?  Not if you're a database dweeb, obviously.  Otherwise, I can see that you might be.  And I can even imagine that you're a little irritated. "Here this article has been going on for several hundred lines," I can hear you saying, "and you still haven't explained the differences in a way that's easy to understand."

 

HANA:  the Design Idea

 

So how can you think of HANA vs. Exalytics in a way that makes the difference between all-in-one-place and all-in-one-box-with-Infiniband-pipes-connecting-stuff completely clear?  It seems to me, the right way, is to look at the design idea that's operating in each.

 

Here, I think, there is a very clear difference.  In TimesTen or Essbase or other traditional databases, the design idea is roughly as follows: if you want to process data, move it inside engines designed for that kind of processing. Yes, there's a cost. You might have to do some processing to get the data in, and it takes some time.  But those costs are minor, because once you get it into the container, you get a whole lot of processing that you just couldn't get otherwise.

 

This is a very normal, common design idea.  You saw much the same idea operating in the power tools I used one summer about forty years ago, when I was helping out a carpenter.  His tools were big and expensive and powerful--drill presses and table saws and such like--and they were all the sort of thing where you brought the work to the tool. So if you were building, say, a kitchen, you'd do measuring at the site, then go back to the shop and make what you needed.

 

In HANA, there's a different design idea:  Don't move the data.  Do the work where the data is.  In a sense, it's very much the same idea that now operates in modern carpentry.  Today, the son of the guy I worked for drives up in a truck, unloads a portable table saw and a battery-powered drill, and does everything on site and it's all easier, more convenient, more flexible, and more reliable.

 

So why is bringing the tools to the site so much better in the case of data processing (as well as carpentry?)  Well, you get more flexibility in what you do and you get to do it a lot faster.

 

To show you what I mean, let me give you an example.  I'll start with a demo I saw a couple of years ago of a relatively light-weight in-memory BI tool.

 

The salesperson/demo guy was pretty dweeby, and he traveled a lot.  So he had downloaded all the wait times at every security gate in every airport in America from the TSA web site.  In the demo, he'd say, "Let's say you're in a cab.  You can fire up the database and pull up a graph of the wait times at each security checkpoint.  So now you can tell which checkpoint to get out at."

 

The idea was great, and so were the visualization tools.   But at the end of the day, there were definite limitations to what he was doing.  Because the system is basically just drawing data out of the database, using SQL, all he was getting was a list of wait times, which were a little difficult to deal with.  What one would really want is the probability that a delay would occur at each of the checkpoints, based on time of day and a couple of other things.  But that wasn't available, not from this system, not in a cab.

 

Perhaps even worse, he wasn't really working with real-time data. If you're sitting in the cab, what you really want to be working with is recent data, but he didn't have that data; his system couldn't really handle an RSS feed.

 

Now, consider what HANA's far more extensive capabilities do for that example.  First of all, in HANA, data can be imported pretty much continuously.  So if he had an RSS feed going, he could be sure the database was up-to-date.  Second, in HANA, he could use the business functions to do some statistical analysis of the gate delay times.  So instead of columns of times, he could get a single, simple output containing the probability of a delay at each checkpoint.  He can do everything he might want to do in one place.  And this gives him better and more reliable information.

 

So What Makes It Better?

 

Bear with me.  The core difference between HANA and Exalytics is that in HANA, all the data is in one place.  Is that a material difference?  Well, to some people it will be; to some people, it won't be.  As an analyst, I get to hold off and say, "We'll see."

 

Thus far, though, I think the indications are that it is material.  Here's why.

 

When I see a new design idea--and I think it's safe to say that HANA embodies one of those--I like to apply two tests.  Is it simplifying?  And is it fruitful?

 

Back when I was teaching, I used to illustrate this test with the following story:

 

A hundred years ago or so, cars didn't have batteries or electrical systems.  Each of the things now done by the electrical system was thought of as an entirely separate function performed in an entirely different way.  To start the car, you used a hand crank.  To illuminate the road in front of the car, you used oil lanterns mounted where the car lights are now.

 

Then along came a new design idea: batteries and wires.  This idea passed both tests with flying colors.  It was simplifying.  You could do lots of different things (starting the car, lighting up the road) with the same apparatus, in an easier and more straightforward way (starting the car or operating the lights from the dashboard).  But it was also fruitful.  Once you had electricity, you could do entirely new things with that same idea, like power a heater motor or operate automatic door locks.

 

So what about HANA?  Simplifying and fruitful?  Well, let's try to compare it with Exalytics. Simplifying?  Admittedly, it's a little mind-bending to be thinking about both rows and columns at the same time.  But when you think about how much simpler it is conceptually to have all the data in one database and think about the complications involved when you have to move data to a new area in order to do other operations on it, it certainly seems simplifying.

 

And fruitful?

 

Believe it or not, it took me a while to figure this one out, but Exalytics really helped me along.  The "Aha!" came when I started comparing the business function library in HANA to the "Advanced Visualization" that Oracle was providing.  When it came to statistics, they were pretty much one-to-one; the HANA developers very self-consciously tried to incorporate the in-database equivalents of the standard statistical functions, and Oracle very self-consciously gave you access to the R function library.

 

But the business function library also does…ta da…business functions, things like depreciation or a year-on-year calculation.  Advanced Visualization doesn't. 

 

This is important not because HANA's business function library has more features than R, but because HANA is using the same design idea (the Business Function Library) to enrich various kinds of database capabilities.  On the analytics side, they're using the statistical functions to enrich analytics capabilities.  On the transaction side, they're using the depreciation calculations to enrich the transaction capabilities.  For either, they're using the same basic enrichment mechanism.

 

And that's what Oracle would find hard to match, I think. Sure, they can write depreciation calculation functionality; they've been doing that for years.  But to have that work seamlessly with the Times10 database, my guess is that they'd have to create a new data storage area in Exalytics, with new pipes and changes in the management tools.

 

Will HANA Have Legs?

 

So what happens when you have two competing design ideas and one is simpler and more fruitful than the other?

 

Let me return to my automobile analogy.

 

Put yourself back a hundred years or so and imagine that some automobile manufacturer or other, caught short by a car with a new electrical system, decides to come to market ASAP with a beautiful hand-made car that does everything that new battery car does, only with proven technology.  It has crisp, brass oil lanterns, mahogany cranks, and a picture of a smiling chauffeur standing next to the car in the magazine ad.

 

The subtext of the ad is roughly as follows. "Why would you want a whole new system, with lots and lots of brand-new failure points, when we have everything they have.  Look, they've got light; we've got light, but ours is reliable and proven.  They've got a starter; we've got a starter, but ours is beautiful, reliable, and proven, one that any chauffeur can operate."

 

I can see that people might well believe them, at least for a while.  But at some point, everybody figures out that the guys with the electrical system have the right design idea.  Maybe it happens when the next version comes out with a heater motor and an interior light.  Maybe it happens when you realize that the chauffeur has gone the way of the farrier. But whenever it happens, you realize that the oil lantern and the crank will eventually fall by the wayside.

 

About David Dobrin

 

I run a small analyst firm in Cambridge, Massachusetts that does strategy consulting in most areas of enterprise applications.  I am not a database expert, but for the past year, I have been doing a lot of work with SAP related to HANA, so I'm reasonably familiar with it.  I don't work with Oracle, but I know a fair amount about both the Times 10 database and the Essbase database, because I covered both Salesforce (which uses Times 10) and Hyperion (Essbase) for many years.

 

SAP is a current customer of B2B Analysts, Inc., the firm I run.

  

 









https://www.experiencesaphana.com/community/blogs/blog/2012/05/06/hana-vs-exalytics-an-analysts-view 

Posted by AgnesKim
Technique/SAP HANA | 2012. 5. 6. 20:30

SAP HANA SQL in itself is no rocket science. It strongly gravitates around standard SQL-92, with a couple of extensions here and there.

If I had to decide on my favorite SQL primitive data type, without giving it much more intellectual thought than I already did, I would choose the cute and light 8-bit unsigned integer, wearing the not-really tiny, little pleonastic name "TINYINT" (please note the freestyle usage of double quotes for non-identifiers).

If you do the math, 2 to the power of 8 is 256, that is, a TINYINT can assume the values from 0 to 255, which should be enough for columns containing integer data such as, for example, people's ages.

I don't think a Y2K-similar issue is expected for the next couple of hundred years regarding human longevity. Maybe the kids born today will have a chance to live 100-110 years on average (although I am not yet convinced about this statistical extrapolation), but I do not expect us to survive much longer than that, at least before a new age of revolutionary Hardware and Software Migration waves happens spontaneously. It would be interesting to do a global programmer's statistics study (maybe there is already more than one) and calculate the percentage of developers taking the time to carefully analyze the minimum/maximum amount of storage required for the different data fields for a software application. 
My hypothesis is that many developers will tend to take slightly "oversized" data types, just in case. I also tended to do that myself some years ago when I was a real developer, but then again, I was not a brilliant Wozniak. I must also admit that 10 to 15 years ago I was naive enough to think that the Club of Rome was exaggerating a bit in 1972. 
During IT prehistory, people had to save data storage out of necessity. Nowadays people may think they can afford to be profligate and extravagant with RAM, disk and CPU power, because "capacity is going up and prices are going down" all the time.

There is no need to be extremely ecologically intelligent to see the catch in this reasoning, even though we are bombarded with this kind of statement from everywhere all the time.

 

And now I will temporarily get back to purely robotic-thinking, even though I am not finished with all I wanted to say about the topic above, and will keep developing this line of thought further in the near future.

Let me give you a very compact summary about the main SAP HANA SQL Data Types, just for the (robotic?) fun of it:

  • For Date and Time the winners are:
    • DATE (with range of 0001-01-01 to 9999-12-31)
    • TIME (HH24:MI:SS)
    • SECONDDATE (0001-01-01 00:00:01 to 9999-12-31 24:00:00)
    • TIMESTAMP (0001-01-01 00:00:00.0000000 to 9999-12-31 23:59:59.9999999)
  • For Numeric Types, we start with...
    • TINYINT (8-bit unsigned integer)
    • continue with SMALLINT (16-bit signed integer)
    • INTEGER (32-bit signed integer)
    • BIGINT (64-bit signed integer)
    • DECIMAL (precision from 1 to 34, scale from -6000+ to +6000+). Example: 31416 x 10^-4 has precision 5 and scale 4.
    • SMALLDECIMAL (floating-point decimal supported only for Column Store with precision up to 16 and scale around 369)
    • REAL (32-Bit floating point)
    • DOUBLE (64-bit floating point)
    • and we will finish with FLOAT(n), a 32-bit or 64-bit REAL, with n significant bits up to 53. Steve Jobs would never have dreamt about such variety of floating points 20+ years ago. A real orgy!
  • For Character Strings we have the following:
    • VARCHAR(n) for ASCII characters, with n up to 5000
    • NVARCHAR(n) for Unicode character sets
    • and ALPHANUM(n) with a much smaller maximum length "n" than for VARCHAR, that is, 127. My memory is playing a trick in my mind because my inner eye is seeing a non-variable-length "CHAR" type. At least there is a CHAR in other SQL versions, but I don't find it in the SAP HANA documentation. Since my SQL proficiency is currently like my French, that is, not awfully bad, but not always fluent, unless I practice it live for a couple of weeks in a row, I cannot say why CHAR has been killed, IF it has been really killed. The reason may lie in some architecture efficiency thing, for good or bad.
  • We have VARBINARY(n) for Binary Types.
  • And we surely have Large Object (LOB) Types like...
    • BLOB for large binary data
    • CLOB for ASCII data
    • and NCLOB for Unicode data. You are not allowed to do everything with LOBs, like for example, ORDER BY or GROUP BY or use them as PRIMARY KEY. To be honest, I would not use a YouTube Video as PRIMARY KEY, even if I could. But I might change my mind, like all wise people.

 

That was it about SAP HANA SQL Data Types on my side... And now I have to go. I need to recreate some users in the SAP HANA instance I will be using tomorrow for the next training, create a couple of variables for some Analytic Views, in order to show off a bit (although currently, this "variables business" with the SAP HANA Studio does not give you the greatest user experience in the world, to put it very diplomatically), and I also want to observe the behavior of my new royal Analytic Privileges, of course on the sly, and without cameras. Last but not least, I have to prepare lunch... collaboratively. I am hungry after so much discussion about storage and resources, but we like mediterranean cuisine, which tends to be lean(er) than others.

 

Gemma Durany

Co-Founder and COO

Glooobal GmbH

Posted by AgnesKim
Technique/SAP BO | 2012. 5. 6. 13:28

Using SAP HANA Variables with SAP BusinessObjects BI4.0

SAP HANA variables are a powerful technology that enables end users to see data in HANA according to their own preferences. In this document you will learn how to set up variables in SAP HANA and how to use them in the SAP BusinessObjects client tools.

View Document

 













http://scn.sap.com/docs/DOC-27676

Posted by AgnesKim