
SAP BW 7.3 Hybrid Provider

 

A Hybrid Provider consists of a DataStore object (DSO) and an InfoCube, with an automatically generated data flow between them.

• When a query based on it is executed, it combines historic data with the latest delta information.

• The DSO can be connected to a real-time data acquisition DataSource/DTP.

• If the DataSource can provide appropriate delta information in direct access mode, a Virtual Provider can be used instead of the DSO.

There are two types of Hybrid Providers:

  1. Hybrid Providers based on direct access
  2. Hybrid Providers based on a DataStore object

 

Hybrid Providers based on Direct Access

 

A Hybrid Provider based on direct access is a combination of a Virtual Provider and an InfoCube. The benefit of this InfoProvider is that it provides access to real-time data without actually performing real-time data acquisition.

At query runtime, the historic data is read from the InfoCube, while the near real-time, up-to-date data is read from the source system through the Virtual Provider.

 

Hybrid Providers based on a DataStore object

 

The Hybrid Provider based on a DSO is a combination of a DSO and an InfoCube. Once this Hybrid Provider is created and activated, the objects used for the data flow from the DSO to the InfoCube (DTP, transformation, and process chain) are created automatically.

 

One should use a Hybrid Provider based on a DSO in scenarios where data needs to be loaded using real-time data acquisition. The DTP for real-time data acquisition loads the data from a real-time enabled DataSource into the DSO in delta mode, and the daemon used for real-time data acquisition immediately activates the data. When this daemon is stopped, the data is loaded from the change log of the DSO into the InfoCube. The InfoCube acts as storage for the historic data from the DSO.

 

To make data held in a DSO available for reporting in BI 7, a number of steps are required: create the DSO, InfoCube, transformation/DTP, and MultiProvider, store the data in a BWA, connect them all up, and then schedule and monitor the load jobs.

 

A Hybrid Provider takes a DSO and does all of this for you, removing substantial development and maintenance effort: just load your data into a DSO, create a Hybrid Provider, and start reporting. You can even build your Hybrid Provider on a real-time data acquisition (RDA) DataSource, which can potentially provide near real-time reporting from a BWA.

 

A typical usage scenario: you want to extract your purchase orders from R/3 and make them available for reporting. Using a Hybrid Provider, as soon as the data is loaded into the DSO it becomes available for reporting, with all the benefits of an InfoCube and BWA.

 

Real-time Data Acquisition

 

Real-time data acquisition enables you to update data in real time. As data is created in the source system, it is immediately updated in the PSA or the delta queue. Special real-time enabled InfoPackages and DTPs are used to load the data into InfoProviders.

In order to load real-time data from a source system into SAP BW, the DataSource must be real-time enabled. Most standard DataSources are real-time enabled; we can also create a generic DataSource as real-time enabled.
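As a quick sanity check on the source-system side, the DataSource header can be inspected in table ROOSOURCE. A minimal sketch follows; the DataSource name is purely hypothetical, and how real-time capability is exposed on your release (and under which field name) should be verified in SE11 before relying on this.

* Hedged sketch: inspect a DataSource header in the source system.
* ROOSOURCE is the DataSource header table; 'ZMY_SALES_DS' is a
* hypothetical name. Verify in SE11 which field carries the
* real-time capability on your release.
DATA ls_src TYPE roosource.

SELECT SINGLE * FROM roosource
  INTO ls_src
  WHERE oltpsource = 'ZMY_SALES_DS'
    AND objvers    = 'A'.
IF sy-subrc = 0.
  " RDA requires a delta-capable DataSource
  WRITE: / 'Delta process:', ls_src-delta.
ENDIF.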

 

Step-by-step process of creating a Hybrid Provider:

Step 1: First, create an init InfoPackage for the DataSource and schedule it, as shown in the screenshot below.

 

[Screenshot]

 

Step 2: After creating the init InfoPackage, create an RDA InfoPackage.

 

[Screenshot]

 

Step 3: Now the DataSource is ready. We have to create a Hybrid Provider combining the DSO and the InfoCube, so we first need to create an InfoArea.

 

[Screenshot]

 

Step 4: Go to the Data Flow screen, which is in the left-hand panel of transaction RSA1.

 

[Screenshot]

 

Step 5: Navigate to the InfoArea, right-click, and choose “Create Data Flow”.

 

[Screenshot]

 

Step 6: Drag and drop the DataSource icon from the sidebar onto the data flow, then right-click the icon and choose “Use Existing Object” to select the DataSource.

 

[Screenshot]

 

Step 7: In the data flow panel, place the cursor on the DataSource, right-click, and choose “Show Direct Dataflow Before”. This automatically displays the relevant InfoPackages for the DataSource.

 

[Screenshot]

 

 

Step 8: Remove the init InfoPackage from the data flow; the flow now looks as shown below.

 

[Screenshot]

 

Step 9: Drag and drop a DSO from the side menu, right-click, and choose “Create”. Create a new DSO, assign the data and key fields, then save and activate it.

 

Step 10: Drag and drop the Hybrid Provider from the sidebar, right-click, and choose “Create”. Create a new Hybrid Provider based on a DSO; the technical name of the provider here is HYPD. Assign the previously created DSO to this Hybrid Provider.

 

[Screenshot]

 

While creating the Hybrid Provider, the system shows a warning indicating that the DSO can no longer be used as a standalone DSO; it will behave only as part of the Hybrid Provider. The data fields and key fields of the DSO are automatically included in the Hybrid Provider.

 

 

Step 11: Once created, a system-generated InfoCube appears under the Hybrid Provider. Note that the Hybrid Provider and the InfoCube have the same description as the DSO; however, we have the flexibility to give the Hybrid Provider a new name while creating it.

 

 

[Screenshot]

 

 

Step 12: Click the “Complete Data Flow” icon, as shown below, so that the system creates the DTP and transformation for the data flow automatically; then activate the flow.

 

[Screenshot]

 

 

Step 13: Once the transformation and DTP are active, we need to assign the RDA InfoPackage and the RDA DTP to an RDA daemon. Right-click the RDA InfoPackage and select “Assign RDA Daemon”; this navigates to the RDA monitor. Create a daemon using the create button at the top left, and then assign both objects to it.

 

[Screenshot]

 

 

Step 14: Create the RDA daemon. In the daemon settings, specify the technical number and a short description; the period defines the interval after which the daemon repeats its execution.

 

[Screenshot]

 

We can see that both the InfoPackage and the DTP are listed under the RDA daemon.

 

 

Step 15: Drill down to the InfoCube menu, click the DTP, and then click “Process Chain Maintenance”. This opens a system-generated process chain containing the DTP from the DSO to the cube.

 

 

[Screenshot]

 

 

Step 16: Below is the process chain automatically created by the system.

 

[Screenshot]

 

 

Step 17: Go to transaction RSRDA (the RDA monitor). Run the daemon, and the real-time data is updated from the source system into the DSO.

 

[Screenshot]

 

 

[Screenshot]

 

 

The new data updated in the DSO is loaded into the InfoCube once this process chain has run.

Below, the process chain has run successfully.

 

[Screenshot]

 

 

In this way, real-time data is updated from the source system into the BW system. Real-time data updating works similarly to delta functionality: whenever users create new data in the source system, it is automatically updated in the BW target system.





http://scn.sap.com/community/data-warehousing/netweaver-bw/blog/2012/05/23/sap-bw-73-hybrid-provider?utm_source=twitterfeed&utm_medium=twitter

Usage of BW7.3 Transformation Rule Type “Read from DataStore”

It is quite a common requirement in BW to load data from point A to point B while performing a lookup on a DSO to fetch a bunch of fields from there.
This is usually implemented as follows: a SELECT statement in the transformation start routine picks up data from the DSO and fills an internal table, and an end routine (or field-level routines) populates the target fields by reading that internal table.
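For reference, here is a minimal sketch of that classic manual pattern as it would look in a BW 7.x transformation routine pair. It assumes the active table of the lookup DSO follows the standard naming convention (/BI0/AFIAR_O0900 for 0FIAR_O09), that SOURCE_PACKAGE and RESULT_PACKAGE carry the fields shown, and that the buffer table is declared in the routine's global section; all names are illustrative, not taken from the original post.

* Global section of the transformation class: a buffer table visible
* to both the start routine and the end routine.
TYPES: BEGIN OF ty_lookup,
         c_ctr_area TYPE /bi0/oic_ctr_area,
         debitor    TYPE /bi0/oidebitor,
         cred_limit TYPE /bi0/oicred_limit,
         currency   TYPE /bi0/oicurrency,
       END OF ty_lookup.
DATA gt_lookup TYPE SORTED TABLE OF ty_lookup
               WITH UNIQUE KEY c_ctr_area debitor.

* Start routine: read the lookup DSO once per data package.
* /BI0/AFIAR_O0900 is the active table of 0FIAR_O09 (assumed name).
IF source_package IS NOT INITIAL.
  SELECT c_ctr_area debitor cred_limit currency
    FROM /bi0/afiar_o0900
    INTO CORRESPONDING FIELDS OF TABLE gt_lookup
    FOR ALL ENTRIES IN source_package
    WHERE c_ctr_area = source_package-c_ctr_area
      AND debitor    = source_package-debitor.
ENDIF.

* End routine: fill the two target fields from the buffer.
* <result_fields> is the field symbol predeclared in the routine frame.
DATA ls_lookup TYPE ty_lookup.
LOOP AT result_package ASSIGNING <result_fields>.
  READ TABLE gt_lookup INTO ls_lookup
       WITH TABLE KEY c_ctr_area = <result_fields>-c_ctr_area
                      debitor    = <result_fields>-debitor.
  IF sy-subrc = 0.
    <result_fields>-cred_limit = ls_lookup-cred_limit.
    <result_fields>-currency   = ls_lookup-currency.
  ENDIF.
ENDLOOP.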


In keeping with the general BW 7.3 theme of automating common scenarios, a new transformation rule type has been introduced to do this automatically.
To take this new feature out for a spin, I created a DSO loosely based on the 0FIAR_O03 DSO. My DSO had the key fields Company Code, Customer (0DEBITOR), Fiscal Period, Fiscal Variant, Accounting Document Number, Item Number and Due Date Item Number. It also had the data fields Credit Control Area, Debit/Credit Amount, Local Currency, Credit Limit and Currency.


I created a flat file DataSource which did not contain any fields for Credit Limit and Currency. The objective was to derive these two fields in the transformation from the Credit Management Control Area Data DSO (0FIAR_O09). To begin with, this is what the transformation from the DataSource to the custom DSO looked like.

[Screenshot]

To perform the lookup, the key fields of the lookup DSO first have to be identified. The key fields of the 0FIAR_O09 DSO are Credit Control Area and Customer Number (0C_CTR_AREA and 0DEBITOR); the lookup logic will search the 0FIAR_O09 DSO based on these two fields. To set this up, the Credit Control Area and Customer from the DataSource should be mapped to the Credit Limit key figure in the target.

The first step in the rule details is to specify the DSO from which the field values will be picked up – in this case, 0FIAR_O09. Next, the “IOAssgnmnt” column must be filled manually with the names of the InfoObjects. It is important that ALL the key fields of the lookup DSO are specified.

[Screenshot]


In a nutshell, the above screen tells the system to derive the value of 0CRED_LIMIT (the target field) from the 0FIAR_O09 DSO (the lookup DSO) based on the C_CTR_AREA and DEBITOR values coming in from the source, which correspond to the 0C_CTR_AREA and 0DEBITOR InfoObjects of the lookup DSO.
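Conceptually, the rule then behaves per record like the following sketch. This is not the code BW actually generates; the table and field names are the same assumptions as in the manual-pattern sketch above.

* Per-record semantics of the "Read from DataStore" rule (conceptual).
* With the full key supplied the read is unique; with a partial key,
* the first matching record would win (see the caveats below).
SELECT SINGLE cred_limit
  FROM /bi0/afiar_o0900                   " active table of 0FIAR_O09
  INTO result
  WHERE c_ctr_area = source_fields-c_ctr_area
    AND debitor    = source_fields-debitor.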


The 0CURRENCY target field also needs to be mapped similarly.

 

This is how the transformation looks after we're done. Note the "DSO" icon that appears next to Credit Limit and Currency in the target of the transformation.

[Screenshot]


Once this is done, run the DTP. The transformation will perform the lookup and populate the values. Activate the data when the load completes.
Now to verify the data: the flat file contained the following values, which were loaded into the PSA. Observe that there is no Credit Limit data in this file.

 

[Screenshot]

In the 0FIAR_O09 DSO, the following values were present.

[Screenshot]

After the load, this is how the data in the DSO looks.

 

[Screenshot]
As the screenshot shows, the transformation rule has correctly picked up the Credit Limit from the 0FIAR_O09 DSO.


A few caveats are in order about this feature.

  • All the key fields of the lookup DSO must be specified. If a partial key is specified (for instance, if we had mapped only 0DEBITOR in the source fields of the transformation rule), the system will assign the value from the first record it finds in the lookup DSO.
  • The InfoObject assignment for the source fields must use exactly the same names as the corresponding InfoObjects in the lookup DSO. If the InfoObject in the lookup DSO were 0CRED_LIMIT and the target InfoObject of the transformation rule were 0VALUE_LC, this technique could not be used, as the InfoObjects differ.
  • The target InfoObject is filled from the value of the InfoObject with the same name in the lookup DSO; in other words, 0CRED_LIMIT is filled based on the value of 0CRED_LIMIT in 0FIAR_O09. If 0CRED_LIMIT did not exist in the lookup DSO, the system would throw an error during transformation activation.

 

Essentially, this feature is most useful for simple lookups, for instance getting field X from DSO Y based on lookup field Z and writing it out to field X of the target. However, it may not be the best solution for more complex requirements which involve:

  • Pulling multiple records from the lookup DSO and taking the first or last record found in the set.
  • A lookup DSO in which the field you want has a different name.





All You Need to Know about HybridProvider in BW 7.30

Rakesh Kalyankar, Article (PDF, 1 MB), 14 November 2011

Overview

The paper provides a detailed description of the following aspects of HybridProviders: purpose, use cases, metadata, modeling, usage, and technical details.



BW 7.30: New MDX test environment
Roman Moehl, SAP AG
Posted on Apr. 08, 2011 in Analytics, Business Intelligence (BusinessObjects), Enterprise Data Warehousing/Business Warehouse, Standards

 
 

Abstract

SAP NetWeaver BW 7.30 introduces a new test transaction for creating, running and analyzing MDX statements. The following article explains how you can use the test environment to efficiently use the query language MDX.

Motivation

MDX is a query language for accessing multidimensional data sources. It is the central technology of SAP BW’s Open Analysis Interface.

MDX allows dimensional applications to access OLAP data in a generic, standards-based way. Besides external reporting clients from other vendors, MDX is also used by SAP’s own products, for example by BusinessObjects Web Intelligence or BPC.

As a language, MDX offers a variety of functions that can result in very complex statements. Customers and client applications that create their own statements often lack good editing and tool support. Therefore, SAP BW 7.30 offers a new test transaction for composing, executing and analyzing MDX statements.
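To make this concrete, here is a hedged sketch of how a statement reaches that interface programmatically, via the documented OLAP BAPIs of the Open Analysis Interface that MDXTEST exercises. The function module names are real, but the structure and parameter names (BAPI6111MDX, DATASETID) are written from memory and should be verified in SE37 on your release; the cube and characteristic names are placeholders, not content from this article.

* Sketch: execute an MDX statement via the OLAP BAPI layer.
* Verify parameter/structure names in SE37; [$YOUR_CUBE] and
* [0YOUR_CHAR] are placeholders.
DATA: lt_mdx     TYPE STANDARD TABLE OF bapi6111mdx,
      ls_mdx     TYPE bapi6111mdx,
      lv_dataset TYPE bapi6111-datasetid,
      lt_return  TYPE STANDARD TABLE OF bapiret2.

ls_mdx-line = 'SELECT {[Measures].MEMBERS} ON COLUMNS,'.
APPEND ls_mdx TO lt_mdx.
ls_mdx-line = '[0YOUR_CHAR].MEMBERS ON ROWS FROM [$YOUR_CUBE]'.
APPEND ls_mdx TO lt_mdx.

CALL FUNCTION 'BAPI_MDDATASET_CREATE_OBJECT'
  IMPORTING
    datasetid    = lv_dataset
  TABLES
    command_text = lt_mdx
    return       = lt_return.

CALL FUNCTION 'BAPI_MDDATASET_SELECT_DATA'
  EXPORTING
    datasetid = lv_dataset
  TABLES
    return    = lt_return.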

The new test transaction MDXTEST is typically used by developers (working on MDX-based integration for SAP BW), administrators and consultants.

Hands on MDXTEST

The new test transaction MDXTEST consists of three parts:

  1. Pane Section
  2. Editor
  3. ResultSet Renderer

MDXTEST Overview

Pane section

The pane section on the left side of the transaction consists of three sub sections.

Pane section

Metadata browser

The metadata browser exposes the ODBO-related metadata of the selected cube. Selected objects (for example, members or hierarchies) can be dragged onto the MDX editor, which improves and accelerates the construction of MDX statements: the user sees all objects available for defining statements.

Function library

The function library provides a list of all available MDX functions and methods. For each function or method, a corresponding code snippet can be added to the editor by drag and drop. The functions in the browser are arranged by their return types, for example Member, Tuple or Set.

Statement navigator

The statement navigator provides a list of stored statements. Double-clicking a statement reads it from the persistence layer and displays it in the MDX editor. This allows the user to easily find stored MDX statements.

Editor

The central part of the test transaction is the editor pane. The editor provides a set of functionality known from the ABAP editor, such as a mono-spaced font for indentation and formatting, line numbering, and drag-and-drop of function templates.

Editor

Pretty Printer

Most MDX statements are generated by clients and are often not in a readable format; most need to be formatted manually to get a better understanding of the statement structure. In addition, such statements are typically quite complex and often consist of a composition of multiple functions, so formatting and restructuring them consumes a lot of time. A built-in pretty printer transforms the text into a “standard” formatting.

ResultSet Renderer

The result of an MDX query is displayed in a separate window so that the statement and its result can be analyzed in a decoupled way. Besides the data grid, additional information about the axes and details about MDX-specific statistics events are added to the query result.

ResultSet Rendering

Executing an MDX statement

Once you’ve constructed your MDX statement in the editor, there are two ways of executing it:

  1. Default: The status bar provides a default execution button. The statement is executed via the multidimensional interface with the default settings.
  2. Expert mode: If you need to run the MDX statement via a different interface, the expert mode is the right choice. It is available via a button right next to the default execution button.

The expert mode provides the following options:

  • Interface: It’s possible to run an MDX statement via several APIs. The most common is the default multidimensional API; in addition, the statement can be run via the flattening or XML/A interface.
  • Row restriction: The flattening API allows you to restrict the range of rows to be retrieved. Besides fixed from- and to-numbers, it’s also possible to define a fixed package size. This setting is only available if the flattening API is chosen.
  • Display: The rendering of the result can be influenced by the display setting. In general, you can switch off the default HTML rendering; this can be handy if you run performance measurements and would like to exclude the rendering overhead.
  • Debug settings: There are a couple of internal MDX-specific debug breakpoints which are typically only used by SAP support consultants.

Summary

In this article, you learned about the new central UI for testing MDX statements. The various components of the test environment support you in creating, executing and testing MDX with minimal effort.

Roman Moehl is a senior developer in SAP NetWeaver BW.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/23519%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529


BW 7.30: Modeling integration between SAP Business Objects Data Services and BW | Part 2 of 2 - Create DataSource
Thomas Rinneberg, SAP AG
Posted on Mar. 07, 2011 in Enterprise Data Warehousing/Business Warehouse

 
 

In my last blog, I showed how a Business Objects Data Services source system can be created in BW 7.30. Today I will tell you how to continue by accessing a particular table in the remote MySQL database that we connected via Data Services.

We stopped after the successful creation of the source system. Now let’s double-click it and, as for any other source system, jump to the corresponding DataSource tree. Disappointingly, there is no DataSource available yet, so let’s create one and go to the extraction tab!

Data Source Extraction Tab

Though in general this DataSource looks similar to any other, there is a new adapter called “Load using RFC from Data Services”. We can specify a source object, and there are three different value-help buttons beside this field. Let’s type “*sales*” (our table is a sales table ;-) and try the first button (search):

Source Object Value Help: Search

Yes, this is the one we are looking for. Anyhow, what does the second button (overview) bring?

Source Object Value Help: Overview

A hierarchical overview of all tables. If we expand the node, we find our table again:

Source Object Value Help: Overview

Let me skip the third button for the moment, select our table, and go to the next tab of the DataSource maintenance (Proposal).

Data Source Proposal Tab

This action does two things. First, the list of fields is retrieved from Data Services. Second, the table definition is imported into the Data Services repository attached to our source system, because Data Services does something quite similar to BW: a metadata upload from the source into the repository. Now we understand the third button on the previous tab: it lists all sources which are already imported into the repository. This option is useful because, for big source systems, retrieving the already imported tables from the repository can be a lot faster than browsing through all tables of the source, and the list is probably much smaller.

Source Object Value Help: List imported objects

We can now go to the fields tab and finalize the maintenance of the DataSource as usual (e.g. make PRODID a selectable field), then save and activate. This generates structures, the PSA and the program, but does not perform any action in the Data Services repository.

The next thing to do is create an InfoPackage. For loading from Data Services, as from any BAPI source system, an InfoPackage is mandatory, because the data is actively sent to BW rather than pulled, and hence a DTP cannot access it.

InfoPackage: Data Selection

Entering selections in the InfoPackage when loading from Data Services was not possible in prior releases, because the selection condition is part of the query transform in the Data Services data flow, not a variable supplied when starting the job. Now, however, saving the InfoPackage generates the data flow in the first place. Hence we have the possibility to generate the where-clause into the query transform, reflecting the selection condition entered in the InfoPackage.

On the extraction tab, the information from the extraction tab of the DataSource is repeated as usual. Let me go to the 3rd-party selections.

InfoPackage: 3rd Party Selections

None of the fields is input-enabled. The repository and the JobServer are copied from the source system attributes you maintained when creating the source system, and the job name is generated. Each InfoPackage generates a separate data flow and job named infopackage@bw-system. Thus you have no trouble with transports: even if you use the same repository to connect to your productive and your development BW, the generated jobs are named differently and do not interfere. You can simply transport the InfoPackage; the job and flow are generated automatically when the InfoPackage is first saved or executed. If the InfoPackage and DataSource definitions do not change (i.e. the selection conditions also stay the same), the job and flow are generated only once. Whenever something changes (e.g. with dynamic selection conditions), the job and flow are re-generated before the data load request is created.

Generated Data Services Data Flow

One remark on the field “Maximum connections”: this is the degree of parallelism to be used when loading data to BW, comparable with what you can maintain in transaction SMQS for ABAP source systems. There is also a parameter for the package size, available via the menu “Scheduler” – “DataS. Default Data Transfer”. Both parameters are transferred into the generated data flow, i.e. into the BW target DataSource.

Now you might have one obvious question: what if you want a more complex data flow, e.g. one containing a data quality transform or joins? The answer is that in this case you must not enter a data store when creating the source system:

Source System Attributes Popup

Then the Data Services adapter is not available in the DataSource, and you have (mostly) a standard BAPI source system where you have to enter the fields of the DataSource yourself as usual:

Data Source extraction tab

You can then create your own job in Data Services (making sure it indeed loads into your DataSource) and enter the job name in the InfoPackage manually (or via value help):

InfoPackage: 3rd Party Selections

Job Name Value Help

The repository and JobServer are still copied from the source system.

 


Don't miss any of the other information on BW 7.30, which you can find here.

 

Thomas Rinneberg is a software architect in the SAP Business Warehouse staging team.


Hope you are burning to try this new feature of BW 7.30, or already did. In any case, please post your comments!

http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/23761%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529


SAP NetWeaver 7.3 in Ramp Up
Benny Schaich-Lebek, SAP
Posted on Dec. 01, 2010 in Business Process Management, Enterprise Portal (EP), SAP NetWeaver Platform

As announced at TechEd this year, SAP NetWeaver 7.3 was released for restricted shipment on Monday, November 29th. Restricted shipment, better known as "ramp up" or release to customer (RTC), means the availability of the product to certain customers for productive usage.

Unrestricted shipment is expected in the first quarter of 2011.

Here are some of the many new features:

  • Greatly enhanced Java support: Java EE5 certified, Java-only ESB and JMS pub/sub capabilities
  • Reusable business rule sets with Microsoft Excel integration
  • Enhanced standards support (WS Policy 1.2, SOAP 1.2, WS Trust 1.3, Java SE 6, JSR 168/286, WSRP 1.0, SAML 1.0/2.0)
  • Tighter integration between SAP NetWeaver Business Warehouse and SAP BusinessObjects
  • Individual and team productivity enhancements in the SAP NetWeaver Portal
  • ...and heaps of new features and enhancements in each part of the SAP NetWeaver stack!

Here is more detail by NetWeaver usage type:

Enterprise Portal

With Enterprise Workspaces, SAP provides a flexible, intuitive environment to compose content, enabling enterprise end users to integrate and run structured and unstructured assets using a self-service approach.

 

Managing and Mashing up Portal Pages with Web Page Composer
Supporting business key users in the easy creation and management of enriched portal pages, blending business applications and user-generated content, generating a truly flexible UI.

 

Unified Access to Applications and Processes with Lower TCO
Delivering a best-of-class integration layer for SAP, Business Objects and non-SAP applications and reports while maintaining low TCO, with capabilities such as advanced caching, integration with the SAP central transport system, and significant performance and scalability improvements; a common Java stack and improved server administration and development environment.

 

Portal Landscape Interoperability and Openness
Providing industry-standards integration capabilities for SAP and non-SAP content, both into the SAP Portal and for 3rd-party portals, such as JSR and Java 5 support, or open APIs for navigation connectors.

Business Warehouse

Scalability and performance have been enhanced for faster decision making: count in remarkably accelerated data loads, a next level of performance for BW Accelerator, and support for Teradata as an additional database for SAP NetWeaver BW. Flexibility is increased by further integration of the SAP BusinessObjects BI and EIM tools, with tighter integration with SAP BusinessObjects Data Services and SAP BusinessObjects Metadata Management. Configuration and operations were simplified with the new Admin Cockpit integrated into SAP Solution Manager; wizard-based system configuration was also introduced.

Process Integration

PI has introduced the availability of a high number of solutions to allow out-of-the-box integration: for SAP applications there is prepackaged process integration content semantically interlinked with SAP applications and industry solutions, and for partners and ISVs SAP provides certification programs that help to ensure quality.

There is ONE platform (not several) to support all integration scenarios: A2A, B2B, interoperability with other ESBs, SOA, and so forth.

In addition, there is support for replacing third-party integration solutions to lower TCO, and interoperability with other ESBs to protect investments.

Broad support for operating environments and databases is also available.

Business Process Management/CE

With the WD/ABAP integration, you can browse the WD/ABAP UI repository of a backend system and use a WD/ABAP UI in a BPM task.

The API for managing processes and tasks starts process instances, retrieves task lists, and executes actions on tasks.

With the business rule improvements, you can now reuse rules or decision tables across rule sets. Along with this come other usability and developer-productivity enhancements.

With zero configuration for local services, a big improvement in the simplification of SOA configuration was achieved.

Mobile

In the new version, operational costs are reduced through optimized monitoring and administration capabilities. Robustness has been enhanced through improved security and simplified upgrades. There is greater flexibility regarding backend interoperability through web service interfaces and multiple backend connectivity.

More information is available on the SDN pages for SAP NetWeaver 7.3 or in the NetWeaver 7.3 manuals in the SAP Help Portal.

Benny Schaich-Lebek is a product specialist in SAP NetWeaver product management.



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22371%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529