Technique/SAP BW2011. 11. 25. 21:59

Functional Module Based Delta Enabled Generic Datasource

     Debjit Singha (L & T Infotech)    Article     (PDF 747 KB)     08 July 2011
     Overview

This document explains the process of creating a delta-enabled generic DataSource based on a function module. It walks through the steps required to use RSAX_BIW_GET_DATA_SIMPLE to create a delta-enabled extractor, covering everything from the creation of the dummy transparent table to enabling delta on the DataSource. It also describes auxiliary steps such as creating the table maintenance dialog and a transaction code for direct data entry. If you are looking for all the steps involved in creating a delta-enabled generic DataSource based on a function module, this paper will help you do that.

Posted by AgnesKim
Technique/SAP BW2011. 11. 25. 21:53
     Roland Kramer    Presentation     (PDF 6 MB)     20 August 2011
 

Overview

A complete overview of the system copy process using the recommended export/import method provided with the SAPinst application, including all pre- and post-steps for a successful BI system copy, e.g. as a milestone prior to a technical upgrade.

Posted by AgnesKim
Technique/SAP BW2011. 11. 25. 21:45

Repeat Delta Elucidate From OLTP to BW System
vijayGM 
Company: YASH TECHNOLOGIES Pvt Limited
Posted on Nov. 11, 2011 03:34 AM in
Beginner, Business Process Expert, Business Process Modeling, CRM, Financial Excellence

URL: http://wiki.sdn.sap.com/wiki/display/NWTech/Repeat+Delta+Elucidate+From+OLTP+to+BW+System

 
 


Applies to: SAP BW3.5, BI7.0

 

For more information, visit the Business Intelligence homepage.

 

Summary

In some business processes, data loading is carried out on an hourly basis, and a timestamp field is often required as the delta criterion for generic delta extraction. However, in many tables no timestamp field is available; instead, the creation/change date and time are stored separately, while generic delta can only work on a single field. This article explains how exactly a delta request from the BW system is processed on the OLTP server and which dependent fields are combined to fill the TIMESTAMP field in the extractor at runtime.

 

Author(s):    VIJAY.G.M

Company:    Yash Technologies Private Limited

Created on:  3rd October 2011

 

http://www.erphowtos.com/guides-a-tutorials/doc_view/534-generic-extraction-using-function-module-fm.html 

According to the document linked above, the OLTP system pulls the delta records based on the following ABAP selection condition.

However, that document does not show how this selection is filled on the OLTP side when the BW system requests data. This document elaborates on the technical background, based on debugging observations in the extractor checker.

 

( erdat >= startdate and ( erfzeit >= starttime OR ( erdat <= enddate and erfzeit <= endtime ) ) )

OR

( aedat >= startdate and ( aezeit >= starttime OR ( aedat <= enddate and aezeit <= endtime ) ) )

Some major observations from the above table:

  1. During initialization, the lower limit value is blank and the upper limit is the current time.
  2. During the delta request, the lower limit value was 30 minutes older than the upper limit of the previous (i.e. init) request. This is because a lower safety interval of 1800 seconds had been set; the lower limit was therefore taken as the previous upper limit minus 30 minutes (1800 seconds).
  3. There is a time difference of +5:30 hours (India) between the "Time of extraction" column (which shows the system time when the delta request was received in the source system) and the Low and High fields. This is because the timestamp is configured as a UTC timestamp while the time zone of the system is UTC+5:30 (five hours thirty minutes ahead of UTC). The same difference exists between the Low and High fields and the Start/End Date & Time fields, which are obtained when the low and high timestamps are split; that difference is also due to the time zone (see the sketch below).
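
For reference, the date/time split described in observation 3 can be reproduced with a few lines of ABAP. This is only a minimal sketch; the timestamp value and the use of sy-zonlo as the local time zone are assumptions for illustration.

DATA: lv_ts   TYPE timestamp,
      lv_date TYPE sy-datum,
      lv_time TYPE sy-uzeit.

lv_ts = '20111003123000'.                        " example UTC timestamp from the delta request

* Convert the UTC timestamp into the source system's local date and time
CONVERT TIME STAMP lv_ts TIME ZONE sy-zonlo
        INTO DATE lv_date TIME lv_time.

WRITE: / 'Local date:', lv_date, 'local time:', lv_time.   " 18:00:00 for UTC+5:30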

 

What are SMQ1 and RSA7?

 

SMQ1 (Outbound Queue) is the physical storage for all transactions created in the source system.

The delta queue (RSA7) is a virtual store that displays the open and unprocessed LUWs for the actively initialized DataSources in the source system and fetches the data from the SMQ1 physical storage. In addition to the default structure of the DataSource, there are five additional fields that are populated on the fly in the delta queue.

The following fields are populated on the fly for all DataSources that use the delta queue for delta processing, irrespective of whether the DataSource is application-specific (standard) or generic:

 

  • Host ID
  • Process ID
  • Time Stamp
  • Counter
  • Counter

 

As shown in the screenshot below, the highlighted fields are populated on the fly when the delta is created for the particular DataSource.

If the extractor uses a function module when data is requested from BI

In this case only the repeat delta is visible in RSA7, because the delta queue only stores the records of the last delta until the next delta is requested. For example, if the DataSource delta type is AIE (after image via extractor), the fields specified above are filled while the data is being loaded into BW.

 

When the BW system requests the initialization or a delta update, two tables are updated immediately, one in each system, to keep the extraction consistent.

 

ROOSPRMSC ----------- OLTP System Table

 

RSSDLINIT     -----------BW System Table

 

The OLTP system table ROOSPRMSC contains the following information.

The BW system table RSSDLINIT contains the following information.

What is a LUW and how is it processed when we request delta data from BW?

LUW stands for logical unit of work.

LUW Processing

The qRFC outbound queue is controlled by an Outbound Scheduler (QOUT Scheduler). The QOUT Scheduler triggers the transfer of a LUW to a target system once all previous LUWs in this queue have been processed. When one LUW has been executed, the QOUT Scheduler automatically executes the next LUW in the queue.

In other words, when we request a delta load from BW, the source system identifies the last delta records, which are stored in the form of TIDs, by using the ROOSPRMSC table; it then deletes the previously confirmed LUWs (repeat delta) and processes the new LUWs (delta).

How does the source system identify delta records? What are GETTID and GOTTID?

The ROOSPRMSC table is used to identify the last delta request and the last delta LUW that has been loaded into BW.

ROOSPRMSC: Control Parameter per Data Source Channel

This table stores all control parameters related to a data load.

Table fields and importance

INITRNR: This field provides the initialization request number

DELTARNR: This field provides the last delta request number

UTC Timestamp: This field provides the timestamp of the last delta request.

GETTID: This field refers to the last but one delta TID

GOTTID: This field refers to the last delta TID (the one that has reached BW)

The system deletes the LUWs greater than GETTID and less than or equal to GOTTID.

The next delta starts with the TID succeeding GOTTID; refer to the screenshots above.
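
For a quick check of this bookkeeping in the source system, the table can also be read directly. The following is only a minimal sketch; the DataSource name, the logical system name and the exact key fields of ROOSPRMSC used here are assumptions that should be verified in SE11.

DATA ls_prmsc TYPE roosprmsc.

SELECT SINGLE * FROM roosprmsc INTO ls_prmsc
  WHERE oltpsource = 'ZDS_SALES_HDR'          " DataSource name (example value)
    AND rlogsys    = 'BWDCLNT100'.            " receiving BW system (example value)

IF sy-subrc = 0.
  WRITE: / 'Init request :', ls_prmsc-initrnr,
         / 'Last delta   :', ls_prmsc-deltarnr,
         / 'GETTID       :', ls_prmsc-gettid,
         / 'GOTTID       :', ls_prmsc-gottid.
ENDIF.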

What is the repeat delta and how does it work?

The data is stored in compressed form in the delta queue. It can be requested from several BI systems. The delta queue is also repeat-enabled; it stores the data from the last extraction process. The repeat mode of the delta queue is specific to the target system.

The example screenshot above shows the repeat-delta LUWs that were loaded into BW in the previous extraction; this repeat delta is deleted at the time of the next delta request.

Delta steps:

  1. Identify the previous delta LUWs (repeat delta)
  2. Delete the repeat-delta LUWs
  3. Confirm the unprocessed delta LUWs
  4. Process the unprocessed LUWs

 

What is TID?

The TID is a concatenation of the IP address of the system in which the record was created, the dialog work process used when the record was created, the timestamp at which the data was posted into SMQ1, and a sequential record number.

In other words,

TID: ARFCIPID + ARFCPID + ARFCTIME + ARFCTIDCNT

TID = Host ID (IP ID) + Process ID + Timestamp + Transaction ID (LUW -> COMMIT WORK)

  • Host ID = IP address of the system
  • Process ID = process ID of the LUW (hexadecimal format); the dialog work process number that appears in decimal format in SM51 is saved here in hexadecimal format
  • Timestamp = the time at which the delta record was posted into the outbound queue (SMQ1), stored as a UNIX timestamp in hexadecimal format
  • Host ID, process ID and timestamp are saved in the tables below, respectively.

ARFCSSTATE, ARFCSDATA, TRFCQOUT
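
To make this composition tangible, a TID can be cut into its four parts with simple offset/length access. This is only a minimal sketch; the example TID value and the component lengths (8 + 4 + 8 + 4 characters for ARFCIPID, ARFCPID, ARFCTIME and ARFCTIDCNT) are assumptions for illustration.

DATA: lv_tid  TYPE c LENGTH 24 VALUE '0A1E2B3C12344A4D88090001',
      lv_ipid TYPE c LENGTH 8,
      lv_pid  TYPE c LENGTH 4,
      lv_time TYPE c LENGTH 8,
      lv_cnt  TYPE c LENGTH 4.

lv_ipid = lv_tid+0(8).      " Host ID (IP address in hex)
lv_pid  = lv_tid+8(4).      " Process ID (work process in hex)
lv_time = lv_tid+12(8).     " UNIX timestamp in hex (see converter below)
lv_cnt  = lv_tid+20(4).     " transaction counter

WRITE: / lv_ipid, lv_pid, lv_time, lv_cnt.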

The delta queue is constructed from three tables:

  1. ARFCSDATA: the raw data, based on the extract structure, but compressed.

  2. ARFCSSTATE & TRFCQOUT: pointer tables used to access and control the flow of data to multiple BW systems.

UNIX hexadecimal timestamp converter

http://dan.drydog.com/unixdatetime.html

Example: 4A4D8809 = Friday, July 03, 2009 4:24:41 AM UTC (GMT).
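
The same conversion can be done directly in ABAP. This is only a minimal sketch (not SAP-delivered code): the hex value is interpreted as seconds since 01.01.1970 UTC and split into date and time.

DATA: lv_hex  TYPE x LENGTH 4 VALUE '4A4D8809',
      lv_secs TYPE i,
      lv_date TYPE sy-datum,
      lv_time TYPE sy-uzeit.

lv_secs = lv_hex.                          " hex -> integer: seconds since 1970-01-01 UTC
lv_date = '19700101'.
lv_date = lv_date + lv_secs DIV 86400.     " add the whole days
lv_time = lv_secs MOD 86400.               " remaining seconds since midnight

WRITE: / lv_date, lv_time.                 " 03.07.2009 04:24:41 (UTC)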

Extractor Debugging Process.

 

  • When the BW system requests the data load, the lower timestamp is filled with the help of the last delta request (last extraction date = start of the next extraction). For reference, execute the extractor in debug mode with the following steps.
  • In debug mode, concentrate on the function module RSA2_SERV_GET_OLTP_SOURCE; there the extractor executes the form TIMESTAMP_CALCULATE (line number 535).
  • There an incremental conversion is calculated at runtime from the lower and upper timestamps and passed into the extractor's I_T_SELECT interface structure; based on this selection, the function module logic processes the records, as sketched below.

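Inside a generic extractor this selection arrives as ranges. The following minimal sketch, loosely based on the RSAX_BIW_GET_DATA_SIMPLE template, shows how the TIMESTAMP selection filled by BW could be copied from I_T_SELECT into a range table; the table ZSALES_HDR and its field TIMESTAMP are assumptions for illustration.

DATA l_s_select TYPE srsc_s_select.
RANGES l_r_timestamp FOR zsales_hdr-timestamp.

* First call: copy the selection criteria that BW filled at runtime
LOOP AT i_t_select INTO l_s_select WHERE fieldnm = 'TIMESTAMP'.
  MOVE-CORRESPONDING l_s_select TO l_r_timestamp.
  APPEND l_r_timestamp.
ENDLOOP.

* Data calls: use the range in the extractor's own selection, e.g.
* SELECT ... FROM zsales_hdr WHERE timestamp IN l_r_timestamp.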
 
In this way we can observe how the timestamp is filled by the BW system.

SAP delivers the standard ABAP program 'RAC1_DIAGNOSIS', which diagnoses the consistency of delta extraction and loading.

Based on these observations, missing delta records can easily be identified.

 
Reference Doc: Note 583086 - Diagnostic program for BW Delta Queue

References: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/40427814-376a-2c10-5589-bc1aaa6692c3?QuickLink=index&overridelayout=true

http://www.erphowtos.com/guides-a-tutorials/doc_view/534-generic-extraction-using-function-module-fm.html 

Business Intelligence homepage.


vijayGM   SAP BI/ABAP Consultant



Posted by AgnesKim
Technique/SAP BW2011. 11. 25. 21:05

Using Nested Exception Aggregation in BEx Reports- Scenario

VaraPrasad KVS    Article     (PDF 1 MB)     25 October 2011

Overview

This article addresses the requirement of nested exception aggregation in BW/BI reports. It explains how to use formula variables with the replacement-path processing type, how to use IF-ELSE conditions in calculated key figures, and how to create and use nested exception aggregation, illustrated with a scenario.


Posted by AgnesKim
Technique/SAP BW2011. 8. 8. 14:01

Execute a BW query by excluding values from another BW query.
Bhushan Suryavanshi 
Company: Bombardier Aerospace
Posted on Aug. 07, 2011 02:12 AM in ABAP, Analytics, Enterprise Data Warehousing/Business Warehouse

 
 

Motivation

In SAP BW there is usually no easy way of performing set operations (union, intersection, outer joins etc.) on two queries; this is usually a manual activity, and post-processing of two query results is difficult. InfoSets can be used in some scenarios, but they need additional modelling steps and are not as flexible. This blog illustrates how we can fetch values from one query and then exclude those values from another query.

Business Example

Using the technical content, you want to find all InfoCubes that exist in the system but from which no data has been fetched over the last two years (via queries, say).

Let X = {A1, A2, .. AN} be a query that delivers all the cubes in the system.

Let Y = {A1, A2, A3} be the query that delivers all the cubes in use over the last two years i.e. some data was fetched from them.

You want to find Z = X - Y = {A4, A5 .. AN}.

Steps

1) In query X, create a customer exit variable of type Selection option on characteristic A.

 

2) Write the customer exit code in function EXIT_SAPLRRS0_001, include ZXRSRU01. We calculate the variable values at i_step = 1. (How customer exit variables are processed is out of scope of this blog.) A minimal frame is sketched below.

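The following minimal sketch shows only the frame inside include ZXRSRU01; the variable name ZV_EXCLUDE_CUBES is an assumption for illustration, and the actual logic is listed in steps 3) to 7) below.

CASE i_vnam.
  WHEN 'ZV_EXCLUDE_CUBES'.          " exit variable created in step 1
    IF i_step = 1.                  " filled before the variable screen is processed
*     Execute query Y and append its result to e_t_range with sign 'E'
*     (see the code in steps 3 to 7).
    ENDIF.
ENDCASE.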
 

3) Code sample

Initialization of the request object (every query execution is represented in BW via a request object).

DATA: l_r_request TYPE REF TO cl_rsr_request,
      l_s_compkey TYPE rszcompkey.

*** Initialization with query Y to be executed from within the ABAP code ***
l_s_compkey-objvers  = rs_c_objvers-active.
l_s_compkey-infocube = 'INFOCUBE'.
l_s_compkey-compid   = 'QUERY_Y'.                 " executing query Y
l_s_compkey-comptype = rzd1_c_comptp-report.

 

4) Get the internal query id for QUERY_Y

* Get the compkey
CALL FUNCTION 'RRI_REPDIR_READ'
  CHANGING
    c_s_repkey      = l_s_compkey
  EXCEPTIONS
    entry_not_found = 1.

IF sy-subrc <> 0.
  MESSAGE s792(brain) WITH l_s_compkey-compid.
  EXIT.
ENDIF.

 

5) Create the Request object

CREATE OBJECT l_r_request
  EXPORTING
    i_genuniid = l_s_compkey-genuniid
  EXCEPTIONS
    OTHERS     = 1.

 

 6) Call the query Y from within ABAP code

* get the query definition
CALL METHOD l_r_request->get_initial_state
  IMPORTING
    e_t_dim    = l_t_dim
    e_t_mem    = l_t_mem
    e_t_cel    = l_t_cel
    e_t_atr    = l_t_atr
    e_t_con    = l_t_con
    e_t_fac    = l_t_fac
    e_t_prptys = l_t_prptys
  EXCEPTIONS
    x_message  = 8
    OTHERS     = 1.

 

* Set the request
CALL METHOD l_r_request->set_request
  EXPORTING
    i_t_dim       = l_t_dim
    i_t_mem       = l_t_mem
    i_t_cel       = l_t_cel
    i_t_atr       = l_t_atr
    i_t_con       = l_t_con
    i_t_fac       = l_t_fac
    i_t_prptys    = l_t_prptys
  EXCEPTIONS
    no_processing = 1
    x_message     = 8.

* read data
CALL METHOD l_r_request->read_data
  IMPORTING
    e_warnings    = l_warnings
  EXCEPTIONS
    no_processing = 1
    x_message     = 8.

* close the request
l_r_request->p_r_olap->free( ).

 

7) After fetching the results, assign the result set elements {A1, A2, A3} as exclude 'E' to the exit variable.

* Get the text table from the output handle of the request
ASSIGN l_r_request->n_sx_output-text->* TO <l_th_text>.

LOOP AT <l_th_text> INTO l_s_txt_n.
  CLEAR l_s_range.
  l_s_range-low  = l_s_txt_n-chavl_ext.
  l_s_range-sign = 'E'.                 " excluding
  l_s_range-opt  = 'EQ'.
  APPEND l_s_range TO e_t_range.
ENDLOOP.

......

e_t_range contains the variable values which will finally be submitted to OLAP to fetch the results of query X excluding values of Y i.e. X - Y.

Thus, in this way you can perform other set operations simply in the customer exit coding itself.

 

 P.S. For more details on executing BW queries from within ABAP, please refer: http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/43db5ee1-0701-0010-2d90-c3b991eb616c

Bhushan Suryavanshi   is a SAP BI Analyst at Bombardier Aerospace.


Source: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/25643%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Other posts in the 'Technique > SAP BW' category

Repeat Delta Elucidate From OLTP to BW System  (0) 2011.11.25
Using Nested Exception Aggregation in BEx Reports- Scenario  (0) 2011.11.25
Remodeling on DSO  (0) 2011.07.12
Remodeling in SAP BI 7.0  (0) 2011.05.11
Interrupting the Process chain  (0) 2011.05.10
Posted by AgnesKim
Technique/SAP BW2011. 7. 12. 10:34

Remodeling on DSO
Sriman Kanchukommala 
Company: Cognizant
Posted on Jul. 11, 2011 09:15 AM in Enterprise Data Warehousing/Business Warehouse

 
 

Remodeling on DSO

Remodeling Overview

Remodeling of a DSO is a new feature available from BI 7.3 that enables you to change the structure of a DSO that has already been loaded.

Note: Currently this feature supports DSOs in BI 7.3; it is not supported for InfoObjects.

Using remodeling, a characteristic can simply be deleted, or it can be added/replaced with a constant value, the value of another InfoObject, or the value of an attribute of another InfoObject.

Similarly, a key figure can be deleted or replaced with a constant value, or a new key figure can be added.

This blog describes how to add a new characteristic to a DSO using the remodeling feature and how to populate data for the added characteristic.

Note the following before you start the remodeling process:

  • Back up the existing data.
  • During the remodeling process the DSO is locked against any changes or data loads, so make sure you have finished all data loads for this DSO until the process finishes.

Note the following after you finish the remodeling process and start daily loads and querying of this DSO:

  • All objects dependent on the DSO, such as transformations and MultiProviders, will have to be re-activated.
  • Adjust queries based on this DSO to accommodate the changes made.
  • If a new field was added using remodeling, then don't forget to map it in the transformation rules for future data loads.

Initial structure of DSO

 Start the remodeling toolbox. This can be done either via Transaction Code: RSMRT or from the DSO context menu.

 

In the next screen, enter the technical name you want for the remodeling rule and, in the next box, the technical name of the DSO being remodeled. Click on the Create button.

 

 

In the next screen enter the relevant description and click on Transfer button.

 

Click on the Add Icon to create a new remodeling rule.

 

Select the radio button Add characteristic, enter/select the new characteristic.

 

Check the key field button

 

 Remodeling rule is now ready and can be scheduled. Click on the schedule button.

 

 Select the desired scheduling option. While the remodeling rule is being executed it can be monitored by clicking on the monitor button.

 

DSO structure after adding the new characteristic:

 

 Finally add the new characteristic to the query and execute it.

 

 

 

 

Sriman Kanchukommala   Mr. Srimannarayana Kanchukommala is working as a Principal Consultant in SAP BI for Cognizant Technology Solutions, with over 8 years of experience in SAP BI.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/25362%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW2011. 5. 11. 22:59

Remodeling in SAP BI 7.0
nehagarg 
Company: HCL Tech
Posted on May. 11, 2011 05:47 AM in Enterprise Data Warehousing/Business Warehouse

URL: http://help.sap.com/saphelp_nw04s/helpdata/en/9b/0bc041f123c717e10000000a155106/content.htm

 
 

Before the release of SAP BI 7.0, in the previous version 3.x, we were not able to redesign InfoCubes without deleting the existing data from the InfoCube: we had to delete all the data, modify the cube and then reload the data again. With the release of 7.0, the concept of remodeling was introduced as a feature to change the design of an InfoCube without deleting the existing data.

 

Before starting with the steps of remodeling, I would like to mention the dos & don'ts:

  • No data loading should be in process.
  • After remodeling, check the BI objects connected to the InfoProvider (transformation rules, MultiProviders, DTPs): they will have been deactivated and you have to re-activate them manually.
  • The remodeling makes existing queries that are based on the InfoProvider invalid. You have to manually adjust these queries to the remodeled InfoProvider.
  • You cannot replace or delete units. This avoids having key figures in the InfoCube without the corresponding unit.

Following are the steps to be followed

Go to the context menu of the InfoCube that is to be redesigned and select Remodeling as shown in the figure, or select the Administration tab in the leftmost area, where you can select Remodeling, or go directly to transaction RSMRT.

Fig1

Fig2

This opens the screen for remodeling the InfoCube. Enter the remodeling rule name and the InfoCube name, click on the Create button, and write a description for the remodeling rule.

Fig3

Fig4

Select the Add button shown circled in the below figure. It adds an operation into the list.

Fig5

Select the radio button as per your requirement. Here I am selecting Add Characteristic to add the InfoObject to the InfoCube.

Fig6

The InfoObject 0REGION is the new field to be incorporated into the InfoCube by reading master data from InfoObject 0SOLD_TO. Click on the Transfer tab and then save the changes.

Fig7

**We can even fix the value of the new InfoObject as a constant, do a one-to-one mapping with an existing characteristic in the InfoCube, or write a customer exit to populate the value of the new InfoObject.

Fig8

Check the consistency of the Info Cube.

Now click on the Schedule Button & execute for either “Start Immediate” or “Start Later”.

Fig9

Monitor the request.

Fig10

Now go to the InfoCube in RSA1 and check the new InfoObjects incorporated into the cube with their corresponding data.

Fig11

Fig12

Also remember to activate the transformation as well as the DTP of the InfoCube. There you can map the new InfoObjects in the transformation, but there is no need to upload all the data again.

In a similar way, we can delete characteristics from the InfoCube without deleting the existing data in the InfoCube.

nehagarg   SAP BI Consultant HCL Technologies


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/24470%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/SAP BW2011. 5. 10. 19:54

Interrupting the Process chain
YJV
Posted on May. 10, 2011 02:36 AM in Enterprise Data Warehousing/Business Warehouse

 
 

 Scenario:

Let's say there are three process chains A, B and C. A and B are the master chains, which extract data from different non-SAP source systems. All our BW chains depend on a non-SAP source system: when the jobs complete on the non-SAP systems, a message is sent to a third-party system, from where our BW jobs are triggered based on an event.

Note: The reason the non-SAP system sends the message to the third-party triggering tool is that whenever there is a failure in the non-SAP system, it will not raise the event to kick off the BW chain(s) and we would have to trigger them manually. To avoid this, we use the third-party triggering tool to trigger the chains at the same time using an event.
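
For reference, the event that the scheduling tool raises in the SAP system can also be raised from ABAP. This is only a minimal sketch; the event name Z_BW_TRIGGER is an assumption for illustration.

* Raise a background processing event (same effect as SM64 / the external tool)
CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid   = 'Z_BW_TRIGGER'
    eventparm = ' '
  EXCEPTIONS
    OTHERS    = 1.

IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised'.
ENDIF.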

Now C depends on both A and B; in other words, C has to be triggered only after A and B are completed.

We can achieve this using the Interrupt Process type.                          

For example, if process chain A has completed and B is still running, then using the interrupt we can make process chain C wait until both chains have completed.

Let’s see step by step.

 Process Chain A and B

image

 Process chain C

Process chain C is dependent on both the A and B chains; we use interrupts (A_interrupt, B_interrupt) which wait until those chains have completed.

image

Now let’s see how interrupt works

A_interrupt: Interrupting the PC C until PC A gets completed.

image

image

Copy the highlighted Event and Parameters

image

Enter the event and parameter copied above in the Interrupt process type, as in the screen below.

image

Activate and schedule all the three process chains.

Note: All three process chains (A, B, C) get triggered based on the event.

When process chain C is scheduled, you can see the job BI_INTERRUPT_WAIT in both the A and B chains, as in the screens below.

image

All three chains (A, B, C) were triggered by the same event

image

 

C will wait for both A and B like below.

image

 

 

 


YJV is an SAP BI consultant


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/24505%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/SAP BW2011. 4. 8. 23:59

BW 7.30: New MDX test environment
Roman Moehl SAP Employee 
Company: SAP AG
Posted on Apr. 08, 2011 04:00 AM in Analytics, Business Intelligence (BusinessObjects), Enterprise Data Warehousing/Business Warehouse, Standards

 
 

Abstract

SAP NetWeaver BW 7.30 introduces a new test transaction for creating, running and analyzing MDX statements. The following article explains how you can use the test environment to efficiently use the query language MDX.

Motivation

MDX is a query language for accessing multidimensional data sources. It is the central technology of SAP BW’s Open Analysis Interface.

MDX allows dimensional applications to access OLAP data in a generic and standard-based way. Besides external reporting clients from other vendors, MDX is also used by SAP’s own products, for example by BusinessObjects Web Intelligence or BPC.

As a language, MDX offers a variety of functions that potentially result in very complex statements. Customers or client applications that create their own statements often lack good editing and tool support. Therefore, SAP BW 7.30 offers a new test transaction for composing, executing and analyzing MDX statements.

The new test transaction MDXTEST is typically used by developers (working on MDX-based integration for SAP BW), administrators and consultants.

Hands on MDXTEST

The new test transaction MDXTEST consists of three parts:

  1. Pane Section
  2. Editor
  3. ResultSet Renderer

MDXTEST Overview

Pane section

The pane section on the left side of the transaction consists of three sub sections.

Pane section

Metadata browser

The metadata browser exposes the ODBO-related metadata of the selected cube. The selected objects (for example members or hierarchies) can be dragged onto the MDX editor. This improves and accelerates the construction of MDX statements, and the user sees all the objects that are available for defining statements.

Function library

The function library provides a list of all available MDX functions and methods. For each function or method, a corresponding code snippet can be added to the editor by drag and drop. The functions in the browser are arranged by their return types, for example Member, Tuple or Set.

Statement navigator

The statement navigator provides a list of stored statements. Double-clicking a statement reads it from the persistency and displays it in the MDX editor. This allows the user to easily find stored MDX statements.

Editor

The central part of the test transaction is the editor pane. The editor itself provides a set of new functionality that is known from the ABAP editor such as mono-spaced font for indentation and formatting, line numbering or drag-and-drop of function templates.

Editor

Pretty Printer

Most MDX statements are generated by clients. These statements are often not in a readable format. Most of them need to be manually formatted to get a better understanding of statement structure. In addition, the statements are typically quite complex and often consist of a composition of multiple functions. Formatting and restructuring of the statement consumes a lot of time. A built-in pretty printer transforms the text into a “standard” formatting.

ResultSet Renderer

The result of a MDX query is displayed in a separate window to analyze the statement and its result in a decoupled way. Besides the data grid, additional information about the axis and details about MDX-specific statistic events are added to the query result.

ResultSet Rendering

Executing a MDX statement

Once you’ve constructed your MDX in the editor, there are two ways of executing the statement:

  1. Default: The status bar provides a default execution button. The statements are executed via the multidimensional interface and the default settings.
  2. Expert mode: If you need to run the MDX statement via a different interface, then the expert mode is the right choice. The expert mode is available via a button right next to the default execution button.

The expert mode provides the following options:

  • Interface: It’s possible to run a MDX statement via several APIs. The most common interface is the default multidimensional API. In addition, it’s possible to run the statement via the flattening or XML/A interface.
  • Row restriction: The flattening API allows you to restrict the range of rows that are to be retrieved. Besides fixed from- and to-numbers, it is also possible to define a fixed package size. This setting is only available if the flattening API is chosen.
  • Display: The rendering of the result can be influenced by the display setting. In general, you can switch off the default HTML rendering. This might be handy if you run performance measurements and you would like to exclude the rendering overhead.
  • Debug settings: there are a couple of internal MDX-specific debug-breakpoints which are typically only used by SAP support consultants.

Summary

In this article, you learned about the new central UI for testing MDX statements. The various components of the test environment support you in creating, executing and testing MDX with minimal effort.

Roman Moehl   Roman is Senior Developer in SAP NetWeaver BW


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/23519%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW2011. 3. 27. 15:24

SAP BW - RFC Function Module Reconciliation SAP BW and ECC Sales Header Data

Suraj Tigga    Article     (PDF 664 KB)     21 March 2011 

Overview

The document provides a detailed understanding of reconciling SAP ECC sales order header data (2LIS_11_VAHDR) with the data loaded into SAP BI. Reconciliation of the data is done using an RFC function module and an ABAP report. The advantage of this method is that one can schedule the ABAP report at any specific time and get a detailed list of mismatched sales orders.

Posted by AgnesKim