
SAP BW - Excel Pivot Chart and Pivot Table Report (Excel)

Suraj Tigga (Capgemini Consulting) | Article (PDF 590 KB) | 21 March 2011


Overview

This document explains how to create an Excel Pivot Chart and Pivot Table report for SAP BW data using Microsoft Excel.


BW 7.30: Modeling integration between SAP Business Objects Data Services and BW | Part 2 of 2 - Create DataSource
Thomas Rinneberg (SAP AG)
Posted on Mar. 07, 2011 07:58 PM in Enterprise Data Warehousing/Business Warehouse

 
 

In my last blog, I showed how a Business Objects Data Services source system can be created in BW 7.30. Today I will show how to go on from there and access a particular table in the remote MySQL database that we connected via Data Services.

We stopped after the successful creation of the source system. Now let's double-click it and, as for any other source system, jump to the corresponding DataSource tree. Disappointingly, there is no DataSource available yet. So let's create one and go to the Extraction tab!

Data Source Extraction Tab

Though in general this DataSource looks similar to any other, there is a new adapter called "Load using RFC from Data Services". We can specify a source object, and there are three different value help buttons beside this field. Let's type "*sales*" (our table is a sales table ;-)) and try the first button (search):

Source Object Value Help: Search

Yes, this is the one we are looking for. Anyhow, what will the second button (overview) bring?

Source Object Value Help: Overview

A hierarchical overview of all tables. If we expand the node, we find our table again:

Source Object Value Help: Overview

Let me skip the third button for the moment and select our table, then go to the next tab of the DataSource maintenance (Proposal).

Data Source Proposal Tab

This action does two things: First, the list of fields is retrieved from Data Services. Second, the table definition is imported into the Data Services repository that is attached to our source system, because Data Services does something quite similar to BW: a metadata upload from the source into the repository. Now we understand the third button on the previous tab: it lists all sources that have already been imported into the repository. This option is useful because, for big source systems, retrieving the already-imported tables from the repository can be a lot faster than browsing through all tables of the source, and the list is probably much smaller.

Source Object Value Help: List imported objects

We can now go to the Fields tab and finalize the maintenance of the DataSource as usual (e.g. make PRODID a selectable field), then save and activate. This generates structures, the PSA and the program, but does not trigger any action in the Data Services repository.

The next thing to do is create an InfoPackage. For loading from Data Services, as from any BAPI source system, an InfoPackage is mandatory, because the data is actively sent to BW, not pulled, and hence a DTP cannot access it.

InfoPackage: Data Selection

Entering selections in the InfoPackage when loading from Data Services was not possible in prior releases, because the selection condition is part of the query transform in the Data Services data flow, not a variable passed when starting the job. Now, however, saving the InfoPackage generates the data flow in the first place. Hence we have the possibility to generate the WHERE clause into the query transform, reflecting the selection conditions entered in the InfoPackage.
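To make this concrete: if, say, a selection PRODID = '1000' is entered in the InfoPackage, the generated query transform would effectively push a condition like the following down to the source (a sketch only; the table name SALES and the field list are made up for illustration):

-- Illustrative sketch of the condition generated into the query transform
SELECT PRODID, ORDERID, AMOUNT
  FROM SALES
 WHERE PRODID = '1000';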

On the Extraction tab, the information from the Extraction tab of the DataSource is repeated as usual. Let me go to the 3rd Party Selections.

InfoPackage: 3rd Party Selections

None of the fields is input-enabled. The repository and the job server are copied from the source system attributes that you maintained when you created the source system. The job name is generated as well: each InfoPackage generates a separate data flow and job named infopackage@bw-system. This keeps you out of trouble with transports, because even if you use the same repository to connect to your productive and your development BW, the generated jobs are named differently and thus do not interfere. You can simply transport the InfoPackage; the job and flow are automatically generated when the InfoPackage is saved or executed for the first time. If the InfoPackage and DataSource definition do not change (i.e. the selection conditions also stay the same), the job and flow are generated only once. Each time something changes (e.g. with dynamic selection conditions), the job and flow are re-generated before the data load request is created.

Generated Data Services Data Flow

One remark on the field "Maximum connections": this is the degree of parallelism used to load data into BW, comparable to what you can maintain in transaction SMQS for ABAP source systems. The package size parameter is also available, via menu "Scheduler" – "DataS. Default Data Transfer". Both parameters are transferred into the generated data flow, i.e. into the BW target DataSource.

Now you might have one obvious question: what if you want a more complex data flow, e.g. one containing a data quality transform or joins? The answer: in this case, you must not enter a data store when creating the source system:

Source System Attributes Popup

Then the Data Services adapter is not available in the DataSource and you have (mostly) a standard BAPI source system, where you have to enter the fields of the DataSource yourself as usual:

Data Source Extraction Tab

You can then create your own job in Data Services (making sure it indeed loads into your DataSource) and enter the job name in the InfoPackage manually (or via the value help):

InfoPackage: 3rd Party Selections

Job Name Value Help

The repository and JobServer still are copied from the Source System.

 


Don't miss the other information on BW 7.30, which you can find here.

 

Thomas Rinneberg is a software architect in the SAP Business Warehouse Staging team.


I hope you are burning to try this new feature of BW 7.30, or have already done so. In any case, please post your comments here!

http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/23761%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529


How to find the BW 7.0/7.01 Planning URL.
Edward John (SAP)
Posted on Feb. 07, 2011 02:58 AM in Business Intelligence (BI)

 
 

I would like to introduce a new method to identify the URL generated by the system to access the Planning Modeler.



The Planning Modeler uses SSO (e.g. http://<server>/sap/public/myssocntl?sap-client=000). For that reason, you cannot access the Planning Modeler URL directly from your default browser.

This procedure is very useful if the Planning Modeler does not start or, in urgent cases, to access it without SSO.

  • Go to transaction SE24 and display the class "CL_RSPLS_WDPP"
  • Open the method "start_wdapp"
  • Add a breakpoint on the code line:

  l_url = get_url(
      i_dc = i_dc
      i_application = i_app
      i_params = i_params ).  " <- set the breakpoint here; l_url receives the URL

  • Go to transaction RSPLAN
  • Activate debugging mode by entering /h in the transaction box
  • Click the "Start Modeler" button
  • The debugger will start and stop at the breakpoint
  • Press F6 and check the content of "l_url". Its value is the Planning Modeler URL configured for that system.


Edward John graduated in Information Systems and has worked at SAP as a Business Information Warehouse consultant since 2008.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/23316%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

BW on Oracle: a performance hitch I see more and more often...
Lars Breddemann (SAP Österreich)
Posted on Jan. 20, 2011 08:51 AM in Business Intelligence (BI), Run SAP, SAP NetWeaver Platform, Software Support and Maintenance

 
 

Ok, once again reproducing a problem means a lot of (waiting) time until the query finishes.

In the meantime I can write a super-short blog post about a pretty common BW-on-Oracle performance hitch:
Inappropriate database optimizer statistics.
(Yes, 'inappropriate', not 'outdated'!)

Usually that means: if everything runs as it is supposed to, BW takes care of collecting the CBO statistics whenever large changes to the data are performed (e.g. data has been loaded or condensed).
Of course, the BW admin has to set this up in process chains or as attributes of the InfoProviders, but usually this works well.

If these BW measures somehow "forget" to update the statistics when new statistics are required, there is still the daily "Update Optimizer Statistics" run (brconnect -f stats) that should cover this.

The big exception

Unfortunately there is one setting that prevents the update of new statistics:
the exception table DBSTATC.

In this table you can set up statistics-gathering parameters that deviate from the default built-in behavior of brconnect. (This behavior is not too complex, but really does cover most SAP specifics. See the documentation for details here.)

One of the most-used exceptions here is to avoid gathering new statistics at all.
This can be useful, e.g. when a table is very volatile and you don't want your access plans changed all the time because the statistics sometimes report a small/empty and sometimes a large table.
(For such cases SAP even provides pre-defined table statistics; check SAP note 756335 on that.)

But what does this have to do with BW?

Well, before brconnect there was a different tool called SAPDBA, and this tool wasn't able to cope with BW objects.
Therefore BW development decided to build its own statistics-update coding and to prevent SAPDBA from messing around with the BW tables. For that, they automatically created entries in DBSTATC with ACTIVE=I.
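If you want to check whether such legacy entries still exist in your system, a quick look at DBSTATC is enough. A minimal sketch, assuming the SAP schema is SAPR3 and the active flag column is AKTIV (verify both, e.g. in transaction DB21 or SE16, before relying on this):

-- Sketch: list BW tables excluded from statistics collection.
-- Schema and column names are assumptions; check DB21/SE16 first.
SELECT tabname, aktiv
  FROM sapr3.dbstatc
 WHERE tabname LIKE '/BIC/%'
   AND aktiv = 'I';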

A comfortable solution

This is of course all long past now, but these entries seem to have survived many system upgrades.
To get rid of them, there's a report available: SAP_DBSTATC_CLEANUP.
With it you can display and delete the DBSTATC entries for single InfoCubes or for all of them.
Looking at the SAP note that introduced this report (SAP note 129252), you'll find that it has been around for quite a while now :-)

The recommendation is of course to remove all DBSTATC entries for BW objects, so that standard statistics gathering can be performed.
SAP note 1013912 - "FAQ: Oracle BW Performance" already contained this recommendation, and from today on it also contains the report name...

And yes, the solution is to remove the entries, not to set ACTIVE=A or something like that. DBSTATC really is all about defining exceptions. If the tables should be handled the standard way, then DBSTATC should not contain any entry for them!

Ok, the customer report is through... back to the real work!

Best regards,
Lars

Lars Breddemann is a senior support consultant at SAP Global Active Support.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/23130%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

How to clear BW/(SAP AS) caches
Martin Maruskin (self-employed)
Posted on Jan. 20, 2011 12:00 PM in Application Server, Business Intelligence (BI), SAP NetWeaver Platform

URL: http://sapport.blogspot.com/2011/01/how-to-clear-bwsap-as-caches.html


There are several places across the BW tools where the different types of cache can be cleared.

 

BEx Web Application Designer metadata cache: stores metadata about objects related to web templates. You can delete the cache holding the metadata of the template currently opened within the session to the BW backend. To delete this cache in BEx Web Application Designer, choose menu Tools -> Settings -> Delete Cache -> Delete Cache. Documentation for NW2004 can be found here. This is not valid for BW based on version 7.x.

 


 

OLAP cache: the so-called query cache buffers the data sets of BW queries so that they are available for the next query run. The OLAP processor first checks the cache and then may not need to read the data from the BW InfoProviders at all. To clear the OLAP cache, run transaction RSRT -> Cache Monitor -> Delete, or go via transaction RSRCACHE. Online documentation here. If you want to do it from a custom ABAP program, you can use function module RSR_CACHE_CLEAR_ALL_CACHES.


 

HTTP server cache: this is related to the HTTP server built into the SAP Application Server. You can clear it via transaction SMICM -> Goto -> HTTP Server Cache -> Invalidate Cache -> Global in System. Online documentation here.

 

 


Martin Maruskin is an SAP NetWeaver BW certified consultant.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/23115%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

A first (BW) issue...
Lars Breddemann (SAP Österreich)
Posted on Jan. 06, 2011 06:27 PM in Business Intelligence (BI), Run SAP, SAP NetWeaver Platform, Software Support and Maintenance

In my second week working in BW support, I noticed a problem that customers reported again and again.
They complained about "database statistics update fails in process chains" or "brconnect run returns errors".
When checking the relevant log file entries for these actions, there was always something like:

ERROR at line 1:
ORA-20000: index "SAPR3"."/BIC/F100069~900"  or partition of such index is in unusable state
ORA-06512: at "SYS.DBMS_STATS", line 15181
ORA-06512: at "SYS.DBMS_STATS", line 15203
ORA-06512: at line 1

The more database-savvy customers or DBAs sometimes had a quick resolution at hand: rebuild the index.

And indeed, after rebuilding the affected index (as I realized, it was nearly always a ~900 index), the statistics could be gathered correctly.
Problem solved, or not?

Well, only until the next compression run for the infocube happened.

Infocube compression, wait, like infocube.zip or what?

The term 'compression' is a bit misleading here; 'condensing' seems more appropriate.
To understand this action, we have to look at how fact table data is managed in SAP BW.
If you're not yet familiar with the BW star schema (extended star schema), it might be a good idea to check the documentation or the BW-related blogs and wikis here on SDN.

The fact tables store the actual business data that is to be reported on. In the simplest case, this data is loaded from the source system (e.g. the ERP system) on a regular basis, say daily.
So we get new data in more or less large chunks or packets.
The BW term for this is 'request'.
A data request is a chunk of data whose rows have just one thing in common: they were loaded into the fact table together. [more on this]

Requests and what they are good for

This request-wise processing provides several options that are otherwise quite hard to achieve in data warehouse systems:

  • we can load just the data that has changed since the last load (delta loading)
  • we can check the data after each load for technical and quality issues and decide whether or not it should appear in reporting
  • in case some of the data was not imported correctly, we can easily delete it again without impairing the reports that use the already available data.

If you think about it, this means it's entirely possible for information about the same business objects (let's say: direct sales to high-level customers in southern Germany) to be loaded several times.

If you load the sales transaction data every day and later report on it on a weekly basis, you'll get the sum of all sales aggregated over a week - using seven requests' worth of data (1 request per day, 7 days a week).

But as we see, for our reporting requirement (sales on a weekly basis) it's actually not necessary to keep all the data load packets (requests).
Once we are sure that the data is technically and quality-wise OK, we might just as well sum it up to the weekly level, store that, and throw away the seven requests.
This is pretty much what the compression of InfoCubes is about.
In SAP BW this is implemented with two tables:
the F-facttable, to which all data load requests go, and the E-facttable, which contains the pre-aggregated (condensed) data. [more on this]
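Conceptually (and only conceptually - the real BW condense coding does a lot more), the condensing step boils down to an aggregation like the following sketch with made-up column names:

-- Conceptual sketch: move rows from the F- to the E-facttable,
-- collapsing the request (package) dimension to 0 and summing key figures.
INSERT INTO "/BIC/E100069" ("KEY_100069P", "KEY_100069T", "AMOUNT")
SELECT 0,              -- package dimension collapsed: request identity is given up
       "KEY_100069T",  -- e.g. the time dimension key
       SUM("AMOUNT")   -- key figures are aggregated
  FROM "/BIC/F100069"
 GROUP BY "KEY_100069T";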

Getting closer...

At reporting time, the SAP BW OLAP engine knows that our data is stored in two tables.
So for every BW query against an InfoCube, we usually see TWO nearly identical SQL statements that differ only in the fact table they use.

Now we have:

  • two tables,
  • data that needs to be read and aggregated request-wise from the F-facttable,
  • an aggregation result that has to be saved in the E-facttable,
  • data that afterwards needs to be deleted from the F-facttable - otherwise some numbers would be counted twice in our report!

Looking at these database requirements, there's an Oracle feature that really is made for this (it really is ;-)): PARTITIONING!

Without partitioning, the final deletion of already condensed requests would require us to

  • scan at least one full index in order to find all rows matching the requests to be deleted,
  • remove them
  • and update the indexes afterwards.

With partitioning, all we have to do is drop the partitions that contain our requests.
That's the reason why the F-facttables are always partitioned by the request dimension (on DBMSs that support partitioning, of course).

So we can easily get rid of data when the table is partitioned the right way.

But what about the indexes?

There's a treat available in Oracle for that as well: local partitioning of indexes.
Simply put, this means that for every table partition a corresponding partition of the partitioned index is created.

With this, we don't even have to rebuild the indexes after dropping a table partition.
All we have to do is drop the corresponding index partition together with the table partition.
The remaining index will still be completely correct and will still cover all data in the table.
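As a sketch (the partition name is made up for illustration): dropping a request partition of the F-facttable takes the corresponding local index partitions with it, while a non-partitioned index on the same table is flagged UNUSABLE:

-- Dropping a request partition; local index partitions are dropped with it,
-- so locally partitioned indexes stay VALID.
ALTER TABLE "SAPR3"."/BIC/F100069" DROP PARTITION "/BIC/F1000690000000042";
-- Any non-partitioned index on the table becomes UNUSABLE instead.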

Ok, now we've arrived at the start of the problem ;-)

All this sounds quite good.
In fact, it's great!
And (of course) here begins the problem.

This great combination of clever data design, implementation and database feature exploitation only works properly if the indexes really are partitioned exactly like the table.
BW has to take care of this, since Oracle also allows indexes to be created with a different partitioning scheme, or without partitioning.
If that is the case and we drop table partitions, Oracle would have to read every row of the dropped partition and use this information to maintain the index data.
Obviously this would render the partitioning advantage null and void.
So Oracle simply flags all indexes for which the drop of the partition cannot be performed alongside as UNUSABLE.

Such UNUSABLE indexes can be repaired simply by rebuilding them.
The Oracle cost-based optimizer is clever enough to ignore such indexes (see also the Oracle parameter "skip_unusable_indexes"), so queries will not fail because of this.
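If you want to spot such broken indexes yourself before a statistics run trips over them, the Oracle dictionary views have the answer:

-- Find indexes and index partitions flagged UNUSABLE
SELECT owner, index_name, status
  FROM dba_indexes
 WHERE status = 'UNUSABLE';

SELECT index_owner, index_name, partition_name, status
  FROM dba_ind_partitions
 WHERE status = 'UNUSABLE';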

Except... except we force Oracle to use the broken index by using a hint.

"Where the heck to we do such stuff?" - is that the question you've got in mind right now?

Well, we do it every time you run the update statistics job.
And collecting CBO statistics after deleting lots of data from a central fact table in BW is usually done as part of the data loading process (chain).

In Oracle, updating statistics means calling a PL/SQL stored procedure in the DBMS_STATS package. And in there, Oracle will run SQL statements like this:

select /*+ no_parallel_index(t,"/BIC/F100069~900") dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_expand index(t,"/BIC/F100069~900") */ count(distinct sys_op_lbid(15948,'R',t.rowid)) as nrw ,count(distinct sys_op_lbid(15948,'L',t.rowid)) as nlb ,count(distinct "KEY_IC_0025P") as ndk ,null as clf from "SAPR3"."/BIC/F100069" t where TBL$OR$IDX$PART$NUM("SAPR3"."/BIC/F100069",0,4,0,"ROWID") = :objn

No reason to understand the whole statement for now; the important part is the INDEX hint.
With this hint, Oracle is forced to use the index (for which statistics should be collected).
If our index is UNUSABLE at this point in time, then Oracle has no choice but to report:

ERROR at line 1:
ORA-20000: index "SAPR3"."/BIC/F100069~900"  or partition of such index is in unusable state
ORA-06512: at "SYS.DBMS_STATS", line 15181
ORA-06512: at "SYS.DBMS_STATS", line 15203
ORA-06512: at line 1

LOOOONG story short.

Mashing all this together:

WHEN
       we have a partitioned table with a non-partitioned index
AND
       we drop a partition of the table, so that the non-partitioned index is flagged UNUSABLE
AND
       we finally run an update statistics on this table,
THEN
       we end up with our ORA-20000 error message.

Why do I tell you this, when BW takes care of the database objects so well?

Because sometimes it fails to do so.
One reason for such a failure is a bug that was introduced with a SAP Basis SP.
This bug is already fixed with SAP note 1479683, but there are many customers around who haven't yet installed the note and who are facing miraculous errors in their InfoCube compression process chains.

As you can see, the connection between the ORA-20000 error message about the unusable index and the real cause is a rather long-distance one, although straightforward once you understand it.

The solution (ahem... TATAAA!)

To finally get rid of this problem, I wrote note #1513510.

In there you'll find that you have to
a) import the fix for the SAP Basis bug (note #1479683)
b) recreate (NOT just rebuild) the indexes

For step b), the easiest and quickest way is to use the same function modules that SAP BW itself uses for this task.

Excursus... aggregates are just little InfoCubes...

In note #1513510 I included an SQL statement that finds all partitioned tables which have non-partitioned indexes, so that the DBA or BW power user can look up the problematic ones without having to wait for process chains to fail.
The statement looks like this:

-- Reconstructed along the lines of note #1513510 (this copy of the blog had
-- the DBMS_STATS query pasted here again by mistake; see the note for SAP's
-- exact text): list partitioned tables that have non-partitioned indexes.
select i.table_name, i.index_name
  from dba_indexes i, dba_part_tables t
 where t.owner       = 'SAPR3'
   and i.owner       = t.owner
   and i.table_name  = t.table_name
   and i.partitioned = 'NO';

If you run this statement, you may come across tables like /BIC/F100234.
But you don't have any InfoCube named "100234" - so what are those tables about?
They belong to aggregates. [more on this here]
Aggregates are (small) subsets of the InfoCube data that the OLAP processor can choose in order to deliver the reporting result much quicker. In this respect, aggregates are very much like database indexes.
Since aggregates really are subsets of the actual large InfoCube, they have an F- and an E-facttable as well, and the same problem can occur with them.

If you now want to know to which InfoCube a specific aggregate belongs, you can easily look it up in table RSDDAGGRDIR.
For every aggregate you'll find an entry there that maps the aggregate to its InfoCube.
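For example, to find the InfoCube behind the aggregate fact table /BIC/F100234, a lookup like this does the trick (SE16 on RSDDAGGRDIR works just as well):

-- Which InfoCube does aggregate 100234 belong to?
SELECT infocube
  FROM rsddaggrdir
 WHERE aggrcube = '100234'
   AND objvers  = 'A';  -- active version only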

Checking this table in SE16 delivers an output similar to this:

Table:          RSDDAGGRDIR

   AGGRUID                   OBJVERS INFOCUBE   AGGRCUBE ...
                                                         ...
   03T89W5IUUEFPARPLPZ29YZF3 A       0BWVC_AGV  100108   ...
   200H0IWIR23WASLOESNT8BKJZ A       0BWVC_AGV  100099   ...
   3VILFVABC6MYNF9R10M0WYVHR A       ICD05      100772   ...
   3VIVFYBTHUCH8HF58HQCRGJXS A       ICD03      100778   ...
   3VIVG4AW8R8BQ0JPRVJWKZJZK A       ICD03      100779   ...
   3VIVG8ZVTWHX3SFLC8ZEQ6RQO A       ICD03      100780   ...
   3VP09ETI53LHVKWQHLL79RK5X A       RSDRICUBE  100032   ...
   40VXFTXRAJ6NNT88CWOEA3LYN A       0BWVC09CP  100071   ...
   40VXFU60ZFEEYGCAFPLSI952N A       0BWVC09CP  100072   ...
   40VXK7M8IUTH0IH052QGOF94F A       0BWVC09CP  100073   ...
   [...]
   

Ok, I hope some of you really made it to the end of this rather lengthy first BW blog post. I'll try to keep them shorter in the future ;-)

regards,

Lars

Lars Breddemann is a senior support consultant at SAP Global Active Support.



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22941


BW 7.30: Graphical Data Flow Modeling
Michael te Uhle (SAP AG)
Posted on Dec. 14, 2010 04:33 AM in Business Intelligence (BI)

SAP NetWeaver BW 7.30 provides a new metadata and transport object for BW data flows. This functionality enables you to persist the data flow views that you created with the 'Display Data Flow' function. Data flows are integrated as trees into the Data Warehousing Workbench. They are created and maintained in the same graphical design environment that you will be familiar with from, for example, the transformation and process chain maintenance screens.

 

 Data Flow Sales Europe

 

A data flow provides a persistent view of objects in the system. You can integrate existing objects and create new ones. The data flow maintenance screen can be used as a single point of entry for future development of your Data Warehousing objects, and you can use the data flow functionality to organize your projects.

Next I want to give you a short demonstration of some of the user-friendly modeling capabilities of data flow maintenance. I will also show you how to model a data flow from a source to a target, based on existing objects or by creating new objects. I have modeled the following sales scenario: the USA data is sent from a DataSource via a DataStore object to the USA Sales InfoCube. The same setup applies to the Europe data, but that data comes from three DataSources. The USA InfoCube and Europe InfoCube are combined in a MultiProvider for reporting. Let's assume that some of the objects are already available and others have to be created from scratch.

Let's start in the data flow tree of the Data Warehousing Workbench. We create a new data flow using the InfoArea context menu and enter the name and description of the data flow. We save the empty data flow and the object appears in the tree. Then we drag and drop the icon of the object that we want to create from the toolbar to the data flow maintenance area. We start with the DataStore object, the InfoCube for Sales Europe and the MultiProvider.

 

 Insert InfoProvider

 

We have now created three non-persistent objects. This means that the objects only exist locally in this data flow and not in the database. Now we draw two lines from the DataStore object to the InfoCube and one line from the InfoCube to the MultiProvider.

 

 Draw lines to create transformations

 

What happened? When you drew the line from the DataStore object to the InfoCube, the system realized that the line was a transformation and created a non-persistent transformation. When you drew the second line, the system created a data transfer process (DTP) because a transformation already existed. Further lines would result in further DTPs. When you drew a line from the InfoCube to the MultiProvider, the system created a simple line. This indicates a contained-in relationship, which is the only valid possibility in this case.

Next we use the undo and redo buttons, which allow you to revert and restore your previous actions. The screenshots below show the result of undoing the last four actions. In this case, the three connections as well as the drag-and-drop of the MultiProvider are reversed.

 

Undo of operations

 

To redo your actions, simply open the dropdown menu of the redo button and select the required action or press the redo button four times, for example.

 

 Result of Undo

 

Before we move on, we’ll assign a technical name and description to the InfoCube by choosing the change option in the context menu. Note that this option does not physically create the InfoCube in the system. The InfoCube is still a non-persistent object that is saved locally in the data flow.

The Sales Europe DataStore object exists already in the system. We add it to the data flow by choosing a command from the context menu of the non-persistent object:

 Use existing object

 

Choose the DataStore object ‘Sales Europe’ in the value help. The system adds the object to the data flow maintenance screen and changes the background color of the node to blue. This indicates that the node represents an existing persistent BW object.

Next we create the Sales Europe InfoCube by double-clicking on the non-persistent object. The dialog box for creating InfoCubes opens. The DataStore object is selected as the InfoCube template because it is connected to the InfoCube. The description and technical name of the non-persistent object are applied to the InfoCube.

 

 Create InfoCube

 

Activate the InfoCube and create the transformations and DTPs by double-clicking on the appropriate connections. The source and target are already specified. You simply need to maintain the details. Activate the objects.

Now we need to integrate Sales USA (already exists in the system) into our data flow. We start by dragging the existing Sales USA InfoCube from the InfoProvider tree and dropping it onto the MultiProvider. The existing InfoCube is added. Since the target is the MultiProvider, the system creates a connection between the InfoCube and the MultiProvider, based on the algorithm explained above.

 

 Connect existing InfoCube

 

To integrate the entire data flow of the Sales USA InfoCube, expand the existing data flow of the InfoCube by choosing the menu entry 'Use data flow of object'. A dialog box opens; specify whether you want to use the data flow upwards, downwards or in both directions. The data flow is shown in a separate window and can be added to the data flow maintenance by pressing the confirm button.

 

 Insert a Dataflow of an Object

 

The objects of the data flow are now integrated in our data flow. Double-click on the MultiProvider to open the MultiProvider maintenance screen. Thanks to the connections, the source objects are already listed and selected in the MultiProvider maintenance. Simply complete your MultiProvider and activate it.

 

 Creation of the MultiProvider

 

Connect the three European Sales DataSources to the DataStore object by selecting multiple DataSources in a Data Warehousing Workbench object tree and dropping them onto the DataStore object. The transformations are created automatically (as mentioned above).

 

 Adding several DataSources

 

The Sales data flow scenario can be completed by drawing links for the DTPs and maintaining the details for transformation and DTPs.

This document gives you a first impression of what is possible with the new data flow maintenance. Note that this is only part of the functionality. Some additional features are listed below. A very important function is described in a separate document: Creating and Using Data Flow Templates.

  • If the data flow is collected for transport, all relevant objects are also collected.
  • You can copy data flows with the data flow copy wizard, performing either a simple copy or a multiple deep copy.
  • You can use the create data flow wizard to generate parts of your data flow.
  • Semantically partitioned objects are supported in the data flow maintenance.
  • You can collect and integrate all DTPs and/or transformations that belong to objects in the data flow by selecting a single option - for example, if you start your data flow with a list of InfoProviders and want to display all transformations that connect objects of this set to each other.
  • ...and many more features. Try it out!

  

Michael te Uhle is a Development Architect in the SAP BW Data Warehousing team.



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22621%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

DSO Transport and Activation performance issue - adding new fields
Tanuj Gupta (SAP Labs India)
Posted on Dec. 08, 2010 09:06 AM in Business Intelligence (BI)

 

Scenario:

• A DSO/InfoCube with a very large number of entries (more than 10 million)
• The database is Oracle
• The customer is adding new fields to the DSO/InfoCube and transporting the changes to the production system

Background:

Whenever changes like
a) adding new key figures
b) adding new characteristics and dimensions

are performed on an object (DSO or InfoCube) that already contains data, the activation process has to adjust the table structure(s). This adjustment process has to alter the table, adding the new column as NOT NULL and initializing it with a default value. This should work without any problems, provided there are enough database resources to perform the change on the database.

Adding NOT NULL fields requires checking all of the existing data for null values, which requires a large amount of space in the transaction log and in the database and involves many I/O operations. Depending on the I/O throughput of your storage, this can take hours.

The reason for initializing the newly added column is that, from the BW reporting and extraction point of view, the column has to be NOT NULL and initialized with a default value, as BW cannot handle null values. The general recommendation in cases where an object that already holds a lot of data must be remodeled by adding new fields is to follow the workaround below.
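For illustration, what activation effectively has to issue on Oracle is a statement along these lines (table and column names are made up; on releases before Oracle 11g such an ADD touches every existing row):

-- Sketch: adding a key figure NOT NULL with a default forces Oracle
-- (pre-11g) to update every existing row of the large table.
ALTER TABLE "/BIC/AMYODS00" ADD ("NEWKF" NUMBER(17,3) DEFAULT 0 NOT NULL);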

The general workaround involves making a copy of the existing data in the cube using the datamart interface or Open Hub services. For example:
1. Make a copy of the existing cube (Cube1) to Cube2
2. Add the new fields to Cube2
3. Load the data from Cube1 to Cube2 using the datamart interface (and use Cube2 for further purposes)

If you have reporting queries defined on Cube1, you need to do the following instead:

1. Make a copy of the existing cube (Cube1) to Cube2
2. Load the data from Cube1 to Cube2 using the datamart interface
3. Delete the data from Cube1
4. Add the new fields to Cube1
5. Reload the data from Cube2 to Cube1 using the datamart interface

Alternative Solution:

An alternative is the procedure in SAP Note 1287382, which we have observed to be much faster than the normal process.

1287382 Extending BW objects on Oracle with large tables

We have observed a significant performance improvement using this SAP Note, although implementing it requires good application knowledge.

Here are the steps for your reference:

1. Make a list of the large DSOs (identify the size of the active table and change log table) and InfoCubes (identify the size of the E and F tables).
2. Archive or clear the change log tables, compress the InfoCubes, and minimize the number of records in the tables.
3. Make a list of the objects to be transported, identify the dependencies between the objects, create the change requests (try to group large DSOs and InfoCubes into separate change requests), and determine the sequence of the transports.
4. Check the current SP level and implement SAP Note 1287382 together with the correction mentioned in SAP Note 1340922.
5. Run program SAP_RSADMIN_MAINTAIN and maintain the RSADMIN parameters as described in SAP Note 1287382.
6. Schedule a background job to run program SAP_NOT_NULL_REPAIR_ORACLE with suitable parameters (table name, parallelism). More parallel processing can be scheduled depending on your hardware capacity. For InfoCubes, the job has to run separately for all DIM tables and fact tables.

The job SAP_NOT_NULL_REPAIR_ORACLE (see the sketch below):
- renames the table /BIC/AMYODS00 to /BIC/AMYODS00_OLD
- creates a new table /BIC/AMYODS00 with the new fields added
- copies the data from /BIC/AMYODS00_OLD into the newly created /BIC/AMYODS00
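In SQL terms, the core of the job looks roughly like this (a sketch only; the real program also takes care of indexes, statistics and the dictionary bookkeeping):

-- Sketch of what SAP_NOT_NULL_REPAIR_ORACLE effectively does:
ALTER TABLE "/BIC/AMYODS00" RENAME TO "/BIC/AMYODS00_OLD";
-- recreate /BIC/AMYODS00 from the dictionary, including the new columns, then:
INSERT /*+ APPEND */ INTO "/BIC/AMYODS00"
  SELECT o.*, 0 FROM "/BIC/AMYODS00_OLD" o;  -- 0 stands for the new default value(s)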

7. Ensure in SM37 that the job executed successfully.
8. If errors are observed in SM37:

The program fails before copying: check whether "/BIC/AMYODS00_OLD" exists. Normally this should not be the case because, if errors occur, "/BIC/AMYODS00_OLD" is renamed back to "/BIC/AMYODS00". If the table does exist, then before the program is restarted the table "/BIC/AMYODS00" must be dropped (it MUST NOT contain data!) and "/BIC/AMYODS00_OLD" becomes "/BIC/AMYODS00":
1.) Drop the new, empty table "/BIC/AMYODS00" (using SE14 or SQL: DROP TABLE "/BIC/AMYODS00")
2.) Rename "/BIC/AMYODS00_OLD" to "/BIC/AMYODS00" (must be done with native SQL)
3.) Update at least one column in DD03L to nullable
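Sketched in native SQL (assumption: executed e.g. via SQL*Plus; be absolutely sure the new table is the empty one before dropping it):

-- Recovery sketch for a failure before the copy step:
DROP TABLE "/BIC/AMYODS00";                 -- must be the new, empty table!
ALTER TABLE "/BIC/AMYODS00_OLD" RENAME TO "/BIC/AMYODS00";
-- afterwards, adjust DD03L as described in step 3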

The program fails after copying: the number of records in "/BIC/AMYODS00_OLD" and "/BIC/AMYODS00" differ.

Analyze and repeat in accordance with the previous step (the program fails before copying); /BIC/AMYODS00_OLD should still contain the complete data.
The program should not be restarted. Instead, the remaining steps (building an index and gathering statistics) are carried out manually after the error analysis.

9. Drop the *_OLD tables once the conversion is successful.

If the DSO/InfoProvider contains more than 10 million entries, the above alternative solution should help to reduce the required time significantly.

 

Tanuj Gupta is a Platinum Consultant (Solution Support) at SAP Labs India.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/19300%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529


SAP NetWeaver 7.3 in Ramp Up
Benny Schaich-Lebek (SAP)
Posted on Dec. 01, 2010 07:10 AM in Business Process Management, Enterprise Portal (EP), SAP NetWeaver Platform

As announced at TechEd this year, SAP NetWeaver 7.3 was released for restricted shipment on Monday, November 29th. Restricted shipment, better known as "ramp up" or release to customer (RTC), means that the product is available to certain customers for productive use.

Unrestricted shipment is expected in the first quarter of 2011.

Here are some out of lots of new features:

  • Greatly enhanced Java support: Java EE5 certified, Java-only ESB and JMS pub/sub capabilities
  • Reusable business rule sets with Microsoft Excel integration
  • Enhanced standards support (WS Policy 1.2, SOAP 1.2, WS Trust 1.3, Java SE 6, JSR 168/286, WSRP 1.0, SAML 1.0/2.0)
  • Tighter integration between SAP NetWeaver Business Warehouse and SAP BusinessObjects
  • Individual and team productivity enhancements in the SAP NetWeaver Portal
  • ...and heaps of new features and enhancements in each part of the SAP NetWeaver stack!

Here is more detail by usage type of NetWeaver:

Enterprise Portal

With Enterprise Workspaces, SAP provides a flexible, intuitive environment for composing content, enabling enterprise end users to integrate and run structured and unstructured assets using a self-service approach.

 

Managing and Mashing up Portal Pages with Web Page Composer
Supporting key business users in the easy creation and management of enriched portal pages, blending business applications and user-generated content into a truly flexible UI.

 

Unified Access to Applications and Processes with Lower TCO
Delivering a best-in-class integration layer for SAP, Business Objects, and non-SAP applications and reports while maintaining low TCO, with capabilities such as advanced caching, integration with the SAP central Transport System, and significant performance and scalability improvements, plus a common Java stack and an improved server administration and development environment.

 

Portal Landscape Interoperability and Openness
Providing industry-standard integration capabilities for SAP and non-SAP content, both into the SAP Portal and for 3rd-party portals, such as JSR and Java 5 support, or open APIs for navigation connectors.

Business Warehouse

Scalability and performance have been enhanced for faster decision making: count on remarkably accelerated data loads, a next level of performance for the BW Accelerator, and support for Teradata as an additional database for SAP NetWeaver BW. Flexibility is increased by deeper integration of the SAP BusinessObjects BI and EIM tools, in particular tighter integration with SAP BusinessObjects Data Services and SAP BusinessObjects Metadata Management. Configuration and operations have been simplified with the Admin Cockpit now integrated into SAP Solution Manager, and wizard-based system configuration has been introduced.

Process Integration

PI now offers a large number of solutions for out-of-the-box integration: for SAP applications there is prepackaged process integration content semantically interlinked with SAP applications and industry solutions, and for partners and ISVs SAP provides certification programs that help ensure quality.

There is ONE platform (and not several) to support all integration scenarios: A2A, B2B, interoperability with other ESBs, SOA, and so forth.

In addition, third-party integration solutions can be replaced to lower TCO, and interoperability with other ESBs protects existing investments.

Broad support for operating environments and databases is also available.

Business Process Management/CE

With the WD/ABAP Integration you may browse the WD/ABAP UI repository of a backend system and use a WD/ABAP UI in a BPM task.

The API for managing processes and tasks starts process instances, retrieves task lists, and executes actions on tasks.

With the business rule improvements, you can now reuse rules and decision tables across rule sets. Along with this come other usability and developer productivity enhancements.

Zero configuration for local services brings a major simplification of SOA configuration.

Mobile

In the new version operational costs are reduced through optimized monitoring and administration capabilities. Robustness was enhanced through improved security and simplified upgrades. There is greater flexibility regarding backend interoperability through Web Service interfaces and multiple backend connectivity.

More information is available at the SDN pages for SAP NetWeaver 7.3 or the manuals of NetWeaver 7.3 in the SAP Help Portal.

Benny Schaich-Lebek is a product specialist in SAP NetWeaver product management.



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22371
Posted by AgnesKim
Technique/SAP BW2010. 10. 11. 20:44

BW 7.30: Semantically Partitioned Objects
Alexander Hermann SAP Employee 
Company: SAP AG
Posted on Oct. 11, 2010 04:22 AM in Business Intelligence (BI)

 
 

Motivation

Enterprise Data Warehouses are the central source for BI applications and are faced with the challenge of efficiently managing constantly growing data volumes. A few years ago, Data Warehouse installations requiring terabytes of space were a rarity. Today the first installations with petabyte requirements are starting to appear on the horizon.

In order to handle such large data quantities, we need to find modeling methods that guarantee the efficient delivery of data for reporting. Here it is important to consider various aspects such as the loading and extraction processes, the index structure and data activation in a DataStore object. The Total Cost of Development (TCD) and the Total Cost of Ownership (TCO) are also very important factors.

Here is an example of a typical modeling scenario. Documents need to be saved in a DataStore object. These documents can come from anywhere in the world and are extracted on a country-specific basis. Here each request contains exactly one country/region.

Figure 1

If an error occurs (due to invalid master data) while the system is trying to activate one of the requests, the other requests cannot be activated either and are therefore initially not available for reporting. This issue becomes even more critical if the requests concern country-specific, independent data.

Semantic partitioning provides a workaround here. Instead of consolidating all the regions into one DataStore object, the system uses several structurally identical DataStore objects or “partitions”. The data is distributed between the partitions, based on a semantic criterion (in this example, "region").

Figure 2

Any errors that occur while requests are being activated now only affect the regions that caused the errors. All the other regions are still available for reporting. In addition, the reduced data volume in the individual partitions results in improved loading and administration performance.
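
To make the idea concrete, here is a purely illustrative SQL sketch of the two modeling options. In BW the partitions are generated DataStore objects rather than hand-written tables, and reporting across them typically runs through a MultiProvider; all table and column names below are hypothetical.

  -- Option 1: one monolithic table for all regions
  CREATE TABLE SALES_ALL (
    COUNTRY VARCHAR2(3),
    REGION  VARCHAR2(3),
    DOCNUM  VARCHAR2(10),
    AMOUNT  NUMBER(17,2)
  );

  -- Option 2: semantic partitioning, i.e. one structurally identical
  -- table per region, each holding only that region's requests
  CREATE TABLE SALES_EMEA AS SELECT * FROM SALES_ALL WHERE REGION = 'EMEA';
  CREATE TABLE SALES_APJ  AS SELECT * FROM SALES_ALL WHERE REGION = 'APJ';

  -- Reporting consolidates the partitions again
  CREATE VIEW SALES_UNION AS
    SELECT * FROM SALES_EMEA
    UNION ALL
    SELECT * FROM SALES_APJ;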

However, the use of semantic partitioning also has some clear disadvantages. The effort required to generate the metadata objects (InfoProviders, transformations, data transfer processes) increases with every partition created. In addition, any changes to the data model must be carried out for every partition and for all dependent objects. This makes change management more complex. Your CIO might have something to say about this, especially with regard to TCO and TCD!

Examples of semantically partitioned objects

This is where the semantically partitioned DataStore objects and InfoCubes (abbreviated to "SPO": semantically partitioned object) introduced in SAP NetWeaver BW 7.30 come in. SPOs make it possible to generate and manage semantically partitioned data models with minimal effort.

SPOs provide you with a central UI that enables you to perform one-time maintenance of the structure and partitioning properties. During the activation stage, the required information is retrieved for generating the partitions. Changes, such as adding a new InfoObject to the structure, are performed once on the SPO and are automatically applied to the partitions. You can also generate DTPs and process chains that match the partitioning properties.

The following example demonstrates how to create a semantically partitioned DataStore object. The section following the example provides you with an extensive insight into the new functions.

DataStore objects and InfoCubes can be semantically partitioned. In the Data Warehousing Workbench, choose “Create DataStore Object”, for example, and complete the fields in the dialog box. Make sure that the option “Semantically Partitioned” is set.

 

Figure 3 

 Figure 4

 

A wizard (1) guides you through the steps for creating an SPO. First, define the structure, just as you are used to doing for standard DataStore objects (2). Then choose "Maintain Partitions".

 

Figure 5

 

In the next dialog box, you are asked to specify the characteristics that you want to use as partitioning criteria. You can select up to 5 characteristics. For this example, select "0REGION". The compounded InfoObject "0COUNTRY" is automatically included in the selection.

 

Figure 6

 

You can now maintain the partitions. Choose the button (1) to add new partitions and change their descriptions (2). Use the checkbox (3) to decide whether you want to use single values or value ranges to describe the partitions. Choose “Start Activation”. You have now created your first semantically partitioned DataStore object.

 

Figure 7

Figure 8

 

In the next step, you connect the partitions to a source. Go to step 4: “Create Transformation” and configure the central transformation using the relevant business logic.

 

Figure 9

 

Now go to step 5: “Create Data Transfer Processes” to generate DTPs for the partitions. On the next screen, you see a list of the partitions and all available sources (1). First, choose “Create New DTP Template” (2) to create a parameter configuration.

 

Figure 10

 

A parameter configuration/DTP template corresponds to the settings that can be configured in a DTP. These settings are applied when DTPs are generated.

 

Figure 11

 

Once you have created the DTP template, drag it from the Template area and drop it on a free area under the list of partitions (1). This assigns a DTP to every source-target combination. If you need different templates for different partitions, you can drag and drop a template onto one specific source-target combination.

Once you have finished, select all the DTPs (2) and choose “Generate”.

 

Figure 12

 

The last step is to generate a process chain that executes the DTPs. Go to step 6 in the wizard: "Create Process Chains". On the next screen, select all the DTPs and drag and drop them to the lower right screen area, "Detail View" (1). You use the values "path" and "sequence" to control the parallel processing of DTPs: DTPs with the same path are executed consecutively, while DTPs with different paths run in parallel.

 

Figure 13

 

Choose “Generate” (3). The following process chain is created.

 

Figure 14

  

Summary

In this article, you learned how to create a semantically partitioned object. Using the central UI of an SPO, it is now possible to create and maintain complex partitioned data models with minimal effort. In addition, SPOs guarantee the consistency of your metadata (homogeneous partitions) and data (filtered according to the partition criterion).

Once you have completed the 6 steps, you will have created the following components:

 

  • An SPO with three partitions (DataStore objects)
  • A central transformation for the business logic implementation
  • Three data transfer processes
  • One process chain

 

Source: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/21334

Posted by AgnesKim