Technique/SAP BW | 2010. 12. 9. 10:35

BW 7.30: Data Flow Copy Wizard
Thomas Rinneberg, SAP AG
Posted on Dec. 08, 2010 09:08 AM in Business Intelligence (BI)

 
 

If you have read my blog about the Data Flow Generation Wizard, you have already learned about one tool that eases the work of creating a BW data flow. However, you may have done this already and heavily changed the generated objects to suit your needs. Now suppose you need to create another data flow that looks quite similar to the one you have already created. Only the target would be a little different. Or you need to load from another DataSource in addition. Or from another source system (maybe a dummy source system? Cf. my other blog on transport features in 7.30). Too bad that you need to do all your modifications again!

No – because with BW 7.30 there also is – the Data Flow Copy Wizard.

And again, we start in the Data Warehousing Workbench (transaction RSA1).

Workbench

 

Let’s display the data flow graphically:

Context Menu to start copy wizard

 

DataFlow Popup

 

DataFlow Display

 

Now, if we want to copy this data flow, we just need to choose “Copy data flow” instead of “Display data flow”. Again, you can choose in which direction the objects shall be collected.

DataFlow Popup

 

DataFlow Copy Question Popup

 

The system asks whether you want to also collect process objects or only data model objects. Why? Process objects are usually included in a process chain, and if you intend to copy the process objects, you should start the copy with the process chain. We will try this out later; for now, let’s press “No” and sneak into the wizard itself.

DataFlow Copy Wizard Start Step

 

Looking at the step list on the left, it seems the objects to be copied are divided up by object type. However, the order is strange, isn’t it? It is on purpose, and you will (hopefully) understand it if you read the lengthy explanation in the above screenshot ;-) OK, let us start with the first step, “Number of Copies”!

DataFlow Copy Wizard Number of Copies

 

I have already chosen two copies; otherwise this step can be skipped. Now what do these “replacements” mean? Usually, when copies are performed, the objects are related to the original objects in terms of naming conventions. At the very least, for each object to be copied, you need to enter a new name. Now, if you are going to create two copies at a time, you would need to enter two new names for each object. In order to simplify this, you can enter the placeholder & in the new object name and &VAR& in the description, and the placeholder will be replaced with what you specify in the above screen. It could look like this:

Replacement Input

 

So from an object, e.g. ZWQC_& (“Sales for &VAR&”), two objects can be created with the names ZWQC_USA (“Sales for States”) and ZWQC_EMEA (“Sales for Europe and Asia”).
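
To make the substitution rule concrete, here is a minimal ABAP sketch of the placeholder logic. It is only an illustration built from the example above, not the wizard's own code; the report name and the value pairs are made up.

  REPORT zcopy_placeholder_demo.
  " Illustrative sketch of the & / &VAR& substitution, not the wizard's own code.
  TYPES: BEGIN OF ty_repl,
           name_value TYPE string,   " replaces & in the object name
           desc_value TYPE string,   " replaces &VAR& in the description
         END OF ty_repl.

  DATA: lt_repl TYPE STANDARD TABLE OF ty_repl,
        ls_repl TYPE ty_repl,
        lv_name TYPE string,
        lv_desc TYPE string.

  ls_repl-name_value = 'USA'.  ls_repl-desc_value = 'States'.          APPEND ls_repl TO lt_repl.
  ls_repl-name_value = 'EMEA'. ls_repl-desc_value = 'Europe and Asia'. APPEND ls_repl TO lt_repl.

  LOOP AT lt_repl INTO ls_repl.
    lv_name = 'ZWQC_&'.
    lv_desc = 'Sales for &VAR&'.
    REPLACE ALL OCCURRENCES OF '&VAR&' IN lv_desc WITH ls_repl-desc_value.
    REPLACE ALL OCCURRENCES OF '&'     IN lv_name WITH ls_repl-name_value.
    WRITE: / lv_name, '(', lv_desc, ')'.
  ENDLOOP.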

Data Flow Copy Wizard Source System Step

 

Now the actual copy customizing starts. All following steps share the elements already visible in the above screen: for any original object, the target object can be specified in several ways by clicking on the column in the middle:

Copy Modes

 

  • You can use the original object uncopied. This means the original object will be included in the copied data flow. This way you can, depending on the object that you keep:
    • Add a new load branch to an existing data target
    • Create a new DataSource for the same source system
    • Load the same DataSource from another source system
    • Load an existing DataSource into a new target
    If you keep all objects in all steps, you actually do not perform a copy.
  • You can use a different, already existing object. This will include the specified object in your copied data flow. You might have created the InfoProvider already but still want to copy the transformation from an already existing one. For the object type source system, as shown above, the source system must have been created before you start the wizard.
  • You can create a new object as a copy of the original object. This is the standard option that you will usually want to use, and the one that is not available for source systems ;-)
  • You can exclude the object from the further copy process. This means that all objects dependent on the excluded object will also be excluded from the copy process. So if you exclude the source system, you will automatically exclude the DataSource and the corresponding transformation as well. That leaves only the DataStore object and the InfoCube to be tackled, and the transformation between them.
This already gives an impression of how the wizard takes care of the interdependencies between the objects, so that you always get a consistent copy no matter how complex your data flow is. I have chosen to keep the source system, and of the InfoProviders, I will copy only the cube:

Data Target Assignment

 

The wizard then gives me no chance to mess up the corresponding transformations in the next step:

Transformation Assignment

 

You will especially appreciate this help when it comes to a deep copy of process chains. Let’s try this out. I exit the wizard.

Exit Question

 

Oops, I can save my entries!? That sounds like a useful feature. Indeed, if I had continued the wizard to the end and actually performed the copy, my entries would have been saved automatically. So if I later change something in the original objects and want to propagate this change to the copies I have made, I am offered the following additional step in the wizard:

Use previous copy processes as template

 

With this, I can very swiftly walk through the steps, which already carry my settings from the chosen previous copy process. I just need to exclude those objects whose changes I do not want to copy over (or rather, use the already copied objects). Moreover, there is transaction RSCOPY, which shows me which copy processes I have already run. We will come back to this later; for now we wanted to look at the process chain copy. Let us choose menu “Process Chain” -> “Copy” in the process chain maintenance:

Process chain copy question

 

Of course we want to use the wizard. This time we are not asked whether we want to collect process objects as well ;-) Instead, the step list contains some more steps:

Copy Wizard Steps for Process Chain copy

 

Let’s fast forward to “Process Chains”.

Process Chain Assignment Popup

 

The system assumes we want to copy the process chain (how intelligent ;-) and thus confronts us with a popup where we could change the target object name and description. Having filled it out, the wizard shows us another chain as well, the subchain of the selected one:

Subchain

 

Let us keep (i.e. re-use) that subchain and fast-forward to the “directly dependent processes”…

Source Systems Empty

 

STOP! What is this? I cannot change the source system? Why can’t I change the source system? It goes on: I cannot change any of the DataSources, InfoProviders and transformations! So we tricked ourselves: by not copying the subchain, our copy still refers to the original data flow (in the subchain). The system ensures that the outcome is consistent, so it does not let me choose another data flow. OK, convinced. I will copy the subchain as well. Now I am allowed to make my changes concerning the InfoCube as before. Phew.

Data Target Assignment

 

Copy Wizard Process Copy Step

 

So these are the “directly dependent objects” – directly dependent on a data flow object. Since I have copied the InfoCube only, the system proposes to keep most of the processes and to only create a new DTP for the new data flow branch plus a data deletion process. We can double-click on the original object "0WQ_308_DELDELTA" to see what it looks like.

Data Deletion Process

 

It contains both the DataStore object and the InfoCube. If I copy it, the InfoCube is replaced by my new InfoCube in the copy, but the DataStore object would still be in it. Well, it shall be a copy… Also, if I look at the list of processes above, I see that the InfoPackages and the DataStore object will be loaded in my new chain as well as in my old chain. Not such a good idea. Maybe it would be better to copy the data flow only and modify my existing chain so that it drops and loads the new cube in addition to the old one. So the system does not totally relieve me from thinking for myself…

Ok, let us continue with our chain copy anyhow to see the outcome.

Copy Wizard Other Processes Step

 

These are the data-flow-independent processes; we have to choose names for the triggers, all right. There is one step missing in our example, since we have no such processes in our chain: the “indirectly dependent processes”, which in turn refer to a “directly dependent process”.

Let us go to the end and execute.

Batch Question

 

We choose "In Dialog".

Log

 

Everything worked. And what is the result?

Process Chain
Process Chain

 

The target chain looks quite like the old one, except for the subchain and the DTP that we have copied as well. And how does the data flow look now?

Data Flow

 

That’s a nice result, isn’t it? So we have a new cube as a copy of the old one, plus the transformation and DTP.

But now I had promised to show you transaction RSCOPY where we can see the logs of our copy processes:

Copy Monitor

 

Oops, there is a red icon. I had hidden this attempt from you, but now it comes to light that I had made a mistake. Let’s double-click it!

Copy Log

 

I had forgotten to remove the leading “0” from the process chain name. And how did I recover from this? I just started the wizard again, chose this failed copy process as a template (you can see it in the “Template” column in the previous screenshot of transaction RSCOPY) and simply corrected my mistake. The copy was then executed to the end.

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22416%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 12. 9. 10:33

BW 7.3: Troubleshooting Real-Time Data Acquisition
Tobias Kuefner, SAP AG
Posted on Dec. 08, 2010 09:07 AM in Business Intelligence (BI)

The main advantage of real-time data acquisition (RDA) is that new data is reflected in your BI reports just a few minutes after being entered in your operational systems. RDA therefore supports your business users in making their tactical decisions on a day-to-day basis. The drawback, however, is that these business users notice much faster when one of their BI reports is not up to date. They might then call you and ask why a document posted 5 minutes ago is not yet visible in reporting. And what do you do now? I’ll show you how BW 7.3 helps you resolve problems with real-time data acquisition faster than ever before.

First, let’s have a look at what else is new to RDA in BW 7.3. The most powerful extension is definitely the HybridProvider. By using RDA to transfer transactional data into a HybridProvider, you can easily combine the low data latency of RDA with the fast response times of an InfoCube or a BWA index, even for large amounts of data. You’ll find more information about this combination in a separate blog. Additionally, BW 7.3 allows for real-time master data acquisition. This means that you can transfer delta records to InfoObject attributes and texts at a frequency of one per minute. And just like RDA directly activates data transferred to a DataStore object, master data transferred to an InfoObject becomes available for BI reporting immediately.

But now, let’s start the RDA monitor and look at my examples for RDA troubleshooting. I’ve chosen some data flows from my BW 7.0 test content and added a HybridProvider and an InfoObject. I know that this flight booking stuff is not really exciting, but the good thing is that I can break it without getting calls from business users.

Remember that you can double-click on the objects in the first column to view details. You can see, for example, that I have configured RDA requests to be stopped after 13 errors.

Everything looks fine. So let’s start the RDA daemon. It will execute all the InfoPackages and DTPs assigned to it at a frequency of one per minute. But wait… what’s this?

The system asks me whether I’d like to start a repair process chain to transfer missing requests to one of the data targets. Why? Ah, okay… I’ve added a DTP for the newly created HybridProvider but forgotten to transfer the requests already loaded from the DataSource. Let’s have a closer look at these repair process chains while they are taking care of the missing requests.

On the left hand side, you can see the repair process chain for my HybridProvider. Besides the DTP, it also contains a process to activate DataStore object data and a subchain generated by my HybridProvider to transfer data into the InfoCube part. On the right hand side, you can see the repair process chain for my airline attributes which contains an attribute change run. Fortunately, you don’t need to bother with these details – the system is doing that for you. But now let’s really start the RDA daemon.

Green traffic lights appear in front of the InfoPackages and DTPs. I refresh the RDA monitor. Requests appear and show a yellow status while they load new data package by package. The machine is running and I can go and work on something else now.

About a day later, I start the RDA monitor again and get a shock. What has happened?

The traffic lights in front of the InfoPackages and DTPs have turned red. The RDA daemon is showing the flash symbol, which means that it has terminated. Don’t panic! It’s BW 7.3. The third column helps me get a quick overview: 42 errors have occurred under my daemon, 4 DTPs have encountered serious problems (red LEDs), and 4 InfoPackages have encountered tolerable errors (yellow LEDs). I double-click on “42” to get more details.

Here you can see in one table which objects ran into which problem at what time. I recognize at a glance that 4 InfoPackages repeatedly failed to open an RFC connection at around 16:00. The root cause is probably the same, and the timestamps hopefully indicate that it has already been removed (No more RFC issues after 16:07). I cannot find a similar pattern for the DTP errors. This indicates different root causes. Finally, I can see that the two most recent runtime errors were not caught and thus the RDA daemon has terminated. You can scroll to the right to get more context information regarding the background job, the request, the data package, and the number of records in the request.

Let’s have a short break to draw a comparison. What would you do in BW 7.0?

1) You could double-click on a failed request to analyze it. This is still the best option for analyzing the red DTP requests in our example. But you could not find the tolerable RFC problems and runtime errors.

2) You could browse through the job overview and the job logs. This would have been the preferable approach to investigate the runtime errors in our example. The job durations and the timestamps in the job log also provide a good basis to locate performance issues, for example in transformations.

3) You could browse through the application logs. These contain more details than the job logs. The drawback however is that the application log is lost if runtime errors occur.

These three options are still available in BW 7.3 – they have even been improved. In particular, the job and application logs have been reduced to the essential messages. Locating a problem is still a cumbersome task, however, if you don’t know when it occurred. With the integrated error overview in the RDA monitor, BW 7.3 allows you to analyze any problem with the preferred tool. Let me show you some examples.

Unless you have other priorities from your business users, I’d suggest starting with runtime errors, because they affect all objects assigned to the daemon. RDA background jobs are scheduled with a period of 15 minutes to make them robust against runtime errors. In our example, this means the RDA daemon serves all DataSources, from the one with the lowest error counter up to the one that causes the runtime error. The job is then terminated and restarted 15 minutes later. The actual frequency is thus reduced from 60/h to 4/h, which is not real-time anymore. Let’s see what we can do here. I’ll double-click on “10” in the error column of the request where the problem has occurred.

I just double-click on the error message in the overview to analyze the short dump.

 

Phew… This sounds like sabotage! How can I protect the other objects assigned to the same daemon from this runtime error while I search for the root cause? I could just wait another hour, of course. This RDA request will then probably have reached the limit of 13 errors that I configured in the InfoPackage. Once this threshold is reached, the RDA daemon will exclude this InfoPackage from execution. The smarter alternative is to temporarily stop the upload and delete the assignment to the daemon.

The overall situation becomes less serious once the DataSource has been isolated under “Unassigned Nodes”. The daemon continues at a frequency of once per minute, although there are still 32 errors left.

Note that most of these errors – namely the RFC failures – can be tolerated. This means that these errors (yellow LEDs) do not hinder InfoPackages or DTPs until the configured error limit is reached. Assume that I’ve identified the root cause of the RFC failures as a temporary issue. I should then reset the error counter for all objects that have not encountered other problems. This function is available in the menu and context menu. The error counter of an InfoPackage or DTP is reset automatically when a new request is created. Now let’s look at one of the serious problems. I’ll therefore double-click on “2” in the error column of the first DTP with a red LED.

When I double-click on the error message, I see the exception stack unpacked. Unfortunately that does not tell me more than I already knew: An exception has occurred in a sub step of the DTP. So I navigate to the DTP monitor by double-clicking the request ID (217).

 

Obviously, one of the transformation rules contains a routine that has raised the exception “13 is an unlucky number”. I navigate to the transformation and identify the root cause quickly.

In the same way, I investigate the exception which has occurred in DTP request 219. The DTP monitor tells me that something is wrong with a transferred fiscal period. A closer look at the transformation reveals a bug in the rule for the fiscal year variant. Before I can fix the broken rules, I need to remove the assignment of the DataSource to the daemon. When the corrections are done, I schedule the repair process chains to repeat the DTP requests with the fixed transformations. Finally I re-assign the DataSource to the daemon.

The RDA monitor already looks much greener now. Only one DataSource with errors is left. More precisely, there are two DTPs assigned to this DataSource which encountered intolerable errors, so the request status is red. Again, I double-click in the error column to view details.

The error message tells me straight away that the update command has caused the problem this time rather than the transformation. Again, the DTP monitor provides insight into the problem.

Of course “GCS” is not a valid currency (Should that be “Galactic Credit Standard” or what?). I go back to the RDA monitor and double-click on the PSA of the DataSource in the second column. In the request overview, I mark the source request of the failed DTP request and view the content of the problematic data package number 000006.

Obviously, the data is already wrong in the DataSource. How could this happen? Ah, okay… It’s an InfoPackage for Web Service (Push). Probably the source is not an SAP system, and a data cleansing step is needed – either in the source system or in the transformation. As a short-term solution, I could delete or modify the inconsistent records and repeat the failed DTP requests with the repair process chain.

That’s all. I hope you enjoyed this little trip to troubleshooting real-time data acquisition, even though this is probably not part of your daily work yet. Let me summarize what to do if problems occur with RDA. Don’t panic. BW 7.3 helps you to identify and resolve problems faster than ever before. Check the error column in the RDA monitor to get a quick overview. Double-click wherever you are to get more details. Use the repair process chains to repeat broken DTP requests. 


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20954%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 12. 2. 01:32

SAP NetWeaver 7.3 in Ramp Up
Benny Schaich-Lebek, SAP
Posted on Dec. 01, 2010 07:10 AM in Business Process Management, Enterprise Portal (EP), SAP NetWeaver Platform

As announced at TechEd this year, SAP NetWeaver 7.3 was released for restricted shipment on Monday, November 29th. Restricted shipment, better known as "ramp up" or release to customer (RTC), means the availability of the product to certain customers for productive usage.

Unrestricted shipment is expected in the first quarter of 2011.

Here are some of the many new features:

  • Greatly enhanced Java support: Java EE5 certified, Java-only ESB and JMS pub/sub capabilities
  • Reusable business rule sets with Microsoft Excel integration
  • Enhanced standards support (WS Policy 1.2, SOAP 1.2, WS Trust 1.3, Java SE 6, JSR 168/286, WSRP 1.0, SAML 1.0/2.0)
  • Tighter integration between SAP NetWeaver Business Warehouse and SAP BusinessObjects
  • Individual and team productivity enhancements in the SAP NetWeaver Portal
  • ...and heaps of new features and enhancements in each part of the SAP NetWeaver stack!

Here is more detail by NetWeaver usage type:

Enterprise Portal

With Enterprise Workspaces, SAP provides a flexible, intuitive environment for composing content, enabling enterprise end users to integrate and run structured and unstructured assets using a self-service approach.

 

Managing and Mashing up Portal Pages with Web Page Composer
Supporting business key users in the easy creation and management of enriched portal pages, blending business applications and user-generated content into a truly flexible UI.

 

Unified Access to Applications and Processes with Lower TCO
Delivering a best-of-class integration layer for SAP, BusinessObjects and non-SAP applications and reports while maintaining low TCO, with capabilities such as advanced caching, integration with the SAP central transport system, significant performance and scalability improvements, a common Java stack, and an improved server administration and development environment.

 

Portal Landscape Interoperability and Openness
Providing industry-standard integration capabilities for SAP and non-SAP content, both into the SAP Portal and for 3rd-party portals, such as JSR and Java 5 support, or open APIs for navigation connectors.

Business Warehouse

Scalability and performance have been enhanced for faster decision making: count in remarkably accelerated data loads, a next level of performance for the BW Accelerator, and support for Teradata as an additional database for SAP NetWeaver BW. Flexibility is increased by further integration of the SAP BusinessObjects BI and EIM tools, with tighter integration with SAP BusinessObjects Data Services and SAP BusinessObjects Metadata Management. Configuration and operations have been simplified with the new Admin Cockpit integrated into SAP Solution Manager. Wizard-based system configuration has also been introduced.

Process Integration

PI now makes a large number of solutions available for out-of-the-box integration: for SAP applications there is prepackaged process integration content semantically interlinked with SAP applications and industry solutions, and for partners and ISVs SAP provides certification programs that help to ensure quality.

There is ONE platform (and not several) to support all integration scenarios: A2A, B2B, interoperability with other ESBs, SOA, and so forth.

In addition, there is support for replacing third-party integration solutions to lower TCO, and interoperability with other ESBs to protect investments.

Broad support for operating environments and databases is made available.

Business Process Management/CE

With the WD/ABAP integration, you can browse the WD/ABAP UI repository of a backend system and use a WD/ABAP UI in a BPM task.

The API for Managing Processes and Tasks starts process instances, retrieves task lists, and executes actions on tasks.

With the business rule improvements, you can now reuse rules or decision tables across rule sets. Along with this come other usability and developer productivity enhancements.

With zero configuration for local services, a big simplification of SOA configuration was achieved.

Mobile

In the new version, operational costs are reduced through optimized monitoring and administration capabilities. Robustness has been enhanced through improved security and simplified upgrades. There is greater flexibility regarding backend interoperability through web service interfaces and multiple backend connectivity.

More information is available on the SDN pages for SAP NetWeaver 7.3 or in the NetWeaver 7.3 manuals in the SAP Help Portal.

Benny Schaich-Lebek is a product specialist in SAP NetWeaver product management.



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22371%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/SAP BW | 2010. 10. 11. 20:44

BW 7.30: Semantically Partitioned Objects
Alexander Hermann, SAP AG
Posted on Oct. 11, 2010 04:22 AM in Business Intelligence (BI)

 
 

Motivation

Enterprise Data Warehouses are the central source for BI applications and are faced with the challenge of efficiently managing constantly growing data volumes. A few years ago, Data Warehouse installations requiring terabytes of space were a rarity. Today the first installations with petabyte requirements are starting to appear on the horizon.

In order to handle such large data quantities, we need to find modeling methods that guarantee the efficient delivery of data for reporting. Here it is important to consider various aspects such as the loading and extraction processes, the index structure and data activation in a DataStore object. The Total Cost of Development (TCD) and the Total Cost of Ownership (TCO) are also very important factors.

Here is an example of a typical modeling scenario. Documents need to be saved in a DataStore object. These documents can come from anywhere in the world and are extracted on a country-specific basis. Here each request contains exactly one country/region.

Figure 1

If an error occurs (due to invalid master data) while the system is trying to activate one of the requests, the other requests cannot be activated either and are therefore initially not available for reporting. This issue becomes even more critical if the requests concern country-specific, independent data.

Semantic partitioning provides a workaround here. Instead of consolidating all the regions into one DataStore object, the system uses several structurally identical DataStore objects or “partitions”. The data is distributed between the partitions, based on a semantic criterion (in this example, "region").

Figure 2

Any errors that occur while requests are being activated now only affect the regions that caused the errors. All the other regions are still available for reporting. In addition, the reduced data volume in the individual partitions results in improved loading and administration performance.

However, the use of semantic partitioning also has some clear disadvantages. The effort required to generate the metadata objects (InfoProviders, transformations, data transfer processes) increases with every partition created. In addition, any changes to the data model must be carried out for every partition and for all dependent objects. This makes change management more complex. Your CIO might have something to say about this, especially with regard to TCO and TCD!

Examples of semantically partitioned objects

This is where the semantically partitioned DataStore objects and InfoCubes (abbreviated “SPO”: semantically partitioned object) introduced in SAP NetWeaver BW 7.30 come in. It is now possible to use SPOs to generate and manage semantically partitioned data models with minimal effort.

SPOs provide you with a central UI that enables the one-time maintenance of the structure and partitioning properties. During the activation stage, the required information is retrieved for generating the partitions. Changes, such as adding a new InfoObject to the structure, are performed once on the SPO and are automatically applied to the partitions. You can also generate DTPs and process chains that match the partitioning properties.

The following example demonstrates how to create a semantically partitioned DataStore object. The section following the example provides you with an extensive insight into the new functions.

DataStore objects and InfoCubes can be semantically partitioned. In the Data Warehousing Workbench, choose “Create DataStore Object”, for example, and complete the fields in the dialog box. Make sure that the option “Semantically Partitioned” is set.

 

Figure 3 

 Figure 4

 

A wizard (1) guides you through the steps for creating an SPO. First, define the structure as you are used to doing for standard DataStore objects (2). Then choose "Maintain Partitions".

 

Figure 5

 

In the next dialog box, you are asked to specify the characteristics that you want to use as partitioning criteria. You can select up to 5 characteristics. For this example, select "0REGION". The compounded InfoObject "0COUNTRY" is automatically included in the selection.

 

Figure 6

 

You can now maintain the partitions. Choose the button (1) to add new partitions and change their descriptions (2). Use the checkbox (3) to decide whether you want to use single values or value ranges to describe the partitions. Choose “Start Activation”. You have now created your first semantically partitioned DataStore object.

 

Figure 7

Figure 8

 

In the next step, you connect the partitions to a source. Go to step 4: “Create Transformation” and configure the central transformation using the relevant business logic.

 

Figure 9

 

Now go to step 5: “Create Data Transfer Processes” to generate DTPs for the partitions. On the next screen, you see a list of the partitions and all available sources (1). First, choose “Create New DTP Template” (2) to create a parameter configuration.

 

Figure 10

 

A parameter configuration/DTP template corresponds to the settings that can be configured in a DTP. These settings are applied when DTPs are generated.

 

Figure 11

 

Once you have created the DTP template, drag it from the Template area and drop it on a free area under the list of partitions (1). This assigns a DTP to every source-target combination. If you need different templates for different partitions, you can drag and drop a template onto one specific source-target combination.

Once you have finished, select all the DTPs (2) and choose “Generate”.

 

Figure 12

 

The last step is to generate a process chain in order to execute the DTPs. Go to step 6 in the wizard: “Create Process Chains”. In the next screen, select all the DTPs and drag and drop them to the lower right screen area: “Detail View (1)”.   You use the values "path" and “sequence” to control the parallel processing of DTPs. DTPs with the same path are executed consecutively.

 

Figure 13

 

Choose “Generate” (3). The following process chain is created.

 

Figure 14

  

Summary

In this article, you learned how to create a semantically partitioned object. Using the central UI of an SPO, it is now possible to create and maintain complex partitioned data models with minimal effort. In addition, SPOs guarantee the consistency of your metadata (homogeneous partitions) and data (filtered according to the partition criterion).

Once you have completed the 6 steps, you will have created the following components:

 

  • An SPO with three partitions (DataStore objects)
  • A central transformation for the business logic implementation
  • Three data transfer processes
  • One process chain

 

Source: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/21334%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 9. 25. 21:44

BW 7.30: The New Planning Modeler
Sabrina Hinsberger, SAP AG
Posted on Sep. 17, 2010 09:32 AM in Beginner, Business Intelligence (BI), SAP NetWeaver Platform

 

The new Planning Modeler

With release 7.30, the new Planning Modeler is born! You will see that it is not only new but also different from the old release.

First of all, what is the Planning Modeler? It is the central tool for customizing planning applications within SAP BW Integrated Planning. The new release comes with all the central features you know from the old version. But unlike the old Java Web Dynpro based modeler, the new release is SAP GUI based. Additionally, the new modeler allows better integration with the modeling in SAP BW. The planning customizing is based on objects like transactional InfoCubes and InfoObjects with their hierarchies and master data. These objects are maintained in the Administrator Workbench (transaction RSA1). It was a top goal of the new development to have strong integration here. As a result of fulfilling this key goal, you can now see all planning objects within the Administrator Workbench and navigate from there into the Planning Modeler.

The real-time InfoProviders as well as the filters and planning functions can be found under the corresponding aggregation level and maintained there. This makes it easy to find planning objects that are connected to each other.

RSA1 

The planning sequences have their own area in the Administrator Workbench, with a brand-new feature: you can now arrange your planning sequences in InfoAreas, which enables a complete semantic grouping of your planning applications. At a glance, you can see all planning objects used in a planning sequence.

By just double-clicking on the sequence name, you can display the details of the sequence and then execute it in the test framework.

 

Moreover, transaction RSPLAN gives you a standalone design-time tool for your planning objects. This view leans towards the look and feel of the InfoProvider maintenance.

 

In particular, the aggregation level maintenance was adapted to the InfoProvider maintenance, so that InfoObjects can now be added to an aggregation level simply by drag & drop.

 

In conclusion, one can say:
If you want to build up a planning application, the new release can save you time and effort. There is no need to install a Java stack; everything can be done in the SAP GUI! This means lower TCO and an easy start for your planning project!

Even better for those who already run planning applications within SAP BW Integrated Planning: you don’t need to migrate any of your current planning objects!

Just call transaction RSPLAN or the Administrator Workbench (Transaction RSA1) and try out the New Planning Modeler.

 


Source: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20727%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 8. 16. 10:52

SAP BW – Infoprovider Data Display (LISTCUBE) - Improvised

Suraj Tigga (Capgemini), Article (PDF 761 KB), 04 August 2010

Overview

Methods to display InfoProvider data without repeatedly selecting characteristics and key figures.




http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/70e092a3-9d8a-2d10-6ea5-8846989ad405&utm_source=twitterfeed&utm_medium=twitter
Posted by AgnesKim
Technique/SAP BW | 2010. 8. 6. 10:31

BW 7.30: Define Delta in BW and no more Init-InfoPackages
Thomas Rinneberg, SAP AG
Posted on Aug. 05, 2010 03:52 PM in Business Intelligence (BI)

You might know and appreciate the capability to generically define a delta when building a DataSource in SAP source systems. But you had a lot of work if you wanted to load delta data from any other source system type, like DB Connect, UD Connect or file. You could declare that the data is a delta, OK, but this had no real effect; it was purely declarative. The task of selecting the correct data from the source was still yours.

Now with BW 7.30 this has changed. Because now there is – the generic BW delta!

As expected, you start by defining a BW DataSource.

Create DataSource - Extraction Tab

Nothing new so far. But if you specify that this DataSource is delta-enabled, you will find a new dropdown:

Use generic delta

These are the same options that you already know from the SAP source system DataSource definition:

Generic delta in OSOA

Ok, let’s see what happens if we select “Date”.

Date Delta

You already know the “Delta Field” field and the two interval fields from the generic delta in the SAP source system, and they have the same meaning. So hopefully I can skip the lengthy explanation of the safety margin interval logic and come to the extra field that popped up: the time zone. Well, OK, not very thrilling, but probably useful: since the data in your source might not be saved in the same time zone as the BW system that loads it (or as your local time), you can explicitly specify the time zone of your data.

“Time stamp – short” offers much the same input fields, except that the intervals are given in seconds rather than days. “Time stamp long (UTC)” by definition lacks the “Time zone” field. Let’s look at “Numeric pointer”:

Numeric Delta

Oops – no upper interval! I guess now I do need to spend some words on these intervals: the value given in “Upper Interval” is subtracted from the upper limit used for selecting the delta field. Let’s say the current upper value of the delta field is 100 and the upper interval is 5. Then we would only select the data up to the value 95. But hold on – how should the system know the current value of the numeric field without extracting it? So we extract the data up to the current upper value anyhow – and hence there is no use in specifying an upper interval.

The lower limit, in turn, is automatically derived from the loaded data – and is thus known before the next request starts. Hence we can subtract the safety margin before starting the selection.
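
As a small worked sketch of this arithmetic (my own illustration with made-up numbers, not the extractor's actual code): the next delta simply starts its selection a safety margin below the upper bound read by the previous delta.

  REPORT zdelta_interval_demo.
  " Illustrative only: safety-margin arithmetic for a numeric pointer delta.
  DATA: lv_prev_upper   TYPE i VALUE 100, " highest pointer value read by the previous delta
        lv_lower_margin TYPE i VALUE 10,  " lower interval (safety margin)
        lv_select_from  TYPE i.

  " The next delta selects from (previous upper bound - safety margin) onwards.
  " No upper interval is applied, because the current maximum of the pointer
  " field is unknown before extraction - exactly the point made above.
  lv_select_from = lv_prev_upper - lv_lower_margin.
  WRITE: / 'Next delta selects records with pointer >=', lv_select_from.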

Our example DataSource has a UTC time stamp field, so let’s select it:

Timestamp Delta

Activate the DataSource and create an InfoPackage:

InfoPackage

CHANGED is not a selectable field in the InfoPackage. Why not? Well, the delta selections are calculated automatically; you do not need to select on them explicitly. Now let’s not forget to set the update mode to Init in order to take advantage of the generic delta:

Auto-Delta-Switch to Delta

Wait a minute! There is a new flag: “Switch InfoPack. in PC to Delta (F1)”. Guess I need to press F1 to understand what this field is about.

Explanation

Sounds useful, doesn’t it? No more maintenance of two different InfoPackages and process chains for delta upload! You can use the same InfoPackage to load init and delta, just like with a DTP.

In our small test we do not need a process chain, so let’s go on without this flag and load it. Then let’s switch the InfoPackage to Delta manually and load again.

Monitor

Indeed, there are selections for our field CHANGED.

 

Don't miss any of the other information on BW 7.30, which you can find here.

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20413%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

===========================================================================================================

Oho!! I want to test this out!

Posted by AgnesKim
Technique/SAP BW | 2010. 8. 3. 12:45

Creating a BW Archive Object for InfoCube/DSO from Scratch and Other Homemade Recipes
Karin Tillotson, Valero Energy Corporation
Posted on Aug. 02, 2010 07:29 PM in Business Intelligence (BI), Business Process Expert, SAP Developer Network, SAP NetWeaver Platform

 

In this blog, I will go over step-by-step instructions for creating a BW archive object for InfoCubes and DSOs, and will also provide some SAP-recommended BW housekeeping tips.

 

To start with, I thought I would go over some differences between ERP Archiving and BW Archiving:

 

ERP Archiving:

  • Delivered data structures/business objects
  • Delivered archive objects (more than 600 archive objects in ECC 6.0)
  • Archives mostly original data
  • Performs an archivability check for some archive objects, checking for business-complete data or residence time (the period of time that must elapse before data can be archived)
  • After archiving, data can be entered for the archived time period
 

BW Archiving:

  • Generated data structures
  • Generated archive objects
  • Archives mainly replicated data
  • No special check for business complete or residence time
  • After archiving a time slice, no new data can be loaded for that time slice
 

To begin archiving, you will need to perform the following steps:

  1. Set up archive file definitions
  2. Set up content repositories (if using 3rd party storage)
  3. Create archive object for InfoCube/DSO

 

Step 1 - To begin archiving, you will need a place to write out the archive files.  You do not necessarily need a 3rd party storage system (though I highly recommend one).  But, you do need a filesystem/directory in which to either temporarily or permanently “house” the files.

 

Go to transaction /nFILE

 

image 1

 

Either select an SAP-supplied logical file path or create your own.

Double-click on the relevant logical file path, then select/double-click on the relevant syntax group (AS/400, UNIX, or Windows).

Assign the physical path to which the archive files will be written.

 

image 2

 

Next, you need to configure the naming convention of the archive files.

Select the relevant Logical File Path, and go to Logical File Name Definition:

 

image 3

 

In the Physical file parameter, select the relevant parameters you wish to use to describe the archive files. See OSS Note 35992 for all of the possible parameters you can choose.

 

Step 2 - If you will be storing the archive files in a 3rd party storage system (have I mentioned I highly recommend this), you need to configure the content repository.

image 15

 

Enter the Content Repository Name, Description, etc.  The parameters entered will be subject to the 3rd party storage requirements.

 

Step 3 is to create the archive object for the relevant InfoCube or DSO:

Go to transaction RSA1:

image 5

 

Find and select the relevant InfoCube/DSO, right-click and then click on Create Data Archiving Process.

 

The following tabs will lead you through the rest of the necessary configuration.

The General Settings tab is where you select whether you are going to configure an ADK-based archive object, a nearline storage (NLS) object, or a combination.

image 6

 

On the Selection Profile tab, if the time slice characteristic isn’t a key field, select the relevant field from the drop down and select this radio button:

image 7

 

If using the ADK method, configure the following parameters:

Enter the relevant logical file name, the maximum size of the archive file, the content repository (if using 3rd-party storage), whether the delete jobs and store jobs should be scheduled manually or automatically, and whether the delete job should read the files from the storage system.

image 8

You then need to Save and Activate the Data Archiving Process.

 

Once the archive object has been activated, you can then either schedule the archive process through the ADK (Archive Development Kit) using transaction SARA, or you can right click on the InfoCube/DSO and select Manage ADK Archive.

 

image 9

 

Click on the Archiving tab:

image 10

 

And, click on Create Archiving Request.

 

When submitting the Archive Write Job, I recommend selecting the check box for Autom. Request Invalidation.

If this is selected and an error occurs during the archive job, the system will automatically set the status of the run to ‘99 Request Canceled’ so that the lock will be deleted.

image 13 

 

If submitting the job through RSA1 -> Manage, select the appropriate parameters in the Process Flow Control section:

 

image 14

 

When entering the time slice criteria for the archive job, keep in mind that a write lock will be placed on the relevant InfoCube/DSO until both the archive write job and the archive delete job have completed. 

 

Additional topics to consider when implementing an archive object for an InfoCube/DSO:

  • For ODS objects, ensure all requests have been activated
  • For InfoCubes, ensure the requests to be archived have been compressed
  • Recommended to delete the change log data (for the archived time slice)
  • Prior to running the archive jobs, stop the relevant load job
  • Once archiving is complete, resume relevant load job
 

In addition to data archiving, here are some SAP recommended NetWeaver Housekeeping items to consider:

 

From the SAP Data Management Guide that can be found at www.service.sap.com/ilm

 

(Be sure to check back every once in a while, as this gets updated every quarter.)

There are recommendations for tables such as:

  • BAL*
  • EDI*
  • RSMON*
  • RSBERRORLOG
  • RSDDSTATAGGRDEF
  • RSPC* (BW Process Chains)
  • RSRWBSTORE
  • Etc.

There are also several SAP OSS Notes that describe options for tables that you do not need to archive:

Search SAP Notes on Clean-Up Programs

www.service.sap.com/notes

Table RSBATCHDATA

  • Clean-up program RSBATCH_DEL_MSG_PARM_DTPTEMP

Table ARFCSDATA

  • Clean-up program RSARFCER

Tables RSDDSTAT

  • Clean-up program RSDDK_STA_DEL_DATA

Table RSIXWWW

  • Clean-up program RSRA_CLUSTER_TABLE_REORG

Table RSPCINSTANCE

  • Clean-up program RSPC_INSTANCE_CLEANUP
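
If you want to run one of these clean-up programs regularly, a common pattern is to schedule it as a periodic background job. The following ABAP sketch is only an illustration and assumes you have already created a suitable selection variant (here called 'CLEANUP', a made-up name) for the report; always check each program's selection screen and documentation in SE38 first.

  REPORT zschedule_housekeeping.
  " Illustrative sketch: run one of the clean-up reports listed above as a background job.
  " The variant name 'CLEANUP' and the job name are assumptions - adjust to your system.
  DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_BW_HOUSEKEEPING',
        lv_jobcount TYPE tbtcjob-jobcount.

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  " Example: clean-up of table RSPCINSTANCE (see the list above).
  SUBMIT rspc_instance_cleanup USING SELECTION-SET 'CLEANUP'
         VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = lv_jobname
      jobcount  = lv_jobcount
      strtimmed = 'X'. " start immediately; JOB_CLOSE also offers parameters for periodic scheduling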

http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20375%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

----------------------------------------------------------------------------------------------------------------

But actually, there are no cases of BI archiving in Korea..
And with hardware prices falling the way they are these days, just buying more hardware might be the better option.. =_=

Posted by AgnesKim
Technique/SAP BW | 2010. 7. 29. 14:10

BW 7.30: Simple modeling of simple data flows
Thomas Rinneberg, SAP AG
Posted on Jul. 28, 2010 02:05 PM in Business Intelligence (BI)

 

Have you ever thought: how many steps do I need to perform until I can load a flat file into BW? I need so many objects; above all, I need lots of InfoObjects before I can even start creating my InfoProvider. Then I need a DataSource, a transformation (where I need to draw many arrows), a DTP and an InfoPackage. I just want to load a file! Why does the system not help me?

Now it does – BW 7.30 brings the Data Flow Generation Wizard!

You start by going to the BW Data Warehousing Workbench (as always), then selecting the context menu entry “Generate Data Flow...” on either the file source system (if you just have a file and want to generate everything needed to load it), on an already existing DataSource (if you have that part already done – this also works for non-file source systems!), or on an InfoProvider (if you have your data target already modeled and just want to push some data into it).

Context Menu to start data flow wizard


Then the wizard will pop up:

Step 1 - Source options

Here, we have started from the source system. If you start from the InfoProvider, the corresponding step will not be shown in the progress area on the left, since you have selected that already. Same for the DataSource.

I guess you noticed already: ASCII is missing in the file type dropdown (how sad! – however, please read the wizard text in the screenshot above: it is only in the wizard that it is not supported, because the screen would become too complex). And look closer: there is “native XLS file”. Yes, indeed. “Save as CSV” in Excel is no longer necessary. You can just specify your Excel file in the wizard (and in DataSource maintenance as well). There is just one flaw for those who want to go right to batch upload: the Excel installation on your PC or laptop is used to interpret the file contents, so it is not possible to load Excel files from the SAP application server. For this, you still need to save as CSV first, but the CSV structure is identical to the XLS structure, so you do not need to change the DataSource.

OK, let’s fill out the rest of the fields: file name of course, DataSource, source system, blah blah – (oops, all this is prefilled after selecting the file!) – plus the ominous data type (yes, we still can’t live without that)

Step 1 - Pre-Filled Input Fields

and “Continue”:

Step 2 - CSV Options

Step 2 - Excel Options

One remark on the header lines: if you enter more than one (and it is recommended to have at least one line containing the column headers), we expect the column headers to be the last of the header lines, i.e. directly before the data. Now let’s go on:

Step 3 - Data Target

The following InfoProvider Types and Subtypes are available:

  • InfoCube – Standard and Semantically Partitioned
  • DataStore-Object – Standard, Write Optimized and Semantically Partitioned
  • InfoObject – Attributes and Texts
  • Virtual Provider – Based on DTP
  • Hybrid Provider – Based on DataStore
  • InfoSource
This is quite a choice. For those of you who got lost in that list, have a look at the decision tree available via the “i” button on the screen. As a hint: a standard DataStore object is good for most cases ;-)

Step 4 - Field Mapping

This is the core of the wizard. At this point, the file has already been read and parsed, and the corresponding data types and field names have been derived from the data of the file and the header line (if the file has one). In case you want to check whether the system did a good job, just double click the field name in the first column.

This screen also defines the transformation (of course only a 1:1 mapping, but this will do for most cases – otherwise you can just modify the generated transformation in the transformation UI later) as well as the target InfoProvider (if it does not already exist) plus the necessary InfoObjects. You can choose from existing InfoObjects (the “Suggestion” will give you a ranked list of InfoObjects that match your fields more or less well), or you can let the wizard create “New InfoObjects” after completion. The suggestion uses a variety of search strategies, from data type matches via text matches to matches already used in 3.x or 7.x transformations.

And that was already the last step:

Step 5 - End

After “Finish”, the listed objects are generated. Note that no InfoPackage will be generated, because the system generates the DTP to access the file directly rather than via the PSA.


Don't miss any of the other information on BW 7.30, which you can find here.

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20105%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

I have no idea why this would be needed;; scrapping it for now anyway.

Posted by AgnesKim
Technique/SAP BW | 2010. 7. 25. 02:21

Delta Queue Diagnosis
P Renjith Kumar, SAP Labs
Posted on Jul. 23, 2010 04:25 PM in Business Intelligence (BI), SAP NetWeaver Platform

 

Many times we come across situations where there may be inconsistencies in the delta queue. To check for these, we can use a diagnostic tool. The report is explained in detail here.

The program RSC1_DIAGNOSIS is the diagnosis tool for the BW delta queue.

image

How to use this report?

Execute the report RSC1_DIAGNOSIS from SE38/SA38 with the DataSource and destination details.

Use

With the RSC1_DIAGNOSIS check program, the most important information about the status and condition of the delta queue is issued for a specific DataSource.

Output

You get the following details once the report is executed:

  • General information about the DataSource and its version
  • Metadata of the DataSource and the generated objects for the DataSource
  • ROOSPRMSC table details for the DataSource, such as GETTID and GOTTID
  • ARFCSSTATE status
  • TRFCQOUT status
  • Record check with RECORDED status
  • Inconsistencies in the delta management tables
  • Error details, if available

Let’s see the output format of the report.

image

image

How to analyze?

Before analyzing this output, we need to know some important tables and concepts.

The delta management tables

Delta queue management: transaction RSA7

Tables

ROOSPRMSC: Control Parameters per DataSource Channel
ROOSPRMSF: Control Parameters per DataSource
TRFCQOUT: tRFC Queue Description (Outbound Queue)
ARFCSSTATE: Description of ARFC Call Status (Send)
ARFCSDATA: ARFC Call Data (Callers)

The delta queue is built on three qRFC tables: ARFCSDATA, which holds the data, and ARFCSSTATE and TRFCQOUT, which control the data flow to the BI system.

Now we need to know about the TID (transaction ID). You can see two entries, GETTID and GOTTID. Let us see what those are.

GETTID and GOTTID can be seen in table ROOSPRMSC.

image

GETTID: Delta Queue, Pointer to Maximum Booked Records in BW – i.e. this refers to the last-but-one delta TID.

GOTTID: Delta Queue, Pointer to Maximum Extracted Record – i.e. this refers to the last delta TID that has reached BW (used in case of a repeat delta).

The system will delete the LUWs greater than GETTID and less than or equal to GOTTID. This is because the delta queue holds only the last-but-one delta and the loaded delta.

Now let us look at the TID in detail.

TID = ARFCIPID + ARFCPID + ARFCTIME + ARFCTIDCNT (concatenated field contents).

All four fields can be seen in table ARFCSSTATE.

ARFCIPID: IP address
ARFCPID: Process ID
ARFCTIME: UTC time stamp (since 1970)
ARFCTIDCNT: Current number

To show how this is split, let us take the GETTID:

GETTID = 0A10B02B0A603EB2C2530020

This is split as 8 + 4 + 8 + 4 characters, corresponding to the four fields.

GETTID : 0A10B02B   0A60  3EB2C253  0020

ARFCIPID = 0A10B02B
ARFCPID = 0A60
ARFCTIME = 3EB2C253
ARFCTIDCNT = 0020
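
A small ABAP sketch of this 8 + 4 + 8 + 4 split (illustrative only; the field names come from tables ROOSPRMSC and ARFCSSTATE as described above):

  REPORT ztid_split_demo.
  " Split a 24-character TID into its four components (offsets 0/8/12/20).
  DATA: lv_tid    TYPE c LENGTH 24 VALUE '0A10B02B0A603EB2C2530020',
        lv_ipid   TYPE c LENGTH 8,
        lv_pid    TYPE c LENGTH 4,
        lv_time   TYPE c LENGTH 8,
        lv_tidcnt TYPE c LENGTH 4.

  lv_ipid   = lv_tid+0(8).   " ARFCIPID   - IP address
  lv_pid    = lv_tid+8(4).   " ARFCPID    - process ID
  lv_time   = lv_tid+12(8).  " ARFCTIME   - UTC time stamp
  lv_tidcnt = lv_tid+20(4).  " ARFCTIDCNT - current number

  WRITE: / 'ARFCIPID   =', lv_ipid,
         / 'ARFCPID    =', lv_pid,
         / 'ARFCTIME   =', lv_time,
         / 'ARFCTIDCNT =', lv_tidcnt.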

Enter these values as the selection in table ARFCSSTATE.

image

Here you can find the details of the TID.

Now we move on to the output of the report.

image

How to get the generated program?

20001115174832 = Time of generation

/BI0/QI0HR_PT_20001 = Generated extract structure

E8VDVBZO2CTULUAENO66537BO = Generated program

But to display the generated program, you need to add the prefix "GP" to the generated program name; the result can then be displayed in SE38.

Adding the prefix 'GP': GPE8VDVBZO2CTULUAENO66537BO

How to check details in STATUS ARFCSSTATE?

The output displays an analysis of the ARFCSSTATE status in the following form:

STATUS READ 100 times

LOW <Date> <Time> <TID> <SID-CLNT>
HIGH <Date> <Time> <TID> <SID-CLNT>

STATUS RECORDED 200 times

LOW <Date> <Time> <TID> <SID-CLNT>
HIGH <Date> <Time> <TID> <SID-CLNT>

READ = Repeat delta entries with TID

RECORDED = Delta entries

Using this analysis, you can see whether there are obvious inconsistencies in the delta queue. From the above output, you can see that there are 100 LUWs with the READ status (that is, they are already loaded) and 200 LUWs with the RECORDED status (that is, they still have to be loaded). For a consistent queue, however, there is only one status block for each status, that is, 1 x READ status and 1 x RECORDED status. If there are several blocks for a status, the queue is not consistent. This can occur due to the problem described in note 516251.
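
If you only need a quick overall count of tRFC LUW statuses for the BW destination, a simplified query like the one below can help. This is a rough sketch of my own, not what RSC1_DIAGNOSIS does internally: it does not restrict the count to a single DataSource queue, and the destination name 'BWCLNT100' is a made-up example.

  REPORT zarfc_status_count.
  " Rough sketch: count tRFC LUWs per status for one RFC destination.
  " ARFCDEST / ARFCSTATE are the destination and status fields of ARFCSSTATE.
  DATA: lv_dest  TYPE arfcsstate-arfcdest VALUE 'BWCLNT100', " hypothetical destination
        lv_state TYPE arfcsstate-arfcstate,
        lv_count TYPE i.

  SELECT arfcstate COUNT( * )
    INTO (lv_state, lv_count)
    FROM arfcsstate
    WHERE arfcdest = lv_dest
    GROUP BY arfcstate.
    WRITE: / lv_state, lv_count.
  ENDSELECT.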

How to check details in STATUS TRFCQOUT?

Only LUWs with STATUS READY or READ should appear in TRFCQOUT. Another status indicates an error. In addition, the GETTID and GOTTID are issued here with the relevant QCOUNT.

Status READ   = Repeat delta entries with low and high TID

Status READY = Delta entries ready to be transferred.

If the text line "No Record with NOSEND = U exists" is not issued, then the problem from note 444261 has occurred.

In our case, we did not get the READ, READY or RECORDED statuses; that is why the output shows ‘No Entry in ARFCSSTATE’ and ‘No Entry in TRFCQOUT’. But you will normally find them.

Checking Table level inconsistencies

In addition, this program lists possible inconsistencies between the TRFCQOUT and ARFCSSTATE tables.

If you see the following in the output

"Records in TRFCQOUT w/o record in ARFCSSTATE"

This shows an inconsistency at table level; to correct it, check note 498484.

The records issued by this check must be deleted from the TRFCQOUT table. This allows the delta to continue without reinitialization. However, if you are not certain that the data was loaded correctly into BW (see note 498484) and that it was not duplicated, you should carry out a reinitialization.

 

 

 

 http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20226%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim