Technique/SAP BW | 2011. 12. 8. 09:05

All You Need to Know about HybridProvider in BW 7.30

Rakesh Kalyankar | Article (PDF 1 MB) | 14 November 2011

Overview

The paper provides a detailed description of the following aspects of HybridProviders:

  • Purpose
  • Use cases
  • Metadata
  • Modeling
  • Usage
  • Technical details


Posted by AgnesKim
Technique/SAP BW | 2010. 12. 9. 10:35

BW 7.30: Data Flow Copy Wizard
Thomas Rinneberg SAP Employee
Company: SAP AG
Posted on Dec. 08, 2010 09:08 AM in Business Intelligence (BI)

 
 

If you have read my blog about the Data Flow Generation Wizard, you have already learned about one tool which eases the work of creating a BW data flow. However, it might be that you have done this already and heavily changed the generated objects to suit your needs. Now you need to create another data flow which looks quite similar to the one you have already created. Only the target would be a little different. Or you need to load from another DataSource in addition. Or from another source system (maybe a dummy source system? Cf. my other blog on transport features in 7.30). Too bad that you need to do all your modifications again!

No – because with BW 7.30 there also is – the Data Flow Copy Wizard.

And again, we start in the data warehousing workbench RSA1.

Workbench

 

Let’s display the data flow graphically:

Context Menu to start copy wizard

 

DataFlow Popup

 

DataFlow Display

 

Now, if we want to copy this data flow, we just need to choose “Copy data flow” instead of “Display data flow”. Again, you can choose in which direction the objects shall be collected.

DataFlow Popup

 

DataFlow Copy Question Popup

 

The system asks whether you want to collect process objects as well, or only data model objects. Why? The process objects are usually included in a process chain, and if you intend to copy the process objects, you should start copying with the process chain. We will try this out later, but for now let’s press “No” and sneak into the wizard itself.

DataFlow Copy Wizard Start Step

 

Looking at the step list on the left, it seems like the objects to be copied are divided up by object type. However, the order is strange, isn’t it? It is on purpose, and you will (hopefully) understand why if you read the lengthy explanation in the screenshot above ;-) Ok, let us start with the first step, “Number of Copies”!

DataFlow Copy Wizard Number of Copies

 

I have already chosen two copies; otherwise this step can be skipped. Now what do these “replacements” mean? Usually, when copies are performed, the objects are related to the original objects in terms of naming conventions. At the very least, for each object to be copied you need to enter a new name. Now, if you are going to create two copies at a time, you would need to enter two new names for each object. To simplify this, you can enter the placeholder & in the new object name and &VAR& in the description, and the placeholder will be replaced with what you specify on the screen above. It could look like this:

Replacement Input

 

So from an object named, for example, ZWQC_& (Sales for &VAR&), two objects can be created with the names ZWQC_USA (Sales for States) and ZWQC_EMEA (Sales for Europe and Asia).
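To make the replacement rule concrete, here is a minimal Python sketch of how such a placeholder substitution could work; the function and its exact semantics are my own illustration, not the wizard's internal logic:

    # Illustrative sketch only (not SAP code): "&" in the technical name and
    # "&VAR&" in the description are replaced once per requested copy.
    def expand_copies(name_pattern, descr_pattern, replacements):
        # replacements: one (name_value, description_value) pair per copy
        copies = []
        for name_value, descr_value in replacements:
            copies.append((name_pattern.replace("&", name_value),
                           descr_pattern.replace("&VAR&", descr_value)))
        return copies

    # Two copies, as in the example above:
    expand_copies("ZWQC_&", "Sales for &VAR&",
                  [("USA", "States"), ("EMEA", "Europe and Asia")])
    # -> [("ZWQC_USA", "Sales for States"), ("ZWQC_EMEA", "Sales for Europe and Asia")]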

Data Flow Copy Wizard Source System Step

 

Now the actual copy customizing starts. All of the following steps share the elements already visible on the screen above: for any original object, the target object can be specified in several ways by clicking on the column in the middle:

Copy Modes

 

  • You can use the original object uncopied. This means the original object will be included in the copied data flow. Depending on the object that you keep, you can:
    • Add a new load branch to an existing data target
    • Create a new DataSource for the same source system
    • Load the same DataSource from another source system
    • Load an existing DataSource into a new target
    If you keep all objects in all steps, you actually do not perform a copy.
  • You can use a different, already existing object. This will include the specified object in your copied data flow. You might have created the InfoProvider already, but you want to copy the transformation from an already existing transformation. For the source system object type, as shown above, the source system must have been created before you start the wizard.
  • You can create a new object as copy of the original object. This is the standard option that you would like to use, and the one which is not available for source systems ;-)
  • You can exclude the object from the further copy process. This means that all objects dependent on the excluded object will also be excluded from the copy process. So if you exclude the source system, you will automatically exclude the DataSource and the corresponding transformation as well. That leaves only the DataStore, the InfoCube and the transformation between them to be tackled (see the sketch after this list).
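The cascading exclusion is essentially a reachability computation over the object dependency graph. A rough sketch (my own illustration with a made-up dependency mapping, not SAP's implementation):

    # Illustrative sketch only (not SAP code): excluding an object also
    # excludes every object that depends on it, so the copy stays consistent.
    from collections import deque

    def excluded_closure(dependents, excluded_roots):
        # dependents: maps an object to the objects that directly depend on it
        excluded = set(excluded_roots)
        queue = deque(excluded_roots)
        while queue:
            for dep in dependents.get(queue.popleft(), []):
                if dep not in excluded:
                    excluded.add(dep)
                    queue.append(dep)
        return excluded

    # Excluding the source system drags the DataSource and its transformation along:
    dependents = {"SOURCE_SYSTEM": ["DATASOURCE"],
                  "DATASOURCE": ["TRANSFORMATION_DS_TO_DSO"],
                  "DSO": ["TRANSFORMATION_DSO_TO_CUBE"]}
    excluded_closure(dependents, ["SOURCE_SYSTEM"])
    # -> {"SOURCE_SYSTEM", "DATASOURCE", "TRANSFORMATION_DS_TO_DSO"}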
This already gives an impression of how the wizard takes care of the interdependencies between the objects, so that you always get a consistent copy no matter how complex your data flow is. I have chosen to keep the source system, and for the InfoProviders, I will copy the cube only:

Data Target Assignment

 

Then the wizard gives me no chance to mess up the corresponding transformations in the next step:

Transformation Assignment

 

You will especially appreciate this help when it comes to a deep copy of process chains. Let’s try this out. I exit the wizard.

Exit Question

 

Oops, I can save my entries!? That sounds like a useful feature. Indeed, if I had continued the wizard to the end and actually performed the copy, my entries would have been saved automatically. So if I am going to change something in the original objects and want to propagate this change to the copies I have made, I am offered the following additional step in the wizard:

Use previous copy processes as template

 

With this, I can very swiftly walk through the steps, which already carry my settings from the chosen previous copy process. I just need to exclude those objects whose changes I do not want to copy over (or rather use the already copied objects). Moreover, there is transaction RSCOPY, which shows me which copy processes I have already run. We will come back to this later; for now we wanted to look at the process chain copy. Let us choose the menu “process chain” -> “copy” in the process chain maintenance:

Process chain copy question

 

Of course we want to use the wizard. This time we are not asked whether we want to collect process objects as well ;-) Instead, the step list contains some more steps:

Copy Wizard Steps for Process Chain copy

 

Let’s fast forward to “Process Chains”.

Process Chain Assignment Popup

 

The system assumes we want to copy the process chain (how intelligent ;-) and thus confronts us with a popup where we could change the target object name and description. Having filled it out, the wizard shows us another chain as well, the subchain of the selected one:

Subchain

 

Let us keep (re-use) that subchain and fast-forward to the “directly dependent processes”…

Source Systems Empty

 

STOP! What is this? I cannot change the source system? Why can’t I change the source system? Going on: I cannot change any of the DataSources, InfoProviders or Transformations! So we tricked ourselves: by not copying the subchain, we are still referring to the original data flow in our copy (in the subchain). The system ensures that the outcome is consistent, so it does not let me choose another data flow. Ok, convinced. I will copy the subchain as well. Now I am allowed to make my changes concerning the InfoCube as before. Phew.

Data Target Assignment

 

Copy Wizard Process Copy Step

 

So these are the “directly dependent objects” – directly dependent on a data flow object. Since I have copied the InfoCube only, the system proposes to keep most of the processes and only create a new DTP for the new data flow branch plus a data deletion process. We can double-click on the original object "0WQ_308_DELDELTA" to see what it looks like.

Data Deletion Process

 

It contains both the DataStore and the InfoCube. If I copy it, the InfoCube would be replaced by my new InfoCube in the copy, but the DataStore would still be in it. Well, it shall be a copy… Also, if I look at the list of processes above, I see that the InfoPackages and DataStore will be loaded in my new chain as well as in my old chain. Not such a good idea. Maybe it would be better to copy the data flow only and modify my existing chain so that it drops and loads the new cube in addition to the old one. So the system does not totally relieve me from thinking for myself…

Ok, let us continue with our chain copy anyhow to see the outcome.

Copy Wizard Other Processes Step

 

These are the processes independent of the data flow; we have to choose names for the triggers, alright. There is one step missing in our example, since we have no such processes in our chain: the “indirectly dependent processes”, which in turn refer to a “directly dependent process”.

Let us go to the end and execute.

Batch Question

 

We choose "In Dialog".

Log

 

Everything worked. And what is the result?

Process Chain
Process Chain

 

The target chain looks very much like the old one, except for the subchain and DTP, which we have copied as well. And how does the data flow look now?

Data Flow

 

That’s a nice result, isn’t it? So we have a new cube as a copy of the old one, plus the transformation and DTP.

But now I had promised to show you transaction RSCOPY where we can see the logs of our copy processes:

Copy Monitor

 

Oops, there is a red icon. I have hidden this attempt from you, but now it comes to light that I made a mistake. Let’s double-click it!

Copy Log

 

I forgot to remove the leading “0” from the process chain name. And how did I recover from this? I just started the wizard again, chose this failed copy process as a template (you can see it in the column “Template” in the previous screenshot of transaction RSCOPY) and corrected my mistake. The copy then ran through to the end.

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22416%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 12. 9. 10:33

BW 7.3: Troubleshooting Real-Time Data Acquisition
Tobias Kuefner SAP Employee 
Company: SAP AG
Posted on Dec. 08, 2010 09:07 AM in Business Intelligence (BI)

The main advantage of real-time data acquisition (RDA) is that new data is reflected in your BI reports just a few minutes after being entered in your operational systems. RDA therefore supports your business users in making their tactical decisions on a day-to-day basis. The drawback, however, is that these business users notice much faster when one of their BI reports is not up to date. They might then call you and ask why the document posted 5 minutes ago is not yet visible in reporting. And what do you do now? I’ll show you how BW 7.3 helps you to resolve problems with real-time data acquisition faster than ever before.

First, let’s have a look at what else is new to RDA in BW 7.3. The most powerful extension is definitely the HybridProvider. By using RDA to transfer transactional data into a HybridProvider, you can easily combine the low data latency of RDA with the fast response times of an InfoCube or a BWA index, even for large amounts of data. You’ll find more information about this combination in a separate blog. Additionally, BW 7.3 allows for real-time master data acquisition. This means that you can transfer delta records to InfoObject attributes and texts at a frequency of once per minute. And just as RDA directly activates data transferred to a DataStore object, master data transferred to an InfoObject becomes available for BI reporting immediately.

But now, let’s start the RDA monitor and look at my examples for RDA troubleshooting. I’ve chosen some data flows from my BW 7.0 test content and added a HybridProvider and an InfoObject. I know that this flight booking stuff is not really exciting, but the good thing is that I can break it without getting calls from business users.

Remember that you can double-click on the objects in the first column to view details. You can see, for example, that I have configured RDA requests to stop after 13 errors.

Everything looks fine. So let’s start the RDA daemon. It will execute all the InfoPackages and DTPs assigned to it at a frequency of once per minute. But wait… what’s this?

The system asks me whether I’d like to start a repair process chain to transfer missing requests to one of the data targets. Why? Ah, okay… I’ve added a DTP for the newly created HybridProvider but forgotten to transfer the requests already loaded from the DataSource. Let’s have a closer look at these repair process chains while they are taking care of the missing requests.

On the left hand side, you can see the repair process chain for my HybridProvider. Besides the DTP, it also contains a process to activate DataStore object data and a subchain generated by my HybridProvider to transfer data into the InfoCube part. On the right hand side, you can see the repair process chain for my airline attributes which contains an attribute change run. Fortunately, you don’t need to bother with these details – the system is doing that for you. But now let’s really start the RDA daemon.

Green traffic lights appear in front of the InfoPackages and DTPs. I refresh the RDA monitor. Requests appear and show a yellow status while they load new data package by package. The machine is running and I can go and work on something else now.

About a day later, I start the RDA monitor again and get a shock. What has happened?

The traffic lights in front of the InfoPackages and DTPs have turned red. The RDA daemon is showing the flash symbol, which means that it has terminated. Don’t panic! It’s BW 7.3. The third column helps me to get a quick overview: 42 errors have occurred under my daemon, 4 DTPs have encountered serious problems (red LEDs), and 4 InfoPackages have encountered tolerable errors (yellow LEDs). I double-click on “42” to get more details.

Here you can see in one table which objects ran into which problem at what time. I recognize at a glance that 4 InfoPackages repeatedly failed to open an RFC connection at around 16:00. The root cause is probably the same, and the timestamps hopefully indicate that it has already been removed (No more RFC issues after 16:07). I cannot find a similar pattern for the DTP errors. This indicates different root causes. Finally, I can see that the two most recent runtime errors were not caught and thus the RDA daemon has terminated. You can scroll to the right to get more context information regarding the background job, the request, the data package, and the number of records in the request.

Let’s have a short break to draw a comparison. What would you do in BW 7.0? 1) You could double-click on a failed request to analyze it. This is still the best option to analyze the red DTP requests in our example. But you could not find the tolerable RFC problems and runtime errors.

2) You could browse through the job overview and the job logs. This would have been the preferable approach to investigate the runtime errors in our example. The job durations and the timestamps in the job log also provide a good basis to locate performance issues, for example in transformations.

3) You could browse through the application logs. These contain more details than the job logs. The drawback however is that the application log is lost if runtime errors occur.

These three options are still available in BW 7.3 – they have even been improved. In particular, the job and application logs have been reduced to the essential messages. Locating a problem is still a cumbersome task, however, if you don’t know when it occurred. With the integrated error overview in the RDA monitor, BW 7.3 allows you to analyze any problem with the preferred tool. Let me show you some examples.

Unless you have other priorities from your business users, I’d suggest starting with runtime errors because they affect all objects assigned to the daemon. RDA background jobs are scheduled with a period of 15 minutes to make them robust against runtime errors. In our example, this means the RDA daemon serves all DataSources from the one with the lowest error counter up to the one which causes the runtime error. The job is then terminated and restarted 15 minutes later. The actual frequency is thus reduced from 60/h to 4/h, which is not real-time anymore. Let’s see what we can do here. I’ll double-click on “10” in the error column for the request where the problem has occurred.

I just double-click on the error message in the overview to analyze the short dump.

 

Phew… This sounds like sabotage! How can I protect the other objects assigned to the same daemon from this runtime error while I search for the root cause? I could just wait another hour, of course. This RDA request will then probably have reached the limit of 13 errors that I configured in the InfoPackage. Once this threshold is reached, the RDA daemon will exclude this InfoPackage from execution. The smarter alternative is to temporarily stop the upload and delete the assignment to the daemon.
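As an aside, the error-limit mechanism just mentioned could be sketched roughly like this (purely illustrative, not an SAP API; the limit of 13 comes from my InfoPackage setting above):

    # Illustrative sketch only (not SAP code): each InfoPackage/DTP carries an
    # error counter; once the configured limit is reached, the daemon excludes
    # the object from execution until the counter is reset.
    class RdaObject:
        def __init__(self, name, error_limit=13):
            self.name = name
            self.error_limit = error_limit
            self.error_count = 0
            self.excluded = False

        def record_error(self):
            self.error_count += 1
            if self.error_count >= self.error_limit:
                self.excluded = True  # no longer served by the RDA daemon

        def reset_errors(self):
            # reset manually, or automatically when a new request is created
            self.error_count = 0
            self.excluded = False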

The overall situation becomes less serious once the DataSource has been isolated under “Unassigned Nodes”. The daemon continues at a frequency of once per minute although there are still 32 errors left.

Note that most of these errors – namely the RFC failures – can be tolerated. This means that these errors (yellow LEDs) do not hinder InfoPackages or DTPs until the configured error limit is reached. Assume that I’ve identified the root cause of the RFC failures as a temporary issue. I should then reset the error counter for all objects that have not encountered other problems. This function is available in the menu and context menu. The error counter of an InfoPackage or DTP is also reset automatically when a new request is created. Now let’s look at one of the serious problems. I’ll therefore double-click on “2” in the error column of the first DTP with a red LED.

When I double-click on the error message, I see the exception stack unpacked. Unfortunately that does not tell me more than I already knew: An exception has occurred in a sub step of the DTP. So I navigate to the DTP monitor by double-clicking the request ID (217).

 

Obviously, one of the transformation rules contains a routine that has raised the exception “13 is an unlucky number”. I navigate to the transformation and identify the root cause quickly.

In the same way, I investigate the exception which has occurred in DTP request 219. The DTP monitor tells me that something is wrong with a transferred fiscal period. A closer look at the transformation reveals a bug in the rule for the fiscal year variant. Before I can fix the broken rules, I need to remove the assignment of the DataSource to the daemon. When the corrections are done, I schedule the repair process chains to repeat the DTP requests with the fixed transformations. Finally I re-assign the DataSource to the daemon.

The RDA monitor already looks much greener now. Only one DataSource with errors is left. More precisely, there are two DTPs assigned to this DataSource which encountered intolerable errors, so the request status is red. Again, I double-click in the error column to view details.

The error message tells me straight away that the update command has caused the problem this time rather than the transformation. Again, the DTP monitor provides insight into the problem.

Of course “GCS” is not a valid currency (Should that be “Galactic Credit Standard” or what?). I go back to the RDA monitor and double-click on the PSA of the DataSource in the second column. In the request overview, I mark the source request of the failed DTP request and view the content of the problematic data package number 000006.

Obviously, the data is already wrong in the DataSource. How could this happen? Ah, okay… It’s an InfoPackage for Web Service (Push). Probably the source is not an SAP system, and a data cleansing step is needed – either in the source system or in the transformation. As a short-term solution, I could delete or modify the inconsistent records and repeat the failed DTP requests with the repair process chain.

That’s all. I hope you enjoyed this little trip to troubleshooting real-time data acquisition, even though this is probably not part of your daily work yet. Let me summarize what to do if problems occur with RDA. Don’t panic. BW 7.3 helps you to identify and resolve problems faster than ever before. Check the error column in the RDA monitor to get a quick overview. Double-click wherever you are to get more details. Use the repair process chains to repeat broken DTP requests. 


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20954%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 8. 6. 10:31

BW 7.30: Define Delta in BW and no more Init-InfoPackages
Thomas Rinneberg SAP Employee
Company: SAP AG
Posted on Aug. 05, 2010 03:52 PM in Business Intelligence (BI)

You might know and appreciate the capability to generically define a delta when building a DataSource in SAP source systems. But you had a lot of work if you wanted to load delta data from any other source system type, like DBConnect, UDConnect or File. You could declare that the data is delta, OK, but this had no real effect; it was purely declarative. The task of selecting the correct data from the source was still yours.

Now with BW 7.30 this has changed. Because now there is – the generic BW delta!

As expected, you start by defining a BW DataSource.

Create DataSource - Extraction Tab

Nothing new, up to now. But if you choose that this DataSource is delta enabled, you will find a new dropdown:

Use generic delta

These are the same options that you already know from the SAP source system DataSource definition:

Generic delta in OSOA

Ok, let’s see what happens if we select “Date”.

Date Delta

The “Delta Field” and the two interval fields you already know from the generic delta in the SAP source system, and they have the same meaning. So hopefully I can skip the lengthy explanation of the safety margin interval logic and come to the extra field which popped up: the time zone. Well, ok, not very thrilling, but probably useful: since the data in your source might not be saved in the same time zone as the BW system which loads it (or your local time), you can explicitly specify the time zone of your data.

“Time stamp – short” offers much the same input fields, except that the intervals are given in seconds rather than days. “Time stamp Long (UTC)” by definition lacks the “Time zone” field. Let’s look at “Numeric Pointer”:

Numeric Delta

Oops – no upper interval! I guess now I do need to spend some words on these intervals: the value given in “Upper Interval” is subtracted from the upper limit used for selecting the delta field. Let’s say the current upper value of the delta field is 100 and the upper interval is 5. So we would need to select the data up to the value 95. But hold on – how should the system know the current value of the numeric field without extracting it? So we would extract the data up to the current upper value anyhow – and hence there is no use in specifying an upper interval.

The lower limit, in turn, is automatically derived from the loaded data – and thus known before the next request starts. Hence we can subtract the safety margin before starting the selection.
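Put into pseudo-code, the selection interval for the next delta request might be determined roughly like this (my own sketch of the logic explained above, not SAP's implementation):

    # Illustrative sketch only (not SAP code): the lower limit is the previous
    # upper value minus a safety margin; the upper limit is "now" minus a
    # safety margin for date/timestamp deltas, and simply open-ended for a
    # numeric pointer (no upper interval field).
    from datetime import datetime, timedelta

    def delta_selection(last_upper, lower_margin, upper_margin=None, numeric=False):
        lower = last_upper - lower_margin        # re-read a bit of already loaded data
        if numeric:
            upper = None                         # extract up to the current maximum
        else:
            upper = datetime.utcnow() - upper_margin
        return lower, upper

    # Timestamp delta: re-select the last 30 minutes, stay 5 minutes behind "now"
    delta_selection(datetime(2010, 8, 5, 12, 0),
                    timedelta(minutes=30), timedelta(minutes=5))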

Our example DataSource has a UTC time stamp field, so let’s select it:

Timestamp Delta

Activate the DataSource and create an InfoPackage:

InfoPackage

CHANGED is not a selectable field in the InfoPackage. Why not? Well, the delta selections are calculated automatically; you do not need to select on them explicitly. Now let’s not forget to set the update mode to Init in order to take advantage of the generic delta:

Auto-Delta-Switch to Delta

Wait a minute! There is a new flag: “Switch InfoPack. in PC to Delta (F1)”. Guess I need to press F1 to understand what this field is about.

Explanation

Sounds useful, doesn’t it? No more maintenance of two different InfoPackages and process chains for delta upload! You can use the same InfoPackage to load Init and Delta, just like with a DTP.
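A tiny sketch of what this auto-switch amounts to, purely as an illustration of the behaviour (not SAP code):

    # Illustrative sketch only (not SAP code): with the new flag set, the same
    # InfoPackage runs as "Init" once and then behaves as "Delta" in every
    # subsequent process chain run.
    class InfoPackage:
        def __init__(self, switch_to_delta_in_chain=True):
            self.update_mode = "INIT"
            self.switch_to_delta_in_chain = switch_to_delta_in_chain

        def run_in_process_chain(self):
            mode_used = self.update_mode
            # ... load data using mode_used ...
            if mode_used == "INIT" and self.switch_to_delta_in_chain:
                self.update_mode = "DELTA"   # next chain run loads delta only
            return mode_used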

In our small test we do not need a process chain, so let’s go on without this flag and load it. Then let’s switch the InfoPackage to Delta manually and load again.

Monitor

Indeed, there are selections for our field CHANGED.

 

Don't miss any of the other information on BW 7.30, which you can find here

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20413%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

===========================================================================================================

Oho!! I want to try this out!

Posted by AgnesKim
Technique/SAP BW | 2010. 7. 29. 14:10

BW 7.30: Simple modeling of simple data flows
Thomas Rinneberg SAP Employee
Company: SAP AG
Posted on Jul. 28, 2010 02:05 PM in Business Intelligence (BI)

 

Have you ever thought: how many steps do I need to do until I can load a flat file into BW? I need so many objects; foremost, I need lots of InfoObjects before I can even start creating my InfoProvider. Then I need a DataSource, a Transformation (where I need to draw many arrows), a DTP and an InfoPackage. I just want to load a file! Why is there no help from the system?

Now there is - BW 7.30 brings the DataFlow Generation Wizard!

You start by going to the BW Data Warehousing Workbench (as always), then selecting the context menu entry „Generate Data Flow...“ on either the file source system (if you just have a file and want to generate everything needed to load it), on an already existing DataSource (if you have that part already done – this also works for non-file source systems!) or on an InfoProvider (if you have your data target already modeled and just want to push some data into it).

Context Menu to start data flow wizard


Then the wizard will pop up:

Step 1 - Source options

Here, we have started from the source system. If you start from the InfoProvider, the corresponding step will not be shown in the progress area on the left, since you have selected that already. Same for the DataSource.

I guess you noticed already: ASCII is missing in the file type dropdown (how sad! – however, please read the wizard text in the screenshot above: it’s just the wizard where it is not supported, because the screen would become too complex). And look closer: there is „native XLS-File“. Yes, indeed. No more „save as CSV“ necessary in Excel. You can just specify your Excel file in the wizard (and in DataSource maintenance as well). There is just one flaw for those who want to go right to batch upload: the Excel installation on your PC or laptop is used to interpret the file contents, so it is not possible to load Excel files from the SAP application server. For this, you still need to save as CSV first, but the CSV structure is identical to the XLS structure, so you do not need to change the DataSource.

Ok, let’s fill out the rest of the fields: file name of course, DataSource, source system, blabla – (oops, all this is prefilled after selecting the file!) – plus the ominous Data Type (yes, we still can’t live without that)

Step 1 - Pre-Filled Input Fields

and „Continue“:

Step 2 - CSV Options

Step 2 - Excel Options

One remark on the header lines: if you enter more than one (and it is recommended to have at least one line containing the column headers), we expect the column headers to be the last of the header lines, i.e. directly before the data.
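In other words, with n header lines, the wizard takes line n as the column headers and treats everything after it as data. A small Python sketch of that convention (illustrative only; the wizard handles this internally):

    # Illustrative sketch only (not SAP code): with several header lines, the
    # last header line supplies the column names, everything before it is skipped.
    import csv

    def read_flat_file(path, header_lines=1, delimiter=";"):
        with open(path, newline="", encoding="utf-8") as f:
            rows = list(csv.reader(f, delimiter=delimiter))
        columns = rows[header_lines - 1] if header_lines else None
        return columns, rows[header_lines:]     # (column headers, data rows)

Now let’s go on: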

Step 3 - Data Target

The following InfoProvider Types and Subtypes are available:

  • InfoCube – Standard and Semantically Partitioned
  • DataStore-Object – Standard, Write Optimized and Semantically Partitioned
  • InfoObject – Attributes and Texts
  • Virtual Provider – Based on DTP
  • Hybrid Provider – Based on DataStore
  • InfoSource
This is quite a choice. For those of you who got lost in that list, have a look at the decision tree which is available via the „i“ button on the screen. As a hint: a standard DataStore-Object is good for most cases ;-)

Step 4 - Field Mapping

This is the core of the wizard. At this point, the file has already been read and parsed, and the corresponding data types and field names have been derived from the data of the file and the header line (if the file has one). In case you want to check whether the system did a good job, just double click the field name in the first column.

This screen also defines the transformation (of course only a 1:1 mapping, but this will do for most cases – otherwise you can just modify the generated transformation in the transformation UI later) as well as the target InfoProvider (if it does not already exist) plus the necessary InfoObjects. You can choose from existing InfoObjects (and the „Suggestion“ will give you a ranked list of InfoObjects which map your fields better or worse) or you can let the wizard create „New InfoObjects“ after completion. The suggestion uses a variety of search strategies, from data type match via text match to matches already used in 3.x or 7.x transformations.
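As a rough idea of how field types can be derived from the parsed file content, here is a small, purely illustrative sketch (the actual derivation and InfoObject matching in the wizard are more elaborate):

    # Illustrative sketch only (not SAP code): guess a simple dictionary type
    # per column by sampling the values parsed from the file.
    from datetime import datetime

    def guess_type(values):
        non_empty = [v for v in values if v != ""]
        if non_empty and all(_is_date(v) for v in non_empty):
            return "DATS"                        # dates like 20100728
        if non_empty and all(v.isdigit() for v in non_empty):
            return "NUMC"                        # digits only
        if non_empty and all(_is_number(v) for v in non_empty):
            return "DEC"                         # decimal numbers
        return "CHAR"                            # fall back to character

    def _is_number(v):
        try:
            float(v.replace(",", "."))
            return True
        except ValueError:
            return False

    def _is_date(v):
        try:
            datetime.strptime(v, "%Y%m%d")
            return True
        except ValueError:
            return False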

And that was already the last step:

Step 5 - End

After „Finish“, the listed objects are generated. Note that no InfoPackage will be generated, because the system generates the DTP to access the file directly rather than going through the PSA.


Don't miss any of the other information on BW 7.30, which you can find here

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20105%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

I have no idea why this would be needed, but I'm clipping it for now.

Posted by AgnesKim
Making a Living | 2010. 7. 29. 00:58

SAP BW 7.3 beta is available
Thomas Zurek SAP Employee
Company: SAP AG
Posted on Jul. 28, 2010 07:50 AM in BI Accelerator, Business Intelligence (BI), Business Objects, SAP NetWeaver Platform

 

Last week, SAP BW 7.3 was released in a beta version. This is just the prelude to the launch of the next big version of BW, which is planned for later this year. It will provide major new features to the existing, more than 20,000 active BW installations which make it one of the most widely adopted SAP products. I'd like to take this opportunity to highlight a few focus areas that shape this new version and that lay out the future direction.

First, let me go through a notion that many find helpful once they have thought about it, namely EDW = DB + X. This is intended to state that an enterprise data warehouse (EDW) sits on top of a (typically relational) database (DB) but requires additional software to manage the data layers inside the data warehouse, the processes incl. scheduling, the models, the mappings, the consistency mechanisms etc. That software - referred to as "X" - can consist of a mix of tools (ETL tool, data modeling tools like ERwin, own programs and scripts, e.g. for extraction, a scheduler, an OLAP engine, ...) or it can be a single package like BW. This means that BW is not the database but the software managing the EDW and its underlying semantics.
SAP BW 7.3

In BW 7.3, this role becomes more and more apparent, which, in turn, also sets the tone for its future. I believe that this is important to understand, as analysts and competitors seem to focus completely on the database when discussing the EDW topic. Gartner's magic quadrant on DW DBMS is an instance of such an analysis; another is the marketing pitches of DB vendors who have specialized in data warehousing, like Teradata, Netezza, Greenplum and the like. By the way: last year, I described a customer migrating his EDW from "Oracle + Y" (Y was home-grown) to "Oracle + BW", which underlines the significance of "X" in the equation above.

But let's focus on BW 7.3. From my perspective, there are three fundamental investment areas in the new release:

  • In-memory: BWA and its underlying technology are leveraged to a much wider extent than previously seen. This eats into the "DB" portion of the equation above. It is now possible to move DSO data directly into BWA, either via a BWA-based InfoCube (no data persisted on the DB) or via the new HybridProvider, whereby the latter automatically pushes data from a DSO into a BWA-based InfoCube.
    More OLAP operations can be pushed down into the BWA engine whenever they operate on data that sits solely in BWA. One important step towards this is the option to define BWA-based MultiProviders (= all participating InfoProviders have their data stored in BWA).
  • Manageability: This affects the "X" portion. Firstly, numerous learnings and mechanisms that were developed to manage SAP's Business ByDesign (ByD) software have made it into BW 7.3. Examples are template-based approaches for configuring and managing systems, an improved admin cockpit and fault-tolerant mechanisms around process chains. Secondly, there is a number of management tools that allow you to easily model and maintain fundamental artifacts of the DW layers. For example, it is now possible to create a large number of identical DSOs, each one for a different portion of the data (e.g. one DSO per country), with identical data flows. Changes can be submitted to all of them in one go. This allows you to create highly parallelizable load scenarios, which was possible in previous releases only at the expense of huge maintenance effort. Similarly, there is a new tool which allows you to graphically model data flows, save them as templates, or instantiate them with specific InfoProviders.
  • Interoperability with SAP BusinessObjects tools: Now, this refers to the wider environment of the "EDW". Recent years have seen major investments to improve the interoperability with client tools like WebIntelligence or Xcelsius. Many of those have already been provided in BW 7.01 (= EhP1 of BW 7.0). Still, there are a number of features that required major efforts and overhauls. Most prominently, there is a new source system type in BW 7.3 to easily integrate "data stores" in Data Services with BW. It is now straightforward to tap into Data Services from BW. Consequently, this provides a best-of-breed integration of non-SAP data sources into BW.

This is meant to be a brief and not necessarily exhaustive overview of BW 7.3. A more detailed list of the features can be found on this page. Over the course of the next weeks and months, the development team will blog on a variety of specific BW 7.3 features. These links can be found on that page too.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20285%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim