Technique/SAP BW · 2010. 8. 3. 12:45

Creating a BW Archive Object for InfoCube/DSO from Scratch and Other Homemade Recipes
Karin Tillotson
Business Card
Company: Valero Energy Corporation
Posted on Aug. 02, 2010 07:29 PM in Business Intelligence (BI), Business Process Expert, SAP Developer Network, SAP NetWeaver Platform

 

In this blog, I will walk through step-by-step instructions for creating a BW archive object for InfoCubes and DSOs, and will also share some SAP-recommended BW housekeeping tips.

 

To start with, I thought I would go over some differences between ERP Archiving and BW Archiving:

 

ERP Archiving:

  • Delivered data structures/business objects
  • Delivered archive objects (more than 600 archive objects in ECC 6.0)
  • Archives mostly original data
  • Performs an archivability check for some archive objects, verifying business-complete data or residence time (the period that must elapse before data can be archived)
  • After archiving, data can be entered for the archived time period
 

BW Archiving:

  • Generated data structures
  • Generated archive objects
  • Archives mainly replicated data
  • No special check for business complete or residence time
  • After archiving a time slice, no new data can be loaded for that time slice
 

To begin archiving, you will need to perform the following steps:

  1. Set up archive file definitions
  2. Set up content repositories (if using 3rd party storage)
  3. Create archive object for InfoCube/DSO

 

Step 1 - To begin archiving, you will need a place to write out the archive files. You do not necessarily need a 3rd party storage system (though I highly recommend one), but you do need a filesystem/directory in which to either temporarily or permanently “house” the files.

 

Go to transaction /nFILE

 

image 1

 

Either select an SAP-supplied Logical File Path, or create your own. 

Double click on the relevant Logical File Path, then select/double click on the relevant Syntax group (AS/400, UNIX, or Windows).

Assign the physical path where the archive files will be written to.

 

image 2

 

Next, you need to configure the naming convention of the archive files.

Select the relevant Logical File Path, and go to Logical File Name Definition:

 

image 3

 

In the Physical file parameter, select the relevant parameters you wish to use to describe the archive files.  See OSS Note 35992 for all of the possible parameters you can choose.
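The idea behind these parameters is simple placeholder substitution: SAP resolves the `<...>` tokens in the logical name server-side at runtime. As a rough sketch (plain Python for illustration only; the placeholder names below are example assumptions, not the authoritative list from Note 35992):

```python
def resolve_physical_name(template: str, params: dict) -> str:
    """Substitute <PARAM> placeholders in a logical file name template.

    Illustration of the substitution concept only; SAP performs this
    resolution internally when the archive file is created.
    """
    out = template
    for key, value in params.items():
        out = out.replace(f"<{key}>", value)
    return out

name = resolve_physical_name(
    "<FILENAME>_<DATE>_<TIME>.ARCHIVE",  # hypothetical template
    {"FILENAME": "BW_SALES_CUBE", "DATE": "20100803", "TIME": "124500"},
)
print(name)  # BW_SALES_CUBE_20100803_124500.ARCHIVE
```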

 

Step 2 - If you will be storing the archive files in a 3rd party storage system (have I mentioned I highly recommend this), you need to configure the content repository.

image 15

 

Enter the Content Repository Name, Description, etc.  The parameters entered will be subject to the 3rd party storage requirements.

 

Step 3 is to create the archive object for the relevant InfoCube or DSO:

Go to transaction RSA1:

image 5

 

Find and select the relevant InfoCube/DSO, right-click and then click on Create Data Archiving Process.

 

The following tabs will lead you through the rest of the necessary configuration.

The General Settings tab is where you select whether you are going to configure an ADK-based archive object, a Nearline Storage (NLS) object, or a combination of the two.

image 6

 

On the Selection Profile tab, if the time slice characteristic isn’t a key field, select the relevant field from the drop down and select this radio button:

image 7

 

If using the ADK method, configure the following parameters:

Enter the relevant Logical File Name, the maximum size of the archive file, the content repository (if using 3rd party storage), whether the delete and store jobs should be scheduled manually or automatically, and whether the delete job should read the files from the storage system.

image 8

You then need to Save and Activate the Data Archiving Process.

 

Once the archive object has been activated, you can then either schedule the archive process through the ADK (Archive Development Kit) using transaction SARA, or you can right click on the InfoCube/DSO and select Manage ADK Archive.

 

image 9

 

Click on the Archiving tab:

image 10

 

And, click on Create Archiving Request.

 

When submitting the Archive Write Job, I recommend selecting the check box for Autom. Request Invalidation.

If this is selected and an error occurs during the archive job, the system will automatically set the status of the run to ‘99 Request Canceled’ so that the lock will be deleted.

image 13 

 

If submitting the job through RSA1 -> Manage, select the appropriate parameters in the Process Flow Control section:

 

image 14

 

When entering the time slice criteria for the archive job, keep in mind that a write lock will be placed on the relevant InfoCube/DSO until both the archive write job and the archive delete job have completed. 

 

Additional topics to consider when implementing an archive object for an InfoCube/DSO:

  • For ODS objects, ensure all requests have been activated
  • For InfoCubes, ensure the requests to be archived have been compressed
  • It is recommended to delete the change log data (for the archived time slice)
  • Prior to running the archive jobs, stop the relevant load jobs
  • Once archiving is complete, resume the relevant load jobs
 

In addition to data archiving, here are some SAP recommended NetWeaver Housekeeping items to consider:

 

From the SAP Data Management Guide that can be found at www.service.sap.com/ilm

 

(Be sure to check back every once in a while, as this gets updated every quarter).

There are recommendations for tables such as:

  • BAL*
  • EDI*
  • RSMON*
  • RSBERRORLOG
  • RSDDSTATAGGRDEF
  • RSPC* (BW Process Chains)
  • RSRWBSTORE
  • Etc.

There are also several SAP OSS Notes that describe options for tables that you do not need to archive:

Search SAP Notes on Clean-Up Programs

www.service.sap.com/notes

Table RSBATCHDATA

  • Clean-up program RSBATCH_DEL_MSG_PARM_DTPTEMP

Table ARFCSDATA

  • Clean-up program RSARFCER

Tables RSDDSTAT

  • Clean-up program RSDDK_STA_DEL_DATA

Table RSIXWWW

  • Clean-up program RSRA_CLUSTER_TABLE_REORG

Table RSPCINSTANCE

  • Clean-up program RSPC_INSTANCE_CLEANUP

http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20375

----------------------------------------------------------------------------------------------------------------

That said, there are actually no cases of archiving BI here in Korea..
And with hardware prices dropping the way they are these days, just buying more hardware might be the better deal.. =_=

Posted by AgnesKim
Technique/SAP BW · 2010. 7. 29. 14:10

BW 7.30: Simple modeling of simple data flows
Thomas Rinneberg SAP Employee
Business Card
Company: SAP AG
Posted on Jul. 28, 2010 02:05 PM in Business Intelligence (BI)

 

Have you ever thought: how many steps do I need before I can load a flat file into BW? I need so many objects; above all, I need lots of InfoObjects before I can even start creating my InfoProvider. Then I need a DataSource, a Transformation (where I need to draw many arrows), a DTP and an InfoPackage. I just want to load a file! Why is there no help from the system?

Now there is - BW 7.30 brings the DataFlow Generation Wizard!

You start by going to the BW Data Warehousing Workbench (as always), then selecting the context menu entry "Generate Data Flow..." on either the File source system (if you just have a file and want to generate everything needed to load it), on an already existing DataSource (if you have that part done already; this also works for non-file source systems!), or on an InfoProvider (if you have your data target already modeled and just want to push some data into it).

Context Menu to start data flow wizard


Then the wizard will pop up:

Step 1 - Source options

Here, we have started from the source system. If you start from the InfoProvider, the corresponding step will not be shown in the progress area on the left, since you have already selected it. The same holds for the DataSource.

I guess you noticed already: ASCII is missing in the file type drop-down (how sad! However, please read the wizard text in the screenshot above: it is only the wizard where ASCII is not supported, because the screen would become too complex). And look closer: there is "native XLS file". Yes, indeed. No more "save as CSV" necessary in Excel. You can just specify your Excel file in the wizard (and in DataSource maintenance as well). There is just one flaw for those who want to go straight to batch upload: the Excel installation on your PC or laptop is used to interpret the file contents, so it is not possible to load Excel files from the SAP application server. For this, you still need to save as CSV first, but the CSV structure is identical to the XLS structure, so you do not need to change the DataSource.

Ok, let's fill out the rest of the fields: file name of course, DataSource, source system, blabla (oops, all this is prefilled after selecting the file!), plus the ominous Data Type (yes, we still can't live without that)

Step 1 - Pre-Filled Input Fields

and "Continue":

Step 2 - CSV Options

Step 2 - Excel Options

One remark on the header lines: if you enter more than one (and it is recommended to have at least one line containing the column headers), we expect the column headers to be the last of the header lines, i.e. directly before the data. Now let's go on:
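The header-line rule above can be sketched in a few lines of Python (an illustration of the rule only, not the wizard's actual parser): skip the first `header_lines - 1` rows, take the last header row as column names, and treat everything after it as data.

```python
import csv
import io

def read_with_headers(text: str, header_lines: int):
    """Parse a ;-separated file where the last of the header lines
    holds the column names and all following rows are data."""
    rows = list(csv.reader(io.StringIO(text), delimiter=";"))
    columns = rows[header_lines - 1]  # last header line = column headers
    data = rows[header_lines:]        # everything after it = data
    return columns, data

sample = "My upload file\nCOUNTRY;AMOUNT\nDE;100\nUS;250\n"
cols, data = read_with_headers(sample, header_lines=2)
print(cols)  # ['COUNTRY', 'AMOUNT']
print(data)  # [['DE', '100'], ['US', '250']]
```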

Step 3 - Data Target

The following InfoProvider Types and Subtypes are available:

  • InfoCube – Standard and Semantically Partitioned
  • DataStore-Object – Standard, Write Optimized and Semantically Partitioned
  • InfoObject – Attributes and Texts
  • Virtual Provider – Based on DTP
  • Hybrid Provider – Based on DataStore
  • InfoSource
This is quite a choice. For those of you who got lost in that list, have a look at the decision tree, which is available via the "i" button on the screen. As a hint: a standard DataStore-Object is good for most ;-)

Step 4 - Field Mapping

This is the core of the wizard. At this point, the file has already been read and parsed, and the corresponding data types and field names have been derived from the data of the file and the header line (if the file has one). In case you want to check whether the system did a good job, just double click the field name in the first column.

This screen also defines the transformation (of course only a 1:1 mapping, but this will do for most cases; otherwise you can modify the generated transformation in the transformation UI later) as well as the target InfoProvider (if not already existing), plus the necessary InfoObjects. You can choose from existing InfoObjects (the "Suggestion" will give you a ranked list of InfoObjects that match your fields better or worse), or you can let the wizard create "New InfoObjects" after completion. The suggestion uses a variety of search strategies, from data type match via text match to matches already used in 3.x or 7.x transformations.

And that was already the last step:

Step 5 - End

After "Finish", the listed objects are generated. Note that no InfoPackage will be generated, because the system will generate the DTP to access the file directly rather than going through the PSA.


Don't miss any of the other Information on BW 7.30 which you can find here

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20105

No idea why I would ever need this;; scrapping it for now anyway.

Posted by AgnesKim
먹고살것 (Making a Living) · 2010. 7. 29. 00:58

SAP BW 7.3 beta is available
Thomas Zurek SAP Employee
Business Card
Company: SAP AG
Posted on Jul. 28, 2010 07:50 AM in BI Accelerator, Business Intelligence (BI), Business Objects, SAP NetWeaver Platform

 

Last week, SAP BW 7.3 was released as a beta version. This is just the prelude to the launch of the next big version of BW, planned for later this year. It will provide major new features to the existing base of more than 20,000 active BW installations, which makes BW one of the most widely adopted SAP products. I would like to take this opportunity to highlight a few focus areas that shape this new version and lay out the future direction.

First, let me introduce a notion that many find helpful once they have thought about it, namely EDW = DB + X. This is intended to state that an enterprise data warehouse (EDW) sits on top of a (typically relational) database (DB) but requires additional software to manage the data layers inside the data warehouse, the processes (incl. scheduling), the models, the mappings, the consistency mechanisms, etc. That software, referred to as "X", can consist of a mix of tools (an ETL tool, data modeling tools like ERwin, custom programs and scripts, e.g. for extraction, a scheduler, an OLAP engine, ...) or it can be a single package like BW. This means that BW is not the database but the software managing the EDW and its underlying semantics.
SAP BW 7.3

In BW 7.3, this role becomes more and more apparent, which, in turn, also sets the basic tint for its future. I believe this is important to understand, as analysts and competitors seem to focus completely on the database when discussing the EDW topic. Gartner's magic quadrant on DW DBMS is an instance of such an analysis; another is the marketing pitches of DB vendors who have specialized in data warehousing, like Teradata, Netezza, Greenplum and the like. By the way: last year, I described a customer migrating its EDW from "Oracle + Y" (Y was home-grown) to "Oracle + BW", which underlines the significance of "X" in the equation above.

But let's focus on BW 7.3. From my perspective, there are three fundamental investment areas in the new release:

  • In-memory: BWA and its underlying technology are leveraged to a much wider extent than previously seen. This eats into the "DB" portion of the equation above. It is now possible to move DSO data directly into BWA, either via a BWA-based InfoCube (no data persisted on the DB) or via the new hybrid provider, whereby the latter automatically pushes data from a DSO into a BWA-based InfoCube.
    More OLAP operations can be pushed down into the BWA engine whenever they operate on data that sits solely in BWA. One important step towards this is the option to define BWA-based MultiProviders (= all participating InfoProviders have their data stored in BWA).
  • Manageability: This affects the "X" portion. Firstly, numerous learnings and mechanisms developed to manage SAP's Business ByDesign (ByD) software have made it into BW 7.3. Examples are template-based approaches for configuring and managing systems, an improved admin cockpit, and fault-tolerant mechanisms around process chains. Secondly, there are a number of management tools that make it easy to model and maintain fundamental artifacts of the DW layers. For example, it is now possible to create a large number of identical DSOs, each one for a different portion of the data (e.g. one DSO per country), with identical data flows. Changes can be submitted to all of them in one go. This allows highly parallelizable load scenarios to be created, which was possible in previous releases only at the expense of huge maintenance efforts. Similarly, there is a new tool that allows you to graphically model data flows, save them as templates, or instantiate them with specific InfoProviders.
  • Interoperability with SAP BusinessObjects tools: This refers to the wider environment of the "EDW". Recent years have seen major investments to improve interoperability with client tools like WebIntelligence or Xcelsius. Many of those improvements were already provided in BW 7.01 (= EhP1 of BW 7.0). Still, a number of features required major efforts and overhauls. Most prominently, there is a new source system type in BW 7.3 to easily integrate "data stores" in Data Services with BW. It is now straightforward to tap into Data Services from BW. Consequently, this provides best-of-breed integration of non-SAP data sources into BW.

This is meant to be a brief and not necessarily exhaustive overview of BW 7.3. A more detailed list of the features can be found on this page. Over the course of the next weeks and months, the development team will blog about a variety of specific BW 7.3 features. Those links can be found on that page too.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20285
Posted by AgnesKim
Technique/SAP BW · 2010. 7. 25. 02:21

Delta Queue Diagnosis
P Renjith Kumar SAP Employee
Business Card
Company: SAP Labs
Posted on Jul. 23, 2010 04:25 PM in Business Intelligence (BI), SAP NetWeaver Platform

 

Many times we come across situations where there may be inconsistencies in the delta queue. To check for these we can use a diagnostic tool. The report is explained in detail here.

The RSC1_DIAGNOSIS program is the diagnosis tool for the BW delta queue.

image

How to use this report?

Execute the report RSC1_DIAGNOSIS from SE38/SA38, with the DataSource and destination details.

Use

The RSC1_DIAGNOSIS check program issues the most important information about the status and condition of the delta queue for a specific DataSource.

Output

You get the following details once the report is executed:

  • General information about the DataSource and its version
  • Metadata of the DataSource and the generated objects for the DataSource
  • ROOSPRMSC table details of the DataSource, like GETTID and GOTTID
  • ARFCSSTATE status
  • TRFCQOUT status
  • Records check with RECORDED status
  • Inconsistencies in delta management tables
  • Error details, if available

Let's see the output format of the report.

image

image

How to analyze?

Before analyzing this output, we need to know some important tables and concepts.

The delta management tables

DeltaQueue Management Tables : RSA7

Tables

ROOSPRMSC            :  Control Parameter Per DataSource Channel

ROOSPRMSF            :  Control Parameters Per DataSource

TRFCQOUT              :  tRFC Queue Description (Outbound Queue)

ARFCSSTATE            :  Description of ARFC Call Status (Send)

ARFCSDATA             :  ARFC Call Data (Callers)

The delta queue is constructed of three qRFC tables: ARFCSDATA, which holds the data, and ARFCSSTATE and TRFCQOUT, which control the data flow to BI systems.

Now we need to know about the TID (Transaction ID). You will see two things, GETTID and GOTTID. Let's see what those are.

GETTID and GOTTID can be seen in table ROOSPRMSC.

image

GETTID: Delta Queue, pointer to the maximum booked record in BW, i.e. this refers to the last-but-one delta TID.

GOTTID: Delta Queue, pointer to the maximum extracted record, i.e. this refers to the last delta TID that has reached BW (used in case of a repeat delta).

The system will delete the LUWs greater than GETTID and less than or equal to GOTTID. This is because the delta queue holds only the last-but-one delta and the already loaded delta.

Now we will see about the TID in detail

TID = ARFCIPID + ARFCPID + ARFCTIME + ARFCTIDCNT (concatenated field contents).

All the four fields can be seen in the table ARFCSSTATE.

ARFCIPID: IP address
ARFCPID: process ID
ARFCTIME: UTC time stamp since 1970
ARFCTIDCNT: current number

To know how this is split I am taking the GETTID

GETTID = 0A10B02B0A603EB2C2530020

The TID is split as 8 + 4 + 8 + 4 characters, which map to the four fields:

GETTID : 0A10B02B   0A60  3EB2C253  0020

ARFCIPID = 0A10B02B
ARFCPID = 0A60
ARFCTIME = 3EB2C253
ARFCTIDCNT = 0020

Give these values as the selection in table ARFCSSTATE. Here you can find the details of the TID.
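The 8 + 4 + 8 + 4 split above can be sketched in a few lines (plain Python for illustration; SAP stores these as separate key fields of ARFCSSTATE internally):

```python
def split_tid(tid: str) -> dict:
    """Split a 24-character delta-queue TID into the four ARFCSSTATE
    key fields using the 8 + 4 + 8 + 4 layout described above."""
    assert len(tid) == 24, "a TID is 24 characters"
    return {
        "ARFCIPID": tid[0:8],      # IP address (hex)
        "ARFCPID": tid[8:12],      # process ID
        "ARFCTIME": tid[12:20],    # UTC time stamp since 1970 (hex)
        "ARFCTIDCNT": tid[20:24],  # current number
    }

print(split_tid("0A10B02B0A603EB2C2530020"))
# {'ARFCIPID': '0A10B02B', 'ARFCPID': '0A60', 'ARFCTIME': '3EB2C253', 'ARFCTIDCNT': '0020'}
```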

image


Now we move on to the output of the report.

image

How to get the generated program?

20001115174832 = Time of generation

/BI0/QI0HR_PT_20001 = Generated extract structure

E8VDVBZO2CTULUAENO66537BO = Generated program

But to display the generated program, you need to add the prefix "GP" to the generated program name; it can then be viewed in SE38.

Adding the prefix "GP" gives GPE8VDVBZO2CTULUAENO66537BO.
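The naming rule is just a fixed prefix, as this trivial sketch shows (Python for illustration):

```python
def generated_program_name(generated_id: str) -> str:
    """The report name visible in SE38 is the generated ID with
    the fixed prefix 'GP' prepended."""
    return "GP" + generated_id

print(generated_program_name("E8VDVBZO2CTULUAENO66537BO"))
# GPE8VDVBZO2CTULUAENO66537BO
```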

How to check details in STATUS ARFCSSTATE?

The output displays an analysis of the ARFCSSTATE status in the form

STATUS READ 100 times

LOW  <Date> <Time> <TID> <SID-CLNT>
HIGH <Date> <Time> <TID> <SID-CLNT>

STATUS RECORDED 200 times

LOW  <Date> <Time> <TID> <SID-CLNT>
HIGH <Date> <Time> <TID> <SID-CLNT>

READ = repeat delta entries (with TID)

RECORDED = delta entries still to be loaded

Using this analysis, you can see whether there are obvious inconsistencies in the delta queue. From the above output, you can see that there are 100 LUWs with the READ status (that is, they are already loaded) and 200 LUWs with the RECORDED status (that is, they still have to be loaded). For a consistent queue, however, there is only one status block for each status: 1 x READ status, 1 x RECORDED status. If there are several blocks for a status, then the queue is not consistent. This can occur due to the problem described in note 516251.
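The consistency rule ("one contiguous block per status") can be sketched as a simple check. This is an illustration of the rule only, not SAP's actual implementation:

```python
from itertools import groupby

def is_consistent(statuses):
    """A queue is consistent when each status (READ, RECORDED, ...)
    forms exactly one contiguous block in LUW order, i.e. no status
    reappears after a different status has interrupted it."""
    blocks = [status for status, _ in groupby(statuses)]  # collapse runs
    return len(blocks) == len(set(blocks))                # no repeats

print(is_consistent(["READ"] * 3 + ["RECORDED"] * 5))           # True
print(is_consistent(["READ", "RECORDED", "READ", "RECORDED"]))  # False
```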

How to check details in STATUS TRFCQOUT?

Only LUWs with STATUS READY or READ should appear in TRFCQOUT. Another status indicates an error. In addition, the GETTID and GOTTID are issued here with the relevant QCOUNT.

Status READ   = Repeat delta entries with low and high TID

Status READY = Delta entries ready to be transferred.

If the text line "No Record with NOSEND = U exists" is not issued, then the problem from note 444261 has occurred.

In our case we did not get the READ, READY, or RECORDED status; that is why the report shows 'No Entry in ARFCSSTATE' and 'No Entry in TRFCQOUT'. Normally, however, you will find them.

Checking Table level inconsistencies

In addition, this program lists possible inconsistencies between the TRFCQOUT and ARFCSSTATE tables.

If you see the following in the output

"Records in TRFCQOUT w/o record in ARFCSSTATE"

This shows inconsistency at table level, to correct this check the note 498484.

The records issued for this check must be deleted from the TRFCQOUT table. This allows the delta to continue without reinitialization. However, if you are not certain that data was loaded correctly in BW (see note 498484) and that it was not duplicated, you should carry out a reinitialization.

 

 

 

http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20226

Posted by AgnesKim
Technique/SAP BW · 2010. 7. 25. 02:08

Best Practice: BW Process Chain monitoring with SAP Solution Manager - Part 2: Setup Example
Dirk Mueller SAP Employee
Business Card
Company: SAP AG
Posted on Jul. 23, 2010 04:23 PM in Application Lifecycle Management, Business Intelligence (BI), SAP NetWeaver Platform, SAP Solution Manager

 

Setup example: BW Process Chain monitoring as part of Business Process Monitoring

This blog is the second part of a series describing best practices for BW Process Chain monitoring with SAP Solution Manager. You can read the first part of this series here

Before describing the individual steps needed to set up BW Process Chain monitoring, we assume that the following steps have already been performed:

  • The BW system containing the process chains is successfully connected as a managed system in the Solution Manager.
  • A solution containing the managed BW system is already created.
  • The prerequisites mentioned in the first part of this series are met.

First of all we recommend creating a suitable structure in the "Business Scenarios" section of the Solution Directory (this can be found in the Solution Manager under "Solution Landscape Maintenance"). In our example we use the following structure:

Display Solution Directory

 

Now we can start the "Setup Business Process Monitoring" session for our solution. The above created "Business Process" must be selected:

2 - Change BPM Setup

 

Then choose the process chains ("Business Process Steps") that you have maintained:

3 - Change BPM Setup

 

Now select the Monitoring Type "Background Processing":

4 - Change BPM Setup

 

Save your changes and proceed to the next check, called "Background Processing". Here you have to choose an "Object identifier". Usually this identifier is identical to the name of your process chain. As the "Job Type" you have to choose "BW Process Chain". The "Schedule Type" is usually "On Weekly Basis".

5 - Change BPM Setup

 

After saving your changes you have to provide the name of the process chain(s) that you want to monitor - in our case "DM_TEST_CHAIN_1". You can enter wildcards in the column "BW Process Chain ID" and search for the process chain in the managed BW system by choosing the option "Save + Check Identification". Then choose a process chain in the result list. Please enter "use chain start condition" as the "Start Procedure" of the process chain:

6 - Change BPM Setup

 

In the tab "Schedule on Weekly Basis", please activate the days on which you want to have the monitoring of the process chain(s) active:

7 - Change BPM Setup

 

Then, we need to enter the thresholds for the individual key performance indicators that we want to monitor (e.g. "Start Delay", "Not Started on Time", ...). Additionally, you can enter multiple chain elements. The same logic applies here: you have to enter and search for the "Process Type" and "Variant" of each Process Chain step and provide the thresholds. For example:

8 - Change BPM Setup

 

Finally, we need to generate and activate this monitor in the Solution Manager and the managed BW system. This can be done using the check "Generation/Activation/Deactivation":

9 - Change BPM Setup

 

Please note that generating the customizing and activating it must not result in errors.

Now you can switch to the "Operations" section of the solution. Here you will now see the hierarchy previously created in the Solution Directory, which results in a graphical overview like this:

10 - Solution View

 

By clicking on the alert icon (in our example a red alert in the business scenario "BW Process Chains", business process "ERP Loads"), the Business Process Monitoring session starts and we can check for details on the alert(s) that have been raised:

11 - Monitoring View

 

By choosing the RSPC icon, you can directly jump into the managed BW system for further analysis of the alerts.

Frequently Asked Questions about Business Process Monitoring are answered under http://wiki.sdn.sap.com/wiki/display/SM/FAQ+Business+Process+Monitoring.

The previous blogs provide further details about Business Process Monitoring functionalities within the SAP Solution Manager.

 


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20213

Posted by AgnesKim
Technique/SAP BW · 2010. 7. 25. 02:06

Best Practice: BW Process Chain monitoring with SAP Solution Manager - Part 1
Dirk Mueller SAP Employee
Business Card
Company: SAP AG
Posted on Jul. 23, 2010 04:17 PM in Application Lifecycle Management, Business Intelligence (BI), SAP NetWeaver Platform, SAP Solution Manager

 
Nearly every SAP customer who operates an SAP BW or SAP APO system encounters one main challenge: how to control and monitor BW Process Chains?! The definition of Process Chains is normally performed directly with the help of standard BW transactions like RSPC. The scheduling of the Process Chains is then usually also performed directly from within BW. Some customers schedule their Process Chains via external scheduling tools like SAP Central Process Scheduling by Redwood. Then comes perhaps the "toughest" part: how do you monitor those Process Chains, especially if you have several hundred or even thousands of them running per day?

Old and new monitoring capabilities for BW Process Chains

At least the most important, i.e. most business critical, Process Chains should be monitored and in case of a problem a solution has to be found as fast as possible. What monitoring alternatives have been available in the past?

  • Manual monitoring via transaction RSPC or RSPCM. This is a very time-consuming way of monitoring, and especially for complex chains it is nearly impossible to keep an overview. Furthermore, the process chains monitored via RSPCM are defined per user.
  • Automated monitoring via Business Process Monitoring in SAP Solution Manager, as described in the Best Practice document "Background Job monitoring with SAP Solution Manager". Up to now, the setup was also somewhat time-consuming, and only single jobs within the Process Chain could be monitored, so that only milestone monitoring could be achieved. Furthermore, the monitoring definition requires manual adaptation as soon as a process chain is activated.
  • Automated monitoring via SAP CCMS, also described in the Best Practice document "Background Job monitoring with SAP Solution Manager". Here the setup was very easy, but the monitoring functionality was somewhat limited, as only the status of a complete chain could be monitored. No details of chain elements were available, and no monitoring capabilities other than the chain status were available.

Now the Business Process Monitoring in SAP Solution Manager has been enhanced to overcome all the limitations described above. Besides monitoring simple background jobs, it is now also possible to monitor complete Process Chains just by entering the corresponding Chain ID. Additionally, you can monitor single steps within a Process Chain. With this monitor you can of course monitor the status of a Process Chain (and selected elements), but you are not limited to status monitoring only. You can also monitor whether a Process Chain and/or one of its specific elements

  • Did not start or finish on time (Start Delay and End Delay)
  • Is running outside a defined time window
  • Has a runtime that is longer than expected
  • Is running into a status indicating an error or warning

Besides these more technical alerts you can also perform automated content checks for a complete Process Chain and/or one of its specific elements like

  • Did the Process Chain process too many/few records?
  • Were too many/few data packages processed?
  • Are there exceptional job log entries?

A big advantage of monitoring BW process chains using the Business Process Monitoring in SAP Solution Manager is that you see the BW process chains in the context of the complete business process in a graphical way. Correlating the impact of an incident to the business process itself is now possible at a glance.

Below you can find the technical prerequisites of the involved software components:

  • SAP Solution Manager 7.0 EhP1 SP23 or higher
  • ST-SER 701_2010_1 or higher on SAP Solution Manager side
  • ST-PI 2008_1_XXX with SP2 or higher on backend side
  • ST-A/PI 01M or higher on backend side
  • Implementation of SAP Note 1436853 - BPM for BW Process Chains and Steps - Prerequisites

 

A second part of this Best Practice will cover the setup in SAP Solution Manager. It will be available soon, and the corresponding link will be posted here as soon as it is.

Frequently Asked Questions about Business Process Monitoring are answered under http://wiki.sdn.sap.com/wiki/display/SM/FAQ+Business+Process+Monitoring.

The previous blogs provide further details about Business Process Monitoring functionalities within the SAP Solution Manager.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20204

Posted by AgnesKim
Technique/SAP BW 2010. 7. 25. 01:41
먹고살것 (Making a Living) 2010. 5. 25. 13:11

Can SAP Deliver IT Simplicity?


Apps vendor talks real-time; customers hear "removing IT infrastructure."



There's no doubt that SAP customers are excited about the in-memory and column-store database technology announced at last week's SAPPHIRE event. But are they hearing only what they want to hear from SAP? And if that's the case, when can the company deliver what they are really after?

SAP put the emphasis of its SAP Business Analytic Engine announcement on delivering what it called "real, real-time" analysis. But among the SAP customers InformationWeek canvassed, the bottom-line takeaway on the "New DB" described by Chairman Hasso Plattner was that it could simplify IT environments by eliminating business intelligence infrastructure.

"Most of what we look at through BI is just data that's in SAP R/3 put in a different place so that we can report on it quickly and efficiently," said Mike O'Dell, CIO at Pacific Coast Building Products. "If suddenly I can do that same reporting on a live system because it's in-memory and it's fast, then I don't need the infrastructure for BI."

An executive at Kraft Foods had much the same take. "The real value is in removing complexity," said Tom Zavos, senior director of Business Intelligence at Kraft. "I won't have to do ETL anymore, and I won't need a separate Business Warehouse database or additional appliances like the [SAP] BW Accelerator."


In fact, Zavos and others told InformationWeek that the desire for simplicity trumps the demand for real-time analysis. "We do have situations where people want real-time insight, but that's more often the exception," Zavos said. Customer-facing users like salespeople might appreciate real time, he added. But he questioned the need for marketing, procurement or manufacturing personnel to go beyond daily updates.

In the six-step roadmap outlined in his keynote address, SAP's Plattner said the New DB/SAP Business Analytic Engine would first serve as a sort of turbocharger alongside existing application and data warehouse infrastructure. This "no risk" approach offers the advantage of not ripping and replacing existing systems, he said. Workloads will be moved over to the new environment gradually, and aging legacy systems decommissioned over time.

But if simplicity is what customers are really after, how quickly can companies hope to get to the latter stages of SAP's roadmap? It's too early to say, co-CEO Hagemann Snabe told InformationWeek. He did allow, "it will start in analytics, and then you'll see us building more advanced optimization applications like planning."

The response at least suggests that customers won't have to wait years to consolidate BI infrastructure. The real question on most customer minds is, how much will it cost?

In an interview with InformationWeek, co-CEO McDermott said questions about cost can only be answered when the product comes to market, but he noted that "by definition, it seems that removing layers takes cost out... There will be different situations for different customers, but the theme is, let's get rid of redundant IT and free up cash flow for innovation."

The bottom line is that SAP is selling consolidation as well as real-time performance. And on both fronts, there are many questions about cost, performance, storage capacity, data integration flexibility and many other details that are nowhere near being answered. Nonetheless, SAP customers like what they're hearing.



Source: http://www.informationweek.com/news/software/erp/showArticle.jhtml?articleID=225000144&cid=RSSfeed_IWK_All

----------------------------------------------------------------------------------------------------------------
So is SAP saying it will drop BW, and even BWA?
Are they heading toward BO (BusinessObjects)?
It does look like they are deliberately growing BO... hmm.
Real-time analysis...
These days I'm not sure what the right answer is.
I should get hold of the roadmap presented at this SAPPHIRE and take a look.
Who should I ask to get it... hmph.


Posted by AgnesKim