Technique/SAP BW2010. 12. 2. 01:32

SAP NetWeaver 7.3 in Ramp Up
Benny Schaich-Lebek SAP Employee 
Posted on Dec. 01, 2010 07:10 AM in Business Process Management, Enterprise Portal (EP), SAP NetWeaver Platform

As announced at TechEd this year, SAP NetWeaver 7.3 was released for restricted shipment on Monday, November 29th. Restricted shipment, better known as "ramp up" or release to customer (RTC), means the product is available to selected customers for productive use.

Unrestricted shipment is expected in the first quarter of 2011.

Here are a few of the many new features:

  • Greatly enhanced Java support: Java EE5 certified, Java-only ESB and JMS pub/sub capabilities
  • Reusable business rule sets with Microsoft Excel integration
  • Enhanced standards support (WS Policy 1.2, SOAP 1.2, WS Trust 1.3, Java SE 6, JSR 168/286, WSRP 1.0, SAML 1.0/2.0)
  • Tighter integration between SAP NetWeaver Business Warehouse and SAP BusinessObjects
  • Individual and team productivity enhancements in the SAP NetWeaver Portal
  • ...and heaps of new features and enhancements in each part of the SAP NetWeaver stack!

Here is more detail, organized by NetWeaver usage type:

Enterprise Portal

With Enterprise Workspaces, SAP provides a flexible, intuitive environment for composing content, enabling enterprise end users to integrate and run structured and unstructured assets using a self-service approach.

 

Managing and Mashing up Portal Pages with Web Page Composer
Supports key business users in the easy creation and management of enriched portal pages, blending business applications and user-generated content into a truly flexible UI.

 

Unified Access to Applications and Processes with Lower TCO
Delivers a best-in-class integration layer for SAP, BusinessObjects, and non-SAP applications and reports, while maintaining low TCO with capabilities such as advanced caching, integration with the SAP central transport system, and significant performance and scalability improvements. A common Java stack and an improved server administration and development environment round this out.

 

Portal Landscape Interoperability and Openness
Provides industry-standard integration capabilities for SAP and non-SAP content, both into the SAP Portal and for 3rd party portals, such as JSR and Java 5 support, or open APIs for navigation connectors.

Business Warehouse

Scalability and performance have been enhanced for faster decision making: count on remarkably accelerated data loads, a next level of performance for the BW Accelerator, and support for Teradata as an additional database for SAP NetWeaver BW. Flexibility was increased by further integration with the SAP BusinessObjects BI and EIM tools, including tighter integration with SAP BusinessObjects Data Services and SAP BusinessObjects Metadata Management. Configuration and operations were simplified with the new Admin Cockpit integrated into SAP Solution Manager, and wizard-based system configuration was introduced.

Process Integration

PI now offers out-of-the-box integration for a large number of solutions: for SAP applications there is prepackaged process integration content semantically interlinked with SAP applications and industry solutions, and for partners and ISVs SAP provides certification programs that help to ensure quality.

There is ONE platform (and not several) to support all integration scenarios: A2A, B2B, interoperability with other ESBs, SOA, and so forth.

In addition, there is support for replacing third-party integration solutions to lower TCO, and interoperability with other ESBs to protect investments.

Broad support for operating environments and databases is available.

Business Process Management/CE

With the WD/ABAP integration, you can browse the WD/ABAP UI repository of a backend system and use a WD/ABAP UI in a BPM task.

The API for managing processes and tasks starts process instances, retrieves task lists, and executes actions on tasks.

With the business rule improvements, you can now reuse rules or decision tables across rule sets. Along with this came other usability and developer-productivity enhancements.

Zero configuration for local services brings a major simplification of SOA configuration.

Mobile

In the new version, operational costs are reduced through optimized monitoring and administration capabilities. Robustness has been enhanced through improved security and simplified upgrades. And there is greater flexibility in backend interoperability through Web service interfaces and multiple backend connectivity.

More information is available on the SDN pages for SAP NetWeaver 7.3 and in the NetWeaver 7.3 manuals in the SAP Help Portal.

Benny Schaich-Lebek   is a product specialist at SAP NetWeaver product management



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22371%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/Other 2010. 10. 26. 19:16

New SAP iPhone App - SAP Business One Mobile Application
Karin Schattka SAP Employee 
Posted on Oct. 26, 2010 02:30 AM in Mobile

URL: http://itunes.apple.com/app/sap-business-one-mobile-application/id392606876

 
 

With the SAP Business One mobile application for iPhone, you can view reports and content, process approval requests, manage customer and partner data, and much more.

Key features:
• Alerts and Approvals - Get alerts on specific events - such as deviations from approved discounts, prices, credit limits, or targeted gross profits - and view approval requests waiting for your immediate action. Trigger remote actions, and drill into the relevant content or metric before making your decision.

• Reports - Refer to built-in SAP Crystal Reports that present key information about your business. Add your own customized reports to the application, and easily share them via e-mail.

• Business Partners – Access and manage your customer and partner information, including addresses, phone numbers, and contact details; view historical activities and special prices; create new business partners and log new activities; contact or locate partners directly. All changes are automatically synchronized with SAP Business One on the backend.

• Stock Info - Monitor inventory levels, and access detailed information about your products including purchasing and sales price, available quantity, product specifications and pictures.

Please download the SAP Business One mobile application directly from the iTunes Store and watch the YouTube video.

Karin Schattka   is part of the SAP Community Network team.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/21777%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW2010. 10. 11. 20:44

BW 7.30: Semantically Partitioned Objects
Alexander Hermann SAP Employee 
Company: SAP AG
Posted on Oct. 11, 2010 04:22 AM in Business Intelligence (BI)

 
 

Motivation

Enterprise Data Warehouses are the central source for BI applications and are faced with the challenge of efficiently managing constantly growing data volumes. A few years ago, Data Warehouse installations requiring terabytes of space were a rarity. Today the first installations with petabyte requirements are starting to appear on the horizon.

In order to handle such large data quantities, we need to find modeling methods that guarantee the efficient delivery of data for reporting. Here it is important to consider various aspects such as the loading and extraction processes, the index structure and data activation in a DataStore object. The Total Cost of Development (TCD) and the Total Cost of Ownership (TCO) are also very important factors.

Here is an example of a typical modeling scenario: documents need to be saved in a DataStore object. These documents can come from anywhere in the world and are extracted on a country-specific basis, with each request containing exactly one country/region.

Figure 1

If an error occurs (due to invalid master data) while the system is trying to activate one of the requests, the other requests cannot be activated either and are therefore initially not available for reporting. This is all the more critical because the requests contain country-specific, independent data.

Semantic partitioning provides a workaround here. Instead of consolidating all the regions into one DataStore object, the system uses several structurally identical DataStore objects or “partitions”. The data is distributed between the partitions, based on a semantic criterion (in this example, "region").

Figure 2

Any errors that occur while requests are being activated now only affect the regions that caused the errors. All the other regions are still available for reporting. In addition, the reduced data volume in the individual partitions results in improved loading and administration performance.
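
The mechanics can be pictured with a small sketch (plain Python outside any SAP system; the request data and region names are invented for illustration): requests are routed to structurally identical partitions by the semantic criterion, and each partition is activated on its own, so a failure in one region no longer blocks the others.

from collections import defaultdict

# Hypothetical incoming requests; each carries exactly one region
# (the semantic partitioning criterion from the example above).
requests = [
    {"region": "EMEA", "rows": 1200, "master_data_ok": True},
    {"region": "APJ",  "rows":  800, "master_data_ok": False},  # bad master data
    {"region": "AMER", "rows":  950, "master_data_ok": True},
]

# Route each request to its partition (a structurally identical DataStore object).
partitions = defaultdict(list)
for req in requests:
    partitions[req["region"]].append(req)

# Activate each partition independently: an error in one region
# no longer blocks reporting on the others.
for region, reqs in sorted(partitions.items()):
    try:
        for req in reqs:
            if not req["master_data_ok"]:
                raise ValueError("invalid master data")
        print(f"{region}: {sum(r['rows'] for r in reqs)} rows available for reporting")
    except ValueError as err:
        print(f"{region}: activation failed ({err}); other partitions unaffected")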

However, semantic partitioning also has some clear disadvantages. The effort required to generate the metadata objects (InfoProviders, transformations, data transfer processes) grows with every partition created. In addition, any change to the data model must be carried out for every partition and for all dependent objects. This makes change management more complex. Your CIO might have something to say about this, especially with regard to TCO and TCD!

Examples of semantically partitioned objects

This is where the semantically partitioned DataStore objects and InfoCubes (SPOs: semantically partitioned objects) introduced in SAP NetWeaver BW 7.30 come in. SPOs can be used to generate and manage semantically partitioned data models with minimal effort.

SPOs provide you with a central UI for the one-time maintenance of the structure and the partitioning properties. During activation, the required information is retrieved to generate the partitions. Changes such as adding a new InfoObject to the structure are made once on the SPO and are automatically applied to the partitions. You can also generate DTPs and process chains that match the partitioning properties.

The following example demonstrates how to create a semantically partitioned DataStore object. The section following the example provides you with an extensive insight into the new functions.

DataStore objects and InfoCubes can be semantically partitioned. In the Data Warehousing Workbench, choose “Create DataStore Object”, for example, and complete the fields in the dialog box. Make sure that the option “Semantically Partitioned” is set.

 

Figure 3 

 Figure 4

 

A wizard (1) guides you through the steps of creating an SPO. First, define the structure, just as you are used to doing for standard DataStore objects (2). Then choose "Maintain Partitions".

 

Figure 5

 

In the next dialog box, you are asked to specify the characteristics that you want to use as partitioning criteria. You can select up to 5 characteristics. For this example, select "0REGION". The compounded InfoObject "0COUNTRY" is automatically included in the selection.

 

Figure 6

 

You can now maintain the partitions. Choose the button (1) to add new partitions and change their descriptions (2). Use the checkbox (3) to decide whether you want to use single values or value ranges to describe the partitions. Choose “Start Activation”. You have now created your first semantically partitioned DataStore object.

 

Figure 7

Figure 8

 

In the next step, you connect the partitions to a source. Go to step 4: “Create Transformation” and configure the central transformation using the relevant business logic.

 

Figure 9

 

Now go to step 5: “Create Data Transfer Processes” to generate DTPs for the partitions. On the next screen, you see a list of the partitions and all available sources (1). First, choose “Create New DTP Template” (2) to create a parameter configuration.

 

Figure 10

 

A parameter configuration/DTP template corresponds to the settings that can be configured in a DTP. These settings are applied when DTPs are generated.

 

Figure 11

 

Once you have created the DTP template, drag it from the Template area and drop it on a free area under the list of partitions (1). This assigns a DTP to every source-target combination. If you need different templates for different partitions, you can drag and drop a template onto one specific source-target combination.

Once you have finished, select all the DTPs (2) and choose “Generate”.

 

Figure 12

 

The last step is to generate a process chain to execute the DTPs. Go to step 6 in the wizard: “Create Process Chains”. In the next screen, select all the DTPs and drag and drop them into the lower right screen area, “Detail View” (1). You use the values “Path” and “Sequence” to control the parallel processing of the DTPs: DTPs with the same path are executed consecutively.
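
To illustrate the path/sequence semantics, here is a rough sketch in Python (not SAP code; the DTP names are invented): DTPs that share a path run consecutively in sequence order, while different paths run in parallel.

from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical DTPs as (path, sequence, name): DTPs sharing a path run one
# after another in sequence order; distinct paths run in parallel.
dtps = [
    ("A", 1, "DTP_PART_EMEA"),
    ("A", 2, "DTP_PART_APJ"),
    ("B", 1, "DTP_PART_AMER"),
]

def run_dtp(name: str) -> None:
    print(f"executing {name}")
    time.sleep(0.1)  # stand-in for the actual data transfer

def run_path(members: list) -> None:
    for _, _, name in sorted(members, key=lambda d: d[1]):  # honor the sequence
        run_dtp(name)

paths = {}
for dtp in dtps:
    paths.setdefault(dtp[0], []).append(dtp)

with ThreadPoolExecutor() as pool:
    for members in paths.values():
        pool.submit(run_path, members)  # one worker per path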

 

Figure 13

 

Choose “Generate” (3). The following process chain is created.

 

Figure 14

  

Summary

In this article, you learned how to create a semantically partitioned object. Using the central UI of an SPO, it is now possible to create and maintain complex partitioned data models with minimal effort. In addition, SPOs guarantee the consistency of your metadata (homogeneous partitions) and data (filtered according to the partition criterion).

Once you have completed the six steps, you will have created the following components:

 

  • An SPO with three partitions (DataStore objects)
  • A central transformation for the business logic implementation
  • Three data transfer processes
  • One process chain

 

Source: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/21334%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW2010. 8. 16. 10:52

SAP BW – Infoprovider Data Display (LISTCUBE) - Improvised

Suraj Tigga (Capgemini)    Article     (PDF 761 KB)     04 August 2010

Overview

Methods to display InfoProvider data without repetitive selection of characteristics and key figures.




http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/70e092a3-9d8a-2d10-6ea5-8846989ad405&utm_source=twitterfeed&utm_medium=twitter
Posted by AgnesKim
Technique/SAP BW2010. 8. 6. 10:31

BW 7.30: Define Delta in BW and no more Init-InfoPackages
Thomas Rinneberg SAP Employee
Company: SAP AG
Posted on Aug. 05, 2010 03:52 PM in Business Intelligence (BI)

You might know and appreciate the capability to define a generic delta when building a DataSource in SAP source systems. But you had a lot of work if you wanted to load delta data from any other source system type, such as DB Connect, UD Connect, or File. You could declare the data to be delta, OK, but this had no real effect; it was just declarative. The task of selecting the correct data from the source was still yours.

Now with BW 7.30 this has changed. Because now there is – the generic BW delta!

As expected, you start by defining a BW DataSource.

Create DataSource - Extraction Tab

Nothing new so far. But if you flag this DataSource as delta-enabled, you will find a new dropdown:

Use generic delta

These are the same options that you already know from the SAP source system DataSource definition:

Generic delta in OSOA

Ok, let’s see what happens if we select “Date”.

Date Delta

The “Delta Field” and the two interval fields you already know from the generic delta in the SAP source system, and they have the same meaning here. So hopefully I can skip the lengthy explanation of the safety-margin interval logic and come straight to the extra field that popped up: the time zone. Well, OK, not very thrilling, but probably useful: since the data in your source might not be saved in the same time zone as the BW system that loads it (or as your local time), you can explicitly specify the time zone of your data.

“Time stamp – short” offers much the same input fields, except that the intervals are given in seconds rather than days. “Time stamp long (UTC)” by definition lacks the “Time zone” field. Let’s look at “Numeric Pointer”:

Numeric Delta

Oops – no upper interval! I guess now I do need to spend some words on these intervals: the value given in “Upper Interval” is subtracted from the upper limit used for selecting the delta field. Let’s say the current upper value of the delta field is 100 and the upper interval is 5; then we would select data only up to the value 95. But hold on – how would the system know the current value of the numeric field without extracting it? So we would extract the data up to the current upper value anyhow – and hence there is no use in specifying an upper interval.

The lower limit, in turn, is automatically derived from the loaded data – and is thus known before the next request starts. Hence we can subtract the safety margin before starting the selection.
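
In pseudo-code terms, the interval logic looks roughly like this – a sketch in plain Python with invented parameter names, not SAP code (for the numeric pointer the upper margin is simply omitted, as explained above):

from datetime import date, timedelta

def delta_selection(last_loaded_up_to: date,
                    lower_interval_days: int,
                    upper_interval_days: int,
                    today: date):
    """Selection range for the next date-based delta request.

    The lower bound re-reads a safety margin of already-loaded days; the
    upper bound keeps a safety distance from records still being written.
    """
    low = last_loaded_up_to - timedelta(days=lower_interval_days)
    high = today - timedelta(days=upper_interval_days)
    return low, high

# Example: the last delta loaded up to 2010-08-01; safety margin of 1 day
# on each side; a request started on 2010-08-05 selects 07-31 .. 08-04.
print(delta_selection(date(2010, 8, 1), 1, 1, date(2010, 8, 5)))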

Our example DataSource has a UTC time stamp field, so let’s select it:

Timestamp Delta

Activate the DataSource and create an InfoPackage:

InfoPackage

CHANGED is not a selectable field in the InfoPackage. Why not? Well, the delta selections are calculated automatically; you do not need to select on them explicitly. Now let’s not forget to set the update mode to Init in order to take advantage of the generic delta:

Auto-Delta-Switch to Delta

Wait a minute! There is a new flag: “Switch InfoPack. in PC to Delta (F1)”. Guess I need to press F1 to understand what this field is about.

Explanation

Sounds useful, doesn’t it? No more maintaining two different InfoPackages and process chains for delta upload! You can use the same InfoPackage to load init and delta, just like with a DTP.

In our small test we do not need a process chain, so let’s go on without this flag and load it. Then let’s switch the InfoPackage to Delta manually and load again.

Monitor

Indeed, there are selections for our field CHANGED.

 

Don't miss any of the other information on BW 7.30, which you can find here

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20413%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

===========================================================================================================

Oho!! I want to try this out!

Posted by AgnesKim
Technique/SAP BW2010. 8. 3. 12:45

Creating a BW Archive Object for InfoCube/DSO from Scratch and Other Homemade Recipes
Karin Tillotson
Company: Valero Energy Corporation
Posted on Aug. 02, 2010 07:29 PM in Business Intelligence (BI), Business Process Expert, SAP Developer Network, SAP NetWeaver Platform

 

In this blog, I will go over step-by-step instructions for creating a BW archive object for InfoCubes and DSOs, and I will also provide some SAP-recommended BW housekeeping tips.

 

To start with, I thought I would go over some differences between ERP Archiving and BW Archiving:

 

ERP Archiving:

  • Delivered data structures/business objects
  • Delivered archive objects (more than 600 archive objects in ECC 6.0)
  • Archives mostly original data
  • Performs an archivability check for some archive objects checking for business complete data or residence time (period of time that must elapse before data can be archived)
  • After archiving, data can be entered for the archived time period
 

BW Archiving:

  • Generated data structures
  • Generated archive objects
  • Archives mainly replicated data
  • No special check for business complete or residence time
  • After archiving a time slice, no new data can be loaded for that time slice
 

To begin archiving, you will need to perform the following steps:

  1. Set up archive file definitions
  2. Set up content repositories (if using 3rd party storage)
  3. Create archive object for InfoCube/DSO

 

Step 1 - To begin archiving, you need a place to write out the archive files. You do not necessarily need a 3rd party storage system (though I highly recommend one), but you do need a filesystem/directory in which to either temporarily or permanently “house” the files.

 

Go to transaction /nFILE

 

image 1

 

Either select an SAP-supplied Logical File Path, or create your own.

Double-click the relevant Logical File Path, then select/double-click the relevant syntax group (AS/400, UNIX, or Windows).

Assign the physical path where the archive files will be written to.

 

image 2

 

Next, you need to configure the naming convention of the archive files.

Select the relevant Logical File Path, and go to Logical File Name Definition:

 

image 3

 

In the Physical file parameter, select the relevant parameters you wish to use to describe the archive files. See OSS Note 35992 for all of the possible parameters you can choose.

 

Step 2 - If you will be storing the archive files in a 3rd party storage system (have I mentioned I highly recommend this?), you need to configure the content repository.

image 15

 

Enter the Content Repository Name, Description, etc.  The parameters entered will be subject to the 3rd party storage requirements.

 

Step 3 is to create the archive object for the relevant InfoCube or DSO:

Go to transaction RSA1:

image 5

 

Find and select the relevant InfoCube/DSO, right-click and then click on Create Data Archiving Process.

 

The following tabs will lead you through the rest of the necessary configuration.

The General Settings tab is where you select whether you are going to configure an ADK-based archive object, a Nearline Storage (NLS) object, or a combination.

image 6

 

On the Selection Profile tab, if the time slice characteristic isn’t a key field, select the relevant field from the drop down and select this radio button:

image 7

 

If using the ADK method, configure the following parameters:

Enter the relevant Logical File Name, the maximum size of the archive file, the content repository (if using 3rd party storage), whether the delete and store jobs should be scheduled manually or automatically, and whether the delete job should read the files from the storage system.

image 8

You then need to Save and Activate the Data Archiving Process.

 

Once the archive object has been activated, you can then either schedule the archive process through the ADK (Archive Development Kit) using transaction SARA, or you can right click on the InfoCube/DSO and select Manage ADK Archive.

 

image 9

 

Click on the Archiving tab:

image 10

 

And, click on Create Archiving Request.

 

When submitting the Archive Write Job, I recommend selecting the check box for Autom. Request Invalidation.

If this is selected and an error occurs during the archive job, the system automatically sets the status of the run to ‘99 Request Canceled’ so that the lock is deleted.

image 13 

 

If submitting the job through RSA1 -> Manage, select the appropriate parameters in the Process Flow Control section:

 

image 14

 

When entering the time slice criteria for the archive job, keep in mind that a write lock will be placed on the relevant InfoCube/DSO until both the archive write job and the archive delete job have completed. 

 

Additional topics to consider when implementing an archive object for an InfoCube/DSO:

  • For ODS objects, ensure all requests have been activated
  • For InfoCubes, ensure the requests to be archived have been compressed
  • Recommended to delete the change log data (for the archived time slice)
  • Prior to running the archive jobs, stop the relevant load job
  • Once archiving is complete, resume relevant load job
 

In addition to data archiving, here are some SAP recommended NetWeaver Housekeeping items to consider:

 

From the SAP Data Management Guide that can be found at www.service.sap.com/ilm

 

(Be sure to check back every once in a while, as this guide is updated every quarter.)

There are recommendations for tables such as:

  • BAL*
  • EDI*
  • RSMON*
  • RSBERRORLOG
  • RSDDSTATAGGRDEF
  • RSPC* (BW Process Chains)
  • RSRWBSTORE
  • Etc.

There are also several SAP OSS Notes that describe options for tables that you do not need to archive:

Search SAP Notes on Clean-Up Programs

www.service.sap.com/notes

Table RSBATCHDATA

  • Clean-up program RSBATCH_DEL_MSG_PARM_DTPTEMP

Table ARFCSDATA

  • Clean-up program RSARFCER

Table RSDDSTAT

  • Clean-up program RSDDK_STA_DEL_DATA

Table RSIXWWW

  • Clean-up program RSRA_CLUSTER_TABLE_REORG

Table RSPCINSTANCE

  • Clean-up program RSPC_INSTANCE_CLEANUP

http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20375%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

----------------------------------------------------------------------------------------------------------------

But actually, there are no cases in Korea of archiving BI yet...
And with hardware prices dropping the way they are these days, just buying more hardware might be the better deal... =_=

Posted by AgnesKim
Technique/SAP BW2010. 7. 29. 14:10

BW 7.30: Simple modeling of simple data flows
Thomas Rinneberg SAP Employee
Company: SAP AG
Posted on Jul. 28, 2010 02:05 PM in Business Intelligence (BI)

 

Have you ever thought about how many steps you need before you can load a flat file into BW? I need so many objects; foremost, I need lots of InfoObjects before I can even start creating my InfoProvider. Then I need a DataSource, a transformation (where I need to draw many arrows), a DTP, and an InfoPackage. I just want to load a file! Why is there no help from the system?

Now there is - BW 7.30 brings the DataFlow Generation Wizard!

You start by going to the BW Data Warehousing Workbench (as always), then selecting the context menu entry „Generate Data Flow..." on either the file source system (if you just have a file and want to generate everything needed to load it), on an already existing DataSource (if that part is already done – this also works for non-file source systems!), or on an InfoProvider (if your data target is already modeled and you just want to push some data into it).

Context Menu to start data flow wizard


Then the wizard will pop up:

Step 1 - Source options

Here, we have started from the source system. If you start from the InfoProvider, the corresponding step will not be shown in the progress area on the left, since you have already selected it. The same applies to the DataSource.

I guess you noticed already: ASCII is missing from the file type dropdown (how sad! – however, please read the wizard text in the screenshot above: it is only the wizard that does not support it, because the screen would become too complex). And look closer: there is „native XLS-File“. Yes, indeed: no more „save as CSV“ necessary in Excel. You can just specify your Excel file in the wizard (and in DataSource maintenance as well). There is just one flaw for those who want to go right to batch upload: the Excel installation on your PC or laptop is used to interpret the file contents, so it is not possible to load Excel files from the SAP application server. For this, you still need to save as CSV first, but the CSV structure is identical to the XLS structure, so you do not need to change the DataSource.
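
For the batch-upload case, the conversion itself is easy to script outside of Excel. A minimal sketch (Python with the third-party openpyxl library, which is not part of SAP; the file names and separator are assumptions):

import csv
from openpyxl import load_workbook  # third-party: pip install openpyxl

# Convert an Excel sheet to CSV so the file can be placed on the SAP
# application server; the column layout - and thus the DataSource
# definition - stays the same.
wb = load_workbook("upload.xlsx", read_only=True)
ws = wb.active
with open("upload.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter=";")  # match the DataSource's separator
    for row in ws.iter_rows(values_only=True):
        writer.writerow(["" if v is None else v for v in row])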

Ok, let’s fill out the rest of the fields: file name of course, DataSource, source system, blabla – (oops, all of this is prefilled after selecting the file!) – plus the ominous data type (yes, we still can’t live without that)

Step 1 - Pre-Filled Input Fields

and „Continue“:

Step 2 - CSV Options

Step 2 - Excel Options

One remark on the header lines: if you enter more than one (and it is recommended to have at least one line containing the column headers), we expect the column headers to be in the last of the header lines, i.e. directly before the data. Now let‘s go on:

Step 3 - Data Target

The following InfoProvider Types and Subtypes are available:

  • InfoCube – Standard and Semantically Partitioned
  • DataStore-Object – Standard, Write Optimized and Semantically Partitioned
  • InfoObject – Attributes and Texts
  • Virtual Provider – Based on DTP
  • Hybrid Provider – Based on DataStore
  • InfoSource
This is quite a choice. For those of you who got lost in that list, have a look at the decision tree available via the „i“ button on the screen. As a hint: a standard DataStore object is good for most cases ;-)

Step 4 - Field Mapping

This is the core of the wizard. At this point, the file has already been read and parsed, and the corresponding data types and field names have been derived from the data in the file and from the header line (if the file has one). If you want to check whether the system did a good job, just double-click the field name in the first column.

This screen also defines the transformation (only a 1:1 mapping, of course, but this will do for most cases – otherwise you can simply modify the generated transformation in the transformation UI later) as well as the target InfoProvider (if it does not already exist) plus the necessary InfoObjects. You can choose from existing InfoObjects (where „Suggestion“ gives you a ranked list of InfoObjects that match your fields more or less well), or you can let the wizard create „New InfoObjects“ after completion. The suggestion uses a variety of search strategies, from data-type matches via text matches to matches already used in 3.x or 7.x transformations.
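
The ranking idea behind the suggestion can be sketched roughly like this (plain Python; the candidate list, weights, and similarity measure are invented for illustration – the real wizard's strategies are certainly more elaborate):

from difflib import SequenceMatcher

# Invented candidate InfoObjects: (name, description, data type).
candidates = [
    ("0COSTCENTER", "Cost Center", "CHAR"),
    ("0CUSTOMER",   "Customer",    "CHAR"),
    ("0AMOUNT",     "Amount",      "CURR"),
]

def score(field_name: str, field_type: str, candidate: tuple) -> float:
    """Blend a data-type match with a text match, standing in for the
    wizard's mix of search strategies."""
    name, description, data_type = candidate
    type_match = 1.0 if field_type == data_type else 0.0
    text_match = max(
        SequenceMatcher(None, field_name.lower(), name.lower()).ratio(),
        SequenceMatcher(None, field_name.lower(), description.lower()).ratio(),
    )
    return 0.5 * type_match + 0.5 * text_match

# Ranked suggestions for a file column "customer" of type CHAR.
for cand in sorted(candidates, key=lambda c: score("customer", "CHAR", c),
                   reverse=True):
    print(f"{cand[0]:<12} score={score('customer', 'CHAR', cand):.2f}")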

And that was already the last step:

Step 5 - End

After „Finish“, the listed objects are generated. Note that no InfoPackage is generated, because the system generates the DTP to access the file directly rather than through the PSA.


Don't miss any of the other information on BW 7.30, which you can find here

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20105%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

No idea at all why I would need this, but scrapping it for now.

Posted by AgnesKim
Making a Living 2010. 7. 29. 00:58

SAP BW 7.3 beta is available
Thomas Zurek SAP Employee
Company: SAP AG
Posted on Jul. 28, 2010 07:50 AM in BI Accelerator, Business Intelligence (BI), Business Objects, SAP NetWeaver Platform

 

Last week, SAP BW 7.3 was released in a beta version. This is just the upbeat to the launch of the next big version of BW, which is planned for later this year. It will provide major new features to the existing base of more than 20,000 active BW installations, which makes BW one of the most widely adopted SAP products. I would like to take this opportunity to highlight a few focus areas that shape this new version and lay out the future direction.

First, let me set out a notion that many find helpful once they have thought about it, namely EDW = DB + X. This is intended to state that an enterprise data warehouse (EDW) sits on top of a (typically relational) database (DB) but requires additional software to manage the data layers inside the data warehouse, the processes (incl. scheduling), the models, the mappings, the consistency mechanisms, etc. That software - referred to as "X" - can consist of a mix of tools (an ETL tool, data modeling tools like ERwin, own programs and scripts, e.g. for extraction, a scheduler, an OLAP engine, ...) or it can be a single package like BW. This means that BW is not the database but the software managing the EDW and its underlying semantics.
SAP BW 7.3

In BW 7.3, this role becomes more and more apparent, which, in turn, also sets the basic tint for its future. I believe this is important to understand, as analysts and competitors seem to focus entirely on the database when discussing the EDW topic. Gartner's Magic Quadrant on DW DBMS is one instance of such an analysis; another is the marketing pitches of DB vendors who have specialized in data warehousing, like Teradata, Netezza, Greenplum, and the like. By the way: last year, I described a customer migrating their EDW from "Oracle + Y" (Y was home-grown) to "Oracle + BW", which underlines the significance of "X" in the equation above.

But let's focus on BW 7.3. From my perspective, there are three fundamental investment areas in the new release:

  • In-memory: BWA and its underlying technology are leveraged to a much wider extent than previously seen. This eats into the "DB" portion of the equation above. It is now possible to move DSO data directly into BWA, either via a BWA-based InfoCube (no data persisted on the DB) or via the new hybrid provider, whereby the latter automatically pushes data from a DSO into a BWA-based InfoCube.
    More OLAP operations can be pushed down into the BWA engine whenever they operate on data that sits solely in BWA. One important step towards this is the option to define BWA-based MultiProviders (= all participating InfoProviders have their data stored in BWA).
  • Manageability: This affects the "X" portion. Firstly, numerous learnings and mechanisms that were developed to manage SAP's Business ByDesign (ByD) software have made it into BW 7.3. Examples are template-based approaches for configuring and managing systems, an improved admin cockpit, and fault-tolerant mechanisms around process chains. Secondly, there are a number of management tools that make it easy to model and maintain fundamental artifacts of the DW layers. For example, it is now possible to create a large number of identical DSOs, each one for a different portion of the data (e.g. one DSO per country) with identical data flows; changes can be submitted to all of them in one go. This makes it possible to create highly parallelizable load scenarios, which in previous releases was possible only at the expense of huge maintenance efforts. Similarly, there is a new tool for graphically modeling data flows, saving them as templates, or instantiating them with specific InfoProviders.
  • Interoperability with the SAP BusinessObjects tools: This refers to the wider environment of the EDW. Recent years have seen major investments in improving interoperability with client tools like Web Intelligence or Xcelsius. Many of those improvements were already provided in BW 7.01 (= EhP1 of BW 7.0). Still, a number of features required major efforts and overhauls. Most prominently, there is a new source system type in BW 7.3 to easily integrate "data stores" in Data Services with BW. It is now straightforward to tap into Data Services from BW. Consequently, this provides a best-of-breed integration of non-SAP data sources into BW.

This is meant to be a brief and not necessarily exhaustive overview of BW 7.3. A more detailed list of the features can be found on this page. Over the course of the next weeks and months, the development team will blog about a variety of specific BW 7.3 features; those links can be found on that page too.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20285%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/SAP BW2010. 7. 25. 02:21

Delta Queue Diagnosis
P Renjith Kumar SAP Employee
Company: SAP Labs
Posted on Jul. 23, 2010 04:25 PM in Business Intelligence (BI), SAP NetWeaver Platform

 

Many times we come across situations where there may be inconsistencies in the delta queue. To check these, we can use a diagnostic tool. The report is explained in detail here.

The RSC1_DIAGNOSIS program is the diagnosis tool for the BW delta queue.

image

How to use this report?

Execute the report RSC1_DIAGNOSIS from SE38/SA38, with the DataSource and destination details.

Use

With the RSC1_DIAGNOSIS check program, the most important information about the status and condition of the delta queue is issued for a specific DataSource.

Output

Once the report is executed, you get the following details:

  • General information about the DataSource and its version
  • Metadata of the DataSource and the generated objects for the DataSource
  • ROOSPRMSC table details for the DataSource, such as GETTID and GOTTID
  • ARFCSSTATE status
  • TRFCQOUT status
  • Records check with RECORDED status
  • Inconsistencies in the delta management tables
  • Error details, if available

Let's see the output format of the report.

image

image

How to analyze?

Before analyzing this output, we need to know some important tables and concepts.

The delta management tables

Delta queue management transaction: RSA7

Tables

ROOSPRMSC: Control Parameters per DataSource Channel
ROOSPRMSF: Control Parameters per DataSource
TRFCQOUT: tRFC Queue Description (Outbound Queue)
ARFCSSTATE: Description of ARFC Call Status (Send)
ARFCSDATA: ARFC Call Data (Callers)

The delta queue is constructed from three qRFC tables: ARFCSDATA, which holds the data, and ARFCSSTATE and TRFCQOUT, which control the data flow to the BI systems.

Now we need to know about the TID (transaction ID). You can see two things, GETTID and GOTTID; let's see what they are.

GETTID and GOTTID can be seen in table ROOSPRMSC.

image

GETTID: Delta queue pointer to the maximum booked records in BW, i.e. this refers to the last-but-one delta TID.

GOTTID: Delta queue pointer to the maximum extracted record, i.e. this refers to the last delta TID that has reached BW (used in case of a repeat delta).

The system deletes the LUWs greater than GETTID and less than or equal to GOTTID. This is because the delta queue holds only the last-but-one delta and the loaded delta.

Now let's look at the TID in detail.

TID = ARFCIPID + ARFCPID + ARFCTIME + ARFCTIDCNT (concatenated field contents).

All four fields can be seen in table ARFCSSTATE.

ARFCIPID: IP address
ARFCPID: Process ID
ARFCTIME: UTC time stamp since 1970
ARFCTIDCNT: Current number

To show how this is split, let me take the GETTID:

GETTID = 0A10B02B0A603EB2C2530020

It is split into segments of 8 + 4 + 8 + 4 characters, which map to the four fields:

GETTID : 0A10B02B   0A60  3EB2C253  0020

ARFCIPID = 0A10B02B
ARFCPID = 0A60
ARFCTIME = 3EB2C253
ARFCTIDCNT = 0020

Enter these values as the selection in table ARFCSSTATE; there you can find the details of the TID.
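
Splitting TIDs by hand gets tedious when you have many to look up; here is a small helper sketch (plain Python, not SAP code) that performs the 8 + 4 + 8 + 4 split described above:

from datetime import datetime, timezone

def split_tid(tid: str) -> dict:
    """Split a 24-character delta queue TID into its ARFCSSTATE key fields
    (8 + 4 + 8 + 4 characters)."""
    assert len(tid) == 24, "a TID is 8 + 4 + 8 + 4 characters"
    return {
        "ARFCIPID":   tid[0:8],    # IP address (hex)
        "ARFCPID":    tid[8:12],   # process ID
        "ARFCTIME":   tid[12:20],  # UTC time stamp since 1970 (hex)
        "ARFCTIDCNT": tid[20:24],  # current number
    }

fields = split_tid("0A10B02B0A603EB2C2530020")
print(fields)
# The hex time stamp can be decoded into a readable UTC time:
print(datetime.fromtimestamp(int(fields["ARFCTIME"], 16), tz=timezone.utc))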

image

Here you find details of TID.

Now we move on to the output of the report.

image

How to get the generated program?

20001115174832 = Time of generation

/BI0/QI0HR_PT_20001 = Generated extract structure

E8VDVBZO2CTULUAENO66537BO = Generated program

To display the generated program, you need to add the prefix "GP" to the generated program name; it can then be viewed in SE38.

Adding prefix ‘GP' = GPE8VDVBZO2CTULUAENO66537BO

How to check details in STATUS ARFCSSTATE?

The output displays an analysis of the ARFCSSTATE status in the following form:

STATUS READ 100 times

LOW  <Date> <Time> <TID> <SID-CLNT>
HIGH <Date> <Time> <TID> <SID-CLNT>

STATUS RECORDED 200 times

LOW  <Date> <Time> <TID> <SID-CLNT>
HIGH <Date> <Time> <TID> <SID-CLNT>

READ = repeat delta entries (with TID)

RECORDED = delta entries

Using this analysis, you can see whether there are obvious inconsistencies in the delta queue. From the above output, you can see that there are 100 LUWs with the READ status (that is, they are already loaded) and 200 LUWs with the RECORDED status (that is, they still have to be loaded). For a consistent queue, however, there is only one status block per status: 1 x READ status, 1 x RECORDED status. If there are several blocks for one status, the queue is not consistent. This can occur due to the problem described in Note 516251.
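
The "one block per status" rule is easy to check mechanically. A rough sketch of the idea (plain Python over a simplified list of LUW statuses in queue order; not SAP code):

from itertools import groupby

def status_blocks(statuses):
    """Collapse LUW statuses (in queue order) into contiguous blocks."""
    return [(status, len(list(group))) for status, group in groupby(statuses)]

def is_consistent(statuses):
    """A consistent queue has at most one contiguous block per status."""
    blocks = [status for status, _ in status_blocks(statuses)]
    return len(blocks) == len(set(blocks))

ok_queue  = ["READ"] * 100 + ["RECORDED"] * 200
bad_queue = ["READ"] * 50 + ["RECORDED"] * 10 + ["READ"] * 50

print(status_blocks(bad_queue))  # [('READ', 50), ('RECORDED', 10), ('READ', 50)]
print(is_consistent(ok_queue))   # True
print(is_consistent(bad_queue))  # False -> see Note 516251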

How to check details in STATUS TRFCQOUT?

Only LUWs with status READY or READ should appear in TRFCQOUT; any other status indicates an error. In addition, the GETTID and GOTTID are issued here with the relevant QCOUNT.

Status READ   = Repeat delta entries with low and high TID

Status READY = Delta entries ready to be transferred.

If the text line "No Record with NOSEND = U exists" is not issued, then the problem from note 444261 has occurred.

In our case we did not get the READ, READY, or RECORDED statuses; that is why the output shows ‘No Entry in ARFCSSTATE’ and ‘No Entry in TRFCQOUT’. Normally, however, you will find them.

Checking Table level inconsistencies

In addition, this program lists possible inconsistencies between the TRFCQOUT and ARFCSSTATE tables.

If you see the following in the output

"Records in TRFCQOUT w/o record in ARFCSSTATE"

This indicates an inconsistency at table level; to correct it, see Note 498484.

The records issued by this check must be deleted from the TRFCQOUT table. This allows further deltas without reinitialization. However, if you are not certain that the data was loaded correctly into BW (see Note 498484) and was not duplicated, you should carry out a reinitialization.

 

 

 

 http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20226%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW2010. 7. 25. 02:08

Best Practice: BW Process Chain monitoring with SAP Solution Manager - Part 2: Setup Example
Dirk Mueller SAP Employee
Company: SAP AG
Posted on Jul. 23, 2010 04:23 PM in Application Lifecycle Management, Business Intelligence (BI), SAP NetWeaver Platform, SAP Solution Manager

 

Setup example: BW Process Chain monitoring as part of Business Process Monitoring

This blog is the second part of a series describing best practices for BW Process Chain monitoring with SAP Solution Manager. You can read the first part of this series here

Before describing the individual steps needed to set up BW Process Chain monitoring, we assume that the following steps have already been performed:

  • The BW system containing the process chains is successfully connected as a managed system in the Solution Manager.
  • A solution containing the managed BW system is already created.
  • The prerequisites mentioned in the first part of this series are met.

First of all we recommend creating a suitable structure in the "Business Scenarios" section of the Solution Directory (this can be found in the Solution Manager under "Solution Landscape Maintenance"). In our example we use the following structure:

Display Solution Directory

 

Now we can start the "Setup Business Process Monitoring" session for our solution. The "Business Process" created above must be selected:

2 - Change BPM Setup

 

Then choose the process chains ("Business Process Steps") that you have maintained:

3 - Change BPM Setup

 

Now select the Monitoring Type "Background Processing":

4 - Change BPM Setup

 

Save your changes and proceed to the next check, called "Background Processing". Here you have to choose an "Object Identifier"; usually this identifier is identical to the name of your process chain. As "Job Type", choose "BW Process Chain". The "Schedule Type" is usually "On Weekly Basis".

5 - Change BPM Setup

 

After saving your changes, you have to provide the name of the process chain(s) that you want to monitor - in our case "DM_TEST_CHAIN_1". You can enter wildcards in the column "BW Process Chain ID" and search for the process chain in the managed BW system by choosing the option "Save + Check Identification". Then choose a process chain from the result list. Please enter "use chain start condition" as the "Start Procedure" of the process chain:

6 - Change BPM Setup

 

In the tab "Schedule on Weekly Basis", activate the days on which monitoring of the process chain(s) should be active:

7 - Change BPM Setup

 

Then we need to enter the thresholds for the individual key performance indicators that we want to monitor (e.g. "Start Delay", "Not Started on Time", ...). Additionally, you can enter multiple chain elements. The same logic applies here: you enter and search for the "Process Type" and "Variant" of the individual Process Chain step and provide the thresholds. For example:

8 - Change BPM Setup

 

Finally, we need to generate and activate this monitor in the Solution Manager and the managed BW system. This can be done using the check "Generation/Activation/Deactivation":

9 - Change BPM Setup

 

Please note that generating the customizing and activating it must not result in errors.

Now you can switch to the "Operations" section of the solution. Here you will now see the hierarchy previously created in the Solution Directory, which results in a graphical overview like this:

10 - Solution View

 

By clicking on the alert icon (in our example a red alert in the business scenario "BW Process Chains", business process "ERP Loads"), the Business Process Monitoring session starts and we can check for details on the alert(s) that have been raised:

11 - Monitoring View

 

By choosing the RSPC icon, you can directly jump into the managed BW system for further analysis of the alerts.

Frequently Asked Questions about Business Process Monitoring are answered under http://wiki.sdn.sap.com/wiki/display/SM/FAQ+Business+Process+Monitoring.

The previous blogs provide further details about Business Process Monitoring functionalities within the SAP Solution Manager.

 


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20213%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim