Technique/Other 2010. 12. 23. 01:37

SOA Design Principles - Service Abstraction
Farooq Farooqui 
Company: Cognizant UK
Posted on Dec. 22, 2010 06:10 AM in SAP Process Integration (PI), Service-Oriented Architecture

 
 

Service Abstraction is one of the important principles of service design. Using service abstraction, a provider can hide the technical and implementation logic of a service from its consumers. Applying this principle turns a service into a black box: the consumer has only the required functional and technical information, which goes into the service contract.

 

    

 

The key question is: why should we understand and apply service abstraction during service design? To answer this question, let’s take the example of a service called ValidatePassport.

 

The service ValidatePassport contains different service operations. GetPassportDetails, for instance, is one of the operations; it takes passport details such as the passport number and country of issue and returns the full passport details. The moment a consumer triggers the service, it expects a response back. To fulfill this contract, the provider implemented the service using different system resources such as middleware, an SAP system, and an Oracle database. At runtime, ValidatePassport checks which system is up and running, reads the details from one of those systems accordingly, and sends the response back to the caller.

In this instance the provider uses different systems, applications, and programming logic. If the provider does not abstract this information from the consumer, the consumer may assume many things and make wrong judgments when implementing the service on their end. Moreover, the provider would have to notify the consumer each time the system, programming language, or implementation logic changes. Service abstraction therefore gives owners the freedom to evolve their service and change the implementation logic as required.
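To make the idea concrete, here is a minimal Java sketch of what such a contract might look like from the consumer's side versus what the provider keeps hidden. This is not from the original article; all type and method names are invented purely for illustration.

// Illustrative sketch only: the contract exposes WHAT the service does,
// not HOW or WHERE it is implemented.
public interface ValidatePassportService {

    // The only thing the consumer sees: the operation, its input and its output.
    PassportDetails getPassportDetails(String passportNumber, String countryOfIssue);
}

// Everything below stays hidden behind the contract.
class ValidatePassportProvider implements ValidatePassportService {

    @Override
    public PassportDetails getPassportDetails(String passportNumber, String countryOfIssue) {
        // The provider decides at runtime which backend answers the request
        // (middleware, SAP system or Oracle database); the consumer never learns which.
        if (sapSystemAvailable()) {
            return readFromSapSystem(passportNumber);
        }
        return readFromOracleDatabase(passportNumber);
    }

    private boolean sapSystemAvailable() { return true; }                       // stub for the sketch

    private PassportDetails readFromSapSystem(String passportNumber) {          // stub for the sketch
        return new PassportDetails(passportNumber, "GB", true);
    }

    private PassportDetails readFromOracleDatabase(String passportNumber) {     // stub for the sketch
        return new PassportDetails(passportNumber, "GB", true);
    }
}

// Simple data holder returned to the consumer.
class PassportDetails {
    final String passportNumber;
    final String countryOfIssue;
    final boolean valid;

    PassportDetails(String passportNumber, String countryOfIssue, boolean valid) {
        this.passportNumber = passportNumber;
        this.countryOfIssue = countryOfIssue;
        this.valid = valid;
    }
}

If the provider later swaps the Oracle database for another system, the interface – and therefore the contract – does not change, which is exactly the freedom service abstraction is meant to preserve.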

What is Service Abstraction?

  • Hiding design details and implementation logic from the outside world.
  • Turning the service into a black box by hiding system information, processing logic, and programmatic (Java, ABAP, etc.) approaches.

Why is Service Abstraction required?

  • It encourages the provider to share less information with the outside world.
  • It gives consumers the freedom to implement the service efficiently – without assumptions and wrong judgments.
  • It allows the service provider to evolve the service using different IT technologies and IS systems.

What does service abstraction identify as not belonging in the service contract?

  • Technical and implementation logic of the service
  • Programmatic logic such as technical systems, programming languages, and technical frameworks.
  • System processing details – which systems are involved when the service is executed.
  • Business rules that process the service request for different input messages.
  • System resources, internal validation, error handling, authentication, certificates, business processes, etc.
  • Uncertain conditions, such as the availability of the service or situations where it may become less responsive.
  • Composite service description and details – whether the main service is composed of many other services.

Farooq Farooqui is a SAP-certified consultant. He holds an honors degree in electronics engineering and has nearly 5 years of SAP experience.




http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22771%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/SAP BW 2010. 12. 14. 22:54

BW 7.30: Graphical Data Flow Modeling
Michael te Uhle SAP Employee 
Company: SAP AG
Posted on Dec. 14, 2010 04:33 AM in Business Intelligence (BI)

SAP NetWeaver BW 7.30 provides you with a new metadata and transport object for BW data flows. This functionality enables you to persist the data flow views that you created with the ‘Display Data Flow’ function. Data flows are integrated as trees into the Data Warehousing Workbench. They are created and maintained in the same graphical design environment that you will be familiar with from transformation and process chain maintenance screens, for example.

 

 Data Flow Sales Europe

 

A data flow provides a constant view of objects in the system. You can integrate existing objects and create new objects. The data flow maintenance screen can be used as a single point of entry for future development of your Data Warehousing objects and you can use the dataflow functionality to organize your projects.

Next I want to give you a short demonstration of some of the user-friendly modeling capabilities of data flow maintenance. I will also show you how  to model a data flow from a source to a target, based on existing objects or by creating new objects. I have modeled the following sales scenario: The USA data is sent from a DataSource via a DataStore object to the USA Sales InfoCube. The same setup applies to the Europe data, but the data comes from three DataSources. The USA InfoCube and Europe InfoCube are combined using a MultiProvider for reporting.  Let’s assume that some of the objects are already available and others have to be created from scratch.

Let's start in the data flow tree of the Data Warehousing Workbench. We create a new data flow using the InfoArea context menu and enter the name and description of the data flow. We save the empty data flow and the object appears in the tree. Then we drag and drop the icon of the object that we want to create from the toolbar to the data flow maintenance area. We start with the DataStore object, the InfoCube for Sales Europe and the MultiProvider.

 

 Insert InfoProvider

 

We have now created three non-persistent objects. This means that the objects only exist locally in this data flow and not in the database. Now we draw two lines from the DataStore object to the InfoCube and one line from the InfoCube to the MultiProvider.

 

 Draw lines to create transformations

 

What happened? When you drew the line from the DataStore object to the InfoCube, the system realized that the line was a transformation and created a non-persistent transformation. When you drew the second line, the system created a data transfer process (DTP) because a transformation already existed. Further lines would result in further DTPs. When you drew a line from the InfoCube to the MultiProvider, the system created a simple line. This indicates a contained-in relationship, which is the only valid possibility in this case.
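As an aside, the rule just described can be summed up in a few lines of Java. This is only an illustration of the behaviour described above, not SAP code, and all names are invented:

// Illustration of the connection rule described above (not actual SAP code).
enum ConnectionType { TRANSFORMATION, DATA_TRANSFER_PROCESS, CONTAINED_IN }

class ConnectionRule {

    // targetIsMultiProvider: a line into a MultiProvider can only mean containment.
    // transformationExists:  a transformation already connects source and target.
    static ConnectionType classify(boolean targetIsMultiProvider, boolean transformationExists) {
        if (targetIsMultiProvider) {
            return ConnectionType.CONTAINED_IN;          // the only valid possibility here
        }
        return transformationExists
                ? ConnectionType.DATA_TRANSFER_PROCESS   // second and further lines become DTPs
                : ConnectionType.TRANSFORMATION;         // the first line becomes the transformation
    }
}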

Next we used the undo and redo buttons that allow you to undo and redo your previous actions. The screenshots below show the result of the last four actions being undone. In this case, the three connections as well as dragging and dropping the MultiProvider are reversed.

 

Undo of operations

 

To redo your actions, simply open the dropdown menu of the redo button and select the required action or press the redo button four times, for example.

 

 Result of Undo

 

Before we move on, we’ll assign a technical name and description to the InfoCube by choosing the change option in the context menu. Note that this option does not physically create the InfoCube in the system. The InfoCube is still a non-persistent object that is saved locally in the data flow.

The Sales Europe DataStore object exists already in the system. We add it to the data flow by choosing a command from the context menu of the non-persistent object:

 Use existing object

 

Choose the DataStore object ‘Sales Europe’ in the value help. The system adds the object to the data flow maintenance screen and changes the background color of the node to blue. This indicates that the node represents an existing persistent BW object.

Next we create the Sales Europe InfoCube by double-clicking on the non-persistent object. The dialog box for creating InfoCubes opens. The DataStore object is selected as the InfoCube template because it is connected to the InfoCube. The description and technical name of the non-persistent object are applied to the InfoCube.

 

 Create InfoCube

 

Activate the InfoCube and create the transformations and DTPs by double-clicking on the appropriate connections. The source and target are already specified. You simply need to maintain the details. Activate the objects.

Now we need to integrate Sales USA (already exists in the system) into our data flow. We start by dragging the existing Sales USA InfoCube from the InfoProvider tree and dropping it onto the MultiProvider. The existing InfoCube is added. Since the target is the MultiProvider, the system creates a connection between the InfoCube and the MultiProvider, based on the algorithm explained above.

 

 Connect existing InfoCube

 

To integrate the entire data flow of the Sales USA InfoCube, expand the existing data flow of the InfoCube by choosing the menu entry ‘Use data flow of object’. A dialog box opens. Specify whether you want to use the data flow upwards, downwards or in both directions. The data flow is shown in a separate window and can be added to the data flow maintenance by pressing the confirm button.

 

 Insert a Dataflow of an Object

 

The objects of the data flow are now integrated in our data flow. Double-click on the MultiProvider to open the MultiProvider maintenance screen. Thanks to the connections, the source objects are already listed and selected in the MultiProvider maintenance. Simply complete your MultiProvider and activate it.

 

 Creation of the MultiProvider

 

Connect three European Sales DataSources to the DataStore object by selecting multiple DataSources in a Data Warehousing Workbench object tree and dropping them onto the DataStore object. The transformations are automatically created (as mentioned above).

 

 Adding several DataSources

 

The Sales data flow scenario can be completed by drawing links for the DTPs and maintaining the details for transformation and DTPs.

This document gives you a first impression of what is possible with the new data flow maintenance. Note that this is only part of the functionality. Some additional features are listed below. A very important function is described in a separate document: Creating and Using Data Flow Templates.

  • If the data flow is collected for the transport, all relevant objects are also collected.
  • You can copy data flows with the data flow copy wizard by performing a simple copy or a multiple deep copy.
  • You can use the create data flow wizard to generate parts of your data flow.
  • Semantically partitioned objects are supported in the data flow maintenance.
  • You can collect and integrate all DTPs and/or transformations that belong to objects in the data flow by selecting a single option – for example, if you start your data flow with a list of InfoProviders and want to display all transformations that connect the objects of this set to one another.
  • …and many more features.  Try it out!

  

Michael te Uhle   is a Development Architect in the SAP BW Data Warehousing Team



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22621%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/SAP BW 2010. 12. 14. 12:30

Design Consideration for 7.30 InfoProviders - Webinar Presentation

Andreas Keppler – Presentation (PDF, 1 MB) – 01 December 2010

Overview

The SAP BW release 7.30 comes with many new InfoProviders, namely the HybridProvider, Semantically Partitioned Object (SPO), Analytical Index, CompositeProvider, and more. This session introduces the new InfoProviders and shows where they sit on the shelf among all the other InfoProviders and features in SAP BW. Moreover, the session provides guidance on when to use which InfoProvider.

Posted by AgnesKim
Technique/Other 2010. 12. 10. 18:26

Web Version of ABAP Keyword Documentation
Horst Keller, SAP Employee
Posted on Dec. 10, 2010 12:58 AM in ABAP

 
 

Now that AS ABAP 7.0, EhP2 is released, there is no longer any reason to restrict the Web Version of the ABAP Keyword Documentation to Release 7.0.

And here it is, the 7.02 version (thanks to the colleagues who made it happen):

http://help.sap.com/abapdocu_702/en/index.htm
http://help.sap.com/abapdocu_702/de/index.htm

Last but not least, the 7.02 version has lots of improvements regarding contents, even for 7.0 subjects. Besides an improved structure, there is, for example, a new chapter on date and time processing (http://help.sap.com/abapdocu_702/en/abendate_time_processing.htm), and many chapters were reworked, such as the one on built-in data types (http://help.sap.com/abapdocu_702/en/abenbuilt_in_types_complete.htm) and their implications (e.g. http://help.sap.com/abapdocu_702/en/abenconversion_elementary.htm or http://help.sap.com/abapdocu_702/en/abenlogexp_rules_operands.htm).

Have Fun!

Horst

PS: The 7.0-version is still available under ".../abapdocu_70/...".

 

 

 

Horst Keller is a Knowledge Architect in the department TD Core AS&DM ABAP of SAP.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22553%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/Other 2010. 12. 9. 17:55

SAP NetWeaver Portal 7.3 – Unified Access to Applications and Processes
Aviad Rivlin, SAP Employee
Company: SAP Labs Israel
Posted on Dec. 09, 2010 12:30 AM in Enterprise Portal (EP)

 

Dear portal experts,

 

As you probably already know, the next major release of the Portal (also known as SAP NetWeaver Portal 7.3) was released for ramp-up after a very successful beta program.

 

In this blog, I would like to share with you some of the new functionality that is part of the new portal release, but first let’s align on the release numbers. The current and most widely adopted SAP portal version is SAP NetWeaver Portal 7.0 with enhancement package 1 and enhancement package 2. When talking about the NetWeaver 7.3 release, please note that this is neither a support package nor an enhancement package; it is the next major release of the SAP NetWeaver Portal with plenty of new capabilities and functionality.

 

Those of you who attended the Unified Access to Applications and Processes session at SAP TechEd 2010 will already be familiar with these topics. But for those of you who did not have the chance to join us at TechEd, let’s start…

 

The next major release of the portal provides enhanced functionality in various areas: access to SAP and non-SAP applications, simplification of content creation, Web Page Composer enhancements, Knowledge Management enhancements, wikis and more. In this blog, I will concentrate on the key enhancements in the areas of content creation and access to SAP and non-SAP applications.

 

  • Simplified implementation process for SAP Business Packages – a simplified process with guided wizards for system alias assignment and role customization. Refer to the diagram below explaining the new, simplified process:

 

 

 

  • Simplified content creation via automatic iView (application) upload from the back-end system to the portal.

 

  • Top-down approach for role creation - the ability to start building the navigation hierarchy from the role itself and create\add folders, pages and iViews from within the role (this will simplify the work, especially for those of you who are new to the Portal).

 

  • Enhanced transport capabilities achieved via integration with the Central Transport System (CTS+) and the new concept of a synchronized folder. A synchronized folder is a folder within the transport package that contains references to the PCD objects you would like to transport. Only when the transport package is actually exported are the PCD objects pulled into the transport package (for those of you who are looking for the traditional behavior of the transport package, no worries, it is supported as well, side by side with the new functionality).

 

  • Enriched portal role types – with the new portal version we have 4 different role types: free-style roles (the roles as you know them from the current portal version), workcenter roles (roles that implement the workcenter user experience guidelines), roles from packages (roles from SAP Business Packages) and roles from PFCG (roles uploaded from the PFCG ABAP repository). We will deep-dive into the details of each role type in one of the coming blogs.

 

  • Interoperability with 3rd party portals – many customers run a dual-vendor portal approach (i.e. more than one portal vendor in the organization), and the typical question is: how can I expose one harmonized portal to the end user, even though from an IT perspective I have more than one portal vendor? The solution covers branding alignment, single sign-on and session management.

 

  • AJAX framework – many documents and sessions have already been given about the new AJAX framework, and from now on it is available for you to use! This is actually the new default framework page when installing the portal. You can find plenty of information about the AJAX framework already today, and plenty more to come…

 

  • QuickLaunch – a new navigation concept for navigating within your portal through a search capability. For example, you are looking to create a new purchase order but do not know where to find this iView. You simply type/search the word “purchase” and all the iViews/pages/roles containing the substring “purchase” are offered to you.

 

       

 

 

That’s it for this time. You can expect more details in the coming weeks. If you are interested in specific topics in the area of SAP NetWeaver Portal 7.3, please list them below and I will do my best to cover them in the coming blogs.

 

 

Happy New Year!

Aviad Rivlin 

 

 

Aviad Rivlin is a member of the SAP NetWeaver Portal Solution Management Group.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22431%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW 2010. 12. 9. 10:36

DSO Transport and Activation performance issue - adding new fields
Tanuj Gupta SAP Employee 
Company: SAP Labs India
Posted on Dec. 08, 2010 09:06 AM in Business Intelligence (BI)

 

Scenario:

• DSO/InfoCube with a very large number of entries (> 10 million)
• Database is Oracle
• Customer is adding new fields to the DSO/InfoCube and transporting the changes to the Production system.

Background:

Whenever changes like
a) Adding new key figures, or
b) Adding new characteristics and dimensions

are performed on an object (DSO or InfoCube) which already contains data, the activation process has to adjust the table structure(s). This adjustment process has to alter the table by adding the new column as NOT NULL and initializing it with a default value. This should work without any problems, given that there are enough database resources to perform this change on the database.

Adding NOT NULL fields requires checking all of the existing data for null values, which requires a large amount of space in the transaction log and in the database and involves many I/O operations. Depending on the I/O throughput of your storage, this can take hours.

The reason for initializing the newly added column is that, from a BW reporting and extraction point of view, the column has to be NOT NULL and initialized with a default value, as BW cannot handle null values. The general recommendation when you need to remodel an object by adding new fields while it already contains a lot of data is to follow the workaround below.

The general workaround involves making a copy of the existing data in the cube using the datamart interface or Open Hub services. For example:
1. Make a copy of the existing cube (Cube1) to Cube2
2. Add the new fields to Cube2
3. Load the data from Cube1 to Cube2 using the datamart interface (use Cube2 for further purposes).

If you have reporting queries defined on Cube1, then you need to do the following:

1. Make copy of the existing Cube (Cube1) to Cube2
2. Load data from Cube1 to Cube2 using datamart
3. Delete Data from Cube1
4. Add new fields to Cube1
5. Reload the data from Cube2 to Cube1 using datamart

Alternative Solution:

An alternative to this is the approach described in SAP Note 1287382, which has been observed to be much faster than the normal process.

1287382 Extending BW objects on Oracle with large tables

We have observed a significant performance improvement using this SAP Note, although its implementation requires good application knowledge.

Here are the steps for your reference:

1. Have a list of the large DSOs (identify the size of active table, change log table) and InfoCubes (identify the size of E, F table).
2. Archive or clear change log tables, compress the InfoCubes, minimize the number of records in the tables.
3. Have a list of objects to be transported, identify the dependencies between the objects, create the change requests (try to group large DSOs and InfoCubes into separate change requests), and identify the sequence of the transports.
4. Check the current SP level and implement SAP Note 1287382 and the correction mentioned in SAP Note 1340922.
5. Run program SAP_RSADMIN_MAINTAIN, maintain the RSADMIN parameters as mentioned in the SAP Note 1287382.
6. Schedule background job to run program SAP_NOT_NULL_REPAIR_ORACLE with suitable parameters (table name, parallelism). More parallel processing could be scheduled depending on your hardware capacity. For InfoCubes, the job has to run separately for all the DIM tables and Fact tables.

Job SAP_NOT_NULL_REPAIR_ORACLE (see the sketch at the end of this post):
- Renames the table /BIC/AMYODS00 to /BIC/AMYODS00_OLD
- Creates a new table /BIC/AMYODS00 and adds the new fields to this new table
- Copies the data from /BIC/AMYODS00_OLD to the newly created table /BIC/AMYODS00

7. Ensure that Job in SM37 is successfully executed.
8. If any errors occur observed in SM37:

The program fails before copying: check whether "/BIC/AMYODS00_OLD" exists. Normally this should not be the case, because if errors occur "/BIC/AMYODS00_OLD" is renamed back to "/BIC/AMYODS00". If the table does exist, then before the program is restarted the table "/BIC/AMYODS00" must be dropped (it MUST NOT contain data) and "/BIC/AMYODS00_OLD" becomes "/BIC/AMYODS00" again:
1.) Drop the new empty table "/BIC/AMYODS00" (using SE14 or SQL: DROP TABLE "/BIC/AMYODS00")
2.) Rename "/BIC/AMYODS00_OLD" to "/BIC/AMYODS00" (must be done with native SQL )
3.) Update in DD03L at least one column to nullable

The program fails after copying: this is the case if the number of records in "/BIC/AMYODS00_OLD" and "/BIC/AMYODS00" differ.

- Analyze and repeat in accordance with the previous step ("The program fails before copying"); /BIC/AMYODS00_OLD should still contain the complete data.
The program should not simply be restarted. Instead, the remaining steps (building the index and the statistics) are carried out manually after the error analysis.

9. Drop *_OLD tables if the conversion is successful.

If the DSO/InfoProvider contains more than 10 million entries, the above alternative solution should help to reduce the required time significantly.
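To illustrate the idea behind the repair job at the database level, here is a rough JDBC sketch of the rename-and-copy approach. The connection details and the column list are purely illustrative, and the real program additionally takes care of indexes, statistics and the ABAP dictionary (DD03L), so this is in no way a replacement for following SAP Note 1287382.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Rough sketch of the rename-and-copy idea behind SAP_NOT_NULL_REPAIR_ORACLE.
// The table name /BIC/AMYODS00 is taken from the text above; the columns
// DOC_NUMBER, AMOUNT and /BIC/ZNEWKF are invented for this illustration.
public class RenameAndCopySketch {

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:BWP", "user", "password"); // hypothetical connection
             Statement st = con.createStatement()) {

            // The naive approach would be a single ALTER that adds the NOT NULL column
            // with a default value and therefore touches every existing row:
            //   ALTER TABLE "/BIC/AMYODS00" ADD ("/BIC/ZNEWKF" NUMBER DEFAULT 0 NOT NULL)

            // 1. Rename the existing table.
            st.execute("ALTER TABLE \"/BIC/AMYODS00\" RENAME TO \"/BIC/AMYODS00_OLD\"");

            // 2. Create the new table including the additional NOT NULL column.
            st.execute("CREATE TABLE \"/BIC/AMYODS00\" ("
                    + "\"DOC_NUMBER\" VARCHAR2(10) NOT NULL, "
                    + "\"AMOUNT\" NUMBER, "
                    + "\"/BIC/ZNEWKF\" NUMBER DEFAULT 0 NOT NULL)");

            // 3. Copy the data over, filling the new column with its initial value.
            st.execute("INSERT INTO \"/BIC/AMYODS00\" "
                    + "SELECT \"DOC_NUMBER\", \"AMOUNT\", 0 FROM \"/BIC/AMYODS00_OLD\"");

            // Only after verifying that the row counts of both tables match would
            // the *_OLD table be dropped (step 9 above).
        }
    }
}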

 

Tanuj Gupta   Platinum Consultant (Solution Support), SAP Labs India.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/19300%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW 2010. 12. 9. 10:35

BW 7.30: Data Flow Copy Wizard
Thomas Rinneberg, SAP Employee
Company: SAP AG
Posted on Dec. 08, 2010 09:08 AM in Business Intelligence (BI)

 
 

If you have read my blog about the Data Flow Generation Wizard, you have already learned about one tool which eases the work of creating a BW data flow. However, it might be that you have done this already and heavily changed the generated objects to suit your needs. Now you need to create another data flow which looks quite similar to the one you have already created. Only the target would be a little different. Or you need to load from another DataSource in addition. Or from another source system (maybe a dummy source system? Cf. my other blog on transport features in 7.30). Too bad that you need to do all your modifications again!

No – because with BW 7.30 there also is – the Data Flow Copy Wizard.

And again, we start in the data warehousing workbench RSA1.

Workbench

 

Let’s display the data flow graphically:

Context Menu to start copy wizard

 

DataFlow Popup

 

DataFlow Display

 

Now, if we want to copy this data flow, we just need to choose “Copy data flow” instead of “Display data flow”. Again, you can choose in which direction the objects shall be collected.

DataFlow Popup

 

DataFlow Copy Question Popup

 

The system asks you whether you want to collect process objects as well or only data model objects. Why? The process objects are usually included in a process chain, and if you intend to copy the process objects, you should start copying from the process chain. We will try this out later, but for now, let’s press “No” and sneak into the wizard itself.

DataFlow Copy Wizard Start Step

 

Looking at the step list on the left, it seems like the objects to be copied are divided up by the various object types. However, the order is strange, isn’t it? It is on purpose, and you will (hopefully) understand if you read the lengthy explanation in the screenshot above ;-) Ok, let us start with the first step, “number of copies”!

DataFlow Copy Wizard Number of Copies

 

I have already chosen two copies; otherwise this step could be skipped. Now what do these “replacements” mean? Usually, when copies are performed, the objects are related to the original objects in terms of naming conventions. At the very least, for each object to be copied, you need to enter a new name. Now if you are going to create two copies at a time, you would need to enter two new names for each object. In order to simplify this, you can enter the placeholder & in the new object name and &VAR& in the description, and the placeholder will be replaced with what you specify in the screen above. It could look like this:

Replacement Input

 

So from an object e.g. ZWQC_& (Sales for &VAR&) two objects can be created with names ZWQC_USA (Sales for States) and ZWQC_EMEA (Sales for Europe and Asia).
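In Java terms, the substitution amounts to something like the following simple string replacement (purely illustrative; this is not how the wizard is implemented internally):

// Illustration of the placeholder replacement described above.
public class ReplacementExample {
    public static void main(String[] args) {
        String nameTemplate = "ZWQC_&";
        String descriptionTemplate = "Sales for &VAR&";

        // Replacements entered for the first copy: "USA" / "States".
        System.out.println(nameTemplate.replace("&", "USA"));                       // ZWQC_USA
        System.out.println(descriptionTemplate.replace("&VAR&", "States"));         // Sales for States

        // Replacements entered for the second copy: "EMEA" / "Europe and Asia".
        System.out.println(nameTemplate.replace("&", "EMEA"));                      // ZWQC_EMEA
        System.out.println(descriptionTemplate.replace("&VAR&", "Europe and Asia")); // Sales for Europe and Asia
    }
}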

Data Flow Copy Wizard Source System Step

 

Now the actual copy customizing starts. All following steps have the elements already visible in the screen above: for each original object, the target object can be specified in several ways by clicking on the column in the middle:

Copy Modes

 

  • You can use the original object uncopied. This means the original object will be included in the copied data flow. Depending on the object that you keep, you can:
    • Add a new load branch to an existing data target
    • Create a new DataSource for the same source system
    • Load the same DataSource from another source system
    • Load an existing DataSource into a new target
    If you keep all objects in all steps, you actually do not perform a copy.
  • You can use a different, already existing object. This will include the specified object in your copied data flow. You might have created the InfoProvider already, but want to copy the transformation from an existing one. For the source system object type, as shown above, the source system must have been created before you start the wizard.
  • You can create a new object as a copy of the original object. This is the standard option that you will want to use, and the one which is not available for source systems ;-)
  • You can exclude the object from the further copy process. This means that all objects dependent on the excluded object will also be excluded from the copy process. So if you exclude the source system, you automatically exclude the DataSource and the corresponding transformation as well, leaving only the DataStore object, the InfoCube and the transformation between them to be handled.
This already gives an impression of how the wizard takes care of the interdependencies between the objects, so that you always get a consistent copy no matter how complex your data flow is. I have chosen to keep the source system, and for the InfoProviders, I will copy the cube only:

Data Target Assignment

 

Then the wizard gives me no chance to mess up the corresponding transformations in the next step:

Transformation Assignment

 

You will especially appreciate this help when it comes to a deep copy of process chains. Let’s try this out. I exit the wizard.

Exit Question

 

Oops, I can save my entries!? That sounds like a useful feature. Indeed, if I had continued the wizard to the end and actually performed the copy, my entries would have been saved automatically. So if I later change something in the original objects and want to propagate this change to the copies I have made, I am offered the following additional step in the wizard:

Use previous copy processes as template

 

Having this, I can very swiftly walk through the steps, which already carry my settings from the chosen previous copy process. I just need to exclude some of the objects whose changes I do not want to copy over (or rather use the already copied objects). Moreover, there is transaction RSCOPY, which shows me which copy processes I have already undertaken. We will come back to this later; for now, we wanted to look at the process chain copy. Let us choose the menu “process chain” -> “copy” in the process chain maintenance:

Process chain copy question

 

Of course we want to use the wizard. This time we are not asked whether we want to collect process objects as well ;-) Instead, the step list contains some more steps:

Copy Wizard Steps for Process Chain copy

 

Let’s fast forward to “Process Chains”.

Process Chain Assignement Popup

 

The system assumes we want to copy the process chain (how intelligent ;-) and thus confronts us with a popup where we could change the target object name and description. Having filled it out, the wizard shows us another chain as well, the subchain of the selected one:

Subchain

 

Let us keep (i.e. re-use) that subchain and fast-forward to the “directly dependent processes”…

Source Systems Empty

 

STOP! What is this? I cannot change the source system? Why can’t I change the source system? Go on: I cannot change any of the DataSources, InfoProviders and transformations! So we tricked ourselves: by not copying the subchain, we are still referring to the original data flow in our copy (in the subchain). The system ensures that the outcome is consistent, so it does not let me choose another data flow. Ok, convinced. I will copy the subchain as well. Now I am allowed to make my changes concerning the InfoCube as before. Phew.

Data Target Assignment

 

Copy Wizard Process Copy Step

 

So these are the “directly dependent objects” – directly dependent on a data flow object. Since I have copied the InfoCube only, the system proposes to keep most of the processes and only create a new DTP for the new data flow branch plus a data deletion process. We can double-click on the original object "0WQ_308_DELDELTA" to see what it looks like.

Data Deletion Process

 

It contains both the DataStore object and the InfoCube. If I copy it, the InfoCube would be replaced by my new InfoCube in the copy. But the DataStore object would still be in it. Well, it shall be a copy… Also, if I look at the list of processes above, I see that the InfoPackages and the DataStore object will be loaded in my new chain as well as in my old chain. Not such a good idea. Maybe it would be better to copy the data flow only and modify my existing chain so that it drops and loads the new cube in addition to the old one. So the system does not totally relieve me of thinking for myself…

Ok, let us continue with our chain copy anyhow to see the outcome.

Copy Wizard Other Processes Step

 

These are the data-flow-independent processes; we have to choose names for the triggers, alright. There is one step missing in our example, since we have no such processes in our chain: the “indirectly dependent processes”, which in turn refer to a “directly dependent process”.

Let us go to the end and execute.

Batch Question

 

We choose "In Dialog".

Log

 

Everything worked. And what is the result?

Process Chain
Process Chain

 

The target chain looks quite like the old one, except for the subchain and the DTP, which we have copied as well. And how does the data flow look now?

Data Flow

 

That’s a nice result, isn’t it? So we have a new cube as copy of the old one, plus the transformation and DTP.

But now I had promised to show you transaction RSCOPY where we can see the logs of our copy processes:

Copy Monitor

 

Oops, there is a red icon. I have hidden this attempt from you, but now it comes to light that I had made a mistake. Let’s double-click it!

Copy Log

 

I had forgotten to remove the leading “0” from the process chain name. And how did I recover from this? I just started the wizard again, chose this failed copy process as a template (you can see it in the column “template” in the previous screenshot of transaction RSCOPY) and corrected my mistake. The copy was then executed to the end.

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22416%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW 2010. 12. 9. 10:33

BW 7.3: Troubleshooting Real-Time Data Acquisition
Tobias Kuefner SAP Employee 
Company: SAP AG
Posted on Dec. 08, 2010 09:07 AM in Business Intelligence (BI)

The main advantage of real-time data acquisition (RDA) is that new data is reflected in your BI reports just a few minutes after being entered in your operational systems. RDA therefore supports your business users in making their tactical decisions on a day-to-day basis. The drawback, however, is that these business users notice much faster when one of their BI reports is not up to date. They might then call you and ask why the document posted 5 minutes ago is not yet visible in reporting. And what do you do now? I’ll show you how BW 7.3 helps you to resolve problems with real-time data acquisition faster than ever before.

First, let’s have a look at what else is new for RDA in BW 7.3. The most powerful extension is definitely the HybridProvider. By using RDA to transfer transactional data into a HybridProvider, you can easily combine the low data latency of RDA with the fast response times of an InfoCube or a BWA index, even for large amounts of data. You’ll find more information about this combination in a separate blog. Additionally, BW 7.3 allows for real-time master data acquisition. This means that you can transfer delta records to InfoObject attributes and texts at a frequency of one per minute. And just like RDA directly activates data transferred to a DataStore object, master data transferred to an InfoObject becomes available for BI reporting immediately.

But now, let’s start the RDA monitor and look at my examples for RDA troubleshooting. I’ve chosen some data flows from my BW 7.0 test content and added a HybridProvider and an InfoObject. I know that this flight booking stuff is not really exciting, but the good thing is that I can break it without getting calls from business users.

Remember that you can double-click on the objects in the first column to view details. You can see, for example, that I’ve configured RDA requests to stop after 13 errors.

Everything looks fine. So let’s start the RDA daemon. It will execute all the InfoPackages and DTPs assigned to it at a frequency of one per minute. But wait… what’s this?

The system asks me whether I’d like to start a repair process chain to transfer missing requests to one of the data targets. Why? Ah, okay… I’ve added a DTP for the newly created HybridProvider but forgotten to transfer the requests already loaded from the DataSource. Let’s have a closer look at these repair process chains while they are taking care of the missing requests.

On the left hand side, you can see the repair process chain for my HybridProvider. Besides the DTP, it also contains a process to activate DataStore object data and a subchain generated by my HybridProvider to transfer data into the InfoCube part. On the right hand side, you can see the repair process chain for my airline attributes which contains an attribute change run. Fortunately, you don’t need to bother with these details – the system is doing that for you. But now let’s really start the RDA daemon.

Green traffic lights appear in front of the InfoPackages and DTPs. I refresh the RDA monitor. Requests appear and show a yellow status while they load new data package by package. The machine is running and I can go and work on something else now.

About a day later, I start the RDA monitor again and get a shock. What has happened?

The traffic lights in front of the InfoPackages and DTPs have turned red. The RDA daemon is showing the flash symbol, which means that it has terminated. Don’t panic! It’s BW 7.3. The third column helps me to get a quick overview: 42 errors have occurred under my daemon, 4 DTPs have encountered serious problems (red LEDs), and 4 InfoPackages have encountered tolerable errors (yellow LEDs). I double-click on “42” to get more details.

Here you can see in one table which objects ran into which problem at what time. I recognize at a glance that 4 InfoPackages repeatedly failed to open an RFC connection at around 16:00. The root cause is probably the same, and the timestamps hopefully indicate that it has already been removed (No more RFC issues after 16:07). I cannot find a similar pattern for the DTP errors. This indicates different root causes. Finally, I can see that the two most recent runtime errors were not caught and thus the RDA daemon has terminated. You can scroll to the right to get more context information regarding the background job, the request, the data package, and the number of records in the request.

Let’s have a short break to draw a comparison. What would you do in BW 7.0? 1) You could double-click on a failed request to analyze it. This is still the best option to analyze the red DTP requests in our example. But you could not find the tolerable RFC problems and runtime errors.

2) You could browse through the job overview and the job logs. This would have been the preferable approach to investigate the runtime errors in our example. The job durations and the timestamps in the job log also provide a good basis to locate performance issues, for example in transformations.

3) You could browse through the application logs. These contain more details than the job logs. The drawback however is that the application log is lost if runtime errors occur.

These three options are still available in BW 7.3 – they have even been improved. In particular, the job and application logs have been reduced to the essential messages. Locating a problem is still a cumbersome task, however, if you don’t know when it occurred. With the integrated error overview in the RDA monitor, BW 7.3 allows you to analyze any problem with the preferred tool. Let me show you some examples.

Unless you have other priorities from your business users, I’d suggest starting with runtime errors because they affect all objects assigned to the daemon. RDA background jobs are scheduled with a period of 15 minutes to make them robust against runtime errors. In our example, this means the RDA daemon serves all DataSources from the one with the lowest error counter up to the one which causes the runtime error. The job is then terminated and restarted 15 minutes later. The actual frequency is thus reduced from 60/h to 4/h, which is not real-time anymore. Let’s see what we can do here. I’ll double-click on “10” in the error column for the request where the problem has occurred.

I just double-click on the error message in the overview to analyze the short dump.

 

Phew… This sounds like sabotage! How can I protect the other process objects assigned to the same daemon from this runtime error while I search for the root cause? I could just wait another hour, of course. This RDA request will then probably have reached the limit of 13 errors that I configured in the InfoPackage. Once this threshold is reached, the RDA daemon will exclude this InfoPackage from execution. The smarter alternative is to temporarily stop the upload and delete the assignment to the daemon.

The overall situation becomes less serious once the DataSource has been isolated under “Unassigned Nodes”. The daemon continues at a frequency of one per minute although there are still 32 errors left.

Note that most of these errors – namely the RFC failures – can be tolerated. This means that these errors (yellow LEDs) do not hinder InfoPackages or DTPs until the configured error limit is reached. Assume that I’ve identified the root cause for the RFC failures as a temporary issue. I should then reset the error counter for all objects that have not encountered other problems. This function is available in the menu and context menu. The error counter of an InfoPackage or DTP is reset automatically when a new request is created. Now let’s look at one of the serious problems. I’ll therefore double-click on “2” in the error column of the first DTP with red LED.

When I double-click on the error message, I see the exception stack unpacked. Unfortunately that does not tell me more than I already knew: An exception has occurred in a sub step of the DTP. So I navigate to the DTP monitor by double-clicking the request ID (217).

 

Obviously, one of the transformation rules contains a routine that has raised the exception “13 is an unlucky number”. I navigate to the transformation and identify the root cause quickly.

In the same way, I investigate the exception which has occurred in DTP request 219. The DTP monitor tells me that something is wrong with a transferred fiscal period. A closer look at the transformation reveals a bug in the rule for the fiscal year variant. Before I can fix the broken rules, I need to remove the assignment of the DataSource to the daemon. When the corrections are done, I schedule the repair process chains to repeat the DTP requests with the fixed transformations. Finally I re-assign the DataSource to the daemon.

The RDA monitor already looks much greener now. Only one DataSource with errors is left. More precisely, there are two DTPs assigned to this DataSource which encountered intolerable errors, so the request status is red. Again, I double-click in the error column to view details.

The error message tells me straight away that the update command has caused the problem this time rather than the transformation. Again, the DTP monitor provides insight into the problem.

Of course “GCS” is not a valid currency (Should that be “Galactic Credit Standard” or what?). I go back to the RDA monitor and double-click on the PSA of the DataSource in the second column. In the request overview, I mark the source request of the failed DTP request and view the content of the problematic data package number 000006.

Obviously, the data is already wrong in the DataSource. How could this happen? Ah, okay… It’s an InfoPackage for Web Service (Push). Probably the source is not an SAP system, and a data cleansing step is needed – either in the source system or in the transformation. As a short-term solution, I could delete or modify the inconsistent records and repeat the failed DTP requests with the repair process chain.

That’s all. I hope you enjoyed this little trip to troubleshooting real-time data acquisition, even though this is probably not part of your daily work yet. Let me summarize what to do if problems occur with RDA. Don’t panic. BW 7.3 helps you to identify and resolve problems faster than ever before. Check the error column in the RDA monitor to get a quick overview. Double-click wherever you are to get more details. Use the repair process chains to repeat broken DTP requests. 


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20954%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW 2010. 12. 2. 01:32

SAP NetWeaver 7.3 in Ramp Up
Benny Schaich-Lebek SAP Employee 
Posted on Dec. 01, 2010 07:10 AM in Business Process Management, Enterprise Portal (EP), SAP NetWeaver Platform

As announced at TechEd this year, SAP NetWeaver 7.3 was released for restricted shipment on Monday, November 29th. Restricted shipment, better known as "ramp up" or release to customer (RTC), means the availability of the product to certain customers for productive usage.

Unrestricted shipment is expected to be in first quarter of 2011.

Here are a few of the many new features:

  • Greatly enhanced Java support: Java EE5 certified, Java-only ESB and JMS pub/sub capabilities
  • Reusable business rule sets with Microsoft Excel integration
  • Enhanced standards support (WS Policy 1.2, SOAP 1.2, WS Trust 1.3, Java SE 6, JSR 168/286, WSRP 1.0, SAML 1.0/2.0)
  • Tighter integration between SAP NetWeaver Business Warehouse and SAP BusinessObjects
  • Individual and team productivity enhancements in the SAP NetWeaver Portal
  • ...and heaps of new features and enhancements in each part of the SAP NetWeaver stack!

Here is more detail by the usage types of NetWeaver:

Enterprise Portal

With Enterprise Workspaces, SAP provides a flexible, intuitive environment for composing content, enabling enterprise end users to integrate and run structured and unstructured assets using a self-service approach.

 

Managing and Mashing up Portal Pages with Web Page Composer
Supporting business key users in the easy creation and management of enriched portal pages, blending business applications and user-generated content, and generating a truly flexible UI.

 

Unified Access to Applications and Processes with Lower TCO
Delivering a best-of-class integration layer for SAP, BusinessObjects and non-SAP applications and reports while maintaining low TCO, with capabilities such as advanced caching, integration with the SAP Central Transport System, and significant performance and scalability improvements, plus a common Java stack and an improved server administration and development environment.

 

Portal Landscape Interoperability and Openness
Providing industry-standard integration capabilities for SAP and non-SAP content, both into the SAP Portal and for 3rd party portals, such as JSR and Java 5 support, or open APIs for navigation connectors.

Business Warehouse

Scalability and performance have been enhanced for faster decision making: count in remarkably accelerated data loads, a next level of performance for BW Accelerator, and support for Teradata as an additional database for SAP NetWeaver BW. Flexibility is increased by further integration of the SAP BusinessObjects BI and EIM tools, with tighter integration with SAP BusinessObjects Data Services and SAP BusinessObjects Metadata Management. Configuration and operations were simplified with the new Admin Cockpit integrated into SAP Solution Manager, and wizard-based system configuration was introduced.

Process Integration

PI introduces out-of-the-box integration for a high number of solutions: for SAP applications there is prepackaged process integration content semantically interlinked with SAP applications and industry solutions, and for partners and ISVs SAP provides certification programs that help to ensure quality.

There is ONE platform (and not several) to support all integration scenarios: A2A, B2B, interoperability with other ESBs, SOA, and so forth.

In addition, there is support for replacing third-party integration solutions to lower TCO, and interoperability with other ESBs to protect investments.

A Broad support of operating environments and databases is made available.

Business Process Management/CE

With the WD/ABAP Integration you may browse the WD/ABAP UI repository of a backend system and use a WD/ABAP UI in a BPM task.

The API for Managing Processes and Tasks lets you start process instances, retrieve task lists, and execute actions on tasks.

With Business Rule Improvements you now can reuse rules or decision tables across rule sets. Together with this came other usability and developer productivity enhancements.

With zero configuration for local services, a big simplification of SOA configuration was achieved.

Mobile

In the new version operational costs are reduced through optimized monitoring and administration capabilities. Robustness was enhanced through improved security and simplified upgrades. There is greater flexibility regarding backend interoperability through Web Service interfaces and multiple backend connectivity.

More information is available at the SDN pages for SAP NetWeaver 7.3 or the manuals of NetWeaver 7.3 in the SAP Help Portal.

Benny Schaich-Lebek   is a product specialist at SAP NetWeaver product management



http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/22371%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529
Posted by AgnesKim
Technique/Other 2010. 11. 2. 17:38

BI annotator

BI annotator - July 30, 2007

BI annotator allows BusinessObjects (BI) users to relate and combine text data with structured data from a database/data warehouse. Both text and relational data are exposed to business users via a single SAP BusinessObjects semantic layer (or universe) and directly consumed by BusinessObjects XI R2 reports.

Relating external text data and corporate data adds tremendous value to business intelligence applications by increasing the contextual information available to users for making their business decisions. Examples include combining farm lot crop yields with local temperature data feeds, combining company revenues by geography with US government demographic data feeds, or integrating ERP manufacturing information with a hand-created sales forecast from an Excel spreadsheet. Managing mixed data access through the BusinessObjects XI R2 BI platform increases the reliability, security, and auditability of the resulting BI solution. The seamless integration of relational and text data allows users to create reports, analytics and dashboards that provide a 360-degree view of their business.

 


How can BI annotator help with Business Intelligence solutions?

BI annotator integrates text data feeds with relational data sources by indexing business dimension values and parsing the text to generate a relationship star schema that maps the individual text items to the dimension values in a database/data warehouse.

The text-to-dimension relationships are maintained in a 'Coordinates' table, which is created and populated by BI annotator at index time. It maintains the relationships between the dimensions of interest in a given universe and the text items, and contains the join data that allows easy end-user integration of the relational and text data, flexibly relating the text to the various dimensions of the universe.
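To picture how such a 'Coordinates' table could be consumed, here is a small hypothetical JDBC example. The post does not document the actual table layout, so every table, column and connection name below is invented purely for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical illustration: a coordinates table that relates text items to
// dimension values, joined with a sales view from the warehouse.
public class CoordinatesJoinSketch {

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dwh-host:1521:DWH", "user", "password"); // invented connection
             Statement st = con.createStatement()) {

            // COORDINATES(TEXT_ITEM_ID, DIMENSION_NAME, DIMENSION_VALUE) is an invented
            // layout that captures the idea of mapping text items to dimension values.
            ResultSet rs = st.executeQuery(
                  "SELECT s.REGION, s.REVENUE, t.TITLE "
                + "FROM SALES_BY_REGION s "
                + "JOIN COORDINATES c ON c.DIMENSION_NAME = 'REGION' "
                + "                  AND c.DIMENSION_VALUE = s.REGION "
                + "JOIN TEXT_ITEMS t ON t.TEXT_ITEM_ID = c.TEXT_ITEM_ID");

            while (rs.next()) {
                // Each row pairs a warehouse figure with a related text item.
                System.out.printf("%s %s %s%n",
                        rs.getString("REGION"), rs.getString("REVENUE"), rs.getString("TITLE"));
            }
        }
    }
}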

The Physical View of BI annotator's architecture is shown here:

 


Enjoy!

We hope you use this prototype to discover how integrating text data via a universe can change the way people consume information. Please give us feedback to make sure we know how you're using this and what else you would like to see it do in the future. And remember, this is a prototype only and NOT for use in production environments.



Download BI Annotator Prototype


http://www.sdn.sap.com/irj/boc/innovation-center?rid=%2Fwebcontent%2Fuuid%2F608383af-44f1-2b10-02a9-a472d39ef364
Posted by AgnesKim