Technique/Other | 2010. 10. 26. 19:16

New SAP iPhone App - SAP Business One Mobile Application
Karin Schattka SAP Employee 
Posted on Oct. 26, 2010 02:30 AM in Mobile

URL: http://itunes.apple.com/app/sap-business-one-mobile-application/id392606876

 
 

With the SAP Business One mobile application for iPhone, you can view reports and content, process approval requests, manage customer and partner data, and much more.

Key features:
• Alerts and Approvals - Get alerts on specific events - such as deviations from approved discounts, prices, credit limits, or targeted gross profits - and view approval requests waiting for your immediate action. Trigger remote actions, and drill into the relevant content or metric before making your decision.

• Reports - Refer to built-in SAP Crystal Reports that present key information about your business. Add your own customized reports to the application, and easily share them via e-mail.

• Business Partners - Access and manage your customer and partner information, including addresses, phone numbers, and contact details; view historical activities and special prices; create new business partners and log new activities; contact or locate partners directly. All changes are automatically synchronized with SAP Business One on the back end.

• Stock Info - Monitor inventory levels, and access detailed information about your products including purchasing and sales price, available quantity, product specifications and pictures.

Please download the SAP Business One mobile application directly from the iTunes Store and watch the YouTube video.

Karin Schattka   is part of the SAP Community Network team.


http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/21777%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 10. 11. 20:44

BW 7.30: Semantically Partitioned Objects
Alexander Hermann SAP Employee 
Company: SAP AG
Posted on Oct. 11, 2010 04:22 AM in Business Intelligence (BI)

 
 

Motivation

Enterprise Data Warehouses are the central source for BI applications and are faced with the challenge of efficiently managing constantly growing data volumes. A few years ago, Data Warehouse installations requiring terabytes of space were a rarity. Today the first installations with petabyte requirements are starting to appear on the horizon.

In order to handle such large data quantities, we need to find modeling methods that guarantee the efficient delivery of data for reporting. Here it is important to consider various aspects such as the loading and extraction processes, the index structure and data activation in a DataStore object. The Total Cost of Development (TCD) and the Total Cost of Ownership (TCO) are also very important factors.

Here is an example of a typical modeling scenario. Documents need to be saved in a DataStore object. These documents can come from anywhere in the world and are extracted on a country-specific basis. Here each request contains exactly one country/region.

Figure 1

If an error occurs (due to invalid master data) while the system is trying to activate one of the requests, the other requests cannot be activated either and are therefore initially not available for reporting. This issue becomes even more critical if the requests concern country-specific, independent data.

Semantic partitioning provides a workaround here. Instead of consolidating all the regions into one DataStore object, the system uses several structurally identical DataStore objects or “partitions”. The data is distributed between the partitions, based on a semantic criterion (in this example, "region").

Figure 2

Any errors that occur while requests are being activated now only affect the regions that caused the errors. All the other regions are still available for reporting. In addition, the reduced data volume in the individual partitions results in improved loading and administration performance.

However, the use of semantic partitioning also has some clear disadvantages. The effort required to generate the metadata objects (InfoProviders, transformations, data transfer processes) increases with every partition created. In addition, any changes to the data model must be carried out for every partition and for all dependent objects. This makes the change management more complex. Your CIO might have something to say about this, especially with regards to TCO and TCD!

Examples of semantically partitioned objects

This is where the semantically partitioned DataStore objects and InfoCubes (abbreviated to "SPO", for semantically partitioned object) introduced in SAP NetWeaver BW 7.30 come in. SPOs make it possible to generate and manage semantically partitioned data models with minimal effort.

SPOs provide you with a central UI where you maintain the structure and partitioning properties just once. During activation, this information is used to generate the partitions. Changes, such as adding a new InfoObject to the structure, are made in the same way on the SPO and are automatically applied to all partitions. You can also generate DTPs and process chains that match the partitioning properties.

The following example demonstrates how to create a semantically partitioned DataStore object. The section following the example provides you with an extensive insight into the new functions.

DataStore objects and InfoCubes can be semantically partitioned. In the Data Warehousing Workbench, choose “Create DataStore Object”, for example, and complete the fields in the dialog box. Make sure that the option “Semantically Partitioned” is set.

 

Figure 3 

 Figure 4

 

A wizard (1) guides you through the steps for creating an SPO. First, define the structure, just as you are used to doing for standard DataStore objects (2). Then choose "Maintain Partitions".

 

Figure 5

 

In the next dialog box, you are asked to specify the characteristics that you want to use as partitioning criteria. You can select up to 5 characteristics. For this example, select "0REGION". The compounded InfoObject "0COUNTRY" is automatically included in the selection.

 

Figure 6

 

You can now maintain the partitions. Choose the button (1) to add new partitions and change their descriptions (2). Use the checkbox (3) to decide whether you want to use single values or value ranges to describe the partitions. Choose “Start Activation”. You have now created your first semantically partitioned DataStore object.

 

Figure 7

Figure 8

 

In the next step, you connect the partitions to a source. Go to step 4: “Create Transformation” and configure the central transformation using the relevant business logic.

 

Figure 9

 

Now go to step 5: “Create Data Transfer Processes” to generate DTPs for the partitions. On the next screen, you see a list of the partitions and all available sources (1). First, choose “Create New DTP Template” (2) to create a parameter configuration.

 

Figure 10

 

A parameter configuration/DTP template corresponds to the settings that can be configured in a DTP. These settings are applied when DTPs are generated.

 

Figure 11

 

Once you have created the DTP template, drag it from the Template area and drop it on a free area under the list of partitions (1). This assigns a DTP to every source-target combination. If you need different templates for different partitions, you can drag and drop a template onto one specific source-target combination.

Once you have finished, select all the DTPs (2) and choose “Generate”.

 

Figure 12

 

The last step is to generate a process chain in order to execute the DTPs. Go to step 6 in the wizard: “Create Process Chains”. In the next screen, select all the DTPs and drag and drop them to the lower right screen area: “Detail View (1)”.   You use the values "path" and “sequence” to control the parallel processing of DTPs. DTPs with the same path are executed consecutively.

 

Figure 13

 

Choose “Generate” (3). The following process chain is created.

 

Figure 14

  

Summary

In this article, you learned how to create a semantically partitioned object. Using the central UI of an SPO, it is now possible to create and maintain complex partitioned data models with minimal effort. In addition, SPOs guarantee the consistency of your metadata (homogeneous partitions) and of your data (filtered according to the partition criterion).

Once you have completed the 6 steps, you will have created the following components:

 

  • An SPO with three partitions (DataStore objects)
  • A central transformation for business logic implementation
  • 3 data transfer processes
  • 1 process chain

 

Source: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/21334%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/Other | 2010. 10. 1. 11:32

This is an error that bit me many times while producing deliverables on a project the year before last (I think), so I'm scrapping this.

http://myjay.byus.net/tt/trackback/689

Thanks to @myjay_ for writing such a kind manual :)

Posted by AgnesKim
Technique/SAP BW | 2010. 9. 25. 21:44

BW 7.30: The New Planning Modeler
Sabrina Hinsberger SAP Employee 
Company: SAP AG
Posted on Sep. 17, 2010 09:32 AM in Beginner, Business Intelligence (BI), SAP NetWeaver Platform

 

The new Planning Modeler

With release 730 the new Planning Modeler is born!  You will see that it is not only new but also different from the old release. 

First of all, what is the Planning Modeler? It is the central tool for customizing Planning Applications within SAP BW Integrated Planning. The new release comes with all the central features you know from the old version. But unlike the old Java Web Dynpro based modeler, the new release is SAP GUI based. Additionally, the new modeler integrates better with the modeling of SAP BW. Planning customizing is based on objects such as transactional InfoCubes and InfoObjects with their hierarchies and master data, and these objects are maintained in the Administrator Workbench (transaction RSA1). Strong integration here was a top goal of the new development. As a result, you can now see all planning objects within the Administrator Workbench and navigate from there into the Planning Modeler.

The real-time InfoProviders, as well as the filters and planning functions, can be found under the corresponding aggregation level and maintained there. This makes it easy to find planning objects that are connected to each other.

RSA1 

The Planning Sequences have their own area in the Administrator Workbench, with a brand new feature: you can now arrange your Planning Sequences in InfoAreas, which enables a complete semantic grouping of your Planning Applications. At a glance you can see all planning objects that are used in a Planning Sequence.

By simply double-clicking the sequence name you can display the details of the sequence and then execute it in the test framework.

 

Moreover, transaction RSPLAN gives you a standalone design-time tool for your planning objects. This view leans toward the look and feel of the InfoProvider maintenance.

 

The aggregation level maintenance in particular was adapted to the InfoProvider maintenance, so that InfoObjects can now be added to an aggregation level simply by drag & drop.

 

In conclusion, one can say:
If you want to build a Planning Application, the new release can save you time and effort. There is no need to install a Java stack; everything can be done in the SAP GUI! This means lower TCO and an easy start for your planning project!

What is even better for those who already run Planning Applications within SAP BW Integrated Planning: You don’t need to do any migration of your current planning objects!

Just call transaction RSPLAN or the Administrator Workbench (Transaction RSA1) and try out the New Planning Modeler.

 


Source: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20727%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 8. 16. 10:52

SAP BW – Infoprovider Data Display (LISTCUBE) - Improvised

Suraj Tigga (Capgemini) | Article (PDF, 761 KB) | 04 August 2010

Overview

Methods to display InfoProvider data without repetitive selection of characteristics and key figures.




http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/70e092a3-9d8a-2d10-6ea5-8846989ad405&utm_source=twitterfeed&utm_medium=twitter
Posted by AgnesKim
Technique/Other | 2010. 8. 12. 22:12

Twibap: the ABAP Twitter API
Uwe Fetzer, Active Contributor
Company: SE|38 IT-Engineering & Consulting
Posted on Aug. 12, 2010 04:34 AM in ABAP, Beyond SAP, Code Exchange, JavaScript

URL: http://dev.twitter.com

 
 

Prolog

You probably know my last year's SCN blog post "A story about Twitter, XML and WD4A" ;-)
In April this year I received a short DM from Mark F.: "getting #SAP #ABAP on this list would be wholesome" (he referred to the site http://dev.twitter.com/pages/libraries).
Nice idea, I thought; give me time until the summer break.

At the end of July I wanted to refresh my Twitter API knowledge by reading the docs, and I saw this message on the dev site: "The @twitterapi team will be shutting off basic authentication on the Twitter API. All applications, by this date, need to switch to using OAuth."
No problem; I had already battled with OAuth while developing Wave and StreamWork apps. (haha, more on that later)

Chapter 1 - The Twitter API

The Twitter REST(?) API is described pretty well at http://dev.twitter.com/doc, I think, so no further explanation is needed here.


Chapter 2 -  The JSON Parser

In my Twitter WDA client I used the XML response. Since I fell in love with JSON while working with Python, I decided to use JSON this time.
First problem: how to parse JSON in ABAP?
My search on SCN found the nice JSON function group from Quentin Dubois (Wiki page).
Because the Twitter response contains not only flat data but also embedded objects (a status object always contains a user object), and some responses are arrays, the mentioned function group is not really the solution I needed, so I decided to write my own parser (reusing parts of the module's code; I hope Quentin doesn't kill me now).

The result of a parsed JSON object is a hashed key/value table, from which we can read each element with a simple call, e.g. "text = simplejson->get_value( 'text' ).".
If the element we just read is itself another object, we simply parse it again:
user = simplejson->get_value( 'user' ).       "returns another object
user_data = simplejson->parse_object( user ). "parse object
simplejson->set_data( user_data ).            "set new data in parser
screen_name = simplejson->get_value( 'screen_name' ).  "get element

The result of a parsed JSON array is a standard table of the hashed key/value table.

With this tiny simplejson helper class I wrote my first Twitter API classes, and since basic authentication has not been cut off yet, all tests went well up to this point.

Chapter 3 - OAuth

The next step of the journey was the implementation of OAuth. Looking at my Python sources (the StreamWork OAuth implementation) and the first chapters of the Twitter OAuth docs, it all seemed very familiar, and I started the implementation.


An OAuth request contains, among other things, two fields called "oauth_nonce", a string of random characters, and "oauth_timestamp", the number of seconds since Jan. 1st 1970.
Because there are no standard functions for these (I think), I developed two small helper methods:

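The two helpers are only shown as an image in the source post. As a rough sketch (my own reconstruction, not the author's code), something like the following plain ABAP produces both values with standard means only:

REPORT z_oauth_helpers_sketch.
* Hypothetical helper logic - names and structure are assumptions.

DATA: gv_oauth_timestamp TYPE string,
      gv_oauth_nonce     TYPE string.

* oauth_timestamp: seconds since Jan. 1st 1970, 00:00:00 UTC
DATA: lv_now   TYPE timestamp,
      lv_epoch TYPE timestamp VALUE '19700101000000',
      lv_secs  TYPE i.

GET TIME STAMP FIELD lv_now.
lv_secs = cl_abap_tstmp=>subtract( tstmp1 = lv_now
                                   tstmp2 = lv_epoch ).
gv_oauth_timestamp = lv_secs.
CONDENSE gv_oauth_timestamp.

* oauth_nonce: a random character string; here simply a random
* number appended to the timestamp
DATA: lo_random TYPE REF TO cl_abap_random_int,
      lv_random TYPE i,
      lv_randc  TYPE string.

lo_random = cl_abap_random_int=>create( seed = lv_secs
                                        min  = 1
                                        max  = 2147483646 ).
lv_random = lo_random->get_next( ).
lv_randc  = lv_random.
CONDENSE lv_randc.
CONCATENATE gv_oauth_timestamp lv_randc INTO gv_oauth_nonce.

WRITE: / 'oauth_timestamp:', gv_oauth_timestamp,
       / 'oauth_nonce:', gv_oauth_nonce.

In the real API these would of course be private methods of the Twitter client class; the point is only that both values can be built without any non-standard function modules.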

A hint among friends: if the timestamp is not correct, Twitter will refuse your request, believe me ;) -> set your system time correctly!

-> read more: OAuth at Twitter

Chapter 4 - HMAC-SHA1

But the first test results brought me back down to earth: I had overlooked the tiny remark "Twitter requires that all OAuth requests be signed using the HMAC-SHA1 algorithm." WTF?
StreamWork uses PLAINTEXT authentication, but what is HMAC-SHA1? Googlegooglegoogle

The search brought me two results (OK, many more than two, but these two are the most relevant ones):
- a SHA1 JavaScript library
- a simple note in an SCN forum post, which means there is a function module called "CALCULATE_HASH_FOR_CHAR"

An SHA1 function module? Great. But looking into the source of FM "CALCULATE_HASH_FOR_CHAR", the question marks in my brain appeared again (there is only a system call in it). What does the FM documentation say? Nothing, no documentation available. Using this FM was definitely too "hot" for me. What if I overwrote some needed cryptographic settings in the system? Not fatal on my own systems, but what about client systems? No, thanks.
Fortunately I remembered that I had read something somewhere about using JavaScript within ABAP. SE24, "CL_*JAVA*", <F4> -> et voilà: found the class "CL_JAVA_SCRIPT".
Google again -> it points me to this SAP help site


Again WTF....

But thank god (and SAP!), we still have the old docs available: here you can find the relevant part from NW 7.0

Chapter 5 - Javascript

Although I don't like JavaScript very much, while playing around with the CL_JAVA_SCRIPT class I was surprised by its functionality. Even whole ABAP OO classes can be bound to the JavaScript source.
A CL_PYTHON would definitely be better, but the class works great at the moment and is probably the only way to use open source libraries for functions not delivered by SAP!

Back to topic:
I did my first experiments with the class as described in the docs: with inline code. But this is certainly not the solution I want to build into the API. So where should the JavaScript sources be stored? Where they belong: in the MIME repository.
Now we have the SHA1 library and an additional one-liner called twibap.js stored in the MIME repository, and with this code snippet we can load the source back into an ABAP string:

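That load routine, too, is only available as a screenshot in the source post. A minimal sketch, under the assumption that the files live somewhere under /SAP/PUBLIC in the MIME Repository (the path below is made up), could look like this:

* Hypothetical reconstruction of the MIME Repository read.
DATA: lo_mr      TYPE REF TO if_mr_api,
      lv_content TYPE xstring,
      lv_source  TYPE string,
      lo_conv    TYPE REF TO cl_abap_conv_in_ce.

lo_mr = cl_mime_repository_api=>get_api( ).

* The URL is an assumption for illustration only
lo_mr->get( EXPORTING  i_url     = '/SAP/PUBLIC/twibap/sha1.js'
            IMPORTING  e_content = lv_content
            EXCEPTIONS OTHERS    = 1 ).
IF sy-subrc <> 0.
  MESSAGE 'MIME object not found' TYPE 'E'.
ENDIF.

* The MIME object comes back as a raw (UTF-8) byte string;
* convert it into an ABAP character string
lo_conv = cl_abap_conv_in_ce=>create( input    = lv_content
                                      encoding = 'UTF-8' ).
lo_conv->read( IMPORTING data = lv_source ).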
twibap.js contains:

and with this code we finally can sign the message:
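Both the twibap.js one-liner and the signing call are screenshots in the source post, so the following is only an illustration: it assumes the SHA1 library is Paul Johnston's widely used sha1.js (which exposes b64_hmac_sha1(key, data)), and it uses CL_JAVA_SCRIPT in the style of the NW 7.0 documentation example; verify the class's method names in SE24 on your release.

* Sketch only - library choice, variable names and the exact
* CL_JAVA_SCRIPT calls are assumptions, not the original code.
DATA: lo_js          TYPE REF TO cl_java_script,
      lv_source      TYPE string,   "sha1.js, loaded as shown above
      lv_key         TYPE string,   "consumer secret & token secret
      lv_base_string TYPE string,   "OAuth signature base string
      lv_script      TYPE string,
      lv_signature   TYPE string.

lo_js = cl_java_script=>create( ).

* Append the call that twibap.js presumably wraps: HMAC-SHA1 over
* the signature base string, Base64-encoded
CONCATENATE lv_source
            ' b64_hmac_sha1("' lv_key '", "' lv_base_string '");'
            INTO lv_script.

* evaluate( ) runs the script and returns the value of the last
* expression, i.e. the OAuth signature
lv_signature = lo_js->evaluate( lv_script ).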

In addition, I only had to develop my own encoding method called "percent_encode", because the "cl_http_utility=>escape_url()" method doesn't conform to the OAuth dictate, where the only characters you may leave unencoded are letters, digits and "- _ . ~" (and some other abnormalities).
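A minimal sketch of such a percent_encode routine (my own illustration, assuming UTF-8 encoding of the input) could look like this:

FORM percent_encode USING    iv_value   TYPE string
                    CHANGING cv_encoded TYPE string.
* Leave only the RFC 3986 unreserved characters unencoded, as the
* OAuth spec demands; everything else becomes %XX of its UTF-8 bytes.
  CONSTANTS lc_unreserved TYPE string VALUE
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.~'.

  DATA: lo_conv  TYPE REF TO cl_abap_conv_out_ce,
        lv_char  TYPE c LENGTH 1,
        lv_bytes TYPE xstring,
        lv_byte  TYPE x LENGTH 1,
        lv_hex   TYPE string,
        lv_len   TYPE i,
        lv_off   TYPE i,
        lv_blen  TYPE i,
        lv_boff  TYPE i.

  CLEAR cv_encoded.
  lv_len = strlen( iv_value ).

  DO lv_len TIMES.
    lv_off  = sy-index - 1.
    lv_char = iv_value+lv_off(1).

    IF lv_char <> space AND lc_unreserved CS lv_char.
      CONCATENATE cv_encoded lv_char INTO cv_encoded.
    ELSE.
*     Encode the character's UTF-8 bytes as %XX
      lo_conv = cl_abap_conv_out_ce=>create( encoding = 'UTF-8' ).
      lo_conv->convert( EXPORTING data   = lv_char
                        IMPORTING buffer = lv_bytes ).
      lv_blen = xstrlen( lv_bytes ).
      lv_boff = 0.
      DO lv_blen TIMES.
        lv_byte = lv_bytes+lv_boff(1).
        lv_hex  = lv_byte.             "x -> string yields hex digits
        CONCATENATE cv_encoded '%' lv_hex INTO cv_encoded.
        lv_boff = lv_boff + 1.
      ENDDO.
    ENDIF.
  ENDDO.
ENDFORM.

Every key and value would be run through such a routine before it goes into the signature base string.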

The whole Twitter workflow works nicely now, but I was not very satisfied with this JS solution.
So, back to SAP Notes and Google for a deeper search for more information about the function module "CALCULATE_HASH_FOR_CHAR".


Chapter 6 - The SecureStore

In Note 1416202 I finally found the answer. The function modules are NOT "secret"; it's just that "The raw documentation was not activated."
With NW 7.01 SP7 the documentation will be delivered (I was so close to installing SP7 on my system..., but luckily I found the documentation in the infinite vastness of the internet).

In the documentation of the function group "SECH" and its function modules we can read that we may use these function modules for our own purposes. So I did:

Hey, it works! Party! Trashed the Javascript part.

Boom, dump, failed again. What happened? In the first steps of the OAuth authorization process (request_token etc.) the oauth_secret contains only the consumer_secret (42 characters + "&").
The function module 'SET_HMAC_KEY' works brilliantly up to the point where I want to sign a user action (e.g. sending a tweet). In this case the secret combines the consumer secret and the token secret (of the user), and the function module responds with a "parameter_length" exception.
With some experiments I found out that the FM only accepts a maximum of 81 characters. Hey, why? I only want to SET the key; there is nothing to process at this moment. And in addition: nowhere in the HMAC-SHA1 OAuth key definition is a maximum length mentioned.

In my despair I opened an SCN forum thread.
And what a surprise (or not): not even 24 hours later I had the solution :) SCN members rock!

The solution: if the key is longer than 81 characters, we have to store the hash of the key, not the key itself (I still don't know why).
The code snippet:

Now the Twitter API is finally finished...

Chapter 7 - A simple client

... and we can build our first simple (very simple!) Twitter client (output of our own home timeline):

Time to build a SAPlink nugget for the beta test. To test the nugget I imported it into another system, activated all sources (ignoring errors due to recursive definitions) and started the test client.
Oh noooo, again -> dump: FM 'SET_HMAC_KEY' does not exist.

Chapter 8 - Back to Chapter 5

Why? And why me? The answer to the first question is easy: I developed the API on 7.01 and imported the nugget into a 7.00 system (but I cannot answer the second question).
Thanks to my decades-long experience ;) I had only starred out (commented out) the JavaScript part. So I just had to activate that part again, create a nugget especially for 7.00 systems, and include the MIME objects in the nugget again.

Epilog

You: "And for what is it good for?"
Me: "No idea"
You: "But why did you make it"
Me: "You could also ask: Why are you running Android 2.2 (Froyo) on a WindowsMobile phone. The answer would be the same: Because it works, and it makes so much fun ;)"


(probably the tiniest Froyo phone in the world)

http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20474%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim
Technique/SAP BW | 2010. 8. 6. 10:31

BW 7.30: Define Delta in BW and no more Init-InfoPackages
Thomas Rinneberg SAP Employee
Company: SAP AG
Posted on Aug. 05, 2010 03:52 PM in Business Intelligence (BI)

You might know and appreciate the capability to define delta generically when building a DataSource in SAP source systems. But you had a lot of work if you wanted to load delta data from any other source system type, like DB Connect, UD Connect or file. You could declare that the data was delta, OK, but this had no real effect; it was purely declarative. The task of selecting the correct data from the source was still yours.

Now with BW 7.30 this has changed. Because now there is – the generic BW delta!

As expected, you start by defining a BW DataSource.

Create DataSource - Extraction Tab

Nothing new so far. But if you specify that this DataSource is delta-enabled, you will find a new dropdown:

Use generic delta

These are the same options that you already know from the SAP source system DataSource definition:

Generic delta in OSOA

Ok, let’s see what happens if we select “Date”.

Date Delta

You already know the "Delta Field" and the two interval fields from the generic delta in the SAP source system, and they have the same meaning. So hopefully I can skip the lengthy explanation of the safety margin interval logic and come to the extra field that popped up: the time zone. Well, OK, not very thrilling, but probably useful: since the data in your source might not be saved in the same time zone as the BW system that loads it (or your local time), you can explicitly specify the time zone of your data.

"Time stamp - short" offers much the same input fields, except that the intervals are given in seconds rather than days. "Time stamp Long (UTC)" by definition lacks the "Time zone" field. Let's look at "Numeric Pointer":

Numeric Delta

Oops, no upper interval! I guess now I do need to spend some words on these intervals: the value given in "Upper Interval" is subtracted from the upper limit used for selecting on the delta field. Let's say the current upper value of the delta field is 100 and the upper interval is 5; then we would need to select the data only up to the value 95. But hold on: how should the system know the current value of the numeric field without extracting it? So we would extract the data up to the current upper value anyhow, and hence there is no use in specifying an upper interval.

The lower limit, in turn, is automatically parsed from the loaded data and is thus known before the next request starts. Hence we can subtract the safety margin before starting the selection.

Our example DataSource has a UTC time stamp field, so let’s select it:

Timestamp Delta

Activate the DataSource and create an InfoPackage:

InfoPackage

CHANGED is not a selectable field in the InfoPackage. Why not? Well, the delta selections are calculated automatically; you do not need to select on them explicitly. Now let's not forget to set the update mode to Init in order to take advantage of the generic delta:

Auto-Delta-Switch to Delta

Wait a minute! There is a new flag: “Switch InfoPack. in PC to Delta (F1)”. Guess I need to press F1 to understand what this field is about.

Explanation

Sounds useful, doesn’t it? No more maintenance of two different InfoPackages and process chains for delta upload! You can use the same InfoPackage to load Init and Delta like in the DTP.

In our small test we do not need a process chain, so let’s go on without this flag and load it. Then let’s switch the InfoPackage to Delta manually and load again.

Monitor

Indeed, there are selections for our field CHANGED.

 

Don't miss any of the other information on BW 7.30, which you can find here.

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20413%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

===========================================================================================================

Oho!! I want to try this out!

Posted by AgnesKim
Technique/SAP BW | 2010. 8. 3. 12:45

Creating a BW Archive Object for InfoCube/DSO from Scratch and Other Homemade Recipes
Karin Tillotson
Company: Valero Energy Corporation
Posted on Aug. 02, 2010 07:29 PM in Business Intelligence (BI), Business Process Expert, SAP Developer Network, SAP NetWeaver Platform

 

In this blog, I will go over the step-by-step instructions for creating a BW Archive Object for InfoCubes and DSOs and will also provide some SAP-recommended BW housekeeping tips.

 

To start with, I thought I would go over some differences between ERP Archiving and BW Archiving:

 

ERP Archiving:

  • Delivered data structures/business objects
  • Delivered archive objects (more than 600 archive objects in ECC 6.0)
  • Archives mostly original data
  • Performs an archivability check for some archive objects checking for business complete data or residence time (period of time that must elapse before data can be archived)
  • After archiving, data can be entered for the archived time period
 

BW Archiving:

  • Generated data structures
  • Generated archive objects
  • Archives mainly replicated data
  • No special check for business complete or residence time
  • After archiving a time slice, no new data can be loaded for that time slice
 

To begin archiving, you will need to perform the next steps:

  1. Set up archive file definitions
  2. Set up content repositories (if using 3rd party storage)
  3. Create archive object for InfoCube/DSO

 

Step 1 - To begin archiving, you will need a place to write out the archive files.  You do not necessarily need a 3rd party storage system (though I highly recommend one).  But, you do need a filesystem/directory in which to either temporarily or permanently “house” the files.

 

Go to transaction /nFILE

 

image 1

 

Either select an SAP-supplied Logical File Path, or create your own.

Double-click on the relevant Logical File Path, then select/double-click on the relevant Syntax group (AS/400, UNIX, or Windows).

Assign the physical path to which the archive files will be written.

 

image 2

 

Next, you need to configure the naming convention of the archive files.

Select the relevant Logical File Path, and go to Logical File Name Definition:

 

image 3

 

In the Physical file parameter, select the relevant parameters you wish to use to describe the archive files. See OSS Note 35992 for all of the possible parameters you can choose.

 

Step 2 - If you will be storing the archive files in a 3rd party storage system (have I mentioned I highly recommend this), you need to configure the content repository.

image 15

 

Enter the Content Repository Name, Description, etc.  The parameters entered will be subject to the 3rd party storage requirements.

 

Step 3 is to create the archive object for the relevant InfoCube or DSO:

Go to transaction RSA1:

image 5

 

Find and select the relevant InfoCube/DSO, right-click and then click on Create Data Archiving Process.

 

The following tabs will lead you through the rest of the necessary configuration.

The General Settings tab is where you will select whether you are going to configure an ADK based archived object, a Nearline Storage (NLS) object or a combination.

image 6

 

On the Selection Profile tab, if the time slice characteristic isn’t a key field, select the relevant field from the drop down and select this radio button:

image 7

 

If using the ADK method, configure the following parameters:

Enter the relevant Logical File Name, Maximum size of the archive file, the content repository (if using 3rd party storage), whether the delete jobs and store jobs should be scheduled manually or automatically, and if the delete job should read the files from the storage system.

image 8

You then need to Save and Activate the Data Archiving Process.

 

Once the archive object has been activated, you can then either schedule the archive process through the ADK (Archive Development Kit) using transaction SARA, or you can right click on the InfoCube/DSO and select Manage ADK Archive.

 

image 9

 

Click on the Archiving tab:

image 10

 

And, click on Create Archiving Request.

 

When submitting the Archive Write Job, I recommend selecting the check box for Autom. Request Invalidation.

If this is selected and an error occurs during the archive job, the system will automatically set the status of the run to ‘99 Request Canceled’ so that the lock will be deleted.

image 13 

 

If submitting the job through RSA1 -> Manage, select the appropriate parameters in the Process Flow Control section:

 

image 14

 

When entering the time slice criteria for the archive job, keep in mind that a write lock will be placed on the relevant InfoCube/DSO until both the archive write job and the archive delete job have completed. 

 

Additional topics to consider when implementing an archive object for an InfoCube/DSO:

  • For ODS objects, ensure all requests have been activated
  • For InfoCubes, ensure the requests to be archived have been compressed
  • Recommended to delete the change log data (for the archived time slice)
  • Prior to running the archive jobs, stop the relevant load job
  • Once archiving is complete, resume relevant load job
 

In addition to data archiving, here are some SAP recommended NetWeaver Housekeeping items to consider:

 

From the SAP Data Management Guide that can be found at www.service.sap.com/ilm

 

(Be sure to check back every once in a while, as this gets updated every quarter).

There are recommendations for tables such as:

  • BAL*
  • EDI*
  • RSMON*
  • RSBERRORLOG
  • RSDDSTATAGGRDEF
  • RSPC* (BW Process Chains)
  • RSRWBSTORE
  • Etc.

There are also several SAP OSS Notes that describe options for tables that you do not need to archive:

Search SAP Notes on Clean-Up Programs

www.service.sap.com/notes

Table RSBATCHDATA

  • Clean-up program RSBATCH_DEL_MSG_PARM_DTPTEMP

Table ARFCSDATA

  • Clean-up program RSARFCER

Tables RSDDSTAT

  • Clean-up program RSDDK_STA_DEL_DATA

Table RSIXWWW

  • Clean-up program RSRA_CLUSTER_TABLE_REORG

Table RSPCINSTANCE

  • Clean-up program RSPC_INSTANCE_CLEANUP
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20375%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

----------------------------------------------------------------------------------------------------------------

But actually, there are no cases of archiving BI in Korea..
And with hardware prices falling the way they are these days, just buying more hardware might be the better deal.. =_=

Posted by AgnesKim
Technique/SAP BW | 2010. 7. 29. 14:10

BW 7.30: Simple modeling of simple data flows
Thomas Rinneberg SAP Employee
Company: SAP AG
Posted on Jul. 28, 2010 02:05 PM in Business Intelligence (BI)

 

Have you ever thought: how many steps do I need to go through before I can load a flat file into BW? I need so many objects; above all I need lots of InfoObjects before I can even start creating my InfoProvider. Then I need a DataSource, a transformation (where I need to draw many arrows), a DTP and an InfoPackage. I just want to load a file! Why is there no help from the system?

Now there is - BW 7.30 brings the DataFlow Generation Wizard!

You start by going to the BW Data Warehousing Workbench (as always), then selecting the context menu entry "Generate Data Flow..." on either the file source system (if you just have a file and want to generate everything needed to load it), or on an already existing DataSource (if you have that part already done; this works for non-file source systems as well!), or on an InfoProvider (if you have your data target already modeled and just want to push some data into it).

Context Menu to start data flow wizard


Then the wizard will pop up:

Step 1 - Source options

Here, we have started from the source system. If you start from the InfoProvider, the corresponding step will not be shown in the progress area on the left, since you have selected that already. Same for the DataSource.

I guess you noticed already: ASCII is missing from the file type dropdown (how sad! However, please read the wizard text in the screenshot above: it is only in the wizard that it is not supported, because the screen would become too complex). And look closer: there is "native XLS file". Yes, indeed. No more "save as CSV" needed in Excel. You can just specify your Excel file in the wizard (and in the DataSource maintenance as well). There is just one flaw for those who want to go straight to batch upload: the Excel installation on your PC or laptop is used to interpret the file contents, so it is not possible to load Excel files from the SAP application server. For this, you still need to save as CSV first, but the CSV structure is identical to the XLS structure, so you do not need to change the DataSource.

OK, let's fill out the rest of the fields: file name of course, DataSource, source system, blah blah (oops, all of this is prefilled after selecting the file!), plus the ominous data type (yes, we still can't live without that)

Step 1 - Pre-Filled Input Fields

and "Continue":

Step 2 - CSV Options

Step 2 - Excel Options

One remark on the header lines: if you enter more than one (and it is recommended to have at least one line containing the column headers), we expect the column headers to be the last of the header lines, i.e. directly before the data. Now let's go on:

Step 3 - Data Target

The following InfoProvider Types and Subtypes are available:

  • InfoCube – Standard and Semantically Partitioned
  • DataStore-Object – Standard, Write Optimized and Semantically Partitioned
  • InfoObject – Attributes and Texts
  • Virtual Provider – Based on DTP
  • Hybrid Provider – Based on DataStore
  • InfoSource
This is quite a choice. For those of you who got lost in that list, have a look at the decision tree, which is available via the "i" button on the screen. As a hint: a standard DataStore object is good for most cases ;-)

Step 4 - Field Mapping

This is the core of the wizard. At this point, the file has already been read and parsed, and the corresponding data types and field names have been derived from the data of the file and the header line (if the file has one). In case you want to check whether the system did a good job, just double click the field name in the first column.

This screen also defines the transformation (of course only a 1:1 mapping, but this will do for most cases; otherwise you can simply modify the generated transformation in the transformation UI later) as well as the target InfoProvider (if it does not already exist) plus the necessary InfoObjects. You can choose from existing InfoObjects (the "Suggestion" will give you a ranked list of InfoObjects that match your fields more or less well), or you can let the wizard create "New InfoObjects" after completion. The suggestion uses a variety of search strategies, from data type matches via text matches to matches already used in 3.x or 7.x transformations.

And that was already the last step:

Step 5 - End

After "Finish", the listed objects are generated. Note that no InfoPackage will be generated, because the system generates the DTP to access the file directly rather than going through the PSA.


Don't miss any of the other information on BW 7.30, which you can find here.

 
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20105%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

I have no idea why this would ever be needed;; scrapping it for now anyway.

Posted by AgnesKim
Technique/SAP BW | 2010. 7. 25. 02:21

Delta Queue Diagnosis
P Renjith Kumar SAP Employee
Company: SAP Labs
Posted on Jul. 23, 2010 04:25 PM in Business Intelligence (BI), SAP NetWeaver Platform

 

Many times we come across situations where there may be inconsistencies in the delta queue. To check for these, we can use a diagnostic tool. The report is explained in detail here.

The RSC1_DIAGNOSIS program is the diagnosis tool for the BW delta queue.

image

How to use this report?

Execute the report RSC1_DIAGNOSIS from SE38/SA38, with the DataSource and destination details.

Use

With the RSC1_DIAGNOSIS check program, the most important information about the status and condition of the delta queue is issued for a specific DataSource.

Output

You get the following details once the report is executed

  • General information about the DataSource and its version
  • Metadata of the DataSource and the generated objects for the DataSource
  • ROOSPRMSC table details of the DataSource, such as GETTID and GOTTID
  • ARFCSSTATE Status
  • TRFCQOUT Status
  • Records check with Recorded status
  • Inconsistencies in delta management tables
  • Error details if available.

Let's see the output format of the report.

image

image

How to analyze?

Before analyzing this output we need to know some important tables and concepts. Let us see

The delta management tables

DeltaQueue Management Tables : RSA7

Tables

ROOSPRMSC            :  Control Parameter Per DataSource Channel

ROOSPRMSF            :  Control Parameters Per DataSource

TRFCQOUT              :  tRFC Queue Description (Outbound Queue)

ARFCSSTATE            :  Description of ARFC Call Status (Send)

ARFCSDATA             :  ARFC Call Data (Callers)

The delta queue is made up of three qRFC tables, namely ARFCSDATA, which holds the data, and ARFCSSTATE and TRFCQOUT, which control the data flow to the BI system.

Now we need to know about the TID (transaction ID). You can see two things: GETTID and GOTTID. Now we will see what those are.

GETTID and GOTTID can be seen in table ROOSPRMSC.

image

GETTID: Delta Queue, Pointer to Maximum Booked Records in BW, i.e. this refers to the last-but-one delta TID.

GOTTID: Delta Queue, Pointer to Maximum Extracted Records, i.e. this refers to the last delta TID that has reached BW (used in case of a repeat delta).

The system deletes the LUWs greater than GETTID and less than or equal to GOTTID. This is because the delta queue holds only the last-but-one delta and the already loaded delta.

Now we will look at the TID in detail.

TID = ARFCIPID + ARFCPID + ARFCTIME + ARFCTIDCNT (field contents)

All four fields can be seen in the table ARFCSSTATE.

ARFCIPID   : IP address
ARFCPID    : Process ID
ARFCTIME   : UTC time stamp since 1970
ARFCTIDCNT : Current number

To show how this is split, let us take the GETTID:

GETTID = 0A10B02B0A603EB2C2530020

It is separated like this (8 + 4 + 8 + 4) and mapped to the four fields of the table.

GETTID : 0A10B02B   0A60  3EB2C253  0020

ARFCIPID   = 0A10B02B
ARFCPID    = 0A60
ARFCTIME   = 3EB2C253
ARFCTIDCNT = 0020

Enter these values as the selection in table ARFCSSTATE; here you can find the details of the TID.
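As a small illustration (mine, not from the original article), the 8 + 4 + 8 + 4 split can be reproduced in ABAP with simple offset/length access and the pieces then used as selection criteria on ARFCSSTATE:

* Decompose a 24-character TID into the four ARFCSSTATE components
DATA: lv_tid        TYPE c LENGTH 24 VALUE '0A10B02B0A603EB2C2530020',
      lv_arfcipid   TYPE c LENGTH 8,
      lv_arfcpid    TYPE c LENGTH 4,
      lv_arfctime   TYPE c LENGTH 8,
      lv_arfctidcnt TYPE c LENGTH 4.

lv_arfcipid   = lv_tid+0(8).     "IP address (hex)
lv_arfcpid    = lv_tid+8(4).     "process ID
lv_arfctime   = lv_tid+12(8).    "UTC time stamp since 1970 (hex)
lv_arfctidcnt = lv_tid+20(4).    "current number

WRITE: / 'ARFCIPID   =', lv_arfcipid,
       / 'ARFCPID    =', lv_arfcpid,
       / 'ARFCTIME   =', lv_arfctime,
       / 'ARFCTIDCNT =', lv_arfctidcnt.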

image

Here you find details of TID.

Now we move on to the output of the report.

image

How to get the generated program?

20001115174832 = Time of generation

/BI0/QI0HR_PT_20001 = Generated extract structure

E8VDVBZO2CTULUAENO66537BO = Generated program

But to display the generated program you need to add the prefix "GP" to the generated program name; it can then be displayed in SE38.

Adding the prefix 'GP' gives GPE8VDVBZO2CTULUAENO66537BO

How to check details in STATUS ARFCSSTATE?

The output displays an analysis of the ARFCSSTATE status in the form

STATUS READ 100 times

LOW  <Date> <Time> <TID> <SID-CLNT>
HIGH <Date> <Time> <TID> <SID-CLNT>

STATUS RECORDED 200 times

LOW  <Date> <Time> <TID> <SID-CLNT>
HIGH <Date> <Time> <TID> <SID-CLNT>

READ             = Repeat delta entries with TID.

RECORDED     = Delta entries

Using this analysis, you can see whether there are obvious inconsistencies in the delta queue. From the above output, you can see that there are 100 LUWs with the READ status (that is, they are already loaded) and 200 LUWs with the Recorded status (that is, they still have to be loaded).  For a consistent queue, however, there is only one status block for each status. That is, 1 x Read status, 1 x Recorded status. If there are several blocks for a status, then the queue is not consistent. This can occur for the problem described in note 516251.

How to check details in STATUS TRFCQOUT?

Only LUWs with STATUS READY or READ should appear in TRFCQOUT. Another status indicates an error. In addition, the GETTID and GOTTID are issued here with the relevant QCOUNT.

Status READ   = Repeat delta entries with low and high TID

Status READY = Delta entries ready to be transferred.

If the text line "No Record with NOSEND = U exists" is not issued, then the problem from note 444261 has occurred.

In our case we did not get the READ, READY or RECORDED status; that's why it shows 'No Entry in ARFCSSTATE' and 'No Entry in TRFCQOUT'. Normally, however, you will find them.

Checking Table level inconsistencies

In addition, this program lists possible inconsistencies between the TRFCQOUT and ARFCSSTATE tables.

If you see the following in the output

"Records in TRFCQOUT w/o record in ARFCSSTATE"

This indicates an inconsistency at table level; to correct it, check note 498484.

The records issued for this check must be deleted from the TRFCQOUT table. This allows further deltas without reinitialization. However, if you are not certain that the data was loaded correctly into BW (see note 498484) and was not duplicated, you should carry out a reinitialization.

 

 

 

 http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/20226%3Futm_source%3Dtwitterfeed%26utm_medium%3Dtwitter%26utm_campaign%3DFeed%253A+SAPNetworkWeblogs+%2528SAP+Network+Weblogs%2529

Posted by AgnesKim