SAP BW: July 2007

Saturday, July 28, 2007

ODS vs CUBE

Difference between ODS and InfoCube

The main difference between the ODS Object and the PSA or InfoCube is that the ODS Object allows existing data to be changed. Whereas an InfoCube principally allows inserts, and only allows deletion on the basis of requests, data in an ODS Object can be changed during the staging process. This enables an ODS Object to be used as a consolidation object in a data warehouse. Data in the PSA can only be changed manually or by customer programs, not by the staging mechanism.

Unlike ODS Objects, InfoCubes have a mandatory time dimension that allows you to look at particular relationships in relation to time periods. For example, you can look at how relationships have changed over a certain time period.

An ODS Object is principally used for analyzing the status of data at a certain point in time. This allows you to see what relationships are currently like. Exceptionally you can also track history in ODS Objects by adding a date to the key fields of the ODS Object.

It is not always necessary to implement ODS Objects in every scenario; rather, it depends on the requirements of each scenario. You should only use an ODS Object if the requirements of your scenario fit one of the three usage possibilities outlined above (Inbound ODS, Consistent ODS, Application-related ODS). An ODS Object placed in the data flow to an InfoCube without having a function does nothing except hinder loading performance.

Thursday, July 26, 2007

BW Authorizations

BW Authorizations:

The activities that you can carry out in SAP SEM-BPS are covered by the SAP authorization concept. This means that you can assign different access rights to planning functionality to the people who work with the SEM System.

Integration

The system checks the special authorization objects that SEM-BPS defines and, if necessary, also those authorization objects that are defined for reporting in the SAP Business Information Warehouse environment. In this case, the SEM-BPS users must have both the SEM-BPS application-specific authorizations and the general SAP BW reporting authorizations. You manage the SEM-BPS authorizations using the system administration tools and the BW reporting authorizations using the relevant functions under Business Explorer → Authorizations → Reporting - Authorization Objects.

To assign authorizations for changing and displaying plan data separately, you must include the ACTVT (activity) field in the reporting authorization object. In this field the value 02 represents the authorization to change and 03 the authorization to display plan data. If you do not include the field, then this corresponds to an authorization to change plan data.
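The rule above can be sketched as a small model. This is an illustrative Python sketch of the documented ACTVT behavior, not SAP code, and the function name is made up:

```python
# Illustrative model of the ACTVT field behavior described above (not SAP code).
# Value 02 = change authorization, 03 = display authorization. If the ACTVT
# field is not included in the reporting authorization object, the
# authorization counts as change authorization.

def allowed_activities(actvt_values):
    """Return the plan-data activities permitted by one authorization."""
    if actvt_values is None:  # ACTVT field not included in the object
        return {"change"}
    return {"change" if v == "02" else "display"
            for v in actvt_values if v in ("02", "03")}

print(allowed_activities(None))          # -> {'change'}
print(allowed_activities(["02", "03"]))  # change and display
```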

In addition to that, because of internal dependencies, you need authorization for the following authorization objects for data entry using planning layouts:

· S_BDS_DS: This authorization object controls access to documents that belong to a document set of the Business Document Service (BDS).

· S_TRANSLAT: This authorization object controls access to the translation functions of the SAP System.

Features

The following authorization objects exist for the SEM Business Planning and Simulation component:

· R_AREA: You use this authorization object to control access to planning areas and all subordinate objects. You must set up read access to planning areas for people who will work with the SEM-BPS component. Otherwise, they will not be able to access any of the subordinate planning elements.

· R_PLEVEL: You use this authorization object to control access to planning levels and all subordinate objects. This authorization object is also relevant for accessing documents of the SEM-BIC component.

· R_PACKAGE: You use this authorization object to control access to planning packages (including ad hoc packages).

· R_METHOD: You use this authorization object to control access to planning functions and the appropriate parameter groups.

· R_PARAM: You use this authorization object to control access to individual parameter groups of a certain planning function.

· R_BUNDLE: You use this authorization object to control access to global planning sequences (you control authorizations for planning sequences, which you create for a planning level, with the authorization objects R_METHOD, R_PLEVEL, or R_AREA).

No separate execution authorization is defined for this authorization object. Whether a global planning sequence can be executed depends on the authorization objects for the planning functions contained in it.

· R_PROFILE: You use this authorization object to control access to planning profiles. A planning profile restricts the objects that can be viewed. If you wish to view the planning objects, you must have at least display authorization for the appropriate planning profile.

· R_PM_NAME: You use this authorization object to control access to planning folders. In order to be able to work with planning folders, you also require the necessary authorizations for the planning objects combined in the folder.

· R_WEBITF: You use this authorization object to control access to Web interfaces that you create and edit with the Web Interface Builder, and from which you can generate Web-enabled BSP applications.

· R_STS_PT: You use this authorization object to control access to the Status and Tracking System. The object enables a check of whether a user is allowed to access a certain subplan, or a version of it, with the Status and Tracking System.

· R_STS_CUST: You use this authorization object to control access to Customizing for the Status and Tracking System. The object determines whether or not a user may execute the Customizing.

· R_STS_SUP: This authorization object provides the assigned users with the status of a superuser in relation to the Status and Tracking System. The object enables change access to all plan data, regardless of whether and where the user is assigned in the underlying cost center hierarchy. The authorization object is intended for members of a staff controller group who are not part of the line organization of the company but who must nevertheless be able to intervene in the planning process.

In accordance with the hierarchical relationships that exist between the various types of planning objects, authorizations that are assigned to an object on a higher level are passed on to its subordinate objects. An authorization that has been passed on can be enhanced but not restricted on a lower level. The following table presents the combination possibilities using the example of a change authorization for planning area and level:

Change Planning Area | Change Planning Level | Authorization Available for Level
---------------------|-----------------------|----------------------------------
yes                  | no                    | yes
yes                  | yes                   | yes
no                   | no                    | no
no                   | yes                   | yes
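The combination table above follows one simple rule: because an inherited authorization can be enhanced but never restricted, change authorization is available on the level if it is granted on either object. A minimal Python sketch of that rule (illustrative only, not SAP code):

```python
# Illustrative model of authorization inheritance (not SAP code): an
# authorization granted on the planning area is passed down to the planning
# level and cannot be restricted there, so change authorization is available
# on the level if it is granted on either object.

def change_auth_for_level(change_area: bool, change_level: bool) -> bool:
    """Effective change authorization on the planning level."""
    return change_area or change_level

# Reproduces the four rows of the combination table:
for area, level in [(True, False), (True, True), (False, False), (False, True)]:
    print(area, level, "->", change_auth_for_level(area, level))
```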

In practice this behavior means that you can proceed according to two different strategies when setting up authorizations:

· Minimization of Customizing Effort: You assign authorizations for planning objects on as high a level as possible, and thereby enable access to the planning objects without further authorization assignment on lower levels.

· Optimization of Delimitation of Access Rights: You assign authorizations for planning objects on as low a level as possible, and therefore make sure that access to a planning object is only possible for the person responsible for this.

Activities

Create the user profiles you require and then assign authorization objects to these profiles. Then assign the newly created user profiles to possible users.

You can find further information on the activities associated with the different authorization objects in the online documentation on the authorization objects themselves. You can call this up in the maintenance transaction "Role maintenance" (PFCG).

Wednesday, July 25, 2007

Sales and Distribution Tables

commonly used SD Tables:

KONV    Conditions for Transaction Data

KONP    Conditions for Items

LIKP    Delivery Header Data

LIPS    Delivery: Item Data

VBAK    Sales Document: Header Data

VBAP    Sales Document: Item Data

VBBE    Sales Requirements: Individual Records

VBEH    Schedule Line History

VBEP    Sales Document: Schedule Line Data

VBFA    Sales Document Flow

VBLB    Sales Document: Release Order Data

VBLK    SD Document: Delivery Note Header

VBPA    Sales Document: Partner

VBRK    Billing: Header Data

VBRP    Billing: Item Data

VBUK    Sales Document: Header Status and Administrative Data

VBUP    Sales Document: Item Status

VEKP    Handling Unit - Header Table

VEPO    Packing: Handling Unit Item (Contents)

VEPVG   Delivery Due Index

Tuesday, July 24, 2007

Conversion Routines in SAP BW

Conversion Routine:

Conversion routines are used in SAP BW so that the characteristic values (key) of an InfoObject can be displayed or used in a different format from the one in which they are stored in the database. They can also be stored in the database in a different format from their original form, and values that only appear different can be consolidated into one.

Conversion routines that are often implemented in SAP BW are now described.

Integration

In SAP BW, conversion routines essentially serve to simplify the input of characteristic values at query runtime. For example, for cost center 1000, you do not have to enter the long database value with leading zeros, 0000001000, but just 1000. Conversion routines are therefore linked to characteristics (InfoObjects) and can be used by them.
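The padding behavior described here is what the ALPHA conversion routine does for numeric keys. A short Python sketch of that behavior (the real routine is implemented in ABAP; the field length of 10 is just an example):

```python
# Sketch of ALPHA-style conversion for a 10-character cost center key.
# INPUT: external -> internal (zero-pad purely numeric values);
# OUTPUT: internal -> external (strip the leading zeros again).

FIELD_LEN = 10

def alpha_input(value: str) -> str:
    value = value.strip()
    if value.isdigit():                    # purely numeric: right-align, zero-pad
        return value.rjust(FIELD_LEN, "0")
    return value                           # otherwise keep the value as entered

def alpha_output(value: str) -> str:
    return value.lstrip("0") if value.strip().isdigit() else value

print(alpha_input("1000"))         # -> 0000001000
print(alpha_output("0000001000"))  # -> 1000
```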

Conversion routines can also be applied during data loading, although it is important to note that conversion routines are often already defined for DataSource fields (particularly from SAP source systems). The properties of the replicated DataSource fields are displayed in the transfer structure or DataSource maintenance.

In many cases it is desirable to store the conversion routines of these fields in the corresponding InfoObject on the BW side too. It is therefore necessary, when defining transfer rules, to consider the connection between the conversion routine of the InfoObject in the communication structure and the conversion routine of the transfer structure field.

When loading data, note that data extracted from SAP source systems is already in the internal format and is not converted. When loading flat files, or loading using a BAPI or DB Connect, the conversion routine displayed signifies that an INPUT conversion is executed before writing to the PSA. For example, a date field is delivered from a flat file in the external format '10.04.2003'. If a conversion routine has been specified in the transfer structure maintenance, this field can be converted to the internal format '20030410' in the transfer rules, according to that conversion routine.
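The date example can be sketched the same way; an illustrative Python model of the INPUT conversion from the external format '10.04.2003' to the internal format '20030410':

```python
from datetime import datetime

# INPUT conversion of a date delivered in the external format DD.MM.YYYY
# (e.g. from a flat file) into the internal format YYYYMMDD.
def date_input(external: str) -> str:
    return datetime.strptime(external, "%d.%m.%Y").strftime("%Y%m%d")

print(date_input("10.04.2003"))  # -> 20030410
```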

Conversion routines ALPHA, NUMCV, and GJAHR check whether data exists in the correct internal format before it is updated. For more on this, see the extensive documentation in the BW system in the transaction for converting to conforming internal values (transaction RSMDCNVEXIT). If the data is not in the correct internal format, an error message is issued. These three conversion routines can be set in transfer rule maintenance so that a check is not executed but an INPUT conversion is. Make this setting using the Optional Conversion flag in transfer rules maintenance. Both the check and the conversion are executed in the transfer rules for the target field.

Business Content objects are delivered with conversion routines if they are also used by the DataSource in the source system. The external presentation is then the same in both systems. The conversion routines used for the R/3 DataSource fields are then transferred to the BW system when the DataSources from the SAP source systems are replicated.

Saturday, July 21, 2007

BW Integrations

An analysis of the integration facilities that come with BW 3.0

OLAP BAPI: SAP BW 3.0 comes with the OLAP BAPI interface (OBI), which provides functions that third-party reporting tools can use to access BW InfoCubes. It provides an open interface to access any information that is available through the OLAP engine.

Integrating with XML: The OLAP BAPI serves as the basis for the SAP implementation of XML for Analysis, an XML API based on the Simple Object Access Protocol (SOAP) designed for standardized access to an analytical data provider over the web. The XML interface introduced with the SAP BW 3.0 release accepts XML data streams compliant with SOAP. Unlike all other SAP BW interfaces, with the XML interface the actual data transfer is initiated by the source system.

Open Hub Service: The Open Hub Service allows controlled distribution of consistent data from any SAP BW InfoProvider to flat files, database tables, and other applications, with full support for delta management, selections, projections, and aggregation. The Open Hub Service has InfoSpokes as its core metadata objects. With the SAP BW 3.0 release, InfoSpokes have become generally available.

Content Management Framework: The SAP Web Content Management Server stores unstructured information that users can go through and use efficiently. Integration with the SAP BW content management framework provides an integrated view on structured and unstructured information to the end user.

Friday, July 20, 2007

T-Codes for BW

Check out the commonly used T-Codes in BW

BW_TCODES.xls

Wednesday, July 18, 2007

Creation of Infoobjects

Steps for the creation of InfoObjects - a beautiful PPT

Creationofinfoobjects.ppt

What is IDOC

Find the beautiful PPT regarding IDocs

21IDOCs.ppt

Monday, July 16, 2007

How to retain deltas when you change LO extractor in Production system

A requirement may come up to add new fields to an LO cockpit extractor that is up and running in a production environment. This means the extractor is delivering daily deltas from SAP R/3 to the BW system.

Since this change is to be made in the R/3 production system, there is always a risk that the daily deltas of the LO cockpit extractor will get disturbed. If the delta mechanism is disturbed (the delta queue is broken), there is no alternative to re-initializing that extractor. However, this re-initialization is expensive in terms of time and resources. Moreover, no organization would be willing to provide that much downtime for live reporting based on that extractor.

As we all know, initialization of an LO extractor is a critical, resource-intensive, and time-consuming task. The prerequisites for filling the setup tables are: lock the users from transactional updates in the R/3 system, and stop all batch jobs that update the base tables of the extractor. Then schedule the setup jobs with suitable date ranges/document number ranges.

We came across such a scenario, where there was a requirement to add 3 new fields to the existing LO cockpit extractor 2LIS_12_VCITM. Initialization had been done for this extractor a year earlier, and the data volume was high.

We adopted a step-by-step approach to minimize the risk of the delta queue getting broken or disturbed. Hopefully this step-by-step procedure will help anyone who has to work through a similar scenario.

Step by Step Procedure:-

1. Carry out changes to the LO cockpit extractor in the SAP R/3 Dev system.
As per the requirement, add the new fields to the extractor.
These new fields might be present in the standard supporting structures that you get when you execute "Maintain DataSource" for the extractor in LBWE. If all required fields are present in the supporting structure mentioned above, just add these fields using the arrow buttons provided; there is no need to write user exit code to populate them.
However, if these fields (or some of the required fields) are not present in the supporting structures, you have to go for an append structure and user exit code. The coding in the user exit is required to populate the newly added fields. You write the ABAP code in the user exit under CMOD, in include ZXRSAU01.
All the above changes will ask you for a transport request. Assign an appropriate development class/package and include all these objects in a transport request.

2. Carry out changes in the BW Dev system for objects related to this change.
Carry out all necessary changes in the BW Dev system for the objects related to this change (InfoSource, transfer rules, ODS, InfoCubes, queries, and workbooks). Assign an appropriate development class/package and include all these objects in a transport request.

3. Test the changes in the QA system.
Test the new changes in the SAP R/3 and BW QA systems. Make any necessary corrections and include them in follow-up transports.

4. Stop the V3 batch jobs for this extractor.
The V3 batch jobs for this extractor are scheduled to run periodically (hourly, daily, etc.). Ask the R/3 system administrator to put this job schedule on hold or cancel it.

5. Lock out users and batch jobs on the R/3 side and stop the process chain schedule on BW.
To avoid changes to the database tables for this extractor, and hence the risk of data loss, ask the R/3 system administrator to lock out the users. The batch job schedule also needs to be put on hold or cancelled.
Ask the system administrator to clear any pending queues for this extractor in SMQ1/SMQ2. Pending or errored-out V3 updates in SM58 should also be processed.
On the BW production system, the process chain for the delta InfoPackage for this extractor should be stopped or put on hold.

6. Drain the delta queue to zero for this extractor.
Execute the delta InfoPackage from BW and load the data into the ODS and InfoCubes. Keep executing the delta InfoPackage until you get 0 records with a green light for the request on the BW side. You should also see 0 LUW entries in RSA7 for this extractor on the R/3 side.

7. Import the R/3 transports into the R/3 production system.
In this step we import the R/3 transport request related to this extractor, which also includes the user exit code. Ensure that there is no syntax error in include ZXRSAU01 and that it is active. Also ensure that objects such as the append structure are active after the transport.

8.Replicate the data source in BW system.
On BW production system, replicate the extractor (data source).

9.Import BW transport into BW Production system.
In this step we import BW transport related to this change into BW Production system.

10. Run the program to activate the transfer rules.
Execute program RS_TRANSTRU_ACTIVATE_ALL. Enter the InfoSource and source system name and execute. This makes sure that the transfer rules for this InfoSource are active.

11. Execute the V3 job manually on the R/3 side.
Go to LBWE and click on Job Control for the application area of this extractor (for 2LIS_12_VCITM it is application 12). Execute the job immediately; it should finish with no errors.

12. Execute the delta InfoPackage from the BW system.
Run the delta InfoPackage from the BW system. Since there has been no data update, this extraction request should be green with zero records (a successful delta extraction).

13. Restore the schedules on the R/3 and BW systems.
Ask the system administrator to resume the V3 update job schedule and the batch job schedule, and to unlock the users. On the BW side, restore the process chain schedule.

From the next day onwards (or as per the delta frequency), you should receive the delta for this extractor, with data populated for the new fields as well.

Friday, July 13, 2007

50 BW Interview questions

1) Please describe your experience with BEx (Business Explorer)
A) Rate your level of experience with BEx and the rationale for your self-rating

B) How many queries have you developed? :

C) How many reports have you written?

D) How many workbooks have you developed?

E) Experience with jump targets (OLTP, use jump target)

F) Describe experience with BW-compatible ETL tools (e.g. Ascential)

2) Describe your experience with 3rd party report tools (Crystal Decisions, Business Objects a plus)

3) Describe your experience with the design and implementation of standard & custom InfoCubes.

1. How many InfoCubes have you implemented from start to end by yourself (not with a team)?

2. Of these cubes, how many characteristics (including attributes) did the largest one have?

3. How much customization was done on the InfoCubes you have implemented?

4) Describe your experience with requirements definition/gathering.

5) What experience have you had creating Functional and Technical specifications?

6) Describe any testing experience you have:

7) Describe your experience with BW extractors

1. How many standard BW extractors have you implemented?

2. How many custom BW extractors have you implemented?

8) Describe how you have used Excel as a complement to BEx

A) Describe your level of expertise and the rationale for your self-rating (experience with macros, pivot tables and formatting)

9) Describe experience with ABAP

10) Describe any hands on experience with ASAP Methodology.

11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in. Describe that experience.

12) What is partitioning and what are the benefits of partitioning in an InfoCube?

A) Partitioning is the method of dividing a table (either column wise or row wise) based on the fields available which would enable a quick reference for the intended values of the fields in the table. By partitioning an infocube, the reporting performance is enhanced because it is easier to search in smaller tables. Also table maintenance becomes easier.
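The effect can be illustrated with a toy sketch in Python: if the fact rows are split into partitions by month, a query restricted to one month only scans that partition instead of the whole table (names and data here are made up for illustration):

```python
from collections import defaultdict

# Toy sketch of partitioning: fact rows grouped into one bucket per month.
rows = [
    {"calmonth": "200701", "amount": 100},
    {"calmonth": "200701", "amount": 50},
    {"calmonth": "200702", "amount": 70},
]

partitions = defaultdict(list)
for row in rows:
    partitions[row["calmonth"]].append(row)  # one "partition" per month

def total_for_month(month: str) -> int:
    # Only the matching partition is scanned, not the whole fact table.
    return sum(r["amount"] for r in partitions.get(month, []))

print(total_for_month("200701"))  # -> 150
```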

13) What does Rollup do?

A) Rollup creates aggregates in an infocube whenever new data is loaded.

14) What are the inputs for an infoset?

A) The inputs for an infoset are ODS objects and InfoObjects (with master data or text).

15) What internally happens when BW objects like Info Object, Info Cube or ODS are created and activated?

A) When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.

16) What is the maximum number of key fields that you can have in an ODS object?

A) 16.

17) What is the specific advantage of LO extraction over LIS extraction?

A) The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome; in LO, only one delta queue is used for delta management.

18) What is the importance of 0REQUID?

A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish between data from different load requests.

19) Can you add programs in the scheduler?

A) Yes. Through event handling.

20) What is the importance of the table ROIDOCPRMS?

A) It is the table of IDoc control parameters for a source system. This table contains the details of the data transfer, such as the source system of the data, the data packet size, and the maximum number of lines in a data packet. The data packet size can be changed through the control parameters option in SBIW, i.e., the contents of this table can be changed.

21) What is the importance of 'start routine' in update rules?

A) A Start routine is a user exit that can be executed before the update rule starts to allow more complex computations for a key figure or a characteristic. The start routine has no return value. Its purpose is to execute preliminary calculations and to store them in a global data structure. You can access this structure or table in the other routines.
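The start-routine pattern (compute once per data package, reuse per record) can be sketched in Python. This is illustrative only; real start routines are written in ABAP, and the field names here are invented:

```python
# Sketch of the start-routine pattern: a preliminary calculation executed once
# for the whole data package, stored in a global structure, and then accessed
# by the per-record update routine.

GLOBAL_TOTALS = {}  # plays the role of the global data structure

def start_routine(data_package):
    """Runs once before the update rules: total amount per cost center."""
    GLOBAL_TOTALS.clear()
    for record in data_package:
        cc = record["costcenter"]
        GLOBAL_TOTALS[cc] = GLOBAL_TOTALS.get(cc, 0) + record["amount"]

def update_routine(record):
    """Per-record routine reading the globally stored result."""
    record["cc_total"] = GLOBAL_TOTALS[record["costcenter"]]
    return record

package = [{"costcenter": "1000", "amount": 40},
           {"costcenter": "1000", "amount": 60}]
start_routine(package)
print([update_routine(r)["cc_total"] for r in package])  # -> [100, 100]
```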
22) When is IDOC data transfer used?

A) IDocs are used for communication between logical systems (such as SAP R/3, R/2, and non-SAP systems using ALE) and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems, based on an EDI interface. IDocs support a limited record size of 1000 bytes, so IDocs are not used when loading data into the PSA, where the data is more detailed. IDoc transfer is used when the record size is less than 1000 bytes.

23) What is partitioning characteristic in CO-PA used for?

A) For easier parallel search and load of data.

24) What is the advantage of BW reporting on CO-PA data compared with directly running the queries on CO-PA?

A) BW has a better performance advantage over reporting in R/3. For a huge amount of data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.

25) What is the function of BW statistics cube?

A) BW statistics cube contains the data related to the reporting performance and the data loads of all the InfoCubes in the BW system.

26) When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time data is uploaded?
A) No.

27) What is the function of 'selective deletion' tab in the manage->contents of an infocube?

A) It allows us to select a particular value of a particular field and delete its contents.

28) When we collapse an infocube, is the consolidated data stored in the same infocube or in a new one?

A) Data is stored in the same cube.

29) What is the effect of aggregation on the performance? Are there any negative effects on the performance?

A) Aggregation improves the performance in reporting.

30) What happens when you load transaction data without loading master data?

A) The transaction data gets loaded and the master data fields remain blank.

31) When given a choice between a single infocube and multiple InfoCubes with a multiprovider, what factors does one need to consider before making a decision?

A) One would have to see whether the InfoCubes are used individually. If they are often used individually, it is better to go for a multiprovider over several cubes, since reporting is faster for a query on an individual cube than on one big cube with a lot of data.

32) How many hierarchy levels can be created for a characteristic info object?

A) Maximum of 98 levels.

33) What is open hub service?

A) The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications, ensuring controlled distribution across several systems. The central object for the export of data is the InfoSpoke, with which you define the object from which the data comes and the target into which it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse. Data distribution becomes transparent through central monitoring of the distribution status in the BW system.

34) What is the function of 'reconstruction' tab in an infocube?

A) It reconstructs the deleted requests from the infocube. If a request has been deleted and later someone wants the data records of that request to be added to the infocube, one can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the infocube.

35) What are secondary indexes with respect to InfoCubes?

A) Index created in addition to the primary index of the infocube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.

36) What is DB connect and where is it used?

A) DB Connect is a direct database connection. It is used to connect an external database system to BW as a source system, so that data from its tables and views can be loaded into BW for reporting purposes.

37) Can we extract hierarchies from R/3 for CO-PA?

A) No, we cannot; there are no hierarchies in CO-PA.

38) Explain ‘field name for partitioning’ in CO-PA

A) CO-PA partitioning is used to decrease the package size (e.g. by company code).

39) What is V3 update method ?

A) It is an update method in the R/3 source system in which scheduled batch jobs collectively transfer the data recorded for the extract structure to the delta queue of the DataSource.

40) Differences between serialized and non-serialized V3 updates

41) What is the common method of finding the tables used in any R/3 extraction

A) By using the transaction LISTSCHEMA we can navigate the tables.

42) Differences between table view and infoset query

A) An InfoSet Query is a query using flat tables.

43) How to load data from one InfoCube to another InfoCube ?

A) Through the data mart interface, data can be loaded from one InfoCube to another InfoCube.

44) What is the significance of setup tables in LO extraction?
A) Setup tables hold the historical data that is read during a full load or delta initialization, since LO extractors do not read the application tables directly for these loads.

45) Difference between extract structure and DataSource

A) The DataSource defines the data coming from the source system, whereas the extract structure contains the record layout that the extractor fills; once the DataSource is replicated, you can define extraction and transfer rules for it.
B) The extract structure is a record layout of fields that correspond to InfoObjects.
C) The extract structure is created in the source system and replicated to the SAP BW system as part of the DataSource.

46) What happens internally when Delta is Initialized

47) What is referential integrity mechanism ?

A) Referential integrity is the property that guarantees that values in one column depend on values in another column. This property is enforced through integrity constraints.
48) What is activation of extract structure in LO ?

49) What is the difference between Info IDoc and data IDoc ?

50) What is D-Management in LO?
A) It refers to delta management: the method used in the LO delta update process, which is based on the change log/delta queue in LO.