SAP BW: 2007

Saturday, December 29, 2007

Questions and Answers on the V3 Update

V3 update: Questions and answers

Question 1
Update records are written to SM13 even though you do not use the extractors from the Logistics Cockpit (LBWE) at all.
Active DataSources have been accidentally delivered in a PI patch. For that reason, extract structures are set to active in the Logistics Cockpit. Call transaction LBWE and deactivate the active structures. From then on, no additional records are written to SM13.
If the system displays update records for application 05 (QM) in transaction SM13 even though the structure is not active, see Note 393306 for a solution.

Question 2
How can I selectively delete update records from SM13?
Start report RSM13005 for the respective module (e.g. MCEX_UPDATE_03).

  • Status COL_RUN INIT: without Delete_Flag but with VB_Flag (the records are updated).
  • Status COL_RUN OK: with Delete_Flag (the records are deleted for all modules with COL_RUN = OK).

With the IN_VB flag, data is only deleted if there is no delta initialization; otherwise, the records are updated.
MAXFBS: the number of records processed without a commit.

ATTENTION: The delta records are deleted irrevocably after executing report RSM13005 (without the IN_VB flag). You can reload the data into BW only with a new delta initialization!

Question 3
What can I do when the V3 update loops?
Refer to Note 0352389. If you need a fast solution, simply delete all entries from SM13 (executed for V2); however, this does not solve the actual problem.

ATTENTION: THIS CAUSES DATA LOSS. See question 2 !

Question 4
Why has SM13 not been emptied even though I have started the V3 update?

  • The update record in SM13 contains several modules (for example, MCEX_UPDATE_11 and MCEX_UPDATE_12). If you start the V3 update for only one module, the other module still has INIT status in SM13 and is waiting for the corresponding collective run. In some cases, the entry might also not be deleted even if the V3 update has been started for the second module. In this case, schedule report RSM13005 with the DELETE_FLAG (see question 2).
  • V3 updating no longer functions after the PI upgrade because you did not load all the delta records into the BW system prior to the upgrade. Proceed as described in Note 328181.

Question 5
The entries from SM13 have not been retrieved even though I followed note 0328181!
Check whether all entries were actually deleted from SM13 for all clients. Look for records within the last 25 years with user * .

Question 6
Can I schedule the V3 update in parallel?
The V3 update already uses collective processing; it cannot be run in parallel.

Question 7
The Logistics Cockpit extractors deliver incorrect numbers. The update contains errors !
Have you installed the most up-to-date PI in your OLTP system?
You should have at least PI 2000.1 patch 6 or PI 2000.2 patch 2.

Question 8
Why has no data been written into the delta queue even though the V3 update was executed successfully?
You have probably not started a delta initialization. You have to start a delta initialization for each DataSource from the BW system before you can load the delta. Check RSA7 for an entry with a green status for the required DataSource. Refer also to Note 0380078.

Question 9
Why does the system write data into the delta queue, even though the V3 update has not been started?
You are using automatic goods receipt posting (transaction MRRS) and run it in the background. In this case the system writes the records for DataSources of application 02 directly into the delta queue (RSA7). This does not cause duplicate data records and does not result in any inconsistencies.

Question 10
Why am I not able to carry out a structural change in the Logistics Cockpit although SM13 is blank?
Inconsistencies have occurred in your system: there are records in update table VBMOD for which there are no entries in table VBHDR. Because of those missing records, there are no entries in SM13. To remove the inconsistencies, follow the instructions in the solution part of Note 67014. Note that no postings may be made in the system during the reorganization under any circumstances!

Question 11
Why is it impossible to schedule a V3 job from the Logistics Cockpit?
The job always terminates immediately. The update job cannot be scheduled because of missing authorizations. For further information, see Note 445620.


Monday, November 19, 2007

Differences between BW 3.5 and BI 7.0

Major differences between SAP BW 3.5 and SAP BI 7.0:

1. In InfoSets you can now include InfoCubes as well.

2. The remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.

3. The BI Accelerator (for now only for InfoCubes) helps reduce query runtime by roughly a factor of 10 to 100. The accelerator is a separate appliance and costs extra; vendors include HP and IBM.

4. Monitoring has been improved with a new portal-based cockpit, which means you need an Enterprise Portal resource on your project to implement the portal! :)

5. Search functionality has improved: you can search for any object, unlike in 3.5.

6. Transformations are in and routines are passé! Yes, you can still fall back to the old approach too.

7. The Data Warehousing Workbench replaces the Administrator Workbench.

8. Functional enhancements have been made for the DataStore object:
New type of DataStore object
Enhanced settings for performance optimization of DataStore objects.

9. The transformation replaces the transfer and update rules.

10. New authorization objects have been added

11. Remodeling of InfoProviders supports you in Information
Lifecycle Management.

12. The DataSource:
There is a new object concept for the DataSource.
Options for direct access to data have been enhanced.
From BI, remote activation of DataSources is possible in SAP source systems.

13. There are functional changes to the Persistent Staging Area (PSA).

14. BI supports real-time data acquisition.

15. SAP BW is now formally known as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW). The new features/major differences include:
a) ODS renamed to DataStore.
b) Inclusion of the write-optimized DataStore, which has no change log and whose requests do not need activation.
c) Unification of transfer and update rules.
d) Introduction of the "end routine" and "expert routine".
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI Accelerator, which significantly improves performance.
g) Loading through the PSA has become a must; it looks like there is no longer an option to bypass the PSA (see point 16).

16. Loading through the PSA has become mandatory; you cannot skip it, and there is no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process), together with transformations, replaces the transfer and update rules. Also, in the transformation you can now use a start routine, expert routine, and end routine during the data load.

New features in BI 7 compared to earlier versions:

i. New data flow capabilities such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).

ii. Enhanced and graphical transformation capabilities, such as drag-and-relate options.

iii. One level of transformation, which replaces the transfer rules and update rules.

iv. Performance optimization includes new BI Accelerator feature.

v. User management (includes new concept for analysis authorizations) for more flexible BI end user authorizations.

Monday, November 12, 2007

Need to delete data...? for enhancing datasource..?

There is no need to delete any data on the BW or R/3 side when enhancing a DataSource, provided the data is loaded into an ODS in overwrite mode.

Simply follow the steps below:

1. Add the new fields to the ODS and cubes and adjust the update rules.

2. Clear LBWQ (the update queue) by running the V3 job only.

3. Clear RSA7 (the delta queue) by running an InfoPackage to pull the data into BW.

4. Move the DataSource changes to production, replicate, and activate the transfer rules.

5. Delete the data in the setup tables with LBWG.

6. Fill the setup tables for the historic data.

7. Initialize the DataSource, if required, without data transfer (zero initialization).

8. Pull the data from R/3 into the BW ODS in overwrite mode with the Repair Full option.

Because the data is loaded in overwrite mode this is not a problem: just load the historic data again and push the delta from the ODS onward (to further ODS objects/cubes).

9. Push the delta from the ODS to the cube.

Sunday, October 28, 2007

Creation of Infoobjects

Creation of Infoobject Chars and Keyfigures


Sunday, October 21, 2007

Extracts

Extracts:

Since internal tables have fixed line structures, they are not suited to handle data sets with varying structures. Instead, you can use extract datasets for this purpose.

An extract is a sequential dataset in the memory area of the program. You can only address the entries in the dataset within a special loop. The index or key access permitted with internal tables is not allowed. You may only create one extract in any ABAP program. The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem.

An extract dataset consists of a sequence of records of a pre-defined structure. However, the structure need not be identical for all records. In one extract dataset, you can store records of different length and structure one after the other. You need not create an individual dataset for each different structure you want to store. This fact reduces the maintenance effort considerably.

In contrast to internal tables, the system partly compresses extract datasets when storing them. This reduces the storage space required. In addition, you need not specify the structure of an extract dataset at the beginning of the program, but you can determine it dynamically during the flow of the program.

You can use control level processing with extracts just as you can with internal tables. The internal administration for extract datasets is optimized so that it is quicker to use an extract for control level processing than an internal table.

Procedure for creating an extract:

  1. Define the record types that you want to use in your extract by declaring them as field groups. The structure is defined by including fields in each field group. ----- Defining an Extract -----
  2. Fill the extract line by line by extracting the required data.------ Filling an Extract with Data -----
  3. Once you have filled the extract, you can sort it and process it in a loop. At this stage, you can no longer change the contents of the extract. ------ Processing Extracts ------
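A minimal classic ABAP sketch of these three steps is shown below. The report name, the use of the demo table SFLIGHT, and the chosen fields are assumptions for illustration only.

REPORT zextract_demo.

* Example work fields; SFLIGHT is used purely for illustration.
DATA: gv_carrid TYPE sflight-carrid,
      gv_connid TYPE sflight-connid,
      gv_price  TYPE sflight-price.

* Step 1: define the record types of the extract as field groups.
* HEADER is special: its fields form the sort key and are part of every record.
FIELD-GROUPS: header, flights.

START-OF-SELECTION.
  INSERT gv_carrid gv_connid INTO header.
  INSERT gv_price INTO flights.

* Step 2: fill the extract line by line.
  SELECT carrid connid price FROM sflight
         INTO (gv_carrid, gv_connid, gv_price).
    EXTRACT flights.                    " appends one FLIGHTS record
  ENDSELECT.

* Step 3: sort and process the extract; its contents can no longer be changed.
  SORT.                                 " sorts by the HEADER fields
  LOOP.
    AT END OF gv_carrid.                " control level processing
      SUM.
      WRITE: / gv_carrid, gv_price.     " total price per carrier
    ENDAT.
  ENDLOOP.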

Sunday, October 14, 2007

Dimensions

Dimensions:

The fact table and the relevant dimension tables of an InfoCube are connected with one another relationally using dimension keys. A dimension key is generated by the system for each characteristic combination in a dimension table.

  • When a query is executed, the OLAP processor checks the dimension tables of the InfoCube to be evaluated for the characteristic combinations required by the selection.
  • The dimension keys determined in this way point the way to the information in the fact table.
  • Dimension tables consist of a maximum of 248 characteristics.
  • The Time dimension holds the time characteristics needed for analysis.
  • The Unit dimension contains the unit of measure and currency characteristics needed to describe the key figures properly.

The Data Packet dimension is used to identify discrete packets of information loaded into the InfoCube. In this way, packets can be deleted, reloaded or maintained individually

InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available InfoCube-independently. Characteristics relate to master data with their attributes and text descriptions.

An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a (large) fact table that contains the key figures for the InfoCube as well as several (smaller) dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.

An InfoCube fact table only contains key figures, in contrast to an ODS object, whose data part can also contain characteristics. The characteristics of an InfoCube are stored in its dimensions.

The dimensions and the fact table are linked to one another via abstract identification numbers (dimension IDs), which are in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity (the degree of fineness), in which the key figures are managed in the InfoCube.

Characteristics that logically belong together (district and area, for example, belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regards to data volume, which is desirable for reasons of performance. This InfoCube structure is optimized for Reporting.
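To make the fact/dimension linkage concrete, here is a hedged read sketch in classic ABAP Open SQL. The InfoCube name ZSALES, the key figure /BIC/ZAMOUNT, and the assumption that 0MATERIAL sits in dimension 1 are purely illustrative; only the general naming pattern (/BIC/F<cube> fact table, /BIC/D<cube>1 dimension table, DIMID, KEY_<cube>1, SID_<characteristic>) follows the usual BW convention and should be verified for a real cube.

* Hedged sketch only: cube, key figure, and characteristic names are assumptions.
DATA: BEGIN OF ls_result,
        sid_material TYPE i,             " SID of 0MATERIAL from the dimension table
        amount       TYPE p DECIMALS 2,  " summed key figure from the fact table
      END OF ls_result,
      lt_result LIKE STANDARD TABLE OF ls_result.

SELECT d~sid_0material SUM( f~/bic/zamount )
  INTO TABLE lt_result
  FROM /bic/fzsales AS f
  INNER JOIN /bic/dzsales1 AS d
    ON d~dimid = f~key_zsales1           " dimension key links fact and dimension
  GROUP BY d~sid_0material.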

Sunday, September 30, 2007

What Is SPRO In BW Project?

1) What is SPRO?
2) How is it used in a BW project?
3) What is the difference between IDoc and PSA in the transfer methods?

1. SPRO is the transaction code for the Implementation Guide, where you can make configuration settings.
* Type SPRO in the transaction box and you will get the screen Customizing: Execute Project.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to make the configuration settings:
SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information Warehouse.

2. SPRO is used to configure the following settings:
* General settings such as printer settings, fiscal year settings, ODS object settings, authorization settings, settings for displaying SAP documents, and so on.
* Links to other systems: links between flat files and BW systems, between R/3 and BW, and other data sources; links between the BW system and Microsoft Analysis Services, Crystal Enterprise, and so on.
* UD Connect settings: configuring the BI Java Connectors, establishing the RFC destination from SAP BW to the J2EE Engine, and installing availability monitoring for UD Connect.
* Automated processes: settings for batch processes, background processes, and so on.
* Transport settings: settings for source system name changes after transport and creating the destination for import post-processing.
* Reporting-relevant settings: BEx settings and general reporting settings.
* Settings for Business Content, which is already provided by SAP.

3. PSA (Persistent Staging Area): a holding area for raw data. It contains the detailed requests in the format of the transfer structure. It is defined per DataSource and source system, and is source-system dependent.

IDocs (Intermediate Documents): data structures used as API working storage for applications that need to move data into or out of SAP systems.

Saturday, September 22, 2007

Difference Between PSA, ALE

How data is transferred using the PSA and IDocs

The following update types are available in SAP BW:
1. PSA
2. ALE (data IDoc)

You determine the PSA or IDoc transfer method in the transfer rule maintenance screen. The process of loading the data is triggered for both transfer methods by a request IDoc sent to the source system. Info IDocs are used in both transfer methods and are transferred exclusively using ALE.

A data IDoc consists of a control record, a data record, and a status record. The control record contains, for example, administrative information such as the receiver, the sender, and the client. The status record describes the status of the IDoc, for example "Processed". If you use the PSA for data extraction, you benefit from increased flexibility (treatment of incorrect data records). Since you store the data temporarily in the PSA before updating it into the data targets, you can check the data and change it if necessary. Unlike a data request with IDocs, the PSA gives you various options for additional data updates into data targets:

InfoObject/Data Target Only - This option means that the PSA is not used as a temporary store. You choose this update type if you do not want to check the source system data for consistency and accuracy, or you have already checked this yourself and are sure that you no longer require this data since you are not going to change the structure of the data target again.

PSA and InfoObject/Data Target in Parallel (Package by Package) - BW receives the data from the source system, writes the data to the PSA and at the same time starts the update into the relevant data targets. Therefore, this method has the best performance.

The parallel update is described in detail in the following: a dialog process is started per data package, in which the data of this package is written into the PSA table. If the data is posted successfully into the PSA table, the system releases a second, parallel dialog process that writes the data to the data targets. In this dialog process the transfer rules are applied to the data records of the data package, the data is transferred to the communication structure, and it is then written to the data targets. The first dialog process (data posting into the PSA) confirms in the source system that it is completed, and the source system sends a new data package to BW while the second dialog process is still updating the data into the data targets.

The parallelism relates to the data packages; that is, the system writes the data packages into the PSA table and into the data targets in parallel. Caution: the maximum number of processes set in the source system in Customizing for the extractors does not restrict the number of processes in BW. Therefore, BW can require many dialog processes for the load process. Ensure that there are enough dialog processes available in the BW system. If there are not enough processes on the system side, errors occur. Therefore, this method is the least recommended.

PSA and then into InfoObject/Data Targets (Package by Package) - Updates data in series into the PSA table and into the data targets by data package. The system starts one process that writes the data packages into the PSA table. Once the data is posted successfully into the PSA table, it is then written to the data targets in the same dialog process. Updating in series gives you more control over the overall data flow compared to the parallel data transfer, since there is only one process per data package in BW. In the BW system the maximum number of dialog processes required for each data request corresponds to the setting that you made in Customizing for the extractors in the control parameter maintenance screen. In contrast to the parallel update, the system confirms that the process is completed only after the data has been updated into the PSA and also into the data targets for the first data package.

Only PSA - The data is not posted further from the PSA table immediately. It is useful to transfer the data only into the PSA table if you want to check its accuracy and consistency and, if necessary, modify the data. You then have the following options for updating data from the PSA table:

Automatic update - To update the data automatically into the relevant data targets once all data packages are in the PSA table and have been updated successfully there, choose Update Subsequently in Data Targets on the Processing tab page when you schedule the InfoPackage in the scheduler.

Tuesday, September 18, 2007

Interview Questions

Interview Questions:

1. Identify the statement(s) that is/are true. A change run...

a. Activates the new Master data and Hierarchy data
b. Aggregates are realigned and recalculated
c. Always reads data from the InfoCube to realign aggregates
d. Aggregates are not affected by change run

1: A, B

2. Which statement(s) is/are true about Multiproviders?

a. This is a virtual Infoprovider that does not store data
b. They can contain InfoCubes, ODSs, info objects and info sets
c. More than one info provider is required to build a Multiprovider
d. It is similar to joining the data tables

2: A, B

3. The structure of the PSA table created for an info source will be...

a. Featuring the exact same structure as Transfer structure
b. Similar to the transfer rules
c. Similarly structured as the Communication structure
d. The same as Transfer structure, plus four more fields in the beginning

3: D

4. In BW, special characters are not permitted unless it has been defined using this transaction:

a. rrmx
b. rskc
c. rsa15
d. rrbs

4: B

5. Select the true statement(s) about info sources:

a. One info source can have more than one source system assigned to it
b. One info source can have more than one data source assigned to it provided the data sources are in different source systems
c. Communication structure is a part of an info source
d. None of the above

5: A, C

6. Select the statement(s) that is/are true about the data sources in a BW system:

a. If the hide field indicator is set in a data source, this field will not be transferred to BW even after replicating the data source
b. A field in a data source won't be usable unless the selection field indicator has been set in the data source
c. A field in an info package will not be visible for filtering unless the selection field has been checked in the data source
d. All of the above

6: A, C

7. Select the statement(s) which is/are true about the 'Control parameters for data transfer from the Source System':

a. The table used to store the control parameters is ROIDOCPRMS
b. Field max lines is the maximum number of records in a packet
c. Max Size is the maximum number of records that can be transferred to BW
d. All of the above

7: A

8. The indicator 'Do not condense requests into one request when activation takes place' during ODS activation applies to condensation of multiple requests into one request to store it in the active table of the ODS.

a. True
b. False

8: B

9. Select the statement(s) which is/are not true related to flat file uploads:

a. CSV and ASCII files can be uploaded
b. The table used to store the flat file load parameters is RSADMINC
c. The transaction for setting parameters for flat file upload is RSCUSTV7
d. None of the above

9: C

10. Which statement(s) is/are true related to Navigational attributes vs Dimensional attributes?

a. Dimensional attributes have a performance advantage over Navigational attributes for queries
b. Change history will be available if an attribute is defined as navigational
c. History of changes is available if an attribute is included as a characteristic in the cube
d. All of the above

10: A, C

11. When a dimension is created as a line item dimension in a cube, the dimension IDs will be the same as the SIDs.

a. True
b. False

11: A

12. Select the true statement(s) related to the start routine in the update rules:

a. All records in the data packet can be accessed
b. Variables declared in the global area is available for individual routines
c. A return code greater than 0 will abort the whole packet
d. None of the above

12: A, B, C

13. If a characteristic value has been entered in InfoCube-specific properties of an InfoCube, only these values can be loaded to the cube for that characteristic.

a. True
b. False

13: A

14. After any changes have been done to an info set it needs to be adjusted using transaction RSISET.

a. True
b. False

14: A

15. Select the true statement(s) about read modes in BW:

a. Read mode determines how the OLAP processor retrieves data during query execution and navigation
b. Three different types of read modes are available
c. Can be set only at individual query level
d. None of the above

15: A, B

Wednesday, September 12, 2007

What is BDC and How you use it?

What is BDC
During data transfer, data is transferred from an external system into the SAP R/3 System. Typical cases are:
  • transferring data from an external system into an R/3 System as it is installed;
  • transferring data regularly from an external system into an R/3 System.

Example: If data for some departments in your company is input using a system
other than the R/3 System, you can still integrate this data in the R/3 System. To do
this, you export the data from the external system and use a data transfer method to
import it into the R/3 System.
Batch input with batch input sessions: data consistency is checked with the help of the screen logic.

With the batch input method, an ABAP program reads the external data that is to be
entered in the R/3 System and stores the data in a "batch input session". The
session records the actions that are required to transfer data into the system using
normal SAP transactions.

When the program has generated the session, you can run the session to execute
the SAP transactions in it. You can explicitly start and monitor a session with the
batch input management function (by choosing System -> Services -> Batch Input), or
have the session run in the background processing system.

Use the BDC_OPEN_GROUP function module to create a new session. Once you
have created a session, then you can insert batch input data into it with
BDC_INSERT. Use the BDC_INSERT function module to add a transaction to a
batch input session. Use the BDC_CLOSE_GROUP function module to close a
session after you have inserted all of your batch input data into it.
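A minimal sketch of this flow is given below. The session name, transaction code, screen, and field values are assumptions chosen only to illustrate the three function modules; a real session needs the complete screen sequence, recorded, for example, with transaction SHDB.

REPORT zbdc_demo.

* BDCDATA holds the recorded screens and field values for one transaction.
DATA: lt_bdcdata TYPE STANDARD TABLE OF bdcdata,
      ls_bdcdata TYPE bdcdata.

* 1. Open a new batch input session.
CALL FUNCTION 'BDC_OPEN_GROUP'
  EXPORTING
    client = sy-mandt
    group  = 'ZDEMO'                  " session name (assumption)
    user   = sy-uname
    keep   = 'X'.                     " keep the session after processing

* 2. Build the screen and field data for one transaction (values are examples only).
CLEAR ls_bdcdata.
ls_bdcdata-program  = 'SAPMF02K'.     " module pool of the first screen (example)
ls_bdcdata-dynpro   = '0100'.         " screen number (example)
ls_bdcdata-dynbegin = 'X'.
APPEND ls_bdcdata TO lt_bdcdata.

CLEAR ls_bdcdata.
ls_bdcdata-fnam = 'BDC_OKCODE'.       " field name / value pair
ls_bdcdata-fval = '/00'.
APPEND ls_bdcdata TO lt_bdcdata.

* Add one transaction to the session.
CALL FUNCTION 'BDC_INSERT'
  EXPORTING
    tcode     = 'FK01'                " transaction code (example)
  TABLES
    dynprotab = lt_bdcdata.

* 3. Close the session; it can then be processed in SM35 or in the background.
CALL FUNCTION 'BDC_CLOSE_GROUP'.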

Download a pdf of BDC&LSMW
http://www.savefile.com/files/799694

Monday, September 10, 2007

Some BW Questions

Some BW Questions

1. The following transactions are relevant to the data sources in an SAP BW source system.

a. RSA3
b. RSA4
c. RSA5
d. RSA6

2. True or False? A reference characteristic will use the SID table and master data table of the referred characteristic.

a. True
b. False

3. The following statements are not true about navigational attributes.

a. An attribute of an info object cannot be made navigational if the attribute-only flag on the attribute info object has been checked.
b. Navigational attributes can be used to create aggregates.
c. It is possible to make a display attribute to navigational in an info cube data without deleting all the data from the info cube.
d. Once an attribute is made navigational in an info cube, it is possible to change it back to a display attribute if the data has been deleted from the info cube.

4. True or False? It is possible to create a key figure without assigning currency or unit.

a. True
b. False

5. The following statements are true for compounded info objects.

a. An info cube needs to contain all info objects of the compounded info object if it has been included in the info cube.
b. An info object cannot be included as a compounding object if it is defined as an attribute only.
c. An info object can be included as an attribute and a compounding object simultaneously.
d. The total length of a compounded info object cannot exceed 60.

6. The following statements are true for an info cube.

a. Each characteristic of info cube should be assigned to at least one dimension.
b. One characteristic can be assigned to more than one dimensions.
c. One dimension can have more than one characteristic.
d. More than one characteristic can be assigned to one line item dimension.

7. The following statements are true for info cubes and aggregates.

a. Requests cannot be deleted if info cubes are compressed.
b. A request cannot be deleted from an info cube if that request (is compressed) in the aggregates.
c. Deleting a request from the cube will delete the corresponding request from the aggregate, if the aggregate has not been compressed.
d. All of the above.

8. The following statements are true regarding the ODS request deletion.

a. It is not possible to delete a request from ODS after the request has been activated.
b. Deleting an (inactive) request will delete all requests that have been loaded into the ODS after this request was loaded.
c. Deleting an active request will delete the request from the change log table.
d. None of the above.

9. The following statements are true for aggregates.

a. An aggregate stores data of an info cube redundantly and persistently in a summarized form in the database.
b. An aggregate can be built on characteristics or navigational attributes from the info cube.
c. Aggregates enable queries to access data quickly for reporting.
d. None of the above.

10. True or False? If an info cube has active aggregates built on it, the new requests loaded will not be available for reporting until the rollup has been completed successfully.

a. True
b. False

11. What is the primary purpose of having multi-dimensional data models?

a. To deliver structured information that the business user can easily navigate by using any possible combination of business terms to show the KPIs.
b. To make it easier for developers to build applications, that will be helpful for the business users.
c. To make it easier to store data in the database and avoid redundancy.
d. All of the above.

12. The following statements are true for partitioning.

a. If a cube has been partitioned, the E table of the info cube will be partitioned on time.
b. The F table of the info cube is partitioned on request.
c. The PSA table is partitioned automatically with several requests on one partition.
d. It is not possible to partition the info cube after data has been loaded, unless all the data is deleted from the cube.

13. The following statements are true for OLAP CACHE.

a. Query navigation states and query results are stored in the application server memory.
b. If the same query has been executed by another user the result sets can be used if the global cache is active.
c. Reading query results from OLAP cache is faster than reading from the database.
d. Changing the query will invalidate the OLAP cache for that query.

14. The following statements are true about the communication structure.

a. It contains all the info objects that belong to an info source.
b. All the data is updated into the info cube with this structure.
c. It is dependent on the source system.
d. All of the above.

15. The following statements are untrue about ODSs.

a. It is possible to create ODSs without any data fields.
b. An ODS can have a maximum of 16 key fields.
c. Characteristics and key figures can be added as key fields in an ODS.
d. After creating and activating, an export data source is created automatically.

Thursday, August 30, 2007

Creating CO-PA DataSource

Creating CO-PA DataSource for Transaction Data

This step has to be carried out in the R/3 system:

SAP IMG Menu

Integration with Other mySAP.com Components -> Data Transfer to the SAP Business Information Warehouse -> Settings for Application-Specific DataSources (PI) -> Profitability Analysis -> Create Transaction Data DataSource

Transaction Code

KEB0

Then carry out the following steps:

1. On the CO-PA/SAP BW: DataSource for Transaction Data screen:

a. Make the following entries:

DataSource: 1_CO_PA%CL%ERK is generated automatically. Do not change anything; you can add additional characters.
Create: X
Operating concern: your operating concern, e.g. BP01
Costing-based: X

b. Choose Execute.

c. Make the following entries:

Short text: CO-PA Baseline
Medium-length text: CO-PA Baseline
Long text: CO-PA Baseline
Field name for partitioning: ---

d. Select all characteristics from the segment level.

e. Select all characteristics from the segment table.

f. Select all value fields.

g. Choose InfoCatalog from the application toolbar.

2. The Create Object Directory Entry dialog box is displayed requesting the user to enter a development class. Enter a development class in the customer namespace and choose Save or choose Local object.

3. On the DataSource: Customer version Edit screen:

a. Select all possible checkboxes in column Selection.

b. Choose Save.

4. In the information dialog box, choose Enter.

Then replicate the datasource in BW

Thursday, August 23, 2007

Metadata Modelling

SAP BW Business Warehouse- Metadata Modelling

The administration services in SAP BW are accessed through the Administrator Workbench (AWB). It is the single point of entry for data warehouse development, administration, and maintenance tasks in SAP BW, with the metadata modeling component, the scheduler, and the monitor as its main components:

Metadata modeling: Metadata modeling component is the main entry point for defining the core metadata objects used to support reporting and analysis. This includes everything from defining the extraction process and implementing transformations to defining flat or multidimensional objects for storage of information.

Modeling Features

  • Metadata modeling provides a Metadata Repository where all the metadata is stored and a Metadata Manager that handles all the requests for retrieving, adding, changing, or deleting metadata.
  • Reporting and scheduling mechanism: reporting and scheduling are the processes required for the smooth functioning of SAP BW. The various batch processes in SAP BW need to be planned to provide timely results, to avoid resource conflicts from running too many jobs at a time, and to take care of logical dependencies between different jobs. These processes are controlled in the scheduler component of the AWB, either by scheduling single processes independently or by defining process chains for a complex network of jobs required to update the information available in the SAP BW system. The Reporting Agent controls the execution of queries in batch mode to print reports, identify exception conditions and notify users, and precompute results for web templates.
  • Administering the ETL service layer across multiple tiers: SAP's ETL service layer provides services for data extraction, data transformation, and loading of data. It also serves as the staging area for intermediate data storage for quality assurance purposes. The extraction technology of SAP BW is supported by the database management systems of mySAP technology and does not allow extraction from other database systems such as IBM IMS and Sybase. It does not support dBase, MS Access, and MS Excel file formats. However, it provides all the functionality required for loading data from non-SAP systems, as the ETL services layer provides open interfaces for loading non-SAP data.

Monday, August 6, 2007

Tickets and Authorization in SAP Business Warehouse

What are tickets? An example?


Tickets are the tracking tool by which the user tracks the work we do. They can be change requests, data loads, or whatever. They are typically classified as critical or moderate; critical can mean "needs to be solved within a day or half a day", depending on the client. After solving the issue, the ticket is closed by informing the client that the issue is resolved. Tickets are raised during a support project and may concern any issues or problems. If the support person faces an issue, he will ask the operator to raise a ticket; the operator raises the ticket and assigns it to the respective person. Critical means it is a most complicated issue; it depends on how you measure this. The concept of a ticket varies from contract to contract between companies. Generally, a ticket raised by the client is handled based on its priority, such as high priority, low priority, and so on. If a ticket is of high priority it has to be resolved as soon as possible; if it is of low priority it is considered only after attending to the high-priority tickets.

The typical tickets in a production Support work could be:
1. Loading any of the missing master data attributes/texts.
2. Create ADHOC hierarchies.
3. Validating the data in Cubes/ODS.
4. If any of the loads runs into errors then resolve it.
5. Add/remove fields in any of the master data/ODS/Cube.
6. Data source Enhancement.
7. Create ADHOC reports.

1. Loading any of the missing master data attributes/texts - This would be done by scheduling the infopackages for the attributes/texts mentioned by the client.
2. Create ADHOC hierarchies. - Create hierarchies in RSA1 for the info-object.
3. Validating the data in Cubes/ODS. - By using the Validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors then resolve it. - Analyze the error and take suitable action.
5. Add/remove fields in any of the master data/ODS/Cube. - Depends upon the requirement
6. Data source Enhancement.
7. Create ADHOC reports. - Create some new reports based on the requirement of client.

Checklists for a support project of BPS - To start the checklist:

1) InfoCubes / ODS / datatargets
2) planning areas
3) planning levels
4) planning packages
5) planning functions
6) planning layouts
7) global planning sequences
8) profiles
9) list of reports
10) process chains
11) enhancements in update routines
12) any ABAP programs to be run and their logic
13) major bps dev issues
14) major bps production support issues and resolution

Difference Between BW Technical and Functional

In general, functional means deriving the functional specification from the business requirement document. This job is normally done either by the business analyst or by a system analyst who has a very good knowledge of the business. In some large organizations there will be a business analyst as well as a system analyst.

Any business requirement or need for new reports or queries originates with the business user. This requirement is recorded, after discussion, by the business analyst. A system analyst analyzes these requirements and generates the functional specification document. In the case of BW this could also be called the logical design in data modeling.

After review, this logical design is translated into a physical design. This process defines all the required dimensions, key figures, master data, and so on.

Once this design is approved and signed off by the requester (the users), it is converted into practically usable objects using the SAP BW software. This is what is called technical. The whole process of creating InfoProviders, InfoObjects, InfoSources, source systems, and so on falls under the technical domain.

What is the role of consultant has to play if the title is BW administrator? What is his day to day activity and which will be the main focus area for him in which he should be proficient?

BW Administrator - the person who provides authorization access to different roles and profiles depending on the requirement.

For eg. There are two groups of people : Group A and Group B.

Group A - Manager

Group B - Developer

Now the Authorization or Access Rights for both the Groups are different.

For doing this sort of activity we need an administrator.

Friday, August 3, 2007

How to handle inventory in SAP BW

Controlling the inventory in BW
very helpful document

inventory.pdf

Business Warehouse- Architecture -

Implemented on top of the SAP Web Application Server, SAP BW provides a multi-tier architecture, along with a complete software development environment, system management tools, and additional functionality such as currency conversion and security. Although it is closely related to SAP R/3, SAP BW is a completely separate software package that comes with automated extraction and loading facilities.

Components of BW architecture

SAP BW is based on an integrated metadata concept, with metadata being managed by metadata services. SAP BW has the following layers:

  • Extraction, Transformation and Loading (ETL) services layer.
  • Storage services layer, with services for storing and archiving information.
  • Analysis and access services layer, which provides access to the information stored in SAP BW.
  • Presentation services layer, which offers different options for presenting information to end users.
  • Administration services.
  • Metadata services.


Wednesday, August 1, 2007

Useful Tables for BW Reporting

Tables for BW Reporting:

SAP... it's all just tables and programs, right? Sounds pretty easy: a couple of 1s here and a few 0s over there, and then you have a high-powered analytical data warehouse. Works for me.

I have this pessimistic theory that as technology grows we will be so focused on building new technologies that we will forget the basic principles that led to these developments. For instance, you don't think about how or why the database software works; it just does. But if something at that fundamental level became corrupt, everything above that layer would cease to work.

Much like the engine in your car. Drivers probably give about five seconds of thought to their car engine in a given year, but features like satellite radio, heated seats, and sunroofs seem to garner all the attention. Most drivers can tell you every detailed function of each of the buttons on the radio, but probably couldn't point out a spark plug if asked.

Long tangent, sorry.

Anyway, we use abstraction layers to simplify some complex technologies: Query Designer, Visual Composer, etc. However, when things don't work it really helps to know what's under the hood. Below is a list of some helpful frontend tables to grab technical IDs (useful when troubleshooting transports) and text descriptions in bulk, amongst other things.

For homework, try using SE54 to create some nice views, then send me the view definitions!

Queries
RSZELTDIR Directory of the reporting component elements
RSZELTTXT Texts of reporting component elements
RSZELTXREF Directory of query element references
RSRREPDIR Directory of all reports (Query GENUNIID)
RSZCOMPDIR Directory of reporting components

Workbooks
RSRWBINDEX List of binary large objects (Excel workbooks)
RSRWBINDEXT Titles of binary objects (Excel workbooks)
RSRWBSTORE Storage for binary large objects (Excel workbooks)
RSRWBTEMPLATE Assignment of Excel workbooks as personal templates
RSRWORKBOOK 'Where-used list' for reports in workbooks

Web templates
RSZWOBJ Storage of the Web Objects
RSZWOBJTXT Texts for Templates/Items/Views
RSZWOBJXREF Structure of the BW Objects in a Template
RSZWTEMPLATE Header Table for BW HTML Templates
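As a small hedged example of using these tables, the sketch below joins RSZCOMPDIR and RSZELTTXT to list query technical names with their long texts. The field names used (COMPID, COMPUID, ELTUID, OBJVERS, LANGU, TXTLG) are from memory and should be verified in SE11 before use.

REPORT zbw_query_texts.

* Assumed result structure: query technical name plus its long text.
DATA: BEGIN OF ls_query,
        compid TYPE rszcompdir-compid,
        txtlg  TYPE rszelttxt-txtlg,
      END OF ls_query,
      lt_queries LIKE STANDARD TABLE OF ls_query.

* Join the component directory with the element texts (field names are assumptions).
SELECT d~compid t~txtlg
  INTO TABLE lt_queries
  FROM rszcompdir AS d
  INNER JOIN rszelttxt AS t
    ON t~eltuid = d~compuid
  WHERE d~objvers = 'A'
    AND t~objvers = 'A'
    AND t~langu   = sy-langu.

LOOP AT lt_queries INTO ls_query.
  WRITE: / ls_query-compid, ls_query-txtlg.
ENDLOOP.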

Saturday, July 28, 2007

ODS vs CUBE

Diff between ODS vs CUBE

The main difference between the ODS Object and the PSA and InfoCube is that the ODS Object allows existing data to be changed. Whereas an InfoCube principally allows inserts, and only allows deletion on the basis of requests, data in an ODS Object can be changed during the staging process. This enables an ODS Object to be used as a consolidation Object in a Data Warehouse. PSA data change is only supported by manual change or by customer programs and not by the staging mechanism.

Unlike ODS Objects; InfoCubes have a mandatory time-dimension that allows you to look at particular relationships in relation to time-periods. For example, you can look at how relationships have changed over a certain time-period.

An ODS Object is principally used for analyzing the status of data at a certain point in time. This allows you to see what relationships are currently like. Exceptionally you can also track history in ODS Objects by adding a date to the key fields of the ODS Object.

It is generally true to say that it is not always necessary to implement ODS Objects in every scenario. Rather, it depends on the requirements of each scenario. You should only use an ODS if the requirements of your scenario fit one of the three usage possibilities outlined above (inbound ODS, consistent ODS, application-related ODS). An ODS Object placed in the data flow to an InfoCube without having a function does nothing except hinder loading performance.

Thursday, July 26, 2007

BW Authorizations

BW Authorizations:

The activities that you can carry out in SAP SEM-BPS are covered by the SAP authorization concept. This means that you can assign different access rights to planning functionality to the people who work with the SEM System.

Integration

The system checks the special authorization objects that SEM-BPS defines and, if necessary, also those authorization objects that are defined for reporting in the SAP Business Information Warehouse environment. In this case, the SEM-BPS users must have both the SEM-BPS application-specific authorizations and the general SAP BW reporting authorizations. You manage the SEM-BPS authorizations using the system administration tools and the BW reporting authorizations using the relevant functions under Business Explorer -> Authorizations -> Reporting - Authorization Objects.

To assign authorizations for changing and displaying plan data separately, you must include the ACTVT (activity) field in the reporting authorization object. In this field the value 02 represents the authorization to change and 03 the authorization to display plan data. If you do not include the field, then this corresponds to an authorization to change plan data.

In addition to that, because of internal dependencies, you need authorization for the following authorization objects for data entry using planning layouts:

· S_BDS_DS: This authorization object controls access to documents, which belong to a document set of the Business Document Service (BDS).

· S_TRANSLAT: This authorization object controls access to the translation functions of the SAP System.

Features

The following authorization objects exist for the SEM Business Planning and Simulation component:

· R_AREA: You use this authorization object to control access to planning areas and all subordinate objects. You must set up read access to planning areas for people who will work with the SEM-BPS component. Otherwise, they will not be able to access any of the subordinate planning elements.

· R_PLEVEL: You use this authorization object to control access to planning levels and all subordinate objects. This authorization object is also relevant to access documents of the SEM-BIC component.

· R_PACKAGE: You use this authorization object to control access to planning packages (including ad hoc packages).

· R_METHOD: You use this authorization object to control access to planning functions and the appropriate parameter groups.

· R_PARAM: You use this authorization group to control access to individual parameter groups of a certain planning function.

· R_BUNDLE: You use this authorization object to control access to global planning sequences (you control authorizations for planning sequences, which you create for a planning level, with the authorization objects R_METHOD, R_PLEVEL, or R_AREA).

No separate authorization for execution is defined for this authorization object. Whether a global planning sequence can be executed or not, depends on the authorization objects for the planning functions contained in it.

· R_PROFILE: You use this authorization object to control access to planning profiles. A planning profile restricts the objects that can be viewed. If you wish to view the planning objects, you must have at least display authorization for the appropriate planning profile.

· R_PM_NAME: You use this authorization object to control access to planning folders. In order to be able to work with planning folders, you also require the necessary authorizations for the planning objects combined in the folder.

· R_WEBITF: You use this authorization object to control access to Web interfaces that you create and edit with the Web Interface Builder, and from which you can generate Web-enabled BSP applications.

· R_STS_PT: You use this authorization object to control access to the Status and Tracking System. The object enables a check to be carried out whether a user is allowed access to a certain subplan or a version of it with the Status and Tracking System.

· R_STS_CUST: You use this authorization object to control access to Customizing for the Status and Tracking System. The object enables or forbids a user to execute Customizing.

· R_STS_SUP: This authorization object provides the assigned users with the status of a super user in relation to the Status and Tracking System. The object enables changing access to all plan data, independent of whether and where a user of the cost center hierarchy it is based on is assigned. The authorization object is intended for members of a staff controller group, who are not part of the line organization of the company, but who nevertheless must be able to intervene in the planning process.

In accordance with the hierarchical relationships that exist between the various types of planning objects, authorizations that are assigned to an object on a higher level are passed on to its subordinate objects. An authorization that has been passed on can be enhanced but not restricted on a lower level. The following table presents the combination possibilities using the example of a change authorization for planning area and level:

Change Planning Area | Change Planning Level | Authorization Available for Level
yes | no  | yes
yes | yes | yes
no  | no  | no
no  | yes | yes

In practice this behavior means that you can proceed according to two different strategies when setting up authorizations:

· Minimization of Customizing Effort: You assign authorizations for planning objects on as high a level as possible, and thereby enable access to the planning objects without further authorization assignment on lower levels.

· Optimization of Delimitation of Access Rights: You assign authorizations for planning objects on as low a level as possible, and therefore make sure that access to a planning object is only possible for the person responsible for this.

Activities

Create the user profiles you require and then assign authorization objects to these profiles. Then assign the newly created user profiles to possible users.

You can find further information on the activities associated with the different authorization objects in the online documentation on the authorization objects themselves. You can call this up in the maintenance transaction "Role maintenance" (PFCG).

Wednesday, July 25, 2007

Sales and Distribution Tables

commonly used SD Tables:

KONV Conditions for Transaction Data
KONP Conditions for Items
LIKP Delivery Header Data
LIPS Delivery: Item Data
VBAK Sales Document: Header Data
VBAP Sales Document: Item Data
VBBE Sales Requirements: Individual Records
VBEH Schedule Line History
VBEP Sales Document: Schedule Line Data
VBFA Sales Document Flow
VBLB Sales Document: Release Order Data
VBLK SD Document: Delivery Note Header
VBPA Sales Document: Partner
VBRK Billing: Header Data
VBRP Billing: Item Data
VBUK Sales Document: Header Status and Administrative Data
VBUP Sales Document: Item Status
VEKP Handling Unit: Header Table
VEPO Packing: Handling Unit Item (Contents)
VEPVG Delivery Due Index

Tuesday, July 24, 2007

Conversion Routines in SAP BW

Conversion Routine:

Conversion routines are used in SAP BW so that the characteristic values (key) of an InfoObject can be displayed or used in a different format to how they are stored in the database. They can also be stored in the database in a different format to how they are in their original form, and supposedly different values can be consolidated into one.

Conversion routines that are often implemented in SAP BW are now described.

Integration

In SAP BW, conversion routines essentially serve to simplify the input of characteristic values at query runtime. For example, for cost center 1000 you do not enter the long value with leading zeros, 0000001000 (as stored in the database), but simply 1000. Conversion routines are therefore linked to characteristics (InfoObjects) and can be used by them.

Conversion routines can also be applied during data loading. Note, however, that conversion routines are often already defined for DataSource fields (particularly for DataSources from SAP source systems). The properties of the replicated DataSource fields are displayed in the transfer structure or DataSource maintenance.

In many cases it is desirable to store the conversion routines of these fields in the corresponding InfoObject on the BW side too. It is therefore necessary, when defining transfer rules, to consider the connection between the conversion routine of the InfoObject in the communication structure and the conversion routine of the transfer structure field.

When loading data you have to consider that when extracting from SAP source systems the data is already in the internal format and is not converted. When loading flat files, and when loading using a BAPI or DB Connect, the conversion routine displayed signifies that an INPUT conversion is executed before writing to the PSA. For example, a date field may be delivered from a flat file in the external format '10.04.2003'. If a conversion routine has been specified in the transfer structure maintenance, this field can be converted to the internal format '20030410' in the transfer rules, according to that conversion routine.

Conversion routines ALPHA, NUMCV, and GJAHR check whether data exists in the correct internal format before it is updated. For more on this see the extensive documentation in the BW system in the transaction for converting to conforming internal values (transaction RSMDCNVEXIT). If the data is not in the correct internal form an error message is issued. These three conversion routines can be set in Transfer Rule Maintenance so that a check is not executed but an INPUT conversion is. Make this setting using the Optional Conversion flag in transfer rules maintenance. Both the check and the conversion are executed in the transfer rules for the target field.
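As an illustration of the INPUT direction, the standard function module CONVERSION_EXIT_ALPHA_INPUT adds the leading zeros for an ALPHA-converted value such as a cost center; the field length of 10 here is just an example.

DATA: lv_extern(10) TYPE c VALUE '1000',
      lv_intern(10) TYPE c.

* ALPHA INPUT conversion: external '1000' -> internal '0000001000'.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_extern
  IMPORTING
    output = lv_intern.

* The OUTPUT conversion strips the leading zeros again for display.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
  EXPORTING
    input  = lv_intern
  IMPORTING
    output = lv_extern.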

Business Content objects are delivered with conversion routines if they are also used by the DataSource in the source system. The external presentation is then the same in both systems. The conversion routines used for the R/3 DataSource fields are then transferred to the BW system when the DataSources from the SAP source systems are replicated.

Saturday, July 21, 2007

BW Integrations

An overview of the interfaces and integration facilities available with BW 3.0

OLAP BAPI: SAP BW 3.0 comes with the OLAP BAPI Interface (OBI), which provides functions that can be used by third-party reporting tools to access BW InfoCubes. It provides an open interface to access any information that is available through the OLAP engine.

Integrating with XML: the OLAP BAPI serves as the basis for the SAP implementation of XML for Analysis. It is an XML API based on the Simple Object Access Protocol (SOAP), designed for standardized access to an analytical data provider over the web. The XML interface introduced with the SAP BW 3.0 release accepts XML data streams compliant with SOAP. Unlike all other SAP BW interfaces, with the XML interface the actual data transfer is initiated by the source system.

Open Hub Service: the Open Hub Service allows controlled distribution of consistent data from any SAP BW InfoProvider to flat files, database tables, and other applications, with full support for delta management, selections, projections, and aggregation. The Open Hub Service has InfoSpokes as its core metadata objects. With the SAP BW 3.0 release, InfoSpokes have become generally available.

Content Management Framework: The SAP Web Content Management Server stores unstructured information that users can go through and use efficiently. Integration with the SAP BW content management framework provides an integrated view on structured and unstructured information to the end user.

Friday, July 20, 2007

T-Codes for BW

Check out the commonly used T-Codes in BW

BW_TCODES.xls

Wednesday, July 18, 2007

Creation of Infoobjects

Steps for creation of Infoobjects a beautiful PPT

Creationofinfoobjects.ppt

What is IDOC

Find the beautiful PPT regarding to IDOCs

21IDOCs.ppt

Monday, July 16, 2007

How to retain deltas when you change LO extractor in Production system

A requirement may come up to add new fields to an LO Cockpit extractor that is up and running in a production environment. This means the extractor is delivering daily deltas from SAP R/3 to the BW system.

Since this change has to be made in the R/3 production system, there is always a risk that the daily deltas of the LO Cockpit extractor get disturbed. If the delta mechanism is disturbed (the delta queue is broken), there is no other way than doing a re-initialization for that extractor. However, this re-init is not easy in terms of time and resources. Moreover, no organization would be willing to provide that much downtime for live reporting based on that extractor.

As we all know, initialization of an LO extractor is a critical, resource-intensive, and time-consuming task. The prerequisites for filling the setup tables are: lock the users from making transactional updates in the R/3 system and stop all batch jobs that update the base tables of the extractor. Then we need to schedule the setup jobs with suitable date ranges/document number ranges.

We also came across such a scenario, where there was a requirement to add three new fields to the existing LO Cockpit extractor 2LIS_12_VCITM. Initialization had been done for this extractor one year back and the data volume was high.

We adopted a step-by-step approach to minimize the risk of the delta queue getting broken or disturbed. Hopefully this step-by-step procedure will help everyone who has to work through similar scenarios.

Step by Step Procedure:-

1. Carry out the changes to the LO Cockpit extractor in the SAP R/3 Dev system.
As per the requirement, add the new fields to the extractor.
These new fields might be present in the standard supporting structures that you get when you execute "Maintain DataSource" for the extractor in LBWE. If all required fields are present in the supporting structures mentioned above, just add these fields using the arrow buttons provided; there is no need to write user exit code to populate them.
However, if these fields (or some of the required fields) are not present in the supporting structures, you have to go for an append structure and user exit code. The coding in the user exit is required to populate the newly added fields. You write the ABAP code in the user exit under CMOD, in include ZXRSAU01 (a hedged sketch follows below).
All the above changes will ask you for a transport request. Assign an appropriate development class/package and include all these objects in one transport request.
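A hedged sketch of such user exit coding is shown here. The appended field ZZROUTE, the lookup from LIPS, the extract structure name MC12VC0ITM, and the parameter names (I_DATASOURCE, C_T_DATA) are assumptions from memory and must be verified in your system.

* Include ZXRSAU01 (user exit for transaction data) - sketch only.
* Assumption: the extract structure MC12VC0ITM has been appended with field ZZROUTE.
DATA: ls_vcitm TYPE mc12vc0itm.

CASE i_datasource.
  WHEN '2LIS_12_VCITM'.
    LOOP AT c_t_data INTO ls_vcitm.
      " Fill the appended field from the delivery item table.
      SELECT SINGLE route FROM lips
        INTO ls_vcitm-zzroute
        WHERE vbeln = ls_vcitm-vbeln
          AND posnr = ls_vcitm-posnr.
      MODIFY c_t_data FROM ls_vcitm.
    ENDLOOP.
ENDCASE.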

2. Carry out the changes in the BW Dev system for the objects related to this change.
Carry out all necessary changes in the BW Dev system for the objects related to this change (InfoSource, transfer rules, ODS, InfoCubes, queries, and workbooks). Assign an appropriate development class/package and include all these objects in one transport request.

3. Test the changes in the QA system.
Test the new changes in the SAP R/3 and BW QA systems. Make necessary changes (if needed) and include them in follow-up transports.

4. Stop the V3 batch jobs for this extractor.
The V3 batch jobs for this extractor are scheduled to run periodically (hourly/daily, etc.). Ask the R/3 system administrator to put this job schedule on hold or cancel it.

5. Lock out users and batch jobs on the R/3 side and stop the process chain schedule in BW.
In order to avoid changes to the database tables for this extractor, and hence the possible risk of data loss, ask the R/3 system administrator to lock out the users. The batch job schedule also needs to be put on hold or cancelled.
Ask the system administrator to clear any pending queues for this extractor in SMQ1/SMQ2. Any pending or errored-out V3 updates in SM58 should also be processed.
On the BW production system, the process chain containing the delta InfoPackage for this extractor should be stopped or put on hold.

6. Drain the delta queue to zero for this extractor.
Execute the delta InfoPackage from BW and load the data into the ODS and InfoCubes. Keep executing the delta InfoPackage until you get 0 records with a green light for the request on the BW side. You should also see 0 LUW entries in RSA7 for this extractor on the R/3 side.

7. Import the R/3 transports into the R/3 production system.
In this step we import the R/3 transport request related to this extractor. This will include the user exit code as well. Ensure that there is no syntax error in include ZXRSAU01 and that it is active. Also ensure that objects such as the append structure are active after the transport.

8. Replicate the DataSource in the BW system.
On the BW production system, replicate the extractor (DataSource).

9. Import the BW transport into the BW production system.
In this step we import the BW transport related to this change into the BW production system.

10. Run the program to activate the transfer rules.
Execute program RS_TRANSTRU_ACTIVATE_ALL. Enter the InfoSource and source system name and execute. This makes sure that the transfer rules for this InfoSource are active.

11. Execute the V3 job manually on the R/3 side.
Go to LBWE and click on Job Control for the application area related to this extractor (for 2LIS_12_VCITM it is application 12). Execute the job immediately; it should finish with no errors.

12. Execute the delta InfoPackage from the BW system.
Run the delta InfoPackage from the BW system. Since there has been no data update, this extraction request should be green with zero records (a successful delta extraction).

13. Restore the schedules on the R/3 and BW systems.
Ask the system administrator to resume the V3 update job schedule and the batch job schedule, and to unlock the users. On the BW side, restore the process chain schedule.

From the next day onwards (or as per the frequency), you should receive the delta for this extractor, with data populated for the new fields as well.