Saturday, October 18, 2008

SAP ALE ABAP DETAIL

ALE

Reasons for Distributing Business Functions

In a modern company, the flows of logistics and information between the various organizational units are likely to be sizable and complex. One reason for this is the adoption of new management concepts like "lean production".

Many previously centralized responsibilities are now being assigned to the organizational units that are directly linked to the relevant information or to the production.

The assignment of business management functions like inventory management, central purchasing or financial accounting to the various organizational units is not the same in every company.

There is a tendency in some areas towards an increasing independence between business units within a company. This lends itself to the idea of modeling intra-company relationships along the same lines as customer-vendor relationships.

Market requirements have led to many changes in business processes. These have increased the demands on process flows in areas such as purchasing, sales and distribution, production and accounting.

The increasing integration of business processes means that they can no longer be modeled in terms of a single company only. Relationships with customers and vendors must also be considered.


Distributing these various tasks away from the center means that a high level of communication is required between the integrated functions. Fast access to information held in other areas is needed (for example, the sales department may require information on the stocks of finished products in the individual plants).

Distributed Responsibilities in a Company.

Users of modern business data processing systems require:


a high degree of integration between business application systems to ensure effective modeling of business processes

decoupled application systems that can be implemented decentrally and independently of any particular technology.

The design, construction and operation of complex, enterprise-wide, distributed application systems remains one of the greatest challenges in data processing. The conventional solutions available today do not provide a totally satisfactory answer to the diverse needs of today's users.

Further standardization of business processes accompanied by ever tighter integration within a central system no longer represents a practicable approach to the problem.

The following are some of the most commonly encountered difficulties:

• technical bottlenecks,
• upgrade problems,
• the effect of time zones on international corporations,
• excessively long response times in large centralized systems.

For these reasons a number of R/2 customers operate several systems in parallel (arranged, for example, on a geographical basis). Whilst the three-tier client-server architecture of the R/3 System means that the significance of these technical restrictions is somewhat reduced, they are still present.

Whilst the idea of using distributed databases to implement distributed application systems sounds tempting, this is rarely a practical approach these days. The reasons for this include high communications overhead, uneconomic data processing operations and inadequate security mechanisms.

ALE - The Objectives

ALE (Application Link Enabling) supports the construction and operation of distributed applications. ALE handles the exchange of business data messages across loosely coupled SAP applications, ensuring that data is consistent. Applications are integrated by using synchronous and asynchronous communication, rather than by means of a central database.

ALE comprises three layers:

1. applications
2. distribution
3. communication

In order to meet the requirements of today's customers and to be open for future developments, ALE must meet the following challenges:

Communication between different software releases
Continued data exchange after a release upgrade without special maintenance.
Independence of the technical format of a message from its contents
Extensions that can be made easily, even by customers
Applications that are decoupled from the communication
Communications interfaces that allow connections to third-party applications
Support for R/3-R/2 scenarios

ALE - The Concept

The basic principle behind ALE is the guarantee of a distributed, yet fully integrated, R/3 System installation. Each application is self-sufficient and exists in the distributed environment with its own set of data.

Distributed databases are rarely a good solution today to the problem of data transport for the following reasons:

The R/3 System performs consistency checks that cannot be carried out by an individual database on its own. Simply replicating tables in a distributed database would bypass these consistency checks.

Mirrored tables require two-phase commits. These result in a heavy loss of performance.

The distribution is controlled at the level of tables for distributed databases, and at the level of the applications in the case of ALE distribution.

Long distance access to distributed data can be difficult even today (because of error rates, a high level of network activity and long response times).

The use of self-sufficient systems implies a certain measure of data redundancy. Therefore data has to be both distributed and synchronized across the entire system. Communication is performed asynchronously.

For certain functions that require read-only access to information, direct requests have to be made between the remote systems, using synchronous RFC, or, if this is not available, CPI-C programs. The function modules and CPI-C programs are written as required for each application.
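
By way of illustration, such a synchronous read access has the following general form in ABAP. The remote function module Z_READ_PLANT_STOCK and its parameters are hypothetical placeholders, and the destination LOGSYS0200 is taken from the quick start example later in this text; only the call pattern itself is of interest here.

* Hypothetical synchronous RFC read access (sketch only).
DATA: lv_stock TYPE p DECIMALS 3,
      lv_msg(80) TYPE c.

CALL FUNCTION 'Z_READ_PLANT_STOCK'     "hypothetical remote module
  DESTINATION 'LOGSYS0200'             "RFC destination of the remote system
  EXPORTING
    material              = 'MAT-001'
    plant                 = '0001'
  IMPORTING
    stock_quantity        = lv_stock
  EXCEPTIONS
    communication_failure = 1 MESSAGE lv_msg
    system_failure        = 2 MESSAGE lv_msg
    OTHERS                = 3.

IF sy-subrc <> 0.
  WRITE: / 'Remote read failed:', lv_msg.
ELSE.
  WRITE: / 'Remote stock:', lv_stock.
ENDIF.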

Summary

There are both technical and business-related benefits to be realized from the distribution of applications in an integrated network.

State-of-the-art communication technology and the client/server architecture have made the distribution of standard software technically possible.

Distributed databases do not represent a good solution for the distribution of control data, master data and transaction data.

Asynchronous exchange of data with a measure of data redundancy is the best solution utilizing today's technology.

The goal of ALE is to enable data exchange between R/3-R/3, R/2-R/3 and R/3-non-SAP systems.

Control data, master data and transaction data are transmitted.
ALE also supports release upgrades and customer modifications.
ALE allows a wide range of customer-specific field choices in the communication.
IDocs (Intermediate Documents) are used for the asynchronous communication (see the sketch after this list).
Allowance is made for distribution in the various applications of the R/3 System.
The application initiates the distribution of the data.
ALE and EDI complement each other.
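
For reference, an IDoc consists of one control record and a table of data records. The following sketch shows how these appear in ABAP, assuming the standard dictionary structures EDIDC (control record) and EDIDD (data record); the message type MATMAS, the IDoc type MATMAS01 and the segment E1MARAM serve only as examples.

* Sketch: building the two parts of an IDoc in ABAP.
DATA: ls_control LIKE edidc,                           "control record
      lt_data    LIKE edidd OCCURS 0 WITH HEADER LINE. "data records

ls_control-mestyp = 'MATMAS'.       "message type
ls_control-idoctp = 'MATMAS01'.     "basic IDoc type
ls_control-rcvprt = 'LS'.           "receiver partner type: logical system
ls_control-rcvprn = 'LOGSYS0200'.   "receiver partner number

lt_data-segnam = 'E1MARAM'.         "segment name
lt_data-sdata  = 'flat segment data goes here'.
APPEND lt_data.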


OUTBOUND PROCESSING

In outbound processing, a function module of the application creates an IDoc, the so-called master IDoc. This IDoc is sent to the ALE layer, where the following processing steps are applied:

• receiver determination, if this has not already been done by the application
• data selection
• segment filtering
• field conversion
• version change

The resulting IDocs (it is possible that several IDocs could be created in the receiver determination) are referred to as communication IDocs and are stored in the database. The dispatch control then decides which of these IDocs should be sent immediately. These are passed to the communications layer and are sent either using the transactional Remote Function Call (RFC) or via file interfaces (e.g. for EDI).
If an error occurs in the ALE layer, the IDoc containing the error is stored and a workflow is created. The ALE administrator can use this workflow to process the error.
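
The hand-over of the master IDoc from the application to the ALE layer is a function module call. The following sketch assumes the standard ALE function module MASTER_IDOC_DISTRIBUTE (not named in this text), which application outbound programs use for this hand-over; the message type, IDoc type and segment values are illustrative only.

* Sketch: passing a master IDoc to the ALE layer for distribution.
DATA: ls_idoc_control LIKE edidc,
      lt_comm_control LIKE edidc OCCURS 0 WITH HEADER LINE,
      lt_idoc_data    LIKE edidd OCCURS 0 WITH HEADER LINE.

ls_idoc_control-mestyp = 'MATMAS'.      "message type
ls_idoc_control-idoctp = 'MATMAS01'.    "basic IDoc type

lt_idoc_data-segnam = 'E1MARAM'.        "illustrative segment
lt_idoc_data-sdata  = 'segment data in flat character format'.
APPEND lt_idoc_data.

CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
  EXPORTING
    master_idoc_control            = ls_idoc_control
  TABLES
    communication_idoc_control     = lt_comm_control   "filled by ALE
    master_idoc_data               = lt_idoc_data
  EXCEPTIONS
    error_in_idoc_control          = 1
    error_writing_idoc_status      = 2
    error_in_idoc_data             = 3
    sending_logical_system_unknown = 4
    OTHERS                         = 5.

IF sy-subrc = 0.
  COMMIT WORK.   "communication IDocs are stored and passed to dispatch control
ENDIF.

After the COMMIT WORK, the dispatch control decides when the communication IDocs returned in LT_COMM_CONTROL are actually sent.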

OUTBOUND PROCESSING STEP BY STEP

Receiver Determination

An IDoc is similar to a normal letter in that it has a sender and a receiver. If the receiver has not been explicitly identified by the application, then the ALE layer uses the customer distribution model to help determine the receivers for the message.

The ALE layer can find out from the model whether any distributed systems should receive the message and, if so, then how many. The result may be that one, several or no receivers at all are found.

Data Selection

For each of the distributed systems that have been identified as receiver systems, the data specified by the filter objects in the customer distribution model is selected from the master IDoc. This data is then used to fill an IDoc, and the appropriate system is entered as the receiver.

Segment Filtering

Individual segments can be deleted from the IDoc before dispatch by selecting Functions for the IDoc processing -> Settings for filtering in ALE Customizing. The appropriate setting depends on the sending and receiving logical R/3 System.

Field Conversion

Receiver-specific field conversions are defined under Functions for the IDoc processing -> Conversions in ALE Customizing.

General rules can be specified for field conversions; these are important for converting data fields to exchange information between R/2 and R/3 Systems. For example, the field "plant" can be converted from a 2 character field to a 4 character field.

The conversion is done using general EIS conversion tools (Executive Information System).
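
Purely to illustrate the effect of such a rule (in the system the conversion is configured in ALE Customizing, not programmed), widening a 2 character R/2 plant to a 4 character R/3 plant corresponds to the following lines:

* Illustration only: effect of a plant conversion rule (R/2 to R/3).
DATA: lv_werks_r2(2) VALUE '01',   "plant in the R/2 format
      lv_werks_r3(4).              "plant in the R/3 format

CONCATENATE '00' lv_werks_r2 INTO lv_werks_r3.   "'01' becomes '0001'
WRITE: / 'R/2 plant:', lv_werks_r2, 'R/3 plant:', lv_werks_r3.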

IDoc Version Change

SAP ensures that ALE functions between different R/3 System releases. Message types of different R/3 releases can be converted into one another by changing the IDoc format. SAP Development uses the following rules when converting existing message types:

• Fields may be appended to a segment type;
• Segments can be added;

ALE Customizing keeps a record of which version of each message type is in use for each receiver. The correct version of the communication IDoc is created in the ALE output.

Dispatch Control

Controlling the time of dispatch:

The IDocs can either be sent immediately or collected and sent in background processing. This setting is made in the partner profile.
If the IDocs are to be dispatched in batch, a job has to be scheduled. You can choose the execution frequency (e.g. daily, weekly).

Controlling the amount of data sent:

• IDocs can be dispatched in packets. To define a packet size appropriate for a specific partner, select Communication -> Manual maintenance of partner profile -> Maintain partner profile in ALE Customizing.


Mass Processing of IDocs

Mass processing refers to bundles of IDoc packets, which are dispatched and processed by the receiving R/3 System. Only one RFC call is needed to transfer several IDocs. Performance is considerably better when transferring optimal packet sizes.
To define a mass processing parameter, select Communication -> Manual maintenance of partner profile -> Maintain partner profile. For a message type, the parameters packet size and output mode can be defined.


If the output mode is set to "Collect IDocs", outbound IDocs of the same message type and receiver are gathered and dispatched in appropriately sized packets, either in a scheduled background job or via the BALE transaction.

Some distribution scenarios cannot support mass processing of inbound IDoc packets. This is especially true if the receiving application posts the IDocs using the ABAP/4 command CALL TRANSACTION USING. In this case the outbound parameter PACKETSIZE must be set to "1".
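
For orientation, posting with CALL TRANSACTION USING has the following general form. The transaction code, module pool, screen number and screen field below are placeholders; the point is that each call runs one document through the screen sequence, which is why packets cannot be processed.

* Illustration: posting one document with CALL TRANSACTION USING.
DATA lt_bdcdata LIKE bdcdata OCCURS 0 WITH HEADER LINE.

* First screen of the (placeholder) transaction
CLEAR lt_bdcdata.
lt_bdcdata-program  = 'SAPMM03M'.   "placeholder module pool
lt_bdcdata-dynpro   = '0100'.       "placeholder screen number
lt_bdcdata-dynbegin = 'X'.
APPEND lt_bdcdata.

* One input field on that screen
CLEAR lt_bdcdata.
lt_bdcdata-fnam = 'RMMG1-MATNR'.    "placeholder screen field
lt_bdcdata-fval = 'MAT-001'.
APPEND lt_bdcdata.

* Exactly one document is posted per call, hence PACKETSIZE '1'
CALL TRANSACTION 'MM01' USING lt_bdcdata MODE 'N' UPDATE 'S'.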

To get a list of function modules that can be mass processed, select Enhancements -> Inbound -> Specify inbound module in ALE Customizing; function modules with INPUTTYP '0' can be mass processed.

INBOUND PROCESSING

After an IDoc has been successfully transmitted to another system, inbound processing is carried out in the receiver system, involving the following steps in the ALE layer:

• segment filtering
• field conversion
• data transfer to the application

There are three different ways of processing an inbound IDoc:

• A function module can be called directly (standard setting)
• A workflow can be started
• A work item can be started

INBOUND PROCESSING STEP BY STEP

Segment Filtering

Segment filtering functions the same way in inbound processing as in outbound processing.


Field Conversion

Specific field conversions are defined in ALE Customizing.
The conversion itself is performed using general conversion tools from the EIS area (Executive Information System).

Generalized rules can be defined. The ALE implementation guide describes how the conversion rules can be specified.
One set of rules is created for each IDoc segment and rules are defined for each segment field.
The rules for converting data fields from an R/2-specific format to an R/3 format can be defined in this way. An example of this R/2 - R/3 conversion is the conversion of the plant field from a 2 character field to a 4 character field.

Input Control

When the IDocs have been written to the database, they can be imported by the receiver application.
IDocs can be passed to the application either immediately on arrival or later in batch.
You can post an inbound IDoc in three ways:

1. by calling a function module directly:

- A function is called that imports the IDoc directly. An error workflow will be started only if an error occurs.

2. by starting a SAP Business Workflow. A workflow is the sequence of steps to post an IDoc.

- Workflows for ALE are not supplied in Release 3.0.

3. by starting a work item

- A single step performs the IDoc posting.
The standard inbound processing setting is that ALE calls a function module directly. For information about SAP Business Workflow alternatives refer to the online help for ALE programming.
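
The following simplified skeleton indicates what such a directly called inbound function module looks like. The name Z_IDOC_INPUT_EXAMPLE is hypothetical, and the interface is reduced here to its essential tables: IDOC_CONTRL (structure EDIDC), IDOC_DATA (structure EDIDD) and IDOC_STATUS (structure BDIDOCSTAT). The complete interface is defined in the Function Builder.

FUNCTION z_idoc_input_example.
* Hypothetical skeleton of an ALE inbound function module.
  LOOP AT idoc_contrl.
*   Collect the data records belonging to this IDoc
    LOOP AT idoc_data WHERE docnum = idoc_contrl-docnum.
      CASE idoc_data-segnam.
        WHEN 'E1MARAM'.
*         Move idoc_data-sdata into the segment structure and
*         post the application document here.
      ENDCASE.
    ENDLOOP.

*   Report the posting result back to the ALE layer
    CLEAR idoc_status.
    idoc_status-docnum = idoc_contrl-docnum.
    idoc_status-status = '53'.      "53 = document posted, 51 = error
    APPEND idoc_status.
  ENDLOOP.
ENDFUNCTION.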

You can specify the people to be notified for handling IDoc processing errors for each message type in SAP Business Workflow.

Repeated Attempts to Pass the IDoc to the Application

If the IDoc could not be passed to the application successfully (status 51 - error on handover to the application), repeated attempts can be made with the RBDMANIN report.
This functionality can be accessed through the menu: Logistics -> Central functions -> Distribution and then Period. work -> IDoc, ALE input.
Selections can be made according to specific errors. This report can therefore be scheduled as a periodic job that, for example, collects and reprocesses IDocs that could not be passed to the application because of a locking problem.
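
A minimal sketch of such a periodic job step, assuming a report variant (the name Z_LOCK_ERRORS is hypothetical) that restricts the selection to the relevant message types and error numbers:

* Periodic job step that retries IDocs in status 51 (sketch).
SUBMIT rbdmanin USING SELECTION-SET 'Z_LOCK_ERRORS' AND RETURN.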

Error Handling in ALE Inbound Processing

The following is a description of how an error that occurs during ALE processing is handled:

• The processing of the IDoc causing the error is terminated.
• An event is triggered.
• This event starts an error work item:

- The employees responsible will find a work item in their workflow inboxes.
- An error message is displayed when the work item is processed.
- The error is corrected in another window and the IDoc can then be resubmitted for processing.
- If the error cannot be corrected, the IDoc can be marked for deletion.

Once the IDoc has been successfully imported, an event is triggered that terminates the error work item. The work item then disappears from the inbox.

Objects and Standard Tasks

Message Type Standard Task ID of Standard Task

BLAOCH 7975 BLAOCH_Error
BLAORD 7974 BLAORD_Error
BLAREL 7979 BLAREL_Error
COAMAS None
COELEM None
COPAGN 8062 COPAGN_Error
COPCPA 500002 COPCPA_Error
COSMAS 8103 COSMAS_Error
CREMAS 7959 CREMAS_Error
DEBMAS 8039 DEBMAS_Error
EKSEKS 8058 EKSEKS_Error
FIDCMT 8104 FIDCMT_Error
FIROLL 8113 FIROLL_Error
GLMAST 7950 GLMAST_Error
GLROLL 7999 GLROLL_Error
INVCON 7932 INVCON_Error
INVOIC 8057 INVOIC_MM_Er
MATMAS 7947 MATMAS_Error
ORDCHG 8115 ORDCHG_Error
ORDERS 8046 ORDERS_Error
ORDRSP 8075 ORDRSP_Error
SDPACK None
SDPICK 8031 SDPICK_Error
SISCSO 8059 SISCSO_Error
SISDEL 8060 SISDEL_Error
SISINV 8061 SISINV_Error
SOPGEN 8063 SOPGEN_Error
WMBBIN 8047 WMBBIN_Error
WMCATO 7968 WMCATO_Error
WMCUST 8049 WMCUST_Error
WMINFO 8032 WMINFO_Error
WMINVE 7970 WMINVE_Error
WMMBXY 8009 WMMBXY_Error
WMSUMO 8036 WMSUMO_Error
WMTOCO 7972 WMTOCO_Error
WMTORD 8013 WMTORD_Error
WMTREQ 8077 WMTREQ_Error
COSFET COSFET_Error
CREFET CREFET_Error
DEBFET DEBFET_Error
GLFETC GLFETC_Error
MATFET MATFET_Error

EDI Message Types
Message Type Standard Task ID of Standard Task

DELINS 8000 DELINS_Error
EDLNOT 8065 EDLNOT_error
INVOIC 8056 INVOIC_FI_Er
REMADV 7949 REMADV_Error

ALE QUICK START

This documentation describes how to configure a distribution in your R/3 Systems using Application Link Enabling (ALE). You will learn how to create a message flow between two clients and how to distribute materials, and you will become familiar with the basic steps of the ALE configuration.

To set up and perform the distribution, proceed as follows:


1. Setting Up Clients
2. Defining A Unique Client ID
3. Defining Technical Communications Parameters
4. Modeling the Distribution
5. Generating Partner Profiles in the Sending System
6. Distributing the Customer Model
7. Generating Partner Profiles in the Receiving System
8. Creating Material Master Data
9. Sending Material Master Data
10. Checking Communication


1. Setting Up Clients :

You must first set up two clients to enable communication. The two clients may be located on the same physical R/3 System or on separate systems.


You can either use existing clients or you can create new clients by making copies of existing ones (for example, a copy of client 000 or a client of the International Demo System (IDES)). To create new clients, you use the Copy source client function. You will find this function in the Customizing (Tools -> Business Engineering -> Customizing) under Basic functions -> Set up clients. Here you will also find additional information on setting up the clients.

Example: Clients 100 and 200 are available. Both are copies of client 000.

2. Defining A Unique Client ID :

To avoid any confusion, each system participating in a distributed environment must have a unique ID. The name of the "logical system" is used as this unique ID. This name is assigned explicitly to one client on an R/3 System.
When you have set up two clients for the exercise, you must tell them which logical systems exist in the distributed environment and which logical system name identifies their own client. You will find the functions you require in the Customizing for ALE under Basic configuration -> Set up logical system.

Example: Client 100 is described as logical system LOGSYS0100.
Client 200 is described as logical system LOGSYS0200.

To maintain the logical systems in the distributed environment, choose Maintain logical systems, and execute the function. Enter a logical system (LOG. SYSTEM) and a short text for each of your clients.

Save your entries.

When using two clients in different systems, make sure that you maintain identical entries in both systems. When using two clients in one physical R/3 System, you have to make the settings only once, since the entries are client-independent.

Log. System Short text

LOGSYS0100 System A, client 100
LOGSYS0200 System B, client 200

Allocate the corresponding logical systems to both clients using the Allocate logical system to the client function:

Execute the function in each of the two clients.
In the view, double-click on the corresponding client.
In the Logical system field, enter the logical system name to be assigned to the individual client.
Save your entry.

In client Logical system

100 LOGSYS0100
200 LOGSYS0200

3. Defining Technical Communications Parameters

For the two logical systems to be able to communicate with one another, each must know how to reach the other technically. This information is found in the RFC destination.

On each of the two clients, you must maintain the RFC destination for the other logical system. You will find the function you require in the Customizing for ALE under the item Communication -> Define RFC destination.

Execute the function.
Choose Create.
Define the RFC destination:
- For the name of the destination, use the name of the logical system which is to refer to the destination (use UPPERCASE letters).

In client 100 you maintain the RFC destination LOGSYS0200.
In client 200 you maintain the RFC destination LOGSYS0100.

- As Connection type, choose 3.
- Enter a description of the RFC destination.

'RFC destination for the logical system LOGSYS0200' as a description of destination LOGSYS0200.

- As logon parameters, enter the logon language (for example, E), the logon client (for example, 200 for LOGSYS0200) and the logon user (user ID with target system password).
- Choose Enter.
- Enter the target machine and the system number:

The target machine indicates which application server of the receiving system is to handle the communication. You can enter the specification as a UNIX host name, as a host name in DNS format, as an IP address or as an SAP router name.
If you use SAP Logon, you can retrieve this information via Server selection -> Servers. Choose the corresponding SAP System ID and then OK. The system displays a list of all available application servers.

The system number indicates the service used (TCP service, SAP system number). When using SAP Logon, you can get the system number by selecting the system on the initial screen and then choosing EDIT.

- Save your entries.
- After saving the RFC destination, you can use Test connection to test the connection, and attempt a remote logon via Remote Login. If you succeed, the system displays a new window of the other system. Choose System -> Status... to check that you are in the correct client.
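
The connection can also be tested programmatically from a small test report. The sketch below assumes the standard function module RFC_PING, which is not named in this text but serves exactly this purpose:

* Sketch: programmatic connection test of the new RFC destination.
DATA lv_msg(80) TYPE c.

CALL FUNCTION 'RFC_PING'
  DESTINATION 'LOGSYS0200'
  EXCEPTIONS
    communication_failure = 1 MESSAGE lv_msg
    system_failure        = 2 MESSAGE lv_msg.

IF sy-subrc = 0.
  WRITE: / 'Destination LOGSYS0200 is reachable.'.
ELSE.
  WRITE: / 'Connection test failed:', lv_msg.
ENDIF.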

Define RFC Destination:

In this section, you define the technical parameters for the RFC destinations.
The Remote Function Call is controlled via the parameters of the RFC destination.
The RFC destinations must be maintained in order to create an RFC port.
The name of the RFC destination should correspond to the name of the logical system in question.
The following types of RFC destinations are maintainable:

• R/2 links
• R/3 links
• internal links
• logical destinations
• CMC link
• SNA/CPI-C connections
• TCP/IP links
• links of the ABAP/4 drivers

Example:

1. Enter the following parameters for an R/3 link:

- name for RFC destination: S11BSP001
- link type: 3 (for R/3 link)
- target machine: bspserver01
- system number: 11
- user in target machine: CPIC
- password, language and target client.

Standard settings

In the standard system, no RFC destinations are maintained.

Activities

1. Click on one of the categories (for example, R/3 links) and choose Edit -> Create.
2. Enter the required parameters dependent on the type.
3. For an R/3 link this means, for example, the name of the RFC destination, the name of the partner machine and the logon parameters (see example).

For an R/2 connection, select the option 'Password unlocked' in the logon parameters. To test an R/2 connection you cannot use the connection test in the transaction; instead, use report ACPICT1, which sets up a test connection to client 0 of the host destination. Select the check boxes for the parameters ABAP and CONVERT.

Processing RFCs with errors

If errors occur in a Remote Function Call, they are handled by default using single error processing. A background job is scheduled for each RFC that resulted in an error, and this background job keeps restarting the RFC until it has been processed successfully. If the connection to the recipient system is broken, this can mean that a very large number of background jobs is created, placing a considerable additional load on the sending system.
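
For orientation, a transactional RFC call of the kind retried by these background jobs has the following general form. The function module Z_TRANSFER_DATA and its parameter are hypothetical; the relevant parts are the addition IN BACKGROUND TASK and the closing COMMIT WORK:

* Sketch: transactional RFC call, executed in the target system at COMMIT WORK.
CALL FUNCTION 'Z_TRANSFER_DATA'      "hypothetical remote module
  IN BACKGROUND TASK
  DESTINATION 'LOGSYS0200'
  EXPORTING
    document_number = '0000004711'.

COMMIT WORK.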


In productive operation you should always use collective error processing to improve system performance. The failed RFC is then not re-submitted immediately; instead, a periodically scheduled background job collects all the RFCs that failed and restarts them as a packet. This reduces the number of background jobs created. Collective error processing can be used both for R/3 connections and for TCP/IP connections.

To set up the collective error processing proceed as follows:

• Change the RFC destination
• Select the Destination -> TRFC options function from the menu.
• Enter the value 'X' into the 'Suppress backgr. job in case of comms. error' field.

Perform the error handling as follows:

• Start the 'Transactional RFC' monitor (menu: Logistics -> Central functions -> Distribution -> Monitoring -> Transactional RFC)
• Select the Edit -> Select.FM execute function.

For the error handling, you should schedule a periodic background job that performs this step regularly.

Practice the handling of errors in the Remote Function Call before the productive start.

Further notes

The 'SAP*' user may not be used on the target machine for Remote Function Calls.

Notes on the transport

The maintenance of the RFC destination is not a part of the automatic transport and correction system. Therefore the setting has to be made manually on all systems.
