SAP S/4HANA Architecture Functional Flow

SAP S/4HANA

SAP S/4HANA is the latest generation of SAP's ERP suite. The name stands for SAP Business Suite 4 SAP HANA, where HANA is the High-Performance Analytic Appliance, SAP's in-memory database.

The classic SAP architecture consists of three layers:

  1. Presentation layer
  2. Application Layer
  3. Database Layer

In the 1990s a new architecture, R/3, was introduced. The R/3 architecture comprises the three layers named above: presentation, application, and database.

R/3 is significantly faster than the older R/2 architecture. In R/2 the business logic (algorithms) ran in the database layer, whereas in R/3 it runs in the application layer. The database layer can be any supported database, such as Oracle or Microsoft SQL Server.

Figure: R/3 Architecture

1. Presentation Layer

The presentation layer is the screen through which the user interacts with the system, for example the monitor of a desktop or a laptop. In SAP this is the GUI (Graphical User Interface): the user interacts with the SAP database through SAP GUI, a piece of software installed on the user's machine.

2. Application Layer

This layer is the main advantage of the SAP R/3 architecture. It acts as middleware between the presentation layer and the database layer: it processes user input, applies business rules, and executes programs.
It collects the details of the actions carried out by the user, interacts with the database system, and presents the results on the output screen.

Dispatcher: The dispatcher receives the request and determines its type (dialog, update, background, spool, or enqueue). Once the request type is known, it also looks for an idle work process to handle it.
Work Process: The work process coordinates with the database management system, performs the requested operation or action on the database, and receives an acknowledgement.

    3. Database layer

After receiving the request or command from the work process, the database layer processes it and updates the data accordingly.

The diagram below shows in depth how the database layer functions.

      DATABASE LAYER FUNCTION FLOW:

• The request from the work process first enters the session manager, which manages the user session, including autocommit behaviour and the transaction isolation level.
• Once the session is established, the request moves on to be parsed and analysed as SQL, and an execution plan is produced.
• The SQL engine then optimizes and executes the statement.
• The transaction manager assigns a transaction ID and ensures that every operation follows the ACID principles, coordinating transactions across the SQL engine and data engine.
• The data engine takes over the request and executes it against the database.
      • Index Server → Executes SQL/MDX queries, does actual processing.
      • Name Server → Knows where the data lives.
      • Pre-processor → Text & search.
      • XS Engine → Application services.
      • Persistence Layer → Save points + logs.

SAP S/4HANA was introduced in 2015. Here the database is HANA, SAP's own database, which is also called an in-memory database.

S/4HANA is also referred to as the fourth-generation SAP Business Suite.

Versions of SAP S/4HANA and their release identifiers (version number = YYMM):

1st version of SAP S/4HANA: 1511 (November 2015)
2nd version of SAP S/4HANA: 1610 (October 2016)
3rd version of SAP S/4HANA: 1709 (September 2017)
4th version of SAP S/4HANA: 1809 (September 2018)
5th version of SAP S/4HANA: 1909 (September 2019)
6th version of SAP S/4HANA: 2020
7th version of SAP S/4HANA: 2021
8th version of SAP S/4HANA: 2022
9th version of SAP S/4HANA: 2023

SAP S/4HANA can be accessed through both the SAP Fiori launchpad and the classic SAP GUI (NetWeaver).

Advantages of SAP S/4HANA:

• Performance and speed have improved thanks to the in-memory HANA database.
• Analytics are embedded directly in SAP S/4HANA.
• In ECC we could only perform transactions and extract basic reports; more complex reports were not practical. With S/4HANA, complex reports can be extracted easily.

      OLTP – Online Transaction processing (ECC)

      OLAP – Online Analytical Processing (BI/BW)

      OLTP + OLAP + Planning – SAP S/4HANA

      Planning ->

• MRP Live
• PD MRP
• aATP (advanced Available-to-Promise)
• PP/DS (Production Planning and Detailed Scheduling)

SAP HANA: HANA is an in-memory database that also provides analytics capabilities, data modelling, and libraries for writing code.

The HANA database can be accessed through SAP HANA Studio.


SIDE CAR: Suppose you have an ECC system running on any database and you want to connect it to a HANA database. SLT (SAP Landscape Transformation Replication Server) acts as middleware and replicates the table data from the source system into the HANA database.

• SAP HANA Live is a tool that can be used to view analytics on the data.
• SAP Lumira is a tool used to represent or visualize the analytical data.

      Benefits of a side car:

• The main (production) system is not disturbed.
• It can be implemented in roughly 1.5 to 2 months.

      Drawback:

• Data footprint increases: data is replicated from the production system to the side car, so more storage is occupied.
• The landscape becomes more complex.

SAP BW POWERED BY HANA: Introduced in 2012, this runs SAP BW on the HANA database, so analytical data can be obtained and complex reports fetched within seconds.

SAP BUSINESS SUITE POWERED BY SAP HANA: Introduced in 2013 with enhancement pack EHP 7; here the application system is ECC and the database is HANA.

      OLAP + OLTP.

SAP SIMPLE FINANCE POWERED BY SAP HANA: ECC with the HANA database; the only difference from Suite on HANA (SoH) is the FI/CO module. SAP simplified the data model for FI/CO, introduced Fiori apps specifically for FI/CO, and added the ACDOCA table, which serves as the single source of truth.

Apart from FI/CO, the other modules function in the same manner as ECC on any other database.

S/4HANA: Introduced in 2015, with a simplified data model for all modules. SAP introduced the MATDOC and ACDOCA tables, along with innovations and Fiori apps for all modules.

SAP_APPL -> technical application component of ECC.

S4CORE -> technical application component of SAP S/4HANA.


Read/Write Data between HANA Datalake and HANA On-Prem DB

A simple guide to reading and writing table data between SAP HANA Datalake and an SAP HANA on-premises database.

      Key topics include:

      • Export from HANA On-Prem DB and Import to HANA Datalake Filesystem in CSV and PARQUET Formats using HANA Cloud.
      • Export from HANA On-Prem DB and import to the HANA Datalake Filesystem in CSV and PARQUET formats using HANA Datalake Relational Engine.

      Export from HANA On-Prem DB and import to the HANA Datalake Filesystem in CSV and PARQUET formats using HANA Datalake Relational Engine

      1. Creation of HANA On-Prem Remote Server in HANA Datalake Relational Engine

      Step 1: Open SQL Console

      From the Database Explorer of SAP HANA Datalake Relational Engine, open the SQL Console.

      Step 2: Create HANA On-Prem Remote Server

      Execute the following SQL query to create the remote server for the HANA On-Prem system

      CREATE SERVER REMOTE_SERVER CLASS 'HANAODBC' USING
      'Driver=libodbcHDB.so;
      ConnectTimeout=0;
      CommunicationTimeout=15000;
      RECONNECT=0;
      ServerNode= hanahdb.onprem.sap.server:30241;
      ENCRYPT=TRUE;
      sslValidateCertificate=False;
      UID=USERNAME;
      PWD=PaSsWoRd;
      UseCloudConnector=ON;
      LocationID=SCC-LOC-01';

      Please note the following

      • REMOTE_SERVER: This is an example name. Replace it with the actual source name
      • hanahdb.onprem.sap.server and 30241: These are the example server name and port. Replace them with the required HANA On-Prem server details
      • USERNAME and PaSsWoRd: Replace these with valid credentials
      • SCC-LOC-01: Replace it with the valid Cloud Connector Location name

      Step 3: Verify the Remote Server Connection

      Run the following SQL query to check if the newly created remote source is functioning correctly

      CALL sp_remote_tables('REMOTE_SERVER');

      If the output lists all the tables of the HANA On-Prem database, the remote server has been created successfully

      Step 4: Check the Remote Server Details

      To view the details of the newly created remote server, execute the following query:

      SELECT * FROM SYSSERVER;

      2. Create a Virtual Table in HANA Datalake Relational Engine for HANA On-Prem Table

Create an Existing (Virtual) Table

To create an existing table (virtual table) that points to a table in the HANA On-Prem database, execute the following SQL query

      CREATE EXISTING TABLE VT_TESTMYTABLE AT 'REMOTE_SERVER..SCHEMA_NAME.TABLE_NAME';

      Please note the following

      • VT_TESTMYTABLE: This is an example virtual table name. Replace it with the required name
      • REMOTE_SERVER: Replace this with the name of the newly created remote server
      • SCHEMA_NAME: Replace it with the schema name of the table in the HANA On-Prem database
      • TABLE_NAME: Replace this with the actual table name in the HANA On-Prem database

      3. Export / Import Operations from HANA Datalake Relational Engine to HANA Datalake Filesystem

      Export Virtual Table Data

• Once the virtual table is created in HANA Datalake Relational Engine, you can use SQL commands or tools to export its data, for example as sketched below.
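As an illustration only, the sketch below shows one way such an export could look, using the data lake Relational Engine UNLOAD statement to write the virtual table VT_TESTMYTABLE created above to the HANA Datalake Filesystem. The hdlfs path is a made-up example, and the exact UNLOAD clauses and format options (including PARQUET output) vary by release, so verify the syntax in the data lake Relational Engine SQL reference before using it.

-- Sketch: export the virtual table content to a CSV file in the HANA Datalake Filesystem
-- (path and options are examples; adjust to your environment and release)
UNLOAD SELECT * FROM VT_TESTMYTABLE
TO 'hdlfs:///exports/vt_testmytable.csv'
DELIMITED BY ','
QUOTES ON
ESCAPES OFF;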

      Export from HANA On-Prem and Import to HANA Datalake Filesystem in CSV and PARQUET Formats using HANA Cloud

      1. Creation of HANA On-Prem Remote Source in HANA Cloud

      Step 1: Login to the HANA Cloud Database

      • Open Database Explorer of your SAP HANA Cloud Database
      • Login to your HANA Cloud Database Instance and expand the Catalog to locate Remote Sources

      Step 2: Add a Remote Source

      • Right-click on Remote Sources and select Add Remote Source
      • Provide the necessary details
        • Source Name: REMOTE_SOURCE_NAME (This is an example, replace it with the appropriate name).
        • Adapter Name: HANA (ODBC).
        • Source Location: indexserver.

      Step 3: Adapter Properties Configuration

      • Default driver libodbcHDB.so will be selected automatically
      • Provide:
        • Server: hanahdb.onprem.sap.server (example, replace with your required server).
        • Port: 30241 (example, replace with the correct port number).

      Step 4: Extra Adapter Properties

      • Enter the configuration: useHaasSocksProxy=true;sccLocationId=SCC-LOC-01;encrypt=yes;sslValidateCertificate=False

      Note: SCC-LOC-01 is an example Cloud Connector name. Replace it with the correct one

      Step 5: Credentials Setup

      • Select Technical User as the credentials mode
      • Provide valid Username and Password

      Step 6: Save the Remote Source

      • After entering all the details, click Save
      • Alternatively, you can use the SQL query below to create the remote source:
      CREATE REMOTE SOURCE REMOTE_SOURCE_NAME
      ADAPTER "hanaodbc"
      CONFIGURATION 'ServerNode=hanahdb.onprem.sap.server:30241;useHaasSocksProxy=true;sccLocationId=SCC-LOC-01;encrypt=yes;sslValidateCertificate=False;'
      WITH CREDENTIAL TYPE 'PASSWORD'
      USING 'user=Username;password=Password';

      Step 7: Verify the Remote Source

      • Run the following SQL command to check if the newly created remote source is working
      CALL PUBLIC.CHECK_REMOTE_SOURCE('REMOTE_SOURCE_NAME');
      • If the command executes successfully without errors, the remote source is functional.

      Step 8: View the Remote Source

      • Expand the Catalog of the HANA Cloud Database Instance
      • Right-click on Remote Sources and select Show Remote Sources to confirm your connection

      2. Create a Virtual Table in HANA Cloud for HANA On-Prem Table

      Step 9: Open Remote Source

      • Right-click on the newly created Remote Source (REMOTE_SOURCE_NAME) and select Open

      Step 10: Search for On-Prem Table (Remote Objects)

      • Use the Schema and Object filters to search for the required On-Prem table
      • Click Search to display the list of available remote objects (tables)

      Step 11: Create Virtual Object

      • Select the desired table from the list
      • Click on Create Virtual Object(s)

      Step 12: Define Virtual Table Details

      • Provide a name for the virtual table
      • Select the target schema in your HANA Cloud Database
      • Click Create to finish the process

      The newly created virtual table in HANA Cloud can now be used for operations, including exporting data to the HANA Datalake Filesystem.

      3. Export / Import Operations from HANA Cloud to HANA Datalake Filesystem

      Export Virtual Table Data

• Once the virtual table is created in HANA Cloud, you can use SQL commands or tools to export its data, for example as sketched below.
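For illustration, here is a minimal sketch of such an export using the SAP HANA Cloud EXPORT INTO statement. The schema name MYSCHEMA and the hdlfs endpoint and path are placeholders, and writing to data lake Files additionally requires the appropriate credential/certificate setup, so treat this as a starting point and confirm the exact syntax in the SAP HANA Cloud documentation.

-- Sketch: export the virtual table to the HANA Datalake Filesystem as CSV
EXPORT INTO CSV FILE 'hdlfs://<your-hdlfs-endpoint>/exports/vt_testmytable.csv'
FROM MYSCHEMA.VT_TESTMYTABLE;

-- The same data written in PARQUET format
EXPORT INTO PARQUET FILE 'hdlfs://<your-hdlfs-endpoint>/exports/vt_testmytable.parquet'
FROM MYSCHEMA.VT_TESTMYTABLE;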

Integrating SAP S/4HANA with Kafka via SAP Advanced Event Mesh: Part 1 – Outbound Connection

      Introduction

      In today’s fast-paced business world, the ability to seamlessly communicate and exchange data between different systems is crucial. SAP Advanced Event Mesh (AEM) offers a robust solution for real-time event-driven communication across various SAP systems and external services. In this blog post, we’ll explore how to integrate S/4HANA with Kafka using SAP AEM for data streaming and event-driven architecture.

      Step-by-Step Guide

      Let’s break down the connection process between S/4 HANA and Kafka using SAP AEM into 6 sections, each explaining a key part of the connection setup to help you easily understand and implement the process.

      1. Login and Setup SAP AEM Service

        • First, log in to your BTP subaccount and create a subscription for AEM ensuring your user has the required roles. Once subscribed, log in to your SAP AEM tenant and navigate to Cluster Manager to create an Event Broker service. This service enables applications to publish or consume events. Below is the start page of SAP AEM after logging in.

        • Create an Event Broker Service by clicking on ‘Create Service’.

        • Provide a meaningful name for the service e.g. – ‘AEM_SERVICE_DEV’, select the service type, and choose the region. Click on “Create Service”.

• After the service is activated, you'll see the page shown below.

        • Navigate to “Manage” and then “Authentication”. Enable Client Certificate Authentication.

        2. Establishing Trust between S/4 HANA and AEM

        • To implement client certificate-based authentication, you need to establish trust between S/4 HANA and the AEM service broker. In your S/4 HANA system, execute the STRUST transaction to open the Trust Manager. Export the certificates from SSL client (Standard) and upload them into AEM in the next step.

        • Go to “Manage” and then “Certificate Authorities”. Upload the exported certificates by clicking on “Add Client Certificate Authority”.

        • Once done, all the certificates will be displayed as shown below.

        • Now, import the certificate chain of the SAP AEM service broker host and BTP-IS Subaccount host in the SSL client (Standard) in the STRUST transaction code.

        3. Broker Manager Configuration in AEM

        • Click on “Open Broker Manager” and log in using the “Management Editor Username” and “Management Editor Password”. You can find these access details under the “Status” section of the broker service.

• Once logged into Broker Manager, create a Queue, which will serve as a storage mechanism for messages received by SAP AEM. When S/4HANA generates events or messages, they are placed in the queue before being processed and forwarded to Kafka.

        • Provide a meaningful name for the Queue e.g. – ‘AEM_DEV’.
        • Assign a Subscription to the Queue. By creating a subscription, we ensure that our SAP AEM instance is subscribed to the relevant topics or events generated by S/4 HANA.

        • Go to “Access Control” and create a Client Username with the hostname from the leaf certificate maintained in SSL Client (Standard) in the STRUST.

        4. Configure AEM to Kafka connection through Kafka Sender Bridge

        • The Kafka Sender Bridge is required to facilitate communication between AEM and the target Kafka cluster by converting AEM messages into Kafka events and propagating them to the remote Kafka cluster.
        • To establish client certificate authentication between AEM and the Kafka cluster, you’ll need .jks files of the Keystore and Truststore from your target Kafka broker.
        • Open the command prompt and use the command ‘keytool’ to convert the .jks files into .p12 files. Here’s how:

        keytool -importkeystore -srckeystore C:\OpenSSL\<keystorefilename>.jks -destkeystore C:\OpenSSL\keystore.p12 -srcstoretype jks -deststoretype pkcs12

        keytool -importkeystore -srckeystore C:\OpenSSL\<truststorefilename>.jks -destkeystore C:\OpenSSL\truststore.p12 -srcstoretype jks -deststoretype pkcs12

        • Once converted, copy these .p12 files to the OpenSSL -> Bin folder.
        • Now, navigate to the ‘OpenSSL’ directory and convert these .p12 files to .pem files using the commands below:

        openssl pkcs12 -in keystore.p12 -out keystore.pem

        openssl pkcs12 -in truststore.p12 -out truststore.pem

        • You’ll need to set a passphrase during this process. Note: Remember this passphrase, as you’ll need it for client certificate authentication.
• From the ‘truststore.pem’ file, copy the content of the root and leaf certificates and save them as .cer files. Add these to our service broker under “Manage” -> “Certificate Authorities” -> “Domain Certificate Authorities”.

        • Now, navigate inside Broker Manager to “Kafka Bridges” and create a “Kafka Sender”.

• Add the Kafka broker host and port details in the ‘Bootstrap Address List’, then copy the contents of the ‘keystore.pem’ file and paste them under Client Certificate Authentication -> Content as shown below. Additionally, enter the passphrase used when converting the .p12 file to .pem in the ‘Password’ field.

        • Once the Kafka Sender is created, go inside, and click on “Queue Binding”.

        • Select our queue – ‘AEM_DEV’ created in section 3.

• Go inside the Queue Binding created in the earlier step and add the topic name of the target Kafka cluster in the “Remote Topic” field.

        • Confirm whether the Kafka connection is up and running.

        5. Configure S/4 HANA to SAP AEM connection

• Now, to establish a connection from S/4HANA to AEM, go to transaction code SM59, create a type-G RFC destination, and enter the host and port of the SAP AEM service broker.

        • In transaction code /IWXBE/CONFIG, create Channel configuration in the S/4 HANA system by clicking on ‘via Service Key -> Advanced’ and assign the RFC destination created in the earlier step. In the ‘Service Key’ section enter the JSON content of the service key created using ‘aem-validation-service-plan’ instance in BTP cockpit.

        • Save the above changes and activate the channel.
        • Create an outbound binding and assign any standard topic. For example, select “Business Partner”. So whenever a Business Partner is newly created or modified, a standard event will be raised through this outbound channel.

        6. Testing the end-to-end connection

        • To test the end-to-end connection, go to transaction code BP and create a Business Partner. Click on save.

        • Once saved, an event should be raised. You can check this by going to transaction code /IWXBE/EEE_SUPPORT and then to /IWXBE/R_EVENT_MONITOR.

        • Select your AEM channel.
        • You will find a list of all events that were raised and sent to AEM.

        • Now, go to AEM. In the Kafka sender, you can see the message count in the sent section has increased. This means that the message was successfully received by AEM and then pushed to the Kafka cluster. Additionally, verify the message at the Kafka end.

• You can also navigate to the ‘Try-Me’ section, where you can set up the sender and receiver connection. Subsequently, you can subscribe to the topic at the receiver end and observe the incoming message from S/4HANA as shown below.

        Conclusion

        Through this blog, we’ve demonstrated the process of sending an event from SAP S/4HANA to Kafka via SAP AEM. Now, enterprises can leverage the power of event-driven architectures to drive innovation and efficiency in their operations.


Expose OData services/URL for Calculation View in SAP HANA

        In this blog post, we will learn how to build the XSODATA services used to expose our data model to the user interface.

        This tutorial is designed for SAP HANA on premise and SAP HANA, express edition. It is not designed for SAP HANA Cloud. I have used SAP HANA XS Advanced (SAP Web IDE).

When you want to expose data from a HANA database to a third party (in my case, the data was used to build a user interface application in ReactJS), the recommended best practice is to use OData.

XSODATA services belong to SAP's proprietary XSJS framework for creating OData services. To support XSJS in XS Advanced, SAP provides Node.js modules that supply XSJS compatibility.

        Before we proceed further, let’s have a quick look at our system.

        Our Project name is DUMMY_PROJECT and DB name is DUMMY_DATABASE.

There are two calculation views:

        1. CV_EMP – calculation view without parameter.

        Its Output: –

2. CV_EMP_WITH_PARAMETER – the same calculation view as above, but with an input parameter named IP_DEPARTMENT on the ‘DEPT_NAME’ column.

        Its Output: – With ‘FINANCE’ as an input for the Input Parameter IP_DEPARTMENT.
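Before exposing the views via OData, it can help to preview them directly in the SQL console. The snippet below is only a sketch: it assumes the calculation views are deployed to an HDI container whose schema is in the session search path, and it uses the standard PLACEHOLDER syntax for calculation view input parameters; quote the exact view names as they appear in your container.

-- Sketch: preview the calculation views from the SQL console
SELECT * FROM "CV_EMP";

-- Pass 'FINANCE' to the input parameter IP_DEPARTMENT of the second view
SELECT * FROM "CV_EMP_WITH_PARAMETER"
       ( PLACEHOLDER = ('$$IP_DEPARTMENT$$', 'FINANCE') );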

        Step 1: Create a Node.js module in your project

        Right-click on the project (DUMMY_PROJECT) and select New->Node.js Module.

Give it a name (DUMMY_ODATA) and click “Next”. On the next screen, add a description (Dummy OData Application) and be sure to enable XSJS Support by selecting the checkbox:

        Click “Finish” and validate that module is available under the project as shown below.

        Right click on Node.js module just created (DUMMY_ODATA) and build it.

        Step 2: Check Node.js module availability in mta.yaml file and set dependency from DB module

Double-click on the mta.yaml file and switch to the “MTA Editor” view; the Node.js module (DUMMY_ODATA) should be available there.

To link the Node.js module with the DB module, scroll down to the “Requires” section, hit the “+” icon, select our reporting DB container (DUMMY_DATABASE) so that it appears as in the screenshot below, and save the changes.

        Step 3: Create OData service for customer-facing CV

        1. For CV without Input Parameter
        2. For CV with Input Parameter
        • For CV without Input Parameter

Let's create a new folder under the “lib” folder and name it “odata”. This will be the place where we store the OData service files.

Right click on the newly created “odata” folder and create a new file with the “.xsodata” extension for our CV_EMP calculation view, named “reporting_employee.xsodata”.

        Double click on created file to open code editor and specify code as below.

        service
        {
        "DUMMY_PROJECT.DUMMY_DATABASE.models::CV_EMP" as "EMPLOYEE"
        with("EMP_CODE","EMP_NAME","DEPT_NAME","LOCATION","MANAGER");
        }

The service definition specifies the path to the calculation view and a “with” section listing the fields to expose; it can also specify which column is the key (in the current definition no key is defined).

        Save file and build Node.js module (DUMMY_ODATA in our sample).

• For CV with Input Parameter

Double-click the file created in the step above (“reporting_employee.xsodata”) to open the code editor and specify the code below.

        service
        {
        "DUMMY_PROJECT.DUMMY_DATABASE.models::CV_EMP_WITH_PARAMETER" as "EMPLOYEE_WITH_IP"
        with("EMP_CODE","EMP_NAME","DEPT_NAME","LOCATION","MANAGER")
        key ("DEPT_NAME") parameters via key and entity;
        }

The service definition again specifies the path to the calculation view and the “with” section listing the output fields; it also specifies the key column (“DEPT_NAME” in our sample), and for a CV with an input parameter we additionally declare how the parameters are passed (via key and entity).

        Save file and build Node.js module (DUMMY_ODATA in our sample).

        Step 4: Build Project and deploy to XS Advanced

        Right click on project name (DUMMY_PROJECT) and click on “Build”-> “Build”.

If the build is successful, we will see an appropriate message in the log.

Building the project creates or updates an .mtar file under the mta_archives folder.

        As a final step we need to deploy our project .mtar file to XS Advanced.

        Right click on it and click on Deploy -> Deploy to XS Advanced.

        Choose Organization and Space and click “Deploy”.

        After successful deployment we should see appropriate message and entry in logs.

        Step 5: Check in XSA-COCKPIT that our application is running

Click on Tools -> SAP HANA XS Advanced Cockpit. A page will open in a new tab.

        Click on the Organization Space and search with Node.js module / Application (DUMMY_ODATA in our sample).

        Click on our application name DUMMY_ODATA and copy application Routes link/URL which we will use:

        Click on this URL and a new tab will open with message something like this.

        Apart from this you can also get a link or URL by following the below steps.

        Right click on xsodata file (reporting_employee.xsodata) and click on “Run”-> “Run as Node.js Application”.

        After successful run we should see appropriate message and link/URL generated in logs.

        Click on this URL and a new tab will open with same message as above “Hello World“.

NOTE: The two generated links/URLs may differ, but both will work: the one generated from the XSA Cockpit points to the deployed version on the database host server, while the second points to the local run. It is preferable to use the link generated from the SAP HANA XS Advanced Cockpit.

        Step 6: Access the application through URL

        1. For CV without Input Parameter
        2. For CV with Input Parameter
• For CV without Input Parameter

To verify that the service/OData URL works correctly, open a link formed by combining:

        Link or URL generated in the above step (ABOVE_GENERATED_LINK) + Path to service as shown below: –

        ABOVE_GENERATED_LINK/odata/reporting_employee.xsodata/EMPLOYEE –> Paste this URL in a web browser to see data.

        Data/Output: –

• For CV with Input Parameter

        Link or URL generated in the above step (ABOVE_GENERATED_LINK) + Path to service + Input Parameter Value as shown below: –

ABOVE_GENERATED_LINK/odata/reporting_employee.xsodata/EMPLOYEE_WITH_IP(IP_DEPARTMENT=’FINANCE’) –> Paste this URL in a web browser to see data.

        Data/Output: – You will get the output in JSON format as shown below



BTP Destinations and SAP Build Apps to integrate SAP C4C & S/4HANA

        Introduction: In today’s digital world, businesses are looking for ways to streamline their processes and enhance their customer experience. One way to achieve this is through the integration of different systems. In this blog post, we will explore how to integrate SAP C4C and S/4 HANA using BTP destinations and SAP Build apps.

        To integrate SAP C4C and S/4 HANA, we can use BTP (Business Technology Platform) destinations and SAP Build Apps.

        BTP destinations are endpoints that define how to connect to a remote system or service. BTP Destinations are typically used in cloud-based scenarios where different cloud services need to communicate with each other securely. They provide a way to define the connection details for target systems such as the endpoint URL, authentication credentials, and other settings. BTP Destinations can be created and maintained using the SAP Cloud Platform cockpit or SAP Cloud SDK. They can be used in various scenarios, such as connecting to remote data sources, invoking external web services, or sending notifications to third-party systems.

        SAP Build Apps is a visual programming environment where citizen and professional developers can build enterprise-ready custom software without writing any code. It makes it easier for users to create engaging and functional SAP Fiori apps without the need for extensive technical expertise or coding knowledge. It provides a streamlined and collaborative design process that helps organizations deliver high-quality apps faster and more efficiently.

        You can sign up for a free trial of SAP Build to get hands-on experience with the tool. The trial account provides access to all the features and functionalities of SAP Build, allowing you to create and prototype your own SAP Fiori apps. You can find it here.

        Let us quickly get into our use case.

Use Case: An SAP Build app is embedded into the Agent Desktop screen in the C4C tenant, where you can view S/4HANA transactions such as Sales Orders, Customer Returns, and Outbound Deliveries for the selected customer, as shown below.

Let's take the example of integrating sales orders from S/4HANA into the C4C screen.

        Step 1: Set up the BTP Destination in the BTP Sub account.

        You can follow below steps:

        • Open the SAP Cloud Platform Cockpit and log in to your BTP sub-account.
        • Navigate to the “Destinations” page under the “Connectivity” tab.
        • Click on the “New Destination” button to create a new destination.
        • In the “Destination Name” field, enter a name for your destination.
        • In the “Type” field, select “HTTP” as the destination type.
        • In the “Description” field, enter a brief description of your destination.
        • In the “URL” field, enter the URL of the S/4HANA Sales order API (https://myXXXXXX.s4hana.ondemand.com/sap/opu/odata/sap/API_SALES_ORDER_SRV) that you want to integrate.
        • In the “Proxy Type” field, select “Internet” as the proxy type.
        • In the “Authentication” section, select “BasicAuthentication” as the authentication method.
        • Enter the username and password credentials for the API service.
        • In the “Additional Properties” section, add the following key-value pairs:
          • “WebIDEEnabled”: “true”
          • “HTML5.DynamicDestination” :”true”
          • “AppgyverEnabled” :”true”
        • These properties will allow you to access the API service using the SAP Web IDE.
        • Click on the “Save” button to save your destination.
• Once it is set up, you can click Check Connection to verify that the connection is successful.

Step 2: Create an AppGyver (SAP Build Apps) app with basic screens to display the values from S/4HANA. Add a list view to the page to display the results and a page parameter to read the Account ID and query the sales orders based on it. Here are the steps to follow.

        • Create a new page in Appgyver by selecting “Create New Page” from the Pages menu.
        • Name the page and select a layout that will suit your needs.
        • Add a container component to the page to hold the sales order query results.
        • You can create a page Parameter to read the Account id from the C4C Screen and pass that to S/4HANA system to query sales orders based on Account.
• Next, we have to enable BTP authentication: click on the AUTH tab -> Enable Authentication.
        • Add the BTP destination created in the Previous step by clicking on the Data tab -> Add integration.
        • Then select the BTP Destination you have created.
• Then install the integration, enable the data entity, and save it; the data resource will be added to your project.
        • Add a data variable to the page to store the query results. You can select the Sales Order Data resource you have added above.
        • Bind the container component to the data variable using the “Repeat with” binding option. This will cause the container to display a list of items based on the data returned by the query.
        • Customize the appearance of the container and its child components (e.g., text elements, buttons, etc.) to display the sales order data in a clear and intuitive way.
        • Test the page by previewing it in the Appgyver preview app or by deploying it to a test environment. If necessary, adjust the page and data variables until the desired results are achieved.
• Deploy the app using the Open Build Service and add the deployed URL as a mashup in the C4C Agent Desktop screen.
        • You can configure the Mash up in Agent Desktop as a tab or in the Account Screen as a tab by passing the Account ID.
        • You can repeat the same steps to get any other S/4HANA transactional data like Customer returns, Outbound Deliveries, Credit Memo Requests etc.

Supportability Tools for SAP HANA

        The SAP HANA supportability toolset provides a consolidated set of tools for performance analysis and troubleshooting.

        The tool is available as an extension for Microsoft Visual Studio Code and includes a range of reports that can be used for online and offline analysis.

        Key Value

        Simple

        The tool integrates SAP HANA database knowledge from SAP notes and SAP HANA database experts.
Users can analyze HANA database related issues much more easily.

Consolidated

The tool supports both online and offline analysis. Users can analyze monitoring views and statistics tables, which require database access, as well as trace logs, which do not.

The tool utilizes all the information available for analysis purposes.

        Analysis Flow

Users can move from pinpointing an issue to root-cause analysis, with all steps in one tool.

        Feature Summary

        An overview of the tools and features available is shown below:

        How to Install

        The SAP HANA supportability tools need to be installed as an extension in Microsoft Visual Studio Code.

        Getting Started

        Create a Work Folder

To work in offline analysis mode, you need at least one work folder into which you can import your folders and files. In the Resource Explorer, create a work folder and then import a folder or file, for example a full system information dump (FSID) file.

        1. Create a folder

        2. Import file to the created “test” folder

        Connect a Work Folder to an SAP HANA Database

        To use the online features in the SAP HANA supportability tools, you must connect a work folder to an SAP HANA system. You can use the SAP HANA Database Explorer extension to manage a list of database connections.

        You can manage and use connections as follows:

        1. In the SAP HANA Database Explorer extension, manage a list of database connections.

        2. In the SAP HANA supportability tools, connect a work folder to an SAP HANA database by selecting one of the defined connections from the database list.

3. The Statement Overview page and Object Dependencies page are enabled after connecting to an SAP HANA database.

Here's a short demonstration of creating a work folder, importing files into it, and connecting it to an SAP HANA database:

        Welcome Page

        The welcome page provides you with an overview of your work folders and allows you to access the walkthrough page to quickly learn the tool’s features and functionalities.

        The welcome page is displayed when you open the supportability tools for SAP HANA or enter welcome in View Command Palette and select Supportability tools for SAP HANA: Show Welcome Page.

        The welcome page dashboard is composed of two main sections:

        Work Folders

        Your individual work folders are listed as cards.

        Get to Know the Supportability Tools for SAP HANA

        Select Walkthrough Page to get an overview of the features available in the tool. The walkthroughs let you become familiar with the different functions and easily navigate through the tool.

        Online Analysis

        Statement Overview

        The statement overview provides the most relevant and important information about the top SQL statements in the database. The results of the individual reports are presented in a visualized form, making it easier to identify critical statements and determine the source of a problem.

        Starting from the top SQL statements, the reports let you navigate down into the specific statement details. You can customize the time range and you can select various key figures for the analysis.

        The following reports are available, each shown on a separate tab:

        • Top SQL
        • Expensive Statements
        • Executed Statements
        • Active Statements
        • Plan Trace
        • Plan Stability

        The statement overview is only available in online mode, that is, when the work folder is connected to an SAP HANA database. For more information, see Connect a Work Folder to a Database.

        Top SQL

        The Top SQL page lets you analyze the most critical SQL statements running in the database. The report lists the top SQL statements in terms of different key figures, such as elapsed time, number of executions, or memory consumption.

1. In the toolbar of the Top SQL page, select the time range; in the Source dropdown list, select a source depending on whether the SQL statement issues are in the past or present; and in the Show dropdown list, specify a Top N selection.
        2. Use the pie chart to select a KPI.
        3. The selected KPI filters the data of the TOP N rows and displays the resulting dataset as a table.

        You can select a row in the table to display details about the selected statement in the specific sections below.

        Statement Detail

        In this section, review the detailed information about the selected statement. The source of the data is the system view SYS.M_SQL_PLAN_CACHE.
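If you want to cross-check the report against the underlying view, a simple query along the following lines (a sketch using commonly available M_SQL_PLAN_CACHE columns; verify the column names for your HANA revision) lists the top cached statements by total execution time:

-- Sketch: top 10 cached plans by total execution time (times in microseconds)
SELECT TOP 10
       STATEMENT_STRING,
       EXECUTION_COUNT,
       TOTAL_EXECUTION_TIME,
       AVG_EXECUTION_TIME
FROM SYS.M_SQL_PLAN_CACHE
ORDER BY TOTAL_EXECUTION_TIME DESC;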

        Statement String

        In this section, review the full statement string of the selected statement. Click Open in SQL Analyzer to visualize the SQL statement from the current cached plan in the SQL analyzer.

        Heatmap trend chart

        In this section, view the heatmap trend chart of the selected statement. The historical plan cache statistics are taken from the _SYS_STATISTICS.HOST_SQL_PLAN_CACHE view.

        Statement trend chart

        In this section, view the plan cache history of the selected statement for the last 7 days. By default, the following charts are displayed: Execution time, Preparation Time, Service Network Request Count, and Service Network Request Duration. The trend charts show the minimum, maximum, and average statistics for a predefined set of values.

        Object Dependency Visualization

        The object dependency viewer provides a visualization of database objects and their object dependency hierarchy, making information otherwise only available in tabular form easier to understand.

        The object dependency viewer provides a graph structure showing the object dependency chains of tables, views, and stored procedures. It includes any assigned analytic privileges and provides their details in separate nodes (shown in blue). A simple example is shown below:

        The object dependency viewer is only available in online mode, that is, when the work folder is connected to an SAP HANA database. For more information, see Connect a Work Folder to a Database.

        You can export an object dependency graph as a DOT file for offline analysis.

        Offline Analysis

        Trace Overview

        The trace overview provides an overview of the imported traces of full system information dump (FSID) files. The report evaluates the current system status information from the dump files and provides the traces in a merged form for each component of the database.

        The trace overview provides the following sections:

        Trace List

        The trace list table provides a hierarchical view of the imported FSID (full system information dump) files, listing the imported root and the contained trace information. The imported trace information is provided as a merged form of the traces. The host, port, and service information indicates from where the trace was generated. The log start time and log end time information gives the start and end times of the merged chunks. For example, the index server trace for a certain port has multiple chunks, but the table shows a single row with a start and end time within the entire trace chunk.

        One or more traces can be selected in the table to see the occurrences in the component occurrence section or to merge the trace. A keyword-based trace file search can be used to filter the list.

        Dump List

        This list shows the runtime dumps and crash dumps if dumps exist in the imported FSID files. If there are no dumps, this section is not shown. If dumps exist, the dump files are listed with their generation time and issue type.

        Nameserver History Trace

        If the FSID file of a system database has been imported, the nameserver history trace view section visualizes the related nameserver_history.trc file. It can be used to find a suspicious point by narrowing down the time range. The selected time range is synchronized with the component occurrence section.

        The contents can also be updated by selecting other hosts or ports. The host dropdown lists all hosts of the imported trace files, and the port dropdown lists all ports of the selected host.

        Component Occurrence

        A stacked bar chart is used in this section to show how many trace rows were generated by which components during the available time range of the selected traces in the trace list section. The y-axis describes the number of trace rows that come from a specific component. If there are any dumps, the chart provides a vertical indicator (red arrow) showing the dump generation time and its name. A double-click on the indicator opens the dump file viewer for further analysis of the applicable dump file.

        Merge Trace

        Optionally use the merge trace feature by selecting traces in the trace list and a certain time range in component occurrence to merge the traces stacked in the selected service level with the given time range.

Analyze Trace (automatically detect known issues)

        Use the analyze trace feature to automatically detect known issues in trace files documented in SAP note 2380176 FAQ: SAP HANA Database Trace.

1. Click the Analyze Trace button.
2. Click Confirm in the pop-up window.

        Dump File Viewer

        The dump file viewer lets you analyze dump files to help troubleshoot the causes of error situations.

        The dump file viewer consists of three tabs, Auto Analysis, Threads and Call Stack, and Statistics Viewer, on which the results of analyzing the FSID (full system information dump) files or runtime dump files appear:

        Auto Analysis

        The auto analysis report is the most important feature of the dump file viewer. It categorizes the detected issues into one or more of ten different types and provides a type-specific analysis.

        The detected issues are categorized as follows:

        • OOM
        • Composite OOM
        • Crash
        • Many transactions blocked
        • High workload
        • Many running threads
        • Wait graph
        • Savepoint blocked
        • Index handle states
        • No fatal issue

        Threads and Call Stack

        The threads and call stack report provides information about individual threads and lets you compare the call stacks of a selected thread.

        Statistics Viewer

        The statistics viewer lets you display the data of the statistics tables associated with the trace files.

        SQL Trace Analysis

        The SQL trace visualization report simplifies the analysis of an SQL trace file by providing an overview of the top SQL statements executed in the database. It provides statistics on the different statement types, the executed transactions, as well as the accessed tables.

        The SQL trace analysis is a good starting point for understanding executed statements and their potential effects on performance, as well as for identifying potential performance bottlenecks at statement level.
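For context, the SQL trace file analyzed here is produced by the database's SQL trace, which is switched on and off via configuration parameters. The commands below are only a sketch using the standard 'sqltrace' section of indexserver.ini; parameter names and additional options (such as user or application filters) should be verified for your revision, and the trace should be enabled only temporarily because the files can grow large.

-- Sketch: enable the SQL trace so that a trace file is written
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('sqltrace', 'trace') = 'on' WITH RECONFIGURE;

-- Disable it again once enough data has been collected
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('sqltrace', 'trace') = 'off' WITH RECONFIGURE;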

        The SQL trace visualization has the following tabs:

        Overview

        The Statistics overview by statement type section shows the number and percentage of different SQL statement types.

        The Top 10 statements by elapsed time section provides a pie chart and details of the top 10 SQL statements with the longest execution times.

The SQL Trace Details section provides overall information about the SQL trace file. This includes the overall SQL execution time, the number of SQL statements, the number of transactions, the longest transaction, and so on.

        Statements

        The Top 20 statements by elapsed time table provides the details of the top 20 SQL statements.

        The SQL statements of the longest transaction are shown in the Statements of longest transaction table.

        Tables

        This section gives the top 10 tables for SELECT, INSERT, and UPDATE statements.

        Executed Statements and Expensive Statements Reports

        You can use the executed statements and expensive statements reports to analyze the overall SQL load in the system and identify the most expensive statements.

        Abstract SQL Plan Visualization

        An abstract SQL plan (ASP) can be exported from the SAP HANA database as a JSON file and then imported into the SAP HANA supportability tools and visualized.

        An ASP is an abstraction of a query execution plan in JSON format. It is not a complete plan but contains logical plans and physical properties such as join algorithms.

        Each node shows the operator name, the unique operator ID in brackets, and any applied hints. Leaf nodes (shown with red borders) represent data sources. For data source nodes, the name of the table and the data source ID are shown instead of the operator name and operator ID. A data source is defined by its database, schema, and table.

        Topology Visualization

        The topology information contained in a full system information dump (FSID) file can be visualized in a tree-based view. This can make it easier to read and understand difficult and hierarchical topology data while you are investigating an SAP HANA database issue.

        The topology visualization comprises the following areas:

        General information

        The General Information section provides an overview of the database topology with scale-out, host, and replication information.

        Stored keywords

        The Stored Keywords section contains a user-defined list of important keywords or keywords of interest in a dedicated tree table.

        Main tree table area

        The main tree table area provides the topology information, listed as key-value pairs, in a hierarchical form based on the original data hierarchy existing in the topology information file. When an entry is selected within the table, the selected row path is updated, giving the path from the root to the selected item.

        At the top on the right, a global search field allows you to freely search the topology information entries using a path-based expression.

        Kernel Profiler Trace Visualization

        This report provides a visualization for DOT files generated by the kernel profiler. Kernel profiler trace files are generated in SAP HANA to help performance analysis.

        The kernel profiler traces provide information about CPU consumption and wait times of internal processes of the SAP HANA database. The kernel profiler trace is particularly helpful in cases where the system is hanging with high CPU usage, but it is also useful for single query performance issues. The files generated by the kernel profiler contain system-level statistics on the memory allocation of the system and information about which code causes high CPU consumption.

        The report page contains the following main sections:

        Detail information

        This section shows the most important trace information, including SAP HANA database details, information about busy, waiting, and inactive threads in the collections or sample runs, delays and errors, number of samples, sampling times, busy times, and CPU statistics.

        DOT graph

The DOT graph is a graphical presentation of the call stack hierarchy. It visualizes frequent or expensive execution paths during query processing and provides the following information:

        • Name of the function
        • CPU or wait time of the function and descendant
        • CPU or wait time of just the function (function only time)

        Tables

        The tables in this section provide system-level statistics on the memory allocation of the system.


12 Good Reasons to Move to SAP GTS, edition for SAP HANA

        In this blog, we will share 12 good reasons existing users of SAP Global Trade Services should move to the new and future-proof SAP Global Trade Services, edition for SAP HANA.

        A road map to the future
        12 Good Reasons to Move to SAP GTS, edition for SAP HANA

        Reason no. 1 – A Roadmap to the Future

        The first and maybe the most crucial reason you should move to SAP GTS, edition for SAP HANA, is that you will have to sooner or later. This is because the current SAP GTS 11.0 is close to ending its lifecycle and will be out of mainstream maintenance at the end of December 2025.

        As illustrated below, a new era has begun with the introduction of SAP GTS, edition for SAP HANA. With the move to the SAP HANA platform, we will see a new version of SAP GTS being released every second year, in line with the overall release strategy of SAP HANA, and we will see the second version, SAP GTS, edition for SAP HANA 2023 shipping this year (2023).

        A road map for the future

        Future innovations, and eventually also new legal requirements, will become available in SAP GTS, edition for SAP HANA only.

        Please note that SAP GTS, edition for SAP HANA connects to both SAP ECC and SAP S/4HANA, so there is no need to upgrade to SAP S/4HANA before embarking on the journey.

        Reason no 2 – Better User Experience

        Another reason for moving to SAP GTS, edition for SAP HANA, is the new user experience. SAP GTS, edition for SAP HANA, runs, from an end-user perspective, entirely from the SAP Fiori launchpad. SAP Fiori represents, of course, a change management challenge. Still, the Fiori user experience is undoubtedly a better one, consistent with SAP's strategy and coherent with SAP S/4HANA and other modern SAP solutions.

        SAP GTS, edition for SAP HANA Fiori Launchpad

        Reason no 3 – Improved Usability and Efficiency with Re-designed Fiori Apps

        With the SAP Fiori user experience also come 15 new Fiori apps, replacing more than 70 old SAP GUI transactions. These apps not only enhance usability; they also increase efficiency and provide greater transparency for users and corporations.

        You should not underestimate the change management effort of switching to SAP Fiori; however, for the new generation entering the workforce, SAP Fiori is clearly preferred compared to the classic SAP GUI.

        The native Fiori apps mainly cover compliance management, export, transit, and trade preference management. However, fans of the SAP GUI will still recognize major parts of GTS, and the transition should be relatively easy.

        Improved Usability and Efficiency with Re-designed Fiori Apps

        Reason no 4 – Embedded Analytics for Work, Reporting, and Audit

        One example of the improvements provided by SAP GTS, edition for SAP HANA, is the embedded analytics available in the Fiori apps. Not only do they efficiently communicate insight into the current state of business, but they also work as an efficient way of filtering and focusing on the most critical tasks. Users can easily toggle between the visual filter and more classic field-based (compact) filtering based on personal preference.

        Embedded Analytics for Work, Reporting, and Audit

        Reason no 5 – Improved concept of processing status, progress, and proposal

        In some apps, like the Manage Export Declarations app, we introduce new concepts for processing statuses and processing proposals. Combined with generic SAP Fiori collaboration options, SAP GTS, edition for SAP HANA, provides modern tools for team collaboration.

        Improved concept of processing status, progress, and proposal

        Reason no 6 – Enterprise Search for Selected Objects

        Every application that uses the ABAP platform as its underlying technology can use Enterprise Search in connection with SAP HANA for basic searches. Enterprise Search allows you to search all structured data in an application in a unified way. This also applies to SAP GTS, edition for SAP HANA, and represents a vast improvement in data availability and user experience. With a reference entered in the search field on top, users can easily find all documents or master data elements sharing the same reference.

        Enterprise Search for Selected Objects

        Reason no 7 – Renewed Trade Preference Management

        You can find one of the most significant improvements within Trade Preference Management. The management of long-term supplier declarations has been completely reworked.

        The old data model is deprecated, and the corresponding reports and transactions are deleted and replaced by new reports and Fiori applications. In addition, with the 2023 release, we support Preference Data Management for Product Identifiers. The ability to determine preferential origin on the batch level is a unique advantage of SAP GTS, edition for SAP HANA.

        The change in the data model makes the transition a bit harder; however, with the added value it provides, it represents a reasonable renewal of an important and quickly changing area within international trade.

        Renewed Trade Preference Management

        Reason no 8 – Improvements with SAP S/4HANA 2022 Order to Cash

        SAP GTS, edition for SAP HANA, also improves integration with SAP S/4HANA. For example, we persist the license-type indicator in sales orders and scheduling agreements.

        SAP GTS, edition for SAP HANA, actively pushes trade compliance check results and status updates to sales orders and scheduling agreements. SAP GTS ensures that the statuses always reflect reality, and with the data persisted in S/4HANA, we open up possibilities for better usage of the statuses without the need to make a status call to SAP GTS.

        Improvements with SAP S/4HANA 2022 Order to Cash

        Reason no 9 – Improvements with SAP S/4HANA TM Advanced Shipping & Receiving (ASR)

        International trade management and logistics are both practically and technically tightly connected. In SAP GTS, edition for SAP HANA, we will continue enhancing our integration with SAP TM and SAP EWM. Furthermore, as Advanced Shipping & Receiving becomes the new standard for shipments in SAP, we will see further end-to-end integrations between SAP TM, SAP EWM, and SAP GTS.

        Improvements with SAP S/4HANA TM Advanced Shipping and Receiving

        Reason no 10 – Simplified Operations – SAP HANA Search in SPL

        Even though SAP GTS 11.0 on an SAP HANA database can already utilize SAP HANA Search in Sanctioned Party List Screening (SPL), in addition to classic GTS Search and TREX, TREX is no longer an option with the introduction of SAP GTS, edition for SAP HANA. Therefore, as you move from TREX to SAP HANA Search, it is crucial to understand how SAP HANA Search differs. SAP HANA Search is undoubtedly a better solution, for instance eliminating the time-consuming and complex maintenance tasks otherwise required every time the SPL lists change.

        Simplified Operations – SAP HANA Search in SPL

        Reason no 11 – Platform & Technology

        Despite everything new, just like SAP GTS 11.0, the SAP GTS edition for SAP HANA is also a standalone solution, logically and physically separated from SAP ECC and SAP S/4HANA.
        It runs, however, on the strategic and future-proof SAP HANA platform.

        The integration with surrounding applications is more or less the same, using the same formats and protocols.

        And let us, once more, clarify one point. SAP GTS, edition for SAP HANA, integrates not only with SAP S/4HANA but also with SAP ERP and even non-SAP systems in the same way that SAP GTS 11.0 does. Therefore, there is no prerequisite to run SAP S/4HANA to run SAP GTS, edition for SAP HANA.

        Platform and Technology

        Reason no 12 – Co-Deployment with S/4HANA 2022

        With the next release, SAP GTS, edition for SAP HANA 2023, planned for availability later this year, we enable co-deployment on top of SAP S/4HANA version 2022 onwards.

        This will lower operational costs for running SAP GTS, both on-premise and in the cloud.

        Please be aware that with co-deployment, SAP GTS will still be a logically separated system, not to be confused with embedded solutions. Integrations between ERP and GTS will be as before. The benefits lie in the possibility of sharing infrastructure costs.

        Co-deployment option

        As you can see, at least twelve good reasons exist to move to SAP GTS, edition for SAP HANA. One reason is that you have to. The other eleven represent usability enhancements, enhanced functionality, and simplified operations.

        The post 12 Good Reasons to Move to SAP GTS, edition for SAP HANA appeared first on ERP Q&A.
        SAP HANA Development with SAP Cloud Application Programming Model using SAP Business Application Studio https://www.erpqna.com/sap-hana-development-with-sap-cloud-application-programming-model-using-sap-business-application-studio/ Sat, 03 Dec 2022 10:56:19 +0000
        Goal:

        This blog explains how you can leverage native SAP HANA development artifacts with CAP. In particular, we look at the use of calculation views inside Cloud Application Programming Model (CAP) applications. This includes OData access to calculation views.

        Solution:

        Pre-requisites: Set Up SAP Business Application Studio for Development

        • Launch the Business Application Studio (BAS) and choose Create Dev Space

        NOTE: In the SAP BTP trial and free tier you are limited to only two Dev Spaces and only one can be active at a time. If you have performed other tutorials, you might already have reached your maximum. In that case you might have to delete one of the other dev spaces in order to continue with this tutorial.

        • Select the Full Stack Application type, select the necessary extensions as shown in the image below, and click on Create Dev Space.
        Selection of the type – Full Stack Application in Business Application Studio

        It will then create a dev space, although it takes a couple of minutes to start. Once it is RUNNING, you can click on CAP_DEMO and start creating your projects.

        Newly created DEV space in BAS
        • Add a Cloud Foundry login connection to your space (1. plugin, 2. F1 wizard, 3. terminal/CF CLI)

        Log in to Cloud Foundry. There are several ways to log in to Cloud Foundry:

        1. In the plugins, go to the Cloud Foundry icon and click on the right arrow as shown in the picture.
        Cloud Foundry Login in BAS

        Fill in the credentials & necessary details such as Cloud Foundry Organization & Cloud Foundry Space.

        Enter Credentials and Dev Space details

        2. Using the Artifact Wizard by clicking on F1.

        Cloud Foundry login using Artifact Wizard (F1)

        3. Using the terminal: Execute the command,

        cf login

        Terminal – CF LOGIN
        • Create CAP Project: Once you have logged in to Cloud Foundry, start creating the CAP project by clicking on Start from Template (Help > Get Started > Start from Template) and then clicking on CAP Project.
        Choose from the template ( CAP Project )

        Set your runtime to Node.js, add the features selected in the image below, and click on Finish.

        It will then create a CAP Project with some sample folders and files.

        Project Explorer
        • Adding HANA Target to the Project: Open new terminal (Terminal > New Terminal) and execute the command

        cds add hana

        You can see the dependencies in the mta.yaml file.

        mta.yaml file
        • Adjust the content in the files mta.yaml & package.json

        Now change the path from gen/db -> db

        Change the path in mta.yaml

        Now, change the cds section of the package.json to the below block of code.

        Changes in the file package.json
        Before & After the addition of CDS Section in package.json
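        Since the screenshot is not reproduced here, the following is a minimal sketch of what the cds section of package.json typically looks like in this kind of CAP-on-HANA setup; the exact values in the original screenshots may differ:

        "cds": {
            "build": {
                "target": "."
            },
            "hana": {
                "deploy-format": "hdbtable"
            },
            "requires": {
                "db": {
                    "kind": "hana"
                }
            }
        }

        Roughly speaking, together with the gen/db -> db change in mta.yaml, this makes the CDS build place the generated HANA design-time artifacts inside the db module itself instead of a separate gen folder, and tells CDS to deploy entities as hdbtable artifacts against an SAP HANA database.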
        • Install the dependencies

        Open the terminal and execute

        • npm install ( NOTE: Skip the step if already installed )
        • npm install -g hana-cli (Open Source Utility called hana-cli )
        • hana-cli createModule
        Install the dependencies

        (OPTIONAL) You can clone your Git repository or continue with the project in the next steps.

        Initialize the Git Repository
        • View the files added from the CAP template

        1. src > data-model.cds

        data-model.cds

        2. srv > cat-service.cds

        cat-service.cds
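        The file contents are only shown as screenshots; for reference, the default CAP template generates a small bookshop model roughly like the following sketch (the exact paths and fields may differ slightly from the images, and the from path assumes the model lives in the db folder):

        // data-model.cds
        namespace my.bookshop;

        entity Books {
            key ID : Integer;
            title  : String;
            stock  : Integer;
        }

        // cat-service.cds
        using my.bookshop as my from '../db/data-model';

        service CatalogService {
            @readonly entity Books as projection on my.Books;
        }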

        Run the following command

        cds build

        cds build

        CDS artifacts are now converted to hdbview and hdbtable artifacts and you can find them in the src folder.

        CDS Artifacts in the Explorer

        Deploy these objects into the HANA database, creating the tables and views. To do so, bind the project to a database connection and an HDI container instance by clicking on the bind icon.

        The CAP tooling and the HANA tooling use distinct connections; they do not share the same one. Hence, binding must be done at two different points: (a) the SAP HANA project and (b) the run configurations.

        (a) Binding the HANA Project
        • Create Service Instance

        Select an SAP HANA Service and choose from the list displayed.

        SAP HANA Service

        Go to the plugin Run Configuration and bind the connection.

        (b) Binding in Run Configuration

        Select ‘Yes’ in the dialog box below.

        • Run the CAP Project

        Once deployed, go to Run Configurations and click on the run button. This opens another dialog box offering to open ‘Bookshop-1’ in a new tab.

        Run the CAP Project
        Application running at port 4004

        If you click on $metadata, you can view the technical description of the service.

        $metadata

        Click on Fiori Preview; the attributes to display have to be selected by clicking on the gear icon ⚙.

        Fiori Preview
        • Create Calculation View

        To create a calculation view, click on F1, which in turn opens a wizard to create the database artifact.

        Database Artifact Wizard – Create Calculation View

        In the Calculation View Editor, click on the + icon on the projection and add the table. On the right side, there is an icon to expand the details of the projection. Clicking on it opens a panel where you can map the table attributes by double-clicking on the MY_BOOKSHOP_BOOKS header.

        Calculation View Editor

        Once deployed, you can view the Calculation View in the Database explorer.

        Database Explorer
        • Edit the .cds files ( data-model.cds & cat-service.cds )
        data-model.cds
        cat-service.cds
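        The edited files are only shown as screenshots; as a sketch, and assuming the entity and calculation view names used in the following steps, the additions could look roughly like this (the @cds.persistence annotations tell CAP that the object already exists in the database as a calculation view, so no table is generated for it):

        // data-model.cds (addition)
        @cds.persistence.exists
        @cds.persistence.calcview
        entity CV_BOOK {
            key ID    : Integer;
                TITLE : String;
                STOCK : Integer;
        }

        // cat-service.cds (addition, inside the CatalogService block)
        @readonly entity CVBook as projection on my.CV_BOOK;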
        • Create Synonym

        Create a .hdbsynonym file in your src folder.

        Database Artifact Wizard – Create Synonym

        Click on <click_to_add>, enter the synonym name MY_BOOKSHOP_CV_BOOK, and click on the object field; the table below opens. Enter ** in the text field, choose the calculation view cv_book, and click Finish.

        If you open the cap.hdbsynonym in the text editor, it will be as follows.

        Synonym – cap.synonym
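        The screenshot is not reproduced here; based on the names chosen above, the generated cap.hdbsynonym should look roughly like this:

        {
            "MY_BOOKSHOP_CV_BOOK": {
                "target": {
                    "object": "cv_book"
                }
            }
        }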

        Deploy the project by executing the command cds deploy -2 hana. Refresh the browser, and you will find the new cv_book entity.

        Application with new entity
        The post SAP HANA Development with SAP Cloud Application Programming Model using SAP Business Application Studio appeared first on ERP Q&A.
        Installation Eclipse and configuration ADT tool https://www.erpqna.com/installation-eclipse-and-configuration-adt-tool/ Sun, 06 Nov 2022 04:11:35 +0000
        Introduction

        Before diving deep into the technical details, let me briefly explain why we need to do this. The ABAP Development Tools (ADT) are an Eclipse-based toolset provided by SAP, and you will need ADT if you have to work on ABAP CDS views. Even though CDS views are embedded into the ABAP Dictionary, there are some differences in the features available between the Eclipse and Data Dictionary environments.

        A detailed example of how to connect to an S/4HANA 1809 SAP server from Eclipse is explained in this blog.

        Follow the steps below to install Eclipse and set up ADT.

        Install Eclipse

        Download Eclipse using this link – https://www.eclipse.org/downloads/

        Double-click the Eclipse installation file downloaded above.

        We have selected Eclipse IDE for Enterprise Java Developers. You may also choose the first option, i.e. Eclipse IDE for Java Developers.

        Accept the Terms and Conditions

        Once the installation is complete, launch Eclipse.

        This is how Eclipse should look once the launch is complete.

        If you want to check the version of your Eclipse, you may go to Help -> About.

        Ours is the 2020-03 version.

        Install ADT (ABAP Development Tool) in Eclipse

        Get the latest ADT tools from the link below.

        https://tools.eu1.hana.ondemand.com/#abap

        In Eclipse, choose Help -> Install New Software and enter the URL below if you installed the latest Eclipse 2020-03 like us:

        https://tools.hana.ondemand.com/latest

        Hit Finish and check the progress at the bottom right corner.

        Once it is complete, it will ask to restart Eclipse (not the computer).

        Wait for Eclipse to start again.

        Open ABAP Perspective in Eclipse ADT

        Choose ABAP in the list of other perspectives.

        In the top left corner, the ABAP perspective is now visible.

        Connect to S/4HANA 1809 SAP Server from Eclipse

        Click on Create ABAP Project

        It will show the SAP systems from the local SAP GUI. You need to have SAP GUI installed and the S/4HANA server details added to it.

        Provide the SAP user ID and password given to you. If you have not received them yet, please ignore this step.

        Hit the Finish button.

        Add Your Favourite Package to Project

        Add your favourite package and hit Finish, or add the package later by right-clicking and choosing Add Package.

        To find anything in Eclipse, you may use the shortcut Ctrl + Shift + A.

        The post Installation Eclipse and configuration ADT tool appeared first on ERP Q&A.
        AWS Serverless Lambda Functions to integrate S3 Buckets with SAP S/4 HANA using OData APIs https://www.erpqna.com/aws-serverless-lambda-functions-to-integrate-s3-buckets-with-sap-s-4-hana-using-odata-apis/ Wed, 02 Nov 2022 11:03:21 +0000
        Introduction

        This blog shows how to integrate AWS S3 buckets, AWS Lambda functions, and SAP S/4 HANA OData APIs.

        For those unfamiliar with AWS S3 and Lambda functions, here are descriptions from the AWS websites:

        AWS (Amazon Web Services) Lambda is a serverless, event-driven service that allows you to execute any type of application logic dynamically without the need for dedicated servers. Lambda functions can be triggered from most AWS services, and you only pay for what you use.

        AWS (Amazon Web Services) Simple Storage Service (Amazon S3) is one of the leading cloud-based object storage solutions and can be used for all data sizes, from databases to data lakes and IoT. Its scalability, reliability, simplicity, and flexibility, combined with full API functionality, are state-of-the-art.

        Architecture

        Here is an architecture diagram for the prototype. Normally, the SAP S/4 HANA system would be behind a firewall, and there would be another layer to safely expose the services. For the purpose of simplifying the prototype, we keep the scenario focused on AWS S3, AWS Lambda, and the SAP S/4 HANA OData API call.

        AWS S3 Bucket triggering Lambda function to SAP OData API call

        Background

        I was having an architecture discussion last week with Jesse, a good friend of mine with whom I used to work at SAP Labs in Palo Alto. One of the things I have really enjoyed over the years is talking to him about new technologies and how to leverage them in the context of SAP. We have implemented many cutting-edge SAP integration projects and witnessed the evolution of SAP integration, starting from the lowest level of C, C++, Java, and Microsoft COM/.Net components in the RFC SDK, Business Connector, BizTalk Integration, XI, PI, PO, CPI to the latest SAP BTP Cloud Integration with the SAP API Business Hub and SAP API Portal.

        My friend mentioned an interesting scenario where he built serverless Lambda functions on AWS that could be triggered on a scheduled basis to run some logic, which in his case was to check the price of a certain item on a website and trigger an alert if the price reached a certain threshold – similar to the Kayak application, which checks prices across Expedia, Travelocity, etc. No need for a dedicated server… no need to build or maintain a server… just the logic code run on demand… What a powerful and amazing concept!

        I immediately started thinking of all of the possibilities and how it could be used for an SAP-focused integration scenario. Let's drop a sales order file into an AWS S3 bucket and have it immediately trigger an AWS Lambda function written in Node.js that invokes an SAP sales order OData RESTful service, and have the response dropped into another AWS S3 bucket. You can picture this as the evolution of the traditional integration scenario whereby a file is dropped on an SFTP server, a middleware server regularly polls this folder for new files, and the middleware then calls the backend SAP system through a BAPI, a custom RFC, or the modern OData approach. We get away from the antiquated SFTP servers and use the more versatile, flexible, and powerful S3 bucket technology offering. We get away from the older SAP BAPIs (BAPI_SALESORDER_CREATEFROMDAT2) and move to the latest SAP OData API. Evolution…..

        Getting started with the fully activated SAP S/4 HANA 2021 appliance

        So I convinced Roland, another SAP colleague whom I used to work with at SAP Labs in Palo Alto, to get this scenario up and running. We decided to spin up a new trial SAP S/4 HANA 2021 full appliance on the AWS Cloud through the SAP Cloud Appliance Library and start the adventure.

        SAP S/4 HANA 2021 FPS02 Fully Activate Appliance on SAP CAL

        SAP Cloud Appliance Library

        If you have access to an SAP S/4 HANA system that you can call (through SAP BTP Cloud integration, SAP API Hub, or a reverse proxy that exposes your cloud or on premise SAP S/4 HANA system) from the internet, then no need to spin up an appliance. If you do not have an SAP S/4 HANA system, then definitely spin one up and follow this blog to implement the integration scenario. It is easy to spin up a test SAP S/4 HANA fully activated appliance and it only takes a matter of minutes. Getting an SAP system up and running to do prototyping would normally take weeks of planning the hardware, purchasing the SAP software, installing, patching, and configuring. Now it takes minutes with the deployment capabilities through AWS, Azure and Google. The new images on SAP’s cloud appliance library are really impressive – the SAP Fiori launchpad runs correctly right away and as for the remote desktop machine, even the Eclipse installation can be triggered through a few mouse clicks which will load the SAP ADT (ABAP Development Tools). All of the SAP OData services and SAP Fiori apps are enabled by default which is very helpful! I have deployed many of these test appliances in the past since this capability was introduced but it still used to take time to get it set up to start working on a prototype development. Not anymore. Here you can see the SAP S/4 HANA instance and the remote desktop instance running. I shut the SAP Business Objects BI Platform and SAP NetWeaver instances off since they are not needed for this demo,

        SAP S/4 HANA instances running on AWS Cloud

        SAP Sales Order JSON Request and SAP OData API Service

        Here is the SAP sales order request JSON that we drop into the AWS S3 bucket. Note that this JSON request works for invoking the standard SAP sales order service API_SALES_ORDER_SRV to create a sales order in the fully activated appliance. For this project, we do not want to add any complexity by adding mapping requirements. In most common integration scenarios, there may be mapping required from other formats such as cXML, OAG XML, etc. Also, the SAP S/4 HANA system is normally not exposed directly but through SAP BTP Cloud Integration or SAP API Hub, which adds a layer of security control over the interfaces. For now we just want to expose the API through HTTPS to the Lambda function.

        {
           "SalesOrderType":"OR",
           "SalesOrganization":"1010",
           "DistributionChannel":"10",
           "OrganizationDivision":"00",
           "SalesGroup":"",
           "SalesOffice":"",
           "SalesDistrict":"",
           "SoldToParty":"10100011",
           "PurchaseOrderByCustomer":"File dropped into S3 Bucket",
           "to_Item":[
              {
                 "SalesOrderItem":"10",
                 "Material":"TG11",
                 "RequestedQuantity":"1"
              },
              {
                 "SalesOrderItem":"20",
                 "Material":"TG12",
                 "RequestedQuantity":"5"
              }
           ]
        }

        To verify that the SAP sales order service is running, run transaction code /IWFND/MAINT_SERVICE and check to see that the API_SALES_ORDER_SRV (Sales Order A2X) highlighted below is running. If it is not active, make sure to activate it through this transaction code.

        SAP Sales order service – API_SALES_ORDER_SRV

        AWS S3 Bucket for uploading the sales order request file

        Here is our ltc-inbound S3 bucket, to which we will upload the file:

        S3 Bucket ltc-inbound

        When we upload the file, this automatically triggers the lambda function. We will show how to do this later in the document:

        File uploaded to the S3 Bucket

        AWS CloudWatch Monitoring View

        Once the Lambda function is triggered, you can see the log in the AWS CloudWatch monitoring tool:

        AWS CloudWatch Lambda function execution log

        If you open up the log details, you can see the logging we have performed from our Node.js code – “writing sales order 4969 to S3 bucket ltc-outbound”.

        Lambda function detailed logs on AWS CloudWatch

        AWS S3 Bucket for writing the sales order response file

        In our Lambda function, we save the sales order response file in the ltc-outbound S3 bucket. Note that we have parsed out the order number and named the file with it to make it easier to discern amongst other files.

        Sales Order Response saved to AWS S3 Bucket

        Sales Order response JSON file

        Here is the response sales order JSON:

        {
            "d":
            {
                "__metadata":
                {
                    "id": "https://**.**.***.*:*****/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder('4969')",
                    "uri": "https://**.**.***.*:*****/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder('4969')",
                    "type": "API_SALES_ORDER_SRV.A_SalesOrderType",
                    "etag": "W/\"datetimeoffset'2022-10-24T06%3A05%3A25.7050130Z'\""
                },
                "SalesOrder": "4969",
                "SalesOrderType": "OR",
                "SalesOrganization": "1010",
                "DistributionChannel": "10",
                "OrganizationDivision": "00",
                "SalesGroup": "",
                "SalesOffice": "",
                "SalesDistrict": "",
                "SoldToParty": "10100011",
                "CreationDate": null,
                "CreatedByUser": "",
                "LastChangeDate": null,
                "SenderBusinessSystemName": "",
                "ExternalDocumentID": "",
                "LastChangeDateTime": "/Date(1666591525705+0000)/",
                "ExternalDocLastChangeDateTime": null,
                "PurchaseOrderByCustomer": "File dropped into S3 J1",
                "PurchaseOrderByShipToParty": "",
                "CustomerPurchaseOrderType": "",
                "CustomerPurchaseOrderDate": null,
                "SalesOrderDate": "/Date(1666569600000)/",
                "TotalNetAmount": "105.30",
                "OverallDeliveryStatus": "",
                "TotalBlockStatus": "",
                "OverallOrdReltdBillgStatus": "",
                "OverallSDDocReferenceStatus": "",
                "TransactionCurrency": "EUR",
                "SDDocumentReason": "",
                "PricingDate": "/Date(1666569600000)/",
                "PriceDetnExchangeRate": "1.00000",
                "RequestedDeliveryDate": "/Date(1666569600000)/",
                "ShippingCondition": "01",
                "CompleteDeliveryIsDefined": false,
                "ShippingType": "",
                "HeaderBillingBlockReason": "",
                "DeliveryBlockReason": "",
                "DeliveryDateTypeRule": "",
                "IncotermsClassification": "EXW",
                "IncotermsTransferLocation": "Walldorf",
                "IncotermsLocation1": "Walldorf",
                "IncotermsLocation2": "",
                "IncotermsVersion": "",
                "CustomerPriceGroup": "",
                "PriceListType": "",
                "CustomerPaymentTerms": "0001",
                "PaymentMethod": "",
                "FixedValueDate": null,
                "AssignmentReference": "",
                "ReferenceSDDocument": "",
                "ReferenceSDDocumentCategory": "",
                "AccountingDocExternalReference": "",
                "CustomerAccountAssignmentGroup": "01",
                "AccountingExchangeRate": "0.00000",
                "CustomerGroup": "01",
                "AdditionalCustomerGroup1": "",
                "AdditionalCustomerGroup2": "",
                "AdditionalCustomerGroup3": "",
                "AdditionalCustomerGroup4": "",
                "AdditionalCustomerGroup5": "",
                "SlsDocIsRlvtForProofOfDeliv": false,
                "CustomerTaxClassification1": "",
                "CustomerTaxClassification2": "",
                "CustomerTaxClassification3": "",
                "CustomerTaxClassification4": "",
                "CustomerTaxClassification5": "",
                "CustomerTaxClassification6": "",
                "CustomerTaxClassification7": "",
                "CustomerTaxClassification8": "",
                "CustomerTaxClassification9": "",
                "TaxDepartureCountry": "",
                "VATRegistrationCountry": "",
                "SalesOrderApprovalReason": "",
                "SalesDocApprovalStatus": "",
                "OverallSDProcessStatus": "",
                "TotalCreditCheckStatus": "",
                "OverallTotalDeliveryStatus": "",
                "OverallSDDocumentRejectionSts": "",
                "BillingDocumentDate": "/Date(1666569600000)/",
                "ContractAccount": "",
                "AdditionalValueDays": "0",
                "CustomerPurchaseOrderSuplmnt": "",
                "ServicesRenderedDate": null,
                "to_Item":
                {
                    "results":
                    []
                },
                "to_Partner":
                {
                    "__deferred":
                    {
                        "uri": "https://**.**.***.*:*****/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder('4969')/to_Partner"
                    }
                },
                "to_PaymentPlanItemDetails":
                {
                    "__deferred":
                    {
                        "uri": "https://**.**.***.*:*****/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder('4969')/to_PaymentPlanItemDetails"
                    }
                },
                "to_PricingElement":
                {
                    "__deferred":
                    {
                        "uri": "https://**.**.***.*:*****/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder('4969')/to_PricingElement"
                    }
                },
                "to_RelatedObject":
                {
                    "__deferred":
                    {
                        "uri": "https://**.**.***.*:*****/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder('4969')/to_RelatedObject"
                    }
                },
                "to_Text":
                {
                    "__deferred":
                    {
                        "uri": "https://**.**.***.*:*****/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder('4969')/to_Text"
                    }
                }
            }
        }

        Sales Order response JSON file

        Sales Order in SAP S/4 HANA and transaction codes

        Here is the list of orders from the SAP VBAK sales order header table, viewed through transaction code SE16N. Note the order number 4969 captured in our AWS CloudWatch logs above.

        Table entries in SAP table VBAK – sales order header table

        We can bring up the sales order in SAP transaction code VA03:

        Sales order transaction VA03

        Here is the sales order – note the customer reference:

        SAP Transaction VA03 – Sales order display

        Lambda function details

        So here are the details of the lambda function.

        AWS Lambda Dashboard

        Click on the Create function button:

        Choose the Use a blueprint option to get the sample code and choose the Get S3 Object blueprint:

        Use the Blueprint to Get S3 Object
        Blueprint Get S3 Object

        Note that we also need to create a role – processOrderRole – which needs permissions to read the S3 bucket file content:

        processOrder Lambda function configuration

        Here is the S3 trigger that invokes the lambda function:

        S3 trigger
        Additional configuration for the lambda function

        Here is the auto-generated Node.js code that is triggered. This code gets the event from which you can get the S3 bucket name and the S3 object key which you can then use to read the file contents.

        Generated Lambda Node.js code
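        Since the generated code is only shown as a screenshot, here is a sketch of what the s3-get-object blueprint generated at the time; our final function below starts from this skeleton, and details may differ slightly between blueprint versions:

        // Sketch of the auto-generated s3-get-object blueprint (aws-sdk v2 era)
        const aws = require('aws-sdk')
        const s3 = new aws.S3({ apiVersion: '2006-03-01' })

        exports.handler = async (event) => {
          // The S3 event carries the bucket name and object key that triggered the function
          const bucket = event.Records[0].s3.bucket.name
          const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '))
          try {
            const { ContentType } = await s3.getObject({ Bucket: bucket, Key: key }).promise()
            console.log('CONTENT TYPE:', ContentType)
            return ContentType
          } catch (err) {
            console.log(err)
            throw new Error(`Error getting object ${key} from bucket ${bucket}.`)
          }
        }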

        Here is our Node.js code, which takes the input file and then makes the OData RESTful call to the SAP sales order service endpoint. A challenging part we encountered was making the call to the SAP system, which has a self-signed certificate. Also, note that since we are doing an HTTP POST to create the sales order, we need to pass in an x-csrf-token. We get the token from the metadata request using a GET, which is the first call to the system before the POST call.

        Here is the lambda function node.js code. You can reuse this code:

        // Author: Roland & Jay
        // Note: This code requires the request and lodash npm modules
        // Description: This code uses the AWS S3 events and objects and calls the SAP S/4 HANA sales order
        // OData API service
        
        console.log('Loading function')
        
        const aws = require('aws-sdk')
        const request = require('request')
        const {get} = require('lodash')
        
        const s3 = new aws.S3({ apiVersion: '2006-03-01' })
        
        
        exports.handler = async (event, context) => {
          //console.log('Received event:', JSON.stringify(event, null, 2))
        
          // Get the object from the event and show its content type
          const bucket = event.Records[0].s3.bucket.name
          const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '))
          const params = {
            Bucket: bucket,
            Key: key,
          }
          try {
            const obj = await s3.getObject(params).promise()
            const { ContentType, Body } = obj
            const body = JSON.parse(Body.toString())
            await processOrder(body)
            return ContentType
          } catch (err) {
            console.log(err)
            const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`
            console.log(message)
            throw new Error(message)
          }
        }
        
        
        const hostname = '**.**.***.*'
        const port = *****
        const interface = 'sap/opu/odata/sap/API_SALES_ORDER_SRV'
        const auth = {
          user: '*********',
          pass: '**********',
        }
        const bucket = 'ltc-outbound'
        
        const buildCallParameters = (url, request, method = 'GET', extraHeaders = {}, jar = request.jar(), token = 'fetch', body) => {
          console.log('build', method)
          const params = {
            url,
            jar,
            method,
            rejectUnauthorized: false,
            requestCert: true,
            agent: false,
            auth,
            headers: {
              'x-csrf-token': token,
              ...extraHeaders,
            },
          }
          return !body ? params : { ...params, body, json: true }
        }
        
        const httpCall = async (url, request, method = 'GET', extraHeaders = {}, jar, token, body) => {
          return new Promise((resolve, reject) => {
            const params = buildCallParameters(url, request, method, extraHeaders, jar, token, body)
            request(params,
              (error, response) => {
                if (error) {
                  return reject(error)
                }
                return resolve(response)
              })
          })
        }
        
        const postDataToSAP = async function (json, metaDataUrl, postUrl) {
          const jar = request.jar()
          const tokenResp = await httpCall(metaDataUrl, request, 'GET', {}, jar)
          const token = tokenResp.headers['x-csrf-token']
          console.log('token: ', token)
          const postResp = await httpCall(
            postUrl,
            request,
            'POST',
            { 'Content-Type': 'application/json' },
            jar,
            token,
            json,
          )
          return postResp
        }
        
        const processOrder = async (order) => {
          console.log('starting')
          try {
            const { body } = await postDataToSAP(
              order,
              `https://${hostname}:${port}/${interface}/$metadata`,
              `https://${hostname}:${port}/${interface}/A_SalesOrder`,
            )
            console.log('success: ', body)
            const orderNum = get(body, 'd.SalesOrder', 'error')
            console.log(`writing sales order ${orderNum} to S3 bucket ${bucket}`)
            await putObjectToS3(bucket, `${orderNum}.json`, JSON.stringify(body))
          } catch (error) {
            console.log('error:', error, ' error')
          }
        }
        
        
        const putObjectToS3 = (bucket, key, data) => {
          const params = {
            Bucket: bucket,
            Key: key,
            Body: data
          }
          return new Promise((resolve, reject) => {
            s3.putObject(params, (err, data) => {
              if (err) {
                return reject(err)
              }
              resolve(data)
            })
          })
        }

        This code makes backend calls with the request npm module using await, async functions, and promises. Two SAP backend calls are made: the first one gets the metadata and the x-csrf-token, and the second, main call creates the sales order. In order to run this code in Node.js, we need to add the npm libraries request and lodash to the Lambda project. To do this, create a directory in your filesystem, run the following two commands, and then zip up the node_modules folder and upload it to the Lambda function (see the command-line sketch after the image below):

        npm install request
        
        npm install lodash
        Upload the node_modules zip file with request and lodash npm modules
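        For reference, one way to prepare and package the function from the command line could be as follows; the handler file name (index.js here) is an assumption and must match the handler configured for your Lambda function, and including it in the zip keeps the code and its node_modules together in one deployment package:

        npm install request lodash
        zip -r lambda-deploy.zip index.js node_modules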

        Besides loading the code, you need to ensure that the Lambda function has permissions to read and write the files in the S3 buckets. The read permissions were created when creating the Lambda function above.

        Here is the role and policy to write out the file to the S3 bucket:

        processOrderRole

        We need to add a policy to the processOrderRole to allow writing to the S3 bucket:

        Attach policies
        Policy allow-sap-lambda-write-to-outbound
        Policy allow-sap-lambda-write-to-outbound-policy
        The post AWS Serverless Lambda Functions to integrate S3 Buckets with SAP S/4 HANA using OData APIs appeared first on ERP Q&A.