SAP Cloud Platform Integration - ERP Q&A

C_CPI_15 Certification Demystified: Your Roadmap to SAP Integration Proficiency

In today’s fast-paced digital landscape, staying ahead of the competition is crucial, especially in the dynamic world of SAP Integration Suite. If you aspire to excel in this field and stand out as a certified professional, obtaining the C_CPI_15 certification is a significant step in your career journey. This blog will take you through the ins and outs of C_CPI_15 certification preparation, offering valuable insights and actionable study tips to help you succeed.

The Importance of C_CPI_15 Certification in the SAP Integration Suite Field:

Unlocking Career Opportunities with the C_CPI_15 Certification:

The SAP Integration Suite field is highly competitive, with organizations constantly seeking experts who can seamlessly integrate their business processes. Having the C_CPI_15 certification on your resume is like holding a golden ticket. It validates your expertise and opens doors to lucrative job opportunities.

Demonstrate Expertise:

Certification is a tangible way to showcase your proficiency in SAP Integration Suite. It proves you have the knowledge and skills required to excel in this domain, giving you a competitive edge over non-certified peers.

Meet Industry Demands:

As businesses increasingly adopt SAP Integration Suite, the demand for certified professionals is rising. By becoming certified, you position yourself as a valuable asset in meeting the industry’s evolving needs.

What is the C_CPI_15 Certification All About?

The C_CPI_15 certification is a globally recognized credential offered by SAP. It validates your knowledge in SAP Integration Suite fundamentals, making you proficient in integration scenarios and the SAP Cloud Platform.

C_CPI_15 or the SAP Certified Development Associate – SAP Integration Suite certification examination confirms that the candidate possesses the fundamental and essential knowledge required for the SAP Integration Suite profile. The C_CPI_15 certification demonstrates that the candidate holds a comprehensive understanding and deep technical expertise necessary to actively contribute as a team member in a supervised capacity within a project.

This credential builds upon the foundational knowledge acquired through relevant SAP training. The certification awarded upon successfully passing the examination remains valid for 5 years. To enhance your chances of success, SAP recommends combining educational courses with practical experience in your certification exam preparation, as the questions will assess your ability to apply the knowledge acquired during your training.

Significance of the C_CPI_15 Certification:

Achieving the C_CPI_15 certification enhances your skills and allows career growth. Certified professionals often find themselves in roles such as Integration Consultants, Solution Architects, and Integration Managers, with opportunities for higher salaries and more challenging projects.

Practical Tips for C_CPI_15 Certification Preparation:

Set Clear Goals and Objectives:

Begin your preparation by setting clear, achievable goals. Define what you want to accomplish with the certification, whether career advancement or gaining deeper knowledge.

Cover All Topics Smoothly with the Help of C_CPI_15 Study Schedule:

Time management is crucial. Create a study schedule that helps you to cover all exam topics systematically. Allocate specific time slots for each subject.

Leverage Official SAP Resources:

SAP provides many resources, including documentation, guides, and sample questions. Use these official materials as your primary study sources.

Explore Online Courses and Tutorials:

Supplement your learning with online courses and tutorials. Many reputable platforms offer C_CPI_15 certification courses that provide in-depth insights.

Join Study Groups and Forums:

Engage with fellow aspirants by joining study groups and online forums. Discussing topics and sharing insights can enhance your understanding.

Practice with Hands-On Exercises:

Hands-on experience is invaluable. Practice integration scenarios and exercises to apply your theoretical knowledge in real-world situations.

Staying Updated with SAP Integration Suite Developments:

The tech industry evolves rapidly. Stay updated with the latest developments in SAP Integration Suite to ensure your knowledge remains current.

Experience the Actual Exam Structure with the C_CPI_15 Practice Test:

Simulate exam conditions by taking mock exams. This helps you gauge your readiness and identify areas that need improvement.

Connect with Professionals:

Connect with professionals who have already achieved the C_CPI_15 certification. Their experience and advice can be invaluable in your preparation.

Maintain A Healthy Work-Study Balance:

Don’t let stress overwhelm you. Maintain a healthy balance between your studies and other aspects of your life to stay focused and motivated.

Why Practice Tests are Crucial for C_CPI_15 Certification?

Identify Strengths and Weaknesses with the C_CPI_15 Certification:

As you take C_CPI_15 practice tests, you’ll encounter a wide range of questions that mimic the content and format of the actual certification exam. Through your performance on these tests, you’ll gain valuable insights into the specific areas of the C_CPI_15 syllabus where you excel and need improvement.

Identifying your strengths allows you to build on your existing knowledge, focusing less on topics you already grasp and more on those that require attention. On the other hand, recognizing your weaknesses helps you pinpoint areas that need dedicated study time and practice.

Know the C_CPI_15 Exam Format:

Certification exams often follow a specific structure, including the types of questions, time constraints, and the overall layout. Gaining familiarity with this format through C_CPI_15 practice tests reduces anxiety and uncertainty on exam day.

When you know what to expect, you can approach the exam more confidently and clearly understand how to navigate it. You’ll be less likely to be caught off guard by unfamiliar question styles or the pressure of time constraints.

Boost Confidence and Reduce Anxiety:

Certification exams can be nerve-wracking experiences, especially if you are not yet accustomed to the format or the pressure of the test environment. Practice tests are a safe space to hone your skills and build confidence.

Each practice test you complete becomes a small victory, boosting your belief in your abilities. Confidence is a powerful motivator; the more confident you feel, the better you’ll perform during the exam.

SAP Integration Suite: A Brief Overview

SAP Integration Suite is a powerful platform facilitating seamless integration between various organizational applications and systems. It is pivotal in modern business operations, enabling efficient data flow and process automation.

In today’s business landscape, agility and efficiency are paramount. SAP Integration Suite empowers organizations to streamline processes, reduce manual intervention, and enhance productivity.

Learn How C_CPI_15 Certification Aligns with SAP Integration Suite Skills:

The C_CPI_15 certification equips you with the knowledge and skills necessary to harness the full potential of the SAP Integration Suite. It proves your ability to design, implement, and manage integrations effectively.

Concluding Thoughts:

Earning the C_CPI_15 certification opens doors to exciting opportunities and enhances your expertise. So, confidently embark on the certification journey and reap the long-term benefits of becoming a certified SAP Integration Suite professional. Don’t forget to follow the effective tips to simplify your preparation journey.

C_CPI_14: Can You Rely on Practice Tests to Get through the SAP Cloud Platform Integration Certification?

Explore the C_CPI_14 practice test materials to familiarize yourself with the exam structure. Learn more on the exam acing materials and practical study tips through this blog.

What Is the C_CPI_14 Certification All About?

C_CPI_14 or the SAP Certified Development Associate – SAP Integration Suite certification exam proves the candidate possesses the basic and core-level knowledge needed to work as an SAP Integration Suite consultant.

The C_CPI_14 certification also proves that the candidate has the overall knowledge and in‐depth technical skills to join as a project team member under the senior consultant’s guidance. The C_CPI_14 certification focuses on the basic knowledge earned through related SAP training.

Syllabus Domains You Cover through the C_CPI_14 Certification Exam:

The C_CPI_14 exam covers the following topics-

  • API Provisioning
  • Integration Advisor capability within SAP Integration Suite
  • Consume and Process APIs
  • Model Extensions
  • Fundamentals

Preparation Tips to Pass the C_CPI_14 Exam:

Focus on Learning from the C_CPI_14 Training:

If you want to prepare in a better manner, joining the training could be of great help. Only studying and earning theoretical knowledge is not enough; therefore, join the training and boost your practical knowledge.

Plan out the C_CPI_14 Exam:

Proper exam planning is needed if you want to use your time productively. Register for the C_CPI_14 exam at the start so you know the exact exam date; planning then becomes easy. Allow at least two to three months to prepare for the exam.

Prepare the C_CPI_14 Syllabus without Skipping Any Section:

Whatever your knowledge base or how many plans you make, learning the syllabus topics plays the most important role in preparing for the C_CPI_14 exam. SAP certification syllabuses are weighted in such a way that you have little scope to skip topics or concentrate on only a few domains. Therefore, give attention to all the syllabus domains and try to learn them thoroughly.

Fix the Study Hours:

Do not study at random. If you want to pass the exam, focused daily study hours are needed. Therefore, set aside a few hours when you devote yourself completely to learning the syllabus. Carve out the most productive time from your daily routine and use those hours for study. Use your phone or the internet wisely during study hours to widen your knowledge, but stay away from distractions.

You Must Remember the Things You Learn:

You might feel very confident during exam preparation, but to score high you must be able to recall the topics until exam day. The simple habit of writing and preparing notes helps candidates remember topics for longer. Note-making is also beneficial if you want to revise topics quickly.

C_CPI_14 Practice Tests Help to Boost Your Confidence:

Knowing what to expect helps you avoid difficulties, and that holds true for C_CPI_14 exam preparation too. C_CPI_14 practice tests are designed so that you can experience the actual exam structure.

Therefore, cover the syllabus topics and enroll for C_CPI_14 practice test sessions to get insights regarding your preparation level.


Get real-exam experience through practice tests and gradually boost your confidence by acting on the guidance they provide. Many aspirants use C_CPI_14 dumps; they study from them but miss the valuable opportunity to learn about their strengths and weaknesses.

Overview of SAP Cloud Platform Integration:

The SAP Cloud Platform Integration Suite is a modular integration platform-as-a-service that helps in connecting the intelligent enterprise.

SAP Cloud Platform Integration integrates anything, anywhere, supporting application-to-application (A2A) and business-to-business (B2B) scenarios in real time.

Why Choose SAP Cloud Platform Integration?

Organizations are aware of the advantages that a cloud-based solution brings to the table, so the need for an integration platform to weave together the loose ends between different combinations of cloud-based and on-premise solutions becomes even more prominent. SAP Cloud Platform Integration is itself a cloud-based solution on the cloud platform; thus it benefits from all the advantages just outlined and provides reliable message exchange between all participants.

SAP Cloud Platform Integration is the solution of choice for connecting many kinds of workloads. Whether we talk about SAP applications on-premise or in the cloud, non-SAP solutions, or B2B integrations, SAP Cloud Platform Integration works as SAP’s strategic integration platform to help businesses successfully move to the cloud, and it helps strengthen an organization’s cloud strategy.

Bottom Line:

Certifications are helpful in two ways: they upgrade your skills in a specific solution, and they boost your career to a new level. Therefore, earn the C_CPI_14 certification to work more efficiently with SAP Cloud Platform Integration.

Propagate Hashmap values from primary iflow to secondary iflow using Soap adapter in SAP Cloud Integration

Introduction:

In various requirements in SAP Cloud Integration (SAP CPI), we need to use a HashMap (key-value pairs) to assign values at message-mapping runtime using Groovy scripts.

Here, we use Groovy scripts to initialize, load, and get values from HashMaps.

Now consider a scenario where we have created a HashMap object in the main iflow (primary iflow) and, from the main iflow, we need to call a secondary iflow (sub-iflow) via a SOAP adapter (one-way communication). We need the HashMap values set in the primary iflow to be passed over to the secondary iflow by leveraging the Allowed Header(s) runtime configuration of the iflow, so that the HashMap values can be used in the secondary iflow’s message mapping.

However, when we pass HashMap values through the Allowed Header(s) option from the primary iflow to the secondary iflow via the SOAP channel, the map arrives as a plain string header, so we need to reload it into a HashMap object via a Groovy script to make it usable in message mapping. Here, I have explained how to achieve this functionality step by step.

I have used Groovy scripts to initialize, load, and get values from the HashMap.

While demonstrating the step-by-step process, we will go through a few important technical aspects:

  1. Passing the value of a header set with setHeader() from one iflow to another.
  2. One-way communication from the primary iflow to the sub-iflow using the SOAP adapter.
  3. Retrieving the header value in the secondary iflow (sub-iflow) using getHeaders().
  4. Reloading the header object with HashMap values in the secondary iflow.

Developers can edit, modify, and enhance this sample process for other business requirements as well.

Prerequisites: You should have CPI tenant access and an understanding of integration flow development.

Design approach: To explain this scenario, I have created two sample iflows: the primary iflow (referred to as the main iflow) and the secondary iflow (referred to as the sub-iflow).

Main Iflow Processing:

Figure 1 below shows the high-level steps performed in the primary (main) iflow:

Figure 1: Main Iflow (Primary Iflow) Steps

Detailed steps of the main iflow (Figure 1):

Step 1 (Figure 1): Start Timer event: this runs the main iflow once upon deployment.

Steps 2, 3, and 4 (Figure 1): Request Reply, SuccessFactors channel, and Receiver component: these steps call the SuccessFactors system using the SuccessFactors OData adapter to fetch values from an SF entity. In our example, we use the SF OData entity “PicklistOption”, as below:

SF Channel Configuration Screenshot

Here, as an example, we are retrieving values for the picklist “personRelationshipType”.

Step 5 (Figure 1): Groovy script: the script below initializes the HashMap object “PicklistID_ExternalCode_Label” and loads it by looping over the incoming picklist values from the “PicklistOption” entity, creating key-value pair assignments. After the HashMap is loaded, the HashMap object is finally assigned to a property named “PicklistID_ExternalCode_Label”.

Here, key = picklistId + "_" + externalCode (for example, personRelationshipType_2)

and value = localeLabelValue (for example, Child)

(Note: I have kept the same name for the HashMap object and the property.)

/*This script is used for storing picklist values in a Hash Map*/
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import groovy.xml.XmlUtil;
import groovy.util.XmlSlurper;
def Message processData(Message message) {    
// HashMap initialisation to load Picklist external code to label
HashMap<String,String> PicklistID_ExternalCode_Label  = new HashMap<String,String>();
PicklistID_ExternalCode_Label.put("Dummy", "Dummy");

	def body = message.getBody(java.io.Reader);
	def list = new XmlSlurper().parse(body)
	def map = message.getProperties();

	list.PicklistOption.each {

		String picklistId = it.picklist.Picklist.picklistId.text();
		String externalCode = it.externalCode.text();
		String localeLabelValue = it.localeLabel.text();
		String keyObj = picklistId+"_"+externalCode;

		PicklistID_ExternalCode_Label.put(keyObj,localeLabelValue);
	}
	message.setProperty("PicklistID_ExternalCode_Label",PicklistID_ExternalCode_Label);
	return message;
}

Groovy Script Code Snippet (Initialize & Load Hashmap into a Property)

Step 6 (Figure 1): Content Modifier: used to assign the property value “PicklistID_ExternalCode_Label” to a header named “PicklistID_ExternalCode_Label”, which is then listed in the Allowed Header(s) runtime configuration so it can be passed to the secondary iflow.

Note: The property and the header are defined with the same name.

Set Header value from HashMap property value
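If you prefer to do this step in a script instead of a Content Modifier, a minimal Groovy sketch (not part of the original flow, and assuming only the property name used above) would look like this:

import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    // Copy the HashMap stored as an exchange property into a header of the same name,
    // so the Allowed Header(s) runtime configuration can pass it on to the secondary iflow.
    def pickListMap = message.getProperties().get("PicklistID_ExternalCode_Label");
    message.setHeader("PicklistID_ExternalCode_Label", pickListMap);
    return message;
}

Either way, note that once the message leaves over the SOAP channel the header is transported as the map’s string representation, which is why the secondary iflow has to parse it back (see Step 3 of the sub-iflow below).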

Steps 7, 8, 9, and 10 (Figure 1): Request Reply, SOAP receiver channel, Receiver component (for the sub-iflow call), and Message End step. These steps call the secondary iflow using the SOAP receiver channel. The channel configuration is shown in the images below.

SOAP receiver channel for calling the secondary iflow (sub-iflow)

Updating the Allowed Header(s) runtime configuration on the main iflow: this is used to pass the header values to the secondary iflow, as below:

Set Allowed Header(s) runtime configuration on the primary iflow

Secondary/Sub-Iflow Processing:

Figure 2 below shows the high-level steps performed in the secondary iflow (sub-iflow):

Figure 2: Secondary Iflow (Sub-Iflow) Steps

Detailed steps of the secondary iflow (Figure 2):

Steps 1 and 2 (Figure 2): Sender component and sender SOAP channel: this sender configuration receives data from the main iflow demonstrated above as a one-way communication; no response is sent back to the main iflow.

The sender channel configuration is shown below (refer to Figure 3):

Figure 3: Sender SOAP Channel Configuration, Sub-Iflow

Step 3 (Figure 2): Groovy script to reload the HashMap header values: this is the main script, which rebuilds the HashMap header from the string header value received via the Allowed Header(s) runtime configuration from the main iflow.

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
def Message processData(Message message) {
	
	def messageLog = messageLogFactory.getMessageLog(message);
// Get Header Hashmap from Main Process flow and replace with local hashmap header values
def Headers = message.getHeaders();
String value = Headers.get("PicklistID_ExternalCode_Label");

value = value.replaceAll("=,", "=Null,"); // Replace blank key values with Null word
//value = value.replaceAll(", ", ","); //// Replace comma and blank with only comma
value = value.substring(1, value.length()-1);           //remove curly brackets
String[] keyValuePairs = value.split(", ");              //split the string to create key-value pairs
Map<String,String> map = new HashMap<>();               
for(String pair : keyValuePairs)                        //iterate over the pairs
{
    String[] entry = pair.split("=");                   //split the pairs to get key and value 
    map.put(entry[0].trim(), entry[1].trim());          //add them to the hashmap and trim whitespaces
}

message.setHeader("PicklistID_ExternalCode_Label", map)
return message;
}

This code snippet retrieves the header value and reloads the HashMap header using setHeader().

Here, we replace blank HashMap values with the word “Null” so that the transformation can handle them when values are retrieved at the message mapping level in a subsequent step.

The trace below shows the value of the header “PicklistID_ExternalCode_Label” received via Allowed Headers before applying the above Groovy script:

Header value retrieved from the Allowed Header(s) runtime configuration of the main flow, before applying the Groovy script of Step 3 (Figure 2).

The trace below shows the value of the converted HashMap header “PicklistID_ExternalCode_Label” after applying the above Groovy script:

HashMap header value after applying the Groovy script of Step 3 (Figure 2).
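Since the trace screenshots are not reproduced here, the following illustrative sketch shows the shape of the conversion, reusing only values that appear elsewhere in this example (the actual entries depend on the picklist data):

Before (string header, i.e. the HashMap's toString() form):
{Dummy=Dummy, personRelationshipType_2=Child, personRelationshipType_10=Separated / Divorced Partner}

After (rebuilt HashMap header, shown here in Groovy map notation):
["Dummy":"Dummy", "personRelationshipType_2":"Child", "personRelationshipType_10":"Separated / Divorced Partner"]

Any entry whose value was blank in the original map would appear as “Null” after the conversion, because of the replaceAll("=,", "=Null,") step in the script.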

Step 4 (Figure 2): Allowed Header(s) runtime configuration: this parameter maintains the header name, matching the header name set in the main iflow, so that the value can be received from the main iflow; the Groovy script defined in Step 3 (Figure 2) uses this header name in getHeaders() for processing.

Step 5 (Figure 2): Content Modifier: used for testing purposes to place a test payload in the message body, as below:

<Records>
	<Rows>
		<PicklistTypeCode>10</PicklistTypeCode>
	</Rows>
	<Rows>
		<PicklistTypeCode>99</PicklistTypeCode>
	</Rows>
</Records>

Test XML input payload.

Step 6 (Figure 2): Message Mapping: shows how to retrieve values in the message mapping transformation from the HashMap header set in the previous step. A Groovy script retrieves the value from the stored HashMap object by passing in the HashMap key.

import com.sap.it.api.mapping.*;
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import groovy.xml.XmlUtil;
import groovy.util.XmlSlurper;

def String GetPerPersonRelationShipLocalLabel(String picklistId,String externalCode,MappingContext context) 

{ 
    String myKey = "";
    String Outputvalue = "";
HashMap<String,String> pickListMap = context.getHeader("PicklistID_ExternalCode_Label");
  
   myKey = picklistId+"_"+externalCode;
   Outputvalue = pickListMap.get(myKey);
   
	if (Outputvalue == null || Outputvalue == "Dummy" || Outputvalue == "" || Outputvalue  == "Null" )   
		
		{
	Outputvalue = "";
	
    }
    
return Outputvalue;


}

Message mapping using a Groovy script to retrieve the header value.

Here we pass two inputs to the Groovy script, which together form the HashMap key: the constant “personRelationshipType” and the value of the input XML field “PicklistTypeCode”.

Output Payload:

<?xml version="1.0" encoding="UTF-8"?>
<Records>
	<Rows>
		<PicklistTypeCode>10</PicklistTypeCode>
		<HashMapRetrivedTypeValue>Separated / Divorced Partner</HashMapRetrivedTypeValue>
	</Rows>
	<Rows>
		<PicklistTypeCode>99</PicklistTypeCode>
		<HashMapRetrivedTypeValue/>
	</Rows>
</Records>
Part 2: SAP Cloud Platform Integration for Data Services (CPI-DS)

This is the second blog post on CPI-DS, following the introduction. Refer to Part 1 for configuring the Data Services Agent, creating a datastore, and importing metadata objects.

Tasks, Processes, and Projects:

Follow the integration development workflow to set up and test moving data to the cloud.

Task: A task is a collection of one or more data flows that extract, transform, and load data to specific targets, and the connection and execution details that support those data flows.

Process: A process is an executable object that allows you to control the order in which your data is loaded.

Available Actions in Processes and Tasks: Some actions are possible for both processes and tasks, but some actions are possible only for one or the other.

Replicate a Task or Process: You can replicate an existing task or process to the same or different project.

Edit a Task or Process: An administrator or developer can edit a task or process as needed within the UI.

Create a Project:

Select the Projects tab -> Create New Project.

You can select Save & Close, which creates the project, or Save and Create Task, which creates a task along with the project.

Add a Task to the Project:

A task is a collection of one or more data flows that extract, transform, and load data to specific targets, and the connection and execution details that support those data flows. You can create tasks from scratch or from predefined templates.

Tasks contain the following information:

  • Name, description, and the project they belong to (Details tab).
  • Source and target datastores to be used in the task’s data flows (Connections tab).
  • One or more data flows (Data Flows tab).
  • Scripts and global variables applicable to all data flows in the task (Execution Properties tab).

Select the source datastore and the target datastore. Click the “Test Connection” button. Then click “Save” and select “Save and Define Dataflow”.

Add a Data Flow to the Task:

A data flow defines the movement and transformation of data from one or more sources to a single target. Within a data flow, transforms are used to define the changes to the data that are required by the target. When the task or process is executed, the data flow steps are executed in left-to-right order.

Although a data flow can have more than one data source, it can have only one target. This target must be an object in the target datastore that is associated with the data flow’s parent task.

The data flow’s Degree of Parallelism setting defines how many times each transform within the data flow replicates to process a parallel subset of data; the default value is 2.

Source & Target Mapping:

The Automap option maps all columns from the Input pane to columns with the same name in the Output pane (target).

Note: If the Input pane contains more than one source, Automap cannot be used.

Transform Operations:

A transform step applies a set of rules or operations to transform the data. You can specify or modify the operations that the software performs.

Data transformation can include the following operations:
  • Map columns from input to output
  • Join data from multiple sources
  • Choose (filter) the data to extract from sources
  • Perform functions on the data
  • Perform data nesting and unnesting
  • Construct XML Map iteration rules
  • Define a web service response

Join Tables:

You can use the Join tab to join two or more source tables. You specify join pairs and join conditions based on primary/foreign keys and column names.

To join two or more tables:

  • In the Edit Data Flow view, select the transform in which you want to perform the join.
  • If the tables you want to join are not already available in the Input pane, click New to add additional tables.
  • In the Transform Details, in the Join tab, click the plus icon to add a new join.
  • Select the tables you want to join and the join type.
  • Type a join condition (see the example after the note below).
  • Click Save.
  • If needed, create additional join conditions. Subsequent join pairs take the results of the previous join as the left source.

Note: In an ABAP Query, mixed inner and left outer joins are not supported.
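As an illustration (the table and column names here are hypothetical, not from the original post), a join condition is simply an expression equating the key columns of the two sources, for example:

SALES_ORDERS.CUSTOMER_ID = CUSTOMERS.CUSTOMER_ID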

Sort Data:

You can sort the order of your data by using the Order By tab.

To sort your data:

  • In the Edit Data Flow wizard, select the transform in which you want to sort your data. Sorting is supported in the Query, ABAP Query, and XML Map transforms.
  • Click the Order By tab.
  • From the Input pane, drag the column containing the data you want to use for sorting and drop it into the Order By table.
  • Specify whether you want to sort in ascending or descending order.
  • Add additional columns to the Order By tab and arrange them as necessary.
  • For example, you might choose to sort your data first by country in ascending order, and then by region in descending order.

Note: The data will be sorted in the order that the columns are listed in the Order By tab.

Filter Data:

You can filter or restrict your data using the Filter tab.

To filter your data:

  • In the Edit Data Flow wizard, select the transform in which you want to add a filter.
  • Click the Filter tab.
  • From the Input pane, drag the column containing the data you want to filter and drop it in the Filter field.
  • As needed, type filter conditions or use the built-in functions.

Examples of filter conditions are shown below.
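The original post presented its examples in a table that is not reproduced here; as hypothetical illustrations (column names and values invented for this sketch), filter conditions are simple comparison expressions on input columns:

CUSTOMERS.COUNTRY = 'US'
SALES_ORDERS.ORDER_AMOUNT > 1000
CUSTOMERS.COUNTRY = 'US' AND SALES_ORDERS.ORDER_AMOUNT > 1000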

Open the Data Flow Editor:

Open the data flow editor to design and debug data flows. Follow the steps below to open a data flow for editing:

  • From the Projects tab, expand the project that contains the task and data flow you want to edit.
  • Select the task that contains the data flow you want to edit and click Edit.
  • From the Data Flows tab of the task, select a data flow and click Actions > Edit.
  • The data flow editor opens.

Transform Types in Dataflow:

Validate the Data Flow:

Select Validate.

Run the Task:

Select the task and click “Run Now”.

Replicate a Task or Process:

You can replicate an existing task or process to the same or different project.

To replicate a task or process, select the task in the Projects tab and choose Replicate from the More Actions menu.

When you replicate a task, copies of the task and all data flows that it contains are created and added to the target project you select as the replication target.

When you replicate a process, copies of the process (including references to data flows), scripts and execution properties are created and added to the target you select as the replication target.

Versioning Tasks and Processes:

A new version is created each time you promote a task or process. You can also create a custom version if needed.

Versions allow you to keep track of major changes made to a task or process. You can consult the version history and return to a previously promoted or saved version to roll back unwanted or accidental changes.

It is recommended that you give each version a unique name and a meaningful description. They can remind you of the changes you made to the task or process, help you decide whether you want to roll back to a previous version, and decide which version you want to roll back to.

Caution: After you roll back to a previous version of a task, it is recommended that you check all processes that reference the task’s data flows to ensure that the references were maintained.

Scripts and Global Variables:

Scripts and global variables can be used in tasks and processes.

Scripts are used to call functions and assign values to variables in a task or process.

Global variables are symbolic placeholders. When a task or process runs, these placeholders are populated with values.
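As a minimal, hedged illustration (the variable name is hypothetical and the syntax follows SAP Data Services scripting conventions rather than anything shown in this post), a script might populate a global variable before the data flows run:

# Hypothetical example: only load data changed in the last seven days.
$G_START_DATE = sysdate() - 7;

The global variable could then be referenced, for example, in a data flow filter such as LAST_MODIFIED_DATE >= $G_START_DATE.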

Create Custom Calendars:

Custom calendars allow you to specify a customized schedule for running tasks or processes.

In the Administration tab, click Calendars. Click the plus button + to create a new custom calendar. Enter a name and optionally a description for your calendar.

Add the dates you want a task or process to run by doing one of the following:

  • Manually enter the dates,
  • Select dates by using the calendar button, or
  • Upload a calendar file.

Then click Save.

Configure Email Notification (Task, Process & Agent):

Tasks and processes must be set up to run on a scheduled basis.

Note:

  • Email notifications for tasks or processes can be set for the Production environments. Notifications are not available for Sandbox. Enter email addresses to receive notification. Use a semicolon to separate multiple email addresses.
  • Agent downtime notifications are sent for all environments including sandbox, production, and additional environments such as development or test. Downtime is a period of five minutes or longer. The server checks every 15 minutes.

Monitoring Production Status:

In the Dashboards, the production status displays whether your production tasks and processes succeeded or failed over a given period of time.

From the production status, you can:

  • Set the time period for which you want to analyze results.
  • Click on an area of the pie chart to filter the tasks and processes displayed in the table.
  • Click on a task or process in the table to view its history and log data.

Part 1: SAP Cloud Platform Integration for Data Services (CPI-DS)

Introduction:

SAP Cloud Platform Integration for data services allows you to efficiently and securely use ETL (extract, transform, load) tasks to move data between on premise systems and the cloud.

SAP Cloud Platform Integration interacts with your local SAP landscape via the SAP Data Services Agent and secure HTTPS and RFC connections. SAP ERP Systems are registered with the HCI Agent as data stores via RFC.

Before you can use SAP Cloud Platform Integration, your administrator needs to prepare your environment.

Agents:

The SAP Data Services Agent provides connectivity to on-premise sources in your system landscape.

At design-time, the agent is used to provide metadata browsing functionality for on-premise sources to the web based user interface. At run-time, the agent manages the secure data transfer from your on-premise sources to your cloud-based target application.

Agent groups ensure high-availability by clustering one or more agents and making sure tasks and processes get assigned only to available agents in the group.

Download & Install the Agent:

Create a New Agent, download the configuration file (AgentConfig.txt).

Select Agents tab -> Click on “New Agent”.

Enter the Name and Description for the Agent. Select the Agent Group.

An agent group is a collection of agents that are logically grouped. When scheduling or running tasks and processes, an agent group is specified. At run-time, tasks and processes are automatically assigned to an available agent within the group.

Download the Configuration File. Save the configuration file in your local system where you want to install the agent.

After downloading the Configuration file, select Save and Close. Once you click on Save and Close, you can see the Agent Information under the group “GBOBBILI”.

Install the downloaded agent by right-clicking it and running it with elevated/admin rights.

After this step, you will see the screen below. Provide your Windows user name and password there and click Install.

After clicking the Install button, you will see the next screen; click Finish and then Yes.

Then you will see the screen below. Provide your CPI-DS user credentials and password, attach the AgentConfig.txt file, provide the proxy host and port, and then click Upload and Yes.

Log in to CPI-DS and check whether the agent is active.

Create Datastore:

Datastores are the objects that connect SAP Cloud Platform Integration for data services to your cloud and on-premise applications and databases.

Through these connections, SAP Cloud Platform Integration for data services can access metadata from and read and write data to your applications and databases.

Within the Datastores tab, you can create and manage datastores, which connect SAP Cloud Platform Integration for data services to your applications and databases.

From this tab, you can:

  • Create and delete datastores
  • Test the connection to a datastore
  • View and edit a datastore’s configuration options (Configuration)
  • Browse a datastore’s metadata objects (File Formats or Tables)
  • Import and manage file format or table metadata objects for a datastore
  • View data loaded to a table in a target datastore to ensure it is correct

Select the Datastores tab -> click the “+” icon to create a new datastore.

SAP CPI-DS supports datastores for the following types of applications and databases:

  • SAP Business Suite applications
  • SAP BW sources
  • SAP HANA application clouds
  • SAP HANA cloud applications such as SAP IBP and SuccessFactors BizX
  • Applications that have pre-packaged or user-written adapters
  • Databases
  • File format groups
  • SOAP and REST Web services

Once the datastore is created, test the connection.

Import Metadata Objects:

Importing metadata objects adds the table and file names from your source and target databases and applications to your datastores.

In the Datastores area, select a datastore.

Open the Tables or File Format tab (which one appears depends on the datastore type).

Do one of the following:

If the datastore has a Tables tab, click Import Objects or Import Object by Name and select the tables whose metadata you want to import. (To import a web service object, the web service must be up and running.)

If it has a File Formats tab, click Create File Format and select the option you want to create.

Select the type of metadata to be imported and enter the name.

Table: A table is a collection of related data held in a table format within an SAP (or non-SAP) system. It consists of columns and rows.

Extractor: An extractor is a pre-defined SAP program that gathers data from various tables in an SAP source system (typically SAP ECC) then processes this data to create specific business content for insertion into another SAP system (such as SAP BW or SAP IBP).

Function: An SAP Function (or Function Module) is a pre-written custom program that typically extracts data from an SAP system and writes this to output fields or tables (that can be read by SAP CPI-DS).

Once the table metadata is imported, we can see the column names and data types in the Details tab, as shown below.

Create the target datastore by following the above steps and import the target table metadata.

Integrating AWS (DynamoDB) with SAP CPI

As the world moves from on-premise to the cloud, the need to integrate with cloud service providers grows. AWS is one of the major cloud service providers, and hence integration of SAP (or other third-party systems) with AWS services is often needed. AWS offers a large number of dedicated services, each serving its own purpose. As an integration tool, SAP CPI should be able to integrate with AWS services too.

One of the easiest existing options to integrate with AWS through SAP CPI is the Advantco adapter. SAP has collaborated with Advantco, and they have published an Amazon WS Adapter that can be installed in SAP CPI to enable integration with AWS. The adapter installation is done via Eclipse Oxygen. SAP has published the installation guide and user guide in the Discover section of SAP CPI; the list of documents can be seen in the document section.

The Advantco adapter is available as both a sender and a receiver. The sender adapter enables us to integrate with the AWS services S3 and SQS, whereas the receiver adapter enables us to integrate with S3, SNS, SQS, and SWF.

As the Advantco adapter covers only a few major AWS services, we have to think outside the box to integrate with the other services.

In this document, we are going to see how to integrate with one of the other AWS services (DynamoDB), which is not available with the Advantco adapter.

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.

Requirement:

Integrating SAP CPI with AWS services.

Solution:

In the first few sections of this blog, we will see how to set up DynamoDB in an AWS account, which we will then integrate with from SAP CPI.

After that, we will look at how to access DynamoDB from Postman.

Finally, we will see the steps to follow in SAP CPI to achieve this.

AWS Trial Account Setup:

  • Login to https://aws.amazon.com/account/sign-up and create a free account with your Gmail.
  • Once the account is created, navigate to the same page and Sign In to the console with your user name and password by selecting the root user.

Creating DynamoDB:

  • Navigate to DynamoDB from the AWS console home page -> Create table -> enter the table name and primary key.
  • Now the table will be created with the primary key field alone. You can insert more columns through the Create Item option. In any case, DynamoDB can create columns at runtime if it receives a create request with a column name not described in the schema.

Creating IAM User with roles to access DynamoDB:

  • Navigate to the console home screen and search for the service “IAM Console”.
  • Navigate to Users -> Add user -> input your user name -> select Access type as Programmatic access -> click Next.
  • Select “Attach existing policies directly” -> search for DynamoDB -> select “AmazonDynamoDBFullAccess” (allows CRUD operations on DynamoDB) -> click Next -> skip the next screen and then click Next.
  • Click Next, and you will see a success message for the user creation along with the Access key and Secret access key.
  • The Access key and Secret access key are needed for authentication when communicating with AWS, whether via Postman or CPI.
  • Save these credentials for later reference, or download the .csv file.

Connecting DynamoDB from Postman:

  • In this section, we will see how to connect to DynamoDB and perform CRUD operations from Postman.
  • Create a POST request in Postman and enter the request URL “https://dynamodb.{{regionname}}.amazonaws.com/”.
  • This is the standard URL to access DynamoDB, where the region name is the region in which our DynamoDB resource has been created. In our case, it is “ap-south-1”.
  • Choose the authentication method “AWS Signature”. Input the “AccessKey” and “SecretKey” that we downloaded in the previous step.
  • Select the Advanced option. Input your region name in the field “AWS Region” and “dynamodb” in the field “Service Name”.
  • Add the custom headers highlighted in the screenshot: the header “Content-Type” specifies the request type as JSON, and the header “X-Amz-Target” specifies which CRUD operation needs to be performed.
  • We have to post the data in JSON format with a specific structure. The structure for the Create/Put operation is shown in the screenshot.
  • Now we are all set to post this message to AWS; let us see whether the row gets created in DynamoDB.
  • Click Send and check whether you get a 200 status. If yes, log on to AWS DynamoDB and you can observe that the entry has been added.
  • In a similar way, you can perform all the CRUD operations; only the input body and the “X-Amz-Target” header need to be altered accordingly.
  • The “X-Amz-Target” header values for the different CRUD operations are as follows.
    • Create – DynamoDB_20120810.PutItem
    • Get – DynamoDB_20120810.GetItem
    • Update – DynamoDB_20120810.UpdateItem
    • Delete – DynamoDB_20120810.DeleteItem
  • The body structures for the different CRUD operations are shown below.
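For reference (the original screenshots are not reproduced here), the bodies follow DynamoDB’s low-level JSON API. A hypothetical PutItem and GetItem body for a table named “Employee” with primary key “EmployeeID” would look roughly like this; adjust the table and attribute names to your own table:

PutItem (X-Amz-Target: DynamoDB_20120810.PutItem):

{
  "TableName": "Employee",
  "Item": {
    "EmployeeID": { "S": "1001" },
    "Name": { "S": "John" }
  }
}

GetItem (X-Amz-Target: DynamoDB_20120810.GetItem):

{
  "TableName": "Employee",
  "Key": {
    "EmployeeID": { "S": "1001" }
  }
}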

Connecting DynamoDB from SAP CPI

Get: Update: Delete:

Connecting DynamoDB from SAP CPI:

  • We are going to follow the same procedure in CPI as we did in Postman. However, Postman has the built-in capability to calculate the “Host” and the AWS Signature based on the inputs we have given.
  • These two things need to be handled explicitly in CPI to integrate with DynamoDB. I have achieved this via Groovy.
  • The sender HTTP adapter is used to trigger the integration flow. In the Content Modifier, hardcode the body we used in Postman; in a real scenario this can be populated dynamically either through mapping or in the Content Modifier itself.
  • Add the headers as shown in the screenshot below.
  • We have added the necessary headers and body with the help of the Content Modifier.
  • The remaining part is authentication. AWS supports only AWS Signature authentication, which is not available by default in any CPI adapter. So, we are going to calculate the AWS signature in a Groovy script and place it in the Authorization header.
  • Add a Groovy script and paste in the code below.
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.securestore.SecureStoreService;
import com.sap.it.api.securestore.UserCredential;

def Message processData(Message message) {
    //************* REQUEST VALUES *************
    String method = 'POST';
     def headers = message.getHeaders();
     String host = headers.get("Host");
     String endpoint = 'https://'+ host+'/'
    // POST requests use a content type header. For DynamoDB, the content is JSON.
    String amz_target = headers.get("X-Amz-Target");
    String content_type = headers.get("Content-Type");
    
    // Request parameters for Create/Update new item--passed in a JSON block.
    String body = message.getBody(java.lang.String) as String;
    String request_parameters = body;
    
   // Read AWS access key from security artifacts. 
   def secureStoreService = ITApiFactory.getApi(SecureStoreService.class, null);
   //AWS_ACCESS_KEY
   def aws_access_key = secureStoreService.getUserCredential("AWS_ACCESS_KEY");
   if (aws_access_key == null){
      throw new IllegalStateException("No credential found for alias 'AWS_ACCESS_KEY' ");
   }
   //AWS_SECRET_KEY
  def aws_secret_key = secureStoreService.getUserCredential("AWS_SECRET_KEY");
   if (aws_secret_key == null){
      throw new IllegalStateException("No credential found for alias 'AWS_SECRET_KEY' ");
   }
   //AWS_REGION
   def aws_region = secureStoreService.getUserCredential("AWS_REGION");
   if (aws_region == null){
      throw new IllegalStateException("No credential found for alias 'AWS_REGION' ");
   }
   
   //AWS_SERVICE_NAME
   def aws_service = secureStoreService.getUserCredential("AWS_SERVICE_NAME");
   if (aws_service == null){
      throw new IllegalStateException("No credential found for alias 'AWS_SERVICE_NAME' ");
   }
   
    String access_key = new String(aws_access_key.getPassword());
    String secret_key = new String(aws_secret_key.getPassword());
    String service =   new String(aws_service.getPassword());
    String region =   new String(aws_region.getPassword());
   
    // Create a date for headers and the credential string
    def date = new Date();
    DateFormat dateFormat = new SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'");
    dateFormat.setTimeZone(TimeZone.getTimeZone("UTC"));//server timezone
    String amz_date = dateFormat.format(date);
    dateFormat = new SimpleDateFormat("yyyyMMdd");
    dateFormat.setTimeZone(TimeZone.getTimeZone("UTC"));//server timezone
    String date_stamp = dateFormat.format(date);
    
    // ************* TASK 1: CREATE A CANONICAL REQUEST *************
    // http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
    
    // Step 1 is to define the verb (GET, POST, etc.)--already done.
    
    // Step 2: Create canonical URI--the part of the URI from domain to query 
    // string (use '/' if no path)
    String canonical_uri = '/';
    
    // Step 3: Create the canonical query string. In this example, request
    // parameters are passed in the body of the request and the query string is blank.
    String canonical_querystring = '';
    
    // Step 4: Create the canonical headers. Header names must be trimmed
    // and lowercase, and sorted in code point order from low to high. Note that there is a trailing \n.
    String canonical_headers = 'content-type:' + content_type + '\n' + 'host:' + host + '\n' + 'x-amz-date:' + amz_date + '\n' + 'x-amz-target:' + amz_target + '\n';
    
    // Step 5: Create the list of signed headers. This lists the headers
    // in the canonical_headers list, delimited with ";" and in alpha order.
    // Note: The request can include any headers; canonical_headers and
    // signed_headers include those that you want to be included in the
    // hash of the request. "Host" and "x-amz-date" are always required.
    // For DynamoDB, content-type and x-amz-target are also required.
    String signed_headers = 'content-type;host;x-amz-date;x-amz-target';
    
    // Step 6: Create payload hash. In this example, the payload (body of the request) contains the request parameters.
    String payload_hash = generateHex(request_parameters);
    
    // Step 7: Combine elements to create canonical request
    String canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash;
    
    // ************* TASK 2: CREATE THE STRING TO SIGN*************
    // Match the algorithm to the hashing algorithm you use, either SHA-1 or SHA-256 (recommended)
    String algorithm = 'AWS4-HMAC-SHA256';
    String credential_scope = date_stamp + '/' + region + '/' + service + '/' + 'aws4_request';
    String string_to_sign = algorithm + '\n' +  amz_date + '\n' +  credential_scope + '\n' +  generateHex(canonical_request);
    
    // ************* TASK 3: CALCULATE THE SIGNATURE *************
    // Create the signing key using the function defined above.
    byte[] signing_key = getSignatureKey(secret_key, date_stamp, region, service);
    
    // Sign the string_to_sign using the signing_key
    byte[] signature = HmacSHA256(string_to_sign,signing_key);
    
     /* Step 3.2.1 Encode signature (byte[]) to Hex */
    String strHexSignature = bytesToHex(signature);
    
    // ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
    // Put the signature information in a header named Authorization.
    String authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' +  'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + strHexSignature;
    
    // For DynamoDB, the request can include any headers, but MUST include "host", "x-amz-date",
    // "x-amz-target", "content-type", and "Authorization". Except for the authorization
    // header, the headers must be included in the canonical_headers and signed_headers values, as
    // noted earlier. Order here is not significant.

    // set X-Amz-Date Header and Authorization Header
    message.setHeader("X-Amz-Date",amz_date);
    message.setHeader("Authorization", authorization_header);
      
    return message;
}

String bytesToHex(byte[] bytes) {
    char[] hexArray = "0123456789ABCDEF".toCharArray();            
    char[] hexChars = new char[bytes.length * 2];
    for (int j = 0; j < bytes.length; j++) {
        int v = bytes[j] & 0xFF;
        hexChars[j * 2] = hexArray[v >>> 4];
        hexChars[j * 2 + 1] = hexArray[v & 0x0F];
    }
    return new String(hexChars).toLowerCase();
} 

String generateHex(String data) {
    MessageDigest messageDigest;

    messageDigest = MessageDigest.getInstance("SHA-256");
    messageDigest.update(data.getBytes("UTF-8"));
    byte[] digest = messageDigest.digest();
    return String.format("%064x", new java.math.BigInteger(1, digest));
}

byte[] HmacSHA256(String data, byte[] key) throws Exception {
    String algorithm="HmacSHA256";
    Mac mac = Mac.getInstance(algorithm);
    mac.init(new SecretKeySpec(key, algorithm));
    return mac.doFinal(data.getBytes("UTF8"));
}

byte[] getSignatureKey(String key, String dateStamp, String regionName, String serviceName) throws Exception {
    byte[] kSecret = ("AWS4" + key).getBytes("UTF8");
    byte[] kDate = HmacSHA256(dateStamp, kSecret);
    byte[] kRegion = HmacSHA256(regionName, kDate);
    byte[] kService = HmacSHA256(serviceName, kRegion);
    byte[] kSigning = HmacSHA256("aws4_request", kService);
    return kSigning;
}
  • You can observe that four inputs are required in the Groovy script to proceed with the further steps.
  • These four parameters carry the same values we provided in Postman. Create a secure parameter for each of them, using the same names you declared in the Groovy script (a minimal sketch of how they can be read in the script follows this list).
  • Navigate to Monitoring Overview -> Manage Security -> Security Material -> "Create" -> "Secure Parameter".
  • Now we are almost set: the related headers are created, and the AWS signature has been calculated and set as a header in the Groovy script.
  • Next, make a call to AWS DynamoDB via the HTTP adapter with the help of a Request Reply step. Then deploy the IFlow and trigger it from Postman to check whether everything is working fine.
  • *** While testing the flow for the first time, you may run into an error because the AWS DynamoDB host is not trusted by default in SAP CPI. In this case, download the certificate and add it to the SAP CPI trust store.
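For reference, the sketch below shows one possible way to read those four inputs inside the Groovy script via the Secure Store API. This is a minimal sketch, not code from the original script, and the aliases ("aws_access_key", "aws_secret_key", "aws_region", "aws_service") are placeholders - use the names of the secure parameters you actually created.

import com.sap.it.api.ITApiFactory
import com.sap.it.api.securestore.SecureStoreService

// Inside your processData(Message) method, before building the signature:
// resolve the Secure Parameters deployed under Manage Security -> Security Material
def secureStoreService = ITApiFactory.getApi(SecureStoreService.class, null)
String access_key = secureStoreService.getSecureParameter("aws_access_key").getValue()
String secret_key = secureStoreService.getSecureParameter("aws_secret_key").getValue()
String region     = secureStoreService.getSecureParameter("aws_region").getValue()
String service    = secureStoreService.getSecureParameter("aws_service").getValue()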

Amazon Web Services Adapter for SAP Cloud Integration https://www.erpqna.com/amazon-web-services-adapter-for-sap-cloud-integration/?utm_source=rss&utm_medium=rss&utm_campaign=amazon-web-services-adapter-for-sap-cloud-integration Thu, 18 Feb 2021 10:22:02 +0000 https://www.erpqna.com/?p=43337 Introduction As indicated in our previous blog post, the Amazon Web Service Adapter was added to the family of adapters of SAP Cloud Integration in January 2021. Amazon Web Services (AWS) is Amazon’s cloud web hosting platform that offers flexible, reliable, scalable, and cost-effective solutions. AWS provides a variety of basic abstract technical infrastructure and […]

Introduction

As indicated in our previous blog post, the Amazon Web Service Adapter was added to the family of adapters of SAP Cloud Integration in January 2021.

Amazon Web Services (AWS) is Amazon’s cloud web hosting platform that offers flexible, reliable, scalable, and cost-effective solutions. AWS provides a variety of basic abstract technical infrastructure and distributed computing building blocks and tools.

Amongst the many services provided by AWS, the Amazon Web Service adapter currently supports the following 4 services:

  • S3 (Amazon Simple Storage Service)

Amazon Simple Storage Service is known as the storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its global network of websites. The service aims to maximize the benefits of scale and to pass those benefits on to developers.

  • SQS (Amazon Simple Queue Service)

Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components.

  • SNS (Amazon Simple Notification Service)

Amazon Simple Notification Service (Amazon SNS) is a web service that manages the delivery or sending of messages to subscribed clients. In Amazon SNS, there are two types of clients, namely publishers (or producers) and subscribers (or consumers). Publishers communicate asynchronously with subscribers by producing and sending a message to a topic. Subscribers can consume or receive the message via one of the supported protocols (Amazon SQS, HTTP/S, email, SMS, Lambda).

  • SWF (Amazon Simple Workflow Service)

The Amazon Simple Workflow Service (Amazon SWF) enables the building of applications that can coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your application. Coordinating tasks across the application involves managing inter-task dependencies, scheduling, and concurrency following the logical flow of the application.

High-level features:

Amazon Web Service Adapter provides you with the following features:

Sender Adapter:

  • AWS S3 to read file from AWS S3 service:
    • Support for patterns in filename.
    • Possibility to archive processed files to the same bucket as part of the post-processing step.
    • Possibility to archive processed files to a different bucket as part of the post-processing step.
    • Possibility to sort files based on Filename, Filesize, and Timestamp.
    • Capability to retrieve additional metadata maintained for the file on the S3 bucket. Multiple attributes can be retrieved at the same time.
    • Support for Server-Side Decryption.
    • Most adapter parameters can be set dynamically via message properties and headers.
    • Functionality to generate pre-signed URL.
  • AWS SQS to read a message from AWS SQS queue:
    • Support for Standard and FIFO queues.
    • Possibility to keep or delete the message from the queue after reading as part of the post-processing step.
    • Possibility to keep a message in the queue after processing.
    • Capability to retrieve additional metadata maintained for a message on the SQS queue. Multiple attributes can be retrieved at the same time.

Receiver Adapter:

  • AWS S3 to push files into AWS S3 service:
    • Option to append timestamp and messageId to the file name during the creation process.
    • Option to select a storage class.
    • Different handling options for existing S3 bucket files.
    • Option to upload attachments to the S3 bucket.
    • Server-Side Encryption.
    • Possibility to add multiple custom metadata to the file while storing it.
    • Capability to read a particular file from the S3 bucket using the Read operation of the receiver adapter.
  • AWS SQS to send a message to the AWS SQS queue:
    • Support for Standard and FIFO queue.
    • Option to add multiple message attributes while writing a message to a queue.
    • For Standard queue, option to provide delay seconds to avoid subsequent processing by any other consumer.
    • For a FIFO queue, the option to provide message deduplication id and message group ID.
  • AWS SNS to push real-time notification messages to interested subscribers over multiple delivery protocols:
    • Support for Standard topics.
    • Option to provide Identical Payload for all consumers.
    • Option to provide custom payload for different consumers.
    • Option to format the response in XML and JSON formats.
    • Option to provide multiple Message attributes.
  • AWS SWF to provide control over implementing tasks and coordinating them:
    • Support for multiple operations that can be selected from a pre-defined list.
    • Option to determine request and response format. Currently, JSON is supported.

Let us next explore an example scenario that uses the Amazon Web Services adapter.

Example Scenario:

Let us use a simple S3-based integration scenario that illustrates the use of the Amazon Web Service Adapter. Assume that you have an AWS application that writes files to an S3 bucket. In AWS, an S3 bucket is a container that stores objects; it also helps organize the Amazon S3 namespace and manage access control. Put simply, a bucket stores objects, which can include folders and files.

As part of your integration scenario, you want to read a file from your S3 bucket, archive it, and push it to an SQS queue. AWS SQS provides a message queuing service that enables you to decouple your applications. The figure below represents the integration scenario to be built.

To achieve this, follow the steps below:

Step 1: For the sender side, create the folder and add files to the S3 bucket

The first step is to ensure that a bucket, the required folder, and the files exist on the S3 side. For that, proceed as follows:

  • Login to your AWS account, and select the S3 Service.
  • Create a Bucket.

Name your bucket and click "Create bucket".

  • Create a folder and upload a file to simulate the AWS application that places a file. In our case, we placed a file named "Example.json".

We have set up the sender side in AWS. To connect SAP Cloud Integration to AWS S3, you will need to capture the following details:

  • Region of the bucket.
  • Bucket Name.
  • Access Key.
  • Secret Key.

Please note these details for later.

Step 2: For the receiver side, Create an SQS queue

The next step will be to ensure that an SQS Queue is available in AWS. For that, proceed as follows:

  • From your AWS account, select the Simple Queue Service.
  • In the next screen, click on "Create queue" and specify its name. As the figure below shows, I have created a queue named "demo". Note that this is a standard queue.

We have set up the receiver side in AWS. To connect SAP Cloud Integration to AWS SQS, you will need to capture the following details:

  • Region of the bucket.
  • Account Number.
  • Queue Name.
  • Access Key.
  • Secret Key.

Now we need to create the Integration Flow in SAP Cloud Integration.

Step 3: Your integration Flow in SAP Cloud Integration

Create an Integration Flow with a sender and a receiver system representing AWS S3 and AWS SQS, respectively. See below.

Note that to keep things simple this is a pass-through scenario that does not include any mapping or transformation.

On the sender side, you will need to select the AmazonWebServices adapter and S3 as protocol.

The figure below shows its configuration. You will need to re-use the information that you saved in Step 1. Make sure to select the correct region where the bucket was created.

Note that for both the Access Key and Secret Key, you will need to create a Secure Parameter in SAP Cloud Integration via the Security Material. In our case, we create a Secure Parameter named “AWS_Tester_AccessKey” for the Access Key and “AWS_Tester_SecretKey” for the Secret Key.

We also need to configure the Processing tab. You will need to specify the name of the directory and the pattern to be used for the file name, as shown below.
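For example (the values here are illustrative only), you could set Directory to the folder created in Step 1 and File Name to a pattern such as *.json, so that the adapter picks up the Example.json file placed earlier.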

Now we can configure the receiver adapter to use the SQS service and write the file to the queue.

The figure below shows its configuration. You will need to re-use the information that you saved in Step 2. Make sure to select the correct region where the queue was created. Furthermore, you should specify the AWS Account number and the Queue as shown below.

Finally, let us deploy our Integration Flow. See below an example of the Integration flow which runs and can be seen in the Monitor.

Lastly, you can check that a message has arrived in the queue, as seen below.


SAP Cloud Platform Integration- Connecting Microsoft SQL Sever https://www.erpqna.com/sap-cloud-platform-integration-connecting-microsoft-sql-sever/?utm_source=rss&utm_medium=rss&utm_campaign=sap-cloud-platform-integration-connecting-microsoft-sql-sever Fri, 25 Dec 2020 08:53:33 +0000 https://www.erpqna.com/?p=40576 Introduction: JDBC connectivity to On Prem Database Systems-it was long awaited feature in SAP Cloud Platform Integration space. Latest release finally have option to work on it. Lets see how to work on the same. Scenario: Lets consider 3 most used concepts from integration perspective. Setup your SAP Cloud Connector: Follow below tutorial to install […]

Introduction:

JDBC connectivity to on-premise database systems was a long-awaited feature in the SAP Cloud Platform Integration space. The latest release finally offers an option for it. Let's see how to work with it.

Scenario:

Let's consider the most commonly used concepts from an integration perspective:

  • Simple SELECT and INSERT/UPDATE statements to read and update tables.
  • Calling simple and parameterized stored procedures, including one that calls a function.

Setup your SAP Cloud Connector:

Follow the tutorial below to install and configure SAP Cloud Connector in your landscape.

Install and Configure SAP Cloud Connector

Once you are done with the Cloud Connector installation and initial configuration, follow the steps below to create a TCP connection to your on-premise Microsoft SQL Server.

Note down the virtual server address you have specified in SAP Cloud Connector; we will be using it later in SAP Cloud Platform Integration.

Configure your Microsoft SQL server:

Install any client you want to leverage for a better user experience while performing SQL transactions. In my scenario, I will be using DataGrip by JetBrains.

Open DataGrip and create a new project as shown below.

Click to add a new data source and select Microsoft SQL Server as the type.

Keep your SQL Server URL and credentials ready to configure the new data source as shown below.

From the above screen, make a note of the JDBC driver version and copy the URL, which we are going to use in SAP CPI.

Let's create the following on the database side:

  1. Create a stored procedure.
  2. Create a stored procedure with an input parameter.
  3. Create a stored procedure which calls a function internally.
  4. Create a table which can later be used to perform SELECT and INSERT operations.

I have explained the steps for creating the simple stored procedure; the same approach can be extended for the other items, as it is self-explanatory.

Stored Procedure:

Open the console and press Alt + Insert to create the stored procedure, then execute it once done.

Copy and paste the code below:

CREATE PROC What_DB_is_this5
AS
SELECT DB_NAME() AS ThisDB;
go

Press Ctrl + Shift + F10 to run the statement, and the procedure is ready to be executed from SAP CPI.
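If you want to verify the procedure directly in the console first (optional, not part of the original steps), you can run:

EXEC What_DB_is_this5;

Based on its definition above, this should return a single row with the current database name in a column named ThisDB.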

Parameterized Stored Procedure:

CREATE PROC What_DB_is_that @ID INT
AS
SELECT DB_NAME(@ID) AS ThatDB;
go

Stored Procedure Internally calling function:

create procedure PROC_PRINT_HW
as
select dbo.helloworldfunction()
go

Function:

CREATE FUNCTION dbo.helloworldfunction()
RETURNS varchar(20)
AS
BEGIN
	 RETURN 'Hello world'
END
go

Creating artifact in SAP Cloud Platform Integration:

Setting up JDBC connections:

Log in to SAP CPI and navigate to "Manage JDBC Material" to maintain the connection profile and the required JDBC driver.

Maintain JDBC Driver:

Click to add a new driver and select the type of database you are connecting to. You can cross-check the required JDBC driver version against the screenshot provided in the Test Connection option of DataGrip.

The link below gives an overview of JDBC driver compatibility with different versions of Microsoft SQL Server.

Microsoft JDBC Driver for SQL Server support matrix

Maintain JDBC Data Source:

Click on Add JDBC Data Source and follow the configuration below. Make sure you have selected the Cloud Connector checkbox if you are connecting to your on-premise SQL server. The endpoint URL is the JDBC URL you copied earlier, with the actual server replaced by the virtual server address (check the Cloud Connector to get the virtual server address).
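For example (the host and database names here are purely illustrative), an endpoint URL for Microsoft SQL Server typically looks like the following, where virtualmssql:1433 is the virtual host and port maintained in the Cloud Connector TCP mapping:

jdbc:sqlserver://virtualmssql:1433;databaseName=DEMO_DB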

IFlow Configuration:

Let's try to simulate all the scenarios with a simple IFlow, using Postman as the HTTP client.

Also make sure to add the header name "Action" to the allowed headers of the integration flow.

Step 1:

Add the HTTPS sender adapter and specify its address (the endpoint called from Postman in this example is /http/UpdateMSSQL).

Step 2:

Add a router branch to identify the different actions we are going to perform.

Choose the expression type Non-XML and add the condition below.

${header.Action} = 'QUERY'

Sample query structure:

SELECT * FROM <SCHEMA_NAME>.<TABLENAME>;

Step 3:

Add a router branch to identify the different actions we are going to perform.

Choose the expression type Non-XML and add the condition below.

${header.Action} = 'INSERT'

Sample query structure:

-- Each VALUES row must supply one value for every column listed (including ID).
INSERT INTO <SCHEMA_NAME>.<TABLE_NAME> (
    ID,
    COMPANY_NAME,
    COMPANY_CODE,
    LOCATION,
    ADDRESS_1,
    ADDRESS_2,
    STATE,
    COUNTRY,
    PINCODE,
    EMAIL,
    WEBSITE,
    PHONE_1,
    PHONE_2,
    NMBR_OF_EMPLOYEES,
    FACILITIES,
    PROCESSED
)
VALUES
    (
        1,
        'COMPANY 1S',
        'C101S',
        'LOCATION 1S',
        'ADDRESS 1S',
        'ADDRESS 2S',
        'STATE 1S',
        'COUNTRY 1S',
        560085,
        'c1S.email@email',
        'company1S.com',
        12345,
        12342,
        25,
        'FACILITIES 1S',
        'NO'
    ),
    (
        2,
        'COMPANY 2S',
        'C102S',
        'LOCATION 2S',
        'ADDRESS 2S',
        'ADDRESS 2S',
        'STATE 2S',
        'COUNTRY 3S',
        560085,
        'c2S.email@email',
        'company2S.com',
        12345,
        12342,
        25,
        'FACILITIES 2S',
        'NO'
    );

Step 4:

Add a router branch to identify the different actions we are going to perform.

Choose the expression type Non-XML and add the condition below.

${header.Action} = 'STOREDPROCEDURE'

Sample Stored Procedure structure:

<root>
	<StatementName>
		<storedProcedureName action="EXECUTE">
			<table><STORED_PROCEDURE_NAME></table>		
		</storedProcedureName >
	</StatementName>
</root>

Sample Stored Procedure With Input Parameter structure:

<root>
	<StatementName>
		<storedProcedureName action="EXECUTE">
			<table><STORED_PROCEDURE_NAME></table>
			<param1 type="INTEGER">1</param1>
		</storedProcedureName >
	</StatementName>
</root>

Step 5:

Add a Content Modifier so that, if none of the mentioned actions match, a response of "Enter valid Action" is returned.

Step 6:

Add the JDBC receiver adapter and enter the JDBC Data Source name you created.

Go ahead and deploy your IFlow. Once deployed, get the runtime URL from Monitoring -> Manage Artifact -> select the endpoint URL.

Now open the Postman tool and import the JSON collection below. Make sure to update the URL, password, and body as required.

{
	"info": {
		"_postman_id": "fe787994-b71a-4020-a8f8-25ddfcff5d55",
		"name": "MS_SQL_ONPREM",
		"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
	},
	"item": [
		{
			"name": "SELECT DATA",
			"protocolProfileBehavior": {
				"disableBodyPruning": true
			},
			"request": {
				"auth": {
					"type": "basic",
					"basic": [
						{
							"key": "password",
							"value": "PASSWORD",
							"type": "string"
						},
						{
							"key": "username",
							"value": "USER",
							"type": "string"
						},
						{
							"key": "saveHelperData",
							"type": "any"
						},
						{
							"key": "showPassword",
							"value": false,
							"type": "boolean"
						}
					]
				},
				"method": "GET",
				"header": [
					{
						"key": "Action",
						"value": "QUERY",
						"type": "text"
					}
				],
				"body": {
					"mode": "raw",
					"raw": "Select * from <SCHEMA_NAME>.<TABLENAME>;"
				},
				"url": {
					"raw": "https://<RUNTIME_URL>/http/UpdateMSSQL",
					"protocol": "https",
					"host": [
						"<RUNTIME_URL>"
					],
					"path": [
						"http",
						"UpdateMSSQL"
					]
				}
			},
			"response": []
		},
		{
			"name": "STORED PROCEDURE",
			"protocolProfileBehavior": {
				"disableBodyPruning": true
			},
			"request": {
				"auth": {
					"type": "basic",
					"basic": [
						{
							"key": "password",
							"value": "PASSWORD",
							"type": "string"
						},
						{
							"key": "username",
							"value": "USERNAME",
							"type": "string"
						},
						{
							"key": "saveHelperData",
							"type": "any"
						},
						{
							"key": "showPassword",
							"value": false,
							"type": "boolean"
						}
					]
				},
				"method": "GET",
				"header": [
					{
						"key": "Action",
						"value": "STOREDPROCEDURE",
						"type": "text"
					}
				],
				"body": {
					"mode": "raw",
					"raw": "<root>\r\n\t<StatementName>\r\n\t\t<storedProcedureName action=\"EXECUTE\">\r\n\t\t\t<table>What_DB_is_this5</table>\t\t\r\n\t\t</storedProcedureName >\r\n\t</StatementName>\r\n</root>"
				},
				"url": {
					"raw": "https://<RUNTIME_URL>/http/UpdateMSSQL",
					"protocol": "https",
					"host": [
						"<RUNTIME_URL>"
					],
					"path": [
						"http",
						"UpdateMSSQL"
					]
				}
			},
			"response": []
		},
		{
			"name": "STORED PROCEDURE PARAM",
			"protocolProfileBehavior": {
				"disableBodyPruning": true
			},
			"request": {
				"auth": {
					"type": "basic",
					"basic": [
						{
							"key": "password",
							"value": "PASSWORD",
							"type": "string"
						},
						{
							"key": "username",
							"value": "USER",
							"type": "string"
						},
						{
							"key": "saveHelperData",
							"type": "any"
						},
						{
							"key": "showPassword",
							"value": false,
							"type": "boolean"
						}
					]
				},
				"method": "GET",
				"header": [
					{
						"key": "Action",
						"value": "STOREDPROCEDURE",
						"type": "text"
					}
				],
				"body": {
					"mode": "raw",
					"raw": "<root>\r\n\t<StatementName>\r\n\t\t<storedProcedureName action=\"EXECUTE\">\r\n\t\t\t<table>What_DB_is_that</table>\r\n\t\t\t<param1 type=\"INTEGER\">1</param1>\r\n\t\t</storedProcedureName >\r\n\t</StatementName>\r\n</root>"
				},
				"url": {
					"raw": "https://<RUNTIME_URL>/http/UpdateMSSQL",
					"protocol": "https",
					"host": [
						"<RUNTIME_URL>"
					],
					"path": [
						"http",
						"UpdateMSSQL"
					]
				}
			},
			"response": []
		},
		{
			"name": "INSERT DATA",
			"protocolProfileBehavior": {
				"disableBodyPruning": true
			},
			"request": {
				"auth": {
					"type": "basic",
					"basic": [
						{
							"key": "password",
							"value": "PASSWORD",
							"type": "string"
						},
						{
							"key": "username",
							"value": "USER",
							"type": "string"
						},
						{
							"key": "saveHelperData",
							"type": "any"
						},
						{
							"key": "showPassword",
							"value": false,
							"type": "boolean"
						}
					]
				},
				"method": "GET",
				"header": [
					{
						"key": "Action",
						"value": "INSERT",
						"type": "text"
					}
				],
				"body": {
					"mode": "raw",
					"raw": "INSERT INTO <SCHEMA_NAME>.<TABLENAME> (\r\n    ID,\r\n    COMPANY_NAME,\r\n    COMPANY_CODE,\r\n    LOCATION,\r\n    ADDRESS_1,\r\n    ADDRESS_2,\r\n    STATE,\r\n    COUNTRY,\r\n    PINCODE,\r\n    EMAIL,\r\n    WEBSITE,\r\n    PHONE_1,\r\n    PHONE_2,\r\n    NMBR_OF_EMPLOYEES,\r\n    FACILITIES,\r\n    PROCESSED\r\n)\r\nVALUES\r\n    (\r\n        'COMPANY 1S',\r\n        'C101S',\r\n        'LOCATION 1S',\r\n        'ADDRESS 1S',\r\n\t\t'ADDRESS 2S',\r\n\t\t'STATE 1S',\r\n\t\t'COUNTRY 1S',\r\n\t\t560085,\r\n\t\t'c1S.email@email',\r\n\t\t'company1S.com',\r\n\t\t12345,\r\n\t\t12342,\r\n\t\t25,\r\n\t\t'FACILITIES 1S',\r\n\t\t'NO'\t\r\n\t\t\r\n    ),\r\n    (\r\n        'COMPANY 2S',\r\n        'C102S',\r\n        'LOCATION 2S',\r\n        'ADDRESS 2S',\r\n\t\t'ADDRESS 2S',\r\n\t\t'STATE 2S',\r\n\t\t'COUNTRY 3S',\r\n\t\t560085,\r\n\t\t'c2S.email@email',\r\n\t\t'company2S.com',\r\n\t\t12345,\r\n\t\t12342,\r\n\t\t25,\r\n\t\t'FACILITIES 2S',\r\n\t\t'NO'\r\n    ),\r\n    (\r\n       'COMPANY 3S',\r\n        'C103S',\r\n        'LOCATION 3S',\r\n        'ADDRESS 3S',\r\n\t\t'ADDRESS 3S',\r\n\t\t'STATE 3S',\r\n\t\t'COUNTRY 3S',\r\n\t\t560085,\r\n\t\t'c3S.email@email',\r\n\t\t'company3S.com',\r\n\t\t12345,\r\n\t\t12342,\r\n\t\t25,\r\n\t\t'FACILITIES 3S',\r\n\t\t'NO'\r\n    );\r\n\t"
				},
				"url": {
					"raw": "https://<RUNTIME_URL>/http/UpdateMSSQL",
					"protocol": "https",
					"host": [
						"<RUNTIME_URL>"
					],
					"path": [
						"http",
						"UpdateMSSQL"
					]
				}
			},
			"response": []
		}
	],
	"protocolProfileBehavior": {}
}

Once you have updated these, you should be able to see the results below.


SAP Cloud Platform Integration – Simulate Integration flow with various types of body files https://www.erpqna.com/sap-cloud-platform-integration-simulate-integration-flow-with-various-types-of-body-files/?utm_source=rss&utm_medium=rss&utm_campaign=sap-cloud-platform-integration-simulate-integration-flow-with-various-types-of-body-files Mon, 26 Oct 2020 09:32:58 +0000 https://www.erpqna.com/?p=38017 Introduction SAP Cloud Platform Integration October 2020 release (3.30.x/6.6.x) provides an extended feature to support simulation of an integration flow with the various types of body files. You can simulate Zip and Tar splitters which require an incoming payload in Zip and Tar format. This blog’s primary focus is to explain how you can simulate […]

Introduction

SAP Cloud Platform Integration October 2020 release (3.30.x/6.6.x) provides an extended feature to support simulation of an integration flow with the various types of body files. You can simulate Zip and Tar splitters which require an incoming payload in Zip and Tar format.

This blog's primary focus is to explain how you can simulate an integration flow with a Zip or Tar splitter, which requires an input body file in Zip or Tar format.

In addition, we will also explain how other types of body files, such as *.pdf, *.xlsx, *.txt, *.docx, *.xml, *.json, etc., can be used in simulation to cover an SFTP scenario.

Let us understand the enhanced capability of the simulation input dialog in detail before we get hands-on with a few scenarios.

Simulation Input Dialog capability

  • The Body section of the simulation input dialog has been enhanced to support the upload of various types of body files, such as *.zip, *.tar, *.txt, *.pdf, *.xlsx, *.docx, *.json, *.xml, etc.
  • Body section has the following attributes:
    • File: Display the file name and type.
    • Content: Display content of the file.
    • Edit: Action to edit the content of the file.
    • Reset: Revert to the original content of the file.
  • Upload of the trace message content file, which comprises *.header, *.properties and *.body, is supported, and you can also upload *.header, *.properties and *.body individually. This is an existing feature and is maintained.
  • Any distortion of the trace message content file will cause it to be treated as a plain body file on upload.
  • File types like *.tar, *.zip, *.pdf, *.docx and *.xlsx, which are essentially binary content files, will not be displayed in the content section.
    File types like *.txt, *.xml and *.json, which have readable content, will be displayed in the content section of the dialog.
  • When the file is edited, the file is invalidated and the file name is removed.
    • Readable file content is retained and can be modified.
    • Binary file content is cleared and cannot be modified.
  • Reverting to the original file is possible by using the Reset functionality.

Now that you understand the enhanced capability of the simulation input dialog, let's apply it in a few scenarios below.

Scenario # 1 – Unzip an input ZIP file from SFTP that contains multiple files and send them to the receiver with the same source file names.

Let's simulate this scenario using the simulation feature of the integration flow.

Step # 1 – Model the scenario as shown below.

Step # 2 – Configure the ZIP Splitter to unzip the incoming compressed zip file.

Step # 3 – Configure a Content Modifier to parse the incoming file name so that the same file name is kept on the target side. The file name arrives in the Camel header CamelFileNameOnly with the value ZIPNAME/Filename; to get only the file name, we can use the expression below in the value field (a small standalone illustration follows it).

${header.CamelFileNameOnly.replaceAll("[0-9A-Za-z]+\/","")}
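To see what the expression does, here is a small standalone Groovy illustration; the header value used is hypothetical:

// Hypothetical value of the CamelFileNameOnly header set by the ZIP splitter
String camelFileNameOnly = "archive/file1.txt"
// Strip the leading "<zipname>/" prefix, keeping only the file name
String fileName = camelFileNameOnly.replaceAll("[0-9A-Za-z]+/", "")
assert fileName == "file1.txt"

Note that the character class covers only letters and digits, so if your zip file name contains other characters (for example a hyphen), you may need to widen the pattern accordingly.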

Step # 4 – Define the start and stop point to simulate the subset of flow.

Step # 5 – In the simulation input dialog, upload the compressed zip file, which contains the text files, as the body input file.

  • compressed-file.zip
    • file1.txt
    • file2.txt
    • file3.txt

Note: Binary file content will not be displayed in the content section of the dialog.

Step # 6 – Run the simulation and validate the outgoing message from the content modifier.

The zip file is split, and the content of each text file is shown.

Content of the file1.txt is displayed.

Content of the file3.txt is displayed.

Content of the file2.txt is displayed.

Note: When a binary file is edited, the file is invalidated, and the file name and content are removed. Let's see this behavior.

Click the Edit button.

The content section of the Body becomes available for input.

Scenario # 2 – JSON file from SFTP is converted into XML and sent to the receiver.

Let's simulate this scenario using the simulation feature of the integration flow.

Step # 1 – Model the scenario as shown below.

Step # 2 – Define the start and stop point to simulate the subset of flow.

Step # 3 – In the simulation input dialog, upload a sample JSON file as the body input file.

Note: Readable (Non-binary) file content will be displayed in the content section of the dialog.

Step # 4 – Run the simulation and validate the outgoing message from the JSON-to-XML converter and Content Modifier steps; the JSON has been converted into XML format.
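For illustration only (the actual file content is not shown in this blog), a JSON body with a single top-level member such as

{"order": {"id": "1001", "customer": "ACME"}}

would typically be converted by the JSON-to-XML converter into something like

<order><id>1001</id><customer>ACME</customer></order>

If your JSON has more than one top-level member, the converter offers an option to add an XML root element with a name you specify.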

Note: When a readable (non-binary) file is edited, the file is invalidated and the file name is removed. However, the content of the file remains available for modification. Let's see this behavior.

Click Edit to modify the content of the body file.

Content of the file is modified.

Run the simulation and the expected output will be observed.

Scenario # 3 – How to revert modified content to the original content of the file.

Step # 1 – Upload the body input file.

Step # 2 – Click Edit to modify the content of the file. File content is available for change.

Step # 3 – Modify the content of the file.

Step # 4 – Click the Reset button to revert the changes and restore the original content of the file.

Content is reset to the original file you uploaded.


S/4HANA iDoc integration with SCP Integration Suite https://www.erpqna.com/s-4hana-idoc-integration-with-scp-integration-suite/?utm_source=rss&utm_medium=rss&utm_campaign=s-4hana-idoc-integration-with-scp-integration-suite Tue, 06 Oct 2020 09:23:36 +0000 https://www.erpqna.com/?p=37287 In this post, I will show you how to configure your S/4HANA system and create an integration flow in SCPI to perform this. The integration flow I will create will be very simple, getting an iDoc from a S/4HANA system of Master Data Customer information, using DEBMAS, and I will push this data on a […]

In this post, I will show you how to configure your S/4HANA system and create an integration flow in SCPI for this scenario. The integration flow I will create is very simple: it receives a customer master data iDoc (DEBMAS) from an S/4HANA system, and I will push this data to an SFTP server hosted in Google Cloud.

First, let’s configure our S/4HANA system. We need to define the SCPI logical system. We’ll use transaction BD54 and simply define it here.

Next we will go in transaction SALE to create a distribution model and message type.

First create the model, then create a message type.

For the message type, the model view will be the one we just created, the sender will be our S/4HANA system, the receiver will be our SCPI logical system that we just created in BD54 and the message type will be the one I decided to use in this demo – DEBMAS.

Now we need to define the trust between our S/4HANA system and our SCPI system. For that, we will import our certificates. Go to your SCPI system, click on the certificates, and export the 3 levels of certificates to files using Base-64 encoding.

You now have the three certificates: the ROOT, CA, and HCI certificates. Go back to your S/4HANA system and launch transaction STRUST.

Click on SSL Client (Anonymous), import the 3 certificates one by one, and add each of them to the certificate list.

Now we can configure our RFC connection to SCPI with transaction SM59; create a new one under HTTP Connections to External Server.

In the Technical Settings, we provide the information about our SCPI host; the port will be 443 for HTTPS, and the path prefix will be whatever you need for your iDoc. In my case, I am using DEBMAS, so I used this: /cxf/debmas05.

You will also define this in SCPI, so use exactly the same deployed endpoint URL.
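For example, if the iFlow's IDoc sender channel address is /debmas05, the endpoint shown after deployment will look roughly like https://<your CPI runtime host>/cxf/debmas05 (the /cxf prefix is added by the runtime, as noted further below), and that is what the SM59 destination's host and path prefix must point to.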

In the Logon & Security tab, define your authentication method and also select anonymous SSL as the secure protocol:

You could test your connection here, but since we haven't created the integration flow yet, we will do that afterwards.

Next, we need to create the port for our iDoc processing. We will use transaction WE21: click on XML HTTP and create a new port. Use the settings in the following screenshot.

Hold on, we’re nearly there!

Now we need to create a partner profile. For that, use transaction WE20, click on partner type LS, and create a new profile. Give it a name, define an agent, and then add an outbound message by clicking the add button for outbound parameters.

Choose the message type (for me it is DEBMAS), choose to pass the iDoc immediately, and define the iDoc basic type; here I am using debmas05.

Now that’s it for all the configuration on the S/4HANA part! Well done!

Heading to SCPI to design the integration flow, we'll do something very basic: I'll just get the iDoc from S/4HANA, add some information, and save it to my SFTP server in Google Cloud.

Remember the path prefix we defined in the SM59 RFC destination: here I entered just the last part, since the /cxf prefix is added automatically. I also won't go into the details of how I set up the SFTP part. Now I can deploy this integration flow. Once that is done, I could go back to SM59 to test the connection, but I'll test the whole thing by sending an iDoc directly.

Going back to my S/4HANA system, I will use transaction BD12. Select the customer information you want to send, choose DEBMAS as the output type and, as the target, the logical system we defined at the very beginning!

You’re ready to EXECUTE! ^_^

You should see the following:

Going to SCPI to see if everything went fine, we head into the Message monitor.

The message was correctly processed.

You could also verify that by using transaction WE05 in the S/4HANA system.

If we check our destination SFTP system:

The iDoc was sent to our SFTP system.

