Cost optimized SAP HANA DR options on Google Cloud

Abstract

Business Continuity is of utmost importance for any organization. A well-defined High Availability (HA) and Disaster Recovery (DR) strategy ensures that business-critical SAP applications remain accessible during any planned or unplanned outage. As the SAP HANA database is a central component of an SAP application, it is configured with a suitable HA/DR setup that makes the business data available at a secondary node or site to ensure business continuity.

HA for the HANA database provides fault tolerance, mostly for infrastructure-related failures: the HANA database fails over to a secondary hot-standby node deployed in cluster mode within a single region. RPO and RTO are almost zero in this case, as most of the steps are seamless and automated by the cluster management. Synchronous HANA replication across zones of the same region keeps the secondary HANA node in sync with the primary node at all times.

In case of a complete primary site loss, where the primary node and its HA hot standby are both unavailable, a DR solution in a separate geographical location, termed the Secondary or DR Region in the public cloud, acts as a safety net to ensure business continuity. Asynchronous HANA replication continuously pushes data from the primary node to the standby node in the secondary region. A manual failover is required to promote the DR node to primary.
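Replication health can be checked from the primary at any time. Below is a minimal sketch using the standard HANA monitoring view M_SERVICE_REPLICATION (the view and column names come from the SAP HANA SQL reference; the expected values assume the asynchronous DR replication described above is in place):

-- Run on the primary: one row per replicated service.
-- For the DR site we expect REPLICATION_MODE = 'ASYNC' and
-- REPLICATION_STATUS = 'ACTIVE' once the initial data shipping is done.
SELECT HOST,
       PORT,
       SECONDARY_SITE_NAME,
       REPLICATION_MODE,
       REPLICATION_STATUS
  FROM M_SERVICE_REPLICATION;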

Let’s discuss some SAP HANA DR options along with their pros & cons respectively.

Performance optimized DR setup (cost challenge)

For mission-critical applications that require a minimal RTO (a few minutes to hours) to have the system up and running in the secondary region with all the business data, a performance optimized DR setup using SAP HANA System Replication (HSR) is deployed. In this DR setup, the computing capacity of the DR HANA node is kept the same as the primary HANA node's, and the full data set is loaded into the memory of the DR HANA node at the time of replication setup. All delta data committed on the primary after the initial load is then replicated regularly to the secondary through archive logs.

As depicted in the diagram below, the secondary node's CPU and memory configuration is kept the same as the primary node's, so the major chunk of data is already loaded in the secondary HANA node's memory. In a disaster scenario, such a configuration enables the secondary node to take over as primary in the minimum possible time with almost no data loss. However, maintaining hardware equal to PRD at the secondary site for the DR node adds significant cost.

Cost optimized DR setup (higher RTO)

To overcome the cost challenge of the performance optimized DR setup, we can consider the following cost optimized SAP HANA DR options. The trade-off with such setups is a higher RTO, but at a lower DR cost.

(i) Shared HANA DR node

In a DR setup where the secondary node sizing is kept identical to the PRD primary, resources on the secondary node generally cannot be used for anything else until a takeover takes place. In this shared DR setup, PRD data is not loaded into memory but kept on disk at the secondary site, so resources can be shared with another non-PRD SAP HANA instance, e.g. QAS or TEST, on the same node. To achieve this, the memory allocation of the PRD secondary instance is restricted and the rest of the memory is allocated to the non-PRD (QAS/TEST) instance.

In case of a PRD primary disaster, the non-PRD QAS/TEST system is stopped, full memory/resources are allocated to the PRD standby node, data is loaded from disk into memory, and the node is brought up as primary. These steps clearly increase the recovery time, but the setup has the advantage of a low-cost DR footprint because the same DR node also hosts the QAS/TEST instance.

(ii) Lean HANA DR node

Compared to the shared DR setup, here we opt for a bare-minimum memory configuration for the PRD secondary instance, so that the replicated PRD data is persisted only to disk. Thus the DR node does not need to match the sizing/memory configuration of the PRD primary. As in the shared DR setup, the preload of column tables into the memory of the standby HANA node is disabled by setting the database parameter "preload_column_tables" to false.

In case of a PRD primary disaster, the DR node is stopped and upgraded to a configuration matching the PRD primary (a Google VM type approved by SAP for PRD use), and full memory/resources are allocated to the PRD standby node. The database parameter "preload_column_tables" must be changed back to its default value of true so that the complete data set, including the column tables, is loaded into memory. Compared to shared mode, this setup significantly reduces DR cost, as the standby node's compute/memory is kept to the bare minimum needed to support data replication from the primary node.
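For reference, the parameter switch can be scripted with standard HANA SQL. This is a minimal sketch; it assumes, per common cost-optimized HSR guidance, that the parameter sits in the [system_replication] section of global.ini (verify against the SAP notes referenced below before use):

-- On the lean DR secondary: keep column tables out of memory.
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('system_replication', 'preload_column_tables') = 'false'
  WITH RECONFIGURE;

-- During takeover: revert to the default (true) so column tables are preloaded again.
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  UNSET ('system_replication', 'preload_column_tables')
  WITH RECONFIGURE;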

The minimum memory configuration/Google VM type needed for the cost optimized lean DR secondary node should be calculated as per SAP guidelines and supporting documentation (SAP Note 1999880 – FAQ: SAP HANA System Replication). It is advisable to run a pilot/PoC to arrive at the exact memory and sizing configuration for the lean DR node and to surface any unforeseen issues upfront.

(iii) Backup-Restore HANA DR

In this most cost-effective HANA DR solution, no dedicated secondary HANA node is deployed and no real-time data replication from the primary HANA node takes place. Instead, we need to ensure that backups (HANA database data and logs, plus application/file systems) are stored in dual-region/multi-region mode in another region identified as the DR site. The RTO to bring up the primary instance at the identified DR site is quite high: in a disaster scenario where the primary region is unavailable, one needs to set up the servers in the DR region from scratch, install the application along with the database, and then restore the database from backup.

We must also reserve the needed computing capacity in the DR region so that the required VMs can be deployed quickly in a disaster scenario, and ensure network connectivity (VPN) to the DR site so the applications can be accessed.

Conclusion

The performance optimized HANA DR setup is the preferred one, with minimum RTO and RPO, as the business wants its critical SAP applications up and running at the secondary site in the shortest possible time. But keeping a hardware configuration equal to production is a cost overhead.

All the cost optimized HANA DR options discussed here will certainly save cost compared to the performance optimized HANA DR setup. However, one must accept a longer time to stand up a functional DR system in the secondary region. Depending on the acceptable RTO and the criticality of the HANA-based SAP application in a DR scenario, the appropriate cost optimized HANA DR setup can be deployed.

Connecting SAP HANA on-premises to SAP Cloud Platform Integration (CPI)

Introduction

Connecting SAP HANA on-premises to SAP Cloud Platform Integration (CPI) is a crucial step in establishing a seamless data flow between on-premise and cloud-based systems. This integration allows organizations to leverage the power of cloud-based integration capabilities while maintaining access to their existing on-premise HANA databases.

This document will provide a comprehensive guide on how to achieve this connection, covering essential steps, configurations, and potential challenges. By following the outlined procedures, you can effectively establish a bridge between your on-premise HANA environment and CPI, enabling efficient data exchange and process automation.

In order to fetch data from the SAP HANA database from a CPI iFlow, you need to complete the following configurations (steps A and B, as shown in the diagram below).

A. Cloud Connector

Cloud Connector is a software application that acts as a bridge between cloud-based services and on-premises systems. It establishes a secure connection, allowing these systems to communicate and exchange data seamlessly.

Log in to the Cloud Connector application and establish the cloud to on-premises connection:

  • Map the HANA server into the Cloud Connector using the TCP protocol
  • Provide the Internal IP and Port, and the Virtual IP and Port
  • Note down the Virtual IP and Port (these will be used in the JDBC connection inside the CPI subaccount)

To add a new system, you need to provide the following details:

  • Backend Type: SAP HANA
  • Protocol: TCP
  • Internal IP & Port
  • Virtual IP & Port

Once saved, you can verify connectivity via the Check Availability icon, which shows Reachable or Not reachable. (Again, note down the Virtual IP and Port.)

B. Cloud Platform (CPI Account)

SAP Cloud Platform Integration (CPI) is a cloud-based integration platform as a service. CPI provides a flexible and scalable solution for integrating cloud and on-premise applications, facilitating seamless data exchange and process automation.

Log in to the CPI subaccount and create the new JDBC material/connection.

  • Create the JDBC connection
  • Provide the JDBC URL in the format jdbc:sap://virtualIP:virtualPort/?databaseName=myhanadb
  • where virtualIP and virtualPort are the virtual IP and port you mapped in the Cloud Connector in step A

While adding the JDBC material, you need to provide the following details:

Name: a name of your choice
Description: as per your wish
Database type: SAP HANA Platform (on premise)
User ID and Password: your database user ID and password
JDBC URL: jdbc:sap://virtualIP:virtualPort/?databaseName=yourDatabaseName (see the filled-in sample below)
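For clarity, a completed URL might look like the line below. The host, port, and database name are hypothetical placeholders, not values from any real system:

jdbc:sap://10.0.0.15:30015/?databaseName=HDB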

Finally, in the iFlow, you use the JDBC adapter and, in its properties tab, enter the JDBC connection name you created in step B.

C. iFlow

In the iFlow, the HANA server is connected via the Request Reply palette element. In the JDBC adapter's properties, reference the JDBC connection you defined in step B.

D. Execute the iFlow in Postman

In Postman, you pass the SQL statement in the body:

select * from sapabap1.sflight;

The output is returned in XML format.

This is how we can establish connectivity from a CPI iFlow to SAP HANA on-premises and execute SQL queries to fetch data.


Conclusion

In conclusion, this document has outlined the steps and considerations involved in establishing a seamless connection between an on-premises HANA database and SAP Cloud Platform Integration (CPI). By leveraging the capabilities of CPI, organizations can effectively bridge the gap between their on-premises systems and cloud-based services. By following the outlined procedures and best practices, organizations can ensure a successful and reliable connection between their HANA database and CPI, unlocking the full potential of both platforms.

Safeguarding Enterprise Personal and Financial Data in SAP HANA with IBM Security Guardium

Introduction

In the modern digital world, protecting sensitive business data is more important than ever. SAP HANA Cloud databases, known for their high performance and advanced analytics, are essential to many organisations' operations. However, the huge amounts of personal and financial data they handle make them potential targets for cyber-attacks. Implementing advanced security measures is critical for protecting these datasets from possible breaches.

This blog explains how IBM Security Guardium adds a further layer of protection to SAP HANA Cloud databases. By leveraging Guardium's comprehensive capabilities, you can ensure that enterprise personal and financial data is secure and meets regulatory standards. Learn how this powerful combination can improve your data security strategy and safeguard your company's most precious assets.

Importance of Data classification and identification for Data security

Identifying and classifying data is crucial for maintaining data security and ensuring compliance with regulatory standards. It helps in understanding the sensitivity and value of data, enabling organisations to implement appropriate security measures. Proper classification aids in protecting sensitive information from unauthorised access and potential breaches, while also facilitating efficient data management and retrieval.

About this blog

This blog shows how IBM Guardium can be utilised to discover sensitive data within an SAP HANA DB. By scanning the database, Guardium identifies and classifies sensitive information such as personal data, financial records, and intellectual property. Once discovered, this data is added to specific groups of fields or objects for continuous observation. This grouping facilitates targeted monitoring and protection, ensuring that sensitive data is safeguarded against unauthorised access and potential breaches. Guardium's scanning and classification capabilities help maintain data security and compliance with regulatory standards for data protection in SAP HANA environments.

Prerequisites

  • SAP BTP Account with access to SAP HANA Cloud Database
  • IBM Security Guardium

Architecture

SAP HANA Cloud, a cloud-based version of the SAP HANA database, offers a multi-model platform for storing and processing diverse data. It integrates with SAP S/4HANA, the latest ERP suite, and with SAP Business Technology Platform for application development. Here, security is ensured through IBM Guardium: IBM Security Guardium scans the SAP HANA Cloud DB to identify and classify sensitive data such as personal and financial details. This data classification enables administrators to keep an eye on specific table fields and helps them formulate further security measures, such as data masking or data hiding for the database. Hence, this architecture positions SAP HANA Cloud as a secure and strong foundation for building versatile cloud-based enterprise applications.

Steps for integration

Log in to Guardium, and you will be directed to the home page as shown below:

Go to the Discover button on the left-hand panel, open the “Classification” dropdown, and select “Datasource Definitions” as shown below:

Click the “New” button, as highlighted below:

Enter details such as application type, name, database type, and other details in the pop-up screen as shown below:

Please keep in mind that the username and password for the SAP HANA Cloud database must be entered here.

To obtain the host name/IP address and port number, log in to your SAP BTP account and navigate to the space whose SAP HANA Cloud DB you want to integrate with Guardium.

Select “SAP HANA Cloud” as indicated below:

Now, click “Actions” and choose “Copy SQL Endpoint”.

Paste the copied SQL endpoint to retrieve the hostname/IP data as shown below:

The port number details can be read from the same endpoint, as displayed below:

To check the status of your connection, click the “Test Connection” button.

The SAP HANA Cloud database setup is now complete. You can see the details as follows:

Click the Discover button on the left-hand panel, then open the drop-down menu by clicking “Classification” and selecting “Discover Sensitive Data”. Refer to the image below.

On the following screen, select "PII [template]". Review the information as shown below, then click "Roles" to assign them, and click the "Next" button.

Select the check box for each template pattern you wish to include (for example, birth date, city), click the "Copy" button as displayed below, and click the "Next" button:

Once we've completed "What to discover," we move on to "Where to search," choose the integrated SAP HANA Cloud database, and click "Next".

"Run discovery" is a convenience feature that lets you run the classification and check its status. Click "Next".

We are now at the "Review report" stage, where we select a list of fields, choose "Add to Group of Object/Field" from the "Add to Group" drop-down, and click the "Next" button.

Select the group "SAP Sensitive Data" and click on the "OK" button.

Let’s Test

Click the “Setup” button on the left-hand panel and choose “Group Builder” from the “Tools and Views” drop-down list.

Select “Object/Field” from the “Action” drop-down, then select “SAP Sensitive Data” from the list. Click the “Edit” button.

In the pop-up screen, select “Members”.

You will be able to see the relevant personal and financial table and fields from SAP HANA Cloud database.

Now that you have identified and categorised the sensitive data in your HANA database, IBM Security Guardium can further improve data security through specialised security measures, such as:

  • Adding encryption or access controls to safeguard important data from unauthorised access and breaches
  • Masking or blocking data access requests that violate regulations or policies
  • Configuring alerts for unauthorised access attempts, e.g. if someone from a non-finance department tries to access financial data, an alert can be triggered

In general, classifying data based on its sensitivity in the first place helps to increase visibility and, in turn, to comply with regulatory obligations (e.g. by generating detailed reports for audits), prevent data loss, and reduce risks associated with data misuse. These features ensure that data handling procedures are consistent with organisational rules and legal standards, hence improving overall data security.

Conclusion

Securing SAP HANA Cloud databases is critical for safeguarding company personal and financial information from changing cyber threats. IBM Security Guardium improves your SAP HANA environment by offering strong data protection, continuous monitoring, and compliance capabilities, ensuring that critical information is protected. Investing in these advanced security measures not only protects essential data, but it also demonstrates how committed your company is to data privacy and compliance with laws and regulations. As cyber threats become more sophisticated, using IBM Security Guardium is a proactive step towards strengthening your SAP HANA Cloud databases and ensuring the integrity and security of your company data.

Importance of Currency Conversion in SAP Datasphere

Accurate currency conversion ensures that financial statements reflect true value, enabling better decision-making and compliance with regulatory requirements. Without effective currency conversion, businesses may face issues such as financial discrepancies, inaccurate forecasting, and challenges in performance evaluation across different regions.

"Due to the twenty-four-hour global nature of currency markets, exchange rates are constantly shifting from day to day and even from minute to minute, sometimes in small increments and sometimes quite dramatically" – Harvard Business Services.

In an interconnected world where businesses operate across borders, efficient currency conversion is essential. Currency conversion significantly impacts reporting and business analytics in the following ways:

Translation of Financial Statements:

Multinational corporations translate foreign incomes, expenses, assets, and liabilities into their reporting currency using relevant exchange rates.

Variances in Reported Financials:

Exchange rate fluctuations can result in significant variances in reported financials.

Currency conversion in SAP involves converting monetary values from one currency to another based on predefined exchange rates. This is important for multinational companies and businesses engaged in cross-border transactions.

Exchange rates in SAP define how one currency is converted to another. Let’s explore some significant business use cases where reporting with currency conversion is extensively required:

Case 1: Global Operations and Multinational Businesses

Companies operating in multiple countries need to manage different currencies. Currency conversion allows them to integrate financial data across various locations and ensure accurate financial reporting.

Case 2: Consolidated Financial Statements

Currency conversion in SAP enables the creation of consolidated financial reports in a single group currency.

Case 3: Budgeting and Forecasting

Companies often need to budget and forecast in a specific currency while dealing with costs and revenues in other currencies. Currency conversion allows for accurate planning and forecasting, providing a unified view of the organization’s financial health.

Currency conversion is used when posting financial information where the reporting currency differs from the transaction currency.

Currency Conversion Methods In SAP Datasphere

In an increasingly globalized business environment, companies often deal with transactions and data from multiple countries, involving various currencies. This complexity makes accurate currency conversion a critical aspect of financial reporting, budgeting, and analytics. SAP Datasphere, with its robust currency conversion capabilities, ensures businesses maintain financial accuracy and consistency across their operations.

Steps to Create Currency Conversion

Use Case

Step 01:

The client needs a report in Indian currency, but we have data in Datasphere in USD. So, using the currency conversion, we are converting from USD to INR.

Step 02:

Check whether the connection of the source is in an active state.

  • Confirm access to Data flows, Tables, and Views, because running the data flow loads data into the local table before it becomes available in the views.
  • To perform currency translation in SAP Datasphere, the following tables must be available in your space:
  1. TCURV – Exchange rate types
  2. TCURW – Exchange rate type text
  3. TCURX – Decimal places in currencies
  4. TCURN – Quotations
  5. TCURR – Exchange rates
  6. TCURF – Conversion factors
  7. TCURC – Currency codes
  8. TCURT – Currency text
  • DataFlows

  • Tables

  • Views

Step 03:

  • In this case we don't have data in the target table, so we have to run the data flow to load the data into the local table.

Step 04:

  • Here, we are converting the currency from USD to INR.
  • We have filtered “From Currency” as “USD” and “To Currency” as “INR” to obtain the exchange rate type.
  • After that, obtain exchange rate types M & P for the scenario.

Step 05:

  • For this scenario we have selected the source measure as "Gross Amount," so we have to change the "Semantic Type" to "Amount with Currency"; the "Unit Column" can be selected accordingly.

Step 06:

  • We have selected the billing document date as the transaction date because we are using the billing document fact model.

Step 07:

  • The conversion from USD to INR is now complete. (For an SQL-level view of what this conversion does, see the sketch below.)
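For readers who want to see the underlying mechanics: SAP HANA, the engine beneath Datasphere, exposes a CONVERT_CURRENCY SQL function that performs the same TCUR*-table-driven conversion. The sketch below is illustrative only; the schema, view, and column names are hypothetical, and it assumes the TCUR* tables listed in Step 02 are available in that schema:

-- Minimal sketch: convert the gross amount to INR using rate type 'M'
-- and the billing document date as the reference date.
SELECT CONVERT_CURRENCY(
         AMOUNT          => "GrossAmount",       -- hypothetical measure column
         SOURCE_UNIT     => "Currency",          -- hypothetical currency column
         TARGET_UNIT     => 'INR',
         CONVERSION_TYPE => 'M',
         REFERENCE_DATE  => "BillingDate",       -- hypothetical date column
         CLIENT          => '100',
         SCHEMA          => 'MYSPACE',           -- schema holding the TCUR* tables
         ERROR_HANDLING  => 'set to null'
       ) AS "GrossAmountINR"
  FROM "MYSPACE"."BillingDocumentFact";          -- hypothetical fact view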

Best Practices for Currency Conversion

  • Address rounding and precision issues:

Rounding Rules: Apply consistent rounding rules to avoid discrepancies in financial reports.

Precision: Ensure that rounding practices maintain accuracy, especially with large datasets.

  • Maintain consistency and accuracy:

Accurate Data Entry: Ensure accurate entry of financial data and exchange rates to minimize errors during currency conversion.

Data Quality Checks: Regularly perform data quality checks to identify and rectify inaccuracies in exchange rates and financial data.

  • Regular updates and monitoring of exchange rates:

Periodic Reviews: Conduct regular reviews of your currency conversion processes and update procedures as necessary to adapt to changing financial environments.

SAP AI Technologies Evolution. Seamless Transition from SAP CAI to Skybuffer AI on SAP BTP

Introduction

Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our Conversational AI development, a journey we embarked on in 2018. Our solution now addresses a wide range of AI needs, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG).

Initially integrating SAP Conversational AI, Skybuffer has been keenly tracking SAP’s strategic AI roadmap. In developing the Skybuffer AI technological environment, our goal has been to ensure our product is compatible with SAP’s cutting-edge AI technologies while offering seamless transition support.

Challenge

Back in 2018, SAP introduced the Conversational AI tool, revolutionizing the way businesses build AI assistants and chatbots. Many innovative companies seized this opportunity, integrating Conversational AI into their business processes and using these chatbots in their B2B, B2E and even B2C scenarios.

However, as SAP’s AI strategy evolved, the decision was made to sunset SAP Conversational AI. Many companies are keen to protect their investments and retain their Conversational AI-based developments. These systems have been meticulously crafted and are being used productively, making it essential to find a balance between adopting new SAP AI technologies and maintaining the value of their existing solutions.

The challenge lies in balancing the transition to new technologies while maintaining the productivity and value of the existing SAP Conversational AI solutions.

Solution

Skybuffer AI, a robust all-in-one AI mastering tool, offers a seamless transition mechanism for any SAP CAI chatbot to our new universal AI platform available on SAP BTP. With just one click, you can migrate your SAP CAI chatbot to Skybuffer AI deployed into your SAP BTP instance. Here’s a quick overview of the steps to transition your bot to Skybuffer AI on SAP BTP in just 5-10 minutes.

Step 1: Export the chatbot from your SAP CAI using SAP standard functionality.

Picture 1. SAP Conversational AI export interface

Step 2: Call the Skybuffer AI API (secured by the OAuth2 protocol), pointing to the Skybuffer AI application deployed on your SAP BTP account, and feed it the bot files exported from SAP CAI.

Picture 2. Skybuffer AI API on SAP BTP import interface for SAP CAI development

Step 3: Voilà, your SAP CAI-based AI assistant (bot) is now available in the AI Model Configuration application on the Skybuffer AI Launchpad.

Picture 3. AI Model overview once it is imported to Skybuffer AI
Picture 4. Drill down to the skill level on Action Server

Step 4: Now you can Train and Deploy AI models via simple and intuitive Fiori applications of Skybuffer AI.

Picture 5. Train and Deploy functionality

Step 5: Make basic channel settings within the Communication Channels application from Skybuffer AI Launchpad.

Picture 6. Communication channel setup for AI model

Step 6: Your new-level AI model is now operational within Skybuffer AI. Use the Web Chat Preview function to test your webchat-based AI model right away.

Picture 7. Web Chat Preview function of Skybuffer AI to test an AI model

Outcome

The development based on SAP Conversational AI is secure and ready for further enhancement within the user-friendly Skybuffer AI solution. This solution seamlessly integrates all the cutting-edge SAP Business AI capabilities of SAP BTP, empowering your business to be more innovative and future-ready.

SAP HANA Database Creation from SAP BTP

Creation of SAP HANA Database Instance in SAP BTP

1. Open SAP BTP Cockpit and click on subaccount.

2. Click on Instances and Subscriptions and then click on create button.

3. Select Service as SAP HANA Cloud and Plan as tools.

4. SAP HANA Cloud is now subscribed.

5. Click on Security -> Users and select the user.

6. Click on Assign Role Collection.

7. Assign the following three roles.

8. Click on SAP HANA Cloud Application; it will open in new window.

9. Click on create Instance.

10. Select SAP HANA Cloud, SAP HANA Database and click Next Step.

11. Provide the instance name and password, then click Next Step.

12. Click on Next Step.

13. Click on Next Step.

14. Select Allow all IP addresses, select Enable Cloud Connector (again allowing all IP addresses), then click Next Step.

15. Click on Review and Create.

16. Click on Create Instance.

17. Wait until the instance status turns to Running, then click the three-dot Actions menu.

18. Go to Open in SAP HANA Database Explorer; it will open in a new window.

19. Enter the Credentials

Username: DBADMIN

Password: “use the password which was created at Step 11”.

20. Click on the SQL Console icon at the top left.

Now the SAP HANA Database is ready to use.
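As a quick sanity check from the SQL console, you can query a standard monitoring view. This is a minimal sketch; M_DATABASE is the documented HANA system view for basic instance information:

-- Confirm the new instance answers SQL and report its version.
SELECT DATABASE_NAME, USAGE, VERSION FROM M_DATABASE;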

Nested JSON to SAP HANA Tables with SAP Integration Suite

In this blog post, I will demonstrate how to send data to SAP HANA Cloud using the Integration Suite. Additionally, I will explain how to handle nested JSON data and distribute it across multiple tables utilizing parallel multicast and mapping functions.

Problem Statement:

We have exposed an API endpoint through which we push data in JSON format, and in response we get the insert count for the respective tables. The input data contains user details and role details in nested form. We need to insert the user details into the User table, while in the User-Role mapping table we create one entry per role associated with a user, linking the user's details with their roles. Our requirement is to process the JSON data via CPI and populate these two tables.

Architecture:

Adding JDBC Data Source in Integration Suite

We need JDBC URL, User ID and Password to connect to the SAP HANA Cloud Database. We can get it from instance credentials.

Here we get the JDBC URL for our database which will be like “jdbc:sap://<instanceID>.hana.trial-us10.hanacloud.ondemand.com:443/?encrypt=true”

We also get the Runtime User ID and Password for the Schema in the same key.

Go to your Integration Suite tenant and click on JDBC Material under the Manage Security section.

Add the instance details we got from credentials and deploy.

In the case of a third-party database such as Microsoft SQL Server, Oracle, or DB2, you need to upload the JDBC driver JAR file under the JDBC Driver tab. For HANA, this is not required.

Integration Flow Design:

I have created this integration flow, exposing one endpoint to receive the JSON data. First, I converted the JSON data to XML, because data is sent through the JDBC adapter in a standard XML format. Below is the standard XML format used to insert data into the database.

<root>
   <StatementName1>
       <dbTableName action="INSERT">
           <table>SCHEMA."TABLENAME"</table>
           <access>
               <col1>val1</col1>
               <col2>val2</col2>
               <col3>val3</col3>
           </access>
       </dbTableName>
   </StatementName1>
</root>

I have used parallel multicast to route the message simultaneously in two different branches since we have to insert data in two different tables.

Let’s look at message mappings for both the routes.

For User Table we are using simple one to one mapping and passing “INSERT” in the action attribute.

This is the mapping for the mapping table, where we map users to their roles. As you can see, the userID node occurs once per record, but we need it to occur multiple times, once for each role the particular user has. So we use the useOneAsMany node function.

We then gather the messages from both the routes and pass them together through the JDBC Adapter using Batch processing.

Let’s look at the configuration of JDBC Adapter.

Here is the input payload content I am passing from the Postman.

{
  "root": {
    "UserDetails": [
		{
			"userID": 6666,
			"username": "harshal",
			"gender": "male",
			"email": "harshal@test.com",
			"roles": [
				{ "roleID": 1 },
				{ "roleID": 2 }
			]
		}
	]
  }
}

Here is the XML payload content that goes through JDBC Adapter.

<root>
   <StatementName>
       <dbTableName action="INSERT">
           <table>TEST2_HDI_DB_1.USER</table>
           <access>
               <user_ID>6666</user_ID>
               <username>harshal</username>
               <gender>male</gender>
               <email_ID>harshal@test.com</email_ID>
           </access>
       </dbTableName>
   </StatementName>
   <StatementName>
       <dbTableName action="INSERT">
           <table>TEST2_HDI_DB_1.MAPPING</table>
           <access>
               <user_ID>6666</user_ID>
               <role_ID>1</role_ID>
           </access>
           <access>
               <user_ID>6666</user_ID>
               <role_ID>2</role_ID>
           </access>
       </dbTableName>
   </StatementName>
</root>

Response in Postman:

Let’s check our entries in HANA Database.
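A minimal sketch for that check, assuming the schema, table, and column names shown in the XML payload above:

-- Verify the inserted user and the two role-mapping rows.
SELECT * FROM "TEST2_HDI_DB_1"."USER"    WHERE "user_ID" = 6666;
SELECT * FROM "TEST2_HDI_DB_1"."MAPPING" WHERE "user_ID" = 6666;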

Connect & Visualize: SAP Datasphere with Qlik Sense

In this blog, we'll explore how to consume data from SAP Datasphere through ODBC (Open Database Connectivity) and visualize it in Qlik Sense, one of the leading data visualization tools.

Why SAP Datasphere over others?

SAP Datasphere allows seamless connectivity with a wide range of data sources, including on-premises and cloud-based systems. It is designed to handle large volumes of data efficiently, making it suitable for organizations of all sizes, from small businesses to large enterprises; its scalable architecture ensures optimal performance even as data volumes grow over time. It offers graphical low-code/no-code tools to support the self-service modeling needs of business users, along with powerful built-in SQL and data flow editors for sophisticated modeling and data transformation needs.

It has a graphical impact and lineage analysis to visualize data movements, transformations, and other dependencies.

Steps to connect SAP Datasphere with Qliksense

In SAP Datasphere, go to the Space Management tab and click Edit on the appropriate space.

Make sure to turn on Expose for Consumption, so that the data can be retrieved from Datasphere and consumed in other tools.

Now click Create in the Database Users section and give an appropriate name.

Make sure to deploy the space to access the user credentials

Make sure to copy the username, hostname and password in the database user details.

Go to System -> Configuration -> IP Allowlist -> Trusted IPs

Click on Add IP

Get the IPv4 address of your appropriate Internet Service Provider

and make sure to add the IP address in the IP allowlist.

Below is the view I am going to consume in the Qlik Sense to visualize the data

ODBC Part

First download and install the SAP HANA ODBC driver (HDBODBC) on your system. Here's the URL to download: SAP Development Tools (ondemand.com)

Then open the ODBC

Click on Add

Click on HDBODBC

  • Give any meaningful name for the Data Source Name and Description.
  • Database type: SAP HANA Cloud or SAP HANA Single Tenant.
  • Paste the host URL you copied earlier from the Datasphere space.

  • Click Test Connection.
  • Enter the database username and password.

Now the connection is successful.

Qlik Sense Part

In Qlik Sense, click Create new app.

Here, click Add data.

There will be several data connections; among them, choose the ODBC connection.

Now click on the connection you created in the ODBC part and enter the username and password.

Now click on Owner and select the space in which the data (view) is available.

Now click on the view to be visualized in Qlik Sense; a data preview will appear.

Now the data is loaded successfully. Hence the connection between SAP Datasphere and Qlik Sense is established via the ODBC connection.

Here’s the sample dashboard created using the fact view consumed from SAP Datasphere

SAP Datasphere – Enable SAP HANA Development Infrastructure (HDI)

Introduction

Use SAP SQL Data Warehousing to build calculation views and other SAP HANA Cloud HDI objects directly in the SAP Datasphere run-time database, and then exchange data between HDI containers and SAP Datasphere spaces. SAP SQL Data Warehousing can be used to bring existing HDI objects into the SAP Datasphere environment, and to allow users familiar with the HDI tools to leverage advanced SAP HANA Cloud features.

The following steps are required:

  1. Gather the relevant information
  2. Create an SAP support ticket to map/share the DWC HANA Cloud database to the SAP Datasphere tenant space
  3. Create an HDI service instance on SAP Cloud Platform (SCP) under the SAP Datasphere space
  4. Create roles in the HDI container using Business Application Studio (BAS)
  5. Add/deploy the HDI container instance into SAP Datasphere

1. Gather the relevant information

To enable an HDI container on DWC in SAP Datasphere, the user needs the following details:

  1. SAP Datasphere Tenant ID
  2. SAP Business Technology Platform Org GUID
  3. SAP Business Technology Platform Space GUID

Follow the procedure below to collect these details:

  • To collect the SAP Datasphere Tenant ID, the user should have one of these roles: DW Administrator, System Owner
    • Go to Expand Navigation bar ==> System ==> About ==> Tenant

  • To collect the SAP BTP Org GUID & Space GUID, the user should have one of these roles: SAP HANA Cloud Administrator, Space Developer
    • Go to SAP BTP Cockpit ==> Global account ==> Subaccount ==> SAP Datasphere Space

2. Create an SAP support ticket to map/share DWC HANA Cloud to the SAP Datasphere tenant space

Create a request to the product team under the DS_SM category with the following details:

  • SAP Datasphere Tenant ID :
  • SAP Business Technology Platform Org GUID :
  • SAP Business Technology Platform Space GUID:

In the Schema Access area, go to HDI Containers and click Enable Access. A pop-up window appears indicating that a support ticket needs to be opened so the product team can map the HDI containers to the SAP Datasphere space. Please add the above-mentioned details to that ticket.

A notification will be shared once the ticket has been processed and the mapping is completed.

3. Create an HDI service instance on SAP Cloud Platform (SCP) under the SAP Datasphere space

To create an HDI service instance on SCP, the user should have one of these roles: SAP HANA Cloud Administrator, Space Developer

  • Go to SAP BTP Cockpit ==> Global account ==> Subaccount ==> SAP Datasphere Space
  • Under the Instances tab, click the Create button and select the service instance type.
  • Select the service type SAP HANA Schemas & HDI Containers, the plan hdi-shared, and an instance name (ex: DSP_HDI_PLAY) for the HDI container. Click NEXT and optionally add a schema name in JSON format (ex: {"schema":"DSP_HDI_PLAY"}). This is how you can define the HDI container's own schema name; it is not mandatory, and if the field is left blank the schema name is autogenerated in a number format. Click NEXT and then Create.

The HDI container is now being created, with the status shown below:

With this status, the HDI container has been successfully created.

4. Create roles in the HDI container using Business Application Studio (BAS)

Create the required roles to allow SAP Datasphere to read from and, optionally, write to the container.

You must define the roles DWC_CONSUMPTION_ROLE and DWC_CONSUMPTION_ROLE# (with grant option) in the container, so that the container can be added to the SAP Datasphere space and data can be exchanged between the container and the space.

  • To create these roles in the HDI container, the user should have the Business_Application_Studio_Developer role on the BTP subaccount.
  • Log in to the Business Application Studio (BAS) and create a Dev Space, selecting SAP HANA Native Application.

  • Open the Dev Workspace and create an SAP HANA Database project
  • Click on New Project from Template, select Create SAP HANA Database project, and click Start.

  • Enter the project name (ex: TEST); the SAP HANA database version should be SAP HANA Cloud
  • Log in to the CF account using the credentials authentication method (by entering your SAP email and SAP global password), then click Sign in

  • Select the CF target details Subaccount/organization and CF space

  • And then Bind to HDI Container Service like below

  • Click the Finish button. With this step, the SAP HANA Database project has been successfully created.

You can also verify whether you are connected to the correct HDI container by checking the service key on the HDI container.

Once the project is created, it also generates a service key "SharedDevKey" for the Cloud Foundry service instance "<HDI Container_name>".

  • Create the DWC_CONSUMPTION_ROLE and DWC_CONSUMPTION_ROLE# roles under this project in BAS.
    • Under the db ==> src folder, create the two roles as shown below

Role Name -1

DWC_CONSUMPTION_ROLE_WITH_GRANT.hdbrole

{    "role": {"name": "DWC_CONSUMPTION_ROLE#",
               "schema_privileges": [{"privileges_with_grant_option":[ "SELECT","SELECT METADATA","EXECUTE" ]}]}}

Role Name -2

DWC_CONSUMPTION_ROLE.hdbrole

{    "role": {"name": "DWC_CONSUMPTION_ROLE",
              "schema_privileges": [{"privileges":[ "SELECT","SELECT METADATA","EXECUTE" ]}]}}

Add HDI container info into mta.yaml file:

Then deploy this to the database connection as shown below.

Under the SAP HANA PROJECTS folder, review the files, then click the bind icon on the project folder as shown below.

Make succeeded (0 warnings): 2 files deployed (effective 2), 0 files undeployed (effective 0), 0 dependent files redeployed. Making… ok (0s 303ms)

With this, the roles have been successfully created in the HDI container.
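To double-check from the database side, the roles should now be visible in the catalog. A minimal sketch using the standard ROLES system view; the deployed role names carry the project namespace prefix, hence the LIKE pattern:

-- Run in the SAP HANA Database Explorer SQL console.
SELECT ROLE_SCHEMA_NAME, ROLE_NAME
  FROM ROLES
 WHERE ROLE_NAME LIKE '%DWC_CONSUMPTION_ROLE%';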

5. Add/Deploy HDI Container instance into SAP Datasphere

To deploy the HDI container into the SAP Datasphere space, the user should have one of these roles: DW Administrator, System Owner, Space Owner

  • Go to Expand Navigation bar ==> Space management ==> <Select the space> ==> Database Access ==> HDI Containers

Select the HDI container by clicking on +

Deploy the space by clicking the Deploy button.

If the deployment succeeds, the HDI container has been successfully added to the SAP Datasphere space.

To verify that the HDI container was deployed successfully, check that it appears under the Data Builder application.

SAP HANA Cloud – Catalog & HDI Role Creation (A step-by-step guide)

Introduction

Roles defined in SAP HANA Cloud using HANA Cockpit or HANA Database Explorer (SQL Console) are called catalog-based roles, whereas roles defined using Business Application Studio (BAS) are called HDI roles. Catalog and HDI roles both have their own advantages and disadvantages; some of the key differences are as follows:

Figure 1: Catalog v/s HDI Role

HDI Role Creation:

Pre-requisite:

  • BTP Onboarding.
  • User has access to Business Application Studio.

Step1: Login to Cloud Foundry

Open Business Application Studio (BAS)

Figure 2: Business Application Studio

Login to Cloud Foundry (Navigation: View -> Find Command -> Search CF: Login to Cloud Foundry)

Figure 3: Login to Cloud Foundry

Note: Make sure your cloud foundry endpoint is correct.

Select Cloud Foundry Organization and Space, click Apply.

Figure 4: Select target Cloud Foundry Org. and Space

Step2: Create Project

In Business Application Studio home page, click Start from template.

Figure 5: Start from template

Select SAP HANA Database Project, click Start.

Figure 6: Select Template and Target Location

Enter Project Name, click Next.

Figure 7: Add Basic Information

Enter Module Name db, click Next.

Figure 8: Set Basic Properties

Enter Schema Name and Database Version, click Next.

Figure 9: Set Database Information

Enter Service Instance Name, click Finish.

Figure 10: Bind to HDI Container Service

Created project available under Workspace folder.

Figure 11: Workspace Folder

Step3: Maintain mta.yaml file and bind Database Connections

Open the mta.yaml file under the created project (SECURITY_ROLES) and make the changes as required, e.g. add a service for a UPS (user-provided service), cross-container access, etc.

Figure 12: Maintain mta.yaml file

Bind all required Database Connections (Navigation: SAP HANA Projects -> SECURITY_ROLES/db -> Database Connections)

Figure 13: Bind the Database Connections

Step4: Define .hdbgrants

Create a cfg folder under db and create synonym-grantor-service.hdbgrants file.

Figure 14: Create .hdbgrants file

Maintain the entries to grant external access to the container object owner and application user, then deploy the file.

Figure 15: Maintain .hdbgrants file

Step5: Define .hdinamespace

Create the .hdinamespace file under the cfg folder, maintain the entries for the role naming convention, and deploy the file.

Figure 16: Create and maintain .hdinamespace file

Step6: Define .hdiconfig

Copy .hdiconfig file from src folder and paste it in cfg folder.

Figure 17: Create .hdiconfig file

Step7: Create roles folder under src

Right click on src folder, select New Folder and enter roles.

Figure 18: Create roles folder

Step8: Create .hdbrole

Right click on roles folder, click New File and enter .hdbrole name.

Figure 19: Create .hdbrole

Right click on .hdbrole and select open with Code Editor.

Figure 20: Open role in Code Editor mode

Define JSON for roles and privileges.

Figure 21: Define JSON

Note: Using Role Editor mode, role can be created without defining JSON manually, system automatically defines JSON based on selection of role attributes.

Some useful JSON snippets:

-> Global Object Privileges:

      "global_object_privileges": [
        {
          "name": "DEFAULT",
          "type": "USERGROUP",
          "privileges": [ "OPERATOR" ],
          "schema_reference": "_SYS_DI#BROKER_CG"
        }
      ]

-> Global Roles:

      "global_roles": [
        "MONITORING"
      ]

-> System Privileges:

      "system_privileges": [
        "ADAPTER ADMIN"
      ]

-> Schema Privileges:

      "schema_privileges": [
        {
          "reference": "_SYS_BI",
          "privileges": [ "SELECT" ]
        }
      ]

Right click on roles folder, select New File, enter .hdbroleconfig file and define reference schemas.

Figure 22: Create .hdbroleconfig file

Deploy .hdbroleconfig file first and then .hdbrole file.

Figure 23: Deploy role

Step9: Validate role in HANA Cockpit

The deployed role is available in HANA Cockpit for assignment.

Figure 24: HANA Cockpit

HDI Role created successfully using Business Application Studio.

Catalog Role Creation: Using HANA Cockpit

Pre-requisite:

  • BTP Onboarding.
  • User has the ROLE ADMIN system privilege to create roles, and other system/object privileges as required.

Step1: Login to SAP HANA Cockpit

Open SAP BTP Cockpit and Launch SAP HANA Cockpit.

Figure 25: SAP BTP Cockpit

Enter username and password.

Figure 26: Login to HANA Cockpit

Step2: Open Role Management

Select Role Management under Security and User Management.

Figure 27: HANA Cockpit – Security and User Management

Step3: Create Role

Click Create Role button.

Figure 28: Create Role

Define Role Name, click Create.

Figure 29: Define Role Name

Navigate to the required tab, i.e. Roles, System Privileges, Object Privileges, etc., and add the roles/privileges as required.

Figure 30: Add roles/privileges

Catalog Role created successfully using SAP HANA Cockpit.

Catalog Role Creation: Using HANA Database Explorer

Pre-requisite:

  • BTP Onboarding.
  • User has the ROLE ADMIN system privilege to create roles, and other system/object privileges as required.

Step1: Login to SAP HANA Cockpit

Open SAP BTP Cockpit and Launch SAP HANA Database Explorer.

Figure 31: SAP BTP Cockpit

Enter username and password.

Figure 32: Login to HANA Cockpit

Step2: Open SQL Console & execute commands

Open the SQL console and enter the SQL commands to create the role and assign the privileges (a sketch follows below).

Figure 33: Execute SQL query
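As an illustration, the commands look like the following. This is a minimal sketch with hypothetical role, schema, and privilege names; adapt them to your own requirements:

-- Hypothetical example: create a catalog role and grant it privileges.
CREATE ROLE REPORTING_READER;
GRANT CATALOG READ TO REPORTING_READER;            -- system privilege
GRANT SELECT ON SCHEMA SALES TO REPORTING_READER;  -- object privilege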

Step3: Validate role in HANA Cockpit

The created role is available in HANA Cockpit for assignment.

Figure 34: HANA Cockpit – Role Management

Catalog Role created successfully using SAP HANA Database Explorer (SQL Console)
