My Experience Developing Side-by-Side Applications in SAP BTP for SAP S/4HANA Cloud
The architecture I’m sharing here isn’t the only way or necessarily the best approach for every scenario. Rather, it’s the solution I’ve found effective and applied successfully across multiple projects.

Why did I choose a Side-by-Side Architecture?

I opted for this architecture mainly because it allows me to extend the standard functionalities of SAP S/4HANA Cloud without directly impacting the ERP core, thus making it easier to maintain a clean core and simplifying the management, maintenance, and evolution of developed applications.

How do I organize SAP BTP?

I usually set up separate subaccounts for each project environment:

  • Development: Where I perform initial testing and functionality development.
  • Quality (Test): Used for integrated testing before final deployment.
  • Production: A stable environment accessed by end users.

This approach has helped me keep things organized, avoid confusion, and significantly simplify the management of BTP services and instances.

Frontend using SAPUI5 and Fiori Elements

My frontend applications are typically developed using SAPUI5 or Fiori Elements, which accelerates development. I deploy these apps in the HTML5 Application Repository and publish them using SAP Build Work Zone, simplifying the management of connections to backend services.

Backend: Choosing between CAP and Java Spring Boot based on needs

For backend development, I divide responsibilities across two main application types, depending on technical requirements and project complexity:

  • CAP (Cloud Application Programming Model) Applications:
    I use CAP primarily for simpler OData services or functionalities with low concurrency. CAP has been practical for me due to its ease of development and seamless integration with standard SAP services.
  • Java Applications with Spring Boot:
    For scenarios that require complex processing, high concurrency, transactional logic, advanced reporting, or parallel processes, I prefer Java with Spring Boot. This stack consistently delivers excellent performance for demanding scenarios.

Combining these technologies within the same architecture allows me to select the most suitable tool for each specific need, rather than forcing one technology into unsuitable scenarios.

Shared SAP HANA Database

To simplify data management, I typically use a single SAP HANA database shared by both backend application types. A particular choice I make, perhaps less common, is not to use the default HDI infrastructure provided by CAP.

Instead, I manually create a dedicated HANA schema directly accessed by both CAP and Java applications.

This approach has advantages such as simpler database management and easier integration, but it also has some drawbacks, like losing certain automated management features provided by HDI. Still, based on my experience, the benefits have outweighed these disadvantages.
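To make this concrete, here is a minimal sketch of how such a shared schema can be wired up on Cloud Foundry using a user-provided service (the service name, schema name, and credentials are illustrative, not taken from the original projects). Both the CAP and the Java application then bind to the same service:

# Expose the manually created schema's credentials as a user-provided service:
cf create-user-provided-service shared-hana-schema -p '{
  "host": "<hana-cloud-host>", "port": "443",
  "user": "SHARED_SCHEMA_USER", "password": "<password>",
  "schema": "SHARED_APP_SCHEMA", "encrypt": "true"
}'

# Bind the same service to both backends so they read and write the same schema.
cf bind-service my-cap-srv shared-hana-schema
cf bind-service my-java-srv shared-hana-schema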

Direct Integration with SAP S/4HANA Cloud

Another critical aspect of my architecture is integration with S/4HANA Cloud. Here, I’ve applied two methods based on backend technology:

  • For Java/Spring Boot, I typically use the SAP S/4HANA Cloud SDK for Java, as it simplifies the connection and management of S/4HANA services significantly.
  • For CAP, I often import EDMX models exposed by S/4HANA Cloud directly, facilitating quick and straightforward integration with standard SAP services (a small sketch of this follows below).

Both approaches have proven practical and effective in various contexts.
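On the CAP side, the import step can be as small as the following sketch, assuming an EDMX file downloaded from the SAP Business Accelerator Hub (API_BUSINESS_PARTNER is a common example service, not necessarily the one used in these projects):

# Import the downloaded service metadata; cds generates a model under
# srv/external/ and registers the remote service in package.json.
cds import API_BUSINESS_PARTNER.edmx

# At runtime, point the imported service to a BTP destination, e.g. in package.json:
#   "cds": { "requires": { "API_BUSINESS_PARTNER": {
#     "kind": "odata-v2", "credentials": { "destination": "S4HC" } } } }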

Centralized Authentication with Single Sign-On (SSO)

In most implementations, end users log in using the same authentication they use for S/4HANA Cloud. I achieve this by setting up trust relationships between SAP BTP and the Identity Provider (IDP) used by the ERP. Consequently, once logged into BTP, users can seamlessly interact with S/4HANA Cloud through transparent Single Sign-On (SSO).

This significantly improves the user experience and simplifies access and permission management.

Everything I’ve shared here comes directly from my practical experience implementing real projects. While this isn’t the only method, it’s an architecture I’ve found functional, flexible, and effective. I’m always open to learning from other experiences and alternatives to continuously evolve my approach according to future needs.

Importance of Currency Conversion in SAP Datasphere
Accurate currency conversion ensures that financial statements reflect true value, enabling better decision-making and compliance with regulatory requirements. Without effective currency conversion, businesses may face issues such as financial discrepancies, inaccurate forecasting, and challenges in performance evaluation across different regions.

“[Owing to the] twenty-four-hour global nature of currency markets, exchange rates are constantly shifting from day to day and even from minute to minute, sometimes in small increments and sometimes quite dramatically” – Harvard Business Services.

In an interconnected world where businesses operate across borders, efficient currency conversion is essential. Currency conversion significantly impacts reporting and business analytics in the following ways:

Translation of Financial Statements:

Multinational corporations translate foreign incomes, expenses, assets, and liabilities into their reporting currency using relevant exchange rates.

Variances in Reported Financials:

Exchange rate fluctuations can result in significant variances in reported financials.

Currency conversion in SAP involves converting monetary values from one currency to another based on predefined exchange rates. This is important for multinational companies and businesses engaged in cross-border transactions.

Exchange rates in SAP define how one currency is converted to another. Let’s explore some significant business use cases where reporting with currency conversion is extensively required:

Case 1: Global Operations and Multinational Businesses

Companies operating in multiple countries need to manage different currencies. Currency conversion allows them to integrate financial data across various locations and ensure accurate financial reporting.

Case 2: Consolidated Financial Statements

Currency conversion in SAP enables the creation of consolidated financial reports in a single group currency.

Case 3: Budgeting and Forecasting

Companies often need to budget and forecast in a specific currency while dealing with costs and revenues in other currencies. Currency conversion allows for accurate planning and forecasting, providing a unified view of the organization’s financial health.

Currency conversion is used when posting financial information where the reporting currency differs from the transaction currency.

Currency Conversion Methods In SAP Datasphere

In an increasingly globalized business environment, companies often deal with transactions and data from multiple countries, involving various currencies. This complexity makes accurate currency conversion a critical aspect of financial reporting, budgeting, and analytics. SAP Datasphere, with its robust currency conversion capabilities, ensures businesses maintain financial accuracy and consistency across their operations.

Steps to Create Currency Conversion

Use Case

Step 01:

The client needs a report in Indian Rupees (INR), but the data in Datasphere is in USD. So, using currency conversion, we convert from USD to INR.

Step 02:

Check that the source connection is in an active state.

  • Confirm access to Data flows, Tables, and Views, because running the data flow loads data into the local table before it becomes available in the views.
  • To perform currency translation in SAP Datasphere, the following tables must be available in your space:
  1. TCURV – Exchange rate types
  2. TCURW – Exchange rate type text
  3. TCURX – Decimal places in currencies
  4. TCURN – Quotations
  5. TCURR – Exchange rates
  6. TCURF – Conversion factors
  7. TCURC – Currency codes
  8. TCURT – Currency text
  • DataFlows

  • Tables

  • Views

Step 03:

  • In this case we don’t have data in the target table, so we have to run the data flow to load the data into the local table.

Step 04:

  • Here, we are converting the currency from USD to INR.
  • We have filtered “From Currency” as “USD” and “To Currency” as “INR” to obtain the exchange rate type.
  • After that, we select exchange rate types M and P for the scenario (a sample verification query follows below).
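Before running the conversion, it can help to verify that matching rates actually exist in the replicated TCURR table. A minimal sketch, assuming the standard SAP column names (KURST = rate type, FCURR/TCURR = from/to currency, GDATU = date, UKURS = rate):

-- Check which USD -> INR rates are available for rate types M and P.
SELECT KURST,   -- exchange rate type
       FCURR,   -- from-currency
       TCURR,   -- to-currency
       GDATU,   -- validity date (stored in inverted form in SAP tables)
       UKURS    -- exchange rate
  FROM TCURR
 WHERE FCURR = 'USD'
   AND TCURR = 'INR'
   AND KURST IN ('M', 'P');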

Step 05:

  • For this scenario we have selected the source measure as “Gross Amount,” so we have to change the “Semantic Type” to “Amount with Currency” and select the “Unit Column” accordingly.

Step 06:

  • We have selected the billing document date as the transaction date because we are using the billing document fact model.

Step 07:

  • The conversion from USD to INR is now complete.

Best Practices for Currency Conversion

  • Address rounding and precision issues:

Rounding Rules: Apply consistent rounding rules to avoid discrepancies in financial reports.

Precision: Ensure that rounding practices maintain accuracy, especially with large datasets.

  • Maintain consistency and accuracy:

Accurate Data Entry: Ensure accurate entry of financial data and exchange rates to minimize errors during currency conversion.

Data Quality Checks: Regularly perform data quality checks to identify and rectify inaccuracies in exchange rates and financial data.

  • Regular updates and monitoring of exchange rates:

Periodic Reviews: Conduct regular reviews of your currency conversion processes and update procedures as necessary to adapt to changing financial environments.

Nested JSON to SAP HANA Tables with SAP Integration Suite
In this blog post, I will demonstrate how to send data to SAP HANA Cloud using the Integration Suite. Additionally, I will explain how to handle nested JSON data and distribute it across multiple tables utilizing parallel multicast and mapping functions.

Problem Statement:

We have exposed an API endpoint through which we push data in JSON format and, in response, receive the insert counts for the affected tables. The input data contains user details and role details in nested form. We insert the user details into the User table, while in the User-Role mapping table we create one entry per role associated with a user, linking the user’s details with their roles. Our requirement is to process the JSON data via CPI and populate these two tables.

Architecture:

Adding JDBC Data Source in Integration Suite

We need JDBC URL, User ID and Password to connect to the SAP HANA Cloud Database. We can get it from instance credentials.

Here we get the JDBC URL for our database which will be like “jdbc:sap://<instanceID>.hana.trial-us10.hanacloud.ondemand.com:443/?encrypt=true”

We also get the Runtime User ID and Password for the Schema in the same key.

Go to your Integration Suite tenant and click on JDBC Material under the Manage Security section.

Add the instance details we got from credentials and deploy.

For a third-party database such as Microsoft SQL Server, Oracle, or DB2, you need to upload the JDBC driver JAR file under the JDBC Driver tab. For HANA, this is not required.

Integration Flow Design:

I have created this integration flow where I have exposed one endpoint to receive the JSON data. First, I convert the JSON data to XML because data sent through the JDBC adapter must follow a standard XML format. Below is the standard XML format used to insert data into the database.

<root>
   <StatementName1>
       <dbTableName action="INSERT">
           <table>SCHEMA."TABLENAME"</table>
           <access>
               <col1>val1</col1>
               <col2>val2</col2>
               <col3>val3</col3>
           </access>
       </dbTableName>
   </StatementName1>
</root>

I have used parallel multicast to route the message simultaneously to two different branches, since we have to insert data into two different tables.

Let’s look at message mappings for both the routes.

For the User table we are using a simple one-to-one mapping and passing “INSERT” in the action attribute.

This is the mapping for the mapping table where we map users to their roles. As you can see, the userID node has a single occurrence per record, but we need it to occur once per role that the particular user has. So, we will be using the useOneAsMany node function.

We then gather the messages from both routes and pass them together through the JDBC adapter using batch processing.

Let’s look at the configuration of JDBC Adapter.

Here is the input payload content I am passing from the Postman.

{
  "root": {
    "UserDetails": [
		{
			"userID": 6666,
			"username": "harshal",
			"gender": "male",
			"email": "harshal@test.com",
			"roles": [
				{ "roleID": 1 },
				{ "roleID": 2 }
			]
		}
	]
  }
}

Here is the XML payload content that goes through JDBC Adapter.

<root>
   <StatementName>
       <dbTableName action="INSERT">
           <table>TEST2_HDI_DB_1.USER</table>
           <access>
               <user_ID>6666</user_ID>
               <username>harshal</username>
               <gender>male</gender>
               <email_ID>harshal@test.com</email_ID>
           </access>
       </dbTableName>
   </StatementName>
   <StatementName>
       <dbTableName action="INSERT">
           <table>TEST2_HDI_DB_1.MAPPING</table>
           <access>
               <user_ID>6666</user_ID>
               <role_ID>1</role_ID>
           </access>
           <access>
               <user_ID>6666</user_ID>
               <role_ID>2</role_ID>
           </access>
       </dbTableName>
   </StatementName>
</root>
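For reference, the target tables that this payload assumes could look like the following sketch (the schema name TEST2_HDI_DB_1 comes from the payload above; the column types are illustrative assumptions, not taken from the original system):

-- Illustrative DDL matching the columns used in the JDBC payload.
CREATE COLUMN TABLE "TEST2_HDI_DB_1"."USER" (
    "user_ID"  INTEGER PRIMARY KEY,
    "username" NVARCHAR(100),
    "gender"   NVARCHAR(10),
    "email_ID" NVARCHAR(255)
);

-- One row per (user, role) combination produced by the useOneAsMany mapping.
CREATE COLUMN TABLE "TEST2_HDI_DB_1"."MAPPING" (
    "user_ID" INTEGER,
    "role_ID" INTEGER,
    PRIMARY KEY ("user_ID", "role_ID")
);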

Response in Postman:

Let’s check our entries in HANA Database.

Connect & Visualize: SAP Datasphere with Qlik Sense
In this blog, we’ll explore how to consume data from SAP Datasphere through ODBC (Open Database Connectivity) and visualize it in Qlik Sense, one of the leading data visualization tools.

Why SAP Datasphere over others?

SAP Datasphere allows seamless connectivity with a wide range of data sources, including on-premises and cloud-based systems. It is designed to handle large volumes of data efficiently, making it suitable for organizations of all sizes, from small businesses to large enterprises; its scalable architecture ensures optimal performance even as data volumes grow over time. It offers graphical low-code/no-code tools to support self-service modeling for business users, along with powerful built-in SQL and data flow editors for sophisticated modeling and data transformation needs.

It also provides graphical impact and lineage analysis to visualize data movements, transformations, and other dependencies.

Steps to connect SAP Datasphere with Qlik Sense

In SAP Datasphere, go to the Space Management tab and click Edit in the appropriate space.

Make sure to turn on Expose for Consumption, so that the data can be retrieved from Datasphere and consumed in other tools.

Now click Create under Database Users and give it an appropriate name.

Make sure to deploy the space to access the user credentials

Make sure to copy the username, hostname and password in the database user details.

Go to System -> Configuration -> IP Allowlist -> Trusted IPs

Click on Add IP

Get the external IPv4 address of your network (the public address assigned by your Internet Service Provider) and make sure to add it to the IP allowlist.

Below is the view I am going to consume in the Qlik Sense to visualize the data

ODBC Part

First, download and install the SAP HANA ODBC driver (HDBODBC) on your system. Here’s the URL to download it: SAP Development Tools (ondemand.com)

Then open the ODBC

Click on Add

Click on HDBODBC

  • Give any meaningful name to the data source name and description.
  • Database type: SAP HANA Cloud or SAP HANA single tenant.
  • Paste the host URL copied earlier from the Datasphere space.

  • Enter the database username and password.
  • Click Test Connection.

Now the connection is successful.
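As an optional cross-check outside the ODBC dialog, the same credentials can be tested from the command line with the hdbsql client, assuming it is installed with the SAP HANA client tools (host, user, and password below are placeholders, and additional trust settings may be needed in some setups):

# Connect over TLS (-e) to the Datasphere-exposed HANA endpoint and run a probe query.
hdbsql -e -n "<host>:443" -u "<database-username>" -p "<password>" \
  "SELECT CURRENT_USER, CURRENT_SCHEMA FROM DUMMY"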

Qlik Sense Part

In Qlik Sense, click on Create new app.

Here Click on Add data

There will be several data connections; among them, choose the ODBC connection.

Now click on the appropriate connection that was already created in the ODBC part and give the username and password.

Now Click on Owner and click on the space name in which the data (view) is available.

Now click on the view that needs to be visualized in Qlik Sense, the data preview will be available.

Now the data is loaded successfully. Hence, the connection between SAP Datasphere and Qlik Sense via ODBC is working.

Here’s the sample dashboard created using the fact view consumed from SAP Datasphere

Consuming data from SAP Datasphere into Power BI via ODBC connector
I wanted to share my proof of concept for consuming data from SAP Datasphere into Power BI via an ODBC (Open Database Connectivity) connection.

Required:

  • Datasphere tenant.
  • Power BI.
  • A database user for the particular space, created in Space Management.
  • Host name, port number, password, database username.
  • The external IPv4 address added to the IP allowlist.
  • HANA ODBC client.
  • Credentials added in Power BI.

Datasphere part:

Login into Datasphere -> Space Management -> Choose the space and select Edit.

Click Create and make sure that Expose for Consumption is enabled.

Copy Database Username, Host name, Port, Password.

Go to System -> Configuration -> IP Allowlist -> Trusted IPs

The external IPv4 address should be added here, not the internal IPv4 address.

To get the external IPv4 address:

  • Open a command prompt and run curl ifcfg.me.
  • Add the returned IPv4 address to the IP allowlist.

Add and save the external IPv4 address here.

ODBC PART:

Install the SAP HANA ODBC driver (HDBODBC) from SAP Development Tools (ondemand.com) on the system.

Open ODBC in the system

Click Add

Select HDBODBC

  • Give any meaningful name to the data source name and description.
  • Database type: SAP HANA Cloud or SAP HANA single tenant (both work fine).
  • Paste the host URL copied earlier from the Datasphere space.
  • Enter the database username and password.
  • Click Test connection. (A DSN-less alternative using a plain connection string is sketched below.)
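If you prefer to skip the DSN dialog, many ODBC consumers also accept a DSN-less connection string for the HANA driver. A minimal sketch, assuming the driver is registered as HDBODBC and using placeholder credentials:

DRIVER=HDBODBC;SERVERNODE=<host>:443;ENCRYPT=TRUE;UID=<database-username>;PWD=<password>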

POWER BI PART:

Power BI-> Get data-> ODBC.

Select the created ODBC Connection

Paste the same username and password copied in the Datasphere part, then click “Connect”.

Power BI is now successfully consuming data from SAP Datasphere.

Dashboard built in Power BI

From this POC:

  • Data from Datasphere can be consumed via an ODBC connection; this worked flawlessly in the POC.
  • Since an ODBC connection is possible, we can consume the data in any third-party reporting tool we want.
SAP Datasphere – Enable SAP HANA Development Infrastructure (HDI)
Introduction

Use SAP SQL Data Warehousing to build calculation views and other SAP HANA Cloud HDI objects directly in the SAP Datasphere run-time database, and then exchange data between HDI containers and SAP Datasphere spaces. SAP SQL Data Warehousing can be used to bring existing HDI objects into the SAP Datasphere environment, and to allow users familiar with the HDI tools to leverage advanced SAP HANA Cloud features.

The following steps are required:

  1. Gather the relevant information
  2. Create an SAP support ticket to map/share the DWC HANA Cloud database with the SAP Datasphere tenant space
  3. Create an HDI service instance on SAP Cloud Platform (SCP) under the SAP Datasphere space
  4. Create roles in the HDI container using SAP Business Application Studio (BAS)
  5. Add/deploy the HDI container instance into SAP Datasphere

1. Gather the relevant information

To enable an HDI container on DWC in SAP Datasphere, the user needs the following details:

  1. SAP Datasphere Tenant ID
  2. SAP Business Technology Platform Org GUID
  3. SAP Business Technology Platform Space GUID

Follow the procedure below to collect these details:

  • To collect the SAP Datasphere tenant ID, the user must have one of these roles: DW Administrator or System Owner.
    • Go to the expanded navigation bar ==> System ==> About ==> Tenant

  • To collect the SAP BTP org GUID and space GUID, the user must have one of these roles: SAP HANA Cloud Administrator or Space Developer.
    • Go to SAP BTP Cockpit ==> Global account ==> Subaccount ==> SAP Datasphere space

2. Create an SAP support ticket to map/share the DWC HANA Cloud database with the SAP Datasphere tenant space

Create a request to the product team under the DS_SM category with the following details:

  • SAP Datasphere Tenant ID :
  • SAP Business Technology Platform Org GUID :
  • SAP Business Technology Platform Space GUID:

In the Schema Access area, go to HDI Containers and click Enable Access. A pop-up window appears indicating that a support ticket must be opened so the product team can map the HDI containers to the SAP Datasphere space. Add the above-mentioned details to that ticket.

A notification will be shared once the ticket has been processed and the mapping is completed.

3. Create an HDI service instance on SAP Cloud Platform (SCP) under the SAP Datasphere space

To create an HDI service instance on SCP, the user must have one of these roles: SAP HANA Cloud Administrator or Space Developer.

  • Go to SAP BTP Cockpit ==> Global account ==> Subaccount ==> SAP Datasphere space
  • Under the Instances tab, click the Create button and select the service instance type.
  • Select the service type SAP HANA Schemas & HDI Containers with the plan hdi-shared, and enter an instance name (e.g. DSP_HDI_PLAY) for the HDI container. Click Next and provide the schema name in JSON format (e.g. {"schema":"DSP_HDI_PLAY"}). This is how you can define the HDI container’s own schema name; the field is not mandatory, and if it is left blank an auto-generated schema name is used. Click Next and then Create. (An equivalent CLI sketch is shown below.)
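For those who prefer the Cloud Foundry CLI over the cockpit, the same instance can be created with a sketch like the following, assuming you are already targeting the mapped org and space (instance and schema names are the example ones from above):

# Create an hdi-shared instance with an explicit schema name.
cf create-service hana hdi-shared DSP_HDI_PLAY -c '{"schema": "DSP_HDI_PLAY"}'

# Watch the creation status until it reports "create succeeded".
cf service DSP_HDI_PLAY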

The HDI container creation now starts, and the status is displayed as below.

Once the status shows as created, the HDI container has been created successfully.

4. Create roles in the HDI container using SAP Business Application Studio

Create the required roles to allow SAP Datasphere to read from and, optionally, write to the container.

You must define the roles DWC_CONSUMPTION_ROLE and DWC_CONSUMPTION_ROLE# (with grant option) in the container in order to add it to an SAP Datasphere space and to exchange data between the container and the space.

  • To create these roles in the HDI container, the user must have the Business_Application_Studio_Developer role on the BTP subaccount.
  • Log in to SAP Business Application Studio (BAS) and create a dev space of type SAP HANA Native Application.

  • Open the dev workspace and create an SAP HANA database project.
  • Click New Project from Template, select Create SAP HANA Database Project, and click Start.

  • Enter the project name (e.g. TEST); the SAP HANA database version should be SAP HANA Cloud.
  • Log in to the CF account using the credentials authentication method (enter your SAP email and global password) and then click Sign in.

  • Select the CF target details: subaccount/organization and CF space

  • Then bind to the HDI container service as shown below

  • Click the Finish button. With this step, you have successfully created the SAP HANA database project.

You can also verify whether you are connected to the correct HDI container by checking the service key on the HDI container.

Once the project is created, it also generates a service key “SharedDevKey” for the Cloud Foundry service instance “<HDI Container_name>”.

  • Create the DWC_CONSUMPTION_ROLE and DWC_CONSUMPTION_ROLE# roles under this project in BAS.
    • Under the db ==> src folder, create the two role files shown below.

Role Name -1

DWC_CONSUMPTION_ROLE_WITH_GRANT.hdbrole

{
  "role": {
    "name": "DWC_CONSUMPTION_ROLE#",
    "schema_privileges": [
      { "privileges_with_grant_option": [ "SELECT", "SELECT METADATA", "EXECUTE" ] }
    ]
  }
}

Role Name -2

DWC_CONSUMPTION_ROLE.hdbrole

{
  "role": {
    "name": "DWC_CONSUMPTION_ROLE",
    "schema_privileges": [
      { "privileges": [ "SELECT", "SELECT METADATA", "EXECUTE" ] }
    ]
  }
}

Add the HDI container info into the mta.yaml file:
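A minimal sketch of the relevant mta.yaml section, assuming a db module bound to the existing container instance created earlier (names are the example ones; the exact file generated by BAS may differ):

modules:
  - name: db
    type: hdb
    path: db
    requires:
      - name: DSP_HDI_PLAY

resources:
  - name: DSP_HDI_PLAY
    type: org.cloudfoundry.existing-service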

Then deploy the project to the database as shown below.

Under the SAP HANA PROJECTS folder, you can see all the files; click the bind icon on the project folder as shown below.

Make succeeded (0 warnings): 2 files deployed (effective 2), 0 files undeployed (effective 0), 0 dependent files redeployed. Making… ok (0s 303ms)

With this, the roles have been successfully created in the HDI container.

5. Add/Deploy HDI Container instance into SAP Datasphere

To deploy the HDI container into an SAP Datasphere space, the user must have one of these roles: DW Administrator, System Owner, or Space Owner.

  • Go to the expanded navigation bar ==> Space Management ==> <select the space> ==> Database Access ==> HDI Containers

Select the HDI container by clicking +.

Deploy the space by clicking the Deploy button.

If the deployment is successful, the HDI container has been added to the SAP Datasphere space.

To verify that the HDI container was deployed successfully, check that it appears under the Data Builder application.

Data Transfer to GBQ From S/4HANA Private Cloud
In SAP implementations, one of the most critical aspects is reporting. As SAP data is integrated with many non-SAP systems to create reports in tools such as Looker, this blog explores how SAP can be integrated with Google BigQuery (GBQ) using the BigQuery Connector.

This blog explores the option of data transfer from SAP to GBQ using embedded SLT in S/4HANA.

1. System Considerations:

  • S/4HANA Rise Private Edition
  • S/4HANA Embedded SLT
  • Google Big Query

2. Scenario

As part of this blog, we will set up data transmission from an SAP S/4HANA system hosted in GCP to GBQ using the BigQuery Connector and embedded SLT.

3. Infrastructure Setup between SAP and GBQ

https://cloud.google.com/solutions/sap/docs/bq-connector/latest/all-guides

The infrastructure setup can be divided into three parts, depending on where the SAP system is hosted. This use case considers S/4HANA Private Cloud running on a Compute Engine virtual machine (VM) on Google Cloud.

3.1 SAP Application server Infrastructure setup

  • Install the gcloud CLI on the S/4HANA OS: If not already available, install the gcloud CLI on the OS of the SAP application server and verify the installation using the command gcloud -v

  • Cloud API access scopes for the SAP application VM: These can be granted in two ways: either full access to all APIs, or access to only the BigQuery and Cloud Platform APIs. The access should be enabled on all application servers of the SAP system (a CLI sketch for setting the scopes follows this list).

  • Enable the host VM to obtain access tokens: The following roles should be granted to the service account associated with the SAP application server VM: Service Account Token Creator, BigQuery Data Editor, and BigQuery Job User.

  • Enable the firewall to allow access to Google Cloud APIs: Allow access to the APIs (https://bigquery.googleapis.com, https://iamcredentials.googleapis.com) that will be used for the RFC connections between SAP and GBQ.
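For reference, restricting the VM’s access scopes to BigQuery and Cloud Platform could look like the following gcloud sketch (the instance name and zone are placeholders; the VM must be stopped before its scopes can be changed):

# Stop the VM, narrow its API access scopes, then start it again.
gcloud compute instances stop s4hana-app-vm --zone=us-central1-a
gcloud compute instances set-service-account s4hana-app-vm \
  --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/cloud-platform
gcloud compute instances start s4hana-app-vm --zone=us-central1-a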

3.2 GCP Infrastructure Setup

As this use case considers an SAP RISE system hosted in GCP, there are two steps to perform in the customer’s GCP environment to allow the data transfer.

  • Create a BigQuery dataset in GBQ: As this process varies across organizations, create a BigQuery dataset in your environment and take note of the project name and dataset name. This information will be used later for the SLT configuration.
  • Allow the SAP VM service account access in the customer GCP project: In this use case, the GBQ dataset is created in a separate project from the SAP VM. The GCP admin of the project where the GBQ dataset was created should grant access to the service account of the SAP VM so it can write data into the dataset (see the IAM sketch below).
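Granting that access typically boils down to IAM bindings like the following sketch (the project ID and service account address are placeholders):

# Allow the SAP VM's service account to write data and run load jobs in the
# project that hosts the BigQuery dataset.
gcloud projects add-iam-policy-binding target-gbq-project \
  --member="serviceAccount:sap-vm-sa@sap-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataEditor"
gcloud projects add-iam-policy-binding target-gbq-project \
  --member="serviceAccount:sap-vm-sa@sap-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"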

4. Configuration Setup

As the official guide covers detailed instructions, this blog demonstrates only the highlights of the configuration in SAP. Feel free to ask any questions in the comments section of the blog for additional information.

  • Set up SSL certificates in STRUST: Download the relevant root certificates (GTS Root R1, GTS CA 1C3) for GCP and import them into the SSL Client PSE of SAP.
  • Specify access settings in /GOOG/CLIENT_KEY: The following values are used in this setup.

Service Account Name: the service account associated with the SAP application server VM.

Scope: https://www.googleapis.com/auth/bigquery can be used if the permissions were restricted in the earlier step.

Project ID: the name of the target GCP project that contains the GBQ dataset.

  • Create the RFCs in SM59: Create two Type G RFC destinations for the GCP APIs and one Type 3 RFC for SAP (own system) for data transfer

  • Create SLT Configuration using LTRC

  • Configure /GOOG/SLT_SETTINGS : configure a mass transfer for BigQuery and specify the table and field mappings.

Project Identifier: This is the project name from Target GCP

BQ Dataset: Name of the GBQ Dataset

5. Test Replication:

After the above setup, test the replication using LTRC. All other relevant SLT configurations (e.g. in LTRS) can be used in this use case as well.

6. Considerations

  • This setup is intended for replication where the data transfer volume is not large, as the embedded SLT scenario has a performance impact on the SAP system.
  • Check with the SAP ECS team regarding the security approval required for changes to the SAP VM access.
  • Advanced replication performance for certain larger tables such as ACDOCA should be tuned using the Performance Optimization Guide for SLT.
  • As this use case uses embedded SLT, certain performance optimization options will increase the database size through temporary logging tables.
  • Periodically run program CNV_NOTE_ANALYZER_SLT to check for the latest available SAP Notes, which can be applied to improve SLT and fix any associated bugs.
Git Sequencing Strategy and Best Practices: SAP HANA XS Advanced and/or SAP HANA Cloud
As we see more and more customers adopting cloud-based development methodologies in SAP, especially with customers adopting XS Advanced and SAP HANA Cloud as their data warehousing or data mart solution, there was a need to lay down some best practices for source code management and version control. This blog post covers some best practices to follow when working in a parallel development environment, especially when developing multiple features in parallel.

This blog provides best practices for sequencing using git-based tools and addresses the following pain points:

  • How to build/deploy different features in different systems in the landscape without deployment conflicts?
  • What to do when a feature is not ready and a Priority 1 bug must be fixed?
  • How to pick or drop a feature/commit in an erroneous situation?

Why is a branching strategy important?

Branches are primarily used as a means for teams to develop features, giving them a separate workspace for their code. These branches are usually merged back into a master/main branch upon completion of work. In this way, features (and any bugs and bug fixes) are kept apart from each other, allowing you to fix mistakes more easily.

Here is a sample branching model:

Figure1: Sample branching strategy

This branching strategy consists of the following branches:

Master: main branch

Develop: where the developers merge their changes.

Feature(s): used to develop new features; these branch off the develop branch

Hotfix: also helps prepare for a release but, unlike a release branch, a hotfix branch arises from a bug that has been discovered and must be resolved; it enables developers to keep working on their own changes on the develop branch while the bug is being fixed.

The main and develop branches are considered the main branches, with an infinite lifetime, while the rest are supporting branches that are meant to aid parallel development among developers, usually short-lived.

How would the flow work:

Let’s start from a state where all the systems in the landscape run the same code. To build new independent features, multiple feature branches are created by multiple developers:

Figure 2: Start state

Now, let’s say the code in each feature branch was tested by the individual developers, and assume that unit testing succeeded in the built containers.

Note: A good practice is to commit the code as and when ready, and to fetch the changes made by other developers often (for example, daily), so that each developer always has the latest code and better coverage in unit testing.

If unit testing fails on any of the feature branches, that feature branch is not merged further.

Figure 3: All feature branches tested OK by developers

Once unit testing is completed successfully, the code from each of these feature branches is merged into a new branch called D-CONS (the consolidation branch in the development environment).

Figure 4: Merge the feature branches to D-CONS branch created from V1 branch

In the D-CONS branch, let’s assume that after completing functional/integration testing, it was identified that the code from Feature 2 introduced an issue.

Usually, in such cases, we have two options:

– Fix the issue, if the fix is known

– Revert the change that caused the issue

Merging takes care of combining the changes, so there is no need for selective deployment. (A git sketch of this flow follows below.)
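In plain git terms, the flow described above could look like the following sketch (branch names follow the figures; the commit hash is a placeholder):

# Each developer works on an isolated feature branch cut from develop.
git checkout -b feature/F2 develop

# Consolidate the unit-tested features into D-CONS for integration testing.
git checkout D-CONS
git merge --no-ff feature/F1
git merge --no-ff feature/F2
git merge --no-ff feature/F3

# If Feature 2 turns out to be broken, revert its merge commit
# (-m 1 keeps the first parent, i.e. the D-CONS line, as the mainline).
git revert -m 1 <merge-commit-of-F2>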

Figure 5: Functional/Integration testing failed for Feature branch 2

Since the code in the Feature 2 branch must be reworked, it cannot be moved further. To handle another corner case here, we carry out an additional test cycle after merging F1 and F3 into D-CONS-2 to see whether F1 and F3 work fine without F2.

Figure 6: Merging Feature branch 1 and 3 to see if they work fine together

It was identified that F1 and F3 work fine together, but F2 needs more time to get a fix. So, after testing, F1 and F3 were merged into the main code line of DEV and a new version was created as V2. This is needed so that a developer who now fetches the code gets the latest state from V2. It is important to note here that the D-CONS schema is different from the original DEV container.

A differentiating container can be created with the help of an .mtaext file, as sketched below.
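A minimal .mtaext sketch that deploys the same MTA into a separate schema/container for the consolidation line (the IDs and names are illustrative, assuming the main MTA defines an hdi-container resource named hdi_container):

_schema-version: "3.1"
ID: myproject-dcons
extends: myproject

resources:
  - name: hdi_container
    parameters:
      config:
        schema: D_CONS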

Once the code is merged into V2, we could do an XS deploy in DEV, depending on whether such a deployed version of the master development branch is needed in the development landscape. In some cases it may be needed; if so, following the merge of the successfully tested feature branches into the main development branch, the new version of this branch should be deployed to the (existing) HDI container. The MTA version number can also be increased.

With that, we’d proceed to the next environment: Quality

Figure 7: Merge Feature1 and 3 with QA branch

F1 and F3 are merged into the QA-CONS branch and tested for non-functional requirements like memory, CPU, and execution time. The QA-CONS branch must be deployed for testing; logically, it would be deployed into a corresponding “QA-CONS” HDI container to separate it from the main QA HDI container. That means the applications used for testing must take a potentially different schema name into account (if a dedicated QA-CONS HDI container is used). Pointing the reporting tools in the QA and UAT landscapes to this new container can be a big task; in that case, keep only one version of the report but make it point to the _CONS container.

Let’s proceed to the testing cycle in QA, where it was found that F1 failed but F3 succeeded. Note that testing takes place using the reporting tools and is carried out on the respective deployed container. Since only F3 passed the non-functional tests in QA, the code from F3 is merged into V2 in QA and into the branch in PROD.

The branches QCONS and QCONS-2 can be deleted once the code is merged into V2, or into QA and PROD.

Figure 8: Testing result in QA

Branching mechanism for a hotfix or Priority 1 bug:

If a fix is urgently needed in production, create a new branch from the code deployed in the production landscape and fix the bug there. You may test it in the same system and deploy the code to the production landscape. We could either test the fix in a hotfix container in the PROD environment, or take the same code, merge it into Q-CONS-hotfix, and deploy it in the QA environment for testing.

Figure 9: Spin a new branch in PROD from the current working branch in PROD
Figure 10: Spin a new branch in QA from the current working branch in PROD
Figure 11: Once, the code is tested, merge it with the PROD codeline.
Figure 12: Make sure to double maintain it in the DEV environment as well
How to Connect SAP HANA Calculation View to SAP Analytics Cloud (SAC)
1. Introduction

1.1. Background

Jeff possesses extensive expertise in SAP BW, SAP BW on HANA, SAP BW4HANA, Native Hana, Business Objects, Data Services, SAP DataSphere, and SAP Analytics Cloud (SAC). He is currently based in Brisbane, Australia.

1.2. Purpose of this Post

In my role as an SAP Data Analytics Consultant, my experience with Cloud and the SAP Cloud Application Programming Model (CAP) solution is limited. However, I welcome the opportunity to build a Proof of Concept (POC) focused on creating SAP HANA Calculation views in HANA Cloud and generating insightful stories through SAP Analytics Cloud (SAC).

1.3. Calculation View vs. CDS View

We are constructing a custom solution directly within SAP HANA Cloud, with no direct integration with S/4HANA Cloud. Therefore, the most optimal approach is to create Live SAP Analytics Cloud stories through HANA Calculation Views. There will be no involvement of SAP Datasphere since there is no hyperscaler scenario.

1.4. High Level Architecture

Figure: High-level architecture

2. Step by step in SAP HANA Cloud

2.1. Get access to the SAP HANA Cloud and relevant tools.

Please follow the link to request a free trial access to SAP HANA Cloud and set up SAP Business Application Studio.

2.2. Create Database Artifacts in SAP HANA Cloud.

Create tables in SAP HANA Cloud that will be joined in the SAP HANA calculation view later (an illustrative table artifact is sketched below).
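As an illustration of such a database artifact, a design-time .hdbtable file could look like the following sketch (the table and column names are hypothetical, not taken from the original POC):

-- db/src/SALES_ORDER.hdbtable (illustrative)
COLUMN TABLE "SALES_ORDER" (
    "ORDER_ID"     NVARCHAR(10) NOT NULL,
    "REGION"       NVARCHAR(20),
    "GROSS_AMOUNT" DECIMAL(15, 2),
    "CURRENCY"     NVARCHAR(5),
    PRIMARY KEY ("ORDER_ID")
)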

2.3. Build the Calculation View and deploy it.

Press “Ctrl+Shift+P” (or go to “View” in the toolbar and select “Find Command”) > Create “SAP HANA Database Artifact”

Fill in all the details as below and create a unique artifact name for your calculation view (e.g. CV_TEST_QUERY).

The Calculation View that has been successfully created will now be visible in the selected folder.

Use the “Rocket” icon to deploy a single file or the entire analytics folder; you should get a successfully-deployed message in the console at the bottom.

To verify that the calculation view is deployed properly and to validate the data, you can open the SAP HANA database explorer and run the calculation view to visualize the data.

The full detailed steps to create the HANA calculation view can be followed via this link.

2.4. Get the connection details from SAP BTP Cockpit

Log into the SAP BTP Cockpit. Select the relevant Subaccount.

Select the relevant space.

Select Services > type the instance name > click on it.

Select the right service key.

Copy the host, user, and password for the connection details, which are needed in SAC.

3. Step by step in SAC

3.1. Access to SAC.

You may request an SAC free trial account via this link.

3.2. Create HANA Cloud Connection

Note: You might not have the authorization to create a HANA Cloud connection via the SAC free trial account.

Select Connection > “+”

Enter the host, username and password from step 2.4

3.3. Create Live Data Model.

I need to add the SAP HANA Calculation view into the Live Data Model.

Select Modeler > Live Data Model

Select the system type “SAP HANA” and the relevant connection that was just created, then search for the calculation view.

Have a quick look at the measures and rename the descriptions.

Select All Dimensions > update the descriptions and the groups as well. This is helpful when you drag and drop dimensions in SAC stories. Remember to save the model.

3.4. Create Stories

Select Stories > Responsive.

Below is the sample story that was created. The performance is great even though the server is located in the US and I’m based in Australia.

SAP HANA DB Authorization concept
SAP HANA

SAP HANA is an in-memory, column-oriented, relational database management system developed and marketed by SAP SE.

SAP HANA Security

SAP HANA security is about protecting important data from unauthorized access and ensuring that the standards and compliance requirements are met.

User Types in SAP HANA

Depending on the security policy, there are two types of users in SAP HANA, as described below.

Technical User (DBA User) –

A technical user works directly with the SAP HANA database with the necessary privileges. The SAP HANA database system provides the following standard users by default:

  • SYSTEM
  • SYS
  • _SYS_REPO

Database or Real User:

A database user is a real person who works on SAP HANA. There are two types of database users, as below (an SQL sketch for both follows):

Standard User: This user can create objects in their own schema and read data in system views. A standard user is created with the “CREATE USER” statement. The PUBLIC role is assigned for reading system views.

Restricted User: A restricted user connects to the database through HTTP only. ODBC/JDBC access for client connections must be enabled with an SQL statement.
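A minimal SQL sketch of both user types, using hypothetical names and passwords (the final ALTER USER statement enables ODBC/JDBC access for the restricted user, as mentioned above):

-- Standard user: can create objects in its own schema and read system views.
CREATE USER demo_user PASSWORD "Welcome@1234" NO FORCE_FIRST_PASSWORD_CHANGE;

-- Restricted user: initially HTTP-only, with no schema of its own.
CREATE RESTRICTED USER api_user PASSWORD "Welcome@1234";

-- Enable ODBC/JDBC client connections for the restricted user.
ALTER USER api_user ENABLE CLIENT CONNECT;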

User creation in SAP HANA –

Only a database user with the USER ADMIN privilege can create users in SAP HANA, and the ROLE ADMIN privilege is required to create roles.

Step 1) To create a new user in SAP HANA Studio, go to the security tab as shown below and follow these steps:

Go to the Security node.

Select Users (right-click) -> New User.

User Creation

Step 2) A user creation screen appears.

Enter the user name.

Enter a password for the user.

These are the authentication mechanisms; by default, user name/password is used for authentication.

By clicking on the Deploy button, the user is created.

SAP HANA privileges:

A privilege is the permission to execute certain actions. In total, six types of privileges are available in SAP HANA:

1) System privileges

2) Object privileges

3) Analytic privileges

4) Package privileges

5) Application privileges

6) Privileges on user

1- System Privileges

System privileges control normal system activity. They are mainly used for:

  • Managing license
  • Managing version
  • Creating and Deleting Schema in SAP HANA Database
  • Managing Audit
  • Importing and Exporting content
  • Managing user and role in SAP HANA Database
  • Monitoring and tracing of SAP HANA database
  • Performing data backups
  • Maintaining Delivery Units
System Privilege

2- Object Privileges

Object privileges are SQL privileges that are used to grant authorization to read and modify database objects. They can be granted on catalog objects (tables, views, etc.) or non-catalog objects (development objects).

Object Privilege

3- Analytic Privileges

Analytic privileges are used to allow read access to data of SAP HANA information models (attribute views, analytic views, calculation views).

This privilege is evaluated during query processing.

Analytic privileges grant different users access to different parts of the data in the same information view, based on the user’s role.

Analytic privileges are used in the SAP HANA database to provide row-level data control, so that individual users see only their slice of the data in the same view.

Analytical Privilege

4- Package Privileges

Package privileges are used to provide authorization for actions on individual packages in the SAP HANA repository.

Package Privilege

5- Application Privileges

Application privileges are required in SAP HANA Extended Application Services (SAP HANA XS) for application access.

Application Privilege

6- Privileges on User

This is an SQL privilege that a user can grant on their own user. ATTACH DEBUGGER is the only privilege that can be granted to a user in this way.

Privileges on User

Define and Create Role

A role is a collection of privileges that can be granted to other users or roles. A role includes privileges for database objects and applications, depending on the nature of the job. We can use a standard role as a template for creating a custom role. A role can contain the following privileges:

System Privileges for administrative and development task

Object Privileges for database objects

Analytic Privileges for SAP HANA Information View

Package Privileges on repository packages

Application Privileges for SAP HANA XS applications.

Privileges on the user (For Debugging of procedure).

Role Creation

Step 1) In this step,

Go to Security node in SAP HANA System.

Select Role Node (Right Click) and select New Role.

Role Creation

Step 2) A role creation screen is displayed.

Give the role name under the New Role block.

Select the Granted Roles tab and click the “+” icon to add a standard role or an existing role.

Select the desired role.

Step 3) In this step,

The selected role is added in the Granted Roles tab.

Privileges can be assigned to the user directly by selecting System Privileges, Object Privileges, Analytic Privileges, Package Privileges, etc.

Click on the deploy icon to create the role.

Role assignment

Tick the option “Grantable to other users and roles” if you want to allow this role to be assigned to other users and roles.

Grant Role to User

Step 1) In this step, we will assign a role to a user.

Go to the User sub-node under the Security node and double-click it. The user window will show.

Click on the Granted Roles “+” icon.

A pop-up will appear; search for the role name to be assigned to the user.

Grant Role

Step 2) In this step, the role is added.

Step 3) In this step,

Click on Deploy Button.

A confirmation message is displayed.

Resetting User Password

If a user password needs to be reset, go to the User sub-node under the Security node and double-click it. The user window will show.

Step 1) In this step,

Enter the new password.

Confirm the password.

Password reset

Step 2) In this step,

Click on Deploy Button.

A confirmation message is displayed.

Re-Activate/De-activate User

Go to the User sub-node under the Security node and double-click it. The user window will show.

There is a De-Activate User icon; click on it.

User De-Activate

A confirmation pop-up will appear. Click on the ‘Yes’ button.

A message “User deactivated” will be displayed, and the De-Activate icon changes its name to “Activate User”; the user can now be re-activated from the same icon. (SQL equivalents for these admin actions are sketched below.)
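For completeness, the same administrative actions shown in the Studio dialogs can also be performed in SQL. A minimal sketch with hypothetical names:

-- Grant a role to a user (add WITH ADMIN OPTION to make it grantable onward).
GRANT "MY_CUSTOM_ROLE" TO demo_user;

-- Reset a user's password.
ALTER USER demo_user PASSWORD "NewWelcome@1234";

-- Deactivate and re-activate a user.
ALTER USER demo_user DEACTIVATE USER NOW;
ALTER USER demo_user ACTIVATE USER NOW;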

SAP HANA License Management

A license key is required to use the SAP HANA database.

The SAP HANA database supports two types of license keys:

Permanent License Key: Permanent license keys are valid until their expiration date.

Temporary License Key: This key is valid for 90 days and is automatically installed with a new SAP HANA database installation.

SAP HANA License

Authorization of License Management

SAP HANA Auditing

The SAP HANA auditing features allow you to monitor and record actions performed in the SAP HANA system.

SAP HANA Auditing

Authorization for SAP HANA Auditing
