Cost optimized SAP HANA DR options on Google Cloud

Abstract

Business continuity is of utmost importance for any organization. A well-defined High Availability (HA) and Disaster Recovery (DR) strategy ensures that business-critical SAP applications remain accessible during any planned or unplanned outage. Because the SAP HANA database is the central component of an SAP application, it is configured with an appropriate HA/DR setup that makes business data available at a secondary node or site to ensure business continuity.

HA for the HANA database primarily provides fault tolerance against infrastructure failures: the database fails over to a secondary hot-standby node deployed in cluster mode within a single region. RPO and RTO are close to zero in this case, as most steps are seamless and automated by the cluster management. Synchronous HANA replication across zones of the same region keeps the secondary HANA node in sync with the primary node at all times.

In case of a complete primary-site loss, where the primary node and its HA hot standby are both unavailable, a DR solution in a separate geographical location, termed the secondary or DR region in public cloud, acts as a safety net to ensure business continuity. Asynchronous HANA replication continuously pushes data from the primary node to the standby node in the secondary region. A manual failover is required to promote the DR node to primary.
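For illustration, registering the secondary-region node for asynchronous replication is typically done with hdbnsutil on the DR host; the host name, instance number, and site name below are placeholders, not values taken from this blog:

hdbnsutil -sr_register --remoteHost=<primary-host> --remoteInstance=<instance-nr> --replicationMode=async --operationMode=logreplay --name=<dr-site-name>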

Let’s discuss some SAP HANA DR options along with their respective pros and cons.

Performance optimized DR setup (cost challenge)

For mission-critical applications that require minimal RTO (a few minutes to hours) to have the system up and running in the secondary region with all business data, a performance-optimized DR setup using SAP HANA System Replication (HSR) is deployed. In this DR setup, the compute capacity of the DR HANA node is kept the same as that of the primary HANA node, and the full data set is loaded into the memory of the DR node when replication is set up. All delta data committed on the primary after the initial load is then replicated regularly to the secondary through continuous log shipping.

As depicted in the diagram below, the secondary node's CPU and memory configuration is kept the same as the primary's, so the major chunk of data is already loaded in the secondary HANA node's memory. In a disaster scenario, this configuration enables the secondary node to take over as primary in the minimum possible time with almost no data loss. However, maintaining production-equivalent hardware at the secondary site for the DR node adds significant cost.

Cost optimized DR setup (higher RTO)

To overcome the cost challenge of the performance-optimized DR setup, we can consider the following cost-optimized SAP HANA DR options. The trade-off of such setups is a higher RTO in exchange for lower DR cost.

(i) Shared HANA DR node

In a DR setup where the secondary node's sizing is identical to the production (PRD) primary, the resources on the secondary node generally cannot be used for anything else until a takeover takes place. In this shared DR setup, PRD data is not loaded into memory but kept on disk at the secondary site, so the node's resources can be shared with another non-PRD SAP HANA instance, e.g. QAS or TEST, on the same node. To achieve this, the memory allocation of the PRD secondary instance is restricted and the remaining memory is allocated to the non-PRD (QAS/TEST) instance.

In case of a PRD primary disaster, the non-PRD (QAS/TEST) system is stopped, full memory and resources are allocated to the PRD standby instance, data is loaded from disk into memory, and the node is brought up as primary. These steps increase the recovery time, but the setup has the advantage of low DR cost because the same DR node also hosts the QAS/TEST instance.
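The blog does not name the mechanism used to restrict the PRD secondary instance's memory; as a hedged sketch, one common way is the global_allocation_limit parameter (the value, in MB, is a placeholder):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('memorymanager', 'global_allocation_limit') = '<limit-in-MB>'
WITH RECONFIGURE;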

(ii) Lean HANA DR node

Compared to the shared DR setup, here we opt for a bare-minimum memory configuration for the PRD secondary instance, just enough for the replicated PRD data to keep being persisted to disk. We therefore do not need a DR node matching the sizing/memory configuration of the PRD primary. As in the shared DR setup, preloading of column tables into the memory of the standby HANA node is disabled by setting the database parameter preload_column_tables to false.

In case of a PRD primary disaster, the DR node is stopped and upgraded to a configuration matching the PRD primary (a Google Cloud VM type approved by SAP for production use), and full memory/resources are allocated to the PRD standby instance. The parameter preload_column_tables must be changed back to its default value of true so that the complete data set, including the column tables, is loaded into memory. Compared to the shared mode, this setup reduces DR cost significantly, since the standby node's compute and memory are kept at the bare minimum needed to support replication from the primary node.
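As a minimal sketch, this parameter (which lives in the system_replication section of global.ini) can be toggled with SQL:

-- disable column-table preload on the lean DR secondary
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('system_replication', 'preload_column_tables') = 'false'
WITH RECONFIGURE;

-- after the takeover, revert to the default
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('system_replication', 'preload_column_tables') = 'true'
WITH RECONFIGURE;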

The minimum memory configuration / Google Cloud VM type needed for the cost-optimized lean DR secondary node should be calculated per SAP guidelines and supporting documentation (SAP Note 1999880 – FAQ: SAP HANA System Replication). It is advisable to run a pilot/PoC to determine the exact memory and sizing requirements for the lean DR node and to catch any unforeseen issues upfront.

(iii) Backup-Restore HANA DR

In this most cost-effective HANA DR solution, no dedicated secondary HANA node is deployed and no real-time data replication from the primary HANA node takes place. However, we need to ensure that backups (HANA database data and logs, plus application/file systems) are stored in dual-region or multi-region mode in another region identified as the DR site. The RTO to bring up the primary instance at the identified DR site will be quite high: in a disaster scenario where the primary region is not available, one needs to set up the servers in the DR region from scratch, install the application along with the database, and then restore the database from the backup.

We must also reserve the needed compute capacity in the DR region so that the required VMs can be deployed quickly in a disaster scenario, and we must ensure network connectivity (VPN) to the DR site to access the applications.

Conclusion

The performance-optimized HANA DR setup is the preferred one, with minimum RTO and RPO, as the business would like to have the critical SAP application up and running on the secondary site in the shortest possible time. But keeping the hardware configuration identical to production is a cost overhead.

All the cost-optimized HANA DR options discussed here will save cost compared to the performance-optimized HANA DR setup. However, one must accept a longer time to stand up a functional DR system in the secondary region. Depending on the acceptable RTO and the criticality of the HANA-based SAP application in a DR scenario, the appropriate cost-optimized HANA DR setup can be deployed.

Connecting SAP HANA on-premises to SAP Cloud Platform Integration (CPI)

Introduction

Connecting SAP HANA on-premises to SAP Cloud Platform Integration (CPI) is a crucial step in establishing a seamless data flow between on-premise and cloud-based systems. This integration allows organizations to leverage the power of cloud-based integration capabilities while maintaining access to their existing on-premise HANA databases.

This document will provide a comprehensive guide on how to achieve this connection, covering essential steps, configurations, and potential challenges. By following the outlined procedures, you can effectively establish a bridge between your on-premise HANA environment and CPI, enabling efficient data exchange and process automation.

In order to fetch data from the SAP HANA database from a CPI iFlow, you need to complete the following configurations (steps A and B, as shown in the following diagram).

A. Cloud Connector

Cloud Connector is a software application that acts as a bridge between cloud-based services and on-premises systems. It establishes a secure connection, allowing these systems to communicate and exchange data seamlessly.

Log in to the Cloud Connector application and establish the connection from the cloud to the on-premises system:

  • Map the HANA server into the Cloud Connector using the TCP protocol
  • Provide the internal IP and port, and the virtual IP and port
  • Note down the virtual IP and port (these will be used in the JDBC connection inside the CPI subaccount)

To add a new system, you need to provide the following details:

  • Backend Type: SAP HANA
  • Protocol: TCP
  • Internal IP & Port
  • Virtual IP & Port

Once saved, you can verify connectivity via the Check Availability icon, which shows Reachable or Not reachable. (Again, note down the virtual IP and port.)

B. Cloud Platform (CPI Account)

SAP Cloud Platform Integration (CPI) is a cloud-based integration platform as a service. CPI provides a flexible and scalable solution for integrating cloud and on-premise applications, facilitating seamless data exchange and process automation.

Log in to the CPI subaccount and create the new JDBC material/connection:

  • Create the JDBC connection
  • Provide the JDBC URL in the format jdbc:sap://virtualIP:virtualPort/?databaseName=myhanadb, where the virtual IP and virtual port are the values you entered in the Cloud Connector in step A

While adding the JDBC connection, you need to provide the following details:

Name: a name of your choice
Description: as per your wish
Database type: SAP HANA Platform (on premise)
User ID and Password: your database user ID and password
JDBC URL: jdbc:sap://virtualIP:virtualPort/?databaseName=yourDatabaseName
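For illustration only, a filled-in URL could look like the following; the IP address, port, and database name are hypothetical placeholders:

jdbc:sap://10.20.30.40:30015/?databaseName=HDB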

Finally, in the iFlow, you use the JDBC adapter and reference this JDBC connection name in its properties (see section C).

C. iFlow

In the iFlow, the HANA server is connected via the Request Reply palette. In the JDBC adapter properties, reference the JDBC connection name you defined in step B.

D. Execute the iFlow in Postman

In Postman, you pass the SQL statement in the request body:

select * from sapabap1.sflight;

You will receive the output in XML format.

This is how we can establish connectivity from a CPI iFlow to SAP HANA on-premises and execute SQL queries to fetch data.

Conclusion

In conclusion, this document has outlined the steps and considerations involved in establishing a seamless connection between an on-premises HANA database and SAP Cloud Platform Integration (CPI). By leveraging the capabilities of CPI, organizations can effectively bridge the gap between their on-premises systems and cloud-based services. Following the outlined procedures and best practices ensures a successful and reliable connection between the HANA database and CPI, unlocking the full potential of both platforms.

Safeguarding Enterprise Personal and Financial Data in SAP HANA with IBM Security Guardium

Introduction

In the modern digital world, protecting sensitive business data is more important than ever. SAP HANA Cloud databases, known for their high performance and advanced analytics, are essential to many organisations' operations. However, the huge amounts of personal and financial data they handle make them potential targets for cyber-attacks. Implementing advanced security measures is critical for protecting these datasets from possible breaches.

This blog explains how IBM Security Guardium offers an additional level of safety to SAP HANA Cloud databases. You can ensure that enterprise personal and financial data is secure and meets regulatory standards by leveraging Guardium's comprehensive capabilities. Learn how this powerful combination can improve your data security strategy and safeguard your company's most precious assets.

Importance of Data classification and identification for Data security

Identifying and classifying data is crucial for maintaining data security and ensuring compliance with regulatory standards. It helps in understanding the sensitivity and value of data, enabling organisations to implement appropriate security measures. Proper classification aids in protecting sensitive information from unauthorised access and potential breaches, while also facilitating efficient data management and retrieval.

About this blog

In this blog, we show how IBM Guardium can be utilised to discover sensitive data within an SAP HANA DB. By scanning the database, Guardium identifies and classifies sensitive information such as personal data, financial records, and intellectual property. Once discovered, this data is added to specific groups of fields or objects for continuous observation. This grouping facilitates targeted monitoring and protection, ensuring that sensitive data is safeguarded against unauthorised access and potential breaches. Guardium's scanning and classification capabilities help maintain data security and compliance with regulatory standards for data protection in SAP HANA environments.

Prerequisites

  • SAP BTP Account with access to SAP HANA Cloud Database
  • IBM Security Guardium

Architecture

SAP HANA Cloud, a cloud-based version of the SAP HANA database, offers a multi-model platform for storing and processing diverse data. It integrates with SAP S/4HANA, the latest ERP suite, and SAP Business Technology Platform for application development. Here, security is ensured through IBM Guardium: IBM Security Guardium scans the SAP HANA Cloud DB to identify and classify sensitive data such as personal details, financial details, etc. This classification enables administrators to keep an eye on specific table fields and helps them formulate further measures, such as data masking or data hiding, for database security. Hence, this architecture positions SAP HANA Cloud as a secure and strong foundation for building versatile cloud-based enterprise applications.

Steps for integration

Log in to Guardium, and you will be directed to the home page as shown below:

Go to the Discover button on the left-hand panel, open the “Classification” dropdown, and select “Datasource Definitions” as shown below:

Click the “New” button, as highlighted below:

Enter details such as application type, name, database type, and other details in the pop-up screen as shown below:

Please keep in mind that the username and password for the SAP HANA Cloud database must be entered here.

To obtain the host name/IP address and port number, log in to your SAP BTP account and navigate to the space for which you want to integrate Guardium with the SAP HANA Cloud DB.

Select “SAP HANA Cloud” as indicated below:

Now, click “Actions” and choose “Copy SQL Endpoint”.

Paste the copied SQL endpoint and receive the hostname/IP data as shown below:

The port number is displayed in the same endpoint, following the hostname.
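For illustration, an SAP HANA Cloud SQL endpoint typically has the form <instance-id>.hana.<region>.hanacloud.ondemand.com:443, i.e. the hostname and the port separated by a colon; the values shown here are placeholders.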

To check the status of your connection, click the “Test Connection” button.

The SAP HANA Cloud database setup is now complete. You can see the details as follows:

Click the Discover button on the left-hand panel, then open the drop-down menu by clicking “Classification” and selecting “Discover Sensitive Data”. Refer to the image below.

On the following screen, select “PII [template]”. Review the information as shown below, then click “Roles” to assign them, and click the “Next” button.

Select the check box for each template pattern you wish to include (for example, birth date, city), click the “Copy” button as displayed below, and then click the “Next” button:

Once we’ve completed “What to discover”, we go on to “Where to search”, choose the integrated SAP HANA Cloud database, and click “Next”.

“Run discovery” is a convenience feature that allows you to conduct classification and check the status. Click “Next”.

We are now in the “Review report” stage, where we select a list of fields, select “Add to Group of Object/Field” from the “Add to Group” drop-down, and click on the “Next” button.

Select group “SAP Sensitive Data” and click on the “OK” button.

Let’s Test

Click the “Setup” button on the left-hand panel and choose “Group Builder” from the “Tools and Views” drop-down list.

Select “Object/Field” from the “Action” drop-down, then select “SAP Sensitive Data” from the list. Click the “Edit” button.

In the pop-up screen, select “Members”.

You will be able to see the relevant personal and financial tables and fields from the SAP HANA Cloud database.

Now that you have identified and categorised the sensitive data in your HANA database, IBM Security Guardium can further help improve data security through specialised security measures, such as:

  • Adding encryption or access controls to safeguard important data from unauthorised access and breaches
  • Masking or blocking data access requests that violate regulations or policies
  • Configuring alerts for unauthorised access attempts, e.g. if someone from a non-finance department tries to access financial data, an alert can be triggered

In general, classifying data based on its sensitivity in the first place helps to increase visibility and in turn to comply with regulatory obligations (e.g. by generating detailed reports for audits), prevent data loss, and reduce risks associated with data misuse. These features ensure that data handling procedures are consistent with organisational rules and legal standards, hence improving overall data security.

Conclusion

Securing SAP HANA Cloud databases is critical for safeguarding company personal and financial information from changing cyber threats. IBM Security Guardium improves your SAP HANA environment by offering strong data protection, continuous monitoring, and compliance capabilities, ensuring that critical information is protected. Investing in these advanced security measures not only protects essential data, but it also demonstrates how committed your company is to data privacy and compliance with laws and regulations. As cyber threats become more sophisticated, using IBM Security Guardium is a proactive step towards strengthening your SAP HANA Cloud databases and ensuring the integrity and security of your company data.

SAP AI Technologies Evolution. Seamless Transition from SAP CAI to Skybuffer AI on SAP BTP

Introduction

Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our Conversational AI development, a journey we embarked on in 2018. Our solution now addresses a wide range of AI needs, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG).

Initially integrating SAP Conversational AI, Skybuffer has been keenly tracking SAP’s strategic AI roadmap. In developing the Skybuffer AI technological environment, our goal has been to ensure our product is compatible with SAP’s cutting-edge AI technologies while offering seamless transition support.

Challenge

Back in 2018, SAP introduced the Conversational AI tool, revolutionizing the way businesses build AI assistants and chatbots. Many innovative companies seized this opportunity, integrating Conversational AI into their business processes and using these chatbots in their B2B, B2E and even B2C scenarios.

However, as SAP’s AI strategy evolved, the decision was made to sunset SAP Conversational AI. Many companies are keen to protect their investments and retain their Conversational AI-based developments. These systems have been meticulously crafted and are being used productively, making it essential to find a balance between adopting new SAP AI technologies and maintaining the value of their existing solutions.

The challenge lies in balancing the transition to new technologies while maintaining the productivity and value of the existing SAP Conversational AI solutions.

Solution

Skybuffer AI, a robust all-in-one AI mastering tool, offers a seamless transition mechanism for any SAP CAI chatbot to our new universal AI platform available on SAP BTP. With just one click, you can migrate your SAP CAI chatbot to Skybuffer AI deployed into your SAP BTP instance. Here’s a quick overview of the steps to transition your bot to Skybuffer AI on SAP BTP in just 5-10 minutes.

Step 1: Export the chatbot from your SAP CAI using SAP standard functionality.

Picture 1. SAP Conversational AI export interface

Step 2: Call the Skybuffer AI API (secured by the OAuth2 protocol), pointing to the Skybuffer AI application deployed on your SAP BTP account, and feed it the bot files exported from SAP CAI.

Picture 2. Skybuffer AI API on SAP BTP import interface for SAP CAI development
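The blog does not document the exact API endpoints; as a hedged illustration of the OAuth2 client-credentials flow commonly used on SAP BTP, the token fetch and the import call could look like the following (both URLs and the file name are hypothetical placeholders):

# obtain an OAuth2 access token from the subaccount's token endpoint
curl -s -X POST "https://<subdomain>.authentication.<region>.hana.ondemand.com/oauth/token" -u "<client-id>:<client-secret>" -d "grant_type=client_credentials"

# post the exported SAP CAI bot archive to the (hypothetical) Skybuffer AI import endpoint
curl -X POST "https://<skybuffer-app>.cfapps.<region>.hana.ondemand.com/api/import" -H "Authorization: Bearer <access-token>" -F "file=@exported_bot.zip"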

Step 3: Voilà, your AI assistant (bot) built on SAP CAI is now available in the AI Model Configuration application on the Skybuffer AI Launchpad.

Picture 3. AI Model overview once it is imported to Skybuffer AI
Picture 4. Drill down to the skill level on Action Server

Step 4: Now you can Train and Deploy AI models via simple and intuitive Fiori applications of Skybuffer AI.

Picture 5. Train and Deploy functionality

Step 5: Make basic channel settings within the Communication Channels application from Skybuffer AI Launchpad.

Picture 6. Communication channel setup for AI model

Step 6: Your new-level AI model is now operational within Skybuffer AI. Use the Web Chat Preview function to test your webchat-based AI model at once.

Picture 7. Web Chat Preview function of Skybuffer AI to test an AI model

Outcome

The development based on SAP Conversational AI is secure and ready for further enhancement within the user-friendly Skybuffer AI solution. This solution seamlessly integrates all the cutting-edge SAP Business AI capabilities of SAP BTP, empowering your business to be more innovative and future-ready.

C_S4CFI_2302: Preparation Tips to Become SAP S/4HANA Cloud Finance Associate

C_S4CFI_2302 or the SAP Certified Application Associate – SAP S/4HANA Cloud, public edition – Finance certification is designed to verify a candidate’s proficiency in essential aspects of SAP Activate onboarding fundamentals and core knowledge in the Finance domain, which are vital for a consultant’s role.

The C_S4CFI_2302 certification proves that the candidate understands the subject matter and has extensive technical expertise. It qualifies individuals to actively participate as a mentored member of a RISE with SAP S/4HANA Cloud, public edition implementation project team, specifically focusing on Finance.

Who Should Take the C_S4CFI_2302 Exam and How to Maintain the Certification?

The 2302 version of this certification exam is highly recommended for individuals seeking entry-level qualifications. It’s important to note that this particular exam is part of the Stay Current with SAP Certification program. Upon successfully passing the C_S4CFI_2302 version of the exam, it is crucial to initiate the stay current process.

It involves taking the Stay Current Assessment for all subsequent major releases through the SAP Learning Hub, starting from the 2308 release onwards. By following this process, you can ensure the maintenance of your SAP S/4HANA Cloud, public edition consultant certification status, and badge.

Go through the Structured Method of Preparing for the C_S4CFI_2302 Certification Exam:

Preparing for the C_S4CFI_2302 certification requires a structured and focused approach. Here are some steps you can take to prepare for the certification exam effectively:

Get Familiar with the C_S4CFI_2302 Exam Topics:

Begin by familiarizing yourself with the C_S4CFI_2302 exam objectives and the topics covered. This will give you a clear picture of what to expect and help you prioritize your study efforts.

Review the Official Documentation:

SAP provides official documentation and study materials for the C_S4CFI_2302 certification exam. You must take advantage of these resources, such as the SAP Learning Hub and the SAP S/4HANA Cloud Implementation Learning Journeys, to understand the concepts and functionalities related to the Finance line of the business area.

Practical Experience:

Gaining practical experience in implementing SAP S/4HANA Cloud is highly recommended, particularly in the Finance domain. Hands-on experience will enhance your understanding of the system and its capabilities and allow you to apply theoretical knowledge to real-world scenarios.

Training Courses Help You Get Ready for the Exam:

Consider enrolling in training courses specifically designed for the C_S4CFI_2302 certification. These courses are led by experienced instructors who can provide valuable insights and guidance, helping you grasp the intricacies of SAP S/4HANA Cloud implementation in Finance.

Practice with C_S4CFI_2302 Sample Questions:

Get familiar with the exam format and question types by practicing with sample and mock exams. It will help you become more comfortable with the exam structure and improve your time management skills.

Stay Updated with SAP Updates:

As SAP continually releases updates and new versions, staying current with the latest developments is essential. Subscribe to SAP newsletters, participate in webinars, and join relevant forums to stay informed about changes and enhancements in SAP S/4HANA Cloud, especially in the Finance domain.

Create Your Study Schedule to Prepare for the C_S4CFI_2302 Certification:

Develop a study plan that outlines your study schedule, topics to cover, and milestones to achieve. Breaking down the C_S4CFI_2302 study materials into manageable chunks and setting specific goals will help you stay organized and track your progress effectively.

Collaborate and Seek Support:

Engage with fellow learners, consultants, or mentors who are also preparing for the certification. Collaborating with others can provide valuable insights, foster discussions, and create a supportive learning environment.

Review and Reinforce:

Regularly review the C_S4CFI_2302 topics you’ve covered to reinforce your understanding. Take notes, create summaries, and revisit challenging areas to ensure you have a solid grasp of the exam concepts.

Mock Exams and Self-Assessment:

Before the actual exam, attempt C_S4CFI_2302 mock exams and self-assessment quizzes to evaluate your readiness. Identify areas where you need further improvement and focus your final preparations accordingly.

What Is SAP S/4HANA Cloud Finance?

Effective financial management is essential for organizations to thrive in today’s rapidly evolving business landscape. SAP S/4HANA Cloud Finance is a solution that empowers businesses with advanced tools and functionalities to streamline financial processes, enhance decision-making, and drive sustainable growth.

Transforming Financial Operations:

SAP S/4HANA Cloud Finance transforms financial operations, offering a comprehensive suite of features designed to optimize efficiency, accuracy, and transparency. With its intuitive interface and robust capabilities, organizations can seamlessly manage financial processes such as accounting, treasury management, cash flow analysis, financial planning, and more.

Real-time Insights for Informed Decision-Making:

One of the key advantages of SAP S/4HANA Cloud Finance is its ability to provide real-time insights into financial data. Leveraging advanced analytics and reporting capabilities, businesses can access up-to-the-minute financial information, enabling them to make informed decisions promptly. This agile decision-making approach can significantly impact profitability, cost control, and financial performance.

Concluding thoughts:

SAP S/4HANA Cloud Finance empowers businesses to achieve financial excellence and confidently navigate the complexities of the modern business landscape. Embrace SAP S/4HANA Cloud Finance today and unlock the potential for financial success.

By following these steps and maintaining a consistent and dedicated study routine, you can effectively prepare for the C_S4CFI_2302 certification and increase your chances of success. Remember to stay focused, stay motivated, and leverage the available resources to enhance your knowledge and skills in SAP S/4HANA Cloud implementation in the Finance line of business.

C_S4CWM_2208: Assess Yourself with Practice Test to Pass the S4CWM Exam

Do a self-assessment with C_S4CWM_2208 practice tests to pass the SAP Certified Application Associate – SAP S/4HANA Cloud (public) – Warehouse Management Implementation exam on your first attempt.

Overview of the C_S4CWM_2208 Exam:

C_S4CWM_2208, or the SAP Certified Application Associate – SAP S/4HANA Cloud (public) – Warehouse Management Implementation certification exam, validates that the candidate has the SAP Activate onboarding fundamentals and core knowledge in the Warehouse Management line of business needed to work in a consultant's profile.

The C_S4CWM_2208 certification proves that the candidate possesses the overall knowledge and in-depth technical skills to work as a mentored member of a RISE with SAP S/4HANA Cloud (public) implementation project team, focusing on Warehouse Management.

What Is the Level of the C_S4CWM_2208 Certification?

The C_S4CWM_2208 certification exam is suggested as an entry-level certification. The aspirant must note that this 2208 version of the exam takes part in the Stay Current with SAP Certification program. Once you pass this version of the exam, make sure that you start your stay current process. You will be required to take the Stay Current Assessment for all subsequent major releases via the SAP Learning Hub, starting with the 2302 release, to maintain your SAP S/4HANA Cloud (public) consultant certification status and badge.

Topics Covered Under the C_S4CWM_2208 Certification Exam:

The C_S4CWM_2208 exam covers the following topics-

  • Cloud Security, GDPR, and Identity Access Management
  • SAP S/4HANA Cloud (public) – Warehouse Management Overview
  • SAP Activate Methodology and Best Practices
  • Data Migration
  • Business Process Testing
  • Configuration
  • Integration and Extensibility
  • Implementation and Configuration for Scope Items Related to SAP EWM Integration
  • Implementation and Configuration for Scope Items Related to core Warehouse Management

Preparation Strategies to Pass the C_S4CWM_2208 Certification Exam:

Do the Registration First:

Planning the first step confuses many aspirants, so take a solid approach to registration. The thought of passing the exam keeps you motivated and helps you stay focused. The next step in your preparation is registering with Pearson VUE. Once registration is done, your next actions will fall into place, and getting ready for the exam becomes systematic.

Do Not Hurry to Appear for the Exam:

Setting a study plan works well for any task, and exam preparation is no exception. If you want to put a better effort into the C_S4CWM_2208 exam preparation, you must take enough time to get ready. Leave a gap of at least two months between registration and the actual exam day. Daily studying is important, and it can work wonders if you study for two to three hours every day.

Do Not Forget to Learn from the C_S4CWM_2208 Training:

You might get confused about some of the syllabus domains, but learning from the experts is the best solution to get clarity on the subjects. You must be strong with theoretical and practical knowledge, and training helps you in this regard.

Assess Yourself through the C_S4CWM_2208 Practice Tests:

Do you need to assess yourself with practice tests? The answer is yes. You might read well, write down the topics, and feel confident, but a self-assessment tells you where you stand in your exam preparation. When you are confident about the syllabus domains, start taking the C_S4CWM_2208 practice tests and evaluate yourself. Practicing gives you valuable insights into your strengths and weaknesses, and acting on them helps you improve significantly across the syllabus domains.

Overview of SAP S/4HANA Cloud (public) – Warehouse Management:

The latest S/4HANA Cloud (public) – Warehouse Management release brings improved integration between production and warehousing. It offers an enhanced user experience through improved scan fields for handling unit (HU) identification, an expanded list of available APIs, and more.

With S/4HANA Cloud (public) – Warehouse Management, processes allow synchronous goods movements from Inventory Management (IM) to the warehouse. Some of these are also used in transaction MIGO. The currently supported methods for synchronous goods movements using transaction MIGO are:

  • Postings related to purchasing order receipts for external procurement.
  • Processes related to production

Synchronous goods movements are especially beneficial for customers with simple inbound and outbound scenarios, as the process is simplified and requires less user interaction and system communication.

Mobile Warehousing Is Better with S/4HANA Cloud (public) – Warehouse Management:

The S/4HANA Cloud (public) – Warehouse Management release offers Radio Frequency (RF) improvements and an expanded list of available APIs.

Regarding RF, SAP delivered improvements to the product-specific ad hoc Physical Inventory (PI) operations using RF. With the latest feature, a user who finds a product in a bin in the physical warehouse but not in the system bin can still perform ad hoc PI for the product using a radio frequency device.

Bottom Line:

Warehouse Management is an important aspect of every organization’s growth. Improving your skills regarding warehouse management is highly beneficial for job aspects. Therefore, grab the C_S4CWM_2208 certification and prove your skills to potential employers.

SAP HANA Development with SAP Cloud Application Programming Model using SAP Business Application Studio

Goal:

This blog explains how you can leverage native SAP HANA development artifacts with CAP. In particular, we look at the use of calculation views inside Cloud Application Programming Model (CAP) applications. This includes OData access to calculation views.

Solution:

Pre-requisites: Set Up SAP Business Application Studio for Development

  • Launch the Business Application Studio (BAS) and choose Create Dev Space

NOTE: In the SAP BTP trial and free tier you are limited to only two Dev Spaces and only one can be active at a time. If you have performed other tutorials, you might already have reached your maximum. In that case you might have to delete one of the other dev spaces in order to continue with this tutorial.

  • Select Full Stack Application, select the necessary extensions as shown in the image below, and click Create Dev Space.
Selection of the type – Full Stack Application in Business Application Studio

It will then create a dev space, although it takes a couple of minutes to start. Once it is RUNNING, you can click on CAP_DEMO and start creating your projects.

Newly created DEV space in BAS
  • Add a Cloud Foundry login connection to your space (1. plugin, 2. F1 wizard, 3. CF CLI)

There are several ways to log in to Cloud Foundry:

  1. In the plugins panel, go to the Cloud Foundry icon and click on the right arrow as shown in the picture.
Cloud Foundry Login in BAS

Fill in the credentials & necessary details such as Cloud Foundry Organization & Cloud Foundry Space.

Enter Credentials and Dev Space details

2. Using the Artifact Wizard by clicking on F1.

Cloud Foundry login using Artifact Wizard (F1)

3. Using the terminal: execute the command

cf login

Terminal – cf login
  • Create CAP Project: Once you have logged in to Cloud Foundry, start creating the CAP project by choosing Start from Template (Help > Get Started > Start from Template) and clicking on CAP Project.
Choose from the template ( CAP Project )

Set your runtime as Node.js, add the features selected in the image below, and click Finish.

It will then create a CAP Project with some sample folders and files.

Project Explorer
  • Adding HANA Target to the Project: Open a new terminal (Terminal > New Terminal) and execute the command

cds add hana

You can see the dependencies in the mta.yaml file.

mta.yaml file
  • Adjust the content in the files mta.yaml & package.json

Now change the db path from gen/db to db.

Change the path in mta.yaml

Now, change the cds section of package.json to the block of code below.

Changes in the file package.json
Before & After the addition of CDS Section in package.json
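The referenced block appears only as a screenshot in the original; as a sketch based on a standard CAP-on-HANA setup, the cds section of package.json typically looks like this (exact options may differ per project):

{
  "cds": {
    "requires": {
      "db": { "kind": "hana" }
    },
    "hana": {
      "deploy-format": "hdbtable"
    }
  }
}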
  • Install the dependencies

Open the terminal and execute

  • npm install ( NOTE: Skip the step if already installed )
  • npm install -g hana-cli (Open Source Utility called hana-cli )
  • hana-cli createModule
Install the dependencies

(OPTIONAL) You can clone your Git repository or continue with the project in the next steps.

Initialize the Git Repository
  • View the files added from the CAP template

1. src > data-model.cds

data-model.cds

2. srv > cat-service.cds

cat-service.cds
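Both files are shown only as screenshots in the original; as a sketch, the default CAP sample they come from typically contains something like the following (entity and service names follow the standard bookshop template and may differ in your project):

// db/data-model.cds
namespace my.bookshop;

entity Books {
  key ID : Integer;
  title  : String;
  stock  : Integer;
}

// srv/cat-service.cds
using my.bookshop as my from '../db/data-model';

service CatalogService {
  @readonly entity Books as projection on my.Books;
}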

Run the following command

cds build

cds build

The CDS artifacts are now converted to hdbtable and hdbview artifacts, which you can find in the src folder.

CDS Artifacts in the Explorer

Deploy these objects into the HANA database, creating tables and views. To do so, bind the project to a database connection and an HDI container instance: click on the bind icon.

The CAP tooling and the HANA tooling use distinct connections; they do not share the same one. Hence, binding must be done at two different points: (a) SAP HANA Project and (b) Run Configurations.

(a) Binding the HANA Project
  • Create Service Instance

Select an SAP HANA Service and choose from the list displayed.

SAP HANA Service

Go to the plugin Run Configuration and bind the connection.

(b) Binding in Run Configuration

Select Yes in the dialog box below.

  • Run the CAP Project

Once deployed, go to Run Configurations and click on the run button. A dialog box will then offer to open ‘Bookshop-1’ in a new tab.

Run the CAP Project
Application running at port 4004

If you click on the $metadata, you can view the technical description of the service.

$metadata

Click on Fiori Preview; the attributes to display have to be selected by clicking on the gear icon ⚙.

Fiori Preview
  • Create Calculation View

To create a calculation view, click on F1, which in turn opens a wizard to create the database artifact.

Database Artifact Wizard – Create Calculation View

In the Calculation View Editor, click on the + icon on the projection node and add the table. On the right side, there is an icon to expand the details of the projection. Clicking it opens a panel where you can map the table attributes by double-clicking on the MY_BOOKSHOP_BOOKS header.

Calculation View Editor

Once deployed, you can view the Calculation View in the Database explorer.

Database Explorer
  • Edit the .cds files ( data-model.cds & cat-service.cds )
data-model.cds
cat-service.cds
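The edits are shown only as screenshots in the original; as a hedged sketch of the usual pattern for surfacing an existing calculation view in CAP, the additions could look like this (the field list is an assumption mirroring the Books entity; the entity name matches the view created above):

// db/data-model.cds: declare the calculation view as an existing persistence object
@cds.persistence.exists
@cds.persistence.calcview
entity CV_BOOK {
  key ID : Integer;
  title  : String;
  stock  : Integer;
}

// srv/cat-service.cds: expose it through the service
entity CVBook as projection on my.CV_BOOK;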
  • Create Synonym

Create a .hdbsynonym file in your src folder.

Database Artifact Wizard – Create Synonym

Click on <click_to_add> and enter the synonym name MY_BOOKSHOP_CV_BOOK, then click on the object field; the table below opens. Enter ** in the text field, choose the calculation view CV_BOOK, and click Finish.

If you open cap.hdbsynonym in the text editor, it looks as follows.

Synonym – cap.synonym
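The file content is shown only as a screenshot in the original; as a sketch, a .hdbsynonym file mapping this synonym typically looks like this:

{
  "MY_BOOKSHOP_CV_BOOK": {
    "target": {
      "object": "CV_BOOK"
    }
  }
}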

Deploy the project by executing the command cds deploy -2 hana, then refresh the browser. You will find the new cv_book entity.

Application with new entity
C_HCMOD_03: SAP HANA Cloud Modeling Certification Is Yours Now!

Becoming C_HCMOD_03, or SAP Certified Application Associate – SAP HANA Cloud Modeling, certified is not time-consuming now. Practice tests help you easily pass the 80-question SAP Certified Application Associate – SAP HANA Cloud Modeling certification exam and start your SAP career.

What Is Proved through the C_HCMOD_03 Certification?

C_HCMOD_03, or the SAP Certified Application Associate – SAP HANA Cloud Modeling certification exam, proves that the candidate has the basic knowledge of SAP HANA Cloud QRC 01/2022 required to work as an SAP HANA Cloud application consultant.

What Else Is Built through the C_HCMOD_03 Certification? 

The C_HCMOD_03 certification builds on the basic knowledge earned through related SAP HANA Cloud training, enhanced by practical experience within an SAP HANA Cloud project team, where the consultant applies this knowledge in practical projects under the guidance of a senior mentor. The C_HCMOD_03 exam also proves an aspirant’s ability to implement calculation view graphical modeling and to manage modeling content in SAP Business Application Studio, as needed in the profile of an SAP HANA Cloud application consultant.

Syllabus Domains Covered Under the C_HCMOD_03 Exam: 

The C_HCMOD_03 exam covers the following topics-

  • SQLScript in models
  • Secure data models
  • Optimize the performance of models
  • Manage and administer models
  • Configuring modeling functions
  • Build calculation views
  • SAP HANA Cloud modeling basics

How Should You Prepare for the C_HCMOD_03 Exam? 

Understand the Exam Structure:

It may feel like a simple task, but knowing the exam structure and topics is essential. The candidate must visit the official page to learn more about the topic distribution and books. They can follow the online study links or get printed versions of the study books.

Learn All the Syllabus Domains Properly:

The C_HCMOD_03 exam checks an aspirant in many areas, and they should learn all the syllabus domains properly to attempt the maximum number of questions. If you are not well-planned, you could find it a bit difficult to cover all syllabus domains.

Do Not Skip Any Syllabus Domain:

In the case of the SAP C_HCMOD_03 exam, the syllabus domains are mostly weighted at 8 to 12% each, so a fair number of questions can come from every section. Make your preparation journey easy by focusing on two to three topics at a time and writing separate notes.

Write Notes and Learn the Topics from the Core:

You must memorize the C_HCMOD_03 certification topics for longer to score high. Therefore, make notes while you study. These notes help in memorizing the topics for longer and also help to revise quickly when you have less time during preparation.

Depend on Additional Resources Like C_HCMOD_03 Sample Questions & Videos:

Studying from different resources would widen the knowledge base regarding the C_HCMOD_03 exam. A candidate must cover the syllabus topics first and then move towards finding other resources like sample questions and video resources to strengthen his base regarding the syllabus topics.

Lastly, Do the Self-Assessment with C_HCMOD_03 Practice Tests:

Only getting ready for the C_HCMOD_03 exam is not enough; self-assessment plays a crucial role in earning the certification. There are many useful mock and premium practice exams to prepare with. These C_HCMOD_03 practice tests offer valuable insights into your strengths and weaknesses and help you get ready accordingly. Through these tests, the candidate masters time management and can attempt the questions smoothly in the exam hall.

What Is SAP HANA Cloud?

SAP HANA Cloud’s SAP HANA database component offers mission-critical data at proven in-memory speed. Its highly scalable data lake has flexible storage tiering possibilities, allowing an optimal price-performance ratio for all scenarios.

How Does It Benefit Organizations?

Hybrid In-Memory Database:

Expand your workloads across on-premise and cloud landscapes with a transparent gateway to local, real-time, virtualized, or distributed data.

Transactional and Analytical Operations:

Support any data and run many different types of workloads.

Have High Performance with Graphical Models:

Help ensure high performance with graphical models that foster collaboration between stakeholders, getting complex analyses in real-time.

Improve Business Operations:

Augment and improve business operations with data-driven insights from embedded and external machine learning.

Experience Unified Systems:

Get a unified database solution that stores, processes, and analyzes geospatial, graph, JSON documents, and more.

Bottom Line:

SAP HANA Cloud offers a complete set of capabilities, including database administration, database management, data security, multimodel processing, application development, and data virtualization. Therefore, grab the C_HCMOD_03 certification and flaunt the SAP badge. 

Installation Eclipse and configuration ADT tool

Introduction

Before diving deep into the technical details, let me briefly explain why we need to do this. ABAP Development Tools (ADT) is an Eclipse-based tool provided by SAP. You will need ADT if you have to work on ABAP CDS views. Even though CDS views are embedded into the ABAP Dictionary, there are some differences in the features available between the Eclipse and the Data Dictionary environments.

A detailed example of how to connect to an S/4HANA 1809 SAP server from Eclipse is explained in this blog.

Follow the steps below to install Eclipse and set up the ADT GUI.

Install Eclipse

Download Eclipse using this link – https://www.eclipse.org/downloads/

Double-click the Eclipse installation file downloaded above.

We have selected Eclipse IDE for Enterprise Java Developers. You may choose the first one too i.e. Eclipse IDE for Java Developers.

Accept the Terms and Conditions

Once the Installation is complete, Launch the Eclipse

This is how the Eclipse should look once the Launch is complete.

If you want to check the version of your Eclipse, you may go to Help -> About.

Ours is 2020-03 version

Install ADT (ABAP Development Tool) in Eclipse

Get the latest ADT tools from the link below.

https://tools.eu1.hana.ondemand.com/#abap

Enter the URL below if you installed the latest Eclipse 2020-03 like us:

https://tools.hana.ondemand.com/latest

Hit Finish and check the progress at the bottom right corner.

Once it is complete, it will ask to restart Eclipse (not the computer).

Wait for Eclipse to start again.

Open ABAP Perspective in Eclipse ADT

Choose ABAP in the Others perspective dialog.

In the top-left corner, the ABAP perspective is now visible.

Connect to S/4HANA 1809 SAP Server from Eclipse

Click on Create ABAP Project

It will show the SAP systems from the locally installed SAP GUI. You need to have the GUI installed and the S/4HANA server details added to it.

Provide the SAP User Id and Password provided to you. If you have not received the id and password yet, please ignore this step.

Hit Finish Button

Add Your Favourite Package to Project

Add a favourite package and hit Finish, or add a package later by right-clicking and choosing Add Package.

To find anything in Eclipse, you may use the shortcut CTRL + SHIFT + A.

Uplift Modeling with APL Gradient Boosting

Marketing campaigns need to be relevant for the recipient, and worthwhile for the sender. It is not worth sending a promotional offer about a product or service if you know that your customer will buy it anyway. It is not worth calling a subscriber to persuade him to maintain his subscription if you know he will stay or leave regardless of being contacted. Marketers must identify which customers their upsell or retention campaign can influence positively. Uplift modeling provides them a solution to do just that.

Uplift modeling is a generic technique to predict how individuals will respond to an action or a treatment. The dataset must contain a group of treated individuals, plus a control group with non-treated individuals. In this blog we will cover two uplift methods: S-Learner and Y star transform. Both methods require building only one predictive model. We will walk you through an example to estimate a treatment effect with the APL Gradient Boosting algorithm in Python.

This blog leverages work done by my colleague Yann LE BIANNIC from SAP Labs and his student intern Jonathan DAO. A big thank-you to them.

Preparing the Training and Test datasets

Kevin Hillstrom made available a dataset of 64,000 customers who last purchased within the past twelve months. These customers were involved in two e-mail campaigns. We want here to apply uplift modeling to one of these campaigns.

Let’s start with a few data preparation steps.

We expect the dataset when prepared to have three special columns:

key_col = 'id'              # primary key
target_col = 'visit'        # outcome to predict
treatment_flag = 'treated'  # 1 = treated group, 0 = control group

We need to turn the Hillstrom bunch object …

from sklift.datasets import fetch_hillstrom
bunch_local = fetch_hillstrom()

into a Pandas dataframe.

import pandas as pd
df_local = pd.DataFrame(data = bunch_local.data)
df_local['treatment'] = bunch_local.treatment
df_local[target_col] = bunch_local.target
df_local.head(6)

Visit will be our target column. Visit equals 1 when the customer visited the company website in the last two weeks, and 0 otherwise.
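If you want a quick look at the target distribution before going further, a simple check is:

df_local[target_col].value_counts(normalize=True)  # share of visitors vs. non-visitors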

We delete all the rows related to the treatment: Men’s E-Mail.

df_local.drop(df_local[df_local.treatment == 'Mens E-Mail'].index, inplace=True)

Two values are left:

  • Women’s E-Mail (the treated group)
  • No E-Mail (the control group).
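A quick way to confirm that only these two values remain:

df_local['treatment'].unique()  # expected: 'Womens E-Mail' and 'No E-Mail'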

A treatment flag is added …

import numpy as np
df_local['treated'] = np.where(df_local['treatment'] == 'No E-Mail', 0, 1)

and redundant columns are removed.

df_local.drop('history_segment', axis=1, inplace=True)
df_local.drop('treatment', axis=1, inplace=True)
df_local.head(6)

We create an SAP HANA dataframe from the Pandas dataframe.

from hana_ml import dataframe as hd
conn = hd.ConnectionContext(userkey='MYHANACLOUD')

df_remote = hd.create_dataframe_from_pandas(connection_context=conn, 
                pandas_df= df_local, 
                table_name='HILLSTROM_VISITS',
                force=True,
                drop_exist_tab=True,
                replace=False)

The Hillstrom dataset happens not to have a primary key. We add one.

df_remote = df_remote.add_id(id_col= key_col)

We check the distribution of the treatment flag.

df_remote.agg([('count', 'treated', 'rows')], group_by='treated').collect()

The two distinct values are distributed roughly half and half, but a balanced distribution is not mandatory for the S-Learner method, nor for the Y star transform method.

It is time to split the data into Training and Test. We use the stratified partition method to preserve the treatment distribution.

from hana_ml.algorithms.pal.partition import train_test_val_split
hdf_train, hdf_test, hdf_valid = train_test_val_split(
    data=df_remote,
    id_column=key_col,
    partition_method='stratified',
    stratified_column=treatment_flag,
    training_percentage=0.8,
    testing_percentage=0.2,
    validation_percentage=0)

We persist the two partitions as tables in the database …

hdf_train.save(('HILLSTROM_TRAIN'), force = True)
hdf_test.save(('HILLSTROM_TEST'), force = True)

so that these tables can be used later in a separate notebook as follows:

sql_cmd = 'select * from HILLSTROM_TRAIN order by 1'
hdf_train = hd.DataFrame(conn, sql_cmd)

sql_cmd = 'select * from HILLSTROM_TEST order by 1'
hdf_test  = hd.DataFrame(conn, sql_cmd)

We can now get to the heart of the matter.

S-Learner

Our target “visit” is a 0/1 outcome, so we will use binary classification. We train an APL Gradient Boosting model on the dataset that includes both groups: treated and non-treated individuals.

predictors_col = hdf_train.columns
predictors_col.remove(key_col)
predictors_col.remove(target_col)

from hana_ml.algorithms.apl.gradient_boosting_classification import GradientBoostingBinaryClassifier
apl_model = GradientBoostingBinaryClassifier()
apl_model.set_params(other_train_apl_aliases={'APL/LearningRate':'0.025'})
apl_model.fit(hdf_train, label=target_col, key=key_col, features=predictors_col)

We display the variables ranked by their importance (aka contribution).

my_filter = "\"Contribution\">0"
df = apl_model.get_debrief_report('ClassificationRegression_VariablesContribution').filter(my_filter).collect()
df.drop('Oid', axis=1, inplace=True)
df.drop('Method', axis=1, inplace=True)
format_dict = {'Contribution':'{:,.2f}', 'Cumulative':'{:,.2f}'}
df.style.format(format_dict).hide(axis='index')

The variable “treated”, at rank 2, contributes to predicting the customer visit. But to know whether customers react to the treatment, and if so, whether they react favorably or unfavorably, we need to estimate the uplift value.

This first phase, training the model, is quite ordinary. The unconventional part starts with the prediction phase. In the S-Learner method, the classification model must be applied twice.

We predict a first time using unseen data (our test dataset) with the treatment flag forced to 1.

apply_in = hdf_test.drop([target_col])               # drop the actual outcome
apply_in = apply_in.drop([treatment_flag])           # drop the real treatment flag
apply_in = apply_in.add_constant(treatment_flag, 1)  # force the treatment flag to 1
apply_out = apl_model.predict(apply_in)

apply_out = apply_out.rename_columns({'PREDICTED':'pred_trt1','PROBABILITY':'proba_trt1'})
apply_out.save(('#apply_out_trt1'), force = True)
apply_out_trt1  = conn.table('#apply_out_trt1')

We predict a second time using the same individuals but with the treatment flag forced to 0.

apply_in = hdf_test.drop([target_col])
apply_in = apply_in.drop([treatment_flag])
apply_in = apply_in.add_constant(treatment_flag, 0)  # this time, force the treatment flag to 0
apply_out = apl_model.predict(apply_in)

apply_out = apply_out.rename_columns({'PREDICTED':'pred_trt0','PROBABILITY':'proba_trt0'})
apply_out.save(('#apply_out_trt0'), force = True)
apply_out_trt0  = conn.table('#apply_out_trt0')

The uplift is estimated by calculating, for each individual, the difference between the predicted probability of a visit with treatment and without treatment:

uplift = P(visit = 1 | treated = 1) - P(visit = 1 | treated = 0)

APL returns the probability of the predicted class, so when the prediction is 0 we must transform the probability by doing: 1 - Probability.

joined_outputs = apply_out_trt1.set_index("id").join(apply_out_trt0.set_index("id"))

hdf_uplift = joined_outputs.select('*', (""" 
Case When "pred_trt1" = 1 Then "proba_trt1" else 1-"proba_trt1" End - 
Case When "pred_trt0" = 1 Then "proba_trt0" else 1-"proba_trt0" End
""", 'uplift'))

hdf_uplift.head(5).collect()

The uplift can be visualized as an uplift curve, summarized by its Area Under the Uplift Curve (AUC).

from sklift.viz.base import plot_uplift_curve, plot_qini_curve

plot_uplift_curve(
    hdf_test.collect()[target_col], 
    hdf_uplift.collect()['uplift'], 
    hdf_test.collect()[treatment_flag],  
    random=True, perfect=False)

By targeting only the first half of the population (the 4,000 customers most likely to visit the website because of the campaign e-mail they received), we get close to the maximum uplift.
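To put a number on this, one option is sklift's uplift_at_k metric (a sketch; as with the plot above, it assumes the collected rows of hdf_test and hdf_uplift line up by id):

from sklift.metrics import uplift_at_k

# Average uplift achieved when targeting the top 50% of customers
# ranked by their predicted individual uplift
score = uplift_at_k(
    y_true=hdf_test.collect()[target_col],
    uplift=hdf_uplift.collect()['uplift'],
    treatment=hdf_test.collect()[treatment_flag],
    strategy='overall',
    k=0.5)
print(f'Uplift at top 50%: {score:.4f}')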

Y star Transform

The Y star method consists of transforming the target so that the gradient boosting model directly predicts the uplift. Here is the transformation:

Y* = Y * (W - p) / (p * (1 - p))

For our Hillstrom case, Y is the visit column and W the treated column. As for p, it is the proportion of treated individuals. When the distribution of the treated column is half and half (p = 0.5), the Y star transform gives the following results:

  • visit = 1 and treated = 1: Y* = +2
  • visit = 1 and treated = 0: Y* = -2
  • visit = 0 (treated or not): Y* = 0
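As a quick sanity check of the transform (a minimal numpy sketch, separate from the HANA code below):

import numpy as np

p = 0.5                     # proportion of treated individuals
Y = np.array([0, 0, 1, 1])  # visit
W = np.array([0, 1, 0, 1])  # treated
y_star = Y * (W - p) / (p * (1 - p))
print(y_star)               # [ 0.  0. -2.  2.]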

With that method we must build a regression model where Y* is the variable to predict.

First, we add the proportion column to our training dataframe.

# Proportion of treated individuals in the training set
prop_trt1 = hdf_train.filter(f'"{treatment_flag}" = 1').shape[0] / hdf_train.shape[0]
hdf_train = hdf_train.add_constant("prop_trt1", prop_trt1)
hdf_train = hdf_train.cast('prop_trt1', 'FLOAT')

Then we add the Y* column.

hdf_train = hdf_train.select('*', (f""" 
"{target_col}" *  ( ("{treatment_flag}" - "prop_trt1") / "prop_trt1" / (1 - "prop_trt1") ) 
"""
,'y_star'))

We check its values on different pairs (visit, treated).

hdf_train.filter('"id" in (2, 3, 5, 13)').collect()

We fit the training data with the APL Gradient Boosting regressor. The treatment flag is removed from the features list because this information is now included in the y_star variable.

predictors_col = hdf_train.columns
predictors_col.remove(key_col)
predictors_col.remove(target_col)
predictors_col.remove('y_star')
predictors_col.remove(treatment_flag)

from hana_ml.algorithms.apl.gradient_boosting_regression import GradientBoostingRegressor
apl_model = GradientBoostingRegressor()  
apl_model.set_params(other_train_apl_aliases={'APL/LearningRate':'0.025'}) 
apl_model.fit(hdf_train, label= 'y_star', key= key_col, features= predictors_col)

Using the same code snippet as in the previous section, we display the variable contributions.
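For completeness, here is that snippet again, applied to the regression model (the 'ClassificationRegression_VariablesContribution' debrief report, as its prefix suggests, covers regression models as well):

# Same debrief report as in the S-Learner section, now on the Y* regression model
my_filter = "\"Contribution\">0"
df = apl_model.get_debrief_report('ClassificationRegression_VariablesContribution').filter(my_filter).collect()
df.drop('Oid', axis=1, inplace=True)
df.drop('Method', axis=1, inplace=True)
format_dict = {'Contribution':'{:,.2f}', 'Cumulative':'{:,.2f}'}
df.style.format(format_dict).hide(axis='index')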

The Y* model shows different contributions simply because it predicts the uplift, not the visit. The most important variable by far (67%) is a flag indicating whether the customer purchased women’s merchandise in the past year. Remember, the treatment is an e-mail campaign for women.

Newbie, a feature indicating whether the customer was acquired in the past twelve months, is the least important variable for predicting uplift, while it is the most important for predicting the customer visit according to the S-Learner model.

The prediction phase is straightforward.

apply_in = hdf_test.drop([target_col])
apply_out = apl_model.predict(apply_in)

apply_out.head(5).collect()

Here is the uplift curve.

plot_uplift_curve(
    hdf_test.collect()[target_col], 
    apply_out.collect()['PREDICTED'], 
    hdf_test.collect()[treatment_flag],  
    random=True, perfect=False)

The AUC value here is the same as the one we saw previously for the S-Learner uplift curve. The results from the two uplift modeling methods are equivalent in the case of the Hillstrom dataset.

You may want to try these two methods on your own dataset and compare their AUC values. If they are equivalent, the Y* model is preferable since it directly provides the SHAP explanations (global and local) on the uplift.
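For that comparison, sklift's uplift_auc_score can compute the value directly (a sketch, with the same row-alignment assumption as before):

from sklift.metrics import uplift_auc_score

# Area under the uplift curve for the Y* model predictions
auuc = uplift_auc_score(
    y_true=hdf_test.collect()[target_col],
    uplift=apply_out.collect()['PREDICTED'],
    treatment=hdf_test.collect()[treatment_flag])
print(f'Uplift AUC: {auuc:.4f}')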
