SAP Master Data Governance - ERP Q&A
https://www.erpqna.com/tag/sap-master-data-governance/

MDG DQM: Employing rules in central governance & mass processing in S/4 HANA
https://www.erpqna.com/mdg-dqm-employing-rules-in-central-governance-mass-processing-in-s-4-hana/
Thu, 20 Jun 2024

The blog provided a step-by-step overview of how to create a basic rule in DQM, enable it for data quality evaluation, and generate evaluation scores in the system for the product master.

In today’s data-driven landscape, enterprises are increasingly recognizing the significance of maintaining accurate, reliable, and consistent master data to extract meaningful insights and drive informed decision-making.

Introduction

DQM aims to create a single repository of rules, for both validation and derivation purposes, which can be exposed in multiple contexts such as central governance, mass processing, and consolidation. In this blog we will cover the steps required to enable usage of a DQM rule in central governance and mass processing.
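The single-repository idea can be sketched as follows. This is illustrative Python only, not the actual BRF+/ABAP implementation; the rule mirrors the gross/net-weight example used later in this blog, and all function and field names here are invented for the sketch. One rule is defined once and invoked unchanged from both a change-request check and a mass-processing check.

```python
# Illustrative sketch (Python, not ABAP/BRF+): one rule definition
# reused by multiple processing contexts, mirroring the DQM idea.

def weight_rule(material):
    """Hypothetical rule: if a unit of weight is set, gross and net
    weight must be greater than zero."""
    errors = []
    if material.get("unit_of_weight"):
        for field in ("gross_weight", "net_weight"):
            if not material.get(field, 0):
                errors.append(f"{field} must not be 0 (rule: WEIGHT_CHECK)")
    return errors

RULES = [weight_rule]  # the single repository of rules

def check_change_request(material):
    # Central governance context: validate one material in a change request.
    return [err for rule in RULES for err in rule(material)]

def check_mass_processing(materials):
    # Mass processing context: the same rules, applied per record.
    return {m["id"]: errs for m in materials
            if (errs := check_change_request(m))}
```

Note that each error message names the rule that raised it, similar to how the DQM error message in central governance identifies the originating rule.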

Data quality mantra

Main Content

Central governance

1. Create a change request type, or use any existing change request type in your system. Go to the step properties and mark the step as relevant for data quality rules. This tells the system, while processing the change request, that the DQM rules (if any) need to be executed.

    Change request
    Step properties

    2. Go to the validation rule Fiori app, add the usage “Check change requests”, and run the preparation. The preparation step ensures the same BRF+ expressions can now be used for central governance as well.

    3. Once preparation is completed, enable the rule usage for change requests.

      Rule usage

      4. Now go to the creation of a material using your custom change request type and key in the relevant fields. Leave the validated fields blank; in our case I entered the unit of weight but kept gross and net weight as 0.

      Rule in central governance

      5. The system displays the error message and also indicates which DQM rule raised the error, for better tracking. Traditional central governance error messages offer no such help in identifying where an error originates.

      Error description/logging

      Mass Processing

      1. Activate the BC set for product master mass processing.

        Product master mass processing BC set

        2. The standard process template SAP_MM_MAS has three steps: Edit, Validation, and Activation. However, the validation step's adapter configuration does not include any checks apart from the standard back-end customizing checks.

        Standard process template

        3. We will copy the standard template to a Z template and replace the validation adapter configuration with a Z configuration ID that has “Apply validation rules” checked.

        Custom process template
        Validation adapter configuration

        4. In Validation rules, add the mass processing usage and enable it for mass processing.

        Enable mass processing rule usage

        5. Create a process with the Z process template and define a scope; I have taken “Authorization group” as the scope.

        Scope of mass processing

        6. Pass the material number and edit the authorization group field.

        7. The validation stage raises an error originating from the rules defined in DQM.

        Validation error from DQM
        Validation error from DQM

Master Data Governance C_MDG_1909 Certification: Exam Strategies and Incorporating Practice Tests
https://www.erpqna.com/c_mdg_1909-certification-exam-strategies-practice-tests/
Tue, 16 Apr 2024

        Are you gearing up for the C_MDG_1909 certification exam? Achieving this certification can open up numerous career opportunities in master data governance. To ensure your success, it’s crucial to prepare effectively. Here are some expert strategies to help you ace the C_MDG_1909 exam, along with the importance of incorporating practice tests into your study routine.

        What Is the C_MDG_1909 Certification All About?

        C_MDG_1909, or the SAP Certified Associate – SAP Master Data Governance certification exam, confirms that the individual possesses the essential foundational knowledge necessary for the application consultant role. It demonstrates a comprehensive understanding and the proficient technical abilities needed to contribute effectively as a team member, in a supervised capacity, within project environments. The certification is advised as an initial qualification for entry-level positions.

        Study Tips for C_MDG_1909 Certification Preparation:

        Discover the C_MDG_1909 Exam Structure Well:

        Familiarize yourself with the exam format, including the number of questions, types of questions, and duration. Knowing what to expect can alleviate test anxiety and help you manage your time effectively during the exam. Remember, the C_MDG_1909 exam consists of multiple-choice questions, so practice answering similar questions beforehand.

        Follow A Study Schedule for the C_MDG_1909 Exam:

        Plan out your C_MDG_1909 study sessions to cover all the exam topics systematically. Allocate dedicated time slots for each subject area, ensuring comprehensive coverage. Consistency is key, so stick to your study schedule rigorously to maintain momentum and avoid last-minute cramming.

        Use Official Study Materials:

        Take advantage of official study guides, books, and online resources provided by SAP. These materials are specifically designed to align with the exam objectives, offering comprehensive coverage of essential topics. Incorporate these resources into your study plan for a well-rounded preparation.

        Have Practical Knowledge:

        Theory is important, but practical experience is equally crucial for mastering master data governance concepts. Utilize sandbox environments or trial versions of relevant software to gain hands-on experience with key concepts and processes. This practical approach will enhance your understanding and retention of the C_MDG_1909 exam material.

        Collaborate with Fellow Aspirants:

        Collaborate with fellow candidates preparing for the C_MDG_1909 exam by joining study groups or online forums. Engaging in discussions, sharing resources, and solving practice questions collectively can provide valuable insights and support. Additionally, explaining concepts to others can reinforce your own understanding.

        Take Regular Breaks During Preparation:

        Avoid burnout by incorporating regular breaks into your study sessions. Short, frequent breaks can help prevent mental fatigue and improve overall productivity. Use break times to recharge, relax, or engage in physical activity to keep your mind fresh and focused.

        Practice Active Learning Techniques:

        Passive reading alone may not suffice for effective learning. Instead, adopt active learning techniques such as summarizing key C_MDG_1909 exam concepts in your own words, teaching the material to someone else, or creating mnemonic devices to aid memory retention. These techniques promote deeper understanding and long-term retention of information.

        Review and Reinforce Your Knowledge:

        Don’t wait until the last minute to review the material. Periodically revisit previously covered topics to reinforce your understanding regarding the C_MDG_1909 exam and identify any areas that require further clarification. Regular review sessions will help solidify your knowledge and address any weak areas before the exam.

        Simulate Exam Conditions with C_MDG_1909 Practice Tests:

        To familiarize yourself with the C_MDG_1909 exam environment and build confidence, simulate exam conditions during your practice sessions. Set aside dedicated time slots to complete practice tests under timed conditions, mimicking the actual exam scenario. This will help you gauge your readiness and identify areas for improvement.

        Stay Positive and Confident:

        Maintain a positive mindset throughout your preparation journey. Believe in your abilities and stay confident in your C_MDG_1909 preparation efforts. Visualize success, stay motivated, and approach the exam with a calm and focused attitude. Remember, your hard work and dedication will pay off.

        Importance of Practice Tests for C_MDG_1909 Certification Preparation:

        Assessment of Knowledge with Practice Tests:

        Practice tests serve as invaluable assessment tools to gauge your level of preparedness for the C_MDG_1909 exam. By taking practice tests, you can identify your strengths and weaknesses, allowing you to tailor your study plan accordingly. This targeted approach ensures efficient use of study time and maximizes your chances of success.

        Familiarization with the Exam Format:

        Practice tests provide a simulated exam experience, allowing you to familiarize yourself with the format, structure, and types of questions featured in the C_MDG_1909 exam. This firsthand experience reduces test anxiety and builds confidence, ensuring you’re well-prepared on exam day.

        Manage Your Time Well:

        Effective time management is crucial for success in any exam. Practice tests help you refine your time management skills by challenging you to answer questions within the allocated time frame. By practicing under timed conditions, you’ll learn to prioritize tasks, allocate time wisely, and complete the exam within the specified time limit.

        Concluding Thoughts:

        Preparing for the C_MDG_1909 certification exam requires a strategic approach and dedicated effort. By following these study tips and incorporating practice tests into your preparation routine, you can enhance your knowledge, build confidence, and maximize your chances of success. Remember, consistency, practice, and a positive mindset are the keys to mastering the exam and advancing your career in master data governance. Best of luck on your certification journey!


C_MDG_1909 Certification Practice Test: Embrace the Journey of SAP Master Data Governance Success
https://www.erpqna.com/c-mdg-1909-certification-practice-test-sap-mastery-unlocked/
Mon, 04 Sep 2023

        In the rapidly evolving world of enterprise software, staying ahead of the curve is essential for career growth and success. The SAP Master Data Governance (MDG) certification, specifically the C_MDG_1909 exam, is a valuable opportunity that can significantly enhance your expertise in data management and open doors to exciting career prospects. This comprehensive guide will delve into every aspect of the C_MDG_1909 certification, providing valuable insights and actionable tips to ensure your success. So, let’s embark on this journey to master SAP Master Data Governance and ace the C_MDG_1909 certification exam.

        What Is the C_MDG_1909 Certification All About?

        C_MDG_1909 or the SAP Certified Application Associate – SAP Master Data Governance certification test confirms that the individual has the essential and central expertise needed for the role of an application consultant. The certification demonstrates that the person has a comprehensive grasp and profound technical abilities to contribute effectively to a project group under guidance. This certification assessment is suggested as an initial level credential.

        How Significant Is the C_MDG_1909 Certification?

        In today’s data-driven landscape, SAP Master Data Governance is pivotal in ensuring data accuracy, consistency, and integrity across organizations. Achieving the C_MDG_1909 certification showcases your commitment to mastering this crucial domain, making you a sought-after professional in data management. Beyond validating your skills, this certification opens doors to many career opportunities, ranging from data governance specialist roles to senior positions demanding SAP MDG expertise.

        Demystifying C_MDG_1909 Certification:

        The C_MDG_1909 certification is a crucial milestone for professionals looking to validate their proficiency in SAP Master Data Governance. This certification targets individuals who possess a strong grasp of data models, data replication, data quality, and user interfaces within the context of SAP MDG. If you want to succeed in this exam, it’s essential to understand its objectives, eligibility criteria, and format. The well-structured exam consists of carefully crafted questions that test your theoretical knowledge and practical application of SAP MDG concepts. The C_MDG_1909 exam comprises 80 questions, and you must score at least 59% to pass.
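Given the published figures (80 questions, 59% cut score), the minimum number of correct answers can be computed directly. This is a quick sketch; the assumptions that every question carries equal weight and that the threshold rounds up to a whole question are mine, not SAP's.

```python
import math

questions = 80       # total questions on the exam
pass_threshold = 59  # cut score in percent

# Assuming equal weighting per question, the minimum number of correct
# answers is the percentage threshold rounded up to a whole question.
min_correct = math.ceil(questions * pass_threshold / 100)
print(min_correct)  # 48 (since 59% of 80 is 47.2)
```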

        Preparing for Success: A Step-by-Step Approach:

        Explore Study Resources Like C_MDG_1909 Practice Tests:

        Embarking on your C_MDG_1909 certification journey requires access to high-quality study materials. Official SAP documentation, online courses, and relevant books serve as your compass in navigating the vast landscape of SAP MDG. Additionally, practice tests play a pivotal role in assessing your understanding of exam topics and pinpointing areas that require further attention. Embrace these resources to lay a strong foundation for your exam preparation.

        Master the Key Topics and Concepts Regarding the C_MDG_1909 Certification Exam:

        The core topics covered in the C_MDG_1909 exam form the pillars of your certification journey. Dive deep into data modeling, understanding entity types, relationships, attributes, and hierarchies. Explore the intricacies of data replication and learn to maintain data quality precisely. Familiarize yourself with user interfaces to manage master data effectively. Each concept carries real-world significance, equipping you to excel in your professional endeavors.

        Crafting a Study Plan: Your Path to Excellence:

        Setting Clear Goals Is Essential During the C_MDG_1909 Certification: 

        Before embarking on your certification journey, define your goals clearly. Whether you seek career advancement, personal growth, or a combination of both, setting goals gives your efforts purpose and direction. Visualize the impact of achieving C_MDG_1909 certification and leverage it to fuel your dedication throughout the preparation process.

        Follow an Effective Timetable:

        A well-structured study timetable is your roadmap to success. Break down the weeks or months leading up to the exam, allocating time for studying materials, reviewing notes, and taking practice tests. Consider dedicating specific time slots for challenging topics to ensure comprehensive coverage. Regular revision and practice are crucial to retaining information and honing your skills.

        Understanding Data Governance Processes:

        Data governance is the backbone of data integrity. Learn about data creation, change request management, and data distribution processes. Gain insights into how these processes collaborate to ensure consistent and reliable data across the organization. This knowledge empowers you to drive data-driven decision-making and maintain data quality standards.

        Practical Application and Hands-On Experience: Bridging Theory and Practice:

        Real-World Scenarios: Applying SAP MDG Concepts:

        SAP MDG concepts find real-world relevance in scenarios that demand accurate data management. Explore how data governance processes impact day-to-day operations, such as data creation for new products or customer records. Understand how adhering to best practices in data management streamlines processes, minimizes errors, and optimizes decision-making.

        Engaging in Hands-On Exercises:

        Theory is valuable, but practical application solidifies your understanding. Engage in hands-on exercises that mirror real-world challenges. Configure data replication, create and manage data models, and experience firsthand how different aspects of SAP MDG interconnect. These exercises empower you to apply theoretical knowledge in practical scenarios.

        C_MDG_1909 Practice Test and Self-Assessment: Sharpening Your Exam Skills:

        The Significance of C_MDG_1909 Practice Tests:

        C_MDG_1909 practice tests are invaluable tools for exam preparation. They simulate the exam environment, familiarizing you with question formats, time constraints, and content distribution. Regularly taking C_MDG_1909 practice tests enhances your confidence, improves time management, and identifies areas that require further study.

        Evaluating Your Performance with C_MDG_1909 Practice Test:

        After completing C_MDG_1909 practice tests, evaluate your performance objectively. Identify strengths and weaknesses, focusing on areas that need improvement. This self-assessment guides your study efforts, ensuring you allocate time to concepts that require more attention. Continued refinement of your study plan maximizes your chances of success.

        Final Weeks of Preparation: Reaching the Finish Line with Confidence:

        Effective Review and Revision Strategies:

        Prioritize effective review and revision strategies in the final weeks leading up to the exam. Summarize complex topics in concise notes, create flashcards to reinforce key concepts, and solve practice problems to solidify your understanding. Regular revision enhances memory retention and boosts your confidence for the upcoming exam.

        Managing Exam Anxiety:

        As the exam day approaches, managing anxiety is paramount. To calm your nerves, practice relaxation techniques like deep breathing and meditation. Visualize your success and focus on your preparation journey. Confidence in your abilities is the key to tackling exam-related stress with poise.

        Mastering Exam Day: Navigating the Final Frontier:

        On the day of the C_MDG_1909 exam, precisely execute your preparation plan. Arrive early at the exam center to eliminate unnecessary stress. Manage your time wisely, allocating a specific duration for each exam section. Maintain your focus and stay composed, drawing upon the confidence you’ve built throughout your preparation.

        Post-Certification Benefits and Continuous Growth with C_MDG_1909 Certification:

        Achieving the C_MDG_1909 certification is not the end; it’s the beginning of a new chapter. This certification paves the way for diverse career opportunities, from data governance roles to leadership positions. Embrace continuous learning to stay updated with evolving industry trends and technologies. The journey towards mastering SAP MDG is ongoing, and each step contributes to your growth and success.

        Concluding Thoughts:

        The C_MDG_1909 certification is your gateway to mastering SAP Master Data Governance. From understanding its significance to navigating its intricacies, this guide has equipped you with a comprehensive roadmap for success. You can confidently approach the certification exam by embracing study resources, mastering key topics, and engaging in practical exercises. Remember that the journey towards mastering SAP MDG is not just about the destination; it’s about the growth, knowledge, and expertise you acquire along the way. So, embark on this journey with determination, dedication, and the firm belief that you can leave a lasting mark in SAP Master Data Governance.


C_MDG_1909: 10 Functional Study Tips to Conquer the SAP MDG Certification Exam!
https://www.erpqna.com/c-mdg-1909-top-10-study-tips-for-sap-mdg-certification/
Wed, 10 May 2023

        C_MDG_1909, or the SAP Certified Application Associate – SAP Master Data Governance, verifies that the candidate has the fundamental and core knowledge required for the application consultant profile.

        The C_MDG_1909 certification proves that the candidate possesses a comprehensive understanding and excellent technical skills to contribute as a team member in a mentored role within a project.

        What Is the Level of the C_MDG_1909 Certification?

        The C_MDG_1909 exam is recommended as an entry-level qualification for individuals seeking to establish their expertise in SAP Master Data Governance.

        What Are Some Tips to Prepare for the C_MDG_1909 Certification?

        Preparing for the SAP C_MDG_1909 exam requires a systematic approach and a thorough understanding of the exam topics. Here are some steps to help you effectively prepare for the exam:

        #1 Review the C_MDG_1909 Exam Blueprint:

        Start by carefully reviewing the official C_MDG_1909 exam blueprint provided by SAP. This document outlines the exam topics, the weighting of each topic, and the subtopics that will be covered. It serves as a roadmap for your preparation and helps you prioritize your study efforts.

        #2 Gain A Clear Understanding of the C_MDG_1909 Exam Content:

        Gain a clear understanding of the content and concepts covered in the exam. Familiarize yourself with the key areas, such as data modeling, data governance, data quality, data replication, and data integration. Pay attention to any recent updates or changes in the C_MDG_1909 exam syllabus.

        #3 Access Official SAP Documentation: 

        SAP provides comprehensive documentation, including guides, manuals, and online resources specific to the C_MDG_1909 exam. Utilize these resources to deepen your knowledge and grasp the technical aspects of SAP Master Data Governance.

        #4 Training Courses and Learning Materials: 

        Enroll in SAP-approved training courses designed for the C_MDG_1909 exam. These courses offer structured learning paths and provide hands-on experience with SAP MDG. Additionally, explore supplementary learning materials, such as books, online tutorials, and video lectures, to enhance your understanding.

        #5 Do Rigorous Practice on C_MDG_1909 Practice Test Questions:

        Solve practice questions and sample exams to become familiar with the exam format, time constraints, and types of questions asked. SAP offers sample questions on its website, or you can explore third-party resources that provide practice exams specifically for the C_MDG_1909 exam. These practice tests will give you a clearer picture of your strengths and weaknesses, making the actual exam easier to approach.

        #6 Gain Practical Experience: 

        Gain practical experience by working on real-world scenarios or through simulated environments. If possible, seek opportunities to apply SAP Master Data Governance in a professional setting or participate in hands-on exercises provided by training courses.

        #7 Join Study Groups or Forums:

        Engage with fellow learners and professionals studying for the C_MDG_1909 exam. Join online study groups, forums, or SAP community platforms where you can ask questions, share insights, and benefit from the experiences of others. Collaboration and knowledge sharing can enrich your understanding and help clarify any doubts.

        #8 Dedicate Time for Every C_MDG_1909 Syllabus Domain:

        Follow a study schedule that allocates sufficient time for each exam topic. Practice effective time management to ensure you cover all areas thoroughly. Develop a C_MDG_1909 exam strategy that allows you to allocate time wisely during the actual exam, focusing on questions you are confident about first and revisiting more challenging ones later.

        #9 Are You Ready for the C_MDG_1909 Certification Exam?

        As the exam date approaches, revise key concepts and review your notes. Take mock exams to assess your readiness and identify areas that require further attention. Analyze your performance in these practice exams to identify strengths and weaknesses.

        #10 Exam Day Preparation:

        On the day of the C_MDG_1909 exam, ensure you have a good night’s sleep and arrive at the exam center well in advance. Read and follow all instructions provided during the exam. Stay calm, manage your time effectively, and carefully analyze each question before selecting your answer.

        What Is SAP Master Data Governance?

        SAP Master Data Governance (MDG) is a comprehensive solution that enables organizations to manage and govern their master data effectively. Master data refers to the critical business information shared across various systems and applications within an organization, such as customer, product, supplier, and financial data.

        How Does SAP MDG Help Organizations?

        The goal of SAP MDG is to establish a centralized and standardized approach to managing master data, ensuring its accuracy, consistency, and integrity across the enterprise. By implementing SAP MDG, organizations can overcome the challenges associated with decentralized and inconsistent master data, which can lead to data errors, redundancy, and inefficiencies.

        SAP MDG provides a robust framework and tools to streamline and automate master data management processes. It enables organizations to define data models, validation rules, and governance policies specific to their business requirements. Through workflows and approval processes, SAP MDG ensures that master data changes are properly reviewed, authorized, and audited.
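The workflow-and-approval pattern described above can be sketched as a tiny state machine. This is illustrative Python only: SAP MDG models this via change requests and SAP Business Workflow, and every name in the sketch is invented. The point is that a change must pass through review states before it takes effect, and every transition is audited.

```python
# Minimal sketch (illustrative, not SAP MDG's actual workflow engine):
# a master-data change passes through approval states and leaves an
# audit trail of every transition.

ALLOWED = {
    "draft": {"submitted"},
    "submitted": {"approved", "rejected"},
    "approved": {"activated"},
}

class ChangeRequest:
    def __init__(self, cr_id, payload):
        self.cr_id = cr_id
        self.payload = payload       # the proposed master-data change
        self.state = "draft"
        self.audit = []              # (from_state, to_state, user) tuples

    def transition(self, new_state, user):
        # Reject any transition the governance policy does not allow.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.audit.append((self.state, new_state, user))
        self.state = new_state
```

Only once the request reaches "activated" would the change be written to the master data, which is the essence of "reviewed, authorized, and audited".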

        Bottom Line:

        By implementing SAP Master Data Governance, organizations can achieve greater data consistency, improved data quality, enhanced operational efficiency, and better decision-making. The solution empowers businesses to establish a solid foundation for their data management processes and supports digital transformation initiatives by providing reliable and trustworthy master data across the enterprise. SAP MDG's wide adoption makes it an essential solution, and expertise in it can open various career opportunities for aspirants. Therefore, study hard and earn the C_MDG_1909 certification.


Material Master Data – compare data in two systems
https://www.erpqna.com/material-master-data-compare-data-in-two-systems/
Wed, 22 Feb 2023

        In the Master Data Hub scenario, the MDG system is the “source of truth”: master data is replicated to satellite systems and can be extended there, but in general the common part should not be changed locally, because at the next replication it will be overwritten with the version from MDG.

        In some cases, it may happen that either a naughty user or an external interface modifies part of the data that should not be changed, and such situations often need to be detected as soon as possible.

        Tools to compare master data between systems exist on the market, but they are usually not free, require complex configuration, and take time to learn.

        In this blog I provide a solution for comparing Material Master Data between two systems with relatively little coding (about 1,000 lines of ABAP in total) and “some clicks of the mouse”. The code can be implemented in a sandbox system and is ready to compare data in any systems to which you set up an RFC connection.

        How it works

        A Delta Collection Program connects to two selected systems (RFC destinations must be set up in SM59) and reads the Material Master Data tables from System1 and System2 (with the standard RFC function module RFC_READ_TABLE – read note 382318 carefully). After a field-by-field comparison, the calculated delta is shown in the output.

        As the Material Master Data tables are usually huge and the delta collection takes time, the output of the program mentioned above can be saved in a Custom Delta Table.
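The field-by-field comparison at the heart of the Delta Collection Program can be sketched generically. This is Python for readability only; the blog's real implementation is the ABAP class shown further down, and the sample row contents are merely illustrative. Each differing field yields one delta record shaped like the Custom Delta Table: a composite key, the field name, and the two values. The "/" key delimiter mirrors the gc_key_delimeter constant in the ABAP class.

```python
# Generic sketch (Python) of the field-by-field delta collection that
# the blog implements in ABAP. Rows are dicts; key_fields identifies
# the table key, excl_fields lets you skip uninteresting columns.

def collect_delta(sys1_rows, sys2_rows, key_fields, excl_fields=()):
    """Return one delta record per (key, field) that differs."""
    def key_of(row):
        # Composite key joined with "/", like gc_key_delimeter in ABAP.
        return "/".join(str(row[f]) for f in key_fields)

    s1 = {key_of(r): r for r in sys1_rows}
    s2 = {key_of(r): r for r in sys2_rows}
    delta = []
    for rkey in sorted(s1.keys() | s2.keys()):
        r1, r2 = s1.get(rkey, {}), s2.get(rkey, {})
        fields = (r1.keys() | r2.keys()) - set(key_fields) - set(excl_fields)
        for field in sorted(fields):
            v1, v2 = r1.get(field), r2.get(field)
            if v1 != v2:
                # Matches the Custom Delta Table layout: key, field, v1, v2.
                delta.append({"rkey": rkey, "tfname": field,
                              "v1": v1, "v2": v2})
    return delta
```

A record missing entirely in one system simply shows up as deltas with an empty value on that side, which is also how you would spot locally created or deleted materials.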

        For a nice visualization of the comparison, a simple FPM application can be created (though you can also use MS Excel).

        As a result we get something like the below:

        Where Delta Collection Program looks like:

        and the Custom Delta Table:

        How to set it up

        (I’ve tried to provide detailed instructions to make it clear for person with just basic ABAP knowledge)

        First, create the Custom Delta Table where the collected delta records will be stored: go to SE11 and create the table as below (to keep it simple I’ve used generic data elements, but of course you can set up your own with nicer labels).

        Then the class used to collect the data and calculate the delta:

        Simply create a new class ZCL_TABLES_COMPARE, switch to "Source Code-Based" mode, and paste the code below (then save and activate). See the comments at the call of 'RFC_READ_TABLE'.

        CLASS zcl_tables_compare DEFINITION
          PUBLIC
          FINAL
          CREATE PUBLIC .
        
          PUBLIC SECTION.
        
            TYPES:
              ty_fields_t TYPE STANDARD TABLE OF rfc_db_fld WITH DEFAULT KEY .
            TYPES:
              BEGIN OF ty_log,
                rkey   TYPE ztab_compare-rkey,
                tfname TYPE ztab_compare-tfname,
                v1     TYPE ztab_compare-v1,
                v2     TYPE ztab_compare-v2,
                fdescr TYPE string,
              END OF ty_log .
            TYPES:
              ty_log_t TYPE STANDARD TABLE OF ty_log WITH KEY rkey tfname .
        
            CONSTANTS gc_key_delimeter TYPE char1 VALUE '/' ##NO_TEXT.
        
            CLASS-METHODS run_comparison
              IMPORTING
                !iv_rfc_dest1    TYPE rfcdest
                !iv_rfc_dest2    TYPE rfcdest
                !iv_tabname      TYPE tabname
                !iv_key_fields   TYPE string
                !iv_excl_fields  TYPE string OPTIONAL
                !iv_range_from   TYPE char100
                !iv_range_to     TYPE char100
                !iv_ec_s1        TYPE abap_bool DEFAULT abap_true
                !iv_ec_s2        TYPE abap_bool DEFAULT abap_true
              RETURNING
                VALUE(rt_result) TYPE ty_log_t .
            CLASS-METHODS get_table_key_fields
              IMPORTING
                !iv_tabname      TYPE tabname
              RETURNING
                VALUE(rv_result) TYPE string .
            CLASS-METHODS get_table_field_descr
              IMPORTING
                !iv_tfname       TYPE ztab_compare-tfname
              RETURNING
                VALUE(rv_result) TYPE string .
          PROTECTED SECTION.
          PRIVATE SECTION.
        
            CLASS-METHODS is_status_different
              IMPORTING
                !iv_s1           TYPE any
                !iv_s2           TYPE any
              RETURNING
                VALUE(rv_result) TYPE abap_bool .
            CLASS-METHODS compare_tables
              IMPORTING
                !iv_tabname TYPE tabname
                !ir_tab1    TYPE REF TO data
                !ir_tab2    TYPE REF TO data
                !iv_ec_s1   TYPE abap_bool
                !iv_ec_s2   TYPE abap_bool
              CHANGING
                !ct_log     TYPE ty_log_t .
            CLASS-METHODS read_table
              IMPORTING
                !iv_rfc_dest     TYPE rfcdest
                !iv_tabname      TYPE tabname
                !iv_key_fields   TYPE string
                !iv_excl_fields  TYPE string OPTIONAL
                !iv_range_from   TYPE char100 OPTIONAL
                !iv_range_to     TYPE char100 OPTIONAL
              RETURNING
                VALUE(rt_result) TYPE REF TO data .
        ENDCLASS.
        
        
        
        CLASS ZCL_TABLES_COMPARE IMPLEMENTATION.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Static Private Method ZCL_TABLES_COMPARE=>COMPARE_TABLES
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IV_TABNAME                     TYPE        TABNAME
        * | [--->] IR_TAB1                        TYPE REF TO DATA
        * | [--->] IR_TAB2                        TYPE REF TO DATA
        * | [--->] IV_EC_S1                       TYPE        ABAP_BOOL
        * | [--->] IV_EC_S2                       TYPE        ABAP_BOOL
        * | [<-->] CT_LOG                         TYPE        TY_LOG_T
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD compare_tables.
            DEFINE copy_tab.
              ASSIGN &1->* TO <lt_data>.
              IF sy-subrc EQ 0.
                &2 = CORRESPONDING #( <lt_data> ).
                SORT &2.
              ENDIF.
            END-OF-DEFINITION.
        
            DEFINE check1.
              IF sy-subrc EQ 0.
                IF &1 NE &2.
                  LOOP AT lo_str_descr->components ASSIGNING <ls_comp>.
                    ASSIGN COMPONENT <ls_comp>-name OF STRUCTURE &1 TO <lv_value1>.
                    ASSIGN COMPONENT <ls_comp>-name OF STRUCTURE &2 TO <lv_value2>.
                    IF ';VPSTA;PSTAT;' CS |;{ <ls_comp>-name };|. " status field
                      IF abap_true EQ is_status_different( iv_s1 = <lv_value1> iv_s2 = <lv_value2> ).
                        APPEND VALUE #( rkey = lv_key tfname = |{ iv_tabname }-{ <ls_comp>-name }| v1 = <lv_value1> v2 = <lv_value2> ) TO ct_log.
                      ENDIF.
                    ELSE.
                      IF <lv_value1> NE <lv_value2>.
                        APPEND VALUE #( rkey = lv_key tfname = |{ iv_tabname }-{ <ls_comp>-name }| v1 = <lv_value1> v2 = <lv_value2> ) TO ct_log.
                      ENDIF.
                    ENDIF.
                  ENDLOOP.
                ENDIF.
              ELSE.
                IF iv_ec_s1 EQ abap_true.
                  APPEND VALUE #( rkey = lv_key tfname = iv_tabname v1 = 'record exists' v2 = 'does not exist' ) TO ct_log. " key does not exist in System 2
                ENDIF.
              ENDIF.
            END-OF-DEFINITION.
        
            DEFINE check2.
              IF sy-subrc NE 0.
                APPEND VALUE #( rkey = lv_key tfname = iv_tabname v1 = 'does not exist' v2 = 'record exists' ) TO ct_log. " key does not exist in System 1
              ENDIF.
            END-OF-DEFINITION.
        
            DATA: lt_mara1     TYPE STANDARD TABLE OF mara WITH KEY matnr,
                  lt_mara2     LIKE lt_mara1,
                  lt_marc1     TYPE STANDARD TABLE OF marc WITH KEY matnr werks,
                  lt_marc2     LIKE lt_marc1,
                  lt_makt1     TYPE STANDARD TABLE OF makt WITH KEY matnr spras,
                  lt_makt2     LIKE lt_makt1,
                  lt_mard1     TYPE STANDARD TABLE OF mard WITH KEY matnr werks lgort,
                  lt_mard2     LIKE lt_mard1,
                  lt_marm1     TYPE STANDARD TABLE OF marm WITH KEY matnr meinh,
                  lt_marm2     LIKE lt_marm1,
                  lt_mbew1     TYPE STANDARD TABLE OF mbew WITH KEY matnr bwkey bwtar,
                  lt_mbew2     LIKE lt_mbew1,
              lt_mean1     TYPE STANDARD TABLE OF mean WITH KEY matnr meinh ean11, "this one is a special case - LFNUM may differ between systems
                  lt_mean2     LIKE lt_mean1,
                  lt_mlgn1     TYPE STANDARD TABLE OF mlgn WITH KEY matnr lgnum,
                  lt_mlgn2     LIKE lt_mlgn1,
                  lt_mlgt1     TYPE STANDARD TABLE OF mlgt WITH KEY matnr lgnum lgtyp,
                  lt_mlgt2     LIKE lt_mlgt1,
                  lt_mvke1     TYPE STANDARD TABLE OF mvke WITH KEY matnr vkorg vtweg,
                  lt_mvke2     LIKE lt_mvke1,
                  lt_qmat1     TYPE STANDARD TABLE OF qmat WITH KEY art matnr werks,
                  lt_qmat2     LIKE lt_qmat1,
                  lo_str_descr TYPE REF TO cl_abap_structdescr,
                  lv_key       TYPE string.
        
            FIELD-SYMBOLS: <lt_data>   TYPE ANY TABLE,
                           <ls_comp>   TYPE abap_compdescr,
                           <lv_value1> TYPE any,
                           <lv_value2> TYPE any.
        
            lo_str_descr = CAST #( cl_abap_structdescr=>describe_by_name( iv_tabname ) ).
        
            CASE iv_tabname.
              WHEN 'MARA'.
                copy_tab ir_tab1 lt_mara1.
                copy_tab ir_tab2 lt_mara2.
        
                LOOP AT lt_mara1 ASSIGNING FIELD-SYMBOL(<ls_mara1>).
                  lv_key = |{ <ls_mara1>-matnr ALPHA = OUT }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_mara2 WITH KEY matnr = <ls_mara1>-matnr ASSIGNING FIELD-SYMBOL(<ls_mara2>) BINARY SEARCH.
                  check1 <ls_mara1> <ls_mara2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
                  LOOP AT lt_mara2 ASSIGNING <ls_mara2>.
                lv_key = |{ <ls_mara2>-matnr ALPHA = OUT }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_mara1 WITH KEY matnr = <ls_mara2>-matnr TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MARC'.
                copy_tab ir_tab1 lt_marc1.
                copy_tab ir_tab2 lt_marc2.
        
                LOOP AT lt_marc1 ASSIGNING FIELD-SYMBOL(<ls_marc1>).
                  lv_key = |{ <ls_marc1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_marc1>-werks }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_marc2 WITH KEY matnr = <ls_marc1>-matnr werks = <ls_marc1>-werks ASSIGNING FIELD-SYMBOL(<ls_marc2>) BINARY SEARCH.
                  check1 <ls_marc1> <ls_marc2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
                  LOOP AT lt_marc2 ASSIGNING <ls_marc2>.
                    lv_key = |{ <ls_marc2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_marc2>-werks }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_marc1 WITH KEY matnr = <ls_marc2>-matnr werks = <ls_marc2>-werks TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MAKT'.
                copy_tab ir_tab1 lt_makt1.
                copy_tab ir_tab2 lt_makt2.
        
                LOOP AT lt_makt1 ASSIGNING FIELD-SYMBOL(<ls_makt1>).
                  lv_key = |{ <ls_makt1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_makt1>-spras }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_makt2 WITH KEY matnr = <ls_makt1>-matnr spras = <ls_makt1>-spras ASSIGNING FIELD-SYMBOL(<ls_makt2>) BINARY SEARCH.
                  check1 <ls_makt1> <ls_makt2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
                  LOOP AT lt_makt2 ASSIGNING <ls_makt2>.
                    lv_key = |{ <ls_makt2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_makt2>-spras }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_makt1 WITH KEY matnr = <ls_makt2>-matnr spras = <ls_makt2>-spras TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MARD'.
                copy_tab ir_tab1 lt_mard1.
                copy_tab ir_tab2 lt_mard2.
        
                LOOP AT lt_mard1 ASSIGNING FIELD-SYMBOL(<ls_mard1>).
                  lv_key = |{ <ls_mard1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mard1>-werks }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mard1>-lgort }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_mard2 WITH KEY matnr = <ls_mard1>-matnr werks = <ls_mard1>-werks lgort = <ls_mard1>-lgort ASSIGNING FIELD-SYMBOL(<ls_mard2>) BINARY SEARCH.
                  check1 <ls_mard1> <ls_mard2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
              LOOP AT lt_mard2 ASSIGNING <ls_mard2>. " check from the other side to detect missing records in System 1
                    lv_key = |{ <ls_mard2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mard2>-werks }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mard2>-lgort }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_mard1 WITH KEY matnr = <ls_mard2>-matnr werks = <ls_mard2>-werks lgort = <ls_mard2>-lgort TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MARM'.
                copy_tab ir_tab1 lt_marm1.
                copy_tab ir_tab2 lt_marm2.
        
                LOOP AT lt_marm1 ASSIGNING FIELD-SYMBOL(<ls_marm1>).
                  lv_key = |{ <ls_marm1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_marm1>-meinh }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_marm2 WITH KEY matnr = <ls_marm1>-matnr meinh = <ls_marm1>-meinh ASSIGNING FIELD-SYMBOL(<ls_marm2>) BINARY SEARCH.
                  check1 <ls_marm1> <ls_marm2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
                  LOOP AT lt_marm2 ASSIGNING <ls_marm2>.
                    lv_key = |{ <ls_marm2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_marm2>-meinh }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_marm1 WITH KEY matnr = <ls_marm2>-matnr meinh = <ls_marm2>-meinh TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MBEW'.
                copy_tab ir_tab1 lt_mbew1.
                copy_tab ir_tab2 lt_mbew2.
        
                LOOP AT lt_mbew1 ASSIGNING FIELD-SYMBOL(<ls_mbew1>).
                  lv_key = |{ <ls_mbew1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mbew1>-bwkey }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mbew1>-bwtar }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_mbew2 WITH KEY matnr = <ls_mbew1>-matnr bwkey = <ls_mbew1>-bwkey bwtar = <ls_mbew1>-bwtar ASSIGNING FIELD-SYMBOL(<ls_mbew2>) BINARY SEARCH.
                  check1 <ls_mbew1> <ls_mbew2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
              LOOP AT lt_mbew2 ASSIGNING <ls_mbew2>. " check from the other side to detect missing records in System 1
                    lv_key = |{ <ls_mbew2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mbew2>-bwkey }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mbew2>-bwtar }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_mbew1 WITH KEY matnr = <ls_mbew2>-matnr bwkey = <ls_mbew2>-bwkey bwtar = <ls_mbew2>-bwtar TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MEAN'.
                copy_tab ir_tab1 lt_mean1.
                copy_tab ir_tab2 lt_mean2.
        
                LOOP AT lt_mean1 ASSIGNING FIELD-SYMBOL(<ls_mean1>).
                  lv_key = |{ <ls_mean1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mean1>-meinh }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mean1>-ean11 }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_mean2 WITH KEY matnr = <ls_mean1>-matnr meinh = <ls_mean1>-meinh ean11 = <ls_mean1>-ean11 ASSIGNING FIELD-SYMBOL(<ls_mean2>) BINARY SEARCH.
              CLEAR <ls_mean1>-lfnum. " we don't want to compare this field, it may differ between the systems
              IF sy-subrc EQ 0. " <ls_mean2> is only assigned when the READ TABLE above found a match
                CLEAR <ls_mean2>-lfnum.
              ENDIF.
                  check1 <ls_mean1> <ls_mean2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
              LOOP AT lt_mean2 ASSIGNING <ls_mean2>. " check from the other side to detect missing records in System 1
                    lv_key = |{ <ls_mean2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mean2>-meinh }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mean2>-ean11 }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_mean1 WITH KEY matnr = <ls_mean2>-matnr meinh = <ls_mean2>-meinh ean11 = <ls_mean2>-ean11 TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MLGN'.
                copy_tab ir_tab1 lt_mlgn1.
                copy_tab ir_tab2 lt_mlgn2.
        
                LOOP AT lt_mlgn1 ASSIGNING FIELD-SYMBOL(<ls_mlgn1>).
                  lv_key = |{ <ls_mlgn1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mlgn1>-lgnum }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_mlgn2 WITH KEY matnr = <ls_mlgn1>-matnr lgnum = <ls_mlgn1>-lgnum ASSIGNING FIELD-SYMBOL(<ls_mlgn2>) BINARY SEARCH.
                  check1 <ls_mlgn1> <ls_mlgn2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
                  LOOP AT lt_mlgn2 ASSIGNING <ls_mlgn2>.
                    lv_key = |{ <ls_mlgn2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mlgn2>-lgnum }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_mlgn1 WITH KEY matnr = <ls_mlgn2>-matnr lgnum = <ls_mlgn2>-lgnum TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MLGT'.
                copy_tab ir_tab1 lt_mlgt1.
                copy_tab ir_tab2 lt_mlgt2.
        
                LOOP AT lt_mlgt1 ASSIGNING FIELD-SYMBOL(<ls_mlgt1>).
                  lv_key = |{ <ls_mlgt1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mlgt1>-lgnum }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mlgt1>-lgtyp }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_mlgt2 WITH KEY matnr = <ls_mlgt1>-matnr lgnum = <ls_mlgt1>-lgnum lgtyp = <ls_mlgt1>-lgtyp ASSIGNING FIELD-SYMBOL(<ls_mlgt2>) BINARY SEARCH.
                  check1 <ls_mlgt1> <ls_mlgt2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
              LOOP AT lt_mlgt2 ASSIGNING <ls_mlgt2>. " check from the other side to detect missing records in System 1
                    lv_key = |{ <ls_mlgt2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mlgt2>-lgnum }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mlgt2>-lgtyp }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_mlgt1 WITH KEY matnr = <ls_mlgt2>-matnr lgnum = <ls_mlgt2>-lgnum lgtyp = <ls_mlgt2>-lgtyp TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'MVKE'.
                copy_tab ir_tab1 lt_mvke1.
                copy_tab ir_tab2 lt_mvke2.
        
                LOOP AT lt_mvke1 ASSIGNING FIELD-SYMBOL(<ls_mvke1>).
                  lv_key = |{ <ls_mvke1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mvke1>-vkorg }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mvke1>-vtweg }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_mvke2 WITH KEY matnr = <ls_mvke1>-matnr vkorg = <ls_mvke1>-vkorg vtweg = <ls_mvke1>-vtweg ASSIGNING FIELD-SYMBOL(<ls_mvke2>) BINARY SEARCH.
                  check1 <ls_mvke1> <ls_mvke2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
              LOOP AT lt_mvke2 ASSIGNING <ls_mvke2>. " check from the other side to detect missing records in System 1
                    lv_key = |{ <ls_mvke2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mvke2>-vkorg }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_mvke2>-vtweg }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_mvke1 WITH KEY matnr = <ls_mvke2>-matnr vkorg = <ls_mvke2>-vkorg vtweg = <ls_mvke2>-vtweg TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
              WHEN 'QMAT'.
                copy_tab ir_tab1 lt_qmat1.
                copy_tab ir_tab2 lt_qmat2.
        
                LOOP AT lt_qmat1 ASSIGNING FIELD-SYMBOL(<ls_qmat1>).
                  lv_key = |{ <ls_qmat1>-art }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_qmat1>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_qmat1>-werks }|.
                  CONDENSE lv_key NO-GAPS.
                  READ TABLE lt_qmat2 WITH KEY art = <ls_qmat1>-art matnr = <ls_qmat1>-matnr werks = <ls_qmat1>-werks ASSIGNING FIELD-SYMBOL(<ls_qmat2>) BINARY SEARCH.
                  check1 <ls_qmat1> <ls_qmat2>.
                ENDLOOP.
        
                IF iv_ec_s2 EQ abap_true.
              LOOP AT lt_qmat2 ASSIGNING <ls_qmat2>. " check from the other side to detect missing records in System 1
                    lv_key = |{ <ls_qmat2>-art }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_qmat2>-matnr ALPHA = OUT }{ zcl_tables_compare=>gc_key_delimeter }{ <ls_qmat2>-werks }|.
                    CONDENSE lv_key NO-GAPS.
                    READ TABLE lt_qmat1 WITH KEY art = <ls_qmat2>-art matnr = <ls_qmat2>-matnr werks = <ls_qmat2>-werks TRANSPORTING NO FIELDS BINARY SEARCH.
                    check2.
                  ENDLOOP.
                ENDIF.
        
            ENDCASE.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Static Public Method ZCL_TABLES_COMPARE=>GET_TABLE_FIELD_DESCR
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IV_TFNAME                      TYPE        ZTAB_COMPARE-TFNAME
        * | [<-()] RV_RESULT                      TYPE        STRING
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD get_table_field_descr.
            TYPES: BEGIN OF lty_tf_name,
                     tfname TYPE ztab_compare-tfname,
                     fdescr TYPE string,
                   END OF lty_tf_name.
        
            STATICS: st_fields TYPE HASHED TABLE OF lty_tf_name WITH UNIQUE KEY tfname,
                     sv_tables TYPE string.
        
            READ TABLE st_fields ASSIGNING FIELD-SYMBOL(<ls_fld>) WITH TABLE KEY tfname = iv_tfname.
            IF sy-subrc EQ 0.
              rv_result = <ls_fld>-fdescr.
              RETURN.
            ENDIF.
        
            DATA(lv_tname) = substring_before( val = iv_tfname sub = '-' ).
            IF lv_tname IS INITIAL OR sv_tables CS lv_tname.
              RETURN.
            ENDIF.
        
            sv_tables = |{ sv_tables };{ lv_tname }|.
        
            DATA(lo_struc_descr) = CAST cl_abap_structdescr( cl_abap_structdescr=>describe_by_name( lv_tname ) ).
            LOOP AT lo_struc_descr->get_ddic_field_list( p_langu = sy-langu ) ASSIGNING FIELD-SYMBOL(<ls_comp>).
              INSERT VALUE #( tfname = |{ lv_tname }-{ <ls_comp>-fieldname }| fdescr = <ls_comp>-scrtext_l ) INTO TABLE st_fields.
            ENDLOOP.
        
            rv_result = VALUE #( st_fields[ tfname = iv_tfname ]-fdescr OPTIONAL ).
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Static Public Method ZCL_TABLES_COMPARE=>GET_TABLE_KEY_FIELDS
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IV_TABNAME                     TYPE        TABNAME
        * | [<-()] RV_RESULT                      TYPE        STRING
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD get_table_key_fields.
            SELECT fieldname INTO TABLE @DATA(lt_key_fields) FROM dd03l
              WHERE tabname EQ @iv_tabname AND keyflag EQ @abap_true AND fieldname NE 'MANDT'
              ORDER BY position.
        
            CONCATENATE LINES OF lt_key_fields INTO rv_result SEPARATED BY ';'.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Static Private Method ZCL_TABLES_COMPARE=>IS_STATUS_DIFFERENT
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IV_S1                          TYPE        ANY
        * | [--->] IV_S2                          TYPE        ANY
        * | [<-()] RV_RESULT                      TYPE        ABAP_BOOL
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD is_status_different.
            DATA: lv_pos TYPE i VALUE 0,
                  lv_len TYPE i.
        
            rv_result = abap_false.
            lv_len = strlen( iv_s1 ).
        
            IF lv_len NE strlen( iv_s2 ).
              rv_result = abap_true. RETURN.
            ENDIF.
        
            DO.
              IF lv_pos GE lv_len.
                RETURN.
              ENDIF.
              IF iv_s2 NS iv_s1+lv_pos(1).
                rv_result = abap_true. RETURN.
              ENDIF.
              lv_pos = lv_pos + 1.
            ENDDO.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Static Private Method ZCL_TABLES_COMPARE=>READ_TABLE
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IV_RFC_DEST                    TYPE        RFCDEST
        * | [--->] IV_TABNAME                     TYPE        TABNAME
        * | [--->] IV_KEY_FIELDS                  TYPE        STRING
        * | [--->] IV_EXCL_FIELDS                 TYPE        STRING(optional)
        * | [--->] IV_RANGE_FROM                  TYPE        CHAR100(optional)
        * | [--->] IV_RANGE_TO                    TYPE        CHAR100(optional)
        * | [<-()] RT_RESULT                      TYPE REF TO DATA
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD read_table.
            DATA: lt_options    TYPE STANDARD TABLE OF rfc_db_opt WITH DEFAULT KEY,
                  lt_all_fields TYPE ty_fields_t,
                  lt_key_fields TYPE ty_fields_t,
                  lt_sel_fields TYPE ty_fields_t,
                  lt_pkg_fields TYPE ty_fields_t,
                  lt_data       TYPE STANDARD TABLE OF tab512 WITH DEFAULT KEY,
                  lt_comp       TYPE cl_abap_structdescr=>component_table,
                  lr_tab        TYPE REF TO data,
                  lr_wa         TYPE REF TO data,
                  lt_tr_fields  TYPE ty_fields_t,
                  lv_rowcount   TYPE soid-accnt VALUE 0.
        
            FIELD-SYMBOLS: <lt_out> TYPE INDEX TABLE.
        
        
            DATA: lt_packages TYPE stringtab,
                  lv_offset   TYPE i.
        
        " first fetch the table's field list from the remote system (no_data = abap_true returns only the metadata, no rows)
            CALL FUNCTION 'RFC_READ_TABLE'
              DESTINATION iv_rfc_dest
              EXPORTING
                query_table = iv_tabname
                no_data     = abap_true
              TABLES
                fields      = lt_all_fields
              EXCEPTIONS
                OTHERS      = 7.
        
            " remove excluded fields and get key fields
            LOOP AT lt_all_fields ASSIGNING FIELD-SYMBOL(<ls_field>).
              IF iv_key_fields CS <ls_field>-fieldname.
                APPEND <ls_field> TO lt_key_fields.
                CONTINUE.
              ENDIF.
              IF iv_excl_fields CS <ls_field>-fieldname.
                CONTINUE.
              ENDIF.
              APPEND <ls_field> TO lt_sel_fields.
            ENDLOOP.
        
            ASSERT lt_key_fields IS NOT INITIAL.
        
            " ... and prepare selection range for WHERE clause
            DATA(lv_first_key) = VALUE #( lt_key_fields[ 1 ]-fieldname ).
            IF iv_range_from IS INITIAL AND iv_range_to IS INITIAL.
              lt_options = VALUE #( ( text = |1 EQ 1| ) ).  " select all but with some fuse
              lv_rowcount = 100000.
            ELSEIF iv_range_from IS NOT INITIAL AND iv_range_to IS NOT INITIAL.
              lt_options = VALUE #( ( text = |{ lv_first_key } BETWEEN '{ iv_range_from }' AND '{ iv_range_to }'| ) ).
            ELSEIF iv_range_from IS NOT INITIAL.
              lt_options = VALUE #( ( text = |{ lv_first_key } EQ '{ iv_range_from }'| ) ).
            ELSE. " upper limit provided without lower
              ASSERT 1 EQ 2.
            ENDIF.
        
        *>>> prepare output table
            LOOP AT lt_all_fields ASSIGNING <ls_field>.
              APPEND VALUE #( name = <ls_field>-fieldname type = cl_abap_elemdescr=>get_c( p_length = CONV #( <ls_field>-length ) ) ) TO lt_comp.
            ENDLOOP.
        
            DATA(lo_tab_struc) = cl_abap_structdescr=>create( p_components = lt_comp ).
            DATA(lt_ddic) = CAST cl_abap_structdescr( cl_abap_structdescr=>describe_by_name( iv_tabname ) )->get_ddic_field_list( ).
            SORT lt_ddic BY fieldname.
        
            CREATE DATA lr_wa TYPE HANDLE lo_tab_struc.
            ASSIGN lr_wa->* TO FIELD-SYMBOL(<ls_wa_out>).
        
            DATA(lt_keys) = VALUE abap_table_keydescr_tab( ( name = 'MAIN'
                                                             is_primary = abap_true
                                                             is_unique = abap_true
                                                             access_kind = cl_abap_tabledescr=>tablekind_sorted
                                                             key_kind = cl_abap_tabledescr=>keydefkind_user "keydefkind_user
                                                             components = CORRESPONDING #( lt_key_fields MAPPING name = fieldname ) ) ).
            TRY.
                DATA(lo_tab_descr) = cl_abap_tabledescr=>create_with_keys( p_line_type = lo_tab_struc
                                                                           p_keys = lt_keys ).
              CATCH cx_sy_table_creation INTO DATA(lx_error).
                RETURN.
            ENDTRY.
            CREATE DATA lr_tab TYPE HANDLE lo_tab_descr.
        
            ASSIGN lr_tab->* TO <lt_out>.
        *<<<
        
            " collect data in packages up to 512 characters
            DO.
              IF lt_sel_fields IS INITIAL.
                EXIT. " DO..ENDDO
              ENDIF.
        
              CLEAR: lv_offset, lt_pkg_fields, lt_tr_fields, lt_data.
        
              LOOP AT lt_key_fields ASSIGNING <ls_field>.
                lv_offset = lv_offset + <ls_field>-length.
                APPEND <ls_field> TO lt_pkg_fields.
              ENDLOOP.
        
              LOOP AT lt_sel_fields ASSIGNING <ls_field>.
                lv_offset = lv_offset + <ls_field>-length.
                IF lv_offset GE 512.
                  EXIT. "loop
                ENDIF.
        
                APPEND <ls_field> TO: lt_pkg_fields, lt_tr_fields.
                DELETE lt_sel_fields USING KEY loop_key.
              ENDLOOP.
        
              CALL FUNCTION 'RFC_READ_TABLE'
                DESTINATION iv_rfc_dest
                EXPORTING
                  query_table = iv_tabname
                  rowcount    = lv_rowcount
                TABLES
                  options     = lt_options
                  fields      = lt_pkg_fields
                  data        = lt_data
                EXCEPTIONS
                  OTHERS      = 7.
        
              IF sy-subrc <> 0.
        * Implement suitable error handling here
              ENDIF.
        
              LOOP AT lt_data ASSIGNING FIELD-SYMBOL(<ls_data>).
                CLEAR <ls_wa_out>.
                LOOP AT lt_pkg_fields ASSIGNING <ls_field>.
                  " the RFC FM is a bit crappy, when e.g. field is of type P length 3 dec 1 (example MDA MARC-LGRAD for 23430/0400)
                  " it tries to convert the 99.0 into CHAR3 and in result there is set '*.0' there !?!?
                  " in fact the FM is obsolete (see note 382318), see also something new in notes 2246160 and 3139000
                  ASSIGN COMPONENT <ls_field>-fieldname OF STRUCTURE <ls_wa_out> TO FIELD-SYMBOL(<lv_fvalue>).
                  IF sy-subrc EQ 0.
                    READ TABLE lt_ddic ASSIGNING FIELD-SYMBOL(<ls_ddic>) WITH KEY fieldname = <ls_field>-fieldname.
                    IF sy-subrc EQ 0 AND <ls_ddic>-inttype EQ 'P'.
                      IF <ls_data>-wa+<ls_field>-offset(<ls_field>-length) CS '*'.
                        <lv_fvalue> = '6.9'. " set here something not initial to at least detect comparison with 0
                      ELSE.
                        <lv_fvalue> = <ls_data>-wa+<ls_field>-offset(<ls_field>-length).
                      ENDIF.
                    ELSE.
                      <lv_fvalue> = <ls_data>-wa+<ls_field>-offset(<ls_field>-length).
                    ENDIF.
                  ENDIF.
                ENDLOOP.
                READ TABLE <lt_out> FROM <ls_wa_out> ASSIGNING FIELD-SYMBOL(<ls_found>).
                IF sy-subrc EQ 0.
                  LOOP AT lt_tr_fields ASSIGNING <ls_field>.
                    ASSIGN COMPONENT <ls_field>-fieldname OF STRUCTURE <ls_wa_out> TO FIELD-SYMBOL(<lv_src>).
                    IF sy-subrc EQ 0.
                      ASSIGN COMPONENT <ls_field>-fieldname OF STRUCTURE <ls_found> TO FIELD-SYMBOL(<lv_dst>).
                      IF sy-subrc EQ 0.
                        <lv_dst> = <lv_src>.
                      ENDIF.
                    ENDIF.
                  ENDLOOP.
                ELSE.
                  INSERT <ls_wa_out> INTO TABLE <lt_out>.
                ENDIF.
              ENDLOOP.
        
            ENDDO.
        
            rt_result = lr_tab.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Static Public Method ZCL_TABLES_COMPARE=>RUN_COMPARISON
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IV_RFC_DEST1                   TYPE        RFCDEST
        * | [--->] IV_RFC_DEST2                   TYPE        RFCDEST
        * | [--->] IV_TABNAME                     TYPE        TABNAME
        * | [--->] IV_KEY_FIELDS                  TYPE        STRING
        * | [--->] IV_EXCL_FIELDS                 TYPE        STRING(optional)
        * | [--->] IV_RANGE_FROM                  TYPE        CHAR100
        * | [--->] IV_RANGE_TO                    TYPE        CHAR100
        * | [--->] IV_EC_S1                       TYPE        ABAP_BOOL (default =ABAP_TRUE)
        * | [--->] IV_EC_S2                       TYPE        ABAP_BOOL (default =ABAP_TRUE)
        * | [<-()] RT_RESULT                      TYPE        TY_LOG_T
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD run_comparison.
            " get data from RFC1
            DATA(lr_data1) = read_table( EXPORTING iv_rfc_dest = iv_rfc_dest1
                                                   iv_tabname = iv_tabname
                                                   iv_key_fields = iv_key_fields
                                                   iv_excl_fields = iv_excl_fields
                                                   iv_range_from = iv_range_from
                                                   iv_range_to = iv_range_to ).
            " get data from RFC2
            DATA(lr_data2) = read_table( EXPORTING iv_rfc_dest = iv_rfc_dest2
                                                   iv_tabname = iv_tabname
                                                   iv_key_fields = iv_key_fields
                                                   iv_excl_fields = iv_excl_fields
                                                   iv_range_from = iv_range_from
                                                   iv_range_to = iv_range_to ).
        
            compare_tables( EXPORTING iv_tabname = iv_tabname
                                      ir_tab1 = lr_data1
                                      ir_tab2 = lr_data2
                                      iv_ec_s1 = iv_ec_s1
                                      iv_ec_s2 = iv_ec_s2
                            CHANGING  ct_log = rt_result ).
          ENDMETHOD.
        ENDCLASS.

        Now the program which collects the delta:

        Create a new executable program ZPST_TEST_COMPARE with the code below (the program name is not used anywhere, so name it as you wish):

        *&---------------------------------------------------------------------*
        *& Report ZPST_TEST_COMPARE
        *&---------------------------------------------------------------------*
        *&
        *&---------------------------------------------------------------------*
        REPORT zpst_test_compare.
        
        DATA: gr_salv TYPE REF TO cl_salv_table,
              gt_log  TYPE zcl_tables_compare=>ty_log_t.
        
        PARAMETERS: p_rfc1 TYPE rfcdest DEFAULT 'MDGCLNT100',
                    p_rfc2 TYPE rfcdest DEFAULT 'ECCCLNT100'.
        
        SELECT-OPTIONS: s_matnr FOR ('MATNR') NO-EXTENSION OBLIGATORY DEFAULT '000000000000042373' TO '000000000000044335',
                        s_fexcl FOR ('USMD_FIELDNAME') NO INTERVALS.
        
        PARAMETERS: p_mara TYPE c AS CHECKBOX DEFAULT 'X',
                    p_marc TYPE c AS CHECKBOX,
                    p_makt TYPE c AS CHECKBOX,
                    p_mard TYPE c AS CHECKBOX,
                    p_marm TYPE c AS CHECKBOX,
                    p_mbew TYPE c AS CHECKBOX,
                    p_mean TYPE c AS CHECKBOX,
                    p_mlgn TYPE c AS CHECKBOX,
                    p_mlgt TYPE c AS CHECKBOX,
        *            p_mpgd TYPE c AS CHECKBOX,
        *            p_mpop TYPE c AS CHECKBOX,
                    p_mvke TYPE c AS CHECKBOX,
                    p_qmat TYPE c AS CHECKBOX.
        *            p_mdma  TYPE c AS CHECKBOX,
        
        SELECTION-SCREEN SKIP.
        
        PARAMETERS: p_s1_ec TYPE abap_bool AS CHECKBOX DEFAULT abap_true,
                    p_s2_ec TYPE abap_bool AS CHECKBOX DEFAULT abap_false.
        
        INCLUDE zpst_test_compare_forms.
        *-----------------------------------------------------------------------------------
        START-OF-SELECTION.
        
          IF p_mara EQ abap_true.
            PERFORM compare_table USING 'MARA' CHANGING gt_log.
          ENDIF.
        
          IF p_marc EQ abap_true.
            PERFORM compare_table USING 'MARC' CHANGING gt_log.
          ENDIF.
        
          IF p_makt EQ abap_true.
            PERFORM compare_table USING 'MAKT' CHANGING gt_log.
          ENDIF.
        
          IF p_mard EQ abap_true.
            PERFORM compare_table USING 'MARD' CHANGING gt_log.
          ENDIF.
        
          IF p_marm EQ abap_true.
            PERFORM compare_table USING 'MARM' CHANGING gt_log.
          ENDIF.
        
          IF p_mbew EQ abap_true.
            PERFORM compare_table USING 'MBEW' CHANGING gt_log.
          ENDIF.
        
          IF p_mean EQ abap_true.
            PERFORM compare_table USING 'MEAN' CHANGING gt_log.
          ENDIF.
        
          IF p_mlgn EQ abap_true.
            PERFORM compare_table USING 'MLGN' CHANGING gt_log.
          ENDIF.
        
          IF p_mlgt EQ abap_true.
            PERFORM compare_table USING 'MLGT' CHANGING gt_log.
          ENDIF.
        
          IF p_mvke EQ abap_true.
            PERFORM compare_table USING 'MVKE' CHANGING gt_log.
          ENDIF.
        
          IF p_qmat EQ abap_true.
            PERFORM compare_table USING 'QMAT' CHANGING gt_log.
          ENDIF.
        
          SORT gt_log BY rkey tfname.
        *-----------------------------------------------------------------------------------
        END-OF-SELECTION.
          DATA(gv_repid) = sy-repid.
        
          TRY.
              cl_salv_table=>factory( IMPORTING r_salv_table = gr_salv
                                      CHANGING  t_table      = gt_log ).
            CATCH cx_salv_msg.                                  "#EC NO_HANDLER
              RETURN.
          ENDTRY.
        
          gr_salv->set_screen_status( pfstatus      = 'SALV_STANDARD'
                                      report        = gv_repid
                                      set_functions = cl_salv_table=>c_functions_all ).
        
          PERFORM salv_set_columns USING gr_salv.
        
          gr_salv->get_layout( )->set_key( VALUE #( report = gv_repid ) ).
          gr_salv->get_layout( )->set_default( abap_true ).
          gr_salv->get_layout( )->set_save_restriction( if_salv_c_layout=>restrict_none ).
        
          gr_salv->get_functions( )->set_print_preview( abap_false ).
        
          DATA(gr_events) = NEW lcl_handle_events( ).               "#EC NEEDED
          SET HANDLER gr_events->on_user_command FOR gr_salv->get_event( ).
        
          gr_salv->display( ).

        Double click:

        And create the include with the code below:

        *&---------------------------------------------------------------------*
        *&  Include           ZPST_TEST_COMPARE_FORMS
        *&---------------------------------------------------------------------*
        CLASS lcl_handle_events DEFINITION.
          PUBLIC SECTION.
            METHODS:
              on_user_command FOR EVENT added_function OF cl_salv_events IMPORTING e_salv_function.
        ENDCLASS.
        
        CLASS lcl_handle_events IMPLEMENTATION.
          METHOD on_user_command.
            PERFORM show_function_info USING e_salv_function.
          ENDMETHOD.                    "on_user_command
        ENDCLASS.
        
        *-----------------------------------------------------------------------------------
        FORM salv_set_columns USING ir_alv TYPE REF TO cl_salv_table .
          DATA: lo_col TYPE REF TO cl_salv_column_table.
        
          DATA(lo_columns) = ir_alv->get_columns( ).
          lo_columns->set_optimize( ).
        
          TRY.
              " set Code Text columns names
              DATA(lt_columns) = lo_columns->get( ).
              LOOP AT lt_columns ASSIGNING FIELD-SYMBOL(<ls_column>).
                lo_col = CAST #( <ls_column>-r_column ).
                CASE <ls_column>-columnname.
                  WHEN 'RKEY'. lo_col->set_long_text( CONV #( 'Key' ) ).
                  WHEN 'TFNAME'. lo_col->set_long_text( CONV #( 'Table-Field Name' ) ).
                  WHEN 'V1'. lo_col->set_long_text( CONV #( |Value in { p_rfc1 }| ) ).
                  WHEN 'V2'. lo_col->set_long_text( CONV #( |Value in { p_rfc2 }| ) ).
                  WHEN 'FDESCR'. lo_col->set_long_text( CONV #( 'Field Description' ) ).
                ENDCASE.
              ENDLOOP.
            CATCH cx_salv_not_found.
              RETURN.
          ENDTRY.
        ENDFORM.
        
        FORM show_function_info USING i_function TYPE salv_de_function.
          DATA: ls_ztab_compare TYPE ztab_compare,
                lt_ztab_compare TYPE STANDARD TABLE OF ztab_compare WITH KEY tfname rkey.
        
          CASE i_function.
            WHEN 'SAVEDB'.
              IF gt_log IS NOT INITIAL.
                LOOP AT gt_log ASSIGNING FIELD-SYMBOL(<ls_log>).
                  ls_ztab_compare = CORRESPONDING #( <ls_log> ).
                  ls_ztab_compare-v1 = condense( val = ls_ztab_compare-v1 from = '' ).
                  ls_ztab_compare-v2 = condense( val = ls_ztab_compare-v2 from = '' ).
                  IF ls_ztab_compare-v1 EQ 'does not exist'. CLEAR ls_ztab_compare-v1. ENDIF.
                  IF ls_ztab_compare-v2 EQ 'does not exist'. CLEAR ls_ztab_compare-v2. ENDIF.
                  IF ls_ztab_compare-v1 EQ 'record exists'. ls_ztab_compare-v1 = abap_true. ENDIF.
                  IF ls_ztab_compare-v2 EQ 'record exists'. ls_ztab_compare-v2 = abap_true. ENDIF.
                  APPEND ls_ztab_compare TO lt_ztab_compare.
                ENDLOOP.
                MODIFY ztab_compare FROM TABLE lt_ztab_compare.
                COMMIT WORK.
                MESSAGE s208(00) WITH 'Data saved in ZTAB_COMPARE'.
              ENDIF.
        
            WHEN 'CLEARDB'.
              DELETE FROM ztab_compare.
              COMMIT WORK.
              MESSAGE s208(00) WITH 'Content of ZTAB_COMPARE deleted'.
          ENDCASE.
        ENDFORM.
        
        *-----------------------------------------------------------------------------------
        FORM get_excluded USING iv_tabname TYPE tabname
                          CHANGING cv_excluded TYPE string.
          CLEAR cv_excluded.
          LOOP AT s_fexcl INTO DATA(ls_so) WHERE sign EQ 'I' AND option EQ 'EQ'.
            IF ls_so-low CP |{ iv_tabname }-*|.
              IF cv_excluded IS INITIAL.
                cv_excluded = substring_after( val = ls_so-low sub = |{ iv_tabname }-| ).
              ELSE.
                cv_excluded = cv_excluded && ';' && substring_after( val = ls_so-low sub = |{ iv_tabname }-| ).
              ENDIF.
            ENDIF.
          ENDLOOP.
        ENDFORM.
        
        FORM compare_table USING iv_tabname TYPE tabname
                           CHANGING ct_log TYPE zcl_tables_compare=>ty_log_t.
          DATA: lv_excluded TYPE string,
                lt_log      TYPE zcl_tables_compare=>ty_log_t.
        
          PERFORM get_excluded USING iv_tabname CHANGING lv_excluded.
          lt_log = zcl_tables_compare=>run_comparison( iv_rfc_dest1 = p_rfc1
                                                       iv_rfc_dest2 = p_rfc2
                                                       iv_tabname = iv_tabname
                                                       iv_key_fields = zcl_tables_compare=>get_table_key_fields( iv_tabname )
                                                       iv_excl_fields = lv_excluded
                                                       iv_range_from = VALUE #( s_matnr[ 1 ]-low OPTIONAL )
                                                       iv_range_to = VALUE #( s_matnr[ 1 ]-high OPTIONAL )
                                                       iv_ec_s1 = p_s1_ec
                                                       iv_ec_s2 = p_s2_ec ).
        
          "update_field_descriptions
          LOOP AT lt_log ASSIGNING FIELD-SYMBOL(<ls_log>).
            <ls_log>-fdescr =  zcl_tables_compare=>get_table_field_descr( <ls_log>-tfname ).
          ENDLOOP.
        
          APPEND LINES OF lt_log TO ct_log.
        ENDFORM.

        Create GUI status SALV_STANDARD (Normal Screen):

        Copy from template like below:

        Use template status: SALV_STANDARD of program SALV_DEMO_TABLE_FUNCTIONS

        Delete the MYFUNCTION and adjust the three function codes:

        (Save and activate)

        Then maintain the program’s selection texts:

        After activation the program should work. Now a few words about the selection parameters.

        RFC System 1 and 2 – these are RFC connections configured in SM59, pointing to the two systems whose material master data you want to compare. If the connections are set up without a stored user/password (recommended), a logon window appears at each program execution (standard behavior for RFC calls). Usually you select the data hub as System 1 and the data destination system as System 2.

        In Material Number Range provide the range of materials you want to compare. It is recommended to first test on a small range (e.g., 100 materials) to see how fast they are processed; the speed depends on the size of the sub-tables (MARC, MVKE, …). Normally you run the comparison in packages of a few thousand records and save each comparison result in the Custom Delta Table with the “Save DB” button (see below).
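Running the comparison in packages means feeding successive sub-ranges into the material number selection. As a hypothetical helper (not part of the ABAP tool; Python is used here purely to illustrate the arithmetic), the sub-ranges for the successive runs could be pre-computed like this:

```python
def split_material_range(low: str, high: str, package_size: int):
    """Split a zero-padded numeric material-number interval into
    sub-intervals covering at most `package_size` numbers each,
    preserving the original character width (18 for MATNR)."""
    width = len(low)
    lo, hi = int(low), int(high)
    ranges = []
    while lo <= hi:
        sub_hi = min(lo + package_size - 1, hi)
        ranges.append((str(lo).zfill(width), str(sub_hi).zfill(width)))
        lo = sub_hi + 1
    return ranges
```

Note that a numeric sub-range covers number slots, not existing materials, so the actual record count per run varies with how densely the number range is used.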

        Excluded field names – here you list the fields which you don’t want to compare. These will typically be “created on”, “created by”, and all fields which are editable in the satellite system (like forecasting, costing, current period/year, stocks, etc.). Usually you can identify such fields by running the delta collection without excluding any fields (on a small number of records) and checking which fields produce many delta records. The format of the entries is <table name>-<field name>, e.g. MARA-AENAM, MARC-LFGJA, MAKT-MAKTG.

        Table name checkboxes – here you select which tables you want to compare (the program can easily be extended with other material tables like MPOP, MDMA, or custom extensions).

        The final two checkboxes control whether to collect records which exist in one system but not in the other. By default, the first is ON (we want to verify that all records from the source system were replicated to the destination) and the second is OFF (in the destination system a material can be enhanced with additional plants, sales orgs, etc.; if there are many such enhancements, the delta table may grow rapidly with the flag ON).
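The effect of the two flags can be illustrated with a simplified sketch of the comparison logic (Python for brevity; this is an illustration of the idea, not the actual COMPARE_TABLES implementation, whose code is not shown in this excerpt). Records are assumed to be keyed by the table key, and the marker strings mirror the ones the delta table handling expects:

```python
def compare_keyed_records(recs1, recs2,
                          collect_only_in_1=True, collect_only_in_2=False):
    """Compare two record sets (dicts keyed by the table key).
    Emits (key, field, v1, v2) tuples for field-level deltas and
    existence rows for records present in only one system,
    depending on the two flags."""
    log = []
    for key, rec1 in recs1.items():
        rec2 = recs2.get(key)
        if rec2 is None:
            if collect_only_in_1:
                log.append((key, None, "record exists", "does not exist"))
            continue
        for field, v1 in rec1.items():
            v2 = rec2.get(field)
            if v1 != v2:
                log.append((key, field, v1, v2))
    if collect_only_in_2:
        for key in recs2:
            if key not in recs1:
                log.append((key, None, "does not exist", "record exists"))
    return log
```

With the defaults (first flag on, second off), records missing in System 2 are logged, while records that exist only in System 2 are silently skipped.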

        After running the program, you should get a result like the one below.

        With the two buttons indicated above you can save the result in table ZTAB_COMPARE (appending the collected records to the existing ones) or clean out the whole ZTAB_COMPARE table.

        Results Display Program

        At this point you can already export the collected records from the Custom Delta Table to xlsx and format them as needed. However, for those who like FPM but don’t know how to use it, the steps below describe how to create a simple report.

        First create new class ZCL_TABLES_COMPARE_FEEDER from the code below:

        class ZCL_TABLES_COMPARE_FEEDER definition
          public
          final
          create public .
        
        public section.
        
          interfaces IF_FPM_GUIBB .
          interfaces IF_FPM_GUIBB_LIST .
          PROTECTED SECTION.
          PRIVATE SECTION.
        
            TYPES:
              BEGIN OF ty_difference,
                tabname TYPE tabname16,
                tfname  TYPE ztab_compare-tfname,
                fdescr  TYPE text40,
                rkey    TYPE ztab_compare-rkey,
                v1      TYPE ztab_compare-v1,
                v2      TYPE ztab_compare-v2,
              END OF ty_difference .
            TYPES:
              ty_difference_tab TYPE STANDARD TABLE OF ty_difference WITH DEFAULT KEY .
        
            CLASS-DATA gt_data TYPE ty_difference_tab .
        ENDCLASS.
        
        
        
        CLASS ZCL_TABLES_COMPARE_FEEDER IMPLEMENTATION.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Instance Public Method ZCL_TABLES_COMPARE_FEEDER->IF_FPM_GUIBB_LIST~CHECK_CONFIG
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IO_LAYOUT_CONFIG               TYPE REF TO IF_FPM_GUIBB_LIST_CONFIG
        * | [<---] ET_MESSAGES                    TYPE        FPMGB_T_MESSAGES
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD IF_FPM_GUIBB_LIST~CHECK_CONFIG.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Instance Public Method ZCL_TABLES_COMPARE_FEEDER->IF_FPM_GUIBB_LIST~FLUSH
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IT_CHANGE_LOG                  TYPE        FPMGB_T_CHANGELOG
        * | [--->] IT_DATA                        TYPE REF TO DATA
        * | [--->] IV_OLD_LEAD_SEL                TYPE        I(optional)
        * | [--->] IV_NEW_LEAD_SEL                TYPE        I(optional)
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD IF_FPM_GUIBB_LIST~FLUSH.
            FIELD-SYMBOLS: <lt_data> LIKE gt_data.
            ASSIGN it_data->* TO <lt_data>.
            gt_data = <lt_data>.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Instance Public Method ZCL_TABLES_COMPARE_FEEDER->IF_FPM_GUIBB_LIST~GET_DATA
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IV_EVENTID                     TYPE REF TO CL_FPM_EVENT
        * | [--->] IT_SELECTED_FIELDS             TYPE        FPMGB_T_SELECTED_FIELDS(optional)
        * | [--->] IV_RAISED_BY_OWN_UI            TYPE        BOOLE_D(optional)
        * | [--->] IV_VISIBLE_ROWS                TYPE        I(optional)
        * | [--->] IV_EDIT_MODE                   TYPE        FPM_EDIT_MODE(optional)
        * | [--->] IO_EXTENDED_CTRL               TYPE REF TO IF_FPM_LIST_ATS_EXT_CTRL(optional)
        * | [<---] ET_MESSAGES                    TYPE        FPMGB_T_MESSAGES
        * | [<---] EV_DATA_CHANGED                TYPE        BOOLE_D
        * | [<---] EV_FIELD_USAGE_CHANGED         TYPE        BOOLE_D
        * | [<---] EV_ACTION_USAGE_CHANGED        TYPE        BOOLE_D
        * | [<---] EV_SELECTED_LINES_CHANGED      TYPE        BOOLE_D
        * | [<---] EV_DND_ATTR_CHANGED            TYPE        BOOLE_D
        * | [<---] EO_ITAB_CHANGE_LOG             TYPE REF TO IF_SALV_ITAB_CHANGE_LOG
        * | [<-->] CT_DATA                        TYPE        DATA
        * | [<-->] CT_FIELD_USAGE                 TYPE        FPMGB_T_FIELDUSAGE
        * | [<-->] CT_ACTION_USAGE                TYPE        FPMGB_T_ACTIONUSAGE
        * | [<-->] CT_SELECTED_LINES              TYPE        RSTABIXTAB
        * | [<-->] CV_LEAD_INDEX                  TYPE        SYTABIX
        * | [<-->] CV_FIRST_VISIBLE_ROW           TYPE        I
        * | [<-->] CS_ADDITIONAL_INFO             TYPE        FPMGB_S_ADDITIONAL_INFO(optional)
        * | [<-->] CT_DND_ATTRIBUTES              TYPE        FPMGB_T_DND_DEFINITION(optional)
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD IF_FPM_GUIBB_LIST~GET_DATA.
            DATA: lv_tname TYPE string,
                  lv_fname TYPE string.
        
            IF iv_eventid->mv_event_id EQ 'FPM_START'.
              SELECT tfname, rkey, v1, v2 INTO CORRESPONDING FIELDS OF TABLE @gt_data FROM ztab_compare ORDER BY PRIMARY KEY.
              LOOP AT gt_data ASSIGNING FIELD-SYMBOL(<ls_data>).
                SPLIT <ls_data>-tfname AT '-' INTO <ls_data>-tabname lv_fname.
        
                IF 'MAKT;MARA;MARC;MARD;MARM;MBEW;MEAN;MLGN;MLGT;MVKE;QMAT' CS <ls_data>-tfname.
                  IF <ls_data>-v1 EQ abap_true.
                    <ls_data>-fdescr = | Record exists only in System 1|.
                  ELSE.
                    <ls_data>-fdescr = | Record exists only in System 2|.
                  ENDIF.
                ELSE.
                  <ls_data>-fdescr = |{ lv_fname } : { zcl_tables_compare=>get_table_field_descr( <ls_data>-tfname ) }|.
                ENDIF.
              ENDLOOP.
              ct_data = gt_data.
              ev_data_changed =  abap_true.
            ENDIF.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Instance Public Method ZCL_TABLES_COMPARE_FEEDER->IF_FPM_GUIBB_LIST~GET_DEFAULT_CONFIG
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IO_LAYOUT_CONFIG               TYPE REF TO IF_FPM_GUIBB_LIST_CONFIG
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD if_fpm_guibb_list~get_default_config.
            DEFINE add_column.
              io_layout_config->add_column( iv_name               = &2
                                            iv_display_type       = 'TV'
                                            iv_index              = &1
                                            iv_header	          = &3 ).
            END-OF-DEFINITION.
        
            io_layout_config->set_settings( iv_height_mode_ats = if_fpm_list_types=>cs_height_mode_ats-all_rows
                                            iv_export_to_excel = abap_true
                                            iv_export_format = if_fpm_list_types=>cs_export_format-office_open_xml
                                            iv_fit_to_table_width = abap_true
                                            iv_selection_mode_ats = if_fpm_list_types=>cs_selection_mode-single_no_lead
                                            iv_scroll_mode = if_fpm_list_types=>cs_scroll_mode-scrolling
                                            iv_allow_sorting = if_fpm_list_types=>cs_settings_allow_sorting-only_ad_hoc
                                            iv_allow_grouping = if_fpm_list_types=>cs_settings_allow_grouping-no_grouping
                                            iv_sort_by_relevance = abap_true
                                            it_default_sorting_ats = VALUE #( ( column_name = 'TABNAME' is_grouped = abap_true )
                                                                              ( column_name = 'FDESCR' is_grouped = abap_true ) )
                                          ).
        
            TRY.
                add_column 1 'TABNAME' 'Table'.
                add_column 2 'FDESCR' 'Field'.
                add_column 3 'RKEY' 'Key'.
                add_column 4 'V1' 'Value in System 1'.
                add_column 5 'V2' 'Value in System 2'.
              CATCH cx_fpm_configuration.                       "#EC NO_HANDLER
            ENDTRY.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Instance Public Method ZCL_TABLES_COMPARE_FEEDER->IF_FPM_GUIBB_LIST~GET_DEFINITION
        * +-------------------------------------------------------------------------------------------------+
        * | [<---] EO_FIELD_CATALOG               TYPE REF TO CL_ABAP_TABLEDESCR
        * | [<---] ET_FIELD_DESCRIPTION           TYPE        FPMGB_T_LISTFIELD_DESCR
        * | [<---] ET_ACTION_DEFINITION           TYPE        FPMGB_T_ACTIONDEF
        * | [<---] ET_SPECIAL_GROUPS              TYPE        FPMGB_T_SPECIAL_GROUPS
        * | [<---] ES_MESSAGE                     TYPE        FPMGB_S_T100_MESSAGE
        * | [<---] EV_ADDITIONAL_ERROR_INFO       TYPE        DOKU_OBJ
        * | [<---] ET_DND_DEFINITION              TYPE        FPMGB_T_DND_DEFINITION
        * | [<---] ET_ROW_ACTIONS                 TYPE        FPMGB_T_ROW_ACTION
        * | [<---] ES_OPTIONS                     TYPE        FPMGB_S_LIST_OPTIONS
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD IF_FPM_GUIBB_LIST~GET_DEFINITION.
            eo_field_catalog ?= cl_abap_tabledescr=>describe_by_data( gt_data ).
        
            et_field_description = VALUE #( ( name = 'TFNAME' technical_field = abap_true )
                                            ( name = 'TABNAME' allow_sort = abap_true group_same_cells = abap_true )
                                            ( name = 'FDESCR' allow_sort = abap_true group_same_cells = abap_true )
                                            ( name = 'RKEY' )
                                            ( name = 'V1' )
                                            ( name = 'V2' ) ).
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Instance Public Method ZCL_TABLES_COMPARE_FEEDER->IF_FPM_GUIBB_LIST~PROCESS_EVENT
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IO_EVENT                       TYPE REF TO CL_FPM_EVENT
        * | [--->] IV_RAISED_BY_OWN_UI            TYPE        BOOLE_D(optional)
        * | [--->] IV_LEAD_INDEX                  TYPE        SYTABIX
        * | [--->] IV_EVENT_INDEX                 TYPE        SYTABIX
        * | [--->] IT_SELECTED_LINES              TYPE        RSTABIXTAB
        * | [--->] IO_UI_INFO                     TYPE REF TO IF_FPM_LIST_ATS_UI_INFO(optional)
        * | [<---] EV_RESULT                      TYPE        FPM_EVENT_RESULT
        * | [<---] ET_MESSAGES                    TYPE        FPMGB_T_MESSAGES
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD IF_FPM_GUIBB_LIST~PROCESS_EVENT.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Instance Public Method ZCL_TABLES_COMPARE_FEEDER->IF_FPM_GUIBB~GET_PARAMETER_LIST
        * +-------------------------------------------------------------------------------------------------+
        * | [<-()] RT_PARAMETER_DESCR             TYPE        FPMGB_T_PARAM_DESCR
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD IF_FPM_GUIBB~GET_PARAMETER_LIST.
          ENDMETHOD.
        
        
        * <SIGNATURE>---------------------------------------------------------------------------------------+
        * | Instance Public Method ZCL_TABLES_COMPARE_FEEDER->IF_FPM_GUIBB~INITIALIZE
        * +-------------------------------------------------------------------------------------------------+
        * | [--->] IT_PARAMETER                   TYPE        FPMGB_T_PARAM_VALUE
        * | [--->] IO_APP_PARAMETER               TYPE REF TO IF_FPM_PARAMETER(optional)
        * | [--->] IV_COMPONENT_NAME              TYPE        FPM_COMPONENT_NAME(optional)
        * | [--->] IS_CONFIG_KEY                  TYPE        WDY_CONFIG_KEY(optional)
        * | [--->] IV_INSTANCE_ID                 TYPE        FPM_INSTANCE_ID(optional)
        * +--------------------------------------------------------------------------------------</SIGNATURE>
          METHOD IF_FPM_GUIBB~INITIALIZE.
          ENDMETHOD.
        ENDCLASS.

        Then create the FPM application (some FPM screens look different on newer SAP versions):

        Set up OVP layout:

        Add a list UIBB there with the feeder class created previously:

        Provide “Config ID” and “Title”, save, and click “Configure UIBB” (ignore errors):

        Because of some coding in the feeder class you don’t need to set anything else manually here, except one crucial configuration:

        Set “Collapse Groups by Default” – otherwise the application will try to show all records at once, which might be too big a challenge in the case of large deltas.

        (In S/4HANA the screen below looks different, but the option can still be found there.)

        After saving, the application should work; you can start it from SE80 or via a configured link in SAP GUI:

        Tips on usage of the tool

        • In the remote systems only one function module is called: RFC_READ_TABLE. It only reads data; however, it might be blocked in your systems.
        • To collect the data run the Delta Collection Program several times with smaller ranges of materials (and save each result in Custom Delta Table).
        • Initially do some trial runs on <1000 materials to identify the fields which you don’t want to compare, and save the excluded fields in a selection variant for later use.
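The 512-character row limit of RFC_READ_TABLE is what forces the field-list packaging loop in READ_TABLE above: the key fields are repeated in every package so that the partial rows can later be merged back by key. The greedy partitioning idea behind that loop can be sketched as follows (Python for illustration only; the real tool is ABAP, and fields are represented here as hypothetical (name, length) pairs):

```python
def package_fields(key_fields, data_fields, limit=512):
    """Greedily partition data_fields into packages so that the key
    fields plus the package's data fields fit within `limit` characters.
    Fields are (name, length) tuples; key fields repeat in every package
    so the partial result rows can be merged back by key."""
    key_len = sum(length for _, length in key_fields)
    packages, current, used = [], [], key_len
    for name, length in data_fields:
        # flush the current package if adding this field would overflow
        if current and used + length > limit:
            packages.append(list(key_fields) + current)
            current, used = [], key_len
        current.append((name, length))
        used += length
    if current:
        packages.append(list(key_fields) + current)
    return packages
```

Each resulting package corresponds to one RFC_READ_TABLE call; a single field longer than the remaining budget still gets a package of its own, mirroring the fact that such a field could not be read through this interface anyway.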

        Each line of the code provided here was written by me, and you can freely use and modify it.


        The post Material Master Data – compare data in two systems appeared first on ERP Q&A.

        ]]>
10 Strategic Actions for Creating Your Successful Data Governance Strategy
https://www.erpqna.com/10-strategic-actions-for-creating-your-successful-data-governance-strategy/
Fri, 23 Sep 2022 09:11:29 +0000

        Data governance is the code of conduct for data in an organization. Data governance is essential with every cloud investment considering the volume, the various types and sources of data, the numerous business and technical processes, and user profiles that create the data ecosystem. The business must have a data strategy and a data governance policy to achieve its goals.

        Data governance is a set of rules, agreements, methods, policies, and leadership principles for optimizing data strategy and usage. It improves processes and their efficiencies, increases data quality and reliability, and produces a more valuable output. It also helps reduce and mitigate various risks at all levels of the organization and throughout the data lifecycle.

        Businesses today have a vast amount of data captured in a way that is not intended for leveraging digital transformation technologies. In addition, there is a critical lack of data governance surrounding this information. It is reflected in poor data categorization, standardization, labeling, security, and lifecycle management; as a result, the data is more challenging to transform into valuable output.

        Data governance prepares businesses for more advanced digitalization with process optimization, making way for process automation, analytics, and artificial intelligence.

        SAP S/4HANA cloud provides a rich data ecosystem for intelligent enterprises. This technology pertains to every business defining its data governance rules and practices before, during, and after migration. Any organization that adopts all (or most) of these ten strategic guidelines can further develop its data governance approach for successful cloud migration and sustainable business transformation.

1. Create a Data Governance Strategy Leadership Team

        Just as the most critical items are managed at the organization’s top, data – one of the most precious assets today – also requires innovative management and leadership. Data governance should address data for the entire organization, including every business unit and its collective interests. A transforming organization can be a battleground of conflicting interests and challenges, but many opportunities may arise in its contradictions. Outstanding leadership, data literacy, analysis, communication, business, and technical operations skills are necessary to achieve their stated goals. It is beneficial for the different business units to have a leader representing them within the data governance leadership team. The leadership team should be responsible for managing data as a continuous value chain.

        2. Discard the Silo Operating Model

        Silos (or compartmented operating models) in the business and the technology departments are unhelpful in achieving proper data governance. The digital age demands an open architecture that allows flexibility for data sharing across the organization’s business units while managing access and security risks. For better business intelligence and agility, information must be shared and accessible across the organization. This collaborative approach should not only happen in the infrastructure but also in the way of doing business.

        3. Invest in Data Architecture and Processing Rules

        The success of data analytics will depend on data structure, accessibility, and various other factors. Businesses should have their data architecture solidified and ready to conform to business needs and industry regulations. Define clear data migration policy and processes along with the architecture. Have information architects and their team define the necessary documentation, skills, and rules for better organizational data management, valuation, accessibility, and distribution.


        4. Develop an Agile and Data-Driven Culture in the Organization

        Create a data-driven culture where the organization views data as a primary business asset and a value chain. Develop data leveraging approaches, and make sure that the organization adopts and practices these processes daily. Invest in learning various data skills and technologies for the organization.

        Consider soft skill enhancements of its workforce, including a review of cultural and behavioral habits. The organization should support new, healthier digital habits to create a new data-driven culture among its employees, collaborators, and suppliers. This process intends to shift the organization’s mindset to become agile, innovative, and collaborative.

        The digital organization should constantly and quickly learn and adapt to internal and external changes while using these changes to drive innovation and business competitiveness. Starting with every business unit leader and manager, organizations should embrace the vision and mission of the greater collective and collaborate to support and achieve business goals.

        When silos are broken, the organization can share information between departments; as a result, risks will be mitigated across the business units, the data value chain will increase, and digital maturity and agility will reach greater heights.

        Imagine how unstoppable and competitive your organization could be if it reached peak data governance. Everyone becomes agile, understands the value of data, and has the skills, resources, knowledge, and attitude to process data, create innovative and rewarding assets and distribute it efficiently across the organization – all while practicing data security and privacy. Such an organization can quickly innovate and rapidly adjust its movement to respond to external changes and sync promptly. The advantages of an agile organization with data-driven culture are truly vast.

        5. Invest in Data Skills and Technology for your Organization to Remain Current.

        Data science is a new field with much of its future still to be written. Data skills are evolving fast, and it pays dividends for every organization to adapt its in-house data skills with multiple data professions, such as data scientists, statisticians, data architects, business analysts, DevOps, and more.

        Consider the skills of external data sources such as suppliers and collaborators and create some requirements for collaboration. It is essential to communicate and align data standardization for better synchronization.

        It is imperative to increase data literacy at all levels of the organization, especially regarding standardization, security, data capability, and evolution.

        6. Define Data Quality, Standards, Methods, Policies, and Maturity Levels

        Decide on a rating for data quality. Standardized data can be easily exploited to identify patterns and processes within digital data technologies. There is a need to define each level of data maturity, data qualifications, categorizations, attributes, type, labeling, level of importance, and sensitivity.

Have terms of reference for various documents, and be sure to keep them simple. Ensure that each detail you include is helpful and does not complicate things or discourage adoption.

        Remember to avoid conflicting policies. When working across teams, it is necessary to maintain the organization’s unity and not fragment it unnecessarily.

        Depending on the organization, some policies to define include data security, approach to creating policies, classification, sharing, governance, analytics, data science, data protection, naming conventions, and versioning.

Create inclusion, ownership, accountability, and momentum for continuous evolution:

• Receive approval from the business units and the data leadership team before implementing the policies.
• Get people involved and make their involvement decisive.
• Make the standards available and accessible, educate the organization about them, and provide support for their everyday application and adoption.
• Designate at least one person per unit (depending on your preferred division) to drive adoption.
• Consider a continuous review or re-evaluation period for certain key documents as things evolve in the business.

        7. Comply with Audit and Security Regulations while Adopting Thorough Risk Management at all Levels

        Decide on the obligatory compliance(s) for your company. Data compliance and responsible or accountable people should be decided at the top management level.

        While discarding the practice of silo management is hugely beneficial, one of the very few areas where you need a siloed approach is risk management. Create risk assessments while considering the context or the area to achieve better risk management. In a house, for example, the risk pertinence differs whether the context is a kitchen, a bathroom, a bedroom, or a perched balcony. Similarly, it is wise to analyze and evaluate data risks in various contexts and situations and define rules, processes, and mitigating actions against threats.

        Define the access control applied when sharing data across (and outside) the organization, the data exchange protocols, and the risks and contingencies of each. Sensitive data management should follow strict regulations and protocols. Invest in significant data security upfront because it is cheaper than managing a data breach disaster.

        8. Invest in Data Transformation and Innovative Value Creation

        Data transformation includes dashboard creation, analysis (both predictive and prescriptive), reporting, and the individual output you desire in a business unit context for data-driven decision-making. A data-driven decision is a precious investment and asset in your organization.

The best approach to getting optimal output from your data is to be result-driven. Dashboards are becoming tiresome and often routine, with little value derived from them, mainly because the focus has been on fancy details of little value instead of the desired output. Leaders should focus on the desired outcomes before thinking of dashboards.

Some questions to consider for a result-driven approach to dashboard creation:

        • What are the main outputs I want for my business units?
        • What are the secondary outputs I want to achieve?
        • What are these outputs made of?
        • What are the KPIs?
        • What KPI boundaries do I want to define that indicate a significant difference for me? (winning, need for improvement, warning levels)

        Define effective methods and procedures for data transformation and output procedures. Don’t get lost in the details; focus on what matters and leave some flexibility for analysts.

        9. Optimize Business Processes

        When embarking on business process optimization, you are refining the quality and value of data in your organization, improving process effectiveness, delivery, and overall business efficiency. During this process, review data entry points, data structure and relevancy, needs for data refining, data content and process speed, storage procedures, process delays, output definition, data lifecycle, and archiving. The relevance and efficiency of each process (and sub-process) should be revisited regularly as developments evolve.

        There is an absolute need for process ownership and having accountable people to monitor, improve and act as needed.

        10. Invest in Employee Self-Transformation for Better Employee Engagement and Agility

Changes in personal habits require an individual approach. Regardless of the organization, some resistance is expected, even when not seen upfront. It is best to prevent non-adoption by supporting individuals until they genuinely buy into the data-driven culture and become more agile. It is easy for people to mimic what they perceive is expected and then immediately return to their previous habits; this outcome is unhelpful and undoes much of the organizational progress.

        Facilitating life-changing habits from an individual level within the whole organizational ecosystem is necessary for sustainable transformation and digital maturity.

        Leaders should also identify the “why” of any non-adoption to solve issues effectively. Two-way communication on various levels is essential; investing in our people is one of the most strategic ways to adopt a sustainable data-driven culture and achieve organizational success.

Using SAP BTP for Master Data Governance with Fiori Tiles
https://www.erpqna.com/using-sap-btp-for-master-data-governance-with-fiori-tiles/
Sat, 18 Sep 2021 06:30:49 +0000

We can connect to the services of SAP Master Data Governance, cloud edition from SAP Business Technology Platform (SAP BTP). This is a forward-looking approach worth focusing on. In this blog post, I will outline the end-to-end process for connecting to and using SAP MDG, cloud edition by utilizing SAP BTP features.

SAP MDG, cloud edition has been available on SAP BTP since May 2021. It gives us scope for a master data management initiative in the cloud with a minimal barrier to entry and an option to build additional master data governance scenarios at our own pace.

        • Run data anywhere
        • Reduce data redundancy
        • Connect and understand data

Initially, SAP Master Data Governance, cloud edition focuses on Business Partner data. It provides fast time to value and is planned as a low entry point for new SAP MDG customers, while offering existing SAP MDG implementations a non-disruptive deployment option to build toward a more granular system network for master data management.

        PREREQUISITES

        1. Get your access for SAP Business Technology Platform (SAP BTP)
        2. Select the service SAP Master Data Governance, cloud edition and create.

        3. Configure Entitlements, Role Collections and Roles

        4. Go to the application

        STEP BY STEP GUIDE

        Step 1 – Create a new BP with Central Governance

        Start the app “Manage Business Partner Central Governance” and create a new organization

Click OK and review the data on the following screen. You can add more information in this form, such as Tax Numbers and Bank Details; we call this information core master data attributes. Click Save and Submit.

        Go back to your home screen and start the app “My Inbox”

        Click on Approve to successfully create a new Business Partner in SAP MDG Cloud Edition using Central Governance.

        Step 2 – Consolidate & Onboard new BP’s

Note that the sample records below include content used to demonstrate features of MDG Cloud Edition, such as:

• potential duplicates, based on company name and address, within the source file
• potential duplicates, based on company name and address, against the active area
• data issues that will be identified by a DQM evaluation run in Step 3

        Start the app “Manage Imports – Business Partners”

Click the Create button and enter a Source System (or select "Import without Source System")

        Click on “Upload” and select your file. Click on Save.

        Click Consolidate button and enter a description

        In the next window: Start the process

        Review the Initial Check Step and click continue

        Review the matching and approve open match groups

        Continue with the process

        Complete the consolidation with the Validation and Activation step

        Now you can start the app “Manage Business Partners – Central Governance” and you will see the newly onboarded data.

        Step 3 – Evaluate Data Quality and review the results

Within at most 60 minutes after Step 2, you can start the app "Evaluation results".

Review the incorrect records; you will find the records from Step 2, because one predefined DQM rule checks whether a region is maintained.

The onboarded records do not have a region maintained.

        Step 4 – Remediate incorrect records with mass processing

        You can trigger a mass processing process from the “Evaluation results”

        Select the records and start the mass processing. You can also use the visual filters to do a drill down.

        In the edit step add a region value to each of the records and Save it and hit the Submit button.

Start the app "My Inbox" again, review the newly received task, and approve it.

Wait a few minutes until the next DQM evaluation run is triggered and check in the results that the incorrect records from Step 3 have been remediated.

        Moving towards the future:


        For all business applications, SAP Master Data Integration will become the single point of access to master data. The applications creating new master data will just inform SAP Master Data Integration about the new master data record. SAP Master Data Integration is planned to involve SAP Master Data Governance, cloud edition if this system is configured as the “owner” for that master data in SAP Master Data Integration. In the case that SAP Master Data Governance, cloud edition detects a duplicate, it will merge the core attributes of the new record with the appropriate existing record and thus create an updated best record. Thereafter, SAP Master Data Governance, cloud edition will inform SAP Master Data Integration about the updated (original) record and notify it to deprecate the newly created record and update the key mapping.
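The duplicate-handling flow described above can be illustrated with a toy sketch (Python used purely for illustration; the real "best record" merge logic in SAP Master Data Governance, cloud edition is configurable and far richer, and the attribute names here are invented):

```python
def merge_best_record(existing, duplicate):
    """Toy illustration of the 'best record' idea: keep the existing
    record and fill in core attributes that the detected duplicate
    supplies but the existing record lacks."""
    best = dict(existing)
    for attr, value in duplicate.items():
        if value and not best.get(attr):
            best[attr] = value
    return best

existing = {"id": "BP1000", "name": "ACME Corp", "region": ""}
duplicate = {"id": "BP2000", "name": "ACME Corp", "region": "NY"}
# The updated best record keeps the original ID and gains the region;
# the duplicate would then be deprecated and the key mapping updated.
print(merge_best_record(existing, duplicate))
```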

SAP Central Finance – MDG Mapping via CDS Views
https://www.erpqna.com/sap-central-finance-mdg-mapping-via-cds-views/
Mon, 30 Aug 2021 04:19:46 +0000

        Introduction:

In Central Finance on S/4HANA, MDG mapping plays an important role in the replication process. Mapping defines the relation of different identifiers or objects, such as GL Account, Cost Center, or Company Code, between source ECC systems and Central Finance. Mapping is read via standard SAP functionality during replication; however, there is often a requirement to read mappings in a BAdI during replication. Different standard MDG classes are used in Central Finance to read the MDG mappings. This blog focuses on how to create ABAP CDS views to read the different types of MDG mapping for better performance; these views can also be utilized in different integrations.

        Approach:

        Central Finance: MDG Mapping via CDS Views

        There are two categories of MDG Mapping:

        1. Value Mapping
        2. Key Mapping

        How to create ABAP CDS Views for Value Mapping?

A common CDS view can be created to read the value mapping for all objects. The following tables can be used:

        • FINS_CFIN_MDGME – CFIN: Definition of Mapping Entities (ID / Code)
        • MDGD_CCODEMAP – Client-dependent Code Mapping
• MDGD_CMAPCONTEXT – Specifies mapping contexts of code mapping (client-dependent)
        • MDG_BUS_SYS_TECH – Technical information of a Business System

        Fields Usage:

        MAPPING_ENTITY: MDG Mapping entity Name

        LIST_AGENCY_ID: Business System

        LOGSYS: Source Logical System

        SRC_VAL: Source Value

        TRG_VAL: Target Value

This CDS view can retrieve all standard and custom value mappings that are available in FINS_CFIN_MAP_MANAGE.
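Conceptually, consuming such a view boils down to a keyed lookup from (MAPPING_ENTITY, LOGSYS, SRC_VAL) to TRG_VAL. A minimal Python sketch of that lookup (the sample entity names, systems, and values are invented):

```python
# Toy lookup mirroring the fields the CDS view exposes:
# (MAPPING_ENTITY, LOGSYS, SRC_VAL) -> TRG_VAL; the data is made up.
value_mapping = {
    ("COMPANY_CODE", "ECCCLNT100", "1000"): "2000",
    ("COMPANY_CODE", "ECCCLNT100", "1100"): "2100",
}

def map_value(entity, logsys, src_val):
    # Fall back to the source value when no mapping entry exists
    return value_mapping.get((entity, logsys, src_val), src_val)

print(map_value("COMPANY_CODE", "ECCCLNT100", "1000"))  # -> "2000"
```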

        How to create ABAP CDS Views for Key Mapping?

In the MDG framework, the key mapping tables differ per mapping entity. In this case, we need to create one CDS view per mapping entity.

Below is an example for GL Account key mapping. The following tables can be used to read the GL Account key mapping:

        • UKMDB_AGCGLAM0 – UKM: Key Agency (Business System Details with reference to Mapping entity)
• UKMDB_MGPGLAM0 – UKM: Positive Mapping Groups (mapping key between source and target value)
        • UKMDB_KEYGLAM0 – UKM: Key (Stores the actual source and target values)
        • UKMDB_SCHGLAM0 – Schema ID
        • MDG_BUS_SYS_TECH – Business System to Logical System Mapping

GLAM0 is the identifier for GL Account key mapping. The table prefix remains the same, while the identifier varies from one mapping entity to another. For example, the Business Partner mapping tables are UKMDB_AGCBNSS0, UKMDB_MGPBNSS0, UKMDB_KEYBNSS0, and UKMDB_SCHBNSS0. In this case, the schema table UKMDB_SCHBNSS0 helps identify whether a mapping is for vendors, customers, etc.
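The naming convention described above can be captured in a small helper (illustrative Python; the role labels are my own, only the UKMDB_* prefixes come from the text):

```python
def key_mapping_tables(identifier):
    """Build the four UKM key-mapping table names for a mapping-entity
    identifier (e.g. 'GLAM0' for GL Account, 'BNSS0' for Business Partner)."""
    return {role: f"UKMDB_{prefix}{identifier}"
            for role, prefix in [("agency", "AGC"), ("groups", "MGP"),
                                 ("keys", "KEY"), ("schema", "SCH")]}

print(key_mapping_tables("BNSS0"))
```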

        Below is an example CDS View creation of Key Mapping for GL Account:

The hidden field in the screen above is the Central Finance Business System, and 907 is the identifier type for GL Account mapping.

        Fields Usage:

        BUSINESS_SYSTEM: Business System

        LOGSYS: Source Logical System

        KOKRS_SRC: Source Controlling Area

        SAKNR_SRC: Source GL Account

        BUKRS_SRC: Source Company Code

        KOKRS_TRG: Target Controlling Area

        SAKNR_TRG: Target GL Account

        BUKRS_TRG: Target Company Code

Mapping different domains in SAP Master Data Governance to SAP S/4HANA processes
https://www.erpqna.com/mapping-different-domains-in-sap-master-data-governance-to-sap-s-4hana-processes/
Wed, 30 Jun 2021 02:38:57 +0000

        This blog post is to help you to map the master data domains in SAP Master Data Governance with the SAP S/4HANA processes.

        In the following section, I will outline processes in SAP S/4HANA logically mapped one-to-one with master data domains in SAP Master Data Governance.

Process in SAP S/4HANA || Master data domain in SAP MDG

        Lead to Cash

        What is Lead to Cash?

Lead to cash is the name given to an end-to-end, top-level business process that begins with marketing and ends with revenue collection. Stages along the way include sales management, CPQ (configure, price, and quote), customer service, project management, order management, and revenue management.

        SAP Master Data Governance for Customer

        Customer Governance

        Source to Pay

        What is Source to Pay?

        Source-to-pay or S2P is the entire end-to-end process involved in procurement. It spans every process, right from spend management, strategic sourcing and vendor management to purchasing, performance management and accounts payable.

        SAP Master Data Governance for Supplier

        Supplier Governance

        SAP Master Data Governance for Financials

        Finance & Accounting Governance

        Design to Operate

        What is Design to Operate?

        Design to Operate is a process that seamlessly connects business processes across the entire product lifecycle to break down silos and provide visibility across design, planning, logistics and operations.

        SAP Master Data Governance for Material

        Material Governance

        Recruit to Retire

        What is Recruit to Retire?

        Recruit to Retire is a human resources process that includes everything that needs to be done over the course of an employee’s career with a company.

        There is no specific domain for HR in SAP Master Data Governance.

        What is SAP Master Data Integration?

SAP Master Data Integration is a multi-tenant cloud service for master data integration. It provides a consistent view of master data across a hybrid landscape and provides the functionality for multiple systems to edit the same data.

        Source: https://discovery-center.cloud.sap/serviceCatalog/master-data-integration?region=europe(frankfurt)

In the future, SAP Master Data Governance is going to be SAP Master Data Governance + SAP Master Data Integration.

        SAP MDG deployment options supporting business needs and landscapes

As the SAP solution for master data management, MDG remains the strategic solution for corporate-wide master data management, and its key capabilities continue to include consolidation, central governance, and data quality management.

        SAP Master Data Governance on SAP S/4HANA is a highly integrated application suite for enterprise master data management.

        SAP Master Data Governance, cloud edition does not replace SAP Master Data Governance on S/4HANA. It is an additional deployment option.

Federation of master data governance is a new way of organizing enterprise-wide master data management. It is about setting up corporate-wide master data management in the application landscape in such a way that globally relevant (core) attributes of a master data object, for example a Customer, are governed centrally and shared with all relevant applications across the enterprise, while decentralized applications locally govern the attributes that are specific to a business unit or group.

        SAP MDG Deployments going forward.

        Source: https://drive.google.com/file/d/1iNpILHi5Q8hambe-eKmEL0zAcwHnXvR2/view?usp=sharing

        The capabilities and the role of MDG do not change, what will change is how MDG is deployed.

        SAP Master Data Governance Capabilities

Summary: The SAP MDG domains Customer, Supplier, Financials, and Material can be used to consolidate and govern master data for SAP S/4HANA processes such as Lead to Cash, Source to Pay, and Design to Operate. SAP MDG is undergoing a number of innovations by being coupled with SAP Master Data Integration to provide the same master data view to different systems. Federated enterprise MDG is a new way to organize enterprise-wide master data management, covering both central and local data governance.

MDG – Edition Concept
https://www.erpqna.com/mdg-edition-concept/
Wed, 02 Jun 2021 10:41:15 +0000

In this blog post, we will look in detail at the concept of edition management in MDG. This post will also help you configure editions for reference master data.

        Edition Management in MDG

In MDG, two types of business objects can be governed. The first is standard business objects, such as Business Partner, Customer, and Vendor, whose master data is not time-dependent (though certain entities, such as addresses, can be time-dependent); these have only one instance of the master data in the system, and as soon as the changes are approved, they are replicated to the reuse area / downstream systems. The second is edition-based business objects, which are time-dependent in nature. For these business objects, replication to the reuse area / downstream systems can be configured based on a time interval.

An important criterion for using edition management in MDG is that the data model must be a flex model. By default, SAP delivers the MDG-F data model as an edition-dependent data model, and you can also create edition-based data models for custom objects.

The figure below shows the end-to-end process of governing edition-based master data in MDG. It shows the two variants of flexible edition management: you can schedule the changes with an edition, or you can replicate the changes directly, even if the business object is edition-based. The section below describes in detail how to configure replication for edition-based objects.

Using editions brings flexible ways to maintain master data and also brings transparency about the multiple changes made to the data.

        Using MDG Editions for Scheduling Changes to Master Data

        In the above example, we have two objects Object A, Object B which is maintained across multiple editions E1, E2, E3, E4 with various statuses of those editions.

        Below is the edition information with their validity period

Edition  Valid-From  Status
E1       01-01-2020  Released
E2       01-01-2021  Released
E3       01-04-2021  In-Process
E4       01-01-2022  In-Process

        In this example, Object A exists in all four editions: in E1 and E2 with Released status, and in E3 and E4 with In-Process status. Object B, by contrast, exists only in E1 and E3, with Released status in E1 and In-Process status in E3.

        If you look closely at this example, we have only defined the valid-from date for each edition. The valid-to date is calculated automatically based on the next-change edition.

        • Object A in E1 is valid from 01-01-2020 until 31-12-2020, because the next-change edition for Object A is E2, which starts on 01-01-2021.
        • In other words, when an object is changed in a subsequent edition, the valid-to date is calculated as the next-change edition's valid-from date minus 1 day.
        • In our case, the valid-from date of E2 is 01-01-2021.
        • So the valid-to date of Object A in E1 is (01-01-2021) − 1 day = 31-12-2020.

        So, in our MDG system, Object A has four instances with different validity dates, and Object B has two instances.
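        The valid-to derivation described above can be sketched in a few lines of Python. This is an illustration only, not SAP code; the edition dates and object assignments are taken from the example above, and all names are hypothetical:

```python
from datetime import date, timedelta

# Example data from the blog post: each edition's valid-from date,
# and which editions each object was changed in.
editions = {"E1": date(2020, 1, 1), "E2": date(2021, 1, 1),
            "E3": date(2021, 4, 1), "E4": date(2022, 1, 1)}
object_editions = {"Object A": ["E1", "E2", "E3", "E4"],
                   "Object B": ["E1", "E3"]}

def validity_intervals(obj):
    """Return (edition, valid_from, valid_to) for each instance of obj.

    The valid-to date of an instance is the valid-from date of the
    next-change edition minus one day; the last instance is open-ended.
    """
    eds = sorted(object_editions[obj], key=lambda e: editions[e])
    intervals = []
    for i, ed in enumerate(eds):
        valid_from = editions[ed]
        if i + 1 < len(eds):
            # next-change edition's valid-from minus 1 day
            valid_to = editions[eds[i + 1]] - timedelta(days=1)
        else:
            valid_to = None  # no later change edition yet
        intervals.append((ed, valid_from, valid_to))
    return intervals
```

        Running `validity_intervals("Object A")` yields four instances, the first of them valid from 01-01-2020 to 31-12-2020, matching the calculation above; Object B yields two instances.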

        How to maintain and manage editions

        An edition type must be created before the corresponding editions are created in MDG. Edition types are created via the following settings in MDGIMG.

        You need to provide the edition type (the name of the edition), the data model it is associated with, the validity of the edition, and a description. You need to define the fiscal year variant (the FV column) if you choose 'Period-Specific' as the edition validity. It is also mandatory to choose at least one entity type from the data model; otherwise, you cannot save the edition type.

        After the edition type is created, we need to create the edition itself. For creating editions, a separate Web Dynpro application, USMD_EDITION, is available in the system. You can access it through multiple MDG-F roles, as shown below.

        This opens a Web Dynpro page, where you need to define the edition along with the valid-from date and the replication timing.

        On the screen above, you need to provide the edition name, a description, the edition type (the one created in the previous step), the valid-from date (this is chosen automatically, based on the validity configured for the edition type), and the replication timing. There are three ways to define the replication of changes with respect to editions:

        1. Manually Started After Release of Edition – you have to trigger the distribution of CRs manually after you release the edition. Choose this option when you want to schedule the changes for an edition and distribute them all at once when the edition is released.
        2. On Final Approval of Change Request – replication is triggered immediately after the CR passes final-check approval. In this case, there should be no pending CRs before the edition is moved to Released status; all CRs must be approved and their changes replicated.
        3. Select in Each Change Request – the two options above are shown for each CR as a mandatory field. The user has to choose option 1 or option 2 during CR processing, and replication is planned accordingly, as shown in the screen below.
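        As a rough sketch of how these three timing options behave (hypothetical names and logic, not SAP code; the actual behavior is implemented inside MDG), the decision of whether a change is replicated automatically at final approval can be modeled like this:

```python
from enum import Enum

# Illustrative model of the three replication-timing options an
# edition can carry, as described in the list above.
class ReplicationTiming(Enum):
    MANUAL_AFTER_RELEASE = 1   # option 1: distribution started manually later
    ON_FINAL_APPROVAL = 2      # option 2: replicate on CR final approval
    SELECT_PER_CR = 3          # option 3: each CR picks option 1 or 2

def replicate_on_approval(timing, cr_choice=None):
    """Return True if the change replicates automatically on final approval."""
    if timing is ReplicationTiming.SELECT_PER_CR:
        # option 3 makes the choice a mandatory field on each CR
        if cr_choice is None:
            raise ValueError("Each CR must choose a replication option")
        timing = cr_choice
    # With option 1 the changes wait until the edition is released and
    # distribution is started manually; only option 2 replicates now.
    return timing is ReplicationTiming.ON_FINAL_APPROVAL
```

        For example, a CR under an edition with option 3 that selects option 2 would replicate at final approval, while option 1 always defers replication to the manual distribution step.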

        Generic functions for editions, such as displaying the associated change requests and changing the status, are delivered through a POWL application.

        Editions have three statuses:

        • Set in Progress – puts the edition into In-Process status
        • Marked for Release – the edition is in a pre-released status, but can still be reverted to In-Process
        • Released – the final state of an edition; further changes using this edition are no longer possible. Note that you can only release an edition if there are no pending change requests under it. If there are pending change requests, you can use the 'Reschedule' option to move the CRs to later editions. This can also be achieved with the standard report USMD_EDITION_MOVE_CREQUEST.
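        The status flow and the pending-CR restriction on releasing an edition can be summarized in a small sketch. The transition set here is an assumption for illustration; only the pending-CR rule is explicitly documented above:

```python
# Sketch of the edition status transitions described above. 'Marked for
# Release' can be reverted to 'In Process'; 'Released' is final and
# requires that no change requests remain open under the edition.
ALLOWED_TRANSITIONS = {
    ("In Process", "Marked for Release"),
    ("Marked for Release", "In Process"),   # revert is allowed
    ("Marked for Release", "Released"),
}

def set_status(current, target, pending_crs=0):
    """Move an edition to a new status, enforcing the rules above."""
    if target == "Released" and pending_crs > 0:
        # move the open CRs to a later edition first, e.g. with the
        # 'Reschedule' option or report USMD_EDITION_MOVE_CREQUEST
        raise RuntimeError("Cannot release: pending change requests exist")
    if (current, target) not in ALLOWED_TRANSITIONS:
        raise RuntimeError(f"Transition {current!r} -> {target!r} not allowed")
    return target
```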

        Edition types have to be associated with CR types in order to process changes with respect to editions. This link is established during CR type creation.

        Once this link is established, when you open a CR for any master data object, you will see an additional 'Validity Data' section in the header. It shows all the time-dependent information for that object in the system.

        Replication of Edition data:

        If the target system does not support time-dependent master data, you can schedule the changes to be sent to the target system on a specific day before the edition becomes valid. For this, you can use the standard report USMD_EDITION_REPLICATE.

        Otherwise, you can choose one of the options discussed in the previous section to configure the replication of edition records.


        The post MDG – Edition Concept appeared first on ERP Q&A.
