Pass URL-Filter on BW Live Hierarchy Nodes from Story to Story
Currently it is not possible to pass a URL filter on hierarchy nodes for BW Live models with the standard options.
In this blog I'll show how this can be accomplished with some simple scripting and story setup.
There is one story that contains input controls (or story filters) for specific dimensions with hierarchies, and we would like to pass the selected members of the hierarchy (be they nodes or leaves) to another story, where these values should be applied as the respective filters again.
The jump to the other story is bound to a specific action, e.g. a click on a button. This event will be used to derive the selected members and pass them as URL parameters.
The event contains the following script (example for two relevant dimensions on input controls):
var dim1_string = "";
// only retrieve selected members if not all are selected
if (IC_Dim1.getInputControlDataSource().isAllMembersSelected() === false) {
    var mem1 = IC_Dim1.getInputControlDataSource().getActiveSelectedMembers(10000);
    for (var i = 0; i < mem1.length; i++) {
        if (mem1[i].id.startsWith("PSEUDO")) { // really selected members
            if (mem1[i].id.endsWith("-")) { // is a node
                dim1_string = dim1_string.concat("/0HIER_NODE!".concat(mem1[i].displayId));
            } else { // is a leaf
                dim1_string = dim1_string.concat("/!".concat(mem1[i].displayId));
            }
        }
    }
}
var p_dim1 = UrlParameter.create("p_dim1", dim1_string);

var dim2_string = "";
// only retrieve selected members if not all are selected
if (IC_Dim2.getInputControlDataSource().isAllMembersSelected() === false) {
    var mem2 = IC_Dim2.getInputControlDataSource().getActiveSelectedMembers(100000);
    for (var j = 0; j < mem2.length; j++) {
        if (mem2[j].id.startsWith("PSEUDO")) { // really selected members
            if (mem2[j].id.endsWith("-")) { // is a node
                dim2_string = dim2_string.concat("/0HIER_NODE!".concat(mem2[j].displayId));
            } else { // is a leaf
                dim2_string = dim2_string.concat("/!".concat(mem2[j].displayId));
            }
        }
    }
}
var p_dim2 = UrlParameter.create("p_dim2", dim2_string);

NavigationUtils.openStory("STORY_ID", "PAGE_ID", [p_dim1, p_dim2]);
Important to note:
If the same should be done for story filters, the API "Application.getFileDataSource.getDimensionFilters" can be used; it directly returns only the chosen nodes/leaves with the correct BW syntax, and they can be concatenated into a string right away.
In the story on the receiving end, these parameters are maintained as script variables:
E.g. in the onInit script, the parameters can be split into the members and applied to an input control (example for the first dimension):
if (dim1.length > 0) {
    var dim1_param = dim1.slice(1); // remove the leading "/"
    var dim1_arr = dim1_param.split("/");
    IC_Dim1.getInputControlDataSource().setSelectedMembers(dim1_arr);
}
With this, the correct hierarchy filters from Story 1 are passed to Story 2.
Step-by-step Financial Statement Version Reporting with Currency Type Characteristic on S/4HANA + Embedded BW
During analysis of a business process for reporting, some of the required developments and how to build them are crystal clear. We tend to think of the required development elements on the fly, almost visually, while analyzing the process. But sometimes, even after working in a BI environment for 10 years, I find myself asking simple how-to questions and searching for a relevant blog article to answer them.
I decided to write this article to help those who have a similar requirement and to motivate myself to research the subject in depth.
In my case, a complex calculated/restricted key figure structure was needed. All the measures had to be filtered by financial statement version hierarchy nodes (a G/L account hierarchy based on an FSV created in Tx OB58), and users needed to analyze the report in 3 different currency types.
Enhancing "S/4HANA Financials: Actual Data from ACDOCA – /ERP/SFIN_V01" with the required fields and creating restricted key figures (RKFs) for each currency type wasn't a good option, because it would increase the development and maintenance effort. Plus, users would have to hide/unhide the relevant measures to change the currency type.
1- Check which financial reporting version you’ll be using.
Go to Tx OB58 and double click on the FSV statement.
Check the hierarchy.
2- Replicate the hierarchy via “HRRP_REP- FIN Runtime Hierarchy Replicator”
Go to Tx HRRP_REP. Enter the Hierarchy ID you want to replicate and execute the replication.
3- Create table function
Note: SAP is deprecating SQLScript-based calculation views. The recommended modelling approach is table-function-based calculation views, but if you are more comfortable with script-based calculation views, you can create one and migrate it using the save-as function.
Table function code:
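A minimal sketch of such a table function (hedged: the function name ZTF_FSV_AMOUNTS, the simplified field list and the leading-ledger filter are illustrative assumptions; what is factual is that ACDOCA stores the transaction, company code and global currency amounts in the columns WSL, HSL and KSL):

FUNCTION "ZTF_FSV_AMOUNTS" ( IP_CURTYPE NVARCHAR(2) )
RETURNS TABLE (
    RBUKRS NVARCHAR(4),   -- company code
    RACCT  NVARCHAR(10),  -- G/L account
    AMOUNT DECIMAL(23,2)  -- amount in the requested currency type
)
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
  RETURN
    SELECT rbukrs,
           racct,
           -- the input parameter simply picks the matching amount column
           SUM( CASE :IP_CURTYPE
                  WHEN '00' THEN wsl   -- transaction currency
                  WHEN '10' THEN hsl   -- company code currency
                  WHEN '30' THEN ksl   -- global currency
                END ) AS amount
    FROM acdoca
    WHERE rldnr = '0L'     -- leading ledger (assumption)
    GROUP BY rbukrs, racct;
END;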
4- Create Calculation View
Go to Hana Modeler perspective and create a graphical calculation view.
Note that the warning in the screenshot is caused by the fact that I already created a calculation view with the same name.
Add the table function to the aggregation component.
Add all the fields to the output.
Create an input parameter
Add a list of values to the parameter (i.e. 00-Transaction Currency, 10-Company Code Currency, 30-Global Currency)
Assign table function parameter to calculation view parameter using semantics details.
5- Create the virtual InfoCube (or CompositeProvider)
Create a characteristic for input parameter
Create a virtual InfoCube based on the HANA calculation view.
Details
Add all required InfoObjects to the InfoCube.
Assign all HANA calculation view fields to InfoObjects.
6- Create the report
Create a variable for the currency type selection and add it to the report.
Create a measure with hierarchy node filter.
7- Validate the report by running it twice with different currency types.
Report output with the "Currency type = 10-Company Code Currency" selection.
Report output with the "Currency type = 30-Global Currency" selection.
Optimizing models in BW/4HANA mixed scenarios
I've had the opportunity to run some tests at a customer with SAP BW/4HANA, with the goal of improving the performance of their reports. This customer has a modern infrastructure, SAP BW/4HANA (BW 7.5, HDB 2.0), with intensive reporting through AFO, Web Intelligence and other third-party tools.
The reporting models have been built using mixed (or hybrid) scenarios following the LSA++ methodology. The models are heterogeneous: some are based mainly on HANA views (using BW "only" for the acquisition layer, with a CompositeProvider and a query on top), while others use complex CompositeProviders.
Using a copy of the production environment, we wanted to try different actions that require low effort and should improve reporting performance. These actions were designed purely from a technical point of view (no functional topics were analyzed).
Our system holds a high number of records (billions in some cases) and is loaded 4 times a day.
We compared performance before and after applying each action.
There are some acknowledged best practices that are NOT explained in detail in this post. The goal of this post is to present the results obtained after applying these techniques, in order to evaluate them.
Our BW/4 system has a few ADSOs much bigger than the others, so it is logical to start by focusing on these. I have to admit that the results surprised me by being quite different from what I expected: we gained a relevant improvement only in certain circumstances.
About partitioning… physical partition or logical partition?
We tested both physical partitioning and logical partitioning (with semantic groups), with the following performance results:
My conclusions here are:
(*) Obviously, if we are partitioning by fiscal period and our query asks for fiscal year, we have to change this (there are ways to do so).
Important: the models tested have a few ADSOs much bigger than the others, but these models are complex too; that is, they don't contain only one big ADSO but also several joins and other ADSOs. This is why the gain from partitioning was modest in some cases.
For the same reason, removing historical data from these ADSOs didn't bring a relevant improvement: we removed 20% of the historical data and only obtained a 2-5% performance gain.
A different case was a relatively simple model, with few joins and only one relevant ADSO holding 1 billion records. We detected that this ADSO was partitioned by fiscal year while the queries were using fiscal period. After changing the partitioning criteria, the gain was 8%-35%, depending on the values selected in the query.
Finally, note that we had some problems with the remodelling process of these big ADSOs. In the end we decided to first make a copy of them, drop the data from the ADSO, apply the partitioning and reload from the copy.
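For reference, at plain SQL level the two physical options look roughly like this (a hedged illustration with placeholder names; in practice the partitioning of an ADSO is maintained through the BW tools, as described above):

-- Hash partitioning on a key column, e.g. 4 partitions:
ALTER TABLE "/BIC/AZSALES2" PARTITION BY HASH ("/BIC/ZCUSTOMER") PARTITIONS 4;

-- Range partitioning by fiscal period, matching the query filter:
ALTER TABLE "/BIC/AZSALES2" PARTITION BY RANGE ("FISCPER")
  (PARTITION '2018001' <= VALUES < '2019001',
   PARTITION '2019001' <= VALUES < '2020001',
   PARTITION OTHERS);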
SAP recommends that data be filtered as early as possible using input parameters. This means, for example, that if you have a projection reading an ADSO, you should have an input parameter in this projection (and/or a filter if possible).
Some of the models had the filters (variables) in the BW query on top. By changing the model to use an input parameter instead of a query filter, we obtained about 5-20% improvement, depending on the model and the data requested.
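As an illustration of the idea (a hedged sketch; all names are placeholders), the filter travels into the model through an input parameter instead of being applied by the BW query after the view has read everything:

-- Table function that applies the filter at the lowest level,
-- so HANA prunes the data before any join or aggregation.
FUNCTION "ZTF_SALES_FILTERED" ( IP_FISCPER NVARCHAR(7) )
RETURNS TABLE ( FISCPER NVARCHAR(7), CUSTOMER NVARCHAR(10), AMOUNT DECIMAL(17,2) )
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
  RETURN SELECT fiscper, customer, amount
         FROM "/BIC/AZSALES2"           -- placeholder ADSO active table
         WHERE fiscper = :IP_FISCPER;   -- filtered as early as possible
END;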
Sometimes we assume that primary keys and indexes are not relevant in HANA environments. However, SAP recommends that all relevant joins be done on key columns or indexed columns.
We checked all the main joins and added indexes where needed. I have to admit that the results were not good: in most cases the gain wasn't relevant, or performance was even slightly worse.
Only in one case, where some fields were used in several joins within a view, did we gain about 17% in performance. This join was at customer level on an ADSO with 700 million records.
Though this action won't bring positive results most of the time, I recommend checking it, since it is easy to test:
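Creating and dropping an index on a join column of an ADSO active table is a one-liner in each direction (placeholder names):

-- Index on the customer join column of the active table
CREATE INDEX "ZIX_SALES_CUSTOMER" ON "/BIC/AZSALES2" ("/BIC/ZCUSTOMER");

-- Easy to revert if the measurements show no gain
DROP INDEX "ZIX_SALES_CUSTOMER";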
Honestly, I don't completely understand how this flag works. However, we detected that in one HANA view with bad performance the problem was concentrated in a single join, and this join apparently was no more complex than the rest. After activating this flag, we gained about 50% in performance.
From experience in other projects, I can affirm that if you have a CompositeProvider with unions, it is a good idea to replace these BW unions with HANA unions. Yes, I know, you're thinking that it doesn't make sense. But this is my experience, and the gain is relevant.
At this customer, only one model was in this situation, and the change wasn't relevant, I think because, depending on the selection in the query, not all the ADSOs were being read at the same time. However, I still recommend this action.
In parallel to my tests, some colleagues in a project discovered an interesting thing: one of the models had a CompositeProvider on top in which some characteristics had a lot of navigational attributes:
They decided to add a new CompositeProvider above the "old" one, reading it and "remapping" the navigational attributes to their own InfoObjects:
In this way, depending on the number of navigational attributes used in the query, the improvement was about 25%-45%.
Obviously, you don't need to read this blog to know that if you remove a join and instead add the required fields during the load of your ADSO (or create a new ADSO in the EDW layer), query performance will be better.
In general, this is NOT a good practice or approach in BW/4HANA. The situation must be analyzed carefully, but sometimes it can be highly recommended, especially when the compression rate achieved by creating the new ADSO is high.
After some tests, we concluded that in one area, where the EDW layer holds billions of rows at high granularity, this approach must be considered in future projects.
SAP recommends using inner joins instead of left joins whenever possible. Personally, I'm not so sure about this; it probably depends on the model.
In the particular case of this customer, we changed some left joins to inner joins, without relevant improvement.
Getting MAX/MIN Date on BW Query Designer (BEx), Compatible with BusinessObjects
The subject of maximum and minimum dates has been on my mind for quite a while. I searched for it a couple of years back and did not find a significant answer other than modelling the date as a key figure and using query conditions to retrieve the Top N based on that date key figure (or Bottom N for the minimum). This solution works only at the Analyzer or Analysis for Office level; once the query is consumed by Web Intelligence, for example, conditions do not work, so consultants mostly preferred doing this at the report level.
I am more of a back-end guy when it comes to altering data sets or creating calculations, and there are many reasons for that; the most important ones are performance and integrity. I much prefer doing whatever I can to avoid calculations at the front-end level, and I believe there is a way to do everything in BW.
I had the idea of calculating the max GR date for each material in each company code at the Query Designer (BEx) level and, most importantly, in a way that is compatible with Web Intelligence.
The solution is very simple. In two words: exception aggregation!
Let me show you how.
BW version:
7.57 on HANA with the BW/4HANA Starter add-on; this add-on restricts the use of any object that is not compatible with the migration to BW/4HANA, which makes the system a simulation of a BW/4HANA system.
I will be using the Inventory HANA Optimized BI Content as the base of the required calculation.
0CALDAY is the date I will be doing this exercise on; it is mapped to the posting date in my dataflow. A GR date is the posting date restricted to movement type 101 and a non-null vendor; we will come to that later.
Navigate to 0CALDAY > right click, New > Variable
You will be prompted with the screen below. Give the variable a name and a description, and make sure that Type of Variable is Formula, Processing By is Replacement Path, and the Reference Characteristic is 0CALDAY.
Here is how your variable should look:
General tab:
Replacement Path tab: make sure to set the Offset Length to 8 and to untick After Aggregation:
Currencies and Units tab, in Dimension, select Date:
Save and close.
Now, at the query level, create a new formula. In the Formula tab, under Groups, select Variables and pick the variable we just created, ZRPF_0CALDAY.
Finally, go to the General tab > Properties > Aggregation and select Maximum. For the reference characteristics: in my scenario I needed to find the max date for each material in each company code (the max date had to reset per company code), so I had to add both Company Code and Material, since material is not the most granular dimension in the movements ADSO.
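In SQL terms, what this formula computes is roughly the following (a hedged analogy only; the ADSO and field names are illustrative):

SELECT comp_code,
       material,
       MAX(posting_date) AS max_gr_date   -- the Maximum exception aggregation
FROM "/BIC/AZMOVES2"                      -- illustrative movements ADSO
GROUP BY comp_code, material;
-- the restricted key figure created later effectively adds:
-- WHERE movement_type = '101' AND vendor IS NOT NULL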
Now, in the final query, put Company Code and Material in the rows and the formula we just created in the columns.
Here’s the output on Web Intelligence, a single line for each material and company code with the maximum GR date:
That is it, as simple as that. The following steps are optional, in case you want to make things cleaner and reusable at the InfoProvider level.
A GR date is the posting date of a material movement of type 101 where the vendor is not null, hence we need to restrict that formula. This cannot be done while the formula is local at the query level; we need to create a calculated key figure and then a restricted key figure.
Calculated Key Figure:
From the query designer navigate to the Infoprovider tab, expand Reusable components > right click Calculated Key Figure > New Calculated Key Figure:
Calculated Key Figure, same definition as the formula above:
Save and Close.
Restricted Key Figure:
Same place as the Calculated Key Figure, right click Restricted Key Figure node > New Restricted Key Figure:
Give it a name and a description, then navigate to the Selection tab and drag and drop the calculated key figure together with the dimensions you wish to restrict on. Note that you should pick only the static dimensions that define your restricted key figure: as mentioned above, a GR date is the posting date for movement type 101 where there is a vendor, coming from the movements InfoProvider. The date, or any other dynamic filter, should not be added here. This is a design consideration, so that you stay free to define selection periods per query, for example; restricting on 0CALDAY here is of course technically possible:
Now you have a new key figure that can be restricted to whatever dimension values you need at the ADSO level, and you can use it in any query you design.
I was able to successfully develop a stock aging report using this method; a single query returned the maximum GR date, the material creation date, the total stock quantity and the total stock cost.
Looking forward for the next available value in a table without using LEAD or LAG function or LOOPING in SAP HANA
This blog covers the calculation of the "Production Ratio" in Supply Chain Management for the monthly bucket in SAP HANA.
The client wanted to see the Production Ratio of a year, for each month, for a particular Product, Location and Product Version combination. In my case the Production Ratio was calculated as (Quantity / Total Quantity * 100) for each month. The catch is that when there is no value for Quantity and Total Quantity in a month, we have to look forward to the upcoming months for values.
Use case: reapplying the production/transportation quota to the most relevant BOM/lane. This scenario is applicable in almost all supply chain planning projects where you take a constrained/unconstrained supply plan and extract production and transportation quotas for inventory planning.
Let me first introduce the reference table, which has six columns: PRODUCT, LOCATION, P_VERSION (product version), QTY (quantity), TOTAL_QTY (total quantity) and DATE.
This table contains the ordered quantities of a product (i.e. material) from a location (i.e. plant) for an entire year. If you look at the table, there is a zero (or null) quantity ordered in Jan. In Feb, we ordered a quantity of 10 for product version 001 and 20 for product version 002, so the total order quantity is 30. Similarly, we have ordered-quantity values for the rest of the months.
Now, when I say we have to look forward whenever there is a null value for QTY and TOTAL_QTY: for Jan, we should take the values from Feb (the first month with non-null values after Jan). Hence, for Jan there will be two product versions, 001 and 002, with their respective ordered quantities from Feb. Similarly, for Mar, Apr, May and June, July is the month to look to for values.
In simple words,
To achieve this, I have used a table function, which can then be consumed in a calculation view to get the result.
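Before walking through the steps, here is a hedged skeleton of the table function (the name, field types and placeholder final SELECT are illustrative); each numbered step below contributes one of the statements T1 to var_out:

FUNCTION "ZTF_PROD_RATIO_LOOKFWD" ( )
RETURNS TABLE (
    PRODUCT   NVARCHAR(40),
    LOCATION  NVARCHAR(20),
    P_VERSION NVARCHAR(3),
    QTY       DECIMAL(17,3),
    TOTAL_QTY DECIMAL(17,3),
    "DATE"    DATE
)
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
  -- Step 1
  T1 = SELECT * FROM BASE_TABLE;
  -- The following steps build T2 ... T7 and var_out, as shown below
  var_out = SELECT "PRODUCT", "LOCATION", "P_VERSION", "QTY", "TOTAL_QTY", "DATE"
            FROM :T1;   -- placeholder; replaced by the final SELECT DISTINCT
  RETURN :var_out;
END;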
Select all the values from the referenced table or VDM.
T1 = Select * From BASE_TABLE;
Here, we select only the "PRODUCT", "LOCATION", "TOTAL_QTY" and "DATE" fields from table T1. FLAG is set to 0 where the value of "TOTAL_QTY" is NULL, else to 1.
T2 = SELECT
"PRODUCT",
"LOCATION",
"DATE",
"TOTAL_QTY",
CASE When "TOTAL_QTY" Is Null
Then 0
ELSE 1
END AS "FLAG"
FROM :T1
order by "DATE";
Now, apply a running sum over the "FLAG" column. You will notice that the value of the running-sum column, "FLAG_SUM", changes whenever a row with a non-null "TOTAL_QTY" occurs. How this helps us will become clearer in the next step.
T3 = SELECT
"PRODUCT",
"LOCATION",
"DATE",
"TOTAL_QTY",
"FLAG",
SUM("FLAG") OVER (PARTITION BY "PRODUCT","LOCATION" ORDER BY "DATE") AS "FLAG_SUM"
FROM :T2
order by "DATE";
Apply the ROW_NUMBER() function partitioned by "PRODUCT", "LOCATION" and "FLAG_SUM", ordered by "DATE" in descending order.
Now, if you look at the result, the "ROW_NUM" column gives the number of rows (here, months) to look forward to get the value. For example, the value for Jan is supposed to be picked from Feb: Jan(1) + ROW_NUM(1) = Feb(2). Similarly, for March, April, May and June, the month to look to for the value is July: March(3) + ROW_NUM(4) = July(7), and so on.
NOTE: the ROW_NUMBER() ordering on "DATE" must be descending.
T4 = SELECT
"PRODUCT",
"LOCATION",
"DATE",
"TOTAL_QTY",
"FLAG",
"FLAG_SUM",
ROW_NUMBER() OVER (PARTITION BY "PRODUCT","LOCATION","FLAG_SUM" ORDER BY "DATE" DESC) AS "ROW_NUM"
FROM :T3
order by "DATE";
Add the number of months from the "ROW_NUM" column to "DATE" to get a new column, "DATE_NEW", which is the month to look to for the next available value (as explained above).
T5 = SELECT
"PRODUCT",
"LOCATION",
"DATE",
"TOTAL_QTY",
"FLAG",
"FLAG_SUM",
"ROW_NUM",
TO_DATE (ADD_MONTHS( "DATE","ROW_NUM")) AS "DATE_NEW"
From :T4
order by "DATE";
Now, apply a left outer join between tables T1 and T5, keeping T1 (our base table) on the LEFT and T5 (which has the "DATE_NEW" column) on the RIGHT. With this, each row carries the month to look to under the "DATE_NEW" column.
NOTE: you might have noticed that we have two rows for P_VERSION 001 in Feb (2019-02-01). These will be handled further on.
T6 = select
T1."PRODUCT",
T1."LOCATION",
T1."P_VERSION",
T1."QTY",
T1."TOTAL_QTY",
T1."DATE",
T5."DATE_NEW"
From :T1 AS T1 LEFT OUTER JOIN :T5 AS T5
on T5."PRODUCT" = T1."PRODUCT"
AND T5."LOCATION" = T1."LOCATION"
AND T5."DATE" = T1."DATE"
order by "DATE";
Apply a left outer join between tables T6 and T1, keeping T6 on the LEFT and T1 (our base table) on the RIGHT. With this, each row carries the looked-up values of "P_VERSION", "QTY" and "TOTAL_QTY" under the "P_VERSION1", "QTY1" and "TOTAL_QTY1" columns, respectively.
NOTE: as mentioned above, don't worry about the duplicate entries; we are going to handle them soon.
T7 = SELECT
T6."PRODUCT",
T6."LOCATION",
T6."P_VERSION",
T6."QTY",
T6."TOTAL_QTY",
T6."DATE",
T1."P_VERSION" AS "P_VERSION1",
T1."QTY" AS "QTY1",
T1."TOTAL_QTY" AS "TOTAL_QTY1"
FROM :T6 AS T6 LEFT OUTER JOIN :T1 AS T1
on T1."PRODUCT" = T6."PRODUCT"
AND T1."LOCATION" = T6."LOCATION"
AND T1."DATE" = T6."DATE_NEW"
order by "DATE";
Select the required fields “PRODUCT”, “LOCATION”, “P_VERSION”, “QTY”, “TOTAL_QTY” and “DATE”.
“SELECT DISTINCT” will remove the duplicate entries (as mentioned above).
Now, "P_VERSION1" is picked up only if "P_VERSION" is null; otherwise "P_VERSION" remains as it is. The same applies to "QTY" and "TOTAL_QTY". This gives us our final output.
var_out =
SELECT
DISTINCT
"PRODUCT",
"LOCATION",
CASE When "P_VERSION" Is Null
Then "P_VERSION1"
ELSE "P_VERSION"
END AS "P_VERSION",
CASE When "QTY" Is Null
Then "QTY1"
ELSE "QTY"
END AS "QTY",
CASE When "TOTAL_QTY" Is Null
Then "TOTAL_QTY1"
ELSE "TOTAL_QTY"
END AS "TOTAL_QTY",
"DATE"
FROM :T7
order by "DATE",
"P_VERSION";
With this, we have the Quantity and Total Quantity values for each month, which allows us to calculate our Production Ratio for the monthly bucket.
SAP HANA Based Transformations (Processing transformations in HANA) aka ABAP Managed Database Procedures (AMDP)
As most of us have worked on SAP BW and written ABAP routines in BW transformations to derive business logic, we have often noticed performance issues while loading data into a DSO, InfoCube or master data InfoObject.
There could be numerous reasons for this:
An ABAP-based BW transformation loads the data, package by package, from the source object into the application layer (ABAP). The transformation logic is executed inside the application layer, and the transformed data packages are shipped back to the database server, which writes the result packages into the target object. The data is therefore transmitted twice between the application layer and the database layer.
During processing of an ABAP-based BW transformation, the source data package is processed row by row.
In a HANA-based BW transformation, on the other hand, the data can be transferred directly from the source object to the target object within a single processing step. This eliminates the data transfer between the database layer and the application layer.
The complete processing takes place in SAP HANA.
Note: some user-defined formulas in transformations can prevent the code pushdown to HANA.
Step 1: Create an ADSO, say ZSOURCE, with active data and change log, in the BW Modelling Tools in Eclipse.
Step 2: Create a transformation between the ADSO and the DataSource. Create an expert routine via the menu:
Edit -> Routine -> Expert Routine.
A pop-up will ask for confirmation to replace the standard transformation with an expert routine. Click the "Yes" button.
The system will ask for an ABAP routine or an AMDP script. Click on AMDP Script.
An AMDP class will be generated, with the default method PROCEDURE and the marker interface IF_AMDP_MARKER_HDB.
Step 3: Open the ABAP Development Tools in Eclipse, connected to the BW on HANA system in the ABAP perspective, to change the HANA SQLScript.
Important points to note:
For example, if we want to look up another DSO or master data tables, or read the target table, we need to list these DSOs and master data tables in the USING clause of the method. See the example below.
METHOD PROCEDURE BY DATABASE PROCEDURE
FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY USING /BIC/AZSOURCE2.
For better understanding, I have explained 3 real-world scenarios showing how to create and write the code in AMDPs:
1. Create an ADSO, say ZSOURCE, and add fields to it.
2. Create an SAP HANA transformation with an expert routine (steps explained above).
3. Write an expert routine in the method PROCEDURE to assign the source fields to the target fields.
In the code below, we select the fields from inTab and assign them to the exporting parameter outTab (no additional logic in the transformation).
Code:
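The body of the PROCEDURE method then contains only a 1:1 mapping (a hedged sketch using the field names of this example; the technical columns RECORD and SQL__PROCEDURE__SOURCE__RECORD are part of the generated signature):

-- straight pass-through: no additional logic in the transformation
outTab = SELECT "/BIC/ZFIELD1",
                ' ' AS "RECORDMODE",
                "/BIC/ZFIELD2",
                "/BIC/ZFIELD3",
                ' ' AS "RECORD",
                ' ' AS "SQL__PROCEDURE__SOURCE__RECORD"
         FROM :intab;

-- return an empty error table, since no checks are performed here
errorTab = SELECT ' ' AS "ERROR_TEXT",
                  ' ' AS "SQL__PROCEDURE__SOURCE__RECORD"
           FROM :outTab
           WHERE 1 = 0;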
Output:
If we built this logic in ABAP, we would need to match the target records one by one against the source object data for all data packages, and update the deletion flag with X if a record is no longer present in the source object. This could lead to performance issues with high data volumes, and the SQLScript logic is much simpler than the equivalent ABAP.
In SQL, we can achieve this by using SQL functions:
1. The first step is to declare the variable lv_count to store the count of entries in the target table. This is required to know whether it is the first load into the DSO or a subsequent one.
Based on that, we will build our logic.
For the first data load, the value of LV_COUNT will be zero. In this case, we need to transfer all the source data as-is to the target DSO with a blank deletion flag, as there is nothing to mark yet.
For the subsequent loads, we need to compare the existing records in the target DSO with the latest records coming from the source and update the deletion flag accordingly.
Target DSO Contents after the first load:
Say that after the first load, the record with field1 = 500 is deleted from the source system. In that case, we need to set the deletion flag to X.
We can declare the variable using a DECLARE statement:
declare lv_count varchar(3);
2. For the subsequent loads, we need to compare the existing records in the target with the source, as mentioned above.
Here lv_count is the variable and /BIC/AZSOURCE2 is the active table of the target DSO, whose content I have copied into the temporary table it_target.
For this case, I have counted the number of records in the target DSO and put the result into the lv_count variable.
it_target is the temporary table declared to store the content of the target DSO, which is used to compare against the records in the source package.
if lv_count > 0 then
/* copy the latest records from source to temporary table it_zsource1 */
it_zsource1 will have:
* it_target holds the target DSO contents
* :intab holds the source package contents
it_zsource2 will have:
Union of it_zsource1 and it_zsource2:
it_zsource3 will have the data shown below; this is the final output we need to load into outTab:
This is the final output we require, assigned to outTab:
The ELSE branch below covers the first load, when lv_count = 0, and transfers the records from the source as-is to the target.
Sample code below:
-- INSERT YOUR CODING HERE
/* outTab =
select "/BIC/ZFIELD1",
' ' "RECORDMODE",
"/BIC/ZFIELD2",
"/BIC/ZFIELD3",
' ' "/BIC/ZFLAGDEL",
' ' "RECORD",
' ' "SQL__PROCEDURE__SOURCE__RECORD"
from :intab; */
declare lv_count varchar(3); /* declare a variable */
it_target = select * from "/BIC/AZSOURCE2";
select Count(*) INTO lv_count from :it_target; /* count number of records in target */
if lv_count > 0 then /* if new records are added in source */
it_zsource1 = select
sourcepackage."/BIC/ZFIELD1" as field1,
sourcepackage."/BIC/ZFIELD2" as field2,
sourcepackage."/BIC/ZFIELD3" as field3,
'' as zfield1
from :intab as sourcepackage;
/* if existing records deletes or updated*/
it_zsource2 = select ztarget."/BIC/ZFIELD1" as field1,
ztarget."/BIC/ZFIELD2" as field2,
ztarget."/BIC/ZFIELD3" as field3,
sourcepackage."/BIC/ZFIELD1" as zfield1
from :it_target as ztarget left join
:intab as sourcepackage on
ztarget."/BIC/ZFIELD1" = sourcepackage."/BIC/ZFIELD1";
/* union of it_zsource1 and it_zsource 2 */
it_zsource3 = select *, '' as flag from :it_zsource1
union all
select *, 'X' as flag from :it_zsource2
where zfield1 is null;
outTab =
select
field1 as "/BIC/ZFIELD1",
' ' "RECORDMODE",
field2 as "/BIC/ZFIELD2",
field3 as "/BIC/ZFIELD3",
flag as "/BIC/ZFLAGDEL",
'' "RECORD",
'' "SQL__PROCEDURE__SOURCE__RECORD"
from :it_zsource3;
/* for first time load or when the DSO is empty */
else
outTab =
select "/BIC/ZFIELD1",
' ' "RECORDMODE",
"/BIC/ZFIELD2",
"/BIC/ZFIELD3",
' ' "/BIC/ZFLAGDEL",
' ' "RECORD",
' ' "SQL__PROCEDURE__SOURCE__RECORD"
from :intab;
end if;
errorTab = select
' ' "ERROR_TEXT",
' ' "SQL__PROCEDURE__SOURCE__RECORD"
from :outTab;
Key1, Key2 and Field3 should hold the values from the source table.
Field4x should hold the latest value of Field4 from the lookup table, based on the dates.
Field5x should hold the sum of all Field5 values for the same keys.
Field6x should hold the value previous to the latest transaction.
In ABAP routines, reading the previous transactions and updating the value of the field would be very complex. But in SQLScript we can use window functions, with which such complex computations can be done in a simpler way.
There are many window functions available in SAP HANA SQLScript:
And many more.
In my scenario, I will use the window functions RANK() and LEAD() to achieve the required output.
Define the it_lookup temporary table and copy the records from the lookup DSO.
So it_lookup will have this data:
Then I ranked the records using the RANK() window function on the date in descending order, and used the LEAD() window function to get the value following the latest transaction.
it_tab1 will have:
For the Field5x requirement, we need to sum the Field5 values for the same keys.
Inner join it_tab1 and it_tab2 where rank = 1.
it_tab3 will have:
This is the required output; hence we assign the fields from it_tab3 to outTab. A hedged sketch of the whole sequence follows:
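A hedged SQLScript sketch of the whole scenario-3 sequence (field and table names are illustrative and follow the description above; joining the result back to the source package works as in scenario 2):

-- it_lookup: copy of the lookup DSO active table
it_lookup = SELECT * FROM "/BIC/AZLOOKUP2";

-- it_tab1: rank each key combination by date, newest first; LEAD() then
-- returns the value of the transaction previous to the latest one
it_tab1 = SELECT key1, key2, field3, field4, "DATE",
                 RANK() OVER (PARTITION BY key1, key2 ORDER BY "DATE" DESC) AS rnk,
                 LEAD(field4) OVER (PARTITION BY key1, key2 ORDER BY "DATE" DESC) AS field6x
          FROM :it_lookup;

-- it_tab2: Field5x = sum of all Field5 values for the same keys
it_tab2 = SELECT key1, key2, SUM(field5) AS field5x
          FROM :it_lookup
          GROUP BY key1, key2;

-- it_tab3: keep only the latest transaction (rnk = 1) and join in the sums
it_tab3 = SELECT t1.key1, t1.key2, t1.field3,
                 t1.field4 AS field4x, t2.field5x, t1.field6x
          FROM :it_tab1 AS t1
          INNER JOIN :it_tab2 AS t2
            ON t1.key1 = t2.key1 AND t1.key2 = t2.key2
          WHERE t1.rnk = 1;

outTab = SELECT key1, key2, field3, field4x, field5x, field6x
         FROM :it_tab3;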