OData and SAP NetWeaver Gateway. Part VIII. SAP’s Love for OData – a Tale of the Friendly ABAPer

Let’s scratch that, come back to reality, and start over again.

Once upon a time, a long time ago in a faraway land, there lived a Functional Consultant. His client had a rather peculiar Sales Order process. His job, however difficult it may seem, was to configure the SAP sales order transaction to map it to the client’s process. Alas, whatever he did, whichever switches he turned on, whichever IMG node he checked, whatever table entry he changed, the system just would not work the way he wanted.

Finally, raising his hands in despair, he enlisted the help of the famous (or vaguely notorious) Technical Consultant, also known as the Friendly ABAPer: the most under-rated member of any project team.

The ABAPer put all his skills to use and within a short time (let’s say a few weeks, including development, testing and so on) put in some enhancement code which finally tamed the dragon. The Sales Order transaction now worked just the way the client wanted it.

Does the above story sound familiar?

If you are from the SAP world, and particularly from a technical background, then you should be very familiar with this kind of situation. Nothing out of the ordinary here, right? Only a few dramatic Hollywood effects.

This story is about a familiar term known as Gap Analysis.

When a requirement cannot be met with standard SAP configuration, it is termed a gap. There are various ways to fill these gaps: custom development, enhancements, customizations and so on.

So what’s new here? And where is OData?

Well, the story is not over yet. The “and they lived happily ever after” is yet to come. So, let us continue the story.

Once the dragon was slain, the client was happy and the Consultants were praised, until one day the client unleashed another dragon on them.

This time a much mightier one than the first. And it was called dotNet (.NET). Yes, you heard it right!

The client wanted to perform the same sales order creation, but through their own native, in-house developed .NET application. If you remember, .NET and SAP don’t get along very well out of the box. .NET is Microsoft’s product and SAP is, well… SAP’s.

Don’t ask me why the client wanted to support .NET.

Maybe some of its users were simply more comfortable with the .NET screens, or they wanted a better entry screen than the SAP GUI, or they just had an expert in-house .NET development team. We don’t know, and we will not try to decipher that here. Instead, we will see what happened next in the story.

So, this time the ABAPer was at a loss. He was skilled in ABAP but not in .NET. So he approached the gods of .NET, and together they strategized and decided to call upon the all-powerful API (Application Programming Interface) of SAP known as the BAPI.

BAPIs are Business APIs provided by SAP which can be used to interface third-party systems with SAP.

At their heart, BAPIs are modeled as function modules which are remote-enabled (RFCs for short), which means they can be called or invoked from remote systems, including third-party systems.

Think of RFCs as tiny openings, or windows, into SAP through which other systems can call in and access data.

Hence, using all the knowledge they had, the ABAPer and the .NET guru worked together for the next couple of months to build an application which would let the user create Sales Orders directly through the .NET application.

The data would pass over the RFC protocol from SAP straight into the .NET application.

And finally, when the whole application was ready, they presented the solution to the client, and the client was as happy as could be. Everything worked like a charm.

But the story does not end here. Of course, how could it, when OData has still not been introduced?

So, days went by, both dragons were tamed and they served the client well. But the happiness was not meant to last, because the client was acquired by another company which had a Java application that they wanted to interface with SAP.

This time the ABAPer had more skills than the first time around, so he started with an enhancement, but that did not work.

SAP and Java don’t talk to one another out of the box anyway. So he thought of using the BAPI technique, but that did not work smoothly either, so he tried another way. He read a lot, and after much research he understood that he would need to build a JCo (Java Connector) connection to fetch data from the SAP system.
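To make that last step a little more concrete, here is a minimal sketch of what such a JCo call could look like, using SAP JCo 3 to invoke an RFC-enabled BAPI. The destination name ABAP_AS is a hypothetical configuration entry (its connection properties would live in a corresponding .jcoDestination file), and the BAPI parameters and field names are used purely for illustration; your release and scenario may differ.

```java
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoTable;

public class SalesOrderListViaJCo {

    public static void main(String[] args) throws JCoException {
        // "ABAP_AS" is a hypothetical destination configured outside the code
        // (host, system number, client, user, password, ...).
        JCoDestination destination = JCoDestinationManager.getDestination("ABAP_AS");

        // Look up the RFC-enabled function module in the backend repository.
        JCoFunction function = destination.getRepository().getFunction("BAPI_SALESORDER_GETLIST");
        if (function == null) {
            throw new IllegalStateException("BAPI_SALESORDER_GETLIST not found in the backend");
        }

        // Fill the importing parameters (names shown here are illustrative)
        // and fire the call over RFC.
        function.getImportParameterList().setValue("CUSTOMER_NUMBER", "0000001000");
        function.getImportParameterList().setValue("SALES_ORGANIZATION", "1000");
        function.execute(destination);

        // Loop over the table parameter returned by the BAPI.
        JCoTable orders = function.getTableParameterList().getTable("SALES_ORDERS");
        for (int i = 0; i < orders.getNumRows(); i++) {
            orders.setRow(i);
            System.out.println(orders.getString("SD_DOC") + "  " + orders.getString("MATERIAL"));
        }
    }
}
```

The .NET route from the earlier part of the story works on the same principle, just with SAP’s .NET Connector instead of JCo.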

OK, so here is where I need to bring you back to reality.

As we can see from the story above, three scenarios were presented. The client wanted to access the SAP database in three ways:

i. Directly within SAP, through ABAP
ii. From a .NET application, using RFC
iii. From a Java application, using a JCo connection

This is a usual scenario in any software development. What initially starts out as a simple requirement slowly grows into an intertwined network of software components, and this leads to the building of something known as point-to-point solutions.

From the diagram below, you can see that as systems become complex, the number of data consumers in a typical overall SAP IT landscape grows. Also, unlike in earlier times, SAP data is now stored in multiple sources: data could sit in the SAP BW system, the company could be investing in the HANA database, and the client could also be using SRM, CRM and other systems to store data.

Hence, building a dedicated data-communication channel between each and every data consumer and data provider leads to point-to-point systems, and this brings in numerous challenges such as maintenance headaches, upgrade issues, poor adaptability and so on.

Imagine you want to upgrade your underlying SAP system. In a typical point-to-point landscape, you would have to ensure that none of the communication lines break, which also means ensuring that all the original consumers can still understand and work with the upgraded SAP system. That is a dangerous assumption to make.

Therefore, SAP felt a serious need to expose its data to external data consumers in such a way that it:

i. Is consistent across all software, systems and consumers
ii. Can be traced, tracked and monitored
iii. Does not disrupt normal business operations or day-to-day production activities

In other words, if SAP comes up with a solution, it should ensure that the solution works for all systems in a consistent way. To achieve consistency across systems, we need to look for a commonality between them and build a solution that leverages this commonality.

Now, if you look at each of the data consumers, it is safe to assume that they all work over the internet. And if they work over the internet, they all understand the HTTP protocol (after all, the web runs on HTTP). So, if we develop a technology for data communication over the HTTP protocol, then all the systems and consumers can consume that data in a uniform way, and no unique development methodology needs to be adopted on either the consumer’s side or the data producer’s side. Everything keeps working as usual.

With this in mind, SAP started researching how to solve the data-exposure challenge over the internet. Well, the solution was pretty much already out there in the open.

Yes, because the open-systems world had already recognized this challenge and had been researching and experimenting with many techniques to solve it. Companies like Microsoft, Adobe and many others collaborated on this idea, and that idea has now taken shape as OData. At last! The entry of OData.

SAP adopted OData to solve exactly this data-exposure challenge: to make it easy for third-party development tools and platforms to consume SAP’s internal data in a consistent, controlled and systematic manner.

Many folks confuse third-party middleware systems with OData.

It is helpful to mention that middleware’s primary focus is integration: it is what enables systems to talk to one another. However, middleware is generally very specific in its approach, and this is exactly what gives rise to point-to-point systems. Open Database Connectivity (ODBC), JDBC and Object Request Brokers (ORBs) are all middleware concepts with specific implementations, but each serves a specific function and none of them addresses the generic requirement of accessing data over the internet.

So it is fair to ask: how does OData really solve this challenge?

To understand this, we need to remember that OData at its core is nothing but a protocol layered on top of HTTP, and HTTP is the most common implementation of the REST principles. REST stands for Representational State Transfer.

RESTful Web Services are one way of providing interoperability between computer systems on the Internet.

There are other kinds of web services as well, for example SOAP-based services described with WSDL, which have their own pros and cons.

The term Representational State Transfer was introduced and defined in 2000 by Roy Fielding in his doctoral dissertation.

In simple terms, REST proposes guiding principles which any communication channel should satisfy to be called RESTful. These principles are:

  1. Client-server
  2. Stateless
  3. Cacheable
  4. Layered System
  5. Code on Demand (optional)
  6. Uniform Interface

HTTP is the most common implementation of the REST principles. Because HTTP follows REST, it is possible to access resources (websites and web pages) through a URI, just as we commonly do. For example, when you type www.google.com in your browser, the browser internally makes an HTTP GET call to Google’s server and receives Google’s home page as the response.

In addition, HTTP also provides us with methods through which we can work with the backend data, provided the web server supports these operations.
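As a small illustration of that GET round trip, here is a minimal Java sketch (using the standard java.net.http client available since Java 11) that does roughly what the browser does when you type www.google.com:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SimpleHttpGet {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The browser effectively issues an HTTP GET for the page...
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.google.com"))
                .GET()
                .build();

        // ...and renders the response body (the home page) that comes back.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status code: " + response.statusCode());
        String body = response.body();
        System.out.println(body.substring(0, Math.min(200, body.length())) + " ...");
    }
}
```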

OData, however, takes HTTP to a new level: it provides specific implementation shells in which application and backend server developers supply the actual implementation for each service, also known as an OData service. This way, OData can be used to work with the backend data directly over the internet.

One tenet of REST is that any REST implementation should support the following four verbs to talk to the backend server: POST, PUT, GET and DELETE.

Accordingly, the HTTP protocol gives us the POST, PUT, GET and DELETE methods to communicate with the backend data.

Whenever you fill in an online order form on, let’s say, Amazon or Yahoo, the system internally issues a POST request to update the database. However, this communication, or rather the implementation of the POST method, was never consistent. In other words, the implementation of the method depended entirely on the application designer. This made standardization and maintenance difficult.

OData removes this ambiguity by providing a proper channel for developers to program these HTTP methods. OData provides shell methods in which service developers add the code that implements each HTTP method. This is why it is known as the SQL of the web: working with it is much like running SQL statements on a typical RDBMS.

Analogous to HTTP, OData therefore gives us the CREATE, READ, UPDATE and DELETE methods, which provide the concrete implementations behind the HTTP POST, GET, PUT and DELETE methods respectively.
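To round this off, here is a rough sketch of how that CRUD-to-HTTP mapping looks from a consumer’s point of view. The Gateway host, the service name ZSALESORDER_SRV, the entity set SalesOrderSet and the JSON field names are all hypothetical placeholders; a real SAP Gateway service would additionally require authentication and, for modifying calls, a CSRF token.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ODataCrudMapping {

    // Hypothetical OData service root; real host, service and entity set names will differ.
    static final String SERVICE =
            "https://my-gateway-host:443/sap/opu/odata/sap/ZSALESORDER_SRV";

    public static void main(String[] args) {
        // READ  -> HTTP GET on a single entity (or on the whole entity set)
        HttpRequest read = HttpRequest.newBuilder()
                .uri(URI.create(SERVICE + "/SalesOrderSet('0000012345')"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // CREATE -> HTTP POST with the new entity as the request body (fields are illustrative)
        String newOrder = "{\"SoldToParty\":\"1000\",\"OrderType\":\"TA\"}";
        HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create(SERVICE + "/SalesOrderSet"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(newOrder))
                .build();

        // UPDATE -> HTTP PUT and DELETE -> HTTP DELETE are built the same way,
        // addressed at the single-entity URI used in the READ request above.

        for (HttpRequest r : new HttpRequest[] { read, create }) {
            System.out.println(r.method() + "  " + r.uri());
        }

        // Against a reachable service you would fire a request like this:
        // java.net.http.HttpClient.newHttpClient()
        //         .send(read, java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```

On the SAP side, the actual work behind each of these calls is done in the shell methods mentioned above, which the service developer fills in with the create, read, update and delete logic.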