
Showing posts from May, 2015

Project Transformation : Push persistent BW dataflows to HANA Virtual Data Marts, Consume HANA models via Virtual provider

Project Transformation is about redesigning BW data flows while leveraging HANA's number-crunching power. Goals:
1. Utilize HANA in-memory computing by pushing Start/End/Field routines down to the DB layer
2. Reduce intermediate staging DSOs, thereby shrinking the data footprint and avoiding activation times
3. Promote a virtual data mart layer
4. Provide users with up-to-date data rather than pre-computed data
5. Enable users to query at item level rather than at a summarized level (Cubes)

1. Utilize HANA in-memory computing by pushing Start/End/Field routines down to the DB layer
Routines from the data flow are included in the generated program, and the DTP spends a lot of time processing them on the ABAP application server; an ABAP program normally spends more than 50% of its execution time there. These routines can be efficiently pushed down to the HANA layer by creating stored procedures and calling them from the routines. Alternatively, we can create a Calculation/Analytic view /Stored...
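As a rough illustration of the pushdown idea, here is a minimal SQLScript sketch of an end-routine-style lookup moved into a database procedure; the procedure, table and field names (z_enrich_sales_items, zsales_stage, zmat_attr) are hypothetical, not taken from an actual flow:

    -- Hypothetical sketch: an end-routine lookup pushed down to HANA as a SQLScript procedure
    CREATE PROCEDURE z_enrich_sales_items (
        IN  iv_calmonth NVARCHAR(6),
        OUT et_result   TABLE ( doc_number    NVARCHAR(10),
                                material      NVARCHAR(18),
                                profit_center NVARCHAR(10),
                                amount        DECIMAL(17,2) ) )
      LANGUAGE SQLSCRIPT READS SQL DATA AS
    BEGIN
      -- one set-based join replaces the row-by-row lookup done in the ABAP routine
      et_result = SELECT s.doc_number,
                         s.material,
                         m.profit_center,
                         s.amount
                    FROM zsales_stage AS s
                    LEFT OUTER JOIN zmat_attr AS m
                      ON m.material = s.material
                   WHERE s.calmonth = :iv_calmonth;
    END;

The routine in the BW transformation would then only call the procedure and hand the result table back to the DTP, so the heavy lifting stays on the database.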

Ramblings over Rank Node, ABAP vs HANA SQL Data types and Activation error for DSO with only Key Fields

Rank Node: A new node type has been introduced in calculation views, the Rank node. It serves the same purpose as Conditions in BEx: instead of swamping the user with data, we can restrict the output to only the top N values. The value of N can be fixed or driven by an input parameter. Alongside calculated and restricted attributes/measures, the Rank node further enhances the OLAP toolkit of the HANA appliance. Temporal Join: ABAP vs HANA Native Data Types: Every table in SAP has two definitions, an ABAP runtime object and a database object. From SE11 we can navigate to both and spot the differences between them. The reason is that the ABAP language comes with some native data types, such as DATS, TIMS, CUKY, UNIT and CURR. These are mapped onto the underlying database's native data types, so we end up with two representations of the same object. The database interface takes care of converting Open SQL queries into the underlying database's SQL. Let's shift our focus to Impo...
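For readers who think in SQL, the Rank node behaves roughly like a windowed RANK() with a filter on top. The following sketch keeps only the top 3 materials by revenue per region; the table sales_fact and its columns are purely illustrative:

    SELECT region, material, revenue
      FROM ( SELECT region,
                    material,
                    revenue,
                    RANK() OVER ( PARTITION BY region
                                  ORDER BY revenue DESC ) AS rk
               FROM sales_fact ) AS ranked
     WHERE rk <= 3;

Replacing the literal 3 with a value bound to an input parameter mirrors the dynamic N option mentioned above.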

Star Join node Vs Analytic view && Attribute View vs Calculation Dimension View

Analytic views come with the constraint that measures must come from a single table of the data foundation, even though the data foundation can contain any number of joined tables. Previously, we used to model multiple Analytic views and combine them with a Join node in a calculation view in order to get measures from multiple tables. The Star Join node solves this by allowing measures from multiple tables, so we no longer need multiple Analytic views to achieve it. However, Analytic views come enriched with master data through Attribute views. To compensate for this, Calculation Dimension views have been introduced. These artifacts are the same as Attribute views but can only be used in a Star Join, which solves the master data enrichment issue when using a Star Join. The introduction of the Star Join node and Calculation Dimension views certainly opens up more use cases for calculation views and avoids the scenario of creating multiple Analytic views for e...
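To make the Star Join idea concrete, here is a hedged SQL sketch of what such a model conceptually resolves to: measures from two transactional tables combined with master data attributes. All object names (orders, deliveries, material_md) are invented for illustration and are not part of the original post:

    SELECT o.material,
           md.material_group,              -- attribute from master data
           SUM(o.order_qty) AS order_qty,  -- measure from the first table
           SUM(d.deliv_qty) AS deliv_qty   -- measure from the second table
      FROM orders AS o
      LEFT OUTER JOIN ( SELECT order_no, material,
                               SUM(deliv_qty) AS deliv_qty
                          FROM deliveries
                         GROUP BY order_no, material ) AS d
        ON  d.order_no = o.order_no
        AND d.material = o.material
      LEFT OUTER JOIN material_md AS md
        ON md.material = o.material
     GROUP BY o.material, md.material_group;

In the graphical model, the two transactional tables sit inside the Star Join node and material_md plays the role of a Calculation Dimension view joined to it.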