Oracle Essbase End-user Tools


Power up your Excel tables with the SmartView plug-in and discover further insights with Tableau

Excel is the most widespread tool for working with data across organizations, and while it clearly puts a powerful package of features and capabilities in users’ hands, it is also worth noting that there are further options to enhance its performance and capabilities.

In this Oracle Essbase End-user Tools article, we'll explore how Essbase and its dedicated add-in for Excel can take the performance of your staff to another level whenever a support tool for data-driven decision making or data analysis is required. Not only will we present the Excel add-in for Essbase, but we will also introduce Tableau as a complementary tool for analyzing Essbase data, thanks to its built-in connector.

 

1. Main Motivation

Excel is the world’s favorite data analysis tool, but all too often we see how the output of a BI solution built with a particular tool is then exported to Excel so that business users can carry on working in a comfortable environment.

Instead of trying to fight the end-users’ natural instincts by pitting Excel against the BI system, a more suitable approach would perhaps be the following: why shouldn’t the BI system use Excel as its final consumption tool?

You might think that a regular Excel sheet is not fancy or powerful enough to meet the requirements of a BI report tool, but suppose that the BI system is going to use an enhanced version of Excel, specifically designed to gain crucial capabilities in two main aspects:

• Lightning-fast access to corporate data

• User interactivity with data: drill-downs, drill-throughs, and other capabilities to explore and navigate the data

 

2. Essbase Technical Overview

Before starting a more detailed description of how Essbase could enhance your BI stack, let’s review what Essbase actually is from a technical perspective.

Essbase is an On-Line Analytical Processing (OLAP) technology consisting of a Multidimensional Database Management System (MDBMS) for storing and analyzing data using a multidimensional approach. This key technological feature shapes the applications developed in Essbase, called cubes (or hypercubes).

In contrast with On-Line Transaction Processing (OLTP), OLAP is focused and optimized for processing queries instead of transactions.

It is also interesting to mention the etymology of the product name: Essbase stands for Extended Spread Sheet Database, so it’s easy to see that it is optimized for analyzing Excel-like data and for enhancing Microsoft Excel for analytic purposes.

In particular, Essbase shines at financial reporting, scenario analysis (such as actual, budget, forecast, etc.) and, in general, at analyzing highly hierarchical data and complex formula calculations.

 

3. Benefits and enhancements of empowering Excel with Essbase

So far, we have introduced Essbase as a tool that can empower Excel to provide an enhanced experience to business users working with data in spreadsheets, and we have also outlined the technology behind it. But what exactly are the specific features and capabilities that this OLAP system can offer Excel?

Let’s have a look at some of what Essbase can do for Excel users:

• Build your own ad hoc analysis in a user-friendly way.
And remember you will still be using your favorite spreadsheet program!

• Navigate the data.
Once a cube is accessed from Excel, exploring the data through all its dimensions is both fast (we are talking about sub-second query times) and intuitive. After a quick training session on the new features offered by this tool, users will find themselves slicing and dicing the cube according to the specific information they are looking for.

• Drill-down through the hierarchies of your dimensions.
Cube dimensions are designed in a hierarchical fashion in Essbase, and aggregation rules are always defined for its measures. This provides built-in drill-down functionality, allowing the end-user to retrieve data with the best-fitting detail for each analysis.

• Access pre-built reports directly in Excel.
Although ad hoc analyses may be the jewel in the crown, SmartView for Excel also provides functionality to develop reports so that they can be accessed directly by end-users.

• No exporting and importing data.
There is no need to export the requested data from any BI presentation tool for subsequent import to Excel; the data is already in Excel, and you only need to access the cube, look for the required data and build your own custom analysis. Bear in mind that once the data is in your spreadsheet, no one is going to prevent you from applying formulas to get new calculations to support your analysis.

 

4. Hands On: SmartView add-in for Excel

Now everything’s ready for show-time! Let’s proceed to a brief demonstration of how to access your Essbase applications (cubes) with the SmartView plug-in for Excel.

Once the add-in (kindly provided by Oracle) is installed on the user’s local machine, the SMART VIEW tab will be available right after the default ones.

The very first time, it will be necessary to set up the connection to Essbase. Once this is done, connecting to the server is going to be as easy as:

➜ Open the SMART VIEW tab

➜ Click the Panel button

➜ Access Shared Connections

➜ Fill in your user name and password

Figure 1: Applications (cubes) with the SmartView plug-in for Excel


Note that all the security settings for data governance are managed through user accounts in Oracle’s WebLogic domain.

Once logged in, we will be able to select the Oracle® Essbase server, where the developed applications are shown; expanding the desired application will display the deployed cube.

Clicking the cube (in this example we are selecting TBC_PO) will make different options to access and manage it available.

Figure 2: Excel SmartView Shared Connections


In this case, we select Ad hoc analysis, thus loading the cube and making the ESSBASE tab available:

Figure 3: Excel Ad hoc analysis


Here we can see the aggregation of total profit across all the dimensions. We can move dimensions in and out of the spreadsheet, set filters and drill down to the level of detail required. In this article, we will perform two different ad hoc analyses:

 

Analysis 1

We want to look at the Sales per region across the year 2000. We will need both Market and Time dimensions:

Figure 4: Analysis 1


Double-clicking the Market dimension will drill down to the next level of the Market hierarchy:

Figure 5: Analysis 1 - Market hierarchy and Time


In a few steps, by drilling down and pivoting the dimensions, we obtain the desired dataset:

Figure 6: Analysis 1 - Sales by Market and Time


Analysis 2

Now let’s say that we are interested in the Sales by Product type and quarter, across the year 2000.

We will need both the Product and Time dimensions. Then, as we did in the other analysis, we will drill down until we get the desired level of detail.

Let’s have a look at the final dataset:

Figure 7: Analysis 2 - Sales by Product and Time


Now that the desired data has been gathered, it is up to the user whether or not to take the analysis further. In any case, the good news is that the data is already in Excel, so it is 100% ready to be played with.

As you can see, in a few steps we can build our own ad hoc analyses by querying the Essbase cube, which provides us with fast access to corporate data. And there is no need for the end-user to work in any BI-specific client tool; everything is done in the friendly Excel domain.

 

5. Hands On: Tableau with Essbase data sources

Now that we have seen the classic end-user tool for accessing Essbase applications, we are going to introduce an alternative, more oriented towards data discovery and visualization, in a modern-day BI fashion. There are a number of similar tools covering this market, such as QlikView or Microsoft Power BI.

In this article we will introduce Tableau, which comes with a pre-built connector for Essbase. In particular, we will show how to perform the same analyses as above with SmartView, but taking advantage of the advanced visualizations to give them a special twist.

First of all, we need to make the connection to the cube. In order to do so, we have to:

➜ Click More Servers

➜ Select Oracle Essbase

➜ Fill in the connection properties

Figure 8: Tableau with Essbase data sources connection properties


Figure 9: Oracle Essbase: Connection


Note that a specific driver is required for this connection; if it is not already installed in the client computer, Tableau will provide a link to download it.

Once the connection with the server is set up, we need to select the application and the Database:

Figure 10: Tableau Database


Clicking Sheet 1 will open the default blank sheet for the selected cube. Note that both Dimensions and Measures come with pre-built hierarchies (those we set up in Essbase):

Figure 11: Tableau Clicking Sheet 1


Now we can proceed with the two analyses:

 

Analysis 1

We want to look at the Sales per region across the year 2000. In this case, we can take advantage of Tableau’s map visualization, and also use an animation to represent the Time dimension:


Figure 12: Oracle Essbase: Analysis 1 - Sales per region across the year 2000

It’s a rather different output from before, isn’t it? Of course, there is no good or bad output; it all comes down to each analysis’s requirements. This one is much more useful in a scenario in which we are attempting to gain some extra insight from our data, while in the other, we might be more interested in the specific amount ($) of Sales for each region and month, or perhaps plan to use it as the input data for further calculations.

 

Analysis 2

We are interested in the Sales by Product type and quarter, across the year 2000. It is worth noting that we have the functionality to drill down in the Time dimension, so that we can display the information by quarter instead of by year:


Figure 13: Oracle Essbase: Analysis 2 - Sales by Product type and quarter, across the year 2000

Again, the outputs of an equivalent analysis in Tableau and in SmartView are quite different; nevertheless, we maintain that both tools are complementary, each one better suited to different analytic scenarios.

Finally, I would like to provide a (non-exhaustive) summary list of extra features and limitations:

• Extra features
- Dynamic hierarchies inherited from the database (no need to build them in Tableau)
- Alias file support, in order to display the aliases created in Essbase
- Creating Calculated Members for the Measures, through MDX queries to the cube
- Security settings and functionality preserved from the Essbase configuration (within the WebLogic domain)

• Limitations
- Extracts are not allowed, only direct connections to the cube
- Drill-down is not available for Measures
- It is not possible to create Dimension Calculated fields
- It is not possible to create Constraints for Level of Detail Expressions

 

Conclusion

In this article, we have been looking at how Essbase can empower your spreadsheets and take them to a new level. We have also introduced Tableau as a complementary alternative to SmartView for Excel, which may be better suited to certain analytic situations.

So far we have only spoken about end-user applications and how they can be empowered by Essbase, so now let’s have a brief look at some of the generic benefits of using Essbase:

• Centralized and consistent corporate information, with security settings governed by the WebLogic server

• Outstanding performance and features for financial analytics

• Built-in writeback capabilities

• Fast, in-memory database, with query response times much lower than those of an RDBMS

It is also worth noting that we have only mentioned a few end-user tools, but these are not the only ones. The different tools can be placed in three main groups:

• Specific tools for Essbase: like SmartView for Office or EAS Console

• Traditional BI tools: like OBIEE, which can have Essbase as one of its data sources

• Modern BI tools more focused on data discovery: like Tableau or QlikView

To summarize, Essbase is a data storage tool that excels in fields where an RDBMS can’t match its performance: analytics with highly hierarchical information and complex formula calculations. Besides its core technical features, it performs really well as a database system for spreadsheet data, thanks both to its design and to the SmartView client tool. And it’s not limited to spreadsheets - more data-discovery-oriented tools also offer built-in connectors.

So, if you are looking for a way to enhance your company’s analytics, especially if you want to boost Excel or if you are in a financial environment, Essbase may just be the back-end tool you are looking for!


Customizing ODI Knowledge Modules


Lately we have seen a growing interest in ODI (Oracle Data Integrator). We have been working with this tool here at ClearPeaks for a while now, on several projects for different verticals, but with the appearance of the newer versions of Oracle Business Intelligence Applications (OBI Apps) 11.1.1.X it has become a pretty hot topic, so we decided it was time to publish a quick “tips and tricks” guide for ODI, hoping to save our customers a lot of effort, time, and money.

The first thing to bear in mind when you start working with ODI is that you really must follow best practices in order to take full advantage of two of the most powerful weapons in the ODI arsenal: the declarative design and the knowledge modules. For those of you that don’t have too much experience with ODI, let me run through these two outstanding features.

The declarative design basically allows you to create an abstraction layer between the developer and the actual code that is going to be performed by the DBMS. This means that in ODI you define “what you want to get”, and the way to do so is automatically implemented by the Knowledge Module.

However, you might ask yourself “Is this possible? Can we really rely on the default ODI KMs?” Well, the answer is very simple: for standard needs, where performance is not a problem, yes! But in most of our BI projects, remember that we have had to tune the KMs to adapt them to our customers’ needs and to obtain the maximum benefit from the tool.

But don’t think that this undermines what is a fantastic feature. ODI comes with a great set of KMs that give you the perfect starting point to create your own customized KM. Moreover, not all developers need to go into the details of the KM implementation; in a typical ODI project, the architect will be responsible for setting up the ODI environment and will provide the whole team with the appropriate KMs to satisfy the particular project needs.

So in principle, the developers don’t need to know all the implementation details (it is up to each user/developer to go beyond and analyze the code ODI is generating, if required). This abstraction significantly speeds up the process of developing an ETL, since once the logic is established, there is no need to redo it over and over again.

A typical example to consider is the logic necessary to load a Slowly Changing Dimension (SCD Type II). With other tools, each developer would need to fully understand the logic of the SCDs and the way the integration process is performed, since it has to be replicated for each table to be loaded.

With the ODI declarative design, you just establish what you want, and the KM will take care of the logic. So you simply need to indicate:

➜ I want to treat “Table X” as a Slowly Changing Dimension (we will mark it like this).


 

➜ I want to use “Column X” and “Column Y” to store the starting and ending date of the row, respectively.


 

➜ I want to use “Column Z” as the current row flag.


 

➜ I want “Column A” to be the Primary Key (Surrogate Key) of “Table X”.


 

➜ And I want this column / set of columns (e.g. B and C) to be the Integration Key (Natural Key) of Table X (the column, or columns, that will be used to determine if the row is a new row, or if the row previously existed and has to be updated).


 

➜ Finally, we can indicate for each column whether we want the ETL to add a new row when the value changes in the source system, or whether we prefer to simply overwrite the existing value.


 

✓ And that’s it! By then selecting the appropriate KM, tuned with the necessary logic by our architect, we can develop as many mappings as we want for Slowly Changing Dimensions. Just indicate the source for each column and run it. Quick and easy!

We have also mentioned the concept of Knowledge Modules. So, some of you may wonder, what is a “Knowledge Module”? This is simply a generic set of steps that will perform the needed logic for your ETL process. Each step can be written in different languages (SQL, Oracle-SQL, Jython, and many more) depending on the technology underneath, with placeholders for each column, table, and in general, “entity” that will take part in our ETL. At execution time, those placeholders are filled with the details of the mappings that have been developed, and this gives the project team the flexibility to reuse logic and speed up the delivery of the ETL process.
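As a minimal illustration, here is how a step template relates to the code it generates at run time (the target table name is hypothetical):

-- Step as written in the KM; the placeholder is resolved against the ODI
-- model at execution time:
DELETE FROM <%=odiRef.getTable("L", "TARG_NAME", "A")%>

-- ...which, for a mapping targeting (say) DWH.W_SALES_F, is generated as:
DELETE FROM DWH.W_SALES_F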

Well, that’s enough of an introduction to these two handy features in ODI. Now let’s see some usage examples of the things that can be done, and which can help our customers to reduce implementation time.


1. Automating Loads

▼ Automating the Incremental / Full Load Strategy
There are several ways to automate an Incremental / Full Load strategy using KMs. Thinking back to the previous version of OBI Apps, in Informatica we had to maintain two separate mappings for the incremental and the full version of each mapping. With ODI this can be automated in several ways:

ⓐ Manually: Adding an option to the mapping
The simplest way is to add an option to the mapping so that we can manually specify whether we want to execute a full load or not. This option will drive the execution of a new step in our KM; for example, this step may consist in truncating the table and, if we are populating a dimension, resetting the associated sequence, as sketched below. Here we see the importance of following best practices and naming conventions: if we follow a good methodology when naming our DWH objects, we can distinguish the table type by its name, and the sequences can easily be related to the corresponding dimension.
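A minimal sketch of such a step, assuming a custom checkbox option we have named FULL_LOAD that renders as "1" when ticked (a similar guarded step would reset the associated sequence for dimension tables):

-- KM step sketch: only runs when the FULL_LOAD option is ticked.
<% if (odiRef.getOption("FULL_LOAD").equals("1")) { %>
TRUNCATE TABLE <%=odiRef.getTable("L", "TARG_NAME", "A")%>
<% } %>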


 

ⓑ Automatically: Adding an option to the mapping and using a control table
The second option is an extension of the previous one. By using a control table containing the table name and a “FULL_LOAD_FLAG” column, we can invoke this process only if the DWH administrator has set the property in the table (FULL_LOAD_FLAG = 1). This way, there is no need to modify any ODI object to change the behavior of the interface.
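A sketch of this variant, assuming a hypothetical ETL_CONTROL table whose TABLE_NAME column stores the qualified name of the target table:

-- KM step sketch: truncate the target only when the DWH administrator has
-- set FULL_LOAD_FLAG = 1 for this table in the (illustrative) control table.
BEGIN
  FOR r IN (SELECT 1
              FROM ETL_CONTROL
             WHERE TABLE_NAME = '<%=odiRef.getTable("L", "TARG_NAME", "A")%>'
               AND FULL_LOAD_FLAG = 1)
  LOOP
    EXECUTE IMMEDIATE 'TRUNCATE TABLE <%=odiRef.getTable("L", "TARG_NAME", "A")%>';
  END LOOP;
END;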

▼ Automating the population of the unspecified rows for dimensions
A typical need in any data warehousing project is to populate an “unspecified” row for each dimension. This will give us the capability of always making inner joins between the facts and dimensions, and thus improve performance in all our OLAP queries.

Once again, we could generate a separate interface for this, for each dimension, but will it be efficient? ODI gives us the possibility of defining a generic step in our KM to be executed for any dimension table to be loaded.


At execution time, ODI is aware of the data type of each column to be populated (remember that every object is stored in the model, so we have an internal representation in the ODI repository). We can prepare a simple process that will assign a default value for each data type. For example:
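The generated statement for one dimension might end up looking like this (table, columns and default values are all illustrative):

-- "Unspecified" row for a product dimension, as the generic step might
-- generate it:
INSERT INTO W_PRODUCT_D (ROW_WID, PRODUCT_CODE, PRODUCT_NAME, CREATED_ON_DT)
SELECT 0, 'UNSPEC', 'Unspecified', TO_DATE('01-01-1900', 'DD-MM-YYYY')
  FROM DUAL
 WHERE NOT EXISTS (SELECT 1 FROM W_PRODUCT_D WHERE ROW_WID = 0);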


NOTE: See the ODI Substitution API Reference to learn more about the functions you can use to retrieve useful information from the ODI model, or contact us for further details!

▼ Automating the population of fact tables by using a control table
Another interesting piece of functionality we usually include in our projects is control of the periods to be loaded through a control table. As mentioned before, with this table we can change the behaviour of our ETL process just by changing one row in one table, which greatly speeds up the process of programming and administering the loads.

By using this table, the ETL administrator can specify the starting date to be populated and the number of periods to be loaded. Further options can be useful in this table, like a flag (PREV_EXEC_CORRECT_FLG) indicating whether the automated previous execution of the ETL was correct (if not, the ETL should not continue, in order to avoid inconsistencies, for example, in Slowly Changing Dimensions). Another flag might indicate whether the population of this table should be included in the automated process (IS_PART_OF_DAILY_LOAD), along with the FULL_LOAD_FLAG that we already mentioned. A sketch of such a table follows.
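As a minimal sketch, such a control table could look like this (all names are illustrative):

CREATE TABLE ETL_CONTROL (
  PROCESS_NAME          VARCHAR2(30),  -- ETL process this row drives
  START_DATE            DATE,          -- first period to (re)load
  NUM_PERIODS           NUMBER,        -- number of periods to load
  FULL_LOAD_FLAG        NUMBER(1),     -- 1 = truncate and reload the target
  PREV_EXEC_CORRECT_FLG NUMBER(1),     -- 0 = previous run failed; stop the load
  IS_PART_OF_DAILY_LOAD NUMBER(1)      -- 1 = include in the automated daily run
);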

In the example below, the source system needed to be queried daily due to performance restrictions; by specifying a starting date and the number of periods to be loaded, the ETL automatically ran a loop for the number of days to be loaded by leveraging the ODI variables.


In this case, it is also important to highlight that the interfaces have to be filtered by a variable containing the value of the day to be populated:

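A sketch of how these pieces fit together, with hypothetical variable and object names:

-- Refresh query for an ODI variable V_LOAD_DATE; V_ITERATION is a counter
-- variable incremented on each pass of the package loop:
SELECT TO_CHAR(START_DATE + #V_ITERATION - 1, 'YYYY-MM-DD')
  FROM ETL_CONTROL
 WHERE PROCESS_NAME = 'SALES_DAILY_LOAD';

-- And the filter applied on the source datastore in each interface:
-- SALES.SALE_DATE = TO_DATE('#V_LOAD_DATE', 'YYYY-MM-DD')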

Another interesting modification in this case study was made to the corresponding CKM: we added an option to avoid deleting the previous errors, since the same session is going to be executed several times in a loop. A sketch of the guarded step follows.
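The option name and the inline guard below are our own, and we assume a checkbox option that renders as "1" when ticked; the same effect can also be achieved by conditioning the step on the option:

-- CKM step sketch: purge previous errors only when DELETE_PREVIOUS_ERRORS is
-- ticked; leave it unticked for looped executions so that the errors raised
-- by earlier passes of the session are preserved.
<% if (odiRef.getOption("DELETE_PREVIOUS_ERRORS").equals("1")) { %>
DELETE FROM <%=odiRef.getTable("L", "ERR_NAME", "W")%>
<% } %>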


In some cases, it might even be interesting to remove the period from the target table before inserting, if we are performing a full period insertion (we don’t want to merge the data with the previously inserted data, but to directly reload a complete period).


 

As you can see above, by using the ODI functions we can generate generic steps that will work for any object in our ODI model. For example, <%=odiRef.getTable("L", "TARG_NAME", "A")%> will retrieve the target table name.

See: http://docs.oracle.com/cd/E21764_01/integrate.1111/e12645/odiref_reference.htm#ODIKD1295


2. Boosting performance

▼ Boosting performance for populating staging tables
Since ODI 11g does not have a multi-target capability, we sometimes need to reuse the same staging table to populate several final targets, and we have to load these staging tables as fast as possible to speed up the whole ETL process. We can use the default KMs to load them, but for a staging table you may not need integration processes, validation checks and other time-consuming logic; to get round this, a good practice is to generate your own staging KM, including only the necessary steps:

Then, in the “Insert” step, you can populate the target table directly, instead of the integration table:

<%=odiRef.getTable("L", "TARG_NAME", "A")%>
vs.
<%=odiRef.getTable("L", "INT_NAME", "W")%>


 

▼ Boosting performance by using partition exchange
In order to boost the performance of both load processes and reporting, it is sometimes useful to apply partitioning to your DWH tables. This gives you two main advantages:

➀ At ETL time, the population of a big table (GBs of data) is much faster if the table is partitioned, since you can populate an empty temporary table and then execute a partition exchange to swap the partitions between the two tables. This way, the population of your temporary table, the consistency checks, the constraints and any other processes you may need can all be done much faster.
It is important to emphasize that this is useful for snapshot tables, and in general for tables that do not need to cross data between partitions.

➁ At reporting time, if you just need to analyze one partition (one period) at a time, or maybe two to compare them, the response time in a partitioned table will be faster.

To accomplish the partition exchange in ODI, you can also use a knowledge module that can encapsulate the logic needed. The command below will execute the partition exchange between the temporary table and the target table, leveraging ODI functions.


ALTER TABLE
<%=odiRef.getTargetTable("SCHEMA")%>.<%=odiRef.getTargetTable("RES_NAME")%>
EXCHANGE PARTITION ' || V_PARTITION_NAME || ' WITH TABLE
<%=odiRef.getTable("L", "INT_NAME", "W")%>
<$="<%=odiRef.getUserExit("PARTITION EXCHANGE OPTIONS")%>"$>

 

Note that the variable V_PARTITION_NAME will need to contain the name of the partition to be exchanged. This can be retrieved by a simple query on the temporary table that your mapping has populated.
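For instance, the refresh query for V_PARTITION_NAME could be as simple as this, assuming monthly partitions named P_YYYYMM and an illustrative temporary table:

-- Derive the partition to exchange from the period just loaded into the
-- temporary table:
SELECT 'P_' || TO_CHAR(MAX(SALE_DATE), 'YYYYMM')
  FROM STG_SALES_TMP;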

▼ Loading historical information into a Slowly Changing Dimension
The last interesting modification that we are going to show you is to the Oracle SCD KM. Imagine that you launch your system with a starting date (e.g. 30 Sept. 2014), and after some months of running your ETL system in production, the source system is updated with historical information so that the business can analyze trends and customer behavior over several years. Obviously, we can’t simply load this information from the source system into our DWH, because it would overwrite the current information with stale (but still relevant) data. We need to find a way to include this information in our system, so that if a user queries older periods, they can analyze and explore the attributes the customer had at that point in time.

Well, we can do the following:

➀ Duplicate the mapping you are using to load your SCD.

➁ Duplicate the Slowly Changing Dimension IKM you are using to populate your SCDs.

➂ Modify the copy of your IKM to set the "ACTIVE_TO_DATE" to the date when you launched your system. This way you can run a separate load for older periods without impacting the daily load you run with current data. Any row loaded by this second thread (older periods) will always be closed by the date when you launched your system, so there won’t be duplicates in your SCD, and you can explore both current and old information about the entities in your dimension (e.g. your customer profile, segment, location, etc.).


 

By modifying small pieces of the given KM code, you can meet your particular needs. Just be careful to modify the appropriate properties (in this case, SCD_END and SCD_FLAG have to take different values from those in your daily thread), as sketched below.
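As an illustration of the kind of statement the modified IKM would generate for the historical thread (names and dates are hypothetical; the actual change is made once, in the KM template):

-- Close changed rows at the go-live date instead of SYSDATE, so that the
-- historical thread can never overlap the daily one:
UPDATE W_CUSTOMER_D T
   SET T.EFFECTIVE_TO_DATE = TO_DATE('30-09-2014', 'DD-MM-YYYY'),  -- SCD_END
       T.CURRENT_FLG       = 0                                     -- SCD_FLAG
 WHERE T.CURRENT_FLG = 1
   AND EXISTS (SELECT 1
                 FROM STG_CUSTOMER_HIST S
                WHERE S.CUSTOMER_CODE = T.CUSTOMER_CODE);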

We hope you’ve found these techniques for speeding up the generation and manipulation of your ETL with ODI KMs interesting; if you have further questions, do not hesitate to contact us.
Stay tuned for more interesting features to apply to your ETL processes with #ODI!!

Get in touch with us and see what we can do for you!

Blog Article: Emilio Soriano

