Oracle Financial Analytics: General ledger to sub-ledgers navigation


When implementing Oracle Financial Analytics, a common requirement is to enable navigation from general ledger reports to sub-ledger reports in order to obtain information on specific transactions. One of the main reasons for this is to reconcile the general ledger with the individual sub-ledgers.

The general ledger is the master set of accounts that summarises the sub-ledger transactions; it feeds the financial statements and holds the journal entries. A sub-ledger is a detailed record of the transactions posted to an individual account.

Sub-ledgers serve as support for amounts posted to the general ledger. It is important to reconcile the general ledger balances to the sub-ledger balances on a regular basis to spot possible discrepancies.

In this blog article, I will describe how this process can be approached in OBI Apps: the underlying data model, the configuration required, and the creation of a sample navigation from general ledger to sub-ledger using an intermediate analysis.

The animation below shows the expected navigation from GL Account balance to Payables details.

[Animation: navigation from a GL Account balance report to Payables details]

Note: This information applies when the data source is Oracle EBS.

OBI Apps enables this navigation by default using the ‘GL Journal ID’ column as the link between subject areas. It is available in the general ledger transactions subject area and in the following sub-ledgers (in some cases, ‘GL Journal ID’ has to be exposed to the presentation layer):

  • Accounts Payable
  • Accounts Receivable
  • Revenue
  • Purchase Orders
  • Fixed Assets (Available in OBIA 11g)

Understanding the OBIA model

‘GL Journal ID’ is a unique identifier of an EBS journal entry line and it is available in the linkage table W_GL_LINKAGE_INFORMATION_G.

Sub-ledger transaction tables are linked to W_GL_LINKAGE_INFORMATION_G at source distribution level.

[Figure: sub-ledger transaction tables linked to W_GL_LINKAGE_INFORMATION_G]

For each sub-ledger transaction fact, there is a logical “transaction detail” dimension comprising the transaction fact and the W_GL_LINKAGE_INFORMATION_G table; ‘GL Journal ID’ is available in this dimension. The figure below shows the physical source “Fact_W_AP_XACT_F_GLJournalID” of the logical dimension “Dim – AP Transaction Details”.

[Figure: physical source Fact_W_AP_XACT_F_GLJournalID of the logical dimension Dim – AP Transaction Details]

With this configuration, ‘GL Journal ID’ is available in the sub-ledger transaction subject areas. And since ‘GL Journal ID’ is also available in the “Financial - GL Detail Transactions” subject area, the navigation from general ledger to sub-ledger transactions can be configured using the OBIEE navigation functionality.

Creating GL to sub-ledger navigation

‘GL Journal ID’ is available in the two subject areas that we want to link, but the level of detail of the source and target reports is not the same, which is why an intermediate analysis is needed to link the two reports. This process is described in this section.

  1. Create the GL report and enable navigation.

Using the subject area “Financial - GL Detail Transactions”, create a new analysis that shows the Balance by Accounting Document Source of a selected account for a selected period. Accounting Document Source is used to identify the sub-ledger source.

[Figure: GL report showing Balance by Accounting Document Source]

  2. Create an intermediate analysis that contains the ‘GL Journal ID’.

Using the subject area “Financial - GL Detail Transactions”, create an analysis that contains the ‘GL Journal ID’ column. Account and period filters need to be created as well. This report will be used to filter the sub-ledger Transactions.

[Figure: intermediate analysis containing the ‘GL Journal ID’ column]

  3. Create a detailed sub-ledger report

Using one of the sub-ledger subject areas, create a new analysis and filter the results by an existing report, using the analysis created in step 2 as the “Saved Analysis”.

 

[Figure: sub-ledger report filtered by the saved analysis]

  4. Enable navigation

In the source report (created in step 1), use Action Links to enable navigation to the different sub-ledger reports. The example below shows the navigation to the AP Transactions report when the Accounting Document Source is Payables.

[Figure: Action Link navigation to the AP Transactions report]

In conclusion, the OBI Apps data model is prepared to link general ledger and sub-ledger reports, but the link has to be configured at analysis level. This feature can be used for the reconciliation process between the general ledger and the sub-ledgers.

If you want to go further, create a report with a column that shows the summaries sourced from the general ledger, a second column that shows the sub-ledger totals, and a third column that shows the difference between these two amounts; whenever that difference is not ‘0’, navigate to the details to discover the discrepancies.
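
As a conceptual sketch only (the table and column names below are illustrative placeholders, not actual OBIA presentation or physical objects), the reconciliation logic behind such a report amounts to comparing the two totals per account and period:

-- Illustrative reconciliation query: gl_balances and subledger_totals are
-- hypothetical names standing in for the GL and sub-ledger summary sources.
SELECT gl.account,
       gl.period,
       gl.balance             AS gl_balance,
       sl.amount              AS subledger_total,
       gl.balance - sl.amount AS difference
FROM   gl_balances gl
JOIN   subledger_totals sl
  ON   sl.account = gl.account
 AND   sl.period  = gl.period;

Rows where the difference is non-zero are the candidates for drilling down to the sub-ledger detail through the navigation configured above.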

 

Big Data Ecosystem – Spark and Tableau


In this article we give you the big picture of how Big Data fits into your current BI architecture and how to connect Tableau to Spark, enriching your BI reports and dashboards with data that you were not able to analyse before. Give your reports and dashboards a 360º view, and understand the what, when, why, who, where and how.

After reading this article you will understand what Big Data can offer you and you will be able to load your own data into HDFS and perform your analysis on it using Tableau powered by Apache Spark.

The Big Data ecosystem

When considering a Big Data solution, it is important to keep in mind the architecture of a traditional BI system and how Big Data comes into play.

Until now we have mostly been working with structured data, coming mainly from RDBMSs and loaded into a DWH, ready to be analysed and presented to the end user. Before considering how this structure changes when Big Data comes into the field, one could ask how exactly Big Data technology benefits the current solution. It allows the system to process much higher volumes of more diverse data, much faster and with high fault tolerance, making it possible to extract information efficiently and safely from data that a traditional solution cannot handle.

In addition, using Big Data permits the hardware structure to grow horizontally, which is more economical and flexible.

So, how does Big Data enter this ecosystem? The main architectural concepts stay much the same, but there are big changes: a whole new set of data sources, notably unstructured ones, and a completely new environment to store and process data.

[Figure: traditional BI architecture compared with the Big Data architecture]

In the picture above, at the top we have our traditional BI architecture. Below it we can see how the new Big Data architecture still preserves the same concepts: Data Acquisition, Data Storage, etc. We show a few of the Big Data tools available in the Apache Hadoop project.

What is important to point out is that reporting and visualisation must combine data from the traditional and Big Data storage layers to provide a 360º view; that is where the true value resides.

There are different options to combine them. We could run aggregation calculations over HDFS or Cassandra data to feed the data warehouse with information we were unable to compute before, or we could use a reporting and visualisation tool capable of combining the traditional data warehouse with Big Data storage or engines, as Tableau does.
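
To illustrate the first option with a minimal, purely hypothetical sketch (web_clicks and its columns are invented names, not objects from any of the tools mentioned here), a daily aggregation over raw event files in HDFS could be expressed in Spark SQL and its result loaded into the warehouse:

-- Hypothetical example: web_clicks is a Spark SQL / Hive table defined over raw event files in HDFS.
SELECT customer_id,
       to_date(event_time) AS event_day,
       COUNT(*)            AS clicks
FROM   web_clicks
GROUP  BY customer_id, to_date(event_time);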

A Big Data implementation: Apache Spark + Tableau

When approaching a Big Data implementation, there are quite a lot of different options and possibilities available, from new data sources and connectors to the final visualization layer, passing through the cluster and its components for storing and processing data.

A good approach to a Big Data solution is the combination of Apache Spark for processing in Hadoop clusters, consuming data from storage systems such as HDFS, Cassandra, HBase or S3, with Tableau as the visualisation software that makes the information available to the end users.

Spark has demonstrated a great improvement in performance compared to the original Hadoop MapReduce model. It also stands out as a one-component solution for Big Data processing, with support for ETL, interactive queries, advanced analytics and streaming.

The result is a unified engine for Big Data that excels in low-latency applications and iterative computations where fast performance is required, such as interactive querying, large-scale batch computations, streaming and graph computations.

Tableau is growing really quickly and has already proven to be one of the most powerful data discovery and visualisation tools. It has connectors to nearly any data source, from Excel and corporate data warehouses to SparkSQL. But where Tableau really stands out is in transforming data into compelling, interactive dashboards and visualisations through its intuitive user interface.

The combination of Apache Spark with Tableau stands out as a complete end-to-end Big Data solution, relying on Spark’s capabilities for processing the data and Tableau’s expertise in visualisation. Integrating Tableau with Apache Spark makes it possible to visually analyse Big Data in an easy and business-friendly way; no Spark SQL code is needed here.

Connecting Tableau with Apache Spark

Here, at ClearPeaks, we are convinced that connecting Apache Spark to Tableau is one of the best approaches for processing and visualising Big Data. So, how does this solution work? We are already working with this technology, and are proud to show a demonstration of Tableau connected to Apache Spark.

Prerequisites:

  • Tableau Desktop, any version that supports the SparkSQL connector.
  • Apache Spark installed either on your machine or on an accessible cluster.

Integration

Tableau uses a specific SparkSQL connector, which communicates with the Spark Thrift Server, which in turn uses the Apache Spark engine.

[Figure: Tableau connecting to Apache Spark through the SparkSQL connector and the Spark Thrift Server]

Software components

  • Tableau Desktop
  • Apache Spark Driver for ODBC with SQL Connector
  • Apache Spark (includes Spark Thrift Server)

Set up the environment

Installing Tableau Desktop and Apache Spark is out of the scope of this article. We assume that you have already installed Tableau Desktop and Apache Spark.

Apache Spark needs to be built with Hive support, i.e. adding the -Phive and -Phive-thriftserver profiles to your build options. More details can be found in the Spark build documentation.
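
For example, a Maven build with these profiles enabled might look like the following; this is a sketch only, and the exact set of profiles (Hadoop version, cluster manager, etc.) depends on your environment:

# Example only: run from the Spark source root and adjust the profiles
# (e.g. Hadoop version) to match your cluster.
mvn -Pyarn -Phive -Phive-thriftserver -DskipTests clean package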

Install Apache Spark Driver for ODBC with SQL Connector

Install the Apache Spark connector from the Simba webpage. They offer a free trial period, which can be used to follow this article.

It has an installation wizard which makes installation a straightforward process.

Configure and start Apache Spark Thrift Server

Configuration files

Spark Thrift Server uses Hive Metastore by default unless another database is specified. We need to copy hive-site.xml config file from Hive to Spark conf folder.


cp /etc/hive/hive-site.xml /usr/lib/spark/conf/


Spark needs access to the Hive libraries in order to connect to the Hive Metastore. If those libraries are not already in the Spark CLASSPATH variable, they need to be added.

Add the following line to /usr/lib/spark/bin/compute-classpath.sh


CLASSPATH="$CLASSPATH:/usr/lib/hive/lib/*"


Start Apache Spark Thrift Server

We can start Spark Thrift Server with the following command:


./sbin/start-thriftserver.sh --master <master-uri>


<master-uri> might be yarn-cluster if you are running YARN, or spark://host:7077 if you are running Spark in standalone mode.

Additionally, you can specify the host and port using the following properties:


./sbin/start-thriftserver.sh \

  --hiveconf hive.server2.thrift.port=<listening-port> \

  --hiveconf hive.server2.thrift.bind.host=<listening-host> \

  --master <master-uri>


To check if Spark Thrift Server has started successfully you can look at Thrift Server log. <thriftserver-log-file> is shown after starting Spark Thrift Server in console output.


tail -f <thriftserver-log-file>


Spark Thrift Server is ready to serve requests as soon as the log file shows the following lines:

INFO AbstractService: Service:ThriftBinaryCLIService is started.

INFO AbstractService: Service:HiveServer2 is started.

INFO HiveThriftServer2: HiveThriftServer2 started

INFO ThriftCLIService: ThriftBinaryCLIService listening on 0.0.0.0/0.0.0.0:10000

Connect Tableau using SparkSQL connector

Start Tableau and select the option to connect to Spark SQL.

Select the appropriate Type depending on your Spark version and the appropriate Authentication depending on your security.

[Screenshot: Tableau Spark SQL connection dialog]

The next steps are selecting schema, tables and desired relations, the same as when using any other Tableau connector.

Now you are able to run your own analysis on Big Data powered by Spark!

[Screenshot: Tableau dashboard powered by Spark]

The dashboard above has been created in Tableau 9.0 after following the instructions provided. Apache Spark is used by Tableau to transparently retrieve and perform calculations over our data stored in HDFS.

Show us a capture of your Spark-powered dashboards and reports, and share your impressions of the Apache Spark and Tableau tandem in the comment section at the bottom.

Happy analytics!

 

Eduard Gil & Pol Oliva

Bonus: Add data to Hive Metastore to consume it in Tableau

If you are not familiar with the process of loading data to Hive Metastore you will find this section very useful.

This section describes how to load a CSV file from your file system into the Hive Metastore. Afterwards you will be able to consume it from Tableau following the process described in this article.

For this example we are going to use the following file that contains the well-known employee example:

my_employees.csv

123234877,Michael,Rogers, IT
152934485,Anand,Manikutty,IT
222364883,Carol,Smith,Accounting
326587417,Joe,Stevens,Accounting
332154719,Mary-Anne,Foster,IT
332569843,George,ODonnell,Research
546523478,John,Doe,Human Resources
631231482,David,Smith,Research
654873219,Zacary,Efron,Human Resources
745685214,Eric,Goldsmith,Human Resources
845657245,Elizabeth,Doe,IT
845657246,Kumar,Swamy,IT

As we can see it follows the schema: Employee Id, Name, Last Name, and Department.

We are going to use beeline to connect to Thrift JDBC Server. Beeline is shipped with Spark and Hive.

Start beeline from the command line

beeline

Connect to Thrift JDBC Server


beeline> !connect jdbc:hive2://localhost:10000


Create the table and specify the schema of it


beeline> CREATE TABLE employees (employee_id INT, name STRING, last_name STRING, department STRING)  ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';


Now you are ready to load your my_employees.csv file into the previously created table


beeline> LOAD DATA LOCAL INPATH '/home/user/my_employees.csv' INTO TABLE employees;


We can even perform operations over the employees table using beeline


beeline> SELECT COUNT(*) FROM employees;


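As a further illustrative check (assuming the employees table created above), the rows can also be grouped by department directly in beeline:

beeline> SELECT department, COUNT(*) AS headcount FROM employees GROUP BY department;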
 

5th European Business Intelligence Summer School


We are glad to announce that ClearPeaks will participate this year in the 5th European Business Intelligence Summer School (eBISS 2015), from July 5 to July 10. This summer school, presented by leading researchers in the field, represents an opportunity for postgraduate students to equip themselves with the theoretical, practical, and collaboration skills necessary for developing challenging Business Intelligence applications. To ensure a high ratio of tutors to students, the school will be limited to 50 participants. Furthermore, industrial presentations will allow participants to gain a deep understanding of the current product offering.

The objective of this summer school is to provide a stimulating and enjoyable environment in which participants will benefit not only from the formal and practical sessions but also from informal and social interactions with established researchers and the other participants. To further facilitate communication and feedback, all attendees will be required to prepare and present a poster on their own research.

[Photo: Wilinton Tenorio]

 

Wilinton Tenorio, Senior BI Consultant at ClearPeaks, will talk about “Life at ClearPeaks, an overview of the most relevant projects”. He will profile some of the real-life BI projects he has ongoing and take a look at the architectures and technologies ClearPeaks has deployed at these customers. In addition to the functional aspect of these projects, we will discuss where the tough challenges were and how we solved them. Live demos from some of our customers, LTA, Zain and Gasco, will be shown during the session.

 

The session will take place on Tuesday, July 7 at 14:30. Location: Castelldefels UPC Campus.

More information

 

Tableau quick integration in Salesforce


Salesforce is one of the leaders in the CRM sector and is used in thousands of companies around the world. It is really useful software for managing the data of a sales department, and it performs even better if you combine it with Tableau. If you integrate Tableau into Salesforce you will be able to have the departmental reporting on the same page, saving time and enabling better analysis of the data.

Tableau offers a solution with the Salesforce Canvas Adapter, but depending on the level of integration needed you can implement the following approach very easily. With the method explained here you could save time developing the integration, and therefore save money too.

This method takes advantage of the flexibility Tableau offers through URL parameters and the ease of embedding Tableau in an iframe. Salesforce, in turn, allows us to create flexible objects with the Apex programming language. We are now going to explain the procedure for integrating Tableau with Salesforce in four quick and easy steps. Take into account that these actions need to be done in the dev version of Salesforce (sandbox) and then deployed to production.


Step 1 – Tableau dashboard

Create a dashboard in Tableau and publish it to the server. This dashboard has to be filtered by a field that is also available in Salesforce; for instance, the ID of an account is the perfect candidate if you are sure it exists both in the EDW and in Salesforce. Once published, the URL provided by Tableau Server will be needed in the next step.

Step 2 – Visualforce page

Create a Visualforce Page in Salesforce:

This part can seem scary if you are not used to Salesforce development, but this step-by-step guide will get you through it without needing previous knowledge in this area. All that is necessary is access to the Developer Console in the sandbox environment.

[Screenshot: opening the Developer Console in Salesforce]

Once inside the Developer Console, it is time to add the Apex code below. The URL copied in the previous step replaces the server URL shown in the snippet, the filtering field used in Tableau replaces CRMID, and the Salesforce ID field replaces account.AccountID__c. Name it and save it as a Visualforce page.


<apex:page standardController="Account">

<apex:iframe src="http://tableauserver/views/workbook/dashboard?:embed=yes&CRMID={!account.AccountID__c}&:toolbar=no"

height="1110px" width="1160px" scrolling="true"/>

</apex:page>


Now we are going to analyse the relevant parts of this code:

  1. <apex:iframe – the Apex tag that creates the iframe where the Tableau dashboard will appear.
  2. http://tableauserver/views/workbook/dashboard – the URL of the published dashboard in Tableau Server.
  3. ?:embed=yes – parameter to embed the dashboard into the iframe.
  4. &CRMID={!account.AccountID__c} – the part of the URL that filters the view. CRMID is the field used in Tableau and has to be set as a filter in the dashboard; !account.AccountID__c is the ID of the accounts used in Salesforce. Both fields have to hold the same values (a fully resolved example is shown after this list).
  5. &:toolbar=no – parameter to hide the toolbar.
  6. height="1110px" width="1160px" – the height and width of the iframe. It is advisable for them to match the Tableau dashboard (unless it uses automatic sizing).
  7. scrolling="true" – enables scrolling in case the dashboard does not fit entirely.
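
For a hypothetical account whose AccountID__c is ACC-00042 (an invented value, purely for illustration), the iframe would therefore request a URL like this:

http://tableauserver/views/workbook/dashboard?:embed=yes&CRMID=ACC-00042&:toolbar=no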

The console will look like this:

[Screenshot: Developer Console with the Visualforce page code]

(!) The common field that Tableau has to filter on is defined in the {!account.AccountID__c} part of the code. To find the proper name of the Salesforce field, it is advisable to go to the fields section to make sure that you are picking the actual field name and not the alias used in the view.

[Screenshot: Salesforce account fields section]

Step 3 – Create Salesforce section

To create a section to place the Tableau dashboard you will have to edit the page layout. To enter the editor mode press the “Edit Layout” link.

[Screenshot: the Edit Layout link in the Salesforce account view]

Afterwards, drag and drop the section object to the desired location and name it.

[Screenshot: adding a new section in the page layout editor]

Step 4 – Add Visualforce page

The next step consists of dragging and dropping the Visualforce page created in step 2 into the “Tableau Reporting” section added in step 3.

[Screenshot: adding the Visualforce page to the Tableau Reporting section]

Once added, it is possible to edit the dimensions of the Visualforce page by pressing the settings button.

[Screenshot: Visualforce page settings]

Finally, save the layout and the Tableau report will appear in your account view, filtered by the account you are visualising. Each time you change the account in Salesforce, the Tableau report will change too. The final result will look something like this:

[Screenshot: Tableau dashboard embedded in the Salesforce account view]

 

 

Enjoy your integration!

 

OBIEE 11g training for Kenyan Oracle BI Partners


In our post OBIEE 11g training in Nigeria, we talked about our virtual training capabilities and the effort we were putting into promoting this training course format. I'm glad to say that the effort has paid off.

Last week, the ClearPeaks Academy provided virtual training to Oracle BI Partners in Kenya. Delivering a course virtually adds a challenge on top of the usual mix of profiles we find in this type of training, but our experience with virtual training courses made this one another success.

If you want to get information about the ClearPeaks Academy training courses, please visit our Academy page or contact us via our web page form or by sending an email to academy@clearpeaks.com.

