Displaying Geographic Data in OBIEE


 

Introduction - What is geographic data?

The main goal of Business Intelligence is to transform raw data into meaningful, useful information that enables more effective operational insights and tactical decision-making. The nature and character of this raw data can be very heterogeneous, ranging from structured order data originating in transactional databases to unstructured data coming from clients' Twitter feeds. Today, I want to focus on one specific data type: geographic data.

The term ‘geographic’ neither refers to the way data is stored, nor its source. Instead, it denotes a functional characteristic, meaning data can be somehow positioned on the Earth. More precisely, geographic data can be defined as data with an implicit or explicit association with a location relative to the Earth, either a point, a line or a polygon.

In the following images you can see a clear example of how important it is to display geographic data properly. While it is very difficult to see a clear pattern in the bar chart, the map paints a much clearer picture: it turns out we are visualising the latitude of each region of Spain. The conclusion, as I like to say, is that showing geographic data without a map means losing information.


Figure 1: Comparison bar chart - map

As you might know, most organisations have some geographic data among their data sets. It can be points with a client’s location or an event’s situation, lines representing streets or railways, or polygons with the shape of countries or some other customised regions. Geographic data is usually present, and hence, it has to be properly displayed in order to get useful insights and information from it.
 

1. Geographic Visualisations Components

When creating geographic visualisations, that is, maps with data on top, three components are needed:

Background map: the map displayed at the bottom. It can be either an online map (Google Maps, Bing, TomTom, etc.) or an on-premise map (e.g. HERE or OpenStreetMap) which was previously designed and stored. As you can see in the images below, the visual experience of an online map is hardly achievable with an on-premise one.

Figure 2: Examples background maps

Data layers: composed of a shape (a point, a line or a polygon) and data objects (measures and attributes). The critical issue is which shapes are identified by the visualisation tool and which are not. Geocodes made up of latitude/longitude coordinates are usually identifiable as points, but literal addresses usually are not. For polygons, only the main administrative areas are usually recognised, while custom areas have to be introduced manually.
High-end geographic information system (high-end GIS): the software that matches the background map with the data layers, renders the map and adds some extra spatial functionality. Some examples of high-end GIS are Oracle MapViewer (for OBIEE), Tableau (already built in) and libraries such as the Google Maps JavaScript API, Leaflet or Kendo.

 

2. Creating Geographic Visualisations in OBIEE12c

When working with OBIEE12c, we have three main options to implement some nice geographic visualisations:

Oracle MapViewer with online or on-premise maps: Developing OBIEE built-in maps using the Map View analysis type, which is specially designed to display several map visualisations such as Color Fill Map, Bubble Map or Pie Graph Map. The background map can be either an online map like Oracle eLocation, if you have Internet connectivity, or a customised offline one. The MapViewer toolkit for OBIEE and the Oracle Spatial option for the Oracle database are necessary. The pros are that no third party is involved in the solution, nor is any code needed. On the other hand, the number of visualisations is limited, and layers and background maps have to be configured.
On-premise library with online or on-premise maps: Another solution is developing maps using an on-premise library, such as Leaflet or Kendo, together with either a customised on-premise map, for a 100% offline solution, or an online map. Remember that OBIEE allows you to run JavaScript code using Narrative View analyses. In this case, the pros are that no extra Oracle tools are needed, richer visualisation features and options, and no Internet connection required. However, the development and maintenance cost increases significantly.
Google Maps JavaScript API: The last main solution is developing maps using the Google Maps JS API (or some other geographic API). Again, we use Narrative View analyses to run the JavaScript code in OBIEE, but in this case you also need to enable cross-site scripting from the API to the OBIEE server. The pros are an enhanced user experience and better visualisation features, while the drawbacks are dependency on third parties and a higher development cost.
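As a minimal sketch of the Narrative View approach (the `lat|lng|label` row format and all names here are assumptions for illustration), the narrative section can emit one delimited string per row, which a small helper then turns into Google Maps marker options:

```javascript
// Sketch: converting OBIEE Narrative View rows into Google Maps marker
// options. In a real Narrative View, each row string would be emitted
// using the @n column placeholders; the "lat|lng|label" format is an
// assumption for this example.
function rowsToMarkers(rows) {
  return rows.map(function (row) {
    var parts = row.split('|');
    return {
      position: { lat: parseFloat(parts[0]), lng: parseFloat(parts[1]) },
      title: parts[2]
    };
  });
}
```

In the browser, each returned object would then be passed to `new google.maps.Marker(...)` together with the map instance, once the Maps API script has loaded.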

In short, each solution has its pros and cons. For this reason, an analysis of the users' requirements, the system limitations and the maintenance and development costs is a must before starting development.
 

3. Developing Geographic Visualisations with Google Maps JS API

In this section, we explore the possibilities of developing geographic visualisations in OBIEE12c using Google Maps JavaScript API. Specifically, we show three different dashboards, each with one map and several extra functionalities developed with HTML, CSS and JavaScript, such as the title and some navigation buttons.
The first one is a heat map colouring the area of each town by some measure. It also includes several background map styles, a legend with the minimum and maximum values of the measure, and a tooltip with some specific attributes and measures. Moreover, the traditional Google buttons can be configured: in this case the Street View button is hidden and only the zooming buttons are shown.
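Hiding the Street View button while keeping the zoom controls is done through the map options; a sketch follows (the centre coordinates and zoom level are placeholders, not values from the original dashboard):

```javascript
// Sketch of the Google Maps options for this dashboard: hide the
// Street View pegman and keep only the zooming buttons.
var mapOptions = {
  center: { lat: 40.4, lng: -3.7 }, // placeholder centre
  zoom: 6,                          // placeholder zoom level
  streetViewControl: false,         // hide the Street View button
  zoomControl: true,                // keep the zooming buttons
  mapTypeControl: false,
  fullscreenControl: false
};
// In the browser, once the API script has loaded:
// var map = new google.maps.Map(document.getElementById('map'), mapOptions);
```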


Figure 3: Developing Geographic Visualisations with Google Maps JS API - Dashboard 1

The next dashboard shows a heat map with a similar look and feel to the previous one. This one also incorporates a label with the number of points requested, which is very important to control for performance reasons. It is a perfect way to identify geographic patterns in the location of events.


Figure 4: Developing Geographic Visualisations with Google Maps JS API - Dashboard 2

The last one shows a markers map which uses specific icons to represent the different values of a category. A legend describing the icons used can be shown or hidden by clicking on the information button. This is a great way of introducing another dimension into a geographical analysis.


Figure 5: Developing Geographic Visualisations with Google Maps JS API - Dashboard 3

Obviously, the complexity of the code depends on the characteristics of the map itself, such as the type of map, the use of custom geometries or markers, the number of auxiliary elements such as legends and tooltips, and the level of customisation of the background map. However, when working with OBIEE, there is another element of complexity to take into account: the insertion of this code into the Narrative View structure.
 

Conclusions

Although this article gives a rather general overview of several topics related to the big world of geographic data, we can still draw some conclusions from what has been said.
First and most importantly, if you have geographic data, remember to draw a map! We have also covered the main components you need to create a geographical analysis, and the technical options for implementing it in OBIEE. Finally, we showed some nice dashboards using the Google Maps API.
Click here if you would like to know more about displaying geographic data in OBIEE and the services we offer!

OBIEE12c Integration with Oracle EBS Security


Integration of Oracle Business Intelligence Enterprise Edition and Oracle E-Business Suite provides a seamless, controlled flow between the systems, allowing users to drill down and drill back from key metrics to the underlying detail. Theoretically, this can be done between any OBIEE form and any EBS form, with pass-through to any connected EBS subsystem.

If you are integrating OBIEE with EBS, you are likely to be using OBIA (Oracle Business Intelligence Applications), although this is certainly not a requirement. OBIA is a pre-built, pre-packaged BI solution that delivers role-based intelligence to the organization. It is a set of OBIEE dashboards and reports that run from a pre-built warehouse previously serviced by Informatica/DAC; the next generation of the OBIA warehouse uses Oracle Data Integrator (ODI), which runs high-volume batch load plans, event-driven load plans, and even SOA data services.

1. OBIEE 12c Configuration

When configuring an initialization block to retrieve data from EBS, make sure that Row-wise initialization is checked, as this allows multiple results to be stored in the variable, regardless of whether the variable is static or dynamic; otherwise you will only be able to retrieve the last item in the result set. Be sure to set the Execution Precedence of the EBS Integration init block that attaches to the session through the session cookie, so that it executes before any attempt is made to retrieve security information.

Figure 1: EBS Configuration


Two files must be modified in order for WebLogic to find, accept and attach to the EBS session.

• instanceconfig.xml

• authenticationschema.xml

To configure external authentication, you will need to modify instanceconfig.xml as follows:
Path to instanceconfig.xml:

$BI_HOME/config/fmwconfig/biconfig/OBIPS

Note: Take a backup of the file before editing.

Add “EBS-ICX” in the EnabledSchemas xml tag.

<Authentication>
<!--This Configuration setting is managed by Oracle Enterprise Manager Fusion 
Middleware Control-->
<EnabledSchemas>UidPwd,Impersonate,UidPwd-soap,Impersonate-soap,EBS-ICX</EnabledSchemas>
</Authentication>

The middleware recognizes "EBS-ICX" as the internal schema name for the EBS interface; Oracle has already implemented this, so it is only necessary to tell the system that it is to be used, and then to give it the name of the session cookie that EBS writes.
To configure authenticationschema.xml, it is necessary to know the Oracle EBS instance cookie name. Follow these steps to get it:

1. Login to Oracle EBS

2. If you are using Google Chrome or Firefox, open Dev Tools ➜ Web Console and run one of the following commands:

javascript:alert(document.cookie)

or

javascript:document.write(document.cookie)

The alert command will pop up a dialog box as follows, while document.write will display the cookie information in the browser window.

Figure 2: Alert command pop up


Notice the key-value pair of the cookie: ERPU1 is the cookie name of the Oracle EBS test instance, and the value is dynamically generated for each user at each login. We only require the key, which is "ERPU1". We will now use this value in the authenticationschema.xml file.

Path to authenticationschema.xml file:

Obiee/bi/bifoundation/web/display

Note: Take a backup of the file before editing.

Edit the following tags in the file:

<SchemaKeyVariable source="cookie" forceValue="EBS-ICX" nameInSource="ERPU1" />

 

<RequestVariable source="cookie" type="auth" nameInSource="ERPU1" 
biVariableName="NQ_SESSION.ICX_SESSION_COOKIE" />

As per Oracle Doc ID 2141505.1 ("Access Prohibited" When Logging In To Analytics In EBS Integrated Environment), the following tag needs to be added in the file under "AuthenticationSchema":

<RequestVariable source="constant" type="auth" nameInSource="ssi" 
biVariableName="NQ_SESSION.SERVICEINSTANCEKEY" />

That’s it for OBIEE configuration!

 

2. RPD Changes using Administration Tool

Here comes the part that is familiar to every OBIEE administrator: the RPD modifications. If you are following the Oracle document, the sample EBS connection pool can be used, or you can create a new one just for the initialization process and for retrieving security information.
Create the database objects and connection pools for the Oracle EBS database.

Figure 3: EBS Connection Pool


Note: the APPS user should have READ and EXECUTE permissions to run the PL/SQL queries. If not, grant those privileges to the APPS user.

Now, create an init block which will use this connection pool to retrieve the EBS context and set it into OBIEE session variables. The init block will use the newly defined connection pool to send a data source query to the EBS database:

SELECT
FND_GLOBAL.RESP_ID,
FND_GLOBAL.RESP_APPL_ID,
FND_GLOBAL.SECURITY_GROUP_ID,
FND_GLOBAL.RESP_NAME,
FND_GLOBAL.USER_ID,
FND_GLOBAL.EMPLOYEE_ID,
FND_GLOBAL.USER_NAME
FROM DUAL

Figure 4: EBS Security Context


Referring to another Oracle document, 1539742.1, create these static session variables to hold the context:

EBS_RESP_ID
EBS_RESP_APPL_ID
EBS_SEC_GROUP_ID
EBS_RESP_NAME
EBS_USER_ID
EBS_EMPLOYEE_ID
USER
ROLES

Note: you have to create another init block, named "EBS Security Context – ROLES – Row wise", just for ROLES, as a user can have more than one role in Oracle EBS; this init block must be set for row-wise initialization.

Figure 5: EBS Security Context - Roles


Figure 6: EBS Security Context - Roles - Row Wise


The following query will be used to fetch all the responsibilities of the logged-in user and assign them to the ROLES variable.

SELECT DISTINCT 'ROLES',
RESPONSIBILITY_NAME
FROM FND_USER,
FND_USER_RESP_GROUPS,
FND_RESPONSIBILITY_VL
WHERE FND_USER.USER_ID = FND_USER_RESP_GROUPS.USER_ID
AND FND_USER_RESP_GROUPS.RESPONSIBILITY_ID = FND_RESPONSIBILITY_VL.RESPONSIBILITY_ID
AND FND_USER_RESP_GROUPS.RESPONSIBILITY_APPLICATION_ID = FND_RESPONSIBILITY_VL.APPLICATION_ID
AND FND_USER_RESP_GROUPS.START_DATE < SYSDATE
AND (CASE WHEN FND_USER_RESP_GROUPS.END_DATE IS NULL THEN SYSDATE
ELSE TO_DATE (FND_USER_RESP_GROUPS.END_DATE)
END) >= SYSDATE
AND FND_USER.USER_ID = (SELECT USER_ID
			FROM FND_USER
			WHERE UPPER (USER_NAME) = UPPER('VALUEOF(NQ_SESSION.USER)')
			)

 

3. Oracle EBS Configuration

Now we need to introduce a responsibility for OBIEE through which a user can navigate to OBIEE from Oracle EBS.

1. Create a Function, using Application in EBS:

Figure 7: Oracle EBS Configuration - Form Functions - Description


2. In the Properties Tab, add as follows:

Function: OBIEE
Type: SSWA jsp function
Maintenance Mode Support: None
Context Dependence: Responsibility

Figure 8: Oracle EBS Configuration - Form Functions - Properties


3. In the Web HTML tab, add the following link:

Function: OBIEE
HTML Call: OracleOasis.jsp?mode=OBIEE&function=Dashboard

Figure 9: Oracle EBS Configuration - Form Functions - Web HTML


4. Create a Menu in Oracle EBS named “OBIEE Dashboard” and add the Function created in step 1:

Figure 10: Oracle EBS Configuration - Menus


Note: only create the menu for the OBIEE Dashboard.

5. Assign Menu to the relevant responsibility:

Figure 11: Oracle EBS Configuration - Users


6. Set Profile

You need to enter the URL of the Oracle BI Server as part of a profile. You can set up a profile for a responsibility, a user, or a site. The following procedure shows how to set profile options for a responsibility:

Figure 12: Oracle EBS Configuration - Find System Profile Values


You should use a fully qualified server.domain name rather than an IP address or just a host name. The OBIEE domain must be the same as the Oracle EBS domain, so that the EBS-ICX cookie is visible to OBIEE from the user's browser.

References:
OBIEE 12c: Integrating OBIEE 12c with Oracle E-Business Suite (EBS) Security (Doc ID 2174747.1)
Chapter 9: Oracle® Fusion Middleware Integrator's Guide for Oracle Business Intelligence Enterprise Edition.

Click here if you would like to receive more information about the topic or if you need help with your EBS-OBIEE configuration!

Customizing ODI Knowledge Modules


Lately we have seen a growing interest in ODI (Oracle Data Integrator). We have been working with this tool here at ClearPeaks for a while now, on several projects for different verticals, but with the appearance of the newer versions of Oracle Business Intelligence Applications (OBI Apps) 11.1.1.X it has become a pretty hot topic, so we decided it was time to publish a quick "tips and tricks" guide for ODI, hoping to save our customers a lot of effort, time and money.

The first thing to bear in mind when you start working with ODI is that you really must follow best practices in order to take full advantage of two of the most powerful weapons in the ODI arsenal: the declarative design and the knowledge modules. For those of you who don't have much experience with ODI, let me run through these two outstanding features.

The declarative design basically allows you to create an abstraction layer between the developer and the actual code that is going to be performed by the DBMS. This means that in ODI you define “what you want to get”, and the way to do so is automatically implemented by the Knowledge Module.

However, you might ask yourself: "Is this possible? Can we really rely on the default ODI KMs?" Well, the answer is very simple: for standard needs, where performance is not a problem, yes! But remember that in most of our BI projects we have had to tune the KMs to adapt them to our customers' needs and to obtain the maximum benefit from the tool.

But don’t think that this undermines what is a fantastic feature. ODI comes with a great set of KMs that give you the perfect starting point to create your own customized KM. And moreover, all the developers don’t need to go into the details of the KM implementation; in a typical ODI project, the architect will be responsible for setting up the ODI environment and will provide the whole team with the appropriate KMs that will satisfy the particular project needs.

So in principle, the developers don’t need to know all the implementation details (it is up to each user/developer to go beyond and analyze the code ODI is generating, if required). This abstraction significantly speeds up the process of developing an ETL, since once the logic is established, there is no need to redo it over and over again.

A typical example to consider is the logic necessary to load a Slowly Changing Dimension (SCD Type II). With other tools, each developer would need to fully understand the logic of the SCDs and the way the integration process is performed, since it has to be replicated for each table to be loaded.

With the ODI declarative design, you just establish what you want, and the KM will take care of the logic. So you simply need to indicate:

➜ I want to treat “Table X” as a Slowly Changing Dimension (we will mark it like this).

Oracle Data Integrator

 

➜ I want to use “Column X” and “Column Y” to store the starting and ending date of the row, respectively.

Oracle Data Integrator

 

➜ I want to use “Column Z” as the current row flag.

Oracle Data Integrator

 

➜ I want “Column A” to be the Primary Key (Surrogate Key) of “Table X”.

Oracle Data Integrator

 

➜ And I want this column / set of columns (e.g. B and C) to be the Integration Key (Natural Key) of Table X (the column, or columns, that will be used to determine if the row is a new row, or if the row previously existed and has to be updated).

Oracle Data Integrator

 

➜ Finally, we can indicate for each column if we want the ETL to add a new row when the value changes in the source system, or if we prefer to update the whole table.

Oracle Data Integrator

 

✓ And that’s it! By then selecting the appropriate KM, tuned with the necessary logic by our architect, we can develop as many mappings as we want for Slowly Changing Dimensions. Just indicate the source for each column and run it. Quick and easy!

We have also mentioned the concept of Knowledge Modules. So, some of you may wonder, what is a “Knowledge Module”? This is simply a generic set of steps that will perform the needed logic for your ETL process. Each step can be written in different languages (SQL, Oracle-SQL, Jython, and many more) depending on the technology underneath, with placeholders for each column, table, and in general, “entity” that will take part in our ETL. At execution time, those placeholders are filled with the details of the mappings that have been developed, and this gives the project team the flexibility to reuse logic and speed up the delivery of the ETL process.

Well, that’s enough of an introduction to these two handy features in ODI. Now let’s see some usage examples of the things that can be done, and which can help our customers to reduce implementation time.


1. Automating Loads

▼ Automating the Incremental / Full Load Strategy
There are several ways to automate an Incremental / Full Load Strategy using KMs. Thinking back to the previous version of OBI Apps, in Informatica we had to maintain two separate mappings for the incremental and the full version of the load. With ODI this can be automated in several ways:

ⓐ Manually: Adding an option to the mapping
The simplest way is to add an option to the mapping so that we manually specify whether to execute a full load or not. This option will drive the execution of a new step in our KM. For example, this step may consist in truncating the table and, if we are populating a dimension, resetting the associated sequence.

Here we see the importance of following best practices and naming conventions: if we follow a good methodology when naming our DWH objects, we can distinguish the table type by its name, and the sequences can easily be related to the corresponding dimension.
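Such a step could be sketched as follows; the option name (FULL_LOAD) and the sequence naming convention are assumptions, not part of the standard KMs:

```sql
-- KM step "Truncate on full load", set to execute only when the
-- hypothetical KM option FULL_LOAD is true for the interface.
TRUNCATE TABLE <%=odiRef.getTable("L", "TARG_NAME", "A")%>

-- For dimension tables, a second conditional step could reset the
-- surrogate-key sequence, deriving its name from the table name by
-- convention (e.g. W_CUSTOMER_D -> W_CUSTOMER_D_SEQ, assumed here):
-- DROP SEQUENCE <dim_table>_SEQ;
-- CREATE SEQUENCE <dim_table>_SEQ START WITH 1;
```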

Oracle Data Integrator

 

ⓑ Automatically: Adding an option to the mapping and using a control table
The second option is an extension of the previous one. By using a control table containing the table name and a FULL_LOAD_FLAG column, we can invoke this process only if the DWH administrator has set the flag in the table (FULL_LOAD_FLAG = 1). This way, there is no need to modify any ODI object to change the behavior of the interface.
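A minimal version of such a control table, with hypothetical names, could be:

```sql
-- Hypothetical ETL control table: one row per target table.
CREATE TABLE ETL_CONTROL (
  TABLE_NAME     VARCHAR2(30) PRIMARY KEY,
  FULL_LOAD_FLAG NUMBER(1) DEFAULT 0
);

-- The truncate step in the KM then only fires when the flag is set,
-- e.g. by checking, for the current target:
--   SELECT FULL_LOAD_FLAG FROM ETL_CONTROL
--    WHERE TABLE_NAME = '<%=odiRef.getTargetTable("RES_NAME")%>';
```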

▼ Automating the population of the unspecified rows for dimensions
A typical need in any data warehousing project is to populate an “unspecified” row for each dimension. This will give us the capability of always making inner joins between the facts and dimensions, and thus improve performance in all our OLAP queries.

Once again, we could generate a separate interface for this, for each dimension, but will it be efficient? ODI gives us the possibility of defining a generic step in our KM to be executed for any dimension table to be loaded.


At execution time, ODI is aware of the data type of each column to be populated (remember that every object is stored in the model, so we have an internal representation in the ODI repository). We can prepare a simple process that basically assigns a default value for each data type. For example:
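The resulting statement, for a hypothetical customer dimension, might look like this (the table and column names are illustrative only):

```sql
-- Hypothetical generated INSERT for an "unspecified" row: -1 for numeric
-- keys, 'UNSPECIFIED' for strings, a fixed date for date columns. In the
-- KM this statement is built generically by looping over the target
-- columns and choosing a default per data type.
INSERT INTO W_CUSTOMER_D (ROW_WID, CUSTOMER_CODE, CUSTOMER_NAME, CREATED_DT)
SELECT -1, 'UNSPECIFIED', 'UNSPECIFIED', DATE '1900-01-01'
FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM W_CUSTOMER_D WHERE ROW_WID = -1);
```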

Oracle Data Integrator

Oracle Data Integrator

NOTE: See the ODI Substitution API Reference to learn more about the functions you can use to retrieve useful information from the ODI model, or contact us for further details!

▼ Automating the population of Fact Modules by using a control table
Another interesting functionality we usually include in our projects is controlling the periods to be loaded through a control table. As we mentioned before, with this table we can change the behaviour of our ETL process just by changing one row in one table, which greatly speeds up the process of programming and administering the loads.

By using this table, the ETL administrator can specify the starting date to populate and the number of periods to be loaded. Other options can be useful in this table, like a flag (PREV_EXEC_CORRECT_FLG) indicating whether the previous automated execution of the ETL was correct (if not, the ETL should not continue, in order to avoid inconsistencies, for example, in Slowly Changing Dimensions). Another flag might indicate whether the population of this table should be included in the automated process (IS_PART_OF_DAILY_LOAD), along with the FULL_LOAD_FLAG we already mentioned.
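Sketched as DDL, with the column names taken from the flags mentioned above (the table name itself is hypothetical):

```sql
-- Hypothetical load-control table driving the period loop.
CREATE TABLE ETL_LOAD_CONTROL (
  START_DATE            DATE,       -- first period to populate
  NUM_PERIODS           NUMBER,     -- how many periods to load
  PREV_EXEC_CORRECT_FLG NUMBER(1),  -- 0 stops the load to avoid inconsistencies
  IS_PART_OF_DAILY_LOAD NUMBER(1),  -- include this table in the daily run
  FULL_LOAD_FLAG        NUMBER(1)   -- trigger a full (truncate) load
);
```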

In the example below, the source system needed to be queried daily due to performance restrictions; by specifying a starting date and the number of periods to be loaded, the ETL automatically ran a loop for the number of days to be loaded by leveraging the ODI variables.

Oracle Data Integrator

In this case, it is also important to highlight that the interfaces have to be filtered by a variable containing the value of the day to be populated:

Oracle Data Integrator

Another interesting modification in this case study was made to the corresponding CKM: adding an option that avoids deleting the previous errors, since the same session is going to be executed several times in a loop.

Oracle Data Integrator

In some cases, it might even be interesting to remove the period from the target table before inserting, if we are performing a full period insertion (we don’t want to merge the data with the previously inserted data, but to directly reload a complete period).

Oracle Data Integrator

 

As you can see above, by using the ODI functions we can generate generic steps that will work for any object in our ODI model. For example, <%=odiRef.getTable("L","TARG_NAME","A")%> will retrieve the target table name.

See: http://docs.oracle.com/cd/E21764_01/integrate.1111/e12645/odiref_reference.htm#ODIKD1295


2. Boosting performance

▼ Boosting performance for populating staging tables
Since ODI 11g does not have multi-target capability, we sometimes need to reuse the same staging table to populate several final targets, and we have to load these staging tables as fast as possible to speed up the whole ETL process. We can use the default KMs to load them, but for a staging table you may not need to apply integration processes, validation checks and other time-consuming logic. To get round this, a good practice is to generate your own staging KM, including only the necessary steps:
Oracle Data Integrator

Then, in the “Insert” step, you can populate the target table directly, instead of the integration table:

<%=odiRef.getTable("L","TARG_NAME","A")%>
vs
<%=odiRef.getTable("L","INT_NAME","W")%>

Oracle Data Integrator

 

▼ Boosting performance by using partition exchange
In order to boost performance of load processes and reporting as well, it is sometimes useful to apply partitioning to your DWH tables. This will give you two main advantages:

➀ At ETL time, the population of a big table (GBs of data) is much faster if the table is partitioned, since you can populate a temporary empty table, and then execute a partition exchange to swap the partitions between both tables. This way, all the population in your temporary table, the consistency checks, constraints and all the processes you may need to do can be done much faster.
It is important to emphasize that this is useful for snapshot tables, and in general for tables that do not need to cross data between partitions.

➁ At reporting time, if you just need to analyze one partition (one period) at a time, or maybe two to compare them, the response time in a partitioned table will be faster.

To accomplish the partition exchange in ODI, you can also use a knowledge module that can encapsulate the logic needed. The command below will execute the partition exchange between the temporary table and the target table, leveraging ODI functions.


ALTER TABLE
<%=odiRef.getTargetTable("SCHEMA")%>.<%=odiRef.getTargetTable("RES_NAME")%>
EXCHANGE PARTITION ' || V_PARTITION_NAME || ' WITH TABLE
<%=odiRef.getTable("L","INT_NAME","W")%>
<%=odiRef.getUserExit("PARTITION EXCHANGE OPTIONS")%>

 

Note that the variable V_PARTITION_NAME will need to contain the name of the partition to be exchanged. This can be retrieved by a simple query on the temporary table that your mapping has populated.
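Such a refreshing query for the variable could look like this; the temporary table name and the monthly P_YYYYMM partition naming convention are assumptions for illustration:

```sql
-- Hypothetical refresh query for the V_PARTITION_NAME ODI variable:
-- derive the partition name from the period just loaded into the
-- temporary (integration) table.
SELECT 'P_' || TO_CHAR(MAX(PERIOD_DT), 'YYYYMM')
FROM I$_FACT_SALES
```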

▼ Loading historical information into a Slowly Changing Dimension
The last interesting modification that we are going to show you is to the Oracle SCD KM. Imagine that you launch your system with a given starting date (e.g. 30 Sept. 2014) and, after some months running your ETL system in production, the source system is updated with historical information so that the business can analyze trends and customer behavior over several years. Obviously, we can't simply load this historical information from the source system into our DWH as if it were current, because it would override the current information with stale, though still relevant, data. We need a way to include this information in our system so that, if a user queries old periods, they can analyze and explore the attributes the customer had at that point in time.

Well, we can do the following:

➀ Duplicate the mapping you are using to load your SCD.

➁ Duplicate the Slowly Changing Dimension IKM you are using to populate your SCDs.

➂ Modify the copy of your IKM to set the ACTIVE_TO_DATE to the date when you launched your system. This way you can run a separate load for older periods without impacting the daily load you are running with current data. Any row loaded by this second thread (older periods) will always close on the date when you launched your system, so there won't be duplicates in your SCD, and you can explore both current and old information about the entities in your dimension (e.g. your customers' profile, segment, location, etc.).

Oracle Data Integrator

 

By modifying small pieces of the given KM code, you can meet your particular needs. Just be careful to modify the appropriate properties (in this case, SCD_END and SCD_FLAG have to have different values from those in your daily thread).

We hope that you’ve found these techniques that can be applied to ODI KMs to speed up the process of generating and manipulating your ETL interesting, and if you have further questions, do not hesitate to contact us.
Stay tuned for more interesting features to apply to your ETL processes with #ODI!!

Get in touch with us and see what we can do for you!  ✉

Blog Article: Emilio Soriano

Oracle Data Visualization Desktop – April 2016 Release


1. Introduction - Oracle Data Visualization Desktop

With the release of Oracle Business Intelligence Enterprise Edition 12c (OBIEE 12c), Oracle announced a new Data Visualization tool aimed at ad hoc personal data discovery tasks. Oracle is putting a great deal of effort into developing this new tool, which is available as:

* a component of Oracle BI Cloud Service (BICS)
* a standalone cloud service named Data Visualization Cloud Service (DVCS)
* a component of OBIEE 12c on premise
* a standalone desktop tool named Data Visualization Desktop (DVD)

Oracle Data Visualization Desktop

At the end of April 2016 Oracle released the first publicly available version of Oracle Data Visualization Desktop (DVD), under version 12.2.1.0.0 (and timestamp 20160422085526). In this blog post we will present the main characteristics of this tool; most aspects, however, are common to the other three modalities above.


2. Data Sources and Data Management

There are three types of data sources that can be used in DVD:

  1. Excel sheets, which allow complete offline analysis
  2. Databases like Oracle, SQL Server, MySQL, Teradata, Redshift, MongoDB, Spark, etc.
  3. OBIEE, where the user can connect to an existing Analysis or use a Logical Query

Oracle Data Visualization Desktop

Once a source has been added to a project, DVD offers different options to manage the data:

* Modify data types (string, number, date, etc.)
* Alternate between Attributes and Measures
* Select the Aggregation rule of Measures
* Create Calculations using a wide variety of functions

Oracle Data Visualization Desktop

Multiple sources of different types can be added to a DVD project, joined automatically using fields with matching names; the joins can always be edited using the Source Diagram.

Oracle Data Visualization Desktop


3. Visualizations

One of the most important characteristics of DVD is the high number of visualizations available out-of-the-box. There are 22 data visualizations in total, plus the possibility of including Text Boxes and Images. All the available visualizations are shown in the image below:

Oracle Data Visualization Desktop

Visualizations are easily created by dragging and dropping Data Elements (data columns in DVD) onto the different Drop Targets (that is, the corresponding visual dimensions of the visualization).

Oracle Data Visualization Desktop

Visualizations can be highly customized in DVD: the user can edit the titles and axes, modify the colour schemes, sort the data, change the data format, etc.

In addition, the Map visualization allows you to create custom maps using the GeoJSON format. The underlying map engine is able to render the new maps and join them to the corresponding data element.
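As a rough illustration of the format (the region name and coordinates below are invented, not taken from DVD documentation), a custom map is simply a GeoJSON FeatureCollection whose features carry a property that can be matched to a data element:

```python
import json

# A minimal, hand-made GeoJSON FeatureCollection of the kind DVD can load as
# a custom map layer. The property used for matching (here "region") must
# line up with the values of the data element you join it to; all values in
# this sketch are invented for illustration.
custom_map = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {"region": "North Zone"},
            "geometry": {
                "type": "Polygon",
                # One square ring; real region outlines have many vertices.
                "coordinates": [[[0.0, 0.0], [1.0, 0.0], [1.0, 1.0],
                                 [0.0, 1.0], [0.0, 0.0]]],
            },
        }
    ],
}

with open("custom_regions.geojson", "w") as f:
    json.dump(custom_map, f)
```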

Oracle Data Visualization Desktop

Multiple visualizations can be combined in the Canvas, thus allowing the creation of complete dashboards to analyse the data. In addition, through the Data Brushing feature, the data selected in any visualization is highlighted in the others.

Oracle Data Visualization Desktop


4. Data Discovery and Advanced Analytics

As a Data Discovery tool, DVD includes multiple features to facilitate the data analysis process. One simple data discovery tool is filters: the user can filter Attributes based on values or Measures based on ranges.

Oracle Data Visualization Desktop

Together with the filters, Reference Lines and Trend Lines are available in DVD straight out-of-the-box. More Advanced Analytics tools are available in combination with R; for this reason, DVD ships with an Oracle R Distribution (version 3.1.1) installer that can be executed after DVD itself is installed. Once R and the required libraries are installed, we can use Clustering, Outlier Detection and Forecasting, as well as custom R scripts.

In the example below we use Clusters to identify how the number of apartments by neighbourhood affects the price. In addition, we have a Reference Line to analyse the average apartment price for different room types. Finally, using Trend Lines, we can see that the relationship between minimum number of nights and price has been increasing over the last few years.

Oracle Data Visualization Desktop

Thanks to the data discovery and advanced analytics capabilities of DVD, we can easily identify hidden information and patterns in our data. In order to keep track of the data discovery flow, we can use the Story Navigator, which allows different insights to be saved. These insights are very useful when we want to share a project, letting other users understand our findings quickly.

Oracle Data Visualization Desktop


5. Managing Projects

It is very easy to share Projects in DVD. The first thing to do is to save them locally; the different Projects are shown in the Home page. From the Home page we can select the option to export the Project, which will create a DVA (.dva) file. It is possible to store the source data in this file and protect it with a password.

Oracle Data Visualization Desktop

On the receiving end, we can similarly use the Import option to add the Project to our main folder in the Home page.


6. Oracle Data Visualization Training

We provide a wide range of training services designed to propel course delegates to a project-ready state, ensuring they have both the necessary BI theory and the hands-on practical skills needed to engage in a BI project with confidence.
Here at ClearPeaks we are experts in Oracle Data Visualization, and we offer this expertise in our specialized training sessions.

Get in touch with us and see what we can do for you!

Blog Article Author: Iñigo Hernáez

OBIEE Data Lineage Solution

.

Tracking OBIEE reporting data

BI implementations may use hundreds of data warehouse tables and thousands of reports, and one of the most common requirements in a BI project is the ability to trace the data used in reports back to its source. The ability to quickly identify the set of columns and tables used in a given report or dashboard, or to check which reports may be affected by a change to a physical column in the data warehouse, is also crucial for development. This calls for a data lineage solution which is accessible to users and developers, and allows such checks on an ad hoc basis.

Various vendors offer data lineage solutions, but these can be expensive and vendor-specific. With our simple solution, we combine Catalog Manager and Administration Tool export sources to create an easily accessible solution for tracking data lineage in OBIEE.

By implementing the OBIEE data lineage solution, we can check the following:

  1. Which physical tables and columns are used in a given report or dashboard
  2. Which reports use given physical columns or tables; this is especially important when modifying an existing table, as any change in the table’s structure must take existing reports into consideration
  3. Which are the most commonly used columns in reports in a given subject area; identifying the most commonly used columns can suggest where to create indexes to improve the overall performance of the OBIEE implementation even further.

Solution

The ClearPeaks OBIEE Data Lineage Solution gathers all the required data lineage information in one place and uses it as a source for OBIEE reports within Data Lineage Subject Area. Two sources are combined to achieve this:

  1. List of OBIEE reports and presentation columns used

Catalog Manager provides an option to export the list of reports and columns used. The export can be done either with the Catalog Manager UI tool or through the command line utility. We use the latter option, as it allows automation of the whole process later.

  2. Mappings of presentation columns back to the physical columns

Such mappings can be obtained manually with Administration Tool’s Repository Documentation utility. The output will be repository documentation in the form of a comma-separated values (.csv) file, containing the column mappings from the presentation layer through the logical layer down to the physical layer, including column formula expressions. Another way to obtain the column mappings is by extracting the OBIEE repository in .xml format through the command line utility. Our solution uses the second option, as the repository file will be used in our automated script.

OBIEE Data Lineage Solution

Once we have obtained both files, we need to populate the data lineage tables.

The data can be transformed and manually inserted into the tables, but in our solution we use a script (which can run on the OBIEE server) that parses the data and inserts it into the tables.
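As a rough sketch of what such a parsing script does (the tab-delimited layout and the table/column names below are assumptions for illustration, not the actual ClearPeaks script or Catalog Manager export format):

```python
# Sketch: parse a (hypothetical) tab-delimited catalog export into rows ready
# for insertion into a data lineage table. The real export layout depends on
# the fields requested from Catalog Manager; this sample layout is invented.
sample_export = (
    "Path\tName\tFormula\n"
    '/shared/Sales/Revenue\tRevenue Report\t"Sales"."Base Facts"."Revenue"\n'
    '/shared/Sales/Revenue\tRevenue Report\t"Sales"."Products"."Product Name"\n'
)

def parse_catalog_export(text):
    """Turn a tab-delimited export into dicts, one per report column."""
    lines = text.strip().splitlines()
    header = lines[0].split("\t")
    return [dict(zip(header, line.split("\t"))) for line in lines[1:]]

rows = parse_catalog_export(sample_export)

# Each dict would then become one INSERT into a lineage table (names invented):
for r in rows:
    print(f"INSERT INTO dl_report_columns (report_path, report_name, pres_column) "
          f"VALUES ('{r['Path']}', '{r['Name']}', '{r['Formula']}')")
```

In the real solution the script also parses the repository .xml extract, so that each presentation column can be mapped down to its physical table and column before insertion.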

Data Lineage Subject Area

Once we have populated the data lineage tables and their data model has been mapped in Administration Tool, we can create and run reports in OBIEE using Data Lineage Subject Area and filter the results according to our requirements.

OBIEE Data Lineage Solution

Let us look at a few of the use cases for the Data Lineage Subject Area:

Use case 1. Which data warehouse tables and columns are used in a given report?

We would like to know which data warehouse tables and columns are used in a particular report or dashboard. We can create a report with a list of the columns used by a given OBIEE report and their data lineage:

OBIEE Data Lineage Solution


Use case 2. Which reports use given physical tables?

We want to know how many and which reports or dashboards are using given physical tables or particular columns; this could be very useful when assessing the potential impact of column formula or table structure changes on reporting. Using Data Lineage Subject Area, we can fetch the list of OBIEE reports that use a given physical table:

OBIEE Data Lineage Solution


Use case 3. Which reports use given subject areas?

We need to know which reports and dashboards are accessing data from given subject areas. This may be particularly useful when revising users’ access permissions.

OBIEE Data Lineage Solution


Future Improvements

OBIEE Data Lineage Subject Area can also serve as a backbone for further developments, providing complex and complete solutions for managing existing OBIEE project implementations. Here are some examples and potential benefits of merging additional information into the solution:

Usage Tracking data – allows analysis of the physical tables used in the most accessed reports, identification of tables and columns only used in reports that nobody runs, and removal of unused reports or underlying tables.

Data warehouse database metadata – such as table sizes and column indexes. This allows performance analysis of the tables and columns most heavily used by reports.

ETL Data Lineage – an additional layer of data lineage tracking that allows data to be traced back to the very source – can be achieved by adding ETL transformation data obtained from the ETL systems. For example, it is possible to track all the ETL transformations on a given presentation column down to the source system.

Catalog metadata – it is possible to extract additional information regarding catalog objects such as user permissions, report owners, last modified date etc., to further enhance the solution usability.

Adding all the above components creates a strong core for the OBIEE management dashboard, allowing continuous improvement through:

* tracking data from the report to the source database column
* constant analysis of the most used tables in order to improve performance
* checking which users have permissions for given OBIEE reports and dashboards

All of the above is accessible from the OBIEE front-end, providing a convenient and quick way to facilitate many daily business-as-usual tasks in BI deployments.

Conclusion

The ClearPeaks OBIEE Data Lineage Solution can be easily deployed in any project using Oracle Business Intelligence Enterprise Edition. The solution can be run from the command line tools, which makes it possible to create automated jobs to extract and update data on a regular basis.

If you think your OBIEE project could benefit from our data lineage solution, please contact us for more information via our web contact form or by leaving your comments below!

Blog Article Author: Rafal Ostaszewski

