Displaying Geographic Data in OBIEE


Introduction - What is geographic data?

The main goal of Business Intelligence is to transform raw data into meaningful and useful information that enables more effective operational insights and more tactical decision-making. The nature and character of this raw data can be very heterogeneous, ranging from structured order data originating in transactional databases to unstructured data coming from clients' Twitter feeds. Today, I want to focus on one specific data type: geographic data.

The term ‘geographic’ refers neither to the way the data is stored nor to its source. Instead, it denotes a functional characteristic, meaning the data can somehow be positioned on the Earth. More precisely, geographic data can be defined as data with an implicit or explicit association with a location relative to the Earth, either a point, a line or a polygon.

In the following images you can see a clear example of how important it is to display geographic data properly. While it is very difficult to see a clear pattern in the bar chart, the map displays a much clearer picture. Indeed, it turns out we are visualising the latitude of each region of Spain. The conclusion is that, as I like to say, showing geographic data without a map means losing information.


Figure 1: Comparison bar chart - map

As you might know, most organisations have some geographic data among their data sets. It can be points marking a client's location or the site of an event, lines representing streets or railways, or polygons with the shape of countries or other customised regions. Geographic data is usually present and hence has to be properly displayed in order to get useful insights and information from it.
 

1. Geographic Visualisation Components

When creating geographic visualisations, that is, maps with data on top, three components are needed:

Background map: the map displayed at the bottom. It can be either an online map (Google Maps, Bing, TomTom, etc.) or an on-premise map (e.g. HERE or OpenStreetMap) which was previously designed and stored. As you can see in the images below, the visual experience of an online map is hardly achievable with an on-premise one.

Figure 2: Examples background maps

Data layers: composed of a shape (a point, a line or a polygon) and data objects (measures and attributes). The critical issue is which shapes are identified by the visualisation tool and which are not. Geocodes comprising latitude/longitude coordinates are usually identifiable as points, but literal addresses usually are not. For polygons, only the main administrative areas are usually recognised, while custom areas have to be introduced manually.
High-end geographic information system (high-end GIS): the software that matches the background map with the data layers, renders the map and provides some extra spatial functionality. Some examples of high-end GIS are Oracle MapViewer (for OBIEE), Tableau (already built in) or libraries such as the Google Maps JavaScript API, Leaflet or Kendo.

 

2. Creating Geographic Visualisations in OBIEE12c

When working with OBIEE12c, we have three main options to implement some nice geographic visualisations:

Oracle MapViewer with online or on-premise maps: Developing OBIEE built-in maps using the Map View analysis type, which is specially designed to display several map visualisations such as Color Fill Map, Bubble Map or Pie Graph Map. The background map can be either an online map like Oracle eLocation, if you have Internet connectivity, or a customised offline one. The MapViewer toolkit for OBIEE and the Oracle Spatial option for Oracle Database are necessary. The pros are that no third party is involved in the solution, nor is any code needed. On the other hand, the range of visualisations is limited, and layers and background maps have to be configured.
On-premise library with online or on-premise maps: Another solution is developing maps using an on-premise library, such as Leaflet or Kendo, together with a customised on-premise map, for a 100% offline solution, or with an online map. Remember, OBIEE allows you to run JavaScript code using Narrative View analyses. In this case, the pros are that no extra Oracle tools are needed, richer visualisation features and options, and no Internet connection required. However, the development and maintenance cost increases significantly.
Google Maps JavaScript API: The last main solution is developing maps using the Google Maps JS API (or some other geographic API). Again, we use Narrative View analyses to run the JavaScript code in OBIEE, but in this case you also need to enable cross-site scripting from the API to the OBIEE server. The pros are a better user experience and richer visualisation features, while some drawbacks are the dependency on third parties and a higher development cost.

In short, each solution has its pros and cons. For this reason, analysing the users' requirements, the system limitations and the maintenance and development costs is a must before starting with the development.
 

3. Developing Geographic Visualisations with Google Maps JS API

In this section, we explore the possibilities of developing geographic visualisations in OBIEE12c using the Google Maps JavaScript API. Specifically, we show three different dashboards, each with one map and several extra functionalities developed with HTML, CSS and JavaScript, such as the title and some navigation buttons.
The first one is a heat map colouring the area of each town by some measure. It also includes several background map styles, a legend with the minimum and maximum values of the measure, and a tooltip with some specific attributes and measures. Moreover, the traditional Google buttons can be configured; in this case the Street View button is hidden and only the zooming buttons are shown.


Figure 3: Developing Geographic Visualisations with Google Maps JS API - Dashboard 1

The next dashboard shows a heat map with a similar look and feel to the previous one. This one also incorporates a label with the number of points requested, which is very important to monitor for performance reasons. It is a perfect way to identify geographical patterns in the location of events.


Figure 4: Developing Geographic Visualisations with Google Maps JS API - Dashboard 2

The last one shows a markers map, which uses specific icons to represent the different values of a category. Also, a legend with descriptions of the icons used can be shown or hidden by clicking on the information button. This is a great way of introducing another dimension into a geographical analysis.


Figure 5: Developing Geographic Visualisations with Google Maps JS API - Dashboard 3

Obviously, the complexity of the code depends on the characteristics of the map itself, such as the type of map, the use of custom geometries or markers, the number of auxiliary elements such as legends and tooltips, or the level of customisation of the background map. However, when working with OBIEE, there is another element of complexity to take into account: the insertion of this code into the Narrative View structure.
 

Conclusions

Although this article gives a rather general overview of several topics related to the big world of geographic data, we can still draw some conclusions from what has been said.
First and most importantly, if you have geographic data, remember to draw a map! Moreover, we have covered the main components you need to create a geographical analysis, and the technical options to implement it in OBIEE. Finally, we have shown some nice dashboards using the Google Maps API.
Click here if you would like to know more about displaying geographic data in OBIEE and the services we offer!

Real Time Business Intelligence with Oracle Technologies


 

Introduction

In our previous blog article we discussed the necessity of real-time solutions for Business Intelligence (BI) platforms and presented a use case with Microsoft technologies (namely Azure and Power BI). In this case, we are analysing the same scenario, but instead we propose a design using Oracle Cloud and Oracle on-premise technologies.

We recommend going through the previous blog article to fully understand the scenario under analysis and its requirements.

 

1. Oracle Technologies for Real Time BI

Oracle offers both cloud and on-premise solutions focused on Big Data. These solutions can be used for a variety of Big Data applications, including real-time streaming analytics.

a. Oracle Cloud Services

From all the services offered on Oracle Cloud, the following suit the needs of a real-time BI application:

Oracle Event Hub Service. This service provides a managed Apache Kafka cluster, where we can easily assign resources and create topics using the web interface.
Oracle Big Data Compute Edition Service (OBDCE). This service provides a managed Big Data cluster with most of the elements of the Apache Hadoop/Spark stack, where the resource management and the execution of tasks are also handled from the web interface.

Both the Event Hub and the OBDCE services are part of the PaaS offerings of Oracle Cloud, which means that the whole hardware and software infrastructure behind them is managed by Oracle. The key benefit is that, even though we are using standard open source technologies, we don't have to worry about resource provisioning, software updates, connectivity, etc. With this, developers can focus on building the solutions without losing time on administrative tasks.

Another important point is that the connectivity between services on the cloud can be configured very easily using the web console, which ensures a reliable and safe path for the data.

b. On Premise Technologies

In addition to the cloud solution, a similar environment can be built on premises. For this we are using the Oracle Big Data Appliance, which consists, generally speaking, of the following components:

Hardware: Several server nodes and other accessory elements for networking, power, etc. The configuration can range from a starter rack with 6 nodes to a full rack with 18 nodes. Multi-rack configurations allow for an even larger number of nodes.
Software: All the components of the Cloudera Distribution for Hadoop (CDH) and additional components from Oracle like Oracle Database, Oracle NoSQL Database, Oracle Spatial and Graph, Oracle R Enterprise, Oracle Connectors, etc.

For the purpose of our Real Time project, the required components are Kafka and Spark, as we will see later. Both are part of CDH and key elements of the standard open source real-time analytics scenario. In this case, all the components will be available in the same cluster.

It is also worth knowing that, for demo projects like this one, Oracle offers the Big Data Lite virtual machine, which contains most of the components of the Big Data Appliance.

c. Real-Time Visualization

At the time of writing this article, there is no BI tool from Oracle (OBIEE, DV) that allows the visualization of real-time data. To tackle this, we decided to build a custom front end that can be used both as a standalone web application and integrated into OBIEE.
The key technologies we are using for this purpose are:

Flask, which is a lightweight web framework for Python.
SocketIO, which is a framework based on WebSockets for real-time web applications using asynchronous communication.
ChartJS, which is a JavaScript library for building charts.

 

2. Solution Design and Development

The architectural design for the solution is shown in the figure below and explained throughout the rest of this section:


Figure 1: Realtime BI solution design diagram, using both cloud and on premise Oracle technologies

a. On Premise Source System
The on-premises source system part of the solution simulates a real-time operational system using a public data feed provided by Network Rail. The data from the feed is processed and inserted into a PostgreSQL database. An event monitor script listens for notifications sent from the database and forwards them to the real-time processing queue (Kafka), located either on Oracle Cloud or on the Oracle Big Data Appliance.

For more details on this part of the solution, please refer to the previous article using Microsoft technologies. The design is exactly the same in this case, except that the Event Monitor sends the events to a Kafka queue (cloud or on premises) instead of sending them to an Azure Event Hub.
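The Event Monitor itself was covered in the previous article, but to make the flow more concrete, here is a minimal sketch of what such a script could look like in Python. It assumes the database publishes notifications through PostgreSQL's LISTEN/NOTIFY mechanism with a JSON payload; the channel name, connection details, broker address and topic name are all illustrative placeholders.

import json
import select

import psycopg2
import psycopg2.extensions
from kafka import KafkaProducer

# Connect to the source database and subscribe to a notification channel
conn = psycopg2.connect("dbname=networkrail user=monitor")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN train_movements;")  # hypothetical channel name

# Producer pointing to the Kafka broker (cloud or on premises)
producer = KafkaProducer(bootstrap_servers="kafkabroker:9092",
                         value_serializer=lambda v: json.dumps(v).encode("utf-8"))

# Wait for notifications and forward each payload to Topic 1
while True:
    if select.select([conn], [], [], 5) != ([], [], []):
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            producer.send("topic1", json.loads(notify.payload))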

b. Oracle Cloud and Oracle Big Data Appliance

As explained in previous sections and shown in the diagram above, the Oracle solution for real-time stream processing can be developed both in the Oracle Cloud and using the Oracle Big Data Appliance. Both solutions are similar, as the underlying technologies are Kafka for event queueing and Spark Streaming for event processing.

Apache Kafka is a message queueing system where Producers and Consumers send messages to and receive messages from the different queues, respectively. Each queue is called a Topic, and there can be many of them per Kafka Broker (or node). Topics can optionally be split into Partitions, which means that the messages will be distributed among them. Kafka uses ZooKeeper for configuration and management tasks.
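To make these concepts more tangible, the following minimal sketch uses the kafka-python library to publish a JSON message to a Topic and read it back; the broker address and topic name are illustrative placeholders.

import json
from kafka import KafkaConsumer, KafkaProducer

# Producer: publish a JSON message to a Topic
producer = KafkaProducer(bootstrap_servers="kafkabroker:9092",
                         value_serializer=lambda v: json.dumps(v).encode("utf-8"))
producer.send("topic1", {"timestamp": "2017-06-01 10:00", "ppm": 92.5})
producer.flush()

# Consumer: read the messages back from the same Topic
consumer = KafkaConsumer("topic1",
                         bootstrap_servers="kafkabroker:9092",
                         value_deserializer=lambda m: json.loads(m.decode("utf-8")))
for message in consumer:
    print(message.value)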

In our scenario, we are using a simple configuration with just a single Kafka Broker and three Topics, each of them with a single partition. The process works as follows:

The Event Monitor sends the events received from the database to Topic 1.
Spark Streaming consumes the messages from Topic 1, processes them and sends the results to Topics 2 and 3.
In the Flask web server, a couple of Kafka Consumers listen to Topics 2 and 3 and forward the messages to the web application.

Figure 2: Interaction between main components of the solution and the different Kafka topics

Spark Streaming is one of the multiple tools of the Apache Spark stack, built on top of the Spark Core. Basically, it converts a continuous stream of messages (called a DStream) into a batched stream using a specific time window. Each batch is treated as a normal Spark RDD (Resilient Distributed Dataset), which is the basic unit of data in Spark. The Spark Core can apply most of the available operations to process the RDDs of the batched stream.


Figure 3: Spark Streaming processing workflow

In our scenario, Spark Streaming is used to aggregate the input data and to calculate averages of the PPM metric by timestamp and by operator. Calculating these averages requires just a few operations, as shown in the Python code snippet below:

import json
from kafka import KafkaProducer
from pyspark.streaming.kafka import KafkaUtils

# Create Kafka consumer (using the Spark Streaming API)
consumer = KafkaUtils.createDirectStream(streamingContext,
                                         [topicIn],
                                         {"metadata.broker.list": kafkaBroker})

# Create Kafka producer (using the kafka-python API)
producer = KafkaProducer(bootstrap_servers=brokers,
                         value_serializer=lambda v: json.dumps(v).encode('utf-8'))

# Consume the input topic and calculate the average by timestamp
ppmAvg = consumer.map(lambda x: json.loads(x[1])) \
                 .map(lambda x: (x[u'timestamp'], float(x[u'ppm']))) \
                 .reduceByKey(lambda a, b: (a+b)/2) \
                 .transform(lambda rdd: rdd.sortByKey())

# Produce the results to the output topic
ppmAvg.foreachRDD(lambda rdd: sendkafka(producer, topicOut, rdd))

From this sample code, we can see that the RDD processing is similar to Spark Core, with operations such as map, reduceByKey and transform. The main difference in Spark Streaming is that we can use the foreachRDD operation, which executes the specified function for each of the processed batches.

It is also important to know that, at the time of writing this article, the Spark Streaming API for Python does not offer the option to create a Kafka Producer. Therefore, we are using the kafka-python library to create it and send the processed messages.
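The sendkafka helper referenced in the snippet above is not shown here; a minimal sketch could look like the following, assuming each aggregated batch is small enough to be collected on the driver:

def sendkafka(producer, topic, rdd):
    # Collect the (small) aggregated batch on the driver and
    # publish each (key, value) pair to the output topic
    for record in rdd.collect():
        producer.send(topic, record)
    producer.flush()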

Together with the average by timestamp shown above, we are also generating an average by operator. As we have two sets of processed data, Spark needs to send the data to two separate Kafka Topics (2 and 3).

One key problem with Spark Streaming is that it does not allow data processing in Event Time. This basically means that we can't synchronize the windowing that Spark applies to create the batches with the timestamps of the source events. Therefore, as shown in the diagram below, events with different source timestamps can end up mixed into the aggregates created by Spark.


Figure 4: Misalignment between Kafka and Spark Streaming windows causes events to be processed inside the incorrect time window

In fact, in our scenario, we have events that are timestamped in the source, but we were not able to align the Spark Streaming batching process to this time.

There are some possible solutions for this issue, namely:

Spark Streaming with Updates. This is a workaround where we can tell Spark Streaming to update the results of a batch with data coming in a "future" processing window. We tested this approach but, unfortunately, it led to compatibility errors with Kafka and we couldn't go ahead with it.
Spark Structured Streaming. This is a separate tool of the stack, built on top of Spark SQL, which is meant to solve the problem with Event Time processing. Unfortunately, at the time of writing this article, it is still only available as an "alpha" version in Spark 2.X. Again, we were able to test this experimental feature but couldn't get it to work properly with Kafka (see the sketch after this list).
Other stream processing tools. There are other tools in the Big Data ecosystem that can work with Event Time. Kafka itself has a Streams API that can be used for simple data processing, and there are also alternatives such as Apache Storm.
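To illustrate the second option, the sketch below shows roughly what event-time processing looks like with Spark Structured Streaming in the Spark 2.x API: the PPM averages are grouped into windows over the source timestamp instead of the processing time. This is only a sketch with placeholder broker and topic names, not the code we actually ran.

from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, from_json, window
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("ppm-event-time").getOrCreate()

schema = StructType([StructField("timestamp", TimestampType()),
                     StructField("operator", StringType()),
                     StructField("ppm", DoubleType())])

# Read the raw events from Kafka Topic 1
events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "kafkabroker:9092")
               .option("subscribe", "topic1")
               .load()
               .select(from_json(col("value").cast("string"), schema).alias("e"))
               .select("e.*"))

# Average the PPM metric in one-minute windows over the event timestamp,
# accepting events that arrive up to two minutes late
ppmAvg = (events.withWatermark("timestamp", "2 minutes")
                .groupBy(window(col("timestamp"), "1 minute"))
                .agg(avg("ppm").alias("avg_ppm")))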

c. On Premise BI System

As introduced earlier, Oracle standard BI solutions don’t offer the possibility to connect to real-time sources. Therefore, we decided to build our own custom platform and integrate it with OBIEE.

The key component of this part of the solution is the real-time messaging between the web server and the browser, provided by SocketIO. With this library, the client asks the server to start a session. If the server accepts, a continuous stream of bidirectional messages is opened (in our case the messages are unidirectional, from the server to the client). Both the client and the server react to the messages they receive. Finally, either the client or the server can close the connection (in our system the client closes it when the browser is closed).


Figure 5: Continuous bidirectional communication channel using SocketIO

Although the message channel is bidirectional, only the server sends messages to the clients. It consumes the events coming from the Kafka Topics populated by Spark Streaming and forwards them, with a slight manipulation, to a SocketIO namespace using two different named channels.


Figure 6: The web server sends the messages received from the Kafka topics to the SocketIO channels

import json
from flask_socketio import SocketIO
from pykafka import KafkaClient
from pykafka.common import OffsetType

# Create a SocketIO object using the Flask-SocketIO add-in for Flask
socketio = SocketIO(app, async_mode=async_mode)

# Create a Kafka Consumer using PyKafka
client = KafkaClient(hosts=kafka_host, zookeeper_hosts=zookeeper_host)
topic = client.topics[kafka_topic]
consumer = topic.get_simple_consumer(consumer_group=kafka_consumer_group,
                                     auto_offset_reset=OffsetType.LATEST,
                                     auto_commit_enable=True,
                                     auto_commit_interval_ms=1000,
                                     auto_start=False)

# Start the Consumer, monitor the input
# and send the received data through the SocketIO channel
consumer.start()
while True:
    socketio.sleep(0.1)
    message = consumer.consume(block=False)
    if message is not None:
        data = json.loads(message.value.decode('utf-8'))
        socketio.emit(channel,
                      {'key': data[0], 'value': data[1]},
                      namespace=namespace)

On the client side we have two possibilities to display the data: a standalone website served by the same web server, or a set of custom analyses developed in OBIEE. In both cases, the key elements are the SocketIO and ChartJS JavaScript libraries. The first establishes the connection with the server, and the second is used to create the charts. The SocketIO object is configured so that any time it receives a message from one of the channels, it updates the data of the chart and asks ChartJS to refresh it accordingly. The required JavaScript code is shown in the following snippet:

// Create the socket using the JavaScript SocketIO client library
socket = io(location.protocol + '//'
            + document.domain + ':'
            + location.port
            + namespace);

// Create an event listener for the "realtime_data01" channel
// and update the corresponding charts
socket.on('realtime_data01', function(msg) {
    if (lineChart.data.labels.length > 20) {
        lineChart.data.labels.shift();
        lineChart.data.datasets[0].data.shift();
    }
    lineChart.data.labels.push(msg.key);
    lineChart.data.datasets[0].data.push(msg.value);
    lineChart.update();
});

Here, we are asking the socket object to update the values of the lineChart object, shifting the values if required and refreshing the chart whenever a new message is received on the realtime_data01 channel. The result is a set of charts that update automatically as soon as new data is sent through the socket:

Figure 7: Standalone web application with real-time visualizations

Moreover, using dummy analyses with static text visualizations, we can embed the HTML and JavaScript into a normal OBIEE dashboard. In the example below, we can see exactly the same visualisations as in the standalone web application, and we can always combine these real-time visualisations with normal RPD-based OBIEE analyses.

Real-time visualizations embedded into an OBIEE dashboard

Figure 8: Real-time visualizations embedded into an OBIEE dashboard

 

3. Scenario Analysis and Conclusions

Based on our experience developing this solution for a Real Time BI scenario with Oracle technologies, we have identified the following benefits, as well as areas for future improvement:

Advantageous features:

Oracle solutions for Big Data are available both in the Cloud and On Premise, suiting different companies and scenarios.
The Oracle Big Data stack is based on open source standard software, which makes it really easy to develop solutions and to find guidance.
Kafka topics are really easy to create and configure in a matter of minutes.
Spark Streaming brings all the processing power of Spark to real-time streams, thus allowing for very sophisticated analytics in a few lines of code.
SocketIO allows creating bidirectional channels between web servers and applications, and suits the needs of a real-time application very well.
The web-based front end is very flexible, as it can be served standalone or integrated into other tools such as OBIEE.

Areas for improvement:

The Oracle Big Data stack comes with a plethora of pre-installed components, many of which may be unnecessary for a given application.
Spark Streaming windowing is not yet ready for event-time processing, which makes it flawed for some real-time applications. Other solutions from the Spark stack are also not yet completely ready for production environments.
Oracle standard BI solutions do not offer real-time visualisation solutions.

Authors: Iñigo Hernáez, Oscar Martinez

Click here if you would like to receive more information about the Real Time BI Services we offer!

OBIEE12c Integration with Oracle EBS Security


Integration of Oracle Business Intelligence Enterprise Edition and Oracle E-Business Suite provides a seamless, controlled flow between the systems, allowing for drill-down and drill-back from key metrics to the underlying detail. Theoretically, this can be done between any OBIEE form and any EBS form, with pass-through to any connected EBS subsystem.

If you are integrating OBIEE with EBS, you are likely to be using OBIA, Oracle Business Intelligence Applications, although this is certainly not a requirement. OBIA is a pre-built, pre-packaged BI solution that delivers role-based intelligence to the organization. It is a set of OBIEE dashboards and reports that run from a pre-built warehouse previously serviced by Informatica/DAC, while the next generation of the OBIA warehouse utilizes Oracle Data Integrator (ODI), which runs high-volume batch load plans, event-driven load plans, and even SOA data services.

1. OBIEE 12c Configuration

While configuring an initialization block to retrieve data from EBS, make sure that Row-wise initialization is checked, as this allows multiple results to be stored in the variable, regardless of whether the variable is static or dynamic; otherwise you will only be able to retrieve the last item in the result set. Also be sure to set the Execution Precedence of the EBS Integration init block that attaches the session through the session cookie, so that it executes before any attempt is made to retrieve security information.


Figure 1: EBS Configuration

Two files must be modified in order for WebLogic to find, accept and attach to the EBS session.

• instanceconfig.xml

• authenticationschema.xml

To configure external authentication, you will need to modify instanceconfig.xml as follows:
Path to instanceconfig.xml:

$BI_HOME/config/fmwconfig/biconfig/OBIPS

Note: Take a backup of the file before editing.

Add "EBS-ICX" to the EnabledSchemas XML tag:

<Authentication>
<!--This Configuration setting is managed by Oracle Enterprise Manager Fusion 
Middleware Control-->
<EnabledSchemas>UidPwd,Impersonate,UidPwd-soap,Impersonate-soap,EBS-ICX</EnabledSchemas>
</Authentication>

The middleware recognizes "EBS-ICX" as the internal schema name for the EBS interface; Oracle has already done that work, so it is only necessary to tell the system that this schema is to be used, and then to give it the name of the session cookie that EBS writes.
To configure authenticationschema.xml, it is necessary to know the Oracle EBS instance cookie name. Follow these steps to get it:

1. Login to Oracle EBS

2. If you are using Google Chrome or Firefox, open Dev Tools ➜ Web Console and enter one of the following commands:

javascript:alert(document.cookie)

or

javascript:document.write(document.cookie)

The alert command will pop up a dialog box as shown below, while document.write will display the cookie information in the browser window.


Figure 2: Alert command pop up

Notice the key-value pair of the cookie: ERPU1 is the cookie name of the Oracle EBS test instance, and the value is dynamically generated for each user at each login. We only need the key, which is "ERPU1". We will now use this value in the authenticationschema.xml file.

Path to authenticationschema.xml file:

Obiee/bi/bifoundation/web/display

Note: Take a backup of the file before editing.

Edit the following tags in the file:

<SchemaKeyVariable source="cookie" forceValue="EBS-ICX" nameInSource="ERPU1" />

 

<RequestVariable source="cookie" type="auth" nameInSource="ERPU1" 
biVariableName="NQ_SESSION.ICX_SESSION_COOKIE" />

As per Oracle Doc ID 2141505.1 ("Access Prohibited" When Logging In To Analytics In EBS Integrated Environment), the following tag needs to be added to the file under "AuthenticationSchema":

<RequestVariable source="constant" type="auth" nameInSource="ssi" 
biVariableName="NQ_SESSION.SERVICEINSTANCEKEY" />

That’s it for OBIEE configuration!

 

2. RPD Changes using Administration Tool

Here comes the part that is familiar to every OBIEE administrator: the RPD modifications. If you are following the Oracle document, the sample EBS connection pool can be used, or you can create a new one just for the initialization process and the retrieval of security information.
Create database objects and connection pools for the Oracle EBS database.


Figure 3: EBS Connection Pool

Note: The APPS user should have all the READ and EXECUTE permissions needed to run the PL/SQL queries. If not, grant these privileges to the APPS user.

Now, create an init block which will use this connection pool to retrieve the EBS context and set it into OBIEE session variables. The init block will use the connection pool just defined and send a data source query to the EBS database:

SELECT
FND_GLOBAL.RESP_ID,
FND_GLOBAL.RESP_APPL_ID,
FND_GLOBAL.SECURITY_GROUP_ID,
FND_GLOBAL.RESP_NAME,
FND_GLOBAL.USER_ID,
FND_GLOBAL.EMPLOYEE_ID,
FND_GLOBAL.USER_NAME,
FND_GLOBAL.RESP_NAME
FROM DUAL


Figure 4: EBS Security Context

Referring to another Oracle document, Doc ID 1539742.1, create these static session variables to hold the context:

EBS_RESP_ID
EBS_RESP_APPL_ID
EBS_SEC_GROUP_ID
EBS_RESP_NAME
EBS_USER_ID
EBS_EMPLOYEE_ID
USER
ROLES

Note: You have to create another init block named "EBS Security Context – ROLES – Row wise" just for ROLES, as a user may have more than one Role in Oracle EBS; this init block must be set for row-wise initialization.


Figure 5: EBS Security Context - Roles


Figure 6: EBS Security Context - Roles - Row Wise

The following query is used to fetch all the Responsibilities of the logged-in user and assign them to the variable ROLES:

SELECT DISTINCT 'ROLES',
RESPONSIBILITY_NAME
FROM FND_USER,
FND_USER_RESP_GROUPS,
FND_RESPONSIBILITY_VL
WHERE FND_USER.USER_ID = FND_USER_RESP_GROUPS.USER_ID
AND FND_USER_RESP_GROUPS.RESPONSIBILITY_ID = FND_RESPONSIBILITY_VL.RESPONSIBILITY_ID
AND FND_USER_RESP_GROUPS.RESPONSIBILITY_APPLICATION_ID = FND_RESPONSIBILITY_VL.APPLICATION_ID
AND FND_USER_RESP_GROUPS.START_DATE < SYSDATE
AND (CASE WHEN FND_USER_RESP_GROUPS.END_DATE IS NULL THEN SYSDATE
ELSE TO_DATE (FND_USER_RESP_GROUPS.END_DATE)
END) >= SYSDATE
AND FND_USER.USER_ID = (SELECT USER_ID
			FROM FND_USER
			WHERE UPPER (USER_NAME) = UPPER('VALUEOF(NQ_SESSION.USER)')
			)

 

3. Oracle EBS Configuration

Now we need to introduce a responsibility for OBIEE through which a user can navigate to OBIEE from Oracle EBS.

1. Create a Function, using Application in EBS:


Figure 7: Oracle EBS Configuration - Form Functions - Description

2. In the Properties tab, add the following:

Function: OBIEE
Type: SSWA jsp function
Maintenance Mode Support: None
Context Dependence: Responsibility


Figure 8: Oracle EBS Configuration - Form Functions - Properties

3. In the Web HTML tab, add the following link:

Function: OBIEE
HTML Call: OracleOasis.jsp?mode=OBIEE&function=Dashboard


Figure 9: Oracle EBS Configuration - Form Functions - Web HTML

4. Create a Menu in Oracle EBS named “OBIEE Dashboard” and add the Function created in step 1:


Figure 10: Oracle EBS Configuration - Menus

Note: Create the Menu only for the OBIEE Dashboard.

5. Assign the Menu to the relevant responsibility:


Figure 11: Oracle EBS Configuration - Users

6. Set Profile

You need to enter the URL of the Oracle BI Server as part of a profile. You can set up a profile for a responsibility, a user, or a site. The following procedure shows how to set profile options for a responsibility:


Figure 12: Oracle EBS Configuration - Find System Profile Values

You should use a fully qualified server.domain host name rather than an IP address or just a host name. The OBIEE domain must be the same as the Oracle EBS domain, so that the EBS-ICX cookie is visible to OBIEE from the user's browser.

References:
OBIEE 12c: Integrating OBIEE 12c with Oracle E-Business Suite (EBS) Security (Doc ID 2174747.1)
Chapter 9: Oracle® Fusion Middleware Integrator's Guide for Oracle Business Intelligence Enterprise Edition.

Click here if you would like to receive more information about the topic or if you need help with your EBS-OBIEE configuration!

OBIEE 11.1.1.9 Upgrade


How to upgrade your Oracle BI System to 11.1.1.9


In our previous technical post we looked at what is new in OBIEE 11.1.1.9; below we describe how to upgrade your Oracle BI system from version 11.1.1.7 to 11.1.1.9.

Not sure about the upgrade? Ask us for access to our online testing showcase.

The installation files are available from OTN - Oracle Business Intelligence (11.1.1.9.0) Downloads. As per Oracle's advice, make sure to download this OBIEE release from the Oracle Software Delivery Cloud for production environments.

Before proceeding with the upgrade, always remember to back up your Middleware Home, Domain Home and Oracle Instances, as well as the database schemas and any other additional configuration.

OBIEE upgrade procedure on a Linux x86-64 environment:

  1. Upgrade Oracle WebLogic Server from 10.3.5 to 10.3.6
  2. Upgrade OBIEE from 11.1.1.7 to 11.1.1.9
  3. Upgrade database schemas MDS and BIPLATFORM
  4. Upgrade OPMN system components

Upgrade Oracle WebLogic Server from 10.3.5 to 10.3.6

Oracle BI 11.1.1.7.X and the previous 11.1.1.6 run within Oracle WebLogic Server 10.3.5. However, the newly released OBIEE 11.1.1.9 only supports 10.3.6.

To check your current version, go to your console page at http://host:7001/console. If you are on 10.3.6, proceed to the next step; otherwise follow the instructions below.


Download the Oracle WebLogic 10.3.6 bin files from My Oracle Support. Go to Patches & Updates and search for Patch 13529623: PLACEHOLDER BUG FOR WEBLOGIC SERVER 11GR1 (10.3.6) UPGRADE THE INSTALLER, choosing the file corresponding to your platform; this example is based on Linux x86-64.


Once downloaded, unzip the files and execute the upgrade. The welcome screen for the upgrade will pop up. Note that you can use either the command line or the UI; below we use the latter.

java -jar wls10.36_upgrade_generic.jar


Click Next and select "Use an existing Middleware Home".


On the register for security updates screen, uncheck the option to receive security updates and confirm.



The connection test will fail, and you will have to keep the option to remain uninformed of security issues checked.


On the products and components screen, select WebLogic Server, leave Oracle Coherence unchecked, and confirm the selection.


After this, the upgrade starts; it should last a couple of minutes. When it completes, click the Done button.


Once the upgrade has completed, start the WebLogic Admin Server from the command line:

> cd /MW_HOME/user_projects/domains/bifoundation_domain/bin
> ./startWebLogic.sh

Go to http://server:7001/console and make sure you have upgraded to Oracle WebLogic Server 10.3.6 correctly.


Upgrade OBIEE from 11.1.1.7 to 11.1.1.9

Before proceeding, make sure WebLogic Server has completely stopped:

> cd /MW_HOME/user_projects/domains/bifoundation_domain/bin
> ./stopWebLogic.sh

Change directory to the Disk1 folder inside the unzipped archive and start the installer. Note that, besides the graphical interface used here, the OBIEE installer can also be run in silent mode.

> ./runInstaller


On the software updates screen, select the option "Skip Software Updates".


For the installation type, select "Software Only Install", since the system has already been configured and is only being upgraded.


Select your Oracle Middleware Home location and click Next.


After the summary, click Next and the upgrade of the BI components will start, taking less than 10 minutes to complete. Click Finish when it has finished.



Upgrade database schemas MDS and BIPLATFORM

The next step is to upgrade the database schemas. Go to MW_HOME/Oracle_BI1/bin and run the Patch Set Assistant (./psa) to upgrade the database repository. In this case you can use the graphical interface or the command line.

Click Next right after the welcome screen pops up.

> cd /MW_HOME/Oracle_BI1/bin
> ./psa



Select the "Oracle Business Intelligence" components and check all options in the prerequisites.


Set up the connections for the MDS and BIPLATFORM schemas.



Click on the Upgrade button; the process takes about a minute to complete.


Use the SQL command line to verify that the upgraded schemas are in VALID status and that the version is 11.1.1.9.0.

> SELECT OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY;

Upgrade OPMN system components

The last post-patching task is to upgrade the system components managed by the OPMN – BI Server, BI Scheduler and BI Presentation Services.

Before starting, make sure that Node Manager and Admin Server are up and running.

> cd /MW_HOME/wlserver_10.3/server/bin
> ./startNodeManager.sh

> cd /MW_HOME/user_projects/domains/bifoundation_domain/bin
> ./startWebLogic.sh

Tip: use nohup to avoid killing the services when closing the command window.


Start the Configuration Assistant:

> cd /MW_HOME/Oracle_BI1/bin
> ./config.sh


Select the "Update BI Domain" option and add the credentials of your existing BI Domain.



Important note: a process could possibly get stuck during one of the steps without any notification. In this event, see the Troubleshooting section on how to fix it.

Start the BI services and start enjoying your OBIEE 11.1.1.9!


For more detailed information you can access the OBIEE documentation - Oracle Fusion Middleware 11g Release 1 (11.1.1.9)

 

Do you need help? Contact us!

Oracle Financial Analytics: General ledger to sub-ledgers navigation


When implementing Oracle Financial Analytics, a common requirement is to enable navigation from general ledger reports to sub-ledger reports to obtain information on specific transactions. One of the main reasons for this is to reconcile the general ledger with the individual sub-ledgers.

The general ledger is the master set of accounts that summarises the sub-ledger transactions. It is used to feed information into the financial statements and journal entries. A sub-ledger is a detailed record of the transactions for an individual account.

Sub-ledgers serve as support for amounts posted to the general ledger. It is important to reconcile the general ledger balances to the sub-ledger balances on a regular basis to spot possible discrepancies.

In this blog article, I will describe how this process can be approached in OBI Apps. I will describe the OBI Apps data model and the required configuration, as well as the creation of a sample navigation from general ledger to sub-ledger using an intermediate analysis.

The animation below shows the expected navigation from GL Account balance to Payables details.

[Animation: navigation from GL Account balance to Payables details]

Note: This information applies when the data source is Oracle EBS.

OBI Apps enables this navigation by default using the ‘GL Journal ID’ column as a link between subject areas. It is available in the general ledger transactions subject area and in the following sub-ledgers (in some cases, ‘GL Journal ID’ has to be exposed in the presentation layer):

  • Accounts Payable
  • Accounts Receivable
  • Revenue
  • Purchase Orders
  • Fixed Assets (Available in OBIA 11g)

Understanding the OBIA model

‘GL Journal ID’ is a unique identifier of an EBS journal entry line and it is available in the linkage table W_GL_LINKAGE_INFORMATION_G.

Sub-ledger transaction tables are linked to W_GL_LINKAGE_INFORMATION_G at source distribution level.


For each sub-ledger transaction fact there is a logical "transaction detail" dimension comprising the transaction fact and the W_GL_LINKAGE_INFORMATION_G table; ‘GL Journal ID’ is available in this dimension. The figure below shows the physical source "Fact_W_AP_XACT_F_GLJournalID" of the logical dimension "Dim – AP Transaction Details".

Finantial Analytics

With this configuration, it is possible to have ‘GL Journal ID’ available in the sub-ledger transaction subject areas. And since ‘GL Journal ID’ is also available in the "Financial - GL Detail Transactions" subject area, it is possible to configure the navigation from general ledger to sub-ledger transactions using the OBIEE navigation functionality.

Creating GL to sub-ledger navigation

‘GL Journal ID’ is available in the two subject areas that we want to link, but the level of detail of the source and target reports is not the same, which is why it is necessary to create an intermediate analysis to link the two reports. This process is described in this section.

  1. Create the GL report and enable navigation.

Using the subject area “Financial - GL Detail Transactions”, create a new analysis that shows the Balance by Accounting Document Source of a selected account for a selected period. Accounting Document Source is used to identify the sub-ledger source.


  2. Create an intermediate analysis that contains the ‘GL Journal ID’.

Using the subject area “Financial - GL Detail Transactions”, create an analysis that contains the ‘GL Journal ID’ column. Account and period filters need to be created as well. This report will be used to filter the sub-ledger transactions.


  3. Create a detailed sub-ledger report.

Using one of the sub-ledger subject areas, create a new analysis and filter the results by an existing report, using the analysis created in step 2 as the “Saved Analysis”.

 


  4. Enable navigation.

In the source report (created in step 1), use Action Links to enable navigation to the different sub-ledger reports. The example below shows the navigation to the AP Transactions report when the Accounting Document Source is Payables.

[Figure: Action Link navigation to the AP Transactions report]

In conclusion, the OBI Apps data model is prepared to enable the link between general ledger and sub-ledger reports, but it has to be configured at the analysis level. This feature can be used for the reconciliation process between the general ledger and the sub-ledgers.

If you want to go further, create a report that shows summaries sourced from the general ledger, a second column that shows the sub-ledger totals, and a third column that shows the difference between these two amounts; if the value of this column is not 0, navigate to the details to discover the discrepancies.

 
