OBIEE12c Integration with Oracle EBS Security


Integration of Oracle Business Intelligence Enterprise Edition with Oracle E-Business Suite provides a seamless, controlled flow between the two systems, allowing users to drill down and drill back from key metrics to the underlying detail. In theory, this can be done between any OBIEE form and any EBS form, with pass-through to any connected EBS subsystem.

If you are integrating OBIEE with EBS, you are likely to be using Oracle Business Intelligence Applications (OBIA), although this is certainly not a requirement. OBIA is a pre-built, pre-packaged BI solution that delivers role-based intelligence to the organization. It is a set of OBIEE dashboards and reports that run from a pre-built warehouse, previously serviced by Informatica/DAC; the next generation of the OBIA warehouse uses Oracle Data Integrator (ODI), which runs high-volume batch load plans, event-driven load plans, and even SOA data services.

1. OBIEE 12c Configuration

When configuring an initialization block to retrieve data from EBS, make sure that row-wise initialization is checked: this allows multiple results to be stored in the variable, regardless of whether the variable is static or dynamic; otherwise you will only retrieve the last item in the result set. Also set the execution precedence of the EBS integration init block that attaches to the session through the session cookie, so that it executes before any attempt is made to retrieve security information.

Figure 1: EBS Configuration


Two files must be modified in order for WebLogic to find, accept and attach to the EBS session.

• instanceconfig.xml

• authenticationschema.xml

To configure external authentication, you will need to modify instanceconfig.xml as follows:
Path to instanceconfig.xml:

$BI_HOME/config/fmwconfig/biconfig/OBIPS

Note: Take a backup of the file before editing.

Add “EBS-ICX” to the EnabledSchemas XML tag:

<Authentication>
<!--This Configuration setting is managed by Oracle Enterprise Manager Fusion 
Middleware Control-->
<EnabledSchemas>UidPwd,Impersonate,UidPwd-soap,Impersonate-soap,EBS-ICX</EnabledSchemas>
</Authentication>

WebLogic already recognizes “EBS-ICX” as the internal schema name for the EBS interface; Oracle has done that groundwork. You only need to tell the system that this schema is to be used, and then give it the name of the session cookie that EBS writes.
To configure authenticationschema.xml, you need to know the cookie name of your Oracle EBS instance. Follow these steps to get it:

1. Login to Oracle EBS

2. If you are using Google Chrome or Firefox, open Developer Tools ➜ Console and run one of the following commands:

javascript:alert(document.cookie)

or

javascript:document.write(document.cookie)

The alert command will pop up a dialog box as shown below, while document.write will display the cookie information in the browser window.

Figure 2: Alert command pop up


Notice the key-value pair of the cookie: ERPU1 is the cookie name of this Oracle EBS test instance, and the value is generated dynamically for each user at every login. We only need the key, “ERPU1”. We will now use this value in the authenticationschema.xml file.
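To make the key/value distinction concrete, here is a small Python sketch of what we are doing by eye in the browser console: splitting the document.cookie string into pairs and keeping only the EBS session cookie's name. The sample cookie string and values are purely illustrative; your instance will have its own.

```python
def parse_cookies(cookie_header: str) -> dict:
    """Split a document.cookie string into a name -> value dict."""
    pairs = [p.strip() for p in cookie_header.split(";") if "=" in p]
    return {k: v for k, v in (p.split("=", 1) for p in pairs)}

# Hypothetical console output from an EBS test instance:
sample = "ERPU1=AbCdEf123; JSESSIONID=abc123; oracle.uix=0"
cookies = parse_cookies(sample)

# The key (cookie name) is all we need for authenticationschema.xml:
ebs_cookie_name = next(k for k in cookies if k.startswith("ERPU"))
print(ebs_cookie_name)  # ERPU1
```

Only the name (here ERPU1) goes into the configuration file; the value changes at every login and is attached by the browser automatically.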

Path to authenticationschema.xml file:

Obiee/bi/bifoundation/web/display

Note: Take a backup of the file before editing.

Edit the following tags in the file:

<SchemaKeyVariable source="cookie" forceValue="EBS-ICX" nameInSource="ERPU1" />

 

<RequestVariable source="cookie" type="auth" nameInSource="ERPU1" 
biVariableName="NQ_SESSION.ICX_SESSION_COOKIE" />

As per Oracle Doc ID 2141505.1 ("Access Prohibited" When Logging In To Analytics In EBS Integrated Environment), the following tag needs to be added to the file under “AuthenticationSchema”:

<RequestVariable source="constant" type="auth" nameInSource="ssi" 
biVariableName="NQ_SESSION.SERVICEINSTANCEKEY" />

That’s it for OBIEE configuration!

 

2. RPD Changes using Administration Tool

Here comes the part that is familiar to every OBIEE administrator: the RPD modifications. If you are following the Oracle documentation, you can use the sample EBS connection pool, or create a new one dedicated to the initialization process and security retrieval.
Create database objects and connection pools for the Oracle EBS database.

Figure 3: EBS Connection Pool


Note: The APPS user should have READ and EXECUTE permissions on the objects used by these PL/SQL queries. If not, grant the privileges to the APPS user.

Now, create an init block which will use this connection pool to retrieve the EBS context and store it in OBIEE session variables. The init block will use the newly defined connection pool and send a data source query to the EBS database:

SELECT
FND_GLOBAL.RESP_ID,
FND_GLOBAL.RESP_APPL_ID,
FND_GLOBAL.SECURITY_GROUP_ID,
FND_GLOBAL.RESP_NAME,
FND_GLOBAL.USER_ID,
FND_GLOBAL.EMPLOYEE_ID,
FND_GLOBAL.USER_NAME
FROM DUAL

Figure 4: EBS Security Context


Referring to another Oracle document, Doc ID 1539742.1, create these session variables to hold the context:

EBS_RESP_ID
EBS_RESP_APPL_ID
EBS_SEC_GROUP_ID
EBS_RESP_NAME
EBS_USER_ID
EBS_EMPLOYEE_ID
USER
ROLES

Note: You have to create another init block, named “EBS Security Context – ROLES – Row wise”, just for ROLES, since a user can have more than one role in Oracle EBS; this init block must be set for row-wise initialization.

Figure 5: EBS Security Context - Roles


Figure 6: EBS Security Context - Roles - Row Wise


The following query fetches all the responsibilities of the logged-in user and assigns them to the variable ROLES.

SELECT DISTINCT 'ROLES',
RESPONSIBILITY_NAME
FROM FND_USER,
FND_USER_RESP_GROUPS,
FND_RESPONSIBILITY_VL
WHERE FND_USER.USER_ID = FND_USER_RESP_GROUPS.USER_ID
AND FND_USER_RESP_GROUPS.RESPONSIBILITY_ID = FND_RESPONSIBILITY_VL.RESPONSIBILITY_ID
AND FND_USER_RESP_GROUPS.RESPONSIBILITY_APPLICATION_ID = FND_RESPONSIBILITY_VL.APPLICATION_ID
AND FND_USER_RESP_GROUPS.START_DATE < SYSDATE
AND (FND_USER_RESP_GROUPS.END_DATE IS NULL
     OR FND_USER_RESP_GROUPS.END_DATE >= SYSDATE)
AND FND_USER.USER_ID = (SELECT USER_ID
			FROM FND_USER
			WHERE UPPER (USER_NAME) = UPPER('VALUEOF(NQ_SESSION.USER)')
			)

 

3. Oracle EBS Configuration

Now we need to create a responsibility for OBIEE through which a user can navigate to OBIEE from Oracle EBS.

1. Create a Function, using Application in EBS:

Figure 7: Oracle EBS Configuration - Form Functions - Description


2. In the Properties Tab, add as follows:

Function: OBIEE
Type: SSWA jsp function
Maintenance Mode Support: None
Context Dependence: Responsibility

Figure 8: Oracle EBS Configuration - Form Functions - Properties


3. In the Web HTML tab, add the following link:

Function: OBIEE
HTML Call: OracleOasis.jsp?mode=OBIEE&function=Dashboard

Figure 9: Oracle EBS Configuration - Form Functions - Web HTML


4. Create a Menu in Oracle EBS named “OBIEE Dashboard” and add the Function created in step 1:

Figure 10: Oracle EBS Configuration - Menus


Note: Create the menu only for the OBIEE Dashboard.

5. Assign Menu to the relevant responsibility:

Figure 11: Oracle EBS Configuration - Users


6. Set Profile

You need to enter the URL of the Oracle BI Server as part of a profile. You can set up a profile for a responsibility, a user, or a site. The following procedure shows how to set profile options for a responsibility:

Figure 12: Oracle EBS Configuration - Find System Profile Values


You should use a fully qualified server.domain name rather than an IP address or just a host name. The OBIEE domain must be the same as the Oracle EBS domain, so that the EBS-ICX cookie is visible to OBIEE from the user's browser.
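The domain requirement comes from how browsers scope cookies: the EBS-ICX cookie is only sent to hosts in the same DNS domain. A tiny hypothetical helper illustrates the check (the hostnames are examples only):

```python
def same_cookie_domain(ebs_host: str, obiee_host: str) -> bool:
    """Rough check: both hosts must share the DNS domain for the
    EBS session cookie to be visible to OBIEE. Illustrative only;
    real cookie scoping also depends on the cookie's Domain attribute."""
    domain = lambda host: host.split(".", 1)[-1]
    return domain(ebs_host) == domain(obiee_host)

print(same_cookie_domain("ebs.example.com", "obiee.example.com"))  # True
print(same_cookie_domain("ebs.example.com", "obiee.other.net"))    # False
```

If the second case applies to your environment, the integration will fail with authentication errors even though both configurations are individually correct.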

References:
OBIEE 12c: Integrating OBIEE 12c with Oracle E-Business Suite (EBS) Security (Doc ID 2174747.1)
Chapter 9: Oracle® Fusion Middleware Integrator's Guide for Oracle Business Intelligence Enterprise Edition.


Customizing ODI Knowledge Modules


Lately we have seen a growing interest in ODI (Oracle Data Integrator). We have been working with this tool here at ClearPeaks for a while now, on several projects across different verticals; but with the appearance of the newer versions of Oracle Business Intelligence Applications (OBI Apps) 11.1.1.x it has become a pretty hot topic, so we decided it was time to publish a quick “tips and tricks” guide for ODI, hoping to save our customers a lot of effort, time, and money.

The first thing to bear in mind when you start working with ODI is that you really must follow best practices in order to take full advantage of two of the most powerful weapons in the ODI arsenal: the declarative design and the knowledge modules. For those of you that don’t have too much experience with ODI, let me run through these two outstanding features.

The declarative design basically allows you to create an abstraction layer between the developer and the actual code that is going to be performed by the DBMS. This means that in ODI you define “what you want to get”, and the way to do so is automatically implemented by the Knowledge Module.

However, you might ask yourself “Is this possible? Can we really rely on the default ODI KMs?” Well, the answer is very simple: for standard needs, where performance is not a problem, yes! But in most of our BI projects, remember that we have had to tune the KMs to adapt them to our customers’ needs and to obtain the maximum benefit from the tool.

But don’t think that this undermines what is a fantastic feature. ODI comes with a great set of KMs that give you the perfect starting point to create your own customized KM. Moreover, developers don’t all need to go into the details of the KM implementation; in a typical ODI project, the architect is responsible for setting up the ODI environment and provides the whole team with the appropriate KMs to satisfy the particular project needs.

So in principle, the developers don’t need to know all the implementation details (it is up to each user/developer to go beyond and analyze the code ODI is generating, if required). This abstraction significantly speeds up the process of developing an ETL, since once the logic is established, there is no need to redo it over and over again.

A typical example to consider is the logic necessary to load a Slowly Changing Dimension (SCD Type II). With other tools, each developer would need to fully understand the logic of the SCDs and the way the integration process is performed, since it has to be replicated for each table to be loaded.

With the ODI declarative design, you just establish what you want, and the KM will take care of the logic. So you simply need to indicate:

➜ I want to treat “Table X” as a Slowly Changing Dimension (we will mark it like this).


 

➜ I want to use “Column X” and “Column Y” to store the starting and ending date of the row, respectively.


 

➜ I want to use “Column Z” as the current row flag.


 

➜ I want “Column A” to be the Primary Key (Surrogate Key) of “Table X”.


 

➜ And I want this column / set of columns (e.g. B and C) to be the Integration Key (Natural Key) of Table X (the column, or columns, that will be used to determine if the row is a new row, or if the row previously existed and has to be updated).


 

➜ Finally, we can indicate for each column if we want the ETL to add a new row when the value changes in the source system, or if we prefer to update the whole table.


 

✓ And that’s it! By then selecting the appropriate KM, tuned with the necessary logic by our architect, we can develop as many mappings as we want for Slowly Changing Dimensions. Just indicate the source for each column and run it. Quick and easy!

We have also mentioned the concept of Knowledge Modules. So, some of you may wonder, what is a “Knowledge Module”? This is simply a generic set of steps that will perform the needed logic for your ETL process. Each step can be written in different languages (SQL, Oracle-SQL, Jython, and many more) depending on the technology underneath, with placeholders for each column, table, and in general, “entity” that will take part in our ETL. At execution time, those placeholders are filled with the details of the mappings that have been developed, and this gives the project team the flexibility to reuse logic and speed up the delivery of the ETL process.
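As a rough illustration (this mimics, but is not, ODI's actual substitution engine), a KM step behaves like a string template whose placeholders are resolved at execution time from repository metadata:

```python
# Generic KM step with placeholders (simplified stand-ins for
# ODI's <%=odiRef...%> substitution calls):
step_template = "INSERT INTO {target} ({cols}) SELECT {cols} FROM {staging}"

# At execution time, ODI fills the placeholders from the mapping's
# metadata. Table and column names here are hypothetical examples:
mapping_metadata = {
    "target": "W_ORDERS_F",
    "staging": "I$_W_ORDERS_F",
    "cols": "ORDER_ID, AMOUNT",
}

resolved = step_template.format(**mapping_metadata)
print(resolved)
```

The same generic step therefore serves every mapping in the project; only the metadata changes, which is what makes KM logic so reusable.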

Well, that’s enough of an introduction to these two handy features in ODI. Now let’s see some usage examples of the things that can be done, and which can help our customers to reduce implementation time.


1. Automating Loads

▼ Automating the Incremental / Full Load Strategy
There are several ways to automate an incremental / full load strategy using KMs. If we think back to the previous version of OBI Apps, in Informatica we had to maintain two separate mappings, one for the incremental and one for the full version of the load. With ODI this can be automated in several ways:

ⓐ Manually: Adding an option to the mapping
The simplest way is to add an option to the mapping so that we manually specify whether to execute a full load or not. This option drives the execution of a new step in our KM; for example, this step may truncate the table and, if we are populating a dimension, reset the associated sequence. Here we see the importance of following best practices and naming conventions: if we follow a good methodology when naming our DWH objects, we can distinguish the table type by its name, and sequences can easily be related to the corresponding dimension.


 

ⓑ Automatically: Adding an option to the mapping and using a control table
The second option is an extension of the previous one. Using a control table with one column for the table name and one for the FULL_LOAD_FLAG, we can invoke the truncate process only if the DWH administrator has set the flag for that table (FULL_LOAD_FLAG = 1). This way, there is no need to modify any ODI object to change the behavior of the interface.
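The decision logic of such a KM step can be sketched as follows. Everything here is illustrative: the control-table columns (FULL_LOAD_FLAG), the `_D` dimension-naming convention, and the sequence-reset statement are assumptions, not ODI defaults (and ALTER SEQUENCE ... RESTART needs Oracle 18c or later; older versions require a different trick).

```python
def build_pre_load_steps(control_row: dict, target_table: str) -> list:
    """Return the DDL the KM should run before loading, based on a
    control-table row. Hypothetical conventions: FULL_LOAD_FLAG column,
    '_D' suffix for dimensions, '<table>_SEQ' for their sequences."""
    steps = []
    if control_row.get("FULL_LOAD_FLAG") == 1:
        steps.append(f"TRUNCATE TABLE {target_table}")
        if target_table.endswith("_D"):
            # Dimension: also reset the surrogate-key sequence.
            steps.append(f"ALTER SEQUENCE {target_table}_SEQ RESTART")
    return steps

full = build_pre_load_steps({"FULL_LOAD_FLAG": 1}, "W_CUSTOMER_D")
incr = build_pre_load_steps({"FULL_LOAD_FLAG": 0}, "W_CUSTOMER_D")
print(full)  # truncate + sequence reset
print(incr)  # nothing: incremental load proceeds untouched
```

Because the flag lives in a table rather than in an ODI option, switching a table from incremental to full load is a single UPDATE statement, with no redeployment.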

▼ Automating the population of the unspecified rows for dimensions
A typical need in any data warehousing project is to populate an “unspecified” row for each dimension. This will give us the capability of always making inner joins between the facts and dimensions, and thus improve performance in all our OLAP queries.

Once again, we could generate a separate interface for this, for each dimension, but will it be efficient? ODI gives us the possibility of defining a generic step in our KM to be executed for any dimension table to be loaded.


At execution time, ODI is aware of the data type of each column to be populated (remember that every object is stored in the model, and so we have an internal representation stored in the ODI repository). We can prepare a simple process that will basically assign a default value for each data type. For example:
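A minimal Python sketch of that idea follows. The column metadata and default values are hardcoded here for illustration; inside a real KM the column list would come from the ODI substitution API (e.g. odiRef.getColList), and the defaults would be whatever your project standard dictates.

```python
# Hypothetical per-datatype defaults for the "unspecified" row:
DEFAULTS = {
    "VARCHAR2": "'Unspecified'",
    "NUMBER": "-1",
    "DATE": "TO_DATE('01-01-1900','DD-MM-YYYY')",
}

# Illustrative column metadata; a KM would read this from the model:
columns = [
    ("CUSTOMER_WID", "NUMBER"),
    ("CUSTOMER_NAME", "VARCHAR2"),
    ("EFFECTIVE_FROM", "DATE"),
]

col_list = ", ".join(name for name, _ in columns)
val_list = ", ".join(DEFAULTS[dtype] for _, dtype in columns)
stmt = f"INSERT INTO W_CUSTOMER_D ({col_list}) VALUES ({val_list})"
print(stmt)
```

Because the statement is generated from metadata, the same KM step works unchanged for every dimension table in the model.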


NOTE: See the ODI Substitution API Reference to learn more about the functions you can use to retrieve useful information from the ODI model, or contact us for further details!

▼ Automating the population of fact tables by using a control table
Another interesting functionality we usually include in our projects is controlling the periods to be loaded through a control table. As mentioned before, with this table we can change the behaviour of our ETL process just by changing one row in one table, which greatly speeds up programming and administering the loads.

Using this table, the ETL administrator can specify the starting date to populate and the number of periods to be loaded. Other options can be useful in this table, like a flag (PREV_EXEC_CORRECT_FLG) indicating whether the previous automated execution of the ETL was correct (if not, the ETL should stop, to avoid inconsistencies in, for example, Slowly Changing Dimensions). Another flag might indicate whether the population of this table should be included in the automated process (IS_PART_OF_DAILY_LOAD), alongside the FULL_LOAD_FLAG we already mentioned.

In the example below, the source system needed to be queried daily due to performance restrictions; by specifying a starting date and the number of periods to be loaded, the ETL automatically ran a loop for the number of days to be loaded by leveraging the ODI variables.
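The looping logic can be sketched like this. The dates and period count are illustrative; in ODI the loop would be driven by ODI variables inside a package, with one interface execution per day.

```python
from datetime import date, timedelta

def days_to_load(start: date, num_periods: int) -> list:
    """Derive the list of days to load from the control-table values
    (hypothetical columns: START_DATE, NUM_PERIODS)."""
    return [start + timedelta(days=i) for i in range(num_periods)]

for day in days_to_load(date(2015, 1, 1), 3):
    # In ODI, each iteration would refresh a variable (e.g. #LOAD_DATE)
    # and execute the interface filtered by that variable.
    print(day.isoformat())
```

This keeps each query against the source system small enough to satisfy the performance restrictions, while the control table alone decides how far back the load reaches.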


In this case, it is also important to highlight that the interfaces have to be filtered by a variable containing the value of the day to be populated:


Another interesting modification in this case study is to the corresponding CKM: adding an option that avoids deleting the previous errors, since the same session is going to be executed several times in a loop.


In some cases, it might even be interesting to remove the period from the target table before inserting, if we are performing a full period insertion (we don’t want to merge the data with the previously inserted data, but to directly reload a complete period).


 

As you can see above, by using the ODI substitution functions we can create generic steps that will work for any object in our ODI model. For example, <%=odiRef.getTable("L","TARG_NAME","A")%> retrieves the target table name.

See: http://docs.oracle.com/cd/E21764_01/integrate.1111/e12645/odiref_reference.htm#ODIKD1295


2. Boosting performance

▼ Boosting performance for populating staging tables
Since ODI 11g does not have multi-target capability, we sometimes need to reuse the same staging table to populate several final targets, so these staging tables must be loaded as fast as possible to speed up the whole ETL process. We could use the default KMs to load them, but a staging table may not need integration processes, validation checks, and other time-consuming logic; to get round this, a good practice is to create your own staging KM, including only the necessary steps:

Then, in the “Insert” step, you can populate the target table directly, instead of the integration table:

<%=odiRef.getTable("L","TARG_NAME","A")%>
vs.
<%=odiRef.getTable("L","INT_NAME","W")%>


 

▼ Boosting performance by using partition exchange
To boost the performance of both load processes and reporting, it is sometimes useful to partition your DWH tables. This gives you two main advantages:

➀ At ETL time, populating a big table (GBs of data) is much faster if the table is partitioned: you can populate a temporary empty table, and then execute a partition exchange to swap the partitions between the two tables. This way, the population of your temporary table, the consistency checks, constraints and any other processing can all be done much faster.
It is important to emphasize that this is useful for snapshot tables, and in general for tables that do not need to cross data between partitions.

➁ At reporting time, if you just need to analyze one partition (one period) at a time, or maybe two to compare them, the response time in a partitioned table will be faster.

To accomplish the partition exchange in ODI, you can again use a knowledge module that encapsulates the needed logic. The command below executes the partition exchange between the temporary table and the target table, leveraging the ODI substitution functions.


ALTER TABLE
<%=odiRef.getTargetTable("SCHEMA")%>.<%=odiRef.getTargetTable("RES_NAME")%>
EXCHANGE PARTITION ' || V_PARTITION_NAME || ' WITH TABLE
<%=odiRef.getTable("L","INT_NAME","W")%>
<$="<%=odiRef.getUserExit("PARTITION EXCHANGE OPTIONS")%>"$>

 

Note that the variable V_PARTITION_NAME must contain the name of the partition to be exchanged. This can be retrieved by a simple query on the temporary table that your mapping has populated.
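Assembled in plain Python for illustration, the statement the KM ends up executing looks like this. The schema, table, and partition names are hypothetical, and the INCLUDING INDEXES clause is an optional extra you may or may not want.

```python
def exchange_partition_sql(target: str, partition: str, temp_table: str) -> str:
    """Build the DDL that swaps a freshly loaded temporary table into
    the target's partition. Names are illustrative placeholders."""
    return (f"ALTER TABLE {target} "
            f"EXCHANGE PARTITION {partition} "
            f"WITH TABLE {temp_table} INCLUDING INDEXES")

sql = exchange_partition_sql("DWH.W_SALES_F", "P_2015_01", "DWH.TMP_W_SALES_F")
print(sql)
```

The exchange itself is a dictionary operation, so it completes in seconds regardless of how many gigabytes the partition holds, which is exactly why this pattern is so effective for snapshot tables.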

▼ Loading historical information into a Slowly Changing Dimension
The last interesting modification we are going to show is to the Oracle SCD IKM. Imagine that you launch your system with a starting date (e.g. 30 September 2014), and after some months of running your ETL in production, the source system is loaded with historical information so that the business can analyze trends and customer behavior over several years. Obviously, we can't simply push this information through the normal load, because it would override the current information with stale (though still relevant) attributes. We need a way to include this information in our system so that when a user queries old periods, they can explore the attributes the customer had at that point in time.

Well, we can do the following:

➀ Duplicate the mapping you are using to load your SCD.

➁ Duplicate the Slowly Changing Dimension IKM you are using to populate your SCDs.

➂ Modify the copy of your IKM to set the "ACTIVE_TO_DATE" to the date when you launched your system. This way you can run a separate load for older periods without impacting the daily load that runs with current data. Any row loaded by this second thread (older periods) will always be closed by the launch date, so there won't be duplicates in your SCD, and you can explore both current and old information about the entities in your dimension (e.g. your customer's profile, segment, location, etc.).
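Conceptually, the modified IKM caps every row in the historical thread at the launch date. A Python sketch of that behaviour follows; the column names mirror the SCD_END / SCD_FLAG properties discussed here, but the exact names in your KM may differ.

```python
from datetime import date

LAUNCH_DATE = date(2014, 9, 30)  # the system go-live date (example)

def close_historical_row(row: dict) -> dict:
    """Cap a historical row at the launch date so it can never collide
    with the current rows maintained by the daily SCD thread."""
    row = dict(row)
    row["SCD_END"] = LAUNCH_DATE   # instead of the open-ended default
    row["SCD_FLAG"] = "N"          # a historical row is never current
    return row

hist = close_historical_row({"CUSTOMER_ID": 42, "SEGMENT": "Retail"})
print(hist["SCD_END"], hist["SCD_FLAG"])
```

Since every historical row ends strictly before the daily thread's rows begin, the two loads partition the dimension's timeline cleanly and can run independently.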


 

By modifying small pieces of the supplied KM code, you can meet your particular needs. Just be careful to modify the appropriate properties (in this case, SCD_END and SCD_FLAG must have values different from those in your daily thread).

We hope you've found these ODI KM techniques for speeding up the generation and manipulation of your ETL interesting; if you have further questions, do not hesitate to contact us.
Stay tuned for more interesting features to apply to your ETL processes with #ODI!


Blog Article: Emilio Soriano

Oracle BI Cloud Service, Part V: Managing the Service


How Oracle BI Cloud Service is managed

Parts II, III and IV of our BICS series were dedicated to the basic functionalities of the service. In this post, the last one in the series, we will explain how the BI Cloud Service is managed.

As in the on-premise version of OBI, security of the BI Cloud Service is managed with users, roles, and application roles. All the cloud services you buy are assigned a unique identity domain that allows you to manage which users and roles can access which services associated with that identity domain. Application roles allow you to set permissions for users and roles within each cloud service.

You can manage users and roles from the My Services page of Oracle Cloud with the Identity Domain Administrator credentials.

 


Figure 1: Oracle Cloud My Services webpage

Application roles are managed from the BI Cloud Service Console with BI Service Administrator credentials.


 Figure 2: Managing users and roles

From the Service Console you can easily add users, roles and application roles as members to one or multiple application roles, manage pre-defined application roles, and create your own application roles.

 


Figure 3: Oracle BI Cloud Service - Service Console

A useful feature included in the BI Cloud Service is Snapshot: with one click you can capture the state of the service at a point in time, including the data model, catalog, and application roles (but not database elements, which should be backed up separately). You can then save the snapshot in the cloud (maximum 10 snapshots) or download it to a file system, and upload a saved snapshot to restore the service to a previous state. Each snapshot import is total, in that it overwrites everything that was in the instance beforehand. Apart from backups, snapshots are useful for moving data from the pre-prod to the prod environment.


Figure 4: Taking system snapshots

Other common OBIEE administration tasks are also available in the BI Cloud Service, such as monitoring currently signed-in users, analyzing or testing SQL queries, and monitoring usage and other metrics.

 

Conclusion

The Oracle BI Cloud Service is a good option for small and medium companies looking for an easy-to-use BI framework with a fast deployment cycle and minimal capital and operating expenditure. The agility and availability of the system allow companies to start reporting against their business data in a couple of hours. We are still missing some features before we could consider it an option for large projects, but it is useful for any company requiring its BI content to be available in the cloud and on any mobile device.

So what are your thoughts on the Oracle BI Cloud Service? Would you consider deploying your BI fully in the cloud using the service? Contact us at info@clearpeaks.com or using our web form if you would like more information about the Oracle BI Cloud Service, or leave your comments below!

Oracle BI Cloud Service, Part IV: Creating analyses, reports and dashboards


In Parts II and III of the BICS series, we covered how to upload data to the cloud and build the repository. In this post, we will be covering the basics of analyses, reports, and dashboards in the BI Cloud Service.

The OBI Cloud Service has almost the same reporting capabilities as OBIEE. You can create analyses combining different views and dashboards, limit results using filters, prompt for values, and create dynamic reports using named and inline actions, just as you would in the on-premise version of OBI.


Figure 1: Creating reports, visualizations and dashboards
from the Oracle BI Cloud Service & Data Discovery Webinar
from Oracle Deutschland B.V. & Co. KG

All analysis results can be exported to the usual formats such as Excel or PDF, and you can also make use of variables: session, repository, presentation, request, and the recently added global variables are all supported. It is also possible to report against a mash-up of model data and external sources in the PROD environment.

A new presentation feature recently added to the BI Cloud Service (and that will be available in the on-premise version 12c of OBIEE) is Visual Analyzer, a web-based tool accessible from the BICS homepage that allows you to explore analytics data visually and on an individual basis in just a few clicks.


Figure 2: Accessing the Visual Analyzer tool
from blog Sixty-Analytics

Visual Analyzer simplifies the process of finding the right information and creating rich and powerful visualizations with that data. You can search the entire repository for dimension and measure attributes and drag and drop them to build graphs and charts in a Tableau-type single pane, allowing you to find correlations, discover patterns, and see trends in your content.


Figure 3: Creating reports in Visual Analyzer
from Oracle


Figure 4: Creating compound visualizations in Visual Analyzer
from Oracle

On the downside, some presentation services are still lacking in the cloud version of OBI. Scorecards, a useful view for KPIs in OBIEE, are not yet supported in the cloud version. Agents, alerts, and BI Publisher are not presently supported either, although we will probably see these services in future releases within the next 12 months.

Be sure to check out the last part of our BICS post series explaining how the service is managed and our overall conclusion.

Oracle BI Cloud Service Part III: Modeling Data


Data Modeler tool

In the last post of this BICS series, we reviewed how data can be loaded into the cloud. Once you have all your data loaded and ready, it is time to build the business model, or in other words, create the repository. For that purpose, a browser-based tool called Data Modeler is accessible from the home page of the BI Cloud Service.


Figure 1: Accessing the Data Modeler tool
from Blog Sixty Analytics

 

Data Modeler is the equivalent of the Administration Tool in on-premise OBI, although much simplified, with a reduced set of its capabilities, considering that the main premise of the cloud version of OBI is precisely that it should be simple and have a fast deployment cycle. Still, the Data Modeler tool allows you to build a repository from your source tables with all the basic features of the OBIEE Admin Tool, all from your web browser.

The application is divided into panes, with the left pane listing database objects, data model objects, variables, and users & roles, and the right pane listing in detail the fact tables, dimension tables, and joins in your data model.


Figure 2: Data Modeler
From Blog Sixty Analytics

 

Since we are in a cloud environment, there has to be a way to ensure that multiple users can modify the model concurrently. Data Modeler uses a Lock & Edit system for this, so that users must lock the data model before making any changes to the model and database views (note that the lock only affects the Data Modeler tool and does not prevent other users from changing database views using other tools such as SQL Developer). Only one user at a time can have the lock, with the possibility to override locks and discard changes made by other users if you have administrative privileges.

After every change, the data model has to be validated (equivalent to a consistency check in the OBIEE Admin Tool), and any validation errors appear at the bottom of the right pane. All changes made to the model need to be published in order to save them permanently and make them available to other users. You can retain the lock after publishing the changes, discard the changes and release the lock, or discard the changes and revert to the previously published state.

Data should be refreshed frequently to ensure that it reflects the most up-to-date information. You can refresh the source objects, the data model, and synchronize with the database to automatically detect new columns, obsolete objects, and other discrepancies. It is advisable to clear the cache after loading new data as well, to ensure that the most recent data is displayed in analyses.

Data Modeler also offers users the possibility to create source views, i.e., saved data queries in the database. Source views are useful when using a single table as a source for more than one table, when creating a dimension table based on multiple source tables, and when performing pre-aggregation calculations. In general source views facilitate the handling of model objects and should be used as a base when subsequent changes in these objects might be needed.

You can partition a single source table into separate fact and dimension tables, or create a fact or dimension table directly from a single source table.

 


Figure 3: Adding fact and dimension tables to the model
From the article:  Using Oracle BI Cloud Service

Joins are automatically created between fact and dimension tables if the join reference exists in the source tables, and of course you can manually create other joins in Data Modeler. Hierarchies and levels can be added to dimension tables joined to fact tables, and aggregation levels can be set for measures. As in the OBIEE Admin Tool, you can make use of variables (static and dynamic repository variables and session variables are supported). You can also secure the model, setting permissions on tables, columns, and even records using filters.

A recently added feature of Data Modeler is the possibility to upload a full repository to the cloud service, so if you had already modeled your business data with OBIEE you don’t need to start from scratch in the cloud. This also allows a single model to be used both on-premise and in the cloud. Something to bear in mind however is that repositories uploaded from on-premises OBIEE cannot be edited through the BICS Data Modeler, so changes should be done in OBIEE Admin Tool and then uploaded again (after the consistency check).

So once you have your data and repository in the cloud, you are ready to start reporting against it.

Check out our next post of the BICS series to learn about analyses, reports, and dashboards in the BI Cloud Service.

 
