Extracting data from Taleo


Over the past few years we have seen companies focusing more and more on Human Resources / Human Capital activities. This is no surprise, considering that nowadays a large number of businesses depend more on people skills and creativity than on machinery or capital, so hiring the right people has become a critical process. As a consequence, more and more emphasis is put on having the right software to support HR/HC activities, and this in turn leads to the necessity of building a BI solution on top of those systems for a correct evaluation of processes and resources. One of the most commonly used HR systems is Taleo, an Oracle product that resides in the cloud, so there is no direct access to its underlying data. Nevertheless, most BI systems are still on-premise, so if we want to use Taleo data, we need to extract it from the cloud first.

 

1. Taleo data extraction methods

As mentioned before, there is no direct access to Taleo data; nevertheless, there are several ways to extract it, and once extracted it can be used in the BI solution:

Querying Taleo API
Using Cloud Connector
Using Taleo Connect Client

The API is very robust, but it is the most complex of the three methods, since it requires a separate application to be written. Usually, depending on the configuration of the BI system, either Oracle Cloud Connector, Taleo Connect Client, or a combination of both is used.


Figure 1: Cloud connector in ODI objects tree

Oracle Cloud Connector is a component of OBI Apps; essentially it's Java code that replicates Taleo entities / tables. It's also easy to use: just by creating a Load Plan in BIACM with Taleo as the source system, a series of calls to the Cloud Connector is generated that effectively replicates the Taleo tables to a local schema. Although it works well, it has two significant disadvantages:

It’s only available as a component of BI Apps
It doesn’t extract Taleo UDFs

So even if we have BI Apps installed and we use Cloud Connector, there will be some columns (UDFs) that will not get extracted. This is why the use of Taleo Connect Client is often a must.

 

2. Taleo Connect Client

Taleo Connect Client is a tool that is used to export or import data from / to Taleo. In this article we’re going to focus on extraction. It can extract any field, including UDFs, so it can be used in combination with BI Apps Cloud Connector or, if that’s not available, then as a unique extraction tool. There are versions for both Windows and Linux operating systems. Let’s look at the Windows version first.

Part 1 - Installation & running: Taleo Connect Client can be downloaded from the Oracle e-delivery website; just type Taleo Connect Client into the search box and you will see it in the list. Choose the necessary version, select both the Application Installer and the Application Data Model (required!), remembering that it must match the version of the Taleo application you will be working with; then just download and install. Important – the Data Model must be installed before the application itself.


Figure 2: Downloading TCC for Windows

After TCC is installed, we run it, providing the necessary credentials in the initial screen:


Figure 3: Taleo Connect Client welcome screen

And then, after clicking on 'ping', we connect to Taleo. The window that we see is initially empty, but from here we can create or execute extracts. Before going on to that step, let's find out how to see the UDFs: in the right panel, go to the 'Product integration pack' tab and select the correct product and model. In the Entities tab we can see a list of entities / tables, and under fields / relations we can see the columns and the relations with other entities / tables (through foreign keys). After the first run you will probably find that some UDFs are not on the list of available fields / relations. Why is this? Because what we initially see in the field list are only the Taleo out-of-the-box fields installed with the Data Model installer. But don't worry, this can easily be fixed: use the 'Synchronize custom fields' icon (highlighted on the screenshot). After clicking on it you will be taken to a log-on screen where you'll have to provide your credentials again, and after clicking on the 'Synchronize' button the UDFs will be retrieved.


Figure 4: Synchronizing out-of-the-box model with User Defined Fields


Figure 5: Synchronized list of fields, including some UDFs (marked with 'person' icon)

Part 2 - Preparing the extract: Once we have all the required fields available, preparing the extract is pretty straightforward. Go to File->New->New Export Wizard, choose the desired Entity, and click on Finish. Now, in the General window, set Export Mode to 'CSV-Entity', and in the Projections tab select the columns that you want to extract by dragging and dropping them from the Entity->Structure window on the right. You can also add filters or sort the result set. Finally, save the export file. The other component necessary to actually extract the data is the so-called configuration. To create it, select File->New->New Configuration Wizard, point it to the export file created in the previous step and, in the subsequent step, to our endpoint (the Taleo instance that we will extract the data from). The following screen contains more extract parameters, like request format and encoding, extract file name format and encoding, and much more. In most cases the default parameter values will let us extract the data successfully, so unless it's clearly required there is no need to change anything. Now the configuration file can be saved and the extraction process started, just by clicking on the 'Execute the configuration' button (on the toolbar just below the main menu). If the extraction is successful, all the indicators in the Monitoring window on the right will turn green, as in the screenshot below.


Figure 6: TCC successful extraction

By using a bat file created during the installation, you can schedule TCC jobs with the Windows Scheduler so that they run on a regular basis. However, it's much more common to have your OBI / BI Apps (or almost any other DBMS that your organization uses as a data warehouse) installed on a Linux / Unix server, which is why we're going to look at how to install and set up TCC in a Linux environment.
 
Part 3 - TCC in a Linux / Unix environment: TCC setup in a Linux / Unix environment is a bit more complex. To simplify it, we will reuse some of the components already created when we worked with the Windows TCC, and although the frontend of the application is totally different (to be precise, there is no frontend at all in the Linux version, as it's strictly command-line), the way the data is extracted from Taleo is exactly the same (using extracts designed as XML files and the Taleo APIs). So, after downloading the application installer and data model from edelivery.oracle.com, we install both components. Installation is actually just extracting the files, first from zip to tgz, and then from tgz to the uncompressed content. But this time, unlike in Windows, we recommend installing (extracting) the application first, and then extracting the data model files to an application subfolder named 'featurepacks' (this must be created, as it doesn't exist by default). It's also necessary to create a subfolder 'system' in the application directory; a command-line sketch of these steps follows the list below. Once this is done, you can move some components of your Windows TCC instance to the Linux one (of course, if you have no Windows machine available, you can create any of these components manually):

Copy the file default.configuration_brd.xml from the Windows TCC/system folder to the Linux TCC/system folder
Copy the extract XML and configuration XML files, from wherever you created them, to the main Linux TCC directory
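
A minimal command-line sketch of the steps above (the archive names and the /tmp staging paths are hypothetical; the target directory matches the one used later in this article, so adjust everything to your own download and environment):

# extract the TCC application (hypothetical archive names)
cd /u01/tcc_linux
unzip tcc-15A.2.0.20-linux.zip            # produces a .tgz file
tar -xzf tcc-15A.2.0.20-linux.tgz         # creates the tcc-15A.2.0.20 directory

# create the folders required by TCC
cd tcc-15A.2.0.20
mkdir featurepacks system

# extract the data model into the featurepacks subfolder (hypothetical archive names)
unzip /tmp/tcc-datamodel-15A.2.0.20.zip -d /tmp
tar -xzf /tmp/tcc-datamodel-15A.2.0.20.tgz -C featurepacks

# copy the components prepared on the Windows TCC instance (hypothetical source folder)
cp /tmp/windows_tcc/system/default.configuration_brd.xml system/
cp /tmp/windows_tcc/blog_article_sq.xml /tmp/windows_tcc/blog_article_cfg.xml .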

There are also some changes that need to be made in the TaleoConnectClient.sh file

Set the JAVA_HOME variable at the top of the file (just below the #!/bin/sh line), pointing it to the path of your Java SDK installation (for some reason, in our installation the system variable JAVA_HOME wasn't picked up correctly by the script)
In the line below #Execute the client,  after the TCC_PARAMETERS variable, add:

✓ the proxy server parameters, if a proxy is to be used:

-Dhttp.proxyHost=ipNumber -Dhttp.proxyPort=portNumber

✓ the path of the Data Model:

-Dcom.taleo.integration.client.productpacks.dir=/u01/oracle/tcc-15A.2.0.20/featurepacks

So, in the end, the TaleoConnectClient.sh file in our environment has the following content (IP addresses were masked):

#!/bin/sh
JAVA_HOME=/u01/middleware/Oracle_BI1/jdk

# Make sure that the JAVA_HOME variable is defined
if [ ! "${JAVA_HOME}" ]
then
    echo +-----------------------------------------+
    echo "+ The JAVA_HOME variable is not defined.  +"
    echo +-----------------------------------------+
    exit 1
fi

# Make sure the IC_HOME variable is defined
if [ ! "${IC_HOME}" ]
then
    IC_HOME=.
fi

# Check if IC_HOME points to a valid Taleo Connect Client folder
if [ -e "${IC_HOME}/lib/taleo-integrationclient.jar" ]
then
    # Define the class path for the client execution
    IC_CLASSPATH="${IC_HOME}/lib/taleo-integrationclient.jar":"${IC_HOME}/log"

    # Execute the client
    ${JAVA_HOME}/bin/java ${JAVA_OPTS} -Xmx256m ${TCC_PARAMETERS} \
        -Dhttp.proxyHost=10.10.10.10 -Dhttp.proxyPort=8080 \
        -Dcom.taleo.integration.client.productpacks.dir=/u01/tcc_linux/tcc-15A.2.0.20/featurepacks \
        -Dcom.taleo.integration.client.install.dir="${IC_HOME}" \
        -Djava.endorsed.dirs="${IC_HOME}/lib/endorsed" \
        -Djavax.xml.parsers.SAXParserFactory=org.apache.xerces.jaxp.SAXParserFactoryImpl \
        -Djavax.xml.transform.TransformerFactory=net.sf.saxon.TransformerFactoryImpl \
        -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger \
        -Djavax.xml.xpath.XPathFactory:http://java.sun.com/jaxp/xpath/dom=net.sf.saxon.xpath.XPathFactoryImpl \
        -classpath ${IC_CLASSPATH} com.taleo.integration.client.Client ${@}
else
    echo +-----------------------------------------------------------------------------------------------
    echo "+ The IC_HOME variable is defined as (${IC_HOME}) but does not contain the Taleo Connect Client"
    echo "+ The library ${IC_HOME}/lib/taleo-integrationclient.jar cannot be found."
    echo +-----------------------------------------------------------------------------------------------
    exit 2
fi

Once this is ready, we can also apply the necessary changes to the extract and configuration files, although in fact there is nothing to change in the extract definition (file blog_article_sq.xml). Let's have a quick look at the content of this file:

<?xml version="1.0" encoding="UTF-8"?>
<quer:query productCode="RC1501" model="http://www.taleo.com/ws/tee800/2009/01" projectedClass="JobInformation" locale="en" mode="CSV-ENTITY" largegraph="true" preventDuplicates="false" xmlns:quer="http://www.taleo.com/ws/integration/query"><quer:subQueries/><quer:projections><quer:projection><quer:field path="BillRateMedian"/></quer:projection><quer:projection><quer:field path="JobGrade"/></quer:projection><quer:projection><quer:field path="NumberToHire"/></quer:projection><quer:projection><quer:field path="JobInformationGroup,Description"/></quer:projection></quer:projections><quer:projectionFilterings/><quer:filterings/><quer:sortings/><quer:sortingFilterings/><quer:groupings/><quer:joinings/></quer:query>

Just by looking at the file we can figure out how to add more columns manually: we just need to add more quer:projection tags, like

<quer:projection><quer:field path="DesiredFieldPath"/></quer:projection>

With regard to the configuration file, we need to make some small changes: the cli:SpecificFile and cli:Folder tags contain absolute Windows paths. Once we move the files to Linux, we need to replace them with Linux filesystem paths, absolute or relative.
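
This can be done in any text editor, or with a quick sed one-liner; in the sketch below both the Windows path (C:\TCC\results) and the Linux path (/u01/tcc_linux/results) are hypothetical examples, so use the values from your own configuration file:

# replace the Windows output path stored in the cli:Folder / cli:SpecificFile tags
# (both paths are hypothetical examples)
sed -i 's|C:\\TCC\\results|/u01/tcc_linux/results|g' blog_article_cfg.xml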

Once the files are ready, the only remaining task is to run the extract, which is done by running:

./TaleoConnectClient.sh blog_article_cfg.xml 

See the execution log:

[KKanicki@BIApps tcc-15A.2.0.20]$ ./TaleoConnectClient.sh blog_article_cfg.xml
2017-03-16 20:18:26,876 [INFO] Client - Using the following log file: /biapps/tcc_linux/tcc-15A.2.0.20/log/taleoconnectclient.log
2017-03-16 20:18:27,854 [INFO] Client - Taleo Connect Client invoked with configuration=blog_article_cfg.xml, request message=null, response message=null
2017-03-16 20:18:31,010 [INFO] WorkflowManager - Starting workflow execution
2017-03-16 20:18:31,076 [INFO] WorkflowManager - Starting workflow step: Prepare Export
2017-03-16 20:18:31,168 [INFO] WorkflowManager - Completed workflow step: Prepare Export
2017-03-16 20:18:31,238 [INFO] WorkflowManager - Starting workflow step: Wrap SOAP
2017-03-16 20:18:31,249 [INFO] WorkflowManager - Completed workflow step: Wrap SOAP
2017-03-16 20:18:31,307 [INFO] WorkflowManager - Starting workflow step: Send
2017-03-16 20:18:33,486 [INFO] WorkflowManager - Completed workflow step: Send
2017-03-16 20:18:33,546 [INFO] WorkflowManager - Starting workflow step: Poll
2017-03-16 20:18:34,861 [INFO] Poller - Poll results: Request Message ID=Export-JobInformation-20170316T201829;Response Message Number=123952695;State=Completed;Record Count=1;Record Index=1;
2017-03-16 20:18:34,862 [INFO] WorkflowManager - Completed workflow step: Poll
2017-03-16 20:18:34,920 [INFO] WorkflowManager - Starting workflow step: Retrieve
2017-03-16 20:18:36,153 [INFO] WorkflowManager - Completed workflow step: Retrieve
2017-03-16 20:18:36,206 [INFO] WorkflowManager - Starting workflow step: Strip SOAP
2017-03-16 20:18:36,273 [INFO] WorkflowManager - Completed workflow step: Strip SOAP
2017-03-16 20:18:36,331 [INFO] WorkflowManager - Completed workflow execution
2017-03-16 20:18:36,393 [INFO] Client - The workflow execution succeeded.

And that's it! Assuming our files were correctly prepared, the extract will be ready in the folder declared in the cli:Folder tag of the configuration file.
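
A quick sanity check from the shell could look like this; the output folder and the CSV file name below are hypothetical, as they depend on the cli:Folder value and on the file name format chosen in the configuration:

# list the output folder and peek at the first lines of the generated CSV (hypothetical names)
ls -l /u01/tcc_linux/results/
head -5 /u01/tcc_linux/results/Export-JobInformation-20170316T201829.csv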

As for scheduling, different approaches are available, the most basic being to use the Linux crontab as the scheduler, but you can also easily use whatever ETL tool is already used in your project. See the screenshot below for an ODI example:


Figure 7: TCC extracts placed into 1 ODI package

The file extract_candidate.sh contains a simple call to the TCC extraction:

[KKanicki@BIApps tcc-15A.2.0.20]$ cat extract_candidate.sh
#!/bin/bash
cd /u01/tcc_linux/tcc-15A.2.0.20/
./TaleoConnectClient.sh extracts_definitions/candidate_cfg.xml
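
And if you go for the basic crontab approach instead, a sketch of an entry (added with crontab -e) could look like this; the schedule and the log file path are just example values:

# run the candidate extract every day at 02:00 and append the output to a simple log (example values)
0 2 * * * /u01/tcc_linux/tcc-15A.2.0.20/extract_candidate.sh >> /u01/tcc_linux/tcc-15A.2.0.20/log/cron_extract.log 2>&1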

If your extracts fail or you have any other issues with configuring Taleo Connect Client, feel free to ask us in the comments section below!

In the last couple of years we have delivered several highly successful BI Projects in the Human Resources / Human Capital space! Don't hesitate to contact us if you would like to receive specific information about these solutions!

Integration of Linux/UNIX OBI installations with MS Access


Introduction

Although most organizations that use OBI also use some kind of ERP software, there are still divisions or departments where decentralized, separate data is in use. Quite often those small solutions use MS Access as a database, as it's cheap and doesn't require a lot of technical knowledge. If there is a need to make this data available in the data warehouse, it usually requires an ETL process. However, sometimes there may be a need to connect directly to MS Access and retrieve the data straight from such a source.

Unfortunately, there is no ODBC driver that would allow you to create a connection from a Linux server (where OBI is usually installed) to an MS Access database, so you may think that creating this connection is impossible. Well, in this article I will solve this problem by describing one of the possible workarounds.

Concept

Before we get to the details, let me briefly explain what we are going to implement. The whole concept is simple: we will install Oracle Database Gateway, which will 'translate' MS Access tables so that they are seen as Oracle tables. Then, using a DB link, we will connect the DWH to the Oracle Gateway and create synonyms on the 'db-linked' tables. Finally, those synonyms will be imported into the RPD.

Figure: OBI RPD → DWH (synonyms over a DB link) → Oracle Database Gateway → ODBC → MS Access

But what is Oracle Gateway actually?

Oracle Database Gateway is a product that translates a heterogeneous data source – which may be not only a relational database (MS SQL Server, Teradata, Sybase – see the documentation for the full list), but any data source for which an ODBC driver is available (MS Access, MS Excel, and also any of the previously mentioned RDBMSes) – so that it is seen as an Oracle Database. The license for the Gateway for ODBC is included if you already own Oracle DB; Gateways for other products have to be purchased separately. Oracle Database Gateway is not a standalone product: it requires an Oracle Database to work. For Oracle's official summary of the Gateway, please check the following document: Oracle Database Gateway

Requirements

Before we start, we need to download some software that will be necessary to configure and run the connection between OBI and MS Access. We also have to make sure that a few more components are available:

  • OBI environment: up, running & rock stable – your local virtual machine that you use to play with the RPD is fine, even if it freezes once in a while.
  • MS Windows machine, where you should have administration rights and which can be reached (through the network) from the DWH that OBI uses. If you are short of Windows licenses, your local PC may also be used, as long as it satisfies the previously mentioned networking condition. Oracle Database Gateway will be installed on that machine, and the Access database will also be located there.
  • Client machine, with an Oracle client, Oracle SQL / development software & the Oracle BI Administration Tool. Exactly like the PC you use every day in the office.
  • Oracle Database Gateways 11gR2 for Windows is necessary; it can be downloaded here: Oracle Database Gateway for Windows (choose the file win32_11gR2_gateways.zip, the 32-bit version) or here: Oracle Database Gateway for Windows (download the file win64_11gR2_gateways.zip, the 64-bit version). An OTN account is required to download either of them.
  • Package including the ODBC Driver for MS Access: ODBC Driver download – if you don't have the required ODBC Access driver installed on your system yet.

After making sure you have all components mentioned above, let’s roll with this! This tutorial will have the following parts:

  • #1: Configuring ODBC connection to MS Access database
  • #2: Installing and configuring Oracle Database Gateway
  • #3: Creating the required database objects in the DWH
  • #4: Importing MS Access tables to OBI

 

Step 1: Configuring ODBC connection to the MS Access database

This step is very simple. Copy the MS Access file containing the database you would like to integrate with OBI to the machine where you will install the Oracle Gateway. This is mandatory, as the Gateway cannot use a database located on a network share; it has to be a local one. Now you have to configure the ODBC connection for this database. If you are a BI developer, you have probably already done this a few hundred times, so let me go through it briefly:

'Control Panel'->'Administrative Tools'->'Data Sources (ODBC)'->'System DSN'->'Add'->'Microsoft Access Driver'. Now choose the file with the database, set the credentials (if necessary), the data source name & description, and that's it!

If you don't have the ODBC driver for MS Access installed, or if you just have the version supporting the mdb file format (up to MS Access 2003) but you want to use an accdb file, please download and install the ODBC driver for MS Access mentioned in the previous paragraph.

Step 2: Installing and configuring Oracle Database Gateway

If you have already downloaded the Gateway, you may start the installation. Extract the content of the downloaded zip file and then run 'setup.exe'. In the following steps, set the installation path, choose which gateway drivers you would like to install (for this exercise only the Oracle Database Gateway for ODBC is required, which is, by the way, the only driver that is free of charge) and configure the Gateway's listener, setting its name and network protocol and specifying the port number (assuming you chose TCP, which in 99.99% of cases you did). If you don't use the listener's default name, please note it down somewhere, as it will be needed in the following steps.

After the installation is finished, there are still a few files to be edited.

  • #1: GATEWAY_INSTALLATION_PATH/hs/admin/init[SID].ora

This file is used to connect the Oracle Database Gateway to the previously created ODBC connection. You can choose whatever you want as the SID – for this exercise let's use 'msacc', hence the file should be named 'initmsacc.ora'. As such a file doesn't exist in the file system, create it by making a copy of the 'initdg4odbc.ora' file that is created during the installation of the Gateway, then rename it and edit the content to be the following:

#
# Environment variables required for the non-Oracle system
#
#set <envvar>=<value>
#
# HS init parameters
#
HS_FDS_CONNECT_INFO = [NAME_OF_ODBC_CONNECTION_CREATED_IN_STEP1]
HS_FDS_TRACE_LEVEL = OFF

Of course, the string [NAME_OF_ODBC_CONNECTION_CREATED_IN_STEP1] should be replaced with the actual value. After editing the file, save it, close it and proceed to the next one:

  • #2: GATEWAY_INSTALLATION_PATH/network/admin/listener.ora

Here the content should be as follows:


[LISTENER_NAME_SET_DURING_GATEWAY_INSTALLATION] =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = [YOUR_HOSTNAME.YOUR_DOMAINNAME])(PORT = [PORT_CHOSEN_DURING_INSTALLATION]))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC[PORT_CHOSEN_DURING_INSTALLATION]))
    )
  )

ADR_BASE_[LISTENER_NAME_SET_DURING_GATEWAY_INSTALLATION] = [GATEWAY_INSTALLATION_PATH]

SID_LIST_[LISTENER_NAME_SET_DURING_GATEWAY_INSTALLATION] =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = [SID_NAME_CHOSEN_IN_PREVIOUS_PARAGRAPH])
      (ORACLE_HOME = [GATEWAY_INSTALLATION_PATH])
      (PROGRAM = dg4odbc)
    )
  )

For those of you who wonder what line:

 (PROGRAM=dg4odbc)

means: it just tells the listener to use the ODBC Gateway (dg4odbc). The 'PROGRAM' parameter would have a different value if you installed and used another Gateway.

Below is an example, with values & paths set:

LISTENER_GTW =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = pc04.acme.org)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER_GTW = C:\product\11.2.0\tg_1

SID_LIST_LISTENER_GTW =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = msacc)
      (ORACLE_HOME = C:\product\11.2.0\tg_1)
      (PROGRAM = dg4odbc)
    )
  )

Attention! If you want to integrate/import more than one MS Access database into OBI, repeat the SID_DESC section for every database you want to import, of course giving each of them a different SID name.

Save, exit and reboot Windows.

After rebooting Windows, open the 'services.msc' console (Start Menu->Run->type 'services.msc'->hit the Enter key); there you will find a service named 'OracleOraGtw11g_home1TNS[listener_name]'; start it if it hasn't started yet. The configuration of the Gateway is now finished and it should be ready to use! To verify it, run the command

C:\>lsnrctl status [LISTENER_NAME_SET_DURING_GATEWAY_INSTALLATION]

on the machine where the Oracle Gateway is installed and check the listener's status.

Step 3: Creating the required database objects at the DWH

To make MS Access tables available for the import in OBI RPD, you need to do 3 things:

  • #1: Create a TNS entry at OBI's Data Warehouse pointing to the listener SID created in the previous step:

The following entry should be added to ‘tnsnames.ora’ file used by your DWH:

[SID_USED_BY_MSACCESS_DB_BEHIND_GATEWAY]_connection =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = [HOSTNAME_OF_MACHINE_WITH_GATEWAY])(PORT = [PORT_USED_BY_GATEWAY]))
    (CONNECT_DATA = (SID = [SID_USED_BY_MSACCESS_DB_BEHIND_GATEWAY]))
    (HS = OK)
  )

So, in case of our exercise, the values should be as follows:

msacc_connection =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = pc04.acme.org)(PORT = 1521))
    (CONNECT_DATA = (SID = msacc))
    (HS = OK)
  )

(of course if you configured more than one SID in listener.ora file in a previous paragraph, then you should create a tns entry for each of them!)

To check if the created tns entry works correctly, you can execute the ‘tnsping’ command at the server where your DWH is installed.
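
For example, with the entry created above, a quick check from the DWH server shell would be:

# should end with "OK (n msec)" if the Gateway listener is reachable
tnsping msacc_connection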

  • #2: Create a database link at the Data Warehouse pointing to the MS Access database behind the Oracle Gateway

To do that, run the following command:

CREATE DATABASE LINK [CHOSEN_NAME_FOR_DBLINK] USING '[TNS_ENTRY_NAME]';

So, in our case this would be:

CREATE DATABASE LINK ms_access_db USING 'msacc_connection';

  • #3: Create synonyms for all tables of the MS Access database that you want to import to OBI

Now, for each table you want to import to OBI RPD, you have to execute:

CREATE SYNONYM TAB_SYNONYM FOR "TAB_NAME"@MS_ACCESS_DB;

Remember the quotation marks! They are obligatory!

If all your steps were successful, you should now be able to access data from MS Access in your Oracle DWH just by issuing:

SELECT * FROM TAB_SYNONYM;

Please also remember that if you want to access only certain columns of such a synonym, you need to put their names between quotation marks, e.g.:

SELECT "some_column_name" FROM TAB_SYNONYM;

Otherwise, it doesn’t work.

Now your objects should be ready to be imported to the RPD.

Step 4: Importing MS Access tables to OBI

Now the last step: importing the metadata. When we import Oracle tables into OBI we can do it automatically, just 'ticking' the required table on the list of tables and then moving it to the existing physical containers. Here the story is different, as the metadata of db-linked tables is not present in the Oracle system dictionaries, hence it cannot be imported automatically (in the end, OBI's automatic metadata import is a SQL query against the 'all_tables' and 'all_tab_cols' dictionaries). So to import the metadata, we need to create the physical table manually (giving it the name of the synonym), and then, one by one, also create the physical columns manually. Bear in mind that column names must be put in quotation marks, as suggested earlier when querying the columns in the SQL tool. You also need to remember that for some MS Access data types, the data type that should be used in OBIEE is not obvious; for example, the 'YES/NO' type in MS Access translates to INTEGER in OBIEE. To check if the metadata was imported correctly, save the repository, then right-click on the physical table name and choose 'View Data' from the menu.

And that’s it! Now you have your MS Access tables available in OBI RPD and you can add them to existing stars, create new hierarchies and do whatever you want!

Conclusion

MS Access databases are still in use in many places, and it may sometimes be useful or even necessary to add them to an existing BI system. As you have seen, this operation is not that complex and, once done correctly, may easily be repeated in the future, as only some of the steps need to be repeated for each new database that has to be imported.

If you have any issues with the configuration described in the article, please let me know in the comments, maybe I will be able to help you.

APEX 4.2 goes live!


Last week, after more than a year since the release of v4.1, Oracle released version 4.2 of APEX. In this article I would like to show its most significant modifications. Moreover, you will see not only a 'this feature, that feature' list, but also some examples of these new functionalities.

So, the most important modifications are:

-> Introduction of HTML5  charts/graphs, themes and components

-> Return of package applications

-> A lot of new functionalities for mobile devices:

  1. mobile interfaces
  2. new components added for mobile applications (of course, the HTML5 & responsive themes already mentioned above also belong in this section)
  3. new ‘Dynamic actions’ like ‘swipe’, ‘tap’
  4. HTML5-mobile-oriented items like slider, on/off switch or calendar

-> Responsive themes

-> A refreshed APEX interface

With the uncertain future of Flash on mobile devices (or maybe the word 'certain' is more appropriate: after this announcement by Adobe we finally realized that Steve Jobs was 100% right in that matter), it is no surprise that everybody is moving towards HTML5, and it looks like the days of using Flash in this kind of application are numbered.

So now you just need to update your APEX to 4.2 and then we can proceed to take a closer look at those new, fancy features!


Creating basic reports with APEX


Some of you may not even realize that installing the Oracle database also installs a free tool for building web applications. The name of that tool is APEX (which stands for Application Express), and I will try to show you some of its basic functionalities. Just by looking at the Sample Application (created by default when the development environment is created) you can see what APEX is capable of.

I will show you how to build a basic chart using APEX's wizards, and how to use a checklist as a filter for the report's parameters. You will also see how to overcome one of APEX's issues. We will start from the point where you already have a new application with one empty page created.


