
Efficient Lookups with Talend Open Studio's Hash Components

If you are using the same data source in several Talend Open Studio subjobs, consider loading the data into an internal data structure for use throughout the job.

A HashMap is a Java data structure maintained in RAM.  Talend Open Studio lets you load and manipulate data in HashMaps using the tHashInput and tHashOutput components.  Because the data resides in RAM, lookups are faster, especially if secondary storage is based on spinning hard disks.
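To make the idea concrete, here is a minimal Java sketch of the pattern the hash components implement: load the lookup rows into an in-memory map once, then resolve keys without returning to the database.  This is illustrative only, not Talend's generated code; the BusinessDate class and the composite year/month/day key are my own stand-ins.

import java.util.HashMap;
import java.util.Map;

public class HashLookupSketch {

    // Hypothetical stand-in for a row of the BusinessDates lookup table.
    record BusinessDate(int year, int month, int day,
                        boolean holiday, boolean payday, boolean weekday) {}

    public static void main(String[] args) {
        // Load the lookup data into RAM once (tHashOutput's role).
        Map<String, BusinessDate> businessDates = new HashMap<>();
        businessDates.put(key(2014, 1, 1),
                new BusinessDate(2014, 1, 1, true, false, true));

        // Later subjobs read from the same map (tHashInput's role);
        // each lookup is an O(1) in-memory operation with no disk I/O.
        BusinessDate hit = businessDates.get(key(2014, 1, 1));
        System.out.println("Holiday? " + hit.holiday());
    }

    // Composite key mirroring the year/month/day join columns.
    static String key(int year, int month, int day) {
        return year + "-" + month + "-" + day;
    }
}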

Without a HashMap

This Talend Open Studio job loads data from two spreadsheets, EmployeeHires and EmployeeTerminations, into a target table, EmployeeActions.  Each spreadsheet source contains a date field (hireDate and terminationDate, respectively) that is used as a key into a table called BusinessDates.  Although the date could simply be carried over into the target table (without the lookup), many data warehouses maintain date information in a separate table so that calendar-related business information can be merged with the timestamp.

This data includes flags for Holiday, Payday, and Weekday that supplement the timestamp fields (Month, Day, Year, Quarter).  The spreadsheet has been loaded into the MS SQL Server table "BusinessDates".
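Some of these flags can be derived from the timestamp itself, while others, like Holiday and Payday, come from business rules.  As an illustration only (the actual BusinessDates data is loaded from a spreadsheet, not computed), a Weekday flag could be derived like this:

import java.time.DayOfWeek;
import java.time.LocalDate;

public class WeekdayFlag {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2014, 1, 3); // a Friday
        DayOfWeek dow = date.getDayOfWeek();
        // Weekday = Monday through Friday.  Holiday and Payday flags,
        // by contrast, require business knowledge, not just the calendar.
        boolean weekday = dow != DayOfWeek.SATURDAY && dow != DayOfWeek.SUNDAY;
        System.out.println(date + " weekday? " + weekday);
    }
}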

Basic Date Fields PLUS Business-specific Information
A typical Talend Open Studio job will use BusinessDates as a lookup table.  The main flow comes from two sources: an Employee Hires spreadsheet, and an Employee Terminations spreadsheet.

BusinessDates is Repeated for Each Subjob
The Employee Hires spreadsheet is similar to the Terminations spreadsheet, with hireDate replaced by terminationDate.

Employee Action Source (Hires)

Hash Alternative

The preceding job works.  Five Hire and three Termination records are written to the database.  However, the job has a drawback.  Even if caching is enabled, the lookup table is read at least once for each subjob, hurting performance.  An alternative is to use the Talend Hash components: tHashInput and tHashOutput.

Job Rewritten with Talend Hash Component
This version loads the lookup once: a database input, BusinessDates, feeds the tHashOutput_1 component.  I flag three columns as keys: year, month, and day.  This is for informational purposes only; any fields can be used in the later tMap joins.

The tHashOutput component is configured as follows.

tHashOutput Configuration
The schema used in the tHashOutput -- with the three informational keys -- follows.


The tHashOutput schema can now be applied to joins (tMap) by adding tHashInput components as lookup flows.  This is the configuration of tHashInput_1, which is identical to that of tHashInput_2.  More inputs can be added for other data-loading subjobs.

tHashInput Configuration Links to tHashOutput
In the UI, you must define a schema for both the tHashInput and tHashOutput components.  I do this by first setting the schema to the Repository definition, then changing it to Built-In and re-defining the keys from "id" to year/month/day.

The join is performed as an Inner Join on three columns.  Because the dates are represented differently, I use a TalendDate routine to split the source date into its parts.  Note the important +1, which deals with Java's zero-based month values.
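The exact tMap expressions aren't reproduced here, but the following runnable sketch shows the behavior the +1 compensates for.  The tMap expression in the comment is a hypothetical example assuming the source column is named hireDate and the lookup columns are year, month, and day.

import java.util.Calendar;
import java.util.GregorianCalendar;

// In the tMap itself, the equivalent (hypothetical) join-key expression
// would look something like:
//   TalendDate.getPartOfDate("MONTH", row1.hireDate) + 1
public class DatePartsDemo {
    public static void main(String[] args) {
        // Talend's getPartOfDate delegates to java.util.Calendar,
        // whose MONTH field is zero-based (January == 0).
        Calendar cal = new GregorianCalendar(2014, Calendar.JANUARY, 15);
        int year  = cal.get(Calendar.YEAR);          // 2014
        int month = cal.get(Calendar.MONTH) + 1;     // 0 + 1 == 1 (January)
        int day   = cal.get(Calendar.DAY_OF_MONTH);  // 15 (already 1-based)
        System.out.println(year + "-" + month + "-" + day); // 2014-1-15
    }
}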

Multi-Part Join on a Hash Map Input
The job loads 8 records into the database.  The Hires are flagged with "HI" and the Terminations with "TE".

Data Loading Results
RAM-based data structures can provide a performance improvement because slower spinning disks aren't involved in data reads.  This is a good pattern when you're dealing with data from different sources (different databases, spreadsheets, etc.).  If your data processing is table-to-table in the same database, huge performance improvements can be made with the ELT* components, which keep all processing within the database and eliminate the network latency of pulling the data into Talend Open Studio's JVM.
