
Top Ten Timesaving Talend Tips

Here are ten timesaving tips for Talend Open Studio for Data Integration that I've found useful.

#10 Drag and drop a component on a connector

From the Palette, select a component and drag it over a connection.  When the connection is highlighted, release the mouse button.  This works particularly well with tFilterRow and tLogRow, and it avoids having to disconnect and reconnect several links.

Drag a Component onto a Connection
#9 Copy and paste

Sometimes you may want to take a second pass over a data file to handle another case.  Select a subjob, right-click "Copy", then right-click "Paste": this produces an identical subjob with its components renamed so that it compiles and runs immediately.  You can then tweak the copy to handle the second pass.  Use this with caution, as too much copy-paste-and-tweak leads to maintenance problems and makes the job harder to change.


Subjob Context Menu with Copy Command
#8 Chain multitable inserts

With good business keys, you can take multiple passes over the data to write into normalized tables.  Without good keys, or if you want to streamline your Talend job, you can chain together DB output components to write to multiple tables.  In this screenshot, two DB writes are connected by a tMSSqlLastInsertId component.


Two Output Components Chained Together
For more detail on using tMSSqlLastInsertId, see this post: tMSSqlLastInsertId Returns 0.

#7 Drag jobs on canvas to create tRunJobs

You can drag a tRunJob component from the Palette to the canvas and then set the child job.  A faster way is to drag a job directly from the Repository onto the canvas, which saves the step of looking up the job in the Component view.


Creating tRunJob from the Repository
#6 Manage documentation through Talend

Project documentation and configuration can be managed through the Talend Repository.  This makes them exportable from a single place (Export Items), which is useful for sharing among workspaces; note that documentation is not included by the Export Job function.

A Config File Maintained in Talend Documentation

#5 Re-use connections

Each RDBMS has a dedicated connection component, such as tOracleConnection or tMSSqlConnection.  Add a connection component to your job and reference it with the "Use an existing connection" option in other DB components like tMSSqlOutput or tOracleInput.  This centralizes connection configuration, including items like username/password, auto-commit settings, and JDBC properties.

A Job with a tMSSqlConnection Component
#4 Properties files

When you're managing different environments, particularly a production environment, a text-based properties file is a convenient way to configure your jobs.  The properties file can be versioned, is easily readable, and supports file comparison with Linux commands like "diff".


This is a video on using properties files in Talend Open Studio.
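As a minimal sketch of the idea, a Talend job (which is ultimately Java) can read such a file with the standard `java.util.Properties` class. The file path and the `db.host`/`db.port` keys below are hypothetical examples, not from the video:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Properties;

public class PropsDemo {
    public static void main(String[] args) throws IOException {
        // Create a sample properties file; in a real deployment this
        // file would already exist, one per environment (dev/test/prod).
        Path path = Files.createTempFile("job", ".properties");
        Files.write(path, List.of(
                "db.host=prod-db.example.com",
                "db.port=1433"));

        // Load it the same way a job would at startup.
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(path)) {
            props.load(in);
        }
        System.out.println(props.getProperty("db.host"));
        // → prod-db.example.com
    }
}
```

Because the file is plain text, the dev and prod copies can be compared with `diff` and checked into version control alongside the job.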

#3 Context groups

The standard way to parameterize a set of Talend Open Studio jobs is through Context Groups.  These are sets of global variables grouped by environment (dev, test, prod) that can be switched at export time or via the Run view.


Run View Referencing Several Contexts
This post describes using Contexts and exporting them in more detail: Applying Contexts to Talend Open Studio Jobs.
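Conceptually, a context group is a named set of variables selected by environment. This plain-Java analogy (the environment names and the `db.url` values are made up for illustration) shows the shape of what Talend manages for you:

```java
import java.util.Map;

public class ContextDemo {
    // Each "context" is a named set of variables; Talend stores these
    // in the context group and lets you pick one at run or export time.
    static final Map<String, Map<String, String>> CONTEXTS = Map.of(
            "dev",  Map.of("db.url", "jdbc:sqlserver://dev-db:1433"),
            "prod", Map.of("db.url", "jdbc:sqlserver://prod-db:1433"));

    public static void main(String[] args) {
        // Pick the active context by name; default to dev.
        String env = args.length > 0 ? args[0] : "dev";
        System.out.println(CONTEXTS.get(env).get("db.url"));
        // → jdbc:sqlserver://dev-db:1433
    }
}
```

The benefit of the real feature is that the job code never changes between environments; only the selected context does.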

#2 Use queries

While Talend Open Studio will generate queries based on a table for input components like tOracleInput, you can save your own queries and reference them throughout your jobs.  This has two advantages.  The first is that it allows for queries that span multiple tables and exceed the query-generation capability of Talend Open Studio (think Oracle set-based operations).  The second is that it produces a more robust job by leaving out irrelevant columns that may be removed later.


For example, if a lookup involves only a name and an id field, there's no need to add other fields that may be dropped before the job goes to production.  If a column is dropped and it's not relevant to the query, it shouldn't break the job.

A Talend Open Studio Query
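The lookup idea above can be sketched in plain Java: build the query from an explicit, minimal column list rather than selecting everything. The `customers` table and its columns are hypothetical:

```java
public class QueryDemo {
    // Build a query naming only the fields the lookup actually needs,
    // so unrelated schema changes cannot break the job.
    static String lookupQuery(String table, String... columns) {
        return "SELECT " + String.join(", ", columns) + " FROM " + table;
    }

    public static void main(String[] args) {
        // An id/name lookup survives the removal of any other column.
        System.out.println(lookupQuery("customers", "id", "name"));
        // → SELECT id, name FROM customers
    }
}
```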
#1 Durable schemas

Schemas should be based on the Repository rather than Built-in wherever possible.  In some cases, components like tMSSqlOutput can be adjusted to ignore columns for a write operation using the Advanced tab.  That way, a complete set of columns can still be referenced in a Repository schema, but there won't be any contention over auto-generated fields.


This tip also works with #2 to support more robust jobs.  If a subset of fields is used repeatedly -- say an id/name pair -- define it as a Generic Schema or other schema type and store it in the Repository.  That way, the field list never falls out of sync with the database (as long as the lookup fields are still valid).

