
DataStage Interview Questions

Showing Questions 1 - 20 of 636 Questions

Datastage job

Asked By: hema123 | Asked On: Jul 11th, 2014

I have a sequence job in DataStage that is taking more than 4 hours, though it is supposed to complete in under 1 hour. What could be the possible reasons for it taking so much longer than expected?

Answered by: Karthik on: Aug 24th, 2014

- Check whether any of the parallel jobs in the sequence got hung, which can also cause a long run.

- Check whether any of the SQL used in the DataStage jobs is waiting on resources.

Answered by: BML on: Aug 19th, 2014

There can be many reasons; no one can tell you the exact one. When you ask a DataStage-related question, we expect you to have basic debugging skills. Check the underlying DataStage jobs to see which job is taking the longest; from there, there are many ways to find the bottleneck.

Getting files in datastage

Asked By: Kiranchandra | Asked On: Aug 1st, 2014

How to get files from different servers onto one server in DataStage by using a UNIX command?

Answered by: Murali on: Aug 19th, 2014

  1. scp test.ksh dsadm@
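The answer above is cut off. As a hedged sketch (the user name, hosts, and paths below are made-up examples), pulling one file from each source server onto the DataStage server with scp could look like this. The loop only echoes each command; drop the echo to actually run the copies (SSH access and keys are required):

```shell
# Build the scp commands for pulling a data file from each source server.
# User "dsadm", the host names, and the paths are hypothetical.
for host in server1 server2; do
  echo scp "dsadm@${host}:/data/out/daily.csv" "/data/landing/daily_${host}.csv"
done
```

Removing the echo runs the real copies; `scp -r` pulls whole directories the same way.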

Display files date-wise, e.g. the Aug 18th, 19th, 29th data files, by using a UNIX cmd?

Asked By: Kiranchandra | Asked On: Aug 18th, 2014

Answered by: Ashok on: Aug 19th, 2014

You can display files date-wise with a normal ls -latr command.
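ls -latr only sorts by modification time; to pick out the files for one specific date, GNU find's -newermt test can be used as well. A small self-contained sketch (the file names are made up):

```shell
# Create sample files stamped with the dates from the question.
dir=$(mktemp -d)
touch -d "2014-08-18 10:00" "$dir/sales_0818.csv"
touch -d "2014-08-19 10:00" "$dir/sales_0819.csv"
touch -d "2014-08-29 10:00" "$dir/sales_0829.csv"

# Long, time-sorted listing (the ls -latr approach from the answer):
ls -latr "$dir"

# Only the files modified on Aug 19th, 2014:
find "$dir" -type f -newermt "2014-08-19" ! -newermt "2014-08-20"
```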

Flat files (.txt, .csv, .xml) to load sequential files?

Asked By: Kiranchandra | Asked On: Aug 5th, 2014

How can we load three different flat files (file 1: .txt, file 2: .csv, file 3: XML) into a sequential file at the same time?

Answered by: Devesh Ojha on: Aug 11th, 2014

If the metadata is the same, then we can load them by doing a union operation; if the metadata is different, then first sync the metadata and then load them.

How to get top five rows in datastage?

Asked By: naveen.chinthala | Asked On: Jan 3rd, 2013

How to get the top five rows in DataStage? I tried to use the @INROWNUM and @OUTROWNUM system variables in a Transformer, but they are not giving unique sequential numbers for every row... Please help! Thanks in advance!

Answered by: Poorna on: Aug 7th, 2014

You can restrict the data at the source stage level itself, using the filter option.

Apply in the filter: head -5

Answered by: Bhavani on: Aug 1st, 2014

It is very simple, no need to think of complex answers. In the Sequential File stage, set the property "Limit rows = 5". That's it.
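Both answers amount to keeping only the first five rows; the filter-option variant can be checked from the command line (the sample file is made up):

```shell
# Write 100 numbered rows, then keep only the first five, which is what
# "head -5" in the Sequential File stage's filter option does.
file=$(mktemp)
seq 1 100 > "$file"
head -5 "$file"
```

This prints the rows 1 through 5, one per line.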

What is the difference between symmetric multiprocessing and massively parallel processing?

Asked By: balu | Asked On: Jun 30th, 2006

Answered by: Saurabh Sinha on: Aug 1st, 2014

In SMP, every processor shares a single copy of the operating system (OS).

In MPP, each processor uses its own operating system (OS) and memory.

Answered by: Guest on: Jan 8th, 2007

Symmetric multiprocessing (SMP) is the processing of programs by multiple processors that share a common operating system and memory. SMP is also called "tightly coupled multiprocessing"...

How to get the last day of the current month?

Asked By: naveen.chinthala | Asked On: Sep 12th, 2013

I have explored all the available functions in the transformer stage, but could not find the exact function to get the last day of the current month. Can you please show me which function is available for this logic.

Answered by: venueksh on: Apr 8th, 2014

Oracle has LAST_DAY function:

SELECT LAST_DAY(TO_DATE('07/04/2014', 'MM/DD/YYYY')) FROM dual;
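The Oracle LAST_DAY function runs in the database; when a UNIX shell is available, GNU date can compute the same value (a hedged sketch, not a DataStage transformer function): take the 1st of the month, add a month, and step back one day.

```shell
# Last day of the current month (GNU date):
date -d "$(date +%Y-%m-01) +1 month -1 day" +%Y-%m-%d

# Last day of the month for a fixed date, e.g. July 2014:
date -d "2014-07-01 +1 month -1 day" +%Y-%m-%d   # prints 2014-07-31
```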

What are routines and where/how are they written and have you written any routines before?   

Asked By: Interview Candidate | Asked On: May 24th, 2005

Routines are stored in the Routines branch of the DataStage Repository, where you can create, view, or edit them. The following are the different types of routines: 1) transform functions, 2) before/after job subroutines, 3) job control routines.

Answered by: Chalapathirao Maddali on: Jul 11th, 2014

DataStage has 2 types of routines; below are the 2 types: 1. Before/After subroutines, 2. Transformer routines/functions. Before/After subroutines: these are built-in routines, which can be called in...

Answered by: bvrp on: Nov 1st, 2005

Routines are stored in the Routines branch of the DataStage Repository, where you can create, view, or edit them using the Routine dialog box. The following program components are classified as ...

What's the difference between an operational data store (ODS) & a data warehouse?

Asked By: Interview Candidate | Asked On: Jun 20th, 2005

Answered by: Ramyapriya Sudhakar on: Jul 9th, 2014

Operational data store: unlike a real EDW, its data is refreshed in near real time and used for routine business activity. It is used as an interim logical area for the data warehouse. This is the pla...

Answered by: Dharmendra on: Sep 22nd, 2006

An operational data store (or "ODS") is a database designed to integrate data from multiple sources to facilitate operations, analysis and reporting. Because the data originates from multiple sources,...

Purpose of using user defined environment variables and parameter sets

Asked By: google_yahoo | Asked On: Dec 28th, 2012

What is the purpose of using user-defined environment variables and parameter sets? I am a little bit confused. Could anyone explain it to me in detail?

Answered by: Charmi on: Jul 1st, 2014


A parameter set is used when you want a set of user-defined variables to be used many times in a project.

For example, variables like server name, user ID, and password can be added to a parameter set, and that set can be used across jobs instead of including the three variables every time.

What is the architecture of your datastage project

Asked By: Sam Geek | Asked On: Oct 19th, 2013

I came across this question many times in interview, in specific what can I answer..... Please help..

Answered by: shiv on: Jun 11th, 2014

The above answer is the architecture of DataStage, not the architecture of a project. A project architecture would be like: you have 1 Source --> 1 Staging Area --> ...

Answered by: Dileep J on: Jan 29th, 2014

There are mainly 3 parts: 1. DS Engine, 2. Metadata Repository, 3. Services. If these 3 tiers are installed on a single server, it is called a single-tier architecture. If the DS Engine is on one server and the Metad...

What are hierarchies? Examples?

Asked By: upendarkm | Asked On: Apr 11th, 2012

Answered by: Rajesh B on: Jun 8th, 2014

Hi. A hierarchy is nothing but a parent-child relationship.
Let's say country is the parent --> state is its child --> city is the child of state --> house number is the child of city.

So if anyone asks for a hierarchy, you can say country --> state --> city --> house is the hierarchy relationship.

How to extract job parameters from a file?

Asked By: ramamulas | Asked On: Oct 27th, 2011

Answered by: karthick on: May 30th, 2014

The parameter file will be comma-delimited.
Use a command such as cut -d, -f1 file1.txt | tr -d '\n' to extract the first field...
Use an Execute Command activity to extract each parameter, then finally pass the values to the actual job.
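The cut/tr approach described above can be tried on a throwaway file (the field values are hypothetical):

```shell
# A one-line, comma-delimited parameter file: server, user, run date.
pfile=$(mktemp)
printf 'dbserver01,etl_user,2014-08-19\n' > "$pfile"

# Extract each field; tr -d '\n' strips the trailing newline so the
# value can be passed straight into a job parameter.
cut -d, -f1 "$pfile" | tr -d '\n'; echo
cut -d, -f2 "$pfile" | tr -d '\n'; echo
cut -d, -f3 "$pfile" | tr -d '\n'; echo
```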

Answered by: Mallikarjuna_G on: Aug 18th, 2013

Write a server job routine that takes the file as input and reads the parameters from it. If the file contains more than one parameter, each on a separate line, then your routine should concatenate them...

Scenario based question

Asked By: Sam Geek | Asked On: Oct 24th, 2013

How to find whether the next value in a column is incrementing or not. For example, with 100, 200, 300, 400: if the current value is greater than the previous value, print "greater"; if lesser, print "lesser". For example, with 100, 200, 150, 400, here 150 would print "lesser".

Answered by: karthick on: May 30th, 2014

Keep the previous value in one stage variable, stg_v1, and the present value in stg_v2; compare the two. If greater, then set stg_v1 = stg_v2 and move to the next value; else loop.
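The stage-variable logic above can be sketched outside DataStage with awk, using the sample series from the question:

```shell
# For each value after the first, compare it with the previous value
# and print "greater" or "lesser" accordingly.
printf '100\n200\n150\n400\n' |
  awk 'NR > 1 { print ($1 > prev) ? "greater" : "lesser" } { prev = $1 }'
```

For 100, 200, 150, 400 this prints greater, lesser, greater.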

Datastage job scenario question

Asked By: premox5 | Asked On: Feb 13th, 2014

My input has a unique column ID with the values 10, 20, 30, ... How can I get the first record in one output file, the last record in another output file, and the rest of the records in a 3rd output file?

Answered by: Lakshman on: May 27th, 2014

In the usual case, we would use sort, filter and target files. But in this scenario, as the column is unique, we cannot expect duplicates. Hence take the file of records into a Filter stage and provide the "Where Clause" as:

i) = 10
ii) > 10 and

Answered by: Muralidhar Bolla on: Apr 13th, 2014

In a Transformer, using constraints, we can achieve this:
1) link 1 --> @INROWNUM = 1
2) link 2 --> LastRow()
3) link 3 --> tick the otherwise condition
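For a quick check outside DataStage, the same three-way split can be sketched in awk (the directory, file names, and sample ids are made up):

```shell
# Split ids into first / rest / last output files.
dir=$(mktemp -d)
printf '10\n20\n30\n40\n50\n' > "$dir/ids.txt"

awk -v d="$dir" '
  NR == 1 { print > (d "/first.txt"); next }  # first record
  NR > 2  { print prev > (d "/rest.txt") }    # flush the buffered middle record
          { prev = $0 }                       # buffer the current record
  END     { print prev > (d "/last.txt") }    # last record goes out at the end
' "$dir/ids.txt"

cat "$dir/first.txt" "$dir/rest.txt" "$dir/last.txt"
```

Buffering one record lets awk decide only at end-of-file which record was last, mirroring what LastRow() does inside the Transformer.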

Datastage job scenario question

Asked By: Boopathy Srinivasan | Asked On: May 18th, 2011

Input file A contains 1,2,3,4,5,6,7,8,9,10. Input file B contains 6,7,8,9,10,11,12,13,14,15. Output file X should contain 1,2,3,4,5; output file Y should contain 6,7,8,9,10; output file Z should contain 11,12,13,14,15. How can we do this in a single DataStage job in PX? Could you please give me the logic to implement it?


Editorial / Best Answer

Answered by: vinod chowdary

Answered On : Jul 27th, 2011

Hello guys, I would like to solve this by using the Change Capture stage. First, I use A as the source and B as the reference; both are connected to the Change Capture stage. The Change Capture stage is connected to a Filter stage and then to the targets X, Y and Z. In the Filter stage: change code = 2 (deletes) goes to X [1,2,3,4,5]; change code = 0 (copies) goes to Y [6,7,8,9,10]; change code = 1 (inserts) goes to Z [11,12,13,14,15].

Answered by: ghost on: May 17th, 2014

Create one PX job. Source file = seq1 (1,2,3,4,5,6,7,8,9,10). 1st lookup = seq2 (6,7,8,9,10,11,12,13,14,15): matching recs --> o/p 1 (6,7,8,9,10); non-matching records --> o/p 2 (1,2,3,4,5). 2nd lookup: s...

Answered by: premox5 on: Jan 31st, 2014

If the input files' metadata is the same, you can use one Sequential File stage to read both input files, then use a Filter stage and load the records into the target files.

Datastage - delete header and footer on the source sequential

Asked By: srinivas | Asked On: Nov 3rd, 2007

How do you delete the header and footer on the source sequential file, and how do you create a header and footer on the target sequential file using DataStage?

Answered by: Kalai on: May 7th, 2014

Use "Output --> Properties --> Options --> Filter --> add a sed command here" to delete the header and footer records.

Answered by: on: Jul 22nd, 2008

By using the UNIX sed command we can delete the header and footer, i.e. for the header: sed '1d' (delete the first line), and for the footer: sed '$d' (delete the last line).
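The sed addresses can be verified on a sample file (the HEADER/FOOTER markers and file are made up):

```shell
# A sample file with one header row and one footer row.
src=$(mktemp)
printf 'HEADER\nrow1\nrow2\nFOOTER\n' > "$src"

sed '1d' "$src"       # drop the header (first line)
sed '$d' "$src"       # drop the footer (last line)
sed '1d;$d' "$src"    # drop both, leaving only the data rows
```

The combined form, sed '1d;$d', is the one usually pasted into the Sequential File stage's filter option.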

Datastage scenario question

Asked By: NaveenKrish | Asked On: Feb 13th, 2014

A sequence is calling activity 1, activity 2 and activity 3. While running, activities 1 and 2 finished but activity 3 got aborted. How can I design the sequence so that it runs from activity 2 when I restart the sequence?

Answered by: Mallikarjuna_G on: May 7th, 2014

To make the job re-run from activity 3, we need to introduce restartability into the sequence job. For this, the points below have to be taken care of in the job sequence. Adding checkpoints: checkpoints have t...

Answered by: Ritwik on: Apr 21st, 2014

You have to check the "Do not checkpoint run" checkbox for activity 2. If you set the checkbox for a job, that job will be re-run if any job later in the sequence fails and the sequence is restarted.

Data granularity

Asked By: manju_thanneeru | Asked On: Jul 13th, 2010

What is data granularity? Explain.

Answered by: pari on: Apr 26th, 2014

Explain data granularity and how it is applicable to a data warehouse?

Answered by: shri on: Oct 18th, 2011

I am adding one more point to the above answer. Granularity refers to the level of detail of the data stored in any table.
Example: Year > Month > Week > Day > Hour

In the same manner, real-world projects also have granularity in their data.





