GeekInterview.com

DataStage Interview Questions

Showing Questions 1 - 20 of 633 Questions

Datastage job

Asked By: hema123 | Asked On: Jul 11th, 2014

I have a sequence of jobs in DataStage which is taking more than 4 hrs when it is supposed to complete in less than 1 hr. What could be the possible reasons for it taking so much longer than expected?

What are routines and where/how are they written and have you written any routines before?   

Asked By: Interview Candidate | Asked On: May 24th, 2005

Routines are stored in the Routines branch of the DataStage Repository, where you can create, view or edit them. The following are the different types of routines: 1) transform functions 2) before/after job subroutines 3) job control routines

Answered by: Chalapathirao Maddali on: Jul 11th, 2014

DataStage has 2 types of routines. Below are the 2 types: 1. Before/After subroutines 2. Transformer routines/functions. Before/After subroutines: these are built-in routines which can be called in...

Answered by: bvrp on: Nov 1st, 2005

Routines are stored in the Routines branch of the DataStage Repository, where you can create, view, or edit them using the Routine dialog box. The following program components are classified as ...

What's the difference between an operational data store (ODS) & a data warehouse?

Asked By: Interview Candidate | Asked On: Jun 20th, 2005

Answered by: Ramyapriya Sudhakar on: Jul 9th, 2014

Operational data store: unlike a real EDW, its data is refreshed in near real time and is used for routine business activity. It is used as an interim logical area for the data warehouse. This is the pla...

Answered by: Dharmendra on: Sep 22nd, 2006

An operational data store (or "ODS") is a database designed to integrate data from multiple sources to facilitate operations, analysis and reporting. Because the data originates from multiple sources,...

Purpose of using user defined environment variables and parameter sets

Asked By: google_yahoo | Asked On: Dec 28th, 2012

What is the purpose of using user-defined environment variables and parameter sets? I am a little bit confused. Could anyone explain this to me in detail?

Answered by: Charmi on: Jul 1st, 2014

Hi,

A parameter set is used when you want a set of user-defined variables to be used many times in a project.

For example, variables like server name, user ID and password can be added to a parameter set, and that set can be used across jobs instead of including the three variables every time.

What is the architecture of your DataStage project?

Asked By: Sam Geek | Asked On: Oct 19th, 2013

I came across this question many times in interviews. Specifically, what can I answer? Please help.

Answered by: shiv on: Jun 11th, 2014

The above answer describes the architecture of DataStage, not the architecture of a project. Project architecture would be like: You have: 1 Source --------> 1 Staging Area ----...

Answered by: Dileep J on: Jan 29th, 2014

There are mainly 3 parts: 1. DS Engine 2. Metadata Repository 3. Services. If these 3 tiers are installed on a single server, it is called a single-tier architecture. If the DS Engine is on 1 server and the Metad...

What are hierarchies? Examples?

Asked By: upendarkm | Asked On: Apr 11th, 2012

Answered by: Rajesh B on: Jun 8th, 2014

Hi, a hierarchy is nothing but a parent-and-child relationship.
Let's say country is the parent --> state is its child --> city is a child of that --> house no. is a child of that.

So if anyone asks for a hierarchy, you can say country --> state --> city --> house is the hierarchy relationship.

How to extract job parameters from a file?

Asked By: ramamulas | Asked On: Oct 27th, 2011

Answered by: karthick on: May 30th, 2014

The parameter file will have a comma delimiter.
Use a command such as cat file1.txt | cut -d, -f1 | tr -d '\n' to extract the first field.
Use an Execute Command activity to extract each parameter, then finally pass the values to the actual job.
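
A minimal shell sketch of that approach, assuming the file params.txt holds a single comma-delimited record and the job is started through the dsjob command line (the file, parameter, project and job names are all illustrative):

    # read two comma-separated values from the parameter file
    PARAM1=$(cut -d, -f1 params.txt)
    PARAM2=$(cut -d, -f2 params.txt)

    # pass them to the job at run time
    dsjob -run -param MyParam1="$PARAM1" -param MyParam2="$PARAM2" MyProject MyJob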

Answered by: Mallikarjuna_G on: Aug 18th, 2013

Write a server job routine that takes the file as input and reads the parameters from it. If the file contains more than one parameter, each on a separate line, then your routine should concatenate them...

Scenario based question

Asked By: Sam Geek | Asked On: Oct 24th, 2013

How do I find whether the next value in a column is incrementing or not? For example, with 100 200 300 400: if the current value is greater than the previous value then print "greater"; if lesser, print "lesser". For example, with 100 200 150 400, here 150 is lesser than the previous value 200, so "lesser" should be printed.

Answered by: karthick on: May 30th, 2014

Keep the previous value in one stage variable (stg_v1) and the present value in another (stg_v2) and compare the two; if greater, then set stg_v1 = stg_v2 and move to the next value, else loop.
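
Outside DataStage, the same check can be sketched with awk on a one-column flat file (values.txt is an illustrative name):

    # compare each value with the previous one and report the result
    awk 'NR > 1 { if ($1 > prev) print $1 " greater"; else print $1 " lesser" } { prev = $1 }' values.txt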

Datastage job scenario question

Asked By: premox5 | Asked On: Feb 13th, 2014

My input has a unique column, id, with the values 10, 20, 30, ... How can I get the first record in one o/p file, the last record in another o/p file and the rest of the records in a 3rd o/p file?

Answered by: Lakshman on: May 27th, 2014

In the usual case, we would use sort, filter and target file stages. But in this scenario, as the column is unique, we cannot expect duplicates. Hence take the file of records into a Filter stage and provide the "Where Clause" as

i) =10
ii) >10 and

Answered by: Muralidhar Bolla on: Apr 13th, 2014

In a Transformer, using constraints we can achieve this (a rough shell equivalent is sketched below):
1) Link 1 --> @INROWNUM = 1
2) Link 2 --> LastRow()
3) Link 3 --> tick the Otherwise condition
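
As a rough shell equivalent of those three constraints (input.txt and the output names are illustrative):

    head -n 1 input.txt   > first_record.txt    # first record
    tail -n 1 input.txt   > last_record.txt     # last record
    sed '1d;$d' input.txt > remaining.txt       # everything except the first and last record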

Datastage job scenario question

Asked By: Boopathy Srinivasan | Asked On: May 18th, 2011

Input file A contains 1,2,3,4,5,6,7,8,9,10. Input file B contains 6,7,8,9,10,11,12,13,14,15. Output file X contains 1,2,3,4,5; output file Y contains 6,7,8,9,10; output file Z contains 11,12,13,14,15. How can we do this in a single DS job in PX? Could you please give me the logic to implement it?

Answered by: ghost on: May 17th, 2014

Create one PX job. Source file = seq1 (1,2,3,4,5,6,7,8,9,10). 1st lookup = seq2 (6,7,8,9,10,11,12,13,14,15). Output: matching recs -> o/p 1 (6,7,8,9,10); non-matching records -> o/p 2 (1,2,3,4,5). 2nd lookup: s...

Answered by: premox5 on: Jan 31st, 2014

If the input files' metadata is the same, you can use one Sequential File stage to read both input files, then use a Filter stage and load the records to the target files.
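
For reference, the same A-only / common / B-only split can be sketched outside DataStage with the Unix comm command, assuming one value per line in a.txt and b.txt (the file names are illustrative):

    sort a.txt > a.sorted
    sort b.txt > b.sorted
    comm -23 a.sorted b.sorted > x.txt    # only in A   (1-5)
    comm -12 a.sorted b.sorted > y.txt    # in both     (6-10)
    comm -13 a.sorted b.sorted > z.txt    # only in B   (11-15)

comm needs lexicographically sorted input, so add a final sort -n on each output if numeric order matters.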

Datastage - delete header and footer on the source sequential

Asked By: srinivas | Asked On: Nov 3rd, 2007

How do you delete the header and footer on the source sequential file, and how do you create a header and footer on the target sequential file using DataStage?

Answered by: Kalai on: May 7th, 2014

"Output --> Properties --> Option --> Filter --> add sed command here" to delete header and footer records

Answered by: leelasankar.pr on: Jul 22nd, 2008

By using the UNIX sed command we can delete the header and footer, i.e. for the header: sed -n '1!p' and for the footer: sed -n '$!p'
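
A short shell sketch covering both halves of the question (the file names and the header/footer text are illustrative):

    sed '1d;$d' source.txt > body.txt                                # drop header and footer from the source
    { echo "HEADER"; cat body.txt; echo "FOOTER"; } > target.txt     # add header and footer to the target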

Datastage scenario question

Asked By: NaveenKrish | Asked On: Feb 13th, 2014

A sequence is calling activity 1, activity 2 and activity 3. While running, activities 1 and 2 finished but 3 got aborted. How can I design the sequence such that it runs from activity 2 when I restart the sequence?

Answered by: Mallikarjuna_G on: May 7th, 2014

To make the job re-run from activity 3, we need to introduce restartability in the sequence job. For this, the below points have to be taken care of in the job sequence. Adding checkpoints: checkpoints have t...

Answered by: Ritwik on: Apr 21st, 2014

You have to check the "Do not checkpoint run" checkbox for activity 2. If you set that checkbox for a job, that job will be re-run if any of the jobs later in the sequence fails and the sequence is restarted.

Data granularity

Asked By: manju_thanneeru | Asked On: Jul 13th, 2010

What is data granularity? Explain.

Answered by: pari on: Apr 26th, 2014

Explain data granularity and how it is applicable to a data warehouse.

Answered by: shri on: Oct 18th, 2011

I am adding one more point to the above answer. Granularity refers to the level of detail of the data stored in any table.
Example: Year > Month > Week > Day > Hour

In the same manner, real-time projects also have granularity in their data.

How to get the last day of the current month?

Asked By: naveen.chinthala | Asked On: Sep 12th, 2013

I have explored all the available functions in the transformer stage, but could not find the exact function to get the last day of the current month. Can you please show me which function is available for this logic.

Answered by: venueksh on: Apr 8th, 2014

Oracle has the LAST_DAY function:

SELECT LAST_DAY(TO_DATE('07/04/2014', 'MM/DD/YYYY')) FROM dual;
SELECT LAST_DAY(SYSDATE) FROM dual;
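
Outside the database, a shell sketch with GNU date (the -d option is a GNU extension, so this assumes a Linux-style platform):

    # last day of the current month = first day of this month + 1 month - 1 day
    date -d "$(date +%Y-%m-01) +1 month -1 day" +%Y-%m-%d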


How to get top five rows in datastage?

Asked By: naveen.chinthala | Asked On: Jan 3rd, 2013

How to get the top five rows in DataStage? I tried to use the @INROWNUM and @OUTROWNUM system variables in a Transformer, but they are not giving unique sequential numbers for every row. Please help! Thanks in advance!

Answered by: Kuldeep on: Mar 12th, 2014

If you are using @INROWNUM and @OUTROWNUM then your output will vary from node to node; it depends on how many nodes (4 or 2, etc.) your project is configured with.

Answered by: GGGGGGG on: Oct 26th, 2013

Using the Head stage we can retrieve the top 5 records.
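
For a flat file, the same result can be sketched with plain head, e.g. as the filter command on a Sequential File stage (input.txt is an illustrative name):

    head -n 5 input.txt > top5.txt

If the Head stage is used in a parallel job, keep in mind that its row count applies per partition, so run it sequentially (or collect the data first) when exactly five rows overall are needed.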

How to separate two different datatypes and load them into two files?

Asked By: premox5 | Asked On: Jan 21st, 2014

I have a source file1 consisting of two datatypes. file1: no (integer) 1 2 3 & dept (char) cs it ie. I want to separate these two datatypes and load them into the target files file2 & file3. How can I do this in DataStage, and by using which stage?

Answered by: dileep Janga on: Jan 31st, 2014

I think this question is meant to confuse the job aspirant by mentioning datatypes and all. It's very simple: File1 --> 2 columns: 1. NO (Integer) 2. DEPT (Char). Target1: NO (Integer), Target2: DEPT (Char). Take ...

Answered by: Lubna Khan on: Jan 27th, 2014

In the Transformer stage there are the functions IsInteger and IsChar; we can identify: if IsInteger(column name) then file1 else file2.
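
If the values arrive mixed in a single-column flat file, the split can be sketched in shell with a simple all-digits test (the file names follow the question; the regex is an assumed definition of "integer"):

    # all-digit rows go to file2, everything else to file3
    awk '/^[0-9]+$/ { print > "file2"; next } { print > "file3" }' file1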

What are the uses of parameters in DataStage?

Asked By: premox5 | Asked On: Dec 19th, 2013

Answered by: Lubna Khan on: Jan 27th, 2014

By using parameters we can avoid hardcoding and assign values at runtime.

What is the exact difference between the Join, Merge and Lookup stages?

Asked By: Phantom | Asked On: Jan 20th, 2006

Answered by: vij on: Jan 6th, 2014

The default partitioning technique is Auto in all of them; please check once.

Answered by: mallika_chaithu on: Jun 8th, 2011

Hope the below one helps you. Join stage: 1) It has n input links (one being primary and the remaining being secondary links), one output link and there is no reject link. 2) It has 4 join operations: inner...

Remove leading zero

Asked By: goodfriendsri | Asked On: Dec 19th, 2010

How to remove leading zeros in data and transform data to target?

Answered by: Sunitha on: Dec 2nd, 2013

The main differences between 8.1 and 8.5 are: 8.5 has input looping and output looping; in 8.5 saving, editing and compiling are 40% faster; 8.5 has functions like LastRow, LastInGroup and iteration system...

Answered by: Rupesh Agrawal on: Nov 8th, 2013

Convert the data to an integer and then convert it back to a string.
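
Two shell sketches of that idea on a one-column file (data.txt is an illustrative name): the awk line mirrors the integer round-trip, the sed line strips the zeros as text.

    awk '{ printf "%d\n", $1 }' data.txt    # "00042" -> "42" via a numeric round-trip
    sed 's/^0*\([0-9]\)/\1/' data.txt       # delete leading zeros but keep a lone "0"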
