-
Oracle Connector Stage and Oracle Enterprise Stage
1. What is the difference between the Oracle Connector stage and the Oracle Enterprise stage? 2. If we can achieve the Oracle Enterprise stage's tasks using the Oracle Connector stage, why does the Oracle Enterprise stage exist?
-
Delete Duplicates Using Transformer
1. Without using stage variables, how can we delete duplicates using a Transformer? 2. If we remove duplicates using a Transformer, what is the minimum number of stage variables required?
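A Transformer-based dedupe relies on the input being sorted on the key and on comparing each row's key with the previous row's (which is what the stage variable holds). The same logic, sketched in Python with hypothetical column names:

```python
def remove_duplicates(rows, key):
    """Keep only the first occurrence of each key value, assuming
    the rows arrive sorted on that key (as a Transformer dedupe requires)."""
    prev = object()  # sentinel: no previous key seen yet
    out = []
    for row in rows:
        if row[key] != prev:   # key changed -> first row of a new group
            out.append(row)
        prev = row[key]        # remember the current key for the next row
    return out

rows = [{"id": 1}, {"id": 1}, {"id": 2}, {"id": 3}, {"id": 3}]
print(remove_duplicates(rows, "id"))
```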
-
Load One Input Column's Delimited Values into Different Target Columns
I have a column X holding 3 delimited values, A;B;C. How can I load these values into 3 different columns in the target? And if tomorrow I receive 100 values in that same single column (same delimiter), I need to load them into 100 target columns respectively.
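In DataStage this is typically done with `Field()` derivations in a Transformer or with a Column Import stage; as a language-neutral sketch of the underlying logic (the `col_N` output names are made up):

```python
def split_to_columns(value, delimiter=";"):
    """Split one delimited input value into N separate target columns."""
    parts = value.split(delimiter)
    # works unchanged whether there are 3 values or 100
    return {f"col_{i}": part for i, part in enumerate(parts, start=1)}

print(split_to_columns("A;B;C"))
```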
-
Handle Rejects in Transformer
How do we handle rejects in a Transformer, and how many reject links does a Transformer support?
-
Minimum Number of Input and Output for Lookup
What is the minimum number of input and output links required for a Lookup stage?
-
Identify Normal and Sparse Lookup in a DataStage Job
By looking at a DataStage job, how will you identify which Lookup is a Normal lookup and which one is a Sparse lookup?
-
Change the Configuration file during Run-time
Can we change the configuration file at run time? If so, how?
-
Implement SCD stage using Lookup Stage
Can we implement the SCD stage's functionality using a Lookup stage? If yes, how?
-
Split the Input Columns into Different Targets
I have a source file with 4 columns. How can I store the first 2 columns' values in one target and the next 2 columns' values in another target? Can anyone suggest how I can achieve this both with the Copy stage and without it?
-
Merge Two Columns into One Column in Target
I have a comma-delimited file with columns C1,C2,C3,C4,C5. In the target I want to store the first two columns' values in a single column, like below:

Input:          C1 C2 C3 C4 C5
Output I need:  C1(C1 & C2 value) C3 C4 C5

That means my input has 5 columns and my output will have 4 columns. Can anyone suggest how I can achieve this in a DataStage job?
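In the job itself this is usually a single Transformer derivation concatenating C1 and C2 (DataStage's `:` operator) with C3..C5 mapped straight through; a minimal Python sketch of the row-level logic:

```python
def merge_first_two(cols):
    """Concatenate the first two column values into one output column
    and pass the remaining columns through unchanged: 5 in, 4 out."""
    return [cols[0] + cols[1]] + cols[2:]

print(merge_first_two(["c1", "c2", "c3", "c4", "c5"]))
```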
-
dsjob run command on the Unix platform
We are using the below command in Unix to run a DataStage job: dsjob -run -mode NORMAL project_name job_name. When we execute this command in Unix, what exactly does it return?
-
Track Source File name in Target File
I have 5 source files. In the target I need to write them into a single file, but the output file must carry the corresponding input file name, i.e. show which records came from which source file. How can I achieve this?
-
Print Minimum & Maximum Salary of Respective Employee
I have an input file in the below format:

NAME SAL
---- ----
A    4000
B    3000
C    8000
A    2000
B    7000
C    5000
B    2000
C    9000
A    1000

If I use Sort ---> Aggregator (group by NAME), it will give 3 columns, NAME, MAX(), MIN():

NAME MAX() MIN()
---- ----- -----
A    4000  1000

But my requirement is to generate the output which will give the maximum & minimum salary...
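The aggregation itself (the part of the requirement visible before the question is cut off) is a group-by min/max; on the sample data it can be sketched in Python as:

```python
def min_max_by_name(rows):
    """Return each name's (min, max) salary, mirroring a
    Sort -> Aggregator (group by NAME) design."""
    result = {}
    for name, sal in rows:
        lo, hi = result.get(name, (sal, sal))
        result[name] = (min(lo, sal), max(hi, sal))
    return result

rows = [("A", 4000), ("B", 3000), ("C", 8000), ("A", 2000), ("B", 7000),
        ("C", 5000), ("B", 2000), ("C", 9000), ("A", 1000)]
print(min_max_by_name(rows))
```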
-
Load 10 Input files into 10 Target tables at a time
I have 10 input files, F1,F2...F10, and I need to load these 10 input files into 10 target output tables, T1,T2...T10. That is the scenario for 10 tables, but in future, if I receive 100 input files, I will need to load them into the respective 100 target tables. After loading the input files into the target tables, I need a confirmation in the respective target tables (by input file name). Please...
-
Generate the Occurrence Number in Output File
How do I generate the occurrence number in the output file for the respective records in the input file? I need a count in the target file of how many times the same record appears in the input file.

INPUT
-----
ID NAME LOC
20 B    Y
10 A    X
20 B    Y
30 C    Z
10 A    X
10 A    X
20 B    Y
20 B    Y
30 C    Z

OUTPUT I NEED
-------------
ID NAME LOC OCCURRENCE
10 A    X   ...
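In DataStage this maps to a Sort plus an Aggregator counting rows grouped on all three columns; the equivalent counting logic in Python:

```python
from collections import Counter

def add_occurrence(rows):
    """Emit each distinct (ID, NAME, LOC) record once, with an extra
    OCCURRENCE value: how many times it appeared in the input."""
    counts = Counter(rows)
    return [rec + (n,) for rec, n in counts.items()]

rows = [(20, "B", "Y"), (10, "A", "X"), (20, "B", "Y"), (30, "C", "Z"),
        (10, "A", "X"), (10, "A", "X"), (20, "B", "Y"), (20, "B", "Y"),
        (30, "C", "Z")]
print(add_occurrence(rows))
```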
-
How to Automate a DataStage Job without Using a Sequencer
Without using a Sequencer, how do we design a DataStage job that normally receives an input file to run? If tomorrow we do not receive any input file, the job should not fail; it should run successfully without any warnings.
-
In Which Case Will You Go for a Star Schema and in Which for a Snowflake Schema
In your project, in which case will you go for star schema design and in which case for snowflake schema design?
-
Load a Date Field Value from a Sequential File to a Database Table without Using a Transformer
I have an input file in the below format:

Name DOB
---- ---
A    10-05-1990
B    07-12-2000

Q1 -> How will we load the above file's data into a target database table in the simplest way without using a Transformer?
Q2 -> How will we load the Date column split into 3 columns (DD|MM|YYYY)? Like below:

Name DOB
---- ---
A    10|05|1990
B    07|12|2000
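Whatever stage carries it out in the job, the field-level logic of Q2 is just a delimiter swap within DOB; a Python sketch (DD-MM-YYYY input assumed, as in the sample):

```python
def redelimit_dob(dob, out_sep="|"):
    """Turn a DD-MM-YYYY date into DD|MM|YYYY (the split format in Q2)."""
    dd, mm, yyyy = dob.split("-")
    return out_sep.join((dd, mm, yyyy))

print(redelimit_dob("10-05-1990"))
```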
-
Project Data Model
Basically which data model do we use in the project?
-
Generate Surrogate Key in Database
How will we generate a Surrogate Key in the database, not in DataStage?
-
Which table will load first - Fact or Dimension Table?
In a data warehouse, which table is loaded first, and why: the Fact table or the Dimension table?
-
Change Partition to Auto in Join stage
What will happen if we change the partitioning to Auto in a Join stage?
-
Store Rejected Record
In a Join stage, how can we store the rejected records in a file?
-
How to Load 2 Files' Data into a Single File without Using a Join Stage
I have 2 files with different metadata and no common key between them. How can I load the data from both files into a single file without using a Join stage?
-
How Can We Change a DataStage Job from Sequential Mode to Parallel Mode
Suppose I have designed a DataStage job using a Sequential File stage; it will run in sequential mode. How can I run it in parallel mode?
-
How we can implement Bulk Collect methodology in Datastage
I faced this interview question recently. In the database we use the Bulk Collect concept to load a bunch of records at a time. How can we achieve the same process in DataStage? Is there a particular stage or another methodology we can use? Can anyone help me with this? Thanks in advance. Aloka