DataStage Server job questions:

1. What is the difference between a hashed file and a sequential file? What is modulus?
2. What are the Iconv and Oconv functions?
3. How can we join an Oracle source and a sequential file?
4. How can we implement Slowly Changing Dimensions in DataStage?
5. How can we implement a lookup in DataStage Server jobs?
6. What third-party tools are used with DataStage?
7. What is the difference between a routine, a transform, and a function?
8. What are job parameters?
9. What is a plug-in?
10. How can we improve the performance of DataStage jobs?
11. How can we create containers?
12. What are system variables?
13. What is the use of Usage Analysis?
14. In what different ways can a project be moved to production? (For example, export/import and version control.)
15. What database does the DataStage repository use? Answer: the Universe database.
16. How is scheduling done in the project?
17. Which version of DataStage is used in the project?
18. What performance tuning is required when dealing with large data volumes?
19. What does the reject option in the Transformer do?
20. What is the architecture of DataStage?
21. How do you define and use job parameters?
22. What are stage variables, system variables, and environment variables?
23. How do you use routines in DataStage?
24. What is the difference between a shared container and a local container?
25. How do you connect to Oracle?
26. Explain an ETL process that you have developed.
27. What is a hashed file? What types of hashed files are there?
28. If you make a change to a shared container, will it be reflected in every job that uses that shared container?
29. Have you written any custom routines in your project? If so, explain.
30. How do you get log information into a file?
31. What are before-job and after-job subroutines? When do you use them?
32. How do you back up and restore a project?
33. What is Clear Status File and when do you use it?
34. What is Cleanup Resources and when do you use it?
35. Can I join a flat file and Oracle and load the result into Oracle? Is this possible?
36. If a load into a target suddenly fails partway through, how can you restart loading from the records that were left?
37. What general problems have you faced in DataStage?
38. What various reports can be generated using DataStage?
39. How do you remove blank spaces from data?
40. What are active and passive stages?
41. Which stages have you used in your project?
42. Can DataStage generate test cases?
43. What is the difference between a hashed file and a sequential file?
44. What is the difference between a transform and a routine?
45. What is a Sequencer?
46. How do you take a backup of a project?

These are some DataStage PX (parallel) job questions which can be asked in interviews:

1. What are the types of parallel processing?
2. What are SMP (Symmetric Multiprocessing) and MPP (Massively Parallel Processing)?
3. What are CPU-limited, memory-limited, and disk-I/O-limited jobs?
4. Can one combine pipeline and partition parallelism?
5. What are the advantages of PX over server jobs?
6. Is it possible to create a user-defined stage in PX?
7. Can I use a hashed file in PX?
8. What is the Surrogate Key stage?

1. What is the use of APT_DUMP_SCORE? Ans: To get messages in the logs, such as the number of processes and the number of nodes used.
2. What are the four types of joins possible in the Join stage? Ans: Inner, left outer, right outer, and full outer.
3. What are the components of APT_CONFIG_FILE? Ans: Nodes, fastname, pools, and resources.
4. What points need to be considered while creating the config file? Ans: The available nodes, CPU time, and memory; what other processes will execute on the same nodes; and any configuration restrictions (e.g. the database only runs on certain nodes and ETL cannot run on them). Get a breakdown of the resource usage, and establish whether the hardware configuration is SMP, cluster, or MPP.
5. When are wrappers created? Ans: Only for executable commands (UNIX, DOS).
6. When are build-ops created? Ans: When more functionality or complex logic is needed.
7. When are custom stages created? Ans: When new operators are needed that are not in Enterprise Edition (EE).
8. What are the different Job Sequencer stages?
9. What are the Iconv and Oconv functions?
10. Can we implement Slowly Changing Dimensions in DataStage?
11. How is parallelism executed?
12. What is RCP (runtime column propagation)?
13. What is Orchestrate?
14. What is the difference between the Join, Merge, and Lookup stages?
15. What is a data set?
16. What is the difference between a data set, a file set, and a lookup file set?

Questions on data warehousing concepts:

1. What is a data warehouse?
2. What is the difference between a data warehouse and a data mart?
3. What is a star schema?
4. What is a snowflake schema?
5. What are facts and dimensions?
6. What is a surrogate key?
7. What is normalisation? Explain third normal form.
8. What is the difference between OLTP and OLAP?
9. Are you involved in data modeling? If yes, which tool or technique are you using?
10. Which schema modeling techniques have you used?
11. What do you mean by a summary table?
12. What are degenerate dimensions?
13. What is a factless fact table?

Oracle questions based on data warehousing:

1. What is parallel execution?
2. What are bitmap and B-tree indexes? Explain local vs. global indexes.
3. What is a materialized view?
4. What is the page size/array size in Oracle?
5. What are integrity constraints?
6. How can one tune SQL statements in Oracle?
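The APT_CONFIG_FILE components mentioned in the answers above (node, fastname, pools, resource) fit together as in this minimal two-node sketch; the host name and disk paths are placeholders, not values from the source:

```
{
    node "node1"
    {
        fastname "etl_host"
        pools ""
        resource disk "/data/ds/node1" {pools ""}
        resource scratchdisk "/scratch/node1" {pools ""}
    }
    node "node2"
    {
        fastname "etl_host"
        pools ""
        resource disk "/data/ds/node2" {pools ""}
        resource scratchdisk "/scratch/node2" {pools ""}
    }
}
```

Adding or removing node blocks changes the degree of partition parallelism without modifying the job design, which is why the config file is central to PX performance tuning.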
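The four join types named in the Join stage answer above behave as in standard SQL. A small Python sketch, using invented customer/order rows keyed by an ID, shows which keys survive each join:

```python
# Illustration of the four Join stage types; the rows are invented for the example.
left = {1: "Alice", 2: "Bob", 3: "Carol"}           # key -> customer name
right = {2: "Order-A", 3: "Order-B", 4: "Order-C"}  # key -> order

# Inner: only keys present on both inputs.
inner = {k: (left[k], right[k]) for k in left.keys() & right.keys()}
# Left outer: every left key, with None where the right side has no match.
left_outer = {k: (left[k], right.get(k)) for k in left}
# Right outer: every right key, with None where the left side has no match.
right_outer = {k: (left.get(k), right[k]) for k in right}
# Full outer: the union of keys from both inputs.
full_outer = {k: (left.get(k), right.get(k)) for k in left.keys() | right.keys()}

print(sorted(inner))       # [2, 3]
print(sorted(full_outer))  # [1, 2, 3, 4]
```

The same key-matching rules apply whichever stage performs the join; the Join/Merge/Lookup question is really about memory use and reject handling, not about these semantics.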


Answer by Vinod, Feb 12th, 2007:
The records in a sequential file are organized serially, one after another, though the records themselves may be ordered or unordered. The hashed file access method instead scatters the records throughout the RMS data file. When a hashed RMS file is created, the maximum number of records it will contain must be declared. When a record is added, its primary key value is transformed into a number between one and the number of records in the file, and RMS attempts to place the record at that location. If a record already exists at that location, a collision has occurred and the record must be placed elsewhere.
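The placement-and-collision behaviour described above can be sketched in Python. The modulus (number of slots) and the linear-probe collision policy here are illustrative assumptions, not the actual RMS algorithm:

```python
# Sketch of hashed-file record placement; the probing policy is an assumption
# for illustration, not the real RMS collision-handling algorithm.
MODULUS = 7  # declared maximum number of record slots (kept small for the demo)

def place(table, key):
    """Hash the primary key to a slot; on collision, probe for the next free slot."""
    slot = hash(key) % MODULUS            # key -> number in [0, MODULUS)
    for i in range(MODULUS):
        probe = (slot + i) % MODULUS      # linear probing after a collision
        if table[probe] is None:
            table[probe] = key
            return probe
    raise RuntimeError("hashed file is full")

table = [None] * MODULUS
for key in ("CUST001", "CUST002", "CUST003"):  # hypothetical primary keys
    place(table, key)
print(sum(1 for s in table if s is not None))  # 3 records placed
```

Reading a record reverses the process: hash the key, go straight to the computed slot, and probe only if a collision occurred at write time, which is what makes keyed lookups against a hashed file fast compared with scanning a sequential file.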
