What other performance tunings have you done in your last project to improve the performance of slow-running jobs?

  1. Staged the data coming from ODBC/OCI/DB2 UDB stages (or any database) on the server using hashed or sequential files, both for optimum performance and for data recovery in case the job aborts.
  2. Tuned the 'Array Size' and 'Rows per Transaction' values on the OCI stage for faster inserts, updates and selects.
  3. Tuned the 'Project Tunables' in the Administrator for better performance.
  4. Used sorted input data for the Aggregator stage.
  5. Sorted the data as much as possible in the database and reduced the use of the DataStage Sort stage for better job performance.
  6. Removed unused data from the source as early as possible in the job.
  7. Worked with the DBA to create appropriate indexes on tables for better performance of DataStage queries.
  8. Converted some of the complex joins and business logic in DataStage to stored procedures for faster job execution.
  9. If an input file has an excessive number of rows and can be split up, use standard logic to run jobs in parallel.
  10. Before writing a routine or a transform, make sure the required functionality is not already available in one of the standard routines supplied in the SDK or DS Utilities categories.
    Constraints are generally CPU intensive and take a significant amount of time to process. This is especially the case if the constraint calls routines or external macros; if it is inline code, the overhead is minimal.
  11. Try to place constraints in the 'Selection' criteria of the job itself. This eliminates unnecessary records before joins are made.
  12. Tuning should occur on a job-by-job basis.
  13. Use the power of DBMS.
  14. Try not to use a sort stage when you can use an ORDER BY clause in the database.
  15. Using a constraint to filter a record set is much slower than performing a SELECT … WHERE… in the database (see the SQL sketch after this list).
  16. Make every attempt to use the bulk loader for your particular database. Bulk loaders are generally faster than using ODBC or OLE.
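
As a rough illustration of points 7, 14 and 15 above, the filtering, sorting and index support can be pushed into the source database instead of being handled with DataStage constraints and Sort stages. The table, column and index names below are invented for the example:

    -- Supporting index agreed with the DBA (point 7); names are illustrative only.
    CREATE INDEX idx_sales_fact_date ON sales_fact (sale_date, region_code);

    -- Extraction query used in the database stage (points 14 and 15).
    SELECT sale_id,
           sale_date,
           region_code,
           sale_amount
    FROM   sales_fact
    WHERE  sale_date >= DATE '2005-01-01'   -- filter in the database, not in a constraint
      AND  region_code = 'EAST'
    ORDER  BY sale_date;                    -- sort in the database, not in a Sort stage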

Editorial / Best Answer

sistlasatish  

  • Member Since Nov-2005 | Nov 14th, 2005


  1. Minimise the use of the Transformer stage (use Copy, Modify, Filter or Row Generator instead).
  2. Use SQL code while extracting the data.
  3. Handle nulls explicitly (see the SQL sketch after this list).
  4. Minimise the warnings.
  5. Reduce the number of lookups in a job design.
  6. Use no more than 20 stages in a job.
  7. Use an IPC stage between two passive stages; this reduces processing time.
  8. Drop indexes before loading data into tables and recreate them after the load (a sketch also follows this list).
  9. Generally we cannot avoid lookups if the requirements make them compulsory.
  10. There is no hard limit on the number of stages (such as 20 or 30), but we can break the job into smaller jobs and use Dataset stages to store the intermediate data.
  11. The IPC stage is provided in server jobs, not in parallel jobs.
  12. Check the write cache of the hashed file. If the same hashed file is used both for lookup and as a target, disable this option.
  13. If the hashed file is used only for lookup, enable 'Preload to memory'. This will improve performance. Also, check the order of execution of the routines.
  14. Don't use more than 7 lookups in the same Transformer; introduce new Transformers if it exceeds 7 lookups.
  15. Use the 'Preload to memory' option on the hashed file output.
  16. Use 'Write to cache' on the hashed file input.
  17. Write into the error tables only after all the Transformer stages.
  18. Reduce the width of the input record - remove the columns you will not use.
  19. Cache the hashed files you are reading from and writing to. Make sure your cache is big enough to hold the hashed files.
  20. Use ANALYZE.FILE or HASH.HELP to determine the optimal settings for your hashed files.

       This also minimizes overflow on the hashed file.
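
To illustrate points 2 and 3 above (using SQL code during extraction and handling nulls), null columns can be given defaults in the extraction query itself so the Transformer does not need per-row IsNull() checks. The table and column names are invented for the example:

    SELECT cust_id,
           COALESCE(cust_name, 'UNKNOWN') AS cust_name,     -- default for null names
           COALESCE(credit_limit, 0)      AS credit_limit   -- default for null amounts
    FROM   customer_dim
    WHERE  cust_id IS NOT NULL;                             -- drop rows with no key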
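
Point 8 usually ends up as before-load and after-load SQL on the target stage or in a DBA script. A minimal sketch, assuming an invented index name and Oracle/DB2-style DROP INDEX syntax:

    -- Before the load: drop the index so the bulk of the load avoids index maintenance.
    DROP INDEX idx_sales_fact_date;

    -- (The DataStage job loads the table here.)

    -- After the load: rebuild the index once the data is in place.
    CREATE INDEX idx_sales_fact_date ON sales_fact (sale_date, region_code);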

  1. If possible, break the input into multiple threads and run multiple instances of the job.
  2. Staged the data coming from ODBC/OCI/DB2 UDB stages (or any database) on the server using hashed or sequential files, both for optimum performance and for data recovery in case the job aborts.
  3. Tuned the 'Array Size' and 'Rows per Transaction' values on the OCI stage for faster inserts, updates and selects.
  4. Tuned the 'Project Tunables' in the Administrator for better performance.
  5. Used sorted input data for the Aggregator stage.
  6. Sorted the data as much as possible in the database and reduced the use of the DataStage Sort stage for better job performance.
  7. Removed unused data from the source as early as possible in the job.
  8. Worked with the DBA to create appropriate indexes on tables for better performance of DataStage queries.
  9. Converted some of the complex joins and business logic in DataStage to stored procedures for faster job execution.
  10. If an input file has an excessive number of rows and can be split up, use standard logic to run jobs in parallel.
  11. Before writing a routine or a transform, make sure the required functionality is not already available in one of the standard routines supplied in the SDK or DS Utilities categories.
    Constraints are generally CPU intensive and take a significant amount of time to process. This is especially the case if the constraint calls routines or external macros; if it is inline code, the overhead is minimal.
  12. Try to place constraints in the 'Selection' criteria of the job itself. This eliminates unnecessary records before joins are made.
  13. Tuning should occur on a job-by-job basis.
  14. Use the power of the DBMS.
  15. Try not to use a Sort stage when you can use an ORDER BY clause in the database.
  16. Using a constraint to filter a record set is much slower than performing a SELECT … WHERE… in the database.
  17. Make every attempt to use the bulk loader for your particular database; bulk loaders are generally faster than ODBC or OLE (one bulk-load-style sketch follows this list).
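
The bulk loader mentioned in point 17 depends on the target database (for example sqlldr for Oracle, LOAD for DB2, bcp for SQL Server), and its control files are utility-specific rather than SQL. As one Oracle-flavoured illustration only, a direct-path insert behaves like a bulk load when the data is already staged in the database; an Oracle target is assumed and the table names are invented:

    -- Illustrative Oracle direct-path insert from a staging table;
    -- the APPEND hint bypasses conventional row-by-row insert processing.
    INSERT /*+ APPEND */ INTO sales_fact
    SELECT sale_id, sale_date, region_code, sale_amount
    FROM   sales_fact_stage;

    COMMIT;  -- a direct-path insert must be committed before the table can be queried again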
