
Thread: DataStage Parallel jobs Vs DataStage Server jobs

  1. #1
    Moderator
    Join Date
    Oct 2005
    Answers
    305

    DataStage Parallel jobs Vs DataStage Server jobs

    How are DataStage Parallel jobs different from DataStage Server jobs?

    NOTE: [This question was asked by prasanth]


  2. #2

    Re: DataStage Parallel jobs Vs DataStage Server jobs

    Does anyone know this?


  3. #3
    Junior Member
    Join Date
    Nov 2007
    Answers
    2

    Re: DataStage Parallel jobs Vs DataStage Server jobs

    Hi,

    I found the difference between Server and PX on one site. I hope this helps you guys.

    The basic difference between server and parallel jobs is the degree of parallelism that PX offers.

    Server job stages have no built-in partitioning and parallelism mechanism for extracting and loading data between stages. All you can do to enhance speed and performance in server jobs is to enable inter-process row buffering through the Administrator, which lets stages exchange data as soon as it is available on the link. You can also use the IPC stage, which lets one passive stage read data from another as soon as it is available; in other words, stages do not have to wait for the entire set of records to be read before passing them to the next stage. The Link Partitioner and Link Collector stages can be used to achieve a certain degree of partitioning parallelism.
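    To make the streaming idea behind inter-process row buffering and the IPC stage concrete, here is a minimal Python sketch (the stage names and rows are hypothetical); generators stand in for the link between two passive stages, so the downstream stage consumes each row as soon as the upstream stage produces it:

        def extract_stage():
            # Upstream "passive stage": yields rows one at a time.
            for i in range(5):
                yield {"id": i, "name": "row_%d" % i}

        def transform_stage(rows):
            # Downstream stage: handles each row as soon as it arrives,
            # without waiting for the entire record set to be read first.
            for row in rows:
                row["name"] = row["name"].upper()
                yield row

        # Rows flow through the "link" one at a time, like a pipe.
        for row in transform_stage(extract_stage()):
            print(row)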

    All of the features above, which have to be explicitly set up in server jobs, are built into DataStage PX. The PX engine runs on a multiprocessor system and takes full advantage of the processing nodes defined in the configuration file. Both SMP and MPP architectures are supported by DataStage PX.
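    For reference, a PX configuration file defines one logical node per entry. A minimal two-node layout might look like the sketch below; the host name and disk paths are hypothetical:

        {
            node "node1"
            {
                fastname "etl-host"
                pools ""
                resource disk "/ds/data/node1" { pools "" }
                resource scratchdisk "/ds/scratch/node1" { pools "" }
            }
            node "node2"
            {
                fastname "etl-host"
                pools ""
                resource disk "/ds/data/node2" { pools "" }
                resource scratchdisk "/ds/scratch/node2" { pools "" }
            }
        }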
    PX takes advantage of both pipeline parallelism and partitioning parallelism. Pipeline parallelism means that as soon as data is available between stages (in pipes or links), it can be exchanged without waiting for the entire record set to be read. Partitioning parallelism means that the record set is split into smaller sets and processed on different nodes (logical processors); for example, with 100 records and 4 logical nodes, each node processes 25 records. This speeds up loading enormously; imagine situations where billions of records have to be loaded daily. That is where DataStage PX is a boon for the ETL process and a major advantage over many other ETL tools.
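    As a rough illustration of partitioning parallelism, here is a Python sketch using the post's own numbers (100 records split round-robin across 4 logical nodes, so 25 each); multiprocessing worker processes stand in for the PX engine's nodes, and the per-node work is a hypothetical placeholder:

        from multiprocessing import Pool

        NUM_NODES = 4
        records = list(range(100))

        # Round-robin partitioning: record i goes to node i % NUM_NODES.
        partitions = [records[n::NUM_NODES] for n in range(NUM_NODES)]

        def process_partition(part):
            # Stand-in for the per-node work: transform and count the rows.
            return len(part), [r * 2 for r in part]

        if __name__ == "__main__":
            with Pool(NUM_NODES) as pool:
                results = pool.map(process_partition, partitions)
            for node, (count, _) in enumerate(results):
                print("node %d: %d records" % (node, count))  # 25 each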


  4. #4
    Junior Member
    Join Date
    Jun 2008
    Answers
    1

    Re: DataStage Parallel jobs Vs DataStage Server jobs

    Congratulations on a good answer with a suitable example.


  5. #5
    Junior Member
    Join Date
    Jul 2008
    Answers
    2

    Re: DataStage Parallel jobs Vs DataStage Server jobs

    What are the roles and responsibilities of an ETL developer/designer?


  6. #6
    Junior Member
    Join Date
    Jul 2008
    Answers
    2

    Re: DataStage Parallel jobs Vs DataStage Server jobs

    1) The basic difference between server and parallel jobs is the degree of parallelism. Server job stages have no built-in partitioning and parallelism mechanism for extracting and loading data between stages.

    • All you can do to enhance speed and performance in server jobs is to enable inter-process row buffering through the Administrator. This lets stages exchange data as soon as it is available on the link.
    • You can also use the IPC stage, which lets one passive stage read data from another as soon as it is available. In other words, stages do not have to wait for the entire set of records to be read before passing them on. The Link Partitioner and Link Collector stages can be used to achieve a certain degree of partitioning parallelism.
    • All of the above features, which have to be explicitly set up in server jobs, are built into DataStage PX.

    2) The PX engine runs on a multiprocessor system and takes full advantage of the processing nodes defined in the configuration file. Both SMP and MPP architectures are supported by DataStage PX.

    3) PX takes advantage of both pipeline parallelism and partitioning parallelism. Pipeline parallelism means that as soon as data is available between stages (in pipes or links), it can be exchanged without waiting for the entire record set to be read. Partitioning parallelism means that the record set is split into smaller sets and processed on different nodes (logical processors); for example, with 100 records and 4 logical nodes, each node processes 25 records. This speeds up loading enormously, which matters when billions of records have to be loaded daily.

    4) Parallel jobs have the Data Set stage, which acts as intermediate data storage between linked stages; it is the best storage option because it stores data in DataStage's internal format.

    5) In parallel jobs we can choose to display the generated OSH, which shows how the job actually works.

    6) The parallel Transformer has no reference-link capability, whereas in server jobs a reference link can be attached to the Transformer. Parallel jobs can use both BASIC and parallel-oriented functions.

    7) Server jobs are executed by the DataStage server engine, while parallel jobs are executed under the control of the DataStage parallel runtime environment.

    8) Server jobs compile into BASIC (interpreted pseudo-code), while parallel jobs compile into OSH (Orchestrate shell script).

    9) Debugging and testing stages are available only in Parallel Extender.

    10) Several processing stages are not available in server jobs, for example Join, CDC, and Lookup.

    11) Among the file stages, the Hashed File stage is available only in server jobs, while Complex Flat File, Data Set, and Lookup File Set are available only in parallel jobs.

    12) The server Transformer supports BASIC transforms only, whereas parallel jobs support both BASIC and parallel transforms.

    13) The server Transformer is BASIC-language compatible; the parallel Transformer is C++-language compatible.

    14) Lookup against a sequential file is possible in parallel jobs.

    15) In parallel jobs we can fetch data from multiple file paths by using a file pattern, similar to the Folder stage in server jobs, whereas in server jobs we can specify only one file name per output link.

    16) In server jobs we can simultaneously attach both an input link and an output link to a Sequential File stage. In parallel jobs, an additional output link on a Sequential File stage being written to is a reject link, that is, a link that collects records that fail to load into the sequential file for some reason (see the sketch after this list).

    17) There is a difference in the file size restriction: a sequential file in server jobs is limited to 2 GB, while in parallel jobs there is no such limit.

    18) The parallel Sequential File stage also has filter options, where you can specify a file pattern.
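    To make point 16 concrete, here is a minimal Python sketch of the reject-link behaviour, with hypothetical rows and a hypothetical validation rule standing in for the load step: rows that fail to load are routed to a separate reject stream instead of failing the whole job.

        rows = [
            {"id": 1, "amount": "100"},
            {"id": 2, "amount": "abc"},   # will fail the load step
            {"id": 3, "amount": "250"},
        ]

        loaded, rejected = [], []
        for row in rows:
            try:
                row["amount"] = int(row["amount"])  # stand-in for the load
                loaded.append(row)
            except ValueError:
                rejected.append(row)  # goes down the "reject link"

        print("loaded:", loaded)
        print("rejected:", rejected)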


  7. #7
    Junior Member
    Join Date
    Jan 2010
    Answers
    1

    Re: DataStage Parallel jobs Vs DataStage Server jobs

    Hello, can you explain in more detail?


  8. #8
    Junior Member
    Join Date
    Dec 2010
    Answers
    1

    Re: DataStage Parallel jobs Vs DataStage Server jobs

    Much appreciated!

    Quote Originally Posted by Prasanna2883:
    Hi,

    I found the difference between Server and PX on one site. I hope this helps you guys.

    The basic difference between server and parallel jobs is the degree of parallelism that PX offers.

    Server job stages have no built-in partitioning and parallelism mechanism for extracting and loading data between stages. All you can do to enhance speed and performance in server jobs is to enable inter-process row buffering through the Administrator, which lets stages exchange data as soon as it is available on the link. You can also use the IPC stage, which lets one passive stage read data from another as soon as it is available; in other words, stages do not have to wait for the entire set of records to be read before passing them to the next stage. The Link Partitioner and Link Collector stages can be used to achieve a certain degree of partitioning parallelism.

    All of the features above, which have to be explicitly set up in server jobs, are built into DataStage PX. The PX engine runs on a multiprocessor system and takes full advantage of the processing nodes defined in the configuration file. Both SMP and MPP architectures are supported by DataStage PX.
    PX takes advantage of both pipeline parallelism and partitioning parallelism. Pipeline parallelism means that as soon as data is available between stages (in pipes or links), it can be exchanged without waiting for the entire record set to be read. Partitioning parallelism means that the record set is split into smaller sets and processed on different nodes (logical processors); for example, with 100 records and 4 logical nodes, each node processes 25 records. This speeds up loading enormously; imagine situations where billions of records have to be loaded daily. That is where DataStage PX is a boon for the ETL process and a major advantage over many other ETL tools.


