IEEE Seminar Report for ECE: Parallel Computing - Parallel Network RAM
Courtesy: Argonne National Laboratory
Parallel computing can be defined as the use of a large collection of processing elements that communicate and cooperate to solve large problems fast. Silicon-based chips are approaching a physical limit on processing speed, as they are constrained by the speed of light and by certain thermodynamic limits. Parallel computing is a viable way to overcome this limitation.

Parallel computers can be roughly classified by the level at which the hardware supports parallelism. Multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters and grids use multiple computers to work on the same task. Specialized parallel architectures, such as Graphics Processing Units (GPUs), are used alongside traditional processors to accelerate specific tasks. This seminar provides an overview of parallel computing, with particular attention to GPU computing and cluster computing. It then introduces the concept of Parallel Network RAM and discusses it in depth.

Large scientific parallel applications demand large amounts of memory. Current clusters schedule jobs without fully knowing their memory requirements, which leads to uneven memory allocation: some nodes become overloaded and resort to disk paging, which is extremely expensive in the context of scientific parallel computing. Parallel Network RAM is a peer-to-peer solution to this problem. It avoids the use of disk, makes better use of the RAM available across the cluster, and allows larger problems to be solved while reducing the computational, communication, and synchronization overhead typically incurred by parallel applications. Several Parallel Network RAM designs are evaluated on their performance under different conditions, showing that different designs are appropriate in different situations.
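The core idea above can be illustrated with a small sketch: an overloaded node places memory pages in its own RAM first, then borrows free RAM from peer nodes, and only pages to disk as a last resort. All class names, functions, and latency figures here are illustrative assumptions, not part of the designs evaluated in the seminar.

```python
# Illustrative sketch of the Parallel Network RAM idea (assumed model):
# an overloaded node borrows free RAM pages from peers before using disk.

DISK_LATENCY_MS = 10.0    # assumed cost per disk page access
NETWORK_LATENCY_MS = 0.1  # assumed cost per remote-RAM page access


class Node:
    def __init__(self, name, total_pages):
        self.name = name
        self.total_pages = total_pages
        self.used_pages = 0   # pages used by local jobs
        self.lent_pages = 0   # pages lent to overloaded peers

    def free_pages(self):
        return self.total_pages - self.used_pages - self.lent_pages


def allocate(node, pages_needed, peers):
    """Place pages_needed pages: local RAM first, then peer RAM, then disk.

    Returns (local, remote, disk) page counts and an estimated access cost.
    """
    local = min(pages_needed, node.free_pages())
    node.used_pages += local
    remaining = pages_needed - local

    remote = 0
    for peer in peers:  # peer-to-peer borrowing of idle RAM
        take = min(remaining, peer.free_pages())
        peer.lent_pages += take
        remote += take
        remaining -= take
        if remaining == 0:
            break

    disk = remaining  # last resort: expensive disk paging
    cost = remote * NETWORK_LATENCY_MS + disk * DISK_LATENCY_MS
    return local, remote, disk, cost


if __name__ == "__main__":
    a = Node("A", total_pages=100)
    b = Node("B", total_pages=100); b.used_pages = 40
    c = Node("C", total_pages=100); c.used_pages = 90
    local, remote, disk, cost = allocate(a, 180, peers=[b, c])
    print(f"local={local} remote={remote} disk={disk} cost={cost:.1f}ms")
```

The toy cost model makes the motivation concrete: every page served from a peer's RAM instead of disk saves roughly two orders of magnitude in access latency, which is why avoiding disk paging dominates the design.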