Coordinator: Prof. Antonio Vicino
New Parallel Programming Models

 

Prof.
O. Unsal, Barcelona Supercomputing Center
A. Crystal, Barcelona Supercomputing Center
Course Type
Group 2
Calendar
9-13 February 2009
Room
Program
Abstract. Looking at the last 10 years, we see a shift towards multi- and many-core processors. In the 1990s, processor manufacturers were designing monolithic single-core processors and struggling to increase the performance of this core by extracting more Instruction-Level Parallelism (ILP). However, the cost of extracting more ILP became prohibitively expensive: typically, doubling the power consumption for a 20-30% increase in performance. Over time, chip power density became such a big problem that entertaining web videos appeared of users cooking their eggs sunny side up on their processors. Unfortunately, processor manufacturers realized that they had hit this power wall a bit late; there were several well-publicized product delays and cancellations. The industry then executed a "right-hand turn" and concentrated on extracting Thread-Level Parallelism (TLP), which is more power-efficient than ILP. To be effective, TLP relies on multiple simpler, lower-power processing cores on a chip executing parallel programs. Processor manufacturers therefore started putting more cores on chip, with each new technology generation doubling the number of cores. Realizing the potential of these additional cores, however, requires experts, since it is very difficult to program multiple processors using current hardware and software. This problem has led many to ponder whether a programmer-productivity wall is looming in the future. How to design multi-core processors that are more effective and easier to program is a challenge for computer architects.

In this course, we plan to provide a look at the latest architectural efforts to make parallel programming easier. In particular, we will examine Transactional Memory (TM), a new technology that promises to make lock-based programming for shared-memory Chip MultiProcessors easier. TM is essentially an optimistic concurrency scheme: multiple threads can be in a critical section, sharing data, in the hope that data ordering conflicts will be rare. According to Bill Gates, "Now, the grains inside these machines more and more will be multi-core type devices, and so the idea of parallelization won't just be at the individual chip level, even inside that chip we need to explore new techniques like transactional memory that will allow us to get the full benefit of all those transistors and map that into higher and higher performance." We will cover the most popular "flavors" of TM: Hardware Transactional Memory, Software Transactional Memory and Hybrid Transactional Memory. We plan to discuss each topic through programming examples and paper readings. Wherever appropriate, we will examine other programming models and concepts such as OpenMP, MPI, CILK and Dataflow.
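As a small illustration of the optimistic-concurrency idea sketched above, the fragment below contrasts a conventional lock-based critical section with a transactional one. It is a minimal sketch, not course material: it assumes a compiler with the GNU transactional-memory extension (e.g. gcc -fgnu-tm), and the shared counter and function names are purely illustrative.

/* Build with: gcc -fgnu-tm -pthread -c tm_sketch.c            */
/* The counter and function names are illustrative only.       */
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pessimistic: at most one thread may be in the critical section. */
void increment_with_lock(void)
{
    pthread_mutex_lock(&counter_lock);
    counter++;
    pthread_mutex_unlock(&counter_lock);
}

/* Optimistic (TM): many threads may execute the block concurrently;
 * the TM runtime detects conflicting reads/writes and re-executes
 * the transactions that lose the conflict. */
void increment_with_tm(void)
{
    __transaction_atomic {
        counter++;
    }
}

With the lock, threads serialize unconditionally even when they would not have conflicted; with the transaction, threads proceed concurrently and only those whose memory accesses actually conflict are rolled back and retried, which is exactly the optimism the abstract describes.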
Notes
Local contact: Prof. R. Giorgi





Dip. Ingegneria dell'Informazione e Scienze Matematiche - Via Roma, 56 53100 SIENA - Italy