
Book, English, 166 pages, paperback, format (W × H): 191 mm × 235 mm

Series: Synthesis Lectures on Computer Architecture

Parallel Processing, 1980 to 2020


Year of publication: 2020
ISBN: 978-1-68173-975-5
Publisher: Morgan & Claypool Publishers



This historical survey of parallel processing from 1980 to 2020 is a follow-up to the authors' 1981 Tutorial on Parallel Processing, which covered the state of the art in hardware, programming languages, and applications. Here, we cover the evolution of the field since 1980 in three areas: parallel computers, ranging from the Cyber 205 to clusters now approaching an exaflop, to multicore microprocessors and Graphics Processing Units (GPUs) in commodity personal devices; parallel programming notations such as OpenMP, MPI message passing, and CUDA streaming notation; and seven parallel applications, such as finite element analysis and computer vision. Some things that looked like major trends in 1981, such as big Single Instruction Multiple Data (SIMD) arrays, disappeared for a time but have recently been revived in deep neural network processors. Other major trends did not exist in 1980, such as GPUs, distributed memory machines, and parallel processing in nearly every commodity device. This book is intended for those who already have some knowledge of parallel processing today and want to learn about the history of the three areas.

In parallel hardware, every major parallel architecture type from 1980 has scaled up in performance and scaled out into commodity microprocessors and GPUs, so that every personal and embedded device is a parallel processor. There has been a confluence of parallel architecture types into hybrid parallel systems. Much of the impetus for change has been Moore's Law, but as clock speed increases have stopped and feature size decreases have slowed, continued performance gains depend increasingly on parallel processing.

In programming notations and compilers, we observe that the roots of today's programming notations existed before 1980, and that, through a great deal of research, the most widely used notations today, although greatly broadened from those roots, remain close to the target system's architecture, allowing the programmer to express the target's parallelism almost explicitly.

The parallel versions of applications directly or indirectly affect nearly everyone, computer expert or not, and parallelism has brought about major breakthroughs in numerous application areas. Seven parallel applications are studied in this book.
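As a minimal illustration (not taken from the book) of how a notation such as OpenMP lets the programmer state the target's parallelism almost explicitly, the following hypothetical C sketch marks a SAXPY loop whose iterations are divided among the cores of a shared-memory machine; compiling with, for example, gcc -fopenmp enables the directive.

#include <stdio.h>
#include <omp.h>

#define N 1000000

/* y = a*x + y, with the loop iterations shared among hardware threads. */
int main(void) {
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma omp parallel for   /* the programmer names the parallel loop directly */
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %.1f, using up to %d threads\n", y[0], omp_get_max_threads());
    return 0;
}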

Authors/Editors


Further Information & Material


- Foreword by David Kuck
- Preface
- Acknowledgments
- Introduction
- Parallel Hardware
- Programming Notations and Compilers
- Applications
- Parallel Hardware Today and Tomorrow
- Concluding Remarks
- Appendix A: Myths and Misconceptions about Parallelism
- Appendix B: Bibliographic Notes
- Appendix C: Taxonomic Notes
- Appendix D: The 1981 Tutorial
- References
- Authors' Biographies


Robert Kuhn received his Ph.D. from the University of Illinois at Urbana-Champaign in 1981. In 1983, as an assistant professor at Northwestern University, he consulted on the vector register architecture for the Gould SEL real-time minicomputers. In 1987, he led Alliant Computer Systems' vectorizing-parallelizing compiler team, and in 1990 he led Alliant's team of application experts. In 1992, when Alliant closed, he joined Kuck and Associates, Inc. (KAI) and led its customer experts, where, for example, he worked with SGI and other OEMs on the definition and adoption of OpenMP. In 2000, when Intel acquired KAI, he worked on the adoption and integration of threading by HPC ISVs. He managed Intel's acquisition of Pallas GmbH and its MPI tools, managed Intel's participation in the ASCI/LLNL Ultrascale project to develop MPI/OpenMP performance analysis tools, and led development of other Intel HPC tools. Dr. Kuhn led the adoption of threading by ISVs for the introduction of Intel's first multicore processor and the Intel/Microsoft Universal Parallel Computing Research Center project with the University of California, Berkeley and the University of Illinois at Urbana-Champaign, as well as managing approximately 20 other university research projects in high performance computing.

David Padua received his Ph.D. from the University of Illinois at Urbana-Champaign in 1980. In 1985, after a few years at the Universidad Simón Bolívar in Venezuela, he returned to the University of Illinois, where he is now Donald Biggar Willett Professor in Engineering. He has served as program committee member, program chair, or general chair for more than 70 conferences and workshops. He was Editor-in-Chief of Springer-Verlag's Encyclopedia of Parallel Computing and is currently a member of the editorial boards of the Communications of the ACM, the Journal of Parallel and Distributed Computing, and the International Journal of Parallel Programming. Dr. Padua has supervised the dissertations of 30 Ph.D. students. He has devoted much of his career to the study of languages, tools, and compilers for parallel computing and has authored or co-authored more than 170 papers in these areas. He received the 2015 IEEE Computer Society Harry H. Goode Award, and in 2017 he was awarded an honorary doctorate by the University of Valladolid in Spain. He is a Fellow of the ACM and the IEEE.

