

Introducing multi-core programming into the lower-level curriculum: an incremental approach

Published: 01 January 2010

Abstract

Historically, improvements in hardware processing power have come from increases in clock speed. Recently, however, chip manufacturers have begun to increase overall processing power by adding processing cores to the microprocessor package, while clock speeds have remained virtually unchanged. Future processors will eventually come in heterogeneous configurations, such as combinations of high- and low-power cores, graphics processors, and so on; these configurations are being termed "many-core" architectures. For software developers, this change raises many challenges. No longer will programmers be able to "ride the wave" of increasing performance without explicitly taking advantage of parallelism, yet only a very small proportion of developers currently have expertise in parallel programming. Software developers need new programming models, tools, and operating-system abstractions to handle the concurrency and complexity of numerous processors. For computer science educators, this change will also require radical shifts in the way computer science is taught: parallelism will need to be introduced early in the curriculum, preferably in the CS1/CS2 sequence.
Although a small group of programmers has been writing parallel applications for many years, most programmers have only a cursory understanding of the issues involved in developing multi-core applications. As machines with 32 or more cores become commonplace, students must gain a working knowledge of how to develop parallel programs. It seems evident that students must be exposed to concurrency throughout the curriculum, beginning with the introductory CS sequence.
While new parallel languages will be developed (as well as extensions to existing languages), it is not yet evident which direction that development will take. Message passing (MPI) and threads (POSIX threads and Java threads) have been the methods of choice for teaching parallelism at the undergraduate level. Since multi-core systems use shared memory, and the message-passing model is better suited to clusters than to shared-memory systems, it is not a likely candidate for multi-core and many-core systems. Java threads have the advantage of being built into the language, so Java can be used for parallel programming on multi-core machines. Although threads may be seen as a small syntactic extension to sequential processing, as a computational model they are non-deterministic, and the programmer's task when using them is to "prune" that non-determinism. POSIX threads are implemented by a standard library providing a set of C calls for writing multithreaded code, and they present the same programming difficulties that Java threads do.
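To give a sense of the explicit, low-level style this refers to, the following is a minimal sketch, not taken from the tutorial itself (the worker function and thread count are purely illustrative), of the bookkeeping Pthreads requires just to create and join a few threads in C:

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4           /* illustrative thread count */

/* Each thread runs this function; the argument carries its index. */
static void *worker(void *arg) {
    long id = (long) arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];

    /* The programmer creates and identifies every thread by hand... */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *) i);

    /* ...and must explicitly wait for each one to finish. */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    return 0;
}

(Compile with cc -pthread; error checking is omitted for brevity.)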
OpenMP (Open Multi-Processing) is an API that supports shared-memory multiprocessing. It consists of a set of compiler directives and a library of support functions, and it works in conjunction with Fortran, C, and C++. Compared to Pthreads, OpenMP offers a higher level of abstraction, allowing a programmer to partition a program into serial regions and parallel regions (rather than a set of concurrently executing threads). It also provides intuitive synchronization constructs.
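To make the contrast concrete, here is a minimal OpenMP sketch, again not drawn from the tutorial itself (the array size and computation are illustrative): the loop body is ordinary serial C, and a single directive marks a parallel region whose iterations are divided among the available cores, with a reduction clause synchronizing the shared sum.

#include <stdio.h>
#include <omp.h>

#define N 1000000               /* illustrative problem size */

int main(void) {
    static double a[N], b[N];
    double sum = 0.0;

    /* Serial region: set up the data. */
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = i * 2.0;
    }

    /* Parallel region: iterations are split across threads, and the
       reduction clause combines each thread's partial sum safely. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot product = %f using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}

(Built with a flag such as gcc -fopenmp.) The pragma is the only parallel construct in the program, which is part of what makes the directive style approachable for novice C programmers.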
This tutorial will survey the parallel programming landscape, summarize the OpenMP approach to multi-threading, and illustrate how it can be used to introduce parallelism into the lower-level curriculum for novice and intermediate C programmers.

Cited By

  • (2017) An example of Android's foreground service pattern. Journal of Computing Sciences in Colleges, 33(1), 62-71. DOI: 10.5555/3144605.3144619. Online publication date: 1-Oct-2017.
  • (2016) Mining autograding data in computer science education. Proceedings of the Australasian Computer Science Week Multiconference, 1-10. DOI: 10.1145/2843043.2843070. Online publication date: 1-Feb-2016.
  • (2014) An experience on multithreading using Android's handler class. Journal of Computing Sciences in Colleges, 30(1), 80-86. DOI: 10.5555/2667369.2667384. Online publication date: 1-Oct-2014.


Published In

Journal of Computing Sciences in Colleges  Volume 25, Issue 3
January 2010
177 pages
ISSN:1937-4771
EISSN:1937-4763

Publisher

Consortium for Computing Sciences in Colleges

Evansville, IN, United States

Publication History

Published: 01 January 2010
Published in JCSC Volume 25, Issue 3

Qualifiers

  • Research-article

