Abstract
In this paper, we report on our “Iridis-Pi” cluster, which consists of 64 Raspberry Pi Model B nodes, each equipped with a 700 MHz ARM processor, 256 MiB of RAM and a 16 GiB SD card for local storage. The cluster has several advantages not shared by conventional data-centre-based clusters, including its low total power consumption, easy portability due to its small size and weight, affordability, and passive, ambient cooling. We propose that these attributes make Iridis-Pi ideally suited to educational applications, where it provides a low-cost starting point to inspire and enable students to understand and apply high-performance computing and data handling to tackle complex engineering and scientific challenges. We present the results of benchmarking both the computational power and the network performance of Iridis-Pi. We also argue that such systems should be considered in some additional specialist application areas where these unique attributes may prove advantageous. We believe that the choice of an ARM CPU foreshadows a trend towards the increasing adoption of low-power, non-PC-compatible architectures in high-performance clusters.
Notes
See http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone, accessed 10 Dec 2012.
Also shown in a video available online, https://www.youtube.com/watch?v=Jq5nrHz9I94, assembled according to the guide at http://www.southampton.ac.uk/~sjc/raspberrypi/pi_supercomputer_southampton_web.pdf.
See http://www.amd.com/us/press-releases/Pages/press-release-2012Oct29.aspx, accessed 10 Dec 2012.
Cox, S.J., Cox, J.T., Boardman, R.P. et al. Iridis-pi: a low-cost, compact demonstration cluster. Cluster Comput 17, 349–358 (2014). https://doi.org/10.1007/s10586-013-0282-7