Seminar High Performance Computing

Lecturer: Prof. Dr. Estela Suarez
Module (BASIS): MA-INF-1322
eCampus: eCampus_MA-INF-1322
Type of Lecture: Seminar
Credits: 4 CP
Research Area: High Performance Computing
Language: English
Max. Number of Participants: 10

On the Topic

High Performance Computing (HPC) refers to the use of large cluster computers to solve scientific and technical problems that cannot be solved on small-scale systems, while employing the hardware resources efficiently. HPC systems are designed to achieve maximum computing performance at the lowest possible power consumption. They are parallel computers made of hundreds to thousands of compute nodes connected to each other via a high-speed network. Operating HPC systems requires specific software distributions, programming models, and tools. Applications must be parallelized, meaning they must be programmed so that the problem to be solved is split into many individual operations that can be executed in parallel. This seminar addresses important topics and challenges in today's HPC. It is designed to cover a wide range of areas, allowing both for a general overview of each aspect and for a deep dive into specific solutions within any of the proposed topics.
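
As a minimal illustration of the parallelization idea described above (not part of the seminar material; assumes a C compiler with OpenMP support, e.g. gcc -fopenmp), the following sketch splits a sum over an array across threads with OpenMP, one of the shared-memory programming models touched on under topic E:

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      enum { N = 1000000 };
      static double a[N];
      for (int i = 0; i < N; ++i)
          a[i] = 1.0;                       /* fill with dummy data */

      double sum = 0.0;
      /* Each thread sums a chunk of the array; the partial sums are
       * combined by the reduction clause. The same idea extends to
       * distributed memory across compute nodes, e.g. with MPI. */
      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < N; ++i)
          sum += a[i];

      printf("sum = %.0f (threads available: %d)\n", sum, omp_get_max_threads());
      return 0;
  }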

Dates

Event Date
Application open 01.10.2022
Application deadline 01.11.2022
Preliminary discussion and subject assignment 04.11.2022 at 10:00-12:00, Seminar-Room 3.035b
Deadline application finalization (in BASIS) 16.12.2022
Deadline written report (concept) 16.12.2022
Deadline written report (first submission) 20.01.2023
Deadline to review a paper from a colleague 27.01.2023
Deadline written report (final version) 17.02.2023
Deadline presentation slides (structure) 03.03.2023
Deadline presentation slides (full version) 05.03.2023
Presentations 07.03.2023

Subjects

The topics below are formulated in a general manner. There are two possible ways of approaching them, which lead to two different kinds of seminar report:

  • Review paper: provide a general overview of the topic and the related research at a relatively high level.
  • Focus paper: briefly describe the general background (“Background” section of your report) and then focus on a specific piece of research (e.g. a specific publication) within the given topic.
# Topics (some specific aspects as suggestions) Refs
A HPC computer architectures (basic principles, system-level approaches, homogeneous clusters, heterogeneous systems, modular systems) [01], [02], [03], [04]
B Memory and storage hierarchies (cache levels, memory technologies, storage systems) [05], [06], [07]
C Network interconnect (principles, intra-node and system-wide approaches, topologies, technologies) [08], [09], [10]
D Resource management and job scheduling (principles of multi-user environments, standard approaches, advanced features such as malleability and dynamic resource allocation) [11], [12]
E Programming models for parallel computing (shared memory approaches, distributed memory approaches, hardware-specific, high-level programming models, performance portability) [13], [14], [15]
F Scalability (principles, Amdahl’s and Gustafson’s laws; see the formulas after this table) [16], [17], [18]
G Energy efficiency and power consumption (challenges, trends, measurement/monitoring techniques) [19], [20], [21]
H Performance prediction and measurement (principles, metrics, classifications, roofline, analysis and modelling tools) [02], [22], [23], [24], [25], [26]
I Co-design (how application requirements are fed into system/component design, principles, approaches, examples) [27], [28]
J Exascale (trends and challenges) [29], [30], [31], [32]
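
As a compact reminder of the two scaling laws named under topic F (added here for orientation, not part of the original topic list), with s the serial fraction of the work and N the number of processors, the ideal speedups are:

  \[
    S_{\mathrm{Amdahl}}(N) = \frac{1}{s + \frac{1-s}{N}},
    \qquad
    S_{\mathrm{Gustafson}}(N) = s + (1 - s)\,N
  \]

Amdahl's law assumes a fixed problem size (strong scaling) and bounds the achievable speedup by 1/s; Gustafson's law assumes the problem size grows with the number of processors (weak scaling).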

Application Process

  • The application period is open from 01.10.2022 to 01.11.2022.
  • Application is done by writing an email to Prof. Dr. Estela Suarez, which must contain
    • your name,
    • your email address (@uni-bonn.de),
    • your matriculation number,
    • and your desired subjects (rank 3 or more subjects).
  • Please note that the number of participants is limited to 10 and only the first 10 applications will be considered.
    • You will receive a feedback mail in all cases.
  • Subject assignment will be done at the preliminary discussion (04.11.2022).
    • The preliminary discussion is mandatory.
    • Absence will lead to exclusion from the seminar.
    • It will take place in seminar room 3.035b at the University of Bonn.
    • Details about the technical process will be given in the confirmation mail.
  • BASIS registration: until 16.12.2022

Organization

A selection of scientific publications / subjects is available above. The subject will be assigned at the preliminary discussion (see section Dates).

During the semester, the participants will create their written report and presentation slides. During this period there will be no regular meetings with the other participants. The tutor, Prof. Dr. Estela Suarez, will be available for any organizational or content-related questions. Feel free to drop an email anytime, also if you'd like to set up a meeting.

As a first step, a concept for the written report must be created. It should contain the projected structure of the report as well as a basic description of the contents of each section, and it should list the literature to be used. To avoid misconceptions and unnecessary work, the concept should be discussed with the tutor before the actual work on the written report starts.

The written report must span 8-10 pages and must be created using LaTeX (report-template.zip). Generally, it is necessary to make a selection and prioritization of the topics discussed in the source literature. The content of the written report should match the later presentation, although a different depth, ordering, and prioritization is possible. It is advantageous to incorporate other scientific sources. Complete scientific referencing of all used sources is mandatory. You are expected to critically review the subject and the literature at hand. A complete and successful written report is necessary to continue with the seminar. Simple rephrasing of the source literature will be considered unsuccessful.

  • Submissions after the given deadlines will not be accepted.
  • Presentation slides will not be accepted for review by the tutors if the written report is unsuccessful.

The tutor will review the written report after the first submission. All participants have the opportunity to incorporate the changes and suggestions from the review and to submit a final version afterwards. Only the final version will be subject to grading. Both submissions are mandatory. The projected structure of the presentation slides should be submitted to the tutor for discussion ahead of the complete version. The tutor may request further changes to the presentation slides after the submission of the complete slides. Be prepared to incorporate these changes before the seminar takes place.

All participants will present their subject in a 30-minute presentation during the seminar. After each presentation, there is a 15-minute time slot to discuss the presentation and the subject.

  • Attendance to all presentations is mandatory.

Report Template

To write your report, please use the following LaTeX template: report-template.zip

Criteria for Grading

Criteria for the Written Composition

  • Layout and formal requirements: citation style and appropriate citation usage, correct mathematical notation, grammatical correctness, spelling, punctuation, formatting, visual appearance.
  • Style and structure: writing style, well-defined technical terms, clear structure, concise content representation, correct usage of LaTeX environments and theorems.
  • Content: adequate selection and prioritization of the content, usage of additional literature, content related correctness, mathematical correctness, correct definitions / theorems / proofs, suitable self-provided examples, precise phrasing, critical evaluation and discussion of the content.
  • Independent work style: preparation of good questions for meetings with the tutor, performing literature search for open questions and a deep understanding of the content, justified prioritization and content selection. Attention: questions and discussions with the tutor are recommended and welcome. They will not lead to lower grades. On the contrary, they will typically enhance the overall quality of the submissions. An independent work style means that you think through your problem on your own before such discussions and that you do not rely on your tutor to make trivial corrections.
  • Bibliography: for correct bibliographic referencing, see the information in this Bibliography Guideline.

Criteria for the Presentation

  • Content: structure, adequate selection and prioritization of the content, correctness, self-prepared examples, graphics, critical evaluation and discussion
  • Presentation: presentation style (free, smooth, adequate and precise phrasing, understandability), reasonable and supportive presentation slides / examples / graphics that help the audience to understand the problems / definitions / evaluations, timing.
  • Some general guidelines: for further information, see this Presentation Guideline

Bibliography

These are just some possible sources on the topics of the seminar. To get an overview of the topic that you selected, a good approach is to start with one of the related references and then to look further into papers that have cited it, as well as into its own references (especially those given as related work). If you cannot get access to the papers via the UniBonn library licence, please contact Prof. Dr. Estela Suarez.

A - Computer Architectures

  • [01] D. Pleiter, Parallel Computer Architectures. Lecture Notes of the 45th IFF Spring School “Computing Solids - Models, ab initio methods and supercomputing” (Forschungszentrum Jülich, 2014) https://juser.fz-juelich.de/record/186708
  • [02] J. Hofmann, G. Hager, G. Wellein, D. Fey, An Analysis of Core- and Chip-Level Architectural Features in Four Generations of Intel Server Processors, Lecture Notes in Computer Science book series (LNTCS, volume 10266), https://arxiv.org/abs/1702.07554
  • [03] E. Suarez, N. Eicker, Th. Lippert, Modular Supercomputing Architecture: from Idea to Production, in: Contemporary High Performance Computing: From Petascale toward Exascale, Volume 3, pp. 223-251, CRC Press, FL, USA (2019). http://hdl.handle.net/2128/22212
  • [04] C. Engelmann, H. Ong, S.L. Scott, Middleware in Modern High Performance Computing System Architectures. In: Shi, Y., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds) Computational Science – ICCS 2007. ICCS 2007. Lecture Notes in Computer Science, vol 4488. Springer, Berlin, Heidelberg. (2007). https://doi.org/10.1007/978-3-540-72586-2_111, https://link.springer.com/content/pdf/10.1007/978-3-540-72586-2_111.pdf

B – Memory and storage hierarchies

  • [05] J. Lüttgau, M. Kuhn, K. Duwe, Y. Alforov, E. Betke, J. Kunkel, T. Ludwig, Survey of Storage Systems for High-Performance Computing. Supercomputing Frontiers and Innovations, 5(1), 31–58 (2018). https://doi.org/10.14529/jsfi180103
  • [06] A. Suresh, P. Cicotti and L. Carrington, Evaluation of emerging memory technologies for HPC, data intensive applications, 2014 IEEE International Conference on Cluster Computing (CLUSTER), pp. 239-247, 2014. doi: 10.1109/CLUSTER.2014.6968745.
  • [07] S. Narasimhamurthy, N. Danilov, S. Wu, G. Umanesan, S. Wei-der Chien, S. Rivas-Gomez, I. Bo Peng, E. Laure, S. de Witt, D. Pleiter, S. Markidis, The SAGE Project: a Storage Centric Approach for Exascale Computing, In Proceedings of the 15th ACM International Conference on Computing Frontiers (CF '18). Association for Computing Machinery, New York, NY, USA, 287–292. (2018). https://doi.org/10.1145/3203217.3205341, https://doi.org/10.48550/arXiv.1807.03632

C – Network and interconnect

  • [08] C. A. Thraskias et al., Survey of Photonic and Plasmonic Interconnect Technologies for Intra-Datacenter and High-Performance Computing Communications, in IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 2758-2783, Fourthquarter (2018), doi: 10.1109/COMST.2018.2839672. https://ieeexplore.ieee.org/document/8367741
  • [09] R. Trobec, R. Vasiljević, M. Tomašević, V. Milutinović, R. Beivide, M. Valero, Interconnection Networks in Petascale Computer Systems: A Survey, ACM Computing Surveys, Volume 49, Article No. 44, pp. 1–24, (2017). https://doi.org/10.1145/2983387
  • [10] D. De Sensi, S. Di Girolamo, K.H. McMahon, D. Roweth, T. Hoefler, An In-Depth Analysis of the Slingshot Interconnect, (2020). https://arxiv.org/abs/2008.08886

D – Resource management and job scheduling

  • [11] S. Perarnau, J. A. Zounmevo, M. Dreher, B. C. V. Essen, R. Gioiosa, K. Iskra, M. B. Gokhale, K. Yoshii, and P. Beckman. 2017. Argo NodeOS: Toward unified resource management for exascale. In Proceedings of the 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS). 153–162. DOI: https://doi.org/10.1109/IPDPS.2017.25
  • [12] S. Prabhakaran, M. Neumann, S. Rinke, F. Wolf, A. Gupta and L. V. Kale, A Batch System with Efficient Adaptive Scheduling for Malleable and Evolving Applications, 2015 IEEE International Parallel and Distributed Processing Symposium, pp. 429-438, (2015). doi: 10.1109/IPDPS.2015.34

E - Programming models for parallel computing

  • [13] T. Sterling, M. Anderson, M. Brodowicz, A Survey: Runtime Software Systems for High Performance Computing. Supercomputing Frontiers and Innovations, 4(1), 48–68, (2017). https://doi.org/10.14529/jsfi170103
  • [14] W. Gropp and M. Snir, Programming for Exascale Computers, in Computing in Science & Engineering, vol. 15, no. 6, pp. 27-35, (2013). doi: 10.1109/MCSE.2013.96.
  • [15] J. Diaz, C. Muñoz-Caro and A. Niño, A Survey of Parallel Programming Models and Tools in the Multi and Many-Core Era, in IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 8, pp. 1369-1386, (2012). doi: 10.1109/TPDS.2011.308. https://ieeexplore.ieee.org/abstract/document/6122018

F – Scalability

G - Energy efficiency and power consumption

  • [19] P. Czarnul, J. Proficz, and A. Krzywaniak, Energy-aware high-performance computing: survey of state-of-the-art tools, techniques, and environments. Scientific Programming 2019, (2019). https://doi.org/10.1155/2019/8348791
  • [20] H. Shoukourian, T. Wilde, A. Auweter, A. Bode, Predicting the Energy and Power Consumption of Strong and Weak Scaling HPC Applications. Supercomputing Frontiers and Innovations, 1(2), 20–41, (2014). https://doi.org/10.14529/jsfi140202
  • [21] A. Netti, M. Müller, A. Auweter, C. Guillen, M. Ott, D. Tafani, M. Schulz, From facility to application sensor data: modular, continuous and holistic monitoring with DCDB. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '19). Association for Computing Machinery, New York, NY, USA, Article 64, 1–27, 2019. https://doi.org/10.1145/3295500.3356191, https://arxiv.org/abs/1906.07509

H – Performance prediction and measurement

  • [22] G. Hager, J. Treibig, J. Habich, and G. Wellein, Exploring performance and power properties of modern multicore chips via simple machine models. Concurrency and Computation: Practice & Experience. Vol 28, Issue 2, pp 189–210, (2016). https://doi.org/10.1002/cpe.3180
  • [23] U. Lopez-Novoa, A. Mendiburu and J. Miguel-Alonso, A Survey of Performance Modeling and Simulation Techniques for Accelerator-Based Computing, in IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 1, pp. 272-281, (2015), doi: 10.1109/TPDS.2014.2308216. https://ieeexplore.ieee.org/abstract/document/6748067
  • [24] J. Dongarra, M.A. Heroux, and P. Luszczek, High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems. The International Journal of High Performance Computing Applications, 30(1):3–10, (2016). https://doi.org/10.1177/1094342015593158
  • [25] P.F. Baumeister, T. Hater, J. Kraus, D. Pleiter, P. Wahl, A Performance Model for GPU-Accelerated FDTD Applications, 2015 IEEE 22nd International Conference on High Performance Computing (HiPC), ISBN 978-1-4673-8488-9, (2015). doi:10.1109/HiPC.2015.24, https://ieeexplore.ieee.org/document/7397633
  • [26] S. Wienke, H. Iliev, D. an Mey, M.S. Müller, Modeling the Productivity of HPC Systems on a Computing Center Scale. In: Kunkel, J., Ludwig, T. (eds) High Performance Computing. ISC High Performance 2015. Lecture Notes in Computer Science, vol 9137. Springer, Cham, (2015). https://doi.org/10.1007/978-3-319-20119-1_26

I – Co-design

  • [27] J. Shalf, D. Quinlan, C. Janssen, Rethinking Hardware-Software Codesign for Exascale Systems, Computer, Vol. 44, Issue 11, pp. 22–30, November 2011. https://doi.org/10.1109/MC.2011.300
  • [28] D. Unat, C. Chan, W. Zhang, S. Williams, J. Bachan, J. Bell, J. Shalf, ExaSAT: An exascale co-design tool for performance modeling. The International Journal of High Performance Computing Applications 29, 2 (May 2015), 209–232, (2015). DOI: https://doi.org/10.1177/1094342014568690

J – Exascale
