CS 255/455 Spring 2018

CSC 255/455 Software Analysis and Improvement (Spring 2018)

Lecture slides, reading, later assignments, and other material will be distributed through Blackboard.


Assignments:


 Course description

With the increasing diversity and complexity of computers and their applications, the development of efficient, reliable software has become increasingly dependent on automatic support from compilers and other program analysis and translation tools. This course covers principal topics in understanding and transforming programs, both in the compiler and at run time. Specific techniques include data-flow and dependence theories and analyses; type checking and program correctness, security, and verification; memory and cache management; static and dynamic program transformation; and performance analysis and modeling.

Course projects include the design and implementation of program analysis and improvement tools.  The course meets jointly with CSC 255, an undergraduate-level course whose requirements include a subset of the topics and a simpler version of the project.

 Instructor and grading

Teaching staff: Chen Ding, Prof., Wegmans Hall Rm 3407, x51373;  Fangzhou Liu, Grad TA;  Zhizhou Zhang, Undergrad TA.

Lectures: Mondays and Wednesdays, 10:25am-11:40am, Hylan 202

Office hours: Ding, Fridays 11am to noon (and Mondays, any 15-minute period between 3:30pm and 5:30pm if pre-arranged).

TA Office hours: Zhizhou, Mondays 2 to 3pm, open area outside the elevator, third floor Wegmans Hall.  Jerry, Tuesdays 3 to 4pm, 3407 Wegmans Hall.

Grading (total 100%)

  • midterm and final exams are 15% and 20%, respectively
  • the projects total 40% (LVN 5%, LLVM trivial 5%, loop+index 10%, dep 10%, par 10%)
  • written assignments are 25% (trivial 1%; 4 assignments 6% each)

 Textbooks and other resources (on reserve at Carlson)

Optimizing Compilers for Modern Architectures (UR access through books24x7), Randy Allen and Ken Kennedy, Morgan Kaufmann Publishers, 2001. Chapters 1, 2, 3, 7, 8, 9, 10, 11. Lecture notes from Ken Kennedy. On-line errata.

Engineering a Compiler (2nd edition preferred, 1st okay), Keith D. Cooper and Linda Torczon, Morgan Kaufmann Publishers. Chapters 1, 8, 9, 10, 12, and 13 (both editions). Lecture notes and additional reading from Keith Cooper. On-line errata.

Compilers: Principles, Techniques, and Tools (2nd edition), Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman, Pearson.

Static Single Assignment Book, Rastello et al. (in progress)

Introduction to Lattices and Order,  Davey and Priestley, Cambridge University Press.

2017 2nd URCSSA Alumni Summit

On Oct. 26, Dr. Chengliang Zhang, a former graduate and now a Staff Software Engineer at Google Seattle, was invited by the Chinese Students and Scholars Association (URCSSA) to speak at its second Alumni Summit, titled Cloud | Big Data | AI.  The compiler group held a separate mini-symposium to present our research and had lunch with our esteemed graduate.

RTHMS: A tool for data placement on hybrid memory system

This paper uses a rule-based algorithm to guide data placement on a hybrid memory system. The hybrid memory system is abstracted as a combination of a FAST memory (HBM) and a SLOW memory (DRAM). FAST memory is assumed to have higher bandwidth but also higher latency than SLOW memory. In addition, FAST memory can either be software managed or be configured as a CACHE of SLOW memory.

The placement decision problem is divided into two steps: (1) each memory object is first evaluated individually, receiving a score for each placement choice. The rules are listed below; the corresponding scores are given in parentheses as (FAST, CACHE, SLOW) tuples:
      R1 (single threaded): memory objects accessed by only one thread are preferred in SLOW memory (0, 0, 1), since the high bandwidth would be underutilized if they were placed in FAST.
      R2 (computing intensity): memory objects for which the number of computing operations per datum fetched from memory exceeds a threshold are preferred in SLOW (0, 0, 1), since the longer latency is amortized by the cost of computation.
      R3 (small size): memory objects smaller than the last-level cache (LLC) are preferred in SLOW (0, 0, 1), since the LLC can hold all the data and most accesses hit in the LLC.
      R4 (small/strided access): memory objects with regular access patterns are preferred in FAST (1, 0, -1), since regular accesses are readily optimized to hide memory latency, making bandwidth the bottleneck.
      R5 (good locality): memory objects with good locality but a size larger than FAST memory prefer the CACHE mode (N/A, 1, 0).
      R6 (poor locality): memory objects with poor locality and a size larger than FAST memory are preferred in SLOW (N/A, -1, 1).
      R7 (irregular access, low concurrency): memory objects with irregular accesses but low concurrency are preferred in SLOW (0, -1, 1), since irregular accesses are hard to optimize to hide latency and low concurrency cannot amortize it, so the lower-latency memory is preferred.
      R8 (irregular access, high concurrency): memory objects with irregular accesses and high concurrency are preferred in FAST (1, -1, 0), since high concurrency amortizes the latency well and the higher bandwidth can be exploited.
       The intuition can be summarized as follows: placing an object in FAST best utilizes the bandwidth, placing it in SLOW best utilizes the lower latency, and placing it in CACHE best utilizes the locality. A code sketch of this scoring step follows.
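Below is a minimal sketch of the per-object scoring step (rules R1–R8). The MemObject fields, the operations-per-byte threshold, and the treatment of the N/A entries as 0 are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

# Score tuples are ordered (FAST, CACHE, SLOW), matching the rules above.
FAST, CACHE, SLOW = 0, 1, 2

@dataclass
class MemObject:
    name: str
    size: int               # object size in bytes
    num_threads: int        # number of threads accessing the object (R1)
    ops_per_byte: float     # computing operations per byte fetched (R2)
    regular_access: bool    # small/strided access pattern (R4, R7, R8)
    good_locality: bool     # reuse behavior (R5, R6)
    high_concurrency: bool  # enough parallelism to hide latency (R7, R8)

def score(obj, llc_size, fast_size, ops_threshold=8.0):
    """Accumulate a (FAST, CACHE, SLOW) score for one memory object."""
    s = [0, 0, 0]
    def add(f, c, sl):
        s[FAST] += f; s[CACHE] += c; s[SLOW] += sl

    if obj.num_threads == 1:                add(0, 0, 1)   # R1: single threaded
    if obj.ops_per_byte > ops_threshold:    add(0, 0, 1)   # R2: compute intensive
    if obj.size < llc_size:                 add(0, 0, 1)   # R3: fits in the LLC
    if obj.regular_access:                  add(1, 0, -1)  # R4: regular/strided
    if obj.size > fast_size:                               # object cannot fit in FAST
        if obj.good_locality:               add(0, 1, 0)   # R5: good locality -> CACHE
        else:                               add(0, -1, 1)  # R6: poor locality -> SLOW
    if not obj.regular_access:
        if obj.high_concurrency:            add(1, -1, 0)  # R8: irregular, high concurrency
        else:                               add(0, -1, 1)  # R7: irregular, low concurrency
    return tuple(s)

# Example: an irregular, highly concurrent 2 GiB object ends up preferring FAST.
obj = MemObject("grid", size=2 << 30, num_threads=16, ops_per_byte=2.0,
                regular_access=False, good_locality=False, high_concurrency=True)
print(score(obj, llc_size=32 << 20, fast_size=16 << 30))   # -> (1, -1, 0)
```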
       (2) Because the size of FAST memory is limited, not every object that prefers FAST can actually be placed there. A global decision is therefore made by assigning a rank to each object, using the following two rules to identify which objects should be prioritized for FAST memory (a sketch follows the two rules).
       R9 (total access): memory objects that are accessed often are typically important data structures; objects with more total accesses have higher priority.
       R10 (write intensity): memory objects with higher write intensity are more likely to benefit from the higher bandwidth of FAST; objects with higher write intensity have higher priority.
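The global step can likewise be sketched as a ranking followed by greedy packing into the limited FAST capacity. The combined ranking formula and the field names here are assumptions for illustration; the summary above only states that larger total access counts and higher write intensity raise an object's priority.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    size: int             # object size in bytes
    total_accesses: int   # R9: total loads and stores observed
    write_ratio: float    # R10: fraction of accesses that are writes

def place_in_fast(candidates, fast_capacity):
    """Greedily assign the highest-priority candidates to FAST memory."""
    # Assumed priority: more total accesses and higher write intensity rank higher.
    ranked = sorted(candidates,
                    key=lambda o: o.total_accesses * (1.0 + o.write_ratio),
                    reverse=True)
    placed, used = [], 0
    for obj in ranked:
        if used + obj.size <= fast_capacity:
            placed.append(obj.name)
            used += obj.size
    return placed   # objects not returned fall back to SLOW (or CACHE mode)

# Example: with 1 GiB of FAST memory, the hot, write-heavy array wins the slot.
objs = [Candidate("A", 800 << 20, 10_000_000, 0.6),
        Candidate("B", 600 << 20,  2_000_000, 0.1)]
print(place_in_fast(objs, 1 << 30))   # -> ['A']
```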

MEMSYS 2017

 

Three Walls, by Monday's keynote speaker Peter Kogge, University of Notre Dame

 

Memory Equalizer for Lateral Management of Heterogeneous Memory
Chen Ding (University of Rochester), Chencheng Ye (Huazhong University of Science and Technology), Hai Jin (Huazhong University of Science and Technology)

 

Spirited Discussion

Memory Systems Problems and Solutions

• Chen Ding, University of Rochester
• David Donofrio, Berkeley Labs
• Scott Lloyd, LLNL
• Dave Resnick, Sandia
• Uzi Vishkin, University of Maryland


Sally McKee: on-chip cache


David Wang keynote


Hotel accommodation and conference dinner (and investigation … of murder)

 

Joel Fest


From Lane: “On Labor Day (Sept. 4), URCS will host a day of talks by wonderful speakers … in honor of our wonderful colleague Joel I. Seiferas’s retirement.”

JS: “Good morning and welcome. As the ‘Joel’ of ‘JoelFest,’ I have asked to say a (very) few words of introduction.

I can’t take credit for today’s program of distinguished speakers (or the presence of other notable colleagues), but I am happy that my recent retirement can be the excuse for it. I hope everyone enjoys and is stimulated by what you hear today.

… 

Thanks for today are also due to all of the following:  …

  • the entire well-oiled machine of an organizing committee, including, in addition to Muthu and Lane, Prof. Daniel Stefankovic and my wife Diane, and of course our distinguished speakers, to be introduced individually. 

Anyway, the U. of R. is clearly a great place to retire from.

More significantly (but briefly), Rochester also has been a wonderful place to work since I came here in 1979:

  • Faculty, past and present, have always been collegial, generous, smart, eloquent, and a pleasure to work with.
  • Past and present staff has always been eager and successful in providing the best support for the department.
  • The graduate students, especially, have been enthusiastic participants in the community, even in learning experiences much broader than what they needed for their theses.
  • In later years, the growing undergraduate community has become an impressive part of the mix, with many remarkable gems emerging there as well.”

 Speakers at JoelFest (for full details see http://www.cs.rochester.edu/~lane/=joelfest/)

  • Zvi Galil, the John P. Imlay Dean of Computing and Professor at Georgia Tech’s College of Computing, “Online Revolutions: From Stringology with Joel to Georgia Tech’s Highly Affordable Master Degree”
  • Shafi Goldwasser, the RSA Professor of Electrical Engineering and Computer Science at MIT, “Pseudo-determinism”
  • Jon Kleinberg, Tisch University Professor of Computer Science at Cornell University, “Social Dynamics and Mathematical Inevitability”
  • Muthuramakrishnan Venkitasubramaniam, Department of Computer Science, University of Rochester, “The Status of Zero-Knowledge Proofs”