[Feitelson:Book15] Workload Modeling for Computer Systems Performance Evaluation

Foreword/Preface.  The book is written partly to explain the intuition and reasoning behind the sophisticated mathematical models.  A 1994 survey of parallel job scheduling covered 76 systems and 638 references; practically every paper showed its own scheme to be better than the others.  “If the workload is wrong, the results will be wrong too.”  The workload is the experimental basis of a study; without it, the study rests on assumptions rather than measurements, and mathematical techniques are misapplied.

Introduction.  Three factors determine performance: the system design, the implementation, and the workload.  A trivial example is a sorting algorithm, whose performance depends on the input it is given.  Three example problems: job scheduling by size (scaling), processor allocation (distribution), and load balancing (inference).  Workload classification.  Workload modeling and its many uses in performance evaluation.  Modeling generalizes, to transcend the specific observations recorded, and simplifies, to use as few parameters as possible.  Descriptive models mimic the observed phenomena; generative models capture the process by which the workload is brought about.

6.2 Spatial and Temporal Locality

Locality is ubiquitous [Denning CACM 2005].  Eight types of locality in workloads.  Access locality in (1) memory addresses and pages, (2) file reuse on a server, (3) accesses within a single file, and (4) databases and key-value stores.  Communication locality (5).  Repetition in (6) network addresses, (7) document retrieval, and (8) function parameters and memory values.  The section runs from page 215 to 231.

6.2.1 Definitions.

Denning’s principle of locality has three features: non-uniform distribution of references, slowly changing reference frequencies, and correlation between the immediate past and the immediate future.  It is “not a precise and measurable definition.”

The definition of spatial locality is similar to our reference affinity: a locality is a group of pages.  Spatial regularity.  Popularity [Jin & Bestavros 2000].  (The formal connection between popularity and locality will be established in the new category theory; manuscript in preparation.)

6.2.2 Statistical Measures of Locality.  Access probability.  Frequent substrings.  “attach a numerical value that represents the degree of locality”
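
One way to attach such a numerical value is to look at popularity skew.  Below is a minimal Python sketch (my illustration, not code from the book): it computes the empirical access probability of each item in a trace and the fraction of references absorbed by the most popular 10% of items; the example trace is made up.

```python
from collections import Counter

def access_probabilities(trace):
    """Empirical access probability per item, plus the fraction of
    references that go to the top 10% most popular items -- one simple
    numerical indicator of locality in the popularity sense."""
    counts = Counter(trace)
    total = len(trace)
    probs = {item: c / total for item, c in counts.items()}
    ranked = sorted(probs.values(), reverse=True)
    top = max(1, len(ranked) // 10)          # at least one item
    return probs, sum(ranked[:top])

probs, top_mass = access_probabilities("aaabbcaaabbcaxyz")
print(probs)
print(f"top 10% of items receive {top_mass:.0%} of all references")
```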

6.2.3 The Stack Distance.  (Should be called LRU stack distance or reuse distance.)  Comparison of the reuse-distance distribution for an actual trace and a random trace.  Inter-reference distance [Almasi+ MSPC].  “The most common way to quantify locality is not by statistical measures, but by its effect … by means of a simulation … The stack is maintained using a move-to-front discipline.”
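
To make the mechanism concrete, here is a small Python sketch of the stack-distance computation using a move-to-front list.  It is my own illustration of the simulation described in the quote, using a simple linear scan rather than an efficient tree structure, and the example trace is made up.

```python
def stack_distances(trace):
    """LRU stack distances via a move-to-front list.
    For each reference, report the depth of the item in the stack
    (1 = most recently used), or None for a first reference
    (infinite distance), then move the item to the top."""
    stack = []          # stack[0] is the most recently used item
    distances = []
    for item in trace:
        if item in stack:
            depth = stack.index(item)
            distances.append(depth + 1)      # distances are counted from 1
            del stack[depth]
        else:
            distances.append(None)           # cold miss: infinite distance
        stack.insert(0, item)                # move-to-front
    return distances

# A trace with locality yields mostly small distances.
print(stack_distances("aabbbacab"))   # [None, 1, None, 1, 1, 2, None, 2, 3]
```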

Later sections in 6.2.  Entropy as a measure of popularity.  The independent reference model (IRM).  Pareto distribution of LRU stack distances.  Markov model of phase transitions and its limiting distribution (the stochastic assumption in [Denning & Schwartz CACM 1972]).  Fractal model (an idea also used in the IPDPS 2012 paper by Zhibin Yu).
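
For the entropy measure, a minimal Python sketch (mine, not from the book) of the entropy of the empirical popularity distribution: low entropy means a few items absorb most references, while the maximum, log2 of the number of distinct items, corresponds to uniform access.

```python
import math
from collections import Counter

def popularity_entropy(trace):
    """Entropy (in bits) of the empirical access-probability distribution,
    together with its maximum log2(N) for N distinct items."""
    counts = Counter(trace)
    total = len(trace)
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h, math.log2(len(counts))

h, h_max = popularity_entropy("aaaaabbbcd")
print(f"entropy = {h:.2f} bits out of a possible {h_max:.2f} bits")
```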

9.2 Desktop and Workstation Workloads

Many interesting ideas with references.  Benchmarks for different application areas: CPU, parallel, multimedia, and games.  The concept of “predictability”.  Load balancing based on the Pareto distribution of process run times.
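
The load-balancing idea rests on the heavy tail: if run times are Pareto, a process that has already run a long time is likely to keep running, so it is the most worthwhile migration candidate.  A small worked sketch (my illustration, assuming a pure Pareto tail of a given shape):

```python
def median_remaining_lifetime(age, shape=1.0):
    """Median additional run time of a process that has already run `age`
    time units, assuming a Pareto run-time distribution with tail
    P(X > x) proportional to x**(-shape).  Conditioning on X > age gives
    P(X > age + r | X > age) = (age / (age + r))**shape, so the median
    remaining time is age * (2**(1/shape) - 1): it grows with age."""
    return age * (2 ** (1.0 / shape) - 1)

# With shape = 1, a process that has run 100 s has a median of 100 s left,
# which is why migrating old processes pays off.
for age in (1, 10, 100):
    print(age, median_remaining_lifetime(age))
```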

Koller et al. 2010 [ref. 410 in the Feitelson book].  The flood rate: the rate at which an application floods the cache with memory lines that are not reused before eviction (lines not in its reuse set), so they take up space and put pressure on peer applications.  The reuse rate: the rate of access to its reuse set.  If the reuse rate is lower than the flood rate, the reuse set tends to be evicted.  The wastage: the cache space used for non-reusable data.
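
A rough sketch of how such quantities might be estimated from a single trace on a fully associative LRU cache.  The precise definitions in Koller et al. differ (they concern co-running applications sharing a cache), so this is only my simplified illustration; the trace and cache size are made up.

```python
from collections import OrderedDict

def cache_pressure_metrics(trace, cache_size):
    """Rough per-trace estimates on a fully associative LRU cache:
    a line counts as a flood line if it is evicted without ever being
    re-referenced while cached; reused accesses are hits; wastage is the
    fraction of cache space-time occupied by flood lines."""
    cache = OrderedDict()        # line -> (insertion_time, reused_flag)
    hits = flood_insertions = 0
    wasted_space_time = 0
    for t, line in enumerate(trace):
        if line in cache:
            hits += 1
            ins, _ = cache.pop(line)
            cache[line] = (ins, True)               # mark reused, move to MRU
        else:
            if len(cache) >= cache_size:            # evict the LRU line
                victim, (ins, reused) = cache.popitem(last=False)
                if not reused:
                    flood_insertions += 1
                    wasted_space_time += t - ins    # time a useless line held a slot
            cache[line] = (t, False)
    n = len(trace)
    reuse_rate = hits / n
    flood_rate = flood_insertions / n
    wastage = wasted_space_time / (n * cache_size)  # fraction of space-time wasted
    return reuse_rate, flood_rate, wastage

print(cache_pressure_metrics("abcabcxyzwabc", cache_size=4))
```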
