ICPE 2011 Proceedings
- Introduction
- Keynote 1
- Proprietary Code to Non-Proprietary Benchmarks: Synthesis Techniques for Scalable Benchmarks
Lizy Kurian John (The University of Texas at Austin)
- Keynote 2
- Performance Analysis of Domain Specific Visual Models (Page 3)
Antonio Vallecillo (Universidad de Málaga)
- Industrial Invited Talk
- Performance Modeling in MapReduce Environments: Challenges and Opportunities (Page 5)
Ludmila Cherkasova (Hewlett-Packard Laboratories)
- Session 1: Performance Models and Techniques - Part 1
- Computing First Passage Time Distributions in Stochastic Well-Formed Nets (Page 7)
Gianfranco Balbo ()
Marco Beccuti ()
Massimiliano De Pierro (University of Torino)
Giuliana Franceschinis ()
- Detection and Solution of Software Performance Antipatterns in Palladio Architectural Models (Page 19)
Catia Trubiani ()
Anne Koziolek (Karlsruhe Institute of Technology)
- An Approach for Scalability-Bottleneck Solution: Identification and Elimination of Scalability Bottlenecks in a DBMS
Takashi Horikawa (NEC Corporation)
- Experience Building Non-Functional Requirement Models of a Complex Industrial Architecture (Page 43)
(Chemtech a Siemens Company)
Cyro de A. Assis D. Muniz (Chemtech a Siemens Company)
Gilson A. Pinto (Chemtech a Siemens Company)
Alberto Avritzer (Siemens Corporate Research)
(Federal University of Rio de Janeiro)
Edmundo de Souza e Silva (Federal University of Rio de Janeiro)
Morganna Carmem Diniz (Federal University of the State of Rio de Janeiro )
Luca Berardinelli (University of L' Aquila)
Julius C. B. Leite (Universidade Federal Fluminense)
(University of Pittsburgh)
Yuanfang Cai (Drexel University)
Mike Dalton (Drexel University)
Lucia Kapova (Karlsruhe Institute of Technology)
Anne Koziolek (Karlsruhe Institute of Technology)
- Relative Roles of Instruction Count and Cycles Per Instruction in WCET Estimation (Page 55)
Archana Ravindar (Indian Institute of Science)
Y. N. Srikant (Indian Institute of Science)
- Session 2: Performance Models and Techniques - Part 2
- An Automatic Trace Based Performance Evaluation Model Building for Parallel Distributed Systems (Page 61)
Ahmad Mizan (Carleton University)
Greg Franks (Carleton University)
- Hierarchical Performance Measurement and Modeling of the Linux File System (Page 73)
Hai Nguyen (University of Arkansas)
Amy Apon (University of Arkansas)
- Real-World Performance Modelling of Enterprise Service Oriented Architectures: Delivering Business Value with Complexity and Constraints (Page 85)
Paul Brebner (NICTA / ANU)
- Correct Router Interface Modeling (Page 97)
Krzysztof Rusek (AGH University of Science and Technology)
Lucjan Janowski (AGH University of Science and Technology)
Zdzisław Papir (AGH University of Science and Technology)
- A PMIF with Petri Net Building Blocks (Page 103)
(Universitat de les Illes Balears)
Peter G. Harrison (Imperial College London)
- Session 3: Performance and Energy Reduction - Part 1
- Adaptive Workload Shaping for Power Savings on Disk Drives (Page 109)
Xenia Mountrouidou (College of William and Mary)
Alma Riska (College of William and Mary)
Evgenia Smirni (College of William and Mary)
- Fluid Analysis of Energy Consumption Using Rewards in Massively Parallel Markov Models (Page 121)
Anton Stefanek (Imperial College London)
Richard A. Hayden (Imperial College London)
Jeremy T. Bradley (Imperial College London)
- Assessment of High-Performance Smart Metering for the Web Service Enabled Smart Grid (Page 133)
Stamatis Karnouskos (SAP Research)
Per Goncalves da Silva (SAP Research)
Dejan Ilic (SAP Research)
- The Design and Development of the Server Efficiency Rating Tool (SERT) (Page 145)
Klaus-Dieter Lange (Hewlett-Packard Company)
Michael G. Tricker (Microsoft Corporation)
- Metric-Based Selection of Timer Methods for Accurate Measurements (Page 151)
Michael Kuperberg (Karlsruhe Institute of Technology)
Martin Krogmann (Karlsruhe Institute of Technology)
Ralf Reussner (Karlsruhe Institute of Technology)
- Session 4: Adaptive Systems
- Integrated Estimation and Tracking of Performance Model Parameters with Autoregressive Trends (Page 157)
Tao Zheng (Carleton University)
Marin Litoiu (York University)
Murray Woodside (Carleton University)
- Adaptive Run-Time Performance Optimization Through Scalable Client Request Rate Control (Page 167)
Guenther Starnberger (Vienna University of Technology)
Lorenz Froihofer (Vienna University of Technology)
Karl M. Goeschka (Vienna University of Technology)
- Tracking Adaptive Performance Models Using Dynamic Clustering of User Classes (Page 179)
Hamoun Ghanbari (York University)
Cornel Barna (York University)
Marin Litoiu (York University)
Murray Woodside (Carleton University)
Tao Zheng (University of Waterloo)
Johnny Wong (University of Waterloo)
Gabriel Iszlai (IBM Toronto Lab)
- Dynamic Selection of Implementation Variants of Sequential Iterated Runge-Kutta Methods with Tile Size Sampling (Page 189)
Natalia Kalinnik (University of Bayreuth)
Matthias Korch (University of Bayreuth)
Thomas Rauber (University of Bayreuth)
- Performance Sensitive Self-Adaptive Service-Oriented Software Using Hidden Markov Models (Page 201)
Diego Perez-Palacin (Universidad de Zaragoza)
(Universidad de Zaragoza)
- Session 5: Performance and Energy Reduction - Part 2
- Energy-Delay Based Provisioning for Large Datacenters: An Energy-Efficient and Cost Optimal Approach (Page 207)
Sriram Sankar (Microsoft Corporation)
Kushagra Vaid (Microsoft Corporation)
Harry Rogers (Microsoft Corporation)
- Optimizing Benchmark Configurations for Energy Efficiency (Page 217)
Meikel Poess (Oracle Corporation)
Raghunath Nambiar (Cisco Systems, Inc.)
Kushagra Vaid (Microsoft)
- Power and Energy-Aware Processor Scheduling (Page 227)
Luigi Brochard (IBM Systems and Technology Group)
Raj Panda (IBM Systems and Technology Group)
Don DeSota (IBM Systems and Technology Group)
Francois Thomas (IBM Systems and Technology Group)
Robert H. Bell, Jr. (IBM Systems and Technology Group)
- Towards More Effective Utilization of Computer Systems (Page 235)
Niklas Carlsson ()
Martin Arlitt (Hewlett-Packard Laboratories)
- Session 6: Large-scale and Distributed Systems
- MT-WAVE: Profiling Multi-Tier Web Applications (Page 247)
Anthony Arkles (University of Saskatchewan)
Dwight Makaroff (University of Saskatchewan)
- A Capacity Planning Process for Performance Assurance of Component-Based Distributed Systems (Page 259)
Nilabja Roy (Vanderbilt University)
Abhishek Dubey (Vanderbilt University)
Aniruddha Gokhale (Vanderbilt University)
Larry Dowdy (Vanderbilt University)
- A New Business Model for Massively Multiplayer Online Games
Vlad Nae (University of Innsbruck)
Radu Prodan (University of Innsbruck)
Alexandru Iosup (Delft University of Technology)
Thomas Fahringer (University of Innsbruck)
- MassConf: Automatic Configuration Tuning By Leveraging User Community Information
Wei Zheng (Rutgers University)
Ricardo Bianchini (Rutgers University)
Thu D. Nguyen (Rutgers University)
- Global Cost Diversity Aware Dispatch Algorithm for Heterogeneous Data Centers
Ananth N. Sankaranarayanan (Simon Fraser University)
Somsubhra Sharangi (Simon Fraser University)
Alexandra Fedorova (Simon Fraser University)
- Session 7: Virtualized Environments
- IO Performance Prediction in Consolidated Virtualized Environments (Page 295)
Stephan Kraft (SAP Research)
Giuliano Casale (Imperial College London)
Diwakar Krishnamurthy (University of Calgary)
Des Greer (Queen's University Belfast)
Peter Kilpatrick (Queen's University Belfast)
- Virt-LM: A Benchmark for Live Migration of Virtual Machine (Page 307)
Dawei Huang (Zhejiang University)
Deshi Ye (Zhejiang University)
Qinming He (Zhejiang University)
Jianhai Chen (Zhejiang University)
Kejiang Ye (Zhejiang University)
- Dynamic VM Migration: Assessing Its Risks & Rewards Using a Benchmark (Page 317)
Krishnamurthy Srinivasan (Intel Corporation)
Sterlan Yuuw (Intel Corporation)
Tom J. Adelmeyer (Intel Corporation)
- Performance Evaluation for Software Migration (Page 323)
Issam Al-Azzoni (INRIA)
Lei Zhang (McMaster University)
Douglas G. Down (McMaster University)
- Modular Performance Modelling for Mobile Applications (Page 329)
Niaz Arijo (University of Leicester)
Reiko Heckel (University of Leicester)
Mirco Tribastone ()
Stephen Gilmore (University of Edinburgh)
- Session 8: Measurements and Benchmarks - Part 1
- RMS-TM: A Comprehensive Benchmark Suite for Transactional Memory Systems (Page 335)
Gokcen Kestor (Barcelona Supercomputing Center)
Vasileios Karakostas (Barcelona Supercomputing Center)
Osman S. Unsal (Barcelona Supercomputing Center)
Adrian Cristal (IIIA - Artificial Intelligence Research Institute CSIC - Spanish National Research Council)
Ibrahim Hur (Barcelona Supercomputing Center)
Mateo Valero ()
- Automatic Estimation of Performance Requirements for Software Tasks of Mobile Devices (Page 347)
Simon Schwarzer (University of Bonn)
Patrick Peschlow (University of Bonn)
Lukas Pustina (University of Bonn)
Peter Martini (University of Bonn)
- Improving the Efficiency of Information Collection and Analysis in Widely-used IT Applications (Page 359)
Sergey Blagodurov (Simon Fraser University)
Martin Arlitt (Hewlett-Packard Laboratories)
- A Little Language for Rapidly Constructing Automated Performance Tests (Page 371)
Shaun Dunning (NetApp, Inc.)
Darren Sawyer (NetApp, Inc.)
- Session 9: Measurements and Benchmarks - Part 2
- Workload Characterization of Cryptography Algorithms for Hardware Acceleration (Page 381)
Jed Kao-Tung Chang (University of California, Irvine)
Chen Liu (Florida International University)
Shaoshan Liu (Microsoft Corp.)
Jean-Luc Gaudiot (University of California, Irvine)
- Characterization, Monitoring and Evaluation of Operational Performance Trends on Server Processor Hardware (Page 391)
Ernest Sithole (University of Ulster)
Sally McClean (University of Ulster)
Bryan Scotney (University of Ulster)
Gerard Parr (University of Ulster)
Adrian Moore (University of Ulster)
Stephen Dawson (SAP Research)
- Instrumentation-Based Tool for Latency Measurements (Page 403)
(VTT Technical Research Centre of Finland)
Jarmo Prokkola (VTT Technical Research Centre of Finland)
Ali Lattunen (VTT Technical Research Centre of Finland)
- Poster Session
- Analysing the Fidelity of Measurements Performed with Hardware Performance Counters (Page 413)
Michael Kuperberg (Karlsruhe Institute of Technology)
Ralf Reussner (Karlsruhe Institute of Technology)
- Reusable QoS Specifications for Systematic Component-based Design (Page 415)
Lucia Kapova (Karlsruhe Institute of Technology)
- Benchmarking Database Design for Mixed OLTP and OLAP Workloads (Page 417)
Anja Bog (Hasso Plattner Institute, University of Potsdam)
Kai Sachs (SAP AG)
Alexander Zeier (Hasso Plattner Institute, University of Potsdam)
- A New Approach to Introduce Aspects in Software Architecture (Page 419)
Khider Hadjer (Saad Dahlab University)
Bennouar Djamal (Saad Dahlab University)
- Performance Cockpit: Systematic Measurements and Analyses (Page 421)
Dennis Westermann (SAP Research)
Jens Happe (SAP Research)
- FORGE: Friendly Output to Results Generator Engine (Page 423)
(Universitat de les Illes Balears)
(Universitat de les Illes Balears)
R. Puigjaner (Universitat de les Illes Balears)
Connie U. Smith (Performance Engineering Services)
- Tutorials
- Analyzing Measurements from Data with Underlying Dependences and Heavy-Tailed Distributions (Page 425)
Natalia M. Markovich (Russian Academy of Sciences)
Udo R. Krieger (Otto Friedrich University)
- Performance Engineering with Product-Form Models: Efficient Solutions and Applications (Page 437)
Simonetta Balsamo ()
Andrea Marin ()
- Quantitative System Evaluation with Java Modeling Tools (Page 449)
Giuliano Casale (Imperial College London)
Giuseppe Serazzi (Politecnico di Milano)
- Works-in-Progress
- Work-In-Progress Chairs' Welcome Message (Page 455)
David J. Lilja (University of Minnesota)
Raffaela Mirandola (Politecnico di Milano)
- In Search for Contention-Descriptive Metrics in HPC Cluster Environment (Page 457)
Sergey Blagodurov (Simon Fraser University)
Alexandra Fedorova (Simon Fraser University)
- Automatic Performance Model Synthesis From Hardware Verification Models (Page 463)
Robert H. Bell Jr. (IBM)
Matyas Sustik (IBM)
David W. Cummings (IBM)
Jonathan R. Jackson (IBM)
- Engineering SSL-Based Systems for Enhancing System Performance (Page 469)
Norman Lim (Carleton University)
Shikharesh Majumdar (Carleton University)
Vineet Srivastava (Cistech Limited)
- Performance Modeling of Distributed Collaboration Services (Page 475)
Toqeer Israr (University of Ottawa)
Gregor v. Bochmann (University of Ottawa)
- On-Line Analysis of Hardware Performance Events for Workload Characterization and Processor Frequency Scaling Decisions (Page 481)
()
Daniel Hackenberg ()
- NAT/Firewall Traversal Cost Model for Publish-Subscribe Systems (Page 487)
Debmalya Biswas (Nokia Research)
Kathryn Bean (SAP Business Objects)
Florian Kerschbaum (SAP Research)
- Combined Profiling: Practical Collection of Feedback Information for Code Optimization (Page 493)
Paul Berube (University of Alberta)
Adam Preuss (University of Alberta)
Jose Nelson Amaral (University of Alberta)
- Towards Studying the Performance Effects of Design Patterns for Service Oriented Architecture (Page 499)
Nariman Mani (Carleton University)
Dorina C. Petriu (Carleton University)
Murray Woodside (Carleton University)
- Using Observation Ageing to Improve Markovian Model Learning in QoS Engineering (Page 505)
Radu Calinescu (Aston University)
Kenneth Johnson (Aston University)
Yasmin Rafiq (Aston University)
Keynote 1
Proprietary Code to Non-Proprietary Benchmarks: Synthesis Techniques for Scalable Benchmarks
Authors:
Lizy Kurian John (The University of Texas at Austin)
Abstract:
Real-world applications constitute intellectual property, and the need to disclose proprietary software to hardware designers makes the simultaneous design of hardware and software very difficult. Consider a smart phone for which applications are developed by various third parties, or a military system in which the classified applications are developed in-house while the hardware is procured from standard vendors. Hardware that delivers good performance and low power can be designed only if the hardware designers have access to the software, so that they can understand its features and tune the hardware to its characteristics. While non-disclosure agreements and legal arrangements can partly solve the problem, it would be far more convenient to have a mechanism for creating proxies of proprietary benchmarks that share the performance (and power) characteristics of the source, but not its functionality.
In our past research, we created a benchmark synthesis process for early design exploration. The process constructs a proxy workload that possesses approximately the same performance and power characteristics as the original workload [1-3]. The synthesis comprises two steps: (1) profiling the real-world proprietary workload to measure its inherent behavior characteristics, and (2) modeling the measured workload attributes into a synthetic benchmark program. The set of workload characteristics can be thought of as a signature that uniquely describes the workload's inherent behavior, independent of the microarchitecture. The cloned code has no functionality and cannot be reverse engineered to recover the original code or algorithms. The cloned software can therefore be freely released to hardware developers, who can optimize the hardware they deliver to their clients for improved performance.
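To make the two-step flow concrete, here is a minimal sketch in Python. The signature fields (instruction mix, branch bias, memory stride) and the code-generation rules are simplified assumptions for illustration only; the actual research measures a much richer set of microarchitecture-independent attributes.

```python
# A toy version of the two-step flow: (1) profile a workload into a
# microarchitecture-independent signature, (2) generate synthetic code
# that matches the signature but carries none of the original logic.
# All attribute names and generation rules here are illustrative only.

from dataclasses import dataclass
import random

@dataclass
class WorkloadSignature:
    instruction_mix: dict     # e.g. {"int_alu": 0.5, "load": 0.3, "branch": 0.2}
    branch_taken_rate: float  # fraction of branches that are taken
    mem_stride: int           # dominant memory-access stride, in elements

def profile(trace, branch_taken_rate=0.6, mem_stride=4):
    """Step 1: reduce an instruction trace to a compact signature."""
    mix = {op: trace.count(op) / len(trace) for op in set(trace)}
    return WorkloadSignature(mix, branch_taken_rate, mem_stride)

def synthesize(sig, n_ops=12, seed=0):
    """Step 2: emit a synthetic C loop body matching the signature.

    The output has the statistical behavior of the source workload but
    no functionality, so it can be shared without disclosing IP.
    """
    rng = random.Random(seed)
    ops, weights = zip(*sig.instruction_mix.items())
    taken_pct = int(sig.branch_taken_rate * 100)
    stmts = []
    for i in range(n_ops):
        op = rng.choices(ops, weights)[0]
        if op == "int_alu":
            stmts.append(f"acc = acc * {rng.randint(3, 9)} + {i};")
        elif op == "load":
            stmts.append(f"acc += buf[({i} + i * {sig.mem_stride}) % N];")
        else:  # branch, biased to the profiled taken rate
            stmts.append(f"if ((i * 37) % 100 < {taken_pct}) acc ^= i;")
    return "for (int i = 0; i < ITER; i++) {\n  " + "\n  ".join(stmts) + "\n}"

# Example: a trace that is 50% integer ALU, 30% loads, 20% branches.
trace = ["int_alu"] * 50 + ["load"] * 30 + ["branch"] * 20
print(synthesize(profile(trace)))
```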
This talk will describe the benchmark synthesis process as well as the status of the research. The process of constructing synthetic proxies that approximately resemble the original proprietary application will be explained. Synthesis for multicore and multithreaded processors will be described.
Keynote 2
Performance Analysis of Domain Specific Visual Models
Authors:
Antonio Vallecillo (Universidad de Málaga)
Abstract:
Domain specific visual languages (DSVLs) play a key role in Model-Driven Engineering. They allow domain experts to develop and manipulate models of their systems using intuitive, graphical notations that are much closer to their domain languages and at the right level of abstraction.
DSVLs are normally equipped with supporting toolkits that include editors, checkers, and code-generation facilities. Many DSVLs also allow the specification of the behavioral dynamics of systems, beyond their basic structure. However, there is still a need to model, simulate, and analyze other critical aspects of systems, such as their non-functional properties. In particular, QoS usage and management constraints (performance, reliability, etc.) are essential characteristics of any non-trivial system and cannot be neglected. Current proposals for specifying such properties tend to sit at a lower level of abstraction than most end-user domain-specific models require, and they normally demand expert knowledge of specialized languages and notations (such as MARTE). These requirements clash with the intuitive nature of end-user DSVLs and hinder the smooth combination of the two.
In this talk we present an approach to specifying QoS properties in DSVLs, and show how it enables different kinds of analysis of the performance and reliability of the systems being specified. We also discuss the strategic role that model transformations play in this context, the opportunities they provide, and the challenges they currently face in bridging the different semantic and technological domains involved in the specification and analysis of systems.
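As a purely illustrative sketch of the general idea of annotating domain model elements with QoS properties and deriving performance measures from them: the property names and the M/M/1 analysis below are assumptions made for the example, not the notation or toolchain presented in the talk.

```python
# Illustrative only: a domain model element annotated with QoS
# properties, analyzed here as an M/M/1 queue. In practice a model
# transformation would extract such elements from the DSVL diagram.

from dataclasses import dataclass

@dataclass
class Station:
    name: str
    arrival_rate: float  # requests/second offered to this element
    service_rate: float  # requests/second it can process

    def analyze(self):
        """Derive utilization and mean response time (M/M/1 formulas)."""
        if self.arrival_rate >= self.service_rate:
            return {"name": self.name, "stable": False}
        rho = self.arrival_rate / self.service_rate
        resp = 1.0 / (self.service_rate - self.arrival_rate)
        return {"name": self.name, "stable": True,
                "utilization": rho, "mean_response_s": resp}

# A toy "visual model" reduced to its annotated elements:
model = [Station("ticket-kiosk", arrival_rate=2.0, service_rate=5.0),
         Station("payment-gateway", arrival_rate=2.0, service_rate=2.5)]

for element in model:
    print(element.analyze())
```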
Industrial Invited Talk
Performance Modeling in MapReduce Environments: Challenges and Opportunities
Authors:
Ludmila Cherkasova (Hewlett-Packard Laboratories)
Abstract:
Unstructured data is the largest and fastest growing portion of most enterprises' assets, often representing 70% to 80% of online data. This steep increase in the volume of information being produced often exceeds the capabilities of existing commercial databases. MapReduce and its open-source implementation Hadoop represent an economically compelling alternative that offers an efficient distributed computing platform for handling large volumes of data and mining petabytes of unstructured information. It is increasingly being used across the enterprise for advanced data analytics, business intelligence, and new applications associated with data retention, regulatory compliance, e-discovery, and litigation.
However, setting up a dedicated Hadoop cluster requires a significant capital expenditure that can be difficult to justify. Cloud computing offers a compelling alternative that allows users to rent resources in a "pay-as-you-go" fashion; for example, the list of offered Amazon Web Services includes a MapReduce environment for rent. It is an attractive and cost-efficient option for many users, because acquiring and maintaining a complex, large-scale infrastructure is a difficult and expensive undertaking. One of the open questions in such environments is how many resources a user should lease from the service provider. Currently, there is no methodology for answering this question easily, and the task of estimating the resources required to meet application performance goals is solely the user's responsibility. Users need to perform adequate application testing, performance evaluation, and capacity planning, and then request the appropriate amount of resources from the service provider. To address these problems we need to understand: "What do we need to know about a MapReduce job to build an efficient and accurate modeling framework? Can we extract a representative job profile that reflects a set of critical performance characteristics of the underlying application during all job execution phases, i.e., the map, shuffle, sort, and reduce phases? What metrics should be included in the job profile?" We discuss a profiling technique for MapReduce applications that aims to construct a compact job profile comprised of performance invariants that are independent of the amount of resources assigned to the job (i.e., the size of the Hadoop cluster) and of the size of the input dataset. The challenge is to accurately predict application performance in a large production environment processing large datasets, using application executions that run in a smaller staging environment and process smaller input datasets.
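For intuition, the following sketch shows how per-phase invariants from a job profile might feed a completion-time estimate. The numbers are assumed, and the formulas are the classic makespan bounds for n tasks on k slots; this illustrates the idea rather than reproducing the authors' actual framework.

```python
# Sketch: estimating job completion time from a compact profile of
# per-phase task durations. The bounds are the classic makespan bounds
# for n tasks on k slots; the profile values are assumed for the example.

def phase_bounds_s(n_tasks, k_slots, avg_s, max_s):
    """The makespan of n_tasks on k_slots lies between these two bounds
    for any greedy (Hadoop-like) task assignment."""
    lower = n_tasks * avg_s / k_slots
    upper = (n_tasks - 1) * avg_s / k_slots + max_s
    return lower, upper

def job_completion_bounds_s(profile, n_map, n_reduce, map_slots, reduce_slots):
    lo_m, up_m = phase_bounds_s(n_map, map_slots, **profile["map"])
    lo_r, up_r = phase_bounds_s(n_reduce, reduce_slots, **profile["reduce"])
    return lo_m + lo_r, up_m + up_r

# Durations measured once on a small staging cluster; treated as
# invariants w.r.t. cluster size and (linearly scaled) input size.
profile = {"map":    {"avg_s": 18.0, "max_s": 30.0},
           "reduce": {"avg_s": 45.0, "max_s": 70.0}}  # incl. shuffle/sort

lo, up = job_completion_bounds_s(profile, n_map=400, n_reduce=40,
                                 map_slots=64, reduce_slots=32)
print(f"predicted completion time: {lo:.0f}s to {up:.0f}s")
```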
One of Hadoop's major benefits is its ability to deal with failures (disk, process, and node failures) while still allowing the user's job to complete. The performance implications of failures depend on their type, on when they happen, and on whether the system can offer the running jobs spare resources in place of the failed ones. We discuss how to enhance the MapReduce performance model to evaluate the impact of failures on job completion time and to predict the potential performance degradation.
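A first-order illustration of how such an enhancement might look, under a deliberately simplified failure model with assumed numbers: work finished before the failure is kept, the failed node's in-flight tasks are redone, and the remainder runs on the surviving slots.

```python
# First-order sketch of a node failure's impact on the map phase.
# The model (and all numbers) are illustrative assumptions only.

def map_phase_with_failure(n_tasks, k_slots, avg_task_s, slots_per_node, t_fail):
    total_work = n_tasks * avg_task_s          # slot-seconds of map work
    done = min(total_work, k_slots * t_fail)   # completed before the failure
    redo = slots_per_node * avg_task_s / 2     # lost in-flight work (half-done on avg)
    surviving = k_slots - slots_per_node       # assume no spare node steps in
    return t_fail + (total_work - done + redo) / surviving

# Without a failure this phase takes ~400*18/64 = 112.5s; losing a
# 4-slot node at t=60s stretches it to roughly 117s here.
print(map_phase_with_failure(n_tasks=400, k_slots=64, avg_task_s=18.0,
                             slots_per_node=4, t_fail=60.0))
```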
Sharing a MapReduce cluster among multiple applications is a common practice in such environments. A key challenge in these shared environments, however, is the ability to tailor and control resource allocations to different applications so that they achieve their performance goals and service level objectives (SLOs). Currently, there is no job scheduler for MapReduce environments that, given a job completion deadline, could allocate the appropriate amount of resources to the job so that it meets the required SLO. In MapReduce environments, many production jobs are run periodically on new data. For example, Facebook, Yahoo!, and eBay process terabytes of data and event logs per day on their Hadoop clusters for spam detection, business intelligence, and various kinds of optimization. For production jobs that are routinely executed on new datasets, can we build online job profiles that are later used by the job scheduler for resource allocation and performance management? We discuss the opportunities and challenges in building such an SLO-based Hadoop scheduler.
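To make the scheduler's question concrete, the sketch below inverts a bound-based estimate: it searches for the smallest slot allocation whose worst-case completion time still meets the deadline. The single shared slot pool and the bound formula are simplifying assumptions, not the actual scheduler design.

```python
# Sketch of the SLO-driven allocation question: find the smallest slot
# allocation whose upper completion-time bound meets the deadline.

def upper_bound_s(n_tasks, k_slots, avg_s, max_s):
    """Worst-case makespan of n_tasks greedily scheduled on k_slots."""
    return (n_tasks - 1) * avg_s / k_slots + max_s

def min_slots_for_deadline(phases, deadline_s, max_slots=512):
    """phases: (n_tasks, avg_s, max_s) per phase, all using k slots."""
    for k in range(1, max_slots + 1):
        total = sum(upper_bound_s(n, k, avg, mx) for n, avg, mx in phases)
        if total <= deadline_s:
            return k
    return None  # the SLO is unachievable within the cluster limits

# Map and reduce phase invariants taken from a (hypothetical) job profile:
phases = [(400, 18.0, 30.0), (40, 45.0, 70.0)]
print(min_slots_for_deadline(phases, deadline_s=600.0))  # -> 18 slots
```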
The accuracy of the new performance models may depend on resource contention, especially network contention, in the production Hadoop cluster. Typically, service providers tend to over-provision network resources to avoid the undesirable side effects of network contention. At the same time, it is an interesting modeling question whether such a network contention factor can be introduced, measured, and incorporated into the MapReduce performance model. Benchmarking Hadoop, optimizing cluster parameter settings, designing job schedulers with different performance objectives, and constructing intelligent workload management for shared Hadoop clusters create an exciting list of challenges and opportunities for performance analysis and modeling in MapReduce environments.