Friday, 13 August 2021

ICPE 2011 Proceedings

Keynote 1

Proprietary Code to Non-Proprietary Benchmarks: Synthesis Techniques for Scalable Benchmarks

Authors:

Lizy Kurian John (The University of Texas at Austin)

Abstract:

Real-world applications constitute intellectual property, and the simultaneous design of hardware and software is made very difficult by the need to disclose proprietary software to hardware designers. Consider a smart phone whose applications are developed by various third parties, or a military system where classified applications are developed in-house while the hardware is procured from standard vendors. Hardware that delivers good performance and low power can be designed if hardware designers have access to the software, so that they can understand its features and tune various hardware features to its characteristics. While non-disclosure agreements and legal arrangements can partly solve the problem, it would be much more convenient to have a mechanism for creating proxies of proprietary benchmarks that have the performance (and power) characteristics of the source, but not its functionality.

In our past research, we created a benchmark synthesis process for early design exploration. The benchmark synthesis process consists of constructing a proxy workload that possesses approximately the same performance and power characteristics as the original workload [1-3]. The synthesis comprises two steps: (1) profiling the real-world proprietary workload to measure its inherent behavior characteristics, and (2) modeling the measured workload attributes into a synthetic benchmark program. The set of workload characteristics can be thought of as a signature that uniquely describes the workload's inherent behavior, independent of the microarchitecture. The cloned code in fact has no functionality and cannot be reverse engineered to recover the original code or algorithms. The cloned software can therefore be freely released to hardware developers, so that they can optimize the hardware they deliver to their clients for improved performance.
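The two-step process can be sketched in miniature. The toy Python example below is an illustration only: the real synthesis operates on a much richer microarchitecture-independent signature (instruction mix, branch behavior, memory access patterns, dependency distances), whereas this sketch profiles just an instruction-type mix and emits a functionality-free synthetic stream that statistically matches it.

```python
import random

def profile_workload(trace):
    """Step 1: reduce a workload trace to a microarchitecture-independent
    signature. Here the signature is simply the instruction-type mix;
    a real profile would capture many more inherent characteristics."""
    counts = {}
    for op in trace:
        counts[op] = counts.get(op, 0) + 1
    total = len(trace)
    return {op: n / total for op, n in counts.items()}

def synthesize_proxy(signature, length, seed=0):
    """Step 2: emit a synthetic instruction stream whose statistical mix
    matches the signature but which carries none of the original
    program's functionality (ordering and values are random)."""
    rng = random.Random(seed)
    ops = list(signature)
    weights = [signature[op] for op in ops]
    return rng.choices(ops, weights=weights, k=length)

# Hypothetical proprietary trace: 40% loads, 35% ALU, 15% stores, 10% branches.
trace = ["load"] * 40 + ["alu"] * 35 + ["store"] * 15 + ["branch"] * 10
sig = profile_workload(trace)
proxy = synthesize_proxy(sig, length=1000)
```

Only `sig` (the signature) and the generated `proxy` would need to leave the owner's premises; the original `trace` never does.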

This talk will describe the benchmark synthesis process as well as the status of the research. The process of constructing synthetic proxies that approximately resemble the original proprietary application will be explained. Synthesis for multicore and multithreaded processors will be described.

DOI: 10.1145/1958746.1958748

Full text: PDF


Keynote 2

Performance Analysis of Domain Specific Visual Models

Authors:

Antonio Vallecillo (Universidad de Málaga)

Abstract:

Domain specific visual languages (DSVLs) play a key role in Model-Driven Engineering. They allow domain experts to develop and manipulate models of their systems using intuitive graphical notations, much closer to their domain languages and at the right level of abstraction.

DSVLs are normally equipped with supporting toolkits that include editors, checkers, and code generation facilities. Many DSVLs also allow the specification of the behavioral dynamics of systems, beyond their basic structure. However, there is still a need to model, simulate, and analyze other critical aspects of systems, such as their non-functional properties. In particular, QoS usage and management constraints (performance, reliability, etc.) are essential characteristics of any non-trivial system and cannot be neglected. Current proposals for the specification of such properties tend to remain at a lower level of abstraction than most end-user domain-specific models require, and normally demand expert knowledge of specialized languages and notations (such as MARTE). These problems clash with the intuitive nature of end-user DSVLs and hinder a smooth combination of the two.

In this talk we present an approach to specify QoS properties in DSVLs, and show how it enables different kinds of analysis of the performance and reliability of the systems being specified. We also discuss the strategic role that model transformations play in this context, the opportunities they provide, and their current challenges for bridging the different semantic and technological domains involved in the specification and analysis of systems.

DOI: 10.1145/1958746.1958750

Full text: PDF


Industrial Invited Talk

Performance Modeling in MapReduce Environments: Challenges and Opportunities

Authors:

Ludmila Cherkasova (Hewlett-Packard Laboratories)

Abstract:

Unstructured data is the largest and fastest growing portion of most enterprises' assets, often representing 70% to 80% of online data. This steep increase in the volume of information being produced often exceeds the capabilities of existing commercial databases. MapReduce and its open-source implementation Hadoop represent an economically compelling alternative that offers an efficient distributed computing platform for handling large volumes of data and mining petabytes of unstructured information. It is increasingly being used across the enterprise for advanced data analytics, business intelligence, and enabling new applications associated with data retention, regulatory compliance, e-discovery, and litigation issues.

However, setting up a dedicated Hadoop cluster requires a significant capital expenditure that can be difficult to justify. Cloud computing offers a compelling alternative, allowing users to rent resources in a "pay-as-you-go" fashion. For example, the list of Amazon Web Services offerings includes a MapReduce environment for rent. This is an attractive and cost-efficient option for many users, because acquiring and maintaining a complex, large-scale infrastructure is a difficult and expensive undertaking. One of the open questions in such environments is how many resources a user should lease from the service provider. Currently, there is no available methodology to easily answer this question, and the task of estimating the resources required to meet application performance goals is solely the user's responsibility: the user needs to perform adequate application testing, performance evaluation, and capacity planning, and then request an appropriate amount of resources from the service provider. To address these problems we need to understand: "What do we need to know about a MapReduce job to build an efficient and accurate modeling framework? Can we extract a representative job profile that reflects a set of critical performance characteristics of the underlying application across all job execution phases, i.e., the map, shuffle, sort, and reduce phases? What metrics should be included in the job profile?" We discuss a profiling technique for MapReduce applications that aims to construct a compact job profile comprised of performance invariants which are independent of the amount of resources assigned to the job (i.e., the size of the Hadoop cluster) and of the size of the input dataset. The challenge is to accurately predict application performance in the large production environment, processing large datasets, from application executions that are run in a smaller staging environment on smaller input datasets.
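One simple way such phase-level invariants can drive a completion-time estimate is through classic makespan bounds for n tasks executed greedily on k parallel slots. The sketch below is an illustration under that assumption, not the talk's actual modeling framework; the profile numbers are invented.

```python
def phase_bounds(num_tasks, avg_dur, max_dur, slots):
    """Lower/upper bounds on the completion time of one phase when
    num_tasks tasks run greedily on `slots` parallel slots.
    Classic makespan bounds:
      lower = n * avg / slots
      upper = (n - 1) * avg / slots + max"""
    lower = num_tasks * avg_dur / slots
    upper = (num_tasks - 1) * avg_dur / slots + max_dur
    return lower, upper

def job_completion_bounds(profile, map_slots, reduce_slots):
    """profile: per-phase (num_tasks, avg, max) tuples measured in a
    small staging run; these durations act as the resource-independent
    invariants, while slot counts vary with the rented cluster size."""
    m_lo, m_hi = phase_bounds(*profile["map"], map_slots)
    r_lo, r_hi = phase_bounds(*profile["reduce"], reduce_slots)
    return m_lo + r_lo, m_hi + r_hi

# Hypothetical profile: 200 map tasks (avg 12 s, max 20 s),
# 50 reduce tasks (avg 30 s, max 45 s).
profile = {"map": (200, 12.0, 20.0), "reduce": (50, 30.0, 45.0)}
lo, hi = job_completion_bounds(profile, map_slots=40, reduce_slots=10)
```

Because the per-task durations stay fixed while the slot counts change, the same profile yields predictions for any rented cluster size.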

One of the major benefits of Hadoop is its ability to deal with failures (disk, process, and node failures) while still allowing the user's job to complete. The performance implications of failures depend on their type, when they happen, and whether the system can offer spare resources to the running jobs in place of the failed ones. We discuss how to enhance the MapReduce performance model to evaluate the impact of failures on job completion time and to predict potential performance degradation.

Sharing a MapReduce cluster among multiple applications is a common practice in such environments. However, a key challenge in these shared environments is the ability to tailor and control resource allocations to different applications so that they achieve their performance goals and service level objectives (SLOs). Currently, there is no job scheduler for MapReduce environments that, given a job completion deadline, could allocate the appropriate amount of resources to the job so that it meets the required SLO. In MapReduce environments, many production jobs are run periodically on new data. For example, Facebook, Yahoo!, and eBay process terabytes of data and event logs per day on their Hadoop clusters for spam detection, business intelligence, and different types of optimization. For production jobs that are routinely executed on new datasets, can we build on-line job profiles that are later used for resource allocation and performance management by the job scheduler? We discuss opportunities and challenges for building such an SLO-based Hadoop scheduler.
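As a hypothetical illustration of how a deadline could be turned into a resource allocation, one can invert a conservative per-phase completion bound and search for the smallest slot allocation that meets the SLO. All names, numbers, and the fixed-reduce-slot simplification below are assumptions of this sketch, not the scheduler discussed in the talk.

```python
def upper_bound(num_tasks, avg_dur, max_dur, slots):
    # Conservative makespan bound for num_tasks tasks on `slots` slots.
    return (num_tasks - 1) * avg_dur / slots + max_dur

def min_map_slots(profile, deadline, reduce_slots, max_slots=500):
    """Smallest map-slot allocation whose conservative completion-time
    estimate meets the deadline (reduce slots held fixed for simplicity)."""
    r_hi = upper_bound(*profile["reduce"], reduce_slots)
    for m in range(1, max_slots + 1):
        if upper_bound(*profile["map"], m) + r_hi <= deadline:
            return m
    return None  # deadline unreachable within max_slots

# Same hypothetical profile as before; SLO deadline of 300 seconds.
profile = {"map": (200, 12.0, 20.0), "reduce": (50, 30.0, 45.0)}
slots = min_map_slots(profile, deadline=300.0, reduce_slots=10)
```

Because the bound is conservative (an upper estimate), an allocation chosen this way should meet the deadline with some slack rather than just barely.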

The accuracy of the new performance models might depend on resource contention, especially network contention, in the production Hadoop cluster. Typically, service providers tend to over-provision network resources to avoid the undesirable side effects of network contention. At the same time, it is an interesting modeling question whether such a network contention factor can be introduced, measured, and incorporated into the MapReduce performance model. Benchmarking Hadoop, optimizing cluster parameter settings, designing job schedulers with different performance objectives, and constructing intelligent workload management in shared Hadoop clusters create an exciting list of challenges and opportunities for performance analysis and modeling in MapReduce environments.

DOI: 10.1145/1958746.1958752

Full text: PDF
