Tutorials

The following tutorials are planned for ISSRE 2014.

  1. Modern Web Applications' Reliability Engineering
    by Karthik Pattabiraman
    Monday, Nov. 3, 9:00am - Hotel RC - Aragonese Room
  2. More Reliable Software Faster and Cheaper: An Introduction to Software Reliability Engineering
    by Laurie Williams and Mladen Vouk - Slides
    Monday, Nov. 3, 2:00pm - Hotel RC - Aragonese Room
  3. Holistic Optimization of Power Distribution Automation Network Designs Using Survivability Modeling
    by Kishor S. Trivedi, Alberto Avritzer, and Daniel Sadoc Menasché
    Tuesday, Nov. 4, 2:00pm - Hotel RC - Normanna Room
  4. Data Science: Creation and Use of Predictive Software Models
    by Pete Rotella and Sunita Chulani
    Tuesday, Nov. 4, 2:00pm - Hotel RC - Sveva Room
  5. ODC - A 10x on Root Cause Analysis
    by Ram Chillarege
    Wednesday, Nov. 5, 9:00am - Hotel RC - Aragonese Room
  6. Advanced software reliability and availability models
    by Kishor S. Trivedi, Michael Grottke, Javier Alonso, and Allen P. Nikora
    Wednesday, Nov. 5, 2:00pm - Hotel RC - Aragonese Room
  7. Supporting architectural decisions through software quality optimization models
    by Vittorio Cortellessa and Pasqualina Potena
    Thursday, Nov. 6, 9:00am - Hotel RC - Catalana Room
  8. Effective security management: using case control studies to measure vulnerability risk
    by Luca Allodi and Fabio Massacci
    Thursday, Nov. 6, 9:00am - Hotel RC - Sveva Room

NEW: Earn the IEEE Reliability Society Certificate by attending two of the following three tutorials: #2, #5, or #6.



Modern Web Applications' Reliability Engineering
by Karthik Pattabiraman
Monday, Nov. 3, 9:00am - Hotel RC - Aragonese Room

Karthik Pattabiraman
University of British Columbia
Canada

Abstract

JavaScript is today the de facto client-side programming language for modern web applications, used extensively for interactivity and faster load times. For example, 97 of the top 100 Alexa websites use JavaScript, often running into thousands of lines of code. However, JavaScript is notorious for its difficult-to-analyze constructs and "laissez-faire" programming style, which make it challenging to build reliable web applications. This tutorial will present approaches to assess and improve the reliability of modern JavaScript-based web applications.
In the first part of the tutorial, we will present empirical studies on the reliability of modern web applications, through field data studies, bug databases and online fora such as StackOverflow. We will then proceed to explore automated tools and techniques for web applications' testing, understanding and fault mitigation (i.e., repair). Finally, we will conclude with some tools for building robust client-side web applications, and discuss open issues and research problems.

The target audience is web developers and testers who want exposure to the latest research ideas in the area, as well as researchers in software reliability who want to understand the current state of the art and the open issues in the reliability of modern web applications.

Presenter's biography

Karthik Pattabiraman is an assistant professor in the Electrical and Computer Engineering (ECE) department at the University of British Columbia (UBC). Karthik received his M.S. and PhD degrees from the University of Illinois at Urbana Champaign (UIUC). Karthik has been a post-doctoral researcher at Microsoft Research (MSR), Redmond. Karthik’s research spans the areas of web applications’ reliability, software fault tolerance for hardware faults and security. He was awarded the William Carter award for best student paper in DSN 2008, the best paper runner up award at ICST 2013, and the SIGSOFT distinguished paper award at ICSE 2014. He was recently general chair for PRDC 2013 in Vancouver, BC.


More Reliable Software Faster and Cheaper: An Introduction to Software Reliability Engineering
by Laurie Williams and Mladen Vouk
Monday, Nov. 3, 2:00pm - Hotel RC - Aragonese Room

Slides

Laurie Williams
North Carolina State University
USA

Mladen Vouk
North Carolina State University
USA

Abstract

This tutorial teaches the essentials of how to apply the best practice of Software Reliability Engineering (SRE) to your project. The knowledge gained will help you develop and test more reliable software faster and cheaper. Through the tutorial, participants will gain basic knowledge of how to:

  1. Define a software-based product you plan to develop in SRE terms
  2. Express relative use of a product's principal functions by developing operational profiles
  3. Employ operational profiles and criticality information to:
    1. Greatly increase efficiency of development and test by optimally distributing people resources, test cases, and test time over operations
    2. Invoke tests so as to represent field use much more accurately
    3. Plan feature release dates to better match customer needs
  4. Determine the reliability / availability your customers need for a product, making optimal tradeoffs with cost and time of delivery
  5. Engineer software reliability strategies to meet reliability / availability objectives more efficiently
  6. Identify failures during system test and process failure data to track reliability growth of and certify systems, guiding product release
  7. Assess applicability of SRE in the context of software security and cloud computing.
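Item 3a above can be sketched in a few lines. The following is a minimal illustration, with invented operations and occurrence probabilities, of distributing a fixed test budget in proportion to an operational profile:

```python
# Sketch: allocating a fixed test budget across operations in
# proportion to an operational profile (Musa-style SRE).
# The operations and probabilities below are hypothetical.

operational_profile = {
    "process_order": 0.55,    # relative occurrence probability in field use
    "query_status": 0.30,
    "update_account": 0.10,
    "generate_report": 0.05,
}

def allocate_tests(profile, total_tests):
    """Distribute test cases over operations proportionally to use."""
    return {op: int(round(p * total_tests)) for op, p in profile.items()}

allocation = allocate_tests(operational_profile, total_tests=200)
# process_order receives 110 of the 200 tests, query_status 60, etc.
```

In practice the profile is derived from field measurements or business forecasts, and critical but rarely used operations are weighted up before allocation.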


Presenters' biography

Dr. Laurie Williams is a certified instructor of Musa's More Reliable Software Faster and Cheaper course. Laurie is a professor in the Computer Science department at North Carolina State University (NCSU). She teaches software engineering and software security. Her research also involves software reliability engineering and testing, and agile software development. Laurie is a member of the NCSU Academy of Outstanding Teachers and a recipient of the ACM Special Interest Group on Software Engineering (SIGSOFT) Influential Educator Award. Prior to joining NCSU, she worked at IBM for nine years, including several years as a manager of a software testing department.

Dr. Mladen Vouk received his Ph.D. from King's College, University of London, U.K. He is Department Head and Professor of Computer Science, and Associate Vice Provost for Information Technology at N.C. State University, Raleigh, N.C., U.S.A. Dr. Vouk has extensive experience in both commercial software production and academic computing. He is the author or co-author of over 300 publications, and the author of an award-winning RAMS tutorial on Software Reliability Engineering. He regularly teaches an advanced course in Cloud Computing Technology. His research and development interests include software reliability and security engineering, cloud computing, and analytics.


Holistic Optimization of Power Distribution Automation Network Designs Using Survivability Modeling
by Kishor S. Trivedi, Alberto Avritzer, and Daniel Sadoc Menasché
Tuesday, Nov. 4, 2:00pm - Hotel RC - Normanna Room

Kishor S. Trivedi
Duke University
USA

Alberto Avritzer
Siemens Corporate Research
USA

Daniel Sadoc Menasché
Federal University of Rio de Janeiro
Brazil

Abstract

Smart grids are fostering a paradigm shift in the realm of power distribution systems. Whereas traditionally different components of the power distribution system have been provided and analyzed by different teams through different lenses, smart grids require a unified and holistic approach that takes into consideration the interplay of communication reliability, energy backup, distribution automation topology, energy storage and intelligent features such as automated failure detection, isolation and restoration (FDIR) and demand response.
The goal of this tutorial is to present analytical methods, models, and metrics for the survivability assessment of the distribution power grid network. The tutorial will have three parts:

  1. Survivability concepts and definitions: qualitative and quantitative definitions of survivability, its requirements, and its relationship to performance, dependability (reliability/availability), and security. We will present three different definitions of survivability given in the literature, and finally suggest a definition to be used for survivability quantification.
  2. Analytical methods and models: the analytical methods introduced in this tutorial are two-layer models that capture the cyber-physical characteristics of the system from failure up to repair: a Markov chain with reward rates associated to its states. The failure-handling process is modeled by the Markov chain, while the performance of the system is characterized by the reward rates.
  3. Metrics: we show that the proposed models yield traditional power system metrics such as SAIDI (system average interruption duration index) and its generalization. That way, we show how to predict the impact of different investment strategies on the system survivability.

Although the tutorial is driven by power grid applications, similar techniques are applicable to study the survivability of other cyber-physical infrastructures, such as gas and water networks.
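The reward-based computation of part 2 and the SAIDI metric of part 3 can be illustrated in miniature. In the sketch below, all rates, durations, and customer counts are invented; each recovery phase after a failure carries a reward equal to the number of customers still interrupted:

```python
# Sketch of a reward-based survivability computation: after a failure,
# the system passes through recovery phases (a simple absorbing chain);
# each phase has a mean sojourn time and a reward, here the number of
# customers still without power. All numbers are hypothetical.

failures_per_year = 2.0          # failure rate of the feeder section
total_customers = 1000

# (phase, mean duration in hours, customers interrupted during phase)
recovery_phases = [
    ("fault detection",            0.05, 1000),  # everyone is down
    ("automated isolation (FDIR)", 0.10,  400),  # upstream customers restored
    ("manual repair",              4.00,  100),  # only the faulted section down
]

# Expected customer-interruption-hours accumulated per failure:
# sum over phases of duration * customers interrupted (the "reward").
cih_per_failure = sum(d * c for _, d, c in recovery_phases)

# SAIDI: average interruption duration per customer served, per year.
saidi_hours = failures_per_year * cih_per_failure / total_customers
# here: 2 * (0.05*1000 + 0.10*400 + 4.0*100) / 1000 = 0.98 hours
```

Changing a design parameter, e.g. shortening the FDIR isolation phase or reducing the customers it leaves interrupted, immediately shows up as a SAIDI improvement, which is the kind of investment trade-off the tutorial quantifies.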


Presenters' biography

Kishor S. Trivedi holds the Hudson Chair in the Department of Electrical and Computer Engineering at Duke University, Durham, NC. His research interests are in reliability and performance assessment of computer and communication systems. He has published over 500 articles, lectured extensively on these topics and supervised 45 Ph.D. dissertations. He is a co-designer of the HARP, SAVE, SHARPE, SPNP and SREPT modeling packages which have been widely circulated. He is the author of Probability and Statistics with Reliability, Queuing and Computer Science Applications, 2nd edition, published by Wiley. He is a Fellow of the IEEE and a Golden Core Member of IEEE Computer Society. He is the recipient of IEEE Computer Society Technical Achievement Award for his research on Software Aging and Rejuvenation. He has presented many tutorials at conferences such as SIGMETRICS, DSN, ISSRE, ICC, RAMS and many more. He has also presented many such tutorials in industrial laboratories.

Alberto Avritzer received a Ph.D. in Computer Science from the University of California, Los Angeles, an M.Sc. in Computer Science from the Federal University of Minas Gerais, Brazil, and a B.Sc. in Computer Engineering from the Technion, Israel Institute of Technology. He is currently a Senior Member of the Technical Staff in the Software Engineering Department at Siemens Corporate Research, Princeton, New Jersey. Before moving to Siemens Corporate Research, he spent 13 years at AT&T Bell Laboratories, where he developed tools and techniques for performance testing and analysis. He spent the summer of 1987 at IBM Research, Yorktown Heights. His research interests are in software engineering, particularly software testing, monitoring and rejuvenation of smoothly degrading systems, and metrics to assess software architecture, and he has published over 50 papers in journals and refereed conference proceedings in those areas. He has presented tutorials at international conferences such as LADC and ACM ICPE. He is a Senior Member of the ACM.

Daniel Sadoc Menasché received his Ph.D. in computer science from the University of Massachusetts at Amherst. Currently, he is an Assistant Professor in the Computer Science Department at the Federal University of Rio de Janeiro, Brazil. His interests are in modeling, analysis and performance evaluation of computer systems. He was awarded the Yahoo! outstanding synthesis project award at UMass in 2010. Dr. Menasché co-authored papers that received best paper awards at Globecom'07, CoNEXT'09, and INFOCOM'13.


Data Science: Creation and Use of Predictive Software Models
by Pete Rotella and Sunita Chulani
Tuesday, Nov. 4, 2:00pm - Hotel RC - Sveva Room

Pete Rotella
Cisco
USA

Sunita Chulani
Cisco
USA

Abstract

High-performance models are needed to enable software practitioners to identify deficient (and superior) development and test practices. Even with standard practices, metrics, and goals, software development teams can, and do, vary substantially in practice adoption and effectiveness. One challenge for analysts in these organizations is to develop and implement mathematical models that adequately characterize the health of individual practices (such as code review, unit test, static analysis, function testing, etc.), enabling process and quality assurance groups to help engineering teams surgically repair broken practices or replace them with more effective and efficient ones.

In this tutorial, we will describe our experience with model building and implementation, and attempt to describe the boundaries within which certain types of models perform well. We will also address how to balance model generalizability and specificity in order to integrate computational methods into everyday engineering workflow.
An important part of the analysis effort needs to be the correlative 'linking' of the quality of development and test metrics to customer experience outcomes and then to customer sentiment (satisfaction). These linkages are essential not only in convincing engineering leadership to use computational tools in practice, but also in enabling investigators, at an early stage, to design experiments and pilots to test model applicability moving forward in time. After convincing experiments and pilots have been demonstrated, much work remains: choosing a useful, but manageable, set of metrics; establishing goals and tracking/reporting mechanisms; planning and implementing the tooling, rollout, training, etc. These practical considerations invariably put a strain on the models; the models and ancillary analyses must therefore be 'industrial strength' in many ways.
In this tutorial we will describe the lifecycle of a useful industrial model, and how this lifecycle impacts not only the organizations that use it, but also the model itself. Understanding a model's practical limitations and strengths is an important part of its use, just as the mathematical and statistical limitations and strengths underscore a model's scientific validity. Both factors, mathematical and practical, need to mesh in a workable way in a computation-driven engineering environment. The tutorial addresses the integration of these factors.

Tutorial outline:

  • Industrial mathematical models and simulations, strengths and weaknesses/limitations
  • Generalizability and specificity - balancing these needs in a software development environment
  • Choosing variables (including the question "to fish or not to fish")
  • Correlation and causality - can we test causality, and if so, how?
  • Linkages from in-process to customer experience to customer sentiment metrics
  • Case studies - general and specific models that have worked well and those that haven't
  • Impact v. precision/recall - how to identify which problems to address, and where to invest
  • Practical limitations of models/simulations in industrial settings
  • Customer sentiment models and models characterizing non-functional requirements
  • Reporting/goaling/governance and best-in-class paradigm
  • Measuring model adherence/adoption and effectiveness
  • Practical considerations in integrating models and simulations into engineering workflow
  • Use of computational engineering in corporate quality programs
  • What has worked and what has not - what are the next steps for computational engineering

Most organizations have an abundance of data that can be harvested using practical data science techniques into industrial strength models.


Presenters' biography

Sunita Chulani is an Advisory Engineer at Cisco Systems working on data science and analysis for software data. Prior to Cisco, Dr. Chulani was a research staff member in the IBM T.J. Watson Research Center's Software Engineering Department. Her research interests include the entire software engineering life cycle, particularly data science, measurement and analysis. She received her PhD in computer science from the University of Southern California. Contact her at schulani@cisco.com.



ODC - A 10x on Root Cause Analysis
by Ram Chillarege
Wednesday, Nov. 5, 9:00am - Hotel RC - Aragonese Room

Ram Chillarege
Chillarege Inc
USA

Abstract


Orthogonal Defect Classification (ODC) provides a remarkably fast and thorough method to gain insight into a software development organization. It uses data we already have: defects, the natural outcome of humans developing software. ODC extracts semantics from defects, creating a powerful multi-dimensional measurement system for software engineering. Specific analytical applications on ODC data create insights and yield predictions.
ODC has been used in a wide range of products, from embedded controllers and networking systems to database applications and space systems. When you make a cell phone call, step on the gas pedal in a motor vehicle, or wear a medical implant, you may be surprised to discover that ODC may have been used to make that device more dependable and trustworthy!
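As a small, purely illustrative example of the multi-dimensional measurement ODC enables, the sketch below cross-tabulates invented defect records along two ODC dimensions, defect type and trigger:

```python
# Illustrative sketch only: ODC classifies each defect along orthogonal
# dimensions (e.g. defect type, trigger); simple cross-tabulations of
# those attributes then expose process signatures. Records are made up.
from collections import Counter

defects = [
    {"type": "assignment", "trigger": "unit test"},
    {"type": "interface",  "trigger": "function test"},
    {"type": "timing",     "trigger": "system test"},
    {"type": "assignment", "trigger": "code review"},
    {"type": "interface",  "trigger": "function test"},
]

by_type = Counter(d["type"] for d in defects)
by_trigger = Counter(d["trigger"] for d in defects)

# A rising share of "interface" defects surfacing late (function or
# system test) would point toward design-stage issues rather than
# coding slips, the kind of signal ODC analysis surfaces quickly.
interface_late = sum(
    1 for d in defects
    if d["type"] == "interface"
    and d["trigger"] in ("function test", "system test")
)
```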

Topics to be covered and Agenda:

  • Concepts that gave birth to ODC
  • Intrinsic versus Extrinsic Measurement
  • Development Measurement - Defect Type
  • Test Measurement - Triggers
  • Orthogonality and Measurement
  • Cause-Effect Relationships
  • 10x - from Classical to ODC based root cause analysis
  • How to measure Test Effectiveness with ODC
  • Case Studies - Release Management
  • Case Studies - Process Diagnosis
  • What is an ODC Roll-out in an Organization?
  • What support does ODC need in an Organization?


Presenter's biography

Ram, inventor of Orthogonal Defect Classification (ODC), brings a new order of insight into measuring and managing software engineering. His consulting practice specializes in Software Engineering Optimization using ODC. These methods bring speed and consistency into the art of managing product quality and delivery using data from the current process. He was with IBM for 14 years, where he founded and headed the IBM Center for Software Engineering. He then served as Executive Vice President of Software and Technology for Opus360, New York. In 2004 Ram received the IEEE Technical Achievement Award for the invention of ODC; he had earlier received the IBM Outstanding Innovation Award for ODC in 1993. The methodology brings value through fast measurement, sophisticated analysis and targeted feedback. Ram is an IEEE Fellow and the author of about 50 peer-reviewed technical articles. He chairs the IEEE Steering Committee for the International Symposium on Software Reliability Engineering, and has served on several steering committees, editorial boards, and the alumni board of the University of Illinois Department of Electrical and Computer Engineering. He received a BSc degree from the University of Mysore, BE and ME degrees from the Indian Institute of Science, and a PhD in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign.


Advanced software reliability and availability models
by Kishor S. Trivedi, Michael Grottke, Javier Alonso, and Allen P. Nikora
Wednesday, Nov. 5, 2:00pm - Hotel RC - Aragonese Room

Kishor S. Trivedi
Duke University
USA

Michael Grottke
Friedrich-Alexander-Universität
Germany

Javier Alonso
University of Leon
Spain

Allen P. Nikora
JPL's Quality Assurance Office
USA

Abstract

While traditional software reliability research has focused on reliability growth modeling during the testing/debugging phase, this tutorial concentrates on software failures, their underlying faults and the mitigation techniques used to deal with them during the operational phase.
The tutorial will be driven by three sets of real examples:
The first case study, based on failures of NASA satellite onboard software, leads to data-driven models. We will present the input data analysis, including the identification of suitable probability distributions, the estimation of their parameters, and the verification of the fitted models.
In the second case study, based on IBM's high availability architecture of SIP on WebSphere, a model is developed from the system architecture and then parameterized from real data. Another distinction between these two case studies is that the latter deals with software fault tolerance, while in the former, data about fault-tolerance-based recovery is not available in the NASA problem reports. Here, we will discuss in detail different approaches for reliability and availability modeling.
The third set of examples will be based on failures caused by software aging and an associated proactive recovery method known as software rejuvenation.
The audience will also be trained in different statistical techniques to conduct a detailed data analysis.
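As a small illustration of the data-driven steps in the first case study, the sketch below fits an exponential model to invented inter-failure times by maximum likelihood; a real analysis would compare several candidate distributions and verify the fit before using the model:

```python
# Sketch of the data-driven step: estimating the parameter of a
# candidate failure-time distribution from observed inter-failure
# times. The times below are synthetic; a real study would also test
# the fit (e.g. with a Kolmogorov-Smirnov test).
import math

inter_failure_days = [12.0, 5.5, 30.2, 8.1, 19.7, 3.4, 25.0, 11.6]

# Maximum-likelihood estimate for an exponential model: rate = 1/mean.
n = len(inter_failure_days)
rate_hat = n / sum(inter_failure_days)

# Resulting model: probability the software survives t days failure-free.
def survival(t, rate=rate_hat):
    return math.exp(-rate * t)

# Log-likelihood at the estimate, useful when comparing candidate
# distributions (e.g. exponential vs. Weibull) for the same data.
log_lik = n * math.log(rate_hat) - rate_hat * sum(inter_failure_days)
```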

The tutorial will provide a holistic view of the problems practitioners typically have to cope with in their work, enhancing what attendees take away by the end of the tutorial.

Presenters' biography

Kishor Trivedi is a Chaired Professor of ECE at Duke University. He is the author of a well-known text entitled Probability and Statistics with Reliability, Queuing and Computer Science Applications. He is an IEEE Fellow and a Golden Core Member of the IEEE Computer Society. He has published over 500 articles and supervised 45 Ph.D. dissertations; his h-index is 79. He is the recipient of the IEEE Computer Society Technical Achievement Award for research on Software Aging and Rejuvenation. He works closely with industry in carrying out reliability/availability/performability analysis, providing short courses, and in the development and dissemination of software packages such as SHARPE and SPNP.

Michael Grottke is a Privatdozent at Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany. While his Ph.D. thesis was related to software reliability modeling during the testing/debugging phase, he obtained the Habilitation degree for his work on dealing with software faults throughout the software life cycle. Besides publishing articles in international journals such as IEEE Computer, IEEE Transactions on Reliability, Journal of Systems and Software, and Performance Evaluation, he has presented research papers at many conferences, including COMPSAC, DSN, and ISSRE.

Javier Alonso is the research manager at the Institute of Applied Sciences on Cybersecurity and Dependability at the University of Leon, Spain. Formerly, he was a Postdoctoral Associate in the DHAAL research group led by Prof. Kishor Trivedi at Duke University. Dr. Alonso has published papers on different aspects of dependability, availability, reliability and software aging at international conferences and in major journals, including ISSRE, SRDS, DSN, IEEE Transactions on Computers, and Performance Evaluation, and has also served as a reviewer for major journals and international conferences. His research interests are focused on software dependability and security, with special attention to mobile and cloud computing scenarios.

Allen Nikora is a researcher in the Software Assurance and Assurance Research group in JPL's Quality Assurance Office. He manages the software program in JPL's Assurance Technology Program Office (ATPO). He has authored and co-authored numerous papers and book chapters on software reliability assessment and fault modeling, and serves on numerous conference program and organizing committees, including the International Symposium on Software Reliability Engineering (ISSRE). He is the primary developer of the CASRE software reliability modeling tool. He is a member of the working group currently updating IEEE Std 1633-2008, IEEE Recommended Practice on Software Reliability.


Supporting architectural decisions through software quality optimization models
by Vittorio Cortellessa and Pasqualina Potena
Thursday, Nov. 6, 9:00am - Hotel RC - Catalana Room

Vittorio Cortellessa
University of L'Aquila
Italy

Pasqualina Potena
University of Alcalá
Spain

Abstract

Modern software applications are usually assembled by reusing existing components, and they run on heterogeneous distributed platforms. In this context users are becoming more demanding on non-functional properties (e.g., reliability, performance, security) that determine the user-perceived software quality. A major issue in this direction is that analyzing a non-functional property "in isolation" may often prove inaccurate, because these properties notoriously (sometimes adversely!) affect each other. This tutorial focuses on the support that the joint analysis of non-functional properties can give to architectural decisions. It is based on our experience in the field of software quality optimization, where we have proposed several models for this goal. Particular emphasis will be given to two aspects of this problem: a) how reliability models can be embedded in optimization models for architectural decisions, and b) how to perform tradeoff analysis among different non-functional properties. In particular, we will show how to induce changes in the software structure and behavior at minimum cost while keeping the reliability and other quality attributes (e.g., performance) of a software architecture within certain thresholds.

Presenters' biography

Vittorio Cortellessa: Ph.D. in Computer Science from the University of Roma "Tor Vergata" (1995). Post-doc fellow at the European Space Agency (1997) and at the University of Roma "Tor Vergata" (1998-1999). Research Assistant Professor at CSEE, West Virginia University, and Research Contractor at DISP, University of Roma "Tor Vergata" (2000-2001). Assistant Professor at the University of L'Aquila (Italy, 2002-2005), where he has held an Associate Professor position since 2005. He has been involved in several research projects in the areas of performance and reliability analysis of software/hardware systems, component-based software systems, fault-tolerant systems, and model-driven engineering, which are his main research areas. He has published about 90 papers in these areas in international journals and conferences. He serves on the editorial boards of ACM Transactions on Software Engineering and Methodology and of the Empirical Software Engineering journal.

Pasqualina Potena: She holds a post-doc position linked to the project Iceberg, funded by the EU under the IAPP Marie Curie Program (grant 324356), at the University of Alcalá, Spain. She received her degree in Computer Science from the University of L'Aquila and her Ph.D. degree in Science from the University "G. D'Annunzio" Chieti e Pescara (Italy). She was a research fellow at the University of L'Aquila, Politecnico di Milano, and the University of Bergamo. Her research interests include non-functional aspects, architecture-based solutions for self-adapting/evolving software systems, and optimization models.



Effective security management: using case control studies to measure vulnerability risk
by Luca Allodi and Fabio Massacci
Thursday, Nov. 6, 9:00am - Hotel RC - Sveva Room

Luca Allodi
University of Trento
Italy

Fabio Massacci
University of Trento
Italy

Abstract

Policy makers often impose standard security measures (e.g. based on CVSS) that can be largely uneconomical and inefficient to implement in practice. We propose a scientifically sound way to identify "risky" vulnerabilities that must be patched immediately.
Attendees will learn how to evaluate the risk reduction entailed by any patching policy and will have a new tool to optimize their security decisions and planning.
This tutorial is organized in three parts:
In the first part we give all the theoretical background needed to fully understand the methodology and its outcomes. A basic understanding of statistics is considered a pre-requisite to this part.
In the second part of the tutorial we guide the attendees through a hands-on session implementing the methodology. We provide each attendee with the example datasets necessary for the walkthrough. The hands-on part is organized in multiple sessions, each with different tasks. For guidance we also provide working code for the attendees. Examples will be provided in R, but attendees may choose any environment they prefer.
Finally, we wrap up and discuss results and the extensibility of the methodology to other scenarios.
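A minimal sketch of the kind of measurement the methodology supports (the vulnerability data and policy threshold below are invented): given which vulnerabilities were exploited in the wild, compare the risk reduction of a "patch everything at or above CVSS t" policy against the workload it imposes:

```python
# Sketch: compare a patching policy's coverage of vulnerabilities
# actually exploited in the wild (the "cases") against its workload.
# All scores and exploitation flags are made up for illustration.

# (cvss_score, exploited_in_wild)
vulns = [
    (9.8, True), (7.5, False), (4.3, False), (10.0, True),
    (6.8, True), (5.0, False), (9.0, False), (3.1, False),
]

def policy_stats(vulns, cvss_threshold):
    """Risk reduction and workload of 'patch everything >= threshold'."""
    flagged = [v for v in vulns if v[0] >= cvss_threshold]
    exploited = [v for v in vulns if v[1]]
    covered = [v for v in flagged if v[1]]
    risk_reduction = len(covered) / len(exploited)  # exploited vulns patched
    workload = len(flagged) / len(vulns)            # fraction of vulns to patch
    return risk_reduction, workload

rr, wl = policy_stats(vulns, cvss_threshold=9.0)
# here the policy covers 2 of the 3 exploited vulns while patching 3 of 8
```

Sweeping the threshold (or swapping in an exploit-driven criterion) and plotting risk reduction against workload makes the trade-off between competing patching policies directly comparable.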

The participants are expected to have a laptop with a suitable environment pre-installed (MATLAB, R, etc.).
Results from this work have been presented to practitioners at Black Hat USA 2013 and are published in the ACM Transactions on Information and System Security (TISSEC). This research has also been used to refine the upcoming new version of the CVSS world standard for vulnerability assessment.

Presenters' biography

Luca Allodi got his MSc in Information Security from the University of Milan. In 2006 he co-founded "Area Software", of which he remained an Executive Director for five years. He is now a third year PhD student at the University of Trento, under the supervision of Professor Fabio Massacci, expecting to graduate this fall. Luca is currently working on new methodologies to evaluate policy effectiveness. He is also a member of the standard body for the third version of the Common Vulnerability Scoring System, the international standard for vulnerability criticality estimation.

Fabio Massacci is a full professor at the University of Trento. He received an M.Eng. in 1993 and a Ph.D. in Computer Science and Engineering from the University of Rome La Sapienza in 1998. He visited Cambridge University in 1996-97 and was a visiting researcher at IRIT Toulouse in 2000. He joined the University of Siena as Assistant Professor in 1999, and in 2001 he moved to Trento, where he is now full professor. His research interests are in malware analysis, security economics, empirical validation of risk and security requirements methodologies, and predictive models for vulnerabilities. With W. Joosen he co-founded ESSoS, the Engineering Secure Software and Systems Symposium, which aims at bringing together requirements and software engineers and security experts. In the last five years he has led the Empirical Security Requirements and Risk Engineering Challenge (E-RISE), which focuses on the evaluation and comparison of security risk assessment and security requirements engineering methods.