November 3-6, 2014
Thursday, Nov. 6, 09:00 - 10:30 - Hotel RC - Aragonese Room
Chair: Michael Grottke
1. Saeko Matsuura, Yoshitaka Aoki and Shinpei Ogata. Practical Behavioral Inconsistency Detection between Source Code and Specification using Model Checking
2. Leonardo Mariani, Daniela Micucci and Fabrizio Pastore. Early Conflict Detection with Mined Models
3. Jasmin Jahic and Thomas Kuhn. Analysis of functional software dependencies through supervised execution
4. Satoshi Fujimoto, Hideharu Kojima, Hiroyuki Nakagawa and Tatsuhiro Tsuchiya. Applying parameter value weighting to a practical application
5. Passant Kandil, Sherin Moussa and Nagwa Badr. Regression testing approach for large-scale systems
6. Chia Hung Kao, Ping-Hsien Chi and Yi-Hsuan Lee. Automatic Testing Framework for Virtualization Environment
Thursday, Nov. 6, 11:00 - 12:30 - Hotel RC - Aragonese Room
Chair: Roberto Natella
2. Jianwen Xiang, Fumio Machida, Kumiko Tadano and Shigeru Hosono. Is cut sequence necessary in dynamic fault trees?
3. Maximilian Junker. Exploiting Behavior Models for Availability Analysis of Interactive Systems
5. Omar Alhazmi and Yashwant Malaiya. Are the Classical Disaster Recovery Tiers Still Applicable Today?
Practical Behavioral Inconsistency Detection between Source Code and Specification using Model Checking
To make model checking practical, we propose a method for detecting discrepancies between the behavior of the source code and specifications written in UML, using a decision table.
Early Conflict Detection with Mined Models
Source Code Management (SCM) systems with extensive support for branching, parallel development, and merging, such as Git and Mercurial, are increasingly popular. Despite the popularity of advanced SCM systems, merging multiple branches is still extremely painful and time-consuming because it requires manually resolving a number of conflicts generated while developing branches independently. This is due to the limitations of currently available conflict detection mechanisms: conflicts are identified late and are limited to textual conflicts, while the conflicts that are most expensive to fix are the ones that cause misbehaviors in the program without causing textual conflicts. This paper introduces a multi-branch server-side dynamic analysis that can automatically detect conflicts that produce misbehaviors as soon as they are introduced, regardless of the syntactic changes that occur in the program. The results of the analysis can dramatically improve the rate of conflicts that are detected and resolved early in the process. As a consequence, the cost and the effort required to complete merge operations will decrease drastically, and the capability to evolve software in a timely manner will improve significantly.
Analysis of functional software dependencies through supervised execution
Knowing the functional interferences between system components is imperative when developing safety-critical systems. In this paper, we describe an approach for reliably detecting the functional dependencies of software components on other system entities through supervised execution of the software using simulation techniques. Supervised execution enables monitoring of the internal state of system components and therefore permits the rapid detection of component behavior changes across simulation contexts. The credibility of the results is supported by collected test coverage metrics.
Applying parameter value weighting to a practical application
This paper reports a case study in which pair-wise testing was applied to a real-world program. In particular, we focus on weighting, an added feature that allows the tester to prioritize particular parameter values. In our previous work, we proposed a weighting method that can reflect given weights in the resulting test suite more directly than existing methods can. To assess the effects of weighting in a practical testing process, we compare the number of executions of the program's methods among three pair-wise test suites, including the test suite generated by our weighting method and those generated by an existing test case generation tool with and without the weighting option. The results show that the effects of weighting were observed most clearly when our weighting method was used.
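The value-weighting idea can be illustrated with a small greedy pair-wise generator (a simplified sketch, not the algorithm from the paper; all names are hypothetical): each test is seeded from an uncovered value pair, and ties between candidate values are broken in favor of higher weights, so heavily weighted values tend to appear in more tests.

```python
from itertools import combinations, product

def pairwise_suite(params, weights):
    """Greedy pair-wise test generation with value weighting.

    params:  dict mapping parameter name -> list of values
    weights: dict mapping (param, value) -> weight (higher = preferred)
    Illustrative sketch only; not the weighting method from the paper.
    """
    names = list(params)
    # Every value pair that must be covered by at least one test.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((a, va, b, vb))

    suite = []
    while uncovered:
        # Seed the test from the uncovered pair with the highest total
        # weight, so heavily weighted values enter the suite early.
        a, va, b, vb = max(
            uncovered,
            key=lambda p: weights.get((p[0], p[1]), 1) + weights.get((p[2], p[3]), 1),
        )
        test = {a: va, b: vb}
        for name in names:
            if name in test:
                continue
            # Prefer values covering new pairs; break ties by weight.
            def score(v):
                gain = sum(
                    1 for (x, vx, y, vy) in uncovered
                    if (x == name and vx == v and test.get(y) == vy)
                    or (y == name and vy == v and test.get(x) == vx)
                )
                return (gain, weights.get((name, v), 1))
            test[name] = max(params[name], key=score)
        suite.append(test)
        uncovered = {
            (x, vx, y, vy) for (x, vx, y, vy) in uncovered
            if not (test.get(x) == vx and test.get(y) == vy)
        }
    return suite
```

Seeding each test from an uncovered pair guarantees progress, so the loop always terminates with full pair coverage.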
Regression testing approach for large-scale systems
Regression testing is an important and expensive activity that is undertaken every time a program is modified, to ensure that the changes do not introduce new bugs into previously validated code. Instead of re-running all test cases, different approaches have been studied to solve regression testing problems. Data mining techniques have been introduced to address regression testing for large-scale systems containing huge sets of test cases, as different data mining techniques have been studied to group test cases with similar features. Dealing with groups of test cases instead of each test case separately helps to solve regression testing scalability issues. In this paper, we propose a new methodology for regression testing of large-scale systems that uses data mining techniques to prioritize and select test cases based on their coverage criteria and fault history.
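As an illustration of the grouping idea (a minimal sketch under assumed inputs, not the paper's actual data mining technique), test cases can be clustered by the similarity of their coverage sets and the clusters then ordered by fault history:

```python
def jaccard(a, b):
    """Similarity between two coverage sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_tests(coverage, threshold=0.5):
    """Greedy clustering of test cases by coverage similarity.

    coverage: dict mapping test id -> set of covered code elements.
    Returns a list of clusters (lists of test ids).
    Illustrative only; the paper's methodology may use different
    mining techniques and features.
    """
    clusters = []  # list of (representative coverage set, member ids)
    for tid, cov in coverage.items():
        for rep, members in clusters:
            if jaccard(rep, cov) >= threshold:
                members.append(tid)
                break
        else:
            clusters.append((set(cov), [tid]))
    return [members for _, members in clusters]

def prioritize(clusters, fault_history):
    """Order clusters by their members' past fault-detection counts."""
    return sorted(
        clusters,
        key=lambda members: -sum(fault_history.get(t, 0) for t in members),
    )
```

Selecting one representative test per high-priority cluster is then a cheap stand-in for re-running every member, which is where the scalability gain comes from.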
Automatic Testing Framework for Virtualization Environment
Virtualization has attracted considerable attention in recent years. With the increasing popularity of virtualization technology, how to ensure its correctness and quality becomes a critical issue. Several studies evaluate functionalities and the corresponding performance of different virtualization environments. However, the task of testing and evaluating virtualization environments is still complex and time-consuming. In this paper, an automatic testing framework is introduced. The framework provides automatic methods for test environment setup, usage scenario deployment, and test plan revision, and helps to facilitate functional and performance evaluation of virtualization environments.
Mapping the Software Errors and Effects Analysis method to ISO26262 requirements for software architecture analysis
Is cut sequence necessary in dynamic fault trees?
Dynamic fault trees (DFTs) were proposed to model sequence dependencies between fault events, extending traditional static fault trees (SFTs). In the analysis of a DFT, combinations of basic events alone may not be sufficient to imply the top event (TE), due to the sequence dependencies encoded in the DFT. The calculation of cut sequences is, however, a permutation problem and is in general much more complex than the calculation of cut sets. In this paper, we demonstrate that any DFT that can be modeled by a Markov or semi-Markov process can be transformed into an equivalent SFT. With this transformation, the analysis of a DFT can be carried out without resorting to cut sequences, but instead in terms of the cut sets of the transformed SFT, which can be solved with traditional efficient combinatorial algorithms. The transformation thus reduces the permutation problem of cut sequences to a combinatorial problem of cut sets.
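To illustrate the combinatorial problem that the transformation reduces DFT analysis to, here is a textbook bottom-up computation of minimal cut sets for a static AND/OR fault tree (an illustrative sketch with hypothetical gate names; the DFT-to-SFT transformation itself is not shown):

```python
def minimal_cut_sets(gates, top):
    """Compute the minimal cut sets of a static fault tree.

    gates: dict mapping gate name -> ("AND" | "OR", [input names]);
    inputs that are not gates are treated as basic events.
    A classic bottom-up expansion with subsumption-based minimization.
    """
    def expand(node):
        if node not in gates:
            return [frozenset([node])]          # basic event
        op, inputs = gates[node]
        child_sets = [expand(i) for i in inputs]
        if op == "OR":
            # OR gate: union of the children's cut sets.
            result = [cs for sets in child_sets for cs in sets]
        else:
            # AND gate: cross-product (every child must fail).
            result = [frozenset()]
            for sets in child_sets:
                result = [r | cs for r in result for cs in sets]
        # Minimize: drop any cut set that strictly contains another,
        # and remove duplicates.
        minimal = []
        for cs in result:
            if not any(other < cs for other in result) and cs not in minimal:
                minimal.append(cs)
        return minimal

    return set(expand(top))
```

For DFTs the analogous objects are cut *sequences* (ordered failures), whose enumeration grows factorially; the paper's transformation lets tools stay in this cheaper cut-set setting.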
Exploiting Behavior Models for Availability Analysis of Interactive Systems
We propose an approach for availability analysis that directly utilizes behavior models as they occur in model-based development. The main benefits of our approach are reduced effort as no dedicated availability models need to be created as well as precise results due to the inclusion of behavior interactions.
Evaluating embedded-software specifications - Quantitative & structured assessment of declarative interface descriptions
Relying on implementations in verification results in specifications and implementations that do not lend themselves well to reuse. Moreover, conventional verification tells engineers rather little about actual software or design quality ('Trust me, it's good.'). We regard ease of formal specification, quality assessment and specification reuse, particularly in the form of declarative specifications, to be decisive factors in furthering the application of formal methods in software development. We aim to provide new, tool-supported techniques combining, and enabling, practical specification formalisms, implementationless model checking of structured specifications and complex measures for specification quality.
Are the Classical Disaster Recovery Tiers Still Applicable Today?
As disaster recovery plans (DRPs) for IT systems have improved over the past decades, some metrics have become widely accepted, such as the recovery time objective (RTO) and the recovery point objective (RPO). However, disaster recovery plans and solutions vary in their design, sophistication, and required RTO/RPO. Therefore, a need to categorize disaster recovery plans into tiers has arisen. A number of classifications exist, but they are sometimes not fully explained, so independent researchers may find them confusing or inappropriate for the current state of technology, with significant overlap among tiers. Moreover, advances in communication and technology and the introduction of disaster recovery as a service (DRaaS) by several cloud service providers (CSPs) have reshaped the area of disaster recovery and the development of DRPs. Therefore, one can argue that the old 7-tier classification of DRPs is obsolete and a new classification is needed. Here, we survey these classifications, examine their common ground and differences, and suggest some improvements to bridge the gaps among them.