Student Papers


Tuesday, Nov. 4, 16:00 - 18:00 - CC - Aula Magna

Session chair: Karthik Pattabiraman

1.     Satoko Kinoshita, Hiroki Takamura, Daichi Mizuguchi and Hidekazu Nishimura. Describing Software Specification by Combining SysML with the B method

2.     Jingwen Zhou, Zhenbang Chen, Ji Wang, Zibin Zheng and Wei Dong. A Runtime Verification Based Trace-Oriented Monitoring Framework for Cloud Systems

3.     Luigi De Simone. Towards Fault Propagation Analysis in Cloud Computing Ecosystems

4.     Nuno Silva and Marco Vieira. Towards Making Safety-Critical Systems Safer: Learning from Mistakes

5.     Luís Santos, Marília Curado and Marco Vieira. A Research Agenda for Benchmarking the Resilience of Software Defined Networks

6.     Nobuo Kikuchi, Takeshi Yoshimura, Ryo Sakuma and Kenji Kono. Do Injected Faults Cause Real Failures? - A Case Study of Linux

Describing Software Specification by Combining SysML with the B method
This paper presents a methodology for describing software specifications by combining SysML with the B method. Modeling languages such as SysML do not guarantee the correctness of a specification. In addition, formal methods such as the B method are generally difficult to use for describing software specifications from the ambiguous requirements available at the start of development, because it is not easy for software developers to write the formal notations. Our methodology addresses these shortcomings by iterating processes that translate SysML diagrams into the abstract machine notations of the B method. In the last part of this paper, we show the effectiveness of our methodology with an example.

A Runtime Verification Based Trace-Oriented Monitoring Framework for Cloud Systems
Cloud computing provides a new paradigm for resource utilization and sharing. However, reliability problems, such as system failures, often occur in cloud systems and cause enormous losses. Trace-oriented monitoring is an important runtime method for improving the reliability of cloud systems. In this paper, we propose to bring runtime verification into trace-oriented monitoring, to facilitate the specification of monitoring requirements and to improve the efficiency of monitoring cloud systems. Based on a data set collected from a cloud storage system in a real environment, we validate our approach by monitoring the critical properties of the storage system. The preliminary experimental results indicate the promise of our approach.
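
As a purely illustrative sketch (not the authors' implementation; the event names, trace format, and checked property below are assumptions), trace-oriented runtime verification can be pictured as replaying a recorded event trace through a monitor that checks a property such as "every started write eventually completes":

    # Hypothetical sketch of a trace monitor; the event names and trace
    # format are invented for illustration and do not come from the paper.
    def check_writes_complete(trace):
        """Return the ids of writes that started but never completed."""
        pending = set()
        for event, request_id in trace:
            if event == "WRITE_START":
                pending.add(request_id)
            elif event == "WRITE_END":
                pending.discard(request_id)
        return pending  # a non-empty result signals a property violation

    example_trace = [
        ("WRITE_START", 1),
        ("WRITE_END", 1),
        ("WRITE_START", 2),  # never completes: a violation
    ]
    print(check_writes_complete(example_trace))  # -> {2}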

Towards Fault Propagation Analysis in Cloud Computing Ecosystems
Nowadays, cloud computing is a fundamental paradigm that provides computational resources as a service, on which users heavily rely. Cloud computing infrastructures behave as an ecosystem in which several actors play a crucial role. Unfortunately, Cloud Computing Ecosystems (CCEs) are often affected by outages, such as those experienced by Amazon Web Services in recent years, that result from component faults propagating through the whole CCE. Thus, there is still a need for approaches to improve the reliability of CCEs. This paper discusses both existing approaches and open challenges for the dependability evaluation of CCEs, and the need for novel techniques and methodologies to prevent fault propagation within CCEs as a whole.

Towards Making Safety-Critical Systems Safer: Learning from Mistakes
Safety-critical systems usually need to be qualified and certified: they follow specific and strict development standards that recommend particular techniques and processes, specific personnel training, and domain expertise. These systems are very sensitive to failures, so the highest quality and dependability levels must be guaranteed. The goal of this paper is to present a PhD work plan that shall lead to a disruptive approach for identifying quality gaps and root causes and for improving safety-critical systems engineering. The main idea is to start from the classification of real issues, map them to engineering properties and root causes, and identify how to avoid those causes and reduce their impact. The foreseen improvements shall be reflected in development and V&V techniques, in resource training and preparation, and in adaptations of international standards, in order to achieve measurable improvement in the safety and quality of these systems.

A Research Agenda for Benchmarking the Resilience of Software Defined Networks
Software Defined Networking (SDN) has recently emerged as a hot topic attracting strong interest from both academia and industry. The change it advocates, logically centralizing overall network control on software platforms and applications, turns networks more and more into software-based systems. Such a paradigm shift makes it possible to build more manageable, agile, and smarter data communication infrastructures. Despite the interest SDN is attracting across a multitude of physical and virtualized infrastructures, there are currently no systematic approaches for characterizing and comparing alternative SDN-based solutions with regard to their resilience, a characteristic of utmost importance in such critical environments. In this paper we present a tentative path to bridge this gap by proposing a research agenda to advance resilience benchmarks for SDNs. We believe that with such a class of tools and methodologies, researchers, developers, and practitioners will be in a better position to advance the area and make more informed decisions.

Do Injected Faults Cause Real Failures? - A Case Study of Linux
Software fault injection (SFI) has been used to intentionally cause "failures" in software components and assess their impact on the entire software system. A key property that SFI should satisfy is the representativeness of the injected failures: the failures caused by SFI should be as close as possible to failures in the wild. If injected failures do not represent realistic failures, the resilience or failure tolerance measured for the investigated system is not trustworthy. To the best of the authors' knowledge, the representativeness of "faults" has been investigated; however, it is an open problem whether the failures caused by injected faults represent realistic failures. In this paper, we report the preliminary results of an investigation into the representativeness of injected failures. To compare injected failures with real failures, we have collected 43,742 real crash logs of Linux from the RedHat repository and conducted a fault injection campaign on Linux using SAFE, a state-of-the-art injector of software faults. In the fault injection campaign, 50,000 faults are injected into the Linux file system and 71,470 runs of a workload are executed. The crash logs generated by SFI are compared with the real RedHat logs with respect to crash causes, crashed system calls, and crashed modules. Our preliminary results suggest that failures caused by injected faults do not represent real failures, probably because the injected faults are not representative enough or because the selected workload is not realistic.
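
As a rough, hypothetical illustration of this kind of comparison (the log fields and crash-cause labels below are invented; the paper's actual log format and analysis are not reproduced here), one can tally crash causes in each log set and compare their relative frequencies:

    # Hypothetical sketch: compare the distribution of crash causes in
    # injected-fault crash logs against real crash logs. Field names and
    # cause labels are illustrative only.
    from collections import Counter

    def cause_distribution(crash_logs):
        """Map each crash cause to its relative frequency in a log set."""
        counts = Counter(log["cause"] for log in crash_logs)
        total = sum(counts.values())
        return {cause: n / total for cause, n in counts.items()}

    real = [{"cause": "null_dereference"}, {"cause": "null_dereference"},
            {"cause": "invalid_opcode"}]
    injected = [{"cause": "assertion_failure"}, {"cause": "null_dereference"}]

    real_dist = cause_distribution(real)
    injected_dist = cause_distribution(injected)
    # Report how often each cause appears in each log set.
    for cause in sorted(set(real_dist) | set(injected_dist)):
        print(f"{cause:20s} real={real_dist.get(cause, 0.0):.2f} "
              f"injected={injected_dist.get(cause, 0.0):.2f}")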