Industry Papers and Presentations

 

Industry #1: Testing 1

Tuesday, Nov. 4, 11:00 - 12:30 - CC - Aula Magna

Chair: Gabriella Carrozza

1.     Maryam Raiyat Aliabadi, Karthik Pattabiraman and Nematollah Bidokhti. Soft-LLFI: A Comprehensive Framework for Software Fault Injection

2.     Brian Atkinson, Nathan Debardeleben, Qiang Guan, Robert Robey and William Jones. Fault Injection Experiments With the CLAMR Hydrodynamics Mini-App

3.     Fabio Baccanico, Gabriella Carrozza, Marcello Cinque, Domenico Cotroneo, Antonio Pecchia and Agostino Savignano. Event Logging in an Industrial Development Process: Practices and Reengineering Challenges

 

Industry #2: Reliability Modeling 1

Tuesday, Nov. 4, 11:00 - 12:30 - CC - Room A

Chair: Veena Mendiratta

1.     Sathish V, Sudarsan S.D and Srini Ramaswamy. Event Based Robot Prognostics using Principal Component Analysis

2.     Ashlie Hocking, John Knight, Anthony Aiello and Shin’ichi Shiraishi. Proving Model Equivalence in Model Based Design

3.     Alberto Avritzer and Andre Bondi. Developing Software Reliability Models in the Architecture Phase of the Software Lifecycle

 

Industry #3: Testing 2

Tuesday, Nov. 4, 14:00 - 15:30 - CC - Aula Magna

Chair: Raghudeep Kannavara

1.     Joachim Froehlich and Reiner Schmid. Architecture for a hard-real-time system enabling non-intrusive tests

2.     Haihong Henry Zhu. Handling Soft Error in Embedded Software for Networking System

3.     Linn Gustavsson Christiernin, Svante Augustsson and Stefan Christiernin. Safety Critical Robot Programming and Testing for Operations in Industrial Co-production

 

Industry #4: Best Paper Nominees

Wednesday, Nov. 5, 09:00 - 10:30 - CC - Room A

Co-Chairs: Pete Rotella and Mariacristina Rossi

1.     Quentin Ochem. Programming with Contracts - Ada 2012 Tool-Supported Industrial Insights

2.     Marcello Cinque, Antonio Pecchia, Raffaele Della Corte, Agostino Savignano, Stefano Avallone, Antonio Marotta and Gabriella Carrozza. NAPOLI FUTURA: Novel Approaches for Protecting Critical Infrastructures from Cyber Attacks

3.     Luigi De Simone, Antonio Ken Iannillo, Anna Lanzaro, Roberto Natella, Domenico Cotroneo, Jiang Fan and Wang Ping. Network Function Virtualization: Challenges and Directions for Reliability Assurance

 

Industry #5: Availability and Performance

Wednesday, Nov. 5, 11:00 - 12:30 - CC - Room A

Chair: Linn Christiernin

1.     Majid Hormati, Ferhat Khendek and Maria Toeroe. Towards an Evaluation Framework for Availability Solutions in the Cloud

2.     Seshadhri Srinivasan, Furio Buonopane, Srini Ramaswamy and Juri Vain. Verifying Response Times in Networked Automation Systems Using Jitter Bounds

3.     Fulvio Frati, Ernesto Damiani, Luigi Buglione, Daniele Gagliardi, Sergio Oltolina and Gabriele Ruffatti. Balanced Measurement Sets: Criteria for Improving Project Management Practices

 

Industry #6: Reliability Modeling 2

Wednesday, Nov. 5, 11:00 - 12:30 - Hotel RC - Sveva Room

Chair: V.S. Singh

1.     Veena Mendiratta and Robert Hanmer. Using Data Analytics to Drive Product Software Reliability

2.     Mudasir Ahmad. Reliability Models for the Internet of Things: A Paradigm Shift

3.     Pete Rotella and Sunita Chulani. Predicting Release Quality

 

Industry #7: Model-Based Testing

Wednesday, Nov. 5, 16:00 - 17:30 - CC - Room A

Chair: Vance Hilderman

1.     Charlie Lane and Terry Gregson. Selex ES experience of V&V Tools with UML Models

2.     Myron Hecht, Emily Dimpfl and Julia Pinchak. Automated Generation of Failure Modes and Effects Analysis from SysML Models

3.     Eddie Jaffuel, Bruno Legeard and Fabien Peureux. MBT for GlobalPlatform Compliance Testing: Experience Report and Lessons Learned

 

Industry #8: Security

Wednesday, Nov. 5, 16:00 - 17:30 - Hotel RC - Sveva Room

Chair: Raffaele Della Corte

1.     Raghudeep Kannavara. Assessing the Threat Landscape for Software Libraries

2.     Raghudeep Kannavara. A Six Sigma Approach to Design for Security

3.     Luca Recchia, Giuseppe Procopio, Andrea Onofrii and Francesco Rogo. Security Evaluation of a Linux System - A Common Criteria EAL4+ certification experience

 

Industry #9: Data Management and Analysis

Thursday, Nov. 6, 09:00 - 10:30 - CC - Room A

Chair: Will Jones

1.     Rekha Singhal, Manoj Nambiar, Harish Sukhwani and Kishor Trivedi. Performability Comparison of Lustre and HDFS for MR Applications

2.     Abayomi Ipadeola and Ahmed Ameen. CAMDIT: A Toolkit for Integrating Heterogeneous Data for enhanced Service Provisioning

3.     Clauirton Siebra. Applying Metrics to Identify and Monitor Technical Debt Items during Software Evolution

 

Industry #10: Static Analysis

Thursday, Nov. 6, 11:00 - 12:30 - CC - Room A

Chair: Quentin Ochem

1.     Shinichi Shiraishi, Veena Mohan and Hemalatha Marimuthu. Quantitative Evaluation of Static Analysis Tools

2.     Salvatore Scervo, Gabriella Carrozza and Stefano Rosati. Code Analysis as an Enabler for SW Quality Improvement: the SELEX ES SW Engineering experience

3.     Tukaram Muske. Supporting Reviewing of Warnings in Presence of Shared Variables: Need and Effectiveness

 

Industry #11: Design and Planning

Thursday, Nov. 6, 14:00 - 15:30 - CC - Room A

Chair: Marcelo Teixeira

1.     Marcelo Teixeira, Richardson Ribeiro, Marco Barbosa and Luciene Marin. A formal method applied to the automated software engineering with quality guarantees

2.     John Hudepohl, Hans-Martin Niederer and Oliver Seiler. Efficient Development of Reliable Solutions through Education and Effective Training: A Best Practice Report from ABB

3.     Gianfrancesco Ranieri. Planning of prioritized test procedures in large integrated systems: best strategy of defect discovery and early stop of testing session. The Selex-ES Experience

 

Industry #12: Avionics

Thursday, Nov. 6, 14:00 - 15:30 - Hotel RC - Catalana Room

Chair: Myron Hecht

1.     Vance Hilderman. Understanding DO-178C Software Certification: Benefits Versus Costs

2.     Andreas Löfwenmark and Simin Nadjm-Tehrani. Challenges in Future Avionic Systems on Multi-core Platforms

3.     Pierre-Alain Bourdil, Bernard Berthomieu and Eric Jenn. Model-Checking Real-Time Properties of an AutoFlight Control System Function

 


Programming with Contracts - Ada 2012 Tool-Supported Industrial Insights
Programming by Contract is a software development discipline that has been around for decades and is known to bring many benefits in terms of software safety and reliability. Strangely, relatively few modern programming workbenches support this paradigm in an integrated manner. With the Ada 2012 language and the SPARK 2014 technology, Ada vendors are trying to put together an integrated environment targeting the related design and verification methodologies. This environment is supported by various tools that take advantage of the formalism of contracts, including support for documentation, dynamic check generation, static analysis, and even formal proof. This talk will give an introduction to what programming by contracts is, what kinds of methodological objectives can be selected to specify contracts, and what kinds of benefits can be obtained. We'll go over the bottom-up approach, the design approach and the object-oriented approach. The presentation will be articulated around concrete examples written using the Ada 2012 / SPARK 2014 languages.
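The talk itself is built on Ada 2012 / SPARK 2014. As a language-neutral, purely illustrative sketch of the underlying idea (not the Ada syntax or the vendors' tooling), the hypothetical Python fragment below attaches a precondition and a postcondition to a routine and checks them dynamically, which is one of the verification modes contract-based environments automate.

```python
# Hypothetical illustration of design-by-contract (not Ada 2012/SPARK syntax):
# pre/postconditions are declared alongside the routine and checked at run time.

def requires_ensures(pre, post):
    """Attach a precondition and a postcondition to a function."""
    def wrap(func):
        def checked(*args, **kwargs):
            assert pre(*args, **kwargs), "precondition violated"
            result = func(*args, **kwargs)
            assert post(result, *args, **kwargs), "postcondition violated"
            return result
        return checked
    return wrap

@requires_ensures(
    pre=lambda items: len(items) > 0,                          # input must be non-empty
    post=lambda r, items: r in items and all(r <= x for x in items),
)
def minimum(items):
    return min(items)

print(minimum([3, 1, 2]))   # satisfies both contracts and prints 1
# minimum([]) would raise AssertionError: precondition violated
```

In a contract-aware toolchain the same annotations can also drive documentation, static analysis, or formal proof rather than only run-time checks, which is the integration the talk discusses.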

Quantitative Evaluation of Static Analysis Tools
This paper presents a quantitative comparison of static analysis tools. First, we conduct a wide-ranging survey of static analysis tools and select several promising tools qualitatively. Second, we build test suites that contain a large variety of defects and evaluate the selected tools with them. Third, we derive several metrics for measuring tool performance and subsequently clarify the characteristics of the said tools by using them. Finally, we quantitatively identify the best tools from several different viewpoints.

Soft-LLFI: A Comprehensive Framework for Software Fault Injection
Fault injection is a technique for validating the dependability of fault-tolerant systems: it consists of controlled experiments in which the system's behavior in the presence of faults is observed after faults have been explicitly introduced (injected) into the system. It has become well established that software will never become bug-free, which has spurred research into mechanisms to contain faults and recover from them. Since such mechanisms deal with faults, fault injection is necessary to evaluate their effectiveness. However, little thought has been put into the question of whether fault injection experiments faithfully represent the fault model designed by the user. Correspondence with the fault model is crucial to be able to draw strong and general conclusions from experimental results. The aim of this paper is twofold: to propose 1) a comprehensive list of fault models representing realistic software bugs, and 2) a robust, configurable software fault injection framework to automatically inject erroneous behavior into applications and to evaluate their dependability.

Architecture for a hard-real-time system enabling non-intrusive tests
A software-intensive, hard-real-time system with high reliability and availability goals must perform the functions for which it was conceived correctly in its intended environment without adverse effects. The system must respond appropriately and immediately even in situations where system parts have failed, either transiently or permanently, in order to ensure that the system as a whole still operates correctly. This holds in particular for software-intensive systems in the context of aircraft, which must operate fail-safe. Integration platforms of future electric road vehicles, to give another example, may take a similar direction (www.projekt-race.de).

Handling Soft Error in Embedded Software for Networking System
Single event upset (SEU) is a well-known and documented phenomenon that affects electronic circuitry [1]. These events are caused either by atmospheric neutrons or by alpha particles emitted by trace impurities in the silicon processing and packaging materials. The error in device output or operation caused by an SEU is called a soft error. A soft error is not a software defect; it refers to hardware data corruption that does not involve permanent chip damage. Soft errors can lead to catastrophic failures in embedded systems. Due to the nature of soft errors, it is almost impossible to prevent them. Based on impact severity, the recommended handling is to detect and correct them, using so-called mitigation methodologies. The mitigation strategies are implemented in embedded software for networking systems. This paper presents a comprehensive framework for single-event upset (SEU) mitigation methodologies for networking systems. To achieve this goal, we start by defining the SEU mitigation strategy as a combination of chip-level methods and system-level handling methods. Given a particular SEU chip-level or system-level mitigation choice, we propose first categorizing the SEU Failure In Time (FIT) rate into different time-window bins based on SEU recovery time. We then analyze the impact of each mitigation strategy, which results in a FIT value change in each bin. This framework enables engineers to carry out SEU mitigation design in the early product development phase. A user-friendly Excel tool has also been developed to make the complicated model easy to use. An embedded system such as a networking device can be modeled using the tool at an early stage to support design decisions and trade-offs related to potentially costly implementations.
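The paper's actual model and Excel tool are not reproduced here; as a purely illustrative sketch of the kind of bookkeeping the abstract describes, the hypothetical Python fragment below splits a total SEU FIT rate into recovery-time bins and applies assumed per-bin mitigation-effectiveness factors. All numbers and bin names are invented placeholders.

```python
# Hypothetical sketch: distribute an SEU FIT rate into recovery-time bins and
# apply assumed per-bin mitigation effectiveness. Values are illustrative only.

raw_fit = 1200.0  # assumed total soft-error FIT (failures per 1e9 device-hours)

# Assumed fraction of events whose recovery time falls in each window.
bins = {
    "auto-corrected (<1 ms)": 0.60,
    "software recovery (<1 s)": 0.30,
    "reset required (>1 s)": 0.10,
}

# Assumed fraction of events in each bin eliminated by the chosen mitigation
# (e.g. ECC at chip level, scrubbing or restart handling at system level).
mitigation_effectiveness = {
    "auto-corrected (<1 ms)": 0.99,
    "software recovery (<1 s)": 0.90,
    "reset required (>1 s)": 0.50,
}

for name, share in bins.items():
    before = raw_fit * share
    after = before * (1.0 - mitigation_effectiveness[name])
    print(f"{name:28s} FIT before: {before:7.1f}  after: {after:7.1f}")

residual = sum(raw_fit * s * (1.0 - mitigation_effectiveness[n]) for n, s in bins.items())
print(f"residual system-level FIT: {residual:.1f}")
```

Comparing the residual FIT per bin across candidate mitigation choices is the kind of early trade-off analysis the framework is intended to support.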

Selex ES experience of V&V Tools with UML Models
Selex ES has been using model-based code generation for over 20 years, initially with HOOD, and for the past 16 years with UML using the Rhapsody IDE. Rhapsody is established as the IDE of choice for software development, and we continually seek to improve its usage through ancillary tool integrations and to widen its use across engineering, via the adoption of Rhapsody by systems engineers and the deployment of software internally to FPGAs.

Understanding DO-178C Software Certification: Benefits Versus Costs
DO-178C is the new software certification guideline for avionics software. Many consider aviation to lead other industries in terms of safety and certification. DO-178C is never cheap, certainly not on the first project. And in clear cases outlined herein, DO-178C can increase costs above DO-178B, which itself already increased software certification costs by 20-40%. But is DO-178C really “too” expensive? Doesn’t it actually reduce costs over DO-178B for companies who were doing it “right”? Does it reduce long-term costs at the expense of increased development cost? Will it improve safety and reliability, and if so, to what degree? Exactly what benefits are received from complying with DO-178C? In what areas is DO-178C more expensive than DO-178B? These questions are answered within this paper.

Challenges in Future Avionic Systems on Multi-core Platforms
Modern avionic system development is undergoing a major transition, from federated systems to Integrated Modular Avionics (IMA) where several applications with mixed criticality will reside on the same platform. Moreover, there is a departure from today’s single core computing, and we need to address the problem of how to guarantee determinism (in time and space) for application tasks running on multiple cores and interacting through shared memory. This paper summarises the main challenges and briefly describes some active directions in research. It also outlines the forthcoming research that we will pursue for quantifying time bounds on memory access related interference, to ensure determinism and comply with certification requirements.

Assessing the Threat Landscape for Software Libraries
A library is a collection of implementations of behavior, written in a computer programming language, that provides a well-defined interface by which the behavior can be invoked. Although a majority of the code in numerous applications comes from libraries, the risk of security vulnerabilities that comes with these libraries is often overlooked. In this regard, we seek to assess the threat landscape associated with software libraries and discuss mitigation strategies via the Security Development Lifecycle (SDL).

Event Based Robot Prognostics using Principal Component Analysis
As industrial systems become more complex, the challenge of devising efficient maintenance strategies, including predicting failures in the system, has become an important industry-specific research topic. Traditionally, research has focused on developing failure prediction models based on a physical understanding of the system. However, developing such models is often time-consuming and labor-intensive for complex systems. In the recent past, thanks to the advent of cheaper data collection mechanisms and efficient algorithms, data-driven approaches to failure prediction have gained significant interest in the industrial research community. In this paper, we provide a Principal Component Analysis (PCA) based approach to failure prediction in industrial robots using event log information. The event logs are collected through a remote service set-up from a robot controller. The proposed method reduces the dimensionality of the original data, which consists of interrelated events, while retaining the variation present in the data. Using PCA and multivariate statistics such as Hotelling's T-squared, Q residuals and Q contribution charts, we are able to detect abnormal event-pattern behavior within 30 days before failure.
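As a rough illustration of the monitoring statistics named above (not the authors' implementation or data), the hypothetical sketch below fits a PCA model to "normal" event-count data and flags a new observation whose Hotelling T-squared or Q-residual statistic exceeds a simple empirical threshold.

```python
# Hypothetical sketch of PCA-based anomaly detection with Hotelling T^2 and
# Q (squared prediction error) statistics; thresholds are plain percentiles,
# not the formal control limits a production prognostics system would use.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.poisson(lam=5.0, size=(200, 12)).astype(float)  # event counts per time window

pca = PCA(n_components=3).fit(X_train)

def t2_and_q(X):
    scores = pca.transform(X)                                   # projection onto principal components
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)    # Hotelling T^2
    residual = X - pca.inverse_transform(scores)                 # variation not captured by the model
    q = np.sum(residual**2, axis=1)                              # Q statistic (SPE)
    return t2, q

t2_ref, q_ref = t2_and_q(X_train)
t2_limit, q_limit = np.percentile(t2_ref, 99), np.percentile(q_ref, 99)

x_new = rng.poisson(lam=5.0, size=(1, 12)).astype(float)
x_new[0, 4] += 40.0                                              # simulate an abnormal burst of one event type
t2_new, q_new = t2_and_q(x_new)
print("alarm:", t2_new[0] > t2_limit or q_new[0] > q_limit)
```

Per-variable Q contributions (the squared residual of each event type) can then indicate which events drive an alarm, which is the role the contribution charts play in the abstract.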

Balanced Measurement Sets: Criteria for Improving Project Management Practices
The availability of a measurement framework right from the early stages of a project can have a very positive impact on the management of the software development process. In this paper, we address this problem by proposing a methodology that allows early adoption of balanced measurement sets, which are iteratively refined at each iteration of the process. The proposed methodology can be implemented and supported by open source tools such as the Spago4Q platform.

Safety Critical Robot Programming and Testing for Operations in Industrial Co-production
In a smart and flexible industrial setting where human operators co-produce together with robots and heavy machines, the safety in, and testing of, programmed solutions become crucial. In this study we set up a camera-monitoring safety system in a robot production cell and use exhaustive testing based on use cases and states to achieve a safe solution for co-production. The system as a whole is successfully evaluated, including human interaction and live nail gun demonstrations. We conclude that the combination of strict handling of requirements, planning and testing does allow for an industrially cost-effective and safe solution.

Towards an Evaluation Framework for Availability Solutions in the Cloud
The Cloud is a new paradigm for providing computing services. It has many benefits, such as cost efficiency, better resource utilization and scalability. However, critical businesses (e.g. the telecommunication industry) have major concerns about the availability of their services in the Cloud. Different availability solutions can be used for services in the Cloud. These solutions provide different protection mechanisms against failures at different layers (applications, virtual resources, physical hosts). In this paper we provide a framework for the evaluation of availability solutions. We determine the different aspects that should be taken into account for protecting services in the Cloud, and therefore for evaluating potential solutions. We put this framework into practice by investigating and evaluating a set of availability solutions based on existing Cloud and availability-related components, including OpenStack, VMware technologies and OpenSAF.

A formal method applied to the automated software engineering with quality guarantees
Modern systems tend to be larger, more complex, and dependent on an increasingly large set of requirements. In contrast, system development practices remain human-centered, depending mostly on the engineer's expertise to be carried out. This paper shows how maximally permissive and deadlock-free software components can be produced automatically. We argue that modeling methods and mathematical operations can be combined to systematically manage the software development process, based on high-level views of the system. Results show that potentially complex programming tasks become easier and independent of the expertise of the software engineer. Examples are provided to illustrate the approach.

Automated Generation of Failure Modes and Effects Analysis from SysML Models
This paper describes a method for the automated generation of Failure Modes and Effects Analyses from SysML models containing block definition diagrams, internal block diagrams, state machine diagrams, and activity diagrams. The SysML model can be created in any SysML modeling tool, and the analysis is then performed using the AltaRica language and modeling tool. An example using a simple satellite and ground user illustrates the approach.

CAMDIT: A Toolkit for Integrating Heterogeneous Data for enhanced Service Provisioning
Data integration is classified as an Open and Lingering (OL) problem which must be adequately addressed because of its myriad benefits, especially in the health sector where collaborative medicine is vital. The data integration problem arises from the disparity in semantic and syntactic representations of medical data. It is a challenge that must be solved to realize effective collaboration in the health sector, and it is paramount for efficient health-care service provisioning. In recent times, different approaches and software artifacts such as services, components and tools have been proposed for resolving the data integration problem. However, existing approaches suffer from data inaccuracy, data unreliability, increased query response time, network bottlenecks and poor system performance. The focus of this paper is to present our technique and the CAMDIT toolkit, which efficiently achieve data integration, data accuracy and reliability while reducing query response time and the impact of network bottlenecks on system performance.

A Six Sigma Approach to Design for Security
Design for Security (DFS) is a maturity model that guides and measures the capabilities and practices that help Business Units (BUs) deliver secure products. DFS provides the metrics and planning framework for implementing programs that increase an organization's capability to deliver products meeting security and privacy standards. Maturity model: a set of structured levels that describe how well the behaviors, practices and processes of an organization can reliably and sustainably produce secure products. Guides and measures: guidance that helps a BU improve its product assurance practices and processes, together with the ability to measure the BU's maturity level. Capabilities and practices: product development capabilities and practices in the product security space.

Verifying Response Times in Networked Automation Systems Using Jitter Bounds
Networked Automation Systems (NAS) have to meet stringent response-time requirements during operation. Verifying the response time of an automation system is an important step in the design phase, before deployment. Timing discrepancies due to the hardware, software and communication components of a NAS affect the response time. This investigation uses model templates for verifying the response time in NAS. First, jitter bounds model the timing fluctuations of NAS components. These jitter bounds are the inputs to model templates, which are formal models of the timing fluctuations. The model templates are atomic action patterns composed with three composition operators (sequential, alternative and parallel) and embedded in a time wrapper that specifies clock-driven activation conditions. Model templates in conjunction with a formal model of the technical process offer an easier way to verify the response time. The investigation demonstrates the proposed verification method using an industrial steam boiler with typical NAS components on the plant floor.

MBT for GlobalPlatform Compliance Testing: Experience Report and Lessons Learned
Compliance testing is done to determine whether a system meets a specified standard prescribed by a given authority. One key goal of compliance testing is to ensure interoperability between systems on the basis of agreed norms and standards, while allowing acceptable variations introduced by the vendors of compliant products. This paper reports on the deployment of a Model-Based Testing approach, based on the Smartesting solution, to produce compliance test suites for GlobalPlatform specifications, which aim to ensure the long-term interoperability of embedded applications on secure chip technology. After relating the context and the motivation for using a Model-Based Testing approach as a keystone of the GlobalPlatform Compliance Program since 2007, the paper describes the GlobalPlatform Working Group testing process and discusses the lessons learned from this success story.

Security Evaluation of a Linux System - A Common Criteria EAL4+ certification experience
This paper discusses our experience with the certification process of FIN.X RTOS, a Linux distribution certified as Common Criteria Evaluation Assurance Level 4+ (EAL4+). In May 2014, the FIN.X RTOS successfully passed the Common Criteria evaluation process, as stated by the Certification Report: “The product identified in this certificate complies with the requirements of the standard ISO/IEC 15408 (Common Criteria) v. 3.1 for the assurance level: EAL4+ (ALC_FLR.1) Rome, May the 21st 2014.”

NAPOLI FUTURA: Novel Approaches for Protecting Critical Infrastructures from Cyber Attacks
This paper presents the main objectives and preliminary results of the NAPOLI FUTURA project, which aims to define novel approaches for protecting critical infrastructures from cyber attacks. The paper focuses on the architectural design of the NAPOLI FUTURA platform. The platform leverages cutting-edge big data analytics solutions to detect security attacks and to support live migration of services in the context of Critical Information Infrastructures (CIIs).

Fault Injection Experiments With the CLAMR Hydrodynamics Mini-App
In this paper, we present a resilience analysis of the impact of soft errors on CLAMR, a hydrodynamics mini-app for high performance computing (HPC). We utilize F-SEFI, a fine-grained fault injection tool, to inject faults into the kernel routines of CLAMR. We demonstrate visually the impact of these faults as they are either benign (have no impact on the results), cause silent data corruption (SDC), or cause the application to crash due to instabilities. We quantify the probability that an injected fault will cause CLAMR to transition to one of the above three states using F-SEFI. Finally, we explore the relationship between the application’s fault characteristics and when the fault is injected in simulation time. Overall, we find that 17% and 24% of the faults propagate into SDCs and crashes, respectively.

Model-Checking Real-Time Properties of an AutoFlight Control System Function
We relate an experiment in modeling and verification of an avionic function. The problem addressed is the correctness of a temporal condition enabling the detection of some physical faults in the hardware implementing the function. Using the Fiacre/Tina verification toolset, we produced a formal model abstracting the function, and confirmed by model-checking that the condition determined analytically is indeed correct. The modelling issues ensuring tractability of the model are discussed.

Network Function Virtualization: Challenges and Directions for Reliability Assurance
Network Function Virtualization (NFV) is an emerging solution that aims at improving the flexibility, the efficiency and the manageability of networks, by leveraging virtualization and cloud computing technologies to run network appliances in software. Nevertheless, the “softwarization” of network functions imposes software reliability concerns on future networks, which will be exposed to software issues arising from virtualization technologies. In this paper, we discuss the challenges for reliability in NFV infrastructures (NFVIs), and present an industrial research project on their reliability assurance, which aims at developing novel fault injection technologies and systematic guidelines for this purpose.

Event Logging in an Industrial Development Process: Practices and Reengineering Challenges
This paper discusses a preliminary analysis of the event logging practices adopted in a large-scale industrial development process at Selex ES, a leading Finmeccanica company in electronic and information solutions for critical systems. We assess a total of more than 50 million lines of logs produced by an Air Traffic Control (ATC) system. The analysis reveals that event logging is not strictly regulated by company-wide practices, which results in heterogeneous logs across different development teams. We introduce our ongoing effort to develop automated support for browsing the collected logs, along with a uniform logging policy.

Proving Model Equivalence in Model Based Design
We introduce the concept of constrained equivalence of models in model-based development and present a proof technology for establishing constrained equivalence for models documented in MathWorks Simulink. We illustrate the approach using a simple model of an automobile anti-lock braking system.

Developing Software Reliability Models in the Architecture Phase of the Software Lifecycle
We present an approach for software reliability modeling at the architecture phase that has been applied to a large complex industrial system. The practicality of the approach is supported by the ease of use and model simplicity, which followed from the failure data analysis and the system architecture workflow modeling of this large and complex industrial system.

Applying Metrics to Identify and Monitor Technical Debt Items during Software Evolution
The Technical Debt (TD) metaphor has been used in the software community as a way to manage and communicate the long-term consequences that some technical decisions may cause. Although intuitive, research on TD does not discuss practical approaches to identifying and monitoring TD items that could be applied in a transparent way during the process of software evolution. This work proposes a technique based on software metrics that automates the process of TD identification and monitoring. To that end, a subset of metrics was analyzed and related to aspects of TD items. To validate the technique, it was used to analyze a TD item from a seven-year multinational project. The evidence suggests that TD items can be related to software metrics, so that software metric tools could be an important resource for automating the identification and monitoring of TD items.

Using Data Analytics to Drive Product Software Reliability
In this presentation we will show our data-analytics methodology for software reliability improvement; the model results from a production software system, including several experiments with the data and the insights gained for future work; the potential gains from using this methodology; and the areas of future work needed to further explore the methods.

Reliability Models for the Internet of Things: A Paradigm Shift
More than 50 billion devices are expected to be Internet-enabled by 2020 [1]. These devices, commonly referred to as the “Internet of Things” (IoT), are expected to become ubiquitous and involved in every aspect of life, ranging from wearable devices to sensors monitoring industrial processes. The networking equipment connecting these devices will need to seamlessly communicate with several different software platforms, with software continuously upgraded. In addition, these devices will be exposed to unprecedented, highly varying external stimuli: harsh thermal fluctuations, fluids, moisture, vibration and shock.

Code Analysis as an Enabler for SW Quality Improvement: the SELEX ES SW Engineering experience
The SELEX ES research and development effort to improve delivered software quality is constantly increasing, striving to stay on the cutting edge of methodologies and techniques for improving customer satisfaction, as well as for gaining a competitive advantage over competitors worldwide. In the last couple of years, great attention has been devoted to the Software Verification process and, in particular, to static code analysis as a means of increasing the quality of legacy solutions and of allowing the design of highly reliable new products at an affordable cost and in reasonable times with respect to the market’s dogged demand. This paper briefly reports the SELEX ES experience and preliminary results, with particular emphasis on i) the use of static analysis as a direct intrinsic quality maker, ii) the definition of ad hoc KPIs and quality targets, and iii) the way static analysis results have been exploited to gain side indications and metrics for pre-release software defect prevention and prediction.

Performability Comparison of Lustre and HDFS for MR Applications
With its simple principles for achieving parallelism and fault tolerance, the map-reduce framework has captured wide attention, from traditional high performance computing to marketing organizations. The most popular open source implementation of this framework is Hadoop. Today, the Hadoop stack comprises various software components, including the Hadoop Distributed File System (HDFS) as the distributed storage layer, among others such as GPFS and WASB. Traditional high performance computing has always been at the forefront of developing and deploying cutting-edge technology and solutions, such as Lustre, a parallel I/O file system, to meet its ever-growing needs. To support new and upcoming use cases, there is a focus on tighter integration of Hadoop with existing HPC stacks. In this paper, we share our work on one such integration by analyzing an FSI workload built using the map-reduce framework and evaluating the performance and reliability of the application on an integrated stack with Hadoop and Lustre, through Hadoop extensions such as the Hadoop Adapter for Lustre (HAL) and the HPC Adapter for MapReduce (HAM) developed by Intel, while comparing the performance against the Hadoop Distributed File System (HDFS). We also carried out a performability analysis of both systems, where HDFS ensures reliability using a replication factor while Lustre does not replicate any data but ensures reliability by having multiple OSSs connect to multiple OSTs.

Supporting Reviewing of Warnings in Presence of Shared Variables: Need and Effectiveness
Static analysis tools have showcased their usefulness in software quality assurance by detecting defects early in the software development life cycle. However, these tools face scalability issues on many real-world systems. Clustering, i.e., breaking a large system into multiple clusters, is a commonly used technique for scaling these tools to large systems. A cluster thus formed represents a system functionality or designated task that is implemented independently of, or in communication with, other system functionalities. Each generated cluster, being smaller and less complex than the original system, can be analyzed using the static analysis tools.

Predicting Release Quality
Identifying correlations between in-process development and test metrics is key in anticipating subsequent reliability performance in the field. For several years now at Cisco, our primary measure of field reliability has been Software Defects Per Million Hours (SWDPMH), and this metric has been goaled on a yearly basis for over 100 product families. A key reason SWDPMH is considered to be of critical importance is that we see a high correlation between SWDPMH and Software Customer Satisfaction (SW CSAT) over a wide spectrum of products and feature releases. Therefore it is important to try to anticipate SWDPMH for new releases before the software is released to customers, for several reasons:

Efficient Development of Reliable Solutions through Education and Effective Training: A Best Practice Report from ABB
Efficient development of reliable solutions is a goal of many companies, and a formidable task that requires a well-trained and highly motivated management and engineering workforce. In 2008 ABB started out with a sustainable effort to implement a software engineering education and training platform, the software development improvement program (SDIP), to address these high-level goals in a systematic way. With clear guidance and the continuous support and attention of top management, SDIP quickly evolved and offered its first online course in April 2011 [1]. Since then, SDIP has persisted through changes in top management, delivered training to more than 10,000 employees worldwide [1], and become a showcase for effective optimization, achieving traction and recognition in both the company and the community [1-4].

Planning of prioritized test procedures in large integrated systems: best strategy of defect discovery and early stop of testing session. The Selex-ES Experience
Integrated systems involving more than 50 different main SW applications running on more than 20-30 servers and workstations require a testing strategy that grants both early discovery of the main problems and an early go/no-go decision point. The Selex-ES solution for Air Traffic Management is a clear example of this kind of complex integrated system: the variety of services and the structure of its SW components require answering challenging questions such as which tests should be executed first, what the minimum set of tests to be executed is, and what the pass/fail criteria at system level are. This paper discusses industry problems in this field and, by presenting the Selex-ES solution, wishes to foster open discussion with other industries in the field of complex and mission-critical systems.