November 3-6, 2014
Wednesday, Nov. 5, 9:00am
Veena Mendiratta and Sunita Chulani
Catello Di Martino
Large amounts of data are generated throughout the software lifecycle: source code, feature specifications, bug reports, test cases, execution traces/logs, as well as failure data from the field. Traditionally, classic statistical methods were used for data analysis and prediction. With the growing use of analytics methods such as machine learning, data mining, and data visualization, much more of this data can now be analyzed in new ways for descriptive, prescriptive, and predictive analysis in software engineering. The goal of this panel session is to provide a forum for discussion of current practices and the vision for future work in the area of data analytics for software engineering across various domains.
Wednesday, Nov. 5, 2:00pm - CC - Room A
Salvatore Scervo, Selex ES
Roberto Giacobazzi, University of Verona
Quentin Ochem, AdaCore
Vladimir Sklyar, RADYI
Vadim Okun, National Institute of Standards and Technology (NIST)
It is estimated that each developer injects, on average, one defect for every eight lines of code, and it has been demonstrated that no software can be proven defect-free. The best approach for dealing with failures is early detection, i.e., the attempt to detect software defects at the earliest possible stage (ideally, in the same SDLC phase in which they are injected) in order to prevent their degeneration into failures and to reduce fixing and maintenance costs. Static analysis is a powerful enabler for early detection: at compile time, it is a means of identifying programming errors that escape both compilers' detection facilities and functional testing campaigns. For this reason, respected software giants such as Microsoft and NASA have used the technique extensively, with very good results.
Beyond its traditional application of checking code compliance with a set of imposed coding standards, static analysis can provide much more. Many studies explicitly show that there exists a positive correlation between statically detected defects and post-release failures; such defects also appear to be very good indicators of application vulnerability.
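To illustrate the idea, here is a minimal, hypothetical sketch of static analysis: a checker built on Python's standard ast module that inspects source code without executing it and flags a classic defect pattern (a mutable default argument) that neither the interpreter nor a passing functional test would report. The analyzer, the defect pattern chosen, and the sample code are illustrative assumptions, not a description of any specific COTS tool.

```python
import ast

# A toy static analyzer: walk the abstract syntax tree of the source
# (no execution needed) and flag mutable default arguments -- a latent
# defect that compiles cleanly and can pass functional tests.
def find_mutable_defaults(source):
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(
                        f"line {default.lineno}: mutable default in '{node.name}'"
                    )
    return findings

# Hypothetical code under analysis: the shared list 'bucket' silently
# accumulates items across calls -- a bug no single test may expose.
buggy = """
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket
"""

print(find_mutable_defaults(buggy))
```

Commercial static analyzers apply the same principle at scale, with hundreds of checkers covering memory errors, concurrency defects, and security vulnerabilities.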
This panel aims to create a stimulating discussion about how large companies view static analysis: real-world experience and feedback on the use of the technique in critical software systems, issues and limitations of the most widely used COTS static analysis tools, and expected benefits and estimated ROI.
Panelists will engage in a role-playing exercise to explore the challenges, issues, and synergies of applying static code analysis in industry and academia.
Monday, Nov. 3, 11:00am
W. Eric Wong, University of Texas at Dallas
Regardless of the effort spent on developing a computer program, it may still contain bugs. In fact, the larger and more complex a program, the higher the likelihood that it contains bugs. When the execution of a program on a test case fails, it reveals the presence of bugs, and the burden is then on the programmers to locate and fix them. However, program debugging can be extremely time-consuming and tedious, especially given the size and complexity of today's software. Manual debugging is certainly not the right approach.
With this realization, researchers have proposed various techniques to assist programmers in finding and fixing bugs more effectively and efficiently. Yet, many questions still remain open and need to be further explored:
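One widely studied family of such techniques is spectrum-based fault localization. As a hedged sketch (the coverage matrix and test outcomes below are hypothetical), the Tarantula metric ranks each statement by how strongly its execution correlates with failing tests:

```python
# Spectrum-based fault localization (Tarantula): suspiciousness of a
# statement = failRatio / (failRatio + passRatio), where failRatio is the
# fraction of failing tests that cover it, and passRatio likewise for
# passing tests. Higher score = examine first.
def tarantula(coverage, results):
    total_passed = sum(1 for r in results if r)
    total_failed = len(results) - total_passed
    scores = {}
    for stmt, covered in coverage.items():
        passed_cov = sum(1 for c, r in zip(covered, results) if c and r)
        failed_cov = sum(1 for c, r in zip(covered, results) if c and not r)
        fail_ratio = failed_cov / total_failed if total_failed else 0.0
        pass_ratio = passed_cov / total_passed if total_passed else 0.0
        denom = fail_ratio + pass_ratio
        scores[stmt] = fail_ratio / denom if denom else 0.0
    return scores

# Hypothetical spectrum: 4 test cases (True = pass) over 3 statements.
coverage = {
    "s1": [1, 1, 1, 1],  # covered by every test
    "s2": [0, 1, 0, 1],  # covered only by passing tests
    "s3": [1, 0, 1, 1],  # covered by both failing tests
}
results = [False, True, False, True]
scores = tarantula(coverage, results)
print(max(scores, key=scores.get))  # s3 ranks most suspicious
```

The ranking directs the programmer's attention to the code most correlated with failure, rather than requiring a manual walk through the whole program.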
Monday, Nov. 3, 5:00pm
The safety-critical industry as a whole has, for decades, followed a conservative approach to safety. On the one hand, regulatory authorities, fearing the potential risks, reject or discourage the adoption of recent innovations, limiting the complexity of functions allocated to software that could otherwise provide benefits to users and a "competitive advantage" to industry. On the other hand, researchers are often interested in the theoretical aspects of their own research, without considering market and industrial needs. This gap can be attributed to a number of factors, such as communication issues, cultural differences, and fear of change. Topics of interest in the field of software reliability include (but are not limited to):
Wednesday, Nov. 5, 5:00pm
John Knight, University of Virginia, USA
Dave Higham, Delphi Diesel Systems, UK
Kenji Taguchi, AIST, Japan
Over the last twenty years, there has been increasing interest in using structured argumentation notations such as GSN (Goal Structuring Notation) or CAE (Claims-Argument-Evidence) to communicate the structure of assurance arguments. While such arguments are structured, they remain informal. There is growing interest in exploring how these informal arguments may be modelled in formal logic, potentially opening up forms of analysis and automation not possible with informally recorded arguments. This panel will discuss the considerations in balancing the roles of informal and formal logic in modelling assurance case arguments.