November 3-6, 2014
Naples, Italy
The following keynotes and talks are planned during ISSRE 2014.
Dates and times may change.
MAKING IT BIG(GER AND BIGGER): the challenge of dominating complexity in a transnational companies merging process.
by Salvatore Scervo
Tuesday, Nov. 4, 9:30am - Hotel RC - Mirabilis Room
Abstract
Selex ES was born from the merger of three companies, based mainly in Italy and the UK, operating in different business areas and with different policies and processes, into one quite big(ger) company.
The process of creating such a giant has been running since January 2013, when three FINMECCANICA subsidiaries, formerly named SELEX SI, SELEX GALILEO and SELEX ELSAG, were merged in both technical and organizational terms.
Now that almost two years have passed, the time is ripe to reason about the strategies, achievements, risks, mitigations, failures and successes that such a challenging process produced. It is also time to pinpoint the winning moves that enabled 17k people (800 of them committed to software programmes) to work well together, moves worth exploiting in future company reorganizations, as well as the slips that should be avoided instead. This talk discusses how the SELEX ES Software Engineering function has taken up the challenge, focusing on i) process and policy governance, ii) human skills and iii) efficiency improvement strategies.
It will describe what has been done to enable the seamless integration of existing worlds, to overcome people's reluctance to change, and to harmonize the future targets to pursue by means of a holistic innovation roadmap.
Presenter's biography
Salvatore (Rino) Scervo leads one of the largest Italian software communities (about 800 engineers) as Director of Sw Engineering at Selex ES (17,700 employees and €3.5 billion in revenues), a Finmeccanica company and international leader in the development of Space, Defence, Security and Smart Solutions. Formerly in charge of Sw Product Engineering as Vice President at Selex Sistemi Integrati, Salvatore gained broad and consolidated skills and competences in Air Traffic Control, Air Defence, Naval Defence, and C2 and C4I system design, development and deployment. Today, he is strongly committed to supporting the SELEX ES Air and Vessel Traffic Control business area, leading dozens of software engineers in the field, as well as to paving the way for software innovation by designing and deploying novel methodologies and processes for quality improvement.
Assessment techniques, certification and [what else we need for] confidence in software
by Lorenzo Strigini
Monday, Nov. 3, 9:00am [WoSoCer workshop] - Hotel RC - Santa Lucia Room
Abstract
Certification of software may play multiple roles, both intended and unintended, and both beneficial and damaging. Some of these roles are unrelated to what the name "certification" is about, i.e., creating certainties; for those that are related to it, we should usually talk about creating confidence rather than certainty. With an eye on this socio-technical landscape, this talk will attempt a map of the logical links between the evidence collected through assessment practices and the confidence in reliability, safety or security that users wish to derive from the evidence. Central issues are the links between deterministic and probabilistic claims, their scopes of validity, and the evidence behind them. Probing these links raises useful questions about unstated assumptions, possible means for giving confidence more solid bases, and how these could affect the practice of certification.
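To make the link between assessment evidence and confidence concrete, here is a minimal illustrative sketch (my example, not taken from the talk) of one classical statistical-testing argument: if a system survives n independent tests drawn from its operational profile without failure, the confidence that its probability of failure on demand (pfd) is below a bound p is 1 - (1 - p)^n. Inverting this shows how fast the required evidence grows with the strength of the claim.

```python
import math

def tests_needed(pfd_bound: float, confidence: float) -> int:
    """Failure-free, operationally representative test runs needed to claim
    pfd < pfd_bound at the given confidence level, under the classical
    (and deliberately simplified) independence assumptions."""
    # Solve (1 - pfd_bound)**n <= 1 - confidence for n.
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - pfd_bound))

# Each extra "nine" in the claimed bound multiplies the testing effort by ~10:
print(tests_needed(1e-3, 0.99))   # 4603 failure-free tests
print(tests_needed(1e-4, 0.99))   # 46050 failure-free tests
```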
Presenter's biography
Lorenzo Strigini is a professor of systems engineering and the current director of the Centre for Software Reliability at City University London, which he joined in 1995. He has worked for more than 30 years on problems of reliability, safety, and security of software and systems, including acting as principal investigator in national and international research projects and as a consultant on fault tolerance and assurance for critical applications, publishing widely in these areas. Much of his research aims to improve the scientific credibility of claims about dependability, using probabilistic modelling for insight, as well as for inference from data. His work on software diversity began in the 1980s, as a visiting scholar at the University of California, Los Angeles. Other research topics have included software testing methods, reliability of computer-assisted decision making, the problems of decisions about critical systems, and interactions between safety and security.
Software Rejuvenation in Cloud Systems
by Antonio Puliafito
Monday, Nov. 3, 9:00am [WoSAR workshop] - Hotel RC - Sveva Room
Abstract
Cloud computing is a promising paradigm able to rationalize the use of hardware resources by means of virtualization. Virtualization makes it possible to instantiate one or more virtual machines (VMs) on top of a single physical machine managed by a virtual machine monitor (VMM). Like any other software, a VMM experiences aging and failures. Software rejuvenation is a proactive fault management technique that involves terminating an application, cleaning up the system's internal state, and restarting it to prevent the occurrence of future failures. Adopting software rejuvenation techniques in Cloud computing is a new research frontier, with challenging problems related to the complexity and the distributed nature of the Cloud itself.
In this talk, we investigate how Cloud systems may benefit from the adoption of rejuvenation principles and propose a technique to model and evaluate the VMM aging process and to investigate the optimal rejuvenation policy that maximizes VMM availability under variable workload conditions. Starting from dynamic reliability theory and adopting symbolic algebraic techniques, we investigate and compare existing time-based VMM rejuvenation policies. We also propose a time-based policy that adapts the rejuvenation timer to the VMM workload condition, improving system availability. The effectiveness of the proposed modeling technique is demonstrated through a numerical example based on a case study taken from the literature.
The talk also addresses the problem of gathering data on resource usage that may indicate critical working conditions requiring the rejuvenation of software components. A 3D model for analyzing cloud monitoring information is proposed to retrieve meaningful information and support the coordination of management actions between the cloud infrastructure and the application. This monitoring solution is applied to a testbed using OpenStack and the WordPress application.
Finally, rejuvenation is combined with green Cloud concepts, with the goal of minimizing the effects of system downtime while reducing its impact in terms of energy consumption and environmental pollution.
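As a rough illustration of the trade-off such rejuvenation models capture, the sketch below assumes Weibull-distributed aging failures (an assumption of mine, not the modeling technique of the talk): rejuvenating too often wastes planned downtime, rejuvenating too rarely risks longer unplanned outages, and the optimal timer shifts with workload.

```python
import math

def survival(t, shape, scale):
    """Weibull survival function; shape > 1 gives an increasing (aging) failure rate."""
    return math.exp(-((t / scale) ** shape))

def availability(timer, shape, scale, mttr_fail, mttr_rejuv, dt=0.5):
    """Steady-state availability under a time-based rejuvenation policy:
    expected uptime per cycle over expected cycle length (renewal argument)."""
    # E[min(time-to-failure, timer)] = integral of the survival function over [0, timer]
    uptime = sum(survival(i * dt, shape, scale) for i in range(int(timer / dt))) * dt
    p_fail = 1.0 - survival(timer, shape, scale)  # aging failure strikes before the timer
    downtime = p_fail * mttr_fail + (1.0 - p_fail) * mttr_rejuv
    return uptime / (uptime + downtime)

def best_timer(shape, base_scale, workload, mttr_fail, mttr_rejuv):
    """Workload-adaptive variant: heavier workload is assumed to accelerate
    aging by shrinking the Weibull scale; pick the availability-maximizing timer."""
    scale = base_scale / workload
    candidates = range(24, 24 * 60 + 1, 24)  # candidate timers from 1 to 60 days, in hours
    return max(candidates, key=lambda t: availability(t, shape, scale, mttr_fail, mttr_rejuv))

# Unplanned repair (4 h) costs far more than planned rejuvenation (0.2 h),
# so the optimal timer shortens as the workload grows.
print(best_timer(2.0, 24 * 30, workload=1.0, mttr_fail=4.0, mttr_rejuv=0.2))
print(best_timer(2.0, 24 * 30, workload=2.0, mttr_fail=4.0, mttr_rejuv=0.2))
```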
Presenter's biography
Antonio Puliafito is a full professor of computer engineering at the University of Messina, Italy. His interests include parallel and distributed systems, networking, wireless, Grid and Cloud computing. During 1994-1995 he spent 12 months as a visiting professor at the Department of Electrical Engineering of Duke University, North Carolina, USA, where he was involved in research on advanced analytical modelling techniques. He was the coordinator of the Ph.D. course in Advanced Technologies for Information Engineering at the University of Messina and is currently responsible for the degree course in computer engineering. He has acted as a referee for the European Community since 1998. He has contributed to the development of the software tools WebSPN and ArgoPerformance, which are used at both national and international level. Dr. Puliafito is co-author (with R. Sahner and Kishor S. Trivedi) of the book "Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package", published by Kluwer Academic Publishers. He is currently the director of the RFIDLab, a joint research lab with Oracle and Intel on RFID and wireless, and the President of the Centre on Information Technologies at the University of Messina. From 2006 to 2008 he was the technical director of Project 901, aimed at creating a wireless/wired communication infrastructure inside the University of Messina to support new value-added services (winner of the CISCO innovation award). He is also responsible, for the University of Messina, for two large Grid projects (TriGrid VL, http://www.trigrid.it, and PI2S2, http://www.pi2s2.it) funded by the Sicilian Regional Government and by the Ministry of University and Research, respectively. He was involved in several European projects such as Reservoir, Vision and CloudWave. He is also the main investigator of the Italian PRIN2008 research project "Cloud@Home", which aims to combine cloud and volunteer computing. He is currently the scientific director of the SIGMA PON_01 project on the management and control of multi-risk systems. He was the scientific director of Inquadro s.r.l., a spin-off company of the University of Messina whose main business is RFID and its applications in both the public and private sectors. He recently founded the start-up DH Labs, which develops systems and solutions related to the Internet of Things.
From Script-kiddies to Cyberwars...
by Leylya Yumer
Monday, Nov. 3, 11:00am [RSDA workshop] - Hotel RC - Catalana Room
Abstract
The Internet and the tools that allow us to use it have brought many comforts into our lives. As a result, it did not take long before the Internet became one of the most indispensable components in many aspects of our lives. Unfortunately, the Internet attracted miscreants as well. The more the Internet evolved, the more advanced the cyber attacks that emerged. According to many recent reports, cyber threats are now accepted to be the top threat facing not only individuals, as in the early 2000s, but also enterprises and critical infrastructures. In 2013, the World Economic Forum placed cyber attacks among the most influential global risks. This indicates that cyber attacks are now acknowledged in the same way as other, accidental risks such as health problems, safety, and extreme weather. This talk will present the current threat landscape and outline the technical challenges malware defenders face in fighting advanced cyber threats. Furthermore, the talk will also touch upon the urgent need for big data analysis of security-related data.
Presenter's biography
Leylya Yumer has been a research engineer at Symantec Research Labs since 2012. She obtained her Ph.D. in December 2011 from Eurecom, in the south of France. The topic of her Ph.D. thesis was network-based botnet detection; in it, she proposed three different network-based botnet detection schemes, one of which is Exposure.
Her research interests span most computer security problems, with a special focus on DNS-based malware detection systems, malware analysis, reverse engineering and big data analysis. Currently, she conducts large-scale analysis of security data feeds to devise novel malware detection systems and to uncover previously unknown facts about cyber threats. She is working on the development of a malicious-domain detection system that performs passive DNS analysis on large collections of DNS data produced by real users. In addition, she is involved in Symantec's Worldwide Intelligence Network Environment project.
The Dual Nature of Software Aging: Twenty Years of Software Aging Research
by Stefano Russo
Monday, Nov. 3, 2:00pm [WoSAR workshop] - Hotel RC - Sveva Room
Stefano Russo |
Abstract
Twenty years have passed since Parnas coined the expression software aging in 1994, highlighting that programs, like people, get old, due to a software product's owner failing to modify it to meet changing needs, or as a result of the changes that are made.
In 1995, Kintala et al. showed that software also ages while running. They were concerned with continuously running applications, such as server programs, which exhibit an increasing failure rate as their runtime increases. They introduced the concept of software rejuvenation as a proactive technique to counteract this type of aging.
These two seminal works originated two different meanings of software aging.
Parnas' work raised the software engineering viewpoint on this phenomenon. During their lives, applications undergo many changes that tend to reduce software quality. As the software portfolio is a key asset for many companies, software aging needs to be monitored and dealt with in software maintenance.
Kintala's work gave rise to a dynamic viewpoint, in the perspective of software dependability. In the last decade, this aspect has been investigated in depth, much more than the other. The current understanding of the aging of running software is that it is usually, but not always, a consequence of software faults; accordingly, such faults are referred to as aging-related bugs. However, aging may also manifest as a consequence of the natural dynamics of a system's behavior; this is referred to as natural aging.
This somewhat provocative talk argues that software aging might have a dual nature, similarly to some physical phenomena, the most prominent of which is light. Between the 17th and the 20th centuries, some famous experiments showed that light is made of particles (photons) carrying momentum, while others showed that it is an electromagnetic radiation. The apparent wave-particle paradox was bypassed by the theory of quantum mechanics, which showed that the two natures of elementary particles coexist and are complementary. For the software aging phenomenon, one may wonder whether and to what extent its two aspects, static and dynamic, are related and perhaps complementary: for instance, aging-related bugs might be due to the maintenance and evolution of software products, and natural aging might be a consequence of changes and evolution in an application's execution environment.
This talk tries to reconcile the software engineering and software dependability visions of software aging, in the hope of opening new research areas able to provide a wider and deeper understanding of the very nature of software aging phenomena.
Presenter's biography
Stefano Russo is Professor of Computer Engineering at the Federico II University of Naples, where he teaches Software Engineering and Distributed Systems, and leads the MOBILAB research group on distributed and mobile systems (www.mobilab.unina.it). He has co-authored over 130 papers in the areas of distributed software engineering, middleware technologies, software dependability, software aging, and mobile computing. He is an Associate Editor of the IEEE Transactions on Services Computing.
The 21st Century challenge - open systems on a closed planet
by Hillary Sillitto
Tuesday, Nov. 4, 11:00am [WOSD workshop] - CC - Room B
Abstract
In the prehistoric era of systems engineering, many successful systems were created and used, from the complex organisational and societal systems that built the pyramids, to networked command, control and intelligence systems in the first half of the 20th Century.
Systems Engineering (SE) became recognised as a distinct activity in the second half of the 20th Century, driven by the American efforts on ballistic missiles, nuclear submarines, and the space programme. In that era, which I think of as 'the first age' of systems engineering, the system of interest was regarded as pretty much a closed system. It had a clear boundary, and limited interactions with the 'rest of the world', which was an infinite source of resources, and an infinite sink for waste. So the first age of systems engineering could be characterised as 'closed systems in an infinite environment'. It coincided with the early years of computers and software.
Towards the end of the twentieth century, systems started to interact with each other and form 'systems of systems', so the design paradigm had to shift to recognise most systems as open rather than closed. The second age of SE was therefore 'open systems in an infinite environment'.
But our environment isn't infinite. Man-made systems now span the planet, and interact with each other on a planetary scale. Our planet is the source for all our resources except for solar and tidal energy; and it is the sink for all the waste we produce. This has led to a new focus on sustainability, environmental concerns, resilience, and discussions on governance of planetary scale systems. So we are now in the 'third age of systems engineering - open systems on a closed planet'.
What does this mean for System Dependability? The pace of change is increasing. It took a century for cars to saturate the market, 30 years for televisions, not much more than ten for mobile phones, and five for smart-phones and tablets. Information grows and circulates in a way unimaginable a few decades ago. Software is becoming pervasive in engineered products and management and administrative systems. This creates massive and increasing cognitive overload, and massive opportunities to leverage information to create value and societal benefit. And it creates massive risks - how do we manage and mitigate unintended consequences of system interactions we don't fully understand, and maybe don't even realise exist?
How do we apply and integrate what we know in our different communities to improve our ability to respond to these problems? As systems become more interconnected, we need to become better at doing and describing system architecture, and at connecting different stakeholders through a common view of architecture that spans soft and hard system perspectives and societal and technical systems. The paradigms now used by the Systems Engineering community to formalise system architecture draw heavily on the paradigms developed by the design and software communities from the 1970s to the 90s. Are they appropriate for the 21st Century? The presentation will spell out some of the challenges, and show some snapshots of how the Systems community is trying to approach them. Is the dependability community facing the same issues?
Presenter's biography
Hillary Sillitto (CEng, FInstP, ESEP) is an author, researcher and consultant in the fields of systems engineering and architecting, and a Visiting Professor at the University of Bristol. Educated in Physics at St Andrews in Scotland, and in Applied Optics at Imperial College London, he started engineering in the 1970s, worked on the design of novel optical systems for a wide range of system applications, and was awarded several patents. He went on to work on increasingly diverse and complex systems in defence, aerospace, security and transport. His responsibilities included heading the UK MOD's Integration Authority (2005-8) and serving as Systems Engineering Director for Thales UK (2010-12). He has served on several advisory boards, including the Large Scale Complex IT Systems (LSCITS) research programme and the University of Bristol Systems Centre. He was recognised as a Thales Fellow, and retired from full-time employment in 2013.
He is a Fellow of the International Council On Systems Engineering (INCOSE) and a Past President of INCOSE's UK Chapter. He has presented many papers at INCOSE symposia, winning several Best Paper awards, and has contributed to the BKCASE Systems Engineering Body of Knowledge and to the latest version of the INCOSE SE Handbook.
His book 'Architecting Systems' will be published during 2014.
The role of data analytics in reliability and security verification
by Brendan Murphy
Tuesday, Nov. 4, 2:00pm [RSDA workshop] - CC - Room A
Abstract
As a product evolves, its approach to reliability and security often goes through three separate stages. In the early stages, reliability and security are achieved by applying good engineering practices, such as code reviews, and trusting in the ability of the engineering team. As the product scales and matures, additional reliability requirements start appearing, such as backward compatibility, and as its functionality evolves so can its attack surface, complicating security verification. There are a number of tools and techniques to address these issues; all that is required is the time and resources. In the third stage, the cost of the product's verification process starts having a significant impact, decreasing the agility of the development process while increasing its cost. It is at this third stage that product groups recognize the need for data analytics to focus verification efforts, so as to ensure the reliability and security of the product without hindering its development. This talk will discuss the practical limitations of many reliability and security techniques, highlighting the lack of silver bullets. The talk will give the speaker's perspective on data analytics and how it can be used to optimize the application of reliability and security techniques during the development process.
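As a toy example of analytics focusing verification effort (my illustration, with made-up weights and module names, not Microsoft's tooling): rank modules by combining recent churn with past defect history, then spend the limited review and testing budget on the riskiest ones first.

```python
from dataclasses import dataclass

@dataclass
class ModuleStats:
    name: str
    commits_last_quarter: int  # recent churn
    past_defects: int          # defects found in earlier releases
    loc: int                   # lines of code

def risk_score(m: ModuleStats) -> float:
    """Hypothetical hand-weighted heuristic; real defect predictors are
    trained on historical data rather than hand-tuned."""
    return (2.0 * m.commits_last_quarter + 3.0 * m.past_defects) / max(m.loc, 1) * 1000

def prioritize(modules: list[ModuleStats], budget: int) -> list[str]:
    """Return the 'budget' riskiest modules for extra verification effort."""
    return [m.name for m in sorted(modules, key=risk_score, reverse=True)[:budget]]

modules = [
    ModuleStats("auth", commits_last_quarter=40, past_defects=12, loc=8000),
    ModuleStats("parser", commits_last_quarter=5, past_defects=1, loc=12000),
    ModuleStats("net", commits_last_quarter=25, past_defects=7, loc=5000),
]
print(prioritize(modules, budget=2))  # ['auth', 'net']
```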
Presenter's biography
Brendan Murphy is a Principal Researcher at the Microsoft Research Centre in Cambridge UK. Brendan works in the Empirical Software Engineering and Measurement (ESE) group at Microsoft focusing on software reliability, dependability, quality and process issues. Over the last year Brendan has been researching software development practices within Microsoft.
Prior to his current position at Microsoft, Brendan was at Compaq Corporation (previously Digital) in Ayr, Scotland, until August 1999, where he ran the DPP program, which collected and analysed dependability data from customer sites. Before working in Scotland, Brendan worked for Digital in Galway, Ireland, for UNISYS (Scotland and US), and for ICL (West Gorton, Manchester).
Brendan graduated from Newcastle University. In his free time you can find him playing golf in and around Cambridge.
Open Systems Dependability - Achievements and future strategy: culture, technology, standards and research
by Mario Tokoro
Tuesday, Nov. 4, 4:00pm [WOSD workshop] - CC - Room B
Abstract
Today, computer systems are increasing in scale, spread, and complexity; they are connected with one another to form boundary-less infrastructures and must accommodate, over their long life cycles, various changes in business and users' requirements, technology, and regulations/standards. Dependability for such systems is therefore one of the issues of highest priority. In order to achieve it, we introduced the notion of "Open Systems Dependability", envisaging these ever-changing systems as Open Systems.
In this talk, the achievements of research on Open Systems Dependability will be presented, based mainly on the experience gained from the DEOS project and its applications. Prospects for the future will then be discussed, covering technology, standardization, and applicable areas beyond software.
Presenter's biography
Dr. Mario Tokoro, former Professor of Computer Science at Keio University, established Sony Computer Science Laboratories, Inc. (http://www.sonycsl.co.jp/en) in 1988 and led it to become one of the world's renowned fundamental research laboratories. He joined Sony Corporation in 1987 and became CTO in 2000. He introduced architecture-based design and a common software platform for consumer electronics products. He played a key role in the establishment of the Consumer Electronics Linux Forum (http://www.celinuxforum.org/), CELF in short, which was absorbed into the Linux Foundation in 2010 to form the CELF workgroup.
Dr. Tokoro has been advocating a new scientific methodology called Open Systems Science to solve problems of complex, ever-changing systems such as earth sustainability, life and health, and huge man-made information infrastructures (Open Systems Science – from Understanding Principles to Solving Problems, IOS Press, 2010). He served as the research supervisor for the DEOS project (http://www.jst.go.jp/crest/crest-os/osddeos/index-e.html) from 2006 through 2014 and opened up the new research area of Open Systems Dependability (Open Systems Dependability – Dependability Engineering for Ever-Changing Systems, CRC Press, 2012).
Challenges and Trends for Automotive Safety Assurance
by Dave Higham
Thursday, Nov. 6, 9:00am [ASSURE workshop] - Hotel RC - Catalana Room
Abstract
The automotive industry's focus on functional safety has grown over the last decade, with no sign of abatement. This keynote outlines the key drivers and trends in addressing challenges posed by customer expectations, technology advances and the emergence of safety standards.
Presenter's biography
Head of Functional Safety at Delphi Diesel Systems.
Responsible for the alignment of process and product to the requirements of functional safety.
Chair of Delphi's corporate functional safety steering team and expert groups.
Lead functional safety assessor for ISO 26262 projects.
Over 25 years' experience in the design, implementation and management of automotive powertrain and infotainment systems and software.
Technical expert on the ISO 26262 working group.
Technical expert for UKAS.
MISRA steering committee member.
Member of the ISA (Independent Safety Assurance) working group.
Model-based Risk Analysis in the Railways Domain
by Markus Schacher
Wednesday, Nov. 5, 9:00am [RISK workshop] - Hotel RC - Sveva Room
Abstract
Back in 2010, the Swiss Railways started an initiative to standardize the interfaces of their highly safety-relevant interlocking systems across all suppliers. As the leading contractor, KnowGravity Inc. approached this challenge in an entirely model-based way: from model-based requirements engineering in SysML, through executable specifications in xUML and model-based testing using the UML Testing Profile (UTP), down to model-based planning and document production. It was therefore only natural to perform risk analysis in a model-based way as well. In this presentation I will show how we developed a formal model to predict and evaluate critical behavior of complex heterogeneous systems using the mechanism of UML profiling. Developing a UML profile for risk analysis enabled us to apply common techniques such as HAZOP, FMEA, FTA and ETA in a commercial UML modeling tool. It also made possible tight model integration and comprehensive traceability between risk models and other languages implemented as UML profiles. I will discuss the organizational as well as technical challenges we were (and still are) facing, particularly the reuse of model elements across multiple systems and components so as to "model by difference" the risk-related aspects of a whole family of systems.
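To give a flavour of the "model by difference" idea in conventional code rather than UML (a toy sketch of mine, not KnowGravity's actual profile or tooling): variants of a system family inherit a base risk model and record only the entries that differ.

```python
# Base FMEA-style risk model shared by a family of interlocking systems
# (hazard names, severities and mitigations here are invented for illustration).
BASE_RISK_MODEL = {
    "point_machine_stuck": {"severity": "critical", "mitigation": "dual actuators"},
    "signal_lamp_failure": {"severity": "major", "mitigation": "lamp proving circuit"},
    "comms_link_loss":     {"severity": "major", "mitigation": "fail-safe stop"},
}

# Each variant stores only its differences from the base model.
VARIANT_DELTAS = {
    "mainline": {},  # uses the base model unchanged
    "shunting_yard": {
        "comms_link_loss": {"severity": "minor", "mitigation": "manual flag procedure"},
        "derailer_failure": {"severity": "major", "mitigation": "speed restriction"},
    },
}

def risk_model_for(variant: str) -> dict:
    """Effective risk model of a variant: base entries overlaid with its deltas."""
    model = {hazard: dict(entry) for hazard, entry in BASE_RISK_MODEL.items()}
    for hazard, delta in VARIANT_DELTAS[variant].items():
        model.setdefault(hazard, {}).update(delta)
    return model

print(risk_model_for("shunting_yard")["comms_link_loss"]["severity"])  # minor
```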
Presenter's biography
Markus Schacher is co-founder and KnowBody of KnowGravity Inc., a small but smart consulting company based in Zurich, Switzerland, specialized in model-based engineering. As a trainer, Markus ran the first public courses on UML in Switzerland back in early 1997, and as a consultant he has helped many large projects introduce and apply model-based techniques. As an active member of the Object Management Group (OMG), Markus is involved in the development of various modeling languages such as the Business Motivation Model (BMM), the Semantics of Business Vocabulary and Business Rules (SBVR), and the UML Testing Profile (UTP). He is co-author of three books on business rules, SysML, and operational risk, as well as a frequent presenter at international conferences.