Dependability Assurance  



As CONNECT aims at enabling open networking of systems, it is of paramount importance to ensure that this is realised in a dependable way. Dependability assurance research is conducted on a broad scope, including classical dependability attributes, performance, security and trust: we coin the term CONNECTability to cover the CONNECT-relevant properties (see figure below). Hence our research targets the development of new concepts, metrics and approaches for guaranteeing and assessing the CONNECTability of the eternally CONNECTed systems, in spite of changes and of accidental faults or intentional attacks.
While CONNECT is of course not immune to other sources and kinds of failures, we focus our work on understanding the potential impact of the communication established through the CONNECT approach on system dependability, performance, security and trust. Thus, building on the existing literature and on related projects addressing general dependability needs, we concentrate on the threats specifically entailed by the on-the-fly synthesis of CONNECTors: e.g., is the CONNECTor reliable? Is the CONNECTion secure? Is the CONNECTed System trustworthy? And so on.

CONNECT Properties and CONNECTability Metrics

Suitable properties for CONNECTed systems must be defined in a rigorous, formalized way in order to be used as a reference for CONNECTability analysis and monitoring. Elaborating on traditional, well-understood dependability metrics, a conceptual model was first developed as a structured framework, which refines generic metrics into CONNECT-dependent and context-dependent metrics. These metrics apply to each of the four CONNECT actors: the Enabler (e.g., discovery, learning and synthesis), the CONNECTor, the Networked System, and the CONNECTed System.

[Figure: dependability metrics conceptual model]


The elements and relations within this conceptual model have then been formalized into the CONNECT Property Meta-Model (CPMM), which provides a comprehensive, machine-processable meta-model for all CONNECT-relevant non-functional properties and metrics. Following a fully model-driven, transformational approach, CPMM provides a flexible notation that keeps the metric definition separate from the specific system a property refers to. This follows from the observation that a metric definition is application-independent and can be re-used for different systems. In other words, metrics can be defined once, stored in a repository, and retrieved from it when needed. Thus, using CPMM, CONNECT users wishing to specify a non-functional property need to (i) retrieve the required metrics, or define new ones, (ii) specify the event the metrics apply to, and finally (iii) specify the property to be analysed/monitored by binding the metrics to the event. These property models are then automatically translated, by model transformation, into the expected input format of the CONNECT Enablers. For example, for properties to be observed at run-time, an implemented Model2Code Transformer translates CPMM properties into Drools Fusion rules for the GLIMPSE monitor.
CPMM defines elements and types to specify prescriptive (required) and descriptive (owned) quantitative and qualitative properties that CONNECT actors must provide or may expose, respectively. The properties specified with this language might refer to the CONNECTed or the Networked System; they can serve as reference model requirements for CONNECTor synthesis or as events to be observed by the monitoring Enabler during CONNECTor execution. In addition, the specified properties can also describe the characteristics owned by the Enablers. The key concepts of this meta-model are: Property, MetricsTemplate, Metrics, EventSet, and EventType. We separate the property definition from the application domain and its specific ontology. The ontology is linked to the Property meta-model via the EventType entity, which models a generic event type in which the terms of the application-domain ontology are used.
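As an illustration of how these key concepts fit together, the following is a minimal Python sketch (not the actual Ecore meta-model; all class names mirror CPMM concepts, but fields, ontology terms and values are purely illustrative) of how a metric defined once can be bound to different application events:

```python
# Illustrative sketch of the CPMM key concepts: the metric definition is
# application-independent and is bound to domain events only when a
# property is specified. Field names and ontology terms are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class EventType:          # links the property model to the domain ontology
    ontology_term: str

@dataclass(frozen=True)
class EventSet:
    events: tuple         # tuple of EventType

@dataclass(frozen=True)
class MetricsTemplate:    # generic, reusable definition (e.g. "average latency")
    name: str
    unit: str

@dataclass(frozen=True)
class Metrics:            # a template instantiated with concrete parameters
    template: MetricsTemplate
    window_seconds: int

@dataclass(frozen=True)
class Property:           # binds the metric to the events it applies to
    kind: str             # "prescriptive" (required) or "descriptive" (owned)
    metrics: Metrics
    scope: EventSet

# The metric is defined once, stored, and reused for two different systems.
latency = Metrics(MetricsTemplate("average latency", "ms"), window_seconds=60)
p1 = Property("prescriptive", latency, EventSet((EventType("photo:upload"),)))
p2 = Property("prescriptive", latency, EventSet((EventType("taxi:book"),)))
assert p1.metrics is p2.metrics   # same reusable metric, different events
```

In a real deployment the equivalent of `Property` instances would be what the Model2Code Transformer turns into monitor rules.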
CPMM is implemented as an Ecore model in the Eclipse Modeling Framework. Moreover, an associated editor, released as an Eclipse plugin, captures the information of the defined models and allows users to create new model instances of the Property, Metrics, MetricsTemplate, EventType and EventSet meta-models. An excerpt of CPMM is provided below.


[Figure: excerpt of CPMM]


Verification & Validation

Concerning verification and validation, we apply two complementary approaches: state-based stochastic methods, supported by Mobius, and probabilistic model checking, supported by PRISM. Using both approaches, a variety of scenarios and user/application needs in terms of dependability and performance analysis can be satisfied. The different formalisms and tools employed by the two methods make it possible: i) on the one side, to complement the analysis with respect to aspects such as level of abstraction, scalability and accuracy, with which the two approaches cope differently; and ii) on the other side, through their inner diversity, to cross-validate results, enhancing confidence in the correctness of the analysis itself. Our activity on dependability and performance assessment is closely related to, and complemented by, a verification framework for quantitative compositional reasoning, which is under investigation as part of our work on formal foundations for CONNECTors.
We address automated analysis, performed by the Dependability and Performance (DePer) Enabler, to assess before deployment whether the dependability and performance requirements requested by the Networked Systems can be satisfied by the synthesised CONNECTor, and at run-time whether the functioning CONNECTed System continues to satisfy them. The architecture of DePer is logically split into six main functional modules: Builder, Analyser, Evaluator, Enhancer (for the state-based stochastic analysis engine), Repairer (for the stochastic model checking analysis engine) and Updater.


[Figure: DePer architecture]


  • The Builder module derives the dependability/performance model of the CONNECTed System from the specification provided by Synthesis.
  • The Analyser module uses the model generated by Builder to perform a quantitative assessment of the non-functional requirements reported by the Discovery Enabler.
  • The Evaluator module checks the analysis results to determine if the non-functional requirements are met:
    •  If the requirements are satisfied, Evaluator reports to Synthesis that the CONNECTor can be successfully deployed, and reports to the Updater the aspects that must be observed for the CONNECTor about to be deployed, e.g., transition durations and probabilities of transition failure.
    • If the requirements are not satisfied, Evaluator activates the Enhancer and/or Repairer module(s) to determine possible solutions to improve the dependability/performance level of the CONNECTed System.
  • The Updater module interacts with the Monitor Enabler to refine the accuracy of model parameters through run-time observations, obtained through a continuous monitoring activity. Therefore, it provides adaptation of the pre-deployment analysis to cope with changes in, or inaccurate estimates of, model parameters.
  • The Enhancer module is in charge of assessing whether the CONNECTor can be improved through dependability mechanisms. It performs a form of pre-deployment adaptation to try to overcome deficiencies of the CONNECTor specification revealed by the analysis. To accomplish its task, Enhancer is equipped with models representing basic dependability mechanisms suited to counteract two typical classes of failure modes that may occur during interactions: timing failures, in which networked systems send messages at time instants that do not match an agreed schedule, and value failures, in which networked systems send messages containing incorrect information items.
  • The Repairer module attempts to compute proper values for the CONNECTor parameters such that the new CONNECTor synthesised using those values satisfies the non-functional properties. Repairer adopts a variant of the Monte-Carlo sampling method, testing a set of samples until it finds a good one.
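The pre-deployment interplay of these modules can be pictured as a check-and-improve loop. The following is a hypothetical Python sketch; the function signatures, the bound on enhancement rounds, and the requirement names are illustrative assumptions, not DePer's actual interfaces:

```python
# Hypothetical sketch of DePer's pre-deployment decision loop.
def deper_predeployment_check(connector_spec, requirements,
                              build, analyse, enhance):
    """Return (deployable, spec) after at most a few enhancement rounds."""
    spec = connector_spec
    for _ in range(3):                      # bounded enhancement attempts (assumption)
        model = build(spec)                 # Builder: spec -> stochastic model
        results = analyse(model)            # Analyser: quantitative assessment
        # Evaluator: check every non-functional requirement against results
        if all(results[k] >= v for k, v in requirements.items()):
            return True, spec               # report success to Synthesis
        spec = enhance(spec, results)       # Enhancer/Repairer: try to improve
    return False, spec

# Toy usage: a connector whose reliability improves by 0.05 per enhancement.
ok, _ = deper_predeployment_check(
    {"reliability": 0.90}, {"reliability": 0.95},
    build=lambda s: s,
    analyse=lambda m: {"reliability": m["reliability"]},
    enhance=lambda s, r: {"reliability": s["reliability"] + 0.05})
assert ok
```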

We are also working on an on-line quantitative verification approach dedicated to the case where verification needs to be performed repeatedly with differing probability values, e.g., provided by the Monitoring Enabler. Such runtime analysis can incur significant time and memory overheads. Our approach aims to improve performance by reusing results from previous verifications to obtain fast, accurate results: the results obtained in each round of verification are stored to avoid re-computation for the parts of the model that are not affected by probability changes. We also improve the efficiency of the probabilistic verification adopted in the approach, which typically requires a combination of graph-based analysis techniques and iterative numerical solution methods.
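The reuse idea can be illustrated on a toy discrete-time Markov chain. This is a simplified Python sketch, not the actual PRISM-based implementation: the solution vector from the previous round warm-starts the iterative solver after a probability change, instead of restarting from zero.

```python
# Sketch of result reuse in repeated probabilistic verification: compute
# reachability probabilities iteratively, reusing the previous solution
# as the starting vector when transition probabilities change.
def reach_prob(trans, target, x0=None, eps=1e-9, max_iter=100000):
    """Probability of reaching `target` from each state of a DTMC.
    trans: {state: {successor: prob}};  x0: previous solution to reuse."""
    states = list(trans)
    x = dict(x0) if x0 else {s: 0.0 for s in states}
    x[target] = 1.0
    for it in range(max_iter):
        delta = 0.0
        for s in states:
            if s == target:
                continue
            new = sum(p * x[t] for t, p in trans[s].items())
            delta = max(delta, abs(new - x[s]))
            x[s] = new
        if delta < eps:
            return x, it + 1
    return x, max_iter

chain = {0: {1: 0.5, 2: 0.5}, 1: {1: 1.0}, 2: {2: 1.0}}
sol, n1 = reach_prob(chain, target=2)
chain[0] = {1: 0.45, 2: 0.55}                # monitoring reports a small change
_, n2 = reach_prob(chain, target=2, x0=sol)  # warm start from previous solution
assert n2 <= n1                              # reuse never costs extra iterations here
```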


Security & Privacy

Security mechanisms are studied and developed for guaranteeing security requirements while two Networked Systems communicate through a CONNECTor. In CONNECT, the Security-by-Contract-with-Trust (SxCxT) paradigm has been conceived as a unified framework for managing both security and trust in an integrated way. SxCxT is a very general and expressive framework, including trust assessment, contract-driven deployment, contract monitoring or policy enforcement, and trust feedback inference.
The SxCxT Workflow, illustrated in the figure below, includes:

  • Step 1-Trust Assessment: The trust module decides whether it trusts the execution of the CONNECTor to satisfy its contract (to account for different trust models, we apply trust model composition, see below);
  • Step 2-Contract-Driven Deployment: According to the trust measure, the security module decides whether to just monitor the contract (Scenario MC, where MC stands for Monitor the Contract) or to both enforce the policy and monitor the contract (Scenario EPMC, where EPMC stands for Enforce the Policy and Monitor the Contract).
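The deployment decision of Step 2 reduces to a simple branch on the trust measure. The sketch below is purely illustrative: the numeric threshold, its value and the comparison are assumptions, not part of the SxCxT specification:

```python
# Illustrative SxCxT deployment decision: a high trust measure leads to
# contract monitoring only (MC); a low one to policy enforcement plus
# contract monitoring (EPMC). The threshold is a hypothetical parameter.
def sxcxt_deployment(trust_measure, threshold=0.8):
    return "MC" if trust_measure >= threshold else "EPMC"

assert sxcxt_deployment(0.9) == "MC"    # trusted: monitor the contract only
assert sxcxt_deployment(0.3) == "EPMC"  # untrusted: also enforce the policy
```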

We assume that each Networked System comes with a specified Policy, describing which messages it can accept or, conversely, which messages would not be considered secure. Within the CONNECT architecture, a Security Enabler takes as input the automaton describing the CONNECTor and the policy obtained from the Negotiation phase, and checks whether the contract satisfies the policy; if it does not, we follow an instrumentation approach: the concrete CONNECTor is instrumented, i.e., some security probes are added, by substituting each call to an NS method with a call to a method with a similar interface that first checks whether the security policy is satisfied. Any detected violation of the security policy results in a security exception being raised: the new instrumented CONNECTor is thus functionally equivalent to the original one, except that it sends a Security Exception to the Monitoring Enabler when the policy is violated.
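The instrumentation approach can be sketched as a policy-checking wrapper. This is an illustrative Python sketch, not the actual CONNECT instrumentation machinery; the method, policy predicate and exception names are hypothetical:

```python
# Sketch of CONNECTor instrumentation: each NS method call is replaced by
# a wrapper with the same interface that first checks the security policy.
class PolicyViolation(Exception):
    """Raised by the instrumented CONNECTor; in CONNECT, such violations
    would be reported to the Monitoring Enabler."""

def instrument(ns_method, policy_allows):
    def checked(message):
        if not policy_allows(message):   # security probe added by instrumentation
            raise PolicyViolation(f"policy violated by {message!r}")
        return ns_method(message)        # functionally equivalent otherwise
    return checked

# Toy NS method and a policy forbidding plaintext credentials in messages.
send = instrument(lambda msg: f"delivered:{msg}",
                  policy_allows=lambda msg: "password=" not in msg)
assert send("hello") == "delivered:hello"

violated = False
try:
    send("password=1234")                # insecure message is intercepted
except PolicyViolation:
    violated = True
assert violated
```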


Trust Management

The proposed CONNECT trust model, depicted in the figure below, highlights three categories of trust relations, namely: relations for assessing CONNECTors, relations for assessing Enablers, and relations for assessing Networked Systems. In the proposed model, Enablers can estimate a measure of confidence in a CONNECTor. Enablers can also safely coordinate with one another to jointly synthesise and deploy CONNECTors and, finally, to manage feedback so as to detect dysfunctions and update trust relations.


[Figure: CONNECT trust model]

Currently, we assume that CONNECTors and Enablers are trusted, and we focus on interoperable trust management among heterogeneous Networked Systems through the CONNECTors Trust Enabler. More precisely, we introduce a trust meta-model in the form of TMDL (Trust Model Description Language), which allows any trust model to be described. Given the TMDL specifications of the trust models implemented by Networked Systems, we are able to infer composite trust models that enable them to interact in a trusted way.

The aim of composing two different trust models (TMX and TMY) is to provide participants of a given model (which we call the target model, TMT) with the ability to interact (i.e., create, assess and retrieve relationships) with participants that come from another model (which we call the source model, TMS). We have identified and implemented the following two composition processes:

  • Merging Composition:  A new trust model is created from merging the two given ones. The composition is permanent and leads to an extension of the behavior of the trust models’ roles according to some given mapping rules.
  • Mediation Composition: A cooperation is created that may hold for a limited period of time. The objective of this process is not to modify the behavior of existing roles, but rather to create a set of mediation roles capable of bridging the two given models.
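The mediation idea can be sketched in a few lines. The example below is an illustrative Python sketch under simple assumptions, not TMDL or the iMTrust implementation: the two trust models are left untouched, and a mediation role merely translates trust values between their scales according to given mapping rules (here, a hypothetical continuous score in [0,1] for TMX versus discrete levels 1..5 for TMY):

```python
# Sketch of mediation composition: a bridge role translating trust values
# between a source and a target trust model via given mapping rules.
def make_mediator(to_target, from_target):
    """Build a mediation role bridging two trust models."""
    return {"export": to_target, "import": from_target}

# Hypothetical mapping rules: TMX uses scores in [0,1]; TMY uses levels 1..5.
x_to_y = make_mediator(
    to_target=lambda score: min(5, max(1, round(score * 4) + 1)),
    from_target=lambda level: (level - 1) / 4)

assert x_to_y["export"](0.75) == 4   # a TMX score as seen from TMY
assert x_to_y["import"](5) == 1.0    # a TMY level as seen from TMX
```

Merging composition would instead rewrite the roles of both models into one new model; mediation keeps them intact, which is why it suits temporary cooperations.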

The supporting iMTrust framework, which will serve as a building block for creating the CONNECTors Trust Enabler, makes it possible to generate, emulate and compose trust management systems. The tools within this framework (i) guide developers in checking and creating a valid TMDL description; (ii) automatically generate, from such a description, the Java code of the corresponding trust management system; and (iii) enable the composition of given trust management systems according to given mapping rules.



Monitoring

A generic lightweight architecture for monitoring, called GLIMPSE (Generic fLexIble Monitoring based on a Publish-Subscribe infrastructure), has been developed.


Monitoring is conceived as a common core service offered to the other Enablers to detect conditions that they deem relevant, in order to implement feedback loops whereby approaches to dependability analysis, CONNECTor synthesis (shown in the picture below), and behaviour learning can be applied in an on-line setting and enhanced to cope with change and dynamism.
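The publish-subscribe principle behind this service can be sketched minimally. This is an illustrative Python sketch, not GLIMPSE's actual API: probes publish events on a bus, and Enablers subscribe with a predicate describing the conditions they deem relevant:

```python
# Minimal publish-subscribe sketch of the monitoring idea: subscribers
# register a predicate and a callback; matching events trigger the callback.
class Bus:
    def __init__(self):
        self.subs = []

    def subscribe(self, predicate, callback):
        self.subs.append((predicate, callback))

    def publish(self, event):
        for predicate, callback in self.subs:
            if predicate(event):
                callback(event)

bus = Bus()
alerts = []
# E.g. an Enabler asks to be notified of slow CONNECTor transitions
# (the event fields and the 100 ms threshold are hypothetical).
bus.subscribe(lambda e: e["latency_ms"] > 100, alerts.append)
bus.publish({"op": "translate", "latency_ms": 250})
bus.publish({"op": "translate", "latency_ms": 20})
assert alerts == [{"op": "translate", "latency_ms": 250}]
```

In GLIMPSE the subscription conditions are not Python predicates but Drools Fusion rules generated from CPMM properties, as described above.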


Selected Publications

  • A. Di Marco, C. Pompilio, A. Bertolino, A. Calabrò, F. Lonetti, and A. Sabetta, “Yet Another Meta-Model to Specify Non-Functional Properties”, Proc. QASBA 2011, Lugano, Switzerland, 14/09/2011, 9-16.
  • P. Masci, N. Nostro and F. Di Giandomenico, “On Enabling Dependability Assurance In Heterogeneous Networks Through Automated Model-Based Analysis”, SERENE 2011, Sept 29th-30th, Geneva, Switzerland (2011).
  • M. Kwiatkowska, D. Parker and H. Qu, “Incremental Quantitative Verification for Markov Decision Processes”, Proc. IEEE/IFIP International Conference on Dependable Systems and Networks (DSN-PDS'11), pages 359-370, IEEE CS Press.
  • A. Bertolino, A. Calabrò, F. Di Giandomenico, and N. Nostro, “Dependability and Performance Assessment of Dynamic CONNECTed Systems”, SFM 2011, Formal Methods for Eternal Networked Software Systems, LNCS 6659/2011, 350-392.
  • G. Costa, V. Issarny, F. Martinelli, I. Matteucci, and R. Saadi, “Security and Trust”, SFM 2011, Formal Methods for Eternal Networked Software Systems, LNCS 6659/2011.
  • R. Saadi, M. Rahaman, V. Issarny, and A. Toninelli, “Composing Trust Models towards Interoperable Trust Management”, in I. Wakeman, E. Gudes, C. Jensen et al. (editors), Trust Management V, Springer (2011).
  • A. Bertolino, A. Calabrò, F. Lonetti and A. Sabetta, “GLIMPSE: A Generic and Flexible Monitoring Infrastructure”, EWDC 2011, Pisa, May 11-12th, ACM (2011).

Further Information

More information about the CONNECT work on dependability assurance can be found on the Publications page.
The software tools described above can be downloaded from the Software page.


The CONNECT project acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the ICT theme of the Seventh Framework Programme for Research of the European Commission.