Highlights:
• We propose a component-based approach for embedded real-time software systems.
• The approach meets requirements from the space, railway and telecom domains.
• The approach enforces separation of concerns throughout the development process.
• The approach supports model-based analysis and code generation.
• The approach was assessed in four case studies in two parallel research projects.
A component-based process with separation of concerns for the development of embedded real-time software systems
Marco Panunzio1, Tullio Vardanega1
Department of Mathematics, University of Padova, via Trieste 63, 35121 Padova, Italy
Abstract
Numerous component models have been proposed in the literature, a testimony of a subject domain rich with technical and scientific challenges, and considerable potential. Unfortunately, however, the reported level of adoption has been comparatively low. Where successes were had, they were largely facilitated by the manifest endorsement, if not the mandate, of relevant stakeholders, either internal to the industrial adopter or with authority over the application domain. The work presented in this paper stems from a comprehensive initiative taken by the European Space Agency (ESA) and its industrial suppliers. This initiative also enjoyed significant synergy with interests shown for similar goals by the telecommunications and railway domains, thanks to the interaction between two parallel project frameworks. The ESA effort aimed at favouring the adoption of a software reference architecture across its software supply chain. That strategy revolves around a component model and the software development process that builds on it. This paper presents the rationale, the design and implementation choices made in their conception, as well as the feedback obtained from a number of industrial case studies that assessed them.
Keywords: Embedded real-time systems, component model, non-functional properties, separation of concerns
1. Introduction
Non-functional concerns such as time and space predictability, dependability, safety, and more recently security, have an increasingly large incidence on system development in high-integrity application domains such as avionics and space, railways, telecom and, prospectively, automotive. Several of those needs are addressed by software.
This trait places stringent requirements at process and product level, matched by onerous verification and validation (V&V) needs. Industry therefore seeks ways to contain the cost of development while strengthening the guarantees on the result.
Email addresses: panunzio@math.unipd.it (Marco Panunzio), tullio.vardanega@math.unipd.it (Tullio Vardanega)
Among the various solutions proposed to that end, the adoption of Model-Driven Engineering (MDE) [1] has fared rather well by measure of interest and success. Evidence collected in domain-specific initiatives (cf. e.g., [2, 3, 4]) shows that the higher level of abstraction in the design process facilitated by MDE allows addressing non-functional concerns earlier in the development, thereby enabling proactive analysis, maturation and consolidation of the software design. Moreover, the automation capabilities of the MDE infrastructure may ease the generation of lower-level design artefacts and enable the automated generation of source code products of certain quality.
In the space arena specifically, experience gained in the ASSERT1 project persuaded the European Space Agency (ESA) and its main system and software suppliers that for the adoption of MDE methods to produce tangible benefits, a software reference architecture common to all development stakeholders should be established first.
Reference [5] defines an architecture as composed of: (a) the fundamental organization of a system embodied in its components; (b) their relationships to each other, and to the environment; and (c) the principles guiding its design and evolution. On that basis, reference [6] regards the concept of software reference architecture as proceeding from: (i) a component model, to design the software as a composition of individually verifiable and reusable software units; (ii) a computational model, to relate the design entities of the component model and their non-functional needs for concurrency, time and space to a framework of analysis techniques which assures that the architectural description is statically analysable in the dimensions of interest by construction; (iii) a programming model, to ensure that the implementation of the design entities obeys the semantics, the assumptions and the constraints of the computational model; (iv) a conforming execution platform, which actively preserves at run time the system and software properties asserted by static analysis and is able to notify and react to possible violations of them.
ESA and their industrial partners decided to explore how well that concept could serve as a basis for their MDE-adoption initiative, and saw their effort complemented by the parallel launch of the CHESS project2. Those two efforts successfully collaborated in the definition of a component-oriented design process for the model-driven development of high-integrity software for space, telecom and railway systems.
That joint initiative proved the component model (initially captured in [7]) to be an essential facilitator to the industrial adoption of the proposed approach. It also showed the need for the component model definition and implementation to be enriched with support for: (1) specification and model-based analysis of non-functional requirements; (2) separation between functional and non-functional concerns, achieved by the enactment of design views (specializing the definitions of ISO 42010 [5]) and careful allocation of concerns to software entities; (3) selective inclusion of domain-specific concerns, whether functional or non-functional, to address special industrial needs.
1 ASSERT: Automated proof-based System and Software Engineering for Real-Time systems. FP6 IST-004033, 02/2004-01/2008, http://www.assert-project.net
2 CHESS: "Composition with Guarantees for High-integrity Embedded Software Components Assembly", ARTEMIS JU grant nr. 216682, 02/2009-04/2012, http://www.chess-project.org/
Several component models have been proposed in the past, with varied interests - from pure research to specific applications - and equally varied success. The one that has emerged from the cited initiative draws prominence from the large collaborative effort that promoted it, merit from extensive evaluation from the perspective of diverse industrial domains, and benefit from a wealth of use experience. Those assets make it an interesting case to consider, not so much for originality per se, but rather for its ability to capture crucial priorities of industrial development of high-integrity systems.
This paper recalls the founding principles and motivations of the proposed component model (section 2), presents its essential traits, illustrates the development process that is centered on it (section 3), and reports on the four industrial case studies (section 4) that were carried out on two distinct implementations of it. Section 5 draws some conclusions and outlines future work.
2. Background
In this section we recall the founding principles behind the proposed component model and its associated design process, and then we relate them to the state of the art.
2.1. Founding principles of choice
Correctness by Construction. In his 1972 ACM Turing lecture [8], E.W. Dijkstra advocated a constructive approach to program correctness where program construction should follow - instead of precede - the construction of a solid proof of correctness.
Two decades later the Correctness by Construction (C-by-C) manifesto [9] promoted a software production method fostering the early detection and removal of development errors for safer, cheaper and more reliable software. The C-by-C best practice included: (1) the use of formal and precise tools and notations for the development and the verification of any product item, whether document or code, to allow constructive reasoning on their correctness; (2) the effort to say things only once so as to avoid contradictions and repetitions; (3) the effort to design software that is easy to verify, by e.g., using safer language subsets or appropriate coding styles and design patterns.
The original C-by-C activities reflected a source-centric development mindset. In the work presented in this paper the C-by-C practices are cast to a component-oriented approach based on MDE toward: (i) the design of components, hence the organization and provisions of the MDE design environment and the user design language; (ii) the provision of verification and analysis capabilities of the design environment to sanction the well-formedness and goodness of fit of the design products; (iii) the production of lower-level artefacts from the design model. We envision software production to be as fully automated as possible; ideally, with full automation of every implementation and documentation activity proceeding from the design model.
To this end, we restrict the expressive power of the user up front by propagating to the design space the constraints that emanate from the proposed development approach and by enforcing them actively, so that the resulting model is correct by construction in the dimensions of interest.
Separation of concerns. A long-known but much neglected practice first advocated by Dijkstra in [10], separation of concerns strives to separate different aspects of software design and implementation to enable separate reasoning and focused specification for each of them. We apply that notion to our component model, by making the following distinctive choices:
1. components comprise functional (sequential) code only: the non-functional needs with bearing on run-time behaviour (such as e.g., tasking, synchronization, timing) are dealt with outside of the component by the component infrastructure, expressed in terms of containers, connectors and their runtime support;
2. the non-functional requirements that the user wishes to set on components are declaratively specified by decoration of component interfaces with a specific annotation language, which currently addresses concurrency, synchronization, time and memory, and, to a lesser extent via the work of other authors, fault tolerance and safety concerns; non-functional requirements that specialised model-based analysis ascertains can be met by the component infrastructure on the execution platform of choice are elevated to non-functional properties;
3. using predefined and separately compilable code templates, a code generator that operates in the back-end of the component model builds all of the component infrastructure that embeds the user components, their assemblies, and the component services that help satisfy the non-functional properties asserted by analysis.
The extent of separation of concerns that ensues from these choices has two principal benefits: (1) it increases the reuse potential of the software by enabling one and the same functional specification (corresponding to one or more components) to be reused under different non-functional requirements (corresponding to instantiations of the component infrastructure); (2) it facilitates the automated generation of vast amounts of complex and delicate infrastructural code addressing non-functional concerns with bearing on run-time behaviour - limited in this paper to concurrency, real-time, communication and component interfacing needs - in accord with well-defined styles and fully deterministic rules, with obvious benefits in terms of important life-cycle properties including readability, traceability and maintainability.
Experience shows that benefit (1) is much more difficult to achieve than it may seem, as it requires substantial effort to arrive at common and stable specifications, effective component breakdown, clean interface design and consolidation for the functional part of the system. Conversely, benefit (2) becomes available much sooner and at a fraction of the cost of (1), with immediate and tangible benefits. This is a crucial point to our whole concept.
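A minimal sketch of this separation, in C++ and with hypothetical names (it reflects neither the concrete syntax of our design language nor the generated code): the functional part of a component is a purely sequential class, while its non-functional needs live in a separate declarative descriptor that is interpreted by model-based analysis and by the generators of containers and connectors, never by the component itself.

#include <cstdint>

// Purely functional component: sequential code only, no tasking,
// no synchronization, no timing constructs.
class GyroFilter {
public:
    // Functional service exposed through the provided interface.
    void step() { /* filtering algorithm, strictly sequential */ }
private:
    double state_[3] = {0.0, 0.0, 0.0};   // state confined to the component
};

// Declarative descriptor of the non-functional needs set on the provided
// service; it is consumed by analysis and by the container/connector
// generators, never by the component.
struct DeferredOpDescriptor {
    enum class Kind { Periodic, Sporadic, Bursty } kind;
    std::uint32_t period_or_miat_ms;   // period (periodic) or MIAT (sporadic/bursty)
    std::uint32_t deadline_ms;         // relative deadline
    std::uint32_t wcet_estimate_us;    // estimate, later refined by timing analysis
};

// Example decoration for GyroFilter::step: periodic at 8 Hz (125 ms).
constexpr DeferredOpDescriptor gyro_step_decoration{
    DeferredOpDescriptor::Kind::Periodic, 125, 125, 1400};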
Composition. Our definition of software reference architecture builds on the premise that general-purpose programming languages are not and cannot be component models themselves. In addition to missing some fundamental capabilities - most notably multi-concern interface semantics for the specification of provided and required interfaces - programming languages operate at a level of abstraction that is lower (i.e., more implementation specific) than that proper of component models. Conversely, restricted and specialized profiles of apt programming languages may help develop specific parts of the correct-by-construction code artefacts that implement component-based systems.
Our approach aims at achieving the properties of composability and compositionality. When composability and compositionality can be assured by static analysis, guaranteed throughout implementation, and actively preserved at run time, we may speak of composition with guarantees [11], which is our grander goal here.
In accord with [12], we maintain that composability is achieved when the designated properties of individual components, captured in terms of needs and obligations, are preserved on component composition, deployment on target and execution. Our components operate in the functional space only. They therefore express strictly sequential semantics. The signature of their methods determines how the invocation occurs functionally. Components are stateful and their state is comprised within the component. From the interaction perspective, components are black boxes that only expose provided and required interfaces. These provisions help warrant functional composability, void of non-functional semantics, hence without concurrency, interleaving, synchronisation (among other concerns with bearing on run-time behaviour). Functional composability, a narrower view of composability tout-court, warrants that the properties held by the sequential execution of the functional interfaces provided by individual components verified in isolation are preserved, in the functional dimension, when components are composed by the binding of their matching required and provided interfaces. Yet there is more to composability than just functional concerns.
We address the non-functional dimension, again limited to concerns with bearing on run-time behaviour, in two steps. The specification of non-functional behavior is super-imposed on component interfaces in a manner that preserves their original functional semantics and enriches it with non-functional semantics separately realized by the container that encapsulates the component. Indeed, it is the interface provider (as opposed to the caller) that determines the semantics of the invocation, including for the effects that the execution has on the component state. This prescription is crucial to ensuring that interface decoration adds to the functional semantics expressed by the component itself, instead of possibly conflicting with it. Interface decoration is a conveyor of semantic enrichment which takes effect in safeguarding the non-functional behavior of components and their functional binding to one another. The syntax used for interface decoration is comparatively arbitrary. Yet the semantics that decorations capture must match the execution semantics stipulated by the computational model to which the component model is attached.
The computational models that fit our needs (1) help extend composability to the non-functional dimensions of interest, with concurrency and real-time especially considered in this paper, and (2) make it possible to take a compositional view of how execution occurs at system level. In accord with [12], we regard compositionality to be achieved when the properties of the system as a whole can be determined as a function of the properties of the constituting components. The binding of a computational model to the component model allows the execution semantics of components with added non-functional descriptors to be fully understood in the face of concurrency, interleaving, contention, synchronisation (and of any other dimension covered by the computational model).
From the real-time perspective, example properties that may be attached to the provided interfaces of a component include worst-case execution time (WCET), period and deadline. Looking at each of these three properties helps appreciate what composability and compositionality signify in this context. The WCET is a local property of the program (that is, the service attached to the interface in question): composability in the time dimension [13] is achieved if the interfering effect caused by the presence of other components in the system does not prevent a safe and tight WCET bound to be determined for every single interface service. Period requirements are composable so long as the execution platform can sustain them without incurring unacceptable jitter. This property can be asserted statically by analysis of features of the execution platform. For a scheduling algorithm that allows interleaving - which is what we assume in this work - the satisfaction of deadline requirements is a property of execution that can only be asserted compositionally. For the scheduling algorithm adopted by the execution platform of choice, and reflected in the concurrency and real-time semantics of the decoration attributes that emanate from the corresponding computational model, schedulability analysis compositionally determines how the completion time of individual services, as affected by the interference caused by job-level interleaving, relates to the applicable deadline. This analysis is intrinsically compositional as it uses a decomposition of the system that allows singling out the local properties of interest and uses them to determine the effect that they will have globally.
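For fixed-priority scheduling, a standard way to make this compositional argument concrete (a textbook formulation in the style of [30], under its usual assumptions, rather than the richer framework actually used in this work) is the recurrence that bounds the completion time R_i of the service bound to task i:

R_i^{(k+1)} = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j, \qquad R_i^{(0)} = C_i

where C_i is the local WCET of the service, B_i a bound on the blocking incurred from protected operations, and T_j the period (or MIAT) of each higher-priority task j in hp(i). The iteration stops at a fixed point, and satisfaction of the deadline requirement then reduces to checking R_i \le D_i for every service.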
By its very nature, the computational model considers entities that belong to the implementation level (e.g., tasks, protected objects, semaphores). In the design of a component model therefore, and especially for its use in an MDE process, a higher-level representation of those entities must be provided that: (i) does not pollute the user model with entities that pertain to a lower level of abstraction; (ii) meaningfully represents those entities and their semantics; (iii) ensures that it is always possible to correctly transform the information set by the designer in the higher-level representation into entities recognized by the computational model. In our approach, needs (i) and (ii) are addressed by concentrating the representation of the required semantics in interface decoration attributes. Need (iii) instead is addressed as part of the generation of correct-by-construction code artefacts.
2.2. Software entities
The real-time architecture of our component model features three distinct software entities: the component, the container and the connector.
Components and connectors are present in most component-oriented approaches: a wealth of literature discusses their various possible flavours (see for example [14, 15, 16]). Containers have a much lesser prominence in the literature, perhaps a token of the insufficient penetration of the concept of separation of concerns in component-based software engineering. They are used in approaches (like the one presented herein) which adopt an exogenous treatment of non-functional properties (i.e., outside of the component) [17]. Industrial-level examples of containers exist, for example OSGi containers and Enterprise JavaBeans (EJB) containers (although those are intended more as a run-time environment for components than as a per-component wrapper).
In our context, the component addresses exclusively functional and algorithmic concerns, the connector addresses interaction concerns, and the container is responsible for the realization of the non-functional concerns, with regard to concurrency (tasking and synchronization), real-time and reconfiguration aspects.
The component is the only entity that appears at design level. Containers and connectors pertain to the implementation level and get attached to components by way of fully automated transformations. This attachment allows components to interact among themselves and with the execution platform once deployed in the run-time environment.
Component. Ref. [18] defines a software component as "a software building block that conforms to a component model". The authors of the cited work maintain that: "a Component Model defines standards for (i) properties that individual components must satisfy and (ii) methods, and possibly mechanisms, for composing components".
To us, the component is the unit of composition throughout the system development process, from conception, reuse, refinement and aggregation, which all pertain to the modeling phase, to system building, which for us corresponds to producing the full sources of all component implementations, and of their connectors and containers, ready for compilation and linking. The software system is built as an assembly of components, deployed on an execution platform which takes care of their execution needs. The requirement of "independent deployment of components" entailed by the definition in [14] is currently not a core requirement for us (and neither is it in many other component-oriented approaches). This stance matches the practices in use in the target industrial domains of interest, which all require verified and validated static builds of the system and treat upgrades and reconfigurations outside of the development phase, in the long-lasting operation and maintenance phase.
A component provides a set of functional services and exposes them through a "provided interface"; the services needed from other components or the environment in general are declared in a "required interface". The component is assembled with other components so as to satisfy the functional needs of its required interfaces. Components can also use event-based communication with a publish-subscribe communication paradigm. Components can register to an "event service" in order to receive notifications of events emitted by other components.
Non-functional attribute descriptors are added to component interfaces to specify the non-functional properties desired for the execution of the corresponding services. Those attributes are taken from a fixed language of declarative specifications. The semantics of that declarative language emanates from the chosen computational model: the Ravenscar Computational Model (RCM) [19] in our case. That provision has three important consequences: (1) it fully informs the model transformations that automatically produce the containers and connectors which serve to realize the non-functional requirements set on the component interfaces and the binding among them; (2) it enables the execution of schedulability analysis directly on the model of components; and (3) it warrants full consistency between the specification of non-functional concerns and their realization in the implementation.
The left part of Figure 1 depicts two components with their contract interfaces and a component binding between them.
Container. The container (see Fig. 1) is a software entity that can be regarded as a wrapper around the component, which is directly responsible for the realization of the non-functional properties specified on the component that it embeds. In programming terms, the relation between the container and the component is the same as that determined by inversion of control3, the style of software construction where reusable code (the container) controls the execution of problem-specific code (the component).
The container exposes the same provided and required interfaces as the enclosed component, through interface "promotion" and "subsumption" relations from the services of the component to the equivalent services in the container. With interface promotion, the container is able to prefix the component's execution with what it takes to realise the non-functional semantics attached to the relevant component interface. With interface subsumption, the container is able to intercept the interface calls made by the component and transparently forward them to the container that wraps the target component. As a result of that provision, there is no direct communication between components, since all communication is mediated by the enclosing containers. The container also mediates the access of the component to the executive services it needs from the execution platform.
Binding between components, as we said already, is statically defined at model level (no dynamic loading of components is allowed), yet - in line with the inversion of control principle - it takes effect at software initialization time, when containers and their external connections are initialized.
The right part of Figure 1 depicts a container and its embedded component and also shows the promotion and subsumption of the corresponding interfaces.
Figure 1: Two components with their interfaces and component binding, and a container.
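The sketch below, in C++ purely for illustration (the generated containers target Ada or C and obey the Ravenscar restrictions; every name is hypothetical), conveys the intent of interface promotion and subsumption: the container exposes the same interface as its component, prefixes each call with the non-functional machinery - here, mutual exclusion for a protected operation - and hands the component a reference through which its required calls reach the container wrapping the target component.

#include <mutex>

// Functional interfaces, shared by components and containers.
struct Control_IF { virtual void step() = 0; virtual ~Control_IF() = default; };
struct Sensor_IF  { virtual double read() = 0; virtual ~Sensor_IF() = default; };

// The component: strictly sequential code; its required interface is an
// abstract reference, resolved at initialization time (inversion of control).
class Control_Component : public Control_IF {
public:
    explicit Control_Component(Sensor_IF& sensor) : sensor_(sensor) {}
    void step() override { last_reading_ = sensor_.read(); /* control law */ }
private:
    Sensor_IF& sensor_;
    double last_reading_ = 0.0;    // state confined to the component
};

// The container: promotes the provided interface of the component and adds
// the non-functional semantics declared on it (here, a protected operation
// realized with mutual exclusion). Subsumption of the required interface is
// obtained by passing the component a reference to whatever stands behind
// Sensor_IF: the container (or connector) that wraps the target component.
class Control_Container : public Control_IF {
public:
    explicit Control_Container(Sensor_IF& target) : component_(target) {}
    void step() override {                         // promoted provided service
        std::lock_guard<std::mutex> guard(lock_);  // protected semantics
        component_.step();                         // inversion of control
    }
private:
    std::mutex lock_;
    Control_Component component_;
};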
Connector. The connector [16] is the software entity responsible for the interaction between components, which actually is a mediated communication between containers. Connectors allow separating interaction concerns from functional concerns. We maintain that this separation is beneficial in that the user only needs to specify the interaction style and semantics to be established in the binding between components, without having to produce the code (or model) for it, thus with benefits in terms
3http://martinfowler.com/bliki/InversionOfControl.html
of correctness guarantees and assured performance. Components are consequently void of code that deals with interactions with other components.
A connector decouples the component from the other endpoints of the communication. In this way, the functional code of a component can be specified independently of: (1) the components it will be eventually bound to; (2) the cardinality of the communication; and (3) the location of the other parties. This is necessary as components are designed in isolation and their binding with other components is a later concern, or may vary in different reuse contexts.
The nature of our target systems reduces the variety of necessary connectors to a few basic kinds, which are required to perform function/procedure calls, remote message passing or data access (I/O operations on files in safeguard memory). This also means that we do not require an approach for the creation or composition of complex connectors [20]. More complex connector kinds become necessary when communication occurs between components implemented in different languages, when guarantees on remote communication are required, and when location and representation transparency is needed in more heterogeneous systems.
Figure 2 depicts a connector that regulates the interaction between two containers. The figure also shows that there can never be direct connection of a component with the execution platform, as the connection is always mediated by the container.
Figure 2: A connector that realizes the communication between two containers on behalf of their respective components. The figure also shows the underlying execution platform.
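For the simplest case - a local, synchronous call between two containers on the same node - a connector reduces to pure forwarding, as in the hedged sketch below (hypothetical names; remote variants would add marshalling and message passing behind the same interface):

// Functional interface required by the caller and provided by the callee.
struct Command_IF {
    virtual void execute(int parameter) = 0;
    virtual ~Command_IF() = default;
};

// Local synchronous-call connector: it exposes the interface that the
// calling container requires and forwards every operation to the provided
// interface of the target container. No functional code lives here.
class Command_LocalConnector : public Command_IF {
public:
    explicit Command_LocalConnector(Command_IF& provider_container)
        : provider_(provider_container) {}
    void execute(int parameter) override { provider_.execute(parameter); }
private:
    Command_IF& provider_;
};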
2.3. Execution platform
With the generic term "execution platform" we identify the middleware, the real-time operating system/kernel (RTOS/RTK), communication drivers and the board support package (BSP) for a given hardware platform. For the purposes of this paper, we consider the execution platform as a single monolithic block, and we just categorize the services it is to provide to our design and implementation entities.
The concerns addressed by a given platform service determine the software entity entitled to use it. We in fact classify platform services according to their user:
• services for containers: they are used by the containers to enforce or monitor non-functional properties. For example: tasking primitives, synchronization primitives, time-related primitives, timers.
• services for connectors: they are the implementation of communication means, constructs to transparently handle physical distribution across processing units, libraries for translation of data encoding.
• services for components: they are (infrastructural) services intended for use by the functional code of components; typifying examples of such services include: access to the system time for time-stamping data; context management and data recovery. To use those services, components do not access the execution platform directly: access to them is mediated by the corresponding container.
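One way to picture this classification, with hypothetical names (the concrete API depends on the RTOS, middleware and BSP of choice), is as three distinct service groups, each visible only to its entitled user entity:

#include <cstddef>
#include <cstdint>

namespace platform {
  // Used by containers only: tasking, synchronization and time primitives.
  struct ContainerServices {
    virtual void delay_until(std::uint64_t absolute_time_ns) = 0;
    virtual ~ContainerServices() = default;
  };
  // Used by connectors only: communication and distribution support.
  struct ConnectorServices {
    virtual void send(int channel, const void* data, std::size_t size) = 0;
    virtual ~ConnectorServices() = default;
  };
  // Offered to components, but always mediated by their container.
  struct ComponentServices {
    virtual std::uint64_t system_time_ns() = 0;  // e.g., time-stamping of data
    virtual ~ComponentServices() = default;
  };
}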
In contrast to components, whose implementation only includes sequential code and thus is independent of the execution platform (no direct calls to OS primitives or other execution platform services are allowed), the implementation of containers and connectors necessarily depends on the target platform, to which it allows components to statically bind. It is therefore necessary to create specific implementations of containers and connectors for each execution platform of interest.
2.4. Component Models
The primary purpose of the component model in the work presented in this paper is to favour reuse with guarantees, in the connotation given in section 2.1, while embracing a model-based development paradigm. In terms of the rich classification proposed in [17], our component model (i) addresses the modeling and implementation phases of the development life cycle, (ii) is independent of the programming language, (iii) provides constructs for interface specification, (iv) allows expressing a limited set of interaction patterns, (v) supports specification, composition and analysis of non-functional properties, and (vi) is special-purpose, as it is intended for high-integrity embedded real-time systems.
At the cost of some redundancy with the review provided in the cited work, which could be transposed in this paper as a reflection of the proposed multi-dimensional classification, in the following we relate our component model to relevant samples of state-of-the-art component models.
Ref. [21] focuses on the integration of components with heterogeneous interactions and execution paradigms. Our work instead aims at the integration of components implemented in different programming languages and targeting different execution platforms. That framework aims at correct-by-construction design by achieving component composability and compositionality. The work later evolved into the BIP framework (Behaviour, Interaction, Priority) [22]. Separation of concerns is central to that work - though with other goals than ours - as components are created as the superposition of three layers: (i) a lower behaviour layer; (ii) an intermediate layer that describes with a set of connectors the interactions between transitions of the behaviour; and (iii) an upper layer with a set of priority rules to determine the scheduling policies for interactions. The product of two components is the result of the separate composition of their layers.
The BIP framework provides atomic components, which are the basic building blocks of the system, and allows creating composite components, which are obtained by successive composition of their constituents. Notably, the authors describe the operational semantics of BIP, an infrastructure to generate C++ code from BIP systems, and an execution platform to run it using either a multi-threaded execution (each atomic component has its own thread of control) or a single-threaded execution (the execution engine is the only thread).
The two major contributions of BIP are the modeling of heterogeneous systems and the overall approach that reduces the gap between the analyzed system and the implementation. The BIP framework however does not make provisions for property preservation at run time, which is necessary to achieve composition with guarantees, contenting itself with semantic assurance at specification level.
SaveCCM [23] targets heavy vehicular systems. That component model supports both time-triggered and event-triggered activation events and its components are hierarchical. Components are exclusively passive units (hence they do not comprise threads); however, their description implicitly carries non-functional semantics: the choice of equipping a component with a trigger, data or trigger-data port implicitly dictates the concurrent semantics of the component. The ports of a component can be decorated with quality attributes to feed analysis in various non-functional dimensions and code generation. Automatic model transformations can turn the design model into a representation amenable to various forms of static analysis: timed automata with tasks or finite state process models, to perform model checking of properties such as absence of deadlock, or to perform reachability analysis.
The PROGRESS component model (ProCom) [24] extends SaveCCM to address high-level concerns typical of the early design stages of a large-scale distributed embedded system: high-level early analysis and deployment to processing units. The component model distinguishes two granularity levels for the software specification: (i) a higher level, where the system is modeled as a set of active, concurrent subsystems which communicate through message passing; and (ii) a lower level, for the specification of the internals of the subsystems. The two levels are addressed by two separate languages: ProSys for the former, and ProSAVE (an evolution of SaveCCM) for the latter. ProSys high-level components are deployed to virtual nodes, which are then allocated to the physical architecture. Virtual nodes are logical units for the specification of budgets with respect to CPU time and memory consumption. The approach supports early analysis on those dimensions, to assess the goodness of the overall design before proceeding with the implementation. As a notable difference with our approach, ProCom provides explicit connectors related to data flow (such as a "data muxer", a "data or", a "data demuxer") or control flow (such as fork and join). Clocks have to be explicitly specified in the design to provide the sources for periodic triggers.
The ROBOCOP component model [25] targets the consumer electronics domain. The authors started their endeavour by considering the aspects that require particular attention in the target domain: (i) upgradability, to extend the lifetime of devices by uploading improved versions of the software; (ii) extensibility, to add functionalities to the device; (iii) low resource consumption, particularly in footprint size, due to the limited hardware capabilities of devices; (iv) support for third-party components, which influences the strategy for the packaging of components.
A ROBOCOP component is a collection of related models which are used to trade components between parties. Those models are human-readable models (like documentation) or machine-oriented models (a simulation model, a resource model, an interface model, a security model, etc.). In each of those models it is possible to describe the attributes of the component relative to the dimension of interest.
The ROBOCOP component is thus much more similar to a package used to share components with various stakeholders, and differs from the unit of deployment of the approach (which would be an executable component). What is termed a component in component-model parlance is called a service in ROBOCOP, and is specified in IDL. A component is developed and published in a repository. At this stage, the component is still generic, as it is not bound to any target platform. Components are then tailored for execution on a specific target platform, loaded on the device, registered and instantiated for execution, and finally deployed, all via model representations.
Giotto [26] is a progenitor in a family of time-triggered languages and tools, which specialize in control processing, where deterministic execution timing is inbred in the domain culture [27]. Each Giotto component executes a specified number of times per period, as specified in its frequency attribute, which defaults to one. If a component has a frequency greater than one, its output arcs are updated more than once per iteration, but only the final update will be visible to components with frequency one, and only on the subsequent iteration. To strengthen its fitness for strictly periodic control processing, Giotto imposes the additional constraint that the frequencies of all components in a model bear harmonic relationships with one another. This requirement clearly emanates from the wish to ease the generation of the component execution schedule and back-propagates to the component model.
The time-triggered nature of the Giotto model of computation is the fundamental and radical difference to the component model presented in this paper, which instead attempts to keep the component model separate from the computational model and requires the latter to be bound to the former for a particular instantiation of the system.
The Ptolemy project [28] studies modeling, simulation, and design of concurrent, real-time, embedded systems realized as an assembly of concurrent components. The key principle underlying the project is the use of well-defined models of computation that govern the interaction between components. The belief behind that principle is that the choice of models of computation strongly affects the quality of a system design, and that, for embedded systems, useful models of computation must capture well the notions of concurrency and time. The evolution of Ptolemy supported the use of multiple models of computation constructed in a hierarchy of models since, in the opinion of the Ptolemy team, no single general-purpose model of computation was likely to satisfy the needs entailed in modeling a complex embedded system.
Although not a component model itself, Ptolemy has a number of similarities to the vision presented in this paper; notably, the notion that the computational model should be understood as one essential parameter of the design problem. Ptolemy however addresses that intuition from a tool environment perspective more than from the architecture standpoint argued for in this work.
3. The proposed component-based development approach
3.1. Overall process
Figure 3 captures what we regard as the main activities related to software development in the component-oriented approach proposed in this work.
An initial phase is concerned with the definition of components. In the proposed approach, a lot of emphasis is put on the definition of component interfaces, which exist independently of components and precede their definition (see section 3.2).
Figure 3: Our component-oriented development process.
Components can be either: (i) defined from scratch, using newly defined interfaces or reusing an interface definition; or (ii) reused from previous projects. In the latter case, if the component is not reused as is, an adaptation of the component may be required. The adaptation shall follow separation of concerns and therefore affect either interfaces alone or internals, where the latter achieves lesser reuse.
Reuse of an existing component or creation of a new component from scratch depends mainly on system requirements (functional requirements shall be compatible) and on the trade-off between the effort of producing the justification documentation for component reuse according to the applicable development process standard and the expected gain in spared development and verification and validation (V&V) effort.
Components are then bound to one another so as to create component assemblies: the complete software system is specified as a set of collaborating components.
As a distinguishing feature of our approach, all the steps described up to now relate exclusively to the specification of functional concerns and of the functional services supported by components. All other concerns (concurrency and real-time, dependability, deployment) are addressed in separate development steps.
Figure 3 depicts concurrency, real-time and deployment, which are directly addressed in this paper. Concurrency and real-time requirements and properties are specified by adding them on top of the functional description of services exposed by components, in the form of attribute descriptors. Deployment concerns are addressed by establishing deployment directives for each component.
The software model is then subject to analysis (e.g., schedulability analysis, bus communication analysis) to confirm that it meets the applicable non-functional requirements. Negative results from the analysis phase may require changes to the relevant non-functional attributes (e.g., periods, deadlines, priorities) or to the deployment directives (allocation to processing units), or, in the most severe cases, the re-design of component assemblies or components.
Finally, containers and connectors are automatically generated using the software model as input. The former implements concurrency and real-time concerns; the latter the interaction between components. In our approach, the full real-time and communication architecture on top of the execution platform is automatically generated.
The envisioned design process recognizes two main actors: the software architect and the software supplier. The software architect is technically responsible for the whole software toward the system-level customer. The development of parts of the software may be delegated to software suppliers before final integration.
All the activities previously described are under the responsibility of the software architect. As soon as a component is defined, it can undergo detailed design and code implementation. Those activities may of course highlight incomplete or flawed definitions of components (e.g., the lack of an operation in the component interface), thus requiring a renegotiation of the component definition. In any case, if an iterative / incremental development process is adopted, a number of iterations from component definition to detailed design have to be anticipated, according to the new functional perimeter of each iteration. Component detailed design and implementation are performed by software developers or may be subcontracted to a software supplier.
Section 3.2 discusses those activities.
In order to organize the development process in a manner that enforces separation of concerns in the design space, we adopt the concept of "design view".
The ISO standard 42010 [5] stipulates that "architectural description of the system is organized into one or more constituents called views", and a view is a partial representation of a system from a particular viewpoint, which is the expression of some stakeholders' concerns. If during the construction of a development approach we ratify that each view is the expression of a single concern, then views become effective means to enforce separation of concerns in the specification of the software system.
Design views are used mainly for two purposes: (i) view-specific visualization of entities; (ii) control of the development capabilities attributed to the designer.
As each view captures different concerns (according to the viewpoint of different software specialists), views shall enable selective visualization of only the relevant entities, and shall support creation and modification rights on those entities only. A design view can also be the expression of a defined development stage. Hence it may become possible to activate it only after a certain number of conditions hold.
In the general case, views shall not incur overlaps of responsibility for creation or modification of new modelling entities. While multiple views can have read rights over cross-cutting aspects, only a single view can have create/write rights on them.
Section 3.3 describes the design views defined in the proposed development process.
3.2. Design entities and design steps
The proposed development method involves the use of entities that can be differentiated between design space entities and entities of the real-time architecture. The
design-level entities are explicitly specified in the design space, and require intellectual contribution from the user. The real-time architecture entities are neither specified nor explicitly represented in the design space; instead, they are automatically generated. The following entities belong in the design space: 1) data types, events and interfaces; 2) component types; 3) component implementations; 4) component instances; 5) component bindings; 6) entities for the description of the hardware topology and the target platforms. The remaining entities belong to the real-time architecture: 7) containers; and 8) connectors.
Automated generation of containers and connectors is assuredly feasible on the condition of adopting a given computational model and execution platform: that information makes it possible to define and deploy deterministic transformation rules from design-level entities to the real-time architecture formed by containers and connectors (cf. e.g., [2], for our reference choices on this topic).
The development process proceeds across distinct design steps, some of which can be delegated to software suppliers. Easy and clear delegation of software development simplifies the technical and contractual relationships within the consortium of companies producing the software, and mitigates the relevant project risks. This is particularly relevant in ESA projects, because of geographic return policy considerations4.
We can now proceed to illustrate the sequence of design steps.
#01 - Data types and events. Data types are the basic entities in our approach. The designer can define a set of data types such as primitive types, enumerations, ranged or constrained types, arrays or composite types (like the struct in C or the record type in Ada). An Event is a type used for signal-based asynchronous notifications. It can comprise event parameters, which shall be typed with a datatype.
#02 - Interfaces. An interface is a set of one or more operations, whose signature is determined by an operation name and an ordered set of parameters, each with a direction (in, in out, out) and a parameter type chosen among the defined types. The interface can also define a set of attributes, which are typed parameters that can be accessed through the interface. They can be read-only or read-write.
In fig. 4 we represent three data types and an event. Interfaces AOCS_IF and THR_IF comprise only operations, while interface GYR_IF comprises one read-only attribute.
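To make the entities of Figure 4 concrete, the sketch below renders them in C++; in the actual process they are model-level entities of the design language, not source code, and the range constraints shown as comments are enforced by the model, so this is an illustration only.

#include <cstdint>

// Data types of Figure 4, rendered in C++ purely for illustration.
enum class AOCS_Mode { NOMINAL_MODE, SAFE_MODE, SCIENCE_MODE };

struct Angular_Velocity {          // composite type with ranged fields
    std::int8_t rate_x;            // constrained to [-30, +30] in the model
    std::int8_t rate_y;
    std::int8_t rate_z;
};

using Duration = std::uint32_t;    // ranged integer type in the model

struct GYR_FAILURE {               // event with one typed parameter
    Angular_Velocity last_acq_av;
};

// Interfaces: sets of operations and (optionally) typed attributes.
struct AOCS_IF {
    virtual void set_mode(AOCS_Mode mode) = 0;
    virtual void step() = 0;
    virtual ~AOCS_IF() = default;
};
struct GYR_IF {
    virtual Angular_Velocity gyr_av() const = 0;   // read-only attribute
    virtual ~GYR_IF() = default;
};
struct THR_IF {
    virtual void exec_thr_cmd(Duration dr) = 0;
    virtual ~THR_IF() = default;
};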
#03 - Component types. The component type (figure 5, top) is the entity that forms the basis for a reusable software asset. The software architect specifies a component type to provide a specification of the functional services of the component. The component type is specified in isolation, with no relationship with other components.
The component type therefore specifies provided interfaces (PI) and required interfaces (RI) by referencing already-defined interfaces.
The component type can also define component attributes, which similarly to interface attributes are typed parameters. Component attributes are however local to the
4ESA is mandated to "ensure that all Member States participate in an equitable manner, having regard to their financial contribution, in implementing the European space programme". This obligation is reflected as an evaluation criterion for competitive bids.
Figure 4: Data types, events and interfaces.
component and cannot be accessed from the outside. They are typically used to define internal configuration parameters.
Finally, a component type may raise or receive events, via event emitter ports and event receiver ports respectively.
Figure 5: Component type and component implementation.
In fig. 5 (top) we defined an "Attitude and Orbit Control" (AOCS) component that provides an interface (AOCS_IF) for executing basic and application mode management functions (operations Step and Set_Mode, respectively). The component requires the interfaces GYR_IF and THR_IF, in order to perform an acquisition on a gyroscope and send pulse commands to thrusters. Finally, the component can raise events of type GYR_FAILURE, to notify that it has detected a failure of the gyroscope.
#04 - Component implementations. The software architect then proceeds by creating a component implementation from a component type.
A component implementation (fig. 5, bottom) fulfils two roles: (i) it is a concrete realization of a component type; and (ii) it is the subcontracting unit of the approach, whose realization can be delegated by the system architect to a software supplier.
A component type may have several implementations (one more precise yet more computationally expensive, another more robust, etc.).
A component implementation must implement all the functional services of its type (in the example above, the operations Step and Set_Mode) and includes the sequential/algorithmic code of those services and the necessary packaging information. The code is purely sequential and shall be void of any tasking or timing constructs. Implementations can be developed in different implementation languages (Ada, C and C++ are currently supported).
Despite the sequential nature of the code, an implementation may set specific non-functional constraints to preserve the functional correctness of its behaviour. For example, a control law algorithm may work correctly only if executed at a certain frequency, say 8 Hz. In that case, the functional code propagates some implicit non-functional constraints, which we represent in the component interface.
Additionally, the component implementation shall implement means to store the attributes exposed through its provided interfaces and those defined in its component type. The former shall be accessible via appropriate getters and setters from the interface where they are defined.
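As a hedged illustration of what a component implementation looks like under these rules (reusing the data types and interfaces sketched after Figure 4; all other names and bodies are hypothetical, and the process keeps these as model entities, with Ada, C and C++ supported for the actual code):

// Minimal event emitter abstraction handed to the component by its
// container; the component never accesses the event service directly.
template <typename Event>
struct EventEmitter {
    virtual void emit(const Event& e) = 0;
    virtual ~EventEmitter() = default;
};

// AOCS_Impl: one concrete realization of the AOCS component type.
// Purely sequential code: required interfaces and the event emitter are
// abstract references, resolved at initialization time by the container.
class AOCS_Impl : public AOCS_IF {
public:
    AOCS_Impl(GYR_IF& gyr, THR_IF& thr, EventEmitter<GYR_FAILURE>& failure)
        : gyr_(gyr), thr_(thr), failure_(failure) {}

    void set_mode(AOCS_Mode mode) override { mode_ = mode; }

    // Control step; the 8 Hz rate is NOT coded here: it is an implicit
    // non-functional constraint recorded on the component interface.
    void step() override {
        Angular_Velocity av = gyr_.gyr_av();
        if (!plausible(av)) { failure_.emit(GYR_FAILURE{av}); return; }
        thr_.exec_thr_cmd(compute_pulse(av));
    }

private:
    bool plausible(const Angular_Velocity&) const { return true; }      // placeholder
    Duration compute_pulse(const Angular_Velocity&) const { return 0; } // placeholder

    GYR_IF& gyr_;                        // required interfaces (RI)
    THR_IF& thr_;
    EventEmitter<GYR_FAILURE>& failure_; // event emitter port
    AOCS_Mode mode_ = AOCS_Mode::SAFE_MODE;
    float filter_par_1_ = 0.0f;          // component attribute FILTER_PAR_1
};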
A number of technical budgets can be placed either on operations or on the whole component. The implementation of the component and its operations shall respect the allocated budget, which shall then be considered as an implementation requirement. The types of technical budget of interest that emerged in our discussions with stakeholders comprise: a worst-case execution time (WCET) bound for a certain operation; a maximum memory footprint for the component implementation; a maximum number of calls to a certain operation of a RI. The latter implicitly bounds the communication budget allowed for an implementation, as the sizes of the data types involved in the communication are known.
This is especially useful when the software integrator wants to subcontract part of the software. After establishing how the component functionally relates to the system (by declaring its needs and provisions at the component-type level), they can derive a component implementation, attach technical budgets to it and then delegate the source code implementation to a software supplier.
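Technical budgets lend themselves to a simple declarative representation that travels with the subcontracted component implementation; one possible shape, with hypothetical field names, is sketched below.

#include <cstdint>
#include <string>
#include <vector>

// Budgets allocated by the software architect to a component
// implementation before subcontracting; verification later checks the
// delivered code against them.
struct OperationBudget {
    std::string operation;              // e.g. "AOCS_IF.Step"
    std::uint64_t max_wcet_cycles;      // on a named target, e.g. LEON2
    std::uint32_t max_calls_to_ri_op;   // bounds the induced RI traffic
};

struct ImplementationBudget {
    std::string component_implementation;    // e.g. "AOCS_Impl"
    std::uint32_t max_mem_footprint_kb;
    std::vector<OperationBudget> operations;
};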
It is important to notice that component implementations can (and hopefully will) undergo a detailed design activity, which may add internal decomposition (package hierarchy) or operations private to the component. This decomposition can either be performed directly in the source code of the component implementation, or using separate implementation models (e.g., in UML). The component implementation will however be considered a black box: as far as functional aspects are concerned, only its external interfaces matter (the same PI and RI of its component type).
For the purposes of analysis instead, we need to know which required interfaces (RI) are requested by each provided interface (PI) of a component implementation, and, more precisely, which operations in the RI are called by each operation in the various PI of the component implementation. This information is defined at implementation level, as it does not vary across different instances of the same component implementation and can be specified with a UML activity diagram or similar formalism.
#05 - Component instances and component bindings. A component instance is instantiated from a component implementation.
A component instance serves three purposes: (i) it is subject to composition with other components, and as such it is the expression of functional concerns; (ii) it is the deployment unit of the approach, whereby it is the expression of deployment concerns; (iii) it is the entity on which non-functional attributes are specified, hence it is the expression of non-functional concerns. In this design step, we concentrate only on aspect (i).
Component bindings are set at design time between one required and one provided interface of component instances (1-to-N connections for multicast are disallowed). The binding is subject to static interface type matching to ensure that the providing end fulfills the functional needs of the requiring end.
At this stage, it is also possible to trace bindings between an event emitter port and one or more event receiver ports. Contrary to a binding between a RI and a PI, which establishes a functional dependency, an event binding simply establishes that the receiver component instance is interested in receiving notification of occurrences of a given event. Event-based communication is based on a publish-subscribe communication model.
In fig. 6, a binding between one RI of the Mode_Manager component instance and the AOCS component instance is traced. This allows the Mode_Manager instance to call the operations of the interface AOCS_IF provided by the AOCS instance. The Mode_Manager instance will also receive notification of any event of type GYR_FAILURE raised by the AOCS instance.
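Since bindings are pure design-time information, they can be pictured as records naming the endpoints, later consumed by the generators that produce the connectors; the field names below are hypothetical.

#include <string>
#include <vector>

// A functional binding connects one required interface of one instance
// to a matching provided interface of another instance (1-to-1 only).
struct InterfaceBinding {
    std::string requiring_instance;   // e.g. "Mode_Manager"
    std::string required_interface;   // e.g. "AOCS_IF"
    std::string providing_instance;   // e.g. "AOCS"
};

// An event binding only records interest in an event type
// (publish-subscribe; no functional dependency is created).
struct EventBinding {
    std::string event_type;                        // e.g. "GYR_FAILURE"
    std::string emitter_instance;                  // e.g. "AOCS"
    std::vector<std::string> receiver_instances;   // e.g. { "Mode_Manager" }
};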
#06 - Specification of non-functional attributes. After component instances have been created (fig. 6), the software architect can add non-functional attributes to the services of their provided interfaces; those non-functional attributes are specified in non-functional descriptors.
Figure 6: Component instance, component bindings and decoration with non-functional attributes.
At this stage the software architect shall specify timing and synchronization attributes. First, they shall establish the concurrency kind of the operation, classifying it as either immediate or deferred. In the former case, the operation is executed by
the flow of control on the caller side. In the latter case, the operation is executed by a dedicated flow of control in the callee. Either option has direct repercussions on the implementation of the component's container.
A stateful immediate operation is said to be protected if its execution requires protection from the risk of data races arising from concurrent calls. Otherwise it is unprotected, being either stateless or otherwise free from such risks.
For a deferred operation, the architect must choose the release pattern, which can be periodic, sporadic or bursty.
• Periodic operations are executed by a dedicated flow of control released by the execution platform with a fixed period.
• Sporadic operations are executed by a dedicated flow of control which responds to a request posted via software invocation by another component or via interrupt. Two subsequent releases of the operation are to be separated by a minimum timespan called the minimum inter-arrival time (MIAT). The MIAT is enforced by the execution platform, not by the user or the component implementor. Sporadic operations require the creation of a finite-size buffer for the incoming requests, which is managed by the container (the storing and fetching of requests is totally transparent to the component).
• Bursty operations are used to model dense releases of sporadic jobs possibly followed by spans of inactivity. A bursty operation is characterized by a maximum number of activations within a bounded interval; the designer is expected to provide both values as attributes, as well as the size of the buffer for incoming requests, similarly to sporadic operations.
For all deferred release patterns, the software architect shall also specify the relative deadline for the completion of the operation. At a preliminary stage of development, an estimate of the worst-case execution time (WCET), based on experience from previous projects, is provided in order to enable early analysis of the overall system. The WCET value can later be refined with bounds for a given target platform obtained by timing analysis [29].
Table 1 presents the syntax for the current set of non-functional attributes (related to concurrency and real-time) applied to the interfaces of component instances.
It is also possible to specify end-to-end requirements on call chains across components, which is particularly useful for system-level analysis. Schedulability analysis (in the simplest form of the response-time analysis equations derived from [30]) can in fact provide response times only for individual tasks. The specification of end-to-end requirements can instead be used by a more expressive analysis framework (such as [31]) to compute the completion time of end-to-end chains of operations as well as of single intermediate operations in the chain.
The component model also provides the user with means to define measurement units (and conversion factors between them). The definition of those units can then be factored out in a library reused across projects. Non-functional attributes and technical budgets that represent a dimensioned attribute (e.g., a timespan) require the specification of a value and a measurement unit.
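No concrete syntax is prescribed for the unit library; independently of Table 1 below, a purely hypothetical rendering of such reusable definitions could be:

    unit ms ;                   (base time unit, assumed)
    unit us  = 0.001 ms ;       (conversion factor to the base unit)
    unit KiB = 1024 byte ;      (unit usable for memory budgets)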
EFDescriptor = operationName, concurrencyKind ;
concurrencyKind = immediate | deferred ;
deferred = cyclic | sporadic | bursty ;
immediate = protected | unprotected ;
cyclic = "cyclic", period, WCETdesc, [deadline], [offset] ;
sporadic = "sporadic", MIAT, WCETdesc, [deadline], queueSize ;
bursty = "bursty", boundedInterval, maxActivations, WCETdesc, deadline, queueSize ;
protected = "protected", WCETdesc ;
unprotected = "unprotected", WCETdesc ;
WCETdesc = execPlatform, WCETentry ;
execPlatform = "execPlatform", identifier ;
WCETentry = "WCET", naturalNumber, unit, WCETkind ;
WCETkind = "Estimation" | "Measured" | "AnalysisBound" ;
period = "period", positiveNumber, unit ;
deadline = "deadline", naturalNumber, unit ;
offset = "offset", naturalNumber, unit ;
MIAT = "MIAT", naturalNumber, unit ;
boundedInterval = "boundedInterval", positiveNumber, unit ;
maxActivations = "maxActivations", positiveNumber ;
queueSize = "queueSize", positiveNumber ;
Table 1: Syntax in EBNF for non-functional attributes attached to operations.
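For illustration, two descriptors written against the grammar of Table 1 could read as follows; the execution-platform identifier and the WCET figures are invented, while the 125 ms period corresponds to the 8 Hz constraint discussed below and the sporadic parameters merely echo the example values of fig. 8:

    Step, cyclic, period 125 ms, execPlatform TargetCPU, WCET 4 ms Estimation, deadline 125 ms ;
    Start_Mode_Transition, sporadic, MIAT 500 ms, execPlatform TargetCPU, WCET 10 ms Estimation, deadline 500 ms, queueSize 2 ;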
The design environment checks that the attributes defined at instance level are compatible with the applicable non-functional constraints and technical budgets defined at implementation level.
In the example of fig. 6, the design environment ensures that the frequency of operation Step matches the corresponding constraint (execution at 8 Hz) in the component implementation and that the WCET is within the stipulated bound (cf. fig. 5).
#07 - Hardware topology and target platforms. The hardware topology (see fig. 7) provides a description of the system hardware limited to the aspects related to communication, analysis and code generation.
The following elements are described: (i) processing units, i.e., units that have general-purpose processing capability; (ii) avionics equipment, i.e., sensors, actuators, storage memories and remote terminals; (iii) the interconnections between the elements above, in terms of buses, point-to-point links, serial lines.
For the specification of those elements we use the following attributes:
• for processors: the processor frequency, which is used to re-scale WCET values expressed in processor cycles.
• for buses and point-to-point links: the bandwidth; the maximum blocking incurred by a message due to the non-preemptability of a lower-priority message transmission; the minimum and maximum packet size; the minimum and maximum propagation delay; the maximum time necessary for the bus arbiter/driver to prepare and send a message on the physical channel; and the maximum time to make it available to the receiver after reception at the destination end (see the illustrative bound below).
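Purely as an illustration of how these attributes may be combined (the actual equations used by an analysis framework such as [31] are more elaborate), a coarse worst-case bound on the latency of a single message over a bus could take the form

    R_msg <= B + S_max / W + D_max + O_send + O_recv

where B is the maximum blocking due to a non-preemptable lower-priority transmission, S_max the maximum packet size, W the bandwidth, D_max the maximum propagation delay, and O_send and O_recv the maximum times charged to the sending arbiter/driver and to delivery at the receiving end, respectively.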
#08- Component instance deployment. Once the hardware topology has been defined, the last step to perform in the design space is the allocation of component instances to processing units (see fig. 7).
Figure 7: Deployment of component instances to processing units.
#09 - Model-based analysis. The system model is submitted to static analysis in the non-functional dimensions of interest. For example, schedulability analysis verifies whether the timing requirements set on interfaces can be met [4].
The extraction of information from the user model (i.e., generation of intermediate models such as a Platform-Specific Model, PSM, or a Schedulability Analysis Model, SAM) and generation of the input for the analysis tools are automated and the results of the analysis are seamlessly propagated back to the design model as read-only attributes of the appropriate design entities.
As the model transformation that generates the model representation for containers and connectors in the SAM (see Step #10) is an integral part of the analysis transformation chain, the transformation can add information about the cost in time and space of the containers and, in particular, of connectors. This is of utmost importance for an accurate analysis of the overhead introduced by the use of platform services for local execution and for remote communication.
The analysis can be iterated at will until the designer is satisfied.
#10- Generation of containers and connectors. Containers and connectors (fig. 8) are generated with rules that specify: (i) the structure of each container in terms of the interface exposed to the software system, its internal threads and its protected objects; (ii) the structure of each connector; (iii) how non-functional attributes and deployment determine the creation of containers and connectors and how component instances and their operations are allocated to them.
For our choice of computational model (i.e., RCM), we defined the whole set of allowable containers and connectors in a library of code archetypes [32], which vastly simplifies automatic code generation.
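As an illustration of the kind of code such archetypes embody, the sketch below shows, in Ravenscar-compliant Ada, how a container might realize one cyclic and one sporadic operation. It is a minimal, hand-written approximation, not the actual generated archetypes of [32]; the package and subprogram names, the Request type, the priorities and the time values are all invented (the 125 ms period and the 500 ms MIAT merely echo the example values of figures 6 and 8).

    pragma Profile (Ravenscar);

    package Container_Sketch is
       type Request is record
          Target_Mode : Natural := 0;
       end record;
       --  Called by the connector on the caller's side to post a deferred
       --  request (illustrative name).
       procedure Post_Start_Mode_Transition (R : Request);
    end Container_Sketch;

    with Ada.Real_Time; use Ada.Real_Time;

    package body Container_Sketch is

       --  Stubs standing in for the sequential, supplier-provided code of
       --  the component implementation.
       procedure Step is null;
       procedure Start_Mode_Transition (R : Request) is null;

       --  Cyclic operation: a dedicated task releases Step with a fixed
       --  period (125 ms, i.e. 8 Hz, as an example value).
       task Cyclic_Step with Priority => 10;

       task body Cyclic_Step is
          Period       : constant Time_Span := Milliseconds (125);
          Next_Release : Time := Clock;
       begin
          loop
             delay until Next_Release;
             Step;
             Next_Release := Next_Release + Period;
          end loop;
       end Cyclic_Step;

       --  Sporadic operation: requests are stored in a protected buffer
       --  (size 1 here for brevity; the model allows a configurable queue)
       --  and served by a dedicated task that enforces the MIAT.
       protected Buffer with Priority => 12 is
          procedure Post (R : Request);
          entry Fetch (R : out Request);
       private
          Pending : Request;
          Full    : Boolean := False;
       end Buffer;

       protected body Buffer is
          procedure Post (R : Request) is
          begin
             Pending := R;
             Full    := True;
          end Post;
          entry Fetch (R : out Request) when Full is
          begin
             R    := Pending;
             Full := False;
          end Fetch;
       end Buffer;

       procedure Post_Start_Mode_Transition (R : Request) is
       begin
          Buffer.Post (R);
       end Post_Start_Mode_Transition;

       task Sporadic_Handler with Priority => 11;

       task body Sporadic_Handler is
          MIAT         : constant Time_Span := Milliseconds (500);
          Next_Allowed : Time := Clock;
          R            : Request;
       begin
          loop
             Buffer.Fetch (R);
             delay until Next_Allowed;  --  enforce the minimum inter-arrival time
             Start_Mode_Transition (R);
             Next_Allowed := Clock + MIAT;
          end loop;
       end Sporadic_Handler;

    end Container_Sketch;

A protected immediate operation would analogously be wrapped by the container in a protected object that exposes the operation as a protected procedure or function, so that the required mutual exclusion is again obtained without touching the functional code.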
In a single-core processor setting such as ours (multi-processor systems are supported, but multi-core processors are not supported yet), concurrency is implemented by encapsulating the sequential procedures of component implementations into tasks generated inside containers, while the necessary protection from data races in the access to shared logical resources stems from attaching concurrency control to them. All of that can be attained without modifying the functional logic in the relevant application code, simply by following the use relations among components and by generating the code patterns associated with the non-functional attributes of the relevant provided interfaces.
Figure 8: Automated generation of containers and connectors.
Interaction with the environment. Embedded software interacts with the environment by sampling information of interest through sensors and by controlling the plant in which it executes by means of actuators.
At design level we must be able to: (i) represent those devices in the hardware architecture; (ii) associate software components to them. For the latter, we use a component-level description of sensors and actuators in order to bind software components (at instance level) with the devices that they command.
The device representation is only for interaction purposes, as we only intend to represent the functional interface of the device, not its internals.
3.3. Design flow and design views
A component model does not only prescribe the syntactic rules to create design entities and to relate them to one another; whether intentionally or implicitly, and in any case inevitably, it also establishes a defined design flow.
The design flow comprises a series of steps that must be followed to create components, reuse components, assemble them and ultimately produce the software system. It may also determine precedence relations between those steps.
This implies that the developers of the component model shall pay careful attention to ensuring that the design flow promoted by their approach is compatible with the development process in use with the concerned industrial domain.
One of the advantages of design views is also to promote or enforce a certain design flow. The definition of the proposed component model is accompanied by the following design views (fig. 9):
Figure 9: The design views of our component model, as defined in the two investigation strands.
• Data view, for the description of data types and events (which are messages generated or received by a component following a publish/subscribe model);
• Component view, for the definition of interfaces, components and the bindings between components to fulfill their functional needs;
• Hardware view, for the specification of hardware and the network topology;
• Deployment view, for the specification of the allocation of components to computational nodes;
• Non-functional view, where non-functional annotations are attached to the functional description of components. The view is divided into two sub-views according to the non-functional concerns of interest: the real-time view and the dependability view (the latter is presently defined only in CHESS);
• Space-specific view, where the designer can specify the use of services related to commandability and observability of the spacecraft (i.e., the PUS services [33]).
In the CHESS project we defined two additional views: a Behavioural view and, for lack of a better name, a Railway-specific view. In the former, the designer can specify in the model the functional code of a component implementation using UML state machines and generate C++ code for it [34]; alternatively, the designer can always associate manually written Ada or C/C++ source code with a component implementation. In the latter, a number of railway-specific remote connection concerns are addressed.
Additionally, in CHESS, study partners focused on the development of various forms of model-based dependability analysis (i.e., error modeling, state-based analysis, FMEA, FMECA) [35].
On the ESA side of this work, the Behavioural view was not considered of high priority, whereas the only domain-specific concerns of interest were obviously those related to space. We implemented in that view a sizeable subset of the PUS services mentioned above, in particular: monitoring and reporting of on-board parameters, raising of nominal and erroneous events on board, and commanding of on-board operations from ground. It was not yet possible to integrate dependability concerns in that strand of investigation; they were left to follow-on projects.
Interestingly, no telecom-specific view was necessary, as it turned out that the technical requirements of that domain were either already addressed by the component model or, when not yet addressed, deemed of interest also for the other two domains, and therefore promoted to the domain-neutral part of the component model (e.g., multiplicities for component instances and ports, a useful piece of syntactic sugar that mitigates the cluttering of the design space and the specification burden for the user).
Figure 10: The design flow of our component model, including the design steps described in previous sections, and the domain-neutral design views.
Figure 10 depicts the complete design flow entailed by the proposed component model, the precedence constraints that apply to it, and the allocation of design steps and related concerns to views. The definition of data types (Step #01) shall be performed in a dedicated view where it is possible to define types independently of the underlying representation of the target platforms and to select the encoding rules for their representation.
The definitions of interfaces (Step #02), component types (Step #03), component implementations (Step #04), and component instances and instance bindings (Step #05) are allocated to a view that we termed "Component view", as those entities include only functional concerns.
In order to specify non-functional attributes (Step #06), we require the designer to explicitly transition to the "Non-functional view". In this manner, the modification rights of the latter view take effect: in the non-functional view, the designer cannot create or modify entities, but only add non-functional descriptors to the interfaces of instances. The creation or modification of entities would require the designer to return to the "Component view".
The specification of the hardware topology and the description of the execution platform (Step #07), and the allocation of component instances to processing units (Step #08), are performed in the "Hardware view" and "Deployment view" respectively.
The generated containers and connectors (Step #10) realize the non-functional attributes that were specified as declarative attributes in the non-functional view. They thus belong to another view termed implementation view. We decided to make this view solely available in read-only mode. The reason is twofold and in line with our previous work [4]. Firstly, the view is automatically generated, so the responsibility for its correct generation is not on the designer, but on model transformation. Secondly, we do not allow manual modifications of the PSM so that we ensure consistency between the PIM and PSM levels. A similar reasoning applies also to all the necessary analysis views (for the generation of the SAM or other analysis-specific models).
3.4. Versioning, configuration management and traceability
The component model presented in this approach does not cater for versioning of interfaces or components (for example by adding annotations with a version number or additional information). This choice is intentional, as we consider versioning as an issue pertaining to the development life cycle and to all the artifacts produced as part of it. Standard software configuration management systems (e.g., Git, SVN, ClearCase) can be used for model versioning. Staggered software releases following iterative development can be easily managed at model level with the use of branches and tags. We equally maintain that traceability of model entities (interfaces, components, etc.) to requirements shall also be managed externally (without embedding any information in the component model itself). We consider configuration management of the software model and traceability as orthogonal aspects, which shall be managed according to the applicable per-project policies. Additionally, external management of those aspects makes it easier to address the reuse of components in a different project context. In fact, any versioning or traceability information embedded in the software model would be moot if a component were reused in a project under a different requirements baseline.
4. Implementation and Evaluation
The concepts presented in this paper, from the component model to the associated development process, were the subject of two parallel prototype implementations and of a rather comprehensive range of industrial use cases.
One implementation took place within an ESA doctoral-level program in which one of the authors was enrolled. That effort resulted in a domain-specific metamodel (named
SCM, short for "Space Component Model") and a dedicated graphical editor based on Obeo Designer5. Design views were implemented using the concept of "Viewpoint" provided by the Obeo Designer framework. The use case for this effort took place in the ESA-funded COrDeT-2 study6 and involved a thorough industrial evaluation of the component model and the associated design environment.
The other implementation took place in the ARTEMIS JU CHESS project, and retained exactly the same methodology and component model as endorsed in the parallel ESA initiative. In CHESS, the specification language for components in the PIM user model comprises UML entities, some high-level MARTE stereotypes [36], and a few CHESS-specific stereotypes. The PSM/SAM model generated for analysis and code generation comprises exclusively MARTE stereotypes. The CHESS design environment is based on Papyrus, an Eclipse-based UML editor. A set of CHESS-specific plug-ins extend Papyrus to provide support for design views (visualization and modification rights on entities and view-specific palettes and constraints) and user-friendly automation capabilities. Model-based schedulability analysis is provided via integration with a plug-in by the University of Cantabria, which extracts the analysis information from our PSM to feed the MAST analysis tool [37]. The results of the analysis are first propagated back to the PSM and then to the PIM, for the user to consider.
CHESS enjoyed the presence of end users from a variety of industrial domains, which included telecom, railways and space again, though from a different team than for the ESA study.
The presence of such a range of end users allowed the vision presented in this paper and its proof-of-concept technology to be exposed to a rich and complementary set of case studies. Even though every individual evaluation was comparatively contained in size and effort, together they provided very comprehensive coverage of all evaluation criteria.
Moreover, the very existence of the parallel implementations undertaken for ESA and for CHESS arguably shows that the proposed approach is feasible in both methodology and technology, and incurs no major difficulty in being pursued with two different specification languages and tool chains.
In the following we briefly present the essence of each case study, while in section 4.5 we summarize the results and the feedback we received from them.
4.1. ESA: Reference Earth Observation case study
The first case study was performed in the scope of the COrDeT-2 study. It concerned the re-engineering of a small yet representative Earth Observation mission, used as a reference in many R&D studies at ESA. It consists of a small satellite in Low Earth Orbit, comprising an optical payload to capture images. The mission was originally developed with a traditional code-centric approach.
In this case study, a subset of the on-board software was re-designed, based on the software reference architecture approach and using the supported component model.
5http://www.obeodesigner.com
6http://cordet.gmv.com/
This case study focused on: (1) ensuring that the component model is able to express the needs for the development of on-board software for satellites; (2) ensuring that the component model is able to accommodate space-specific needs (namely, the "Packet Utilization Standard" (PUS) services for commandability and observability of the spacecraft from ground stations) in a manner that is consistent with the rest of the approach and provides the designer with specification means at the right level of abstraction.
The case study was performed by a senior engineer with no prior experience of component-oriented methodologies, from a small-size on-board software prime contractor, with support from a part-time consultant who participated in the ESA investigation. Occasional support from one of the authors on some details of the methodology was also needed. The case study spanned 6 months. The conclusions of the investigation were reviewed by two large software and system prime contractors of the domain.
4.2. CHESS: Space case study
The CHESS space use case was based on Sentinel-3, an Earth observation mission within ESA's Living Planet Program. The use case modeled a sizeable subset of the on-board software of the Sentinel-3 satellite: AOCS (Attitude and Orbit Control System), EM (Equipment Management), PM (Platform Management, an abstraction layer between the SW applications and platform resources such as the 1553B command bus, the on-board time reference, etc.), TR (Thermal Regulation), SADM (Solar Array Drive Mechanism).
This case study focused on: (1) ensuring that the CHESS methodology is compatible with the current process and practices of the domain stakeholder; (2) code generation of containers and connectors, by means of a model transformation towards a proprietary modeling infrastructure (MyCCM); (3) verifying the non-functional properties of the model, in particular timing properties, utilizing the supported model-based analysis and the back-propagation mechanism to rapidly iterate the analysis.
The case study was performed by an R&D engineer with experience of component-oriented methodologies in two iterations: a shorter one of approximately 1 month and a second one that spanned 2 months.
4.3. CHESS: Telecom case study
The telecom use case was based on the "Connectivity Packet Platform" (CPP), which allows constructing packet access nodes based on IP and ATM transport technologies. It provides cluster functionalities, redundancy and fault tolerance, and can be considered a soft real-time system with a few components subject to stringent timing requirements.
This case study focused on: (1) assessing the use of the component model in their development process; (2) modeling of functional code via ALF state machines; (3) functional code generation from state machines. The generated code is in C++ and targets their reference execution platform.
The case study was performed by two junior engineers with support from a senior engineer for the integration of their reference execution platform in the approach. It was performed over a time span of approximately 9 months, in two iterations. Some work was re-done in the second iteration, after the team had gained a better understanding of the methodology.
4.4. CHESS: Railway case study
The railway case study was based on applications related to the European Rail Traffic Management System (ERTMS)7. ERTMS comprises two main constituents: (i) the ETCS (European Train Control System), which is used to transmit information to the train driver (train speed, calculation of braking curves, ...) and to monitor the compliance of the driver with prescriptions; (ii) the GSM-R standard, to enable bi-directional wireless communication exchanges between the ground system and the train. The case study concerned a commercial solution for the monitoring and analysis of the strength of the up-link and down-link GSM-R signal (respecting bandwidth constraints) in proximity of a high-speed/high-capacity railway line. It analyses possible interferences on the signal and can discriminate whether the interference originates from outside the train or on board; in the latter case the train driver is notified, so as to take appropriate actions.
The case study focused on modeling two subsystems: (i) an "analyzer", which performs the analysis of the GSM-R signal; (ii) a "receiver", which receives the analyzed data on signal quality on board and performs the appropriate actions in response.
The "analyzer" is deployed on a laptop, the "receiver" on a dedicated board (simulated with a laptop in the case study).
The case study was small-sized, yet centered on several key features: (i) support for the creation of components written either in C or Ada; (ii) support for multi-node systems; (iii) support for a railway-specific communication protocol to regulate communication from (i) to (ii).
The case study was performed by an engineer without previous experience of component-oriented methods, with support on dependability modeling from a lead engineer. The case study spanned two iterations over a total period of 4 months. Considerable technical support by one of the authors was necessary during the second iteration.
ID   | Result / Criteria short name                                      | ESA case study | CHESS: telecom | CHESS: railway | CHESS: space
PR-1 | Adoption of design views                                          | I              | I              |                | I
PR-2 | Component-oriented design process                                 | A              | A              | A              | A
PR-3 | Automated generation of non-functional code                       |                |                | I              | I
PD-1 | Containers for separate realization of non-functional properties | I              |                | I              | I
PD-2 | Support for multiple target platforms                             | A              | A              | A              | A
M-1  | Maturity of methodology                                           | H              | H              | H              | H
M-2  | Increase of productivity                                          | M              | M              | L              | L
M-3  | Learning curve                                                    | H              | H              | M              | H
Table 2: I (demonstrated and considered interesting and promising); A (demonstrated and considered adequate); H (high); M (moderate); L (low).
7http://www.ertms.net/
4.5. Summary evaluation
The overall result of the above evaluations is reported in Table 2. In the following we briefly summarize the feedback obtained from the industrial users in the three main dimensions of their evaluation: the fitness of the development process for the industrial domain; the quality of the product as resulting from the chain of model transformations; the viability of use in production.
Process-related aspects. The adoption of design views (PR-1) as a means to enforce separation of concerns is cited as an important factor in the telecom and space domain feedback of the CHESS project and in the ESA investigation.
The component-oriented development process that was defined (PR-2) is cited by all case studies as adequate for the realization of their target system. It helps to split the intellectual work into manageable parts that can be realized (or further refined) independently by the designers. Furthermore, it is considered an enabler for efficient multi-team development.
The automated generation of non-functional code, plus the interface code and the skeletons for components (PR-3), was successfully achieved in the railway case study of CHESS. The code generation was also evaluated by the CHESS space case study, and deemed satisfactory and promising.
Product-related aspects. Generation of containers and connectors (non-functional code) from functional code (PD-1) was demonstrated in the CHESS space and railway use cases and in the ESA investigation, and considered an important achievement.
Support for multiple target platforms (PD-2) was demonstrated by the support for the target languages (Ada, C, C++) and the reference execution platforms selected by the represented application domains. It confirms the goodness of fit of the approach for cross-domain exploitation.
Miscellaneous aspects. The maturity of the methodology (M-1) is considered high in the feedback from all case studies.
The gain in productivity of the approach (M-2) is still considered low to moderate. This feedback is mostly due to the prototypical nature of the two toolsets and to the lack of collaborative features for the modeling activity. We are however satisfied with this fair evaluation, as the feedback from industrial users highlights that this is exclusively a technological problem (and in fact the methodology itself was considered mature).
Finally, the learning curve of the approach (M-3) is considered moderate to high. This largely depends on the previous exposure of the project partner to MDE and component-oriented approaches (the CHESS space partner is quite familiar with those paradigms, whereas the space partner of the ESA investigation and the telecom and railway partners of CHESS were not). This highlights in particular the difficulty of fitting a novel development approach into the pre-existing industrial process of a stakeholder, an aspect that is often ignored or underestimated, especially in academic literature. It also highlights that increased effort shall be devoted to dissemination activities, especially in the form of tutorials or reference guides (for example, with the description of architectural patterns for solving recurrent design problems using the component model and the reference architecture).
5. Conclusions and future work
In this paper we presented a novel component model developed in the context of an investigation of the European Space Agency that aims at the definition of a software reference architecture for the on-board software of future ESA missions, and in the parallel multi-domain project CHESS.
The component model was designed to support separation of concerns, in particular between functional and non-functional concerns. Separation of concerns is achieved with careful allocation of distinct concerns to the software entities of the approach: components, containers and connectors. Our components are void of any non-functional concern such as - in the dimension of concurrency - tasking, synchronization, timing. This ensures that they can be directly reused in different contexts under different non-functional requirements.
In our approach, component types and implementations only address functional concerns. Non-functional attributes are later superimposed on component instances. No premature non-functional choice is in fact forced on the component design. Furthermore, all the non-functional decisions are postponed to the appropriate design stage and no component description or implementation requires any modification in case non-functional attributes are changed.
After non-functional attributes have been set, a call chain may span across components but it is guaranteed to be finite (it is represented as a tree with a leaf at every first occurrence of a deferred operation, i.e., with sporadic or bursty activation pattern, which is executed by its own thread of execution) and acyclic [38].
Finally, it is worth noting that in our approach software suppliers solely address functional concerns (i.e., the implementation of the sequential/algorithmic code), under a defined budget envelope negotiated with the software architect. The specification of any non-functional concern (how the software shall be executed) stays the responsibility of the software architect; and the implementation of non-functional concerns stays the responsibility of the design environment (through automated code generation).
We also support model-based analysis of non-functional concerns to help the designer evaluate the design in early phases of development. The results of the analysis are directly presented as attributes of the entities of the user model.
The two implementations of the proposed component model that we developed attest to the methodological soundness of the approach, notwithstanding the use of different specification languages and technologies.
We are currently extending the component model described herein to support hierarchical components. Hierarchical decomposition will be used mainly to master the design complexity and contain the cluttering of entities in the design space: a problem common to all MDE approaches. Hierarchical composition will be used to aggregate reused components (fetched from a component repository). We are also carefully revisiting the design stages when hierarchical components are used, so that they fit our design process and do not break the separation of concerns principle.
Acknowledgements. This work was supported by the Networking/Partnering Initiative of the European Space Agency and by the CHESS project under ARTEMIS JU grant nr. 216682.
References
[1] D. C. Schmidt, Model-Driven Engineering, IEEE Computer 39 (2) (2006) 25-31.
[2] M. Bordin, T. Vardanega, Correctness by Construction for High-Integrity RealTime Systems: a Metamodel-driven Approach, in: Proc. of the 12th International Conference on Reliable Software Technologies - Ada-Europe, 2007.
[3] M. Panunzio, T. Vardanega, A Metamodel-driven Process Featuring Advanced Model-based Timing Analysis, in: Proc. of the 12th International Conference on Reliable Software Technologies - Ada-Europe, 2007.
[4] M. Bordin, M. Panunzio, T. Vardanega, Fitting Schedulability Analysis Theory into Model-Driven Engineering, in: Proc. of the 20th Euromicro Conference on Real-Time Systems, 2008.
[5] ISO/IEC/(IEEE), Systems and Software engineering - Recommended practice for architectural description of software-intensive systems, ISO/IEC 42010 (IEEE Std) 1471-2000 (2007).
[6] M. Panunzio, T. Vardanega, On Software Reference Architectures and Their Application to the Space Domain, in: 13th International Conference on Software Reuse, 2013, pp. 144-159.
[7] M. Panunzio, T. Vardanega, A Component Model for On-board Software Applications, in: Proc. of the 36th Euromicro Conference on Software Engineering and Advanced Applications, 2010, pp. 57-64.
[8] E. W. Dijkstra, The humble programmer, Communications of the ACM 15 (10) (1972) 859 - 866, ISSN 0001-0782.
[9] R. Chapman, Correctness by Construction: a Manifesto for High Integrity Software, in: ACM International Conference Proceeding Series; Vol. 162, 2006.
[10] E. W. Dijkstra, On the role of scientific thought, in: E. W. Dijkstra (Ed.), Selected writings on Computing: A Personal Perspective, Springer-Verlag New York, Inc., 1982, pp. 60-66, ISBN 0-387-90652-5.
[11] T. Vardanega, Property Preservation and Composition with Guarantees: From ASSERT to CHESS, in: Proc. of the 12th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing, 2009, pp. 125-132.
[12] J. Sifakis, A Framework for Component-based Construction Extended Abstract, in: Proc. of the 3rd IEEE International Conference on Software Engineering and Formal Methods, 2005, pp. 293-300.
[13] P. Puschner, R. Kirner, R. Pettit, Towards Composable Timing for Real-Time Software, in: Proc. 1st International Workshop on Software Technologies for Future Dependable Distributed Systems, as part of ISORC, 2009.
[14] C. Szyperski, Component Software: Beyond Object-Oriented Programming, 2nd ed. Addison-Wesley Professional, Boston, 2002.
[15] K.-K. Lau, Z. Wang, Software Component Models, IEEE Trans. Software Eng. 33 (10) (2007) 709-724.
[16] N. R. Mehta, N. Medvidovic, S. Phadke, Towards a Taxonomy of Software Connectors, in: Proc. of the 22nd International Conference on Software Engineering,
2000, pp. 178-187.
[17] I. Crnkovic, S. Sentilles, A. Vulgarakis, M. R. V. Chaudron, A classification framework for software component models, Software Engineering, IEEE Transactions on 37 (5) (2011) 593-615.
[18] M. Chaudron, I. Crnkovic, Component-based software engineering, chapter 18 in H. van Vliet, Software Engineering: Principles and Practice, Wiley, 2008.
[19] A. Burns, B. Dobbing, T. Vardanega, Guide for the Use of the Ada Ravenscar Profile in High Integrity Systems, Technical Report YCS-2003-348, University of York.
[20] B. Spitznagel, D. Garlan, A Compositional Approach for Constructing Connectors, in: IEEE/IFIP Conference on Software Architecture, 2001, pp. 148-157.
[21] G. Gößler, J. Sifakis, Composition for Component-Based Modeling, in: First International Symposium on Formal Methods for Components and Objects, 2002, pp. 443-466.
[22] A. Basu, M. Bozga, J. Sifakis, Modeling Heterogeneous Real-time Components in BIP, in: Proc. of the 4th IEEE International Conference on Software Engineering and Formal Methods, 2006, pp. 3-12.
[23] H. Hansson, M. Akerholm, I. Crnkovic, M. Torngren, SaveCCM - A Component Model for Safety-Critical Real-Time Systems, in: Proc. of the 30th Euromicro Conference, 2004, pp. 627-635.
[24] J. Carlson, J. Feljan, J. Maki-Turja, M. Sjodin, Deployment Modelling and Synthesis in a Component Model for Distributed Embedded Systems, in: Proc. of the 36th Euromicro Conference on Software Engineering and Advanced Applications, 2010.
[25] J. Muskens, M. Chaudron, J. J. Lukkien, A Component Framework for Consumer Electronics Middleware, in: C. Atkinson et al. (Ed.), Component-Based Software Development for Embedded Systems, Vol. 3778 of Lecture Notes in Computer Science, Springer Berlin / Heidelberg, 2005, pp. 164-184.
[26] T. Henzinger, B. Horowitz, C. M. Kirsch, Giotto: A Time-Triggered Language for Embedded Programming, in: Proceedings of EMSOFT 2001, Vol. 2211 of Lecture Notes in Computer Science, Springer-Verlag, 2001.
[27] T. Henzinger, C. Kirsch, M. Sanvido, W. Pree, From control models to real-time code using Giotto, IEEE Control Systems Magazine 23 (1).
[28] E. Lee, Overview of the Ptolemy Project (2001).
URL http://ptolemy.eecs.berkeley.edu/
[29] R. Wilhelm, J. Engblom, A. Ermedahl, N. Holsti, S. Thesing, D. Whalley, G. Bernat, C. Ferdinand, R. Heckmann, T. Mitra, F. Mueller, I. Puaut, P. Puschner, J. Staschulat, P. Stenström, The worst-case execution time problem: overview of methods and survey of tools, ACM Transactions on Embedded Computing Systems 7 (3) (2008) 1-53.
[30] M. Joseph, P. K. Pandya, Finding Response Times in a Real-Time System, The Computer Journal 29 (5) (1986) 390-395.
[31] J. C. Palencia, M. Gonzalez Harbour, Schedulability Analysis for Tasks with Static and Dynamic Offsets, in: Proc. of the 19th IEEE Real-Time Systems Symposium, 1998.
[32] M. Panunzio, T. Vardanega, Ada Ravenscar code archetypes for component-oriented development, in: Proc. of the 17th International Conference on Reliable Software Technologies - Ada-Europe, 2012.
[33] European Cooperation for Space Standardization (ECSS), Space Engineering -Ground systems and operations - Telemetry and telecommand packet utilization, ECSS-E-70-41A (2003).
[34] F. Ciccozzi, A. Cicchetti, M. Krekola, M. Sjodin, Generation of correct-by-construction code from design models for embedded systems, in: Proc. of the 6th IEEE Int. Symposium on Industrial Embedded Systems, 2011, pp. 63-66.
[35] L. Montecchi, P. Lollini, A. Bondavalli, Dependability Concerns in Model-Driven Engineering, in: Proc. of the 14th IEEE Int. Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops, 2011, pp. 254-263.
[36] Object Management Group, UML Profile for Modeling and Analysis of Real-Time and Embedded Systems (MARTE), 2009, version 1.0, http://www.omg.org/spec/MARTE/1.0/.
[37] M. Gonzalez Harbour, J. Gutierrez, J. Palencia, J. Drake, MAST: Modeling and Analysis Suite for Real-Time Applications, in: Proc. of the 13th Euromicro Conference on Real-Time Systems, 2001.
[38] D. Cancila, R. Passerone, T. Vardanega, M. Panunzio, Toward Correctness in the Specification and Handling of Nonfunctional Attributes of High-Integrity Real-Time Embedded Systems, IEEE Transactions on Industrial Informatics 6 (2) (2010) 181-194.
Biography
Marco Panunzio received the Laurea Specialistica (MSc) in Computer Science (full marks cum laude) from the University of Padova, Italy in 2006.
He received the Ph.D. in Computer Science from the University of Bologna, Italy in 2011.
During the Ph.D. and later as a post-doctoral research fellow at the University of Padova, Italy, he was a visiting researcher at the European Space Research and Technology Centre (ESTEC) of the European Space Agency (ESA) in the scope of the Networking/Partnering Initiative (NPI).
In May 2012, he joined Thales Alenia Space - France, where he works as an R&D engineer in the area of on-board software development.
His main research interests are: schedulability analysis of real-time systems, Model-Driven Engineering, Component-Based Software Engineering and software reference architectures.
Tullio Vardanega graduated with a degree in computer science at the University of Pisa, Italy, in 1986 and received the Ph.D. degree in computer science from the Technical University of Delft, The Netherlands, in 1998, while working at the European Space Research and Technology Centre (ESTEC) of the European Space Agency (ESA).
At ESTEC, over the period 1991-2001, he held responsibilities for research and technology transfer projects as a lead person in the area of onboard embedded real-time software.
In January 2002, he was appointed Lecturer in Computer Science, Faculty of Science, University of Padova, Italy, before becoming Associate Professor in October 2004.
At Padova, he took on teaching and research responsibilities in the areas of high-integrity real-time systems, quality-of-service under real-time constraints and software engineering methods, including model-driven engineering, and processes for such environments. He has authored numerous papers and technical reports on these subjects. He runs a range of research projects in these areas on funding from international and national organizations.