9. Evaluation of software architectures
9.1 Introduction
9.2 ATAM method
9.3 Example
9.4 MPM method
9.5 Summary 1
9.1 Introduction
What is "architectural"? Decisions concerning a system S as a whole, or the division of S into components and subsystems and their interaction; the internals of individual components are not "architectural" in the context of S.
Typically not architectural:
- data structures
- algorithms
- details of interfaces 2
Evaluation viewpoint on architectures
The architecture determines to what extent (most of) the quality requirements are satisfied. The architecture description should contain the information needed to reason about the quality requirements of a system. Architectural evaluation is usually performed against the quality requirements of a system, but functional requirements can be evaluated as well. 3
Quality attributes to be evaluated Performance Reliability Availability Security Modifiability Portability Variability Usability Memory efficiency 4
Different quality attributes Run-time quality attributes (e.g. performance) Development & evolution time quality attributes (e.g. modifiability) 5
What are the results of evaluation? Evaluation typically yields answers to the following types of questions: 1) Is the planned architecture suitable for the system, satisfying the essential quality requirements? If not, why? 2) Which of the alternative architectural solutions is the most suitable for the system, and why? 3) How well can a particular quality attribute be achieved with the planned architecture? Note 1: the answers are based on the architecture description, assuming a reasonable implementation. Note 2: the results of the evaluation depend on the accuracy of the architecture description. 6
Why should software architectures be evaluated? Architecture is the first precise description of a system Architecture contains the most critical decisions of a system Architectural evaluation increases the understanding of the system Architecture determines many process and organization aspects as well 7
When should software architectures be evaluated? After the first architectural sketch (pre-evaluation) After the architecture is designed, before implementation is started (full evaluation) After the system is implemented (sensible only in the case of legacy systems) 8
How to elicit information about the system's quality properties based on architecture? Scenarios Checklists Metrics Simulations Experts 9
Scenario-based evaluation methods SAAM (Software Architecture Analysis Method) concentrates on modifiability, portability, variability developed at SEI based on evolution-time scenarios ATAM (Architecture Tradeoff Analysis Method) covers all quality attributes developed at SEI derived from SAAM MPM (Maintenance Prediction Method) concentrates on maintainability (cost of maintenance) developed by Jan Bosch based on maintenance scenarios 10
9.2 ATAM (Architecture Tradeoff Analysis Method)
Phase 1: 1. Presentation, 2. Analysis
Phase 2: 3. Testing, 4. Reporting 11
Presentation (Phase 1) 1. Present the ATAM method ATAM phases ATAM techniques (scenarios, quality attribute trees, etc.) 2. Present the business drivers most important functions of the system business goals of the system constraints (economic, political etc.) 3. Present the architecture technical constraints (operating system, hardware etc.) external interfaces of the system architecture description 12
Analysis (Phase 1) 4. Identify architectural solutions identify and name used styles, patterns etc. add explanations on how the solution achieves certain quality attributes 5. Produce quality attribute tree refine quality requirements to the level of scenarios prioritize by importance and difficulty 6. Analyze the architectural solutions against the scenarios architecture is probed against scenarios risks, nonrisks, sensitivity points and tradeoff points identified 13
Quality attribute tree (quality attributes refined into scenarios; each scenario rated (importance, difficulty) with values H/M/L)
Quality
- Performance
  - Transaction throughput: Throughput 1000 service requests/sec (H,M)
  - Response time: Authentication response in less than 1 sec (H,M)
- Modifiability
  - Change UI: Change to Web UI in 1 month (M,H)
  - Change OS: Change to Linux in 6 months (L,H)
- Availability
  - Hardware failure: Restart after disk failure in 5 minutes (L,H)
  - Server crash: Restart after auth server crash in 5 minutes (M,M)
- Security
  - Data confidentiality: Credit card transaction secure 99.999% (H,L) 14
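The quality attribute tree above can be kept as plain structured data during an evaluation. The following sketch (names and the helper function are illustrative, not part of ATAM) encodes the example scenarios with their (importance, difficulty) ratings and picks out the high-importance ones to analyze first:

```python
# Hypothetical encoding of the example quality attribute tree above.
# Each leaf scenario is (description, importance, difficulty), rated H/M/L.
quality_tree = {
    "Performance": {
        "Transaction throughput": ("Throughput 1000 service requests/sec", "H", "M"),
        "Response time": ("Authentication response in less than 1 sec", "H", "M"),
    },
    "Modifiability": {
        "Change UI": ("Change to Web UI in 1 month", "M", "H"),
        "Change OS": ("Change to Linux in 6 months", "L", "H"),
    },
    "Availability": {
        "Hardware failure": ("Restart after disk failure in 5 minutes", "L", "H"),
        "Server crash": ("Restart after auth server crash in 5 minutes", "M", "M"),
    },
    "Security": {
        "Data confidentiality": ("Credit card transaction secure 99.999%", "H", "L"),
    },
}

def high_importance(tree):
    """Return descriptions of scenarios rated H for importance."""
    return [desc for refinements in tree.values()
            for desc, importance, _ in refinements.values()
            if importance == "H"]
```

In step 6 of ATAM the highest-priority scenarios are analyzed first, so a filter like `high_importance` mirrors how the tree is actually used.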
Risks Risk = potentially problematic architectural decision. Example: The rules for writing business logic modules in the second layer of your three-layer architecture are not clearly articulated (decision/fact in architecture). This could result in replication of functionality (rationale), thereby compromising modifiability (quality attribute implication). 15
Nonrisks Nonrisk = good decision/property in the architecture based on safe assumptions example: Assuming that only the actually responding components are registered as observers (assumption), the use of the Observer pattern in the communication between the components (decision) improves modifiability (quality attribute implication), because components do not have to know the components responding to their events (rationale). 16
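The Observer-based communication named in the nonrisk above can be sketched as follows (class names are illustrative): the event source notifies registered observers without knowing their concrete types, which is exactly the decoupling the rationale refers to.

```python
# Minimal Observer sketch for the nonrisk above. The assumption behind the
# nonrisk: only components that actually respond to events register themselves.
class EventSource:
    def __init__(self):
        self._observers = []

    def register(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # The source does not know who responds; it only calls update().
        for observer in self._observers:
            observer.update(event)

class LoggingComponent:
    """Illustrative observer that records the events it receives."""
    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)
```

Adding a new responding component means registering one more observer; the event source is untouched, which is why the pattern improves modifiability.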
Sensitivity points Sensitivity point = an architectural aspect that is critical for achieving a particular quality attribute examples: The portability of the system is sensitive to the use of the MVC model in the GUI implementation. The modifiability of the system is sensitive to the use of the Abstract Factory pattern for the creation of the driver objects. 17
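The second sensitivity point mentions the Abstract Factory pattern for creating driver objects. A minimal sketch of that idea (all class names here are hypothetical, not from the example system): clients create drivers only through a factory interface, so a whole family of drivers can be swapped by changing the factory.

```python
# Hypothetical Abstract Factory sketch: client code depends only on the
# abstract Driver and DriverFactory interfaces, never on concrete classes.
from abc import ABC, abstractmethod

class Driver(ABC):
    @abstractmethod
    def name(self) -> str: ...

class DriverFactory(ABC):
    @abstractmethod
    def create_driver(self) -> Driver: ...

class LinuxDriver(Driver):
    def name(self) -> str:
        return "linux"

class LinuxDriverFactory(DriverFactory):
    def create_driver(self) -> Driver:
        return LinuxDriver()

def init_system(factory: DriverFactory) -> str:
    # The client never names a concrete driver class; this is why
    # modifiability is sensitive to keeping the factory in place.
    return factory.create_driver().name()
```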
Tradeoff points Tradeoff point = Sensitivity point that affects more than one quality attribute examples: Use of the State pattern in the implementation of the state machine improves modifiability but impairs performance. Using XML for the input format improves interoperability of the system but impairs performance. 18
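The State-pattern tradeoff above can be made concrete with a small sketch (state names are illustrative): each state is its own class, so adding a state is a local change (modifiability), but every event now goes through an extra object indirection instead of, say, a switch on an integer code (the performance cost).

```python
# Minimal State-pattern sketch for the tradeoff above.
class State:
    def on_event(self, machine, event):
        raise NotImplementedError

class Idle(State):
    def on_event(self, machine, event):
        if event == "start":
            machine.state = Running()

class Running(State):
    def on_event(self, machine, event):
        if event == "stop":
            machine.state = Idle()

class Machine:
    def __init__(self):
        self.state = Idle()

    def handle(self, event):
        # Dispatch through the current state object: one extra indirection
        # per event (performance), but new states need no changes here
        # (modifiability).
        self.state.on_event(self, event)
```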
Testing (Phase 2) 7. Produce testing scenarios all stakeholders brainstorm and rank scenarios from their perspectives voting techniques for ranking scenarios validate & complete quality attribute tree 8. (Re)analyze the architectural solutions highly ranked scenarios run against architecture map highly ranked scenarios to architectural solutions & quality attributes 19
Reporting (Phase 2) Output of ATAM Identification and analysis of architectural solutions Identification of prioritized scenarios Quality attribute tree Identification of risks Identification of nonrisks Identification of sensitivity points and tradeoff points Identification of risk themes 20
9.3 Example: Car service monitor system A car control system needs to be extended with a subsystem that collects various kinds of data during the running of the car, to be used for monitoring and service purposes. The control system is based on a CAN-bus, which is used to send messages between the components in the system. The monitoring system (called MS) listens to the messages sent along the CAN-bus. It should be possible to add various kinds of processing capabilities to MS later: reading certain kinds of messages, performing arbitrary computations on the basis of the transferred data, possibly storing the computed data in a local database, and in certain cases sending information to a central service station through GSM. For example, MS may collect information about the usage of gears and brakes, about the speed, about engine temperatures, consumption etc. The driver should have access to the collected information through a graphical user interface and be able to activate or deactivate certain information collecting services. Since MS does not control critical functions, there are no hard real-time requirements. It should be possible to receive monitored information also from external, unknown systems, and to receive monitoring requests from such external systems. The system should be easily configurable for various kinds of cars, and it should be possible to upgrade the system at run-time. 21
[Class diagram of the MS architecture: a MessageDispatcher (receive(msg), send(msg), register(msgtype, component)) is connected to the CANBus through a CANFilter and delivers XML messages (XMLMsg; Message, type(): MsgType) to registered components. Example components: BrakeController (handleevent(event)), BrakeState (recordusage, checkcondition, register(view), getstate(), setstate()), BrakeView (update(), via BrakeViewIF and BrakeModelIF), GSMComp (sendreport), and a DB component.] 22
Analysis (1) 4. Identify architectural solutions identify and name used styles, patterns etc. add explanations on how the solution achieves certain quality attributes Architectural solution Use of the message dispatcher architectural style in the communication of components Explanation Supports dynamic extensibility of the system, because new components can be loaded at run-time, interacting with other components through messages. 23
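The message dispatcher style identified above can be sketched in a few lines (the API follows the diagram loosely; the `BrakeMonitor` component is an illustrative stand-in): components register for message types, and senders never reference receivers directly, which is what makes run-time extension possible.

```python
# Sketch of the message dispatcher architectural solution. New components can
# be registered at run time; existing components need no changes.
class MessageDispatcher:
    def __init__(self):
        self._registry = {}  # msgtype -> list of registered components

    def register(self, msgtype, component):
        self._registry.setdefault(msgtype, []).append(component)

    def send(self, msgtype, msg):
        # Deliver the message to every component registered for this type.
        for component in self._registry.get(msgtype, []):
            component.receive(msg)

class BrakeMonitor:
    """Illustrative component that collects brake-related messages."""
    def __init__(self):
        self.received = []

    def receive(self, msg):
        self.received.append(msg)
```

Loading a new processing capability amounts to one `register` call on a new component object, which is the explanation given for the dynamic extensibility of MS.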
Analysis (2)
5. Produce quality attribute tree
refine quality requirements to the level of scenarios; prioritize by importance and difficulty
Quality
- Modifiability
  - Data modifiability
    - S1: Change database in 1 month (H,M)
    - S2: Change message format in two weeks (L,H)
  - Functional modifiability: Change data collecting ...
  - Change UI ...
- Extensibility ...
- Performance: Component service response in 1 microsec ...
- Availability ...
- Usability ... 24
Analysis (3)
6. Analyze the architectural solutions against the scenarios
architectural solutions mapped to scenarios; risks, nonrisks, sensitivity points and tradeoff points identified
Scenario: S1: Change database in 1 month
Quality attribute: Modifiability
Stimulus: Old database no longer maintained
Response: Change done in 1 month
Architectural solution                                    Risk  Nonrisk  Sens.  Tradeoff
Abstract interface for database access (interface + DB comp)  -    NP1     SP1      - 25
NP1: Assuming that the persistence services can be provided by a new database, the use of an abstraction layer (interface + DB component) between the other components and the database makes the change of the underlying database easy, thus improving the modifiability of the system. SP1: The modifiability of the system is sensitive to the use of abstraction layer between the components and the concrete database. 26
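The abstraction layer of NP1/SP1 can be sketched as an abstract persistence interface plus one concrete adapter (interface and class names here are assumptions, not from the slides): client components program against the interface, so replacing the database means writing one new subclass.

```python
# Sketch of the NP1/SP1 abstraction layer: components depend only on
# PersistenceIF, never on a concrete database.
from abc import ABC, abstractmethod

class PersistenceIF(ABC):
    @abstractmethod
    def store(self, key, value): ...

    @abstractmethod
    def load(self, key): ...

class InMemoryDB(PersistenceIF):
    """Stand-in for a concrete DB component. Changing the underlying database
    means adding another PersistenceIF subclass; clients stay untouched."""
    def __init__(self):
        self._data = {}

    def store(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data.get(key)
```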
Testing (1) 7. Produce testing scenarios all stakeholders brainstorm and rank scenarios from their perspectives voting techniques for ranking scenarios validate & complete quality attribute tree A scenario presented by a manager: It is possible that in the future the company will develop its own proprietary information exchange format for onboard devices, instead of using standard CAN-bus. Therefore it should be possible to read data from other sources than CAN-bus. A scenario presented by a car engineer: Actually the behavior of the car is also affected by the external conditions. For example, if the temperature is low, the fuel consumption gets higher. Therefore information about external conditions from various sensors might be later added to the system. 27
Testing (2)
8. (Re)analyze the architectural solutions
highly ranked scenarios run against architecture; map highly ranked scenarios to architectural solutions & quality attributes
Scenario: S27: Change CAN-bus to proprietary information source in two months
Quality attribute: Modifiability
Stimulus: CAN-bus no longer used in the company
Response: The modification done in two months
Architectural solution             Risk  Nonrisk  Sens.  Tradeoff
Separate source handler             -     NP14     SP22     -
Abstract interface for messages     -     NP15     SP23     - 28
9.4 MPM: Maintenance Prediction Method 1. Define scenario categories 2. Define scenarios 3. Assign weights for scenarios 4. Impact analysis 5. Maintenance cost prediction 29
Define scenario categories
Scenario categories serve as the first step of finding scenarios: they give structure to the scenario set, help to find a covering set of scenarios, and can be based, for example, on environment change types, source of change etc.
Example:
Scenario category I: New/changed processing functionality
Scenario category II: New/changed hardware devices
Scenario category III: New/changed infrastructure
Scenario category IV: New/changed implementation techniques
Scenario category V: Changed UI 30
Define scenarios (1) Scenario category I: New/changed processing functionality S1: Change the way a serious damage is inferred from engine temperature data... Scenario category II: New/changed hardware devices S6: Add a new sensor for the temperature of the gear box, producing CAN messages to be monitored and analyzed statistically... Scenario category III: New/changed infrastructure S11: Change the database system to another one with similar interface, both being relational... 31
Define scenarios (2) Scenario category IV: New/changed implementation techniques S15: Change the message format from XML to binary... Scenario category V: Changed UI S18: Add voice output for alarm situations... 32
Assign weights for scenarios
Scenario   Normalized probability
S1         0.12
...        ...
S22        0.05
Sum        1.0 33
Impact analysis
Scenario  Affected components          Impact  LOC
S1        Engine Controller component  20%     0.2 x 1500 = 300
...       ...                          ...     ...
S15       SourceHandler                30%     0.3 x 1800 = 540
          ComponentX                   5%      0.05 x 200 = 10
          ComponentY                   5%      0.05 x 500 = 25
...       ...                          ...     ... 34
Maintenance cost prediction
Expected LOC impact per maintenance scenario:
X = Sum over scenarios i of (Weight_i * Impact_i) = 0.12 * 300 + ... (LOC)
Assume the average cost of a LOC in the enterprise is Y (cost/LOC).
Assume the expected number of maintenance scenarios per year is Z.
=> Expected maintenance cost per year: Z * X * Y 35
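The MPM calculation can be carried out directly from the two tables. The sketch below uses only the example rows shown above (S1 and S15); the weight for S15, the cost per LOC (Y) and the yearly scenario count (Z) are assumed values for illustration, not figures from the slides.

```python
# MPM maintenance cost prediction, using the example impact figures above.
weights = {"S1": 0.12, "S15": 0.05}              # normalized probabilities
                                                  # (0.05 for S15 is assumed)
impacts_loc = {"S1": 300, "S15": 540 + 10 + 25}   # summed LOC impact per scenario

# X: expected LOC changed by one maintenance scenario
x = sum(weights[s] * impacts_loc[s] for s in weights)

cost_per_loc = 50        # Y: assumed average cost per LOC in the enterprise
scenarios_per_year = 40  # Z: assumed number of maintenance scenarios per year

expected_yearly_cost = scenarios_per_year * x * cost_per_loc
```

With a full scenario set the weights would sum to 1.0, making X a true expectation over the maintenance profile.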
9.5 Summary Assessment methods provide an opportunity for discussions that should be carried out during the process anyway. Assessment methods can be regarded as exceptionally deep and systematic architectural review techniques. The methods are (probably deliberately) not very precise, giving a lot of freedom to tailor them for a particular case & company. The methods need a significant amount of resources and commitment, which is not always available (especially ATAM). 36
Course synthesis Architecture & software development process Well-defined & documented architecture gives the constitution of a software system Architecture & dependencies Architectural solutions reduce unnecessary dependencies in the system Architecture & reuse Product-line architecture and variation management enable systematic reuse of enterprise know-how Architecture & quality Architectural assessment, carried out as part of the process, ensures quality attributes 37