SolutionArchitecture FM 14
Version 1.5
I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.
I send them over land and sea,
I send them east and west;
But after they have worked for me,
I give them all a rest.
References
Document Date Description
Prerequisites
Document Description
Every solution has its own life cycle that runs through four separate phases, as shown in the illustration above. Each
of these phases consists of activities that break it up into manageable parts, each assigned to a group of
stakeholders responsible for its resolution.
The Plan phase concerns itself with identifying and documenting the broad contours of the project. Coarse-grained
requirements, usually focused on the triple constraint of project management, are penned down, and a global plan
with milestones is drawn up. We mainly concern ourselves with the goals of the project in this phase, so that all parties
know what we are working towards from the start.
The Build phase is where the real work begins. We elaborate the business needs into sets
of measurable (SMART) requirements and design architectures that cover all of them, which we in turn realize by
implementing a solution and testing it thoroughly. This is where the bulk of the work for the technical teams in charge
of developing the solution happens.
The Operate phase is where we reap the benefits of the implementation, and the solution becomes a solution-in-use.
The project switches from active development to maintenance and change management. This operational phase is
the true test of whether the goals set forth by the stakeholders in the first and second phases have been met.
Every solution has a finite life, after which it becomes obsolete: either because the needs of the stakeholders have
changed, or because the implementation cost of a replacement solution dips below the maintenance cost of the
current solution. This final phase is about how to dispose of the current solution while still retaining all relevant value
the solution still possesses. Most of the time, this comes down to transferring all relevant information still present in
the solution to the replacement solution, and repurposing the resources freed by the disposal.
Another poet of British origin once said "No Man is an Island, entire of Itself", and the same can be proclaimed about
Solution Architecture. In itself, it does not amount to much; it needs the entirety of a solution development process to
complement it. As far as the phases of a project go, it is conceived either in the Plan phase or the Build phase,
serves as input and quality verification for both the Build and Operate phases, and finally becomes a point of reference
to launch the Dispose phase.
The depth of knowledge needed in each of these topics is described in a way similar to Bloom's Taxonomy of
Learning for the Cognitive Domain (1956). We distinguish five levels of depth, to be mastered sequentially per topic.
These levels are listed in Illustration 1.3 (see below). Each of the topics mentioned in the previous paragraph can be
attributed one of these levels to form the basis of a good architect.
An architectural element (or just element) is a fundamental piece from which a system can be considered to be
constructed. The nature of an architectural element depends very
much on the type of system and the context you are considering. Programming libraries, subsystems, deployable
software units (e.g., Enterprise JavaBeans and ActiveX controls), hardware components (e.g., firewalls, proxy
servers, routers…), reusable software products (e.g., database management systems), or entire applications may
form architectural elements in an information system, depending on the system being built. Architectural elements are
often known informally as components or modules, but these terms are already widely used with established specific
meanings. For this reason, we deliberately use the term element to avoid confusion.
The externally visible properties of an architectural element are a set of properties that are owned solely by the element
and identify it. Often these properties are expressed as interfaces, contracts, or specifications (functional and
non-functional) of an element (component or module).
The inter-element relationships indicate how the elements are connected to each other. The most common
relationships are expressed as connectivity, dependencies, ownerships, extensions, compositions, aggregations,
usages, communications and/or constraints.
The illustration below lays out the architecture conception framework proposed by IEEE 1471-2000, which is often
used as a common language during the architectural design stage of the software engineering process. We adapt these
concepts to align with industry standards and to attain a common language for architecture-related discussions.
A description of an architecture has to convey its essence and, at the same time, be sufficiently detailed.
In other words, it must provide an overall picture that summarizes the whole system, but it must also decompose
into enough detail that it can be validated and the described system can be built. Achieving
the right balance between the essentials and the detail in an architectural description is a major challenge.
To represent complex systems in a way that is manageable and comprehensible by a range of business and technical
stakeholders, we use the only successful and widely used approach: we partition the architecture description into a
number of separate but interrelated views, each of which describes a separate aspect of the architecture. Collectively,
the views describe the whole system.
An architectural view is a way to portray those aspects or elements of the architecture that are relevant to the
concerns the view intends to address and, by implication, to the stakeholders for whom those concerns are
important. Formally, this is addressed by IEEE 1471, quoted hereunder:
Logical view
In the logical view, the architecture is described using a decomposition of the system into logical layers,
subsystems, and components. This includes not only the internal subsystems,
components, and the relations between them, but also the dependencies on external systems and
components.
Physical view
We discuss all the environments into which the system will be deployed, covering the constraints of
hardware, third-party software, network, and system topology, as well as the cooperation between hardware
and software. Additionally, we stipulate the different protocols used to communicate with the outside world.
Operational view
The operational view of the architecture justifies the correctness and completeness of the architecture
against the operational requirements. Where the previous views describe the tools available for service
management teams, this view describes the procedures needed to perform maintenance, upgrades,
monitoring, etc. This view makes the link with the service management discipline.
As illustrated above, the logical view, implementation view, and physical view cover the system's conceptual model
and describe how it is realized and mapped to the implementation and the operational solution. This answers the
concerns of the stakeholders from the development team, project management, the infrastructure team, and the
maintenance team. The requirements view allows sponsors, as stakeholders, to verify the architecture (introduced by
the logical view) against their concerns. The operational view answers the concerns of the
maintenance and infrastructure teams. By using this view-based approach, we collectively cover the
architecture of the whole system.
This approach was based on the article Philippe Kruchten wrote in 1995, titled Architectural Blueprints - The “4+1”
View Model of Software Architecture.
The most common architecture stakeholders are from the following categories:
Business Users
Project Management
Development and Quality Assurance
Infrastructure and maintenance
Some of these are:
Project/Program Management (PMO)
Sponsors
End users
Development team (technical project leads, architect, developers, analysts, …)
Sparring architect
Infrastructure team
Maintenance team
Quality Assurance team (QA)
A complete stakeholder analysis from different perspectives is needed in any project. A business architecture contains
an elaborate analysis, as does the requirements view in the solution architecture from the perspective of the
technological decisions.
Architects themselves will assume any of three roles in the project: Solution Architect, Sparring Architect or
Architect Reviewer.
The Solution Architect is the main actor in the architect phase. He is the driving force behind designing the system
architecture. He has the following responsibilities:
Participate in the coordination of the functional and technical requirements gathering and analysis.
Validate (accept or reject) the functional and non-functional requirements documents.
Construct the architecture.
Describe the architecture.
Organize the architecture review meeting and present the architecture.
Revise the architecture and architecture document according to the architecture review, if necessary.
Hand over to the technical design phase.
The Sparring Architect is the sounding board for the solution architect during the architect phase. He has the
following responsibilities:
Assist the solution architect and technical analyst by providing suggestions and guidance.
Review and validate the architecture.
Furthermore, the sparring architect is responsible for technical review and sparring with the technical team
during the whole technical development (architect, technical design, and construct).
Report and escalate to the project and competence management.
The Architecture Reviewer is another role involved in the architect phase. The stakeholders of the corporate
architecture team could, for example, fulfill this role. The architecture reviewers have the following responsibilities:
Review the architecture by asking questions and pointing out incorrectness and incompleteness of the
architecture, in order to allow its revision.
Validate the architecture by confirming its sufficiency, correctness, and completeness
from their specific perspective as a stakeholder.
All three of these roles could evidently be performed by a coordinated team of people as well.
Translated to our document, this takes the form of a matrix such as the one in Illustration 3.2, which details one or more of
these levels per relevant stakeholder. If a defined stakeholder has no level to attribute, he should not be included in
the matrix.
Role                 Level of Participation
Business Analyst     R, A
Functional Analyst   R, A
Business Expert      C
Solution Architect   C, I
Illustration 3.2 – Example RACI Matrix
An extended matrix, RACI-VS (where VS stands for Verification and Sign-off), is also sometimes used when the RACI levels of
participation are not sufficient. Not described in the PMBOK® Guide, these levels are reserved for more elaborate
projects needing several levels of accountability (for example at higher CMMI levels). Some common-sense rules to
consider:
For many projects, the Sign-off role might also be assigned to the Accountable person to provide them an
opportunity to ensure project standards are met and processes are followed.
Too many Sign-offs can cause delay as the work product is routed for review.
Verification should be independent where possible (e.g. an architect who created a change shouldn't verify that
change) to ensure the highest quality.
Verification often designates the quality assurance or project scope verification role.
The verification and sign-off roles may be in addition to other roles.
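As a sketch, the common-sense rules above can be expressed as automated consistency checks over a RACI-VS matrix. The dict-based representation below is our own assumption for illustration, not a standard format; the checks encode the usual rule of exactly one Accountable party per deliverable, plus the independence rule for Verification:

```python
# Hypothetical RACI-VS consistency checker (illustrative only; the matrix
# representation is an assumption, not a prescribed format).

def check_raci_vs(matrix):
    """matrix maps deliverable -> {role: set of participation letters}."""
    issues = []
    for deliverable, roles in matrix.items():
        accountable = [r for r, levels in roles.items() if "A" in levels]
        responsible = {r for r, levels in roles.items() if "R" in levels}
        verifiers = {r for r, levels in roles.items() if "V" in levels}
        # Exactly one Accountable party per deliverable.
        if len(accountable) != 1:
            issues.append(f"{deliverable}: expected 1 Accountable, found {len(accountable)}")
        # Verification should be independent of the Responsible party.
        if verifiers & responsible:
            issues.append(f"{deliverable}: verifier is also responsible: {sorted(verifiers & responsible)}")
    return issues

matrix = {
    "Requirements document": {
        "Business Analyst": {"R", "A"},
        "Solution Architect": {"C", "V"},
        "Sponsor": {"S"},  # Sign-off
    },
    "Architecture document": {
        "Solution Architect": {"R", "A", "V"},  # verifies own work: flagged
        "Sparring Architect": {"C"},
    },
}
print(check_raci_vs(matrix))
```

Running the sketch flags only the second deliverable, where the solution architect would verify his own work.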
In fact, a plethora of variants on this approach exists, each identifying one or more additional levels that might need
stipulating. Some of these are CAIRO, which adds an "Out-of-the-Loop" level; RASCI, which adds a "Support"
level; and DACI, which stands for Driver, Approver, Contributors, and Informed.
Other ways of indicating stakeholder interest include the Responsibility Assignment Matrix (RAM) and the Linear
Responsibility Chart (LRC). We won't go into detail about these methods.
[Illustration: stakeholder viewpoints on the architecture – Rationale (business), Control & Validation (project management), Guidelines & Constraints (development), and Mapping (infrastructure & maintenance)]
From the business viewpoint, the Architecture is a rationale of how a solution realizes the requirements. The
architecture should reassure the recipients of the solution that their needs are met.
From the project management viewpoint, the Architecture is a control mechanism to validate whether a system of
such structure is sufficient, feasible, efficient, maintainable, and risk-free (meaning that all risks have been mitigated),
and whether it promises a system of high quality.
From the development viewpoint, the Architecture is about the guidelines and constraints on the technical design
of the system elements and relationships between these elements, and how the structure of the system is constructed
with these elements.
From the infrastructure and maintenance viewpoint, the Architecture explains how the system is mapped to the
infrastructure and how the system is operated and maintained. The architecture is often used to understand how the
infrastructure and operation process shall be adapted and applied to support the system.
Clearly, all stakeholders need a common reference that illustrates the solution's architecture from each of their
viewpoints, while contributing to the architecture as a whole. Hence, the book Software Architecture in Practice, Second
Edition (Addison-Wesley) summarizes the purpose of documenting architecture as follows:
Afterwards, these project level capabilities and standards might become enterprise scoped by establishing the
proper governance surrounding them. This is normally done as part of an IT roadmap for an organization.
The first activities of the architecture effort focus mainly on gathering all relevant requirements that might influence
the design. During the Requirements Gathering effort of the Plan Phase, and their elaboration during the Build Phase,
an Architect will perform the following tasks:
The architecture construction starts with gathering and analyzing the system requirements. These
requirements can be divided into functional and non-functional requirements, where the latter can be further divided
into technical requirements (such as integration, quality, and infrastructure requirements) and operational requirements
(such as documentation, training, and managed services requirements), transforming them into measurable
statements.
The application architect works together with domain specialists, both business and technical, to guide and
constrain the business and technical analyses from a technical perspective, and assists the analysts
by informing them of technical information and possibilities.
In this step, the business knowledge is acquired; high level business and technical requirements are
produced. Meanwhile, the application architect constructs the high level architecture (structure of the
architecture elements).
These requirements do not only feed the technical effort, but other disciplines such as business architecture
and testing as well.
This high level architecture is written down to the architecture document version 1.
For an outsourced project, the first version of a Solution Architecture (and its corresponding document) could even
be constructed before the Plan phase, during a presales effort, when a first set of requirements becomes known and
an attempt at setting the scope ensues as part of a request for information (RFI). However, in the project lifecycle, the
first architecture version of some maturity appears after the initial activities of the Build phase, when we
have a first attempt at an architecture design. As with most deliverables of a project, the solution architecture
document will mature well into the Operate phase, and even into the Dispose phase, with activities to keep the
documentation up to date with reality.
The resulting first version of the Solution Architecture will be reassessed several times during the architecture effort.
Each time new requirements are detected, or new constraints are introduced, the architecture needs to go through a
cycle of validation of the new requirements/constraints, which feeds into a new version of the architecture design,
followed by an architecture presentation and review event, as shown in illustration 5.7. In most cases, these new
requirements are derived from proofs of concept (POC), which have been executed following an earlier version of the
architecture and have exposed gaps in the solution. The following activities will be undertaken by the architect to
achieve an increasingly mature architecture document:
Based on the previous architecture version, the detailed analysis is executed by the analysts. The
application architect further has the responsibility to streamline the correlation between this detailed analysis
and his architecture. The application architect provides technical information to the analysts, and influences and
aids the analysts' decision making.
For some areas we will go into deeper detail in the following chapters, but the main concept of Project Management
is called the "Triple Constraint", formed by the competing forces of time, cost, and scope. Where the project manager
stands guard over the budget (time and money), the solution architect guards the scope. The tension
between the two results in a quality delivery. Almost all knowledge areas affect at least one aspect of this, either
through the estimates given by the solution architect or through the constraints put on scope for budgetary reasons by the project
manager.
1
A Guide to the Project Management Body of Knowledge (PMBOK Guide) – Project Management Institute
Every project finds its genesis in a set of business needs. An organization has a strategy (or mission) with which it
approaches the market to claim its stake. This strategy translates into goals it needs to attain to make that strategy
successful, and these goals in turn are the drivers for the business needs. Business needs, in turn, generate the drive for a
project. The process of eliciting these needs and translating them into requirements, so that they become measurable
and can be validated, is called requirements analysis.
As with any other analysis exercise involving interaction with people, the architect comes into contact with a
wide range of opinions and perspectives from people who think they know what is required, and with a wide range of
confusion and vagueness from those who don't know what they want. It is imperative to cut through the assumptions
and attain accurate and relevant information. One of the methods for achieving this is Critical Thinking, pioneered by
John Dewey in the early 1900s.
As shown in the illustration, the Critical Thinking Method tries to validate all data presented to the architect to
determine its accuracy, bias and relevance to put the information we extract into context. However, in order to form
the proper conclusions, we need to weigh the information against those volumes of data not presented (in the form of
assumptions), and either justify them or discount them. The skills needed to aptly perform this thinking exercise are
the following:
1. Recognize problems
2. Understand the importance of prioritization and order of precedence in problem solving
3. Gather relevant information
4. Recognize and question unstated assumptions and values
5. Comprehend and use language with accuracy and clarity
6. Interpret data to appraise evidence and evaluate arguments
7. Recognize the existence (or non-existence) of logical relationships between propositions
8. Draw warranted conclusions and generalizations
9. Put to test the conclusions
10. Adjust one's beliefs on the basis of wider experience
Next, in the Specification phase we formally structure the list of requirements, using different modelling
techniques, into comprehensible, clear, precise, unambiguous, atomic, complete, testable, and maintainable
requirements; in other words, we create SMART requirements. These requirements will be elaborated upon in the Build phase of
the project. Techniques employed here include the Unified Modeling Language (UML), Business Process Modeling
Notation (BPMN), Semantics of Business Vocabulary and Rules (SBVR), Domain Specific Languages (DSL), etc.
Finally we validate the list of requirements with the stakeholders through such techniques as reviews, prototyping,
simulations, etc. in order to receive a formal signoff on the scope of the project. This is best done through a formalized
entity (for example a Change Advisory Board) that meets at regular intervals to assess the status of all open or
changed requirements, and then formalizes the decisions made. If an issue arises with the requirements, and the
periodicity of the board is not sufficient to tackle this issue, an emergency convocation should be possible.
The tasks of this board are to evaluate requirements and their associated assumptions and risks as they arise
during the course of the project. The individual evaluations are largely an organic activity, so no real procedural
steps can be given. However, there are some guidelines that can be specified:
When a requirement has no stakeholders that demand it, it is not needed and should be set to that status
until a stakeholder re-emerges who demands it.
When a requirement has no acceptance criteria, it cannot be verified to have been fulfilled. It should be put
in the 'Not Needed' status as long as it is not verifiable.
When a new requirement is added to the list, a determination of its category (and if possible subcategory)
should be made to simplify the task of verifying whether or not it is a duplicate.
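The first two guidelines above lend themselves to a simple automated triage pass over the requirements list. In the sketch below, the field names and statuses are our own assumptions for illustration, not a prescribed schema:

```python
# Illustrative triage of a requirements list against the board guidelines.
# The requirement schema (dicts with these keys) is assumed for the example.

def triage(requirements):
    for req in requirements:
        if not req.get("stakeholders"):
            # No stakeholder demands it: park it until one re-emerges.
            req["status"] = "Not Needed"
        elif not req.get("acceptance_criteria"):
            # A requirement without acceptance criteria is unverifiable.
            req["status"] = "Not Needed"
        else:
            req.setdefault("status", "Open")
    return requirements

reqs = triage([
    {"id": "R1", "category": "usability", "stakeholders": ["End users"],
     "acceptance_criteria": ["Page loads in under 2 s"]},
    {"id": "R2", "category": "security", "stakeholders": [],
     "acceptance_criteria": ["Passwords are hashed"]},
    {"id": "R3", "category": "security", "stakeholders": ["Sponsor"],
     "acceptance_criteria": []},
])
```

A pass like this does not replace the board's judgment; it merely pre-sorts the list so the board can spend its time on the genuinely contentious items.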
Beware, there is a difference between project risks (on which we are focusing with this document) and enterprise
risks. Those risks play in categories more closely linked to the strategy and goals of the organization, as shown in the
illustration. These risks are governed by other processes, for example using COBIT.
Keep in mind that the success of the presales inquiry maps directly onto the ability to work out the entire
business context. Not only are the requirements (both functional and non-functional) of great import; a good
understanding of the business and project context, as well as of the interaction between stakeholders, also contributes to the
overall success of the Architectural Phase and ultimately the success of the project itself.
Since the ISO standard focuses on quality and acceptance of the delivered product, the same model should serve as the
basis for composing requirements as well. As a result, the quality attributes can be considered the non-functional
requirements of the product, focusing on expectations about functionality, usability, reliability, efficiency, maintainability,
and portability. However, in order to make the factors more complete and representative, each of the characteristics
has been augmented with its own set of sub-characteristics.
Note however that not all of the characteristics are as important to every project. Some of them can be neglected or
be given a very low priority. Hence, the priority indication is very important in the specification for every requirement.
Some older architecture documents make a reference to the predecessor of this standard, namely ISO 9126. For an
overview of what has changed between the two standards, see the appendix.
2
The Standish CHAOS Report: a 2009 study on the success rate of projects in the IT industry
The Quality in Use Model characterizes the impact that the product (system or software product) has on stakeholders.
It is determined by the quality of the software, hardware and operating environment, and the characteristics of the
users, tasks and social environment. All these factors contribute to the quality in use of the system. Most of these
requirements are however only applicable in very specific cases.
We accompany this diagram with a description of the solution or, in the case of complex systems, with a description
of each of the individual components. This description contains the purpose of the component, as well as any
technology-agnostic particularities that might impact our choice of technological components later on. Just as complex
systems warrant descriptions for each component, they also require technical designs to be drawn up for each of
these components, as well as possible governance approaches, should these components be subject to change
behavior different from that of the overall solution. An example is when versioning of web services in the task layer can happen
without the need to redeploy the other components.
Data is the representation of facts (e.g., text, numbers, images, sound, and video) for a particular subject and can be
used as a basis for discussion, reasoning, measuring, and calculation. When it comes to IT, data tends to have its
context and relationships held within a single IT system, and therefore can be seen as raw or unorganized. Data can
also be seen as the building blocks for further analysis and investigation beyond a particular subject.
Information is quality data that has been organized within a context of increased understanding and timely access.
Peter Drucker stated that "Information is data endowed with relevance and purpose." This relevance and purpose is
achieved by subjecting data from one or more sources to processes such as acquisition, enrichment, aggregation,
filtering, and analysis, to add value and meaning within a certain context. Therefore information must have meaning
to the business in a language and format that is understood, easily consumed, and is accessible in a timely manner.
Knowledge is the insight and intelligence achieved from formal and informal experience, such as intuitive
exploration and consumption of information. Knowledge can be seen as actionable information, as it enables
stakeholders to act in an informed manner when they fully understand its context (e.g., via interpretation of patterns,
trends, and comparisons). Wisdom enters the realm of predictive action: it becomes a tool and source for strategic
considerations and predicts the success of a chosen course. This is typically accompanied
by Big Data initiatives.
Repository structures can also occur when a system needs storage of data based on its classification or use.
For example, this occurs when we work with BPMS setups that keep data on stateful processes (process data),
which becomes irrelevant after the process has ended. This type of data follows different rules for archiving, privacy,
security, etc., and can therefore be placed in a different repository so these requirements are more easily realized.
The repository descriptions should state their purpose and elaborate on how their information is structured, as well as
which information classifications they contain. Examples of such structures are normalized diagrams, ORM-optimized
schemas, star schemas, and big data constructs such as Nathan Marz's Lambda architecture.
In cases where different components take ownership of a specific part of the data model, encapsulating it from the
rest of the solution and exposing it through an interface, this matrix can become quite extensive. Thomas Erl's
design pattern of an Entity Service Layer in a Service-Oriented Architecture is an obvious example
of such an approach.
As indicated in the table above, this ownership need not necessarily reside within the solution; it can be opened up
to the solution by means of communication with a partner system, perhaps not even under the control of the
organization the solution will be deployed in. An example can be found in the Belgian public sector, where
information such as organizational data can be fetched from the authentic source for that type of data, in this case
the Crossroads Bank for Enterprises (CBE).
[Illustration 7.3.4 – Testing Approach: functional, load, stress, capacity, standards, code quality, code coverage, review, and security testing]
The purpose of writing unit tests is to ensure that the developer dry-runs his code sufficiently to verify the desired behavior.
Design patterns such as interfaces, dependency injection, inversion of control, and the like make code more testable.
Two different methods of unit testing can be applied. Black Box (BB) testing is a method of unit testing where we
access the code through its public interfaces without paying attention to its inner workings; this can be considered
specification testing. This method is ideal for verifying the functional and technical analysis documentation of the
component, but it may leave some of the code branches within unchecked. White Box (WB)
testing is a method where we target pieces of a specific component. There is a direct link with the code within the
component, yielding larger coverage of the code but necessitating more effort to keep automated tests in line with
code evolution. Some metrics we can test for during unit testing are shown in the table below.
Type of Test              Cat  Purpose
Code Coverage             WB   Verify that all lines of code are called in at least one test, in order to detect possible dead code.
Branch Coverage           WB   Verify that all permutations of a decision gateway are tested.
Error Guessing            WB   Insert possible values into the function in order to detect possible bugs and/or common mistakes, such as causing a division by zero.
Input Boundary Testing    BB   Enter into the input parameters the minimum value, the maximum value, and a value somewhere between the two, to test extreme cases.
Output Boundary Testing   BB   Devise tests that cause the return parameters to take the maximum, minimum, and a medium value, to determine whether they are correctly typed.
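As an illustration of black-box input boundary testing, the sketch below exercises a hypothetical discount function (our own example, not taken from any project) at the minimum, maximum, and a mid-range value of its documented input range, plus one value just outside it:

```python
import unittest

def discount_rate(order_total):
    """Hypothetical function under test: 0..10000 is the documented input range."""
    if order_total < 0 or order_total > 10000:
        raise ValueError("order_total out of range")
    return 0.10 if order_total >= 5000 else 0.0

class InputBoundaryTest(unittest.TestCase):
    # Black-box: we only use the public interface and its documented range.
    def test_minimum(self):
        self.assertEqual(discount_rate(0), 0.0)

    def test_maximum(self):
        self.assertEqual(discount_rate(10000), 0.10)

    def test_mid_range(self):
        self.assertEqual(discount_rate(5000), 0.10)

    def test_below_minimum_rejected(self):
        with self.assertRaises(ValueError):
            discount_rate(-1)
```

A test case like this can be run with `python -m unittest`; note that it verifies the specification without any knowledge of the branches inside the function.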
We could also consider more esoteric tests, such as Lack of Cohesion of Methods (LCOM4) tests, code duplication
tests, circular dependency tests, and by extension package tangle tests to determine how many circular dependencies
exist between different packages. This list is certainly not exhaustive. There could even be a need to introduce tests
that check whether the architectural guidelines are followed. For example, tests that verify the naming convention of
parameters or the existence of security coding for each function could be added to the test runs.
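A test that enforces an architectural guideline can be as simple as a reflection pass over the codebase. The sketch below (the snake_case rule and the demo module are our own illustrative assumptions) flags public function parameters that violate a naming convention:

```python
import inspect
import re
import types

SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*$")

def naming_violations(module):
    """Return (function, parameter) pairs whose names break the convention."""
    violations = []
    for name, func in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # private helpers are out of scope for the guideline
        for param in inspect.signature(func).parameters:
            if not SNAKE_CASE.match(param):
                violations.append((name, param))
    return violations

# Illustrative module with one offending parameter name.
demo = types.ModuleType("demo")
exec("def transfer(account_id, TargetAccount): pass", demo.__dict__)
print(naming_violations(demo))
```

Wired into the regression test battery, such a check turns an architectural guideline from a document statement into something that fails the build when violated.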
When we assert the proper interaction between two or more components, we use integration tests. These tests
verify the interfaces between these components or, when these interfaces are not yet known, test the assumptions
made during the implementation process. This type of testing typically forgoes the use of stubs or mocks for any
component within our own solution.
Having a structured approach to regression testing is a factor in the successful delivery of any software. The
regression test battery is a collection of tests used specifically to verify whether new releases of the code still
satisfy the requirements, and therefore the expected behavior, of said code. Also, with all automated tests, one axiom holds
As with requirements gathering, the later in the project lifecycle we detect bugs, the more expensive it becomes to fix
them. As shown in the infographic above, this is also determined by the type of development the project team chose
to adopt for the project.
Bear in mind that software testing eventually faces a point of diminishing returns. At some point, after the most obvious
and easy-to-find bugs are located, the only bugs left are, by definition, those that are harder to find. The rate at which
An important point to address is which parts of your artifacts are environment agnostic, meaning they do not
change based on the environment they are deployed in, and which artifacts are influenced by it. A best practice
is to make sure most of your artifacts fall in the former category. The latter is populated by configuration files, and
data scripts with dependencies on live data.
The first consideration to make is the decision between building the component and buying an existing solution. In
other words, do solutions exist that cover all (or at least most) of the needs of the project, or are the workings of the
organization so distinct that a custom fit is needed? When deciding on a software package, some principles should be
upheld in order to secure a sound choice:
1. Buying a package means joining a network. Although the customers of such a package normally don’t even
know each other, there is nevertheless a common interest in keeping the product alive and well. The continued
evolution of the product is in the best interest of its buyers. The training and education of the users of the product
also adds to the investment an organization makes in the bought product. To a large extent, this investment represents
a sunk cost, requiring some level of risk mitigation (such as considering the implications for the dispose phase).
2. Always take a long-term perspective. As the lifespan of most hardware and software decreases to only a few
years due to the rapid developments in the IT landscape, the data in these systems is more durable than the
tools handling it. An overview of the evolution of the product therefore yields insight into the stability of its inner
workings.
3. Safety in numbers is a saying that also applies to package vendors. How viable is the company that
holds the intellectual property of the product? The more customers (and thus the higher the market share), the more
likely it is the vendor will remain in business. Vendors losing market share (usually due to inadequate technology)
are sometimes plagued by the need for rapid development in a last-ditch effort to stave off the inevitable
replacement.
4. Beware of vendors veering away from standards. Compatibility with other software depends on this, and
every time a piece of proprietary technology is added to the product, the switching cost for this product becomes
higher. The proprietary variant may even become a de facto standard, dethroning the open standard or diminishing
it so that the community around it bears the switching cost.
5. Choose a package with the right type of standardization for the organization. The types of standardization are
the following:
Standardization of the user interface: A common strategy to reduce the need for user training.
Standardization of input/output: Greatly enhances the facility of connecting with other software or partners.
Accompanying this rationale, the physical characteristics of the environments are listed. These comprise three
distinct parts. The first part of the physical characteristics is the hardware specification, listing how many CPUs
the environment has, the size of the memory, the physical storage media… The second part is the middleware setup. Here
we specify what the setup looks like for each of the environments (as in Illustration 7.4.2).
The description of the middleware should also cover a comprehensive overview of all ports that have a significance
in the environment, in order to properly set up facilities such as firewalls and load balancers. This overview
can also serve to verify that no conflicts are generated by the different components within our solution.
Finally, we elaborate on the configuration of the database, for example the calculation of the initial size and
the yearly extent, as demonstrated in Illustration 7.4.4. Other calculations to this effect include the load the database
will be subjected to at different times, as well as the number of concurrent transactions it needs to be able to handle in
order to guarantee the demanded response times.
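A back-of-the-envelope sizing calculation of the kind shown in Illustration 7.4.4 can be sketched as follows. The table names, row counts, row sizes and growth rate are illustrative assumptions, not figures from the text.

```python
# Database sizing sketch: initial size from row counts and average row
# sizes, yearly extent projected as compound growth.
def initial_size_mb(tables: dict) -> float:
    """tables maps a table name to (row_count, avg_row_bytes)."""
    total_bytes = sum(rows * row_bytes for rows, row_bytes in tables.values())
    return total_bytes / 1024**2

def size_after_years(initial_mb: float, yearly_growth: float, years: int) -> float:
    """Project the database size after the given number of years."""
    return initial_mb * (1 + yearly_growth) ** years

# Hypothetical data model: 100k customers at 512 bytes/row,
# 1M orders at 256 bytes/row.
tables = {"customer": (100_000, 512), "order": (1_000_000, 256)}
size0 = initial_size_mb(tables)           # roughly 293 MB initially
size3 = size_after_years(size0, 0.20, 3)  # three years at 20% yearly growth
assert size3 > size0
```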
The first level (availability) involves the monitoring of basic system resources such as CPU, memory or physical disk
space, as well as availability checks of the system and the application level (pings, custom availability checks…). The
next level (stability) ensures that our environment is working properly. These metrics are usually drawn from a
management interface, such as JMX and JSR-77 in the Java world or PerfMon counters for .NET. The
idea is to verify whether our environment is behaving as intended.
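A level-1 availability check can be sketched with nothing but the standard library. The thresholds, host and port are illustrative assumptions; real monitoring would normally be delegated to a dedicated tool.

```python
import shutil
import socket

# Availability monitoring sketch: physical disk space and a basic
# "ping"-style reachability check on an application port.
def disk_space_ok(path: str = "/", min_free_ratio: float = 0.10) -> bool:
    """Alert when less than 10% of the disk space under path is free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_ratio

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True when a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In practice such checks run on a schedule and feed an alerting pipeline; the functions above only show the shape of the probes themselves.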
The first two levels don’t concern themselves with the user perspective of the application. This is where level 3
(serviceability) comes into the picture. This level is addressed by application monitoring, where metrics are provided
at the application level informing us of the individual transactions of users. We can learn for example the response
times of the application from monitoring at this level.
The next level (SLA compliance) is used to ensure the level of performance needed by the users of the application.
This involves end-user monitoring to determine the “feel” of the performance with our user base. End-user experience
is the performance perceived by the actual user at a specific point in time. As we handle SLAs at this level, a
methodology for resolving violations of any SLA can be envisioned in this phase. To solve such a violation, advanced
metrics such as component-level information (sometimes down to the coding level) are required. This could include
database calls, web service calls… At this level, we can also set up provisions for continuous monitoring of these
metrics to enable proactive maintenance.
The final level (proactive performance management) strives for maximum performance with minimal resource
usage. This is an iterative process that never ends, and it is more a strategic consideration than a technical one: we
determine the next performance drain, and try to fix it.
A technical architecture is an architecture where the concepts and best practices of the architecture are predominantly
determined by the technology behind them. Examples of this category are Service Oriented Architectures, Event
Driven Architectures, and the like. Industry architectures are the other side of the coin, and are almost completely
dominated by how the industry to which they apply is structured. We have for example telecom architectures, financial
architectures, public sector architectures, and so on. The final category is the alignment architectures:
architectures that, although they still have strong technological ties, likewise incorporate industry concepts.
These are the BPM architectures, the CRM architectures, the ERP architectures…
It is very unlikely that a single architecture type will be able to cover all requirements set out for any solution. Most
solutions have several applicable architecture types. In these cases, it is imperative that a hierarchy be made clear
between these choices, as indicated in the chapter about design constraints. This way it will become easier later on
to decide which best practice to follow, should the best practices of the applicable architecture contradict each other.
Another piece of the global technical design is the domain specification. In this chapter we go into detail about the
Information Aspect of the Logical View. We translate the data model from the business architecture and functional
analyses into an Entity Relationship Diagram, and complete this description with guidelines for data validation,
possible data transformations due to technical processes, and the technical implementation of the auditing
requirements. As with the context diagram descriptions, these parts can also be described using a reference to a
document containing this information.
The third factor in a technical design is the overview of the non-functional and cross-cutting concerns. Typically, we
discuss the guidelines that are ubiquitous in all components of the architecture, and how we implement these
concerns. Think of concerns such as internationalization (I18N), transaction management, exception handling,
logging, security…
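One common way to keep such concerns ubiquitous yet out of the individual components is to centralize them, for instance in a decorator. The sketch below shows logging and exception handling applied uniformly; the function `transfer` and the logger name are illustrative assumptions.

```python
import functools
import logging

# Cross-cutting concern sketch: logging and exception handling are
# implemented once and applied to any component function, instead of
# being re-coded inside each function body.
logger = logging.getLogger("solution")

def with_logging(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.info("entering %s", func.__name__)
        try:
            return func(*args, **kwargs)
        except Exception:
            logger.exception("unhandled error in %s", func.__name__)
            raise  # re-raise so callers still see the failure
    return wrapper

@with_logging
def transfer(amount: float) -> float:
    """A hypothetical business operation; the concern wraps around it."""
    return amount
```

The same idea generalizes to transaction management or security checks; the point is that the concern lives in one place and the guideline is enforced by construction.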
Finally, we describe the build process: how we construct the artifacts for deployment, as well as where to find the
configurations of each of the environments. In build scenarios, always aim to make all artifacts environment
independent, so that artifacts need not be built specifically for a target environment. This will make your build more
robust and less error-prone. Do not forget to highlight how this build process interfaces with your application lifecycle
management approach, as described in the Implementation View of the architecture.
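The environment-independence principle above can be sketched as follows: the artifact is built once, and everything environment-specific is resolved at run time from configuration selected by an environment variable. The variable name, environment names and connection strings are illustrative assumptions.

```python
import os

# Environment-agnostic artifact sketch: the same build runs everywhere;
# only the configuration it reads differs per environment.
def load_config(env: str = "") -> dict:
    """Pick the configuration for the current environment.

    Falls back to the hypothetical APP_ENV variable, defaulting to dev.
    In a real deployment these settings would live in external files
    shipped alongside (not inside) the artifact.
    """
    env = env or os.environ.get("APP_ENV", "dev")
    configs = {
        "dev":  {"db_url": "jdbc:h2:mem:test"},
        "prod": {"db_url": "jdbc:postgresql://db.internal/app"},
    }
    return configs[env]

# The artifact's code never changes; only the selected config does.
assert load_config("dev") != load_config("prod")
```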
The International Standard ISO/IEC 25010 revises ISO/IEC 9126-1:2001, and incorporates the same
software quality characteristics with some amendments:
Context coverage has been added as a quality in use characteristic, with subcharacteristics context
completeness and flexibility.
Security has been added as a characteristic, rather than a subcharacteristic of functionality, with
subcharacteristics confidentiality, integrity, non-repudiation, accountability and authenticity.
Compatibility (including interoperability and co-existence) has been added as a characteristic.
The following subcharacteristics have been added to existing product quality characteristics:
functional completeness, capacity, user error protection, accessibility, availability, modularity and
reusability.
Compliance with standards or regulations, which appeared as subcharacteristics in ISO/IEC 9126-1, is now
outside the scope of the quality model, as it can be identified as part of the requirements for a system.
When appropriate, generic definitions have been adopted to extend the scope to computer systems,
rather than using software-specific definitions.
To comply with ISO/IEC Directives, definitions have been based on existing ISO/IEC standards when possible,
and terms defined in this International Standard have been worded to represent the general meaning
of the term.
Several characteristics and subcharacteristics have been given more accurate names.
The following table lists the differences between the characteristics and subcharacteristics in the new
International Standard, and ISO/IEC 9126-1:2001: