
Technical Architect Overview

Architectures are both the blueprint for and the end-result technical underpinnings of a solution, excluding business logic - i.e., blueprints + design + code.

Architecture Layers

Architecture Layers provide the formal structure required to build a software application. They begin at the lowest level with the "boxes and wires" of the infrastructure architecture, move up through the software services (e.g., logging) provided by the technical architecture, and integrate those services into a structured application architecture on top of which developers build applications and business systems.

  1. Business Systems
  2. Application Architecture
  3. Technical Architecture
  4. Infrastructure Architecture

Business Systems

Business Systems contain the business logic and applications to support business processes. Business Systems can be custom-built or packaged solutions. They use the common services and overall structure and capabilities provided by the application architecture to fulfill their functionality, but typically involve multiple applications and application styles.

Application Architecture

The Application Architecture is responsible for managing the interaction between the Technical Architecture and the application itself. As such, it frames, constrains and supports the generic and non-functional requirements that occur throughout the application. The Application Architecture provides a key opportunity for reuse and consistency, helping to simplify and accelerate the delivery of the application itself.

Typically, the Application Architecture will provide:
  • Patterns and approaches to solve problems, e.g., defining how the Technical Architecture will be used by the Business Application, and potentially providing components that wrap the Technical Architecture in order to make it specific and applicable to the particular business application. This is important as it ensures consistency within the Business Application and for the Application Designers and Developers. These patterns and approaches may extend to code templates and frameworks that direct and constrain the application developers.
  • Additional support for non-functional requirements, utilizing the Technical Architecture. For instance, while the Technical Architecture may define a logging component, the Application Architecture may provide the supporting and contextual information that enables logging to fulfill the defined non-functional requirements (thus acting as the bridge between the Technical Architecture and the Business Application).
  • Potentially reusable components that are utilized throughout the Business Application. For instance, if a particular UI component is identified during the requirements review that occurs throughout the application, then it may be a candidate to be included within the Application Architecture rather than being developed on an ad hoc basis within the Business Application.
  • When the architecture is made up of multiple systems, the Application Architecture will provide guidance as to what business functionality should reside in which system and how the interactions between these systems should be controlled and designed.

Generally, the Application Architecture will be much more tailored to the particular requirements of the Business Application than the Technical Architecture. This should be kept in mind when determining whether an architecture component should be implemented at the Technical or Application Architecture layer. The size and composition of the Application Architecture will vary from system to system, and may be more of a set of approaches that are understood and used by the designers and developers than a set of implemented components.

When designing the Application Architecture, the Architect needs to:
  • be familiar with the business requirements (both functional and non-functional), and
  • look for opportunities to solve problems and provide common functionality so as to simplify (and accelerate) the work of the application designers and developers.

The applications architecture is specified on the basis of business requirements. This involves defining the interaction between application packages, databases, and middleware systems in terms of functional coverage. This helps identify any integration problems or gaps in functional coverage. A migration plan can then be drawn up for systems which are at the end of the software life cycle or which have inherent technological risks.

Applications Architecture means managing how multiple applications are poised to work together. It is different from software architecture, which deals with design concerns of one application.

Technical Architecture

Technical Architecture defines the middleware, technical services and system software that are required to support one or more application styles. It comprises the Development, Execution, and Operations Architectures.

Development Architecture

Development architecture includes three major areas:
  • The development life cycle and processes used to build business applications
  • The application models that show the appropriate technical design that will best fit the business requirements
  • The inventory and categorization of the business applications that exist within the organization today. (In some companies this area is called application architecture, but here it is included in the definition of the broader development architecture.)

Project management

Project management is about planning, organizing, securing, managing, leading, and controlling resources to achieve specific goals. A project is a temporary endeavor with a defined beginning and end (usually time-constrained, and often constrained by funding or deliverables), undertaken to meet unique goals and objectives, typically to bring about beneficial change or added value. The temporary nature of projects stands in contrast with business as usual (or operations), which are repetitive, permanent, or semi-permanent functional activities to produce products or services.

In practice, the management of these two systems is often quite different, and as such requires the development of distinct technical skills and management strategies. The primary challenge of project management is to achieve all of the project goals and objectives while honoring the preconceived constraints. Typical constraints are scope, time, and budget. The secondary—and more ambitious—challenge is to optimize the allocation of necessary inputs and integrate them to meet pre-defined objectives.

Monitoring and controlling includes:
  • Measuring the ongoing project activities ('where we are');
  • Monitoring the project variables (cost, effort, scope, etc.) against the project management plan and the project performance baseline (where we should be);
  • Identifying corrective actions to address issues and risks properly ('how can we get back on track');
  • Influencing the factors that could circumvent integrated change control so only approved changes are implemented.
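The "where we are" versus "where we should be" comparison above can be made concrete with earned-value figures. The sketch below is illustrative only: earned-value management is not named in the text, and the project numbers are hypothetical.

```python
# Earned-value sketch: compare where we are against where we should be.
# PV = planned value (budgeted cost of work scheduled)
# EV = earned value  (budgeted cost of work actually performed)
# AC = actual cost   (what the performed work actually cost)

def variances(pv: float, ev: float, ac: float) -> dict:
    """Return the standard earned-value variance and index figures."""
    return {
        "cost_variance": ev - ac,        # negative => over budget
        "schedule_variance": ev - pv,    # negative => behind schedule
        "cpi": ev / ac,                  # cost performance index (<1 is bad)
        "spi": ev / pv,                  # schedule performance index (<1 is bad)
    }

# Hypothetical project: 100k planned, 80k of work completed, 90k spent.
status = variances(pv=100_000, ev=80_000, ac=90_000)
print(status["cost_variance"])      # -10000: over budget
print(status["schedule_variance"])  # -20000: behind schedule
```

Negative variances (or indices below 1.0) are exactly the trigger for the corrective actions listed above.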

Solution Planning & Agile Methods

Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, and a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes foreseen interactions throughout the development cycle.


Requirement Management

Requirements management, requirements definition, and requirements engineering practices are the cornerstones of project success. Tools such as IBM Rational Requirements Composer support these practices.

Quality Management

  1. The aim of Software Quality Management (SQM) is to manage the quality of software and of its development process.
  2. A quality product is one which meets its requirements and satisfies the user.
  3. A quality culture is an organizational environment where quality is viewed as everyone’s responsibility.
SQA (Software Quality Assurance) is an organizational quality guide of:
  • Standards, regulations, and procedures to produce, verify, evaluate and confirm work products during the software development lifecycle
  • Incorporated knowledge base of best practices
  • Off-the-shelf software tools selected to apply the above
Common quality issues, each followed by the solution capabilities and benefits that address it:

Process Inefficiency and Waste
Quality and processes are linked inextricably. Good, enforceable process leads to good quality. Automation also plays a major role in reducing the cost and time needed to achieve better quality.
  • End-to-end process automation and enforcement
  • Comprehensive metrics dashboards identify bottlenecks
  • Lean and Agile concepts drive out waste

Late-Cycle Rework
Errors introduced during requirements definition early in the lifecycle produce as much as 40% of the rework on the average project. Quality must be "baked in" from the beginning.
  • Improved capture of accurate and complete requirements early in the development cycle
  • Immediate propagation of changes upstream AND downstream
  • Better collaboration and communication with a single source of truth

Inability to Manage Growing Product Lines
When each variant is treated as a unique product, change management, defect management, and validation must be duplicated across products. This can create many quality issues as resources are spread thin.
  • Advanced reuse of all lifecycle assets, including features, requirements, code, test cases, and test results
  • Ability to perform rapid impact analysis across all product variants and product lines
  • Automated propagation of changes and fixes across applicable variants ensures quality across product lines

Disconnected Tools and Repositories
Data redundancy, siloed processes, and disconnected tools lead to significant manual intervention, and with that comes a high percentage of preventable errors.
  • Single platform for managing all activities and assets
  • Enterprise integration gateway to ensure assets are accessible and relatable while reducing data redundancy


Testing in this context typically covers:
  1. Functional Testing
  2. Integration Testing
  3. Performance Testing

Execution Architecture

The execution architecture decomposes the system in terms of concurrent elements and hardware elements. This module explains execution architecture, and examines concurrent subsystems, concurrent objects, and deployment.
  1. Focuses on the system's runtime structure
  2. Covers hardware elements, subsystems, processes, and threads
  3. Suited for examining quality attributes, most notably runtime attributes (e.g., performance, security, usability, scalability)
  4. Like the conceptual architecture, comprised of components and connectors

Error/Resume Handling & Notification

Signal Handling


Configuration Management

Configuration management (CM) is a process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.

Thread Management



Batch Services

Operation Architecture

Like all other architectures, the operations architecture must run through the 10 steps to complete its design. The first step is the vision; in this case, it should be "the weight of the world on our shoulders." While you might think this motto is depressing -- everyone relies on you, and only you, to do everything -- it is, in fact, quite positive. Operations teams are heroes that everyone can rely on to make things run smoothly. That is because, like Atlas of old, the operations architecture supports all other architectures. If the operations teams perform their work properly, business runs smoothly and IT becomes an enabler instead of a show stopper.

This is the true purpose of IT. So, how do you get there? One of the first aspects to work on in this architecture is team roles. This means focusing on step 5: identifying the principal actors in the architecture. Operations teams consist of personnel who play several different roles in the organization. Depending on the organization's size, the same person may be playing several different roles, but it is still important to identify the roles that are required to ensure that nothing is missed. Organizations that have successfully transitioned to enterprise architecture -- one that relies fully on proper operations architecture -- have introduced the following team roles:

  1. Project management
  2. Process management
  3. Business relationship management (BRM)
  4. Technical architecture
  5. Desktop operations
  6. Server operations
  7. Network operations
  8. Security operations
  9. Services administration and coordination

Backup & Disaster Recovery (F)

Disaster recovery is the process, policies and procedures related to preparing for recovery or continuation of technology infrastructure critical to an organization after a natural or human-induced disaster. Disaster recovery is a subset of business continuity. While business continuity involves planning for keeping all aspects of a business functioning in the midst of disruptive events, disaster recovery focuses on the IT or technology systems that support business functions.

Performance & Usage Management (F)

  • Network performance management: a set of functions that evaluate and report the effectiveness of the network or network element, and the behavior of telecommunications equipment. It also includes subfunctions such as gathering statistical information, maintaining and examining historical logs, determining system performance under natural and artificial conditions, and altering system modes of operation.
  • System performance management: the monitoring and management of operating-system resource utilization, including CPU, memory, I/O, and disk usage, across both physical and virtual systems. In cloud environments, events can be defined using monitoring software and actions automated with cloud management application programming interfaces.
  • Application performance management (APM): the discipline within systems management that focuses on monitoring and managing the performance and availability of software applications. APM looks at workflow and related IT tools deployed to detect, diagnose, remedy, and report on application performance issues to ensure that application performance meets or exceeds the expectations of end users and businesses.
  • Self-learning performance management: the use of technology to help automate the performance management of information technology systems. This is done through software that employs applied mathematics (such as statistics, time series analysis, and forecasting), automated baselining, neural networks, pattern recognition, and other similar technologies. The intent is to automate manual processes and "fixed baseline" approaches used to determine when IT systems are operating out of normal ranges, which would indicate potential system problems. Self-learning performance management is complementary to the disciplines of systems management, network performance management, and application performance management, and is sometimes referred to by IT analyst firms such as Gartner as behavior learning technology or behavior learning software.
  • Business transaction management (BTM): the discipline within systems management that monitors business transactions across the data center in order to manage IT performance.

Production Scheduling (F)

Scheduling is an important tool for manufacturing and engineering, where it can have a major impact on the productivity of a process. In manufacturing, the purpose of scheduling is to minimize production time and costs by telling a production facility when to make each product, with which staff, and on which equipment. Production scheduling aims to maximize the efficiency of the operation and reduce costs.
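As a rough illustration of scheduling for efficiency, the sketch below assigns jobs to machines using the longest-processing-time-first heuristic, one standard way to approximate a minimal overall completion time. The job durations, machine count, and heuristic choice are all illustrative assumptions, not from the text.

```python
import heapq

def schedule(jobs, machines):
    """Assign jobs (durations) to machines with the LPT heuristic.

    Returns a list of (load, machine, assigned_jobs) tuples sorted by load.
    """
    heap = [(0.0, m, []) for m in range(machines)]   # (load, machine, jobs)
    heapq.heapify(heap)
    for job in sorted(jobs, reverse=True):           # longest jobs first
        load, m, assigned = heapq.heappop(heap)      # take least-loaded machine
        heapq.heappush(heap, (load + job, m, assigned + [job]))
    return sorted(heap)

# Hypothetical jobs, in hours, spread over two machines.
loads = schedule([7, 5, 4, 3, 2, 2], machines=2)
print([round(load, 1) for load, _, _ in loads])  # [11.0, 12.0]
```

The makespan here is 12 hours against a total of 23 hours of work, i.e., the two machines finish within an hour of each other.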

Software Distribution (C)

Software distribution can refer to:
  • the process of software distribution, from creator to user; or
  • a collection of software, also referred to as a distribution or a distro: a set of software components built, assembled, and configured so that it can be used essentially "as is" for its intended purpose.

User Profile Management (C)

Profile management (Citrix Systems) ensures that the user's personal settings are applied to the user's virtual desktop and applications, regardless of the location and endpoint device.
Profile management is enabled through a profile optimization service that provides an easy, reliable way for managing these settings in Windows environments to ensure a consistent experience by maintaining a single profile that follows the user. It auto-consolidates and optimizes user profiles to minimize management and storage requirements and requires minimal administration, support and infrastructure, while providing users with improved logon and logout.

IT Asset Management (F)

IT asset management (ITAM) is the set of business practices that join financial, contractual and inventory functions to support life cycle management and strategic decision making for the IT environment. Assets include all elements of software and hardware that are found in the business environment.

Services Delivery & Support (F)

Many applications are necessary to run the business, operations, and support aspects of an enterprise IT organization. The complexity of controlling changes, service levels, and costs would be impossible to manage without these tools and processes. The goal is to provide technologies, tools, and processes that significantly enhance the efficiency and quality of central IT services.
In pursuit of this vision, quality should continuously be tracked by direct service metrics, including operational monitoring and customer feedback. Likewise, efficiency should be tracked by performance in execution of services rendered and by cost management for service delivery. Requests for IT support across the entire campus will be tracked and routed with the minimum of delay; these support requests will be consolidated across campus to increase efficiency. Rich repositories of support information — from descriptive summaries to detailed FAQs and user guides — will be available both to IT support personnel as well as directly available to end users. Additional information resources will be delivered via enhanced online training technologies. IT Services will leverage a Customer Relationship Management (CRM) system to track and anticipate the service needs of clients.
There will be fully integrated management control over changes in the application and system environments, and tracking of the real-time health of services. All services and their dependencies will be understood and monitored in order to rapidly determine the root cause of service disruptions. Monitoring and trend metrics data will be used to anticipate service degradation. Analytical insight into financial, operational, resource utilization, and support metrics will be available via a reporting infrastructure that has access to consolidated data fed from all relevant sources.

  • Service Catalog
  • Server and Application Monitoring
  • Client Relationship Management
  • Training Services Technologies
  • Ordering and Billing
  • Reporting
  • Service Desk
  • Workflow

Infrastructure Architecture

Infrastructure Architecture defines the hardware, networks, and system software that support the application software and business systems of an enterprise.

Infrastructure architecture is a new kid on the architecture block. Traditionally, a large amount of IT-architecture attention has been devoted to information and application architecture. However, several developments have fostered a growing desire for infrastructure architecture. Not only will an organization's infrastructure provisions benefit from applying this new architectural discipline; IT architecture as a whole will mature. Because infrastructure architecture is in its infancy, a lot of work has to be done to stimulate and create infrastructure-architecture methods, models, and tools. This section includes a number of first steps in this new architecture discipline.

LAN, WAN, SAN, Physical Data centre

Architecture Domains

Integration Architecture

In a well-designed building, the electrics and plumbing usually keep working no matter how many appliances are switched on. Such a building is also capable of extension without having to tear up the blueprints and start again. Why? Because it has good architectural design.

The same applies to software systems. Software architecture is the backbone of any complex computer system. The architecture encompasses all of the software elements, the relationships between the elements and the user interfaces to those elements. The performance and reliability of a software system are highly dependent upon the software architecture.

Well-designed software architecture can be extended with relative ease to accommodate new applications without requiring extensive infrastructure development.

Message Queuing

Queuing is the mechanism by which messages are held until an application is ready to process them. Queuing allows you to:

  • Communicate between programs (which might each be running in different environments) without having to write the communication code.
  • Select the order in which a program processes messages.
  • Balance loads on a system by arranging for more than one program to service a queue when the number of messages exceeds a threshold.
  • Increase the availability of your applications by arranging for an alternative system to service the queues if your primary system is unavailable.
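The points above can be sketched with Python's standard-library queue module. This is an illustration of the queuing idea only, not of any particular middleware product; in real messaging middleware the producer and consumer would typically run in different processes or on different systems.

```python
import queue

# Messages are held in the queue until the consumer is ready, and a
# PriorityQueue lets the consumer select the order in which messages are
# processed (lower number = higher priority). Message bodies are made up.

msgs = queue.PriorityQueue()

# Producer side: enqueue messages in arrival order, tagged with a priority.
msgs.put((2, "routine status update"))
msgs.put((1, "urgent: disk nearly full"))
msgs.put((3, "nightly report"))

# Consumer side: drain the queue whenever it is ready to do work.
received = []
while not msgs.empty():
    priority, body = msgs.get()
    received.append(body)

print(received[0])  # the urgent message is served first
```

Note that neither side calls the other directly: the queue is the only shared interface, which is what removes the need for hand-written communication code.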

Event Processing

Event handling is familiar to any developer who has programmed graphical user interfaces (GUIs). When a user interacts with a GUI control (e.g., clicking a button on a form), one or more methods are executed in response to that event. Events can also be generated without user interaction. Event handlers are methods in an object that are executed in response to some event occurring in the application. To understand the event handling model of the .NET Framework, one needs to understand the concept of delegates.
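Since the text refers to .NET delegates, here is a minimal Python stand-in for the same idea: an event object holding a list of subscribed callables that are all invoked when the event fires. The Event and Button classes are illustrative, not part of any framework.

```python
# A delegate in .NET is essentially a typed list of callables; the Event
# class below is a minimal Python stand-in for that concept.

class Event:
    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):          # mirrors C#'s `event += handler`
        self._handlers.append(handler)
        return self

    def fire(self, *args):
        for handler in self._handlers:    # every subscribed handler runs
            handler(*args)

class Button:
    def __init__(self):
        self.clicked = Event()

    def click(self):                      # simulate the user interaction
        self.clicked.fire(self)

log = []
button = Button()
button.clicked += lambda sender: log.append("handler A ran")
button.clicked += lambda sender: log.append("handler B ran")
button.click()
print(log)  # both handlers executed, in subscription order
```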

Web Services

Web services provide a standard means of interoperating between different software applications, running on a variety of platforms and/or frameworks. Here we give a common definition of a Web service and define its place within a larger Web services framework to guide the community. The Web Services Architecture (WSA) provides a conceptual model and a context for understanding Web services and the relationships between the components of this model.

The Web services architecture is an interoperability architecture: it identifies those global elements of the global Web services network that are required in order to ensure interoperability between Web services.

  • The Message Oriented Model focuses on messages, message structure, message transport, and so on, without particular reference to the reasons for the messages or their significance.
  • The Service Oriented Model focuses on aspects of service, action, and so on. While clearly, in any distributed system, services cannot be adequately realized without some means of messaging, the converse is not the case: messages do not need to relate to services.
  • The Resource Oriented Model focuses on resources that exist and have owners.
  • The Policy Model focuses on constraints on the behavior of agents and services. This is generalized to resources, since policies can apply equally to documents (such as descriptions of services) as well as to active computational resources.

Common Data Model

Data modeling in software engineering is the process of creating a data model for an information system by applying formal data modeling techniques. Data models are built:
  • to manage data as a resource;
  • for the integration of information systems;
  • for designing databases/data warehouses (aka data repositories).

XML Core

XML Core Services (formerly known as MSXML, for Microsoft Extensible Markup Language or XML) is an application for processing Extensible Stylesheet Language Transformation (XSLT) in an XML file. Based on Microsoft's Component Object Model (COM), XML Core Services is essentially an application programming interface (API) to an XML parser and the XPath processor. The parser organizes the XML data into a tree structure for processing, and the processor converts the XML to Hypertext Markup Language (HTML) for display.

XML Core Services works in conjunction with Internet Explorer. The earliest version of MSXML was included with Internet Explorer 4.0, which made that version the first browser to support XML. MSXML 1.0 was a basic parser based on the Document Object Model (DOM). The current version can be used to create and validate XML documents, as well as to parse and process them, and can make HTTP (Web) requests and process the replies. Support is included for the World Wide Web Consortium (W3C) recommendations for XML Schema.

MSXML can be used to create, parse, and process XML documents using either DOM (memory-mapped hierarchical tree-based API) or SAX (streaming event-based API). It can be used to validate XML documents either using XSD schemas or XDR schemas. It can be used to transform XML documents using XSLT and XPath.
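MSXML itself is a Windows COM component, so as a portable illustration the sketch below performs the same parse-into-a-tree and XPath-query operations using Python's standard library (which supports only a subset of XPath, and no XSLT). The sample document is made up.

```python
import xml.etree.ElementTree as ET

# Parse an XML document into a tree, then query it with XPath expressions,
# mirroring the DOM + XPath processing that MSXML provides on Windows.

doc = """
<catalog>
  <book id="b1"><title>Patterns of Enterprise Application Architecture</title></book>
  <book id="b2"><title>Domain-Driven Design</title></book>
</catalog>
"""

root = ET.fromstring(doc)                                  # parser builds the tree
titles = [t.text for t in root.findall("./book/title")]    # XPath path query
print(titles)

second = root.find("./book[@id='b2']/title")               # predicate on an attribute
print(second.text)  # Domain-Driven Design
```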

Data Architecture

Data architecture describes the architecture of the data structures used by a business and/or its applications. There are descriptions of data in storage and data in motion; descriptions of data stores, data groups and data items; and mappings of those data artifacts to data qualities, applications, locations etc.

Essential to realizing the target state, Data Architecture describes how data is processed, stored, and utilized in a given system. It provides criteria for data processing operations that make it possible to design data flows and also control the flow of data in the system.

The Data Architect is typically responsible for defining the target state, aligning during development and then following up to ensure enhancements are done in the spirit of the original blueprint.

During the definition of the target state, the Data Architecture breaks a subject down to the atomic level and then builds it back up to the desired form. The Data Architect breaks the subject down by going through three traditional architectural processes:

  1. Conceptual - represents all business entities.
  2. Logical - represents the logic of how entities are related.
  3. Physical - the realization of the data mechanisms for a specific type of functionality.
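A small sketch of the logical and physical levels, under illustrative names (Customer and Order are not from the text): plain classes capture the logical relationships between entities, and an in-memory SQLite database realizes them physically.

```python
import sqlite3
from dataclasses import dataclass

# Conceptual level: the business entities "Customer" and "Order" exist.
# Logical level: how the entities relate, expressed as plain classes.

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int   # logical relationship: each order belongs to a customer

# Physical level: realize the logical model as actual storage mechanisms.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE "order" (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id)
    );
""")
con.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
con.execute('INSERT INTO "order" VALUES (100, 1)')
row = con.execute(
    'SELECT c.name FROM customer c JOIN "order" o ON o.customer_id = c.customer_id'
).fetchone()
print(row[0])  # Acme Ltd
```

The same logical relationship (order-to-customer) could be realized physically in many other ways; that choice is exactly what the physical process decides.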

Digital Rights Management

Digital rights management (DRM) is a class of access control technologies that are used by hardware manufacturers, publishers, copyright holders and individuals with the intent to limit the use of digital content and devices after sale. DRM is any technology that inhibits uses of digital content that are not desired or intended by the content provider. DRM also includes specific instances of digital works or devices. Companies such as Amazon, AT&T, AOL, Apple Inc., BBC, Microsoft, Electronic Arts and Sony use digital rights management.

Master Data Management

Master data management (MDM) removes duplicates and creates an authoritative source of master data - the products, accounts, and parties for which business transactions are completed. The root-cause problem stems from business-unit and product-line segmentation, in which the same customer is serviced by different product lines, with redundant data being entered about the customer (aka the party in the role of customer) and account in order to process the transaction. The redundancy of party and account data is compounded in the front-to-back-office life cycle, where an authoritative single source for the party, account, and product data is needed but is often once again redundantly entered or augmented.
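A toy consolidation illustrating the duplicate-removal idea: the matching key (normalized name plus e-mail) and the "keep the first non-empty value" survivorship rule are illustrative choices only; real MDM products use far richer matching and survivorship logic. The records are made up.

```python
# The same party entered redundantly by two product lines is merged into a
# single authoritative "golden record".

records = [
    {"source": "loans", "name": "Jane  Doe", "email": "JANE@EXAMPLE.COM", "phone": None},
    {"source": "cards", "name": "jane doe",  "email": "jane@example.com", "phone": "555-0100"},
    {"source": "cards", "name": "John Roe",  "email": "john@example.com", "phone": None},
]

def match_key(rec):
    """Normalize the attributes used to decide two records are the same party."""
    return (" ".join(rec["name"].lower().split()), rec["email"].lower())

golden = {}
for rec in records:
    key = match_key(rec)
    master = golden.setdefault(key, {"sources": [], "phone": None})
    master["sources"].append(rec["source"])
    master["phone"] = master["phone"] or rec["phone"]   # survivorship rule

print(len(golden))  # 2 distinct parties consolidated from 3 source records
```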

Cloud Architecture

Cloud computing is the delivery of computing and storage capacity as a service to a heterogeneous community of end-recipients. The name comes from the use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts services with a user's data, software and computation over a network.

There are three main service models of cloud computing:
  1. Infrastructure as a Service (IaaS),
  2. Platform as a Service (PaaS), and
  3. Software as a Service (SaaS).

Cloud Distribution Network

Introduction to Cloud Computing

Infrastructure, Platform & Software as a Service

Security Architecture

Security controls serve to maintain the system's quality attributes, among them confidentiality, integrity, availability, accountability, and assurance.

The security architecture is the unifying framework and set of reusable services that implement policy, standards, and risk management decisions. It is a strategic framework that allows the development and operations staff to align efforts; in addition, the security architecture can drive platform improvements that are not possible to make at the project level. A given software development project may not be able to make a business case to purchase an XML Security Gateway for improved web services security, but at the architecture level, architects can potentially identify several projects that could leverage such a reusable service. In this instance the security architecture delivers improved XML/Web services security and a simplified programming model for developers, and saves development costs because the wheel is not reinvented multiple times.

Risk management, security policy and standards, and security architecture govern the security processes and defense-in-depth architecture through design guidance, runtime support, and assurance services. Security metrics are used for decision support for risk management, security policy and standards, and security architecture. The security architecture should have a reference implementation for developers and other IT staff to review, showing what functions the security mechanisms perform and how they do it.

SDL: Security functions as a collaborative design partner throughout the software development lifecycle (SDL), from requirements, architecture, design, coding, and deployment through withdrawal from service. Security adds value to the SDL through prescriptive and proscriptive guidance and expertise in building secure software. Security can play a role in all phases of the SDL, but an iterative, phase-based integration of security into the SDL is the wisest path: each additional security process improvement must fit with the overall SDL approach in the enterprise, which varies widely. The DHS Build Security In portal defines process improvements that enterprises can leverage throughout their SDL. Every security process added to the SDL adds incremental expense to the developer's time, so the enterprise security group must choose wisely the artifacts and activities to add to the SDL.

As the overall Security Architecture and related components such as Identity Management evolve over time, these security components and services should be baked into the SDL in a prescriptive way, making it easier for developers to build secure software. One example approach iterates through a number of security artifacts and evolves the SDL over time. The goal is to identify reusable services that can speed development of reliable software - for example, building reusable attack patterns that are implemented across a particular set of threats, such as a set of web attack patterns that can be used for security design in any enterprise web application.


Encryption

Encryption is the process of transforming data into an unintelligible form in such a way that the original data either cannot be obtained at all or can be obtained only by using a decryption process.
Data that is encrypted is referred to as ciphertext; data that is not encrypted is referred to as plaintext. Data that has been encrypted into ciphertext is considered secret from anyone who does not have the decryption key.
There are two classes of encryption algorithms:
Symmetric encryption algorithm
A common key is used both to encrypt and to decrypt data. The encryption key can be calculated from the decryption key, and the decryption key can be calculated from the encryption key.
Asymmetric encryption algorithm
Two keys are used to encrypt and decrypt data: a public key that is known to everyone and a private key that is known only to the receiver or sender of the message. The public and private keys are related in such a way that a message encrypted with the public key can be decrypted only with the corresponding private key.
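The symmetric case can be sketched with a deliberately toy cipher: XOR with a repeating key. This is not a secure algorithm, but it shows the defining property that one shared key performs both encryption and decryption (real systems would use a vetted algorithm such as AES, and asymmetric encryption requires a cryptographic library):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    The same function with the same key both encrypts and decrypts.
    For illustration only; never use XOR as real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)              # the shared secret key
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)    # encrypt
recovered = xor_cipher(ciphertext, key)    # decrypt with the same key

assert recovered == plaintext
```

Because encryption and decryption use the identical key, anyone who can encrypt can also decrypt; that is exactly the property asymmetric algorithms remove.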


Biometrics

Biometrics is the science and technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements, for authentication purposes.

Application Architecture


The Batch Application Style is organized into four logical tiers, which include Run, Job, Application, and Data. The primary goal for organizing an application according to the tiers is to embed what is known as "separation of concerns" within the system. These tiers can be conceptual but may prove effective in mapping the deployment of the artifacts onto physical components like Java runtimes and integration with data sources and targets. Effective separation of concerns results in reducing the impact of change to the system. The four conceptual tiers containing batch artifacts are:

  1. Run Tier: The Run Tier is concerned with the scheduling and launching of the application. A vendor product is typically used in this tier to allow time-based and interdependent scheduling of batch jobs as well as providing parallel processing capabilities.
  2. Job Tier: The Job Tier is responsible for the overall execution of a batch job. It sequentially executes batch steps, ensuring that all steps are in the correct state and all appropriate policies are enforced.
  3. Application Tier: The Application Tier contains components required to execute the program. It contains specific tasks that address required batch functionality and enforces policies around execution (e.g., commit intervals, capture of statistics, etc.)
  4. Data Tier: The Data Tier provides integration with the physical data sources that might include databases, files, or queues.
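The four tiers above can be made concrete with a minimal sketch: one hypothetical class per tier, where the run tier launches a job, the job tier executes steps in sequence, the application tier does the work under a commit-interval policy, and the data tier wraps the physical source (here just an in-memory list):

```python
class DataTier:
    """Data tier: integration with a physical data source (here, a list)."""
    def __init__(self, records):
        self.records = records
    def read_all(self):
        return list(self.records)

class ApplicationTier:
    """Application tier: the batch task itself, plus an execution policy
    (a commit interval that flushes work in chunks)."""
    COMMIT_INTERVAL = 2
    def process(self, data_tier):
        committed, buffer = 0, []
        for record in data_tier.read_all():
            buffer.append(record.upper())        # the actual batch work
            if len(buffer) >= self.COMMIT_INTERVAL:
                committed += len(buffer)         # "commit" the full chunk
                buffer.clear()
        committed += len(buffer)                 # flush the final partial chunk
        return committed

class JobTier:
    """Job tier: executes the job's steps sequentially."""
    def __init__(self, steps):
        self.steps = steps
    def run(self, data_tier):
        return [step.process(data_tier) for step in self.steps]

def run_tier(job, data_tier):
    """Run tier: in production a scheduler product; here a plain launcher."""
    return job.run(data_tier)

data = DataTier(["alpha", "beta", "gamma"])
job = JobTier([ApplicationTier()])
print(run_tier(job, data))   # prints [3]
```

Because each tier only talks to the one below it, swapping the list-backed data tier for a database- or queue-backed one would not touch the job or application code, which is the separation of concerns the tiers exist to enforce.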


Real-Time & Embedded

Architecture Concerns

System performance and scalability

System performance and scalability issues often have their roots in architectural and design choices that are made early in the software life cycle. Because the architect must communicate with developers, designers, product managers, business stakeholders, application domain experts, testers, and requirements engineers, the software architect is uniquely placed to play a leadership role in linking performance requirements to business and engineering needs. Ideally, the architectural, technology, and design choices that are made should take performance requirements and artifacts into account. This means that the architect should be equipped with at least a rudimentary understanding of performance engineering concepts. Ideally, an architect should be directly involved in performance concerns. Failing that, the architect should overtly give a mandate to, and remain in close contact with, a performance engineer who handles them instead, because close architectural involvement with performance concerns is key to the success of the project.

Architecture Practices


Client design


Service design

Concurrent Process

Web servers

Web servers are computers that deliver (serve up) Web pages. Every Web server has an IP address and possibly a domain name.

Any computer can be turned into a Web server by installing server software and connecting the machine to the Internet. There are many Web server software applications, including public domain software from NCSA and Apache, and commercial packages from Microsoft, Netscape and others.
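As an illustration of the point above, the sketch below turns an ordinary machine into a tiny Web server using Python's standard `http.server` module, then fetches a page from it. The handler class and the page body are made up for the demo:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve one static page for every request.
        body = b"<html><body>Hello from a tiny Web server</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the demo

# Bind to port 0 so the OS assigns a free port.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    print(resp.status)   # prints 200

server.shutdown()
```

A production server would of course add TLS, logging, and a real document root, but the essence is the same: a process listening on an address and answering HTTP requests.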

Document management system

A document management system (DMS) is a computer system (or set of computer programs) used to track and store electronic documents and/or images of paper documents. It is usually also capable of keeping track of the different versions modified by different users (history tracking). The term has some overlap with the concepts of content management systems. It is often viewed as a component of enterprise content management (ECM) systems and related to digital asset management, document imaging, workflow systems and records management systems.
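The version-tracking behaviour described above can be sketched in a few lines. This is a hypothetical in-memory store, not any particular DMS product: each save appends a new version with its author rather than overwriting the previous one:

```python
from datetime import datetime, timezone

class DocumentStore:
    """Toy DMS sketch: every save appends a new version (history tracking)."""
    def __init__(self):
        # doc_id -> list of (version, author, timestamp, content)
        self._versions = {}

    def save(self, doc_id, content, author):
        history = self._versions.setdefault(doc_id, [])
        version = len(history) + 1
        history.append((version, author, datetime.now(timezone.utc), content))
        return version

    def latest(self, doc_id):
        return self._versions[doc_id][-1][3]

    def history(self, doc_id):
        return [(v, a) for v, a, _, _ in self._versions[doc_id]]

store = DocumentStore()
store.save("policy.doc", "Draft", "alice")
store.save("policy.doc", "Final", "bob")
print(store.latest("policy.doc"))    # prints Final
print(store.history("policy.doc"))   # prints [(1, 'alice'), (2, 'bob')]
```

Real systems layer search, access control, and records-retention rules on top, but the append-only version history is the core idea.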

Enterprise Application Integration

Enterprise Application Integration (EAI) is defined as the use of software and computer systems architectural principles to integrate a set of enterprise computer applications.

Enterprise Application Integration (EAI) is an integration framework composed of a collection of technologies and services which form a middleware to enable integration of systems and applications across the enterprise.

EAI can be used for different purposes:

  1. Data integration: Ensures that information in multiple systems is kept consistent. This is also known as Enterprise Information Integration (EII).
  2. Vendor independence: Extracts business policies or rules from applications and implements them in the EAI system, so that even if one of the business applications is replaced with a different vendor's application, the business rules do not have to be re-implemented.
  3. Common facade: An EAI system can front-end a cluster of applications, providing a single consistent access interface to these applications and shielding users from having to learn to use different software packages.
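The "common facade" purpose above can be sketched as follows. The two back-end classes are invented stand-ins for real enterprise applications; the point is that callers see one consistent interface rather than two different packages:

```python
class LegacyCRM:
    """Stand-in for one back-end application (hypothetical)."""
    def fetch_customer(self, cid):
        return {"id": cid, "name": "Acme Corp"}

class BillingSystem:
    """Stand-in for a second back-end application (hypothetical)."""
    def outstanding_balance(self, cid):
        return 125.50

class CustomerFacade:
    """EAI 'common facade': a single access point in front of
    several applications, shielding callers from each one's interface."""
    def __init__(self, crm, billing):
        self._crm = crm
        self._billing = billing

    def customer_summary(self, cid):
        # The caller never talks to the CRM or billing system directly.
        customer = self._crm.fetch_customer(cid)
        balance = self._billing.outstanding_balance(cid)
        return {**customer, "balance": balance}

facade = CustomerFacade(LegacyCRM(), BillingSystem())
print(facade.customer_summary(42))
# prints {'id': 42, 'name': 'Acme Corp', 'balance': 125.5}
```

If the CRM were later replaced by another vendor's product, only the facade's internals would change; every consumer of `customer_summary` would be untouched, which is also how the vendor-independence purpose above is achieved.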

Solutions Architect

A Solutions Architect takes on several key responsibilities during the course of a project. Here are the six responsibilities/roles that I think are the most important:

  1. Solution delivery - The Solutions Architect (SA) is responsible for delivering a particular solution on the date committed. At times this responsibility is shared with a Project Manager, but most often the Solutions Architect is accountable.
  2. Project Timelines/Schedule - It's an SA's duty to make sure the project is progressing at the right pace. The SA will typically work with the Project Manager to accomplish this task.
  3. Solution Architecture - The SA takes complete responsibility for the Solution Architecture. Though the Solutions Architect might consult the Integration/Infrastructure architects along the way, the end product is owned by the SA.
  4. A Consultant - The SA also dons the role of a consultant during the early phases of the solution, when the SA needs to gather requirements and work with the sales team to put together a document in response to a Request For Proposal (RFP) from the customer.
  5. Subject Matter Expert - The Solution Architect is also a key person who understands the solution and the domain/industry well. The Solution Architect needs to be a Subject Matter Expert (SME) in order to understand the customer's requirements and to suggest alternatives that improve the solution and address the challenge/issue at hand.
  6. A Leader - A Solution Architect needs to be a leader who can efficiently lead a team of developers and project managers, and work effectively with other business units in the company and with the customer team in the creation of the solution. The SA should be able to work with the customer and help them envision the solution's value and how it will benefit them in the longer term.