
Page 1: Tri-University

TRI-UNIVERSITY

System Wide Information Technology

Architecture

AUG 2004

Page 2: Tri-University

Updated APR 2007

Table of Contents

Section 1 . . . . Project Team Members

Section 2 . . . . Project Overview

Section 3 . . . . Target Data/Information Architecture

Section 4 . . . . Target Middleware Architecture

Section 5 . . . . Target Network Architecture

Section 6 . . . . Target Platform Architecture

Section 7 . . . . Target Security Architecture

Section 8 . . . . Target Software/Application Architecture

Section 9 . . . . Glossary of Terms

Table of Contents - 1 -

Page 3: Tri-University

Project Team Members

Project Executive Team (During the drafting of the original document)

ASU - William Lewis

NAU - Fred Estrella

UA - Sally Jackson

Project Executive Team (During the updating of the document)

ASU - Adrian Sanner

NAU - Fred Estrella

UA - Michele Norin

Project Manager

ASU - Darel Eschbach

University Team Leaders (During the drafting of the original document)

ASU - Darrel Huish

NAU - Matt McGlamery

UA - Michele Norin

University Team Leaders (During the updating of the document)

ASU - Bob Miller

NAU - Ricky Roberts

UA - Michael Torregrossa

Project Domain Work Teams (During the drafting of the original document)

Data/Information Domain
Coordinator: ASU - Darrel Huish
Members: NAU - John Campbell

Section 1 - Project Team Members 1

Page 4: Tri-University

UA – Rick Hargis

Middleware Domain
Coordinator: ASU - John Babb
Members: NAU - Darrel Huish

UA - Mike Torregrossa

Network Domain
Coordinator: ASU - Darel Eschbach
Members: ASU - Joe Askin, NAU - Matt McGlamery, UA - Michele Norin

Platform Domain
Coordinator: UA - Michele Norin
Members: ASU - John Babb, ASU - Darrel Huish, NAU - Ricky Roberts, UA - Mike Torregrossa

Security Domain
Coordinator: UA - Ted Frohling
Members: ASU - Joe Askins, NAU - Lanita Collette

Software Domain
Coordinator: NAU - Max Davis-Johnson
Members: ASU - Darrel Huish, UA - Rick Hargis

Section 1 - Project Team Members 2

Page 5: Tri-University

TRI-UNIVERSITY

System Wide Information Technology

Architecture

Overview

AUG 2004

Section 2 – Project Overview 1

Page 6: Tri-University

Tri-University IT Architecture - Overview

I. Introduction

This Tri-University IT Architecture (Tri-University ITA) document is intended to provide a framework for information technology (IT) use at Arizona’s three state universities. The Tri-University ITA facilitates the application of IT to university initiatives and projects. Its goal is to aid in the efficient and effective implementation of technology on our campuses by describing a direction for current and future IT activities, supported by underlying principles, standards, and best practices. Lastly, it will further facilitate Tri-University collaboration efforts by establishing a common vision for the future of IT on our campuses.

The use of technology is a large and growing element of the universities' environment and overall expenditures. Arizona’s universities collectively are interested in increasing service quality and saving money through the best possible use of IT. Where feasible, our universities will seek ways to combine our efforts to improve our ability to provide cost-effective and technically sound services to our students, faculty, and staff.

A. Environmental Impacts

Certain environmental factors become important in determining the actual implementation of the Tri-University ITA:

Differing university missions can create differences in an institution’s optimal IT architecture. For example, fostering distance learning will have different technological requirements than increasing research efforts in Genomics.

The historical growth of IT on each campus is a powerful factor in determining the ultimate way each campus can implement any IT initiative.

Academic interests frequently dictate the use of a particular communication protocol, database, operating system or application.

The blending of private funds with public funds to achieve institutional objectives creates additional participants with unique technical requirements.

The ratio of centralized vs. decentralized IT resources and management influences the level at which each campus has widespread acceptance of standards and best practices.

Some IT oriented research requires an unstructured environment to conduct experimental constructs that could become the technology of the future.

The diverse and complex nature of any university campus makes the integration and centralization of IT efforts extremely difficult.

Section 2 - Project Overview 2

Page 7: Tri-University

B. Assumptions

Several underlying assumptions aided in this document’s creation:

The scope of the Tri-University ITA will focus on the technologies, methodologies, and best practices found in industry and in other universities.

This document will remain highly flexible to accommodate the ever-changing nature of IT.

The standards and best practices listed within are guides to creating a high-level framework for establishing our individual campus IT architectures and for evaluating the current and future direction of IT on our campuses.

C. Characteristics

In order to achieve the overall Tri-University ITA objectives, the following qualitative characteristics are established:

Adaptability Change as national & industry standards evolve, so our universities can enhance and incorporate new ways of doing essentially the same business function without major developmental impact.

Manageability Centrally manage or coordinate and monitor, including the orderly planning for capacity changes of various essential services.

Reliability Remain in continuous operation even if part of the system suffers failure, needs maintenance or upgrading, or is destroyed or damaged by a disaster.

Securability Provide different access to individuals based on the classification of data and the user’s business function.

Extensibility Easily add new kinds of functionality to existing processes without major impact.

Scalability Increase or decrease size or capability in cost-effective increments without software impact or “spikes” in the unit cost of operations due to step functions in procuring additional resources.

Performance Fast response and high throughput.

Connectability Communications access to a variety of area, national, and international networks.

Consistency Relative stability of the person/machine interface over time.

Portability University community members should be able to access and use services wherever they are in the world.

Accessibility All services should be sensitive to community members with disabilities and provide them equal access without hardship.

Section 2 - Project Overview 3

Page 8: Tri-University

Usability Easy to use; applicable to university mission.

Supportability University architectural standards, policies, and guidelines must be followed to ensure support from central IT.

Affordability All IT projects should be negotiated to find a reasonably priced solution to meet the universities' desired goals.

Openness Open-source standards and products should be seriously considered in all IT projects.

D. University IT Similarities/Differences

Information Technology resources and services have evolved differently at the three Arizona Universities. Many core IT services, such as network services, administrative and major business systems, and architecture and system design, are centrally provided and coordinated. Where other key IT services are offered varies with the culture of each institution, when it began to integrate technology into its operations, and how funding has been prioritized by administration, colleges, and departments within each university.

Universities compete for students, and the quality and ease of use of IT services are important factors in this competition. The challenge is how to fit the pieces together in an efficient and effective IT foundation for service at each campus, and how to coordinate services among the three institutions. Finding the right fit is not easy. Hardware and software vendors regularly change technology standards. The need for adequate, reliable service raises issues including security, the requirement for ubiquitous 24/7 access, information backup, and disaster recovery. The increasingly distributed technology environment at the Arizona universities makes meeting these challenges increasingly complex.

IT activities must be coordinated at the highest levels of the university to function effectively. University administration and the campus IT leadership must establish priorities which balance user needs with budget and resource realities to effectively plan, coordinate, manage and support IT resources and services. The goal is to provide seamless, reliable, secure but accessible IT services to current and prospective students, staff, faculty, and to our communities.

Key issues related to planning and maintaining the critical network infrastructure include:

-Maintaining Reliable Services to support Instruction, Research, Business Services, and Outreach

-Accommodating exponential growth in demand

-Balancing access with privacy and security

-Planning for convergence of voice, data, and video

-Managing distributed services and resources

Section 2 - Project Overview 4

Page 9: Tri-University

-Accommodating increasing costs such as for Software licenses and Network Connectivity

Academic disciplines obviously differ dramatically in the technologies they need for research and teaching. For this reason, the IT resources maintained to support Academic IT services vary considerably and are standardized only at the level of a common architecture for network connectivity and desktop equipment. To a slightly lesser degree, the same is true of the wide range of Administrative services at the three State Universities. Therefore, Academic and Administrative computing requires reliable communication between university resources and user computers. This includes connectivity among departments, classrooms, and labs, to central resources, and to the Internet. Communication support (such as e-mail and file transfer technology), tools for interaction management (such as calendar systems and workflow software), and tools for collaboration (such as shared workspaces and files) are also critically important.

Each of the three Arizona universities has a central IT organization (Information Technology at ASU, Information Technology Services at NAU, and the Center for Computing and Information Technology at UA). These organizations provide support for substantial shared resources including hardware, software, technical support, user support, and the institutional IT infrastructure.

Differences among systems developed at each University and even within University departments are influenced by a number of factors, which include changes in:

-Technology: File structures, databases, mainframes/dumb terminals, personal computers, telephone, Internet/web interfaces.

-Vendors: Buy-outs, mergers, etc.

-Product Offerings: SIS, SIS Plus, Exeter, Peoplesoft systems

-Resources available: Skilled personnel with appropriate expertise, consultants, fiscal resources, availability of parallel systems to support evaluation and transitions

-Leadership: Administration, IT unit, Operational units

-Level of centralization/decentralization of IT

-Size and age of systems currently in use

-Size of institution and geographic location

-Opportunities: such as State benefit enrollment, ARU

-Emerging differing missions and priorities

The three universities work together to plan and implement administrative systems and services and coordinate efforts through sharing:

-Software: applications, where applicable, and support software

-Data: for data exchanges

Section 2 - Project Overview 5

Page 10: Tri-University

-Challenges and opportunities: SSN identifiers, security

-Vendors and contracts: Included options for other schools

-Ideas: strategic planning and disaster planning

-Resources: disaster recovery

II. Architecture Components (Domains)

The Tri-University project has identified, in part from a literature (web) search and in part from our own experience, the following architecture components or domains to be documented. Other components may be added as this document lives and matures.

Data/Information

The Data/Information Architecture defines common, industry-wide, strategies and practices related to managing data and providing information in a university setting. The architecture must support reliable and ubiquitous data/information sharing within the Universities’ distributed information processing environments. It defines technologies, standards, and methodologies currently used to enable information sharing among its internal users, the board of regents, other universities, the federal government, and the public.

Middleware

Middleware facilitates interchange of information in a distributed, multi-vendor, and heterogeneous systems environment while providing the same levels of security, reliability, and manageability traditionally associated with a monolithic, mainframe-based architecture where all products are supplied by a single vendor. Middleware encompasses a wide range of capabilities from database access to very sophisticated integration. Middleware is intended to provide the glue to tie disparate applications together.

The Middleware Architecture describes, in a general fashion, the concepts and directions associated with products that are evolving in this emerging technology area to provide application-to-application integration, business-to-business integration, and business process management.

Network

The Network Architecture defines network infrastructures providing reliable and ubiquitous communication for the Tri-University’s distributed information processing environment. It defines the various technologies required to enable secure connections among all system users, both human and machine. The architecture describes a network infrastructure that can support converged services as well as accommodate traditional data, voice, and video services, providing the framework and foundation to enable the Tri-University’s business processes, new business opportunities, and new methods for delivering service. It also encourages further deployment

Section 2 - Project Overview 6

Page 11: Tri-University

of open systems based on targeted network architectures that use common, proven, pervasive, and industry-wide standards.

Platform

The client and server platforms, with their associated operating systems, provide the end-user interface to the business application. Clients include the personal computer (PC), thin client, host-controlled devices (terminals,

telephones, etc.), voice interface devices, single- and multi-function mobile devices (Pocket PC, PDA, PDA-phone, etc.), telephony devices, smart cards, etc. Personal input devices (tablet, keyboard, probe, etc.) and output devices (monitors, displays, projectors, speakers, printers, etc.) attached to a client device should use standard interfaces and industry de facto standard software drivers.

The Platform Architecture provides guidance for the hardware devices to be supported and the preferred software interfaces (web, portal, and dedicated applications). In addition, the standard describes common, industry-wide, open-standards-based, interoperable devices facilitating the reliable and pervasive availability of, access interfaces with, and processing for, the Universities' distributed information processing environment. It defines the various technologies required to deliver individual units' services to all system users. It allows the Universities and individual departments to deploy and support effective and efficient end-user access interfaces to business application systems, as well as providing the processing capability to execute business application systems, while increasing the use of e-solutions and maintaining traditional methods of service delivery to client users.

The Platform Infrastructure Standard is intended to be vendor/manufacturer neutral by design, focusing instead on relative versatility, capability to seamlessly interoperate with other platform devices, operating systems, embedded security, adherence to open or pervasive industry standards, provision for open system standard interfaces, and utilization of open standard drivers.

Security

The Security Architecture provides guidelines to securely and economically protect the transaction of the Universities’ business, the delivery of services, and communications among all users of the Tri-University systems. It encourages the units to incorporate technology security improvements for business requirements without compromising the security, integrity, and performance of the enterprise and its information resources.

Software

The Software Architecture delineates common, industry-wide, open-standards-based, interoperable technologies (methodologies, tools, principles, etc.) facilitating the

Section 2 - Project Overview 7

Page 12: Tri-University

design, development, and purchase of software to automate and maintain Tri-University business processes, and provides a foundation for interoperability, integration, collaboration, and communication. The Applications and Related Software Standard guides implementations of software applications that automate business processes. The standard provides for more effective sharing of resources and information among units as well as improved interoperability.

III. Architecture Documentation

Each component’s documentation includes, at a minimum, the following sections:

Introduction

Vision/Purpose

Standards

Principles

Target Architecture

Recommended Best Practices

Future Technology Trends

In addition to the individual components the ITA documentation is supported by a glossary of terms.

Section 2 - Project Overview 8

Page 13: Tri-University

TRI-UNIVERSITY

System Wide Information Technology

Architecture

Target Data/Information Architecture

AUG 2004

Section 3 - Data/Information 1

Page 14: Tri-University

Tri-University Data/Information Architecture

TABLE OF CONTENTS

1. Introduction

2. Data/Information Architecture Vision

3. Data/Information Architecture Definition

4. Target Data/Information Architecture

5. Recommended Data/Information Architecture Strategies

6. Recommended Data/Information Architecture Standards

7. Data/Information Architecture Purpose

8. Data/Information Architecture Principles

9. Data/Information Architecture Recommended Best Practices

10. Data/Information Architecture Technology Trends

Appendix A. National Institute of Standards and Technology Data Management Model

Appendix B. Example of a University Data Warehouse Environment

Appendix C. Glossary of terms

Section 3 - Data/Information 2

Page 15: Tri-University

Tri-University Data/Information Architecture

1. Introduction

The Tri-University Information Technology Architecture (ITA) describes a framework for information technology that supports the Universities’ strategic plans. ITA facilitates change in an orderly, efficient manner by describing a direction for the future that is supported by underlying principles, standards, and best practices. The implementation of ITA presents opportunities for the Universities to interoperate to deliver a higher level of courteous, efficient, responsive, and cost-effective service to their respective communities of interests. Individually, each University can independently implement ITA components that are interoperable. Economies of scale, consolidation, and cross-university savings may best be realized not just through interoperability, but by working together in partnership and sharing.

The Tri-University's Information Technology Conceptual Architecture document explains the overall strategic alignment of the ITA with the Universities’ goals and objectives, the principles behind the architecture, the domains to be addressed, the plans for addressing domains, and the technology trends to be taken into consideration.

ITA includes important business, governance, and technical components. The technical components collectively referred to in the ITA provide technical guidance to the Universities. That guidance is supported by principles related to unit business functions, recommended standards, and applicable recommended best practices.

Tri-University ITA Domains: Data/Information, Middleware, Network, Platform, Security, Software

The Data/Information Architecture is one of the ITA domains being developed by the Tri-University architectural task teams. The complete ITA will be developed in phases and updated periodically. The domain architectures, driven by the business and program priorities of the Universities, are aligned with, and facilitate the strategic goals of the State Universities’ Information Technology (IT) plans.

2. Data/Information Architecture Vision

The Tri-Universities’ Data/Information Architecture describes strategies and practices that reflect the growing importance of data and information assets in a modern university environment. The Data/Information Architecture strives to support data-driven decision support systems that securely transform data into timely information that is reliable, relevant, and easily accessed in a university setting. The university setting is filled with a variety of internal and external stakeholders and

Section 3 - Data/Information 3

Page 16: Tri-University

involves data and information unique to a higher education environment. All these stakeholders increasingly rely upon accurate and timely digital information in easily accessible formats to inform decision making in business, instruction, research, and shared governance realms. The information age mandates reliable, scalable, secure access to a growing amount of institutional data. The Data/Information Architecture provides the framework and foundation to enable this access.

3. Data/Information Architecture Definition

The Data/Information Architecture defines common, industry-wide, strategies and practices related to managing data and providing information in a university setting. The architecture must support reliable and ubiquitous data/information sharing within the Universities’ distributed information processing environments. It defines technologies, standards, and methodologies currently used to enable information sharing among its internal users, the board of regents, other universities, the federal government, and the public.

There are many types of data and information in a university setting including unstructured data in the form of web sites, memos, policy documents, operating manuals, and even research notebooks. This document defines Data/Information Architecture as currently concerned with managing only those digital data sources that are recognized as core institutional assets derived from enterprise information sources. Such data are typically captured from various online transaction systems, have some metadata structures that describe the data, and are used to help inform the university decision-making processes.

The Educause Core Data survey lists the following information systems as common across nearly all higher education institutions: Student, Financial, Human Resources. Other information systems, such as library, course management, development, and grants management systems may also benefit from considering the strategies and practices described in this architecture document.

4. Target Data/Information Architecture

The goal of specifying a target Data/Information architecture is to document those strategies, standards, and practices that allow decision support systems and the various stakeholders’ optimal access to data and information subject to necessary security, legal, and technical constraints. In this domain, the architecture is somewhat fluid. While the overall goal of the ITA is to adopt common, industry-wide, open-standards-based infrastructures, the reality of working with proprietary core information systems coupled with the emerging and rapidly evolving nature of web-based data exchange among communities of interest, requires the architecture to remain at a high level of discussion. Such a discussion can illustrate some best practices and targeted standards, but cannot dictate, at this time, the sole platform or the sole standard that all data warehouses or all decision support systems must adopt.

Section 3 - Data/Information 4

Page 17: Tri-University

5. Recommended Data/Information Architecture Strategies

Data/Information strategies and standards are established to coordinate data exchange and encourage data sharing among stakeholders. The goal is to employ open systems based on common, proven, and pervasive strategies and standards. However, Data/Information standards are still evolving and vendors are only now establishing the data exchange mechanisms required for the type of interoperability and federated access that should be the goal of this architecture. Still, there are a few common strategies and standards that everyone should be adopting whenever possible.

Recommended Strategy 1: Develop a central data warehouse with appropriate resources for proper management of valuable university data assets. The data warehouse should provide for high availability, scalability, data recoverability, data integrity, data cleansing, etc. Make a clear distinction between the online transaction system(s) and the data reporting system(s) roles.

Recommended Strategy 2: Record and agree on the authoritative source for each data element and the access restrictions needed on the data element. In a complex environment where the enterprise applications are of distinct origin, certain data elements may have identical names but represent different functions. In situations where this is unavoidable, the distinctions among the various uses should be well known and completely documented.

Recommended Strategy 3: Develop the smallest set of roles and access controls possible given the legal and policy requirements of the state and institution. Centralize the implementation of authorization access control based on these roles, if possible.
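To illustrate Recommended Strategy 3, the following minimal Java sketch shows one way a centralized authorization check might map a small set of roles to data classifications. The role names, classifications, and class name are hypothetical examples for illustration only, not part of the Tri-University architecture:

// Hypothetical sketch of a centralized, role-based read-access check.
// Role and DataClassification values are illustrative, not taken from the ITA.
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class AccessPolicy {

    enum Role { STUDENT, ADVISOR, REGISTRAR, DATA_STEWARD }

    enum DataClassification { PUBLIC, INTERNAL, RESTRICTED }

    // One central table mapping each data classification to the roles allowed to read it.
    private static final Map<DataClassification, Set<Role>> READ_ACCESS =
            new EnumMap<>(DataClassification.class);

    static {
        READ_ACCESS.put(DataClassification.PUBLIC, EnumSet.allOf(Role.class));
        READ_ACCESS.put(DataClassification.INTERNAL,
                EnumSet.of(Role.ADVISOR, Role.REGISTRAR, Role.DATA_STEWARD));
        READ_ACCESS.put(DataClassification.RESTRICTED,
                EnumSet.of(Role.REGISTRAR, Role.DATA_STEWARD));
    }

    // Returns true when any of the caller's roles may read data of the given classification.
    public static boolean canRead(Set<Role> callerRoles, DataClassification classification) {
        Set<Role> allowed =
                READ_ACCESS.getOrDefault(classification, EnumSet.noneOf(Role.class));
        return callerRoles.stream().anyMatch(allowed::contains);
    }

    public static void main(String[] args) {
        Set<Role> roles = EnumSet.of(Role.ADVISOR);
        System.out.println(canRead(roles, DataClassification.INTERNAL));   // true
        System.out.println(canRead(roles, DataClassification.RESTRICTED)); // false
    }
}

Keeping a single table such as this in one central service, rather than repeating access rules in every application, is the intent of the strategy above.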

Recommended Strategy 4: Pay attention to metadata (information about data which describes its various dimensions and uses within the institution). Search for solutions that allow metadata to be stored and collected as part of the extract and load process.

Recommended Strategy 5: Use relational databases in all cases where practical. Exception cases would include highly specialized data stores such as Active Directory or LDAP directory information services. Also, some enterprise application databases are so complex and large that a construct (cube, star schema, pre-designed data model) must exist to render them queryable.
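As one concrete illustration of Recommended Strategy 5, the sketch below creates a small star schema (one fact table keyed to two dimension tables) through standard JDBC calls. The JDBC URL, credentials, and table and column names are hypothetical placeholders; any ANSI-SQL relational database could stand in:

// Illustrative star schema for enrollment reporting: one fact table keyed to two dimensions.
// The JDBC URL, credentials, and table names are placeholders, not Tri-University systems.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class StarSchemaSetup {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:example://warehouse.example.edu/dw", "dw_admin", "secret");
             Statement stmt = conn.createStatement()) {

            // Dimension: academic term.
            stmt.execute("CREATE TABLE dim_term ("
                    + " term_key INTEGER PRIMARY KEY,"
                    + " term_name VARCHAR(20),"
                    + " academic_year INTEGER)");

            // Dimension: academic program.
            stmt.execute("CREATE TABLE dim_program ("
                    + " program_key INTEGER PRIMARY KEY,"
                    + " program_name VARCHAR(80),"
                    + " college VARCHAR(80))");

            // Fact table: enrollment measures keyed to the dimensions above.
            stmt.execute("CREATE TABLE fact_enrollment ("
                    + " term_key INTEGER REFERENCES dim_term(term_key),"
                    + " program_key INTEGER REFERENCES dim_program(program_key),"
                    + " headcount INTEGER,"
                    + " student_credit_hours INTEGER)");
        }
    }
}

A structure of this kind is one example of the pre-designed constructs mentioned above that make a large enterprise database practical to query.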

Recommended Strategy 6: To the extent possible, encourage data sharing by providing easy to use data reporting tools and by supporting federated database efforts.

Section 3 - Data/Information 5

Page 18: Tri-University

Recommended Strategy 7: Consider web access as key to egalitarian and universal access to public data or to data used by distributed stakeholders. This includes the use of XML for data exchange as well as using generic web browsers for data access.

Recommended Strategy 8: Continue scanning the data/information communities for new interoperable standards. Those that come from the W3C are increasingly becoming important (e.g. XML, XSD, SOAP, WSDL, UDDI).

6. Recommended Data/Information Architecture Standards

A number of standards exist related to Data/Information architectures. Some, like UML data modeling, are not ubiquitous due to their specialized nature. Some standards are externally imposed when doing certain types of transactions. ARS 41-132, for example, describes requirements for electronic signatures that mandate certificates and encryption based on the PKI and PGP standards. Both of these standards can be costly to implement widely in a University environment; their use at this time is best restricted to situations, like legal electronic signatures, that warrant the expense. Likewise, standards involving interoperability and data exchange are still evolving and will continue to evolve as specialized communities of interest begin to work out their data exchange needs. Experience seems to show that many of these interoperability standards, like IMS, continue not only to evolve and change, but also to have a slow adoption rate in higher education product lines.

Other standards, like Structured Query Language (SQL), are widely adopted throughout industry and higher education. Some lower-level elements, like XML and SOAP, are becoming ubiquitous, but this does not mean that they solve the problem of sharing metadata between applications. Such agreements are just now being ironed out at the semantic level among various communities of interest. Work to define how to share information between applications and business units is continuing.

All standards listed here are intended to interact and support relevant standards proposed in the other ITA domains. The network, security, middleware, and client interface standards should be especially relevant.

Recommended Standard 1

ANSI SQL: a standardized query language for requesting information from a database. ANSI approved a rudimentary version of SQL as the official standard in 1986 and has updated the standard several times since (notably SQL-92), though most SQL implementations include many extensions to the ANSI standard.

Section 3 - Data/Information 6

Page 19: Tri-University

Recommended Standard 2

XML: short for Extensible Markup Language. XML is a pared-down version of SGML, designed especially for Web documents. It allows designers to create their own customized tags, enabling the definition, transmission, validation, and interpretation of data between applications and between organizations.
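As a minimal illustration of how customized XML tags carry data between applications, the following sketch parses a small, invented document with the DOM parser that ships in the standard Java class library. The element names are hypothetical and are not drawn from any Tri-University schema:

// Minimal sketch: parsing a small data-exchange document with the JDK's built-in XML APIs.
// The element names (studentRecord, id, program) are invented for illustration only.
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class XmlExchangeExample {
    public static void main(String[] args) throws Exception {
        String xml = "<studentRecord>"
                + "<id>12345678</id>"
                + "<program>Computer Science</program>"
                + "</studentRecord>";

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // The customized tags carry the data; the receiving application interprets them.
        String id = doc.getElementsByTagName("id").item(0).getTextContent();
        String program = doc.getElementsByTagName("program").item(0).getTextContent();
        System.out.println("id=" + id + ", program=" + program);
    }
}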

Recommended Standard 3

ODBC: Short for Open DataBase Connectivity, a standard database access method. The goal of ODBC is to make it possible to access any data from any application, regardless of which database management system (DBMS) is handling the data. ODBC manages this by inserting a middle layer, called a database driver, between an application and the DBMS. The purpose of this layer is to translate the application's data queries into commands that the DBMS understands. For this to work, both the application and the DBMS must be ODBC-compliant -- that is, the application must be capable of issuing ODBC commands and the DBMS must be capable of responding to them. Since version 2.0, the standard supports ANSI SQL.

Recommended Standard 4

JDBC: Short for Java Database Connectivity, a JAVA API that enables Java programs to execute SQL statements. This allows Java programs to interact with any SQL-compliant database. Since nearly all relational database management systems (DBMSs) support SQL, and because Java itself runs on most platforms, JDBC makes it possible to write a single database application that can run on different platforms and interact with different DBMSs.

JDBC is similar to ODBC, but is designed specifically for Java programs, whereas ODBC is language-independent.
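The short sketch below shows the typical JDBC pattern described above: obtain a connection through DriverManager, issue an ANSI SQL statement, and iterate over the result set. The JDBC URL, credentials, and table name are hypothetical placeholders; only the vendor's driver and connection URL would change when moving between DBMSs:

// Minimal JDBC sketch. The same code can talk to any SQL-compliant DBMS once the
// vendor's JDBC driver is on the classpath; URL, credentials, and the table queried
// here are placeholders for illustration.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:example://warehouse.example.edu/dw"; // vendor-specific URL format

        try (Connection conn = DriverManager.getConnection(url, "report_user", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT term_name, headcount FROM enrollment_summary"
                             + " WHERE academic_year = ?")) {

            stmt.setInt(1, 2004);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("term_name") + ": " + rs.getInt("headcount"));
                }
            }
        }
    }
}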

7. Data/Information Architecture Purpose

The purpose of the Data/Information architecture is to support and encourage data sharing among inter and intra-university constituents. This is in alignment with the strategic goal of increased use of data for decision making and for meeting increased demands on information in governance and in institutional reviews. Increased access to the data and information covered by this architectural plan directly supports the following university activities:

A. University reporting

B. University decision making

C. Improved efficiencies by building and providing data-driven online services.

D. Improved data and information quality through exposing more

Section 3 - Data/Information 7

Page 20: Tri-University

data to more scrutiny and use.

The Data/Information Architecture must support the business and program priorities of all three Universities. Technology investments in Data/Information Architecture must provide measurable improvements in operations, support public service and facilitate the ABOR’s goals for the Tri-University. The Data/Information Architecture must enable the development of systems that make university information and programs more accessible to our communities of interest.

The Data/Information Architecture must increase access to information and services for all faculty, staff, students as well as the general citizens of Arizona, while protecting privacy and fostering openness. The Data/Information Architecture must enable easier access and more widely available information, while still protecting individual rights of privacy.

8. Data/Information Architecture Principles

The Data/Information Architecture specifies how information resources are shared and documents the strategies, standards, and best practices for accomplishing these goals. The Data/Information Architecture principles guide the planning, design, and development of database, data warehouse, data mart, data modeling, data mining, and knowledge discovery initiatives focused on the core institutional data assets.

Principle 1: The data warehouse provides the primary central repository to support the university’s reporting and information investigation efforts. The warehouse should be designed to optimize data retrieval and involve sound extract, transform, and load principles.

Principle 2: Data must be secure and reliably available. High availability, recoverability, and data integrity must be built into the data base management practices for all stages of the data’s life-cycle (from transaction system to data warehouse).

Principle 3: Data and information need to be shared more than in the past. Access to data should be the rule rather than the exception. Restrictions on data access should be subject only to the legal and local policy restrictions in place at the institution.

Section 3 - Data/Information 8

Page 21: Tri-University

Principle 4: Raw data alone is useless; tools must be provided to allow standard reports and ad-hoc queries in order to transform the raw data into useful information. To maximize data access, these tools need to be simple to use and be well supported. The best tools are also powerful and allow end-users to drill down into multi-dimensional data sets and extract data transparently into spreadsheets (or other tools) for further analysis.

Principle 5: Broad data sharing suggests federated databases and local data marts may be necessary. Access rules and data definitions need to be respected even in these secondary systems. Where possible the same access controls should be extended to ancillary systems and ancillary systems should not duplicate existing data.

9. Data/Information Architecture Recommended Best Practices

Best Practices are approaches that have consistently been demonstrated by diverse organizations to achieve similar high-level results, which in the case of architecture, means demonstrating the principles.

Recommended Best Practice 1: Assure the data warehouse is administered by professional database administrators.

Recommended Best Practice 2: Establish the data warehouse using enterprise level database hardware and relational DBMS software capable of scaling to the size and scope required. Assure that backup procedures and resources protect the data assets stored in the data warehouse.

Recommended Best Practice 3: Consider using an “Extract Transform and Load” (ETL) tool to migrate data from the transaction systems to the data warehouse.
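Dedicated ETL tools handle this work at scale, but the minimal Java sketch below illustrates the underlying extract-transform-load pattern itself: read rows from a transaction system, apply a business rule, and batch-load the results into the warehouse. Every connection string, table, and rule shown is an invented placeholder, not an actual Tri-University system:

// Hand-rolled illustration of the extract-transform-load pattern. Real deployments
// would typically use a dedicated ETL tool; all connection details, tables, and the
// transformation rule below are hypothetical placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class NightlyEnrollmentLoad {
    public static void main(String[] args) throws SQLException {
        try (Connection source = DriverManager.getConnection(
                     "jdbc:example://sis.example.edu/prod", "etl_reader", "secret");
             Connection target = DriverManager.getConnection(
                     "jdbc:example://warehouse.example.edu/dw", "etl_loader", "secret")) {

            try (Statement extract = source.createStatement();
                 // Extract: read current registrations from the transaction system.
                 ResultSet rs = extract.executeQuery(
                         "SELECT student_id, program_code, credit_hours FROM registration");
                 PreparedStatement load = target.prepareStatement(
                         "INSERT INTO fact_registration (student_id, program_code, full_time)"
                                 + " VALUES (?, ?, ?)")) {

                while (rs.next()) {
                    // Transform: apply a simple business rule during the load.
                    boolean fullTime = rs.getInt("credit_hours") >= 12;

                    load.setString(1, rs.getString("student_id"));
                    load.setString(2, rs.getString("program_code"));
                    load.setBoolean(3, fullTime);
                    load.addBatch();
                }
                load.executeBatch(); // Load: write the transformed rows to the warehouse.
            }
        }
    }
}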

Recommended Best Practice 4: Make data cleansing everyone’s job, promoting feedback loops that correct the data at the source. Data cleansing products might help, especially as mission critical data-driven systems are implemented. Such systems often suffer if the underlying data is inaccurate.

Recommended Best Practice 5: Acquire and support a primary reporting tool that has the power and simplicity needed by the majority of campus analysts. Such a tool should have OLAP traits such as transparent download of data, ability to drill down into the data, and ability to interact with the report by changing report parameters.

Section 3 - Data/Information 9

Page 22: Tri-University

Recommended Best Practice 6: Document not only the data dictionary but also create a “data confidence” report. Explain why the data is to be trusted and how it is transformed, cleaned, and loaded into the data warehouse. Document the questions that the data is designed to answer.

Recommended Best Practice 7: Most casual access should be through a web interface, and reports should be dynamically generated rather than just statically captured. The better web interfaces even have drill-down and parameterized reporting capabilities.

10. Data/Information Architecture Technology Trends

Economic and technical trends that impact and influence the ITA are to be updated annually. Areas that will require near-term investigation include:

Knowledge discovery and data mining

OLAP

Interoperability standards

XML semantic standards (communities of interest)

Data modeling

Centralized access security

Decision support systems

Dashboard or balanced scorecard reporting

Section 3 - Data/Information 10

Page 23: Tri-University

Appendix A. National Institute of Standards and Technology Data Management Model

Source: The National Institute for Standards and Technology. Information Management Directions: The Integration Challenge, NIST Special Publication 500-167, September 1989. Retrieved April 5, 2004 from http://www.faa.gov/niac/pdf/wn18_fia.pdf

Section 3 - Data/Information 11

Page 24: Tri-University

Appendix B. Example of a University Data Warehouse Environment

Section 3 - Data/Information 12

Page 25: Tri-University

Appendix C. Glossary of Terms

These terms are listed here due to their relevance to this domain. An extensive glossary of terms for the entire ITA document is available as the last section of this document.

Ad Hoc Query Any query that cannot be determined prior to the moment the query is issued. An ad hoc query consists of dynamically constructed SQL, which is usually constructed by desktop query tools.

Business intelligence

A term that is becoming popular to describe a broad category of technologies and software for helping organizations make better business decisions.

Data Mart A Data Mart contains a subset of historical information from the Data Warehouse. Data Marts are focused on a particular subject area.

Data modeling A method used to define and analyze the data requirements needed to support the business functions of an enterprise. Data modeling defines the relationships between data elements and structures.

Data Warehouse

The Data Warehouse is a comprehensive and centralized database for storing institutional information. It allows the university community to assess past performance, develop better forecasts, and support business decisions.

ETL Extract, Transform and Load. Tools such as Informatica extract data from operational systems, transform the data according to university business rules and load the data into the Data Warehouse.

Metadata Data about data. One example is the definition of a data element. Metadata is to the data warehouse what the card catalog is to the traditional library. It is an information directory, containing the yellow pages, a road map, and a list of places of interest for navigating a data warehouse.

Operational Data Store

The Operational Data Store (ODS) is the component of the Data Warehouse that provides tactical support for the university community. It captures current operational data on a nightly basis.

Legacy system A software system, typically found on the mainframe, that is vital to the institution but difficult to maintain and integrate with newer systems.

Operational Reports

Reports generated from the Operational Data Store (ODS), providing information from the most recent business day.

Scalability The ability to scale to support larger or smaller volumes of data and more or fewer users, and to increase or decrease size or capability cost-effectively with minimal impact on services.

System of record

A source of data for the data warehouse. The system of record, which may consist of legacy databases, extract files, and so on, is treated as the official record.

Strategic Reports

Reports generated from the Data Warehouse and Data Marts, providing historical information for statistical, longitudinal, and trend analysis.

Transactional Reports

Reports generated directly from the transactional systems (e.g. student, human resources, and financial systems), providing online, real-time information.

Transactional System

The operational information system from which the Data Warehouse extracts data, i.e. the university student, human resources, and financial systems.


Section 3 - Data/Information 13

Page 26: Tri-University

TRI-UNIVERSITY

System Wide Information Technology

Architecture

Target Middleware Architecture

AUG 2004

Section 4 - Middleware 1

Page 27: Tri-University

Tri-University Target Middleware Architecture

TABLE OF CONTENTS

1. Introduction

2. Middleware Architecture Vision/Purpose

3. Middleware Architecture Standards

4. Middleware Architecture Principles

5. Middleware Target Architecture

6. Middleware Architecture Recommended Best Practices

7. Middleware Architecture Technology Trends

Appendix A. Glossary of terms

Section 4 – Middleware 2

Page 28: Tri-University

Tri-University Target Middleware Architecture

1. Introduction

The Tri-University Information Technology Architecture (ITA) describes a framework for information technology that supports the universities’ strategic plans. ITA facilitates change in an orderly, efficient manner by describing a direction for the future that is supported by underlying principles, standards, and best practices. The implementation of ITA presents opportunities for the universities to interoperate to deliver a higher level of courteous, efficient, responsive, and cost-effective service to their respective communities of interest. Individually, each university can independently implement ITA components that are interoperable; however, economies of scale, consolidation, and cross-university savings may best be realized not just through interoperability, but also by working together in partnership and sharing.

The Tri-University's Information Technology Conceptual Architecture document explains the overall strategic alignment of the ITA with the universities’ goals and objectives, the principles behind the architecture, the domains to be addressed, the plans for addressing domains, and the technology trends to be taken into consideration.

ITA includes important business, governance, and technical components. The technical components collectively referred to in the ITA provide technical guidance to the Universities. That guidance is supported by principles correlated to unit business functions, recommended standards, and applicable recommended best practices.

Tri-University ITA Domains: Data/Information, Middleware, Network, Platform, Security, Software

The following proposed Middleware Architecture guidelines provide a means to describe middleware implementation at the three universities. Middleware, or "glue", is a layer of software between the network and the applications. This software provides services such as identification, authentication, authorization, directories, and security. In today's Internet, applications usually have to provide these services themselves, which leads to competing and incompatible standards. By promoting standardization and interoperability, middleware will make advanced network applications much easier to use at the three universities.

2. Middleware Architecture Vision/Purpose

The items included under the heading of middleware differ depending on who is making the list. These categorizations are all centered around sets of tools and data that help applications use networked resources and services. Some

Section 4 – Middleware 3

Page 29: Tri-University

services, like authentication and directories, are in all categorizations. Others, such as co-scheduling of networked resources, secure multicast, and object brokering and messaging, are the major middleware interests of particular communities, but attract little interest outside of those particular communities. A popular definition of middleware that reflects this diversity of interests is "the intersection of the stuff that network engineers don't want to do with the stuff that applications’ developers don't want to do."

Middleware has emerged as a critical second level of the enterprise IT infrastructure. The need for middleware stems from growth in the number of applications, in the customizations within those applications and in the number of locations in our environments. These and other factors now require that a set of core data and services be moved from their multiple instances into a centralized institutional offering. This central provision of service eases application development, increases robustness, assists data management, and provides overall operating efficiencies.

3. Middleware Architecture Standards

Interoperable middleware between organizations is a particular need of higher education. Researchers need to have their local middleware work with national scientific resources such as supercomputing centers, scholarly databases, and federal scientific facilities and labs. Advanced network applications will transform instructional processes, but they will depend on middleware to function. The fact that higher education is fractal in structure will create markets that need interoperable standards and products.

4. Middleware Architecture Principles

Core middleware services are those that all other middleware services depend upon. The challenges in providing these services are as much political as they are technical. Many of the hardest issues involve the ownership and management of data in the complex world of higher education.

5. Middleware Target Architecture

These five services are central to middleware as a whole.

Identifiers - A set of computer-readable codes that uniquely specifies a subject.

Authentication - The process of a subject electronically establishing that it is, in fact, the subject associated with a particular identity.

Directories - Central repositories that hold information and data associated with identities. These repositories are accessed by people and by applications to, for example, get information, customize generic environments to individual preferences, and route mail and documents (a brief directory-lookup sketch follows this list).

Section 4 – Middleware 4

Page 30: Tri-University

Authorization - Those permissions and workflow engines that drive transaction handling, administrative applications and automation of business processes.

Certificates and public-key infrastructures (PKI) - Certificates and PKI are related to the previous four core middleware services in several important ways.
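As a small illustration of the Directories service above, the sketch below queries an LDAP directory for one person's mail attribute using the JNDI API included in the JDK. The server address, search base, filter, and attribute names are hypothetical, and a production deployment would authenticate and use TLS rather than an anonymous cleartext connection:

// Minimal JNDI/LDAP directory lookup. Server address, search base, and attribute
// names are placeholders; real deployments would also bind with credentials over TLS.
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import java.util.Hashtable;

public class DirectoryLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://directory.example.edu:389");

        InitialDirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] {"cn", "mail"});

            // Look up one person by a unique identifier (uid) and print the mail attribute.
            NamingEnumeration<SearchResult> results = ctx.search(
                    "ou=people,dc=example,dc=edu", "(uid=jdoe)", controls);
            while (results.hasMore()) {
                SearchResult result = results.next();
                System.out.println(result.getAttributes().get("mail"));
            }
        } finally {
            ctx.close();
        }
    }
}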

6. Middleware Architecture Recommended Best Practices

The software should be loosely coupled. Given

uncertainties such as the volatility of the technologies involved, it is likely that middleware will go through a rapid evolution in the next few years. Universities will want to replace and enhance components without having to redo the entire infrastructure.

Software deployments should demonstrate early wins. Given the political aspects of middleware deployment, it will be very useful to show immediate benefits of early components. This will help motivate the significant institutional investments that will be required. Individual components should have value in themselves as well as in concert.

Make software as economically and technically cheap as possible. IT organizations in higher education have limited resources. Financial stresses and employee retention issues suggest keeping software and expertise costs low.

The software systems should accommodate the distinctive aspects of higher education. The higher education IT environment has a number of special characteristics, such as the migratory workstation habits of students, traditions of academic freedom and privacy, and the legal requirements of public institutions. Middleware solutions must accommodate these characteristics.

The software should be easy to use. End users prefer natural naming and intuitive tools. Users may not be able to handle complexity in management of middleware components or personal data.

7. Middleware Architecture Technology Trends

The central goal of the Internet2 Middleware Initiative (I2-MI) is to foster the deployment of a "middle layer" of national network connectivity, one that sits on top of the machine-to-machine connectivity of IP, connecting people and objects to the network. The core elements of this layer are identifiers, authentication, directories, authorization and PKI. I2-MI's task is to build from these components a middleware infrastructure that has a coherent architecture that enables applications and provides data for network operations, that is interoperable within higher education and research, and that sheds light on how to build a middleware fabric for the wider society. The major activities of the initiative include the following.

Middleware Architecture Committee for Education (MACE). This group of leading campus IT architects is the

Section 4 – Middleware 5

Page 31: Tri-University

overarching management structure for I2-MI. Functioning as an informal advisory board for middleware, MACE provides both technical and programmatic direction for the rest of the initiative, and will play a critical role in middleware development within higher education. MACE is establishing working groups in three areas: directories, PKI, and web authentication and authorization.

Early Harvest and Early Adopters. The Early Harvest technical workshop brings together leading campus IT practitioners to establish a set of best practices for identifiers, authentication and directories. Early Adopters, the campus testbed phase of Early Harvest, is pushing forward the deployment of core middleware at currently 11 US campuses. Based on this experience, Early Adopters will develop roadmaps for other campuses to follow in their own deployments. Both Early Harvest and Early Adopters are funded by the National Science Foundation (NSF).

PKI Activities. I2-MI is working with the federal government in PKI developments. I2 is also working with Educause and CNI to establish a coherent vision for a PKI for higher education. In order to catalyze research and establish testbeds for interoperability, I2-MI is also beginning to define higher-education-specific research issues within PKI.

Shibboleth. A shibboleth is a word (or other identifier) by which one group of people (or computers) can recognize another. In the Shibboleth project, I2-MI is working with IBM to define the functional requirements for an inter-institutional resource-sharing infrastructure. If the functional specifications point to a solution, that solution will be implemented, allowing users from one campus to use their local credentials to access restricted resources on the web servers of other, remote campuses. This work is intended to lead to the broad distribution of an Apache server module that will permit resource sharing, requiring only that participating institutions maintain a standard deployment of authentication and directory services.

The Beta Grid. The Beta Grid is an NSF effort that will deploy 20 to 100 advanced computational and storage nodes at a similar number of universities, and that will interconnect these servers into a seamless computing environment. This will enable distributed computations, co-scheduling of network resources, high-volume data flows, and real-time manipulation of data. I2-MI will work to anchor these advanced services in the emerging common campus middleware infrastructure.

Medical middleware. One "vertical" market of particular interest to higher education involves the complex set of issues associated with medical schools. As important parts of both campus and health service environments, medical schools present significant challenges to middleware deployment. Through Early Adopters, among whose eleven campuses are eight with medical schools, and through working with national medical organizations such as National Institutes of Health (NIH) and National Library

Section 4 – Middleware 6

Page 32: Tri-University

of Medicine (NLM) and corporate medical software providers, I2-MI is developing approaches to core middleware services that will meet the demanding requirements of medical schools.

Directories. With the loose management structures of institutions of higher education, their need for inter-institutional interoperability, and complex public regulations, directories within higher education present major design and implementation issues. Efforts are underway to evaluate the need for a common database and directory subschema for educational institutions.

Section 4 – Middleware 7

Page 33: Tri-University

Appendix A. Glossary of Terms

An extensive glossary of terms for the entire ITA document is available as the last section of this document.

Section 4 – Middleware 8

Page 34: Tri-University

TRI-UNIVERSITY

System Wide Information Technology

Architecture

Target Network Architecture

AUG 2004

Section 5 - Network 1


Page 36: Tri-University

Tri-University Target Network Architecture

TABLE OF CONTENTS

1. Introduction

2. Network Architecture Vision

3. Network Architecture Purpose

4. Network Architecture Definition

5. Target Network Architecture

6. Recommended Network Architecture Standards

7. Network Architecture Principles

8. Network Architecture Recommended Best Practices

9. Network Architecture Technology Trends

Appendix A. OSI Reference Model

Appendix B. Glossary of Terms

Section 5 – Network 3

Page 37: Tri-University

Tri-University Target Network Architecture

1. Introduction

The Tri-University Information Technology Architecture (ITA) describes a framework for information technology that supports the Universities’ strategic plans. ITA facilitates change in an orderly, efficient manner by describing a direction for the future that is supported by underlying principles, standards, and best practices. The implementation of ITA presents opportunities for the Universities to interoperate to deliver a higher level of courteous, efficient, responsive, and cost-effective service to their respective communities of interests. Individually, each University can independently implement ITA components that are interoperable. Economies of scale, consolidation, and cross-university savings may best be realized not just through interoperability, but by working together in partnership and sharing.

The Tri-University's Information Technology Conceptual Architecture document explains the overall strategic alignment of the ITA with the Universities’ goals and objectives, the principles behind the architecture, the domains to be addressed, the plans for addressing domains, and the technology trends to be taken into consideration.

ITA includes important business, governance, and technical components. The technical components, collectively referred to in the ITA, provide technical guidance to the Universities. That guidance is supported by principles correlated to unit business functions, recommended standards, and applicable recommended best practices.

Tri-University ITA Domains: Data/Information, Middleware, Network, Platform, Security, Software

Network Architecture is the first ITA domain being developed by the Tri-University architectural task teams. The ITA will be developed in phases and updated periodically. The domain architectures, driven by the business and program priorities of the Universities, are aligned with and facilitate the strategic goals of the State Universities’ Information Technology (IT) plans.

The technical components of Network Architecture are presented relative to the Open Systems Interconnection (OSI) Network Reference Model as a single reference view of communication to furnish a common ground for analysis, discussion, and standards development.

2. Network Architecture Vision

The Tri-Universities’ Network Architecture delineates a reliable, scalable, resilient set of infrastructures that economically support the Universities’ core missions in an efficient and effective manner. Network Architecture provides the framework and foundation to enable university business processes, instruction and research, new business opportunities, and new methods for delivering service.

3. Network Architecture Purpose

Network Architecture must align with and facilitate the strategic goals of the Universities’ and project IT plans. The key goals of the Tri-University IT Plan that impact requirements for Network Architecture are:

A. Increase the use of e-business solutions.

B. Effectively share common IT resources to enable the Universities to better serve the people of Arizona.

C. Improve access to broadband infrastructure system wide.

D. Improve the quality, efficiency, and usefulness of system wide applications integration and data sharing.

E. Improve the capability of IT functions in order to deliver quality products and services.

Network Architecture must support the business and program priorities of all three Universities. Technology investments in Network Architecture must provide measurable improvements in operations and support public service and should facilitate the ABOR’s goals for the Tri-University. The Network Architecture must enable the development of systems that make university information and programs more accessible to our community of interest.

Network Architecture must enable new applications to be developed more rapidly and modified more easily as business requirements change. New systems must be developed to accommodate rapid rates of change in the business and technical environments.

Network Architecture must support the use of information technology to continually improve university efficiency and effectiveness. The Network Architecture must define appropriate technology standards while still enabling old and new systems to work together.

Network Architecture must increase access to information and services for all faculty, staff, and students as well as the general citizens of Arizona, while protecting privacy and fostering openness. The Network Architecture must enable easier access and more widely available information, while still protecting individual rights of privacy.

Network Architecture specifies how information-processing resources are interconnected and documents the standards for topology (design of how devices are connected together), transport media (physical medium or wireless assignments), and protocols (for network access and communication).

4. Network Architecture Definition

Network Architecture defines common, industry-wide, open-standards-based, interoperable network infrastructures providing reliable and ubiquitous communication for the Universities’ distributed information processing environments. It defines various technologies required to enable connections among its internal users, other universities, federal government, as well as the private business sector.

5. Target network architecture

The development of Target Network Architecture addresses all relevant criteria on a broad scale, rather than as part of the deployment of an individual application. The recommendations and decisions that are made during the development process may limit or eliminate certain options for future network components or services. The development of the Target Network Architecture is a collaborative process to allow all application/system/service developers/implementers to participate so that their current investment in certain products and services can be maximized while also developing a transition plan to allow obsolete or non-conforming elements to be phased out. Maximizing the investment and transitioning these elements should not be seen as mutually exclusive activities, since both are in the best interest of the universities.

The development of the Target Network Architecture is a continuous process, which is critically important in an environment where funding to implement may not be immediately available. The ongoing process provides the opportunity to continually refine the Target Network Architecture to keep it aligned with business and educational strategies and requirements, emerging standards, and changing technology.

The recommended implementation approach for the Target Network Architecture is as follows:

A. Developers/implementers assess their technology position relative to the Target Network Architecture recommendations and develop an implementation plan, including any necessary funding. They incorporate that plan into their project IT Plan.

B. Developers/implementers are responsible for the execution of the implementation.

C. The universities’ CIOs will ensure that the recommended principles, standards, and best practices are incorporated into all system wide Requests for Proposal that culminate in system wide procurement contracts. It is critical to align the procurement documents and process with the Target Network Architecture to provide projects with a streamlined vehicle to purchase products and services that support the Target Network Architecture.


6. Recommended Network Architecture Standards

Network Architecture Standards are established to coordinate project and university implementation of network infrastructure. The goal is to employ open systems based on common, proven, and pervasive industry-wide, approved, open standards; however, a full complement of open standards does not yet exist for all components of network infrastructure. Therefore, combinations of open standards, de facto industry standards, and mutually agreed upon product standards are currently required to support the Tri-Universities' heterogeneous operating environment. The recommended standards are contained in this document. All standards are periodically reviewed.

Recommended Standard 1

The standard for copper network cabling is Category 6 Unshielded Twisted Pair (UTP).

Category 6 UTP is certified to carry 100/1000 Mbps of data.

Category 6 UTP supersedes Category 5e UTP for new installations.

Category 6 UTP is an industry-standard structured cabling system and has support of the Institute of Electrical and Electronics Engineers (IEEE).

UTP shall be used unless specific issues exist, such as high Electromagnetic Interference (EMI) or long transport distances.

Wiring, cable, connector, and equipment vendors have standardized on Category 6 UTP.

Installation of copper network cabling shall conform to applicable building codes, IEEE, EIA/TIA, and BICSI.

Recommended Standard 2

The standards for fiber network cabling are single-mode and multi-mode, depending on requirements.

Intra-building fiber network cabling may be either multi-mode or single-mode. 62.5/125-micron multi-mode fiber is capable of transmitting Gigabit Ethernet up to a distance of approximately 220 meters. 50/125-micron multi-mode fiber is capable of transmitting Gigabit Ethernet up to a distance of approximately 550 meters. 8/125-micron single-mode fiber is capable of transmitting Gigabit Ethernet up to a distance of approximately 5 kilometers.

Inter-building fiber network cabling is both multi-mode and single-mode to allow Gigabit Ethernet and above transmission rates over greater distances as well as maximum flexibility.


All fiber network cabling shall be open, industry-standard as supported by IEEE.

Installation of fiber network cabling shall conform to applicable building codes, IEEE, EIA/TIA, and BICSI.
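The approximate Gigabit Ethernet distance limits cited in this standard can be captured in a simple lookup so that designers can sanity-check a proposed cable run. This is an illustrative sketch only; the figures are the approximate values stated above, not vendor specifications.

```python
# Approximate Gigabit Ethernet reach per fiber type, as cited in
# Recommended Standard 2 (illustrative sketch, not a vendor datasheet).
GIGABIT_REACH_METERS = {
    "62.5/125 multi-mode": 220,
    "50/125 multi-mode": 550,
    "8/125 single-mode": 5000,
}

def fiber_ok(fiber_type: str, run_length_m: float) -> bool:
    """Return True if the proposed run is within the cited reach."""
    return run_length_m <= GIGABIT_REACH_METERS[fiber_type]

# Example: a 300 m run is too long for 62.5/125 multi-mode but is fine on
# 50/125 multi-mode or single-mode fiber.
print(fiber_ok("62.5/125 multi-mode", 300))  # False
print(fiber_ok("50/125 multi-mode", 300))    # True
```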

Recommended Standard 3

The standards for wireless network connectivity are versions of IEEE 802.11 (LAN) and IEEE 802.16 (MAN), combined with the relevant security standards, such as IEEE 802.1X, VPN, LEAP, etc.

Versions of IEEE 802.11 offer relatively high-speed (11 Mbps and 54 Mbps) links.

IEEE 802.16 supports Metropolitan Area Networks operating in frequency bands up to 66 GHz.

The majority of wireless manufacturers have adopted the IEEE 802.1x security standard in addition to other security standards.

Recommended Standard 4

The logical network topology shall be a star. The physical network topology may be a star, ring, or mesh.

Star, ring, and mesh topologies provide the capability to easily add or remove network devices as necessary.

Star, ring, and mesh topologies minimize the effect of connection failures between devices.

Recommended Standard 5

The standard for network link layer access protocol is Ethernet, IEEE 802.3 Carrier Sense Multiple Access with Collision Detection (CSMA/CD).

Ethernet is a widely accepted, reliable, stable protocol supported by manufacturers.

Ethernet is scalable; faster versions are emerging to manage the increase of data flow.

100/1000 Ethernet has the bandwidth necessary to support the requirements of converged voice, data, and video applications.

10 Gigabit Ethernet is the next generation of Ethernet to be deployed.

Recommended Standard 6

The standards for transport and network protocols are TCP/UDP and IP, respectively.

TCP/UDP and IP make up an open protocol suite that allows Internet access and the seamless integration of intranets, extranets, VPNs, and LANs.

The majority of manufacturers support TCP/IP and TCP/IP enabled products.
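As a small illustration of the difference between the two transport protocols named in this standard, the sketch below opens a TCP connection (connection-oriented and reliable) and sends a UDP datagram (connectionless) using only the Python standard library. The host names and ports are placeholders.

```python
# Illustrative sketch of TCP vs. UDP transport over IP using the Python
# standard library; host names and ports are placeholders.
import socket

# TCP: connection-oriented and reliable; suited to transactions that must
# arrive intact and in order.
with socket.create_connection(("app.example.edu", 80), timeout=5) as tcp:
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: app.example.edu\r\n\r\n")
    print(tcp.recv(256))

# UDP: connectionless datagrams; lower overhead, no delivery guarantee,
# often used for lookups, telemetry, and real-time media.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"status ping", ("monitor.example.edu", 9999))
udp.close()
```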

Recommended Standard 7

Network devices (routers, switches, firewalls, access servers, etc.) shall be manageable with Network Management platforms that use Simple Network Management Protocol (SNMP) and Remote Network Monitoring (RMON).

SNMP is part of the TCP/IP protocol suite.

SNMP and RMON facilitate the exchange of management information between network devices.

SNMP allows for network performance management, isolation and analysis of network problems, and growth planning.

The majority of vendors support SNMP and RMON.
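As an illustration of SNMP-based polling, the sketch below reads a device's standard sysDescr object. It assumes the third-party pysnmp library's high-level API and an SNMPv2c read-only community string; the device name and community are placeholders, and any SNMP management platform would serve the same purpose.

```python
# Illustrative sketch only: assumes the third-party pysnmp library
# (high-level API) and an SNMPv2c read-only community string.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),            # SNMPv2c
        UdpTransportTarget(("switch1.example.edu", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print("SNMP query failed:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```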

Recommended Standard 8

Switching (Layers 2, 3, and 4) technologies are the standard for Local Area Network (LAN) network device connectivity.

Switching at OSI Layers 2, 3, and 4 in a LAN improves network performance by enabling the balancing of network traffic across multiple segments, thus reducing resource contention, providing scalability, and increasing throughput capacity.

Switching at OSI Layers 2, 3, and 4 in a LAN provides enhanced security and network management.

Recommended Standard 9

Network Interfaces: Internal networks may use “private” unregistered IP addresses for network workstations and appliances. External network Interfaces shall use “public” registered IP addresses for all external ports on Internetworking devices.

Network Address Translation (NAT) techniques deployed at network boundaries to the Internet enable the widespread reuse of non-unique or unregistered IP Version 4 (IPv4) addresses while still providing the required connectivity to applications and the Internet.

The Internet Assigned Numbers Authority (IANA) has reserved three blocks of IP address space for “private” Internets (reference Network Working Group Request for Comment (RFC) 1918). The blocks are 10.0.0.0 – 10.255.255.255, 172.16.0.0 – 172.31.255.255, and 192.168.0.0 – 192.168.255.255. Using IP addresses outside these blocks as unregistered addresses means using them without coordination with IANA or an Internet registry.

IP Version 6 (IPv6) will provide for expanded addressing; however, for an indefinite period, both IPv4 and IPv6 will coexist.

“Public” registered IP addresses provide the required uniqueness for Internet and network integrity. All public IP addresses should be traceable to a specific device.

The IANA provides coordination of all “public” IP address space.
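The RFC 1918 blocks listed above can be checked programmatically; Python's standard ipaddress module already knows the private ranges, as the short sketch below shows.

```python
# Check whether addresses fall in the RFC 1918 "private" blocks using the
# Python standard library (illustrative sketch).
import ipaddress

RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("10.42.7.9"))    # True  - internal, unregistered space
print(is_rfc1918("172.31.255.1")) # True
print(is_rfc1918("8.8.8.8"))      # False - public, registered space
# The standard library reaches the same conclusion:
print(ipaddress.ip_address("10.42.7.9").is_private)  # True
```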

Recommended Standard 10

Internal workstation network IP addressing may be assigned using Dynamic Host Configuration Protocol (DHCP).

DHCP provides flexibility for growth and migration of networks.

DHCP facilitates and simplifies IP network administration and the addition of workstations and devices to networks.

DHCP address allocation may be (1) an automatic allocation where DHCP assigns a permanent IP address to the workstation; (2) manually allocated and assigned by the DHCP administrator; or (3) dynamically allocated where DHCP assigns an IP address to a workstation for a limited period of time (lease).

Static IP addresses may be used in order to address limited, special security issues.
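To illustrate the dynamic (leased) allocation mode described above, the following sketch models a toy DHCP-style lease table: addresses are drawn from a pool and expire after a fixed lease period. It is a conceptual illustration of lease behavior only, not an implementation of the DHCP protocol; the pool and lease length are arbitrary.

```python
# Conceptual sketch of dynamic (leased) address allocation as described in
# Recommended Standard 10; this models a lease table, not the DHCP protocol.
import ipaddress
import time

LEASE_SECONDS = 3600  # one-hour lease, for illustration
POOL = [str(ip) for ip in ipaddress.ip_network("10.20.30.0/28").hosts()]
leases = {}  # MAC address -> (IP address, expiry timestamp)

def allocate(mac: str, now: float) -> str:
    """Renew an unexpired lease, or hand out the next free address."""
    ip, expiry = leases.get(mac, (None, 0))
    if ip and now < expiry:
        leases[mac] = (ip, now + LEASE_SECONDS)   # renewal
        return ip
    in_use = {addr for addr, exp in leases.values() if exp > now}
    free = next(addr for addr in POOL if addr not in in_use)
    leases[mac] = (free, now + LEASE_SECONDS)
    return free

now = time.time()
print(allocate("00:11:22:33:44:55", now))       # first free address
print(allocate("00:11:22:33:44:55", now + 10))  # same address, lease renewed
```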

7. Network Architecture Principles

Network Architecture Principles guide the planning, design, and selection of network technology and services.

Principle 1

Networks provide the infrastructure to support the universities’ core mission, instruction and research, and administrative processes by:

A. enabling access to a wide spectrum of information, applications, and resources, regardless of the method of delivery or the location of the customer,

B. accommodating new and expanding applications including different types of data (e.g., voice, data, image, and video), and a variety of concurrent users and

C. passing data across the network in a timely manner so that business decisions can be based on up-to-date information.

Principle 2

Networks must be operational, reliable, and available (24x7) for essential business processes and mission-critical business, instruction and research operations. Reliability, redundancy, and fault tolerance must be built-in, not added on, to ensure that any single point of failure does not have severe adverse effects on business applications or services.

Principle 3

Networks must be designed for growth, flexibility, and adaptability. As new processes and systems are developed and new information becomes available, networks must scale to allow for increased demand.

Principle 4

Barring special circumstances, networks should use industry-proven, mainstream technologies based on industry-wide, open standards, and open architecture to allow systems to interoperate to reduce communication and integration complexity and provide for the sharing of information across the Tri-University system.

Principle 5

Networks must be designed with confidentiality and security of data as a high priority and must be implemented with adherence to security, confidentiality, and privacy policies as well as applicable statutes such as HIPAA, FERPA, and GLBA, to protect information from unauthorized access and use.

Principle 6

Network access to University resources should be a function of authentication and authorization, not of network address, so that information, applications, and system resources are provided to appropriate requesters in a timely and efficient manner. Authentication and authorization of users must be performed according to the security rules of the unit and the University.

Principle 7

Networks must be designed to be “application aware” (at layer 4) to deliver business-critical application systems. To deliver these services, networks must recognize, classify, prioritize, and protect business-critical applications while still enabling bandwidth-intensive and delay-sensitive multimedia and voice applications.


8. Network Architecture Recommended Best Practices

Best Practices are approaches that have consistently been demonstrated by diverse organizations to achieve similar high-level results, which in the case of architecture, means demonstrating the principles.

Recommended Best Practice 1

Position networks for future growth in traffic and expansion of services such as voice/video by implementing:

flexible, open network designs and

networks to carry converged voice, video, data, and image traffic.

Recommended Best Practice 2

When industry standards do not yet exist or do not meet application requirements, use interim, common, proven, and pervasive product-based standards for networks.

Recommended Best Practice 3

Network planning must be an integral part of application design and development; it must be continually reviewed in production.

Recommended Best Practice 4

Design network-neutral applications. Application code should be isolated from the network-specific code so business rules and data access code can be deployed without regard to the type of network (i.e. WAN or LAN) or redeployed on a different platform, as necessary. Network-neutral applications allow networks to remain scalable and portable.

Recommended Best Practice 5

Consider the impact of middleware and data movement on network utilization and performance by:

Performing transactions locally between the resource manager and the queue to minimize network traffic,

Using asynchronous, store-and-forward messaging to limit the scope of transactions and network requirements between remote sites,

Using push technology, rather than client polling, to balance server and network link loading and

Using multicast, rather than broadcast transmission, to distribute messages to multiple points.
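The last point can be illustrated with the Python standard library: a single datagram sent to a multicast group reaches every subscribed receiver instead of being broadcast to every host on the segment. This is a conceptual sketch only; the group address and port are placeholders.

```python
# Illustrative sketch: distributing one message to multiple receivers via
# IP multicast instead of broadcast (standard library only; the group
# address and port are placeholders).
import socket
import struct

GROUP, PORT = "224.1.10.10", 5007

# Receiver: joins the multicast group; only interested hosts see the traffic.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
receiver.bind(("", PORT))
membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
receiver.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

# Sender: one datagram addressed to the group rather than to every host.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
sender.sendto(b"course catalog updated", (GROUP, PORT))
sender.close()

# receiver.recv(1024) would now return the datagram on hosts that joined.
receiver.close()
```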

Recommended Best Practice 6


Unit business functions drive network design for redundancy, fault tolerance, and disaster recovery.

Recommended Best Practice 7

Units should agree on the use of a common set of tools for network design and documentation to ensure that documentation and standard practices are followed and to promote opportunities for sharing and consolidation.

Recommended Best Practice 8

Establish technical contacts within the recognized units with responsibility for specified blocks of IP addresses.

9. Network Architecture Technology Trends

Trends, economic and technical, that impact and influence ITA are to be updated annually. Areas that will require near-term investigation include Gigabit and multi-Gigabit Ethernet, communication convergence, IPv6, and growing security requirements.


Appendix A. OSI Reference Model

The Open Systems Interconnection (OSI) Reference Model, developed by the International Standards Organization (ISO), provides a framework for organizing the various data communications functions between disparate devices. This model is a guideline for developing standards that allow the interoperation of equipment produced by various manufacturers. Control is passed from one layer to the next, starting at the application layer in one station, proceeding to the bottom layer, over the channel to the next station and back up the hierarchy. Systems that conform to these standards and have interoperability as their goal are referred to as open systems.

The 7 Layers of the OSI Model

Application (Layer 7)

This layer supports application and end-user processes. Communication partners are identified, quality of service is identified, user authentication and privacy are considered, and any constraints on data syntax are identified. Everything at this layer is application-specific. This layer provides application services for file transfers, e-mail, and other network software services. Telnet and FTP are applications that exist entirely in the application level. Tiered application architectures are part of this layer.

Presentation (Layer 6)

This layer provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Session (Layer 5)

This layer establishes, manages and terminates connections between applications. The session layer sets up, coordinates, and terminates conversations, exchanges, and dialogues between the applications at each end. It deals with session and connection coordination.

Transport (Layer 4)

This layer provides transparent transfer of data between end systems, or hosts, and is responsible for end-to-end error recovery and flow control. It ensures complete data transfer.

Network (Layer 3)

This layer provides switching and routing technologies, creating logical paths, known as virtual circuits, for transmitting data from node to node. Routing and forwarding are functions of this layer, as well as addressing, internetworking, error handling, congestion control and packet sequencing.

Data Link (Layer 2)

At this layer, data packets are encoded and decoded into bits. It furnishes transmission protocol knowledge and management and handles errors in the physical layer, flow control and frame synchronization. The data link layer is divided into two sub layers: The Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. The MAC sub layer controls how a computer on the network gains access to the data and permission to transmit it. The LLC layer controls frame synchronization, flow control and error checking.

Physical (Layer 1)

This layer conveys the bit stream (electrical impulse, light, or radio signal) through the network at the electrical and mechanical level. It provides the hardware means of sending and receiving data on a carrier, including defining cables, cards, and physical aspects. Fast Ethernet, RS-232, and ATM are protocols with physical layer components.
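For quick reference, the seven layers and the example protocols and functions mentioned above can be summarized in a small data structure; this is simply a restatement of the table, not an addition to it.

```python
# The OSI layers and example protocols/functions named in this appendix,
# restated as a lookup table for quick reference.
OSI_LAYERS = {
    7: ("Application",  "end-user services: FTP, Telnet, e-mail"),
    6: ("Presentation", "data representation, encryption (syntax layer)"),
    5: ("Session",      "session setup, coordination, teardown"),
    4: ("Transport",    "end-to-end transfer, error recovery, flow control"),
    3: ("Network",      "routing, logical addressing, packet sequencing"),
    2: ("Data Link",    "framing, MAC/LLC, physical-layer error handling"),
    1: ("Physical",     "bit transmission: Fast Ethernet, RS-232, ATM"),
}

for number in sorted(OSI_LAYERS, reverse=True):
    name, role = OSI_LAYERS[number]
    print(f"Layer {number}: {name:<12} {role}")
```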


The source for the accompanying OSI model graphic is The Abdus Salam International Centre for Theoretical Physics.

The reference model definitions are adapted from Webopedia, an online encyclopedia dedicated to computer technology.


Appendix A. Glossary of Terms

An extensive glossary of terms for the entire ITA document is available as the last section of this document.


TRI-UNIVERSITY

System Wide Information Technology

Architecture

Target Platform Architecture

AUG 2004


Tri-University Target Platform Architecture

TABLE OF CONTENTS

1. Introduction

2. Platform Architecture Vision/Purpose

3. Platform Architecture

4. Platform Architecture Standards

5. Platform Recommended Best Practices

6. Future Technology Trends

Appendix A. Glossary of Terms


Tri-University Target Platform Architecture

1. Introduction

The Tri-University Information Technology Architecture (ITA) describes a framework for information technology that supports the Universities’ strategic plans. ITA facilitates change in an orderly, efficient manner by describing a direction for the future that is supported by underlying principles, standards, and best practices. The implementation of ITA presents opportunities for the Universities to interoperate to deliver a higher level of courteous, efficient, responsive, and cost-effective service to their respective communities of interests. Individually, each University can independently implement ITA components that are interoperable. Economies of scale, consolidation, and cross-university savings may best be realized not just through interoperability, but by working together in partnership and sharing.

The Tri-University's Information Technology Conceptual Architecture document explains the overall strategic alignment of the ITA with the Universities’ goals and objectives, the principles behind the architecture, the domains to be addressed, the plans for addressing domains, and the technology trends to be taken into consideration.

ITA includes important governance and technical components. The technical components, referred to collectively as the ITA, provide technical guidance to the Universities. That guidance is supported by principles related to unit functions, recommended standards, and applicable recommended best practices.

Tri-University ITA Domains: Data/Information, Middleware, Network, Platform, Security, Software

The Platform Architecture is one of the ITA domains being developed by the Tri-University architectural task teams. The complete ITA will be developed in phases and updated periodically. The domain architectures, driven by the business and program priorities of the Universities, are aligned with, and facilitate the strategic goals of the State Universities’ Information Technology (IT) plans.

2. Platform Architecture Vision/Purpose

The Platform Architecture describes common, industry-wide, open-standards-based, interoperable devices facilitating the reliable and pervasive availability of, access interfaces with, and processing for, the Universities' distributed information processing environment. It defines the various technologies required to deliver individual units' services to all system users. It allows the Universities and individual departments to deploy and support effective and efficient end-user access interfaces to application systems, as well as providing the processing capability to execute application systems, while increasing the use of e-solutions and maintaining traditional methods of service delivery to client users.

The Platform Infrastructure Standard is intended to be vendor/manufacturer neutral by design, focusing instead on relative versatility, capability to seamlessly interoperate with other platform devices, operating systems, embedded security, adherence to open or pervasive industry standards, provision for open system standard interfaces, and utilization of open standard drivers.

3. Platform Architecture

Platform Architecture addresses platform devices relative to their: versatility, capability to seamlessly interoperate with other platform devices, operating systems, embedded security, adherence to open or pervasive industry standards, provision for open system standard interfaces, and utilization of open standard drivers. This approach focuses on the functionality of platform technologies to support requirements that enhance services and operational capacities, improve productivity, performance, and public services rather than addressing attributes such as specific platform configurations, explicit devices, and operating system revisions that neither provide a direction for current and future activities nor directly relate to unit functions.

PLATFORM ARCHITECTURE CATEGORIES

Categories of the Platform Architecture range from enterprise-class mainframe servers to individual workstations and hand-held computing devices, along with the operating systems that control these devices. Platform categories, or tiers, complement each other and maximize the operation and usefulness of various specialized platform devices to address unit requirements. Platform Architecture categories include the following:

Server. The server with its associated operating system provides services requested by clients. Types of servers include: mainframes, midrange, and network servers (application, file, print, database, etc.). Servers should be positioned to embrace a variety of applications so that, over time, as open-standard operating systems and open-standard interfaces are deployed, the traditional boundary lines between voice, data, and video are eliminated.

Storage. Storage is increasingly recognized as a distinct resource, one that is best thought of separately from the devices (servers, clients) that are its consumers and beneficiaries. Such storage is increasingly shared by multiple servers/clients, and acquired and managed independently from them. Storage solutions should address the Universities’ requirements for short term, long term, and permanent storage of information. Types of storage include:

o Direct Attached Storage (DAS) is comprised of interfaces (controllers) and storage devices that attach directly to a server or a client. DAS, in the form of independent storage devices, RAID arrays, or tape libraries, is the most common storage architecture today.

o Network Attached Storage (NAS) is an open-industry-standard, file-oriented, storage implementation where storage devices are connected to a network and provide file access services to server and client devices. A NAS storage element consists of an engine, which implements the file services, and one or more devices, on which data is stored. By connecting directly into a network, NAS technologies allow users to access and share data without impacting application servers.

o Storage Area Network (SAN) is an open-industry-standard, data-centric, storage implementation that traditionally uses a special-purpose network that incorporates high-performance communication and interface technologies as a means to connect storage devices with servers.

Client. The client, with its associated operating system, provides the end-user interface to applications. Clients include the personal computer (PC), thin client, host-controlled devices (terminals, telephones, etc.), voice interface devices, single- and multi-function mobile devices (Pocket PC, PDA, PDA-phone, etc.), telephony devices, smart cards, etc.

PLATFORM ARCHITECTURE GENERAL PRINCIPLES

The planning, design, and development of Platform Architecture are guided by the following general principles that support the Universities’ strategic goals and objectives.

Platform Architecture provides the device infrastructure to support education, research, business and administrative processes.

Servers and storage that support essential processes and mission-critical operations shall be operational, reliable, and available 24x7x365.

Platforms shall use industry-proven, mainstream technologies based on pervasive industry-wide, open interfaces, and open architecture.

Platform operating system security should be based on industry-wide, open standards.

Platform configurations and associated operating system versions should be minimized.

Platform infrastructure should employ open, industry-standard components, using an n-tier model.

Platform infrastructure should be designed for growth, flexibility, and adaptability.

Platform infrastructure should maximize the design and availability of the Target Network Architecture for delivery of applications and services to end-users, regardless of location.

4. Platform Architecture Standards


Platform Architecture Standards describe client and server devices along with storage platforms, operating systems, and open system interfaces that provide for interoperability and portability of application systems. The term “platform” applies to servers, storage, and client devices with their respective operating systems, interfaces, and drivers that provide a framework for interoperability, scalability, and portability. Open systems architecture for platforms will further promote the appropriate sharing of information/data and other IT resources.

Target platform architecture must incorporate a range of requirements since at any point in time, given the dynamic nature of the information technology industry and platform-product life cycles, a particular target platform device may no longer completely comply with all requirements. This approach allows for maximization of current investments in certain devices and services, as well as the development of transition plans to allow obsolete or non-conforming platform elements to be phased out.

Versatility: The device shall be flexible, adaptable, and scalable to provide for new and expanding service requirements.

Capability shall exist for achieving applicable architecture targets without requiring major upgrades and additional costs.

Capability shall exist for delivering and/or providing secure (as defined in the related Security Architecture Domain) end-user interface access to a variety of applications without necessitating substantial modifications, regardless of end-user location. Applications include, but are not limited to, e-mail, human resources information systems (HRIS), financial management systems (FMS), Internet, office productivity software, telephony, and voice mail.

Capability shall exist for delivering and/or providing end-user interface access to a wide variety of applications using a fully converged network, regardless of end-user location.

Target Network Architecture standards shall be maximized in the connectivity of devices.

The device shall be scalable, without substantial modification, to allow for increased demands for services and new applications.

Widespread choices for off-the-shelf application solutions, without modifications, shall be available for the device.

Operating System: The device shall utilize either an open, industry-standard, secure operating system or a pervasive, industry-standard, secure operating system.


The open or industry de-facto standard operating system currently installed shall be available for all similar devices offered by the manufacturer.

The operating system shall provide for updates to be pushed to, or accepted by, all associated devices.

Operating System Security: The device shall have an appropriate level of security functionality incorporated as part of the installed operating system.

Operating system security services, including access, authentication, and authorization techniques, shall align with the Security Architecture Domain and should utilize open standards, where possible.

Logging and security controls for applications, platform, and network levels shall be integrated to eliminate, or at least reduce, redundancies.

Support for integrated LDAP-based directory services shall be available.

Security updates from the operating system shall be capable of being pushed to, or accepted by, all associated devices.

Extraneous services, open ports, etc., shall be removed from default installations of the operating system and shall be prevented from returning during subsequent upgrades.

Interfaces: The device shall be capable of adhering to applicable, open-system-standard, interface specifications.

Open-systems standards for any particular interface shall be available and in use.

Management using standard SNMP-based management tools shall be enabled.

Network communication protocols shall conform to the Network Infrastructure Standards.

Off-the-shelf devices and peripherals conforming to open-systems standards shall be readily obtainable.

“Personal” input devices (tablet, keyboard, probe, etc.) and output devices (monitors, displays, projectors, speakers, printers, etc.) attached to a client device should use IEEE-standard interfaces and industry de facto standard software drivers.

Drivers: The device shall be capable of utilizing input/output (I/O) drivers that incorporate IEEE-standard interfacing and industry de-facto standard software drivers.

Multiple peripheral devices using open-standard drivers shall be available.

Off-the-shelf peripherals that conform to open system standards shall be readily available.

Server-attached or network-attached output devices such as printers, plotters, etc., should use IEEE-standard interfaces and industry de facto standard software drivers.

5. Platform Recommended Best Practices

A. Major applications should be placed on uniformly configured servers to make overall maintenance, support and recovery less expensive. New major applications should be written for existing standard platforms and known future trends.

B. When considering new servers or upgrades, take into consideration future growth when sizing the platforms.

C. An effort should be made to establish uniform workstation configurations to reduce complexity and administration costs.

D. Build in the ability to securely manage remote servers and clients to reduce system administration costs.

E. Weigh needs vs. system administration costs when upgrading or implementing new systems and establish a balance between goals/flexibility and system management ease.

F. The design and configuration of server solutions should seriously consider and employ adequate redundancy for recovery purposes.

6. Future Technology Trends

Information technology trends and IT market trends will fundamentally affect the way we implement systems to meet the above architectural requirements. Examples of key trends are included here.

A. Hardware will get faster, cheaper, denser and more diverse

Computer technology will continue to advance without a corresponding increase in price. Computer processors will become faster, and memory and disk storage will get denser, approximately doubling every 18 months. Backbone network bandwidth technology will approximately quadruple every 18 months. Consumer devices (cell phones, pagers, PDAs, etc.) with embedded computer technology will be widely used and less expensive. The diversity of desktop and server operating systems will likely remain.

Implications: Students will expect the introduction of new technologies and increased performance and capability of servers, networks, remote access, and classroom-lab-dorm facilities.

The capability to send data and information to consumer devices, such as class schedules sent to students’ PDAs; news sent to pager and phone displays; online phone book available on hand-held devices; etc. should be explored.


The Arizona universities will have to continue to support multiple operating systems in an environment where desktop computers, servers, and consumer devices are ubiquitous.

Technological capabilities may increase faster than funding is available to implement them.
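The growth rates cited above (capacity roughly doubling, and backbone network bandwidth roughly quadrupling, every 18 months) compound quickly. The small calculation below shows the implied factors over a five-year planning horizon, purely as an illustration of the trend, not as a forecast.

```python
# Illustrative compounding of the growth rates cited above: capacity
# doubling and backbone bandwidth quadrupling roughly every 18 months.
def growth_factor(rate_per_period: float, months: int, period_months: int = 18) -> float:
    return rate_per_period ** (months / period_months)

for years in (1.5, 3, 5):
    months = int(years * 12)
    print(f"{years:>4} yrs: capacity x{growth_factor(2, months):.1f}, "
          f"backbone bandwidth x{growth_factor(4, months):.1f}")
# After ~5 years: capacity roughly x10, backbone bandwidth roughly x100.
```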

B. Demand for capacity will continue to increase.

Application requirements and user demand will continue to consume more IT resources (such as disk space, processing speed, and network bandwidth) due to:

o increasing use of interactive, multimedia, and distributed database applications

o increasing amounts of data made available, increasing sizes of data items (pictures, audio, video), and storage of data about data

o increasing volume of E-mail and data transfers between applications

o increasing number of users with increasing amounts of usage

Users will become more dependent on application, server, and network response and capability to complete their work (ex. "eCommerce" exchanges for paying bills or transferring funds, making alumni donations, card purchases, etc.)

Implications: IT resources will become increasingly mission-critical and must continually be upgraded and expanded. There will be increased costs to manage the growth.

C. Demand for access to "anything" from "anywhere" will continue to grow.

Applications, information, and data will become increasingly available over the web. Wireless technologies will be increasingly used and less expensive. Wireless networks will still be slower than wired networks. Cable modems, DSL, and similar technologies for home access to high-speed networking will be increasingly cheaper and more available. Convergence of voice, video, and data technologies over the same network will blur their traditional separation.

Implications: Competitiveness in attracting students, faculty, and staff will increasingly depend on the level and quality of access. Faculty and staff at many levels will be challenged to become facile in the use of web-enabled technologies. Telecommuting and learning regardless of location will increase. Converging telecommunications, video, and data networks to support demand for use of converged products will be a challenge. Sufficient server capacity and network capability will be necessary to handle the storage, search, and delivery of all types of data.

D. Security will be a primary concern with increased dependence on network applications.

Destructive hacking, viruses, denial of service attacks and similar problems will continue to grow. Authentication and authorization for use of network applications will be necessary. Userids and passwords will still be used. Biometric identification will become more commonplace. The available security technologies will change rapidly over the next three years. Data encryption will be more widely used in transmission and storage.

Implications: Preparation should be made to incorporate new identification technologies. Faculty, staff, and students must be educated in good security practices. Security infrastructure should still allow cooperation and collaboration with other research and education institutions. There will be continuing issues over intellectual property and copyright violations due to increased amounts of teaching materials and research findings available over the network.
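As a concrete example of the encryption trend noted above, the sketch below encrypts a small record before storage using symmetric, authenticated encryption. It assumes the third-party cryptography package (its Fernet interface), which is an illustrative choice rather than a prescribed standard; key management, the hard part in practice, is deliberately out of scope here.

```python
# Illustrative sketch of encrypting data at rest; assumes the third-party
# "cryptography" package. Key management/escrow is deliberately omitted.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, kept in a key vault, not with the data
cipher = Fernet(key)

record = b"student_id=12345;ssn=REDACTED;balance=0.00"
token = cipher.encrypt(record)  # authenticated, symmetric encryption
print(token)

# Later, an authorized application with access to the key can recover it.
print(cipher.decrypt(token))
```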

E. IT expertise will continue to be scarce and its cost will continue to rise.

System management and administrative costs will constitute an increasingly larger portion of the expense of systems over their lifetime than the purchase price of the hardware and software.

o Using extra hardware to reduce system administration will be more cost-effective than attempts to minimize hardware through increased tuning.

o There will be opportunities to reduce management costs through centralization.

o IT staff will need to learn new skills at an ever increasing rate.

o Increased specialization and distributed support will create the need for new specialties to provide a whole system view.

Implications: There will be a need to find creative ways to attract technologically competent people, keep them, and use their time efficiently. Calculating the cost of IT projects, services, and resources should utilize a total cost approach that takes into account all expenses over their lifetime. There exists a balance between the savings of centralization and the customized support of decentralization. The real financial costs of support, both central and distributed, should be considered in university strategic planning.


Appendix A. Glossary of Terms

These terms are listed here due to their relevance to this domain. An extensive glossary of terms for the entire ITA document is available as the last section of this document.

Applicable is defined as pertinent, related to, relevant, and appropriate.

Capability is the potential and ability for development or use. It is the capacity to be used or developed for a purpose.

Device includes logical groupings or categories of server, storage, and client platforms in use system wide or within a university.

Maximize is defined as taking full advantage of the subject attribute(s).

Variety is defined simply as more than one. Note: the intent of versatility is to maximize flexibility and usefulness of a device relative to applicable applications.

Widespread is defined as extensive and prevalent.

Interoperability is the capability for services (applications) operating on different, diverse devices to exchange information and function cooperatively using this information.

Portability is the capability of software to operate and perform in the same manner on different types of devices.


TRI-UNIVERSITY

System Wide Information Technology

Architecture

Target Information Security Architecture

AUG 2004


Tri-University Target Information Security Architecture

TABLE OF CONTENTS

1. Introduction

2. Information Security Architecture Vision

3. Information Security Architecture Definition

4. Target Information Security Architecture

5. Recommended Information Security Standards

6. Information Security Architecture Purpose

7. Information Security Architecture Principles

8. Information Security Architecture Best Practices

9. Information Security Architecture Technology Trends

Appendix A. Glossary of Terms


Tri-University Target Information Security Architecture

1. Introduction

The Tri-University Information Technology Architecture (ITA) describes a framework for information technology that supports the Universities’ strategic plans. ITA facilitates change in an orderly, efficient manner by describing a direction for the future that is supported by underlying principles, standards, and best practices. The implementation of ITA presents opportunities for the Universities to interoperate to deliver a higher level of courteous, efficient, responsive, and cost-effective service to their respective communities of interests. Individually, each University can independently implement ITA components that are interoperable, however, economies of scale, consolidation, and cross-university savings may best be realized not just through interoperability, but by working together in partnership and sharing.

The Tri-University's Information Technology Conceptual Architecture document explains the overall strategic alignment of the ITA with the Universities’ goals and objectives, the principles behind the architecture, the domains to be addressed, the plans for addressing domains, and the technology trends to be taken into consideration.

ITA includes important business, governance, and technical components. The technical components, referred to collectively as the Information Technology Architecture (ITA), provide technical guidance to the Universities. That guidance is supported by principles correlated to unit business functions, recommended standards, and applicable recommended best practices.

Tri-University ITA Domains: Data/Information, Middleware, Network, Platform, Security, Software

2. Information Security Architecture Vision

A secure information technology structure is essential for meeting the mission and legal obligations of the Tri-University system. The information technology structure is secure if it provides timely and appropriate access to resources while protecting against attacks and illegal or inappropriate use.

Some of the more recent security threats have demonstrated that the average University community end-user has no foolproof way to protect themselves against infection and exploitation. They must rather depend on the combined safeguards of personal vigilance and reliance on enterprise security initiatives.

3. Information Security Architecture Definition

Information Security Architecture defines common, industry-wide, open-standards-based security tools and practices providing secure computing environments for the distributed information processing environment of the Tri-University community and Tri-University community partners (other universities, non-profits, state and federal government, and private sector).

4. Target Information Security Architecture

The development of Target Information Security Architecture addresses all relevant criteria on a broad scale, rather than as part of the deployment of an individual solution. The recommendations and decisions that are made during the development process may limit or eliminate certain options for future security solutions. The development of the Target Information Security Architecture is a collaborative process to allow security professionals from the Tri-University community to participate so that their current investment in certain products and services can be maximized while also developing a transition plan to allow obsolete or non-conforming elements to be phased out.

This development cycle is a continuous process, which is critically important in an environment where funding to implement may not be immediately available. The ongoing process provides the opportunity to continually refine the Information Security Architecture to keep it aligned with educational/business strategies and requirements, emerging standards, and changing technology. The recommended implementation approach for the target Information Security Architecture is as follows:

A. Information security professionals assess their position relative to the Target Information Security Architecture recommendations and develop an implementation plan, including any necessary funding. They incorporate that plan into their overall security plan.

B. Information security professionals are responsible for the execution of the implementation.

C. The Tri-University CIO/CTO’s will ensure that the recommended principles, standards, and best practices are incorporated into all system wide Requests for Proposal that culminate in system wide procurement contracts. It is critical to align the procurement documents and process with the Target Information Security Architecture to provide projects with a streamlined vehicle to purchase products and services that support the Target Architecture.

5. Recommended Information Security Standards

The overall industry standard for security is grounded in the three basic concepts of confidentiality, integrity, and availability. The Tri-University Security Architecture should be based on common, industry-wide, open-standards-based technologies. It should promote and facilitate delivery of services, and communication among students, faculty, the federal government, cities, counties, state, and local governments, as well as the private business sector. Security design based on standards allows the universities to quickly respond to changes in technology, business, and information requirements, without compromising the integrity or performance of the enterprise and its information resources.

The target architecture of each of the three Universities should be developed under the direction of the institution’s CIO/CTO and tailored to particular institutional needs but should include the following standards:

o Network segmentation
o Enterprise Security Management
o Enterprise Antivirus/SPAM protection
o Network firewall
o Host-based firewall
o Network monitoring systems (including intrusion detection and prevention systems where appropriate)
o Access controls
o VPN
o Enterprise File Integrity
o Monitoring/Forensics
o Risk Management
o Business Continuity
o Robust policy development/implementation/enforcement cycle
o Enterprise Security Awareness Program
o End-user training

6. Information Security Architecture Purpose

The Security Architecture provides a means to economically assure the protection and ease of transacting the Universities’ business, delivery of services, and communications among all users of the Tri-University systems. It encourages the units to incorporate technology security improvements for business requirements without compromising the integrity or performance of the enterprise and its information resources.

7. Information Security Architecture Principles

o The Security Architecture will enable the universities to perform business processes electronically and deliver secure e-services to the public.

o Security levels applied to systems and resources will be commensurate with their value to the universities and sufficient to contain risk to an acceptable level.

o The Security Architecture will be based on industry-wide, open standards, where possible, and accommodate varying needs for and levels of security.

o Security is a critical component of universities’ interoperability.

o The Security Architecture will accommodate varying security needs.


o The Security Architecture will give overall attention to “defense in depth,” or layered protection.

8. Information Security Architecture Best Practices

Give overall attention to “defense in depth,” or layered protection, including the following:

o Applying vendor-supplied fixes necessary to repair security vulnerabilities.
o Scanning computers for security vulnerabilities using available technical tools (see the sketch following this list).
o Removing unneeded services and software.
o Installing and maintaining antivirus software on all client computers.
o Installing and maintaining firewall software on all client computers.
o Encrypting sensitive data where possible.
o Following effective procedures for user accounts and access.
o Developing role-appropriate staff, student, and faculty information security training.
o Providing secure tools such as VPN and secure mail programs to the campus computing community.
o Assessing information security policies and revising or creating new policies to address current issues.
o Rigorously protecting enterprise-critical systems, including vendor patches, firewall protection, robust backup practices, and strictly controlled administrative access.
o Centralized host management where possible.
o Maintaining some form of network segmentation (VLAN, router filtering, hardware firewalls).
o Maintaining appropriate physical access controls for critical systems (including voicemail and the PBX).
o Developing a security reporting plan that allows for timely and accurate management response to incidents.
o Assigning security responsibility and control for all system resources.
o Maintaining appropriate change controls for security software.
o Maintaining “sensitive position policies,” such as mandatory minimum vacations and rotation of personnel where possible given staffing limitations.
o Extending physical and logical access controls to voicemail and the PBX.
o Security reporting for timely and accurate management response to incidents.
o Maintaining appropriate control over physical access points to IT resources.
o Appropriate change control over security software.
o Assigning every system resource an owner with responsibility for security and control.
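The scanning practice referenced above can be as simple as checking which TCP ports answer on a host that the unit administers. The sketch below uses only the Python standard library and is a conceptual illustration; production scanning should use the universities' approved tools and target only systems the scanner is authorized to test. The host name and port list are placeholders.

```python
# Conceptual sketch of a TCP "connect" scan of a host the unit administers;
# use approved tools and authorized targets in practice (standard library only).
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 25: "smtp",
                80: "http", 143: "imap", 443: "https", 3306: "mysql"}

def open_ports(host: str, ports=COMMON_PORTS, timeout: float = 0.5):
    found = []
    for port, service in sorted(ports.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, service))   # something is listening
        except OSError:
            pass                                # closed, filtered, or unreachable
    return found

for port, service in open_ports("server.example.edu"):
    print(f"port {port:>5} ({service}) is open -- is this service needed?")
```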


9. Information Security Architecture Technology Trends

Information security evolves in part as a response to an ever-changing threat. It is therefore difficult to predict what the most urgent needs of the future will be. The following are some trends that the Tri-University community has begun to explore or implement:

o Intrusion Prevention/Behavioral Identification
o Pre-authentication Vulnerability Analysis
o Improved identity management (Identity Federations)
o Biometrics, including facial and voice recognition, iris/retina scan, and fingerprint ID
o Automated emergency response initiatives (tied to behavioral and anomaly protection systems)
o Pre-connection security screening and network quarantine areas
o PKI/PGP
o Instant messaging security
o Wireless access security
o Voice over Internet Protocol security
o Hand-held device (e.g., PDA, BlackBerry) security


Appendix A. Glossary of Terms

An extensive glossary of terms for the entire ITA document is available as the last section of this document. In addition, these URLs provide useful definitions and references.

o http://www.cert.org
o http://csrc.nist.gov


TRI-UNIVERSITY

System Wide Information Technology

Architecture

Target Software/Application

Architecture

AUG 2004


Tri-University Software/Application Architecture

TABLE OF CONTENTS

1. Introduction

2. Software/Application Architecture Vision

3. Software/Application Architecture Definition

4. Target Software/Application Architecture

5. Recommended Software/Application Architecture Strategies

6. Recommended Software/Application Architecture Standards

7. Software/Application Architecture Purpose

8. Software/Application Architecture Principles

9. Software/Application Architecture Recommended Best Practices

10. Software/Application Architecture Technology Trends

Appendix A. Three-Tier Enterprise Application Architecture Diagram

Appendix B. Example of PeopleSoft Software/Application Architecture

Appendix C. Glossary of Terms


Tri-University Software/Application Architecture

1. Introduction

The Tri-University Information Technology Architecture (ITA) describes a framework for information technology that supports the Universities’ strategic plans. ITA facilitates change in an orderly, efficient manner by describing a direction for the future that is supported by underlying principles, standards, and best practices. The implementation of ITA presents opportunities for the Universities to interoperate to deliver a higher level of courteous, efficient, responsive, and cost-effective service to their respective communities of interests. Individually, each University can independently implement ITA components that are interoperable. Economies of scale, consolidation, and cross-university savings may best be realized not just through interoperability, but by working together in partnership and sharing.

The Tri-University's Information Technology Conceptual Architecture document explains the overall strategic alignment of the ITA with the Universities’ goals and objectives, the principles behind the architecture, the domains to be addressed, the plans for addressing domains, and the technology trends to be taken into consideration.

ITA includes important business, governance, and technical components. The technical components collectively referred to in the ITA provide technical guidance to the Universities. That guidance is supported by principles related to unit business functions, recommended standards, and applicable recommended best practices.

Tri-University ITA Domains: Data/Information, Middleware, Network, Platform, Security, Software

The Software/Application Architecture is one of the ITA domains being developed by the Tri-University architectural task teams. The complete ITA will be developed in phases and updated periodically. The domain architectures, driven by the business and program priorities of the Universities, are aligned with, and facilitate the strategic goals of the State Universities’ Information Technology (IT) plans.

2. Software/Application Architecture Vision

The purpose of the Tri-Universities' Software/Application Architecture is to define the major kinds of applications and software needed to manage the data and support the business functions of the Universities. The software/applications architecture is not a design for systems, nor is it a detailed requirements analysis. It is a definition of what applications will do to manage data and provide information to people performing business and academic functions. The applications enable Information Technology to achieve its mission, that is, to provide access to needed data in a useful format at an acceptable cost.

Applications are the mechanisms for managing the data of the enterprise for the Universities. The term “managing the data” includes such activities as entering, editing, sorting, changing, summarizing, archiving, analyzing, auditing, and referencing data.

The Software/Application Architecture strives to provide software and applications that “manage the data” so that it is reliable, relevant, and easily accessed in a university setting. The university setting is filled with a variety of internal and external stakeholders and involves software and applications unique to a higher education environment. All these stakeholders rely upon applications and software to manage and present data and information to make informed decisions in business, instruction, research, and shared governance realms. The Software/Application Architecture provides the framework and foundation to enable this decision-making. The Software/Application Architecture will build upon all the other Tri-University domains: client interface, middleware, platform, data, security, and network.

3. Software/Application Architecture Definition

The Software/Application Architecture defines common, industry-wide, open-standards-based strategies and practices related to managing and facilitating the design, development, and purchase of software to automate and maintain University business and academic processes. Software and applications for a University consist of:

1. Software applications designed to automate and perform specific functions such as student administration (student records, student financials, financial aid, admissions and recruitment, classes, degree audit & advising), human resources/payroll, financial, course management, development, grants management, and library systems.

2. Development software, including development environments, programming languages, middleware technologies to facilitate inter-application communication and data exchange, user interface development environments, APIs, report writers, etc.

3. Database software to organize, manage, access, secure, and assure the integrity of data in data storage.

4. Productivity software, including office automation and collaborative software products and tools.

5. Utility software for program, file, network, operating system, and data management.

There are other types of software and applications used in a University environment. This document defines the Software/Application Architecture as currently concerned with software applications for enterprise use involving student, human resources, financial, and academic functions, and with the software used to develop and support these systems. Database systems, productivity software (office automation), security software, middleware for the most part, and most utility software will be addressed in other sections of the Tri-University ITA domains.

4. Target Software/Application Architecture

The goal of specifying a target Software/Application Architecture is to document those strategies, standards, and practices that allow an open architecture for Internet access and integration of enterprise software/applications subject to necessary security, legal, and technical constraints. In this domain, the architecture is somewhat fluid. This next generation architecture should leverage a number of Internet technologies and concepts to deliver simple, ubiquitous access to applications and enable the open flow of information between systems. While the overall goal of the ITA is to adopt common, industry-wide, open-standards-based infrastructures, the reality of working with proprietary core information systems coupled with the emerging and rapidly evolving nature of web-based applications/software, requires the architecture to remain at a high level. Target standards and best practices can be identified and recommended, but standards and practices cannot be dictated at this time.

5. Recommended Software/Application Architecture Strategies

The web service is the key component of the Software/Application Architecture. All other strategies, standards, principles, best practices, trends, and purposes are based on web services. A web service is simply an application component that is accessed programmatically over the Internet using HTML and/or XML over HTTP.

Any discrete component of application functionality can be exposed as a web service. Examples include a student class list, a staff/faculty pay stub, or a payment voucher request. Any of these application components can be published and accessed over the Internet as web services.
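
To make the web-service idea concrete, the sketch below (hypothetical class name, element names, and hard-coded roster; not taken from any university system) shows how a class-list component could be exposed as a simple XML-over-HTTP service using a Java servlet, consistent with the server-side Java standard recommended later in this section.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Exposes a class list as XML over HTTP, e.g. GET /classList?classNbr=12345
    public class ClassListServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String classNbr = req.getParameter("classNbr");
            resp.setContentType("text/xml");
            PrintWriter out = resp.getWriter();
            out.println("<?xml version=\"1.0\"?>");
            out.println("<classList classNbr=\"" + classNbr + "\">");
            // In a real service the roster would come from the enterprise DBMS;
            // hard-coded rows keep this sketch self-contained.
            out.println("  <student id=\"S0001\" name=\"Doe, Jane\"/>");
            out.println("  <student id=\"S0002\" name=\"Smith, John\"/>");
            out.println("</classList>");
        }
    }

A browser, a portal channel, or another system could then retrieve the roster with an ordinary HTTP GET; nothing is installed on the client.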

Recommended Strategy 1
No code on the client. All applications deploy on servers only. Any Internet-enabled device, such as a web browser running on a PC or a cell phone, that uses standard Internet technologies such as HTML, XML, and HTTP can access and execute the applications. University IT organizations realize reduced installation, maintenance, and administration costs because the applications reside on a server instead of being distributed to thousands of PCs.

Recommended Strategy 2
Implement a single enterprise application architecture. Each university should strive for a single enterprise multi-tiered architecture for applications, for example, Web Server -> Application Server -> DBMS Server. Applications are easier to integrate, a lower cost of deployment and maintenance is achieved, and there is the benefit of centralized administration and security. Training costs are greatly reduced because the same skill set can be leveraged across all applications. A multi-tiered architecture also allows for flexibility in configuring for system performance and tuning; servers can be added to the web server tier if that is the bottleneck, or to the application server tier, as needed.

Recommended Strategy 3
Identify and use Application Integration Points. These can be web services or traditional file interfaces. Common application access points leverage the same technology and skill set to achieve integration across all applications, reducing the initial and ongoing cost of enterprise-wide integration.

Recommended Strategy 4
Enterprise applications should be driven by metadata. This allows for a more flexible customization environment. Business analysts can "configure" or make changes to applications by changing metadata, eliminating the need for lower-level coding in some instances.
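
As one hedged illustration of the metadata-driven approach (the field names and rules below are invented for the example, not taken from any university system), validation logic can read its rules from metadata so that an analyst changes behavior by changing data rather than code:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal sketch of metadata-driven validation: field rules live in data, not in code.
    public class MetadataValidator {

        // Field metadata an analyst could maintain in a table or configuration file.
        static class FieldMeta {
            final String name; final boolean required; final int maxLength;
            FieldMeta(String name, boolean required, int maxLength) {
                this.name = name; this.required = required; this.maxLength = maxLength;
            }
        }

        // Hypothetical rules for an admissions form; changing these rows changes the
        // application's behavior without touching the validation code below.
        static final List<FieldMeta> ADMISSIONS_FORM = Arrays.asList(
                new FieldMeta("lastName", true, 40),
                new FieldMeta("emailAddress", true, 80),
                new FieldMeta("middleName", false, 40));

        static boolean isValid(Map<String, String> input, List<FieldMeta> metadata) {
            for (FieldMeta field : metadata) {
                String value = input.get(field.name);
                if (field.required && (value == null || value.isEmpty())) return false;
                if (value != null && value.length() > field.maxLength) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            Map<String, String> input = new HashMap<String, String>();
            input.put("lastName", "Garcia");
            input.put("emailAddress", "maria.garcia@example.edu");
            System.out.println(isValid(input, ADMISSIONS_FORM)); // prints true
        }
    }

In an enterprise system the same metadata would live in database tables maintained by the development tools, as described under Principle 1 below.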

Recommended Strategy 5
Use relational databases in all cases where practicable. For example, Oracle, IBM DB2, and Microsoft SQL Server are relational databases. Exception cases would include highly specialized data stores such as Active Directory or LDAP directory information services.

Recommended Strategy 6
Standardize the development environment across the enterprise. Standard development tools, languages, requirement definitions, approval processes, and migration procedures should all be documented and followed.

Recommended Strategy 7
Enterprise software applications should integrate with university directory services, i.e., LDAP, for end-user authentication. The role of the end user will be defined within the application.
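
A minimal sketch of this strategy, assuming a hypothetical directory host and base DN: the application delegates the credential check to the campus LDAP directory with a simple bind through JNDI, and only the user's role is resolved inside the application.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class LdapAuthenticator {
        // Hypothetical directory host and base DN; each university substitutes its own.
        private static final String LDAP_URL = "ldap://directory.example.edu:389";
        private static final String BASE_DN  = "ou=people,dc=example,dc=edu";

        // Returns true if the directory accepts a simple bind for this user.
        public static boolean authenticate(String uid, String password) {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, LDAP_URL);
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "uid=" + uid + "," + BASE_DN);
            env.put(Context.SECURITY_CREDENTIALS, password);
            try {
                DirContext ctx = new InitialDirContext(env); // bind attempt
                ctx.close();
                return true;                                 // bind succeeded
            } catch (NamingException e) {
                return false;                                // bad credentials or directory error
            }
        }
    }

A production deployment would use ldaps:// or StartTLS so credentials never cross the network in the clear, in line with the Security domain.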

Recommended Strategy 8
The university portal should provide enterprise application software channels as appropriate.

6. Recommended Software/Application Architecture Standards

Software/Application Architecture should support a number of standard technologies that allow the Tri-Universities to leverage their computing infrastructures as well as the capabilities of the Internet.

All standards listed here are intended to interact and support relevant standards proposed in the other ITA domains. The network, security, data/information, middleware, and client interface standards should be especially relevant.

Recommended Standard 1

ANSI SQL: a standardized query language for requesting information from a database. ANSI approved a rudimentary version of SQL as the official standard in 1986 and has updated the standard several times since (most notably SQL-92), but most commercial versions of SQL include extensions to the ANSI standard.

Recommended Standard 2

XML: short for Extensible Markup Language. XML is a pared-down version of SGML designed especially for Web documents. It allows designers to create their own customized tags, enabling the definition, transmission, validation, and interpretation of data between applications and between organizations. XML provides a highly flexible and extensible platform for integration.

Recommended Standard 3

HTML: Internet architecture heavily leverages HTML for presentation. In the multi-tiered architecture model the application server will dynamically generate HTML that is delivered to the web browser via the web server.

Recommended Standard 4

HTTP: Internet enabled clients (web browser, cell phone, etc.) communicate with the web server over a secure HTTP connection.

Recommended Standard 5

Java and JavaScript: the programming languages of choice for the Internet and web services. Web servers will execute Java servlets to serve HTML and JavaScript to web browser clients, and will use Java servlets in the integration architecture to pass XML messages between systems.

Recommended Standard 6

LDAP: Enterprise Software/Applications use LDAP for end-user authentication during login.

Recommended Standard 7

IBM DB2, Oracle, and Microsoft SQL Server: all three DBMSs are enterprise capable, support ANSI SQL, and allow the application to communicate and exchange data with the DBMS via native SQL.
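
The sketch below (hypothetical JDBC URL, account, and table names) shows Standards 1 and 7 working together: the application hands an ANSI SQL statement to the DBMS through JDBC, so the same query logic can be pointed at Oracle, DB2, or SQL Server by changing only the connection URL and driver.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Prints the roster for one class section, passed as the first argument, using portable ANSI SQL.
    public class ClassListQuery {
        public static void main(String[] args) throws SQLException {
            // Hypothetical URL and account; DB2 and SQL Server URLs differ, the SQL does not.
            String url = "jdbc:oracle:thin:@dbhost.example.edu:1521:SIS";
            Connection conn = DriverManager.getConnection(url, "app_user", "app_password");
            try {
                PreparedStatement stmt = conn.prepareStatement(
                    "SELECT s.student_id, s.last_name, s.first_name "
                    + "FROM enrollment e JOIN student s ON s.student_id = e.student_id "
                    + "WHERE e.class_nbr = ? ORDER BY s.last_name");
                stmt.setInt(1, Integer.parseInt(args[0]));
                ResultSet rs = stmt.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getString("last_name") + ", "
                        + rs.getString("first_name") + " (" + rs.getString("student_id") + ")");
                }
                rs.close();
                stmt.close();
            } finally {
                conn.close();
            }
        }
    }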

7. Software/Application Architecture Purpose

The purpose of the Software/Application Architecture is to describe the approach to supporting Internet access and integration for university enterprise applications, and to define the major kinds of applications needed to manage the data and support the processes of the Tri-Universities. These applications include:

A. Human Resources
B. Payroll
C. Course Management
D. Student Recruiting
E. Admissions
F. Student Records
G. Advising/Degree Audit
H. Student Financials
I. Financial Aid
J. Class Scheduling
K. General Ledger
L. Accounts Payable
M. Accounts Receivable
N. Budgeting
O. Development/Alumni Relations
P. Travel
Q. Other systems, e.g., Parking, Residence Hall Management

The Software/Application Architecture must support the business and program priorities of all three Universities. Technology investments in the Software/Application Architecture must provide measurable improvements in operations, support public service, and facilitate ABOR's goals for the Tri-University system. The Software/Application Architecture must enable the development of systems that make university information and programs more accessible to our communities of interest.

The Software/Application Architecture must increase access to information and improve services for all faculty, staff, and students, as well as the citizens of Arizona. It must enable easier access and more widely available information while still protecting individual rights of privacy. The application architecture should ensure data integrity within the application data structures.

8. Software/Application Architecture Principles

The Software/Application Architecture specifies how applications are accessed and integrated, and documents the strategies, standards, and best practices for accomplishing these goals. The Software/Application Architecture principles are the foundation for providing access to applications over the web and for integrating applications with other internal and external systems.

Principle 1
The Tri-University Software/Application Architecture is a server-centric, component architecture that enables secure end-user access to applications. It consists of multiple server tiers – web, application, and database.

The Internet application server is the heart of the Software/Application Architecture. The application server executes the core application business logic and dynamically generates the markup and scripting language. It interacts with the directory service to authenticate end users and manage access privileges, and it executes queries, reports, and batch processes and manages all interaction with the relational DBMS via SQL.

The application server communicates with a Java-enabled web server. The web server handles the HTTP requests for transactions and queries, maps data to those requests, and acts as a relay between client access and back-end services.

The database server is the repository for all information managed by enterprise applications. Not only is application data stored in the database, but metadata is maintained there as well. The development tools maintain this metadata, which is used to drive the runtime architecture: the application server executes business logic based on the metadata.

Principle 2
Easy Access – A user should be able to access an application by entering a URL or clicking a hyperlink in a standard web browser, without additional software installation requirements. Any Internet-enabled device, such as a web browser running on a PC or a cell phone, that uses standard Internet technologies such as HTML, XML, and HTTP can access and execute applications. A user should not be limited by hardware or location; any mobile Internet device should be able to interact with the application.

Principle 3
Look and Feel of Leading Web Sites – Web-based applications should look and feel like popular web sites, providing an intuitive user interface that fully leverages the web paradigm with simplified integration and hyperlinking, effective use of graphics, and other standard web techniques and constructs.

Principle 4
Content Management – The majority of content delivered over the web is unstructured data. Portal technology or application web pages manage the delivery of both structured (transactional) and unstructured data.

Principle 5
Low Bandwidth Access – Much web access today occurs over dial-up phone lines. To accommodate this constraint, applications are designed to perform effectively over low-bandwidth connections. This can be accomplished through a server architecture that delivers HTML and JavaScript and does not require the installation of Java plug-ins, proprietary components, or other heavy-footprint client software.

Principle 6
Low Cost of Maintenance and Deployment – Compared with client/server, the architecture described in Principles 1 and 2 allows deployment at a much lower cost. It also provides the flexibility to customize applications without the deployment and maintenance issues associated with client/server.

Principle 7
Secure Access with Easy Administration – Directory server (LDAP) integration allows for access management in a centralized repository. This simplifies access administration, leaving the application responsible only for role definition.

Principle 8
Web Services Are Loosely Coupled – Web services have well-defined, published interfaces, i.e., XML, and can be easily accessed from remote systems over the Internet. They require a much simpler level of coordination between systems, and the underlying technology can be changed or replaced without impacting the systems that invoke it.

The loosely coupled nature of web services simplifies the integration process, lowering the cost of integration and making it easier to integrate applications than with techniques used in the past.
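
As a sketch of what loose coupling looks like from the consumer's side (the endpoint URL and element names are hypothetical), the client below depends only on the published XML interface it retrieves over HTTP; the owning university could re-platform the service without this client changing.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    // Consumes the published class-list web service and counts the students returned.
    public class ClassListClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; the real URL and XML schema would be published by the owning university.
            URL endpoint = new URL("https://sis.example.edu/classList?classNbr=12345");
            HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
            conn.setRequestProperty("Accept", "text/xml");
            InputStream in = conn.getInputStream();
            try {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(in);
                NodeList students = doc.getElementsByTagName("student");
                System.out.println(students.getLength() + " students enrolled.");
            } finally {
                in.close();
                conn.disconnect();
            }
        }
    }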

9. Software/Application Architecture Recommended Best Practices

Best practices are approaches that have consistently been demonstrated by diverse organizations to achieve a similar high-level result, which, in the case of architecture, means demonstrating the principles. Some of these practices have already been identified in the strategies above.

Recommended Best Practice 1
No code on the client. All applications deploy on servers only. Any Internet-enabled device, such as a web browser running on a PC or a cell phone, that uses standard Internet technologies such as HTML, XML, and HTTP can access and execute the applications.

Recommended Best Practice 2
Single enterprise application architecture. Each university should strive for a single enterprise multi-tiered architecture for enterprise applications.

Recommended Best Practice 3
Enterprise applications should be driven by metadata. This allows for a more flexible customization environment. Business analysts can "configure" or make changes to applications by changing metadata, eliminating the need for lower-level coding in some instances.

Recommended Best Practice 4
Make data cleansing everyone's job, but correct the data at the source. Data cleansing products might help, especially as mission-critical data-driven systems are implemented. Such systems often suffer if the underlying data is inaccurate.
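
One small sketch of a source-side cleansing rule (illustrative only, not an ITA standard): normalizing e-mail addresses as they are captured keeps downstream data-driven systems from inheriting the error.

    // Normalizes an e-mail address at the point of entry; returns null if it cannot be salvaged.
    // e.g. cleanse("  Jane.Doe@Example.EDU ") returns "jane.doe@example.edu"
    public class EmailCleanser {
        public static String cleanse(String raw) {
            if (raw == null) return null;
            String value = raw.trim().toLowerCase();
            // Minimal sanity check: exactly one '@' with text on both sides;
            // a production rule set would be maintained with the other data-quality rules.
            int at = value.indexOf('@');
            if (at <= 0 || at != value.lastIndexOf('@') || at == value.length() - 1) return null;
            return value;
        }
    }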

Recommended Best Practice 5
Standardize the development environment across the enterprise. Standard development tools, languages, requirement definitions, approval processes, and migration procedures should all be documented and followed.

Recommended Best Practice 6
Use relational databases in all cases where practicable. For example, Oracle, IBM DB2, and Microsoft SQL Server are relational databases. Exception cases would include highly specialized data stores such as Active Directory or LDAP directory information services.

10. Software/Application Architecture Technology Trends

Economic and technical trends that impact and influence the ITA are to be updated annually. Areas that will require near-term investigation include:

o Metadata-Driven Architecture
o Open Source
o Portal Integration with Enterprise Software/Application
o SOAP, WSDL, and UDDI for integration points and collaboration between systems
o Centralized access security
o Integration with outside (non-university) sources
o Integration between the Tri-Universities

Appendix A. Three-Tier Enterprise Application Architecture Diagram

Appendix B. Example of PeopleSoft Software/Application Architecture

Appendix C. Glossary of Terms

An extensive glossary of terms for the entire ITA document is available as the last section of this document.

Glossary of IT Terms (Adapted from Mt. San Antonio College)

A

AI - Artificial Intelligence - The use of computer technology to perform functions that are normally associated with human intelligence, such as reasoning, learning, and self-improvement.

ALPHA - Alphanumeric, the set of characters representing the letters of the alphabet and the digits 0 through 9.

ANALOG - The representation of a continuous physical variable by another physical variable.

ANALOG COMPUTER - A computer in which continuous physical variables represent data.

ANSI - American National Standards Institute.

APPLICATIONS - The computer programs and systems that allow people to interface with the computer and collect, manipulate, summarize, and report data and information.

ARCHIE - A query system to scan the offerings of the many anonymous FTP sites on the Internet.

ASCII - American Standard Code for Information Interchange - A widely used system for encoding characters for processing and transmission between data processing systems, data communication systems, and associated equipment.

ASSEMBLER - A computer program that translates assembly language programs into a machine language that the computer "understands".

ASSEMBLY LANGUAGE - A low level programming language that includes symbolic language statements in which there is a one-to-one correspondence with machine language instructions.

ASYNCH - Asynchronous, without regular time relationship; unexpected or unpredictable with respect to the execution of a program's instructions; a physical transfer of data to or from a device that occurs without a regular or predictable time relationship.

ATM - Asynchronous Transfer Mode - A high-speed connection-oriented data transmission method that provides bandwidth on demand through packet-switching techniques using fixed-sized cells. ATM supports both time-sensitive and time-insensitive traffic, and is defined in CCITT standards as the transport method for B-ISDN services. Cell-switching technology that operates at high data rates: up to 622 Mbps currently, but potential data rates could reach Gbps. ATM runs on an optical fiber network that uses Synchronous Optical Network (SONET) protocols for moving data between ATM switches.

AVATAR CARD - AVATAR brand 327x emulator (like IRMA) for Macintosh computers.

B

BACK-END DATABASE - An application running on a server that stores data and responds to requests for those data from front-end applications running on workstations and networked PCs. (See CLIENT/SERVER and FRONT-END DATABASE).

BANDWIDTH - The range of frequencies occupied by an information-bearing signal or that can be accommodated by a transmission medium.

BARCODE - A machine readable graphic representation of alphanumeric characters for rapid data input to a computer system.

BAUD - A unit of signaling speed equal to the number of changes per second in the amplitude, phase, or frequency of a signal; used to encode the signal with digital information.

BATCH - Form of processing whereby input of a type is kept together and then processed all at one time, generally considered an older style of processing but still necessary for some applications.

BBS - Bulletin Board System - A system for posting news articles to various networks, such as Internet.

BINARY - having two components or possible states; usually represented by a code of zeros and ones.

BISYNC(H) - Binary-Synchronous (Synchronous) - Protocol communications, one method by which a computer converses with remote terminals, the form the data assumes for transfers.

BIT - The smallest unit of information in a computer, equivalent to a single zero or one. The word 'bit' is a contraction of binary digit.

BITNET - "Because It's Time Network" - A national educational network with similar counterpart networks in other countries, based on IBM's Remote Spooling Communications Subsystem (RSCS).

BPS - Bits Per Second - Communications speed, the rate at which data travels from one site to another. (Example: the Internet network runs at 56 Kilo BPS.)

BUFFER - A space reserved in a computer's memory for temporarily storing data, often just before it is to be transmitted or after it has been received.

BUS - A set of wires for carrying signals around a computer.

BUS TOPOLOGY - A layout for a local area network in which network stations are linked by means of an open-ended cable or other transmission medium.

BYTE - A sequence of bits, usually eight, treated as a unit for computation, typically an alpha or numeric character.

C

"C" - A general purpose, small and concise programming language developed at Bell Laboratories in conjunction with the UNIX operating system.

CARL - Colorado Alliance of Research Libraries - A public access catalog of library services available through the Internet.

CASE - Computer Assisted Software Engineering - The application of computer technology to systems development activities, techniques, and methodologies. Sometimes referred to as Computer Aided Systems Engineering.

CAUSE - Formerly the College And University Systems Exchange - Currently, CAUSE, the Professional Association for the Management of Information Technology in Higher Education, helps colleges and universities strengthen and improve their computing, communications, and information services, both academic and administrative.

CCITT - See International Telegraph and Telephone Consultative Committee

CICS - Customer Information Control System - The IBM teleprocessing monitor. Systems software that provides on-line/real time operation with a mainframe via a teleprocessing network.

CD-I - Compact Disk - Interactive - Mass storage medium with interactive access.

CD-ROM -Compact Disk - Read Only Memory - Read only direct access mass storage medium.

CLIENTS - Users, those receiving services from Information Technology or the resources they provide.

CLIENT/SERVER - A distributed computing system in which the CLIENT is the requesting program, sending requests to servers across a network, and the SERVER provides a service in response to requests from clients.

CMS - Conversational Monitor System - An IBM operating system that simulates many of the functions of Operating System (OS) and Disk Operating System (DOS), and allows OS and DOS programs to run in a conversational environment.

COAXIAL CABLE - A transmission medium composed of an insulated copper wire inside a tubular conductor.

COBOL - Common Business-Oriented computer Language - The primary language used in many older batch application systems.

CODEC - COder DECoder - This equipment converts voice signals from their analog form to digital signals acceptable to more modern digital PBXs and digital transmission systems. It then converts these digital signals back to analog so that you may hear and understand what the other party is saying.

COMSAT - The COMmunications SATellite corporation was created by Congress as the exclusive provider to the U.S. of satellite channels of international communications. COMSAT is also the U.S. representative to Intelsat and Inmarsat, two international groups responsible for satellite maritime communications.

CONTROL UNIT - Circuits that sequence, interpret, and carry out instructions from CPUs.

CPU - Central Processing Unit - The part of a computer that interprets and executes instructions. It is composed of an arithmetic logic unit, a control unit, and a small amount of memory. (see MAINFRAME)

CQI - Continuous Quality Improvement - A methodology intent on meeting or exceeding customer requirements by continuous improvement and innovation in products, processes, and services.

CTI - Computer Telephone Integration - A polite term for connecting a computer to a telephone switch and having the computer issue the switch commands to move calls around.

CURSOR - The movable spot of light that indicates a point of action or attention on a computer screen.

D

DASD - Direct Access Storage Devices - Generally disk drives.

DATA BUS - The wires in a computer that carry data to and from various locations (usually memory).

DATABASE - A collection of related information about a subject organized in a useful manner that provides a base or foundation for procedures such as retrieving information, drawing conclusions, and making decisions.

DBMS - Data Base Management System - A system used to store, retrieve, and manipulate data in an organized (modeled) fashion. Usually consists of Dictionary, Manipulation, Security, and Access components.

DDP/OA - Distributed (Local Based) Data Processing and/or Office Automation.

DEDICATED LINES - Telephone lines (exchanges) that are specifically used to provide access to file servers, files, or other systems.

DEVELOPMENT - Building an application system to carry out a process in an automated fashion, particularly employing a new method or replacing a manual effort.

DIAL-UP - To connect to a computer by calling it on the telephone (usually via a modem).

DIGITAL - Pertaining to the representation or transmission of data by discrete signals (as opposed to continuous analog signals).

DIGITAL COMPUTER - A machine that operates on data expressed in discrete, or on-off, form rather than continuous representation.

DIGITIZE - To represent data in digital, or discrete, form, or to convert an analog, or continuous, signal to such a form.

DISK - A round magnetized plate, usually made of plastic or metal, organized into concentric tracks and pie-shaped sectors for storing data.

DISK DRIVE - The mechanism that rotates a disk and reads or writes data.

DNS - Domain Name System - A distributed database system for translating computer names (like ibm.mtsac.edu) into numeric Internet addresses (like 140.144.204.50), and vice versa.

DOS/VSE - Disk Operating System/Virtual Storage Extended - An IBM operating system for mid-range processors. Usually called simply "VSE", it is a family of disk operating systems designed primarily for batch and transaction processing. DOS/VSE is one of several operating systems currently in use on the district's mainframe computer.

DOWNLOAD - D/L. - The transfer of mainframe information/data/files to mini or microcomputers or the transfer of downloaded information from a central file server to other processors.

DQDB - Distributed Queue Dual Bus - The IEEE 802.6 MAN architecture standard for providing both circuit-switched (isochronous) and packet-switched services.

DSCH - Daily Student Contact Hours - Number of class hours each course is regularly scheduled to meet each day, multiplied by the number of students actively enrolled in the course.

E

EBCDIC - Extended Binary Coded Decimal Interchange Code - An IBM system for encoding letters, numerals, punctuation marks, and signs that accommodates twice as many symbols and functions as ASCII by using eight-place binary numbers instead of seven-place numbers.

EDUCOM - A nonprofit consortium of higher education institutions founded in 1964. Educom is focusing its energies on increasing individual and institutional intellectual productivity through access to and use of information resources and technology and ensuring the creation of an information infrastructure that will meet society's needs into the twenty-first century.

EFFICIENT-UTILITY - Computer hardware that provides immediate functional capability as well as the potential for greater utility without replacement.

ELECTRONIC MAIL (E-MAIL) - An application on both local and wide area networks that provides communication among users.

ENHANCEMENT - Upgrading of an existing system, such as the addition of new functions or reports.

ETHERNET - Networking architecture, a bus-structured local area network designed originally at Xerox Corporation.

EXECUTIVE INFORMATION SYSTEMS (EIS) - Decision support software; a wide variety of software intended to be used by an executive (manager) to assist in organizing information for decision making purposes.

EXPERT SYSTEMS - A practical development of Artificial Intelligence (AI) which requires creation of a knowledge base of facts and rules furnished by human experts and uses a defined set of rules to access this information in order to suggest solutions to problems.

F

FEP - Front End Processor - Communication controller for the mainframe teleprocessing network of leased lines and remote non-programmable devices.

FDDI - Fiber Distributed Data Interface - A token ring-passing scheme that operates at 100 Mbps over fiber optic lines with a built in geographic limitation of 100 kilometers.

FIBER OPTICS - The technology of encoding data as pulses of light beamed through ultra thin strands of glass or plastic.

FILE SERVER - Computer that is modified to store and transfer large amounts of data to other computers. File servers often receive data from mainframes and store it for transfer to other micros, or from other micros to mainframes.

FINGER - An Internet utility that displays information about another user.

FLOPPY DISK - A small, flexible disk used to store information or instructions.

FLOPS - Floating Point Operations Per Second - A metric used to compare computing power.

FRAME RELAY - An ANSI and CCITT defined LAN/WAN networking standard for switching frames in a packet mode similar to X.25, but at higher speeds and with less nodal processing (assuming fiber transmission).

FRONT-END DATABASE - An application running on a workstation or networked PC that requests data from a centralized server, then presents the data in a way useful to the user. (See CLIENT/SERVER and BACK-END DATABASE.)

FTP - File Transfer Protocol - The primary method of transferring files over the Internet.

FULL DUPLEX - Simultaneous transmission of information by two participants engaged in an exchange of data through telecommunications.

G

GANTT CHART - A project planning and reporting chart developed by Henry Gantt. The Gantt chart is a horizontal bar chart showing the relative duration of tasks plotted on a time scale.

GATEWAY - A computer system that transfers data between normally incompatible applications or networks.

GIGABYTE - Billion bytes of data.

GOAL - A general statement of direction, as in university goals.

GOPHER - A menu driven access to many of the facilities of the Internet.

GROUPWARE - A class of applications that use collaborative data stored on a server. Examples are electronic mail, group scheduling, and project management applications.

GUI - Graphical User Interfaces - A graphics based system that incorporates visual representations of data and processes, using icons, pull down menus, and a mouse. Examples are Windows on the MS DOS platform, the Macintosh's Standard User Interface and OS/2 Presentation Manager.

GUIDE - An international not-for-profit association of information systems professionals providing business solutions through the application of information technology across a wide variety of environments.

H

HALF DUPLEX - Communications system in which the partners in an exchange of data take turns transmitting.

HARDWARE - The physical apparatus of a computer system.

H.261 - International Video Conferencing Standard

H.320 - International Video Conferencing Standard

HESC - Higher Education Software Consortium - A group of educational institutions utilizing IBM operating systems and software for educational purposes. A yearly purchase of membership in the consortium entitles the institution to obtain and upgrade selected IBM software at no additional cost.

HPO - High Performance Option - Upgrade to certain types of software.

I

INFORMATION - Data that has been processed to a point where it conveys knowledge or represents a usable statement of fact.

INFRASTRUCTURE - System of wire, hardware, software and facilities that enables the connection of voice-data-video devices and the transmission of voice-data-video information from device to device.

INTEGRATED - Interrelated entities which result in a synergistic effect but which may also create extensive interdependencies.

INTELSAT - International Telecommunications Satellite Consortium

International Telegraph and Telephone Consultative Committee - (CCITT) - A telephonic media communications standards committee.

INTERNET - A concatenation of many individual TCP/IP campus, state, regional, and national networks (such as NSFnet, ARPAnet, and Milnet) into one single logical network all sharing a common addressing scheme.

IP - Internet Protocol - A network protocol that manages the logistics of getting a message from the sending machine to the receiving machine.

IRC - Internet Relay Chat - A "real-time" session on the Internet, where multiple users may "chat" interactively within discussion groups.

IRMA - A circuit card and software combination package from DCA that serves as a microprocessor responding to poll in an interactive 327x network. It emulates a terminal while residing in a PC. This makes PC's dual function devices - they serve as both terminals and micros (see also AVATAR card).

ISDN - Integrated Services Digital Network - CCITT I-series recommendation defined digital network standard for integrated voice and data network access, services, and user network messages.

ISI - Integral Systems Incorporated - A software company; providers of personnel database systems.

IT- Information Technology - The department responsible for mainframe, telecommunications, media, television, video, micro computing and technical services.

J

JAD - Joint Applications Development - An involvement oriented approach used to develop applications.

K

KILOBYTE - Kbyte: 1,024 bytes (1,024 being one K, or two to the 10th power) - Often used as a measure of memory capacity.

L

LAN - Local Area Network - A system of computer hardware and software that links computers, printers, and other peripherals into a network suitable for transmission of data between offices in a building, for example, or between buildings situated near one another.

LANGUAGE - A set of rules or conventions to describe a process to a computer.

LAPTOP - A portable computer that is sufficiently light and compact to permit easy transport and laptop utilization.

LASER - Technology of reading/writing data on durable media using a laser light source or producing print on dry toner image engines.

LCD - Liquid Crystal Display - A digital display mechanism made up of character-forming segments of a liquid crystal material sandwiched between polarizing and reflecting pieces of glass.

LISTSERV - An automated mailing list distribution system originally designed for the Bitnet/EARN network.

LSI - Large-Scale Integration - The placement of thousands of electronic gates on a single chip. This makes the manufacture of powerful computers possible.

M

MAC - Medium Access Control - IEEE 802 defined media specific control protocol.

MACHINE LANGUAGE - A set of binary-code instructions capable of being understood by a computer without translation.

MARC - MAchine Readable Code - Inventory cataloguing and coding system, read/write by computer.

MAINFRAME COMPUTER (CPU, M/F) - One of the largest types of computer, usually capable of serving many users simultaneously, with exceptional processing speed.

MAINTENANCE - Any modification required to keep a system operating at its intended level.

MAN - Metropolitan Area Network - A MAC level data and communications network that operates over metropolitan or campus areas and recently has been expanded to nationwide and even worldwide connectivity of high-speed data networks. A MAN can carry video, data, and has been defined as both the DQDB and FDDI standard sets.

MAPI - Messaging Application Programming Interface - Microsoft's Windows Messaging Application Programming Interface which is part of WOSA (Windows Open Services Architecture).

MEGABYTE - M - Million bytes of data. (1,048,576 bytes)

MEMORY - The storage facilities of a computer; the term is applied only to internal storage as opposed to external storage, such as disks or tapes.

MHz - Megahertz - A unit of measurement equal to one million electrical vibrations or cycles per second. Commonly used to compare the clock speeds of computers.

MICROCOMPUTER - A desktop or portable computer based on a microprocessor and meant for a single user; often called a home or personal computer.

MICROFICHE (FILM) - A file representation of a hard copy report that saves space in storage.

MICROPROCESSOR - A single chip containing all the elements of a computer's central processing unit; also called a computer chip.

MIDDLEWARE - Software that interprets requests between a PC or workstation application and an antiquated database running on a mainframe. Also used to describe software that helps an application communicate with an underlying operating system.

MINICOMPUTER - A midsize computer smaller than a mainframe and usually with much more memory than a microcomputer.

MIPS - Millions of Instructions Per Second - Measured in millions, i.e.: 19 MIPS is nineteen million machine instructions per second, a measure used to compare relative computing power.

MIS - Management Information Systems (MIS) - The total of all information resources, manual and automated, and their application to the normal functions of running an organization - management, administration, problem solving, etc.

MODEM - Modulator/Demodulator - A device that enables data to be transmitted between computers, generally over telephone lines but sometimes on fiber-optic cable or radio frequencies.

MONITOR - A television-like output device for displaying data, typically based on a cathode ray tube but can also be of an LCD or other variety.

MTBF - Mean Time Between Failures - The statistical average operating time between the start of a component's life and the time of its first electronic or mechanical failure.

MULTIMEDIA - Integration of various computer and audiovisual devices and methods to produce visual/graphical information and present it in a variety of formats.

MVS - Multiple Virtual Storage - IBM large systems operating systems software, generally a growth step following VSE optimization.

N

NANOSECOND (ns) - A billionth of a second, a common unit of measure of computer operating speed.

NEEDS ANALYSIS - A quantitative and qualitative study of the technology needs of an institution, including assessment, analysis, and forecasts.

NETVIEW ACCESS SERVICES (NVAS) - A program that simplifies the task of accessing applications and enables the user to work with several applications from a single terminal at the same time.

NEURAL NETWORKS - A computing architecture, loosely modeled on the interconnected neurons of the brain, in which many simple processing elements are trained on examples to recognize patterns and solve problems.

NIBBLE - Half a byte, or four bits.

NIC - Network Information Center - Any organization responsible for supplying information about any network.

NODE - A junction of communications paths in a network.

NOTEBOOK - A very small 'notebook sized' laptop computer.

NOTIS - A library management system.

NREN - National Research and Education Network.

NUMBER CRUNCHING - The processing of large quantities of numbers.

O

OBJECTIVE - Specific accomplishments necessary to the attainment of goals.

OCR - Optical Character Recognition -The process by which text on paper is scanned and converted into text files by a computer system.

OFFICEVISION – An IBM mainframe office management software program that provides electronic mail and personal calendars.

OLE - Object Linking and Embedding (OLE) - A method which establishes a way to transfer and share information between applications.

ON-LINE - Network applications, i.e.: real time programs or 'up and running.'

OOPS - Object Oriented Programming Systems - Method of Applications Development based on the assembly of functional modules.

OPAC - On-line Public Access Catalog - A library's on-line card catalog.

OPEN COMPUTING - A movement spawned by the Unix community to make computers and software that are standardized along published specifications so that hardware and software can be interchanged.

OPERATING SYSTEM - A complex program used to control, assist, or supervise all other programs that run on a computer system; known as DOS (Disk Operating System) to most microcomputer users.

OSF - Open Systems Foundation - (See OPEN COMPUTING)

OS/2 Operating System/2 - IBM microcomputer based operating system for use on Micro Channel based PCs, includes a GUI (Graphical User Interface).

P

PARALLEL - Pertaining to data or instructions processed several bits at a time, rather than one bit at a time.

PCMCIA - Personal Computer Memory Card International Association.

PERT Chart - Program Evaluation Review Technique chart - A project management chart illustrating task relationships and dependencies.

PING - A command which allows an Internet user to query host computers on the network to verify that they are active and capable of sending and receiving.

PLUG and PLAY (PnP) - Capability (through infrastructure) for student and staff access to all appropriate resources (voice-data-video) from any campus location and any remote device with access to the campus.

POTS - Plain Old Telephone Service

POWER SUPPLY - A device for converting external alternating current into the direct-current voltages needed to run a computer's electronic circuits.

PRE-GRIDDING - Filling out variable fields of information on forms using the computer system and available databases to ensure consistency and reduce manual effort (and potential error).

PROGRAM - A sequence of detailed instructions for performing an operation or solving a problem by computer.

PS/2 - IBM Personal System/2 - A series of personal computers introduced in 1987 based on the Intel 8086, 80286, and 80386 microprocessors.

PROTOCOL - The formal rules that govern the internal workings of a communications system.

Q

QUERY - The ability to interrogate data bases without predetermined designs and/or programming expertise.

R

RAD - Rapid Application Development - Development lifecycle designed to give much faster development and higher quality results than the traditional lifecycle.

RAM - Random Access Memory - A form of temporary internal storage whose contents can be retrieved and altered by the user; also called read-and-write memory.

RDBMS - Data Base Management System of the Relational variety (based on an architecture of relational calculus and/or relational algebra) - Primary components are a Data Dictionary, Data Manipulation Language, Query Facility, Data Security System, and various/interactive systems.

REAL TIME - Programs which process immediately as information is received rather than accumulating data for long periods and processing all of it at one time (batch).

REGISTER - A special circuit in the central processing unit that can either hold a value or perform some arithmetical or logical operation.

REMOTE LOGIN - A network service that allows a user on one machine to connect to another machine across a network and interact as if directly connected to the remote machine.

RING TOPOLOGY - A layout for a local area network in which network stations are connected to one another by a closed loop of cable or other transmission medium.

RJE - Remote Job Entry - Programs are caused to run from a site removed from the computing facility.

ROM - Read-Only Memory - Permanent internal memory containing data or operating instructions that cannot be altered.

RPGII - Report Program Generator - A commercially oriented programming language specifically designed for writing application programs that meet common business data processing requirements.

RS-232 - A mechanical and electrical standard that permits the transfer of information between computers and communications equipment, and is also used to connect terminals, printers, and other peripheral devices.

S

SAA - Systems Application Architecture - A set of standards for communication among various types of IBM computers, from personal computers to mainframes.

SCSI - Small Computer System Interface (pronounced Scuzzy) - A mechanical, electrical, and functional standard for connecting small computers with intelligent peripherals such as hard disks and CD-ROMS.

SDLC - Synchronous Data Link Control - A synchronous, bit-oriented data communications protocol developed by IBM and used in SNA networks.

SERIAL - Pertaining to data or instructions that are processed in sequence, one bit at a time, rather than in parallel (several bits at a time).

SERVER - A component of a distributed computing system that provides a service in response to requests from clients. (See CLIENT/SERVER).

SIG - Special Interest Group - A subgroup of an organization or a computer networking systems consisting of members who share a common interest.

SMDS - Switched MultiMegabit Data Service - A high-speed (up to 34Mbps), connectionless, packet switched MAN data service. It is considered a wideband/broadband data service and is designed to be easily integrated into user's existing local data communications and computing environments while having minimal impact on user's existing hardware and software.

SMTP - Simple Mail Transfer Protocol. - The Internet standard protocol for transferring electronic mail messages from one computer to another.

SNA - Systems Network Architecture (SNA) - based on the use of microprocessors in each major device in a hardware configuration and the use of SDLC protocol.

SOFTWARE - Instructions, or programs, that enable a computer to do useful work; contrasted with hardware, or the actual computer apparatus.

SONET - Synchronous Optical Network - A US high-speed fiber optic transport standard for a fiber optic digital hierarchy. It can operate at speeds ranging from 51.84 Mbps to 2.5 Gbps.

SQL - Structured Query Language - A language set that defines a way of organizing and calling data in a computer database. SQL is becoming the standard for use in CLIENT/SERVER databases, and is the basis of IBM's SQL/DS and DB2 Data Base Management Systems and related products.

SYSGEN - System Generation - The process of generating and configuring an operating system for a particular installation.

SYSTEMS - An interrelated set of entities which function in relation to each other, as in software systems, hardware systems, information systems, etc.

T

TASK - A specific step or single item of work to be performed in the process of completing a project.

TCP/IP - Transmission Control Protocol/Internet Protocol - The combination of a network and transport protocol developed for the ARPANET for internetworking IP-based networks.

TELECOMMUNICATIONS - Systems of hardware and software used to carry voice, video, and/or data between locations. Includes telephone wires, satellite signals, cellular links, coaxial cable, etc., and related devices.

TELECOMMUTING - Use of electronic facilities to allow a worker to "commute" to work through communications networks rather than to physically travel to and from an office or workplace.

TELEPROCESSING - Work completed on the computer via terminals or other remote devices through the use of telecommunications.

TELNET - A terminal emulation protocol for logging on to remote computers through the Internet.

TERMINAL - A device composed of a keyboard for putting data into a computer and a video screen or printer for receiving data from the computer.

TERMINAL EMULATION - Hardware and software that enables a PC or other intelligent device to act as a host terminal.

TOKEN RING - A local area network architecture in which a token, or continuously repeating frame, is passed sequentially from station to station. Only that station possessing the token can communicate on the network.

TQM - Total Quality Management. - A system of ongoing, organization wide improvements that achieve full customer satisfaction through participation of trained team members using quality measures and techniques.

TWISTED-PAIR WIRE - A transmission medium consisting of two insulated copper wires twisted around each other, traditionally used in the telephone system.

U

UNIX - A multi-tasking, multi-user operating system developed by AT&T Bell Laboratories in the 1960's; used primarily on minicomputers.

UPLOAD (U/L) - The transfer of files residing on minis or micros to a mainframe via electronic communications; also the transfer of files from micros to a file server for later upload to a mainframe.

UPS - Uninterruptible Power Supply - A battery capable of supplying continuous power to a computer system in the event of a power failure.

USENET - An informal group of systems that exchange "news". News is essentially similar to "bulletin boards" on other networks. USENET actually predates the Internet, but the Internet is now used to transfer much of USENET's traffic.

UTD - Up To Date - Current.

UTP - Unshielded Twisted Pair wire.

UUCP - Unix-to-Unix Copy - An international, cooperative wide-area network that links thousands of UNIX computers in the United States, Europe, and Asia.

V

VAN - Value-Added Network - A data network that supplements basic communications services acquired from a common carrier with additional features that correct transmission errors and ensure compatibility between dissimilar computers and terminals.

VAP - Value Added Process - Optional products that enhance the performance capabilities of various systems.

VERONICA - A group of databases that provide an index to information available through the Gopher tool on the Internet.

VIM - Vendor Independent Messaging - The Vendor-Independent Messaging (VIM) Group includes Apple, Borland, IBM, Lotus, MCI Mail, Novell, and WordPerfect. Together, the group intends to collaborate on developing an open, industry-standard interface that will allow e-mail features to be built into a variety of software products.

VM - Virtual Machine - A super operating system that allows expanded and varied configuration and utilization of the IBM and compatible mainframe computers.

VSAM - Virtual Storage Access Method - A data storage and retrieval mechanism designed to maintain large quantities of data on external disks or drums on computers designed for virtual storage systems.

VSAT - Very Small Aperture Terminal - A relatively small satellite antenna, typically 1.5 to 3.0 meters in diameter, used for transmitting and receiving one channel of data communications.

VSE - Virtual Storage Extended - IBM medium to large systems operating systems software (see DOS/VSE).

VTAM - Virtual Telecommunications Access Method (VTAM) - A systems software product that allows Bisynch and SDLC protocols to run simultaneously. SNA compatible, it allows more than one teleprocessing monitor to run at a time.

W

WAIS - Wide Area Information Server - A system for looking up information in databases or libraries across the Internet.

WAN - Wide Area Network - A network connecting devices over long distances, typically using a common carrier.

WESTI - Westinghouse Teleprocessing Interface - A multi-tasking time sharing terminal support environment.

WHOIS - An Internet tool that allows the user to search a database of every registered domain and of registered users.

WINDOWS - GUI (Graphical User Interface) - Developed by Microsoft Corporation for IBM compatible PCs.

WORKSTATION - Powerful personal computer used in a CLIENT/SERVER environment.

WWW - World Wide Web - A hypertext-based system for finding and accessing Internet Resources.

WYSIWYG - Pronounced WIZZIWIG; means What You See Is What You Get. For example, the image on the CRT screen and the output produced on the printer are identical.

X

X-AXIS - In a business graph, it is the categories axis, which is usually the horizontal axis.

X.25 - An international standard for connecting computers or terminals to a network that operates by means of packet switching.

X.75 - An international standard that provides for interconnections between data networks of different nations.

X.400 - A CCITT standard for electronic mail addressing and message handling, designed to facilitate electronic mail exchange between otherwise independent data networks.

X.500 - A CCITT standard for distributed electronic directory services, used to locate users and resources across otherwise independent data networks; the basis for lighter-weight directory protocols such as LDAP.

X-trieve - A product subset of B-trieve Data Base used to query information stored in the Data Base.

Y

Y-AXIS - In a business graph, it is the values axis, which is usually the vertical axis.

Y/C - Equipment used to keep the luminance and chrominance portions of a video signal separated.

YOURDON - A method for applying structure to the development environment popularized in the early 1980's.

Z

Z-AXIS - Third dimensional axis of a coordinate system where the X-axis and Y-axis represent a two dimensional graph.
