
PROFESSIONAL COMPUTING
No.76 THE MAGAZINE OF THE AUSTRALIAN COMPUTER SOCIETY, MAY 1992

DeskPAC - workstations to SE/ty
A Brighter Future for Australian Computing
GRAPHICS COMPUTER SYSTEMS


MAESTRO Super Executive V.32 MODEM

If you are still using a 2400 BPS modem, you could be barking up the wrong tree. Perhaps it's time you moved into the fast lane with the new Maestro 9600 BPS modem. AUSTEL PERMIT NO: A91/37D/0413.

INCREDIBLE VALUE, INC TAX. SOME PEOPLE SELL 2400 BPS MODEMS FOR THIS PRICE. "CRAZY"

(Speedometer graphic: 1200, 2400, 4800, 9600, 19200*, 38400** BPS. Caption: "I wonder what the speed limit is around here?")

14 DAY MONEY BACK GUARANTEE: if you find a modem that performs better, you may return your Super Executive within 14 days of the date of purchase for a refund.

V.32 - 9600 BPS FULL DUPLEX ASYNC/SYNC
V.32 - 4800 BPS FULL DUPLEX ASYNC/SYNC
V.22bis - 2400 BPS FULL DUPLEX ASYNC/SYNC
V.22 - 1200 BPS FULL DUPLEX ASYNC/SYNC
V.21 - 300 BPS FULL DUPLEX ASYNC
V.42bis - ERROR CORRECTION AND COMPRESSION TO 38400 BPS**
V.42 - ERROR CORRECTION
MNP 5 - ERROR CORRECTION AND COMPRESSION TO 19200 BPS*
MNP 2-4 - ERROR CORRECTION
* With MNP 5, depending on file
** With V.42bis, depending on file

NEW MODEL: 9642XR DATA/FAX MODEM WITH V.42bis & MNP 2-5. FANTASTIC VALUE AT ONLY $449. SEND AND RECEIVE FAX MODEM. AUSTEL PERMIT C88/37A/145.

9600XR DATA/FAX MODEM, ONLY $399 INC TAX. SEND AND RECEIVE FAX MODEM WITHOUT DATA COMPRESSION OR CORRECTION. AUSTEL PERMIT C88/37A/145.

AUSTRALIA ... DIGITAL COMMUNICATIONS. A.C.N. 000 928 405.
PHONE (06) 239 2369, FAX (06) 239 2370. UNIT 2, 13-15 TOWNSVILLE ST, FYSHWICK, ACT 2609.


PRESIDENT’S MESSAGE:

Moving towards standards

THE IT industry is moving, albeit slowly, towards the adoption of standards for hardware, for software and for communications. Yet we know that the most relevant variable in IT work is people.

Paul Sayers from Mazda told a recent ACS Victorian Conference that, when it comes to productivity tools and techniques, the winner will always be "smart people." Many studies have shown that smart people can make poor systems work — for smart systems you probably need even smarter people!

So, will we have standards for communications, for hardware and for software — but not for people? Or is it time that we took a serious look at some "people standards"? If we had them, how would they be set? How would they be monitored, and how would they be enhanced to keep up with our dynamic profession? At the National ACS Council meeting, we set up a task force to investigate such matters.

Thinking about "People Standards"

The Task Force will be looking at how other professions in Australia and overseas are addressing the "people standards" issue, as well as at how other countries are addressing the "IT people standards" issue. We hope to be able to use the knowledge of some of those attending the International Conference on Software Engineering in Melbourne in May, especially those who have been involved in addressing these issues in their own countries.

One body that has a long history in applying standards to IT people is the Institute for the Certification of Computer Professionals (ICCP) in the US. Founded in 1973, ICCP offers certification in four professional designations. You may have seen the initials CCP, CDP, CSP or ACP alongside authors' names in books and magazines. These stand for Certified Computer Programmer, Certified Data Processor, Certified Systems Professional and Associate Computer Professional. ICCP has 11 constituent bodies and five affiliated societies. They include ACM, ASM, DPMA and DAMA, as well as COMMON, an IBM user group, and FNUG, the Federation of NCR User Groups. (Is it time to ban acronyms?)

Let me quote from an ICCP publication: "If you are trying to distinguish yourself in the crowded information processing technology field, certification puts you above the rest ... whether you are sending out resumes, bidding for tenders or looking for a promotion, certification provides proof of your professional experience and expertise. Certification is the confidence-building proof that you have met specific requirements and possess that high level of knowledge and skill. In tough economic times, certification adds to your professional credibility and gives you an advantage in the competitive job market."

The ICCP Executive is made up of industry people, and the American Council on Education has recommended the awarding of college credits to those who pass ICCP exams. ICCP certification is not about recognising entry level professionals — it is aimed at senior level personnel. A candidate must have at least 60 months of full-time direct experience in IT. Those holding post-secondary and tertiary qualifications may substitute their qualifications for up to 24 months of experience.

► Continued on page 2

PROFESSIONAL COMPUTING

CONTENTS: MAY 1992

CLIENT/SERVER PROCESSING — THE PRACTICAL APPLICATION: The term "Client/Server" is used to cover a whole range of computing scenarios. In this article we take a look at the practical application of client/server technology, utilising DOS-based PCs with Windows (the clients) to add processing power and user features to commercial systems with large databases on mini/mainframe computers (the servers). 2

IT ISSUES OF THE 1990s: Over the next few years studying the computer industry will be akin to staring down a turning kaleidoscope. The single most important differentiator in the 90s is quality. 5

OSI APPLICATION ESSENTIALS: A further extract from 'The essential OSI' produced by the consortium of Standards Australia, OSIcom, the Australian MAP/TOP Interest Group and NSW Tafe Commission's Open College Network. 7

ACS in View 11

THE CONVERGENCE OF MANUFACTURERS PRODUCTS?: The next 18 months in computing will be one of the most interesting that we have witnessed for decades. Traditional centralised systems will be replaced by distributed systems, new companies will dominate the transnational IT world, while better informed and articulate users will dictate the evolution of the industry. 16

THE TENTH AUSTRALIAN COMPUTING IN EDUCATION CONFERENCE: A good program takes shape. 19

THE FUTURE OF COMMERCIAL COMPUTING: Growing demands for more powerful applications challenge PC performance. 20

OPENING MOVES: A Brisbane Branch Conference paper deals with the realities of Open OLTP. 23


COVER: This issue's cover depicts the SPARC-based product family of Australian systems integrator, Graphics Computer Systems. The SPARC marketplace is currently in a state of flux, as a wide range of third party manufacturers are introducing their CPU and peripheral support products. Australian-owned and Melbourne-based Graphics Computer Systems are part of this process, manufacturing both chassis and board level products for the domestic and export markets, with existing sales to New Zealand, Singapore, the US and Canada.

GCS is contactable by telephone on 03 888 8522 and by fax on 03 808 9151.



VICTORIAN BRANCH 1992 CONFERENCE PAPER

Client/Server processing: the practical application

THE term "Client/Server" is used to cover a whole range of computing scenarios. In this article we take a look at the practical application of client/server technology, utilising DOS-based PCs with Windows (the clients) to add processing power and user features to commercial systems with large databases on mini/mainframe computers (the servers). We will provide some examples of systems of this kind that Megatec has developed and installed, and discuss the advantages of such an approach as well as the pitfalls we encountered.

By Peter Hill and Brad Allen

DIFFERENT people have different views or perceptions about what client/server computing is. The US Business Research Group questioned Fortune 1000 information systems and end-user executives who were implementing client/server systems and found that their definitions fell roughly into four categories:
■ Applications offering file or peripheral sharing or remote computer access.
■ Applications that disseminate a database among more than one computer on a network.
■ Messaging applications, such as electronic mail.
■ Process intensive applications that distribute different computing tasks amongst connected computers.

Some choose to use the term "Cooperative Processing" to describe the interaction between PCs or workstations and mainframes.

Rather than argue the merits of the various definitions, we need to provide you with our definition so that you can put the content of this discussion into context. For these purposes our definition is:

A commercial application that utilises the combination of a "mainframe” computer and PCs, to provide the application user with business benefits that are either not available via on-line terminal processing, or are an improvement on those offered by terminal processing.

In the mid-seventies, when mini-computers and distributed processing arrived on the scene, there were those in the industry who argued that mini-computers were not real computers and that they had no place in the commercial arena. Many dangers were expounded as reasons why a mini-computer was inappropriate for commercial use.

These included security, data integrity and data back-up disciplines. With PCs and workstations now offering an even lower level of distributed processing, there are still a few people in the industry who have trouble accepting the place of the PC in a commercial transaction processing environment, preferring to view them only as useful tools for spreadsheet and word processing.

Fortunately these people are in the minority, but those who have at least accepted the concept of using PCs in a client/server role are still faced with the same problems that face any decentralisation or distribution of processing power.

But before we tackle some of the problems, let's look at the benefits of including PCs in a commercial application.

No matter how clever and efficient your programming is, or how big your processor is, if you have a central system with many terminals attached which is expected to perform multiple processing requirements simultaneously, then response times will vary. The relocation of CPU-intensive forms handling procedures on to the PC results in a significant reduction in the demand on mainframe CPU resources and reduces the traffic on I/O and communication links.

While the savings vary from one project to the next, some client/server installations have experienced a 25 per cent reduction in CPU demand following a partial "re-development" of an application which was previously totally mainframe based. In addition to the improved performance seen by the user, if users are being charged for CPU time then these charges will be reduced. There is also the potential to extend the life of the existing processor, even if only the forms handling is off-loaded.
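The forms-handling offload described above is easy to picture in C, the language the authors mention elsewhere in the article. The record layout and field rules below are invented for illustration: the point is that the PC checks and packs the form locally, so the mainframe receives a small, already-validated record instead of raw screen traffic.

    #include <ctype.h>
    #include <string.h>

    /* Hypothetical order-entry record: the PC fills and checks this
       locally, then ships the packed record instead of per-keystroke
       terminal traffic. */
    struct order_rec {
        char customer[9];    /* 8-character customer code + NUL */
        char product[13];    /* 12-character product code + NUL */
        long quantity;
    };

    /* Client-side validation: runs on the PC, costing the host nothing. */
    int order_is_valid(const struct order_rec *o)
    {
        size_t i;
        if (o->quantity <= 0)
            return 0;
        if (o->customer[0] == '\0' || o->product[0] == '\0')
            return 0;
        for (i = 0; i < strlen(o->customer); i++)
            if (!isalnum((unsigned char)o->customer[i]))
                return 0;   /* reject bad codes before they travel */
        return 1;
    }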

Secondly, the client/server computing model insulates the client (the PC) from the server and any problems it may experience. For example, if the server is "overloaded", or busy

► From page 1

I am particularly keen to hear your views on "people standards" for our profession. Occupational regulation of any form needs most diligent study, and the options include self-regulation, government-imposed regulation or no regulation, that is, no change to the status quo. I am sure that most ACS members would prefer to see self-regulation rather than government-imposed regulation. A hypothetical conducted by the Victorian Branch in March 1992 indicated that there are a number of strong, and differing, views on this topic. There was significant support for some degree of occupational regulation, but there was also some support for leaving things as they are. What do you think?

Who would benefit?

As in all things, we need to ask ourselves what would be the point of it? That is, why consider occupational regulation at all? Just as there are many beneficiaries of standards in other IT areas, so too could there be many beneficiaries of "people standards." The individual would benefit, as the ICCP material states, but much more importantly, our clients would benefit. Whether we are in full-time employment, or contracting or consulting, the users of our services are entitled to know what they are buying. In the broader sense, so many IT systems impact on society and, in the long term, it may be seen to be professionally irresponsible if we do not address the "people standards" issue in a serious way.

Please let us know what you think. Write to the Tisdall Task Force, ACS, PO Box 319, Darlinghurst NSW 2010.

Geoff Dober



performing month end processing or backups, the client can continue a large part of its func­tion by performing local processing using the application software and data residing on the PC. For remote users there is the added bonus of still being able to continue processing, albeit with limited functionality, even if the communications link has failed.

Thirdly, the user is presented with a graphical user interface (GUI) which is common across all applications, whether they are mainframe or PC based. The incredible sales of Windows 3.0 underline the demand for easy-to-use systems. The advantages of using a GUI are now well recognised. The US-based research company Temple, Barker and Sloane completed a study which found that users operating with a GUI work faster, completing 50 tasks in the same time that a character user interface user took to complete 37.

Furthermore the GUI users get 91 per cent of their tasks correct compared with 74 per cent for non-GUI users. Their conclusion was that GUI users accomplish 58 per cent more work. The study also found that GUI users are far more likely to explore and make use of the features of their systems as they do not feel intimidated by them.

Finally, by using the PC with Windows, the client/server model allows users to have access to several applications at once, regardless of whether they are mainframe or PC based. A simple example of this is where a user can be interacting with a sales analysis database held on a mainframe server, while at the same time operating a PC-based spreadsheet package.

To summarise the benefits: the processing power of the PC offers improved performance and provides less varied response times. The mainframe can have processing off-loaded, which results in its life being extended. Communication links can operate far more effectively as they are only transmitting data, not forms information. The PC can also offer the flexibility of operating in a stand-alone mode where required.

The GUI available on the PC provides major system acceptance and productivity advantages. Of course there are other ways of providing users with a GUI front end to systems, but a large number of corporations already have a major investment in hundreds or thousands of DOS-based PCs, and a lot of these have Windows installed.

Let’s have a look at Megatec’s experiences developing and installing these commercial client/server systems.

The first client/server system that Megatec wrote was for a major hardware manufacturer that wanted to gain a competitive advantage by providing dealers with a system which would make product inquiries and order placement easy and quick. Effectively, the order processing and inquiry functions which were currently centralised would be decentralised for members of the dealer network. The plan was to write a sub-set of the mainframe distribution system on the PC, utilising Windows (2.0 in those days) as the front end. Each dealer would be given a PC with the system installed ready for use.

The system was designed to allow the dealer to operate in stand-alone mode, uploading and downloading to and from the mainframe at suitable intervals, or directly on-line to the mainframe. The benefits to the dealer were the speed and convenience of being able to make product inquiries and place orders without having to phone the supplier. The benefits to the supplier included the competitive advantage of having the dealer more likely to use one system rather than phone a competitor, and the off-loading of high volume inquiry calls to the distribution centre.

When we initially tested the system there were a number of things that we learnt, that new players should look out for when designing a client/server application. Obviously there is a requirement for communications software to enable the client-based application to interactively communicate with the server application and databases. Megatec wrote this software and designed it so that it was package rather than application specific.

Initially this data access layer (DAL/1) did not include data compression; we found this to be essential and upgraded DAL/1 accordingly. Early testing showed that the initial version of DAL/1 was also light on error checking and correction. The quality of lines and modems quickly forced us to remedy this. In the general design area, we had not put enough thought into carefully working out what data did, or did not, have to be transferred to the PC.

Unnecessary transfer of data sometimes resulted in data transfers being too long. Finally, we learnt a lot about the desirability of good quality modems; MNP protocol proved to be essential.
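The compression and error-checking lessons suggest what a DAL/1-style frame might look like. DAL/1's actual internals are not described in the article, so everything below is a hypothetical sketch: a small header carries a length, a compression flag, a sequence number and a checksum, so that the receiver can reject frames corrupted by a noisy line.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical frame layout for a DAL/1-style data access layer. */
    struct dal_frame {
        uint16_t length;       /* payload bytes that follow          */
        uint8_t  compressed;   /* 1 if the payload was compressed    */
        uint8_t  sequence;     /* detects lost or duplicated frames  */
        uint16_t checksum;     /* additive checksum of the payload   */
        unsigned char payload[512];
    };

    static uint16_t dal_checksum(const unsigned char *p, size_t n)
    {
        uint32_t sum = 0;
        while (n--)
            sum += *p++;
        return (uint16_t)(sum & 0xFFFFu);
    }

    /* Receiver side: a frame that fails the check is discarded and
       re-requested rather than passed to the application. */
    int dal_frame_ok(const struct dal_frame *f)
    {
        return f->length <= sizeof f->payload
            && dal_checksum(f->payload, f->length) == f->checksum;
    }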

Other items that have to be considered for this type of decentralisation include whether you let the PC user decide when they will download, upload and back-up, or whether you provide an automated function that contacts the client at a suitable time and performs an upload of the information waiting on the PC, a download of new/changed information from the mainframe to update the PC files, and a back-up.

Peter Hill

Naturally file volatility has an impact on where data resides and how often the client and server need to communicate when you are providing an off-line processing facility.

The next system that we wrote was at the time of the introduction of Windows 3.0 which was a great improvement over 2.0. In this case our customer recognised that the ad­vantages of a GUI, a mouse and a client/ server approach would allow his company to use a computer system in a trade building supplies outlet where terminals and a charac­ter interface were considered unusable due to the amount of keyboard use required and the lack of intuitiveness of such a system.

The benefits to be achieved were fairly straight forward, sales information could be entered directly into the system rather than go through the error-prone process of being hand written and then key punched. The aim was to improve the accuracy of the invoicing, stock and sales reporting.

The processing was based completely on the PC system, which was made up of a PC server with three PCs connected on a LAN. The PC server in turn communicated with the main­frame via a dial-up link for download/ upload operations. The system is extremely successful and achieved the stated objectives, but once again there were a few lessons to be learnt on the way. PC user discipline, or lack of discipline caused one major problem.

The users were new to computers, so prior to the system introduction they were allowed to play games on a PC as a way of building their confidence and skills. However, no con­trols were put in place to stop the users load­ing foreign programs on to the company PCs. One such smuggled game introduced a virus and destroyed a database on the live system before we realised the need to introduce strict rules on the use of the computers.

Very poor quality Telecom lines demanded careful auditing of data trapsfers and strong/ flexible restart procedures. The PC system de­velopment was done using a Windows 4GL which proved to be a faster way of developing the system compared to programming in C.

Our most recent experience involved a sys­tem that our USA-based customer wanted to distribute to the company’s international of­fices. Perhaps this could be considered the “original” client/server system as the “modus operandi” was to have the PCs do all the processing but the mainframe hold and supply all the data.

This naturally involves the client being on-line to the server, rather than have an off-line operation facility with selected data stored on the PC. The system was very straight forward and posed no problems, there was however one design feature introduced that we consider to be well worthwhile: we introduced a layer between the application


and the database communication software. This file access layer is responsible for making all file calls regardless of file type. The theory here is to provide the flexibility to change the file system without the application having to be changed.
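The file access layer the authors describe maps naturally onto a table of function pointers in C. This is a minimal sketch of the idea, not Megatec's implementation, and the names are invented: the application calls through the table, so swapping the local backend for a LAN or mainframe one means changing the table, not the application.

    #include <stdio.h>

    /* Hypothetical file access layer: the application calls these
       entry points and never names a file system directly. */
    struct file_layer {
        void *(*open)(const char *name);
        int   (*read)(void *handle, void *buf, int len);
        void  (*close)(void *handle);
    };

    /* One possible backend: plain local files via the C library. */
    static void *local_open(const char *name) { return fopen(name, "rb"); }
    static int   local_read(void *h, void *buf, int len)
    {
        return (int)fread(buf, 1, (size_t)len, (FILE *)h);
    }
    static void  local_close(void *h) { fclose((FILE *)h); }

    struct file_layer local_files = { local_open, local_read, local_close };

    /* The application is written against the layer: replacing
       local_files with, say, a remote_files table re-targets every
       file call at once, with no application changes. */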

These experiences provide some specific examples of the pitfalls that we encountered in designing and implementing a DOS PC to mainframe client/server system. The following list highlights the major areas that have to be considered when designing this type of decentralised system, especially when data is shared between the clients and the server (a sketch of one such check follows the list):
■ Error recovery for data transfers between the clients and the server.
■ Strict audit control of all data transferred from the PCs to the host.
■ Verification of data before the host databases are updated.
■ Synchronisation of host updates to multiple PCs.
■ Minimisation and then compression of the data to be transferred.
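As an illustration of the audit and verification points, here is one hypothetical shape such a check could take, assuming each PC upload carries a trailer holding the record count and an additive checksum computed on the PC. The host recomputes both before any database update is applied.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical trailer appended to every PC-to-host upload. */
    struct upload_trailer {
        uint32_t record_count;
        uint32_t checksum;
    };

    int batch_is_valid(const unsigned char *records, size_t n_records,
                       size_t record_size, const struct upload_trailer *t)
    {
        uint32_t sum = 0;
        size_t i, total = n_records * record_size;
        for (i = 0; i < total; i++)
            sum += records[i];
        /* On failure the whole batch is rejected and re-requested, so a
           line hit cannot silently corrupt the host database. */
        return n_records == t->record_count && sum == t->checksum;
    }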

In addition to the design considerations, you will have to give some thought to standards for the new areas of development that client/server systems introduce. One of the main considerations here will be the graphical user interface standards. At this early stage in the use of GUIs there is no single accepted standard; even Microsoft's own products vary in their window design.

However, Microsoft put out a design guide which we found to be useful, along with the IBM Common User Access, Advanced Interface Design Guide (the latest version of this is referred to as "CUA3"). To maximise the benefits of using a GUI it is important that we provide our systems with standard interfaces. This allows end users to deal with the same screens and methods as they learn and use all software applications, whether they be client/server or stand-alone PC applications like Word or Excel. This reduces training time and expense, increases productivity and heightens job satisfaction.

Designing and writing a system to handle user-driven input via a mouse demands quite different techniques than those used in a character user interface system. One major difference is that the user has the ability to move between screens and applications even if a current transaction is incomplete.

From our experience there are certainly new challenges to be met in designing client/server systems, but business benefits can be provided through the introduction of PCs as an active part of a commercial application. There is the additional benefit for the developer of being able to provide a usable, as well as useful, system through the use of a graphical user interface. In our experience the users' reaction to such a client/server system that utilises Windows is one of enthusiasm, and that can be a good start in the implementation of a new system.

Authors: Peter Hill and Brad Allen are with Megatec Pty Ltd.



VICTORIAN BRANCH 1992 CONFERENCE PAPER

IT issues of the 1990s

By Tony Benson

OVER the next few years, studying the computer industry will be akin to staring down a turning kaleidoscope. The single most important differentiator in the 90s is quality. By this is meant not only product quality but also ensuring that every process in the organisation delivers zero defects by the use of closed correction loops: total quality management.

To remain competitive, business cycle times must shorten: faster design and manufacture of products, faster response to incidents and rapid dissemination of policy changes. To increase productivity, processes must be collapsed; some that were serial can be run in parallel. The TQM analysis of processes can identify those that need to be re-engineered.

In many cases this will lead to a major revision of the purpose of computer applications, and in moving to the new way of computing, systems will have to be re-designed. Many old applications incorporate rules which no longer meet today's environment and are difficult to change.

By providing users direct access to the data, decision-making can be distributed, allowing the organisation to be more responsive and to keep its customers competitive. Intelligent terminals will increase the accuracy of the basic data.

By their nature, standards take a long time to define and agree, so that in our rapidly moving industry many implementations get ahead of the standard definition. Even existing standards can be interpreted differently by product developers.

Also, to gain differentiation, vendors augment the standards with "features" which may not be incorporated in later standard revisions. We see the wasted productivity of competing standards within the US and between the US and Europe.

At the detail level there are minor incompatibilities which can be caused by interfaced products being at different revision levels.

In spite of these limitations, the widespread adoption of standards has led to the open revolution.

The systems integration (SI) of open products requires special technical and management skills, which are not readily available, and it is difficult to manage the risk. It is important to understand where the risk is managed.

If the integration is done in the vendor's plant, then the supplier certification and management is done for you and the cost and risk are reflected in the vendor's price. (This is sometimes overlooked when buying PC clones.) If the SI is done by the customer or an SI agent, then allowance for risk should be part of the budget.

SI is not a one-shot function, as the component suppliers adopt standard revisions at differing rates, and sometimes with no prior notice, so that what was inter-operating successfully today may not work as well tomorrow. To resolve these problems quickly requires well-trained staff, good diagnostic tools and standard test environments.

Computer vendors have in the past been recession-resilient, and have usually done well in poor times. However the current downturn is not only due to the economy, but to a fundamental re-structuring of our industry, caused by the new way of computing.

Those vendors which adopted open systems early are generally doing better than those which are stretching their R&D resources by trying to straddle both open and proprietary systems.

In NCR's case we moved to open systems in 1981, on the basis that if our proprietary systems were going to be made obsolete, it would be better to do it ourselves than have it done to us by competitors. At least we could tell our user base where our long term strategy was leading.

Tony Benson





In the open arena, where every level of sub-assembly can be sourced from competing suppliers, the pressure on margins is intense, so reaching a critical volume is mandatory to remain in business. To ensure this, NCR, like other open vendors, will sell its technology at any level of integration to anyone.

Reduced margins must be made up in high volumes or through value-added services such as systems integration. This could lead to the phenomenon of "computerless computer companies" which exist entirely on these value-add revenues.

The drive for R&D critical mass and manufacturing volume will cause alliances to occur, not only between computer and telecommunications companies such as AT&T/NCR, but with large users of computers and vendors of related technologies.

Previous strength in the market is not only of little value, it may be a major obstacle. The user organisations can best be protected from the effects of these changes by moving to open systems as quickly as possible.

Once the economic advantages of the move to open systems are recognised and all but the most essential development work on the old systems terminated, the key issue is the identification of the processes and data elements that should be transferred.

Candidates are those which will deliver the greatest improvement in productivity or quality. Existing systems can be left in place, as their data can be incorporated using an integrating environment.

This move will take most of the decade to complete.

There will be considerable resistance from those with a heavy investment in the old way of computing, and a significant training budget should be allocated. Outplacement programs will need to be planned.

Top management must become heavily involved, as this is the opportunity to consider where data is held and decisions delegated in the enterprise, which will greatly impact the distribution of computing power and capacity.

Downsizing (or rightsizing) the computing resource can offer significant cost savings.

The MIS role will change from one of "owning" the computing resource to one of empowering the organisation and deploying the MIS team to work alongside users to ensure correct data management practices are in place.

Serious consideration should be given to providing basic development tools to users, who will start to build their own decision systems given controlled access to the database. Returning some responsibility for processing, which has been perceived as the domain of the MIS for the past 30 years, can have a positive effect on the attitude and productivity of workers.

The professional MIS will continue to have responsibility for the integrity, security and performance of the database. Locking control and granularity, and their impact on performance, will be key issues.

By 1994, Unix will provide security equivalent to proprietary operating systems, the delay being the appropriate certification processes. However, maintaining security in an open system will continue to require the attention of specialised skills, which will probably reside in the SI companies and the major vendors.

Accumulation of all transactions, and their potential availability over a global network, will cause questions of privacy and control of access to be widely debated. Voice recognition systems will open the door to computer eavesdropping. Advanced encryption algorithms will absorb some of the new MIPS, and smart-card controlled entry points to the system will become the norm.

It will become increasingly difficult to distinguish a computer from a network switch, and network management will embrace LANs, enterprise systems and the transmission networks. IT planning and management will become more complex as network topography, traffic and tariff issues are considered with the array of options that scalable distributed computers will offer.

Design, simulation and diagnostic tools and the skills to use them will be a major new growth service industry.

The MIS charter should extend to cover both computers and telecommunications.

A small group in NCR Sydney has been developing a general tool to assist in mapping the enterprise onto a distributed computing framework.

The concept is a simple yet comprehensive picture of the enterprise on a single sheet of paper, showing the seven levels of computing as defined by NCR, plus a customer interface level. The key issues that top management wish to address are listed.

The strategic processes that absorb most resources or contribute significantly to the bottom line are identified, usually by the Total Quality Manager. Major databases are mapped showing over which levels they are distributed, such as desktop, deskside, workgroup LAN server, department, business unit or enterprise. If required, the current distribution of data and processes can be plotted to contrast with the open system plan.

Since the introduction of open systems is evolutionary, for a considerable period old and new systems must exist together, yet appear to users as a single computer. In fact, users should be unaware of the transition of data or applications from one to the other, or both.

Thus an integrating environment is the first step in a strategy to move to the new era.

Another important step is to get users working together as peers across the organisation, particularly in dispersed workgroups.

The best starting point to do this is E-mail, which should be supported from the very top of the organisation by giving faster response to letters delivered electronically and issuing all management directives by E-mail.

Once this is in place, products like GrapeVine (which distributes soft information or intelligence) can offer added value by intro-

► Continued on page 22



STANDARDS

OSI application essentials

An excerpt from 'The essential OSI'

CONTRARY to some misconceptions, there are many hundreds of OSI products on the market, and the number will increase in coming years as OSI gains greater acceptance and vendors begin to jockey for positions in an expanding market.

OSI application products sit at (and above) the seventh layer of the OSI model. The primary purpose of this layer is to provide a data communications interface for business applications.

Work is still proceeding on defining standards for some applications, and several early standards have already undergone revisions.

It is in the applications area that the most obvious benefits of OSI can be seen. Electronic data interchange (EDI), electronic mail and office automation products already exist, are the day-to-day "tangibles" of OSI for many organisations, and more applications are on their way.

Outlined below are some of the standards defined under the application layer, and the practical capabilities and products that these standards offer.

X.400 message handling services

The X.400 MHS standard provides a common user electronic mail and message handling service. The closest alternative to X.400, from an interpersonal message viewpoint, would be Simple Mail Transfer Protocol (SMTP), which is widely available on Unix-based networks.

X.400 provides a secure, internationally acceptable messaging service that can be invoked by users directly, or by applications using X.400 as a message carrier. Together with its companion product, X.500 Electronic Directory Services, X.400 is a powerful messaging service for both interpersonal messaging and EDI. For most organisations, X.400 will be the most significant OSI protocol adopted. At present, the major users of X.400 products are national carriers, multinational organisations, and value added network service providers — organisations that need to transmit large volumes of information to many destinations around the world, quickly and reliably.

X.500 electronic directory services

The X.500 standard defines an electronic directory service analogous to the telephone directory. The major difference is that the X.500 directory can contain a much wider range of information and can be accessed via standardised protocols. Therefore it has a much wider range of application.

X.500 is a directory for user-friendly names (electronic mail users, business application names). It can provide mapping from these names to an equivalent network address, such as that required to locate a user on a network and can also become the knowledge base for OSI management systems.
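A toy lookup table conveys the core idea of that name-to-address mapping, although it deliberately ignores what makes X.500 interesting in practice: the real directory is distributed across many systems and reached through standard protocols rather than held in a local table. The entries below are invented.

    #include <string.h>
    #include <stddef.h>

    /* Toy X.500-style directory: user-friendly name in, network
       addressing information out. */
    struct dir_entry {
        const char *name;      /* user-friendly name          */
        const char *address;   /* equivalent network address  */
    };

    static const struct dir_entry directory[] = {
        { "orders.warehouse", "X.25 DTE 505234710123" },
        { "j.citizen",        "C=AU; O=Example Pty Ltd; CN=J. Citizen" },
    };

    const char *dir_lookup(const char *name)
    {
        size_t i;
        for (i = 0; i < sizeof directory / sizeof directory[0]; i++)
            if (strcmp(directory[i].name, name) == 0)
                return directory[i].address;
        return NULL;   /* name not listed */
    }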

Unlike many OSI standards, there are no real alternatives to X.500. Proprietary schemes will be unable to attain the same independent widespread adoption and usage that X.500 will enjoy. Major vendors have recognised this and are now working on gateways from proprietary directories, or are aligning their existing naming schemes with the X.500 naming scheme.

Adoption of X.500 places an organisation in a much better position to avail itself of business applications such as EDI, and OSI communication services such as X.400, areas historically not supported by directory services.

File transfer access and management (FTAM)

FTAM is a standard designed to support a total file-handling service in a multivendor environment. It provides a means of handling files remotely, independent of local file definitions or the way in which files are manipulated locally. It achieves this using a concept known as a "virtual" file store.

There are alternative file transfer protocols in existence, including FTP (UNIX, TCP/IP), Kermit (workstations), IBM 2780, IBM 3780 and IBM 3770.

Some organisations use X.400 protocols to transfer the majority of office automation files, and as long as the files are relatively short, it is an easy-to-use solution. The major advantage FTAM has to offer over other protocols is that it provides a single consistent file transfer mechanism, with a wide range of user options to accommodate files that are short, long, complex or simple, partial transfers, all possible file types, and interactive and business application interfaces.

Although in its infancy, FTAM co-exists with the many local area network file transfer protocols. In time it will become the single consistent file transfer mechanism an organisation will be required to support.

Open document architecture (ODA)

The purpose of ODA is to allow the many different document types now produced in the office environment to be easily and transparently exchanged and integrated. It is a relatively new area of standardisation pertaining to the representation and electronic transfer of structured documents such as letters, facsimiles and orders.

Most of the popular office automation products will have translation products which convert incompatible document formats. A number of document architectures, such as IBM's DISOSS/DCA and OfficeVision, or Digital's CDA, are already on the market.

The nearest equivalent to ODA is SGML (Standard Generalised Markup Language), which is itself an ISO standard. Most suitable for unstructured, large documents, SGML has been widely used in the publishing industry and has been adopted by the US Department of Defence for its CALS (Computer-aided Acquisition and Logistics Support) program. The two standards are in fact complementary, and CALS specifies both.

The major benefit of ODA will be improved productivity, due to the increased ease of management of documents consisting of text and graphics. While some proprietary office automation (OA) systems claim to offer straightforward reproduction of documents, it can only occur if a user's preferred OA products have been integrated by the incumbent OA vendor.

The ODA standards work has only now produced the first tangible deliverables to the OA vendors, from which they can build their next generation of products. There are few ODA products available today, and very little conformance and interoperability testing has been conducted.

Most of the major OA vendors will be offering a gateway solution to link ODA and their proprietary OA products. While this is a logical first step, user organisations need to assure themselves that the gateway solution proposed does in fact provide a reasonable ODA migration path.

With the Government Open Systems Interconnection Profile (GOSIP) encouraging the adoption of ODA by government departments, it is likely that many small businesses will need to implement ODA in order to conduct business with government departments around the world.

Remote database access (RDA)

RDA enables access to a remote database system. While it is sometimes confused with Structured Query Language (SQL), there is a difference. RDA's role is to provide the communication path between a business application and a remote database. SQL is the language used to access and manage the database.

With the increasing adoption of distributed processing solutions, RDA will become a key component of open system computer networks. Some proprietary alternatives to RDA already exist. However, because they are proprietary, they have not been taken up by other suppliers and will remain a single vendor solution only.

RDA is still a draft international standard (DIS) and it is unlikely that products will be widely available for some time. As just about every reasonably sized organisation now has some database capability with an associated SQL, most organisations will eventually need to consider migration to RDA.
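The division of labour between SQL and RDA can be made concrete with an interface sketch. Since RDA was still a draft standard at the time of writing, the function names below are invented rather than taken from any product: SQL remains the language of the request, and the RDA-style layer only carries it to the remote database and brings rows back.

    /* Hypothetical RDA-style interface (bodies omitted): the layer
       transports SQL text and results, and knows nothing about what
       the statements mean. */
    struct rda_connection;   /* opaque: hides the OSI association */

    struct rda_connection *rda_connect(const char *server);
    int  rda_execute(struct rda_connection *c, const char *sql);
    int  rda_fetch(struct rda_connection *c, char *row, int rowlen);
    void rda_disconnect(struct rda_connection *c);

    /* Intended use: the SQL is unchanged from the local case, e.g.
     *
     *   rda_execute(c, "SELECT name, balance FROM account "
     *                  "WHERE balance < 0");
     *
     * so swapping a local database for a remote one alters the
     * communication path, not the language of the request. */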

Virtual terminal (VT)

The VT concept is that all terminal and computer vendors have a common understanding of the functions and sequences required of a "virtual" terminal that they can incorporate in their products. This "virtual" standard terminal can then be mapped to a vendor's real terminal, or terminal-handling software. VT is designed primarily for connection to an application residing on a remote host, where the terminal uses a local host (or terminal server) as an intermediary.

The commonly available TCP/IP Telnet, Digital VTxxx and IBM 3270 terminal protocols are all examples of protocols performing similar functions to those described by VT. Telnet and VTxxx are closely related to VT basic (which has current sponsorship from GOSIP).

One of the benefits of VT is that it will extend the life of current terminal equipment where it is required to access OSI-based applications. Also, the costs of developing and maintaining business applications software that accommodates interactive terminal users should be significantly reduced.

The functionality of the VT has been kept to a minimum level, due to the difficulty in gaining agreement regarding the actual model to be used to describe the VT. This means there is some uncertainty as to how the VT standard will be further developed.

Today, the use of intelligent terminals is growing rapidly throughout most organisations. These terminals tend to be graphics-based and have software capable of manipulating images and text on high resolution screens. VT has not yet been extended to cater for such terminals. Of those products that already conform, the majority only accommodate non-intelligent terminals acting in the client role of VT. The server function has not been widely implemented. Many computer vendors have committed to building VT products. Governments have endorsed VT and some departments may, in the near future, implement VT.

There are many industries with existing terminal-based networks that are also good candidates for the VT protocol. It will be important to monitor the success of these early implementations of VT, as there are still a number of areas of uncertainty surrounding the standard.

Many organisations have a mixture of non-intelligent and intelligent terminals (X-Windows) and workstations (PCs). They will have problems trying to build a common user interface to all their business applications from such a variety of terminal types. Functionally, VT will be the lowest common denominator, and so knowledge of the standard will be required.

Manufacturing message specification (MMS)

The Manufacturing Message Specification has been initially defined to meet the need for integration of the various electronic devices (ranging from high level computers to simple pressure or temperature sensors) that are found in a typical factory environment. MMS is essentially a standard language used to structure and control the messages/instructions passed between different MMS-compatible devices on a network. MMS has the potential for use in any situation where application-to-application messaging is required.
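The "standard language" idea can be illustrated with a small C program. The service, device and variable names below are invented, not taken from the MMS standard: the point is that the application builds one standard message shape, whatever kind of device is on the other end.

    #include <stdio.h>

    /* Hypothetical MMS-style request: one message shape addresses a
       PLC, a robot controller or a simple sensor alike. */
    enum mms_service { MMS_READ, MMS_WRITE, MMS_START, MMS_STOP };

    struct mms_request {
        enum mms_service service;
        const char *device;     /* e.g. "weld_cell_3"       */
        const char *variable;   /* e.g. "torch_temperature" */
        double value;           /* used by MMS_WRITE only   */
    };

    int main(void)
    {
        struct mms_request req =
            { MMS_READ, "weld_cell_3", "torch_temperature", 0.0 };
        printf("%s %s@%s\n",
               req.service == MMS_READ ? "read" : "other",
               req.variable, req.device);
        return 0;
    }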

While there are many proprietary equivalents of MMS, there are no true equivalents capable of addressing the communication requirements of the wide range of computer-based control devices used in supervisory control and data acquisition (SCADA) systems, programmable logic controllers (PLCs), welding machines, robot controllers and numerically controlled machines.

The major benefit of MMS will be the reduced cost of maintaining large numbers of incompatible devices (particularly when such factors as cost of maintenance and upgrade are considered). It will streamline data communications throughout the factory environment, requiring only one set of data communications protocols.

Prior to MMS the factory environment was segmented into “islands of automation”. The best that a production manager could hope for was to restrict the number of incompatible devices within each island. MMS provides the mechanism to integrate those devices.

MMS can be supported either by MAP-based IEEE 802.4 networks or on TOP (office environment) networks that use IEEE 802.3 (Ethernet) or X.25.

X.700 systems management (SM)

The X.700 management standard was created in order to meet the challenge of managing data networks comprised of both hardware and software. The rationalisation of the many network management systems that exist in organisations today will be tomorrow's challenge for network managers. By adopting universally accepted X.700 standards, this will be greatly simplified.

X.700 SM percolates through all seven layers of the OSI model. Hence OSI application services such as X.400 or FTAM, OSI routers or LAN bridges, multiplexers and modems will all have in-built X.700 SM routines, all of which will be part of the same X.700 network management system.

The nearest equivalent to X.700 today is the Simple Network Management Protocol (SNMP), commonly adopted in Unix-based data networks, a widely implemented set of de facto management standards. Many network managers believe that SNMP is a logical solution for most of today's management problems and that X.700 will be tomorrow's solution once sufficient products emerge.

The X.700 standard will not only be applicable to data networks but also to voice networks. The telecommunications industry, due to its own need for improved network management, is currently leading the development and application of X.700.

Application program interfaces (APIs)

An API is the "pipe" that connects business applications, such as payroll, stock control, logistics and accounting, to the OSI communication services. APIs are not part of the formal OSI standards. They are the mechanism by which vendor-supplied products interface to OSI services. Each API is specific to the OSI services to which it interfaces.

► Continued on page 22


TRUFAX is the first truly versatile fax package. TRUFAX is easily integrated into any UNIX based application system. TRUFAX bundled together with a fax modem gives the user a low cost fax system. TRUFAX fax output will be as clear as output from any laser printer at the receiving end.

Contact: Saki Computer Services Pty Ltd. Tel: (03) 752 1512. Fax: (03) 752 1098.



Seriously though...

DataBoss 3.5 is a powerful and professional environment for developing relational database applications in C or Pascal.

DataBoss performs by saving you hours of programming time, because programs are created using pull-down menus, a mouse, and point-and-shoot selection windows. This means there is no need for extensive hand coding. DataBoss performs by generating single or multi-user structured "C" or Pascal source code, which can be easily modified if needed.

DataBoss performs with an intelligent WYSIWYG report designer which automatically offers you suggested report designs. You can accept these or simply "paint" your own.

Kedwell Software performs by offering full technical support and a comprehensive manual with a practical tutorial, and there are no licence fees.

DataBoss add-on product: ManualWriter. Save hours and hours of manual writing with ManualWriter. ManualWriter will automatically create a manual for any of your DataBoss applications.

If you want more information about DataBoss 3.5 or ManualWriter, please mail this coupon to the address below.

Name / Company / Address / Phone / Fax

KEDWELL
A.C.N. 010 752 799. P.O. Box 122, Brisbane Market, Queensland, Australia 4106. Telephone 61-7-379 4551. Facsimile 61-7-379 9422.

Let DataBoss perform for you!


WHO’S WHO IN ACS

Prins Ralston, Northern Territory Chairman

HOW does the smallest ACS Branch manage to achieve the highest profit of all branches? Just as the best answer to increasing productivity in IT is to "hire smart people", the answer to improving ACS performance in branches is to "elect smart office bearers".

Prins Ralston, possibly the youngest ACS branch chairman, was born in 1963 and was educated in Victoria and in the Northern Territory. He matriculated in 1979 as dux of the school and then went on to study electrical engineering at Sydney University, business computing, accounting and law. In his spare time he undertakes the CPA exams.

During his student days, Prins won the IBM Award for outstanding achievement in computing project work, the Institute of Chartered Accountants' prize for the highest mark in auditing, and the KPMG Peat Marwick award for the highest grade in Computer-Based Accounting.

In order to maintain a balanced life, he also played soccer, rugby and cricket at university and at State level, and played reserve grade volleyball and golf. To acquire management skills, he was president of the Sydney University Soccer Association, foundation chair of the student association at Darwin Institute of Technology, a member of the Northern Territory Council of Higher Education and manager of the Northern Territory men's volleyball team.

Prins' first job was as an electrical engineer and his current position is information systems manager at Northern Territory University, where he also lectures in information systems and management accounting. Prins manages a network of over 500 devices utilising optical fibre, co-axial cable and a Translan bridge for both data and voice communications. There are other micro networks including DEC "thin wire Ethernet", Novell, IBM Token Ring and AppleTalk. Several multi-media solutions are being researched on a mixture of Apple, DEC and IBM platforms.

Prins' involvement in ACS activities is a logical extension of his earlier involvement in student and sporting bodies. "If you think something is important," he says, "then you do your best to make it work as well as possible, and I think that ACS is very important. In a small branch such as ours, ACS provides a vital opportunity for professional networking and we are gradually extending our professional development activities. We are becoming a major and very relevant force in NT's IT profession."

Prins sees his major challenge as "getting other IT professionals involved in ACS management so that we can maintain the momentum we have built up. Another challenge is to make ACS known to the wider community so that the skills and knowledge of ACS members become a valuable community asset for the Northern Territory".

Prins Ralston

Victorian Branch News

THE Victorian Branch hosted a conference for student members in March. It was most successful with the student audience, and the branch now has over 30 student contacts in various institutions who are prepared to be involved with promoting ACS activities on their campus. Several students commented that the conference had renewed their desire to work in IT and that they had been unaware of the scope of opportunities.

Borrowing from the branch library may now become a reality; the branch also intends to make videos available on loan. Videos such as that made at the recent hypothetical on "Should You Have a Licence to Do IT" will be able to be borrowed.

"We plan a simple system," says Victorian manager, Denise Martin. "Whatever you borrow will require a $50 deposit, which will be returned when the item comes back. If we incur handling and postage costs they will be passed on to the borrower, but if members collect items there will be no hire charge."

Denise Martin is now “boss of the office,” taking over this role from consultant Kate Behan whose initial three-year contract with the branch has now expired. Kate will remain on as a consultant for at least another year, but on reduced hours and with a primary focus on membership activities and promotion.

The branch's very successful professional development program continues. A two-day conference will be held in June; a hypothetical on metrics, "How Long is a Piece of String"; a Rob Thomsett executive briefing on Managing Systems Development; and the popular Ross Jeffery course on Function Point Analysis are just some of the quality offerings during the next few months.



ACS IN VIEW

Pearl Levin

Pearl Levin, FACS — Farewell

PEARL Levin, a highly respected and active member of the Victorian Branch of the Australian Computer Society since 1971, passed away on 13 April 1992 after a valiant struggle with illness.

She was elected as a Fellow of the Society in 1990 for outstanding services to computing in Australia, particularly in education. It was in computing that Pearl took up the cause of the advancement of women, and her dedication to that cause resulted in many females achieving a successful career in the field. She was also a member of the Membership Board of the national body of the Society and, at the time of her passing, was the Chairperson of the Internal Membership Committee.

A long and highly successful history in computing lies behind these remarkable achievements. Pearl joined what was then the Caulfield Technical College in 1965 as a Data Processing Operator for the early Ferranti Sirius and Control Data 160A computers.

In 1969 she became a Tutor/Demonstrator in the EDP Department and was in charge of the highly successful Operators and Coders Certificate course, with graduates being keenly sought after by industry. In 1981 Pearl was promoted to Lecturer, having completed her studies for the Bachelor of Applied Science (Computing). Subsequently she was promoted to Senior Lecturer and later Principal Lecturer. The Principal Lectureship position was titled the "Caroline Chisholm Principal Lectureship" and was awarded to Pearl in 1987 for outstanding contributions to teaching. Pearl was also a successful consultant, and many of the computing systems she designed are still in operation.

The many computing graduates of Chisholm Institute of Technology (previously Caulfield Institute of Technology and now Monash University) will remember the excellent foundation education in computing they received at the hands of Pearl, as well as the challenges involved in progressing through her carefully designed case study assignments.

More particularly, however, students will remember the thoughtful and caring way Pearl treated her precious brood of students, and the compassion she had for those students who had difficult personal problems. She was known to staff and students alike as the "Mother of Computing", both for her long service to computing in the Institute and for her sympathetic but thorough approach to the teaching of computing. She was at all times a thoroughly professional person, respected by both staff and students.

In 1990 Pearl accepted an appointment as Director of the Pearcey Centre, which is tied to the Faculty of Computing and Information Technology at the Caulfield Campus of Monash University.

She was a member of the Victorian Branch Executive Committee for several years, and was the Programme Director for the ACS National Conference in Melbourne in 1987.

We say goodbye to this remarkable person, who will always be remembered for her outstanding contributions to computing education and the computing profession through more than 25 years of dedicated service.

The Australian Computer Society

Office bearers
President: Geoff Dober. Vice-presidents: Garry Trinder, Bob Tisdall. Immediate past president: Alan Underwood. National treasurer: Glen Heinrich. Chief executive officer: Ashley Goldsworthy.
PO Box 319, Darlinghurst NSW 2010. Telephone (02) 211 5855. Fax (02) 281 1208.

Peter Isaacson Publications

A.C.N. 004 260 020

PROFESSIONAL COMPUTING
Editor: Tony Blackmore. Editor-in-chief: Peter Isaacson. Advertising coordinator: Christine Dixon. Subscriptions: Jo Anne Birtles. Director of the Publications Board: John Hughes.

Subscriptions, orders, editorial, correspondence
Professional Computing, 45-50 Porter St, Prahran, Victoria, 3181. Telephone (03) 520 5555. Telex 30880. Fax (03) 510 3489.

Advertising
National sales manager: Peter Dwyer.
Professional Computing, an official publication of the Australian Computer Society Incorporated, is published by ACS/PI Publications, 45-50 Porter Street, Prahran, Victoria, 3181.

Opinions expressed by authors in Professional Computing are not necessarily those of the ACS or Peter Isaacson Publications.

While every care will be taken, the publishers cannot accept responsibility for articles and photographs submitted for publication.

The annual subscription is $50.




ACT Branch Notes

Recruitment criticism from Foundation Fellow

EMERITUS Professor DJ Overheu has criticised ACT IT employers, government and commercial, for overlooking computer engineers when hiring computer software staff.

Professor Overheu was addressing the annual general meeting of the Canberra branch of the Australian Computer Society on the topic of the Ada computer programming language. He departed from his prepared text to express concern that ACT IT employers had too narrow a focus when selecting staff.

Professor Overheu stated that the University of Canberra produced the best IT graduates in Australia from its computer engineering course. However, many employers were failing to recognise that the talents of these people included computer software development, not just hardware development.

At its March meeting in Sydney the Council of the Australian Computer Society honored Professor Overheu with honorary life membership.

Professor Overheu becomes only the 15th such member in a society of more than 14,000 members. Overheu was an instigator and founding member of the South Australian Computer Society formed in 1960 and the Queensland Computer Society formed in 1962. He served as president of the latter society in 1963 and 1964. He then started and was a founding member of the Canberra Computer Society formed in 1965. He served as chairman of that society in 1967 and 1968.

Professor Overheu played a leading role in the formation of the Australian Computer Society in 1966. He was a member of the first council of the society and served as vice-president in 1968-69 and 1969-70. He was honored by being elected a Foundation Fellow.

He had a distinguished academic career at the University of Queensland, the Canberra College of Advanced Education, the Australian National University and the University of Canberra, and was appointed Pro-Vice-Chancellor of the latter institution in 1990.

DP Education (Seminars) Pty Ltd
INDEPENDENT SPECIALISTS IN EDP TRAINING SINCE 1972
Independent trainers since 1972, we provide specialised training to professional computing personnel. Many of our courses are workshop style, with a limited audience of 20, ensuring excellence and value in training and development. Over 2000 professionals attend our courses each year.
We take this opportunity to advise you of our calendar of courses during early 1992. All of these courses have PCP allocated hours.

Technical Leadership Workshop (Daniel Freedman & Dennis Davie): Sydney August 9-14
Program Testing Workshop (Geoff Quentin): Sydney May 25-26; Melbourne June 9-10
Program Maintenance Workshop (Geoff Quentin): Sydney May 27-28; Melbourne June 11-12
Software Testing Tools (Geoff Quentin): Sydney May 29
Structured Analysis Workshop (Sean Boylan): Sydney June 1-3; Melbourne June 15-17
Business Analysis for Users (Sean Boylan): Sydney June 4-5; Melbourne June 18-19
Data Analysis Workshop (Sean Boylan): Sydney June 22-23; Melbourne June 24-25
User Documentation Workshop (Geoff Quentin): Sydney July 20-21; Melbourne July 22-23
Small Team Leadership (Dennis Davie): Melbourne July 20-21; Sydney July 23-24
Project Management Techniques (Peter Puschak): Sydney July 27-29; Melbourne August 3-5
Quality Assurance via Technical Reviews (Peter Puschak): Sydney July 30; Canberra July 31; Melbourne August 6

EARLY BIRD fee discounts are available to all our public courses, for those who act quickly. Ring (02) 550 6111 to see how you qualify, and receive further information on these and other courses.

1972 - 1992 20 Years of Service



Products pictured: Microsoft Visual Basic; Microsoft Windows Software Development Kit (development tools for building Microsoft Windows applications); Microsoft LAN Manager local area network software; MS-DOS 5, which includes both 5¼" high-density (1.2 MB) and 3½" disks (also available on 360K disks; details inside); the SYBASE database server for PC networks (10-user system); Microsoft Professional Development System.

Take part in the development of the new age of personal computing, with these state-of-the-art products from Microsoft. With these tools, you can create applications and systems with unlimited potential for the Windows environment of client/server computing.


The Microsoft Institute of Advanced Software Technology

invites you to become a Post-Graduate

During this decade, the face of computing will rapidly change. The PC will continue to develop, to become part of an integrated system of powerful workstations, providing a simple interface to global information systems.

As a result, the job market for mainframe-only skills will diminish.

The Microsoft Institute of Advanced Software Technology was established to provide you with the skills necessary to build systems and applications for this new environment of distributed computing.

Attending a course at The Microsoft Institute is the most effective method of mastering the new model of client/server computing.

Courses are held Australia-wide by highly qualified lecturing staff who have trained at Microsoft Corporation in Seattle. They are all experienced developers who have participated in major software projects.

So, be amongst the first to profit from these new opportunities. And train for the future by enrolling at The Microsoft Institute.

Call (02) 870 2250 for a prospectus and course schedule now.


PC’92 BURNING ISSUES CONFERENCE PAPER

The convergence of manufacturers' products?
In the workstation and Personal Computer market we have more true generic operating systems than ever — and more agreement on standards.

By Vance X Gledhill

THE notion of standardisation of IT products has its genesis in the 1950s and '60s with the very successful efforts to provide a common development environment for programmers in order to make the most of their intellectual investment in computer programs. Fortran and, later, Cobol provided the opportunity for programmers to build applications on one set of hardware with its proprietary operating system, and then port those programs with relative ease to new computing platforms. The idea of any standardisation in hardware or operating systems was thought to be an unattainable goal in the 1950s and '60s.

With the advent of cheaper hardware for personal computers, CP/M from Digital Research gave developers a taste of what might be possible: a simple operating system that would run on a range of hardware provided by different manufacturers. Now an independent software developer could build a product for a horizontal (eg Wordstar) or vertical (eg local government) market and be assured that competition among hardware suppliers would provide cheap, reliable systems. Hundreds of applications were available on that platform. Such was the success of the idea that companies and government agencies seized on it to standardise when determining equipment purchases. The NSW government issued an edict that government departments should only purchase CP/M-compatible systems to ensure optimum support.

A decade ago, this idea reached maturity with the belief that adherence to standards for hardware, software and communications interfaces would mean that computers would be far easier to buy, connect and use. If one hardware manufacturer was unsuitable, a user could switch to any one of dozens of other compatible machines without fear of losing their software investment. Software vendors had a standard target machine and operating system, which enabled them to invest heavily in generic packages which could address a wide market. Users were given greater choice. This reached a high point with the release of the IBM standard of PCs supported with the Microsoft DOS operating system. Over 70 million have been installed worldwide.

This standardisation, or acceptance of a common IT platform by the marketplace, allowed software companies to invest millions of dollars to develop the "best" generic software that would have a market of tens of millions of machines. The manufacturers could focus on quality and performance to gain market share for their Intel-based personal computer systems.

The success of that strategy has led to the extrapolation of the idea into all areas of IT: from mainframes, to midrange, to workstations, to communications and data standards. Unfortunately the idea is not as easily scalable as was thought initially.

The title of this session reflects the naive hopes of people who saw the advantage of the PC revolution and the benefits it brought. These hopes failed to recognise the new sophisticated demands of the market and the fierce competition among key transnational IT companies as they sought the formula for survival and growth in the 1990s.

Although sound in theory, it is a quagmire in reality. "Standards" and "open systems" have become the most overworked and meaningless words in the industry. Are open systems those that can simply exchange information, or must they also run each other's software? Should they be built from standard parts, or from parts that conform to standards? The only commonality across the industry is that companies will endorse "standards" they can live with in their business plans. Thus we are seeing hastily formed and opportunistic alliances among all companies in the industry in an effort to survive. It is the classic battle of the robber barons, who will bed, wed, hunt, and dine with anyone who will serve their purpose. As in the classic game of Diplomacy, it is not a matter of when these alliances will break down, but who will move first to try to take advantage of the other members.

No, unfortunately, we should not wait, as has been suggested, for the manufacturers' products to converge into a single open system. That will not happen. On the contrary, the first five years of this decade will be among the most confused, as different groups seek to attain the market dominance that MS-DOS and the PC enjoyed in the 1980s.

Yet this diversity and experimentation will be for the good of the user. The products that win the hearts and minds of the users for the remainder of the decade will be battle-wise and well honed. The competition will produce technology and applications that we can scarcely envision.

This article examines some of the turmoil in the industry and focuses on the diversity in each of the sectors of the workstation market. It will conclude with observations about the likely outcome.



Hardware diversity:
The idea of broad adoption (standardisation) first embodied in the success of CP/M and the Apple II reached its ultimate realisation with IBM's specification and marketing of the Intel-based IBM-PC. The openly published architecture and design of the PC allowed literally hundreds of other manufacturers to enter the market, leading to the dominance of that system in the personal computer market, with over 70 million systems installed with a standard development environment of MS-DOS, a relatively unsophisticated operating system.

The IBM-compatible PC defined the chip architecture, the bus, the interfaces, and the I/O configuration. MS-DOS provided a basic program interface that would ensure "shrink-wrapped products" which were binary compatible across products from a number of suppliers.

The perceived dominance by a few key players to the exclusion of others led to the competition looking for chinks in the armor. Apple introduced the highly successful Macintosh in recognition of the unfriendliness of the MS-DOS interface, while SCO Xenix introduced the world of Unix to the PC user. These two developments have encouraged others to follow their lead.
Chip diversity:

The PC world was dominated for many years by the Intel chip set. Over 80 per cent of all processors sold for PCs were part of the Intel series. Events of the past 18 months have opened the door to a number of other chip manufacturers which, over the decade, will threaten Intel's position.

Manufacturers of chip sets that are vying for dominance, or at least a significant market, include MIPS, Motorola and other RISC sets including the Power series from IBM/Apple/Motorola, the Alpha series from DEC and the Sun SPARC series.

This competition will lead to markedly improved performance for users. Not to be left behind, Intel have recently announced their P5 chip, which will contain over three million transistors and operate at around 100 MIPS on a desktop workstation. Similar or better performances can be expected from the others to remain competitive.
Bus architecture:

Workstations that wish to be successful in future will have to have an open bus architecture to enable third parties to integrate boards of their own design. For years, the AT bus was a standard to which many suppliers could build. In an attempt to improve performance and gain some further control of the market, a number of alternative bus designs are being promoted.

Now users have to contend with decisions about ISA, EISA, MCA, NuBus and a range of new designs that are being promoted to provide an adequate bus speed for the new high performance processors, memories and peripherals.

Integrated hardware platforms:
As the size of the PC and workstation market grows, and the forecast of sales for midrange and mainframe computers indicates continuing decline, numerous manufacturers are looking to this new, workstation-based multi-billion dollar industry as their salvation.

Today, and over the next few years, we will see a confusing array of new systems available to buyers. Not only will there be the Intel-based PC, but also the Intel-based ACE workstation (see later). In addition there are current and viable alternatives from Apple, Sun, NeXT and Tron. There have been significant alliances formed for future releases, which include the Intel/MIPS ACE system, the IBM/Apple PowerPC, and a new generation of Sun and NeXT computers focusing on the Intel processor platform.
Software diversity:

Microsoft and Apple dominated the desktop market for a number of years. For reasons discussed previously, the opportunities now exist for more sophisticated systems to take a significant part of the future software market. New requirements such as security, fault tolerance, multiprocessor support, multitasking, integrated communications, and support of object technology will lead to the introduction of much more sophisticated systems and software.
Operating systems:

There will always be a place for simple file management and scheduling systems. However, these will largely continue to be satisfied by a continuing evolution of MS-DOS. It is in the workstation area that the interest lies.

The main contenders for the new workstation market are principally OS/2 2.0, a joint IBM/Microsoft development, the "Pink" operating system from Apple and IBM, Windows NT from Microsoft and a wide variety of Unix variants.

Each of these operating environments looks superficially similar, and it will be hard for the naive user to make an informed decision in regard to the most suitable system. On the other hand, each environment is quite different. Software developed by ISVs or corporate developers for one platform will not run without significant source modification on another.

Ultimately, experience has shown, the ISVs will determine who will be successful. The system that is the focus of the major applications developers and vendors will be pushed to the fore as more and more users look to pre-packaged solutions and maximum connectivity for their purchases. This, in part, explains the current success of MS-DOS and Windows and the limited success of OS/2 and Unix on the desktop.
Development tools and standards:

There is little sign of a convergence in the tools that developers use to build their software. The success of Cobol and Fortran on mainframes is unlikely to be repeated on personal computers.

Vance Gledhill






If there has been a trend in recent years, it is towards C as a language. For the past six years, C has been the basic tool in Software Development Kits (SDKs) used to build new applications.

However, the complexity of the graphical user interface, the shift to event-driven programs, and the promise of reusable modules from object technology foreshadow a further change. Over the next 18 months C++ will become a significant tool. This is not simply an extension of C; it is a new programming philosophy that may provide the marked increase in program and systems productivity that we need.
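The scale of that shift is easier to see in code than in prose. The listing below is a minimal sketch, in C, of the event-driven model that Windows imposes on the programmer; the Windows calls are the standard ones, while the window class name and the displayed text are invented for illustration.

#include <windows.h>

/* An event-driven program does not call for its input: it waits for
   messages (events) and reacts to each one as it arrives. */
static LRESULT CALLBACK DemoWndProc(HWND hwnd, UINT msg,
                                    WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_PAINT: {                    /* the system asks us to repaint */
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        TextOut(hdc, 10, 10, "Hello, event-driven world", 25);
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:                    /* the window is being closed */
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int nShow)
{
    WNDCLASS wc = {0};
    HWND hwnd;
    MSG msg;

    wc.lpfnWndProc   = DemoWndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = "DemoClass";
    RegisterClass(&wc);

    hwnd = CreateWindow("DemoClass", "Event-driven demo",
                        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                        320, 200, NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nShow);

    /* The heart of the program: fetch the next event from the queue,
       dispatch it to the window procedure, repeat until WM_QUIT. */
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}

Control is inverted: the program never asks for input, it simply reacts to whatever message arrives next, and object technology is one answer to the sprawling switch statements this style otherwise breeds.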

Riding on the back of this emphasis on object technology and GUIs are development environments such as Visual Basic, Smalltalk and Eiffel.

Fortunately, the manufacturers and developers are coming together on some issues. There is agreement evolving internationally on Application Program Interfaces (APIs) for Mail, LANs and Object Data Bases. This will allow application builders operating at higher levels to ensure that the products they develop will interoperate with other system components.
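The pay-off for the application builder can be sketched in a few lines of C. The interface below is hypothetical (MailMessage and MailSend are invented names, not any published mail API), but it shows the idea: the application codes to one agreed interface, and any conforming mail engine behind it can carry the message; here a stub engine simply prints it.

#include <stdio.h>

/* Hypothetical vendor-neutral mail API: the application codes to this
   interface and any conforming mail engine can carry the message. */
typedef struct {
    const char *to;
    const char *subject;
    const char *body;
} MailMessage;

/* Stub engine for demonstration; a real installation would supply this
   entry point and route the call to the local mail system. */
static int MailSend(const MailMessage *msg)
{
    printf("To: %s\nSubject: %s\n\n%s\n", msg->to, msg->subject, msg->body);
    return 0;                           /* 0 = accepted for delivery */
}

int main(void)
{
    MailMessage note = {
        "branch.secretary", "Course calendar",
        "The June professional development program is attached."
    };
    if (MailSend(&note) != 0)
        fprintf(stderr, "mail engine rejected the message\n");
    return 0;
}

Swap the stub for a different vendor's engine and the application above does not change; that substitutability is exactly what the emerging API agreements promise.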

Computer applications:
There is considerable uniformity amongst the products from different suppliers in the seven basic PC applications, viz: Word Processing, Spreadsheets, Presentation Graphics, Personal Data Bases, Project, and Mail. Microsoft, Lotus, Borland, etc have now focused their primary applications on the Windows environment, with the subsequent porting of this "look and feel" to other workstation worlds.

We can anticipate this consistency to continue, with different products, from time to time, having market ascendancy.

It is anticipated that the major new applications areas of Work Group Computing and Multimedia will go through a period of experimentation before the pattern of products in these areas is established.
The Advanced Computing Environment (ACE)

Currently over 85 different IT companies are members of the ACE consortium. Principal companies include Microsoft, DEC, Compaq, MIPS, Pyramid, SCO, Sony and CDD. The aim of the consortium is to agree on a nonproprietary, standards-based computing environment that will provide a specification for workstations for the coming decade. The specification nominates two chip standards, Intel and MIPS, and two operating systems, the SCO Open Desktop Unix and Microsoft's Windows NT.

It is hoped that by ensuring a high level of agreement amongst equipment manufacturers, the PC phenomenon of the 1980s can be recreated for high-end workstations. The first products built to the ACE initiative, together with the appropriate operating systems, should be available this calendar year. It should be noted, however, that two major players, IBM and Apple, are not members of the ACE consortium.
Multimedia PCs (MPC)

Multimedia is seen as a major market opportunity in all areas of business, the professions and home use. The Multimedia PC platform arose out of a desire to focus on a desirable and sufficient set of technology to support this next and important applications environment.

A number of computer hardware and software companies, representing over 25 per cent of the world PC market, are producing Multimedia PC stand-alone systems or upgrade kits to turn standard PCs into MPCs. The minimum requirements for an MPC are a 2 M-byte 80286 (or higher) based PC with Windows 3.0 and Multimedia Extensions. Storage must include a 30 M-byte hard drive and a CD-ROM drive, plus a VGA board and a sound board. Experience would indicate that a higher specification would be highly desirable.

The MPC platform has been slow to appear, but that is largely because of the lack of adequate software tools. At this early stage of product specification, a large number of developers are building authoring tools to allow applications to be more easily assembled using the combination of the processor logic, high quality graphics, sound, and images.

Conclusion
Where does all this leave us? What started out as a wishful hope for the convergence of manufacturers' hardware and software platforms has turned into a nightmare of great diversity, which is leading to great confusion amongst serious developers.

What will be the outcome? There are as many views of that as there are players in this current round of product announcements, alliances, takeovers, and bankruptcies.

The good news is that the outcome will be to the user's advantage. The second decade of PCs will see high performance workstations, communicating cooperatively to bring information from a wide variety of sources to the user. New operating systems will be sophisticated, secure and reliable. New applications in multimedia and workgroup computing will dwarf the stand-alone applications of the past.

The bad news? We will not know the outcome for some time yet. The next 18 months in computing will be among the most interesting we have witnessed for decades. Traditional centralised systems will be replaced by distributed systems, new companies will dominate the transnational IT world, while better informed and articulate users will dictate the evolution of the industry. This time next year there will be some winners and more losers, which will lead to a consolidation of manufacturers' offerings — we can only hope that this provides a solid platform from which to launch the next stage of computer systems development.

Author: Professor Vance Gledhill is director of the Microsoft Institute.



CONFERENCE

Melbourne

ACEC’92: Australian Computing in Education Conference, Melbourne, July 5-8.

Computing the Clever Country: Conference program takes shape

THE program of the 10th Australian Computing in Education Conference is taking shape, and will feature a number of well-known and exciting speakers addressing the topic Computing the Clever Country? It has been a tradition in conferences such as this to import British and American speakers, sometimes ignoring the fact that Australia is among the leaders in computer education.

To redress the balance this year, ACEC'92 will feature a number of well-known Australian speakers including:
■ Vance Gledhill, director of the Microsoft Institute of Advanced Software Technology, will speak on Personal Computing Second Decade. It is only ten years since IBM introduced the first PC based on MS-DOS and industry and commerce began to take the microcomputer seriously. The second decade will be even more dramatic as the personal computer becomes part of many consumer products, is integrated into home information services and becomes a basic office tool for all workers.

■ Bill Caelli, director of the Information Security Research Centre and Professor of Computer Science, Queensland University of Technology, is well known as a lively speaker who compels his audience to come to grips with important issues. In 1986, Bill won the Australian Information Technology award for achievement in the Information Technology Industry. Bill is also technical director of ERACOM.

■ Gitte Lindgaard, principal scientist, Human Factors Research Team, Telecom Research Laboratories, is known to many ACS members for her work in the Ergonomics Society.

■ Rhys Francis, senior research scientist and project manager in the High Performance Computer Program, CSIRO Division of Information Technology, will address the topic Computing: Changing the Way Scientists Work. The advent of computing has permitted science to address problems which would otherwise be unsolvable. Computing has also changed the way scientists work. Computer-based simulations, visual depiction of results and the availability of tremendous calculating power have given today's scientists much better tools with which to pursue science than those available to Einstein or Newton.

■ Paula Dawson, artist in residence, Australian Museum, is working with the RMIT Computer Graphics Centre to create the world's largest computer-generated hologram. The work requires innovations in 3D modelling, electron-beam lithography, scientific visualisation and computer-generated holography techniques. Paula is the epitome of the Clever Australian and we hope you will be inspired by her example.

■ David Woodhouse currently holds the position of deputy executive director, Hong Kong Council for Academic Accreditation, but many ACS members will remember him from his work for the ACS and at La Trobe University. Asian countries like Singapore, Taiwan, Korea and Hong Kong are not just talking about becoming the Clever Country; their education and industrial policies are aiming at ensuring that they become Clever. Has Australia something to learn from these countries?

ACEC'92 does, of course, also have its quota of excellent overseas speakers. These include Jeremy Roschelle (from the Institute for Research on Learning, Palo Alto, California), John Mason (Professor of Mathematics Education at the Open University, UK), Howard Flavell (a British secondary teacher who also works at the University of Birmingham), Brian Alger (from River Oaks Public School, Ontario, Canada), and Kathleen Sunshine (Director, International Telecommunications Centre and Professor of Communications, Ramapo College of New Jersey).

Early bird registration for ACEC'92 is $150 for the three days, and is open until 5 June. After this time, full registration will be $170. Single day registration is $85 per day. These prices include admission to all conference sessions, conference proceedings, lunch, and morning and afternoon teas.

If you are interested in what is going on in education and whether education should be contributing to Computing the Clever Country?, or even in acquiring some PCP points, ACEC'92 has to be the best value you are likely to find.

For further information contact the CEGV: phone (03) 520 7311, fax (03) 510 9750.



PC: GROWING UP

The future of commercial computing:
Growing demands for more powerful applications challenge PC performance

This is the challenge of the 90s: how to maintain compatibility with the dominant PC hardware/software model while rapidly increasing the PC's power and functionality to meet the exploding needs of commercial computing.

By Steve Thomas

PCs: Performance challenges of the second decade
SINCE their first scattered appearances on desktops a decade ago, personal computers have become the dominant force in commercial computing. Sales of PCs continue to gain market share, with mainframes and minicomputers continuing to lose ground. The commercial computing market represents about 90 per cent of all computing purchases, with only 10 per cent going to engineering and scientific applications. Today, there are over 90 million PCs installed around the world.

Behind this explosive growth is a de facto standard, Microsoft’s DOS operating system (OS) and its successor, Windows. Today, DOS/Windows supports more than 40,000 applications.

Now in its second decade, the PC is entering an era of "collaborative computing". Increasingly, PCs need more power to meet the needs of larger applications and databases. Sales of 32-bit PCs (386- and 486-based) have exploded, exceeding 18 million units in 1991. To exploit enterprise-wide networks, software is becoming much more integrated (utilising many developer and user-oriented layers) to meet the needs of the professional within computing workgroups. The combination of multi-layered, graphical applications operating seamlessly within collaborative networks poses many challenges to today's PC architecture. PCs will need to deliver five to 10 times their current power to meet these computing needs while preserving Microsoft application compatibility.

This is the challenge of the 90s: how to maintain compatibility with the dominant PC hardware/software model while rapidly increasing the PC's power and functionality to meet the exploding needs of commercial computing.
Applications: Increasingly graphical, integrated and collaborative
LOCAL area networks (LANs) continue to grow rapidly in the 1990s, challenging the processing power of PCs. According to Dataquest, one-third of all PCs in the US were networked in 1990, and that figure will rise to two-thirds by 1995.

Fuelling this growth is more powerful network computing software from Novell NetWare, USL Unix, and Microsoft's new multi-tasking, multiuser Windows NT operating system. These new-generation operating systems will include the powerful capabilities typically found only in proprietary vendor operating systems, such as integrated networking, multiprocessing, fault-tolerant support, object linking, and distributed services support.

Data access, peripheral sharing and file transfer make up the bulk of today's integrated PC LAN applications. But emerging "groupware" productivity packages, such as workflow applications and document management software, are beginning to exploit the fuller potential of enterprise networks.

Groupware relies on dedicated (database) servers or on server functions distributed to networked PCs. Either way, PCs and the departmental servers that power them must greatly expand their power to meet the staggering needs for network-wide fast responses and resource sharing.

Both PC applications and graphical user interfaces (GUIs) grow more power hungry as they exploit graphics, audio, video, and growing multimedia capabilities. These technologies require even more power as they become integrated and interactive, and offer refinements such as three-dimensional imaging, high resolution, full color, and full motion.

According to PC Week (21 October 1991), elaborate graphical and audio capabilities will be standard in PCs by 1994. That will mean monitors with 1024-by-768-pixel resolution and a 70 Hz refresh rate at a minimum. Disk storage, magnetic or optical, will increase to 500 M-bytes. And PC Week estimates that CPUs will need more than 60 MHz of power, nearly twice the clock rate of most x86 desktop systems today.

The integration of all these power-hungry applications and their "building block" support software will totally change the commercial PC desktop into a "collaborative computing environment". Historically, as PCs evolve from standalone to network and on to collaborative environments, so do overall power needs.
Building on a dominant PC base
WHILE PC power needs are growing, so is the world's vast PC user base, which will top 150 million by the end of 1993. With this much momentum behind it, the PC market will continue to grow in its user and overall market pervasiveness. New architectures and



environments will have to adapt to it, not the other way around.

PCs are dominant because they are truly "open systems". Independence in operating systems, hardware, peripherals, and the wealth of applications are the heart of the PC model. The tremendous volume of the PC industry keeps prices low and application volume high. Users prefer to be independent of a proprietary hardware supplier, and this accelerating trend is reshaping the computer industry. Desktop operating systems such as those from Microsoft and Unix have become the single most powerful factor in the computing hierarchy.

In terms of dollar volume, open operating-system-based hardware now accounts for more than half of the computing dollars and represents 85 per cent of all units shipped. The power base in the industry has shifted from the hardware vendors to the software vendors. The most powerful software vendor is Microsoft, and its independent operating system is at the centre of the industry's mass migration to hardware independence.
The "Microsoft PC"
MICROSOFT'S Windows 3.0 has garnered more than nine million users, sold more than 70,000 developer kits and generated over 5000 applications in less than two years — the fastest-selling operating system ever. Improved capabilities such as 32-bit processing, advanced graphics, networking and application integration will further extend Windows' market share in the near term.

Most observers agree that if OS/2 from IBM does not garner a larger share of users and software developers soon, it will be permanently eclipsed by Windows. According to IDC, by the end of 1992 the OS/2 installed base will account for only 500,000 units and Windows will represent over 20 million units. The upgrade of 70 million DOS users to Windows 3.0 represents the largest mass migration in computing, and this movement is reshaping the industry. The new Windows NT microkernel brings the powerful capabilities typically found in UNIX and proprietary operating systems to the PC market. Microsoft's delivery of Windows NT on advanced 32-bit commercial processors (Intel's 80386/80486 and MIPS RISC R4000) signals that a new generation of open computing, beyond the desktop, is under way.

New generation PCs: Not a workstation
TODAY the challenge is to expand rather than replace the PC model to provide the power, price/performance and scalability that users and developers demand. For example, any new processor architecture that wants to address the commercial desktop must be PC compatible. PC compatible means it must support the PC operating systems and Windows applications, and protect the user's investments in applications data, peripherals, networks and training.

Beyond compatibility, a new architecture must be able to deliver the higher performance and functionality needed to fill the "PC performance gap". Without this greater performance, software developers will be hampered by inadequate platforms to run their new generation applications.

Such commercial desktop-compatible and scalable power is not likely to come from workstation vendors. These vendors have focused their efforts on technical (floating-point) performance, rather than balanced microprocessor designs to address commercial (numerical) performance. Above all, vendors such as Sun, HP and IBM each dominate their architecture channel and only offer their proprietary versions of Unix. Lack of a hardware-independent, truly open operating system has limited overall growth of these workstation vendors and the overall workstation market. Workstation vendors shipped 520,000 units in 1991, versus 28 million units for PC vendors.

Some observers believe that sales of Unix workstations have peaked at half a million units in 1991.

The MIPS Magnum 4000 advanced RISC personal computer and the MIPS Millennium 4000 ARC server






According to Dataquest, in 1991 workstation unit sales actually declined eight per cent from the third to the fourth quarter. As Dataquest noted in its report (November 1991), the average price of workstations remains high at $18,000. These high prices and the lack of PC applications compatibility have kept workstations from overtaking the desktop. The lack of volume from each of the incompatible workstation vendors has kept software developers from writing applications for each vendor's small Unix workstation market.

Rather than adapting the PC model to accommodate the Unix workstation paradigm, the opposite is occurring. Independent providers of Unix operating systems (SCO and USL/Novell) are gravitating to high-volume commercial processor architectures.

This transition of the largest Unix application base to the commercial architectures may spell the end of general workstation market growth and relegate them to their "engineering workstation" niche market.
More power to the desktop

In the 1990s, commercial users are driving an exponential rise in the processing power needs of PCs, just as they have done for workstations since the late 1980s. In the workstation arena the rush for power has meant a major shift in technology, from complex instruction set computing (CISC) to reduced instruction set computing (RISC). In a short three years the workstation market moved from 80 per cent CISC-based systems to 75 per cent RISC market share by 1991.

From page 9
The provision of an API reduces the complexity, and hence the design and programming effort, otherwise required to provide an interface between the business application code and the OSI communication service code.

It frees system designers and programmers from providing software to process OSI communication services. This is a major benefit, as the business application logic and the OSI communication service logic are quite distinct.

The major users of APIs are software product development companies that have EDI, mail- or file-handling packages and wish to build applications for gateways to OSI communication services. Software developers can use API software products to reduce overall product development costs.
Where to next?

The OSI protocol suite is constantly evolving, particularly at the application layer. While ODA, RDA, MMS and so forth are being enhanced, new applications such as Transaction Processing (TP) and Security and Authentication are still being developed.

Because of the way OSI has been designed, as a series of layers, it has coped with the enormous changes in the IT industry in the past ten years. And it shows every indication of continuing that way.

Not only does RISC offer higher performance because of its inherent design simplicity over CISC architectures, but it provides a much steeper technology curve. This trend has accelerated since the mid-1980s.

Higher performance RISC architectures and better price/performance solutions are accelerating the shift from proprietary mainframes and minicomputers, and are making inroads into virtually every sector of the computing industry.

For the PC model to expand to its full po­tential, the performance gap must be filled. Once a compatible, scalable environment ex­ists, application developers will deliver an ever increasing level of innovation to meet end users growing needs. The continuum of application innovation requiring more and more power has never been satisfied and likely never will. The promise of RISC provides the technology bridge over the PC performance gap, allowing large commercial environments to implement truly open, large scale solutions. Author: Steve Thomas is commercial market manager, MIPS Computer Systems Inc.

i From page 6ducing external information feeds, which can then be enriched with comments from users of the network.„ Fax remains more popular than E-mail be­cause it is easier to use, so attention shpuld be given to the’'E-mail user interface.

Using simple data analysis tools, such as NIAM (Nijssen Information Analysis Meth- od), users should work with professionals to develop simple relational tables of the data they work with and become conversant with basic SQL of a simple forms product. Those with no computer background quickly accept the table as an electronic sheet of paper and soon automate simple tasks that used to be done in a notebook kept in the top drawer. New tables get created, although the MIS manager must keep a watching brief on the data structures.

The move to opeij, systems offers consider­able economic benefits, but as there is no such thing as a free lunch, the potential risks must also be considered.

I remember the computer revolution of the 1960s as a period of great excitement and learning. We are fortunate to have the oppor­tunity to participate in a second revolution.

Author: Tony Benson is general manager, NCR Australia Technology Centre.




Open systems transaction processing: a reality
Products and performance make open systems a natural solution to this key commercial application area.

By David Moles

THERE is a case for developing On-Line Transaction Processing Systems (OLTP) on Open Systems; the components for the necessary infrastructure are already available in the marketplace for these complex and critical systems. In this article I will use Unix-based references for a number of reasons:
■ for many people Unix is synonymous with Open Systems;
■ many of the de facto and de jure standards are being most rapidly delivered on Unix systems;
■ most (if not all) vendors have incorporated Unix in their technology development strategies.
Furthermore, and not least, I am most familiar with the Unix environment, having "downsized" from a mainframe environment.
Definitions

Firstly we must establish a framework. What is OLTP?

"Computer-involved operations that change or display the state of the business in real time."

While this is a very general definition, I believe it succinctly describes the problem. No mention is made of the technical complexity, the size of the system or the implementation technology. The essence is the timeliness of the operation.

Real time is a piece of computer jargon — I am not referring to process control applications, although OLTP application response times are critical to meet business service levels. Real time in this sense refers to numbers of seconds rather than milliseconds.

Timeliness, service levels and accuracy are critical for successful OLTP systems, since the business is far more exposed to the public.
Why on-line processing?

As organisations address the business pressures of the 1990s, the search for a competitive edge is increasingly important. Rather than viewing IT as a byproduct of the business, businesses require Information Technology to assist the competitive edge by including technology within product offerings and customer service.

The business pressures manifest themselves in many ways:
■ Improving customer service — as market competition drives down margins, service becomes an important differentiator in a crowded market;
■ Shortened product and service life cycles — tailoring services and products to market demands, combined with enhancement of existing products;
■ Increased competition;
■ Decision support — complete and timely decision support for managers;
■ Price/performance — the IT challenge is delivery of these services within budgetary constraints;
■ Control of business resources — control and accountability for the application of the IT resources.
Organisations are placing increased demands on their IT systems. These IT systems must be easily extended and modified to meet these business challenges but at the same time work within cost constraints.
IT implications

These business pressures have dramatic implications for the OLTP systems of the 1990s.

OLTP systems must now operate as part of a unified computing architecture. The computing architecture must incorporate the various styles of processing: OLTP, Batch, Interactive and Decision Support. All these styles play an important role in the complex processing systems required for the 1990s.

Customers are demanding increased service levels, requiring systems to provide a more complex "point of service" facility. The OLTP system increasingly must be capable of processing all the customer needs — not just the most frequent business transactions.

The quick in/out transactions associated with banking (TPC benchmarks) are no longer sufficient to cope with the business environment. "Transaction sagas" handling the complete business transaction are needed. Transactions become more complex as the "scope" of the customer interface is expanded. OLTP systems must handle more complex transaction types and processing. Integration of multi-media processing will become increasingly necessary. Already in the USA, systems are combining imaging of documents for capture and storage with transaction processing for the traditional data processing. To successfully integrate these requirements an open system architecture is required. The open systems environment allows the integration of the key elements to address these complex transaction processing systems.



Figure: Worldwide OLTP Market Size, 1989-1994 (billions of dollars). Source: Dataquest/Aberdeen.

Industry trends

OLTP comprises the largest segment in the commercial computing industry today, with a growth rate double that of the overall computer industry. Market projections indicate this trend will continue and increase in the coming years. Dataquest/Aberdeen predict worldwide hardware and software revenues for transaction processing will increase to $65 billion in 1994. This has grown from $30 billion in 1989.

The applications of OLTP technology will be felt in a number of industries. Aberdeen has predicted the growth for various market segments. As you can see in figure one, all bar two indicators are above 10 per cent. Perhaps these predictions will be modified in the light of economic circumstances, but considering the relative rather than absolute values indicates the impact of OLTP technology. Two major industry groupings, the Open Software Foundation and Unix International, have both identified transaction processing as a critical element of their technology directions. Identified with this OLTP requirement is support for distributed computing with Distributed Transaction Processing (DTP). Interoperability with heterogeneous computers is provided within the Unix (Open Systems) OLTP environment, with specific transactional gateways to mainframe systems considered fundamental to the environment.
Client/server computing

Client/server computing, considered fundamental to Open System Transaction Processing, divides an application into two components, the two parts co-operating to implement some business function:

Clients — interface to the external world, collect data and submit the business action to the server. Servers — process the business actions received from clients and respond to the client. The benefits of client/server computing include:
■ integration of PCs and workstations to provide a more intuitive and effective interface;
■ optimisation of computing resources;
■ reduced network traffic;
■ scalability of application through addition of more servers for increased capacity.
There are three generally recognised points at which to divide the application into the client and server components (with plenty of variation in between).

Presentation Client/Server locates the simple presentation on remote terminals, eg 3270 block-mode terminals.

Client/Database Server separates the database from the client, using SQL to manage the interface, with the majority of the application executing on the input device.

Both these techniques intermingle presentation services, networking, processing of the business transaction and data manipulation within the single application.
Client Transaction Server divides the traditional application into client and server components connected by a transaction manager (a minimal client under this model is sketched after the list below):
■ the client is responsible for the user interface, using a presentation tool;
■ the transaction manager is responsible for routing the transaction (a unit of work) and managing the network to the server located somewhere in the network;
■ the server is responsible for processing actions received from clients via the transaction manager.
Client Transaction Server extends the benefits of the client/server model to include:
• sharing the server resources over a large number of clients, reducing the cost per user;
• removal of the client processing, with the associated heavy communications and interrupt processing;
• network independent applications, since the network is managed by the transaction manager;
• device independent applications, allowing a variety of transaction sources.
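As a concrete illustration, the listing below sketches a minimal client under this model, written against the ATMI interface discussed under X/Open later in this article. It is a hedged sketch assuming a Tuxedo-style environment; the BALANCE service name and the account string are invented, and error handling is reduced to a message.

#include <stdio.h>
#include <string.h>
#include <atmi.h>   /* ATMI client interface, Tuxedo-style */

int main(void)
{
    char *req, *rep;
    long replen;

    if (tpinit(NULL) == -1) {           /* join the application */
        fprintf(stderr, "tpinit failed, tperrno=%d\n", tperrno);
        return 1;
    }

    /* Typed buffers come from the transaction manager, not malloc(),
       so it can marshal them across whatever network is in use. */
    req = tpalloc("STRING", NULL, 32);
    rep = tpalloc("STRING", NULL, 32);
    strcpy(req, "ACCT 1234-5678");

    /* One request/response: the transaction manager routes the call to
       whichever server advertises the BALANCE service. */
    if (tpcall("BALANCE", req, 0, &rep, &replen, 0) == -1)
        fprintf(stderr, "BALANCE failed, tperrno=%d\n", tperrno);
    else
        printf("reply: %s\n", rep);

    tpfree(req);
    tpfree(rep);
    tpterm();                           /* leave the application */
    return 0;
}

Note what is absent: no network addresses and no device handling. The client names a service and the transaction manager does the rest, which is precisely the network and device independence claimed above.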

The Client Transaction Server architecture has demonstrated a factor of over 30 times improvement in processing capacity for the same hardware configuration. This dramatic improvement translates to reduced capital expenditure for hardware and reduced maintenance charges for both the network and hardware systems.



Standards

In the rapidly changing technology marketplace, selection of products based on standards is vital. Only by using these guidelines will the customer be assured of the continuing operational viability of systems. The politics and relationships of standards creation is a large enough field to warrant a separate presentation. There are a large number of bodies actively concerned with the creation of standards (both de jure and de facto) for OLTP. This count does not include the various activities associated with the OSI networking model or the work of the POSIX working groups.
International Standards Organisation

ISO is defining a protocol for communication between transaction monitors. This is currently available as Draft International Standard 10026 and is known as ISO TP. This standard will be incorporated by X/Open as part of the DTP model.

Open Software Foundation

OSF has selected technology through its Request for Technology process, choosing the Transarc Corporation model, which is based on a remote procedure call paradigm. The Encina product is planned to ship later this quarter and is currently concluding beta testing at selected customer sites in the US.

Unix International

Unix International (UI) has adopted Tuxedo from Unix System Laboratories as the transaction processing system. Tuxedo has been incorporated with the UI Atlas model for distributed computing environments. The Tuxedo model of client transaction server has been adopted by X/Open within the Distributed Transaction Model.

X/Open

X/Open has been very active in defining a Distributed Transaction Processing model. The model identifies three major elements (application, transaction manager and resource manager) and describes the separate interfaces (APIs) between them (see Figure 2):
■ XA defines the interface between the database (resource manager) and the transaction manager;
■ ATMI defines the interface between the application and the transaction manager (a server-side sketch follows this list);
■ ISO TP defines the interface between each system's transaction manager;
■ SQL is the interface between the application and the database (resource manager).
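To show the ATMI side from the server, here is a minimal Tuxedo-style service routine; the DEBIT service matches the hypothetical client above, and in a real system the database work inside it would run through an XA-compliant resource manager.

    /* Minimal ATMI server sketch (Tuxedo-style). The DEBIT service
     * and its buffer contents are hypothetical. */
    #include <atmi.h>
    #include <userlog.h>

    /* Tuxedo dispatches each request to the service routine with a
     * TPSVCINFO describing the incoming typed buffer. */
    void DEBIT(TPSVCINFO *rqst)
    {
        userlog("DEBIT called with: %s", rqst->data);

        /* A real service would parse the buffer and update the
         * database here, under the control of the transaction
         * manager via the XA interface. */

        /* Hand the (possibly modified) buffer back to the client. */
        tpreturn(TPSUCCESS, 0, rqst->data, 0L, 0);
    }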

Technology convergence

Commercial availability of the core technologies of OLTP has been achieved. Each of the required components is available from a wide variety of suppliers (including hardware vendors). These products mesh to provide a commercially viable OLTP environment on Open Systems/Unix.

Transaction monitors

A number of products are available in the marketplace now. Tuxedo, Encina and Topend provide the central transaction management component for OLTP previously cited as missing by mainframe proponents.

[Figure 2 — X/Open Distributed Transaction Model: on each system, client and server applications talk to a transaction manager through ATMI; the transaction manager talks to resource managers (eg, SQL databases) through XA; the two systems' transaction managers are linked by the DTP (ISO TP) protocol.]


For new application development, Open Systems TP monitors provide many of the features of CICS while adding features geared to distributed environments. Unix OLTP systems provide additional features such as:
■ global transaction control with two-phase commit (a minimal sketch follows below);
■ support for leading databases via the XA interface;
■ distributed systems support;
■ integration of the workplace via PC and Unix workstation integration.

All these products are (or will be) available for a variety of platforms; as an example, Tuxedo is available on over 20 separate Unix platforms today and can be ported to new platforms relatively simply (one to two months' work).
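A minimal sketch of global transaction control, again in Tuxedo-style ATMI: the DEBIT and CREDIT services are invented, and on tpcommit() the transaction manager drives two-phase commit across every XA resource manager the calls touched.

    /* Global transaction sketch (Tuxedo-style ATMI). DEBIT and
     * CREDIT are hypothetical services; tpcommit() runs two-phase
     * commit across the XA resource managers involved. */
    #include <atmi.h>

    int transfer(char *debit_buf, char *credit_buf, long *len)
    {
        if (tpbegin(30, 0) == -1)         /* start, 30-second timeout */
            return -1;

        if (tpcall("DEBIT",  debit_buf,  0, &debit_buf,  len, 0) == -1 ||
            tpcall("CREDIT", credit_buf, 0, &credit_buf, len, 0) == -1) {
            tpabort(0);                   /* roll back both updates */
            return -1;
        }
        return tpcommit(0);               /* prepare, then commit */
    }

Either both service calls commit or neither does, even when the two services update databases on different machines.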

Databases

The major Unix relational database suppliers have announced their intention to ship XA-compliant products. By supporting global transactions, these products can be integrated into distributed networks, allowing existing data islands to be connected and existing applications to be preserved.

Computer Aided Software Engineering

CASE products are available to support client/server computing. An Australian company is enhancing its CASE tool to incorporate the Tuxedo client/server transaction processing model. As the market grows, further enhancements to existing tools, and new products, will appear to support this style of processing.

Networking

The adoption of the OSI Reference Model provides for the interchange of data independent of the data representation of particular hardware. Adoption of GOSIP will improve as more vendors announce the availability of compliant products. The Simple Network Management Protocol (SNMP) is already well established as a de facto standard for TCP/IP networks, and OSI CMIP/CMISE products are appearing in the marketplace.

Workstation integration

Many organisations have hundreds of PCs or workstations. These typically provide connectivity to host systems via file transfer, network file systems or terminal emulation. These features will continue to be required for various applications, but the incorporation of the workstation into the OLTP environment allows for local processing together with the benefits of graphical user interfaces for access to workgroup and corporate systems.

Unix operating system

The Unix operating system has undergone, and continues to undergo, revolutionary change. The hacker image of Unix is being replaced by commercial acceptability, and features are being incorporated within the Unix system to provide the necessary infrastructure for its commercialisation. Security has been improved to comply with US Department of Defense Orange Book standards: Unix SVR4/MLS provides B1-level security, with extension available through the SVR4ES version to a B2-level rating. Workload management has been improved through better scheduler algorithms, providing for fixed-priority processes, many hundreds of concurrent users and support for large disk storage arrays. Disk management systems similar to the Logical Volume Manager specified within OSF/1 provide enhanced management facilities over traditional Unix. These facilities allow mainframe-class management strategies for disk storage subsystems.
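What fixed-priority scheduling looks like to an application, as a minimal sketch: the calls shown are the POSIX real-time interface (SVR4 itself exposes the same control through priocntl(2)), and the priority value is arbitrary.

    /* Fixed-priority scheduling sketch via the POSIX interface;
     * SVR4 offers equivalent control through priocntl(2). Needs
     * appropriate privilege to succeed. */
    #include <stdio.h>
    #include <sched.h>

    int make_fixed_priority(void)
    {
        struct sched_param sp;

        sp.sched_priority = sched_get_priority_min(SCHED_FIFO) + 10;

        /* Pin the calling process at a fixed priority so the
         * time-sharing scheduler cannot degrade it. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }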

Hardware

An important component of the total OLTP package is the underlying hardware. High-end Unix systems are delivering mainframe-class solutions. Figure 3 compares the performance of various classes of hardware. The combination of Reduced Instruction Set Computing (RISC) and Symmetric Multiprocessing (SMP) is delivering hardware configurations capable of matching proprietary mainframe solutions. Many high-end Unix hardware vendors are incorporating high-availability options without the associated price penalty. These systems provide automatic reconfiguration, giving service levels equivalent to or better than mainframes.

[Figure 3 — Open systems performance: high-end Open Systems transaction rates compared with mainframes, from single-processor systems (100+) up to maximal multiprocessor configurations (3000+).]

What can I buy today?

Before I start, this product survey is published as a representative list only. The rate of change in this industry segment is so rapid that any list is out of date on publication. I will concentrate on the transaction managers and value-added products, since database and Unix operating system availability is well established.

Transaction Managers

Tuxedo from Unix System Laboratories

Tuxedo is available from a variety of sources, including Independent Software Vendors (ISVs), ourselves, and many hardware vendors. When marketed by various vendors Tuxedo adopts a variety of names, such as "Open System Transaction Management" from ICL. Each vendor enhances the product, without changing the standard application interface, to take advantage of its particular environment; for example, ICL has enhanced the standard mainframe connection to provide ICL mainframe (VME) and IBM connectivity. USL is a source code software company and as such sells source licences for the product; this is advantageous in some circumstances for large companies, and allows delivery of Tuxedo through ISVs and its incorporation in products. Tuxedo is compliant with the X/Open Distributed Transaction Model and is available now.

Topend from NCR

Topend is available from NCR and offers similar features to Tuxedo. NCR is making source licences available for Topend, in a similar fashion to Tuxedo from USL. I know of no implementations of Topend other than on NCR platforms. Topend is compliant with the X/Open Distributed Transaction Model and is available now.

Encina from Transarc

Encina is shipping to OSF members, and pre-release documentation is available from the Transarc Corporation. When I spoke to Transarc last September, the product was undergoing beta testing at selected client sites; these were large multi-national companies who had purchased the software for evaluation. Encina can also be purchased as a source code product from the Transarc Corporation.

Unikix from Unicorn Systems

Addressing the mainframe downsizing market, Unicorn Systems has an IBM CICS product for Unix platforms. This allows migration of Cobol CICS applications from IBM mainframe hardware to more cost-effective Open System platforms. The programming paradigm is preserved with a high degree of compatibility, including screen forms support, VSAM file support and, using an enhanced Cobol environment, the programming language itself. Unikix is currently available for a number of Unix environments but does not comply with X/Open, since this is limited by the CICS implementation.

VIS/TP from VISystems

The VIS/TP transaction monitor also allows for the migration of IBM CICS Cobol source to Open Systems. IBM CICS Cobol programs are executed without modification, by providing a CICS environment on open systems. Support is also provided for C programs. VIS/TP delivers transparent connectivity and interoperability among CICS systems and other VIS/TP systems, using IBM LU 6.2 or TCP/IP for communications.
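To give a flavour of what such products preserve, here is a minimal command-level CICS fragment, written in C since both products support C alongside Cobol; the message text is invented, and the EXEC CICS statements assume the usual CICS translator step before compilation.

    /* Minimal command-level CICS sketch in C. The message text is
     * invented; EXEC CICS statements are expanded by the CICS
     * translator before the C compiler runs. */
    #include <string.h>

    void send_greeting(void)
    {
        char msg[] = "HELLO FROM A MIGRATED CICS PROGRAM";
        long msglen = (long)strlen(msg);

        /* Write the text to the user's terminal, clearing the screen. */
        EXEC CICS SEND TEXT FROM(msg) LENGTH(msglen) ERASE;

        /* Return control to CICS, ending this transaction. */
        EXEC CICS RETURN;
    }

Because the same statements run under mainframe CICS, Unikix or VIS/TP, the migration cost lies largely in the surrounding build and operations environment rather than in the application source.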

Future directions

As the market matures, we can expect to see companies adopt and incorporate OLTP technology in products, or enhance the development or runtime environment.

Independence Technology Inc

Independence Technology Inc of Fremont, California, has developed a complete Open Systems (Unix) transaction processing solution. Currently, Tuxedo provides the transactional engine, in a similar fashion to the RDBMS engine supplied by database vendors. A transactional language derived from C interfaces to the TPM, and application code is controlled by that transactional C language; this model allows a different TPM to be substituted should a viable business alternative to Tuxedo emerge. Incorporated in the ITI model is a complete development environment, including configuration management, test management and a graphical user interface for interfacing to Tuxedo. ITI also provides a complete operational management facility for the OLTP environment: the management tool incorporates network management based on SNMP or CMIP, system management of all the hardware and software components using management agents, and an operator interface to the system, all based on GUI interfaces.

Conclusion

The emergence of de jure and de facto standards is bringing about a change in the market. It is no longer necessary to obtain all the environmental software from the hardware vendor. Arguably it is not desirable, or even possible, since each vendor has limited research and development resources; to spread those resources effectively across all the standards emerging in the industry is, and will remain, extremely difficult.

Transaction processing software, like relational database software, will become freely available in the marketplace. Conformance to standards and technical innovation will determine the success of products, not just the technology alone. The commodity marketplace gives the purchaser the benefit of (almost) shrink-wrapped software a la the PC market. Multiple startup companies will provide technical innovation and expertise to address specific market segments. The down side of the new market is the level of knowledge required to integrate the various products from various vendors. Customers can no longer be assured of integration: subtle interpretations and variations of the standards can dramatically impact delivery of services. The role of the system integrator becomes increasingly important to ensure the solution will operate as advertised.

To summarise:
■ apologies need not be made for Unix;
■ Open Systems Transaction Processing can tackle large, complex TP applications;
■ Open Systems offer excellent price/performance advantages;
■ Open Systems promote non-obsolescence of applications.

There is no doubt: the 1990s is the decade of Open Systems Transaction Processing.

Author: David Moles is a senior consultant for DMP Software in Brisbane.
