CMGA 98 Proceedings


Garry Barker  (IBM Australia)
Virtualisation of the World
Tom Beretvas  (Beretvas Performance Consultants)
Storage Processors Performance Comparison (Parts 1 & 2)
Tom Beretvas  (Beretvas Performance Consultants)
Tuning New Technology DASD
Eve Bye  (IBM Australia)
OS/390 Trends and Directions
Monish Chopra  (StorageTek)
Protecting Data Under Open Systems Databases
Lynn Collier  (StorageTek)
Coming in from the COLD?!
Gavin Erickson  (National Australia Bank)
SAS/AF. IT'S AS EASY AS 1-2-3-4-5-6-7...

Ron Fellows**  (IBM Australia)
Virtual Tape Subsystems - Alive and Kicking in Australia

Denis Fox, Gene Leganza (Programart Corporation)
The Year 2000: Strategies for Managing Application Performance

William Gray  (StorageTek Canada)
What Did You Do in the War, Daddy? Positioning, Planning and Tuning for Virtual Tape (Parts 1 & 2)
Adrian Heald  (Capacity Reporting Services)
Revamping Your Capacity Management System
Mark Heers  (Amdahl)
PSLC in an Hour - Constructing a Simple Parallel Sysplex
John Knight  (Coles Myer Ltd.)
Helping Your Business Help You With Capacity Planning
Chris Langshaw, Mike Tsykin  (FBA Computer Technology Services)
End-To-End Response Time and Beyond: Direct Measurement of Service Levels
Stan Laugher  (PSR Software)
Tips & Techniques for Processing Large SAS Datasets
Craig Linn  (School of Computing and IT, UWS Nepean)
Object and Object-Relational Database Systems: A Growing Shift
Pierre Louys  (State Rail NSW)
The Environment - Who Cares?
Pierre Louys  (State Rail NSW)
Understanding Change Management in a Data Centre
Mitch Mackrory  (StorageTek NZ)
Virtual Tape Management Systems - Solution or Problem?
Jeff McNaughton  (Jeff McNaughton Communication Software Development)
From CICS to the Internet Rapidly
Pam Morris, Jean-Marc Desharnais  (Total Metrics)
New Methods for Measuring Function Points in Outsourcing Contracts
John Mycroft  (Mycroft Systems Ltd.)
Implementing Software Asset Management

Tom Payne  (EXECP)
Extending AFP to the Enterprise
Tom Payne  (EXECP)
How to Avoid the Pitfalls of Enterprise Printing...
David Pickett  (Pickett Computer Services Pty. Ltd.)
How to get the most out of your AS/400
Gary Powell*  (Boole & Babbage)
Taming the MQSeries Beast
Stephen L. (Steve) Samson  (Candle Corporation)
The Folly of Ownership, Revisited -or- Whose CICS Region is That, Anyway?
George Sawyer  (Novadigm)
Managing Year 2000 Compliance in a Wired World
Rick Sewell, Fred Shields  (Amdahl Pacific Services Pty. Ltd.)
THE COPY SUITE: Sorting through the Storage Tool Bag
Mick Smith  (Centrelink)
From Little Things, Big Things Grow: Smartcard Technology Provides Centrelink with Security and Service Delivery Solutions
Phil Smith  (IBM Australia)
Crypto S/390 101
Phil Smith  (IBM Australia)
Why S/390?
Bill Stewart  (Consultant)
Client Server in a Government Agency: Five Years of Failure!
Peter Taylor  (Australian Taxation Office)
Client Server Volume Testing
John Tyrrell  (IBM)
The Only Good I/O Is a Dead I/O!
* First Year Presenter's Prize
** President's Prize Winner




Revamping Your Capacity Management System
Adrian Heald Capacity Reporting Services
Capacity management is mainly about providing the decision makers of an organisation with the information they require to make those decisions. Several processes must occur before the data collected at the monitored platform can be presented as useful information: data collection, data storage, data summarisation, data archival and report generation. These processes are dependent upon one another (data collection before report generation, etc.) and thus many establishments have them rigorously structured. This often causes problems when components fail; the most significant of these potential problems is data loss. This paper looks at an alternative method of structuring your capacity management system to eliminate the dependency between the various processes involved.
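As a minimal sketch of that alternative structure (in Python, with an invented file-based handoff and an invented cpu_busy_pct column), each process below consumes whatever complete inputs it finds and is safe to re-run, so a failed collection run delays data rather than losing it:

    import csv
    import pathlib

    # Hypothetical layout: collectors drop raw CSV files into RAW; the
    # summariser consumes whatever complete files it finds there, so a
    # failed collection run delays data but never blocks reporting.
    RAW = pathlib.Path("raw")
    SUMMARY = pathlib.Path("summary")
    RAW.mkdir(exist_ok=True)
    SUMMARY.mkdir(exist_ok=True)

    def summarise_pending():
        """Summarise every raw file that has no summary yet; safe to re-run."""
        for raw_file in sorted(RAW.glob("*.csv")):
            out = SUMMARY / raw_file.name
            if out.exists():
                continue  # already summarised on an earlier run
            with raw_file.open(newline="") as fh:
                rows = list(csv.DictReader(fh))
            if not rows:
                continue  # incomplete or empty file; pick it up next run
            avg = sum(float(r["cpu_busy_pct"]) for r in rows) / len(rows)
            out.write_text(f"source,avg_cpu_busy_pct\n{raw_file.stem},{avg:.1f}\n")

    if __name__ == "__main__":
        summarise_pending()

Because each stage's output is durable and the stage is idempotent, report generation can run against whatever summaries exist, independent of the collection schedule.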

Helping Your Business Help You With Capacity Planning
Bloomer for John Knight Coles Myer Ltd
Sound familiar? Well, we set about designing a new system, based on Microsoft Access, that would help the businesses and ourselves. This meant cooperating with the businesses and trying to elicit what they really wanted. I found that I could predict this reasonably well by putting on my business hat. What does a business want to see the first time they start the application? They want to see how they are tracking against budget! We provided this facility and much, much more. Businesses that have this application (it has yet to be rolled out to all) are extremely happy with it, and the others are eager to get it as soon as their software versions support ours. Having this tool has prompted real investigation into what causes resource budget blow-outs. Consultants have been called in to "Strobe" troublesome applications. Information is accessible by business applications people in a timely manner. We provided a package solution.

The Only Good I/O Is a Dead I/O!
John Tyrrell IBM
How do you reduce that inveterate batch window? There are many functions and techniques in S/390 platforms that can truly help you greatly reduce that elusive (and growing) batch window, but how do you find those opportunities and quantify them? This paper will describe actual cases where the batch window was analyzed, recommendations made and implemented, and the workload later re-measured to verify the study's analytical results. In one case, a 6.5 TB shop was able to reduce its batch window by over 4.5 hours per night! This represented an overall 50% reduction in its critical batch window.



Protecting Data Under Open Systems Databases
Monish Chopra StorageTek
UNIX is becoming more stable, and many large companies have started deploying mission-critical applications on UNIX platforms. Even the NT environment is attracting some small department-level applications. This trend is expected to continue. Data protection for these databases is being ignored for the time being, as IT operations try to do more with less and focus on more burning issues. This tutorial explains the various options in simple terms and presents a cookbook you can use to start developing a strategy for protecting this data. The presentation will cover:

Object and Object-Relational Database Systems: A Growing Shift
Craig Linn School of Computing and IT, UWS Nepean
An increasing number of vendors, including IBM and Oracle, are extending their existing relational DBMSs in order to provide better support for complex objects. The result is an Object-Relational DBMS that is capable of supporting diverse applications, particularly those requiring complex multimedia datatypes. This paper firstly examines what a pure Object-Oriented DBMS is and then compares this with the Object-Relational hybrids that are appearing. A number of issues are examined, including:



The Environment - Who Cares?
Pierre Louys State Rail IT
Acknowledgement: the material in this presentation is mainly drawn from the Fuji Xerox publication "down-to-earth officecare - a practical guide to environmental action in the office" ($30). Do we care that... every day the world burns an amount of fuel energy that took the earth 10,000 days to produce; municipal waste production in Australia is around 680 kg per person per year, of which more than half could be recycled; and many of our activities are not environmentally sustainable, meaning...
  1. we use resources faster than they are renewed
  2. we cause cumulative degradation to the environment
  3. financial bottom lines come at a price to the environment
Why would you care? You do not have to be a "greenie" to appreciate that our world's resources are being depleted. You'd rather be part of the solution, not the problem... and a common action fosters unity and a good attitude in the workplace. Down-to-earth toolkits: equipment; lighting; air conditioning; paper; EMS and ISO 14000; more information...

Understanding Change Management in a Data Centre
Pierre Louys  State Rail IT
[Prevention is better than cure...] Today, for a reliable organisation, simply fixing what has gone wrong in the production line or the delivery of a service is not enough. Management and customers' requirements demand effective action to prevent errors occurring and recurring. Change Management in IT: many IT providers struggle to address these requirements and spend as much as 25% of their resources finding and fixing problems. They deploy automated solutions in order to: control the integrity of their IT infrastructure; and manage their complex and distributed technologies. Changes affecting the integrity of the IT infrastructure can be broadly divided into three categories:
  1. changes introduced before a "system" becomes operational
  2. changes implemented after a "system" is operational
  3. unplanned changes (due to equipment failure, human error, disaster)
The first category is a matter for technology suppliers and application developers. The third category is a problem management issue. This paper is only concerned with the second category and it explains how it can be addressed in a Data Centre environment.

New Methods for Measuring Function Points in Outsourcing Contracts
Pam Morris, Jean-Marc Desharnais Total Metrics
Function Point Analysis (FPA) is used by organisations worldwide as one of the measures used to establish the baseline size of their software assets. This paper introduces new techniques which enable all the functionality delivered and worked on by the supplier to be included in the productivity performance monitoring of these contracts. Typically only the business applications layer can be measured using FPA. Infrastructure software, e.g. utilities, device drivers and gateway applications, is usually overlooked because FPA is not designed for, nor easily adapted to, measuring internal layers of functions not delivered to the business user. The new Full Function Point technique, developed by the University of Quebec in Montreal, is a refinement of the FPA technique. It is no longer limited to measuring only MIS-type applications but was specifically designed to meet the needs of organisations who build and support infrastructure applications and real-time and embedded software.
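As background, a toy unadjusted count under conventional IFPUG-style FPA might look like the following sketch (the complexity weights are the standard published ones; the component counts are invented). The Full Function Point technique extends this style of counting to the internal and real-time functions that conventional FPA leaves out:

    # Standard IFPUG weights per component type: (simple, average, complex).
    WEIGHTS = {
        "EI":  (3, 4, 6),    # external inputs
        "EO":  (4, 5, 7),    # external outputs
        "EQ":  (3, 4, 6),    # external inquiries
        "ILF": (7, 10, 15),  # internal logical files
        "EIF": (5, 7, 10),   # external interface files
    }

    # Invented counts for a small application: type -> (n_simple, n_avg, n_complex)
    counts = {"EI": (4, 2, 1), "EO": (3, 1, 0), "EQ": (2, 0, 0),
              "ILF": (1, 2, 0), "EIF": (0, 1, 0)}

    ufp = sum(n * w for t, ns in counts.items()
                    for n, w in zip(ns, WEIGHTS[t]))
    print(f"unadjusted function points: {ufp}")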

Implementing Software Asset Management
John Mycroft Mycroft Systems Ltd
The process of putting Software Asset Management in place:

Extending AFP to the Enterprise
Tom Payne EXECP
IBM's Advanced Function Presentation (AFP) architecture has often been viewed as too costly and complicated to implement enterprise-wide. To further exacerbate the issue, just as AFP has become a standard in many MVS environments, PCL has become the dominant force in laser printers distributed throughout the enterprise. How can these two technologies be integrated, and what benefits can accrue to the enterprise that takes on the challenge of extending AFP to the enterprise?

How to Avoid the Pitfalls of Enterprise Printing...
Tom Payne EXECP
While an enterprise printing management strategy can create real business value, today's users consider easy, quick, high-quality printing a right rather than a privilege. They want to print whatever, wherever, and whenever they have a need. "Complexities be damned - just let me print!"
For those responsible for day-to-day operations, distributed printing is rarely discussed in a strategic context. Rather, printing issues are consigned to the dreaded helpdesk call - "Why can't I print?" The focus is tactical, and the demands ever-increasing. To avoid the pitfalls associated with printing in complex environments, strategic thinking is required.
Indeed, as corporate LANs continue to proliferate at a dizzying rate, it is imperative that enterprise printing be elevated to a strategic context within IT. Just as the role of the host has changed, so has the nature of printing host-based output. The enterprise now requires that disparate, far-flung networks and printers be connected and transparently accessible to all users. Furthermore, important corporate printing resources are to be leveraged, extended and shared by the entire enterprise. In today's competitive environment, the need to connect, standardize and optimize is urgent.

The Folly of Ownership, Revisited -or- Whose CICS Region is That, Anyway?
Stephen L. (Steve) Samson Candle Corporation
A nearly universal trend in large IT installations has been to transfer resources "owned" by user departments to installation ownership with more or less automated management. Examples include DASD volumes and JES initiators. However, one counterexample to this trend persists: CICS address spaces ("regions") are still overwhelmingly owned by user departments or the proxies who maintain their inventory of applications. Consequently, the set of transactions resident in each region mixes the important and the unimportant, the sprinter and the slug. Since the execution environment is managed only at the address space level (when enclaves are not defined), such transaction mixing leads to less than optimal results in both performance and performance management. This paper proposes some solutions to achieve better resource distribution, better control of transactions' performance, and increased efficiency.



From CICS to the Internet Rapidly
Jeff McNaughton Jeff McNaughton Communications Software Development
The trouble with getting information from CICS to the Internet is the lack of tools to marry the MVS environment to the Internet world, and also the expense of developing applications. What can hinder you from getting rapid Internet access to your CICS data? Come and hear about some real-life experiences I have had reducing development timeframes from years to months in SE Asia.



From Little Things, Big Things Grow: Smartcard Technology Provides Centrelink with Security and Service Delivery Solutions
Mick Smith Centrelink
Five years ago industry commentators were predicting that by 1998 smart card technology would be an integral part of every Australian's life.
Unfortunately the growth rate of smart cards in Australia has been somewhat slower than predicted, with most card projects being community or organisation based. Centrelink is currently involved in a number of smart card related activities but, like so many organisations, has not been able to take advantage of the enormous potential offered by the technology at the rate it had anticipated.
This paper presents a summary of some of the uses of smart card technology and my views on the issues which I believe need to be addressed if the technology is to be allowed to achieve its full potential.



Taming the MQSeries Beast
Gary Powell Boole & Babbage
MQSeries from IBM has been gaining momentum in the marketplace in the last few years as the preferred messaging middleware product, particularly for large environments with many distributed, heterogeneous platforms. However, enterprises with high volumes of message traffic are realising that the management tools provided by IBM to monitor message traffic are insufficient for effective management. Extensive monitoring, centralised configuration updates and, in particular, automation of responses to problems are essential to ease the MQSeries management burden. This paper focuses on MQSeries management requirements. It discusses options for implementation that would ease the subsequent management burden, clarifies some of the myths surrounding MQSeries, discusses generic requirements for management effectiveness, and highlights the need for automation to reduce staff involvement in complex management issues.



How to get the most out of your AS/400
David Pickett Pickett Computer Services Pty Ltd
Many large organisations have AS/400s for special purposes but don't know how to properly measure and improve performance. This presentation will give an overview of tools available to assist in proper performance management, including no-charge tools that are a standard part of OS/400.

Client Server in a Government Agency: Five Years of Failure!
Bill Stewart Consultant
We wish to know whether a project is successful; but although everyone hopes to learn by a mistake, nobody likes to admit they have made one. One sort of project is to devise a strategy; and strategists avoid admitting they have failed by arguing, wrongly, that they have no responsibility for implementation. The mistakes they make include failing to select and promote management and technical methodologies, and to establish in sufficient detail a logical connection between the methodologies and the perceived and implicit requirements of the organisation. We consider the case of client server information technology in a government agency over the past five years.

Client Server Volume Testing
Peter Taylor ATO
Over the last year the ATO has been developing new applications using the Cool-Gen (ex-COMPOSER) RAD tool. These applications still rely on the MVS environment for CICS and DB2, but have given the business people the look and feel of a windowed application. The challenge has been to maintain consistent support in the area of volume testing for these applications.
This paper will describe the joys and experiences of converting to Mainframe Client/Server testing and what to look out for when you bite that bullet. The audience will gain a better insight into the "How-To's" of testing a Mainframe Client/Server application.

End-To-End Response Time and Beyond: Direct Measurement of Service Levels
Chris Langshaw, Mike Tsykin FBA Computer Technology Services
End-to-End Response Time (ETE RT) is the preferred metric for the measurement of service levels: it shows whether a user was prevented from working at full capacity. In practice, however, ETE RT is an indirect metric, because of the difficulty of identifying a transaction in Open Systems. Other measurements may be better suited to the task; one of many is the time during which a user was capable of working but was prevented from doing so.
This paper reviews existing approaches to measurement of ETE RT, lists available tools, outlines the concept of direct measurement of Service Levels and describes an approach to its implementation.
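As a sketch of the direct-measurement idea (in Python; the submit_query stand-in and its sleep are invented placeholders for a real client/server round trip), the snippet below timestamps each user request at the client and records wall-clock end-to-end time:

    import functools
    import statistics
    import time

    def measure_ete(log):
        """Decorator: record the wall-clock end-to-end time of each request."""
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                t0 = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    log.append(time.perf_counter() - t0)
            return inner
        return wrap

    samples = []

    @measure_ete(samples)
    def submit_query():
        time.sleep(0.05)  # stand-in for a real client/server round trip

    for _ in range(20):
        submit_query()

    print(f"median ETE RT: {statistics.median(samples) * 1000:.0f} ms, "
          f"95th percentile: {sorted(samples)[int(len(samples) * 0.95)] * 1000:.0f} ms")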



OS/390 Trends and Directions
Eve Bye IBM Australia
OS/390 is now 3 years, 2 versions and 5 releases old. What has happened in this time and what is coming in the future?
This presentation covers the new functions introduced in OS/390 2.4 and 2.5, as well as the changes in packaging and delivery. We also take a sneak peek at what is coming in OS/390 2.6 (GA Sept 98).

PSLC in an Hour - Constructing a Simple Parallel Sysplex
Mark Heers Amdahl
IBM have recently imposed more stringent requirements to continue to receive the benefits of IBM's more cost effective Parallel Sysplex Licence Charges (PSLC). In effect, this means that sites with multiple processors will have to implement some form of Parallel Sysplex sharing in the near future to continue receiving the benefit of PSLC. This paper explores the construction of a simple Parallel Sysplex. Whilst the paper does not recommend quick and unplanned implementations, it is designed to give an idea of the requirements of constructing a Parallel Sysplex. As in all construction, it commences with the foundations of a sysplex - the definitions of the signalling paths between the MVS images, the implementation of a common timer reference, the control of resource serialisation (under GRS) and the creation of a sysplex couple dataset. Upon these foundations, a set of guidelines and actions called policies are defined to describe the usage and resourcing of the Parallel Sysplex by various applications. Finally the paper describes the implementation of a sample application, namely the sysplex-wide recording of hardware and software failures (logrec error data).
"A house is a machine for living in" Le Corbusier (1887-1965), "Towards an Architecture"

Crypto S/390 101
Phil Smith IBM Australia
Security traditionally meant userids and passwords, with a select few working with line encryption. These processes still work, and work well, but the delivery of service has fundamentally changed who the customer is and how they connect. So what has changed on S/390, where can it be used, and why?

Why S/390?
Phil Smith IBM Australia
The marketplace is changing, if it has not already changed. Customers now basically want to be able to connect to any application, at any time and from anywhere. This may sound like a fundamentally sound process to adopt, but it puts pressure back on the organisation supplying the service. Choice of operating system, database, security package, hardware, etc. all adds confusion and doubt to the decision process. So why should any one platform stand out more than any other when running "core business" applications?



Virtualisation of the World
Garry Barker IBM Australia
Virtual disk and virtual tape subsystems are being made available by several vendors in the storage marketplace today. They offer benefits to computer installations, in terms of service delivery and cost reduction and also in terms of functionality to the business. Their successful exploitation requires a mindset change; they need to be thought about differently, particularly from the point of view of performance management.
The concept of virtual storage is not new; processor virtual storage has become so fundamental to everything we do that the advantages and methodologies for managing it are now second nature. This paper first reviews the advantages and some of the performance considerations virtual storage brought to the processor world. It then explores the similarities between these factors and the factors that relate to virtual disk and virtual tape devices.

Tuning New Technology DASD
Tom Beretvas Beretvas Performance Consultants
With the advent of storage processors (e.g., IBM RAMAC, EMC Symmetrix, IBM RVA2), the conventional wisdom of DASD tuning has to be revisited to reflect the characteristics of the new world of storage processors. This paper provides an overview of a methodology for identifying and curing DASD performance problems. The old and new DASD worlds are compared, and case studies illustrate the process. The methodology presented includes the metrics to be examined and some of the data reduction techniques. It also indicates potential avenues for solutions. The process begins by looking at RMF data and refining the scope of examination so that DASD, control unit and path performance problems can be identified.
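The first triage step might look something like this sketch, which decomposes each device's response time into its classic components (IOSQ, PEND, DISC and CONN) and flags the dominant one; the records, field names and threshold are invented, and real figures would come from RMF device activity data:

    # Device-interval records assumed already extracted from RMF reports;
    # the field names below are illustrative, not actual RMF record layouts.
    records = [
        {"volser": "PROD01", "iosq": 1.2, "pend": 0.4, "disc": 8.5, "conn": 2.1},
        {"volser": "PROD02", "iosq": 0.1, "pend": 0.3, "disc": 0.9, "conn": 1.8},
    ]

    THRESHOLD_MS = 10.0  # arbitrary cut-off for "needs a closer look"

    for rec in records:
        resp = rec["iosq"] + rec["pend"] + rec["disc"] + rec["conn"]
        if resp > THRESHOLD_MS:
            worst = max(("iosq", "pend", "disc", "conn"), key=lambda k: rec[k])
            print(f"{rec['volser']}: {resp:.1f} ms response, dominated by {worst.upper()}")

On a storage processor, for example, a dominant DISC component points at cache read misses rather than the arm contention it signalled on conventional DASD.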

Storage Processors Performance Comparison (Parts 1 & 2)
Tom Beretvas Beretvas Performance Consultants
New DASD technology is used in proprietary control units, so-called "storage processors". The paper examines these technologies, such as IBM RVA, EMC Symmetrix, etc., with a view to comparing the performance limitations of these processors. Various publicly available performance measurements are used to address the performance limitations, and likely maximum achievable I/O rates are suggested.

Coming in from the COLD?!
Lynn Collier StorageTek
There has been a dramatic change in the way in which automated tape is being deployed in the marketplace. From a traditional backup environment there has been, and continues to be, a significant increase in the use of automated tape to facilitate the introduction of document management solutions, support of image archives and to extend the use of automation to enable new business applications to be delivered. In an area previously associated with optical disk there has been a massive swing in the definition of storage requirements. These new application areas have specific profiles and requirements relating to capacity, performance, data retention and management. The metrics and processes to plan and manage the storage centric environment are key to developing capacity planning and performance management techniques for the future.

Virtual Tape Subsystems - Alive and Kicking in Australia
Ron Fellows IBM Australia
VTSs have been running production workloads in Australian sites since April 1997. This presentation will cover the practical experiences of several of these installations, including implementation planning, data migration strategies, performance management, and disaster recovery issues. Come and hear the "REAL" story about this exciting new tape technology and how, like virtual storage and virtual disk before it, it is changing the whole world of S/390 tape processing.

What Did You Do in the War, Daddy? Positioning, Planning, and Tuning for Virtual Tape (Parts 1 & 2)
William Gray StorageTek Canada
With reference to the "Three Waves" model of tape processing [GRAY1], we explore tape issues up to the implementation of virtual tape systems, which will introduce new complexities for us. So that we do not end up fighting the next war the way we fought the last, we create a methodology that can be used for positioning, planning, and tuning all aspects of tape processing. Using a real case study, we progress a large tape shop from real to virtual drives. Batch problems are gone, but replaced by new challenges for us analysts: cache sizing, backend bandwidth, frontend bandwidth, LRU issues and other items related to running with other tape processes.

Virtual Tape Management Systems - Solution or Problem?
Mitch Mackrory StorageTek NZ
Virtual Tape Management Systems are a hot storage topic. They have, on occasion, been presented as the answer to all sorts of storage issues. Certainly they are the answer to many age-old problems, but they need to be sized and implemented appropriately, and with technical understanding. If they do not do the job properly, senior management may have bought a white elephant, and that does nobody any good. Users must ensure that they understand the implications of this new technology before committing to incorrectly sized or engineered equipment. This paper addresses some of the solutions offered in the past, such as tape stacking, and investigates some potential new costs and considerations.

THE COPY SUITE: Sorting through the Storage Tool Bag
Rick Sewell, Fred Shields Amdahl Pacific Services Pty Ltd
Over the last few years there has been a major improvement in the availability of tools that allow increased data availability. All of these tools provide the ability to improve data availability through a number of different techniques. However, there are a number of issues, such as hardware and software dependencies, so understanding which approach will really solve individual problems is not all that easy. Choosing the appropriate tool also hasn't been made any easier by the use of a multitude of acronyms such as PPRC, XRC, SRDF, HODM, HRC and TDMF. This paper will describe the various tools available for increased data availability and examine both the positive and negative aspects of each. The audience will gain a better insight into what tools are available and which will be appropriate to their individual needs.



SAS/AF. IT'S AS EASY AS 1-2-3-4-5-6-7...
Gavin Erickson National Australia Bank
Capacity planners and performance analysts in MVS shops are often asked to create CPU usage profile graphs: one for System X for yesterday's prime shift (08:00 through 16:00), another for System Y for the afternoon shift (16:00 through 24:00) and finally another for System Z for the whole day (00:00 through 24:00). It can be a real PITB to create three separate graphs for three separate systems for three different time intervals. Having first-hand experience of this type of situation, I decided to do something about it, permanently, and in such a way that others could produce these graphs without my assistance. The end result was the creation of a SAS/AF application that utilised my existing SAS programs.
This presentation shows step by step how easy it is to build a simple SAS/AF application using an existing SAS macro program.
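By way of analogy (in Python rather than SAS/AF, with random numbers standing in for real measurement data), the essence of such an application is a single parameterised entry point, so the three requests above become three calls rather than three hand-edited programs:

    import random

    def cpu_profile(system, start_hour, end_hour):
        """Print an hourly CPU-busy profile for one system and shift."""
        print(f"CPU busy % - {system} ({start_hour:02d}:00-{end_hour:02d}:00)")
        for hour in range(start_hour, end_hour):
            busy = random.uniform(40, 95)  # stand-in for real SMF/RMF data
            print(f"{hour:02d}:00 |{'#' * int(busy // 2):<48}| {busy:4.1f}")

    cpu_profile("SYSX", 8, 16)   # prime shift
    cpu_profile("SYSY", 16, 24)  # afternoon shift
    cpu_profile("SYSZ", 0, 24)   # whole day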

Tips & Techniques for Processing Large SAS Datasets
Stan Laugher PSR Software
Today's management information, executive information and data warehousing applications thrive on large volumes of data. Typically, these systems have millions of observations (rows) to be processed and summarised. This paper will discuss tips and techniques that can be used by SAS practitioners to efficiently and effectively process large volumes of data. The author will draw on practical examples from MVS, NT and PC SAS environments.
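One generic technique of the kind discussed, sketched here in Python rather than SAS: summarise in a single sequential pass with constant memory rather than loading every observation at once. The inline rows and column names are invented stand-ins for a multimillion-row dataset:

    import csv
    import io
    from collections import defaultdict

    # With a real file you would pass open("huge.csv", newline="") instead.
    data = io.StringIO(
        "region,amount\nNSW,120.50\nVIC,80.00\nNSW,99.95\nQLD,15.25\nVIC,42.10\n"
    )

    totals = defaultdict(lambda: [0, 0.0])  # region -> [count, sum]

    for row in csv.DictReader(data):  # streams one row at a time
        acc = totals[row["region"]]
        acc[0] += 1
        acc[1] += float(row["amount"])

    for region, (n, total) in sorted(totals.items()):
        print(f"{region}: n={n}, mean={total / n:.2f}")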



The Year 2000: Strategies for Managing Application Performance
Denis Fox, Gene Leganza Programart Corporation
The risks associated with the failure to manage application performance during Y2K reengineering efforts include unacceptable response times for business-critical applications and the inability to redeploy applications due to performance problems. This paper examines ways to ensure critical applications are redeployed on schedule and run efficiently in Year 2000. It discusses strategies designed to assist IS organizations in managing application performance before, during and after Y2K reengineering efforts.

Managing Year 2000 Compliance in a Wired World
George Sawyer Novadigm Inc.
This paper discusses techniques that assist enterprise customers and organisations with distributed software to automatically identify, update and manage Year 2000 compliant and non-compliant software applications. These techniques have achieved success in reducing administration overhead by up to 80% and have maintained operational reliability of 99+%, resulting in significant reductions in total cost of ownership and increases in service levels.