Mainframe Cost Awareness
Adrian Heald (Capacity Reporting Services)
Effective management of Information Technology relies, to a great extent, on an understanding of the cost of that technology. While the costs associated with hardware, software, staffing, and the like can often be obtained, the true cost of developing and running applications cannot; those costs can only be derived from measurements taken from the applications themselves.
This paper looks at implementing a SQL Server-based Cost Awareness system that addresses the need to allow users to see the cost of the IT activities they initiate. Providing costing information to the users generating those costs empowers them to change their processes with a view to cost reduction. Application developers can gain a better understanding of the costs of the systems they develop before those systems are implemented into production, and tuning efforts can be judged on the overall cost of application systems; the benefits are many.
This paper follows on from "SQL Server - A capacity management data repository?", utilising the data structures and procedures put in place there to deliver effective cost awareness.
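As an illustration of the kind of chargeback query such a repository enables, the following sketch aggregates per-user resource measurements into a dollar cost. The schema, sample rows, and unit rates are all invented for illustration (and SQLite stands in for SQL Server); the paper's actual data structures may differ.

```python
import sqlite3

# Hypothetical schema: one row per measured unit of work.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE usage (
        user_id   TEXT,
        job_name  TEXT,
        cpu_secs  REAL,
        io_count  INTEGER
    )
""")
conn.executemany(
    "INSERT INTO usage VALUES (?, ?, ?, ?)",
    [
        ("payroll", "PAYJOB1", 120.0, 50000),
        ("payroll", "PAYJOB2", 30.0, 8000),
        ("billing", "BILLRUN", 300.0, 200000),
    ],
)

# Assumed unit rates: $0.05 per CPU second, $0.10 per 1000 I/Os.
CPU_RATE = 0.05
IO_RATE = 0.10 / 1000

rows = conn.execute(
    """
    SELECT user_id,
           SUM(cpu_secs) AS cpu,
           SUM(io_count) AS ios
    FROM usage
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()

for user, cpu, ios in rows:
    cost = cpu * CPU_RATE + ios * IO_RATE
    print(f"{user}: ${cost:.2f}")
```

Presenting a report like this back to the initiating business unit is the essence of the cost-awareness idea: the people who generate the workload can see, in currency rather than CPU seconds, what their activities cost.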
SQL Server - A capacity management data repository?
Adrian Heald (Capacity Reporting Services)
There appears to be a growing trend away from the more traditional mainframe
based databases for storing capacity management data. A number of my
clients are currently investigating (or are in the process of implementing)
the use of SQL Server as a vehicle for storing their data. It seems that
the high cost of traditional mainframe-based solutions, both in terms of
software license fees and mainframe resources (CPU, DASD, etc.), is the main
driving force behind this move. There are, however, some additional benefits
such as: reduced hardware costs of storing and processing the data; simpler
inclusion of non-mainframe based data; greater access to analysis tools; and
simplified report generation, just to name a few. In this paper we examine
one method of reducing ongoing costs by moving the capacity management
reporting functions to SQL Server.
Open System Capacity Planning for a Large ICT Organisation
Cathy Wright (British Telecom)
A case study detailing the capacity planning and performance management process in use for open systems measurement within BT, currently the UK's largest ICT company, with particular reference to UCPS, an internally developed software tool used to capacity manage over 2000 Unix and NT servers.
EMail: What will your IT department do when the subpoena arrives? - Paper
EMail: What will your IT department do when the subpoena arrives? - Presentation
Brad Bruhahn (Sandpiper Data Systems, Inc.)
Email has become a hot topic for IT: legally, for the content we
type into an email without thinking anyone will ever read it besides the
recipient; for government-imposed retention requirements; and, not least, for
the amount of storage it now consumes on a daily basis. Are you at risk for
lawsuits brought on by email content? Would your corporation be able to
respond to a court order to produce years' worth of email search extracts?
This presentation will examine the latest legal issues of email retention and
the challenges and methodology used to recover and search years' worth of email
across almost 100 Exchange servers for a very large US federal court case, and
will discuss the various issues associated with the data recovery and search
process. Most DR plans are intended only to restore the latest version of your
email servers and to get the email process up and running quickly after an
outage. Come see why these plans fall short when responding to a court order
to produce email as evidence.
Coming to Grips with the Outsourcer - Paper
Coming to Grips with the Outsourcer - Presentation
Dave Lancaster (Dept of Veterans' Affairs)
The Department of Veterans' Affairs was the first Commonwealth Department to
outsource infrastructure in 1992. We have completed two 5 year contracts and
are now some months into our third.
Much has been learned from these contracts. This paper presents some of the
problems encountered and lessons learned managing storage in this outsourced
environment. Specifically, it details attempts to verify storage charges and
ensure service levels are met. In addition, the architecture of an
availability and storage monitoring solution is presented.
Operation Cost Reductions in an Outsourced World - Paper
Operation Cost Reductions in an Outsourced World - Presentation
Tony Miotla (CPT Global)
Co-author Michael Augello
IT Outsourcing contracts are structured with penalty and discount clauses and safety nets on fees and payments to protect both the client and service provider.
To be truly effective, Operational Cost Reduction programmes must operate within the terms of the contract schedules.
This paper looks into the structure of outsourcer contracts, external and internal charge-back models, and operating a successful cost reduction programme across all supplied IT services.
PERFORMANCE & MANAGEMENT
On the Performance Considerations of Bursty Workloads
Tony Mungal (EMC Corp)
This presentation examines the unique requirements of bursty workloads. These workloads are characterized by resource requirements which can vary drastically over time. A common method of dealing with most of these workloads has been to employ the 'peak:average' analysis technique. While this method worked favorably for understanding most 'legacy' workloads within a processor context, some extensions are required to adapt it to today's workloads and today's I/O subsystems. Modern-day I/O subsystems are configurable in a variety of ways to accommodate all workloads: either as a single homogeneous subsystem, or as a cocktail of vastly different heterogeneous workloads within the same, or a small number of, subsystems. Understanding the bandwidth requirements of these subsystems to successfully satisfy these types of workloads is no easy feat, since it requires a detailed understanding not only of the workloads themselves, but also of their varied interactions and the multitude of configurability options of the hardware itself. To further complicate this, an assortment of availability techniques need to be examined in conjunction with the recoverability of said workloads. This presentation will utilize data from some commonly known and well understood workloads to illustrate these concepts.
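The peak:average technique mentioned above reduces to a simple ratio; the sample series below is invented for illustration, not drawn from the presentation's data:

```python
# Hypothetical I/O rate samples (operations/sec), one per measurement
# interval, for a workload with periodic bursts.
samples = [500, 520, 480, 4000, 510, 490, 3800, 505]

average = sum(samples) / len(samples)
peak = max(samples)
ratio = peak / average

print(f"average={average:.0f} ops/s, peak={peak} ops/s, "
      f"peak:average={ratio:.2f}")

# A bursty workload shows a high peak:average ratio; sizing the I/O
# subsystem to the average alone would under-provision the bursts,
# which is why the technique compares the two explicitly.
```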
Asset Management, Or Keeping Your Hand On Your Ha'penny
Cathy Wright (British Telecom)
This paper examines the importance of tracking accurately both hardware and software assets through the estate of one of Europe's largest ICT companies. It will look at what asset management is, why we need it, how we at BT implemented a total lifecycle management solution across both our mainframe and distributed system estates, and at what we learned from the process.
PERFORMANCE MEASUREMENT & REPORTING
Web Performance Measurement from the User's Perspective
Tony Allan (Allan Project Management Services)
This presentation describes an implementation of web performance measurement using agents on dedicated systems to measure performance from a user's network perspective. Data is transmitted to a central location for storage, analysis and reporting.
The specific example will examine response times for internal users accessing external web sites.
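The core of such an agent is a timed request issued from the user's network vantage point. The sketch below shows that measurement step only; the local stub server and `probe` function are invented so the example is self-contained, and a real agent would transmit its results to the central store rather than print them.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for an external web site: a local server that responds
# after a short artificial delay.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.05)                      # simulated server latency
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):             # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def probe(url):
    """Issue one timed GET; return (status, elapsed_seconds)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
        status = resp.status
    return status, time.perf_counter() - start

status, elapsed = probe(url)
print(f"status={status} response_time={elapsed * 1000:.1f} ms")
server.shutdown()
```

Because the request traverses the same network path a user would take, the elapsed time captures network and server latency together, which is what "measurement from the user's perspective" means in practice.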
Data From the Windows Server Family - What's There and Ways to Get It Out!
Steven Dunn (Mainframe Performance Products)
Since NT 3.51 we have had the ability to monitor the activity data for a server. As new products have appeared and existing products have matured, the application-specific metrics have increased. It is still not to the level of detail enjoyed on z/OS machines, but it is getting there. This presentation will attempt to provide attendees with the current 'state of play', covering what types of data are available and what levels of monitoring and reporting are currently realistically achievable with this data.
Application Performance Management
Charlie Meek (Compuware Asia Pacific)
The process of identifying inefficiencies in mainframe applications and
tuning them to optimise performance has always been perceived as a
specialist job. This could not be further from the truth. By following some
simple steps and implementing new processes, every mainframe site can
achieve optimal application performance. Charlie Meek, APM Technical
Specialist, will discuss the importance of APM and outline a practical
methodology that will help you improve the reliability and performance of
your mainframe applications. The topics that Charlie will cover include:
- Where does APM fit in?
- Why is it important?
- What are the tactical Benefits?
- The process of Application Performance Management
Successfully Auditing Your Meta-Data Environment
Steven Clar (Mainstar Software Corporation)
Data Centers are constantly growing. This growth is attributed to the increase of critical Corporate Data. With this constant growth of data it is imperative that ALL your records and structures are accurate, complete and correct. To ensure your HSM CDSs, Catalogs and Tape Management data structures are healthy and accurate, you need to perform regular audits on all these Meta-Data structures. These audits must be complete and accurate and finish in an acceptable time frame.
This discussion will detail two different audit approaches. I will discuss what you may and may not know about these audits, and how you can successfully audit your environment without concerns. I will include examples of the DANGERS of HSM Automatic Fixes and how these concerns can be avoided. I'll show you HOW you can become more PROACTIVE with your audits, not only for your Control Data Sets, but also with all your HSM tapes.
This discussion will include some true company dilemmas, and how they were quickly and accurately resolved.
Consolidated mainframe and open-systems tape infrastructure
Trevor Jones (StorageTek Australia)
Mainframe and open systems have distinctly differing requirements of secondary storage. In this presentation a customer case study of a ground-up implementation in Melbourne will be analysed, providing insights into issues that need to be addressed when consolidating infrastructure across disparate platforms.
Issues confronted during the implementation were:
- implementing a SAN
- using the SAN to facilitate backup consolidation
- considering Mainframe virtual tape and DR requirements
- addressing network security issues and centralised backup
- who controls the library?
These issues will be detailed and discussed in the presentation, along with guidelines which will assist attendees when making future architectural decisions.
iSCSI and New Fibre Channel Technologies
Mike Le Voi (Hitachi Data Systems)
The buzz words in the storage arena for the last 12 months have been iSCSI, FCIP, iFCP, et al. However, it is only in the last 6 months that we have seen stable, useful products emerge that solve connectivity issues at a reasonable cost. This paper examines the latest developments in the Internet Fibre Channel world and discusses when to use each technology for maximum advantage.
The IBM z990 - Technical Overview - Part 1
The IBM z990 - Technical Overview - Part 2
Mike Hall (IBM Australia Ltd)
On 14 May IBM announced the new z990 range of large scale z/Architecture mainframe processors. This presentation will cover details of the sophisticated new memory, processor management and packaging technologies. These allow a very scalable and high performance machine to run with multiple partitions, dynamic reconfigurable memory, and multiple channel subsystems - while still maintaining software compatibility across operating systems and applications.
Modern Processor Architecture
Richard Smith (Sun Microsystems Australia)
The goal of processor architecture is to maximize performance, subject
to design constraints, to meet the needs of target markets. A new
pipeline for a modern processor might take 5 years to design and bring
to market, during which time many design decisions have to be made.
These ultimately determine a processor's performance characteristics
when subjected to different workloads. Regrettably, some benchmarks
shed little light on a given processor's behaviour. It's the author's
contention that a knowledge of processor architecture is invaluable for
developers of high performance software trying to extract maximum
performance from a processor, and enables greater insight to be
obtained from benchmark results. The paper explores modern processor
architecture and design features, how they relate to performance, and
identifies some of the key tradeoffs.
CICS Performance Management 2003
Ivan Gelb (Gelb Information Systems Corp.)
Performance management controls of CICS Transaction Server (TS) for OS/390 and z/OS greatly affect both performance and the effective capacity of a complex. This presentation focuses on the controls and z/OS environmental factors which affect a CICS region's overall performance, total required processor capacity, real and virtual storage, and disk input/output service. Workload Manager (WLM) definitions that may help or hinder CICS will be included. The potential risks and benefits associated with the selection of actual and default values will be identified. Samples of reports for health monitoring and problem analysis will be presented.
Who Just Killed My DB2? (Parts 1 & 2)
Ivan Gelb (Gelb Information Systems Corp.)
Ensuring optimum DB2 service levels in z/OS environments is challenging due to dependencies between many subsystems. Performance biases introduced by tuning can affect the complex's service levels and total effective capacity. This presentation describes how to focus DB2 tuning projects while ensuring that interdependent areas of z/OS, CICS, and DB2 are optimized. Attendees will learn how to avoid being caught in unproductive finger-pointing sessions by ensuring that subsystems are tuned with proper bias, and by monitoring performance metrics that indicate the likely degradation causes.
Five Bullet Points:
- Basics of Performance Tuning
- z/OS Point of View
- CICS Point of View
- DB2 Point of View
- Pointing in the Right Directions