IDMS/SQL News              7.2

Vol 7.2       Technical Information for CA-IDMS® Users                       June 1998

 In This Issue

Headlines

Good Bye to IDMS!
No Warning on Bad APARS!
New Base and Maint Tape for Release 14.0

Potpourri

Migration to Release 14.0 - Experiences
3390 Disks and Page Size
VM/CMS Clients and Year 2000 Testing
Deadlocks in OLTP Systems
LU6.2 Shortage!

SQL in Practice : How to do Corner

A Peep into DB2 Universal Database
How Many Indices for a Table?
SQL RCM Issues
DECIMAL Datatype in SQL
Unload/reload of SQL databases

Product News etc

Two Phase Commit for CA-IDMS
Countdown to Zero!
2 Pepsi for every 6 Coke!

Crossfire Section 7.2


Located at http://www.reocities.com/SiliconValley/Vista/4905/idms72.htm


Good Bye to IDMS!

--- Copenhagen Dispatch

We have information that Per Mogensen, Chairman of the Danish IDMS Users Group, is leaving the IDMS community in Denmark and Europe. Per is changing jobs as of August 1998. At his new job he will be responsible for Web server/browser technology, Lotus Notes, firewalls and security in the distributed environment, etc. Per is already a veteran in the IDMS and mainframe area. He has been involved with IDMS in Denmark since 1985 and has played a key role in Release 12.0 implementations on Siemens BS2K and IBM MVS/ESA.
Per has co-operated with IDMS clients throughout Scandinavia. With the shortage of mainframe and IDMS knowhow in the market, such departures will be strongly felt!
IDMS/SQL News wishes Per Mogensen all the best in his new career.

Newsgroup - IDMS-L

IDMS-L, the CA-IDMS mailing list and discussion forum, is based at the University of Georgia. At the moment, besides technical discussion, there is a very interesting discussion about IDMS history going on. For those interested, this is the right time to go back to the nostalgic years of the 70s and 80s!

No Warning on 'Bad' APARS!

- From Helsinki Dispatches

We have read about the database crisis in Finland in the last issue. What caused the corruption was never conclusively determined. There has been some intense speculation that the absence of APAR LO00012 (unpredictable errors may occur if the IDMSCOMP or IDMSDCOM database procedures are called at improper times) might have been the cause. But this is highly unlikely, because the client did not have any schema in which database procedures were called at improper times. Also, the symptoms do not match.

IDMS/SQL News has noted recently that many PTFs had errors yet are not marked as bad PTFs in the CA system, even though corrections have already been published. For example, HYPER APAR GO80610, published on 2 NOV 1995, has errors and can cause serious problems during recovery. The correction appeared as GO87631 in February 1996. But even today, the old PTF is not marked 'bad' in the TCC system. Similarly, LO00174, published on 12 JUL 1996 for SR8 problems and 1142 errors, contains serious errors and can cause database corruption. It was corrected by HYPER APAR LO03809 on 19 SEP 1996, but there is no 'warning' on the old one.

Also, not all HYPER APARs are hit by a keyword search. So it is quite possible for someone to download a bad APAR at any time, even now, without knowing there is a correction for it!

Clients are advised to stay reasonably current on maintenance tapes, even if they don't apply each and every APAR on a daily basis.

New 14.0 Base and Maint Tape - 9711

The 14.0 maintenance tape for MVS is available now. There is also a base tape, 9711, for MVS clients. Clients on 12.0 can go directly to the 9711 14.0 base tape. The last APAR on the tape is LO25803.

A Potpourri of Information: Technical and not so Technical

Experiences with CA-IDMS Release 14.0

- Hans-Peter Kotowski, Dipl. Math.
Manager Database-Administration
CENTRAL Krankenversicherung AG, Köln, Germany

[A greatly abridged version of Session I5930S at IMC at CA-World 1998]

Introduction

CA-IDMS

Other DB/DC-Software

Application

DC-Systems

Production systems

Total Number of Transactions - Jan 1998

CPU          system       transactions
A-CPU        IMS-DC          3.660.000
D-CPU        IDMS-DC           800.000
             IMS-DC            740.000
             CICS            1.760.000
D-CPU total                  3.300.000
Grand total                  6.960.000

Conversion to CA-IDMS Release 14.0

Motivation for 14.0

Migration plan

list of applied optional APARs (I)

list of applied optional APARs (III)

list of applied optional APARs (IV)

List of applied new optional APARs

List of applied required APARs (I)

list of applied required APARs (II)

list of applied required APARs for printer problem

list of applied hyper APARs

list of missed required APARs in 14.0

Planned Work for Application


Check Application for Adaptation

source changes
  1  Culprit
  1  Assembler program
recompiles
  1  PL/I program
  5  Assembler programs

Reasons for Adaptation

changes in control blocks or DSECTs
recompile of edit routines for mapping
change of record layout for task statistics in logfile
remove circumvention from release 12.01 for option
TO (subschema-control) in BIND RUN-UNIT command

Affected Applications


DC-statistics
memory-function
external version of subschema control block
map edit modules
display record description from IDD
check for signon of user

Conversion Timetable: spanned a period of 6 months, from base install of CA-IDMS release 14.0 in June 1997 to conversion of the production systems in December 1997.

Conversion Effort


Release                                12.01     14.0
concept, produced by DBA staff         600 h    180 h
realization of migration             1.560 h    425 h
testing thru end-users                 120 h     80 h
efforts of system-programming staff    300 h    100 h
tuning                                 225 h      0 h
corrections                             40 h    200 h
total                                2.845 h    985 h

Problems Solved

Problems Solved before Migration (I)

Problems Solved before Migration (II)

Problems Solved after Migration

Suggestion for improvements

DARs

Requests

Conclusion

less effort for migration
no difference in response time
no difference in CPU usage
no difference in I/Os
release 14.0 runs stable
migration was a success
less effort because no migration of IDD was necessary
no change in functionality
fewer adaptations of own-written solutions
less effort for tests
less elapsed time

Figures from Production
There are elaborate statistics from Perfmon and DCMT from various production CVs. Information on LE/370 usage with PL/1 programs is also discussed in detail. For all these, please refer to presentation I5930S at IMC at CA-World 1998. --- Editor, IDMS/SQL News

Future Plans

WRONG PAGE SIZES FOR 3390'S

- Docup

Page sizes specified for 3390 devices in the CA-IDMS Database Design Guide are partially incorrect. Any value that is not divisible by 4 should be reduced until it is divisible by 4.
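As an illustration of the rule (the function name is ours; this is simple arithmetic, not IDMS code), a candidate page size is rounded down to the nearest multiple of 4:

```python
def corrected_3390_page_size(page_size):
    # Reduce a 3390 page size that is not divisible by 4
    # down to the nearest multiple of 4, per the docup above.
    return page_size - (page_size % 4)
```

For example, a hypothetical listed value of 4058 would be reduced to 4056; a value already divisible by 4 is left unchanged.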

VM/ESA and Year 2000 Testing

APAR #: LS33092

PROBLEM DESCRIPTION: VM/ESA VERSION OF OPTIONAL APAR LS09001 FOR YEAR 2000 TESTING. YOU MUST BE AT 9605 OR LATER TO APPLY THIS APAR.

Caution:

This APAR changes the internal DATE/TIME fields used by CA-IDMS, including the DATE and TIME fields used in the Journals and Log. It is recommended that you re-initialize your Journals and Log file between cycles of the CA-IDMS system. Testing has shown that the DDLDCLOG will not offload if it fills. As of this date there are no reports of problems with the journals.

This APAR is applied to the operating-system-dependent module, RHDCOCMS. After applying the APAR you will need to relink the startup modules for the systems you wish to test on. However, you can apply the APAR directly to the startup module by replacing the module name RHDCOCMS with the startup module name.

With this APAR applied, each time the CA-IDMS system is started it will start with an internal date of 12/31/99 and an internal time of 23:45:00.

If the operating-system-dependent module is included in the concatenation for a local mode or batch-to-CV job, each time these jobs are run they will start with an internal date of 12/31/99 and an internal time of 23:45.

For further details, consult the latest notes on APAR LS33092.

Deadlocks in OLTP Systems


-- compiled by Per Chr. Hansen from client experiences in Sweden and Norway

Locks have been classified as a necessary evil. When two tasks are waiting for each other's resources and neither can proceed until one of them relinquishes control, we have a deadlock situation. This is different from the normal "wait" situation, where a task waits for resources held by another (long-running) task. An online task can "wait" and time out if a CV job is holding resources for a long period.

Heavy task?

What is a heavy task? The terminology is relative. That is, what is considered "heavy" in an online OLTP environment might be a trivial job for batch! Indeed, we want to focus on this type of task here.

An online transaction is very fast. Usually the response time is sub-second. Even in cases where there is a 1-3 second response time, the actual CPU time involved will be sub-second. Such tasks can still contribute to deadlocks if they are accessing and locking the same or related records. Very often, clients want to run some CV batch jobs during the daytime. Some of these are very small by batch criteria. But even such "small" jobs taking 5-10 seconds of CPU are extremely heavy if they have to run at the same time as online users! The result will be that many online users will be locked out (wait abend) or get deadlocks.

Solutions

How can you avoid this? The solution is very much application dependent. One easy solution is to restrict batch-CV jobs to some narrow windows and lock out online users during this time. This is like a traffic-light solution: some people are allowed through and some are simply stopped. How about restricting the batch-CV jobs to between 12.00 and 12.30 (lunch time for online users)? This may or may not be feasible depending upon end-user requirements. The rest of the batch-CV jobs should be run after 17.00 hrs.

COMMITS

Frequent COMMITs in the batch-CV jobs can certainly reduce deadlocks with online users. But then we have to take care of the abnormal situation where the batch job abends! How do we roll back part of the data, and how do we account for the data which is already committed? How do we process the rest of the transactions in an automatic way?

If the batch-CV jobs are doing some kind of order processing, one should investigate the possibility of using a flag on the order record which marks the order as incomplete. Orders should be flagged as "ready" only after all of them have been put in. Again, the online tasks which use these orders should be modified to honour the "flag". In the event of a batch-CV abend, another task can be started to delete all the incomplete orders, and the batch-CV job re-started to insert all the orders again. This may or may not be possible depending upon how the orders are processed. If the order is updating some warehouse records, then deleting incomplete orders may not help.

Another way is to keep track of the orders which are COMMITTED in the batch-CV job. Then in the case of an abend, the job can be re-run and the committed orders can be skipped.
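The commit-and-skip pattern just described can be sketched as follows. This is an illustrative outline only; the function, the record layout and the `db` stub are invented, and a real batch-CV job would of course use DML or SQL calls:

```python
def load_orders(db, orders, commit_interval=100, restart_after=None):
    # Insert orders, committing every commit_interval rows and
    # remembering the last committed order id. After an abend the
    # job is re-run with restart_after set to that id, and the
    # already-committed orders are skipped.
    skipping = restart_after is not None
    pending = 0
    last_committed = restart_after
    last_inserted = restart_after
    for order in orders:
        if skipping:
            if order["id"] == restart_after:
                skipping = False   # resume right after the last committed order
            continue
        db.insert(order)
        last_inserted = order["id"]
        pending += 1
        if pending == commit_interval:
            db.commit()
            last_committed = last_inserted
            pending = 0
    if pending:
        db.commit()                # final commit for the tail
        last_committed = last_inserted
    return last_committed
```

The returned id would be checkpointed somewhere restartable (a control record, for instance) after each commit, not just at the end.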

We have noted that many network databases have sets linking to many other records. This increases the probability of deadlocks. We wonder whether using the SQL option with unlinked constraints is a solution in these situations.

Wanted : IDMS/LU6.2 Knowhow

In IDMS/SQL News 6.3 we had a discussion of some of the problems of LU6.2 when the conventions used by partner systems differ. In this short writeup, we see a mounting problem on a worldwide basis - a shortage of LU6.2 people!

There have been many requests from the IDMS client base to give more importance to the LU6.2 part of IDMSDC. But the vendor downplayed the product. In the early 90s, CA gave the impression that CA90s had the necessary components to provide the functionality of LU6.2 support in IDMSDC, so that no more development was needed for the current LU6.2 line driver. But alas! Which CA90s component can support program-to-program communication across CICS-IDMSDC, IDMSDC-OS/2, IDMSDC-Tandem, in Assembler or ADS? None! Since a coherent approach was missing from the vendor, LU6.2 in IDMS has been supported by isolated consultants at various clients all these years.

Who wants to go through VTAM traces in this "Mickey Mouse" era of computing? No one! Who wants to read PTERM (IDMS) traces? No one!! Who wants to go through that meticulous Assembler code of SEND/RECEIVE? No one!!! But then we have a problem: there is no one to support a very powerful functionality which has always been a part of IDMSDC.

In spite of all the connectivity noise from various vendors, nothing has replaced the conventional LU6.2 protocol. Even in IBM's DRDA implementation, LU6.2 is used internally! IBM is quite serious about the importance of APPC even in this Internet era, proof of which is seen in publications (May 1998) like the Multiplatform APPC Configuration Guide (GG24-4485-00).

In Scandinavia the following companies have production-level IDMSDC/LU6.2 knowledge (among other things): System Tjenester Services, Oslo, and Vegasoft Oy, Helsinki. There are also many independent consultants who are well versed in LU6.2.


SQL in Practice - How to do Corner

A Peep into DB2 Universal Database

— Helsinki Dispatch!

Change is the essence of life. Since IBM reluctantly introduced DB2 in the early 80s, it has come a long way. Subsequent releases were far behind the existing databases. But the relational twist in the database world and the 'backbone-less attitude' of the competing vendors were favourable to IBM. Even at DB2 V 4.3 level on the mainframe, DB2 was still catching up with many database features of IDMS Release 12.0. But things are changing now with Version 5.0.

There is not much change in the mainframe area; the change is of a much more global nature. DB2 Universal Database Version 5.0 was released simultaneously on Mainframe, UNIX, NT, OS/2 and Windows 95. This is a bold step forward. What is different in Version 5 is the connectivity and compatibility offered across the products.

DB2 Sample Screen - Courtesy IBM®

At source level and at implementation level, the mainframe product stands distinct and even different from the other platforms. This is understandable, since one cannot expect portability between a 370 machine and a PC! But on all the other platforms (UNIX, OS/2, NT and W95) IBM claims that the internal source of DB2 UDB 5.0 is the same!

We have installed the product on our PC and are in the process of doing some testing. Installation is excellent. Documentation is good. Simple things are working. The only negative point we have seen so far is that you need a BIG PC indeed. Even with a Pentium 133 with 24 MB of memory things were slow, and Windows resorted to a swap file of 50 MB when we tried some queries and access plans. It looks like one needs a Pentium 200 with some 80+ MB of RAM!!

Since there is no IDMS/SQL on the PC, we intend to use DB2/Windows 95 for simple testing. More articles on DB2 will follow in future issues.

How Many Indexes for a Table?

— Helsinki Dispatch!

How many indexes should there be for a table, at most? 5, 10, 15...? It all depends upon the size of the table, processing requirements etc. Nevertheless, there should be some guidelines which can be used to arrive at a meaningful design.

In the CODASYL world the same issue arose when some key records participated in too many sets. Later some clients added indexes too, when integrated indexing was introduced with IDMS 10.0. Since linked sets are not there in SQL (or can be avoided), it may appear that we have more freedom here. But since the owner-member relationships are still there in a pure SQL database, more foreign keys are needed, which in turn makes more indexes a requirement.

An old IBM DB2 manual (V 1.2) says that there should not be more than 5 indexes for an OLTP SQL table. Since this was written in the mid 80s and there have been advances in technology (12.0 features, ESA, Sysplex for example), we can be a little more flexible now. Still, we believe that there is nothing wrong in setting an upper limit of 10 indexes for an SQL table, with a strong recommendation to keep it at 5 if the table is going to be updated a lot online.

Recently one of us attended an SQL course in London (not on IDMS). A UNIX database was used as an example, and the instructor's recommendation was to use not more than 2 indexes for a table to be used online!!

RCMs and Access Modules

The Access Module is a very important component of SQL access. It is vital because it can be dynamically re-created at run time, depending upon the options on each access module. If 'AUTO RECREATE' is on, IDMS will re-create the AM at run time, if needed, using the RCMs involved in the AM.

RCMs will be loaded into the reentrant program pool for this reason. If for any reason the AM has to be re-created more than once, this can give rise to certain anomalies, again very much dependent on the various options used. One of those issues was discussed in Serious SQL Issues, Item 6 (IDMS/SQL News 7.1). Here is a related issue.

CREATE ACCESS MODULE uses the RCM in the reentrant (XA) program pool, if available, rather than a possibly newer one in the load area. The user can force the load area version to be used in any case by issuing a DCMT VARY PRO NCI for the RCM.
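As a hedged illustration of the sequence (EMPRCM01 and EMPAM01 are hypothetical names, and we read the NCI above as the new-copy option of DCMT VARY PROGRAM):

```
DCMT VARY PROGRAM EMPRCM01 NEW COPY
CREATE ACCESS MODULE EMPAM01 ...
```

After the VARY, the CREATE ACCESS MODULE should pick up the load-area version of the RCM rather than the stale copy in the program pool.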

Decimal Datatype in SQL

--- an ex-Cullinet technician contributed to this note

Everyone knows that datatype INTEGER is in fact signed binary, equivalent to PIC S9(8) COMP (= fullword in Assembler, FIXED BIN(31,0) in PL/1). What about the DECIMAL datatype? Though not obvious, in the following definition column IQ-VAL will support negative values.

CREATE TABLE IQ 
 (NAME CHAR (20) NOT NULL,
 IQ-VAL DECIMAL (4) NOT NULL );

 SELECT * FROM IQ;

NAME                IQ-VAL
----                ------
HUMAN BEING            100
ORANGUTAN               70
MONKEY                  57
JOKER MADMAN           -10
MIDNIGHT OWL           -20
The reason is that DECIMAL is in fact signed packed decimal. This becomes clear from the following display in OCF, IDMS Release 14.0.
               OCF 14.0 Online 
     DIS TABLE IQ LIKE REC AS SYN;
     ADD RECORD NAME IS IQ
    COMMENT 'Record built from 
          SQL TABLE IDMSSQL.IQ'.
 03  NAME   PIC X(20).
 03  IQ1    PIC S9(4)
          USAGE COMP-3. 
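To see why a COMP-3 field carries a sign, the sketch below (our own illustration in Python, not IDMS code) packs an integer the way signed packed decimal does: one nibble per digit plus a trailing sign nibble, so a DECIMAL(4) value fits the 3 bytes of PIC S9(4) COMP-3:

```python
def pack_comp3(value, digits=4):
    # Signed packed decimal (COMP-3): one nibble per digit,
    # plus a trailing sign nibble (0xC = positive, 0xD = negative,
    # the conventional sign codes).
    sign = 0xD if value < 0 else 0xC
    text = str(abs(value)).rjust(digits, "0")
    if (digits + 1) % 2:          # pad to a whole number of bytes
        text = "0" + text
    nibbles = [int(ch) for ch in text] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))
```

Here pack_comp3(-10) yields X'00010D' and pack_comp3(100) yields X'00100C'; the trailing D/C nibble is what makes the negative values in the table above possible.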

Unload and Reload of an SQL database

First of all, SQL database areas can be UNLOADed and RELOADed using the same utility used for network databases. There are, however, some differences and restrictions.

The Utility Manual says " Using UNLOAD on an SQL-defined database:
You can use the UNLOAD utility statement to unload and reload an SQL-defined database to make these changes:
- Change area page ranges
- Change page size
- Change the maximum number of records per page
The page changes are specified in the new DMCL definition.
Note: The modified DMCL must be available during the unload operation as well as during the reload operation." The syntax for SQL unload/reload becomes

 UNLOAD SEGMENT segment-name
 (AREA  area-name)
 RELOAD INTO same-segment-name 
 DMCL dmcl-name
The named DMCL above is the new DMCL, which contains the same-named SQL segment, optionally with the limited changes described above. This DMCL will be used for the RELOAD operation. The DMCL used for UNLOAD is the one given on the //SYSIDMS card. This minute difference is not clear in the manual.
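A hedged illustration of that difference, using invented names (EMPSEG, OLDDMCL, NEWDMCL): the DMCL= parameter on the //SYSIDMS card drives the UNLOAD step, while the DMCL clause of the statement names the new DMCL used for RELOAD.

```
//SYSIDMS  DD *
DMCL=OLDDMCL
/*
UNLOAD SEGMENT EMPSEG
  RELOAD INTO EMPSEG
  DMCL NEWDMCL
```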

For SQL databases, unload and reload must use the same segment name. So if one has to resize the areas, this can be done only by using a new DMCL. You cannot make changes to table definitions in an SQL-defined database between the unload and reload steps.

For SQL-defined segments, area and table stamps are unloaded during the UNLOAD step and reloaded during RELOAD processing. This becomes a problem if one wants to unload an SQL data area from production to a test IDMS. Some clients do want to do this to use the same data in testing (the actual database size in pages will be smaller in test). Such an operation is common in the case of network databases.

Product Experiences / Reviews

OCA-COMMIT - A Two-Phase Commit for CA-IDMS

-- Jean-Daniel Bégin, Aquisoft Inc.

More and more, CA-IDMS sites are also DB2, IMS, MQSeries and/or VSAM users. Moreover, new systems need to maintain integrity across all those repositories. Until now, this has been a non-issue for pure blue sites, because CICS supports two-phase commit through its SYNCPOINT command: by requesting a SYNCPOINT, either all repositories in use by the task commit or none of them does. CICS acts as coordinator in the two-phase commit process. CA-IDMS cannot participate in the two-phase commit process because it doesn't offer a DML verb called PREPARE-TO-COMMIT.

Aquisoft will soon offer OCA-COMMIT. This product gives CICS-IDMS transactions the ability to write a PREPARE-TO-COMMIT journal record. When the CICS transaction requests a SYNCPOINT, a TRUE (task related user exit) is invoked for each repository and a PREPARE-TO-COMMIT request is sent to all. OCA-COMMIT provides a TRUE that sends the request to all CVs that have an active run unit. Note that you can have multiple CA-IDMS run units against different backends. Once all the PREPARE-TO-COMMITs are successful, the COMMITs or their equivalents are requested.
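The flow described above is the classic two-phase commit. A minimal sketch of the coordinator's logic (our own illustration, not OCA-COMMIT's or CICS's actual implementation):

```python
def syncpoint(resources):
    # Phase 1: send PREPARE-TO-COMMIT to every resource manager.
    # If any one votes no, roll everything back.
    prepared = []
    for rm in resources:
        if rm.prepare():
            prepared.append(rm)
        else:
            for p in prepared:
                p.rollback()
            rm.rollback()
            return False
    # Phase 2: all voted yes, so commit everywhere.
    for rm in resources:
        rm.commit()
    return True
```

Each `rm` stands for one repository (a CV with an active run unit, DB2, MQSeries, ...); the point of phase 1 is that after a successful prepare, a resource manager guarantees it can still commit even across a crash.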

OCA-COMMIT provides all the components needed for synchronization with CICS at startup. In the event of a CA-IDMS crash, upon startup CA-IDMS will synchronize any INDOUBT run unit with CICS. Console messages are issued if CICS is not present. Roll-forward and rollback utilities are fully supported.

In the future, we will be looking at writing a coordinator within CA-IDMS. This would allow us to commit run units across multiple backends and/or MQSeries (using our OCA-MQSeries Interface).

The product availability date is Fall 98.

Jean-Daniel Bégin Aquisoft daniel@aquisoft.com (1)-(888) 518-9111 (1)-(414) 969-8073 Visit the home Page at http://www.aquisoft.com

Just One more!

2 Pepsi for every 6 bottles of Coca Cola!

Commenting on the Department of Justice suggestion that Microsoft should ship Netscape Navigator along with Windows 98, Bill Gates said "It is like asking Coca Cola Corporation to include 2 bottles of Pepsi with every package of 6 bottles of Coca Cola!"



IDMS/SQL News is published on behalf of the IDMS WatchDog Group, Helsinki-Hørsholm-Oslo, for free circulation among IDMS users worldwide. IDMS/SQL News is not a CA publication. CA-IDMS/DB, CA-IDMS/DC and CA-ADS are registered trademarks of Computer Associates International Inc. CICS, IMS-DB/DC and DB2 are registered trademarks of IBM Corporation. Technical examples are only guidelines and modification might be required in certain situations and operating systems. Opinions expressed are those of the authors and do not represent the views of IDMS clients or related vendors. © Permission is hereby granted to use or reproduce the information, only by IDMS customers and consultants, provided the material is distributed free of charge.


Comments to idmssql@reocities.com