IDMS/SQL News 9.1
Vol 9.1   Technical Information for Mainframe and IDMS Users    March 2000

In This Issue


Playing with Tape Numbers

IDMS/PC Remembered

IUA Applauds James Winn

SQL Corner

IDMS/SQL - Fatal Shortcomings!

Loading an SQL Table

Multiple Database Support in SQL

Area and Table Timestamps

Defaults on Date and Timestamp


Documentation Errors

Program Loads and DCMT NC

String Searching within ADS Process Source

Y2K Hype dies down - at last

Web Attacks and The White House

Special Dispatches

Linux Story for the IDMS Veteran

Web development using OS/390 IDMS

Diminishing Technical Support

Third Party Courses and Products

Just One More!  


Playing with Tape Numbers

Which IDMS tape are you running? 10.2, 12.0, 14.0, 14.1?
We have too many tapes and too few people at the vendor to support the product. Tapes are announced as a panacea for every problem reported. What is the difference between 14.0 and 14.1? Better LE/370 support? But is 14.1 LE/370 error-proof? Obviously not.

Multiple page group support. Yes, but is this feature easy to use? Is it needed by everyone? Is it needed by the dying IDMS client who is sitting with a last-but-one IDMS application?

In spite of the Y2K hype, there are clients who have successfully used the 10.2 system without any Y2K problems! Since Y2K support within the built-in functions can easily be installed under 10.2, it was no big deal.

Release 15.0 has been announced. In the late 90s SQL clients were told that the timestamp issue with respect to SQL database backup and copy would be solved in 15.0. But the 15.0 specifications now on the table do not include this problem. CA does not have enough development mass for IDMS to implement such a feature.

Once upon a time 10.2 tapes ran in parallel in two series - S1020X and S10210X - with minimal differences in the actual database/application software (there was some difference in PERFMON support).

Release 14.1 Tools Tape [CH on IDMS-L]

Some time ago a discussion appeared on the IDMS-L list about there being no 14.1 Tools installation guide. There ARE some differences (between the 14.0 Tools Installation Manual and the 14.1 installation jobs, base 199911) that I thought I would share with the list: running CAISAG creates jobs 1-10, but not in the order listed in the 14.0 Tools install guide:

Job #   What the 14.0 book says   What is really there in 14.1 (199911)
  1     Allocate                  Allocate
  2     Link CAIIPDS              Link CAIIPDS
  3     Customize Options         Customize Options
  4     Build RHDCUXIT            Build GSISVCX
 10     UPDATE IDD                Build RHDCUXIT

Obviously, looking at the jobs will tell you what is happening in each one, but for those new to installations it is a bit disconcerting when the doc does not match what is in front of you on the terminal screen.....

IDMS/PC Remembered

IDMS/SQL News 8.1 had a brief history of IDMS Database. We got many letters from the readers - many of them original Cullinane/Cullinet employees. A warm thanks to all of you who responded.

We had briefly mentioned IDMS/PC there. Originally IDMS/PC was called TAB (The Application Builder). The idea was to give a PC-based (DOS) IDMS database and ADS, mainly to develop mainframe applications on the PC - something like Microfocus CICS/COBOL. It typically emulated 3270-type terminals, not the current graphical Windows interface. In the mid-eighties this was quite reasonable. TAB used to advertise seminars in Computerworld and was an instant success.

TAB was a very good product for its age. One could easily develop ADS, COBOL applications with fully portable IDMS database definitions on PC. This was good for learning and very useful for consulting companies who wanted to prototype an IDMS database and ADS application. The ability to define a CODASYL database on PC was remarkable.

Later TAB supported networks. Unlike the file servers of the day, this was a true network product which generated very little traffic. A client in Finland in fact had made production-level applications using TAB on a 386! Another one in Sweden (not an IDMS mainframe client at all) was running TAB in production on the PC!

After acquiring TAB, CA changed the name to IDMS/PC. For a while it was also combined with CA-DB/VAX to provide IDMS/UNIX. There was even talk about IDMS/PC as a production platform, and of testing CAS under IDMS/PC. This was too much: one cannot expect 100% portability between IDMS and IDMS/PC. But what was available was very good for offloading ADS and COBOL application development from TSO to the PC.

Soon the PC world changed to Windows and then Windows 95. CA also acquired new PC/UNIX products like Ingres, and focus shifted away. A client in Holland did indeed use IDMS/UNIX successfully.

Many wished CA had developed the product to give SQL and even Windows support. This was possible: if ADS+/PC could be transformed into PowerBuilder, IDMS/PC for DOS could have been modified to provide a Windows-based IDMS. But the will was not there. With the positioning of mainframe IDMS itself under question, one cannot expect the vendor to put resources into something like IDMS/PC.

In spite of all this, there are still old IDMS hands running IDMS/PC on their desktops. It's a pleasure to do an OBTAIN CALC on your PC and have an ADSA dialog there!

James Winn applauded by IUA

James Winn, board member and former National Chairperson, is stepping down from the IUA Board after 10-plus years of service to the IDMS community. James is at present Systems Manager at Williams International, where he heads Web- and Windows-based IDMS access. Scandinavian users recall James Winn's participation in the European IUA meeting held along with the Finnish IUA in Helsinki in 1996.

Here is a picture from the Silja Line boat from Helsinki to Stockholm. From left: James Winn, Peter Kotovski, Gopi Nathan, Rolf Hopland. On the right, FIUA (96) Chairman Rauno Kokko (opposite the camera) and representatives from Belgium, Holland and so on. [Click on the image for a larger copy]

SQL Corner

IDMS/SQL - Fatal Shortcomings!

Ever since the first non-trivial SQL application was put into production in Norway in 1994, IDMS/SQL News has been at the forefront of advocating SQL for IDMS clients. Not surprisingly, the introductory seminars attracted many participants from clients in Norway and Finland. Today, almost 8 years after Release 12.0 came out, it is time to look back. SQL has been used by some clients, but not many. If you compare with ADS/Online implementations within 5 years of its introduction during 1983-1988, the spread of SQL among IDMS clients is negligible.


But we read in the industry journals that Cullinet and IDMS lost their market lead in the late eighties because they did not have pure SQL. Now SQL is there and no one is using it! This sounds like a paradox.

From 92 to 2000

IDMS/SQL has not come very far from 92 to 2000. The first release of the product was, surprisingly, well ahead of the competing products (at least of their early releases). Until DB2 4.3 was released some time in the 90s (yes, the 90s), the IDMS R12.0 SQL implementation was far ahead; DB2 4.3 was the first release whose main database features caught up with IDMS/SQL. This might be surprising news for all those IDMS fellows who were under the wrong impression that the IDMS/SQL implementation was a patchwork. But as new releases came out, there were minimal enhancements to the core, though fancy tools like Quickbridge and Visual DBA were given by the vendor.

TimeStamp Issue

The main problem is caused not really by IDMS but is a by-product of a well-known SQL feature - the timestamp. IDMS faithfully implements the timestamp and runs into all the troubles associated with it.

Backup and Restore

When a backup of an SQL database is made, one should also back up the catalog. Similarly, when a RESTORE is made, the database and the catalog should be restored at the same time. In simple cases this works fine. But if a catalog contains more than one SQL database, there is no way to back up one of them individually: synchronization requires that one back up all databases defined in that catalog. This is problematic in practice, especially if the databases are large. A restore is also required if one detects a problem in just one of the databases; in that case it is meaningless to restore all the others.
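The effect of the timestamp check can be illustrated with a toy model (this is a sketch of the principle, not of IDMS internals; the database names and stamp values are invented):

```python
# Toy model of the run-time timestamp check between the catalog and a
# database area. Restoring only the database from an older backup
# desynchronizes the stamps, and access is refused.

catalog  = {"EMPDB": "2000-03-01-10.00.00", "SALESDB": "2000-03-01-10.00.00"}
database = {"EMPDB": "2000-03-01-10.00.00", "SALESDB": "2000-03-01-10.00.00"}

def can_access(db: str) -> bool:
    # Run time compares the stamp in the catalog with the stamp
    # physically stored in the database area.
    return catalog[db] == database[db]

# Restore only EMPDB from last week's backup, leaving the catalog alone:
database["EMPDB"] = "2000-02-22-18.30.00"

print(can_access("EMPDB"))    # False - stamps no longer match
print(can_access("SALESDB"))  # True  - the untouched database is still fine
```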

Copying SQL Database from Production to Test

Another side effect of this issue is that one cannot copy a database alone from production to test: one has to copy the catalog too. Again this creates practical problems except in simple cases. There is no guarantee that the test catalog does not have more tables than the production catalog.

What happens if your Catalog goes corrupt?

What happens if your catalog gets corrupted while the database is intact? Can you reproduce a catalog to match the database?

In the case of a network database, even in the worst-case situation of complete loss of the dictionary, one can recreate the dictionary and populate it with the schema definitions [obviously we have the schema source somewhere!].

But for SQL, even if you have the whole schema and all table definitions intact in source form, once the catalog is damaged there is no way to recreate a matching catalog. If you restore an old backup of the catalog, timestamp synchronization problems may arise.

Today, IDMS gives no way - documented or undocumented - out of this serious situation. You cannot use your database at all until you have a valid catalog at run time, and there is no way to re-create a valid catalog that will match the table timestamps in the database area.

Timestamp Manipulation Utility?

The solution to all these timestamp problems is a timestamp manipulation utility. The user should be able to synchronize the database and the catalog at will. If integrity is guaranteed by the user, the utility should be able to update the catalog timestamp with that of the database. This way one can copy production databases to the test IDMS and synchronize the test catalog. If the catalog itself is corrupt or lost, one can create new definitions and synchronize the timestamps from the database.

Multiple Database Issue

The existing network IDMS environment uses the multiple database feature. It is implemented such that one needs only a single logical definition for any number of physically different databases; this is achieved by DBNAME mapping. In the case of SQL databases, such a mapping is not possible: one has to duplicate the definition for each physical database. The timestamp issue also dictates that one cannot physically copy a database from one 'db' to the other.

Run Time Access Module Situation

While accessing multiple databases, SQL needs to repeat the logical definitions. The program source and load module can be shared - but not the access module. For every SQL program (dialog, COBOL program), you also get an access module. In certain cases an access module can be shared across programs (when the run unit is shared). But if you have to access multiple databases, the access module has to be recreated for every one of them. Unlike the network solution, it is not possible to do a dynamic dbname mapping. The following table illustrates the situation, comparing multiple database support in SQL and network IDMS.

Situation        # of access modules (SQL)   # of AMs for 10 dbs (SQL)   Network subschemas   Subschemas for 10 dbs
1 program                  1                        10                           1                    1
10 programs               10                   10*10 = 100                       1                    1
100 programs             100                  10*100 = 1000                      1                    1
1000 programs           1000                 10*1000 = 10000                     1                    1

This makes pathetic reading! If the client has another complete set for training, it gets multiplied again. In a network database today you need only one subschema, no matter how many 'dbnames' you have! Relational makes this a horrible situation. How can you load all these into the program pools? How can we store all these access modules in the SQL load area? How do we maintain this mess? We tried to find out how this is done in other relational systems. [See below: Multiple Database Support in SQL]
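The arithmetic behind the table above is simple multiplication, but a small sketch makes the contrast explicit (the counting rules are as described in the text; the numbers are the illustrative ones from the table):

```python
# Access modules needed under IDMS/SQL versus subschemas under network
# IDMS, for N programs accessing D physically separate databases.

def sql_access_modules(programs: int, databases: int) -> int:
    # SQL: one access module per program per physical database.
    return programs * databases

def network_subschemas(programs: int, databases: int) -> int:
    # Network: one shared subschema regardless of program or dbname
    # count, thanks to DBNAME mapping.
    return 1

for n in (1, 10, 100, 1000):
    print(n, sql_access_modules(n, 10), network_subschemas(n, 10))
```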

Table Load from a flat file - An example

We had a question from a reader, Jim, about SQL and flat file input. As far as we can tell, SQL cannot access a simple flat file directly. But IDMSBCF can be used to load a table with input from a flat file. Here is a simple example. After this job the user must build the indexes (if any) and validate. The table to be loaded is IDMSSQL.TESTLOAD.


//* Job card
//* Load of SQL Table from Flat file
//* SQL Catalog CV

where the input file is:

TESTDATATEST 1      12345678ENDTEST COMMENTS 1                          
TESTDATATEST 2      11111111ENDTEST COMMENTS 2                          
TESTDATATEST 3      33333333ENDTEST COMMENTS 3                          
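The input records are fixed-width. The following sketch shows how such a record might map to columns; the column offsets are an assumption inferred from the sample data above, not taken from the actual IDMSSQL.TESTLOAD definition:

```python
# Parse one fixed-width load record (assumed layout: 8-char key,
# 12-char name, 8-digit number, 3-char trailer, free-form comment).

def parse_record(line: str) -> dict:
    return {
        "key":     line[0:8],            # constant 'TESTDATA' marker
        "name":    line[8:20].rstrip(),  # e.g. 'TEST 1'
        "number":  int(line[20:28]),     # e.g. 12345678
        "trailer": line[28:31],          # constant 'END'
        "comment": line[31:].rstrip(),   # e.g. 'TEST COMMENTS 1'
    }

rec = parse_record("TESTDATATEST 1      12345678ENDTEST COMMENTS 1")
print(rec)
```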

Multiple Database Support in SQL

Recently the issue of multiple database support in SQL has surfaced in IDMS-L discussion forum. A letter from Kate Hall says "I am in the process of defining an SQL schema for the first time. There will be multiple physical definitions for this schema. I can find no way to have one logical definition (schema) related to multiple physical definitions (segment/dbname). Has anyone done this? Do you have any advice? Also, any other problems you have encountered in creating SQL databases that I can avoid would be much appreciated! "

Kay Rozeboom comments "I went through this some time ago, and I was informed by the CA Support Center that it can't be done. In other words, you must have a separate logical definition for each physical definition.

It seems silly to me that CA would implement IDMS/SQL in the same release (R12) that implemented separate physical and logical definitions for the old IDMS network databases, and not provide the same capability for the new IDMS/SQL databases. I guess this was a case of the right hand not knowing what the left hand was doing! "

Not really! The real problem here is not CA! That's the way SQL works! But with the experience on IDMS, the development people could have bent a little to give a better solution.

Philippe Jacqmin has the explanation: "As soon as a ‘CREATE TABLE’ is done in OCF/BCF, IDMS will set a D2S2 (Date/time stamp) for this table in the Catalog and for this physical table, in the physical area also. D2S2 will make sure that IDMS/SQL will work with the correct definitions from the SQL Catalog, on the correct physical table using the correct Access Module at run time. This mechanism prevents any potential desynchronization between these entities. This can be considered as a step backwards compared to navigational but as physical access is SQL engine's responsibility and sits outside of your program, this will enforce consistency between the different components."

That's the problem! Unlike a network database definition, CREATE TABLE indeed creates the table physically in the data area! This establishes a 1:1 sync relationship between the definition and the physical database. Technically, IDMS could easily deactivate the timestamp checking and share an access module, just the same way it does now for SQL against network databases. Then there must be some way to tell the runtime that we (the programmers) guarantee the synchronization (not at the physical level, but at least at the logical level) between the two 'SQL' databases. Of course, if the size of one SQL database is 1000 pages and the other 100000 pages, the access strategy for the same program will obviously be different - then the access module has to be different.

IDMS/SQL News tried to investigate how this is done in other SQL databases. The result was

"Water water everywhere,
Not a drop to drink!"

That is: we got all kinds of answers, but none of them satisfactory! Do any of our readers know how exactly this is done in DB2? Can they avoid duplication of definitions? Can they avoid duplication of access modules? And what about the omnipotent Unix-based "big mouth" products? How do they handle this issue?

It's high time the people in the industry put the SQL standard into the waste paper basket and found some practical solutions to the SQL database problems. For example, why can't we make a static global SQL access module and give the DBA the option of using it instead of access modules which have to be timestamp-checked every time? If the subschema can be trusted for network DML access, why can't we trust such an AM for SQL?

Area and Table Timestamps

Timestamps are kept in the SQL application catalog. The stamp in the system catalog (where the DMCL is stored) is not used by SQL. If by mistake one ends up using the wrong stamps from the system catalog (CATSYS), it can create confusion and errors.

Case 1 : When STAMP BY AREA is used

Status = 0 SQLSTATE = 00000


Status = 0 SQLSTATE = 00000

1999-01-18- <-- the real AREA Timestamp

Case 2 : When STAMP BY TABLE is used
AREA timestamps are not kept when 'STAMP BY TABLE' is in effect.

Status = 0 SQLSTATE = 00000

Status = 0 SQLSTATE = 00000

0001-01-01- no area stamp is kept

Defaults on DATE, TIME and TIMESTAMP Fields

SQL supports many data types which are unfamiliar in the network database world. Most people know the INTEGER and CHARACTER data types. What about DECIMAL? NUMERIC? DATE? TIMESTAMP? How are they stored? How much space do they take? There is a simple way to find out the conversion: create a table in SQL and display the table 'AS REC' within OCF. Here we go:
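A sketch of such a table might look like this (the table name IDMSSQL.XYZ and the column names match the AS REC display below; the exact DDL is illustrative, not the one actually used):

```sql
-- Sketch only: a table exercising the data types in question.
CREATE TABLE IDMSSQL.XYZ
(   I3_CHAR10    CHAR(10),
    I4_NUMERIC5  NUMERIC(5),
    I6_DATO      DATE,
    I7_TIME      TIME,
    I8_TIMESTAMP TIMESTAMP
);
```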

*+ DATE CREATED 1999-05-11- BY MRY

Now we display the table in OCF as follows:


'Record built from SQL TABLE IDMSSQL.XYZ' .
03 I3-CHAR10 PIC X(10).
03 I4-NUMERIC5 PIC S9(5).
03 I6-DATO PIC X(10).
03 I7-TIME PIC X(8).

What does the timestamp look like internally?

*+ ------- --------------      -------  -------------
*+ 1999-05-11 0164638000000000 09.30.19 000000085AB00000
*+ I8_TIMESTAMP                HEX(FUNCTION)
*+ ------------                -------------
*+ 1999-05-11- 016463885AB2B717
*+ 1 row processed

PotPourri - Technical and not so Technical

Documentation Errors! Who cares!!

When a new product is out in the market, the documentation may be incomplete or incorrect. But when the product is 7-8 years old, one expects these errors to be fixed. Well, not always.

On Print Log the Utility Manual gives

print log from archive
start at '1996-1-18:20:00'
stop at '1996-1-19-';

This won't work - it gives syntax errors. The format that did work is:

START AT '2000-02-15-'
STOP AT '2000-02-15-'

In our tests, giving the time as '2000-02-15-14.20' or '2000-02-15:14:20' did not work.

The COBOL DML Reference Manual says about ENQUEUE: "The ENQUEUE statement acquires or tests the availability of a resource or list of resources. Resources are defined during installation and system generation and typically include storage areas, common routines, queues, and processor time." Again, in the syntax it says:

"resource-id :The symbolic name of a user-defined field that contains the resource ID. The resource ID must be the name of a resource defined to the DC system."

All this implies that if you have to ENQUEUE a resource, you have to define it in sysgen. So how do you enqueue a piece of storage? How do you define a piece of your own storage in sysgen, during installation?

The Manual has given an example too:

Example 1: The following statement requests CA-IDMS to enqueue the CODE-VALUE and PAYROLL-LOCK resources. CODE-VALUE is reserved for the issuing task's exclusive use; PAYROLL-LOCK can be shared.


In this example how and where do you define CODE-VALUE in sysgen?


The PERFMON sysadmin book is incorrect in the following: it states that in order to cut TSKWAIT SMF records, either IMDCLOG=YES or IMSMF=YES must be set. This is wrong. TSKWAIT records are controlled by the application monitor, NOT the interval monitor, and are therefore controlled by AMDCLOG=YES and/or AMSMF=YES. CH on IDMS-L adds that CA will provide a docup.

Program Loads and DCMT New Copy

Recently there have been heavy discussions on IDMS-L on this issue. Jim Lancelot, U.S. Department of Veterans, started the discussion with the following question: "One of our programmers re-compiled a COBOL II program and varied the program NEW COPY, but the old copy stayed in memory until the CV was re-cycled. The result of the DCMT VARY PROG said the program was marked to new copy, but when it was displayed with DCMT, the 'times loaded = 1' didn't increment and the old version remained in memory. The program is defined in SYSGEN as NEW COPY ENABLED and is located in the highest concatenated application library under CDMSLIB. We tried every variation of DCMT VARY PROG we could think of, but nothing loaded the new version of the program until the CV was re-cycled. We're running 12.01 9607 on MVS."

There were various responses on IDMS-L. Most pointed to the question of PDEs, DICTNAME and loadlib. Here are a few of them:

* I somehow created a program entry under the dictionary I was using by neglecting the asterisk in front of the program name. From that point on the program under that dictionary name seemed to be referenced, until the CV cycled or the program under the dictionary name is deleted.

* I don't think anything is really wrong, I believe that you had 2 PDE's for the program and only new copied one of them.

To avoid this in the future you can do a DCMT DIS PROG FROM PPPPPPPP where PPPPPPPP is the name of the program, if there are two PDE's you will see both of them. The reason you get two most likely is that one user does not set a DICTNAME and executes the program and another user sets a DICTNAME and executes the program. In this case there would be two PDE's for the same program, if you only new copy the one, some users will still execute the old.

* When you look at the DCMT DIS PROG FROM you will see CDMSLIB or a DICTNAME,

* It sounds to me like you may have had more than 1 active PDE for the program. The NC was done on one PDE, and the other was the one being picked up. We have hit this a few times. There can be 1 PDE for each place the program has been loaded from (or something like that). For instance 1 for CDMSLIB (if applicable) and one for a secondary dictionary (if applicable).

* Try LOOK PROGRAM=xyz or SHOWMAP xyz, these both should load new copy of the program. See times loaded after command(s).

* Just a few things to add...
A dcmt v prog to new copy does just that - it only marks the program for new copy. The new program will not be loaded into memory until it is called for; a SHOWMAP can do this for you. Also, if you are calling a program that was new-copied, my understanding is that the new program will not be loaded into memory if the same program is active at the same time in another job or task.

* I'm curious. We have subschemas that we NC occasionally on our test system. When we do a NC on one in particular we always get a message:

     <number>     <type>      <source>
         1      SUBSCHEMA      LOADLIB
         2      SUBSCHEMA      DICTIONARY

Is the fact that multiple PDEs exist for the subschema the reason for this?

* You probably have a CV batch job that loaded the subschema from CDMSLIB and an online program that loaded it from the dictionary as well.

In the new scheme of things since 12.0, SHOWMAP should give an error stating that this is not a map. The command

look program=programname

is the proper way of doing this. The compile information of the program is displayed as well. This is also useful when you issue a dcmt v program programname nci.

More on this issue:

Most of the comments centered around the dictionary issue. But a dictionary-based PDE is only used for dictionary-based load modules - maps, subschemas, dialogs. A COBOL program is not loaded from the dictionary, so there must be something else going on. Also, one can sysgen a program as NODYNAMIC - for example, this is done for ADSOMAIN. That only guarantees a single PDE; IDMS can still load two copies with a single PDE! If you do DCMT D PRO *.ADSOMAIN you may see multiple copies at the bottom. [This is typically caused by tasks running below and above the line - ADSOMAIN will be loaded in both the 24- and 31-bit pools.] The immediate effect for the user is that ADSALIVE may not function from then on until the CV is recycled.

But this still does not answer the COBOL mystery. Yet another thing not mentioned in the IDMS-L discussion is the issue of the COBOL compiler! If you are using one of the 'modern' IBM compilers (COBOL/370, LE and so on) with certain options, IDMS won't be able to load the program at all! It will create a PDE and even find out how big the load module is, but the load will fail - even if you are using only LOOK. Here is an example:

DCMT  D PRO *.PCOB0100                                                
   Program Name PCOB0100            Ddname          CDMSLIB           
   Type         PROGRAM             Type            LOADLIB           
   Language     COBOL               Dictname                          
   Size (bytes) 00008056            Dictnode                          
   ISA size     00000000            Database key    NOT IN DICT       
   Status       ENABLED AND INSRV   Storage Prot    YES               
   Dynamic      ALLOWED             Residence       NOT IN POOL       
   Reusable     YES                 Threading       CONCURRENT        
   Reentrant    FULLY REENTRANT     Overlayable     YES               
   Tasks use ct 000                 New Copy        ENABLED           
   Times called 00000001            Times loaded    000001            
   PGM chk thrh 005                 Pgm check ct    000               
   Dump thrh    000                 Dump ct         000               
   Amode        31                  Rmode           ANY               
   PDE address  06D27ABC            MPmode          SYSTEM            
The program was dynamically defined with 
If an ADS dialog tries to link to this COBOL program we get a better message:
 07:01   COBOL-BAD COMPILER/OPTION/VERB
 DC466014 V2 Abort occurred in dialog ...
At this point even LOOK fails to load the program.
LOOK PROGRAM=PCOB0100                                                      
IDMSLOOK  -  Selection Parameter Follows:    
IDMSLOOK  -  Failed Trying To Load Module PCOB0100  -  Reason Code 08       

At this stage the user deleted the dynamic definition of the COBOL program (or removed it from Sysgen).

Without any definition, the user executed LOOK - and it worked:
LOOK PROGRAM=PCOB0100                                                      
PCOB0100 was LOADed From --> CDMSLIB                                      
Entry Point Offset +0      -  Reentrant     -  AMODE 31  -  RMODE ANY           
        6,752 Bytes in Load Module PCOB0100 loaded at 07301A00                  
                         Module    Offset   Date   Time                          
                         RHDCLENT   +718    960705  1722                          
 			. . . 
IDMS also created a dynamic PDE for this
D PRO PCOB0100                                              
  Program Name PCOB0100            Ddname          CDMSLIB         
  Type         UNDEFINED           Type            LOADLIB         
  Language     ASM                 Dictname                        
  Size (bytes) 00006752            Dictnode                        
  ISA size     00000000            Database key    NOT IN DICT     
  Status       ENABLED AND INSRV   Storage Prot    NO              
  Dynamic      ALLOWED             Residence       IN POOL AT 07301
  Reusable     YES                 Threading       CONCURRENT      
  Reentrant    FULLY REENTRANT     Overlayable     NO              
  Tasks use ct 000                 New Copy        ENABLED         
  Times called 00000001            Times loaded    000001          
  PGM chk thrh 005                 Pgm check ct    000             
  Dump thrh    000                 Dump ct         000             
  Amode        31                  Rmode           ANY             
  PDE address  06D27094            MPmode          SYSTEM          

In this case the problem was that the LE/370 COBOL program was compiled with the DYNAM option, which is not supported. Once the user changed this to NODYNAM, everything worked.

String Searching within ADS Source

Recently there was a question on IDMS-L from Brian Brown of McGraw-Hill on how to find out which dialogs use a particular DC-COBOL subroutine, when ADS processes call DC-COBOL subroutines as in


Such control commands are not cross-referenced in IDD. How do you list all ADS processes which call COBOL programs? Michael A. Newman of Sophisticated Business Systems, Inc. gave the answer: "I have used a batch OLQ query for years to look for strings inside ADS dialogs. Here is the syntax I use. You can search for any number of strings by adding to the where criteria. I have used this in both release 12 and 14." This is a general solution for searching for any string.

SET USER xxxxxxxx


AND (SOURCE-088 CONTAINS 'string-1' -
OR SOURCE-088 CONTAINS 'string-2') -


That's it. One can run this in OLQ online too, provided the dictionary is not very large. OLQBATCH is preferable; one can even run it in local mode without affecting online performance.

Hype about Y2K Doomsday

So the Y2K noise is over, at least in the media. On that "fateful" day when the clocks rolled over at midnight in New Zealand and Australia, many were expecting disasters. Nothing happened! Then the clocks ticked past midnight in China, South East Asia and the Indian Subcontinent - nothing happened! By now it was clear that Y2K was largely media hype. Besides program failures, breakdowns were feared in power supplies, utility services and, in the worst case, even nuclear reactors! But the chips didn't have much embedded code for the millennium shift, and nothing happened!

By the time the new year dawned over Europe, other than CNN's live coverage of the celebrations, there was a feeling of anti-climax - a feeling of 'no happenings'! Countries which spent billions on Y2K precautions still tried to justify the expenditure. The US Treasury had a reserve of more than 100 billion dollars in cash, just in case the banking system collapsed! But Italy, which did not spend a penny on any extra Y2K effort, did not have any problems either.

Was there no problem?

There indeed was a real program code issue, where subtractions like 00-98 would go wrong. But most of these were already taken care of: such errors would have been appearing ever since the mid-90s. There are IDMS clients who used yyyy for the year as early as 1984! In any case, such errors just won't explode at midnight on 31 Dec 1999. [Recall that one software company was running an advertisement with big explosions at midnight!]

In fact, the real preparations were all finished by 1997-98. Most of the money spent in 98-99 was simply a moneymaking game. Consultants wrote huge reports and solutions in Word about Y2K without writing a single line of IDMS, DB2 or CICS code. Companies sold 'snake oil' tools of all kinds. One of them just listed every COBOL or ADS line which contained a date string! A simple QFILE in OLQ would have listed all processes containing any string!!

Is the problem over?

The hype is over, and not surprisingly some of the program code errors can indeed surface now. Also, in many places, even if the date is handled correctly internally, the display still shows only two digits for the year. This can cause some minor problems. One simple example from the PC follows. The PC clock handles Y2K all right; still, the directory listing shows only 2 digits for the year, and a sort on date fails. The following is a listing of the C: disk sorted on date! There are thousands of places like this where the date is still listed with 2 digits for the year, and subsequent use of such a date will create problems.

CONFIG SYS	309 	2-19-00 4:51p 
FFASTUN FFA 	4491 	2-19-00 8:55pH
FFASTUN FFL 	172032 	2-19-00 8:55pH 
FFASTUN FFO 	73728 	2-19-00 8:55pH 
FFASTUN0 FFX 	376832 	2-19-00 8:55pH 
COMMAND COM 	94600 	5-15-98 8:01p 
IO SYS 		222390 	5-15-98 8:01pH 
STRTLOGO OEM 	129080 	5-15-98 8:01p 
LANG TXT 	140 	9-07-99 12:40p 
SOFTWARE TXT 	8129 	9-07-99 12:40p 
AUTOEXEC DOS 	393 	9-13-99 3:49p 
CONFIG DOS 	213 	9-13-99 3:49p 
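The sort failure above is easy to reproduce: compared as raw two-digit strings, '00' sorts before '98', so the year-2000 files jump to the top of the listing. A minimal Python sketch (the sample dates are taken from the listing above, not from any IDMS tool) shows the failure and the fix via a windowed two-digit year:

```python
from datetime import datetime

# Dates in the 2-digit m-d-yy style of the directory listing above
dates = ["2-19-00", "5-15-98", "9-07-99", "9-13-99"]

# Naive sort on the raw year string: '00' < '98', so the year-2000
# entry incorrectly sorts first
naive = sorted(dates, key=lambda d: d.split("-")[2])

# Parsing with a windowed two-digit year restores chronological order
# (Python's %y maps 00-68 to 2000-2068 and 69-99 to 1969-1999)
fixed = sorted(dates, key=lambda d: datetime.strptime(d, "%m-%d-%y"))

print(naive[0])   # the 2000 entry wrongly comes first
print(fixed[-1])  # the 2000 entry correctly comes last
```

Any program that sorts or compares such displayed dates as plain text will make the same mistake the directory sort makes here.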

Wolf! Wolf!!

Now if a real Y2K issue is brought up, no one takes it seriously! The EDP manager won't allocate a penny for any more Y2K activities. The programmer will then be left with fixing the "real thing", which the snake oil solutions never managed to tackle. In several places '00' is already appearing in print and output files. These files are supposed to be correct. But if they are input to further processing without handling '00' as 2000, you are in trouble.
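Handling '00' in such downstream files comes down to picking a pivot and windowing the two-digit year. A minimal sketch, assuming a pivot of 50 (the pivot value is a site-specific choice, not something from the original text):

```python
PIVOT = 50  # assumed window boundary: yy below 50 is treated as 20yy

def expand_year(yy: int) -> int:
    """Expand a two-digit year to four digits using a fixed pivot window."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + yy if yy < PIVOT else 1900 + yy
```

With this window, '00' read from an output file becomes 2000 while '98' stays 1998; any file fed into further processing needs to pass through some such rule first.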

Web attacks and The White House

Recently well known sites like Yahoo, Amazon, CNN etc were bombarded to a halt with pseudo traffic of up to 1 Gbit/sec. Alarm bells rang all over and even reached the White House. President Clinton called a meeting of top executives. Among the few CEOs who attended was Charles Wang of Computer Associates. Wang emphasized the password changing issue. [With moneymaking products like ACF2/Top Secret, password is all that matters to CA!]

Today's web problems go far beyond a simple password change. One problem on the web is that browsers offer to store your password somewhere, so that next time you don't have to type it again. Then there are cookies, which are an invasion of your privacy. And then there are ways to find out your identity as soon as you visit a web page! It is even possible to find out from which URL you jumped over to the current web site. All these are big holes in security.

The whole philosophy of web programming and browsing is based on the ad hoc UNIX development methodology. UNIX was developed in the laboratory. For ease of use or typing convenience, the original developers used directories like /usr, /bin, /home, /etc which are still there. Someone said even the command 'unmount' was once misspelled as 'umount' and it came to stay as it is! And there are the curly brackets {} denoting a kind of begin-end for program structures, which were originally used in 'C' and came to be used in all new languages. Why should this be so unreadable? [Note that in many European countries one cannot see {} on an IBM mainframe; they are mapped to local national characters, making C code on the mainframe totally unreadable.] Why not use BEGIN; END or some similar keyword pair? After all, the compiler will eventually replace everything with hex values.

Last year the Melissa virus attacked Microsoft and Intel. Melissa used a big hole in Word97 whereby one can embed a macro which will be executed automatically when you open the document. Not surprisingly, many Outlook users attach Word documents (quite unnecessarily) to their mail. Even when a simple embedded ASCII text would do the job, people attach a Word document (which might be about 50 times larger than the ASCII file) without knowing the amount of traffic it generates. Melissa multiplied and jammed, of all sites, Microsoft's own home site!

Today no one uses simple html code. One must have Java code in it. This Java executes on your PC under the browser. Executable Java code is the closest one can get to what we call network computers. But executing someone else's code on your PC, without knowing the consequences, is a total violation of security. Java code can easily guess the Windows directories (which are universally named /windows/system, Program Files, My Documents etc) and more. With Java code one can easily pull information out of your PC and store it elsewhere on the net! And yet there are people who believe Java programming is 'the future language and environment'. Hail to the hackers!

One simple Test

Many companies are advocating Java as the future programming language. Many are using it. Make a simple test. List the 10-20 best known sites, with a mix of software vendors, news services etc. Try to visit their home pages and find some information you want. How friendly and fast are they? Write it down and benchmark for yourself. You will be surprised to see that some of the companies advocating magic solutions for the web have some of the worst websites you can ever come across! If you have time and patience, try to look at the source behind the web page. You will see a heap of garbage just to display a simple menu page with some pictures!

The web's problems will continue to exist until some kind of clean methodology is arrived at. With new programming languages claiming to solve all the problems appearing every fortnight, this will not happen in the near future.

Special Dispatch I

Linux Story

- Pekka Salminen, Finnish Institute of Technology, Otaniemi, Helsinki, Finland

Linux is a complete clone of UNIX, which is one of the most popular operating systems worldwide because of its large support base and distribution. UNIX was originally developed at AT&T as a multitasking system for minicomputers and mainframes in the 1970s but has since grown to become one of the most widely used operating systems anywhere, despite its sometimes confusing interface and lack of central standardization.

Many hackers feel that UNIX is the Right Thing--the One True Operating System. Hence, the development of Linux by an expanding group of UNIX hackers who want to get their hands dirty with their own system.

Linux is a free version of UNIX developed primarily by Linus Torvalds at the University of Helsinki in Finland, with the help of many UNIX programmers and wizards across the Internet. The Linux kernel uses no code from AT&T or any other proprietary source, and much of the software available for Linux was developed by the GNU project of the Free Software Foundation in Cambridge, Massachusetts, U.S.A. However, programmers from all over the world have contributed to the growing pool of Linux software.

Linux was originally developed as a hobby project by Linus Torvalds. It was inspired by Minix, a small UNIX system developed by Andy Tanenbaum. The first discussions about Linux were on the Usenet newsgroup comp.os.minix. These discussions were concerned mostly with the development of a small, academic UNIX system for Minix users who wanted more.

The very early development of Linux mostly dealt with the task-switching features of the 80386 protected-mode interface, all written in assembly code. Linus writes: ``After that it was plain sailing: hairy coding still, but I had some devices, and debugging was easier. I started using C at this stage, and it certainly speeds up development. This is also when I started to get serious about my megalomaniac ideas to make `a better Minix than Minix.' I was hoping I'd be able to recompile gcc under Linux someday... ``Two months for basic setup, but then only slightly longer until I had a disk driver (seriously buggy, but it happened to work on my machine) and a small file system. That was about when I made 0.01 available (around late August of 1991): it wasn't pretty, it had no floppy driver, and it couldn't do much of anything. I don't think anybody ever compiled that version. But by then I was hooked, and didn't want to stop until I could chuck out Minix.''

On October 5, 1991, Linus announced the first ``official'' version of Linux, which was version 0.02. At that point, Linus was able to run bash (the GNU Bourne Again Shell) and gcc (the GNU C compiler), but not much else. Again, this was intended as a hacker's system. The primary focus was kernel development--user support, documentation, and distribution had not yet been addressed. Today, the Linux community still seems to treat these issues as secondary to ``real programming''- kernel development.

Linux supports features found in other implementations of UNIX, and many which aren't found elsewhere. Linux is a complete multitasking, multiuser operating system, as are all other versions of UNIX. This means that many users can log into and run programs on the same machine simultaneously.

The Linux system is mostly compatible with several UNIX standards (inasmuch as UNIX has standards) at the source level, including IEEE POSIX.1, UNIX System V, and Berkeley System Distribution UNIX. Linux was developed with source code portability in mind, and it's easy to find commonly used features that are shared by more than one platform. Much of the free UNIX software available on the Internet and elsewhere compiles under Linux ``right out of the box.'' In addition, all of the source code for the Linux system, including the kernel, device drivers, libraries, user programs, and development tools, is freely distributable.

Linux supports various file systems for storing data, like the ext2 file system, which was developed specifically for Linux. The Xenix and UNIX System V file systems are also supported, as are the Microsoft MS-DOS and Windows 95 VFAT file systems on a hard drive or floppy, and the ISO 9660 CD-ROM file system. Linux provides a complete implementation of TCP/IP networking software. This includes device drivers for many popular Ethernet cards; SLIP (Serial Line Internet Protocol) and PPP (Point-to-Point Protocol), which provide access to a TCP/IP network via a serial connection; PLIP (Parallel Line Internet Protocol); and NFS (Network File System). The complete range of TCP/IP clients and services is also supported, including FTP, telnet, NNTP, and SMTP.

The Linux kernel is developed to use the protected-mode features of Intel 80386 and better processors. In particular, Linux uses the protected-mode, descriptor-based memory-management paradigm, and other advanced features. Anyone familiar with 80386 protected-mode programming knows that this chip was designed for multitasking systems like UNIX. Linux exploits this functionality [just as Windows misused it completely!].

The kernel supports demand-paged loaded executables. Only those segments of a program which are actually in use are read into memory from disk. Also, copy-on-write pages are shared among executables; if several instances of a program are running at once, they share physical memory, which reduces overall usage. In order to increase the amount of available memory, Linux also implements disk paging. Up to one gigabyte of swap space may be allocated on disk (up to 8 partitions of 128 megabytes each). When the system requires more physical memory, it swaps inactive pages to disk, letting you run larger applications and support more users. However, swapping data to disk is no substitute for physical RAM, which is much faster.

Coherent Operating System

Linux is by no means the first Unix clone for the PC. In fact there were several attempts in the late 80s and early 90s to provide a cheap alternative to Unix. One notable product was Coherent from the Mark Williams Company. Coherent was a character-based, complete clone of Unix for 386 processors, distributed on just 4 diskettes. It came with a superb 1000-page manual on how to use its components and was brilliant for its $99 price tag. But it was a proprietary product, not a joint effort by the internet hacker community, and that was probably why it failed to make it in the commercial marketplace. By the mid-90s Linux was freely available from the internet, and Coherent was a casualty in the long run.

IDMS/SQL: Recently one of us went to a mainframe site and talked to about 5 IDMS programmers, and no one had heard of Linux! So IDMS/SQL asked university student Pekka to contribute this article. Pekka is a born Linux hacker. We will have comments from IDMS mainframe users on Linux in the next issue.

Heard on IDMS-L: Cheaper Mainframe Power!

A mainframe MIP in 1990 cost $100,000. Right now, that same MIP costs $2,247. By the end of 2003, they estimate that same MIP will cost only $400.

Special Dispatch II

Web Development Using OS/390 IDMS

There has been some discussion in IDMS forums about Web access to IDMS data. The recent discussion was triggered by a question from John Elliott on "Web Development Using OS/390 IDMS": "Wanted to know what development software tools people were using to get IDMS data on a mainframe for display and update on web pages. We're considering a development project to do this. Some tools I know about are CA-SQL, CA-Server, CCI, ODBC, Visual Basic. Are there mainframe based web servers that could access IDMS directly? Any other routes?"

There were many responses from IDMS-L members. Before going into the details, we want to stress that one of the first commercial Web/IDMS applications was developed in Scandinavia by an IDMS/SQL client as early as 1995! This was done using SQL, ODBC, IDMS Server and a PowerBuilder front end, with some code on the server. For 1995 this was a remarkable achievement.

Now times have changed. People want to do more. Since the much prophesied "death of the mainframe" did not materialize, many clients are seriously looking at MVS databases as serious Web servers. Since the front-end person on the WWW doesn't care where the data resides, there is all the more reason to keep the data in a safe and reliable place while providing all the 'modern' interfaces to the end user.

IDMS-L responses:

Let us browse through some of the responses:

"We have done some web development using the CICS Web Server and are currently evaluating the TACT/Vegasoft IDMS Web Server. The CICS Web Server development is done in that very high level language COBOL! Our response times are excellent. If you are going to CA World, I am giving an overview session on many of the ways one can access IDMS data from the web."

"We are starting a project here using the Shadow Web Server product from Neon. Another site at our company has successfully used this product to Web-ize their COBOL-IMS application. It has been in production for a couple of years now. The benefits for us are that we only have to buy one additional product (the Shadow Web Server that runs on the mainframe) and we continue to code PL/1 programs making DML calls to our network database (less training & can use the same development process & config. control). I can't say this is the most forward thinking solution, but when you are strapped for cash (or time) it is at least a step forward."

"We are implementing our web system using the following: we run WebSphere on the mainframe in Unix System Services. The code running there is Java and static html (I think). We use a product called MQSeries from IBM to communicate between WebSphere and IDMS. In IDMS we use a product from Aquisoft called OCA-MQSeries that takes output/input from existing dialog code and sends it to WebSphere. In this way we used most of our existing system to provide input to the web application."

"We use VegaSoft's Web Server, which resides in the IDMS region. You may need various components to tailor your needs. Check their website: and their US distributor is at"

"EDBC provides real-time, high-performance read/write connectivity to OS/390 Enterprise Databases from mission-critical Windows client/server and Web-deployed (Internet, Intranet, & Extranet) applications. Leveraging existing mainframe technology, EDBC directly connects Web-based and multi-tiered business applications to mainframe-based enterprise databases such as native VSAM, CICS/VSAM, IMS, DB2, CA-IDMS, and CA-Datacom"

"Have you looked at CA-OPAL? It uses existing 3270-type application screens and can (I think) be used on a Web server, generating the web HTML for you. I've not used it fully yet, but it looks like an inexpensive solution for something."

"Web Engineer has been re-branded, and is now LiveContent VISION. It's owned and sold directly by PeerLogic ( and also distributed by partners, notably International Software Products ( It is highly suitable for web-enabling/integrating IDMS applications to the web."

IDMS/SQL News Analysis

It is obvious there are solutions available now. But some are more suitable than others. Here we restrict ourselves to Web access to IDMS. The original question from John Elliott already mentions CA-Server and ODBC for the Web. This is in fact the same solution used by the pioneering SQL site: the Web extension of ODBC access from the PC. For simple queries this is good enough. For high-volume update applications there are serious limitations: the ODBC connection using CA-Server and ENF is not fast or direct enough, you need code on the server, and you also need ODBC on your PC.

A couple of people mention CA-OPAL. But this is only a screen-scraper front end, not a true programmable Web interface. CA themselves now talk about a new product called EDBC, quoted above. On the other hand, we don't have much information about EDBC. We can't recall it being presented at the last IMC 99, so it is most likely a product CA has acquired recently, quickly being tailored to meet 'all' databases and VSAM! At the last IMC 99, CA's solution was CA-ICE, which required all of the following:

Spyglass Webserver (CGI/API) - NT or UNIX (NT - 150M HD or UNIX - 120 M HD)
Ingres II, Ingres Net etc
Ingres Enterprise Access to IDMS (Gateway and IngresNet in an MVS address space)

Obviously no one wants to install Ingres and the rest just to Web-ize IDMS! There are too many components here, and the CA solution changes every year. One day it is ODBC, the next IMC it is Jasmine, then it is the Ingres Gateway, then it is harmoni, and now it seems to be EDBC. But clients cannot change products every year based on changes in CA's 'strategic directions' or acquisitions! IDMS clients who have had the main product for over 20 years are forced to ask: how long is the 'half-life' of this new wonderful product? What an IDMS client needs is clean and stable Web access to their 20-year-old 'OLD database'!

Some clients have mentioned MQSeries and WebSphere; both are being heavily pushed by IBM. MQSeries itself has nothing to do with the web, but it can be used for web connections too. Both Vegasoft and Acquisoft provide an MQSeries interface to IDMS. But we feel that the primary purpose of MQSeries is not the Web.

Direct Web Interface to IDMS

What is needed is a direct interface to IDMS, cutting out all the middle layers. Two products stand out doing just that: the TCP/IP interfaces from Vegasoft and Acquisoft. Both are based on the standard socket interface from IBM. They provide a direct programmable interface between IDMS and TCPIP/390. [Many other connections, including ODBC, support TCP/IP as a connectivity medium, but not as a programmable interface.] Using the TCP/IP link, any platform (Unix, PC, ...) supporting TCP/IP can access the IDMS database. One can write applications in DC-COBOL with embedded calls to the Vega TCP/IP modules.

Once the TCP/IP link is established, a Web interface is an automatic by-product. As far as we can see, the WebServer from Vegasoft does this job in the simplest and most direct way. The Webserver supports special builtin functions in ADS, so that in the simplest case one can even build Web applications using ADS alone: simple 'MOVE' statements do the job for you. C++, Java etc programs can also access IDMS data.
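The request/reply pattern behind such a socket interface is ordinary TCP programming. As a hedged illustration only (the function name, host, port, and record format here are placeholders for illustration; the real Vegasoft/Acquisoft modules define their own call interfaces and record layouts), a generic client on any TCP/IP-capable platform might look like this:

```python
import socket

def query_server(host: str, port: int, request: bytes) -> bytes:
    """Send one request over a plain TCP socket and collect the full reply."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request)
        sock.shutdown(socket.SHUT_WR)  # signal end of request to the server
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:               # server closed: reply complete
                break
            chunks.append(data)
    return b"".join(chunks)
```

Because the exchange is just bytes over a socket, either end can be Unix, a PC or the mainframe itself, which is what makes the direct interface so portable.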

Compared to the many solutions mentioned above, the Webserver seems the simplest and most direct, because it needs nothing on the NT or UNIX boxes. In fact, nothing is needed on the PC front end either. There is even the possibility of storing 'jpg, gif' images in IDMS (the IDD load area). This way the whole application can reside in one and only one place - within IDMS.

The Vegasoft Webserver also allows you to store html templates within IDD. The html pages can be standard html-only code or can contain JavaScript or applets. SQL is not needed, but fully supported. There is also an added advantage here: it supports both IDMS and DB2 from the IDMS address space. This means that the same ADS/Online web dialog can access both IDMS and DB2, if needed.

Diminishing Technical Support

In many countries, there are no IDMS personnel left within CA. The assumption at CA is that there are no more new IDMS projects going on, and that it is enough to have some dummies picking out APARs from the system and sending them to the client base. Basically, a problem can go to hell!

The very fact that IDMS survived the doomsday predictions of the early 90s and many clients are indeed making new applications proves the merits of the original product. In fact, applications which were declared 'dead' or dormant as early as 1989 have grown to gigantic proportions at many sites.

CA's own attempt to convert IDMS clients into Ingres clients was a disaster. It will be interesting to note that of all the database products CA has (IDMS, Datacom, Ingres, Jasmine) IDMS will outlive the rest in spite of the fact that money has been poured into marketing the object oriented Jasmine and IDMS has been practically ignored.

IUA Connections (Winter 2000) has a dispatch from CA which reads "CA's GPS has experienced CA-IDMS consultants who are available to assist our clients". Where are these IDMS technicians coming from? From Mars, we suppose! In reality, in most CA offices there is no IDMS know-how left. CA's sales and pre-sales force appear at client meetings only to rewrite the old contracts. These people have created a sorry impression of the product, and even the clients who stayed with IDMS through the 89-99 period prefer to run away after listening to the hogwash talk.

In the same issue of IUA Connections a client says "we have many happy users who don't have a clue that they are using IDMS. Some of them would have refused to use the product otherwise, on the basis that they didn't want to use 'old technology'" (page 17). If the vendor's positioning had been good, such an impression would not have been created at all.

Today, with some third party products, it is possible to provide very fast and clean access to IDMS from WWW. In fact such a setup can outperform any UNIX based web interface. But the last person to hear about such a possibility will be CA's own marketing team!

Third Party Courses and Products

There are practically no IDMS courses offered by most CA offices in Europe. "Dead" products don't need courses - that seems to be the motto of the vendor. Most courses have also been downgraded to 'on demand'.

IUA Connections (Winter 2000) says "CA and IUA are cooking up new IDMS Education Curriculum". Even though we applaud the IUA's desperate attempt to make some sense out of the current situation, it all sounds almost like reinventing the wheel.

In most cases the courses are taken over by ex-Cullinet personnel or a company run by an ex-employee. This may look wonderful for the client and the vendor for some time. But it is only a short-term solution.

The ex-employee need not be up to date (maybe he is, since the product is not changing much), and he may be unaware of the latest status of the product. Sooner or later he is guaranteed to lag behind the product's positioning.

On the other hand, with few changes to the status of IDMS as such, many third-party products are trying to fill the vacuum in the IDMS marketplace. Some of them are good - TCP/IP, Web and MQSeries support. But if the positioning of the base product is not good enough, all these third-party attempts are pointless and a waste of time. Unless clients stay with the main product, there is no room for supporting tools.

SQL is the KEY

There are some comments in IUA Connections about how to secure the future of IDMS. One reads "IDMS would have a better life if and when CA delivers SQL option as an integrated part of CA-IDMS".

Well, technically SQL is integrated with the IDMS DBMS. What the IUA meant was in marketing terms. Today, a client does not get SQL as part of IDMS; he has to pay extra. In the last 10 years CA has done nothing to convince clients to 'buy' SQL. Why not give it away as part of IDMS? That is, existing clients should get SQL free of charge. Since CA has admitted that they can't sell any more IDMS to mainframe clients, what is the harm in giving SQL to existing clients, if that would result in those clients staying with the product?

Just One More: An Incredible Product!

Yes, we are talking about Microsoft Windows! It's incredible how this third-rate product conquered the market. For one thing, it was able to beat OS/2, which was a superior product in all respects. PC programmers and users who are enchanted by the versatility of MS Windows are typically single users who don't mind doing CTRL/ALT/DEL a couple of times a day! My Pentium PC with 24M was running Windows 95 and Word 95 reasonably well. And until recently I had an old browser, MSIE 2.0! For one thing, it was fast, because it ignored all Java code! Yes, it was good enough, but it could not browse pages with frames. So I had to change to IE 3.0 and then to IE 4.0. I could also no longer manage with Word 95 and the Word 97 Viewer. It was time to switch over to Word97.

I noticed that with Word and IE the PC was taking too much space for the SWAP file. During a session the SWAP file, which started at 16M at startup, grew to 50M or more. Friends told me it was time to change the PC. So I got a Pentium 500 with 128M RAM running Windows 98. For simple internet access, I was expecting lightning speed. Nothing happened. It was a little faster, but the PC was still using enormous SWAP files! 128M and it still needs SWAP files; in fact the swap file was bigger than my old PC's! This was strange! And the idle PC suddenly wakes up and there is a flurry of disk activity (tic-tic, reminding one of Morse code) for about a minute. My son thought there might be some ghosts inside the PC. I did some tuning, which required me to allocate a huge permanent SWAP file. [Imagine: I changed the PC only to get rid of the SWAP file, and here I am sitting with a permanent SWAP file of 256M!] Some improvement, but not the lightning speed I was hoping for. The tic-tic was reduced, but not eliminated! I upgraded to IE 5.0. Yes, this seems better and faster than IE 4.0.

And while using FrontPage Express with a small piece of html code, it ran out of memory! All I did was some cut and paste from Word97 (which it didn't do the right way, but it ate all the memory!). And take this 11th commandment: "Thou shalt not do too much cut and paste with Word or any Microsoft product; it will eat away your memory!"

It is impossible to believe such a PC is the front end for many UNIX/NT applications, all of which claim superfast response times! Simple bluff! The PC startup time itself is too long. If one is using a network PC, it is even longer.

And what about networks? Well, here is a working example. A 3270 connection from a PC network is always slower than a remote ISDN line to the same MVS through TCP/IP! Think of it: your 3270 connection through the NT network from your PC on the third floor of your computer center in New York is slower than someone using the same MVS sitting in the Fiji Islands and dialing to New York over an ISDN line! This is not a joke, but simple reality.

Part of the trouble here is the Novell network. Though Novell prefers to call it the "Novell Network Operating System", it started as a simple file server and evolved into today's monster product. Novell doesn't understand operating systems at all, or else they would not have thrown away the Unix that came into their hands.

The bottom line is that Windows and Windows products are written with the wrong a priori assumption that there is unlimited memory and CPU power. The net effect is that one can type and print a letter, with all the necessary business fonts and pictures, faster and cleaner using an old stupid 386 running Windows 3.1 with a damaged 40M hard disk and an old word processor than a person sitting with a Pentium 500, Windows 98 and Word97!

Downgrade your software!

Recently we happened to come across a site which gives tips for Windows. Some of the tips help, some don't. The best tip was missing, so here it is. If you have an old PC (which today everyone has; the moment you reach home from the shop the PC is outdated!), resist the temptation to upgrade to the latest Windows. Either keep it the way you got it, or better still, downgrade your software without loss of functionality. For example, if the bottleneck is Windows 98, throw it away and try Windows 95. You use much less storage. It's faster. Word97 etc run much faster too. This is irrespective of the benchmark figures propagated by the vendors of software and, of course, hardware!


IDMS/SQL News is published on behalf of the IDMS WatchDog Group, Palo Alto-Helsinki-Oslo, for free circulation among IDMS users worldwide. IDMS/SQL News is not a CA publication. CA-IDMS/DB, CA-IDMS/DC and CA-ADS are registered trademarks of Computer Associates International Inc. CICS, IMS-DB/DC and DB2 are registered trademarks of IBM Corporation. Technical examples are only guidelines, and modification might be required in certain situations and operating systems. Opinions expressed are those of the authors and do not represent the views of IDMS clients or related vendors. Permission is hereby granted to use or reproduce the information, only to IDMS customers and consultants, provided the material is distributed free of charge.
