Oliver Müller is managing director and CEO of OMSE Software Engineering GmbH. He is responsible for the software engineering, IT security and nearshoring divisions.
Pronounced dead decades ago, the mainframe is still alive. Big business wouldn't be possible without the dinosaurs of information technology. At the back end of the system networks processing financial transactions, stock exchange orders or flight reservations, you will often find mainframes even today. You don't see or feel them, but they are still there -- hidden behind modern web interfaces or sheltered from direct user interaction by front-end servers running Unix or Windows. Operating systems such as z/OS, z/TPF, OS2200, OpenVMS and BS2000/OSD are the dinosaurs of information technology, but in contrast to their biological counterparts they are still participating in evolution, in the hidden background of our digital life.
Twenty years of postulating the mainframe's death have left their marks. The mainframe field is suffering from a skill shortage. Statistics tell us that the typical mainframe expert is in their late 40s or early 50s. Young professionals don't see a career in the "dead end" mainframe, so youngsters are rarely seen in mainframe departments. Last but not least, people who know both worlds -- modern, "state of the art" systems like Unix and Windows as well as legacy systems -- are hard to find. Due to this lack of knowledge, to many of us the field of mainframes seems to be a mythical secret science with its own magical vocabulary, left in a fog of hearsay and half-knowledge. In this intimidating environment, things are taken as they are, without questioning.
It is no wonder that in many project meetings you can hear statements such as "the only way to access a mainframe is by TELNET and FTP" or "we can only transfer the data unencrypted by FTP because the mainframe doesn't understand SFTP". The words are still echoing off the conference room walls, and heads have already started nodding. The mainframe is still understood as a big old box stuck at the technical level of the 1960s and 1970s -- but that's wrong! The mainframe is a dinosaur, but it is still alive and evolving. According to Darwin, surviving evolution means fitting well into a changing environment -- and the mainframe fits well into a world made of cryptography and certificates, driven by FTPS and SFTP.
Because z/OS dominates the mainframe sector with a market share of over 90 percent, I will concentrate on z/OS in this article. The principles discussed apply, more or less, to other contemporary mainframe and mid-range systems, too. Besides connecting Big Iron through application- or middleware-specific connectors, such as those for CICS, IMS, MQ or DB2, it is common to use FTP to transfer data for scheduled batch processing. I will focus on FTP transfers and show substitutes for this insecure protocol.
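To give a first taste of what such a substitute looks like from the client side, here is a minimal FTPS upload sketch in Python using the standard library's ftplib.FTP_TLS (explicit TLS as per RFC 4217). The host name, credentials and file names are placeholders, and the sketch assumes the mainframe's FTP server has TLS enabled:

```python
import ftplib

def upload_via_ftps(host, user, password, local_path, remote_name):
    """Upload a file over FTPS (FTP with explicit TLS)."""
    with ftplib.FTP_TLS(host) as session:
        session.login(user, password)   # login() negotiates TLS first,
                                        # so credentials travel encrypted
        session.prot_p()                # protect the data connection, too
        with open(local_path, "rb") as fp:
            session.storbinary(f"STOR {remote_name}", fp)
```

Without the prot_p() call only the control channel (commands and passwords) would be encrypted; the file contents would still cross the wire in the clear.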
z/OS can be split into three parts or operational environments:
- Interactive computing using the Time Sharing Option (TSO)
- Batch processing using the Job Entry Subsystem (JES) and controlled by programs written in Job Control Language (JCL)
- UNIX System Services (USS) -- the POSIX compatible environment
TSO is a command-line environment similar to COMMAND.COM on DOS, the Unix shell or Windows' PowerShell. #1 and #2 above form the core of z/OS, which is still referred to as MVS ("Multiple Virtual Storage") -- the old name from the days when the operating system had no Unix-compatible subsystem. #3 is the ticket into a more familiar environment: like other legacy systems, z/OS provides a POSIX- and Unix-compatible environment.
Because the MVS filesystem is completely different from those used in Unix environments, z/OS has to deal with two filesystems. The MVS filesystem is a flat system that doesn't support structures like directories or folders. Only inside special (partitioned) data sets do similar structures exist -- and these structures are not part of the MVS filesystem; they belong to the internal organization of the data set.
What is called a "file" on other platforms goes by "data set" in MVS. A data set stores data as a set of records of a defined format. Data sets need to be allocated before use; in other words, a data set has a defined initial size. For flexibility, this initial size can be grown by predefined "extents". In contrast, a file on a Unix filesystem stores data as a stream of bytes: the initial size is zero, and the file can grow as long as there is free disk space or until a quota limit is reached.
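The record orientation can be illustrated with a small Python sketch. The class below is purely illustrative -- a toy model, not a real MVS interface -- and the 80-byte record length mirrors the classic 80-column punch card:

```python
LRECL = 80  # fixed logical record length, like the classic 80-column card

class FixedRecordDataSet:
    """Toy model of an MVS data set with fixed-length records:
    every record occupies exactly LRECL bytes, padded with blanks."""
    def __init__(self):
        self.records = []

    def write(self, text):
        # each logical line becomes one record, truncated or padded to LRECL
        self.records.append(text[:LRECL].ljust(LRECL))

    def read(self, n):
        return self.records[n].rstrip()

ds = FixedRecordDataSet()
ds.write("HELLO FROM MVS")
print(len(ds.records[0]))   # → 80: the record is blank-padded to full length
```

A Unix file, by contrast, would simply store the fourteen bytes of the string; any record boundaries would have to be encoded in the data itself, for example with newline characters.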
A data set's name (DSN) consists of multiple parts separated by dots. Each of these parts is one to eight (case-insensitive) characters long and is called a "qualifier". The first one is the "high level qualifier" (HLQ); it has a special function in the data management and organization of z/OS. The last one is called the "low level qualifier" (LLQ). In some cases it indicates the type of data stored inside the data set and can be compared to the file name extension on Unix or Windows. Valid examples of DSNs are: SYS1.PROCLIB, USER.TOOLS.CNTL, OTTO.HELLOW.COB.
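These naming rules can be captured in a short Python check. The sketch below validates only the basic structure; the 44-character total limit and the "national" characters $, #, @ allowed in qualifiers are standard MVS rules not spelled out above:

```python
import re

# 1-8 chars per qualifier; first char a letter or national character
QUALIFIER = r"[A-Z$#@][A-Z0-9$#@-]{0,7}"

def is_valid_dsn(name):
    """Check the basic MVS data set name rules: dotted qualifiers of
    one to eight characters each, at most 44 characters in total."""
    name = name.upper()                  # DSNs are case-insensitive
    if len(name) > 44:
        return False
    return re.fullmatch(rf"{QUALIFIER}(\.{QUALIFIER})*", name) is not None

print(is_valid_dsn("SYS1.PROCLIB"))      # → True
print(is_valid_dsn("otto.hellow.cob"))   # → True (case-insensitive)
print(is_valid_dsn("TOOLONGQUALIFIER"))  # → False: qualifier exceeds 8 chars
```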
The Unix filesystems used by USS are very different from the MVS filesystem. They are hierarchically organized into levels of directories that hold the files. The directory levels are separated by slashes, and file and directory names are case-sensitive. All of this is incompatible with MVS's idea of data sets, so z/OS confines whole USS filesystems inside special data sets. These data sets are presented to USS as if they were disks or (logical) volumes -- a principle similar to hard disk images in VMware, VirtualBox and other virtualization environments. There are two types of Unix filesystems in USS: the Hierarchical File System (HFS) and the z/OS File System (zFS). zFS is simply the newer of the two and offers more features.
Processes are called "address spaces" in MVS. Today, many MVS address spaces put one foot into USS in order to operate in both the MVS and the Unix world. Examples of such programs are WebSphere AS and the z/OS Communications Server, which provides FTP and TELNET.
Additional software mentioned in this article includes RACF and ISPF. RACF is an acronym for "Resource Access Control Facility". It is the subsystem that takes care of security and access rights. RACF provides authentication and authorization data, e.g. the user and group database, as well as auditing functionality. The primary alternatives to IBM's RACF are ACF2 and Top Secret by CA. We will focus here on RACF only.
ISPF stands for "Interactive System Productivity Facility". This program provides a full-screen editor and a user interface organized in "panels". A panel can be understood as a screen with menus and dialogs, or as a rudimentary kind of window. ISPF provides panels that hide TSO commands and run them in the background. Many products and software suites -- including RACF -- use ISPF to provide a user-friendly interface. Hence ISPF has gained a role comparable to that of the X Window System on Unix and OpenVMS, or of Windows on DOS long ago.