Sunday, January 3, 2010

When Is a Year a Leap Year?

Triggered by the preceding post, an afterthought:

Do you know how to tell whether a year is a leap year? Or do you only *think* you know?

Here's how it goes: A year is a leap year if
  • It is evenly divisible by 4
    • but not evenly divisible by 100
      • unless it is evenly divisible by 400
        • and not evenly divisible by 4000 (Gotcha!)
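The rule above translates directly into code. Here is a minimal Python sketch; note that the final divisible-by-4000 test is a proposed refinement (it has never been officially adopted into the Gregorian calendar, and won't matter until the year 4000 anyway), while the first three tests are the standard rule.

```python
def is_leap_year(year: int) -> bool:
    """Leap-year test as described above: divisible by 4,
    but not by 100, unless by 400 -- and (per the proposed
    refinement, not the official calendar) not by 4000."""
    if year % 4 != 0:
        return False       # not divisible by 4: common year
    if year % 100 != 0:
        return True        # divisible by 4 but not a century: leap year
    if year % 400 != 0:
        return False       # century not divisible by 400: common year
    return year % 4000 != 0  # the "Gotcha!" refinement

print(is_leap_year(2024))  # True:  divisible by 4, not a century
print(is_leap_year(1900))  # False: century not divisible by 400
print(is_leap_year(2000))  # True:  divisible by 400
```

Most people remember only the first test, which is why 1900 and 2100 trip them up.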

Y2K: A Dud?

Well, we've finished with the decade beginning with the year 2000, and now there are a number of lists and videos making the rounds, representing the views of their creators on what were the best, most, biggest, worst, whatever, events, songs, videos, movies and so on, of the decade.

One item of agreement among several of the lists and videos seems to be that Y2K was a disappointment, a bust that didn't live up to its hype, and was therefore overinflated to begin with.

'Tain't so, it says here.

Home computers will be mentioned here only in passing, as my direct experience was elsewhere. However, I do want to say that Bill Gates got away with bloody murder in testimony before Congress. Asked when Microsoft began planning for Y2K, he replied "From day one." No one thought to ask him, "Well then, why the 1999 flurry of fixes and patches for all the versions of Windows?"

Y2K didn't live up to its hype because billions - quite literally, billions - of dollars were spent to ensure the outcome that in fact became reality.

I was an employee of a contracting firm and was farmed out to IBM during the last half of 1998 and all of 1999 (and beyond, but that's not relevant here), and I know that IBM paid my company more than a quarter of a million dollars for my Y2K related work. That's one person in one tiny corner of one corporation in one country. At a guess, more than three million dollars were spent on Y2K in the ten person department I was attached to.

Some of the software maintained by that department was used internally by IBM, and some in support of *huge* clients, multi-billion dollar corporations.

Why was all this necessary? How did it come about?

The problem existed at all levels of computers and programming, from the PC you used at home or at work to mainframes, the "big iron" used for decades by companies - think HAL from 2001 and smaller versions.

At the beginning, both memory and storage for computers were much more expensive than today, and bytes (think "characters") that did not absolutely have to be present were not, simply in order to save money. But even then there were special programming routines in many systems to deal with differences between centuries.

If a program needed to determine the age of an individual, that age was calculated. The date of birth was often carried in a YYMMDD format, so someone born on March 21, 1973 had a birth date of 730321. If a program running on February 23, 1994 needed to know that person's age, subtractions were done: 94 minus 73, 02 minus 03 (oops, a negative number, so subtract 1 from the "years" result and add 12 to the "months" result), and 23 minus 21. There were several other approaches, but they all accomplished the same thing - determining the age of the individual.

But what if the person had been born on March 21, 1897? Now the year subtraction routine running on February 23, 1994 had to subtract 97 from 94, which produced a nonsense negative result, so 100 had to be added to it.

Undetectable in many situations was the case where the person was born in 1893. The calculation - including 94 minus 93 - showed a 101-year-old person to be 1 year old.
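The failure modes just described can be reproduced in a few lines. This is a minimal Python sketch of the two-digit arithmetic (the function name and YYMMDD string layout are illustrative, not any particular system's code):

```python
def age_from_two_digit_dates(birth_yymmdd: str, today_yymmdd: str) -> int:
    """Whole-year age computed exactly as the two-digit routines
    above did it -- with no century information at all."""
    by, bm, bd = int(birth_yymmdd[:2]), int(birth_yymmdd[2:4]), int(birth_yymmdd[4:])
    ty, tm, td = int(today_yymmdd[:2]), int(today_yymmdd[2:4]), int(today_yymmdd[4:])
    years = ty - by
    if (tm, td) < (bm, bd):   # birthday hasn't happened yet this year
        years -= 1
    return years

today = "940223"                                   # February 23, 1994
print(age_from_two_digit_dates("730321", today))   # 20 -- correct
print(age_from_two_digit_dates("970321", today))   # -4 -- born 1897: nonsense, needs +100
print(age_from_two_digit_dates("930121", today))   # 1  -- born 1893: silently wrong, really 101
```

The 1897 case at least announces itself with a negative number; the 1893 case returns a perfectly plausible answer, which is what made it undetectable.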

OK, so much for that. In the beginning, then, the elimination from data of these numbers that showed the century was perhaps necessary, perhaps only highly desirable, but in any case a common practice.

The first warnings that this would someday present a problem, at least the first warnings that I recall, came in the 1970's. Naturally, people in the 1960's knew of the potential, but 2000 was *so* far away. Surely the system would have been replaced by then, and the replacement would include the century.

Now this next is only *my* perspective. It is certainly true in many cases but equally certainly not true in at least some cases.

Businesses are universally reluctant to spend money if they can find a workable way around it, and reasonably so. Unfortunately, this can lead to unreasonable solutions. In the 1980's, more people began saying that there would be a year 2000 problem. But as existing systems became inadequate for other reasons - new products, higher volumes, whatever - companies consistently took the cheaper path and modified existing systems rather than spend the time and money to rewrite them. The systems that would "surely" be replaced were in fact hanging around.

By the early to mid-1990's, everyone knew there would be a problem. But in the late 1980's and early 1990's, many data processing managers - all the way up to the executive level - were reluctant to take the problem to their bosses and say "We have to spend hundreds of thousands (or in many cases millions) of dollars to fix this problem." In some cases they weren't going to be around - in that arena or at that company - come the year 2000, and they simply left the problem to their successors.

Thus in the several years remaining there began a scramble to fix a problem whose existence had been known for more than three decades.

Many companies brought old programmers out of retirement, particularly old COBOL programmers because COBOL had been *very* commonly used. I don't know whether anyone has done a serious study of how much money was spent worldwide on Y2K between say, 1995 and a few days after January 1, 2000, but it is certainly hundreds of billions of dollars.

So yes, Y2K passed *mostly* uneventfully, but what would you expect with that kind of effort and investment?

BTW, birth dates and ages were not the only problem, not by a long shot. Many systems used dates for all kinds of things - payrolls, dates of file creations, elapsed times between iterative processes, navigation, health monitoring devices, etc.

At IBM on December 31 and January 1, we worked in shifts, watching hour by hour as each time zone around the world reached the critical hour, people in each zone ready to learn from and react to any problems that arose in time zones ahead of "us."

Some examples of things that *didn't* go well:
  • In a test of its Y2K changes, one California city's sewage system dumped tons of raw sewage into a public area.

  • Newspapers, public neon date and time displays, and other media showed the date as 19100, as did a "Y2K experts" firm's web site.

  • A video store began charging customers for returning videos 100 years after the due dates.

  • Some ATMs rejected credit cards, "thinking" they had expired more than 90 years earlier.

  • Australian Diners Club statements showed that January purchases occurred before December purchases.

Actually, the problem began earlier, and some manual intervention was required in the late 1990's when inadequately tested "Y2K compliant" software went into production. Products whose expiration dates carried only two digits for the year were determined to have already expired if the real expiration date was "00."
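The "19100" displays mentioned above have a well-known mechanism: many programs kept a year counter that was assumed to stay below 100 (C's `struct tm`, for instance, stores the year as years since 1900) and simply glued a literal "19" in front of it when formatting dates. A minimal Python sketch of that mistake (the function name is illustrative):

```python
def format_date_19(year_counter: int, month: int, day: int) -> str:
    """Format a date the broken way: a hard-coded "19" prefix in
    front of a year counter assumed never to reach 100."""
    return f"19{year_counter}-{month:02d}-{day:02d}"

print(format_date_19(99, 12, 31))   # 1999-12-31 -- fine for a century
print(format_date_19(100, 1, 1))    # 19100-01-01 -- the famous bug
```

At midnight the counter rolled from 99 to 100, and the concatenation dutifully produced the year 19100.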

But these are mouse nuts compared to what could have happened.

You're welcome. Ah ha ha ha ha ha.