The bad old early days, actually
(You know, if I really wanted you to get a feel for the early days of microcomputers, I could make you read this whole page in green on black. Naah, I'm not that cruel. Anyway...)
I first got involved with micros back in '75, when I was working as a lab technician at the Auditory Research Lab. The Auditory Lab, an offshoot of Princeton's psych department, was run by Ernest Glen Wever, one of the founders of hearing research. We had a little campus of our own over on Forrestal Road, with eight buildings carefully constructed to provide complete acoustical and electrical isolation for the delicate measurements Glen was making. He and his cohort of researchers were looking at the electrical potentials in the auditory nerves of all kinds of animals: fish, dolphins, bats (yes, we had vampire bats, among others), chinchillas, lizards, frogs, giant sea turtles, penguins...you name it. It was an interesting place to work, to say the least.
One couple of married PhDs, Jim and Kathleen Lang, had ambitious plans to test the hearing of turtles (the small, painted kind—Chrysemys picta and C. scripta) using an elaborate automated Skinner-box sort of setup. The automation was supposed to come from microcomputers, which at that time were just becoming available.
Now if you were a techie like me in those days and read Popular Electronics faithfully from cover to cover, you'd have seen a big story splashed across the front of the January 1975 issue: build your own microcomputer, it said, for only $400. The featured machine, dubbed the Altair 8800 by its designers (a tiny firm in Albuquerque called MITS), had binary toggle switches for input and rows of red LEDs for output.
As sold (in kit form), the Altair 8800 couldn't actually be used for anything. But it was a real computer! If you spent a few hundred bucks more, you could add 4K of RAM to it...and an interface to a Teletype machine with a paper tape reader/punch that would let you store your programs...and maybe start to actually do something with it. Assuming you could write all your own software, that is. There wasn't any available commercially. (Later on, a two-man company called Microsoft wrote "Altair BASIC," copying its features from an existing product. Its owners, a couple of cocky college kids named Billy Gates and Paul Allen, would go on to make large amounts of money by repeatedly appropriating the ideas of others.)
Altairs and assembly language
The Altair was what the Langs pinned their hopes on. They read all the computer-hobbyist newsletters that were springing up, and they signed themselves and me up for a course in 8080 assembly language programming at Trenton State College. The course was fun (and incidentally, the only formal education I ever got in computers), and I emerged from it with a long, hand-assembled program to run the turtle tester.
Only trouble was, we didn't have a computer that could run it. Of the three Altairs the Langs had bought, we never got one to run for more than a day at a time. One afternoon when all the stars and planets were correctly aligned, I did succeed in adding 010 to 100 and obtaining 110, but that was about as far as we ever got...although I soon had distinctive groove-shaped calluses on my fingers from flipping the Altairs' toggle switches to enter programs in binary. After a few months I began to realize that if I didn't have a computer to practice on, my newfound programming skills would rapidly wither away. The Altairs were a lost cause. So I took my savings and bought a computer for myself. It was a Poly-88 from Polymorphic Systems, and I named it "Fred."
Fred was a second-generation S-100 system—compatible with the Altair, but with advanced features. (And unlike the Altairs at the lab, it actually worked reliably!) I bought it from the Hoboken Computer Works, which was at that time the only computer store in the state of New Jersey. It had a 2 MHz Intel 8080 processor, 8K of RAM and an audiocassette interface for storing files on tape. As purchased, the only software that came with it was the operating system: 1K of code on an EPROM on the motherboard. No programming languages, no applications, no nothing.
Fortunately, the Poly's video-terminal-based operating system (which replaced the Altair's binary switches and LEDs) was a pretty good one. And I had the 8080 assembly language class fresh in my mind. So I started programming, writing out my assembly language in pencil on yellow legal pads. I'd leave all the jump addresses blank, then when I got done and knew where all the routines were, I'd go back, calculate the hexadecimal addresses and fill them in. Then I'd type in the program (in hex) from the keyboard and run it. This was a big step up from binary toggle switches!
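For the curious, the idea behind that pencil-and-legal-pad process is what we'd now call a two-pass assembler: lay out the instructions first and note where every label lands, then go back and fill in the jump addresses. Here's a modern-day sketch in Python (an illustration only, not anything I ran back then; the opcode bytes are genuine 8080, but the little input format is made up for the example):

```python
# Toy two-pass assembler for a tiny 8080 subset.
# Real 8080 encodings: NOP = 0x00, INR A = 0x3C, JMP = 0xC3 + 16-bit address.

def assemble(lines, origin=0x0000):
    opcodes = {"NOP": [0x00], "INR A": [0x3C]}   # one-byte instructions
    labels, program = {}, []

    # Pass 1: walk the source, recording each label's address and
    # leaving every JMP target "blank" (just the label name for now).
    addr = origin
    for line in lines:
        line = line.strip()
        if line.endswith(":"):                    # label definition
            labels[line[:-1]] = addr
        elif line.startswith("JMP"):
            program.append(("JMP", line.split()[1]))
            addr += 3                             # opcode + 2 address bytes
        else:
            program.append(("OP", line))
            addr += len(opcodes[line])

    # Pass 2: all addresses are now known, so emit the final bytes.
    out = []
    for kind, arg in program:
        if kind == "JMP":
            target = labels[arg]
            out += [0xC3, target & 0xFF, target >> 8]   # little-endian
        else:
            out += opcodes[arg]
    return bytes(out)

code = assemble(["start:", "INR A", "JMP start"])
print(code.hex(" "))   # 3c c3 00 00
```

Doing pass two by hand—calculating every jump target in hex and penciling it into the blanks—was exactly the tedious part that a real assembler would later take off my hands.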
I wanted to do two things most of all: word processing and graphics. There was one commercial word processor on the market—Michael Shrayer's "Electric Pencil"—but it sold for $75, which I thought was outrageous. So I wrote my own word processor. And then I wrote my own graphics software. The Poly had 128 x 48 pixel graphics resolution—advanced for the time!—so I built my own paint program (though of course I didn't call it that), and later added animation capabilities—the program could cycle through a 16-frame sequence at several frames per second. The artwork I created was not impressive overall (at that resolution, how could it be?), though a few images had a sort of stylized, geometric charm. Still, I was learning computer graphics in the best way: by implementing the programs myself. I had to think carefully about what features were important, how they should work, and then write the assembly-language code to make them happen. It was an exciting time.
Eventually I got an assembler, and then a BASIC interpreter (not the Microsoft one), which made programming considerably easier—but that first year of hand-assembling 8080 code was a wonderful education, and some of the most intense and enjoyable programming I've ever done. Later I bought a printer mechanism—a basket-case IBM Selectric salvaged from a mainframe installation. I designed and wire-wrapped the interfaces needed; then wrote driver software so my word processor could control it...I was in heaven! The crowning glory was when I saved up my pennies to buy a $600 floppy disk drive and controller (in kit form) from Northstar Systems. I could now store a whopping 90K on one little disk, and it was many times faster than the cassette tapes.
Flying with Dumbo
But time and technology move on, and a few years later Fred was starting to look long in the tooth. Also, Polymorphic Systems had gone under, like the vast majority of early personal computer companies. So I moved up to a Ferguson Big Board—a kit-built single-board computer with a hot new Z80 processor and all the trimmings, including 64K of RAM. After I got it soldered together, I bought an industrial-surplus video terminal housing and mounted the board, a new 8" floppy drive and a 12" monitor inside. Because the case was huge and white, it reminded me of a white elephant...so with that tenuous connection, I named it Dumbo.
Dumbo was fast—after some homebrew circuit-board modifications, I got it to run at 5 MHz—and for the first time I had a machine with commercial software: the CP/M operating system and the WordStar word processor. WordStar was the first of the "feature-laden behemoth" school of word processors (it was to be succeeded in the marketplace by the even bigger and uglier WordPerfect, and finally by MS Word, the apatosaurus of word processors). And I had a keyboard to match: a surplus Micro Switch keyboard that had over forty function keys. I got some "relegendable" keycaps (you could slip a piece of paper under the clear top to relabel the key) and wrote a keyboard macro utility that caused them to generate my forty most commonly used WordStar command sequences. That allowed me to turn off WordStar's annoying menu display, which normally filled a third of the screen.
The only thing Dumbo didn't have was graphics. It was a great word-processing machine, but the display was strictly text. To remedy this shortcoming I picked up an Atari 400 at Toys 'R' Us for seventy-five bucks, thinking I'd turn it into a slave graphics terminal for Dumbo. But before I got very far with that plan, something happened: Apple introduced the Macintosh. That changed all my plans.
Taking Apple seriously
You have to understand, I had looked down on Apple up to this point. Yes, I'd seen the Apple ][ prototype when Wozniak and Jobs had brought it to the first Atlantic City computer fair in 1976, but I'd dismissed it as little more than a game machine. Oh, I admired Woz's cleverness in minimizing the number of parts required—but as far as being a useful machine was concerned, the Apple ][ had three big strikes against it. First, it could only display 40 characters per line. Even printing in large (10-pitch) type, you needed a minimum of 64 characters per line to do any serious word processing, and the industry standard was 80. Second, the original Apple ][ could only display uppercase letters and numbers—useless for word processing. Third, the text on the screen looked godawful. Woz's oh-so-clever minimalist video circuitry resulted in "confetti characters" with multicolored fringes. In short, nobody in their right mind would try to use this machine for text editing, and that was my primary use for a computer. From my point of view the Apple ][ was primarily a game machine, and I wasn't really into games.
But the Mac was a whole different kettle of fish. I had been reading the publications coming out of Xerox's PARC research facility since the Seventies, drooling over the graphical interface, WYSIWYG text and art and animation capabilities of their hundred-thousand-dollar STAR system. I'd been hearing about Apple's Lisa, which brought a much more refined version of the PARC concepts to market, for a year...while thinking "That sounds pretty cool, but $10,000? Forget it!" Now the technology of my dreams was within my grasp, and I wanted it—badly. One look at MacPaint and I was in love.
The screen that changed my life
—more to come...when I get around to it!—