Reading suggestion: Free as in Freedom

Here is a copy of the first chapter of the book Free as in Freedom, an opening that feels especially relevant in these times when companies that could help us fight for an open web are caving in, and when the convenience of watching blockbusters from the comfort of home pours money into an anti-knowledge industry that corners us a little more every day…

I promise to come back to these topics many more times; for now, enjoy the read. Emphasis is mine.

For Want of a Printer

I fear the Greeks. Even when they bring gifts.
– Virgil, The Aeneid

The new printer was jammed, again.

Richard M. Stallman, a staff software programmer at the Massachusetts Institute of Technology’s Artificial Intelligence Laboratory (AI Lab), discovered the malfunction the hard way. An hour after sending off a 50-page file to the office laser printer, Stallman, 27, broke off a productive work session to retrieve his documents. Upon arrival, he found only four pages in the printer’s tray. To make matters even more frustrating, the four pages belonged to another user, meaning that Stallman’s print job and the unfinished portion of somebody else’s print job were still trapped somewhere within the electrical plumbing of the lab’s computer network.

Waiting for machines is an occupational hazard when you’re a software programmer, so Stallman took his frustration with a grain of salt. Still, the difference between waiting for a machine and waiting on a machine is a sizable one. It wasn’t the first time he’d been forced to stand over the printer, watching pages print out one by one. As a person who spent the bulk of his days and nights improving the efficiency of machines and the software programs that controlled them, Stallman felt a natural urge to open up the machine, look at the guts, and seek out the root of the problem.

Unfortunately, Stallman’s skills as a computer programmer did not extend to the mechanical-engineering realm. As freshly printed documents poured out of the machine, Stallman had a chance to reflect on other ways to circumvent the printing jam problem.

How long ago had it been that the staff members at the AI Lab had welcomed the new printer with open arms? Stallman wondered. The machine had been a donation from the Xerox Corporation. A cutting edge prototype, it was a modified version of a fast Xerox photocopier. Only instead of making copies, it relied on software data piped in over a computer network to turn that data into professional-looking documents. Created by engineers at the world-famous Xerox Palo Alto Research Facility, it was, quite simply, an early taste of the desktop-printing revolution that would seize the rest of the computing industry by the end of the decade.

Driven by an instinctual urge to play with the best new equipment, programmers at the AI Lab promptly integrated the new machine into the lab’s sophisticated computing infrastructure. The results had been immediately pleasing. Unlike the lab’s old printer, the new Xerox machine was fast. Pages came flying out at a rate of one per second, turning a 20-minute print job into a 2-minute print job. The new machine was also more precise. Circles came out looking like circles, not ovals. Straight lines came out looking like straight lines, not low-amplitude sine waves.

It was, for all intents and purposes, a gift too good to refuse.

Once the machine was in use, its flaws began to surface. Chief among the drawbacks was the machine’s susceptibility to paper jams. Engineering-minded programmers quickly understood the reason behind the flaw. As a photocopier, the machine generally required the direct oversight of a human operator. Figuring that these human operators would always be on hand to fix a paper jam, if it occurred, Xerox engineers had devoted their time and energies to eliminating other pesky problems. In engineering terms, user diligence was built into the system.

In modifying the machine for printer use, Xerox engineers had changed the user-machine relationship in a subtle but profound way. Instead of making the machine subservient to an individual human operator, they made it subservient to an entire networked population of human operators. Instead of standing directly over the machine, a human user on one end of the network sent his print command through an extended bucket brigade of machines, expecting the desired content to arrive at the targeted destination and in proper form. It wasn’t until he finally went to check up on the final output that he realized how little of it had really been printed.

Stallman was hardly the only AI Lab denizen to notice the problem, but he also thought of a remedy. Years before, for the lab’s previous printer, Stallman had solved a similar problem by modifying the software program that regulated the printer, on a small PDP-11 machine, as well as the Incompatible Timesharing System that ran on the main PDP-10 computer. Stallman couldn’t eliminate paper jams, but he could insert software code that made the PDP-11 check the printer periodically, and report jams back to the PDP-10. Stallman also inserted code on the PDP-10 to notify every user with a waiting print job that the printer was jammed. The notice was simple, something along the lines of “The printer is jammed, please fix it,” and because it went out to the people with the most pressing need to fix the problem, chances were that one of them would fix it forthwith.

As fixes go, Stallman’s was oblique but elegant. It didn’t fix the mechanical side of the problem, but it did the next best thing by closing the information loop between user and machine. Thanks to a few additional lines of software code, AI Lab employees could eliminate the 10 or 15 minutes wasted each week in running back and forth to check on the printer. In programming terms, Stallman’s fix took advantage of the amplified intelligence of the overall network.

“If you got that message, you couldn’t assume somebody else would fix it,” says Stallman, recalling the logic. “You had to go to the printer. A minute or two after the printer got in trouble, the two or three people who got messages arrive to fix the machine. Of those two or three people, one of them, at least, would usually know how to fix the problem.”
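
The shape of that hack is easy to picture in code. The sketch below is purely illustrative and is not from the book, nor the actual PDP-11 or ITS code; every function and name in it is hypothetical. It only shows the idea: poll the printer, and when it jams, message exactly the users who have jobs waiting.

import time

# A minimal, hypothetical sketch of the idea described above; the real fix
# lived in the PDP-11 printer program and the ITS spooler on the PDP-10.
# Every name here is invented for illustration.

POLL_INTERVAL_SECONDS = 30

def printer_is_jammed():
    """Stand-in for the PDP-11 periodically checking the printer's status."""
    return False  # pretend the printer is healthy in this demo

def users_with_queued_jobs():
    """Stand-in for reading the print queue: the users waiting on output."""
    return ["rms", "another-hacker"]

def notify(user, message):
    """Stand-in for sending a message to a logged-in user's terminal."""
    print(f"[message to {user}] {message}")

def watch_printer():
    """Poll the printer and, on a jam, tell exactly the people waiting on it."""
    while True:
        if printer_is_jammed():
            for user in users_with_queued_jobs():
                notify(user, "The printer is jammed, please fix it.")
        time.sleep(POLL_INTERVAL_SECONDS)

The point of the design is the targeting: the notice goes only to the people with jobs in the queue, the ones with both the motive and, usually, the know-how to clear the jam.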

Such clever fixes were a trademark of the AI Lab and its indigenous population of programmers. Indeed, the best programmers at the AI Lab disdained the term programmer, preferring the more slangy occupational title of hacker instead. The job title covered a host of activities – everything from creative mirth making to the improvement of existing software and computer systems. Implicit within the title, however, was the old-fashioned notion of Yankee ingenuity. For a hacker, writing a software program that worked was only the beginning. A hacker would try to display his cleverness (and impress other hackers) by tackling an additional challenge: to make the program particularly fast, small, powerful, elegant, or somehow impressive in a clever way. [1]

Companies like Xerox made it a policy to donate their products (and software) to places where hackers typically congregated. If hackers used these products, they might go to work for the company later on. In the 60s and early 70s, they also sometimes developed programs that were useful for the manufacturer to distribute to other customers.

When Stallman noticed the jamming tendency in the Xerox laser printer, he thought of applying the old fix or “hack” to this printer. In the course of looking up the Xerox laser-printer software, however, Stallman made a troubling discovery. The printer didn’t have any software, at least nothing Stallman or a fellow programmer could read. Until then, most companies had made it a form of courtesy to publish source-code files–readable text files that documented the individual software commands that told a machine what to do. Xerox, in this instance, had provided software files only in compiled, or binary, form. If programmers looked at the files, all they would see was an endless stream of ones and zeroes – gibberish.

There are programs, called “disassemblers,” to convert the ones and zeroes into low-level machine instructions, but figuring out what those instructions actually “do” is a long and hard task, known as “reverse engineering.” To reverse engineer this program could have taken more time than five years’ worth of jammed printouts. Stallman wasn’t desperate enough for that, so he put the problem aside.

Xerox’s unfriendly policy contrasted blatantly with the usual practices of the hacker community. For instance, to develop the program for the PDP-11 that ran the old printer, and the program for another PDP-11 that handled display terminals, the AI Lab needed a cross-assembler program to build PDP-11 programs on the PDP-10 main computer. The lab’s hackers could have written one, but Stallman, a Harvard student, found such a program at Harvard’s computer lab. That program was written to run on the same kind of computer, the PDP-10, albeit with a different operating system. Stallman never knew who had written the program, since the source code did not say. But he brought a copy back to the AI Lab. He then altered the source code to make it run on the AI Lab’s Incompatible Timesharing System (ITS). With no muss and little fuss, the AI Lab got the program it needed for its software infrastructure. Stallman even added a few features not found in the original version, making the program more powerful. “We wound up using it for several years,” Stallman says.

From the perspective of a 1970s-era programmer, the transaction was the software equivalent of a neighbor stopping by to borrow a power tool or a cup of sugar from a neighbor. The only difference was that in borrowing a copy of the software for the AI Lab, Stallman had done nothing to deprive anyone else of the use of the program. If anything, other hackers gained in the process, because Stallman had introduced additional features that other hackers were welcome to borrow back. For instance, Stallman recalls a programmer at the private engineering firm, Bolt, Beranek & Newman, borrowing the program. He made it run on Twenex and added a few additional features, which Stallman eventually reintegrated into the AI Lab’s own source-code archive. The two programmers decided to maintain a common version together, which had the code to run either on ITS or on Twenex at the user’s choice.

“A program would develop the way a city develops,” says Stallman, recalling the software infrastructure of the AI Lab. “Parts would get replaced and rebuilt. New things would get added on. But you could always look at a certain part and say, ‘Hmm, by the style, I see this part was written back in the early 60s and this part was written in the mid-1970s.’”

Through this simple system of intellectual accretion, hackers at the AI Lab and other places built up robust creations. Not every programmer participating in this culture described himself as a hacker, but most shared the sentiments of Richard M. Stallman. If a program or software fix was good enough to solve your problems, it was good enough to solve somebody else’s problems. Why not share it out of a simple desire for good karma?

This system of cooperation was being undermined by commercial secrecy and greed, leading to peculiar combinations of secrecy and cooperation. For instance, computer scientists at UC Berkeley had built up a powerful operating system called BSD, based on the Unix system they had obtained from AT&T. Berkeley made BSD available for the cost of copying a tape, but would only give these tapes to schools that could present a $50,000 source license obtained from AT&T. The Berkeley hackers continued to share as much as AT&T let them, but they had not perceived a conflict between the two practices.

Likewise, Stallman was annoyed that Xerox had not provided the source-code files, but not yet angry. He never thought of asking Xerox for a copy. “They had already given us the laser printer,” Stallman says. “I could not say they owed us something more. Besides, I took for granted that the absence of source code reflected an intentional decision, and that asking them to change it would be futile.”

Good news eventually arrived: word had it that a scientist at the computer-science department at Carnegie Mellon University had a copy of the laser printer source code.

The association with Carnegie Mellon did not augur well. In 1979, Brian Reid, a doctoral student there, had shocked the community by refusing to share his text-formatting program, dubbed Scribe. This text formatter was the first to have mark-up commands oriented towards the desired semantics (such as “emphasize this word” or “this paragraph is a quotation”) rather than low-level formatting details (“put this word in italics” or “narrow the margins for this paragraph”). Instead Reid sold Scribe to a Pittsburgh-area software company called Unilogic. His graduate-student career ending, Reid says he simply was looking for a way to unload the program on a set of developers that would take pains to keep it from slipping into the public domain. (Why one would consider such an outcome particularly undesirable is not clear.) To sweeten the deal, Reid also agreed to insert a set of time-dependent functions – “time bombs” in software-programmer parlance – that deactivated freely copied versions of the program after a 90-day expiration date. To avoid deactivation, users paid the software company, which then issued a code that defused the internal time-bomb anti-feature.

For Stallman, this was a betrayal of the programmer ethos, pure and simple. Instead of honoring the notion of share-and-share alike, Reid had inserted a way for companies to compel programmers to pay for information access. But he didn’t think deeply about the question, since he didn’t use Scribe much.

Unilogic gave the AI Lab a gratis copy to use, but did not remove or mention the time bomb. It worked, for a while; then one day a user reported that Scribe had stopped working. System hacker Howard Cannon spent hours debugging the binary until he found the time-bomb and patched it out. Cannon was incensed, and wasn’t shy about telling the other hackers how mad he was that Unilogic had wasted his time with an intentional bug.

Stallman had a Lab-related reason, a few months later, to visit the Carnegie Mellon campus. During that visit, he made a point of looking for the person reported to have the printer software source code. By good fortune, the man was in his office.

In true engineer-to-engineer fashion, the conversation was cordial but blunt. After briefly introducing himself as a visitor from MIT, Stallman requested a copy of the laser-printer source code that he wanted to modify. To his chagrin, the researcher refused.

“He told me that he had promised not to give me a copy,” Stallman says.

Memory is a funny thing. Twenty years after the fact, Stallman’s mental history tape is blank in places. Not only does he not remember the motivating reason for the trip or even the time of year during which he took it, he also has no recollection of who was on the other end of the conversation. According to Reid, the person most likely to have fielded Stallman’s request is Robert Sproull, a former Xerox PARC researcher and current director of Sun Laboratories, a research division of the computer-technology conglomerate Sun Microsystems. During the 1970s, Sproull had been the primary developer of the laser-printer software in question while at Xerox PARC. Around 1980, Sproull took a faculty research position at Carnegie Mellon where he continued his laser-printer work amid other projects.

When asked directly about the request, however, Sproull draws a blank. “I can’t make a factual comment,” writes Sproull via email. “I have absolutely no recollection of the incident.”

“The code that Stallman was asking for was leading-edge, state-of-the-art code that Sproull had written in the year or so before going to Carnegie Mellon,” recalls Reid. If so, that might indicate a misunderstanding that occurred, since Stallman wanted the source for the program that MIT had used for quite some time, not some newer version. But the question of which version never arose in the brief conversation.

In talking to audiences, Stallman has made repeated reference to the incident, noting that the man’s unwillingness to hand over the source code stemmed from a nondisclosure agreement, a contractual agreement between him and the Xerox Corporation giving the signatory access to the software source code in exchange for a promise of secrecy. Now a standard item of business in the software industry, the nondisclosure agreement, or NDA, was a novel development at the time, a reflection of both the commercial value of the laser printer to Xerox and the information needed to run it. “Xerox was at the time trying to make a commercial product out of the laser printer,” recalls Reid. “They would have been insane to give away the source code.”

For Stallman, however, the NDA was something else entirely. It was a refusal on the part of some CMU researcher to participate in a society that, until then, had encouraged software programmers to regard programs as communal resources. Like a peasant whose centuries-old irrigation ditch had grown suddenly dry, Stallman had followed the ditch to its source only to find a brand-spanking-new hydroelectric dam bearing the Xerox logo.

For Stallman, the realization that Xerox had compelled a fellow programmer to participate in this newfangled system of compelled secrecy took a while to sink in. In the first moment, he could only see the refusal in a personal context. “I was so angry I couldn’t think of a way to express it. So I just turned away and walked out without another word,” Stallman recalls. “I might have slammed the door. Who knows? All I remember is wanting to get out of there. I went to his office expecting him to cooperate, so I had not thought about how I would respond if he refused. When he did, I was stunned speechless as well as disappointed and angry.”

Twenty years after the fact, the anger still lingers, and Stallman presents the event as one that made him confront an ethical issue, though not the only such event on his path. Within the next few months, a series of events would befall both Stallman and the AI Lab hacker community that would make 30 seconds worth of tension in a remote Carnegie Mellon office seem trivial by comparison. Nevertheless, when it comes time to sort out the events that would transform Stallman from a lone hacker, instinctively suspicious of centralized authority, to a crusading activist applying traditional notions of liberty, equality, and fraternity to the world of software development, Stallman singles out the Carnegie Mellon encounter for special attention.

“It was my first encounter with a nondisclosure agreement, and it immediately taught me that nondisclosure agreements have victims,” says Stallman, firmly. “In this case I was the victim. [My lab and I] were victims.”

Stallman later explained, “If he had refused me his cooperation for personal reasons, it would not have raised any larger issue. I might have considered him a jerk, but no more. The fact that his refusal was impersonal, that he had promised in advance to be uncooperative, not just to me but to anyone whatsoever, made this a larger issue.”

Although previous events had raised Stallman’s ire, he says it wasn’t until his Carnegie Mellon encounter that he realized the events were beginning to intrude on a culture he had long considered sacrosanct. He said, “I already had an idea that software should be shared, but I wasn’t sure how to think about that. My thoughts weren’t clear and organized to the point where I could express them in a concise fashion to the rest of the world. After this experience, I started to recognize what the issue was, and how big it was.”

As an elite programmer at one of the world’s elite institutions, Stallman had been perfectly willing to ignore the compromises and bargains of his fellow programmers just so long as they didn’t interfere with his own work. Until the arrival of the Xerox laser printer, Stallman had been content to look down on the machines and programs other computer users grimly tolerated.

Now that the laser printer had insinuated itself within the AI Lab’s network, however, something had changed. The machine worked fine, barring the paper jams, but the ability to modify software according to personal taste or community need had been taken away. From the viewpoint of the software industry, the printer software represented a change in business tactics. Software had become such a valuable asset that companies no longer accepted the need to publicize source code, especially when publication meant giving potential competitors a chance to duplicate something cheaply. From Stallman’s viewpoint, the printer was a Trojan Horse. After a decade of failure, software that users could not change and redistribute – future hackers would use the term “proprietary” software – had gained a foothold inside the AI Lab through the sneakiest of methods. It had come disguised as a gift.

That Xerox had offered some programmers access to additional gifts in exchange for secrecy was also galling, but Stallman takes pains to note that, if presented with such a quid pro quo bargain at a younger age, he just might have taken the Xerox Corporation up on its offer. The anger of the Carnegie Mellon encounter, however, had a firming effect on Stallman’s own moral lassitude. Not only did it give him the necessary anger to view such future offers with suspicion, it also forced him to turn the situation around: what if a fellow hacker dropped into Stallman’s office someday and it suddenly became Stallman’s job to refuse the hacker’s request for source code?

“When somebody invited me to betray all my colleagues in that way, I remembered how angry I was when somebody else had done that to me and my whole lab,” Stallman says. “So I said, ‘Thank you very much for offering me this nice software package, but I can’t accept it on the conditions that you’re asking for, so I’m going to do without it.’ ”

It was a lesson Stallman would carry with him through the tumultuous years of the 1980s, a decade during which many of his MIT colleagues would depart the AI Lab and sign nondisclosure agreements of their own. They may have told themselves that this was a necessary evil so they could work on the best projects. For Stallman, however, the NDA called the moral legitimacy of the project into question. What good is a technically exciting project if it is meant to be withheld from the community?

As Stallman would quickly learn, refusing such offers involved more than personal sacrifice. It involved segregating himself from fellow hackers who, though sharing a similar distaste for secrecy, tended to express that distaste in a more morally flexible fashion. Refusing another’s request for source code, Stallman decided, was not only a betrayal of the scientific mission that had nurtured software development since the end of World War II, it was a violation of the Golden Rule, the baseline moral dictate to do unto others as you would have them do unto you.

Hence the importance of the laser printer and the encounter that resulted from it. Without it, Stallman says, his life might have followed a more ordinary path, one balancing the material comforts of a commercial programmer with the ultimate frustration of a life spent writing invisible software code. There would have been no sense of clarity, no urgency to address a problem others weren’t addressing. Most importantly, there would have been no righteous anger, an emotion that, as we soon shall see, has propelled Stallman’s career as surely as any political ideology or ethical belief.

“From that day forward, I decided this was something I could never participate in,” says Stallman, alluding to the practice of trading personal liberty for the sake of convenience – Stallman’s description of the NDA bargain – as well as the overall culture that encouraged such ethically suspect deal-making in the first place. “I decided never to make other people victims as I had been a victim.”


The complete book can be downloaded and read in several formats here, or elsewhere (search for the title on any search engine); the same link also offers the option to buy the paper edition. If you simply want to donate toward a future with fewer Trojan horses and fewer restrictions on freedom and on access to information traded away for convenience, the non-profit foundations I recommend are the FSF, the EFF, Wikimedia, and Creative Commons.

PS: I would normally link to Mozilla in this list as well, but I am extremely angry about the betrayal by Mitchell Baker and the other higher-ups in Mountain View who agreed to open yet another window for Greek gifts of this kind in the browser that claimed to champion an Open Web. They can go blow themselves up.



Fabricio Campos Zuardi
