An interesting thread of discussion has come up over on Highered Intellect about the "Zero Tolerance Policy" our schools are adopting these days, in the wake of the school shootings at Columbine and Jonesboro and Paducah and all the rest of them. (That is the first of several posts by its co-bloggers, Michael Lopez and William Moon, discussing the issue and a few side issues as well.)
"Zero Tolerance" is one of those things that sounds great in theory, but in reality it leads to those whacko incidents we've all read about: kids getting suspended for having fingernail clippers or pocket-knives of whatever. The whole "Zero Tolerance" thing is pretty goofy (as George Carlin once observed, "You can probably beat someone to death with the Sunday New York Times"), but it's probably not going away. And not because it's really protecting our kids, but because it's part of a growing trend to replace actual, human thought with impersonal process.
I used to see this kind of thing all the time in my various jobs, especially at the first restaurant company I worked for, where the upper managers were constantly waxing poetic about "systems". Everything had to be a system. If something went wrong, if we had a month in which we ran bad sales numbers or missed our labor targets or ran higher expenditures than usual, it was because we either weren't allowing our "systems" to work or because "we didn't have systems in place". It got to be a monthly ritual of sorts, when the Area Manager would come around and lecture us on our need for "systems". Of course, he was less than helpful when asked specifically what he had in mind for new "systems".
Sometimes the systems were helpful, but they tended to break down once a point was reached beyond which the system's standard assumptions no longer applied. But that's not even my main problem with them; it's the way systems and their closely related species, policies, quickly become a crutch to managers and insinuate themselves into the process until they're taken for granted. Thus we have school officials suspending a third-grader for a month because she had a toothpick on her person, and then shrugging and saying, "It's our policy. I can't do anything about it. I have no choice." Or, as I once had to do as a restaurant manager, telling a poor nine-year-old girl that she couldn't post a picture of her missing dog on our bulletin board because it went against our "no solicitation" policy. (That's a real example, and that same type of thing came up a lot. It was all part of my company's attempts to stay "Union-free".) Some would say that the policy is dumb because it doesn't allow for things like that, but I often think it's the reverse: dumb policies and systems are allowed to fester because they allow those in authority to do absurd things without looking like idiots. If we have to be the "bad guy" every now and then, how much easier it is to simply point our finger at "the system" or "the policy" whilst issuing a mealy-mouthed statement of regret. Institutional idiocy is easier to swallow, I suppose, than individual idiocy.
Thus our new paradigm seems to be: "We don't want to make decisions, so we're going to institute systems and policies that make our decisions for us. Yes, the result will be the occasional bad or even horrible decision, but that's preferable to having some one person actually be the villain." It goes to ridiculous lengths. When recently searching for a job, I was informed that I had to make my resume "scannable". At some point, unbeknownst to me, it became standard practice for companies to scan resumes into a computer and let the computer make initial determinations, based on keywords, as to who to interview and who to file in the "Also Ran" box. So, if a well-qualified person falls through a company's cracks, it's not the Human Resources person who's at fault for not paying attention to the resumes in his or her inbox; it's the applicant's fault for not complying with a system that is theoretically put in place to help Human Resources find that person in the first place. Or the bank employee will spread their hands, telling a potential first-time homeowner, "I'm sorry, there's nothing I can do. The computer rejected you." Another example is the well-intentioned "Three strikes" laws, the ones that toss people in prison for life the third time they commit a felony: the idea of "allowing the punishment to fit the crime" is done away with, because that would require actual human thought. Better to let a system make a mistake of overreaching than to allow a human being to make a mistake of underreaching, apparently.
There's a fine line between "The system exists to help us function" and "We're here to make sure the system functions", or more perniciously, "Let the system do your job for you". This is why I find the idea of term limits for elected officials unpalatable: to introduce this idea, to put this system in place, is to tacitly say, "We just can't be trusted to pay attention and make sure these people are doing what they're supposed to be doing, so we'll just put in a system to automatically kick them out when the time comes." The proper role of a system is to help people make decisions. Too often we reverse the process: the role of people is to implement the decisions of the system. As useful as systems are, we become so entranced with them that we actually abdicate our powers of reason in favor of a well-oiled system. And I wonder if our powers of reason don't atrophy as a result.
A system is a tool. Very often, a particular system is a good tool. But as with all tools, there are jobs for which a given system is called for and jobs for which it is not. Slavishly adhering to a system because it's always been there, because it makes things easier, and, well, dammit, because we can't do anything about it anyway (it's, you know, the system) makes us into the carpenter who is so enamored of his brand-new hammer that he refuses to put it down, even when he needs to cut a piece of wood.