Items to fit into your overhead compartment
Today's article, from Nautilus, is even older than most that grab my attention: first published in, apparently, 2013. That's ancient by internet standards.

Well, technically, each species is unique in its own way. But it's unsurprising that humans would be most interested in the uniquity of humans. (I just made that word up, and I like it.)

If you dropped a dozen human toddlers on a beautiful Polynesian island with shelter and enough to eat, but no computers, no cell phones, and no metal tools, would they grow up to be like humans we recognize or like other primates?

That's a lot of restrictions for one experiment. How about we just drop them off on the island?

(Ethics bars the toddler test.)

Annoying.

Neuroscientists, geneticists, and anthropologists have all given the question of human uniqueness a go, seeking special brain regions, unique genes, and human-specific behaviors, and, instead, finding more evidence for common threads across species.

And yet, evidently, there is something that makes humans different from nonhumans. Not necessarily better, mind you. But if there weren't a unique combination of traits that separates a human from a chimpanzee, or a mushroom from a slime mold, we wouldn't put them in different conceptual boxes.

Meanwhile, the organization of the human brain turns out to be far more complex than many anticipated; almost anything you might have read about brain organization a couple decades ago turns out to be radically oversimplified.

And this is why the date of the article matters: in the twelve years since it came out, I'm pretty confident that even more stuff got learned about the human brain.

To add to the challenge, brain regions don’t wear name tags (“Hello, I am Broca”), and instead their nature and boundaries must be deduced based on a host of factors such as physical landmarks (such as the hills and valleys of folded cortical tissue), the shapes of their neurons, and the ways in which they respond to different chemical stains. Even with the most advanced technologies, it’s a tough business, sort of like trying to tell whether you are in Baltimore or Philadelphia by looking out the window of a moving train.

Yeah, you need to smell the city to know the difference.

Even under a microscope human brain tissue looks an awful lot like primate brain tissue.

That's because we are primates.

When we look at our genomes, the situation is no different. Back in the early 1970s, Mary-Claire King discovered that if you compared human and chimpanzee DNA, they were so similar that they must have been nearly identical to begin with. Now that our genomes have actually been sequenced, we know that King, who worked without the benefit of modern genomic equipment, was essentially right.

"Must have been nearly identical to begin with." Congratulations, you just figured out how evolution proceeds.

Why, if our lives are so different, is our biology so similar? The first part of the answer is obvious: human beings and chimpanzees diverged from a common ancestor only 4 to 7 million years ago. Every bit of long evolutionary history before then—150 million previous years or so as mammals, a few billion as single-celled organisms—is shared.

Which is one reason I rag on evolutionary psychology all the time. Not the only reason, but one of them. Lots of our traits were developed long before we were "us," and even before we diverged from chimps.
If it seems like scientists trying to find the basis of human uniqueness in the brain are looking for a neural needle in a haystack, it’s because they are. Whatever makes us different is built on the bedrock of a billion years of common ancestry.

And yet, we are different.

I look at it like this: Scotch is primarily water and ethanol. So are rum, gin, vodka, tequila, other whisk(e)ys, etc. But scotch is unique because of the tiny little molecules left after distillation, plus the other tiny little molecules imbued into it by casking and aging. This doesn't make scotch better or superior to other distilled liquors, but it does make it recognizable as such. (I mean, I think it's superior, but I accept that others have different opinions.)

I was unable to find, with a quick internet search, the chemical breakdown of any particular scotch, but, just as I'm different from you, a Bunnahabhain is different from a Glenfiddich, and people like me can tell the difference—even though the percentages of these more complicated chemicals are very, very small. Point is, it doesn't take much.

But trying to find this "needle in a haystack" (how come no one ever thinks to bring a powerful electromagnet?) might be missing the point. And yes, that pun was absolutely, positively, incontrovertibly intended.

Humans will never abandon the quest to prove that they are special.

We've fucking sent robots to explore Mars. I say that's proof enough.

But again, "special" doesn't mean "superior." Hell, sometimes it means "slow."
Here's a relatively short one (for once) from aeon. It's a few years old, but given the subject, that hardly matters.

And right off the bat, there's a problem. Proclaiming that something is "always" (or "never") something just begs someone to find the one counterexample that destroys the argument. In this case, that someone is me.

You have probably never heard of William Kingdon Clifford. He is not in the pantheon of great philosophers – perhaps because his life was cut short at the age of 33 – but I cannot think of anyone whose ideas are more relevant for our interconnected, AI-driven, digital age.

33? That's barely old enough to have grown a beard, which is a prerequisite for male philosophers. Or at least a mustache.

However, reality has caught up with Clifford. His once seemingly exaggerated claim that ‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’ is no longer hyperbole but a technical reality.

I'll note that this quote is not the same thing as what the headline stated. I guess it's pretty close, but there's a world of difference between "without evidence" and "upon insufficient evidence." There is, for example, no evidence for a flat Earth beyond the direct evidence of one's senses (assuming one is in Kansas or some other famously non-hilly location), and overwhelming evidence that the Earth is basically round.

Okay, not a great example, because flat-Earth believers can be shown to be wrong. But morally wrong? I'm not so sure.

I hold the belief, for a better example, that murder is wrong. There's no objective evidence for this, and moreover, we can argue about what constitutes "murder" as opposed to other kinds of killing, such as assisted suicide or self-defense. And yet, it seems to me that believing that murder is wrong is, on balance, a good thing for people's continued survival, and thus morally right.

His first argument starts with the simple observation that our beliefs influence our actions.

Okay, that seems self-evident enough. The article provides examples, both practical and ethical.

The second argument Clifford provides to back his claim that it is always wrong to believe on insufficient evidence is that poor practices of belief-formation turn us into careless, credulous believers. Clifford puts it nicely: ‘No real belief, however trifling and fragmentary it may seem, is ever truly insignificant; it prepares us to receive more of its like, confirms those which resembled it before, and weakens others; and so gradually it lays a stealthy train in our inmost thoughts, which may someday explode into overt action, and leave its stamp upon our character.’

I've heard variations on this argument before, and it does seem to me to have merit. Once you believe one conspiracy theory, you're primed to believe more. If you accept the concept of alien visitations, you can maybe more easily accept mind-control or vampires. That sort of thing.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, in our capacity as communicators of belief, we have the moral responsibility not to pollute the well of collective knowledge.

And that's fair enough, too. So why do I object to the absolutist stance that it's always wrong to believe on insufficient evidence? Well, like I said up there, I can come up with things that have to be believed on scant-to-no evidence and yet are widely considered "moral." The wrongness of murder is one of those things.
That we shouldn't be doing human trials for the pursuit of science without informed consent and other guardrails. That slavery is a bad thing. And more.

I'm not even sure we can justify most morality on the basis of evidence (religious texts are not evidence for some objective morality; they're just evidence that someone wrote them at some point), so the claim that belief on the basis of insufficient evidence is morally wrong (whether always or sometimes) itself has little evidence to support it. You have to start by defining what's morally right and wrong, or you just talk yourself in circles.

While Clifford’s final argument rings true, it again seems exaggerated to claim that every little false belief we harbour is a moral affront to common knowledge. Yet reality, once more, is aligning with Clifford, and his words seem prophetic. Today, we truly have a global reservoir of belief into which all of our commitments are being painstakingly added: it’s called Big Data.

Again, though, that's a matter of scale. People have held others to certain standards since prehistory; in the past, this was a small-community thing instead of a global surveillance network.

None of this is meant to imply that we should accept the spread of falsehoods. The problem is that one person's falsehood can be another's basic truth. That makes it even more difficult to separate the truth from the lies, or even to accept the reality of certain facts.

Yes, having evidence to support one's beliefs is a good thing overall. But we're going to end up arguing over what constitutes evidence.