Thursday, September 26, 2019

More problems for reincarnationists

Stimulated by reading a book Michael Shermer wrote last year, I've identified two more explanatory problems for the people who tout reincarnation.

I'm talking primarily about those who tout traditional religion-based reincarnation, whether the personal soul version of much of Hinduism and Jainism, or the impersonal life force version of Buddhism.

I'm not talking about the New Ager distortionists who believe all their past lives were as the king, queen, or mighty warrior, because they're wrong even within the world of reincarnation.

Anyway, the actual reincarnation world of Hindus, Jains and Buddhists says you may come back not as the king or queen, but as the peasant shoveling shit out of the king's stables, or, much more importantly for this discussion, as a dung beetle in that shit in the stables.

We have, a la the ideas explored by and generated from Thomas Nagel's famous, or infamous, "What Is It Like to Be a Bat?" (text here), a mind-mapping problem. This is more a problem for the personal soul type of reincarnation; obviously, an impersonal life force doesn't have human personal soul characteristics. Whether early Buddhists thought of this as a way to explain, or explain away, this issue, I don't know.

Anyway, Nagel argued that we can't understand bat consciousness because of its subjectivity and its different sensory basis, and went from there.

Well, except for Hindus and Jains, many people, whether of a professional biological bent or not, would have difficulty extending consciousness at all to a beetle. Plus, the difference between its sensory interactions with the world and ours is orders of magnitude greater than the human-bat difference.

So, if karma is an iron law of rewards and punishments, beyond the well-known difficulty of people (if we're all being reincarnated) not remembering past lives, how can it even be a punishment to be reincarnated as a beetle? How can the "beetle-self" feel punished? Since, as far as we know, beetles don't have emotions, period, along with not having consciousness, how can they feel anything, in the sense of emotional affect rather than sensory input?

This has a flip side. Especially if you're a Jain, who takes consciousness of some sort down to what most of us would call inanimate objects, what if, say, an ameba is being rewarded by getting promoted UP to being a beetle for being an incredibly ethical and altruistic ameba?

Does that sound as silly to you reading it as it did to me, typing it?

That leads to what I see as the even larger problem. (Yep, the above is the lesser problem.) And this one hits the Buddhist types as well.

While Charles Darwin wasn't around 2,500 or more years ago, these ideas of reincarnation and the karma behind them seem based on a "progress" misunderstanding of evolution, and of biology in general.

Who says it's "worse" being a beetle than a shoveler of the king's shit, or even the king? I suppose a "good Buddhist" might use this as a wedge to claim that the whole idea of karma is itself maybe maya, but in that case, he or she is already lighting the fuse on their own petard. From that, they're making themselves even more irrelevant to the discussion.

So, we move forward. Given that the planet would soon be overrun with shit if we had no dung beetles, whereas the world might be quite good had we no more Homo sapiens, if I were to engage a progress-based version of zoology, I'd argue the dung beetle is superior. (And, we haven't even talked about the myth of cockroaches surviving nuclear war.)

That's bad enough. Let's take it inside the human world.

Here, of course, the claim is that being the shoveler of the king's shit is worse than being the king.

Says who? Per the French Revolution, we could stand to get rid of yet more kings. Per the labor theory of value, the shoveler of the king's shit is more important whether the king is alive or shat mat.

In other words, karma and reincarnation, taken as a unit, are ultimately part of the classism of the Hindu caste system. And, as I see it, Buddhism's talk of only an impersonal life force being reincarnated is still a Social Darwinist failure here too. It still bases karmic reincarnation cycles on the idea that some humans are, by group sociological observations, superior to others.

On the third hand, the British Raj intensified and codified the caste system, as part of the old divide et impera.

Thursday, September 19, 2019

Benjamin Libet dethroned; what replaces him?


I had never before known the backstory of the Libet experiments, what inspired Benjamin Libet.

And now I do.

It would seem indeed, per the author of an interesting piece at The Atlantic, that the decades-earlier German research, now understood better today, undermines Libet's experimental angle.

But also reinforces his philosophical scrivening.

Here’s the backstory:
 In 1964, two German scientists monitored the electrical activity of a dozen people’s brains. Each day for several months, volunteers came into the scientists’ lab at the University of Freiburg to get wires fixed to their scalp from a showerhead-like contraption overhead. The participants sat in a chair, tucked neatly in a metal tollbooth, with only one task: to flex a finger on their right hand at whatever irregular intervals pleased them, over and over, up to 500 times a visit. 
The purpose of this experiment was to search for signals in the participants’ brains that preceded each finger tap. At the time, researchers knew how to measure brain activity that occurred in response to events out in the world—when a person hears a song, for instance, or looks at a photograph—but no one had figured out how to isolate the signs of someone’s brain actually initiating an action. 
The experiment’s results came in squiggly, dotted lines, a representation of changing brain waves. In the milliseconds leading up to the finger taps, the lines showed an almost undetectably faint uptick: a wave that rose for about a second, like a drumroll of firing neurons, then ended in an abrupt crash. This flurry of neuronal activity, which the scientists called the Bereitschaftspotential, or readiness potential, was like a gift of infinitesimal time travel. For the first time, they could see the brain readying itself to create a voluntary movement.
Twenty years later, Libet used this to argue for what he then sought to show by his own experiments — that “we” act before any of “us” makes a conscious decision to act.

But Bahar Gholipour says Libet got it wrong, and wrong from the start:
The Bereitschaftspotential was never meant to get entangled in free-will debates. If anything, it was pursued to show that the brain has a will of sorts. The two German scientists who discovered it, a young neurologist named Hans Helmut Kornhuber and his doctoral student Lüder Deecke, had grown frustrated with their era’s scientific approach to the brain as a passive machine that merely produces thoughts and actions in response to the outside world. Over lunch in 1964, the pair decided that they would figure out how the brain works to spontaneously generate an action.
Well, there you go.

Problem was, this was 1964. MRIs, CAT scans, none of that existed. What’s an early neuroscientist to do? This:
They had a state-of-the-art computer to measure their participants’ brain waves, but it worked only after it detected a finger tap. So to collect data on what happened in the brain beforehand, the two researchers realized that they could record their participants’ brain activity separately on tape, then play the reels backwards into the computer. This inventive technique, dubbed “reverse-averaging,” revealed the Bereitschaftspotential. 
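In modern terms, "reverse-averaging" amounts to averaging short epochs that are time-locked to the end of each trial (the tap) instead of the beginning. A minimal sketch in Python, with simulated numbers standing in for the real recordings (the array names and tap times are mine, purely illustrative):

```python
import numpy as np

# Simulated stand-in for a continuous recording; real EEG would go here.
rng = np.random.default_rng(0)
sample_rate = 100                         # samples per second
eeg = rng.normal(size=60 * sample_rate)   # one minute of noise
tap_samples = [1200, 2500, 3900, 5100]    # when the finger taps happened

# "Reverse-averaging": cut out the second *before* each tap, so every
# epoch is aligned on the tap itself, then average the epochs.
window = sample_rate                      # look back one second
epochs = np.array([eeg[t - window:t] for t in tap_samples])
readiness_curve = epochs.mean(axis=0)     # averaged pre-movement activity
```

With pure noise and only four taps, that average is flat; with real recordings and hundreds of taps, the averaged pre-movement second is exactly where the slow ramp of the Bereitschaftspotential emerged.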
But, what did it mean? Gholipour said that was anybody’s guess. One possibility? 
Scientists explained the Bereitschaftspotential as the electrophysiological sign of planning and initiating an action. Baked into that idea was the implicit assumption that the Bereitschaftspotential causes that action. The assumption was so natural, in fact, no one second-guessed it—or tested it.
Enter Libet:
He repeated Kornhuber and Deecke’s experiment, but asked his participants to watch a clocklike apparatus so that they could remember the moment they made a decision. The results showed that while the Bereitschaftspotential started to rise about 500 milliseconds before the participants performed an action, they reported their decision to take that action only about 150 milliseconds beforehand. “The brain evidently ‘decides’ to initiate the act” before a person is even aware that decision has taken place, Libet concluded.
Argument has been fast and furious ever since. The original attack angle was to accuse Libet of mismeasurement, especially since 150 milliseconds is pretty thin slicing.

And, Libet has had his defenders, including, to a fair degree, yours truly. That said, I took the findings in other directions, namely that selfhood isn't so unitary; I didn't see this as a proof of "determinism" at all.

Anyway, more and more neuroscientists, cognitive scientists and philosophers of mind have accepted that Libet got it all measured correctly. It was just that, for many of them, the interpretation was wrong.

Move to 2010, as Gholipour does:
In 2010, Aaron Schurger had an epiphany. As a researcher at the National Institute of Health and Medical Research in Paris, Schurger studied fluctuations in neuronal activity, the churning hum in the brain that emerges from the spontaneous flickering of hundreds of thousands of interconnected neurons. This ongoing electrophysiological noise rises and falls in slow tides, like the surface of the ocean—or, for that matter, like anything that results from many moving parts. “Just about every natural phenomenon that I can think of behaves this way. For example, the stock market’s financial time series or the weather,” Schurger says.
“Static.” “Noise.” “Snow.” We all know this.

Surely that type of noise is random, though, and can’t be what Libet, or his German predecessors found, can it? Welll …
But it occurred to Schurger that if someone lined them up by their peaks (thunderstorms, market records) and reverse-averaged them in the manner of Kornhuber and Deecke’s innovative approach, the results’ visual representations would look like climbing trends (intensifying weather, rising stocks). There would be no purpose behind these apparent trends—no prior plan to cause a storm or bolster the market. Really, the pattern would simply reflect how various factors had happened to coincide. 
“I thought, Wait a minute,” Schurger says. If he applied the same method to the spontaneous brain noise he studied, what shape would he get?  “I looked at my screen, and I saw something that looked like the Bereitschaftspotential.” Perhaps, Schurger realized, the Bereitschaftspotential’s rising pattern wasn’t a mark of a brain’s brewing intention at all, but something much more circumstantial.
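Schurger's observation is easy to reproduce in a toy simulation: generate slow, autocorrelated noise with no intentions anywhere in it, treat the moments it happens to cross a threshold as the "movements," and reverse-average the stretches leading up to those moments. What falls out looks like a readiness potential. A sketch, my own illustration rather than Schurger's actual model or code:

```python
import numpy as np

rng = np.random.default_rng(1)

def drifting_noise(n, leak=0.05, sigma=0.1):
    """Slow autocorrelated noise: a leaky accumulation of random input."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = (1 - leak) * x[i - 1] + sigma * rng.normal()
    return x

signal = drifting_noise(200_000)
threshold = 0.8
window = 1_000  # how much pre-crossing history to keep

# Upward threshold crossings stand in for "the moment of movement."
crossings = np.flatnonzero(
    (signal[1:] >= threshold) & (signal[:-1] < threshold)) + 1
crossings = crossings[crossings > window]

# Reverse-average, Kornhuber-and-Deecke style: align the stretches
# leading up to each crossing on the crossing itself, then average.
epochs = np.array([signal[c - window:c] for c in crossings])
ramp = epochs.mean(axis=0)  # rises smoothly toward the threshold, BP-style
```

Nothing in that loop plans anything; the rise in `ramp` is a selection effect of lining the epochs up on the crossings, which is Schurger's point.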
And from there, a new interpretation.
Two years later, Schurger and his colleagues Jacobo Sitt and Stanislas Dehaene proposed an explanation. Neuroscientists know that for people to make any type of decision, our neurons need to gather evidence for each option. The decision is reached when one group of neurons accumulates evidence past a certain threshold. Sometimes, this evidence comes from sensory information from the outside world: If you’re watching snow fall, your brain will weigh the number of falling snowflakes against the few caught in the wind, and quickly settle on the fact that the snow is moving downward. 
But Libet’s experiment, Schurger pointed out, provided its subjects with no such external cues. To decide when to tap their fingers, the participants simply acted whenever the moment struck them. Those spontaneous moments, Schurger reasoned, must have coincided with the haphazard ebb and flow of the participants’ brain activity. They would have been more likely to tap their fingers when their motor system happened to be closer to a threshold for movement initiation. 
This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices. 
Other recent studies support the idea of the Bereitschaftspotential as a symmetry-breaking signal.
Symmetry breaking. Like the symmetry between the weak nuclear force and electromagnetism being broken as the early universe expanded and cooled, according to the Standard Model?

Why not?
It’s still possible that Schurger is wrong. Researchers broadly accept that he has deflated Libet’s model of Bereitschaftspotential, but the inferential nature of brain modeling leaves the door cracked for an entirely different explanation in the future.
So, I’ll expound my idea.

As I see it, this is similar to "virtual particles" popping up out of a "quantum foam," per an idea in quantum mechanics.

Put another way, this takes Dan Dennett's idea on how "subselves" pop up another step further, while also refuting part of it.

Those subselves, or at least specific actions by bigger ones of them, aren't a matter of Darwinian competition. Rather, they're ultimately the mental-substrate version of quantum foam.

Of course, I have long rejected Dennett's ideas that Darwinian-type evolution drives much of life outside of biological evolution. I have even more rejected the idea that algorithms are the driver. Quantum mechanics, when it comes to individual events, no matter what quantum interpretation one stakes one's life on, is not algorithmic. 

With Dennett, I now think it's often not so much the greedy reductionism he decries in others, but seems to practice all too much himself, as it is lazy reductionism.

I've argued with Massimo Pigliucci about Libet before, as here. (Politely, not normal Internet arguing, but arguing in the more professional sense, and we don't totally disagree on some things; in fact, we semi-agree on a fair amount of free will and volition ideas.) I could partially reconcile with the verdict that Libet's interpretation of his experiments was wrong, since I never fully bought into his interpretation in the first place. I've always interpreted the experiments in the vein of subselves, and this new take on what Libet was actually finding only moves me more that way.

==

Update, Nov. 5: Bill Bryson, in his excellent new book "The Body," talks about the one-fifth-of-a-second delay in optic nerve transmission and how the brain confabulates around it to give a "real time" appearance of what we see. While there's no choice or veto there, that confabulation also calls into question traditional ideas of free will, while not at all supporting traditional ideas of determinism, either.


Thursday, September 12, 2019

Conspiracy theories: the new Gnosticism


The psychology of conspiracy thinkers appears complex.

On the surface, it might seem simple. More and more social psychology research ties acceptance of conspiracy thinking to perceived loss of control over life.

But that itself can’t be the sum of it. Many people who lose a greater amount of control over life than they previously had do not buy into conspiracy thinking. For example, 95 percent of people who have strokes (very conservative estimate, surely more like 99.5 percent) don’t claim their stroke (if they even use that word) was caused by chemtrails.

Per the medieval Western Church pondering the mystery of salvation, "Cur alii, non alii?" Why some, and not others?

So, the psychology is more complex than “loss of control.”

But, acceptance of conspiracy theories is also about more than psychology. Trying to reduce acceptance of conspiracy theories to loss of control is like the old Indian parable of the blind men describing an elephant. Even outside of that, limiting the discussion to psychology would be like men with severe astigmatism trying to describe that elephant.

Movement skeptics or Skeptics™ folks might say that conspiracy thinking is anti-scientific. Well, that's partially true with anti-science conspiracy theories such as the chemtrails above or faked moon landings, but not even fully true there. And science, other than the social sciences, isn't involved at all with conspiracy theories about politics or history.

But philosophy is.

Logic, basically classic informal logic and the classical logical fallacies, is obviously in play, even if Massimo Pigliucci says we should stop calling people on fallacies, even when they're committing what would be considered classical fallacies by any disinterested observer.

But, other areas of philosophy are involved, too. One is epistemology. Another is philosophy of language, specifically on agreeing on language used to describe and to “frame” an event. And, in some cases, it and epistemology may overlap here.

Then there’s the role of the Internet.

The ramp-up of misinformation in general, and conspiracy thinking in particular, has been fueled by the Net in general and social media in particular. That's even more the case, I think, with disinformation, which is deliberate, per the distinctions the author has at the link. The question is, is this something desired by conspiracy theory promoters? Are a higher percentage of them, than of the general public, anarchists in some way? If so, which came first, believing conspiracy theories or anarchist tendencies?

And, is conspiracy thinking, or at least promotion of it, an addictive behavior inside the addictive behavior of being online in general and being online with devices and/or social media in particular? (There are ironies here. I'm including this in a blog post that could be just for "the machine," and the author wrote this piece, which could also just be for "the machine.")

Beyond THAT, though, those theories still get no closer to the cur alii, non alii than we have been so far.

One author at Psychology Today postulates a second reason. (I'm taking understanding and certainty as ultimately a subset of control and thus not a second reason.)

And that is the positive self-image angle.

David Ludden doesn’t use the word, but … I just thought of it.

Conspiracy theorists are Gnostics. They believe they have secret, esoteric knowledge. And that does help their self image.

Beyond that, the rise of social media has made it much easier for people to self-sort. For instance, there are several sub-conspiracy theories within the JFK assassination conspiracy theory world, as in, LBJ did it, or the Mob did it, or Castro did it, or the "deep state" did it, etc. Facebook groups and, probably even more, sub-Reddits are among the leading avenues for allowing these to grow.

Not to underestimate the power of the spoken word and the ease of making videos now: YouTube is probably No. 3. Especially now that it's becoming easy to fake videos.

Now, the parallelism with Gnosticism may not seem complete. For example, where is the difference between “adepts” and “learners” or “auditors”?

Well, with things like "closed" or "secret" Facebook groups and sub-Reddits, it's right there. You may have to demonstrate a certain amount of knowledge on "open" Facebook, Twitter, larger Reddit groups, etc., before you gain admission to one of these groups. And, since it's a group run by a leader, you're always in danger of expulsion.

The psycho-history angle also has parallels.

This, then, actually ties back to real live Gnosticism. Gnosticism arose in the late Hellenistic era, but took off when? Under the Roman Empire, certainly the most powerful nation state west of China, both in external power and in internal control of its citizenry, before modern times. And, east of its borderlands, the Parthians semi-organized, and then the Sassanids more fully organized, an empire of sorts of their own. An America where, behind ostensibly democratic fronts, people worry about the Big Brother of big government, big business or both, is a very real parallel.

The loss of control ties with an attempt to regain control, even if the area of control has to be massively circumscribed.

That said, to the degree we do take knowledge and certainty itself as a third issue, that links back to philosophy, namely, epistemology.

That’s more insight, but still not total insight on the cur alii, non alii.

And Undark seems to have another piece of the puzzle, from neuroscience. 

People who know much about our hominid ancestors know that those ancestors are believed to have had a penchant for two things: agency imputation and pattern detection. It's also believed that the most evolutionarily successful hominids were those that overdid both to some degree, because the price of a false positive was far less than that of a false negative.
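The evolutionary logic there is plain decision theory. A toy calculation with made-up costs (the numbers are mine, purely illustrative) shows why over-detection wins:

```python
# Toy error-management arithmetic (illustrative numbers only).
p_predator = 0.02          # chance a rustle really is a predator
cost_false_negative = 100  # ignore a real predator
cost_false_positive = 1    # flee from what was only wind

# Expected cost of each policy toward an ambiguous rustle:
always_flee = (1 - p_predator) * cost_false_positive  # 0.98
never_flee = p_predator * cost_false_negative         # 2.0

# Fleeing is cheaper whenever p_predator exceeds
# cost_false_positive / (cost_false_positive + cost_false_negative),
# here about 1 percent, so hair-trigger detection wins at long odds.
print(always_flee, never_flee)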

According to two researchers, one Dutch, one American, Elizabeth Preston says that conspiracy theorists are likely to have a high level of false positives in pattern detection. The Dutch researcher, with a Dutch colleague, adds that many conspiracy theorists may also indulge in another early hominid issue: xenophobia toward outgroups. Given that issues like that are allegedly how more conservative people differ from more liberal people, per the Big Five personality scale (I think the claims are overblown), this could be seen as a partial additional explainer of some politically conservative conspiracy thinking.

Finally, per David Hume reminding us that reason must always follow the passions, conspiracy theories are always emotionally driven. That's even more the case than with traditional motivated reasoning.