
Thursday, November 20, 2025

Did Leon Festinger commit some sort of research fraud?

"When Prophecy Fails" is a seminal publication in launching the whole idea of cognitive dissonance.

Now there are new claims that Leon Festinger, who led the research into the cult movement behind the book, engaged in what is not just misframing but arguably multiple types of research fraud.

At The Debrief, Ryan Whalen writes about both these claims and some of the pushback. 

This passage, about two-thirds of the way through, is arguably the nut graf on all of that:

Fundamentally, Kelly’s work clearly illuminates many ethical breaches in When Prophecy Fails, and underscores the authors’ narrow focus on how groups respond to falsified predictions. However, not everyone feels that Kelly’s arguments completely upend the decades-old research, and it is important to note that Kelly’s paper offers a relatively narrow and specific refutation of the ideas in When Prophecy Fails and its claims regarding cognitive dissonance.

Maybe we should view this as similar to the Dunning-Kruger effect — it's partially refuted, but not totally, and we should narrow the scope of our claims about, and usage of, both.

After all, both ideas have often been used connotatively rather than denotatively, with sneers and politicization.

Thursday, August 06, 2020

The GIF and Hume, Sapir, Whorf, and Goodman

It's a piece from late 2017, but still quite interesting.

Some people hear GIFs.

Yes, those looped snippets of video that have no sound.

Some people hear them, normally in cases where sounds would be expected in real life, such as a GIF of hands clapping, one of police lights (with inferred sirens), and such.

Are such things being heard?

Yes, indeed, an audiology professor told the Times journalist.

Two cognitive neuroscientists said it is similar to the "filling in" of some types of optical illusion. They added that people with synesthesia are more likely to do this.

That said, I don't know why the author, Heather Murphy, stopped with scientists, but doing so made for a punches-pulled story.

Obviously, this has connections to the empiricist philosophy of centuries past and, per the cognitive neuroscientists, connections to cognitive philosophy today, namely the issue of qualia, and within that, how specific qualia may be in the auditory world. (Murphy does loop in Christof Koch, but, academically, despite his gushing about panpsychism, he's a scientist, not a philosopher.)

Of course, hearing exists inside our heads.

"If a tree falls in the forest, does it make a sound?" It makes sound waves, but it doesn't make a sound until it's heard. (That said, there are plenty of foxes, bears, deer and squirrels to hear it.)

This is partially what the whole idea of empiricism is about. But David Hume and his predecessors didn't really wrestle with what lies behind sensory experience. (Of course, 300 years ago, they weren't really able to wrestle with how the brain works!)

In modern philosophy, this comes into focus in the discussion of qualia. As a sidebar, and not to go too Sapir-Whorf, but per Nelson Goodman's new problem of induction, there's the issue of how we experience a sound based not only on our genetics, as in the synesthesia, but also on our individual developmental histories. (The Inverted Spectrum idea, a thought experiment often discussed in thinking about qualia, more directly connects to Goodman.) To riff on Sapir-Whorf, an Inuit may hear 20 different sounds from snow at different temperatures, different thicknesses and different degrees of compaction, where you or I might hear three. But a recording microphone will show the same sonic signatures for the Inuit's 20 sounds and my three.

And, this leads me to wonder aloud about other things.

Other than feeling vibrations from clapping, especially, let us say, large group clapping at a concert, political rally, etc., do deaf people have the equivalent of "hearing" clapping? If so, how would they respond to these GIFs?

Monday, December 24, 2012

Stacking the deck on faith at Christmas, part two

Jonathan Sacks is worse, far worse, than Simon Critchley, who was the focus of my immediate prior blog post.

Sacks, in Britain, is chief rabbi of the United Hebrew Congregations of the Commonwealth, so his claims, overblown as they are, are not Christian-specific.

First are the obligatory nods, wrong ones, to neuroscience and behavioral psychology. Mirror neurons are overblown, Mr. Rabbi. And Kahneman's fast-vs-slow thinking has little to do specifically with religion. (Indeed, in orthodox Christianity's past, many an alleged witch was burned at the stake based on "fast" thinking; religion has no monopoly on slow thinking.)

That's why this is dreck:
If this is so, we are in a position to understand why religion helped us survive in the past — and why we will need it in the future. It strengthens and speeds up the slow track.  
Sacks is content to quote pop science when it suits his ends. But, when it comes to empirical backing, not so fast! That goes for these claims, and for those that continue later in his paragraph, like this:
It reconfigures our neural pathways, turning altruism into instinct, through the rituals we perform, the texts we read and the prayers we pray.
He is perfectly content to make these claims without a shred of evidence, of course.

And, this is why I actually prefer fundamentalists at times compared to the more liberally religious, who will dip their toes into the waters of behavioral psychology, or biblical archaeology, then dive headlong into a 2-inch-deep pool while pretending it's not a 2-inch-deep pool.

Related to this, while Robert Putnam has made some good sociological observations, he too is overblown. And, as for religious persons' contributions to charity, that one is way overblown. (For one thing, if church operations, etc., count as charity ...)

Beyond that is the dreck of claiming that all of these findings from evolution, about the nature of altruism, along with fast-vs-slow thinking, etc., find their "summa" in organized religion.

Nonsense. Organized religions have, in just the past two centuries, and within different liberal and conservative strains, stripped their gears on the morals of slavery, women's rights and gay and lesbian rights, to take three biggies. That shows Sacks is, per Wolfgang Pauli, "not even wrong."

And beyond that, on slavery? In both the UK and the US, much of the push for abolition came from secular sources.

Thursday, November 29, 2012

Does AI engage in behaviorism? Chomsky says yes


Noam Chomsky, a longtime and persistent critic of artificial intelligence, says yes, or that it at least engages in the equivalent thereof, as related in this extended interview with the Atlantic.

If he is right, and I think he’s at least in the right ballpark, I think this arguably explains why AI, for all its self-touting, is the biggest research science and technology failure this side of peaceful fusion power. Indeed, progress on the two shares a remarkably similar arc.

(Photo: Noam Chomsky, from The Atlantic)
The interview is indeed worth a read. It’s in-depth, and as the Atlantic editor-reporter notes, it’s rare these days, because everybody wants to interview Chomsky on political topics, not scientific ones.

Beyond his “behaviorist” comments, he suggests AI researchers, and at least some people in fields such as his own cognitive science, are still doing research on mind and intelligence at what might be called the wrong level of abstraction. It brings to mind Dan Dennett’s comment (ironic at times, given Dennett) of “greedy reductionism.”

It also brings to mind Paul Davies’ book “The Eerie Silence,” which criticizes SETI, the Search for Extra-Terrestrial Intelligence, for various blinders it may be wearing in its search.

Chomsky, in the interview, also veers at least a bit into his home turf of linguistics. As part of that, he doesn’t have a lot of good to say about Bayesian statistics.

He says there are better ways for us to try to understand the “noise” with which we are bombarded on a daily basis.

I have to agree from a different, folk-level point of view.

To me, Bayesian statistics seems like “the hip thing” for pop and semi-pop observers of human cultural sociology. All it needs is a new book by Malcolm Gladwell.
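For readers who haven't met it, the machinery being hyped boils down to one line of arithmetic: update a prior belief by a likelihood. A minimal sketch in Python, with made-up numbers purely for illustration:

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# All three inputs below are invented for illustration.
prior = 0.30        # P(H): belief in hypothesis H before the evidence
likelihood = 0.80   # P(E|H): chance of seeing evidence E if H is true
p_evidence = 0.50   # P(E): overall chance of seeing E

posterior = likelihood * prior / p_evidence
print(round(posterior, 2))   # 0.48: belief revised upward by the evidence

Whether stacking millions of such updates can explain something like language acquisition, as some computational linguists hope, is exactly what Chomsky doubts.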

From there, he ties linguistics back to cognitive science. And, hits the nail on the head, in my opinion:
It's worth remembering that with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject.
He notes that natural philosophy before Galileo, in the hands of the likes of Paracelsus, was quasi-experimental. It had certainly become more empirical than the philosophical speculation of the Greeks. But things like the null hypothesis, or the idea of using any particular hypothesis to direct experimentation, weren't fully there.

Chomsky goes on to question issues related to algorithms. Again, I broadly agree with him. Certainly in a more technical way than I could, he appears to question the claim that mental processes in general, and especially those that would give us what we would call an intelligent consciousness, can be reduced to, or framed in terms of, algorithms. Indeed, when asked about it, in relation to some particular research and mental modeling, he specifically rejects the need for algorithms.

It sounds like in both cognitive science and artificial intelligence, if not already, Chomsky could soon become about as controversial as he is on U.S. foreign policy.

But, don’t stop there. Chomsky, getting into his theories of language, and with a nod to Wittgenstein, argues that something analogous to language could be used as part of new attempts to understand bodily systems, such as, say, how the immune system works. It’s true that biology already talks about things such as “signaling,” but in the past, it’s seemed to use these words and phrases anthropomorphically, and Chomsky is saying, “take the next step.”

Anyway, I’m just scratching the surface of my analysis, both in terms of how much of the interview I’m analyzing and how much analysis I’m putting forth. Go read the full thing yourself.

Let me just add that he closes by saying something else I agree with, and that I read Stephen Toulmin saying 15 years or so ago: Scientists still need philosophers of science challenging them, now perhaps more than ever.

Saturday, February 18, 2012

Consciousness is not the same as attentiveness

It's long been established that we have what could be called "subconscious attentiveness," which can produce effects such as certain types of psychological priming, via images presented to people too quickly for them to be consciously aware of the images.


It now appears, per the latest attempts to unravel human consciousness, that this cuts both ways.


But, the story doesn't go as far as it could, both in its speculation and in Wittgenstein-like questions about how we use language on these issues.


Perhaps "consciousness," "attentiveness" and "awareness" need more precision in usage in such aspects. Or maybe they need to be redefined to some degree. Or replaced.


Whether language will be crafted to this end remains to be seen.

Monday, January 30, 2012

The state of consciousness studies

Science News has two excellent articles on where we are at, part of a just-started ongoing series.

The first, by Tom Siegfried, follows in many of the footsteps of Douglas Hofstadter to talk about consciousness and self-referential systems. The title of "Self as Symbol" gives some hint of where he's headed. And, he says self-referentiality may actually deepen our eventual understanding of consciousness rather than acting as a barrier.


Laura Sanders looks at the neuroscience side of the coin, and what brain studies are telling us these days. Nothing of high specificity yet, but we're getting ideas on how to refine, and in some ways change, our searching.

Sunday, November 13, 2011

Free will - a "god of the gaps" parallel?

Is "free will," at least as "compatibilists" generally strive to define (and save) it, a philosophical equivalent of "a god of the gaps"? I say the answer is an arguable yes.

Philosophy professor Eddy Nahmias is the latest to try to defend some neo-traditionalist, if you will, version of free will.

Of course, when you start with a straw man howler like this, it's easy for you to get called "a free willer of the gaps":
When (neuroscientist Patrick) Haggard concludes that we do not have free will “in the sense we think,” he reveals how this conclusion depends on a particular definition of free will.  Scientists’ arguments that free will is an illusion typically begin by assuming that free will, by definition, requires an immaterial soul or non-physical mind, and they take neuroscience to provide evidence that our minds are physical. 
First, not all neuroscientists make that assumption. And, psychologists like the Daniel Wegner whom you linked at the start of the column definitely don't link free will, or its absence, to dualism, or its lack.

Then, there's this:
Many philosophers, including me, understand free will as a set of capacities for imagining future courses of action, deliberating about one’s reasons for choosing them, planning one’s actions in light of this deliberation and controlling actions in the face of competing desires.  We act of our own free will to the extent that we have the opportunity to exercise these capacities, without unreasonable external or internal pressure.  We are responsible for our actions roughly to the extent that we possess these capacities and we have opportunities to exercise them.  These capacities for conscious deliberation, rational thinking and self-control are not magical abilities.
Well, if you're not going to wrestle with what consciousness is, let alone what standing free will at the level of consciousness has in the absence of a Cartesian theater, you may have a problem. Nahmias does eventually get around to tackling Benjamin Libet and the famous 200-millisecond gap, but only to wave it away:
First of all, it does not show that a decision has been made before people are aware of having made it.  It simply finds discernible patterns of neural activity that precede decisions.  If we assume that conscious decisions have neural correlates, then we should expect to find early signs of those correlates “ramping up” to the moment of consciousness. 
Ahh, this is a petard hoister. It's all in how you define "decisions" as well as "free will," isn't it? Under the Dan Dennett multiple drafts model, this is rather the subconscious impulse that "wins out" to the level of consciousness.

Finally, to riff on Samuel Johnson, Nahmias enters into the last refuge of a free-will philosophy scoundrel: He makes the "fatal" is-ought error.
We need conscious deliberation to make a difference when it matters — when we have important decisions and plans to make.
Need? As in "ought to have"? Ooops.

Some other thoughts from Wikipedia on free will, including reference to Haggard, here.

That said, I think it IS possible to talk about free will in some way, but only in a way that includes subselves and subconscious processes.

UPDATE, Nov. 26: Massimo Pigliucci actually defends Nahmias, claiming he "provides a nuanced and intelligent brief discussion of the topic." Massimo is often thought-provoking and never dumb, but he's just off base on this one. (In the same post, he says that way too much is read into Libet. I'll split the difference and say that somewhat too much may be read into him, and that what Libet's experiments study is somewhat imprecise. But, to claim he's pretty much irrelevant to discussions of free will is a stretch, at the least.)

UPDATE, Nov. 27: Add this excellent essay to your reading. From a neuroscience perspective, it argues that brain systems that evolved to detect actual (or apparent) "intentionality" are a focal point for the rise of an illusion of "self." And, here's the journal essay that influenced that blog essay.

This ties in with Dan Dennett's "heterophenomenology." We assume "selves" in others because of this "intentionality set" that appears to be built into our brains. But, Dennett doesn't quite note this is a two-way street. Per modern social psychologists, the "self," or what we call a "self" for ourselves, is in part a construct based on our interaction with others. That includes them seeing, and noting, seeming "intentionality" in ourselves.


So, even if there isn't a unitary self, not only do we act "as if" there is, we find it hard not to do so because of this outside conditioning as well as our own brain's mindset.


Now, a Buddhist meditation adept, or a devotee of deep self-hypnosis, might be able to transcend that to some degree. But (and this is why I only half-jokingly say "the only good Buddhist is a dead Buddhist") the person who recognizes, and more than just intellectually understands, that "self" is to some degree an illusion is generally unable to hold on to that idea. The Zen monk rejoins the rest of the monastery; the hypnosis adept walks out the door and into the larger world. And "conventional" ideas of self get reinforced again.

Saturday, February 19, 2011

Don't fret over Watson too much?

Wired reminds us that Ken Jennings' human brain used as much energy as a 12-watt light bulb on Jeopardy, while Watson the computer needed special cooling equipment, and that the Deep Blue computer that beat Garry Kasparov at chess was a fire hazard.

And, does Watson have "metaknowledge"? Can it recognize that it can't quite remember something but knows that it's on the tip of its cybertongue? Does it have the emotional power of "knowing"?

Not yet. Watson's interesting. Is it intelligent? No.

Emotions, especially when understood as value judgments, are part of the package of intelligence.

So, while Watson may have been hot under the cybercollar from the heat of his circuits, he never really was sweating, so to speak, because he couldn't.

Friday, February 04, 2011

Sam Harris' Immoral Landscape

Sam Harris' "The Moral Landscape," much lauded by many reflexive, relatively unthinking New Atheists who have made him into a rock star of the movement, falls far short of its hype. In fact, I one-starred it on Amazon.

What's wrong? Harris is a Platonic idealist in drag. He also engages in scientism. Related to both of these, despite his having an undergraduate degree in philosophy, he really appears not to understand a lot of philosophical issues relevant to this book's subject. Or else, he doesn't care to.

Beyond that, his Islamophobia in the early part of this book seems to largely come straight from the neoconservative playbook. Possibly related to that, he creates straw men out of liberals all allegedly being moral relativists.

Sam Harris tries to draw a hard-and-fast dichotomy between science-based morals and ethics and religious-based morals and ethics in this book.

However, this is the real world, not a Platonic idea (Harris comes off as quasi-Platonic in more than one way in parts of this book), and so, it's not totally amenable to Harris' bifurcation.

Take abortion. Many religious people support at least some right to abortion, but noted atheist Nat Hentoff is 100 percent prolife. Ditto on end of life issues. And, if I looked a little bit, I could surely find atheists and agnostics with less enlightened views on gay rights than many religious people.

Now, as to the science part ... the idea that we can have a science-based morality? Harris offers little in the way of actual neuroscience studies on the brain processing moral issues.

We may well get oodles more such studies in the future, but that's not today. Harris also doesn't address the issues of what MRIs measure, how well this correlates with thought output, etc.

Likewise, he discusses little in the field of well-done evolutionary psychology (to distinguish it from Pop Evolutionary Psychology).

Beyond that, he simply ignores that the study of the human mind, whether from the POV of cognitive neuroscience or evolutionary psychology, is at best in the Early Bronze Age and is arguably, at least on the matter of morality and ethics, still in the Neolithic.

So, while science may at some point (far?) in the future offer us significant insight on specific moral issues, it doesn't today because it can't. And, per the specific moral issues I listed above, it may never be able to.

Indeed, with reference to that, Harris' approach to science and morality smacks of a fair degree of scientism. And, I write this as an irreligious, skeptical naturalist.

That said, there's several other problems with this book. Read on at the jump for the details! I'm going to address several overview issues first, before making any page-by-page critique of the book.

First is the matter of Harris' Islamophobia. Since Islam is in general cited regularly for examples of immoral behavior and beliefs, we need to examine this.

First of all, it seems much of Harris' Islamophobia comes from the neoconservative political playbook. He favorably references an off-the-wall neocon writer, Bat Ye'Or, whose book on Islam's alleged takeover of Europe I also one-starred.

Secondly, he's confusing a static snapshot of history with a moving picture. If we went by snapshots, 900 years ago, Christian Crusaders would have been the poster boys for immoral behavior. 750 years ago, it would have been pagan/animist Mongols. 600 years ago, polytheistic Aztecs.

Finally, if we confine ourselves to today, the Hindu Tamil Tigers of Sri Lanka killed 30,000 in their civil war, far more than al-Qaeda has killed.

Second, whence comes Harris' moral stance, ultimately? I believe he is not just a moral objectivist, albeit a consequentialist (a stance more often associated with moral relativism but compatible with objectivism too), but a moral absolutist -- specifically, a Platonic Idealist moral absolutist. There's irony there in spades, since the early and middle Platonic dialogues were devoted to Socrates' deconstruction of other people's definitions of moral concepts such as justice. (Of course, Socrates usually doesn't offer his own idealist definition back; such things arise only in later dialogues.)

Third, what of Harris' claims to be examining morality and its foundations from a scientific perspective?
First of all, he's not the first to do so. He didn't invent sociobiology or evolutionary psychology. (Let me be clear here -- much of what passes for science in alleged evolutionary psychology is actually the pseudoscience of Pop Evolutionary Psychology. However, unlike a P.Z. Myers, I think there is legitimate work being done in this field, albeit work that is few and far between.) So, Harris isn't new in his effort and he's certainly not new in his hope.

That said, for someone who wants to be scientific, he seems often lacking. (No shock here; I saw the same problem way back in "The End of Faith.") First, from an evolutionary standpoint, Harris doesn't address issues of individual vs. group selection. Now, I'm not as bullish on group selection as, say, David Sloan Wilson, but I do think it deserves more consideration than many evolutionary biologists give it. Second, Harris doesn't devote any scientific examination to cultural evolution. Admittedly, there's not a lot to really nail down at this intersection of biology and sociology, but Harris doesn't even get into what is out there.

Beyond what I mention above, for someone with a graduate degree in neuroscience, he spends about ZERO time referencing actual neurological study of the brain. No V.S. Ramachandran here, folks! Not even close.

Fourth, there's Harris and philosophy, not just the "is-ought" issue, but certainly including that.

First of all, for people who have read previous works of his, and not embraced him as a bundle of light, his arrogance in dealing with the philosophical background should come as no surprise. But, it still needs quoting.

Page 197, footnote 1: "Many of my critics fault me for not engaging with the academic literature on moral philosophy. ... First ... I did not arrive at my position ... by reading the work of moral philosophers; I came to it by considering the logical implications of our making continual progress in the sciences of the mind. Second, I am convinced that every appearance of (academic terminology) directly increases the amount of boredom in the universe. ... (T)he professional philosophers I've consulted seem to understand and support what I'm doing"
Let's unpack what's wrong with this quote.

1. Harris might actually have learned something by engaging with other moral philosophers, either of today or of the past. That would include wrestling more with Hume's is-ought; it would certainly include a provocative AND nontechnical book like Walter Kaufmann's "Without Guilt and Justice."
2. Is Harris saying he's either too dumb or too lazy to "translate" language of academia to a general audience? Or a too-arrogant mix of both? One of the best classical philosophers on moral issues was Hume, precisely because he wrote in a way for the general public (of a certain educational level) to understand.
3. Neuroscience is a "hard" science with plenty of its own technical language. That doesn't stop Harris from wanting to focus on advances in scientific discovery, albeit, rather than discussing them at a nontechnical level, by not discussing them at all. I smell a HUGE steaming pile of hypocrisy here.
4. In light of what I noted above about Socratic dialogues, Harris never discusses what happens when two big moral issues, like "fairness" and "compassion," collide. This is one of the brilliancies of Kaufmann's book mentioned above.

In light of all that, let's look at Hume's famous is-ought issues.

Hume discusses the problem in book III, part I, section I of his A Treatise of Human Nature (1739):
In every system of morality, which I have hitherto met with, I have always remark'd, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz'd to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it shou'd be observ'd and explain'd; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.
Hume calls for caution against such inferences in the absence of any explanation of how the ought-statements follow from the is-statements. But how exactly can an "ought" be derived from an "is"? The question, prompted by Hume's small paragraph, has become one of the central questions of ethical theory, and Hume is usually assigned the position that such a derivation is impossible. This complete severing of "is" from "ought" has been given the graphic designation of Hume's Guillotine.

See Wikipedia for more on the "is-ought" issue.

Several issues here:
1. "Ought" is multivalent. Sometimes, most notably in ethics, it has an explicitly moral tone. Other times, far from that. For instance, in late 19th-century physics, scientists said the ether, the luminiferous ether, "ought" to weigh a certain amount, even though experiment rebelled against that.
2. In the case of ethics, to worry about "is-ought" is to approach the issue the wrong way. Rather, staying within Hume, one can ask what ethics can be naturalistically devised and supported. In this case (contra what Harris seems to say) we turn to evolutionary psychology **properly done** (and not Pop Ev Psych), as well as the evolutionary biology of non-hominids. We can, through cultural anthropology, partially reinforce hominid ev psych findings. That said, we would note that often there is not one "right" answer to some issues of ethics. We also should note, per someone like Walter Kaufmann, that sometimes there is no right answer at all, or that a "right" answer may be culturally determined, or that a "right" answer for an individual may be the "wrong" answer for society. In this last case, no science gives us "the answer" as to whether individual needs or societal needs should prevail. And, for that matter, different religions may give us different answers, or the same religion may give us different answers at different times, as they do on other issues such as collective guilt.

Re a critic of my Amazon review who invited me to look at a Harris post on Huffington Post:

Harris' "refutation" of his critics actually confirms much of what they say about him on the Islamophobia. Ditto on his .... gullibility, for want of other words, on the credibility of the psi folks.

As for his stance on Buddhism, it seems clear he's trying to have his cake and eat it, too, by purporting to be on a search for "the authentic Buddha," in essence. Shades of Albert Schweitzer!

That said, the review by John Horgan, which Harris loathes? I think Horgan goes too far in taking science to the moral woodshed, but, in a general way, he's right. To this day, Western scientists still have few problems with exploiting indigenous peoples, for example. One might fault Horgan for failing to distinguish science from individual scientists, but this is part of connecting Harris' stance to scientism, I think.

On the good side, though, he does some great petard-hoisting on Harris:
Some will complain that it is unfair to hold science accountable for the misdeeds of a minority. It is not only fair, it is essential, especially when scientists as prominent as Harris are talking about creating a universal, scientifically validated morality. Moreover, Harris blames Islam and Catholicism for the actions of suicide bombers and pedophilic priests, so why should science be exempt from this same treatment?
And more:
Harris asserts in Moral Landscape that ignorance and humility are inversely proportional to each other; whereas religious know-nothings are often arrogant, scientists tend to be humble, because they know enough to know their limitations. "Arrogance is about as common at a scientific conference as nudity," Harris states. Yet he is anything but humble in his opus. He castigates not only religious believers but even nonbelieving scientists and philosophers who don't share his hostility toward religion.
Finally, Horgan raises the same concerns about neuroscience I do:
Harris further shows his arrogance when he claims that neuroscience, his own field, is best positioned to help us achieve a universal morality. "The more we understand ourselves at the level of the brain, the more we will see that there are right and wrong answers to questions of human values." Neuroscience can't even tell me how I can know the big, black, hairy thing on my couch is my dog Merlin. And we're going to trust neuroscience to tell us how we should resolve debates over the morality of abortion, euthanasia and armed intervention in other nations' affairs?
Indeed. But, that, too, is part of Harris' scientism. That said, P.Z. Myers and Vic Stenger, on their claims to have proved the nonexistence of god, show that Harris isn't alone among New Atheists in falling into the pit of scientism.

Sunday, April 18, 2010

No, you CAN'T multitask

Not really. Not even if you're a woman. It looks like the human brain, using its bilateral division, assigns two tasks to different hemispheres and does something similar to computer buffering as needed.
"The human prefrontal function seems to be built to control two tasks simultaneously. It means in everyday behaviour we can readily switch between two tasks but not between three. With three tasks the division is limited to only two hemispheres, so there is a problem," Dr. Etienne Koechlin said.
What does that mean if you're doing more than two things at once, or trying to? Pretty simple:
The study suggests that this basic division of the brain into two halves may explain why human beings tend to prefer a simple choice between two options rather than three or more.
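To push the story's computer analogy a little further (my sketch, not Koechlin's actual model), the finding reads like a buffer with exactly two slots, one per hemisphere: a third task can only get in by evicting one of the first two. A toy illustration in Python:

# Toy model of the two-task limit, loosely following the study's framing.
# NOT the researchers' model; just the buffering analogy made concrete.
slots = [None, None]  # one active goal per hemisphere

def take_on(task):
    for i, current in enumerate(slots):
        if current is None:   # a hemisphere is free: track the new task
            slots[i] = task
            return task + ": tracked"
    dropped = slots[0]        # no free slot: something has to go
    slots[0] = task
    return task + ": tracked, but '" + dropped + "' just fell out of mind"

print(take_on("write email"))   # write email: tracked
print(take_on("phone call"))    # phone call: tracked
print(take_on("plan dinner"))   # plan dinner: tracked, but 'write email' ...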
The story author then tries to extrapolate to British politics, and perhaps goes too far:
It might even explain why the Liberal Democrats, as the third political party, find it hard to get a look in at general elections.
Nope, not an explanation. Look at Germany, for example. Rather, the British, like the Americans, have a "first past the post" election system which makes it tougher on third parties.

Beyond that misconjecture, though, the full story is worth a read.

Tuesday, March 23, 2010

New indications of what drives consciousness

A theory known as "global workspace" seems to be getting some tentative empirical confirmation, including that "the researchers found a 300 ms delay between presenting the stimuli and witnessing this explosion of neural activity," which may be analogous to the famous half-second delay between starting what turns out to be a conscious action and actually consciously willing to do it.

That said, per David Chalmers, this study seems to answer an "easy question" about consciousness, or a couple, more than any "hard questions."

Saturday, February 27, 2010

Pop Ev Psych goes off the rails on depression

The idea that clinical depression was evolutionarily selected for is why many real evolutionary biologists laugh at something like this as a classical "Just So Story."

It's heavily caveated, has little real explanatory power, doesn't allow for alternative explanations, doesn't explain away counterexamples well and is generally weak.

Beyond that, Andy Thomson and Paul Andrews undercut their own theory, and at a grade-school level.

In response to criticism, they admit that, in essence, "We don't know what depression is."

Well, if you don't know what a trait is, how can you even claim it's selected for, in the first place? You've just said you don't know what it is, so you don't know what is being selected for.

Duh.

And, of course, given the present (but growing) state of cognitive science and neuroscience, this is the case about ev psych, or rather, Pop Ev Psych, claims about just about any mental or emotional state.

Meanwhile, on his blog, Jonah Lehrer, the author of the NYT Mag story, actually defends the general line of thinking of Thomson/Andrews, though in the story, he was good enough to marshal plenty of opponents of their claims.

Saturday, February 20, 2010

Cultural neuroscience - a new frontier

Even when the addition involved is basic arithmetic, Chinese speakers use a different part of their brain than Westerners do.

Things like this are part of the purview of cultural neuroscience.

Saturday, February 06, 2010

The human brain, simulated?

In Switzerland, a neuroscientist is hoping to use an IBM supercomputer much more powerful than Deep Blue to do just that.

That's interesting enough. The real news is that Henry Markram says we (mainly meaning his professional colleagues) need to ditch many of our scientific preconceptions about how individual neurons, neuron groups and areas of the brain work.

The story describes how he is modeling the simulation on actual "slices" of mouse neocortex.

Thursday, May 14, 2009

Monkey do bad, monkey do different

Monkeys, which aren't even apes, can learn from their mistakes.

Time to further adjust the bar on learning and other issues that do NOT differentiate H. sapiens from “mere animals.”

Thursday, April 02, 2009

A reply to 'Invictus'

Am I indeed the captain of my soul?
I find it hard to believe that is so.
Translating the individual “I”
To the global core of humanity
I think that it’s well-nigh impossible.
The individual human psyche,
Convoluted and self-referential,
Means the “I” is not quite that simple.
As for that “master” subroutine inside,
The one that supposedly masters “I”?
The king always faces peasant revolts.
If not that, a master can go haywire.
And, when that happens, then who masters it?
– April 2, 2009

INVICTUS, by William Ernest Henley

Out of the night that covers me,
Black as the Pit from pole to pole,
I thank whatever gods may be
For my unconquerable soul.

In the fell clutch of circumstance
I have not winced nor cried aloud.
Under the bludgeonings of chance
My head is bloody, but unbowed.

Beyond this place of wrath and tears
Looms but the horror of the shade,
And yet the menace of the years
Finds, and shall find me, unafraid.

It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate;
I am the captain of my soul.

Sunday, January 11, 2009

Pinker – Nature-nurture debate over, but on MY terms!

Cognitive psychologist Steven Pinker has a long interview in the New York Times Magazine. An extended comment of his on page 2 jumped out at me:
The most prominent finding of behavioral genetics has been summarized by the psychologist Eric Turkheimer: “The nature-nurture debate is over. . . . All human behavioral traits are heritable.” By this he meant that a substantial fraction of the variation among individuals within a culture can be linked to variation in their genes. Whether you measure intelligence or personality, religiosity or political orientation, television watching or cigarette smoking, the outcome is the same. Identical twins (who share all their genes) are more similar than fraternal twins (who share half their genes that vary among people). Biological siblings (who share half those genes too) are more similar than adopted siblings (who share no more genes than do strangers). And identical twins separated at birth and raised in different adoptive homes (who share their genes but not their environments) are uncannily similar.

I have several problems with this.

First, knowing a bit about addictive behavior, I know that predisposition to cigarette smoking or alcoholic drinking does NOT owe a "substantial fraction" of its variation to heritability.

Second, on behaviors like those, people like Pinker have never even made an effort to sort out familial social influence.

Third, the “uncannily similar” comes off as being nothing more than a door-slamming phrase, i.e., “How can you be scientific if you question it”?

Well, with twins, I question it on several grounds.

First, identical twins themselves differ on when after fertilization the twinning event occurred:
• Do they have separate amniotic sacs and placentas?
• Share a sac but have separate placentas?
• Or even share a placenta?

All of the above are of course environmental and not genetic effects, but Pinker conveniently ignores that.

Second, are identical twins “uncannily similar”? Not necessarily. Again, Pinker refers us to no research; he just throws out a statement to us and demands we accept it as gospel truth.

’Tis true, Pinker does qualify both his own comments, and those of Turkheimer, with some more general versions of what I just noted, on the next webpage. But, to me, the cart was put far enough before the horse, and emphasized that much more, to tell me where Pinker falls.

And, on page 5, Pinker shows we still have far to go in our understanding of the how and of the specific genes of genetic heritability.

Height is widely acknowledged, due to the information from statistical correlation, as being the single most heritable human trait. But, as Pinker notes, in 2007 a genomewide scan of nearly 16,000 people turned up a dozen height-related genes. However, these genes collectively accounted for just 2 percent of the height variation; plus, a person who had most of the genes was barely an inch taller, on average, than the general population.
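For context, the "most heritable" label comes from twin-correlation arithmetic, long before any actual genes are identified. The classic shortcut is Falconer's formula, which doubles the gap between identical-twin and fraternal-twin correlations. A back-of-envelope sketch in Python, with illustrative numbers (not Pinker's):

# Falconer's estimate of heritability: h2 = 2 * (r_mz - r_dz)
# r_mz: trait correlation for identical twins; r_dz: for fraternal twins.
# Values below are illustrative, roughly the range often cited for height.
r_mz = 0.90
r_dz = 0.50

h2 = 2 * (r_mz - r_dz)
print(h2)                    # 0.8: ~80% of variation statistically "heritable"

# Yet the 2007 genome scan Pinker cites pins down only ~2% of the variation:
print(round(h2 - 0.02, 2))   # 0.78: the gap between statistics and known genes

Geneticists call that gap "missing heritability."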

And, I haven’t even gotten to Pinker’s biggest flop, or deliberate oversight.

It’s becoming ever more clear that what has to this point been called “junk DNA” isn’t; rather, some of it may code for frequency of expression of a gene, or control what genes interact together and when, etc. And, though still looked askance by some geneticists precisely for what it hints at, Stanley Prusiner’s work on prions, along with other research, shows that the heritability pathway may not be a one-way street at the cellular level.

Pinker starts to wrap up his take on the science of personal genomics with this:
At the same time, there is nothing like perusing your genetic data to drive home its limitations as a source of insight into yourself.

Too bad he doesn't realize, or refuses to accept, that the limitations of genetic data today go far beyond that.