Friday, November 07, 2008

Dead Cert

BBC Radio Four broadcast an excellent programme on political doubt and certainty last night.

"Doubt seems a dangerous thing in politics. If possible, you don't admit it; not about your values, nor your analysis, nor the policies that will magically bring about the change that you are certain is needed. Confidence, by contrast, thrives: confidence in the power of our own analysis, of who is to blame and why, the strident confidence of politicians or business people in their preferred remedies. In this edition of Analysis, Michael Blastland asks whether these common assumptions might actually have their own dangers."

The programme will be repeated on Sunday 9th November at 21.30 GMT; the podcast and transcript are available from the programme website for a limited period only.

I shall post a review later.

You don't have to be smart to search here ...

... but it helps.

I'm prompted to write this post by a throwaway remark from David McCoy, in a post on election statistics: "You don’t have to be smart to search nowadays - all you have to do is enter the key snippet."

Ah, but how do you find the key snippet?

My son had a school essay to write comparing two films, so we thought it would be worth looking on the internet to find some analysis. But if you just search for the names of the films, you get endless cinema listings and DVD sales, plus a few fairly superficial newspaper cuttings.

So we tried another tack. Who are the key figures (in film theory, media studies, sociology) that might be name-dropped in a serious essay? Let's start with Lacan.

When we added "Lacan" to the name of one of the films, the search engine suddenly unearthed an entirely different set of web pages, including a bunch of blogs apparently created as part of a high school project (sixth-form) and talking about a set of related films including the two we were interested in. Could we have found these blogs any other way?

Okay, I admit it: my son hadn't read Lacan, hadn't even heard of him, but he had a bit of parental help. The point I'm making here is that sometimes the more knowledge you can put into the search, the more useful the results.

Even Microsoft sometimes misses important stuff when it searches the internet - for example when checking a brand name. See my post on Google and Longhorn.

Internet search looks rather like a P versus NP problem: verifying a given answer is easy, but finding it in the first place is hard. It's fine for checking unoriginality: for example, if a teacher suspects a student of plagiarism, she can put a suspiciously well-phrased sentence into an internet search engine and confirm that the sentence is not original. It is also fine for finding well-structured material: if you want to check Missouri voting statistics, you can probably find something relevant. But if you want to find an unusual thought, you will have to find an unusual combination of search terms.
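To make the analogy concrete, here's a toy sketch of my own (not anything from David McCoy's post), using a tiny invented "corpus" in place of the web. Checking whether a given sentence already exists is a single cheap pass; discovering a query that isolates the page you actually want means guessing among combinations of terms, a space that grows exponentially with the vocabulary.

```python
# Toy illustration of the verification/search asymmetry. The "corpus", vocabulary
# and page texts are invented for this sketch; a real search engine is obviously
# far more sophisticated.
from itertools import combinations

corpus = {
    "cinema listings and showtimes for both films",
    "a lacanian reading of the two films",        # the page we actually want
    "dvd sales and superficial newspaper reviews",
}

def matches(query_terms, page):
    """Cheap check: does this page contain every term in the query?"""
    return all(term in page for term in query_terms)

# Verification is easy: given a suspect sentence, one pass over the corpus
# settles whether it is original.
suspect = "a lacanian reading of the two films"
print(any(suspect in page for page in corpus))    # True, so the sentence is not original

# Finding is the hard part: without domain knowledge you are left trying
# combinations of ordinary terms until one isolates the unusual page.
vocabulary = ["films", "cinema", "dvd", "reviews", "lacanian"]
winning_query = None
for r in range(1, len(vocabulary) + 1):
    for terms in combinations(vocabulary, r):     # grows exponentially with vocabulary size
        hits = [page for page in corpus if matches(terms, page)]
        if hits == ["a lacanian reading of the two films"]:
            winning_query = terms
            break
    if winning_query:
        break
print(winning_query)                              # ('lacanian',) - the knowledge-bearing term
```

The point is not the code, of course, but the asymmetry: the verifier gets its candidate handed to it, while the searcher has to supply the knowledge.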

You do have to be smart to search here.

Sunday, September 14, 2008

Confirmation Bias

Adam Shostack has a couple of posts on Confirmation Bias. I've added some comments on Adam's blog; here's a digest of the discussion.

Things Only An Astrologist Could Believe

Adam picks up an astrological analysis of a recent action by Google: apparently the timing of the Chrome release was astrologically auspicious.

Vedic Astrologer: "Such a choice of excellent Muhurta with Chrome release time may be coincidental, but it makes us strongly believe that Google may not have hesitated to utilize the valuable knowledge available in Vedic Astrology in decision making."

Adam: "This is a beautiful example of confirmation bias at work. Confirmation bias is when you believe something (say, Vedic astrology) and go looking for confirmation. This doesn't advance your knowledge in any way. You need to look for contradictory evidence. For example, if you think Google is using Vedic astrology, they have a decade of product launches with some obvious successes. Test the idea. I strongly believe that you haven't."

Myself: "What our Vedic friend is actually telling us is that Google "may not have hesitated" in its use of Vedic astrology. To be honest, I also find it hard to believe that Google executives sat around dithering about whether to use Vedic astrology or not."

In further comments, the Vedic astrologer argues that the astrological method is no different from other forms of observational science: it uses the scientific method, which requires the prediction of future results. For example:
  • Hypothesis: The sun comes up every 24 hrs.
  • Method: I will time when the sun crosses the horizon.
  • Results: I successfully predicted 50 sunrises with a 100% degree of accuracy. This is further evidence that my hypothesis is correct.
  • Caveat: Although I note that, since 24 hrs is the period between sunrises by definition of a day, this is circular.
Actually, the claim that the sun rises at exactly 24-hour intervals is only believable if you live near the equator and know nothing about astronomy, or if you adopt a solar method for measuring the length of an hour. Away from the equator the length of the day changes through the year, so successive sunrises can be several minutes more or less than 24 hours apart; the interval only comes out at exactly 24 hours if the hour is itself defined by the sun's motion, which is just the circularity the caveat admits.

What confuses me about the hypothesis posed by our Vedic friend is whether he is trying to predict the decision-making behaviour of Google executives or the successful outcome of their decisions. Even if Google executives are making auspicious decisions, this could be "explained" either by the fact that they are employing the services of an astrologer, or by the fact that Google happens to have good (= astrologically blessed) executives. Or something.


More on Confirmation Bias

According to an old article by Michael Shermer in Scientific American [The Political Brain, June 2006], "a recent brain-imaging study shows that our political predilections are a product of unconscious confirmation bias". Devan Desai concludes that "hardcore left-wing and hardcore right-wing folks don’t process new data".

When I first read that line about "hardcore left-wing and hardcore right-wing folks" quoted in Adam's blog, I assumed it was talking about serious extremists - communists and neo-Nazis. It turns out the study was just looking at people with strong Democrat or Republican affiliations. Maybe any party affiliation at all seems pretty hardcore to some people.

As far as I can see, the study only actually looked at people with strong political opinions, and didn't compare them with any control group. Like, er, the middle-of-the-road folks who fund and write up this kind of research.

I wonder whether anyone would get research funding or wide publicity for exploring the converse hypothesis - that people with strong political opinions are actually relatively open-minded, and that the people who have the most entrenched opinions are the bureaucrats who staff the research funding bodies and the people who write popular articles for Scientific American.

(Of course I'm jumping to conclusions myself here, but that's what bloggers do, isn't it?)

I'm not saying I believe that bigots are more open-minded than wishy-washy middle-of-the-roaders. I'm just saying we need to be mistrustful of studies that are designed to confirm the prejudices of the researchers, and suspicious of people who latch onto these studies to prove a point. The problem is that there may be confirmation bias built into the way this kind of pseudo-scientific study is funded, organized and then publicized. Not surprising then if "the FMRI findings merely demonstrate what many of us already knew".

As director of the Skeptics Society, Michael Shermer latches onto a study showing that people are biased. Shermer himself has a particular set of bugbears, including evolutionary psychology, alien abductions, prayer and healing, and alternative medicine. Are we really to imagine that he approaches any of these topics with a truly open mind? And why should he anyway? The rest of us don't.

Monday, August 04, 2008

Peer Review

Tonight's Science programme on BBC Radio 4 was critical of the peer review process, in which scientific articles are filtered for publication according to the comments of other researchers in the same field. [Peer Review in the Dock, 4 August 2008]

The purpose of peer review is to give us confidence in the quality of published scientific research. Like many other social institutions, it has well-known weaknesses as well as strengths. [BBC News, Science will stick with peer review]

I have often been asked to provide peer reviews of articles for journals and conferences. Sometimes I find I know much more about the subject of the article than the authors do, or at least about some aspects of it. Even when my knowledge is less, I can usually find some areas of weakness or confusion in the article, demanding (in my opinion) either a significant rewrite or complete rejection.

Having gone to the trouble of providing these reviews, I used to be shocked to discover that papers sometimes slipped through to publication without the identified flaws being adequately corrected. Experienced authors (or their supervisors) know how to game the system, and most journals and conferences simply don't have the resources to prevent these games. Some years ago I wrote a critique of this process and identified a number of negative patterns [Review Patterns].

The BBC programme this evening identified several more, including the "famous institution" bias and the "publication" bias. The latter is particularly important for research that involves sophisticated statistics (such as medical research), because if only publishable data are included in the analysis, then the publication criteria may themselves distort the findings. The publication bias also affects the opinions of so-called experts, whose assumptions will have been reinforced by the papers they have read.
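To see why publication bias matters statistically, here is a minimal simulation of my own (not from the programme; the effect size, noise level and significance threshold are all assumed numbers). If only studies whose estimates clear a significance threshold get published, the published literature will systematically overstate a small true effect.

```python
# A minimal simulation of publication bias, assuming (hypothetically) that only
# "statistically significant" results get published. All numbers are illustrative.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1      # the small real effect being studied
STD_ERROR = 0.5        # sampling noise in each individual study's estimate
N_STUDIES = 10_000

# Each study produces a noisy estimate of the true effect.
estimates = [random.gauss(TRUE_EFFECT, STD_ERROR) for _ in range(N_STUDIES)]

# A study is "publishable" if its estimate is roughly two standard errors above zero,
# i.e. conventionally significant at the 5% level (one-sided, for simplicity).
published = [e for e in estimates if e > 1.96 * STD_ERROR]

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Mean of all estimates:       {statistics.mean(estimates):.2f}")
print(f"Mean of published estimates: {statistics.mean(published):.2f}")
print(f"Fraction published:          {len(published) / N_STUDIES:.1%}")
```

With these assumed numbers, the average published estimate comes out an order of magnitude larger than the true effect, even though every individual study was honestly conducted. Anyone whose opinions are formed only by the published papers will inherit the same distortion.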


URL for this post: http://tinyurl.com/cx6wbv

Tuesday, June 17, 2008

Memory and the Law

Rebecca Fordham writes:

"Many experts are challenging the view that eyewitnesses recounting what they saw is the best way of tapping their memory. Some think brain scans could be the way forward." [Memory Mixup, BBC News Magazine, 17 June 2008]

We already have technology that is supposed to detect discrepancies between what the witness remembers and what the witness says - it's called a polygraph or lie detector. Now we apparently need another technology that detects discrepancies between what the witness consciously remembers and what is buried in the witness's unconscious.

The lie detector has been controversial ever since its invention, and features in a Chesterton story called "The Mistake of the Machine". (Of course it is not the machine that makes the mistake, as Chesterton's hero Father Brown points out, but the people using the machine who misinterpret its output.)


Of course humans sometimes lie, and sometimes this can be detected by the polygraph, but that doesn't make the polygraph an instrument of truth. (For that matter, people sometimes blurt out secrets under the influence of alcohol or torture, or get artistic inspiration under the influence of mind-bending drugs, but none of these are reliable instruments of truth either.)

And human memory is sometimes unreliable, but that doesn't make the brain scan an instrument of truth either. Constructing evidence from the unconscious contents of a brain is no more reliable than constructing history from an archaeological sift through a mediaeval rubbish tip. It may be possible, and may yield some intriguing results, but the results are always speculative and uncertain.

Meanwhile, our "common sense" understanding of the brain and its contents is probably less accurate and less coherent than our understanding of mediaeval waste disposal. That's why psychoanalysts make more money than archaeologists. They do, don't they?

Update


"India has become the first country to convict someone of a crime relying on evidence from this controversial machine." [Source: New York Times, via Bruce Schneier]

Friday, November 30, 2007

Solitary Thinking

Hackers get busted. Dan Cvrcek raises a very interesting question about the thinking errors committed when bright people try to solve problems in isolation.

Tuesday, June 20, 2006

Science and Censorship

Dark materials. Nuclear scientist Joseph Rotblat campaigned against the atom bomb he had helped unleash. Is it time for today's cyber scientists to heed his legacy?
"There is an ever-widening gap between what science allows, and what we should actually do. There are many doors science can open that should be kept closed, on prudential or ethical grounds. Choices on how science is applied should not be made just by scientists."
Essay by Martin Rees
President of the Royal Society
[Guardian, Saturday June 10, 2006]

We cannot allow the terrorists to terrorise us, runs the response from Anderson: scientific research shouldn't be halted simply because it might fall into the wrong hands.
"The scientist's job is to shine light in the darkness, and if we occasionally burn our fingers on the candle, so be it. Lord Rees can choose the darkness if he wants. I'm not going to."

I am uneasy about both sides of this debate. Should science be restrained - either by scientists or by society? Do politicians represent the interests of society, or is there a better and more democratic way for society's interests to be represented?

The fact is that science is already restrained by all sorts of social and commercial forces - above all the willingness to fund particular kinds of research and not others. The choice is not simply between a risk-averse establishment (represented by the Royal Society) and a risk-seeking free-thinking radical alternative (represented by the Foundation).

Of course Anderson is right to be wary of the distorted perceptions of risk among politicians and the non-scientific public. But the proper response to this is a properly constituted debate. Meanwhile, politicians will often find stupid ways to use and abuse scientists.

As I reported last year (Research Under Fire), scientists and engineers at the University of California, Berkeley are wary of academic restrictions imposed by the US Federal Government in the name of national security. Thankfully this isn't the kind of restraint Rees is advocating.


Friday, June 09, 2006

A reasonable percentage (3)

One piece of intelligence was accurate.
A man described as Abu Musab al-Zarqawi's "spiritual adviser" inadvertently led US forces to the spot where the militant leader was finally located and killed, the US military says.

Major General William Caldwell said the operation to track down the most wanted man in Iraq was carried out over many weeks, before he was killed after two US air force F-16s bombed a house in a village north of Baghdad.

"The strike last night did not occur in a 24-hour period. It truly was a very long, painstaking deliberate exploitation of intelligence, information gathering, human sources, electronic, signal intelligence that was done over a period of time - many, many weeks," Gen Caldwell said on Thursday.


One piece of intelligence was flawed.
Anti-terror police raided a house at Forest Gate last week after saying they received "specific intelligence" that a chemical device might be found there.

Scotland Yard later said they had "no choice" but to act while the prime minister said it was essential officers took action if they received "reasonable" intelligence suggesting a terror attack.

Tony Blair said he backed the police and security services 101% and he refused to be drawn on suggestions that the armed operation had been a failure.


It's a reasonable percentage. (Previous posts: April 9th, April 18th.)

But that's part of the problem with intelligence - it delivers probability rather than certainty. Perhaps the outcomes are the right way around this time - the presumed-guilty man was killed, and the presumed-innocent man merely injured. (So we shouldn't complain, should we? Imagine the complaints if it had been the other way around!)

But over the long run, are there too many errors? (Difficult to tell, as we only know of some of the better publicized successes and failures.) Should we be uneasy about the errors of intelligence, and the consequences of acting upon erroneous intelligence? There are fundamental questions here about the relationship between knowledge (or ignorance) and action (or inaction).
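To put the probability point in concrete (and entirely hypothetical) terms, here is a back-of-the-envelope Bayes calculation. The accuracy figures and base rates below are assumptions chosen only to show the shape of the problem, not estimates of any real agency's performance.

```python
# Illustrative Bayes calculation: how often is "specific intelligence" actually right?
# All numbers below are hypothetical assumptions for the sake of the example.

def posterior(prior, true_positive_rate, false_positive_rate):
    """P(real threat | positive intelligence), by Bayes' theorem."""
    p_positive = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return (true_positive_rate * prior) / p_positive

# Suppose intelligence raises the alarm 90% of the time when there is a real threat,
# and wrongly raises the alarm 5% of the time when there isn't.
tpr, fpr = 0.90, 0.05

# If real threats are rare among leads (say 1 in 1000), most alarms are still false.
print(posterior(0.001, tpr, fpr))   # ~0.018, i.e. under a 2% chance of a real threat
# If the lead is already strong (say 1 in 2), the same intelligence is nearly decisive.
print(posterior(0.5, tpr, fpr))     # ~0.947
```

On these assumed numbers, the same quality of intelligence is nearly worthless against a rare threat and nearly decisive against a likely one. Acting on it sensibly depends on knowing which situation you are in - which is exactly the knowledge that is usually missing.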