Systems Thinking for Demanding Change, by Richard Veryard

Ethics and Uncertainty (3 March 2019)

How much knowledge is required, in order to make a proper ethical judgement?<br />
<br />
Assuming that consequences matter, it would obviously be useful to be able to reason about the consequences. This is typically a combination of <b>inductive reasoning</b> (what has happened when people have done this kind of thing in the past) and <b>predictive reasoning</b> (what is likely to happen when I do this in the future).<br />
<br />
There are several difficulties here. The first is the problem of induction - to what extent can we expect the past to be a guide to the future, and how relevant is the available evidence to the current problem? The evidence doesn't speak for itself; it has to be interpreted.<br />
<br />
For example, when Stephen Jay Gould was informed that he had a rare cancer of the abdomen, the medical literature indicated that the median survival for this type of cancer was only eight months. However, his statistical analysis of the range of possible outcomes led him to the conclusion that he had a good chance of finding himself at the favourable end of the range, and in fact he lived for another twenty years until an unrelated cancer got him.<br />
<br />
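The gap between a median and the shape of the whole distribution is easy to demonstrate. The following sketch is a toy log-normal model, not Gould's actual data: it pins the median survival at eight months and then asks how many cases nevertheless fall far out in the favourable tail.

```python
import random
import statistics

random.seed(42)

# Toy model of a right-skewed survival distribution (NOT Gould's data):
# a log-normal variable whose median is pinned at 8 months. The median
# says nothing about how far the favourable tail stretches.
months = [8 * random.lognormvariate(0, 1.2) for _ in range(100_000)]

median = statistics.median(months)  # ~8 months, by construction
long_survivors = sum(m > 60 for m in months) / len(months)

print(f"median survival: {median:.1f} months")
print(f"fraction surviving past 5 years: {long_survivors:.1%}")
```

The point is Gould's own: a median of eight months is compatible with a substantial minority surviving for many years, so the summary statistic alone cannot settle the prognosis.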
The second difficulty is that we don't know enough. We are innovating faster than we can research the effects. And longer-term consequences are harder to predict than short-term ones: even if we assume an unchanging environment, we usually have much less hard data about the longer term.<br />
<br />
For example, a clinical trial of a drug may tell us what happens when people take the drug for six months. But it will take a lot longer before we have a clear picture of what happens when people continue to take the drug for the rest of their lives. Especially when taken alongside other drugs.<br />
<br />
This might suggest that we should be more cautious about actions with long-term consequences. But that is certainly not an excuse for inaction or procrastination. One tactic of climate sceptics is to argue that the smallest inaccuracy in any scientific projection of climate change invalidates both the truth of climate science and the need for action. But that's not the point. Gould's abdominal cancer didn't kill him - but only because he took action to improve his prognosis. @<a href="https://twitter.com/aoc/status/1099854840145031168">Alexandria Ocasio-Cortez</a> has recently started using the term <q>climate delayers</q> for those who find excuses for delaying action on climate change.<br />
<br />
The third difficulty is that knowledge itself comes packaged in various disciplines or discourses. Medical ethics is dependent upon specialist medical knowledge, and technology ethics is dependent upon specialist technical knowledge. However, it would be wrong to judge ethical issues exclusively on the basis of this technical knowledge, and other kinds of knowledge (social, cultural or whatever) must also be given a voice. This probably entails some degree of cognitive diversity. Will Crouch also points out the uncertainty of predicting the values and preferences of future stakeholders.<br />
<br />
The fourth difficulty is that there could always be more knowledge. This raises the question as to whether it is responsible to go ahead on the basis of our current knowledge, and how we can build in mechanisms to make future changes when more knowledge becomes available. Research may sometimes be a moral duty, as Tannert et al argue, but it cannot be an infinite duty. <br />
<br />
The question of adequacy of knowledge is itself an ethical question. One of the classic examples in Moral Philosophy concerns a ship owner who sends a ship to sea without bothering to check whether it is sea-worthy. Some might argue that the ship owner cannot be held responsible for the deaths of the sailors, because he didn't actually know that the ship would sink. However, most people would see the ship owner as having a moral duty of diligence, and would regard him as accountable for neglecting this duty.<br />
<br />
But how can we know if we have enough knowledge? This raises the question of the "known unknowns" and "unknown unknowns", a distinction that is sometimes used with a shrug to imply that no one can be held responsible for the unknown unknowns.<br />
<br />
(And who is we? J. Nathan Matias argues that the obligation to experiment is not limited to the creators of an artefact, but may extend to other interested parties.)<br />
<br />
The French psychoanalyst Jacques Lacan was interested in the opposition between impulsiveness and procrastination, and talks about three phases of decision-making: the <b>instant of seeing</b> (recognizing that some situation exists that calls for a decision), the <b>time for understanding</b> (assembling and analysing the options), and the <b>moment to conclude</b> (the final choice).<br />
<br />
The purpose of Responsibility by Design is not just to prevent bad or dangerous consequences, but to promote good and socially useful ones. The result of applying Responsibility by Design should be not reduced innovation, but better and more responsible innovation. The time for understanding should not drag on forever; there should always be a moment to conclude.<br />
<br />
<br />
<hr />
<br />
Matthew Cantor, <a href="https://www.theguardian.com/environment/2019/mar/01/could-climate-delayer-become-the-political-epithet-of-our-times">Could 'climate delayer' become the political epithet of our times?</a> (The Guardian, 1 March 2019)<br />
<br />
Will Crouch, <a href="http://blog.practicalethics.ox.ac.uk/2012/01/practical-ethics-given-moral-uncertainty/">Practical Ethics Given Moral Uncertainty</a> (Oxford University, 30 January 2012) <br />
<br />
Stephen Jay Gould, <a href="http://www.stephenjaygould.org/library/gould_median-isn't-the-message.html">The Median Isn't the Message</a> (Discover 6, June 1985) pp 40–42.<br />
<br />
J. Nathan Matias, <a href="https://medium.com/mit-media-lab/the-obligation-to-experiment-83092256c3e9">The Obligation To Experiment</a> (Medium, 12 December 2016)
<br />
<br />
Alex Matthews-King, <a href="https://www.independent.co.uk/environment/pollution-chemicals-hormone-disruption-whales-reproduction-gender-a8798376.html">Humanity producing potentially harmful chemicals faster than they can test their effects, experts warn</a> (Independent, 27 February 2019)<br />
<br />
Christof Tannert, Horst-Dietrich Elvers and Burkhard Jandrig, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2002561/">The ethics of uncertainty. In the light of possible dangers, research becomes a moral duty</a> (EMBO Rep. 8(10) October 2007) pp 892–896<br />
<br />
Stanford Encyclopedia of Philosophy: <a href="https://plato.stanford.edu/entries/consequentialism/">Consequentialism</a>, <a href="https://plato.stanford.edu/entries/induction-problem/">The Problem of Induction</a><br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/There_are_known_knowns">There are known knowns</a> <br />
<br />
The ship-owner example can be found in an essay called "The Ethics of Belief" (1877) by <a href="http://en.wikipedia.org/wiki/William_Kingdon_Clifford" title="William Kingdon Clifford (Wikipedia)">W.K. Clifford</a>, in which he states that "it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence".<br />
<br />
I describe Lacan's model of time in my book on <a href="https://leanpub.com/orgintelligence/">Organizational Intelligence</a> (Leanpub 2012)<br />
<br />
Related posts: <a href="https://demandingchange.blogspot.com/2010/04/ethics-and-intelligence.html">Ethics and Intelligence</a> (April 2010), <a href="https://demandingchange.blogspot.com/2018/06/practical-ethics.html">Practical Ethics</a> (June 2018), <a href="https://rvsoapbox.blogspot.com/2018/11/big-data-and-organizational-intelligence.html">Big Data and Organizational Intelligence</a> (November 2018)<br />
<br />
<span style="font-size: xx-small;">updated 11 March 2019</span>

On the true nature of knowledge (26 April 2014)

<p>@<a href="https://twitter.com/pickover/status/459676294733889536">pickover</a> suggests that these two books, in theory, contain the sum total of all human knowledge. "The Joy of Logic", he remarks (via @<a href="https://twitter.com/DavidFCox">DavidFCox</a>).<br />
<br />
</p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFqaUpC1D5fdzGHm7z6RJ3fHV_ZGf5RFBSbATDFQ0gJayayOn7JQBVqDM6otrrjMHMw4ve0j3PPR9grGHOvW8oftdVRS0Dphepl0jZFFYa9F2xi7OkPfpyLeJVphHWvlJmzSUFMJaJgZX1/s599/BHFnkYxCQAIe9Yo.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="439" data-original-width="599" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFqaUpC1D5fdzGHm7z6RJ3fHV_ZGf5RFBSbATDFQ0gJayayOn7JQBVqDM6otrrjMHMw4ve0j3PPR9grGHOvW8oftdVRS0Dphepl0jZFFYa9F2xi7OkPfpyLeJVphHWvlJmzSUFMJaJgZX1/s320/BHFnkYxCQAIe9Yo.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://pbs.twimg.com/media/BHFnkYxCQAIe9Yo.jpg">What They Teach You At Harvard Business School<br />What They Don't Teach You At Harvard Business School</a></td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table><p>Why is this wrong? Because knowledge doesn't follow the laws of elementary arithmetic. Adding two lots of knowledge together doesn't give you twice as much knowledge. (Does anyone really think that teaching children creationism as well as evolution will double their education?)<br />
<br />
Knowledge is like light. When you add two light beams together, you may sometimes get more light. But you may also get puzzling patches of darkness. This is called interference. In high-school physics we learn that this is because light is a wave. If the two waves are out of phase, they cancel each other out.<br />
<br />
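The interference claim can be checked numerically. A minimal sketch, summing two unit sine waves that are either in phase or half a cycle out of phase:

```python
import math

def peak_amplitude(phase_shift, n=1000):
    """Largest value of sin(t) + sin(t + phase_shift) over one cycle."""
    return max(abs(math.sin(t) + math.sin(t + phase_shift))
               for t in (2 * math.pi * i / n for i in range(n)))

print(peak_amplitude(0))        # in phase: the waves reinforce (~2)
print(peak_amplitude(math.pi))  # out of phase: they cancel (~0)
```

Two beams of equal brightness can thus sum to double the light, or to none at all, depending entirely on the phase relationship.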
(Curiously, uncertainty is also like light. When you add two pieces of uncertainty together, you may get less uncertainty. This is called hedging. Works best when the uncertainty is out of phase.)<br />
<br />
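The hedging remark is only half a joke, and the arithmetic behind it is easy to sketch. In this toy example (hypothetical payoffs, nothing more), two positions respond in opposite directions to the same random shock, so each is uncertain on its own while their sum is not:

```python
import random
import statistics

random.seed(0)

# Two hypothetical payoffs driven by the same shock, "out of phase":
# asset A gains when the shock is up, asset B loses by the same amount.
shock = [random.gauss(0, 1) for _ in range(50_000)]
asset_a = [10 + 3 * s for s in shock]
asset_b = [10 - 3 * s for s in shock]

sd_a = statistics.stdev(asset_a)
sd_combined = statistics.stdev([a + b for a, b in zip(asset_a, asset_b)])

print(f"stdev of A alone: {sd_a:.2f}")
print(f"stdev of A + B:   {sd_combined:.2f}")  # perfectly hedged: ~0
```

A perfect hedge like this one cancels the uncertainty exactly; real hedges are only partially out of phase, so they reduce rather than eliminate it.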
<br />
Obviously these two books are out of phase.<br />
<br />
</p><hr /><blockquote class="twitter-tweet"><p dir="ltr" lang="en">Forget the old joke about the two 'What teach you/don't teach you at Harvard Business School' books comprising all human knowledge<br /><br />These two books comprise the entire universe <a href="https://t.co/3QcZn3FcVk">pic.twitter.com/3QcZn3FcVk</a></p>— davidallengreen (@davidallengreen) <a href="https://twitter.com/davidallengreen/status/1392923121129250818?ref_src=twsrc%5Etfw">May 13, 2021</a></blockquote><p> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlfp6xC0HIqzo2tplgs-wcp1wJ3mq7BJX90WGI6w8rxP1S11jUZSDUOcYCCMIsdX1PHJZkl348edtlaJJDkUtHCSosbOgknC8QnDW87TaeBzuUltxJp33ygWIL0MvYarztsIh5YGbBvFZo/s680/E1SoHOHXoAcVpQE.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="601" data-original-width="680" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlfp6xC0HIqzo2tplgs-wcp1wJ3mq7BJX90WGI6w8rxP1S11jUZSDUOcYCCMIsdX1PHJZkl348edtlaJJDkUtHCSosbOgknC8QnDW87TaeBzuUltxJp33ygWIL0MvYarztsIh5YGbBvFZo/s320/E1SoHOHXoAcVpQE.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">On The Map<br />Off The Map<br /></td></tr></tbody></table> <p>Related posts<br />
<br />
<a href="http://rvsoftware.blogspot.com/2014/04/does-big-data-release-information-energy.html">Does Big Data Release Information Energy?</a> (April 2014) </p>

The Wisdom of the Tomato (24 March 2011)

Various people have tweeted the following aphorism.<br />
<br />
<q>Knowledge is knowing a tomato is a fruit. Wisdom is knowing not to put it in a fruit salad.</q><br />
<br />
Please permit me to quibble with this aphorism. Classifying tomatoes as fruit is merely <b>information</b>. This classification is supported by <b>data</b>, such as the observation that the tomato contains its own seeds. Knowing not to put it into a fruit salad is a culinary <q>best practice</q>, based on a series of social conventions about the proper constitution of fruit salad and its place within a meal. So this is <b>knowledge</b>, or what is sometimes called <q>received wisdom</q>. However, <b>innovation </b>often involves disobeying social conventions and surprising those who rely excessively upon received wisdom. For example, how did chefs discover that it was okay to put flower petals into salads (<q>next practice</q>)? So <b>courage </b>is knowing that you are not <q>supposed</q> to put tomatoes into fruit salad, but doing it anyway. And real <b>wisdom </b>is not inflicting such gross culinary experiments on the wrong people at the wrong time in the wrong way.<br />
<br />
<hr />
<br />
Wikipedia attributes this aphorism to the Irish rugby captain <a href="http://en.wikipedia.org/wiki/Brian_O%27Driscoll">Brian O'Driscoll</a>. Various interpretations can be found in the comments to Brendan Cole's blog <a href="http://www.rte.ie/ie/sportsixnations/entry/what_did_bod_mean">What did BOD mean?</a> (Feb 2009)<br />
<br />
On the <a href="http://www.bbc.co.uk/programmes/p014cqn3">Unbelievable Truth</a> (Series 10 Episode 5), @RealDMitchell rants about whether a tomato is a fruit or a vegetable. He claims that the US Government taxes tomatoes as vegetables, and regards this as more authoritative than mere science.<br />
<br />
See also my post <a href="http://rvsoapbox.blogspot.co.uk/2012/11/co-production-of-data-and-knowledge.html">Co-Production of Data and Knowledge</a> (Nov 2012) <br />
<br />
<span style="font-size: xx-small;">Updated 29 January 2013</span>

Ethics and Intelligence (6 April 2010)

@<a href="http://twitter.com/flowchainsensei/status/11581487036">flowchainsensei</a> (Bob Marshall) argues that <a href="http://www.fallingblossoms.com/opinion/content?id=1001">All Executives are Unethical</a> (pdf).<br />
<br />
More precisely, he argues that it is unethical to believe things without proper evidence. (He is particularly interested in beliefs about product and software development, but the argument applies more generally.) <br />
<br />
As far as I can see, there are three steps in this argument.<br />
<br />
1. People are ethically responsible for their beliefs. (According to Bob, this was the basis for a controversial paper presented to the Metaphysical Society by <a href="http://en.wikipedia.org/wiki/William_Kingdon_Clifford" title="Wikipedia: William Kingdon Clifford">William Kingdon Clifford</a> in 1876.) <br />
<br />
2. An unfounded belief is unethical.<br />
<br />
3. A person who holds unfounded beliefs is unethical. <br />
<br />
<br />
Let's look at step 1 first. This appears to entail an ethical obligation to subject one's beliefs to some kind of "due diligence". However, most of our beliefs are based, not on evidence that we have personally collected and analysed, but at least partly on evidence that has been filtered through other sources. We may have reasons to trust certain sources more than others, but if it is unethical to believe things without proper evidence, it would also surely be unethical to trust things without proper evidence. We may accept an ethical obligation to subject our beliefs to "due diligence", but this is normally a collective obligation rather than an individual obligation.<br />
<br />
Step 2 asserts that any failure to ground beliefs in proper evidence is an ethical failure. People are rightly held accountable for failing to act in certain circumstances (for example failing to save someone from drowning), but ethical censure generally assumes both awareness (knowing that someone needed rescue) and capability (being able to swim). So the problem with Step 2 is that the more complex the beliefs are, the greater the intellectual power (intelligence) that is required to appreciate and thoroughly investigate these beliefs. If the management team isn't individually or collectively intelligent enough to understand what proper evidence would look like, then believing things without proper evidence is a consequence of insufficient intelligence.<br />
<br />
Does being stupid count as an ethical failure? (Being deliberately or avoidably stupid might, but most instances of stupidity are not deliberate.) Appointing people and teams who don't have enough intelligence might be unethical, but only if the appointment was deliberate or avoidable, and so on along the responsibility chain until we can find someone who should have known better.<br />
<br />
Step 3 assumes that we can categorize people as ethical or unethical based on incidence of ethical or unethical behaviour. Once we have a hard-and-fast concept of sin, then we can define a sinner as a person who has committed (and not yet purged) at least one sin. The trouble with this is that if we are all sinners, the category of "sinner" ceases to have much value except for the purposes of hellfire rhetoric. Labelling all executives as unethical (and why stop at executives, by the way) becomes merely a rhetorical gesture.<br />
<br />
<br />
<hr /><p>So where does this leave the virtues of diligence, responsibility and probity? Firstly, I hold that these are collective virtues - executives display moral character in a particular organizational setting, and we may not know how their ethics would stand up in a different setting. <br />
<br />
Secondly, I think character and intelligence are distinct virtues. We should not automatically suppose that intelligent people are more ethical than less intelligent people, and therefore we should not define "ethical" to mean something that only very intelligent or highly educated people can comply with.<br />
<br />
Thirdly, there is a widespread belief (especially among consultants) in the value of knowledge (although I don't know exactly what would count as proper evidence for this belief - if executives are unethical, I dread to think where this leaves consultants). If we define knowledge as justified true belief, then knowledge is degraded to the extent that it is unjustified or untrue, or for that matter disbelieved. If it is unethical to believe something without proper evidence, it may sometimes also be unethical to disbelieve something. Sometimes excessive scepticism shades into cynicism and negativity, and maybe this can be just as unethical as unjustified optimism.</p><hr /><p>Related posts: <a href="https://demandingchange.blogspot.com/2013/02/intelligence-and-governance.html">Intelligence and Governance</a> (February 2013), <a href="https://demandingchange.blogspot.com/2019/03/ethics-and-uncertainty.html">Ethics and Uncertainty</a> (March 2019) <br /></p>

Knowledge Claims (9 March 2010)

@<a href="https://twitter.com/JDeragon/status/10215958374">JDeragon</a> blogs about <a href="http://www.relationship-economy.com/?p=9062">the emergence of a "know" profile</a>. He identifies four types of knowledge - intellectual, social, creative and spiritual - and advocates constructing personal profiles that express our "knowledge inventory" across these four types.<br />
<br />
The metaphor of "knowledge inventory" is based on his assertion that people are containers of knowledge. But what if people are NOT "containers" of knowledge, asks @<a href="https://twitter.com/EskoKilpi/status/10218222516">EskoKilpi</a>, who argues that one of the main challenges for knowledge management is <a href="http://eskokilpi.blogging.fi/2010/03/06/bridging-the-gap-between-knowing-and-acting/">bridging the gap between knowing and acting</a>.<br />
<br />
I fully concur with Esko's objection to the "container" metaphor, and I agree that the relationship between knowing and acting is important - in fact it's a critical connection in my model of <a href="http://organizational-intelligence.wikispaces.com/">organizational intelligence</a>. However, I think we have to be careful not to imply that the gap between knowing and acting can ever be completely closed. There is always a need to act under conditions of uncertainty.<br />
<br />
Even if we were willing to regard knowledge-as-content, Jay Deragon acknowledges that this knowledge would need to be measured and vetted over time. So the best that we could possibly expect from a set of knowledge profiles is a collection of knowledge claims, together with some information that would allow us to evaluate a given claim. Does this person really know everything about project management? Does this medical researcher really know that this procedure is safe and effective?<br />
<br />
When I pointed out that I can claim knowledge about all sorts of things, and asked who is the best judge of how much I really know, @<a href="https://twitter.com/oscarberg/status/10218937937">oscarberg</a> replied that "real life is the best judge".<br />
<br />
But since we don't have a reliable way of interrogating real life in real-time, we must surely treat all knowledge-claims with caution.

Puzzles and Mysteries (5 January 2010)

For my second post inspired by Malcolm Gladwell's latest book "What the Dog Saw, and Other Adventures", I want to turn to the chapter "Open Secrets" (originally published in the <a href="http://www.newyorker.com/reporting/2007/01/08/070108fa_fact">New Yorker, Jan 2007</a>).<br />
<br />
In his analysis of different kinds of intelligence, Gladwell picks up the distinction between puzzles and mysteries originally proposed by Gregory Treverton. (See <a href="http://www.techsoc.com/reshaping.htm">Curtis Frye's review</a> of his 2003 book Reshaping National Intelligence for an Age of Information. See also <a href="http://www.smithsonianmag.com/people-places/presence_puzzle.html">Risks and Riddles</a>, published in the Smithsonian Magazine, June 2007. And see my post on <a href="http://demandingchange.blogspot.com/2010/01/making-intelligence-relevant.html">Making Intelligence Relevant</a>.)<br />
<br />
A puzzle is characterized by having a definite answer, if we can only find it. A puzzle is difficult only because we don't have enough information. For example, Gladwell and Treverton classify the question "Where is Osama Bin Laden?" as a puzzle. If we knew exactly where Bin Laden was, then this would cease to be a puzzle at all. So the purpose of intelligence here is to get relevant pieces of information that help narrow down the field of search.<br />
<br />
A mystery is characterized by ambiguity and uncertainty. For example, the question "What is Osama Bin Laden up to?" There is no shortage of information here; there may even be too much. The difficulty is in interpreting the information correctly.<br />
<br />
Gladwell uses the distinction to discuss the Enron case. He argues against the popular view that Enron management concealed their dealings, and points out that the information was freely available to those that took the trouble to wade through the documents. In contrast to the Watergate case, which was only revealed because an insider (the famous "Deep Throat") leaked information to Woodward and Bernstein, journalists investigating the Enron case simply downloaded the information they needed from the Enron website. Indeed, several years before Enron fell, a bunch of MBA students had carried out a pretty accurate analysis, based merely on the published accounts. In Gladwell's opinion, Enron therefore counts as a mystery rather than a puzzle.<br />
<br />
This is consistent with an assertion made by Harold Wilensky in his 1967 book Organizational Intelligence (which Gladwell has cited elsewhere), that a sophisticated reporter working with open sources may achieve more than an agent working with top-secret information. Wilensky highlights the distorting effects a doctrine of secrecy can have on intelligence: one example in Wilensky's book concerns the possible consequences of an American invasion of Cuba, where reporters read the situation more accurately than the CIA experts. <br />
<br />
Dishonest people can create puzzles simply by withholding important information. Enron executives went to prison because their conduct was judged dishonest. Gladwell agrees that Enron was reckless and incompetent, but defends the company and its executives against the charge of concealment. The state of Enron's finances was too complicated even for its own executives to understand; we might imagine that some of the executives consciously took advantage of this complexity, but we could equally imagine that this was a situation that Enron blundered into without any deliberate strategy. Hence the mystery.<br />
<br />
Characterizing Enron as a mystery also goes some way to explaining why the auditors were useless in detecting the fraud - if it was indeed a fraud. The audit process is designed to spot errors and omissions in the financial accounts, which might indicate dodgy dealings somewhere in the organization. The audit process is not designed to spot excessive complexity or risk, and auditors do not generally practice the kind of ratio analysis that the MBA students used. (Auditing is therefore one of those "best practices" whose flaws are exposed by complexity.)<br />
<br />
Another lens through which the Enron accounts could reasonably have been viewed was the taxation lens. The fact that Enron wasn't paying much corporation tax (in several years it paid no income tax at all) might have been seen as an important clue to its lack of real profitability. However, those who wanted to believe in Enron's profitability could easily convince themselves that the low level of tax payments represented clever tax avoidance - in other words, interpreting it as evidence of the smartness of the accountants and/or the stupidity of the tax authorities. (Thus the accountancy lens was used to discredit alternative lenses that might have revealed alternative truths.)<br />
<br />
Another way of thinking about the difference between puzzles and mysteries is that puzzles are about people (deliberate conspiracies) while mysteries are about systems. As Gladwell tells the story, Enron wasn't about a handful of bad people misleading everyone else, it was about a system that led everyone astray. Working out a mystery is not a question of collecting more information, but about finding a frame or lens for systematic analysis, to make sense of the information we already have.<br />
<br />
Does it make sense to divide intelligence problems into puzzles and mysteries, as Treverton and Gladwell do? I'm not convinced there is a simple either/or, but I'm not sure that's what Treverton and Gladwell are claiming anyway. I think what is important here is not to identify which problems count as puzzles and which ones count as mysteries, but to acknowledge that at least some problems do count as mysteries in Treverton's terminology, and therefore we need an intelligence capability that helps us to make sense of too much information, and not rely solely on an intelligence capability that merely gathers more information in the hope of resolving something. Examples of this need can be found both in national security and in business.<br />
<br />
In terms of organizational intelligence, this means achieving a good balance between two capabilities - the information gathering capability (Perception) and the analysis capability (Sense-Making) - and linking effectively into the remaining capabilities (Decision, Action, Learning). Sometimes merely collecting more information doesn't help solve the problem, especially if we don't have the capacity to interpret the information we already have, or if the new information merely provides an excuse for further procrastination.<br />
<br />
(This is similar to the point some of us were discussing recently on Twitter: whether it is always good to produce more ideas, or whether it is sometimes possible to have too many ideas, especially if you don't have the capacity to use them effectively.)<br />
<br />
Was Enron intelligent? Enron certainly did some clever and innovative things, but its spectacular failure suggests that there were some flaws in the thinking. Here are two suggestions. Firstly, a collective failure within Enron to appreciate the scale of its exposure to risk indicates a weakness in sense-making - an insufficiently robust way of seeing beyond the complexity of the accounts and understanding the true financial state of the company. And secondly, a collective refusal to learn from the voices of doubt coming from external critics. When a company is convinced of its own cleverness, this conviction can become a barrier to learning, and therefore a limitation of true intelligence.<br />
<br />
<hr />
Other examples of the puzzle/mystery dichotomy<br />
<br />
<ul>
<li>Bob Rosner (<a href="http://abcnews.go.com/Business/CareerManagement/story?id=2875012">ABC News Feb 2007</a>) suggests that puzzles are left-brain and mysteries are right-brain</li>
<li><a href="http://larryferlazzo.edublogs.org/2009/12/13/is-figuring-out-how-to-make-schools-better-a-puzzle-or-a-mystery/">Larry Ferlazzo</a> suggests that puzzles are data-driven and mysteries are data-informed</li>
<li>Security blogger Gunnar Peterson suggests that <a href="http://1raindrop.typepad.com/1_raindrop/2008/04/authorization-i.html">Authorization is a Puzzle, Authentication is a Mystery</a></li>
<li>The Moonrider blog looks at accidents versus collisions in <a href="http://wmoon.wordpress.com/2009/12/19/motorcycle-safety%e2%80%94is-it-a-puzzle-or-is-it-a-mystery/">Motorcycle Safety<br /></a></li>
<li>Brett Miller quotes Gary Kasparov on Life versus Chess: <a href="https://gbrettmiller.blog/2010/01/27/uncertainty-is-far-more-challenging/">Uncertainty is far more challenging</a></li><li>In the 1950s, Frederik Buytendijk described men as puzzles and women as mysteries (via Annemarie Mol).<br /></li>
</ul>
Related posts: <a href="https://demandingchange.blogspot.com/2010/01/connecting-dots.html">Connecting the Dots</a>, <a href="https://demandingchange.blogspot.com/2010/01/explaining-enron.html">Explaining Enron</a>, <a href="https://demandingchange.blogspot.com/2010/01/making-intelligence-relevant.html">Making Intelligence Relevant</a><br />
<br />
Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-5409059759529802512009-09-20T19:33:00.000+01:002020-02-18T22:26:59.991+00:00Linking Facial ExpressionsTwo studies about facial expressions have been reported by the BBC in the past few weeks.<br />
<br />
<a href="http://news.bbc.co.uk/1/hi/sci/tech/8199951.stm">Facial expressions 'not global'</a> (14 August 2009). In research carried out by a team from Glasgow University, East Asian observers found it more difficult to distinguish some facial expressions. (Findings published in Current Biology journal.)<br />
<br />
<a href="http://news.bbc.co.uk/1/hi/health/8261491.stm">Delinquents 'misinterpret anger'</a> (19 September 2009). A Japanese study of young offenders found they often misread facial expressions. (Findings published in Child and Adolescent Psychiatry and Mental Health journal.)<br />
<br />
I have not read the academic papers, but there may be an interesting link between the two studies. Certain perceptions appear to be missing in some people (varying perhaps with culture and/or personality type), and these gaps are linked to patterns of behaviour. The Japanese researchers are looking at personality type, while the European researchers are looking at cultural differences.<br />
<br />
But how do such links ever get identified, especially when it is entirely possible that nobody reads both of these journals? I only spotted it myself because I read the second story, recalled having read something similar not long before, and was able to find my way back to the first story.<br />
<br />
The first general point to pay attention to here is the process of memory retrieval, in this case involving a collaboration between my brain and a simple internet search. <br />
<br />
The second general point is about the fragmentation of knowledge and the "architecture" of joined-up research. How do such accidental links influence not only what we happen to know, but also what becomes available to be known?<br />
<br />
<br />
Related post: <a href="https://rvsoftware.blogspot.com/2019/03/affective-computing.html">Affective Computing</a> (March 2019) Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com2tag:blogger.com,1999:blog-1254315679163990153.post-13232066471176803112009-03-30T11:41:00.000+01:002019-03-02T17:53:28.858+00:00Hard scienceFound an extraordinary exam question in a post called <a href="http://cabalamat.wordpress.com/2007/08/31/gcses-are-dumbed-down-and-getting-worse/">GCSEs are dumbed down and getting worse</a>, by Cabalamat, taken from an actual physics exam (Edexcel GCSE Physics P1b reference 5010 taken on 9 November 2006) (via <a href="https://twitter.com/bengoldacre/status/1413671111">Ben Goldacre</a>).<br /><br /><blockquote><i>Our Moon seems to 'disappear' during an eclipse. Some people say this is because an old lady covers the Moon with her cloak. She does this so that thieves cannot steal the shiny coins on the surface.<br /><br />Which of these would help scientists to prove or disprove this idea?<br /><br />A - collect evidence from people who believe the lady sees the thieves<br />B - shout to the lady that the thieves are coming<br />C - send a probe to the Moon to search for coins<br />D - look for fingerprints<br /></i></blockquote><br />I have read this question several times, and I am still unsure what answer they are looking for.<br /><br />A - Well, this is exactly the kind of thing that social scientists would probably do. The question doesn't specify what kind of scientists it is talking about.<br /><br />B - Well, this is a good experimental approach. If shouting affected the outcome, and if shouting about thieves produced a significantly different outcome to shouting about other things, then this would be good evidence in support of the hypothesis. 
However, if shouting didn't affect the outcome, this wouldn't help to disprove the hypothesis because there is a vacuum between the Earth and the Moon and sound doesn't carry in a vacuum. The old lady might have cybertronic ears, but then again she might be deaf.<br /><br />C - Finding or not finding coins doesn't really help us much. If there are coins, it could mean that the old lady has outwitted the thieves, or that the thieves thought it would be unlucky to take all the coins, or that there aren't any thieves. If there are no coins, it could mean we are looking in the wrong place, or it is the wrong time of the month, or Fred Goodwin's got them.<br /><br />D - Fingerprints. Same as coins. By the way, are we looking for fingerprints on the coins, or fingerprints on the cloak?<br /><br />I suspect that any child who really understands science and the scientific method will waste more time on this question than a child who hasn't a clue. So this isn't just dumbing down, it is levelling down.Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com6tag:blogger.com,1999:blog-1254315679163990153.post-87498613415622771432009-03-14T11:53:00.001+00:002023-03-16T08:10:11.581+00:00Thinking with the MajorityA.A. Milne "wrote somewhere once that the third-rate mind was only happy when it was thinking with the majority, the second-rate mind was only happy when it was thinking with the minority, and the first-rate mind was only happy when it was thinking". (War with Honour)<br />
<br />
I wrote somewhere once that "thinking with the majority" is an excellent description of Google.<br />
<blockquote>'The suggested improvements (in Google) are just great for those people who want to ask the same questions as everyone else, and get the same answers. Google rankings already depend on the clicks of previous websurfers, and this dependency will become more sophisticated. Google will therefore support, with ever-greater efficiency and effectiveness, an intellectual activity characterized by A.A. Milne (author of Winnie-The-Pooh) as "Thinking with the Majority". '<br />
</blockquote>And as <a href="http://www.armannd.com/minority-vs-majority-vs-truth.html">Titus-Armand</a> points out, it is also a good description for<br />
<blockquote>'reliance upon authority in which the “authority figure” is represented by the entire population rather than a single individual or a particular group'.<br />
</blockquote><br />
What about thinking with the minority? There is a popular meme known as "thinking the unthinkable", and I think this is what the third-class mind supposes the second-class mind to be doing.<br />
<br />
If you are one of those who are happiest when thinking, please <a href="http://feeds2.feedburner.com/DemandingChange">subscribe to this blog</a> and also <a href="https://twitter.com/richardveryard">follow me on Twitter</a>.<br />
<br />
<hr />Update: within a few minutes of this item's being syndicated on Twitter, <a href="http://twitter.com/HotFusionMan/status/1326846320">Al Chou</a> reminds me of a related quote: "A great many people think they are thinking when they are merely rearranging their prejudices." Was it Churchill or William James? The authority of the majority (aka Google) prefers the latter; who am I to argue?Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com2tag:blogger.com,1999:blog-1254315679163990153.post-39277171753079444362009-03-06T17:00:00.005+00:002019-03-02T17:53:29.285+00:00Negative Evidence@<a href="https://twitter.com/snowded">snowded</a> blogs on <a href="http://www.cognitive-edge.com/blogs/dave/2009/03/negative_evidence_and_the_vill.php">Negative evidence and the village idiot syndrome<br />
</a><br />
<br />
People who should know better (so-called scientists) make this problem worse by using the phrase "no scientific evidence". For example "no scientific evidence that eating infected meat carries any risk to humans" or "no scientific evidence that mobile phones cause headaches".<br />
<br />
This creates the impression that there may actually be lots of evidence, but we can safely ignore it because it hasn't been collected or approved by somebody in a white coat.<br />
<br />
Just as some kinds of evidence are inadmissible in a court of law, so some kinds of evidence are inadmissible in a scientific journal. Among other things, this leads to publication bias: people perform calculations based only on the data that have passed through some publication filter, and that dataset is systematically incomplete.<br />
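The distortion can be made concrete with a small simulation (a hypothetical sketch, not from the original post; the function and variable names are mine). Suppose the true effect under study is exactly zero, but only studies that happen to produce a positive "significant" result pass the publication filter:<br />

```python
import random
import statistics

# Illustrative sketch: publication bias when the true effect is zero.
# Each "study" draws n noisy observations; only studies whose result looks
# positive and "significant" pass the publication filter.

random.seed(42)

def run_study(true_effect=0.0, n=30):
    """Simulate one study and report (observed effect, passed the filter?)."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return mean, mean / se > 1.96  # crude one-sided z-test

all_results = [run_study() for _ in range(2000)]
every_effect = [m for m, _ in all_results]
published = [m for m, significant in all_results if significant]  # the filter

print(f"mean effect across all studies:  {statistics.mean(every_effect):+.3f}")
print(f"mean effect in the 'literature': {statistics.mean(published):+.3f}")
print(f"fraction of studies published:   {len(published) / len(all_results):.1%}")
```

The published subset shows a clearly positive average effect even though every study was measuring nothing at all: the filter, not the data, produces the finding. Anyone performing calculations only on the published data inherits that distortion.<br />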
<br />
See my comment to <a href="http://www.emergentchaos.com/archives/2008/06/science_isnt_about_checkl.html">Science isn't about Checklists</a>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-47041119843115949702009-03-02T09:35:00.000+00:002019-03-02T17:53:29.485+00:00Learning from ExperienceUnder the heading <a href="http://www.cognitive-edge.com/blogs/dave/2009/02/big_fish_eat_little_fish_and_s.php">Big Fish Eat Little Fish</a>, Dave Snowden posts a series of photos, telling a visual story with a dramatic conclusion. (Go on, look at the story, I can wait here until you come back.)<br /><br />An obvious lesson to draw from this story is one about learning from experience.<br /><br />However, there is a further twist: the final photo in the series turns out to be faked. (There is further explanation of this on <a href="http://www.snopes.com/photos/accident/crane.asp">Snopes</a>.)<br /><br />In addition to learning from experience, another lesson we could draw from this story is to be a little wary of situations whose narrative structure is too good. (When there are forces within a story that make us so want it to be true, maybe that's the time to switch logical levels.)<br /><br />But of course, even if the story is part-fiction, that doesn't stop us learning from it. We do need to learn from experience, and experience includes fiction. The story is therefore True (at some level) because it is Relevant and Meaningful.Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-24163634630916818792008-11-07T12:00:00.000+00:002019-03-02T17:53:29.085+00:00Dead CertBBC Radio Four broadcast an excellent programme on political doubt and certainty last night.<br /><br /><blockquote>"Doubt seems a dangerous thing in politics. 
If possible, you don't admit it; not about your values, nor your analysis, nor the policies that will magically bring about the change that you are certain is needed. Confidence, by contrast, thrives: confidence in the power of our own analysis, of who is to blame and why, the strident confidence of politicians or business people in their preferred remedies. In this edition of Analysis, Michael Blastland asks whether these common assumptions might actually have their own dangers." </blockquote><br />The programme will be repeated on Sunday 9th November at 21.30 GMT; podcast and transcript are available from the <a href="http://news.bbc.co.uk/1/hi/programmes/analysis/7712933.stm">programme website</a> for limited period only.<br /><br />I shall post a review later.Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-66944236474617267242008-11-07T09:25:00.000+00:002023-03-16T20:45:02.800+00:00You don't have to be smart to search here ...<p>... but it helps. </p><p>I'm prompted to write this post by a throwaway remark from David McCoy, in a post on <a href="http://web.archive.org/web/20090604084628/http://blogs.gartner.com/dave_mccoy/2008/11/06/bad-statistics-8976-faulty/" title="Bad Statistics - 89.76 % Faulty">election statistics</a>: "You don’t have to be smart to search nowadays - all you have to do is enter the key snippet."</p><p>Ah, but how do you find the key snippet? </p><p>My son had a school essay to write comparing two films, so we thought it would be worth looking on the internet to find some analysis. But if you just search for the names of the films, you just get endless cinema listings and DVD sales, plus a few fairly superficial newspaper cuttings. </p><p>So we tried another tack. Who are the key figures (film theory, media studies, sociology) that might be name-dropped in a serious essay? </p><p>Let's start with Lacan.
When we added "Lacan" to the name of one of the films, the search engine suddenly unearthed an entirely different set of web pages, including a bunch of blogs apparently created as part of a high school project (<a href="http://en.wikipedia.org/wiki/Sixth_form" title="Wikipedia: Sixth form">sixth-form</a>) and talking about a set of related films including the two we were interested in. Could we have found these blogs any other way? </p><p>Okay I admit it, my son hasn't read Lacan, hadn't even heard of him, but he had a bit of parental help. The point I'm making here is that sometimes the more knowledge you can put into the search, the more useful the results. </p><p>Even Microsoft sometimes misses important stuff when it searches the internet - for example when checking a brand name. See my post on <a href="http://rvsoftware.blogspot.com/2003/10/google-and-longhorn.html">Google and Longhorn</a>.</p><p>Internet search looks rather like a <a href="http://www.claymath.org/millennium/P_vs_NP/">P v NP</a> problem. It's fine for checking unoriginality: for example, if the teacher suspects a student of plagiarism, she can put a suspiciously well-phrased sentence into an internet search engine and confirm that the sentence is not original. It is also fine for finding well-structured material: if you want to check Missouri voting statistics, you can probably find something relevant. (See <a href="https://posiwid.blogspot.com/2008/11/missouri-loses-bellwether-status.html">Missouri loses bellwether status</a>, November 2008.) </p><p>But if you want to find an unusual thought, you will have to find an unusual combination of search terms. You do have to be smart to search here.</p><p> </p><hr /><p><b>Update March 2023</b></p><p>What I described in this post is now known as <b>prompt engineering</b>. The importance of this has become much more apparent since the emergence of AI tools such as Dall-E and ChatGPT. 
See James Bridle, <a href="https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt">The Stupidity of AI</a> (Guardian, 16 March 2023). (The Wikipedia entry on <a href="https://en.wikipedia.org/wiki/Prompt_engineering">prompt engineering</a> was created in October 2021.)<br /></p><p>Meanwhile while there are now automated tools to help teachers detect basic plagiarism, these are not much use against students who try to cheat using AI tools. See my post on <a href="https://demandingchange.blogspot.com/2023/01/reasoning-with-majority-chatgpt.html">Reasoning with the majority - chatGPT</a> (January 2023).</p><p>Which develops a view of the limitations of Google I have talked about elsewhere: <a href="https://demandingchange.blogspot.com/2009/03/thinking-with-majority.html">Thinking with the majority</a> (March 2009)<br /></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com1tag:blogger.com,1999:blog-1254315679163990153.post-21728957485047905462008-09-14T09:40:00.000+01:002019-03-02T17:53:50.621+00:00Confirmation BiasAdam Shostack has a couple of posts on Confirmation Bias. I've added some comments on Adam's blog; here's a digest of the discussion.<br /><br /><h4><a href="http://www.emergentchaos.com/archives/2008/09/things_only_an_astronomer.html">Things Only An Astrologist Could Believe</a></h4>Adam picks up an astrological analysis of a recent action by Google: apparently the timing of the Chrome release was astrologically auspicious.<br /><br />Vedic Astrologer: "Such a choice of excellent Muhurta with Chrome release time may be coincidental, but it makes us strongly believe that Google may not have hesitated to utilize the valuable knowledge available in Vedic Astrology in decision making."<br /><br />Adam: "This is a beautiful example of confirmation bias at work. Confirmation bias is when you believe something (say, Vedic astrology) and go looking for confirmation. 
This doesn't advance your knowledge in any way. You need to look for contradictory evidence. For example, if you think Google is using Vedic astrology, they have a decade of product launches with some obvious successes. Test the idea. I strongly believe that you haven't."<br /><br />Myself: "What our Vedic friend is actually telling us is that Google "may not have hesitated" in its use of Vedic astrology. To be honest, I also find it hard to believe that Google executives sat around dithering about whether to use Vedic astrology or not."<br /><br />In further comments, the Vedic astrologer argues that the astrological method is no different from other forms of observational science, using the scientific method, which requires the prediction of future results.<br /><ul><li>Hypothesis: The sun comes up every 24 hrs.</li><li> Method: I will time when the sun crosses the horizon.</li><li> Results: I successfully predicted 50 sunrises with a 100% degree of accuracy. This is further evidence that my hypothesis is correct.</li><li> Caveat: Although, I note that since 24 hrs is the period between sunrises by definition of a day, this is circular.</li></ul>Actually, the statement that the sun rises exactly in 24 hour intervals is only believable if you live near the equator and you know nothing about astronomy, or if you adopt a solar method for measuring the length of an hour.<br /><p>What confuses me about the hypothesis posed by our Vedic friend is whether he is trying to predict the decision-making behaviour of Google executives or the successful outcome of their decisions. Even if Google executives are making auspicious decisions, this could be "explained" either by the fact that they are employing the services of an astrologer, or by the fact that Google happens to have good (= astrologically blessed) executives. 
Or something.</p><p>(See my post <a href="http://posiwid.blogspot.com/2009/04/does-fortune-telling-work.html">Does Fortune-Telling Work?</a>)</p><p><br /></p><h4><a href="http://www.emergentchaos.com/archives/2008/09/more_on_confirmation_bias.html">More on Confirmation Bias<br /></a></h4>According to an old article by <a href="http://www.michaelshermer.com/">Michael Shermer</a> in the Scientific American [<a href="http://www.sciam.com/article.cfm?id=the-political-brain">The Political Brain</a>, June 2006], "a recent brain-imaging study shows that our political predilections are a product of unconscious confirmation bias". <a href="http://www.concurringopinions.com/archives/2008/09/baffled_by_comm.html" title="Baffled By Community Organizing">Devan Desai</a> concludes that "hardcore left-wing and hardcore right-wing folks don’t process new data".<br /><br />When I first read that line about "hardcore left-wing and hardcore right-wing folks" quoted in Adam's blog I assumed it was talking about serious extremists - communists and neoNazis. Turns out it was just looking at people with strong Democrat or Republican affiliation. Maybe any party affiliation at all seems pretty hardcore to some people.<p></p> <p>As far as I can see, the study only actually looked at people with strong political opinions, and didn't compare them with any control group. 
Like, er, the middle-of-the-road folks who fund and write up this kind of research.</p> <p>I wonder whether anyone would get research funding or wide publicity for exploring the converse hypothesis - that people with strong political opinions are actually relatively open-minded, and that the people who have the most entrenched opinions are the bureaucrats who staff the research funding bodies and the people who write popular articles for Scientific American.</p> <p>(Of course I'm jumping to conclusions myself here, that's what bloggers do, isn't it?)</p> <p>I'm not saying I believe that bigots are more open-minded than wishy-washy middle-of-the-roaders. I'm just saying we need to be mistrustful of studies that are designed to confirm the prejudices of the researchers, and suspicious of people who latch onto these studies to prove a point. The problem is that there may be confirmation bias built into the way these kinds of pseudo-scientific studies are funded, organized and then publicized. Not surprising then if "the FMRI findings merely demonstrate what many of us already knew".</p><p>As director of the <a href="http://www.skeptic.com/">Skeptics Society</a>, Michael Shermer latches onto a study showing that people are biased. Shermer himself has a particular set of bugbears, including evolutionary psychology, alien abductions, prayer and healing, and alternative medicine. Are we really to imagine that he approaches any of these topics with a truly open mind? And why should he anyway? The rest of us don't. </p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com1tag:blogger.com,1999:blog-1254315679163990153.post-21333666493415685722008-08-04T23:29:00.000+01:002021-03-22T22:00:45.235+00:00Peer Review in the DockTonight's Science programme on BBC Radio 4 was critical of the peer review process, in which scientific articles are filtered for publication according to the comments of other researchers in the same field. 
[<a href="http://www.bbc.co.uk/radio4/science/pip/208vb/">Peer Review in the Dock</a>, 4 August 2008]<br /><br />The purpose of peer review is to give us confidence in the quality of published scientific research. Like many other social institutions, it has well-known weaknesses as well as strengths. [BBC News, <a href="http://news.bbc.co.uk/1/hi/sci/tech/4600402.stm" title="BBC News, 10 January 2006: Science will stick with peer review">Science will stick with peer review</a>]<br /><br />I have often been asked to provide peer reviews on articles for journals and conferences. Sometimes I find I know much more about the subject of the article than the authors, or at least some aspects of the subject. Even when my knowledge is less, I can usually find some areas of weakness or confusion in the article, demanding (in my opinion) either a significant re-write or complete rejection.<br /><br />Having gone to the trouble to provide these reviews, I used to be shocked when I discovered that papers sometimes slipped through to publication without the identified flaws being adequately corrected. Experienced authors (or their supervisors) know how to game the system, and most journals and conferences simply don't have the resources to prevent these games. Some years ago I wrote a critique of this process and identified a number of negative patterns.<br /><br />The BBC programme this evening identified several more, including the "famous institution" bias and the "publication" bias. The latter is particularly important for research that involves sophisticated statistics (such as medical research), because if only publishable data are included in the analysis, then the publication criteria may themselves distort the findings. 
The publication bias also affects the opinions of so-called experts, whose assumptions will have been reinforced by the papers they have read.<br /><br /><hr /><p>Related post: <a href="https://demandingchange.blogspot.com/2004/07/trahison-des-clercs-antipatterns-of.html">Trahison des Clercs - AntiPatterns of Peer Review</a> (July 2004)<small> </small></p><p><small>URL for this post: <a href="http://tinyurl.com/cx6wbv">http://tinyurl.com/cx6wbv</a></small></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-29594638603367087742008-06-17T17:32:00.001+01:002022-04-10T09:56:45.526+01:00Memory and the LawRebecca Fordham writes:<br />
<br />
<blockquote>
"Many experts are challenging the view that eyewitnesses recounting what they saw is the best way of tapping their memory. Some think brain scans could be the way forward." [<a href="http://news.bbc.co.uk/1/hi/magazine/7457653.stm">Memory Mixup</a>, BBC News Magazine, 17 June 2008]</blockquote>
<br />
We already have technology that is supposed to detect discrepancies between what the witness remembers and what the witness says - it's called a polygraph or lie detector. Now we apparently need another technology that detects discrepancies between what the witness consciously remembers and what is buried in the witness's unconscious.<br />
<br />
The lie detector has been controversial ever since its invention, and features in a Chesterton story called "The Mistake of the Machine". (Of course it is not the machine that makes the mistake, as Chesterton's hero Father Brown points out, but the people using the machine who misinterpret its output.)<br />
<br />
<ul>
<li>Chesterton and Friends: <a href="http://chestertonandfriends.blogspot.com/2005/06/lie-detectors.html">Lie Detectors</a></li>
<li>David Wallace-Wells: <a href="http://www.washingtonmonthly.com/features/2007/0704.wallace-wells.html">The Big Lie</a> (Washington Monthly, April 2007), via <a href="http://antipolygraph.org/blog/?p=125">AntiPolygraph.org</a></li>
</ul>
<br />
Of course humans sometimes lie, and sometimes this can be detected by the polygraph, but that doesn't make the polygraph an instrument of truth. (For that matter, people sometimes blurt out secrets under the influence of alcohol or torture, or get artistic inspiration under the influence of mind-bending drugs, but none of these are reliable instruments of truth either.)<br />
<br />
And human memory is sometimes unreliable, but that doesn't make the brain scan an instrument of truth either. Constructing evidence from the unconscious contents of a brain is no more reliable than constructing history from an archaeological sift through a mediaeval rubbish tip. It may be possible, and may yield some intriguing results, but the results are always speculative and uncertain.<br />
<br />
Meanwhile, our "common sense" understanding of the brain and its contents is probably less accurate and less coherent than our understanding of mediaeval waste disposal. That's why psychoanalysts make more money than archaeologists. They do, don't they?<br />
<br />
<h4>
Update</h4>
<br />
"India has become the first country to convict someone of a crime relying on evidence from this controversial machine." [Source: <a href="http://www.nytimes.com/2008/09/15/world/asia/15brainscan.html?_r=1&oref=slogin" title="India’s Novel Use of Brain Scans in Courts Is Debated (Anand Giridharadas, September 14, 2008)">New York Times</a>, via <a href="http://www.schneier.com/blog/archives/2008/09/india_using_bra.html" title="India Using Brain Scans to Prove Guilt in Court">Bruce Schneier</a>]<br />
<br />
<br />
Related posts: <a href="https://rvsoapbox.blogspot.com/2020/02/the-dashboard-never-lies.html">The Dashboard Never Lies</a> (February 2020), <a href="https://rvsoftware.blogspot.com/2022/04/lie-detectors-at-airports.html">Lie Detectors at Airports</a> (April 2022)<br />Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com1tag:blogger.com,1999:blog-1254315679163990153.post-13625075621592476712007-11-30T15:33:00.000+00:002019-03-02T17:53:50.443+00:00Solitary Thinking<a href="http://www.lightbluetouchpaper.org/2007/11/30/hackers-get-busted/">Hackers get busted</a>. Dan Cvrcek raises a very interesting question about the flaws in thinking committed when bright people try to solve problems in isolation.Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-91231621825221663562006-06-20T19:46:00.000+01:002019-03-02T17:53:51.852+00:00Science and Censorship<span style="font-weight: bold;">Dark materials</span>. Nuclear scientist Joseph Rotblat campaigned against the atom bomb he had helped unleash. Is it time for today's cyber scientists to heed his legacy?<br /><blockquote>"There is an ever-widening gap between what science allows, and what we should actually do. There are many doors science can open that should be kept closed, on prudential or ethical grounds. Choices on how science is applied should not be made just by scientists."<br /></blockquote><div style="text-align: right;"><small>Essay by Martin Rees</small><br /><small>President of the <a href="http://www.royalsoc.ac.uk/">Royal Society</a></small><br /><small>[<a href="http://www.guardian.co.uk/comment/story/0,,1794321,00.html">Guardian, Saturday June 10, 2006</a>]</small><br /></div><hr /><span style="font-weight: bold;">We cannot allow the terrorists to terrorise us</span>. 
Scientific research shouldn't be halted simply because it might fall into the wrong hands.<br /><blockquote>"The scientist's job is to shine light in the darkness, and if we occasionally burn our fingers on the candle, so be it. Lord Rees can choose the darkness if he wants. I'm not going to."</blockquote><div style="text-align: right;"><small>Response by <a href="http://www.lightbluetouchpaper.org/2006/06/20/censoring-science/">Ross Anderson</a><br />Chair of the <a href="http://www.fipr.org/">Foundation for Information Policy Research</a><br />[<a href="http://education.guardian.co.uk/higher/comment/story/0,,1801768,00.html">Guardian, Tuesday June 20, 2006</a>]</small><br /></div><hr />I am uneasy about both sides of this debate. Should science be restrained - either by scientists or by society? Do politicians represent the interests of society, or is there a better and more democratic way for society's interests to be represented?<br /><br />The fact is that science is already restrained by all sorts of social and commercial forces - above all the willingness to fund particular kinds of research and not others. The choice is not simply between a risk-averse establishment (represented by the Royal Society) and a risk-seeking free-thinking radical alternative (represented by the Foundation).<br /><br />Of course Anderson is right to be wary of the distorted perceptions of risk by politicians and the non-scientific public. But the proper response to this is a properly constituted debate. Meanwhile, politicians will often seek stupid measures to use and abuse scientists.<br /><br />As I reported last year (<a href="http://rvtrustblog.blogspot.com/2005/02/research-under-fire.html">Research Under Fire</a>), scientists and engineers at the University of Berkeley are wary of academic restrictions imposed by the US Federal Government in the name of national security. 
Thankfully this isn't the kind of restraint Rees is advocating.<br /><br /><small>Technorati Tags: <a href="http://technorati.com/tag/censorship" rel="tag">censorship</a> <a href="http://technorati.com/tag/research" rel="tag">research</a> <a href="http://technorati.com/tag/science" rel="tag">science</a></small>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-18432845031908013332006-06-09T22:46:00.000+01:002019-03-02T17:53:50.533+00:00A reasonable percentage (3)One piece of intelligence was accurate.<br /><blockquote><span style="font-size:85%;">A man described as Abu Musab al-Zarqawi's "spiritual adviser" inadvertently led US forces to the spot where the militant leader was finally located and killed, the US military says.<br /><br />Major General William Caldwell said the operation to track down the most wanted man in Iraq was carried out over many weeks, before he was killed after two US air force F-16s bombed a house in a village north of Baghdad.<br /><br />"The strike last night did not occur in a 24-hour period. 
It truly was a very long, painstaking deliberate exploitation of intelligence, information gathering, human sources, electronic, signal intelligence that was done over a period of time - many, many weeks," Gen Caldwell said on Thursday.<br /><br /></span><div style="text-align: right;"><span style="font-size:85%;">[<a href="http://news.bbc.co.uk/1/hi/world/middle_east/5060468.stm">BBC News</a>]</span><br /></div></blockquote><hr />One piece of intelligence was flawed.<br /><blockquote><span style="font-size:85%;">Anti-terror police raided a house at Forest Gate last week after saying they received "specific intelligence" that a chemical device might be found there.<br /><br />Scotland Yard later said they had "no choice" but to act while the prime minister said it was essential officers took action if they received "reasonable" intelligence suggesting a terror attack.<br /><br />Tony Blair said he backed the police and security services 101% and he refused to be drawn on suggestions that the armed operation had been a failure.<br /><br /></span><div style="text-align: right;"><span style="font-size:85%;">[<a href="http://news.bbc.co.uk/1/hi/uk/5066166.stm">BBC News</a>]</span></div></blockquote><hr />It's a reasonable percentage. (Previous posts: <a href="http://knowledgeanduncertainty.blogspot.com/2006/04/reasonable-percentage.html">April 9th</a>, <a href="http://knowledgeanduncertainty.blogspot.com/2006/04/reasonable-percentage-2.html">April 18th</a>.)<br /><br />But that's part of the problem with intelligence - it delivers probability rather than certainty. Perhaps the outcomes are the right way around this time - the presumed-guilty man was killed, and the presumed-innocent man merely injured. (So we shouldn't complain, should we? Imagine the complaints if it had been the other way around!)<br /><br />But over the long run, are there too many errors? (Difficult to tell, as we only know of some of the better publicized successes and failures.) 
Should we be uneasy about the errors of intelligence, and the consequences of acting upon erroneous intelligence? There are fundamental questions here about the relationship between knowledge (or ignorance) and action (or inaction).Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-37083315474067349662006-05-26T17:20:00.000+01:002019-03-02T17:53:50.857+00:00Conflicting OpinionsDisagreements are unsettling. In a stable world, we like our experts to provide simple and authoritative truth.<br /><br />Science isn't really supposed to work like that. Science should be constantly open to new discoveries, and new interpretations and explanations of old discoveries. [Kant, Peirce] Science is supposed to work with conjectures, proofs, refutations and paradigm shifts. [Popper, Kuhn, Lakatos, Feyerabend] These are essentially architectural notions. [See my earlier post on <a href="http://knowledgeanduncertainty.blogspot.com/2005/07/open-architecture.html">Open Architecture</a>]<br /><br />But the institutions and bureaucracies of science don't conform to this stereotype. There is a closed loop of research funding and journal publication, based on so-called peer review. In a recent report, the Royal Society, Britain's most prestigious scientific club, deplores the popular media coverage of science and calls upon scientists to exercise greater self-restraint in publicizing findings that have not undergone the proper peer review process. [Source: John Kay, Financial Times, <a href="http://www.johnkay.com/trends/443">May 22, 2006</a>]<br /><br />As John Kay argues, this is essentially an appeal for scientists to be dull and boring. Peer review is inward looking, inhibits new and radical ideas, and serves as a kind of professional censorship. 
He concludes<br /><blockquote>"Any form of censorship, including self-censorship and censorship by fellow professionals, encourages complacency and discourages innovation. The history of modern scholarship is that, more slowly than we would wish, truth and new knowledge emerge only from a cacophony of conflicting opinions."</blockquote><br /><span style="font-size:85%;"> Technorati Tags: <a href="http://technorati.com/tag/architecture" rel="tag">architecture</a> <a href="http://technorati.com/tag/Kant" rel="tag">Kant</a> <a href="http://technorati.com/tag/knowledge" rel="tag">knowledge</a> <a href="http://technorati.com/tag/open" rel="tag">open</a> <a href="http://technorati.com/tag/Peirce" rel="tag">Peirce</a> <a href="http://technorati.com/tag/uncertainty" rel="tag">uncertainty</a></span>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-72348970756880408782006-04-18T18:14:00.000+01:002019-03-02T17:53:51.529+00:00A reasonable percentage (2)"It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns, and 60 percent of (fake) bombs. And recently, testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts."<br /><div style="text-align: right;">[<a href="http://www.schneier.com/blog/archives/2006/03/airport_passeng.html">Bruce Schneier</a>]<br /><hr /><div style="text-align: left;">If security finds nearly half the bombs, does that count as success or failure? (Glass half-full or half-empty?) Schneier reckons it's probably good enough, and points out: "Against the professionals, we're just trying to add enough uncertainty into the system that they'll choose other targets instead."<br /><br /></div> </div> <hr />"Do not despair; one of the thieves was saved. 
Do not presume; one of the thieves was damned."<br /><div style="text-align: right;">[Saint Augustine]<br /></div><br />"One of the thieves was saved. (<i>Pause.</i>) It's a reasonable percentage."<br /><div style="text-align: right;">[<a href="http://samuel-beckett.net/">Samuel Beckett</a>, <a href="http://samuel-beckett.net/Waiting_for_Godot_Part1.html">Waiting for Godot</a>]</div><hr />Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-15407842313869842202006-04-09T00:49:00.000+01:002019-03-02T17:53:50.472+00:00A reasonable percentage"Do not despair; one of the thieves was saved. Do not presume; one of the thieves was damned."<br /><div style="text-align: right;">[Saint Augustine]<br /></div><br />"One of the thieves was saved. (<i>Pause.</i>) It's a reasonable percentage."<br /><div style="text-align: right;">[<a href="http://samuel-beckett.net/">Samuel Beckett</a>, <a href="http://samuel-beckett.net/Waiting_for_Godot_Part1.html">Waiting for Godot</a>]</div><hr /><div style="text-align: left;">One of the greatest writers of the twentieth century, Samuel Beckett (whose centenary falls in a few days' time) had a deep interest in, and understanding of, the topic of knowledge and uncertainty.<br /><br />In Waiting for Godot, Vladimir speculates on the possibility of redemption. He starts with the story told by Saint Luke and used by Saint Augustine - that one of the two sinners crucified with Jesus was forgiven. It's a reasonable percentage.<br /><br />So which of the two characters will be saved: Vladimir or Estragon? (In some interpretations, Vladimir represents the intellectual side of man, while Estragon represents the physical.) Does Vladimir's knowledge of the Bible improve his chances, or his determination to survive? Does his speculative thinking result in presumption or despair?<br /><br />Vladimir then introduces another degree of uncertainty. 
Only Saint Luke tells this story about the two sinners. One gospel tells a conflicting story (both were damned), and the other two omit the story altogether. So the uncertainty - the reasonable percentage - is itself based on uncertain information, from contradictory sources.<br /><br />And what about Beckett himself? Many Christian preachers discuss St Augustine's principle, and often refer to Beckett in passing. For example, <a href="http://www.oxford.anglican.org/page/665/">Richard Harries</a> (Bishop of Oxford). Meanwhile, <a href="http://www.oakwood.edu/ocgoldmine/adoc/faculty/gbasaninyenzi/index.html">Gatsinzi Basaninyenzi</a> regards Godot as an anti-Christian text (largely on the assumption that Godot is supposed to represent God), and provides detailed advice on how to teach such a text from a Christian perspective. Do not presume, do not despair.<br /><br />Beckett himself repudiated the simple equation of Godot with God. "If by Godot I had meant God I would have said God, and not Godot."<br /><br />So we have three types of uncertainty here - uncertainty of outcome (one of the thieves was saved), uncertainty of knowledge (only one of the Evangelists tells this story), and uncertainty of meaning (what exactly does Godot represent anyway). 
Enough to be going on with?<br /><br /><small>Further material: <a href="http://arts.guardian.co.uk/comment/story/0,,1735248,00.html">Champion of Ambiguity</a> (Terry Eagleton), <a href="http://arts.guardian.co.uk/features/story/0,,1555060,00.html">Godot Almighty</a> (Peter Hall), <a href="http://arts.guardian.co.uk/features/story/0,,1535466,00.html">Godot Almighty</a> (Simon Callow).</small></div>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-17247642540326186182005-11-12T11:46:00.000+00:002019-03-02T17:54:00.694+00:00Dilbert on Intelligent DesignInteresting post by <a href="http://dilbertblog.typepad.com/the_dilbert_blog/2005/11/intelligent_des.html">Dilbert on Intelligent Design</a>, which raises some intriguing questions of Knowledge and Trust.<br /><br />He makes the following points.<br /><br />1. The arguments for Darwinism (and intellectual defences against the "flaws" identified by the Intelligent Design and Creation folk) are complex.<br /><br />2. Belief in Darwinism depends on trusting the vast majority of scientists working in the field.<br /><br />3. However, the scientific field relevant to Darwinism is compartmentalized. Scientists are required to trust evidence from other specialisms and disciplines. We are not just talking about non-scientists trusting scientists, but scientists trusting other scientists.<br /><br />4. Therefore the entire scientific edifice of Darwinism is based on inter-disciplinary trust.<br /><br />5. If the institution of science is anything like the organizations that Dilbert has made a fortune analysing and drawing, then we have to take seriously the possibility that they've all got it completely wrong.<br /><br />Of course, the possibility that science has got things completely wrong has been explored by philosophers of science such as Kuhn, Lakatos and Feyerabend. 
But Dilbert is pointing to a new angle on this - the institutional mechanisms (familiar in large organizations) that permit lots of small bits of evidence to be accumulated and amplified into false knowledge.<br /><br />Dilbert is making a profound point about the way knowledge is composed from lots of bits of evidence. If much of this evidence is stronger when seen from a distance, and weaker when examined closely, this seems to call the whole body of knowledge into question. (Dilbert doesn't go into the recursive loops of analysis and interpretation, where the evaluation and interpretation of any piece of evidence depends on lots of prior knowledge from elsewhere - on what Bruno Latour calls Black Boxes. But this would add to his argument.)<br /><br />What Dilbert is rejecting is the theory of repetition, whereby if you repeat something often enough it becomes true, or if you get enough bits of weak evidence from different sources, it becomes strong evidence. This is a theory that is embedded in the way that lots of organizations behave, and in the way that a lot of computer systems produce so-called "business intelligence". Dilbert is all too familiar with the ways in which false knowledge can emerge (or should I say evolve?) 
in complex social settings.<br /><br /><small>Technorati Tags: <a href="http://technorati.com/tag/Darwin" rel="tag">Darwin</a> <a href="http://technorati.com/tag/Dilbert" rel="tag">Dilbert</a> <a href="http://technorati.com/tag/knowledge" rel="tag">knowledge</a> <a href="http://technorati.com/tag/science" rel="tag">science</a> <a href="http://technorati.com/tag/trust" rel="tag">trust</a></small>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com3tag:blogger.com,1999:blog-1254315679163990153.post-13626137196711529062005-10-26T21:27:00.000+01:002020-12-17T17:19:04.991+00:00Innocence or Ignorance<span style="font-style: italic;">“The greatest pleasure I know, is to do a good deed by proxy, and to find it out by accident.” </span>(with apologies to Lamb)<br /><br />In 1987, the supposedly most powerful man in the world, a former radio sports commentator called Ronald Reagan, was forced to regain his credibility with the American people by claiming ignorance of a complex deal, cooked up by members of his administration.<br /><br />Dubbed Irangate, the deal was to sell arms to self-styled moderates inside Iran, obtain the release of hostages in Beirut, and to pass the profits to the fighters of freedom in Central America. That does not concern us here. What does concern us is the profession of ignorance, and the advantages of it. Why does Reagan (who claims he knew nothing, or perhaps forgot) do better than Nixon (whose behaviour revealed his knowledge)? What are the implications of knowing, or not knowing?<br /><br />See previous posts on <a href="https://demandingchange.blogspot.com/2004/06/ronald-reagan.html">Ronald Reagan</a> and <a href="https://demandingchange.blogspot.com/2004/07/jimmy-carter.html">Jimmy Carter</a>.<br /><br /><hr />There are two words in English for not knowing. 
The word <span style="font-weight: bold;">innocent</span> comes from the Latin and, perhaps because of its association with the Roman Catholic Church, is regarded as synonymous with moral purity. Knowledge compromises purity, knowledge is dangerous - witness the Fall of Adam. Thus innocence is usually a word of praise, within a system of values that deplores all knowledge but that of God.<br /><br />The other word <span style="font-weight: bold;">ignorance</span> also comes from the Latin, but belongs within the Renaissance system of values on which the modern notions of education and scientific humanism are founded. Knowledge is noble, knowledge is power. Thus ignorance is usually a degrading word.<br /><br />The romantics were ambivalent. Goethe recreated the figure of Faust, to explore the implications of knowing too much. (This story was rewritten by the feminist wife of a romantic poet; she called her version Frankenstein.)<br /><br /><span style="font-style: italic;">“I do not approve of anything that tampers with natural ignorance. Ignorance is like a delicate exotic fruit; touch it and the bloom is gone. The whole theory of modern education is radically unsound. Fortunately, in England at any rate, education produces no effect whatsoever.” </span>[Oscar Wilde]<br /><br /><hr />Power through knowledge, versus power through ignorance. The ability to have one's wishes carried out <span style="font-weight: bold;">without one knowing, or needing to know</span>. The best-known example of this: the assassination of Thomas Becket. Power through ignorance - is this not paradoxical?<br /><br /><span style="font-style: italic;">“ ‘Knowledge is power’ is a misleading slogan. 
Knowledge may well be important to the maintenance of power, but that does not mean that the knowledgeable are powerful.” </span><span style="font-size: 85%;">[David Lyon, The Information Society (Cambridge, Polity Press/Basil Blackwell, 1988) p 62]</span><br /><br />Nor that the powerful are themselves knowledgeable.Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-23831916979752292682005-09-16T14:33:00.000+01:002019-03-02T17:54:00.966+00:00Indeterminacy<a href="http://web.archive.org/web/20070205233525/http://blogs.sun.com/racingsnake/entry/the_em_quantum_em_mechanics">Robin Wilton</a> uses quantum mechanics to explain the behaviour of politicians under stress. He supports this idea with an analysis of a radio interview with Tony Blair, which was broadcast this morning on the BBC. (I didn't hear the interview, so I cannot comment on the accuracy or fairness of his analysis.)<br />
<br />
But I think there is an important difference. On Robin's theory, the behaviour of politicians is overdetermined, closed. For example, once an answer is consistent and succinct, it cannot also be logical. This is the very opposite of quantum mechanics, where the state of a subatomic particle is underdetermined, open.<br />
<br />
People under stress are often unable to tolerate certain types of uncertainty and risk - they become uptight, clinging pathetically to a few would-be certainties. People in leadership positions are particularly determined to appear determined. Politicians (at least in public) appear to be in this overdetermined state most of the time.Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0