Thursday, June 30, 2011

Atkins MR/ID Death Penalty court decisions: Pruitt v TN (2011) and Howell v TN (2011)

Two more recent Atkins MR/ID decisions to add to the Atkins MR/ID Court Decisions blogroll, both from TN. I have not read either decision, so I don't know whether the TN Supreme Court's recent reversal, which indicated that the standard error of measurement (SEM) and the Flynn effect can now be considered in Atkins cases, influenced these two lower court decisions.

Pruitt v TN (2011)

Howell v TN (2011)

- iPost using BlogPress from Kevin McGrew's iPad

Generated by: Tag Generator

Atkins MR/ID Death Penalty decision: Moore v US (2011)

Thanks again to Kevin Foley for sending me the following Atkins-related decision with the following brief comment.

In U.S. v. Moore, No. 09-5935 (6th Cir., June 1, 2011), the court rejected the argument that Atkins rendered Moore’s 15 year sentence for being an “Armed Career Criminal” cruel and unusual punishment, but remarked that, “this sentence did account for his mental retardation. . . Here, the district court sentenced him at the very bottom of the guideline range, expressly noting Moore’s condition while pronouncing his sentence.” Interestingly, the sentencing judge wanted to assess an even lower sentence, but felt he was legally unable to do so. “The district court judge remarked that, if he had the authority to do so, he would consider imposing a sentence below the statutory minimum due to the circumstances of the offense and Moore’s mental deficiencies. . . [but] he did not possess that authority and proceeded to sentence Moore to 180 months’ imprisonment.”


General intelligence: To g or not to g? Dr. Joel Schneider comments

Last week there was a spirited exchange on the CHC listserv regarding the status of the theoretical construct of general intelligence (g). Dr. Joel Schneider provided a very thought-provoking response that included some of his recent writings on the subject. I asked Joel if I could share it on IQs Corner, and he agreed. Below are his comments "as is." As the reader will learn, he was responding to other individuals who had made statements about g on the list.

Yes, opinion polling is not the way to do science, but ultimately science IS about consensus-building. A single researcher can produce evidence so compelling that the entire field is forced to change its mind. When it comes to g, however, there is no compelling evidence about what it is or is not. Here are three excerpts from a chapter I wrote that is in preparation:

"Spearman’s (1904) little g caused a big stir when it was first proposed and has, for over a century now, been disrupting the natural state of harmony that would otherwise prevail amongst academics. Many a collegial tie has been severed, many a friendship has soured, perhaps even engagements broken off and marriages turned into dismal, loveless unions because of the rancor this topic provokes. I have seen otherwise mild-mannered professors in tweed jackets come to blows in bars over disagreements about g — okay…not really…but I have seen some very sarcastic emails exchanged on professional listservs!"

"It turns out that these two groups [the mono-g-ists and the poly-G-ists] are not merely on opposite sides of an intellectual debate — they are members of different tribes. They speak different dialects, vote for different candidates, and pray to different gods. Their heroic tales emphasize different virtues and their foundation myths offer radically different but still internally consistent explanations of how the world works. If you think that the matter will be settled by accumulating more data, you have not been paying attention for the last hundred years."

"The theoretical status of g will not cease to be controversial until something extraordinary happens to the field. I do not pretend to know what this might be. Maybe a breakthrough from biology will resolve the matter. Maybe divine intervention. Until then, I feel no need to join either tribe. I will remain agnostic and I will not get too excited the next time really smart people eagerly announce that finally, once and for all, they have proof that the other side is wrong. This has happened too many times before."

Shifting topics:

You are right, I have estimated a person's intelligence and said something about it out loud. In like manner, I have said about different people, "She's nice." "He's a jerk!" "He's funny!" "She's impressive." "He's a good person." I agree with Spearman that "intelligence" is a pre-scientific folk concept, just as nice, jerk, funny, and good are folk concepts. There is nothing wrong with these terms. They communicate pretty clearly what I want to say. However, I do not believe that there is an underlying personality variable called "goodness" or "impressiveness." Such terms probably do have an indirect relationship to more fundamental cognitive structures, though.

Here is an excerpt from an early draft of the forthcoming chapter I wrote with Kevin McGrew. Almost all of this section was removed because the chapter was starting to look like it was going to be over 200 pages. Editing the chapter down to 100 pages was painful, and many parts we liked were removed:

Is g an ability?

The controversy about the theoretical status of g may have less fire and venom if some misunderstandings are cleared up. First, Spearman did not believe that performance on tests was affected by g and only g. In a review of a book by his rival Godfrey Thomson, Spearman (1940, p. 306) clarified his position.

“For I myself, no less than Thomson, accept the hypothesis that the observed test-scores, and therefore their correlations, derive originally from a great number of small causes; as genes, neurones, etc. Indeed this much seems to be accepted universally. We only disagree as to the way in which this derivation is to be explained.”

Second, Spearman (1927, p. 92) always maintained, even in his first paper about g (Spearman, 1904, p. 284), that g might consist of more than one general factor. Cattell (1943) noted that this was an anticipation of Gf-Gc Theory. Third, Spearman did not consider g to be an ability, or even a thing. Yes, you read that sentence correctly. Surprisingly, neither does Arthur Jensen, perhaps the most (in)famous living proponent of Spearman’s theory. Wait! The paper describing the discovery of g was called “‘General Intelligence’: Objectively Determined and Measured.” Surely this means that Spearman believed that g was general intelligence. Yes, but not really. Spearman thought it unproductive to equate g with intelligence, the latter being a complex amalgamation of many abilities (Jensen, 2000). Spearman believed that “intelligence” is a folk concept and thus no one can say anything scientific about it because everyone can define it whichever way they wish. Contemplating the contradictory definitions of intelligence moved Spearman (1927, p. 14) to erupt,

“Chaos itself can go no farther! The disagreement between different testers—indeed, even between the doctrine and the practice of the selfsame tester—has reached its apogee. […] In truth, ‘intelligence’ has become a mere vocal sound, a word with so many meanings that finally it has none.”

Spearman had a much more subtle conceptualization of g than many critics give him credit for. In discussing the difficulty of equating g with intelligence, or variations of that word with more precise meanings such as abstraction or adaptation, Spearman (1927, p.88) explained,

“Even the best of these renderings of intelligence, however, always presents one serious general difficulty. This is that such terms as adaptation, abstraction, and so forth denote entire mental operations; whereas our g, as we have seen, measures only a factor in any operation, not the whole of it.”

At a conference in which the proceedings were published in an edited volume (Bock, Goode, & Webb, 2000), Maynard Smith argued that there isn't a thing called athletic ability but rather it is a performance category. That is, athletic ability would have various components such as heart volume, muscle size, etc. Smith went on to argue that g, like athletic ability, is simply a correlate that is statistically good at predicting performance. Jensen, in reply, said, "No one who has worked in this field has ever thought of g as an entity or thing. Spearman, who discovered g, actually said the very same thing that you're saying now, and Cyril Burt and Hans Eysenck said that also: just about everyone who has worked in this field has not been confused on that point." (Bock, Goode, & Webb, 2000, p. 29). In a later discussion at the same conference, Jensen clarified his point by saying that g is not a thing but is instead the total action of many things. He then listed a number of candidates that might explain why disparate regions and functions of the brain tend to function at a similar level within the same person such as the amount of myelination of axons, the efficiency of neural signaling, and the total number of neurons in the brain (Bock, Goode, & Webb, 2000, p. 52). Note that none of these hypotheses suggest that g is an ability. Rather, g is what makes abilities similar to each other within a particular person’s brain.
In Jensen’s remarks, all of the influences on g were parameters of brain functioning. We can extend Jensen’s reasoning to environmental influences with a thought experiment. Suspend disbelief for a moment and suppose that there is only one general influence on brain functioning: lead exposure. Because of individual differences in degree of lead exposure, all brain functions are positively correlated and thus a factor analysis would find a psychometric g-factor. Undoubtedly, it would be a smaller g-factor than is actually observed but it would exist.

In this thought experiment, g is not an ability. It is not lead exposure itself, but the effect of lead exposure. There is no g to be found in any person’s brain. Instead, g is a property of the group of people tested. Analogously, a statistical mean is not a property of individuals but a group property (Bartholomew, 2004). This hypothetical g emerges because lead exposure influences all of the brain at the same time and because some people are exposed to more lead than are others.
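The thought experiment lends itself to a quick simulation. The sketch below is purely illustrative (the effect size, number of abilities, and sample size are all made-up assumptions, not estimates from any dataset): six otherwise-independent abilities share only a single "lead exposure" influence, yet a positive manifold, the statistical footprint of psychometric g, emerges.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000   # simulated examinees (assumed)
k = 6      # number of otherwise-independent abilities (assumed)

# Single general influence on brain functioning: "lead exposure".
lead = rng.normal(size=n)

# Each ability = its own specific variance minus a modest lead effect.
# No ability influences any other ability directly.
abilities = -0.5 * lead[:, None] + rng.normal(size=(n, k))

# Every pairwise correlation comes out positive: a small positive
# manifold, and hence a (small) psychometric g-factor, produced by one
# non-ability cause.
r = np.corrcoef(abilities, rowvar=False)
off_diag = r[~np.eye(k, dtype=bool)]
print(off_diag.min() > 0)   # True: all abilities positively correlated
print(off_diag.mean())      # roughly 0.2 with these assumed values
```

Note that g here is a property of the simulated group (who got how much lead), not of any simulated brain, which is exactly the distinction drawn above.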

In the thought experiment above, the assumptions were unrealistically simple and restrictive. It is certain that individual differences in brain functioning are influenced in part by genetic differences among individuals and that some genetic differences affect almost all cognitive abilities (Exhibit A: Down syndrome). Some genetic differences affect some abilities more than others (e.g., Williams syndrome, caused by a deletion of about 26 genes on chromosome 7, is associated with impaired spatial processing but relatively intact verbal ability). Thus, there are general genetic influences on brain functioning, and there are genetic differences that affect only a subset of brain functions.

The fact that there are some genetic differences with general effects on cognitive ability (and there are probably many; Plomin, 20??) is enough to produce at least a small g-factor, and possibly a large one. However, there are many environmental effects that affect most aspects of cognitive functioning. Lead exposure is just one of many toxins that likely operate this way (e.g., mercury and arsenic). There are viruses and other pathogens that infect the brain more or less indiscriminately and thus have an effect on all cognitive abilities. Many head injuries are relatively focal (e.g., microstrokes and bullet wounds), but others are more global (e.g., large strokes and blunt force trauma) and thus increase the size of psychometric g. Poor nutrition probably hampers the functioning of individual neurons indiscriminately, but the systems that govern the most vital brain functions have more robust mechanisms and greater redundancy, so that temporary periods of extreme malnourishment affect some brain functions more than others. Even when you are a little hungry, the first abilities to suffer are highly g-loaded and evolutionarily new abilities such as working memory and controlled attention.

Societal forces probably also increase the size of psychometric g. Economic inequality ensures that some people will have more of everything that enhances cognitive abilities and more protection from everything that diminishes them. This means that influences on cognitive abilities that are not intrinsically connected (e.g., living in highly polluted environments, being exposed to water-borne parasites, poor medical care, poor schools, cultural practices that fail to encourage excellence in cognitively demanding domains, reduced access to knowledgeable mentors among many many others) are correlated. Correlated influences on abilities cause otherwise independent cognitive abilities to be correlated, increasing the size of psychometric g. How much any of these factors increase the size of psychometric g (if at all) is not yet known. The point is that just because abilities are influenced by a common cause, does not mean that the common cause is an ability.

There are two false dichotomies we should be careful to avoid. The first is the distinction between nature and nurture. There are many reasons that genetic and environmental effects on cognitive abilities might be correlated, including the possibility that genes affect the environment and the possibility that the environment alters the effect of genes. The second false choice is the notion that either psychometric g is an ability or it is not. Note that if we allow that some of psychometric g is determined by things that are not abilities, it does not mean that there are no truly general abilities (e.g., working memory, processing speed, fluid intelligence, and so forth). Both types of general influences on abilities can be present.

In this section, we have argued that not even the inventor of g considered it to be an ability. Why do so many scholars write as if Spearman believed otherwise? In truth, he (and Jensen as well) often wrote in a sort of mental shorthand as if g were an ability or a thing that a person could have more of or less of. Cattell (1943, p. 19) gives this elegantly persuasive justification:

Obviously "g" is no more resident in the individual than the horsepower of a car is resident in the engine. It is a concept derived from the relations between the individual and his environment. But what trait that we normally project into and assign to the individual is not? The important further condition is that the factor is not determinable by the individual and his environment but only in relation to a group and its environment. A test factor loading or an individual's factor endowment has meaning only in relation to a population and an environment. But it is difficult to see why there should be any objection to the concept of intelligence being given so abstract a habitation when economists, for example, are quite prepared to assign to such a simple, concrete notion as "price" an equally relational existence.


Atkins MR/ID Death Penalty Decision: Stripling v GA (2011)

Another Atkins court decision added to the Court Decisions blogroll. Stripling v Georgia (2011) is a case in which the Georgia Supreme Court held that Georgia's proof-beyond-a-reasonable-doubt standard for Atkins claims is not unconstitutional. I am not a lawyer, but such a standard does make it very tough on defendants. It seems hard enough to win an Atkins claim under the preponderance-of-the-evidence standard; even harder if the "clear and convincing" standard applies; and harder still if proof beyond a reasonable doubt is required.

I'll leave it to the legal eagles to comment on the relevance of these different standards of proof to Atkins cases. If any reader with a legal background would like to write a "standards of proof for dummies" guest blog post, please contact me. It would be helpful to us non-lawyers.


Wednesday, June 29, 2011

BOOKS: "The Ultimate Sanction" by Robert Bohm

Professor Robert M. Bohm has published a new book on capital punishment, The Ultimate Sanction: Understanding the Death Penalty Through Its Many Voices and Many Sides.  The book looks at the issue of capital punishment through interviews with people affected by the system in different 

<snip>.  Rest of story at link below

Sent with MobileRSS HD

Sent from Kevin McGrew's iPad
Kevin McGrew, PhD
Educational Psychologist

Research Brief: Empirical analysis of Psychological Inventory of Criminal Thinking Styles

Click image to enlarge


Tuesday, June 28, 2011

Special issue of Assessment journal on the WAIS-IV and WMS-IV research

The journal Assessment just published a special issue on the WAIS-IV/WMS-IV. I love the journal cover (see above).

Frazier, T. W. (2011). Introduction to the Special Section on Advancing WAIS-IV and WMS-IV Clinical Interpretation. Assessment, 18(2), 131-132.

Bowden, S. C., Saklofske, D. H., & Weiss, L. G. (2011). Augmenting the Core Battery With Supplementary Subtests: Wechsler Adult Intelligence Scale-IV Measurement Invariance Across the United States and Canada. Assessment, 18(2), 133-140.

Brooks, B. L., Holdnack, J. A., & Iverson, G. L. (2011). Advanced Clinical Interpretation of the WAIS-IV and WMS-IV: Prevalence of Low Scores Varies by Level of Intelligence and Years of Education. Assessment, 18(2), 156-167.

Drozdick, L. W., & Cullum, C. M. (2011). Expanding the Ecological Validity of WAIS-IV and WMS-IV With the Texas Functional Living Scale. Assessment, 18(2), 141-155.

Gregoire, J., Coalson, D. L., & Zhu, J. J. (2011). Analysis of WAIS-IV Index Score Scatter Using Significant Deviation from the Mean Index Score. Assessment, 18(2), 168-177.

Holdnack, J., Goldstein, G., & Drozdick, L. (2011). Social Perception and WAIS-IV Performance in Adolescents and Adults Diagnosed With Asperger's Syndrome and Autism. Assessment, 18(2), 192-200.

Holdnack, J. A., Zhou, X. B., Larrabee, G. J., Millis, S. R., & Salthouse, T. A. (2011). Confirmatory Factor Analysis of the WAIS-IV/WMS-IV. Assessment, 18(2), 178-191.


Monday, June 27, 2011

FYiPOST: Top-Ten Recent SSRN Downloads

in criminal law and procedure ejournals are here. The usual disclaimers apply. Rank 1 (2,221 downloads): Less than Picture Perfect: The Legal Relationship between Photographers' Rights and Law Enforcement, Morgan Leigh Manning, University of Tennessee, Knoxville - College...


Thursday, June 23, 2011

FYiPOST: Brits: American psychiatry needs new theoretical frame

Ever since the American Psychiatric Association launched its multi-million dollar diagnostic industry with the publication of the DSM-III in 1980, the approach to successive editions has been to tinker, fiddle, and tweak: Change a diagnostic threshold here; reword a criterion there; remove an outdated label and add two or three more in its place.

Meanwhile, the underlying structure is so shoddy and out of touch with reality that the best thing to do would be to tear the whole thing down and start over. That's the message of the British Psychological Society, the UK's 50,000-member professional body for psychologists, responding to the latest draft of the Diagnostic and Statistical Manual. The APA had invited the Society to comment on the DSM-5, currently due out in 2013.

Rest of story at link below


FYiPOST: Justice Department creates great new evidence-based criminal justice resource


I am extremely pleased to see (thanks to this item at The Crime Report) that the U.S. Justice Department has created an important new website, Crime, to enable all of us to better assess the effectiveness of state and local criminal justice programming. Here is more background via The Crime Report:


FYIPOST: Blume & Paavola on Neuroimages in Capital Cases

John H. Blume (pictured) and Emily C. Paavola (Cornell Law School and Cornell Law School) have posted Life, Death, and Neuroimaging: The Advantages and Disadvantages of The Defense's Use of Neuroimages in Capital Cases - Lessons from the Front (Mercer...


Wednesday, June 22, 2011

FYiPOST: Texas executes murderer despite evidence suggesting mental retardation

Complete story at Sentencing Law and Policy blog.

As detailed in this Reuters article, "Texas executed on Tuesday a man convicted of fatally shooting two people and paralyzing a third near Houston in 1998, despite evidence that he was mentally disabled." Here is more: (at link below)


Monday, June 20, 2011

IAP 101 Psychometric Brief # 9: The problem with the 1/1.5 SD SS (15/22) subtest comparison "rule-of-thumb"

Psychologists who engage in intelligence testing frequently compare subtest scores to determine whether differences are statistically and practically significant, as part of the clinical interpretation process. Most IQ test publishers provide sound statistical procedures (tables or software for evaluating the statistical significance of the difference between two test scores; confidence-band comparison rules-of-thumb).

However, tradition and clinical lore have produced a common "rule-of-thumb" that is problematic. The typical scenario is a clinician subtracting two test SS's (M=100; SD=15) and invoking the rule-of-thumb that the difference needs to be 15 SS points (1 SD) or 22/23 points (1.5 SD). This is not correct.

SS difference scores do NOT have an SD of 15! When you subtract two SS's (with mean=100; SD=15), the resultant score distribution has a mean of zero and an SD that is NOT 15 (unless you transform/rescale the distribution to this scale). The size of the difference SD is a function of the correlation between the two measures compared.

The SD(diff) is the statistic that should be used, and there are a number of different formulas for computing this metric. The different SD(diff)'s differ based on the underlying question or assumptions that are the basis for making the comparison.
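For readers who want to compute this, here is a minimal Python sketch of one common SD(diff) formula, the standard deviation of a difference between two correlated scores. This is one of several variants; whether it is the exact formula behind any particular published table is my assumption, and the correlations used are illustrative only, not taken from any test manual.

```python
import math

def sd_diff(sd1: float, sd2: float, r: float) -> float:
    """SD of a difference score: sqrt(SD1^2 + SD2^2 - 2*r*SD1*SD2)."""
    return math.sqrt(sd1**2 + sd2**2 - 2 * r * sd1 * sd2)

# Two standard scores, each with SD = 15. The r values are illustrative.
for r in (0.0, 0.50, 0.78):
    d = sd_diff(15, 15, r)
    print(f"r = {r:.2f}   1 SD(diff) = {d:4.1f}   1.5 SD(diff) = {1.5 * d:4.1f}")
```

With these numbers, the 15/22 rule-of-thumb is exactly right only at r = .50; at r = .78 the 1 SD(diff) criterion drops to about 9.9 points. (This simple form cannot exceed 15 x sqrt(2), about 21.2 points, so larger published values presumably reflect other variants or reliability corrections.)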

One way to evaluate score differences is the SEM band overlap approach. This is simple and is based on underlying statistical calculations (averaged across different scenarios to allow for a simple rule of thumb) that incorporate information about the reliability of the difference score. Test publishers also provide tables for evaluating the statistical significance of differences of a certain magnitude between subtests, such as those in the various Wechsler manuals and software. These are all psychometrically sound and defensible procedures.......let me say that again...these are all psychometrically sound and defensible procedures. (I repeat this phrase because the point I make below was recently misinterpreted at a state SP workshop as my saying there was something wrong with the tables in the WISC-IV...which is NOT what I said and is NOT what I am saying here.)
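For readers unfamiliar with the band overlap idea, here is a minimal sketch. The reliability of .90 is an illustrative assumption, not a value from any manual, and real rules-of-thumb average across scenarios as noted above:

```python
import math

def sem(sd: float, rxx: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - rxx)

def bands_overlap(score1: float, score2: float, sem_: float,
                  z: float = 1.0) -> bool:
    """True if the +/- z*SEM bands around two scores overlap.
    With equal SEMs, bands overlap iff |difference| <= 2*z*SEM."""
    return abs(score1 - score2) <= 2 * z * sem_

# Illustrative: two subtests with reliability .90 on an SD = 15 scale.
s = sem(15, 0.90)                  # about 4.7 points
print(bands_overlap(95, 103, s))   # True -> difference not noteworthy
print(bands_overlap(85, 110, s))   # False -> difference worth attention
```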

However, it is my opinion that in these situations we can do better: there is a more appropriate metric for evaluating differences between two test scores, ESPECIALLY when the underlying assumption is that the two measures should be similar because they form a composite or cluster. This implies "correlation"...not a simple comparison of any two tests.

When one is attempting to evaluate the "unity" of a cluster or composite, an SD(diff) metric should be used that is consistent with the underlying assumption of the question. Namely, one is expecting the scores to be similar because they form a factor. This implies "correlation" between the measures. There is an SD(diff) calculation that incorporates the correlation between the measures being compared. When one uses this approach, the proper SD(diff) can vary from as small as approximately 10 points (for "tight" or highly correlated Gc tests) to as high as approximately 27 points (for "loose" or weakly correlated tests in a cluster).

The information for this SD(diff) metric comes from a classic 1957 article by Payne and Jones (click here) (thanks to Joel S. for bringing it to my attention recently). Also, below are two tables that show the different and, IMHO, more appropriate SD(diff) values that should be used when making some example test comparisons on the WISC-IV and WJ-III. (Click on images to enlarge.)

As you can see in the tables, the 15-point (3 if using scaled scores) and 22-point (4.5 if scaled scores) rules-of-thumb will only be correct when the correlation between the two tests being compared is of a moderate magnitude. When the correlation between the tests is high (when you have a "tight" ability domain), the appropriate SDdiff metric can be as low as 9.9 points (for 1 SDdiff) and 14.8 (for 1.5 SDdiff) for the Verbal Comp/Gen Info tests from the WJ-III Gc cluster, or 2.2 scaled-score points (1 SDdiff) and 3.3 (1.5 SDdiff) when comparing WISC-IV Sim/Vocab.

In contrast, when the ability domain is very wide or "loose," one would expect more variability, since the traits/tests are not as highly correlated. In reviewing the above tables, one concludes that the very low correlations among the tests that comprise the WJ-III Gv and Glr clusters produce a 1 SDdiff that is nearly TWICE the 15-point rule of thumb (27-28 points).

I have argued this point with a number of quants (and some have agreed with me), and I believe that the proper SD(diff) is not "one size fits all situations." The confidence band and traditional subtest significant-difference table approaches are psychometrically sound and work when comparing any two tests. However, when the question becomes one of comparing tests where the fundamental issue revolves around the assumption that the test scores should be similar because they share a common ability (are correlated), then, IMHO, we can do better...there is a better way for these situations. We can improve our practice. We can move forward.

This point is analogous to simple t-tests of group means. When one has two independent samples, the t-test formula includes a standard error term (in the denominator) that does NOT include any correlation/covariance parameter. However, when one is calculating a dependent-samples t-test (which means there is a correlation between the scores), the error term incorporates information about the correlation. It is the same concept.....just applied to group vs. individual score comparisons.
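The analogy can be made concrete with a small simulation. This sketch uses made-up numbers (a shared component to induce a correlation of roughly .5) and shows that the dependent-samples error term, like SD(diff), shrinks when the two sets of scores are positively correlated:

```python
import math
import random
from statistics import variance

random.seed(1)
n = 400
# A shared component makes the two score sets correlated.
shared = [random.gauss(0, 1) for _ in range(n)]
x = [s + random.gauss(0.0, 1) for s in shared]
y = [s + random.gauss(0.3, 1) for s in shared]

# Independent-samples error term: no covariance anywhere.
se_indep = math.sqrt(variance(x) / n + variance(y) / n)

# Dependent-samples error term: built from difference scores, which
# implicitly subtracts 2*cov(x, y) -- the same idea as SD(diff).
diffs = [b - a for a, b in zip(x, y)]
se_paired = math.sqrt(variance(diffs) / n)

print(se_paired < se_indep)   # True: positive correlation shrinks the error
```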

I urge people to read the 1957 article, review the tables I have provided above, and chew on the issue. There is a better way. The 15/22 SS rule of thumb is only accurate when a certain moderate level of correlation exists between the two tests being compared and when the comparison implies a common factor or ability. With this simplistic rule of thumb, practitioners are likely applying a much too stringent rule in the case of highly correlated tests (e.g., Gc) and an overly liberal one when evaluating tests from a cluster/composite that are low in correlation (what I call an ability domain; see prior post explaining/illustrating this concept). The 15/22 SS rule of thumb results in inaccurate decisions regarding the unusualness of test differences when we fail to incorporate information about the correlation between the compared measures. And, even when such differences are found via this method (or the simple score difference method), this does not necessarily indicate that something is "wrong" or that the cluster can't be computed or interpreted. This point was recently made clear in an instructional video by Dr. Joel Schneider on sources of variance in test scores that form composites.

If the SDdiff metric recommended here is too much work, I would suggest that practitioners steer clear of the 15/22 (1/1.5 SD) rule-of-thumb and instead use the tables provided by the test publishers or the simple SEM confidence band overlap rule-of-thumb. Sometimes simpler may be better.


Friday, June 17, 2011

APH position paper on IQ testing individuals who are blind or visually impaired available

A new position paper (from the Accessible Tests Department of the American Printing House for the Blind) on IQ testing with individuals who are blind or visually impaired is now available here.


CHC Intelligence Theory v2.0: Broad and Narrow Ability Definitions Revised

Dr. Joel Schneider and I have crafted an abridged summary of our forthcoming "tweaking" of the CHC taxonomy of broad and narrow ability definitions, to be published in the 3rd edition of Flanagan and Harrison's Contemporary Intellectual Assessment (CIA) book. The book chapter is extensive and does not include a table of revised definitions. Thus, we have developed such a summary and make it available here.

Please be careful in the use of the definitions. In our chapter we expand on the definitions and include a section on "unresolved issues." The taxonomy is fluid and evolving and should not be seen as cast in stone. Purchasing the book (when published) and reading the complete chapter, as well as a ton of other excellent chapters in CIA-3, is strongly recommended.


Atkins MR/ID Death Penalty Court Decision: Bourgeois v TX (2011)

A recent Atkins case is the Alfred Bourgeois matter out of the U.S. District Court, Southern District of Texas. The opinion is VERY large...225 pages long. I received it in 6 separate PDF files. I have not read one page of the decision. It looks to have been a very interesting proceeding, with 663 filings, several days of hearings, and multiple witnesses. I have posted it as a single ZIP file that is over 19 MB in size, so you will have to download the .zip file and then unzip it on your computer if you want to read all 6 parts.


Thursday, June 16, 2011

IAP home office-revised

Spent the morning redoing the IAP home office to be more efficient and to provide more work space. All ready to "get r' done." Can never have enough computers or monitors...the only one missing from the picture is my iPhone, which was used to take the picture. The picture also does not include at least 3 semi-dead laptops in the closet :)

Tuesday, June 14, 2011

FYiPOST: Meixner on Neuroimaging-Based Credibility Assessments in Court

Recently posted to SSRN: "Liar, Liar, Jury's the Trier? The Future of Neuroscience-Based Credibility Assessment and the Court" Northwestern University Law Review, Vol. 106, No. 3, 2012 JOHN B. MEIXNER, Northwestern University - School of Law, Northwestern University - Department...


Monday, June 13, 2011

On the road-blogging lite or zero till end of week

Research brief: Gender differences in intelligence on the WAIS-III (Irwing, in press)

There has been no shortage of contemporary research on gender differences in cognitive abilities (click here for prior IQs Corner posts), and g (general intelligence) in particular. Irwing has a new article "in press" that contributes to this literature, both by reinforcing some prior findings and by being at variance with others. The introduction provides a nice brief overview of some of the reasons (primarily methodological) for differences in the male-female g-difference research findings.

Double click on images to enlarge.

- iPost using BlogPress from my Kevin McGrew's iPad

Generated by: Tag Generator

Friday, June 10, 2011

Burns on FYIPOST: Structural Errors, Procedural Default, and Ineffective Assistance

Amy Knight Burns has posted Insurmountable Obstacles: Structural Errors, Procedural Default, and Ineffective Assistance (Stanford Law Review, Vol. 64, 2011) on SSRN. Here is the abstract: Federal habeas corpus procedure involves an elaborate set of rules for when state criminal...

Sent with MobileRSS HD

Sent from Kevin McGrew's iPad
Kevin McGrew, PhD
Educational Psychologist

Thursday, June 9, 2011

ABA State of Criminal Justice 2011: Capital Punishment overview by Tabak

Somehow Tabak's overview chapter on the death penalty, as published in the ABA's State of Criminal Justice 2011, found its way to my inbox. This is an annual publication of the American Bar Association's Criminal Justice Section.

The entire book can be purchased for a modest amount within a few days at this link (which currently may still display the 2010 edition). Check back later if the current version is not yet listed.

- iPost using BlogPress from my Kevin McGrew's iPad


Wednesday, June 8, 2011

Law Review Article: How to Achieve the Categorical Exemption of MR Defendants from Execution (Dillard, 2011)

Amy Dillard (University of Baltimore - School of Law) has posted "And Death Shall Have No Dominion: How to Achieve the Categorical Exemption of Mentally Retarded Defendants from Execution" (University of Richmond Law Review, Vol. 45, No. 3, March 2011) on SSRN. Click here to view.


This article examines the Court’s categorical exclusion of mentally retarded defendants from execution and explores how trial courts should employ procedures to accomplish heightened reliability in the mental retardation determination; it maintains that if a mentally retarded defendant is subjected to a death sentence then the Atkins directive has been ignored. To satisfy the Atkins Court’s objective of protecting mentally retarded defendants from the “special risk of wrongful execution,” the article explores whether trial courts should engage in a unified, pre-trial competency assessment in all capital cases where the defendant asserts mental retardation as a bar to execution and how the ancient in favorem vitae doctrine could ensure fairness and protect defendants who may be at special risk “that the death penalty will be imposed in spite of factors which may call for a less severe penalty.”

- iPost using BlogPress from my Kevin McGrew's iPad

Tuesday, June 7, 2011

Atkins MR/ID Death Penalty Court Decision: Winston v VA (2011, 2009, 2008, 2007)

Still trying to clean out the ICDP inbox. Here is an update to Winston v VA (2011, 2009, 2008, 2007), which was first posted 1-26-10. I have had no time to read it, and thus post it without comment.

- iPost using BlogPress from my Kevin McGrew's iPad

Generated by: Tag Generator

Atkins MR/ID Death Penalty Court Decisions: Mitchell v Epps (MS, 2011) and Daniel v AL (2011)

Two recent Atkins decisions are now posted at the court decisions blogroll without comment. Thanks again to Kevin Foley, who keeps my Atkins inbox full...and me behind in posting :)

-Daniel v AL (2011)
-Mitchell v Epps (MS, 2011)

- iPost using BlogPress from my Kevin McGrew's iPad

Generated by: Tag Generator

Friday, June 3, 2011

Case involving executive function deficit arguments at Law and Biosciences Blog

Visit URL for more info.

Sent from Kevin McGrew's iPad
Kevin McGrew, PhD
Educational Psychologist

National Academies Press PDF Books Free to Download


The National Academies Press
June 3, 2011

Dear Subscriber,

As of June 2, 2011, all PDF versions of books published by the National Academies Press (NAP) will be downloadable free of charge to anyone. This includes our current catalog of more than 4,000 books plus future reports published by NAP.*

Free access to our online content supports the mission of NAP--publisher for the National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council--to improve government decision making and public policy, increase public education and understanding, and promote the acquisition and dissemination of knowledge in matters involving science, engineering, technology, and health. In 1994, we began offering free content online. Before today's announcement, all PDFs were free to download in developing countries, and 65 percent of them were available for free to any user.

Like no other organization, the National Academies can enlist the nation's foremost scientists, engineers, health professionals, and other experts to address the scientific and technical aspects of society's most pressing problems through the authoritative and independent reports published by NAP. We invite you to sign up for MyNAP --a new way for us to deliver free downloads of this content to loyal subscribers like you, to offer you customized communications, and to reward you with exclusive offers and discounts on our printed books.

Sign up now. It's quick, easy, and free.


Barbara Kline Pope
Executive Director for Communications and The National Academies Press

*There are a small number of reports that never had PDF files and, therefore, are not available for download. In addition, part of the "Nutrient Requirements of Domestic Animals" series is not available in PDF. Future titles in this series will also not have PDFs associated with them.
Connect with NAP Subscribe to RSS Follow NAP on TwitterJoin NAP on Facebook Forward this to a Friend
To unsubscribe from this list please go here or reply to this message with "unsubscribe" in the subject line.

To manage your subscriptions and account preferences go here.
Copyright © 2011. National Academies Press.
500 Fifth St. N.W., Washington, D.C. 20001.

Wednesday, June 1, 2011

Influences to consider when interpreting ability test scores: Dr. Schneider's amazing video tutorial

Double click on image to enlarge.

Dr. Joel Schneider has (again) posted an amazing video tutorial explaining the various kinds of influences on specific ability test scores. It is simply AWESOME...and makes me envious, since I don't have the time or skills to develop similar media content.

Clearly the message is that the interpretation of test scores is not simple and is clearly a mixture of art and science.

- iPost using BlogPress from my Kevin McGrew's iPad

Atkins MR/ID Death Penalty Court Decision: Blue v Thaler (TX, 2010)

Thanks again to Kevin Foley for sending me the federal habeas court's opinion for Blue v Thaler (TX, 2010).  Also available are copies of various expert reports/affidavits (click here, here, and here).  Apparently the decision was made on the basis of documents and no formal hearing was held. 

Of interest to readers may be the intelligence testing evidence. In state court Blue did not have a complete IQ test (just a couple of subtests and an IQ estimate, apparently completed pre-Atkins). For the federal court Blue was administered the WAIS-III in 2/08 and obtained a FSIQ of 76. The examiner recommended retesting given that the newer WAIS-IV was just being published, and the judge agreed. Nine months later Blue obtained a FSIQ of 77 on the WAIS-IV. Administration of the WAIS-IV eliminated the Flynn Effect argument for the WAIS-III test score, and the nine months between testings does mitigate the practice effect argument. Blue loses on the first prong of MR/ID.
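For readers unfamiliar with the Flynn Effect arithmetic at issue here, the sketch below shows why the argument matters for an older test but not for a just-published one. This is illustrative only: the roughly 0.3-points-per-year norm obsolescence rate is the figure commonly cited in the Flynn Effect literature, and the norming years are approximations I am assuming, not figures from the opinion.

```python
def flynn_adjust(observed_iq, norm_year, test_year, rate=0.3):
    """Return a Flynn-adjusted IQ: the observed score minus
    rate points for each year elapsed since the test was normed."""
    return observed_iq - rate * (test_year - norm_year)

# WAIS-III (normed circa 1995) administered in 2008:
# the adjustment is substantial (76 - 0.3 * 13 = 72.1).
waisiii_adjusted = flynn_adjust(76, norm_year=1995, test_year=2008)

# WAIS-IV administered shortly after its norms were collected:
# essentially no adjustment, so the Flynn argument falls away.
waisiv_adjusted = flynn_adjust(77, norm_year=2008, test_year=2008)
```

Under these assumptions, an adjusted WAIS-III score of about 72 would sit much closer to the traditional cutoff region than the obtained 76, which is why administering the newly normed WAIS-IV removed the issue.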

The court also had some concerns about selective attention to certain portions of the adaptive behavior evidence.

- iPost using BlogPress from my Kevin McGrew's iPad

Generated by: Tag Generator