Tuesday, July 31, 2018

Flynn Effect Reference Project has been updated 07-31-18


The Flynn Effect Reference Project document has just been updated and now includes 302 references. It can be accessed via this prior post (click here).

Monday, July 16, 2018

What is an applied psychometrician?

I wear a number of hats within the broad field of educational psychology. One is that of an applied psychometrician. Whenever anyone asks what I do, I receive strange looks when that title rolls out of my mouth, and I then always need to provide a general explanation.

I've decided to take a little time and generate a brief explanation.  I hope this helps.

The online American Psychological Association (APA) Dictionary of Psychology defines psychometrics as: “n. the branch of psychology concerned with the quantification and measurement of mental attributes, behavior, performance, and the like, as well as with the design, analysis, and improvement of the tests, questionnaires, and other instruments used in such measurement. Also called psychometric psychology; psychometry.”

The definition can be understood from the two components of the word. Psycho refers to the “psyche,” or the human mind; metrics refers to measurement. Thus, in simple terms, psychometrics means psychological measurement--it is the math and science behind psychological testing. Applied psychometrics is concerned with the application of psychological theory, techniques, statistical methods, and measurement principles to practical test development, evaluation, and interpretation. This contrasts with pure or theoretical psychometrics, which focuses on developing new measurement theories, methods, statistical procedures, etc. An applied psychometrician uses the theories, tools, and techniques developed by theoretical psychometricians in the actual development, evaluation, and interpretation of psychological tests. By way of analogy, applied psychometrics is to theoretical psychometrics as applied research is to pure research.

The principles of psychometric testing are very broad in their potential application and have been applied to such areas as intelligence, personality, interests, attitudes, neuropsychological functioning, and diagnostic measures (Irwing & Hughes, 2018). As Irwing and Hughes (2018) recently noted, psychometrics is broad in reach: “It applies to many more fields than psychology, indeed biomedical science, education, economics, communications theory, marketing, sociology, politics, business, and epidemiology amongst other disciplines, not only employ psychometric testing, but have also made important contributions to the subject” (p. 3).

Although there are many publications of relevance to the topic of test development and psychometrics, the most useful and important single source is the Standards for Educational and Psychological Testing (a.k.a. the Joint Test Standards; American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). The Joint Test Standards outline standards and guidelines for test developers, publishers, and users (psychologists) of tests.

Given that the principles and theories of psychometrics are generic (they cut across all subdisciplines of psychology that use psychological tests), and given that a professionally accepted set of standards exists (the Joint Test Standards), an expert in applied psychometrics has the skills and expertise to evaluate the fundamental, universal, or core measurement integrity (i.e., quality of norms, reliability, validity, etc.) of various psychological tests and measures (e.g., surveys, IQ tests, neuropsychological tests, personality tests). Sub-disciplinary expertise and training would still be required for expert test interpretation within a given sub-discipline; for example, expertise in brain development, brain functioning, and brain-behavior relations would be necessary to use neuropsychological tests to make clinical judgments regarding brain dysfunction, types of brain disorders, etc. However, the basic psychometric characteristics of nearly all psychological and educational tests (e.g., neuropsychological, IQ, achievement, personality, and interest measures) can be evaluated by professionals with expertise in applied psychometrics.
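
To make the idea of evaluating core measurement integrity a bit more concrete, below is a minimal sketch, in Python with simulated (purely hypothetical) item responses, of one routine reliability check an applied psychometrician might run: Cronbach's alpha, a common internal-consistency estimate of reliability.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) matrix of item scores."""
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical data: 100 examinees answering 10 Likert-type items (0-4),
# all driven by one latent trait, so the items should hang together well.
rng = np.random.default_rng(42)
trait = rng.normal(size=(100, 1))
items = np.clip(np.round(2 + trait + rng.normal(scale=0.8, size=(100, 10))), 0, 4)

print(f"alpha = {cronbach_alpha(items):.2f}")  # roughly 0.9 with this setup
```

The same matrix-in, coefficient-out logic underlies most of the reliability indices reported in test manuals; part of the applied psychometrician's job is knowing which index answers which question about a given test.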

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014). Standards for Educational and Psychological Testing. Washington, DC: Author.

Irwing, P., & Hughes, D. J. (2018). Test development. In P. Irwing, T. Booth, & D. J. Hughes (Eds.), The Wiley Handbook of Psychometric Testing: A Multidisciplinary Reference on Survey, Scale and Test Development (pp. 3-49). Hoboken, NJ: John Wiley & Sons.

Thursday, July 12, 2018

Great psychometric resource: The Wiley Handbook of Psychometric Testing

I just received my two-volume set of this excellent resource on psychometric testing. There are not many good books that cover such a broad array of psychometric measurement issues. This is not what I would call "easy reading"; it is more of a "must have" reference to keep "at the ready" when seeking to understand contemporary psychometric test development issues.

National Academies Press: Neuroforensics: Exploring the legal implications of emerging technologies


This new publication is now available from the National Academies Press.

Court decision: Moore v Texas (2018) after SCOTUS vacated decision based on Briseno AB standards



I'm a bit behind in posting information regarding recent Atkins-related court decisions.

Despite SCOTUS recently vacating Moore v Texas on the grounds that Texas's Briseno standards were not consistent with prevailing medical and professional standards, Moore was still found not to be intellectually disabled (ID) in the latest decision from Texas. The majority opinion can be found here. The dissenting opinion can be found here.

Wednesday, July 11, 2018

"Intellectual Disability, The Death Penalty, and Jurors"



From the Sentencing Law and Policy blog:

The title of this post is the title of this new paper on SSRN authored by Emily Shaw, Nicholas Scurich and David Faigman. Here is its abstract:

In Atkins v. Virginia (2002), the United States Supreme Court held that intellectually disabled defendants cannot be sentenced to death; but since then, the Court has continued to grapple with how intellectual disability should be legally defined. Typically, however, it is jurors who determine whether a defendant is intellectually disabled and therefore categorically ineligible for the death penalty. Very little is known empirically about how jurors reason about and make these decisions.

This Article presents the results of a novel experiment in which venire jurors participated in an intellectual disability hearing and a capital sentencing hearing. The diagnosis of a court-appointed expert was experimentally manipulated (defendant is or is not intellectually disabled), as was the provision of information about the crime (present or absent). Jurors were considerably more likely to find the defendant not disabled when the expert opined that the defendant was not disabled.  They were also more likely to find the defendant not disabled when they learned about the details of the crime. Similarly, jurors were more likely to sentence the defendant to death after learning about the details of the crime, which increased perceptions of both the defendant's blameworthiness and his mental ability.  These findings highlight the reality that jurors' assessments of intellectual disability are influenced by crime information, contrary to pronouncements made by the United States Supreme Court, and they support the use of bifurcated disability proceedings, as some states have recently adopted.


Sunday, July 8, 2018

Practice or retest effects in measures of working memory capacity (Gwm): A meta-analysis

Retest effects in working memory capacity tests: A meta-analysis
Jana Scharfen, Katrin Jansen, Heinz Holling. Article link

© Psychonomic Society, Inc. 2018

Abstract

The repeated administration of working memory capacity tests is common in clinical and research settings. For cognitive ability tests and different neuropsychological tests, meta-analyses have shown that they are prone to retest effects, which have to be accounted for when interpreting retest scores. Using a multilevel approach, this meta-analysis aims at showing the reproducibility of retest effects in working memory capacity tests for up to seven test administrations, and examines the impact of the length of the test-retest interval, test modality, equivalence of test forms and participant age on the size of retest effects. Furthermore, it is assessed whether the size of retest effects depends on the test paradigm. An extensive literature search revealed 234 effect sizes from 95 samples and 68 studies, in which healthy participants between 12 and 70 years repeatedly performed a working memory capacity test. Results yield a weighted average of g = 0.28 for retest effects from the first to the second test administration, and a significant increase in effect sizes was observed up to the fourth test administration. The length of the test-retest interval and publication year were found to moderate the size of retest effects. Retest effects differed between the paradigms of working memory capacity tests. These findings call for the development and use of appropriate experimental or statistical methods to address retest effects in working memory capacity tests.

Keywords Meta-analysis · Retest effect · Practice effect · Working memory
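
For readers curious about the mechanics behind a weighted average effect size such as g = 0.28, below is a minimal sketch of classic inverse-variance pooling in Python. The per-study effects and variances are invented for illustration (not taken from the paper), and the meta-analysis itself used a more sophisticated multilevel approach to handle dependent effect sizes, so treat this only as the basic idea.

```python
import numpy as np

# Invented per-study retest effects (Hedges' g, first to second administration)
# and their sampling variances -- NOT the actual values from this meta-analysis.
g = np.array([0.35, 0.22, 0.31, 0.18, 0.40])
v = np.array([0.010, 0.015, 0.008, 0.020, 0.012])

w = 1.0 / v                          # inverse-variance weights
g_pooled = (w * g).sum() / w.sum()   # fixed-effect weighted average
se = (1.0 / w.sum()) ** 0.5          # standard error of the pooled estimate

print(f"pooled g = {g_pooled:.2f}, "
      f"95% CI [{g_pooled - 1.96 * se:.2f}, {g_pooled + 1.96 * se:.2f}]")
```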



- Posted using BlogPress from my iPad