Rachael L. Ellison and Monica Stika
Performance validity tests (PVTs) and symptom validity tests (SVTs) are important in neuropsychological testing to capture the patient’s engagement with the testing process and any attempts to influence it. However, it can be clinically difficult to provide patients with meaningful feedback on the testing when one or more measures of validity were failed. This is a frequent challenge in cases of mild traumatic brain injury. Several methods of presenting useful information to the patient and family are described.
A few weeks ago, Robert, a 33-year-old Caucasian male with a college degree, contacted you about self-reported cognitive difficulties from a mild traumatic brain injury (mTBI; concussion). He reported being hit in the head two years earlier by a baseball bat while playing a pickup game. Robert fully remembers the day and the events prior to being hit and denies any loss of consciousness or post-traumatic amnesia at the time of the incident. He described alterations in consciousness (e.g., disorientation and confusion) for about 30 minutes after being struck—“seeing stars.” Robert expressed current increased difficulty with his memory (e.g., forgetting appointments, forgetting to take medications) and with paying attention, particularly during conversations. He indicated that he is easily distracted, and he has trouble completing multi-step tasks.
You conducted a clinical interview with him along with neuropsychological testing that consisted of a flexible battery assessing attention, working memory, executive functioning, visual and verbal memory, processing speed, visuospatial skills, language, mood, and premorbid functioning. You included a mixture of embedded and standalone performance validity tests (PVTs), as well as symptom validity tests (SVTs), during your evaluation. Unfortunately, Robert failed one standalone PVT and also invalidated his personality testing, as his symptom endorsement patterns fell outside normal expectations on an SVT. You have a feedback session scheduled for later this week with Robert and his wife, but you are still vacillating over how best to frame the feedback. How do you provide appropriate and clinically sensitive feedback about invalid testing?
Nature of Mild Traumatic Brain Injury
TBIs are extremely common, and sequelae and prognosis vary significantly depending on severity. There were about 2.8 million TBI-related emergency department visits in the U.S. in 2013 (Taylor, Bell, Breiding, & Xu, 2017). The majority of TBIs, around 75%, are mTBIs (Centers for Disease Control and Prevention, 2003; see Table 1 for diagnostic criteria, utilizing the Glasgow Coma Scale and other features of the event/symptoms).
Concussions and other brain injuries result from a diverse array of causes, including motor vehicle accidents, combat-related trauma, sports injuries, falls in which the head is struck, penetrating wounds, and assaults. Brain injuries can also result from rapid acceleration and deceleration, shaking, or blast waves.
Brain injuries may not always be obvious. Patients may not have a skull fracture or broken skin on the head, and the damage may not be where one expects: with cerebral contusions, you may observe coup injuries (at the site of the impact) as well as contrecoup injuries (on the opposite side of the head). Brain injuries may also be classified as complicated (evidence of neuroradiologically detected intracranial abnormalities such as bruising, bleeding, or swelling) or uncomplicated (no detected abnormalities on neuroimaging). The location (focal vs. diffuse) and type of injury (complicated vs. uncomplicated; grey matter lesion vs. diffuse axonal [white matter] injury) are also significant factors in the potential neurocognitive sequelae of TBI. These different injury types can result in different symptom presentations and recovery prognoses.
Not all brain injury cases manifest the same symptoms, and not all cases follow the same pattern of symptoms; this is especially true of mTBI. To complicate matters further, despite incredible advances in diagnostic neuroimaging, CT scanning is still not sensitive enough to detect the mild changes suspected in mTBI (Belanger, Vanderploeg, Curtiss, & Warden, 2007), and 43–68% of patients diagnosed with mTBI have normal MRI scans (Hughes et al., 2004). Neuropsychological testing may have better sensitivity than imaging in detecting impairments associated with mTBI.
It is important after an mTBI to provide psychoeducation on expected symptoms and recovery timeline, as this alone can minimize both short- and long-term symptoms (Suhr & Gunstad, 2005). Most mTBI symptoms resolve on their own in a few days to a few months post injury, although a small subset of mTBI individuals (7–33%) continue to report and experience symptoms (Belanger, Vanderploeg, Curtiss & Warden, 2007). This small subset may experience adverse outcomes persisting for years following the injury (Vanderploeg, Curtiss, Luis, & Salazar, 2007). This lingering collection of physical, cognitive, and psychosocial symptoms is termed post-concussion syndrome (PCS). PCS symptoms include headaches, blurred vision, dizziness and imbalance, difficulties with concentration, forgetfulness, slowed thinking, dysregulated sleep, and irritability.
Most research shows no relationship between these lingering symptom complaints and objective findings on neuropsychological tests, physical examinations, or neurological examinations (Schretlen & Shapiro, 2003). Additionally, non-brain injury factors (e.g., changes in mood associated with depression or PTSD, disrupted sleep, pain) play a significant role in symptom complaints for patients with enduring PCS, and a high comorbidity of PTSD with mTBI has been observed (Hoge et al., 2008). It is likely that large meta-analytic studies exploring mTBI recovery do not report the range of individual symptoms experienced by those with residual cognitive deficits (Iverson, 2010).
A given TBI event should not be conceptualized in isolation when doing a head injury evaluation. It is important to explore how a presenting mTBI fits into the context of the patient’s entire life history of head injuries (if applicable). Awareness is growing regarding the potential cognitive sequelae of repetitive concussions and/or sub-concussive blows (Guskiewicz et al., 2003), as is the literature (and national debate) on chronic traumatic encephalopathy (previously described as dementia pugilistica; McKee et al., 2009).
In this regard, clinicians should be aware of second impact syndrome, a rare (and at times debated) condition described primarily in children and young adults, in which a second concussion occurs before the symptoms from an initial concussion have resolved, resulting in potentially catastrophic cerebral edema (Wetjen, Pichelmann, & Atkinson, 2010). All the above possibilities should be considered in order to develop a full understanding of a patient’s head injury.
The Truth is in the Numbers—or is it?
While the discipline of psychology is long established, neuropsychology is still a relatively young and developing field, even given its rapid growth since the 1980s. A major strength of neuropsychological testing is its objectivity and standardization. To determine strengths and weaknesses within cognitive domains (e.g., attention, processing speed, memory, language, executive functioning), neuropsychologists rely on normative data: measures are administered to many “healthy” individuals across demographic bands (e.g., age, education, gender, ethnicity) to establish what constitutes normal functioning for each group.
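The normative comparison itself is simple arithmetic: a patient’s raw score is expressed as a standard (z) score relative to the demographically matched normative group, which maps onto a percentile. The sketch below illustrates this with entirely hypothetical normative values (a list-learning total with a normative mean of 52 and SD of 8); it is not a real test’s norm table.

```python
import math

def score_to_percentile(raw: float, norm_mean: float, norm_sd: float) -> tuple[float, float]:
    """Convert a raw test score to a z-score and percentile
    against a demographically matched normative group."""
    z = (raw - norm_mean) / norm_sd
    # Percentile via the standard normal CDF, assuming normally
    # distributed scores in the normative sample.
    percentile = 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, percentile

# Hypothetical norms: raw score of 48 against a normative mean of 52
# and SD of 8 for the patient's age/education band.
z, pct = score_to_percentile(48, norm_mean=52, norm_sd=8)
print(f"z = {z:.2f}, percentile = {pct:.1f}")  # z = -0.50, percentile = 30.9
```

A z of −0.50 (roughly the 31st percentile) would fall comfortably within the average range; the same raw score could be impaired or superior depending on the demographic band it is compared against, which is why demographically appropriate norms matter.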
With the increase in workers’ compensation claims over the last several decades, the concept of “malingering,” or intentional feigning of medical or mental health disorders for secondary gain, became an issue of concern. To more formally operationalize the definitions of possible, probable, and definite malingering, Dan Slick and colleagues (1999) proposed a set of diagnostic criteria for clinical and research purposes.
The need to objectively establish a patient’s level of effort and engagement on testing has led to the advent of “validity testing.” Initially, symptom validity tests sought to determine effort and engagement on both cognitive tests and personality/emotional functioning measures. As this concept evolved, multiple terms emerged in the literature to delineate between cognitive performance and symptom report. Specifically, tests that examine level of effort and engagement on cognitive tests are considered “performance validity tests” (PVTs), as they relate to specific performance on testing. Symptom validity tests (SVTs) examine endorsement patterns on measures of personality/emotional functioning, as they relate to consistency and accuracy in self-report of mood symptoms. Occasionally, some authors use the terms interchangeably.
Utilization of PVTs and SVTs is dependent upon the clinical context. They are important and need to be integrated with other data resulting from behavioral observation, documented history, and self and collateral report when performing a comprehensive examination (American Academy of Clinical Neuropsychology [AACN], 2009; National Academy of Neuropsychology [NAN], 2005).
Let’s focus on PVTs. The current standard of practice in neuropsychology is to use a combination of both standalone and embedded validity measures within the performance testing battery and, ideally, throughout the evaluation process (Boone, 2009). Standalone measures are individual tests whose sole purpose is to assess level of engagement and effort (e.g., Test of Memory Malingering, Rey-15, Victoria Symptom Validity Test). Embedded measures, on the other hand, are parts of regularly administered tests (e.g., memory; Wisconsin Card Sorting Test- Failure to Maintain Set; Wechsler Adult Intelligence Scale- Reliable Digit Span; Repeatable Battery for the Assessment of Neuropsychological Status- Effort Index, Effort Scale, Performance Validity Index, Charleston Revised Index of Effort). Table 2 presents examples of commonly used PVTs and key citations for each.
When interpreting psychological test data, scores below clinically established cut-offs represent notably aberrant performances that fall well outside normal expectations or even below chance. Cut-offs are determined in relation to identified populations or on statistical grounds. For example, even individuals with advanced Alzheimer’s disease can pass what seems like a challenging memory test. We therefore conclude that individuals who “fail” such a measure, yet do not present as significantly impaired, are likely not fully engaged or providing full effort. Confidence intervals also allow us to examine scores in relation to chance; scores at or below chance on a forced-choice performance validity test may suggest an effortful attempt to perform poorly. Such results, however, should never be considered in isolation.
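The chance-level logic can be made concrete with a binomial calculation. On a two-alternative forced-choice test, an examinee who is purely guessing gets each item right with probability .5, so the probability of scoring at or below a given total follows a binomial distribution. The numbers below (15 of 50 correct) are a hypothetical illustration, not the scoring rule of any specific published PVT.

```python
import math

def below_chance_p(correct: int, trials: int) -> float:
    """P(X <= correct) for X ~ Binomial(trials, 0.5): the probability
    that a purely guessing examinee on a two-alternative forced-choice
    test would score this low or lower."""
    return sum(math.comb(trials, k) for k in range(correct + 1)) / 2 ** trials

# Hypothetical example: 15 of 50 items correct on a forced-choice PVT.
p = below_chance_p(15, 50)
print(f"p = {p:.4f}")  # well below .05: significantly worse than guessing
```

A score this far below the guessing rate is hard to produce without knowing the correct answers and avoiding them, which is why significantly below-chance performance is treated as especially strong evidence of non-credible responding; scores above chance but below empirical cut-offs require the broader convergence of data described next.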
We know that there are many reasons individuals may “fail” validity tests. Never base the interpretation of PVTs on a single test, but rather on a convergence of data across several validity tests, as well as behavioral observations, chart review, and clinical judgment. SVT results are likewise placed within the context of behavioral observations and the mood reported during interviews. While frank malingering should be on the list of possibilities to consider, other factors may frequently be involved and play a bigger role.
We know that many factors can influence neuropsychological testing more broadly, including sleep/fatigue, pain, and/or mood (Viola-Saltzman & Watson, 2012). We also know that the subjective experience of cognitive change can be quite frightening and anxiety provoking. Patients may attempt to ensure that clinicians understand their difficulties by over-endorsing memory problems or mood symptoms. Behavioral observations during the evaluation are also an important source of data, beyond the test results. Poor performance in the morning, with improvement in the afternoon, may reflect poor sleep the night before. Good performance in the morning, with declining performance in the afternoon, may reflect fatigue or the effect of taking a pain pill during the lunch period. If a patient failed only one PVT, it is critical to consider when it occurred and what might have contributed to it. In addition, since testing occurs over many hours, it is important to consider who interacted with the patient during breaks and the focus of those interactions. Did the patient have contact with family members or other health professionals? Was any news received during the contact that might affect the patient’s subsequent performance?
Since no uniform consensus exists as to the number of validity tests that must be passed for an examination to be considered valid, it is good clinical practice to use multiple PVTs throughout an evaluation (Boone, 2009; Heilbronner et al., 2009). Understanding the strengths and limitations of validity testing is important (Merten et al., 2007), particularly knowing which measures are less sensitive for detecting suboptimal performance in specific situations. Various validity measures may have limited validity in specific clinical populations (e.g., the Rey 15 in mTBI; Flaherty, Spencer, Drag, Pangilinan, & Bieliauskas, 2015). In extreme situations (e.g., a frank amnestic episode due to conditions such as herpes encephalitis, or chronic toxic encephalopathy), patients may be expected to fail validity testing (van Hout, Schmand, Wekking, Hageman, & Deelman, 2003). Exercise caution against over-pathologizing failed validity testing in these instances.
From Conceptualizing the Data to Feedback—What to Say?
How can you best explain testing results to the patient when invalid performance or symptom validity is present? Interpreting an extensive battery of neuropsychological test results is a complex process. Most patients, however, are seeking an understanding of the larger overall questions—how am I doing, am I functioning in the normal range, will I get better or worse over time? Using an analogy to describe brain functioning and human performance is often useful. This is true regardless of whether the patient has “failed” any validity testing.
Describing neuropsychological testing as the “behavioral MRI” can be an effective analogy. During an MRI, you must stay completely still. If you move, it makes the picture blurry and difficult to accurately interpret. In a similar manner, there are things during the neuropsychological testing that parallel “moving” during an MRI. Poor sleep the night before, not eating prior to the evaluation, taking or not taking needed medication, changes in mood, or the experience of physical pain can make the overall results of neuropsychological testing “blurry” and difficult for the neuropsychologist to interpret confidently.
Using an analogy of a “wet blanket” can also be effective in explaining how these various factors can affect our overall neuropsychological functioning. These co-occurring events (such as pain, mood, fatigue, hunger, and so forth) can, in effect, put a metaphorical “wet blanket” on our brain and slow processes down, even if underlying brain structures are structurally fine. This can provide a way of introducing the importance of a healthy lifestyle for promoting maximal performance. Using an analogy of a car engine with four cylinders may be useful with more mechanically oriented patients. Usually four cylinders are available for cognitive processes such as attention, memory, and thinking; however, if pain, upset mood, and/or fatigue are using up one or more of the cylinders, fewer are available for mental focus on any task at hand. Modifying an analogy to best fit the patient and their specific language usage will make your explanation of the testing results more relatable to them and their life.
There is some debate among practitioners as to how much information to provide the patient about the validity testing process. Should you specifically comment on the lower performance on specific validity measures designed to measure effort and task engagement? Is it appropriate to comment on the possibility that something was interfering with a full effort on the day of testing? If specifically mentioning validity testing, do not describe the tests in detail. It is important to not alert the patient as to which tests were measuring their effort (and keep the integrity of those validity tests). In explaining invalid test results, analogies can help the patient understand what occurred without it feeling like an accusation.
Avoid using the word malingering, as it has an accusatory connotation and can make the patient feel defensive during feedback. Many patients who fail validity testing are experiencing significant physical, psychological, and/or cognitive distress. Failed validity testing may at times be viewed as a “cry for help,” or as the patient wanting (consciously or unconsciously) to show how distressed they are. Failed validity testing may also occur when there are financial incentives (e.g., disability claims, service-connected disability for Veterans, etc.). Failing validity testing, however, does not necessarily equal purposeful malingering. As noted above, many other things can affect one’s effort and engagement with testing (e.g., sleep, pain, mood, etc.). Behavioral observations during neuropsychological testing are important, and the combination of both sources of data is key to interpretation.
The Balancing Act—Sensitivity and Utility
A patient failing validity testing does not mean that all data are uninterpretable. “Normal” scores (i.e., average or higher scores) can still be interpreted using an “at least as” analysis. Such normal scores may be an underestimate of a patient’s true abilities in that particular domain, but are at least able to suggest that there is no significant impairment in that area of cognition, as the patient is still performing in the expected “normal” range.
Second, this may also be a nice opportunity to put a positive spin or frame on the testing results. If there are interfering factors (e.g., mood, pain, sleep, etc.), it may be helpful to highlight to the patient that these things can be treated. Have referrals and/or recommendations for these areas ready for the feedback session. In most cases of mTBI, there are few specific treatments outside of the passing of time (and avoidance of re-injury). Furthermore, specific treatment of co-occurring symptoms such as mood problems, pain, or poor sleep is not dependent on a TBI diagnosis. There is no TBI-specific cognitive behavioral therapy (CBT) for mood- and sleep-related symptoms; standard CBT protocols apply. It will therefore benefit the patient to begin treating these other symptoms, regardless of their etiology, using standard treatment protocols. Remind the patient at this time that once other symptoms (i.e., “wet blankets”) have improved (or, continuing with the analogy above, have been lifted), they are welcome to come back for a reevaluation in the future if they are still having cognitive complaints. Then, without the increased interference of these other symptoms, there may be an opportunity to evaluate true cognitive functioning more clearly.
Be mindful as to what specific information will be included in the report. Again, do not assume failure in validity testing equals malingering. Be sensitive to that language, both for the benefit of the patient and for any future providers who may access the report. Although normal scores can still be interpreted after failed validity measures, use caution before interpreting or reporting any below-average scores in the report. Consider not reporting raw scores for non-normal performances, as they are not interpretable in the context of failed validity. In these cases, it may be important to avoid having the patient or future providers jump to conclusions about the patient scoring in the impaired range, and about what those “impaired” scores may mean when generalized to their cognitive abilities.
Be precise with language regarding TBI and its cognitive sequelae. Avoid, particularly with an mTBI patient, having the patient or their family walk away from feedback on an invalid evaluation believing only that they have been told they “did not have a TBI.” Rather, with invalid results, clarify that the current symptoms cannot be determined to stem from the mTBI, given unusable test results. Provide psychoeducation that the TBI is the “event,” and that providers can never take away the fact that a TBI occurred.
Lastly, in preparing for patient feedback, it is never too early to begin self-evaluating procedural improvements for future patients. Consider additions that might have been helpful during the consent process and clinical interview (e.g., providing brief psychoeducation about the neuropsychological testing and feedback process). Was enough time spent asking the patient what they hoped to get out of the testing process? Were all parties on the same page about the clinic’s or practice’s services? Did the patient and their family fully understand all the potential outcomes of testing? Providing feedback on invalid results is difficult, and there is always room to improve clinical style.
This discussion is not meant to be a comprehensive review of validity testing, but rather a broad overview as part of the larger conversation on providing feedback in instances of failed validity testing in this specific population (mTBI). For a more in-depth review of performance validity and symptom validity in neuropsychology, please refer to the references marked with an asterisk in the reference section.
Rachael L. Ellison, PhD, is a Neuropsychology Post-Doctoral Fellow at Edward Hines Jr. VA Hospital, and a Post-Doctoral researcher with the Biology, Identity & Opportunity (BIO) Study at Northwestern University. Dr. Ellison received her PhD in Clinical & Community Psychology at DePaul University, with a focus in Neuropsychology. She completed her Clinical Internship at the UCSD/San Diego VA Health Care System with a focus in neuropsychology, traumatic brain injury (TBI), cognitive rehabilitation, and post-traumatic stress disorder (PTSD).
Monica Stika, PhD, is a Neuropsychology Post-Doctoral Fellow at Edward Hines Jr. VA Hospital, where she also completed her Pre-Doctoral Internship. She received her PhD in Clinical Psychology at Rosalind Franklin University of Medicine and Science, with a focus in Neuropsychology. Dr. Stika’s research interests are focused broadly on aging and memory, the relationship between psychological and neuropsychological measures, including effort, in clinical and forensic populations.
Note. Performance and symptom validity in neuropsychology (selected texts) notated with asterisk.
Belanger, H. G., Vanderploeg, R. D., Curtiss, G., & Warden, D. L. (2007). Recent neuroimaging techniques in mild traumatic brain injury. The Journal of Neuropsychiatry and Clinical Neurosciences, 19(1), 5-20.
*Bianchini, K. J., Mathias, C. W., & Greve, K. W. (2001). Symptom validity testing: A critical review. The Clinical Neuropsychologist, 15(1), 19-45.
*Bigler, E. D. (2012). Symptom validity testing, effort, and neuropsychological assessment. Journal of the International Neuropsychological Society, 18(4), 632-640.
Boone, K. B. (2009). The need for continuous and comprehensive sampling of effort/response bias during neuropsychological examinations. The Clinical Neuropsychologist, 23, 729-741.
*Bush, S. S., Ruff, R. M., Tröster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., …& Silver, C. H. (2005). Symptom validity assessment: Practice issues and medical necessity NAN Policy and Planning Committee. Archives of Clinical Neuropsychology, 20, 419-426.
Centers for Disease Control and Prevention. (2003). Report to Congress on mild traumatic brain injury in the United States: Steps to prevent a serious public health problem. Atlanta, GA: Centers for Disease Control and Prevention, 45.
Delis, D. C. (2000). CVLT-II: California verbal learning test: Adult version. Psychological Corporation.
Flaherty, J. M., Spencer, R. J., Drag, L. L., Pangilinan, P. H., & Bieliauskas, L. A. (2015). Limited usefulness of the Rey Fifteen-Item Test in detection of invalid performance in veterans suspected of mild traumatic brain injury. Brain injury, 29(13-14), 1630-1634.
*Fox, D. D. (2011). Symptom validity test failure indicates invalidity of neuropsychological tests. The Clinical Neuropsychologist, 25(3), 488-495.
Green, P. (2005). Green's Word Memory Test for Microsoft Windows: User's manual. Green's Publications Incorporated.
Green, P., Allen, L. M., & Astner, K. (1996). The Word Memory Test: A user’s guide to the oral and computer-administered forms, US version 1.1. Durham, NC: CogniSyst.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6(3), 218.
Guskiewicz, K. M., McCrea, M., Marshall, S. W., Cantu, R. C., Randolph, C., Barr, W., ... & Kelly, J. P. (2003). Cumulative effects associated with recurrent concussion in collegiate football players: the NCAA Concussion Study. JAMA, 290(19), 2549-2555.
*Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference Participants (2009). American Academy of Clinical Neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093-1129.
Hofman, P. A., Stapert, S. Z., van Kroonenburgh, M. J., Jolles, J., de Kruijk, J., & Wilmink, J. T. (2001). MR imaging, single-photon emission CT, and neurocognitive performance after mild traumatic brain injury. American Journal of Neuroradiology, 22(3), 441-449.
Hoge, C. W., McGurk, D., Thomas, J. L., Cox, A. L., Engel, C. C., & Castro, C. A. (2008). Mild traumatic brain injury in US soldiers returning from Iraq. New England Journal of Medicine, 358(5), 453-463.
Hughes, D. G., Jackson, A., Mason, D. L., Berry, E., Hollis, S., & Yates, D. W. (2004). Abnormalities on magnetic resonance imaging seen acutely following mild traumatic brain injury: correlation with neuropsychological tests and delayed recovery. Neuroradiology, 46(7), 550-558.
Inman, T. H., & Berry, D. T. (2002). Cross-validation of indicators of malingering: A comparison of nine neuropsychological tests, four tests of malingering, and behavioral observations. Archives of Clinical Neuropsychology, 17(1), 1-23.
Iverson, G. L. (2010). Mild traumatic brain injury meta-analyses can obscure individual differences. Brain Injury, 24(10), 1246-1255.
*Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists’ validity testing beliefs and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29(6), 741-776.
McKee, A. C., Cantu, R. C., Nowinski, C. J., Hedley-Whyte, E. T., Gavett, B. E., Budson, A. E., ... & Stern, R. A. (2009). Chronic traumatic encephalopathy in athletes: progressive tauopathy after repetitive head injury. Journal of Neuropathology & Experimental Neurology, 68(7), 709-735.
Merten, T., Bossink, L., & Schmand, B. (2007). On the limits of effort testing: Symptom validity tests and severity of neurocognitive symptoms in nonlitigant patients. Journal of Clinical and Experimental Neuropsychology, 29(3), 308-318.
Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie traumatique. (Les problems). Archives de psychologie.
Schretlen, D. J., & Shapiro, A. M. (2003). A quantitative review of the effects of traumatic brain injury on cognitive functioning. International Review of Psychiatry, 15(4), 341-349.
Slick, D., Hopp, G., Strauss, E., & Thompson, G. (1997). The Victoria symptom validity test. Odessa, FL: Psychological Assessment Resources.
*Slick, D. J., Sherman, E. M. S., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13(4), 545-561.
Suhr, J. A., & Gunstad, J. (2005). Further exploration of the effect of “diagnosis threat” on cognitive performance in individuals with mild head injury. Journal of the International Neuropsychological Society, 11(1), 23-29.
Taylor, C. A., Bell, J. M., Breiding, M. J., & Xu, L. (2017). Traumatic brain injury–related emergency department visits, hospitalizations, and deaths—United States, 2007 and 2013. MMWR. Surveillance Summaries, 66.
Tombaugh, T. N. (1996). Test of memory malingering: TOMM. North Tonawanda, NY: Multi-Health Systems.
Vanderploeg, R. D., Curtiss, G., Luis, C. A., & Salazar, A. M. (2007). Long-term morbidities following self-reported mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 29(6), 585-598.
van Hout, M. S., Schmand, B., Wekking, E. M., Hageman, G., & Deelman, B. G. (2003). Suboptimal performance on neuropsychological tests in patients with suspected chronic toxic encephalopathy. Neurotoxicology, 24(4), 547-551.
*Victor, T. L., Boone, K. B., Serpa, J. G., Buehler, J., & Ziegler, E. A. (2009). Interpreting the meaning of multiple symptom validity test failure. The Clinical Neuropsychologist, 23(2), 297-313.
Viola-Saltzman, M, & Watson, N.F. (2012). Traumatic brain injury and sleep disorders. Neurologic Clinics, 30(4), 1299-1312.
Wechsler, D. (2008). Wechsler adult intelligence scale–Fourth Edition (WAIS–IV). San Antonio, TX: NCS Pearson, 22, 498.
Wetjen, N. M., Pichelmann, M. A., & Atkinson, J. L. (2010). Second impact syndrome: concussion and second injury brain complications. Journal of the American College of Surgeons, 211(4), 553-557.