
Eukaryon

CCK Impulsivity Scale Validation

Carli Pentz and Colin Willis
Department of Psychology
Lake Forest College

Primary Article

 

Abstract

To develop a valid measure of the personality trait impulsivity, four dimensions of the trait were identified through a literature analysis: Sensation Seeking, Lack of Premeditation, Lack of Perseverance, and Urgency. Items based on these dimensions were then developed and used to build a scale. To test the validity of the scale, henceforth known as the CCK scale, it was administered to 40 college students, 32 of whose responses were included in the analysis, concurrently with the BIS-11, a previously validated impulsivity scale. The correlation between scores on the two scales served as the validity coefficient. The analysis revealed a strong correlation between the two, r(30) = .826, p = 6.07E-8. The relationship between the CCK and the BIS-11 was significant; thus, there is considerable evidence for the validity of the CCK.

Introduction

Impulsivity Dimensions

Impulsivity, in simplest terms, is a particularly broad and fragmented personality construct. Impulsivity can describe a person’s tendency to give in to cravings, inability to plan or weigh options before deciding (Kirby & Finch, 2010), tendency to seek out adventure or thrills (Whiteside & Lynam, 2001), lack of patience, inability to appreciate consequences, and propensity for uninhibited, inappropriate behaviors (Reynolds et al., 2006). These broad characterizations suggest the extent to which impulsivity is defined in everyday terms; ostensibly, impulsivity encompasses a wide range of daily events.

Whiteside and Lynam (2001) reviewed a large number of commonly used impulsivity scales and performed a factor analysis on the aggregated items, primarily in a college student population. They found evidence for a four-factor solution that accounted for 66% of the variance in the measures studied. Following an item content analysis, the researchers named the four factors Sensation Seeking, Lack of Premeditation, Lack of Perseverance, and Urgency. To construct our scale, we made a master list of tested dimensions based on this research.

Sensation Seeking is characterized by the tendency for the individual to look for or prefer exciting and adventuresome events. Lack of Premeditation is an inability to plan, coupled with the tendency to act before thinking. Lack of Perseverance is the tendency to become bored with an activity or failure to finish a task. Urgency is exhibited when an individual acts rashly, cannot resist temptation, or reacts to strong emotions in a regrettable manner.

Scale Construction

A master list of possible items was constructed; four categories were originally chosen for our scale: lack of planning, urgency, sensation seeking, and lack of perseverance. Distractor items were adapted from an empathy scale developed by Hashimoto and Shiomi (2002); the 20 distractor items we created differed in word choice from the originals but preserved the intended empathy content of each item. These distractor items were interspersed randomly among the impulsivity items.

Throughout the process of creating our scale, we needed to address several potential response biases. To avoid acquiescence bias, the tendency of a participant to agree with every statement on the scale, we reverse keyed some of the impulsivity items. Reverse keying reduces acquiescence bias by preventing a student from simply rating every item a “5,” or strongly agree; each student must read the item carefully and express his or her own opinion instead of only agreeing or giving a high rating.
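As an illustration of this scoring step, the following is a minimal sketch of how reverse-keyed items on a 1-5 Likert scale could be rescored; the item names and responses are hypothetical placeholders, not the actual CCK items or data.

```python
# Minimal sketch of reverse keying on a 1-5 Likert scale.
# Item names and responses are hypothetical placeholders, not the CCK data.
import pandas as pd

responses = pd.DataFrame({
    "item_1": [5, 4, 2],   # keyed in the impulsive direction
    "item_3": [1, 2, 5],   # reverse keyed: agreement indicates low impulsivity
})

REVERSE_KEYED = ["item_3"]
SCALE_MIN, SCALE_MAX = 1, 5

# A reverse-keyed raw score x is rescored as (max + min) - x, so 5 -> 1, 4 -> 2, etc.
for item in REVERSE_KEYED:
    responses[item] = (SCALE_MAX + SCALE_MIN) - responses[item]

print(responses)
```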

To avoid social desirability bias, we made it very clear in the directions that the scale would be anonymous and that the only information requested was age and gender. This approach reduces social desirability bias by ensuring that participants’ names are never associated with their scores, giving them reason to respond in a way that is true to their own opinions rather than in a way intended to look favorable to us.

Finally, to avoid observer bias, the scale included clear directions that informed students of their anonymity and of their ability to stop taking the scale at any moment, and it mixed impulsivity items with distractor items. Because the pilot test relied entirely on self-report rather than on our observations of the students’ behavior, observer and inter-rater bias were avoided.

The draft scale was revised to improve clarity, avoid double-barreled questions, and ensure that each item represented the intended dimension of impulsivity (urgency, sensation seeking, lack of planning, or lack of perseverance). The resulting scale was pilot tested with ten willing volunteers; after further revisions to clarify vague items, it was pilot tested with nine more individuals, for a total of two pilot-testing sessions.

Impulsivity Scale Validation

Construct validity is a psychometric property that indicates whether a test measures what it claims to measure. Three subcategories of construct validity are content, convergent, and discriminant validity. Content validity is the extent to which a test represents the key dimensions of the construct; for a scale that measures impulsivity, the items would need to represent all of the different dimensions of the trait. A scale under scrutiny is said to have convergent validity when an independent, validated scale of the same personality trait correlates highly with it; for example, people’s scores on a new impulsivity scale should be similar to their scores on established scales that measure the same trait. Conversely, discriminant validity is indicated by a low correlation between the scrutinized scale and a previously validated scale that measures a similar, yet distinct, personality trait; an impulsivity scale would show discriminant validity if it did not correlate with an already validated scale assessing cautiousness.

Another type of validity that helps establish construct validity is criterion-related validity, the extent to which a measure correlates with an independent measure that is already known to be valid.

Figure 1: Distribution of Scale Score for Full Sample

Figure 2: Gender Differences in Impulsivity

According to Anastasi (1968), there are two subcategories of criterion-related validity: predictive and concurrent validity. Predictive validity focuses on predicting future outcomes for an individual (Anastasi, 1968). If predictive validity were used in our study, we would distribute an already validated criterion, such as the Barratt Impulsiveness Scale (BIS-11), to students at the beginning of the school year; the purpose of the BIS-11 would be to predict the students’ impulsiveness in the future. To confirm whether our CCK Impulsivity Scale (CCK) is valid, the CCK would then be administered to the same students a few years later; if students received similar scores on the two scales, the CCK would be considered valid. Concurrent validity, by contrast, compares two scales administered at the same time to determine whether participants obtain similar results on both. In a concurrent design, each participant takes the CCK and the BIS-11 in the same session, with the BIS-11, an already validated scale, serving as the criterion (Patton, Stanford, & Barratt, 1995). If participants obtain similar scores on both scales, the CCK can be considered valid.

The alternative methods considered and rejected for validating the CCK scale were the peer-rating and contrasted-groups methods (Anastasi, 1968). In peer rating, the predictor is the participant’s own score and the criterion is a rating provided by a friend who has valid information about the participant. As applied to the CCK, after the participant’s impulsiveness was assessed with the CCK, the friend would take the same scale; if the two produced similar scores, the CCK could be considered valid. If two friends were involved, their scores would be aggregated and the mean correlated with the participant’s score. This method was not used because the participant’s friend could learn how the participant scored on the scale and could then purposely score similarly, biasing the results. Since this risk could jeopardize significant results, peer rating was rejected (Anastasi, 1968). In the contrasted-groups method, a group known to be high in impulsiveness, such as gamblers, should score high on an impulsivity scale, and a group known to be low in impulsiveness, such as individuals who obsessively check their bank accounts, should score low. For the CCK to be validated this way, two groups at Lake Forest College would need to be identified as known to be high or low in impulsiveness, and each group would need to score accordingly. However, there are no groups of students at Lake Forest College known to score high or low on this personality trait; therefore, this method was not chosen.

To avoid the problems that these two alternative methods present, the researchers chose concurrent validation to assess the validity of the CCK. Measuring the current impulsivity of Lake Forest College students with both a validated scale, the BIS-11, and the CCK delivers an immediate estimate of the CCK’s validity. Establishing construct validity is the ultimate goal of this project, and concurrent validation does so without the need to wait for predictive results to be returned in the future.

Method

Participants

The CCK and the BIS-11 were printed on different sides of the same paper and administered to 40 individuals. The 40 individuals constituted a convenience sample of a particular subset of the Lake Forest College student population: individuals who use the library. These individuals were all using the library to complete homework either at tables or on computers. Of the 40 administered scales, eight were discarded: five individuals did not complete the scale, two individuals failed to return their scales, and one individual marked every item the highest score possible on both scales. Thus, data for 32 individuals were used. Of these 32 individuals, 10 (31.3%) were male and 22 (68.8%) were female. The age of the participants ranged from 18 to 23, M=19.84, and SD=1.26.

Procedure

In order to assess the validity of the CCK, the scale was administered concurrently with the BIS-11, a validated scale frequently used on college student populations and last updated by Patton, Stanford, & Barratt (1995). In 2009, Stanford, Mathias, Dougherty, Lake, Anderson, & Patton published a meta-analysis and review of the BIS-11 examining the validity and influence of the BIS-11 over the course of the past fifty years. The BIS-11 was not modified in any manner for the purposes of this study. The overall impulsivity scores of the CCK and BIS-11 scales constitute the predictor variable and the criterion variable, respectively.

By evaluating the correlation between the two scores, the validity of the CCK will be determined based upon the validity coefficient.

Results

CCK Internal Consistency Reliability Analysis

The internal consistency reliability of the CCK scale was assessed prior to running tests of significance upon the data. The internal consistency reliability level was initially α = .677 for 22 items. The item-total statistics revealed a few items worthy of deletion. Particularly, the deletion of item 7, “I prefer to participate in activities rather than plan them,” would raise the alpha to .700, so it was removed. This item was likely tapping a different construct such as extraversion, which led to its inconsistency with the other items. Likewise, items 22, “I like puzzles,” 19, “I like to surprise my friends,” 2, “I enjoy thrills, such as riding a roller-coaster,” and 24, “I get angry easily” were also deleted. Again, these items likely measured constructs weakly related to impulsivity, so they were not consistent with the other items. This process resulted in a reliability statistic, α = .764, that exceeded the cut-off of .70 for internal consistency reliability. Thus, the CCK was an internally consistent scale with the removal of items 7, 22, 19, 2, and 24.
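The item-total statistics described above follow the standard Cronbach’s alpha computation; the sketch below shows how alpha and “alpha if item deleted” could be reproduced, assuming a hypothetical table of item responses (item names and values are placeholders, not the actual CCK data).

```python
# Sketch of Cronbach's alpha and "alpha if item deleted".
# The data below are hypothetical placeholders, not the CCK responses.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Recompute alpha with each item dropped in turn; items whose removal raises alpha are deletion candidates."""
    return pd.Series({col: cronbach_alpha(items.drop(columns=col)) for col in items.columns})

data = pd.DataFrame({
    "item_a": [3, 4, 2, 5, 3, 4],
    "item_b": [2, 4, 2, 4, 3, 5],
    "item_c": [3, 5, 1, 4, 2, 4],
    "item_d": [5, 1, 4, 2, 5, 1],   # an inconsistent item; deleting it may raise alpha
})

print(round(cronbach_alpha(data), 3))
print(alpha_if_item_deleted(data).round(3))
```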

Descriptive Statistics

The mean score on the CCK was 2.71, the median score was 2.61, the scores ranged from 2.00 to 4.18 on a scale with a possible range of 1.00 to 5.00, and the standard deviation was 0.462. The distribution is positively skewed and leptokurtic, which suggests that Lake Forest College students tend to score low on impulsivity and that their scores cluster closely together (see Figure 1).
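For reference, these descriptive statistics, including the skewness and kurtosis used to characterize the distribution, could be computed as in the following sketch; the scores are simulated placeholders, not the study data.

```python
# Sketch of the descriptive statistics; the scores are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cck_scores = np.clip(rng.normal(loc=2.7, scale=0.45, size=32), 1.0, 5.0)

print("mean    :", round(float(cck_scores.mean()), 2))
print("median  :", round(float(np.median(cck_scores)), 2))
print("sd      :", round(float(cck_scores.std(ddof=1)), 3))
print("skew    :", round(float(stats.skew(cck_scores)), 2))      # > 0 indicates positive skew
print("kurtosis:", round(float(stats.kurtosis(cck_scores)), 2))  # excess kurtosis > 0 indicates a leptokurtic shape
```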

Figure 2 shows the average scores on the CCK for males and females. The graph indicates that males tend to be more impulsive (M = 2.84, SD = 0.54) than females (M = 2.65, SD = 0.42). An independent-samples t-test was conducted to assess the significance of the gender difference in impulsivity. Levene’s test for equality of variances was not significant, p = .650, so equal variances were assumed. The t-test revealed no significant difference between the two genders, t(28) = 1.070, p = 0.294, 95% CI [-0.179, 0.557] (see Table 1). Females are not significantly different from males in impulsivity on average.
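The gender comparison reported above, Levene’s test followed by an independent-samples t-test, could be run as in this sketch; the score arrays are placeholders, not the study data.

```python
# Sketch of the gender comparison: Levene's test, then an independent-samples t-test.
# The arrays are hypothetical placeholders, not the study data.
import numpy as np
from scipy import stats

male_scores = np.array([2.1, 2.9, 3.4, 2.6, 3.0, 2.8, 2.5, 3.1, 2.4, 3.6])
female_scores = np.array([2.3, 2.7, 2.5, 2.9, 2.2, 2.6, 3.0, 2.4, 2.8, 2.5, 2.7, 2.6])

lev_stat, lev_p = stats.levene(male_scores, female_scores)
equal_var = lev_p > 0.05   # a non-significant Levene's test supports assuming equal variances

t_stat, t_p = stats.ttest_ind(male_scores, female_scores, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f}, t = {t_stat:.3f}, p = {t_p:.3f}")
```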

Figure 3 illustrates the relationship between the age of the participants and their average scores on the CCK scale. The scatterplot indicates a lack of a relationship between the two variables. A correlation analysis between the age and score of participants was conducted to determine the nature of the relationship between age and impulsivity. The analysis revealed a small, positive, but non-significant relationship, r(31)=.052, p=0.787. Impulsivity is not significantly related to age, see Table 2.

BIS-11 Internal Consistency Reliability Analysis

Prior to conducting an analysis of the validity of the CCK, the BIS-11 was assessed for internal consistency reliability. The initial analysis resulted in α = .786. Five items, 21, “I change residencies,” 15, “I like to think about complex problems,” 16, “I change jobs,” 23, “I can only think about one thing at a time,” and 4, “I’m happy-go-lucky,” were deleted sequentially. These items address behaviors commonly practiced by Lake Forest College students that are not necessarily related to impulsivity. For instance, students who live on campus regularly change residences between years or between semesters because of roommate issues or better housing lottery numbers. Similarly, students frequently work multiple jobs over the course of their college careers. While the BIS-11 is a validated scale, some of its items may not be valid for the current population. Thus, even though items were removed for the purposes of this analysis, these items should not be excluded from future studies. The final analysis resulted in α = .841, which exceeds the cut-off for internal consistency reliability. Thus, the BIS-11 was an internally consistent scale in the Lake Forest College student population.

 

Figure 4: Relationship between CCK and BIS-11

 



Validation Analysis

Figure 4 shows the relationship between the CCK and BIS-11 scales; the raw scores on each scale were rescaled to be out of five for the CCK and out of four for the BIS-11. The relationship between the two scores is clearly strong and positive. To assess the validity of the CCK, a correlation analysis between the overall scores on the CCK and the BIS-11 was conducted. The analysis revealed a strong correlation between the two, r(30) = .826, p = 6.07E-8 (see Table 3 and Figure 4). The relationship between the CCK and the BIS-11 was significant; thus, there is considerable evidence for the validity of the CCK.
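The validity coefficient reported here is a Pearson correlation between each participant’s overall CCK score and BIS-11 score; a minimal sketch of that computation follows, using placeholder arrays rather than the study data.

```python
# Sketch of the validation analysis: the validity coefficient is the Pearson
# correlation between CCK and BIS-11 scores. Arrays are placeholders, not the study data.
import numpy as np
from scipy import stats

cck = np.array([2.4, 2.6, 2.1, 3.0, 2.8, 3.4, 2.5, 2.9, 2.2, 2.7])
bis11 = np.array([1.9, 2.2, 1.8, 2.6, 2.3, 2.9, 2.0, 2.5, 1.8, 2.2])

r, p = stats.pearsonr(cck, bis11)
print(f"validity coefficient r = {r:.3f}, p = {p:.2e}")
```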

Discussion

Summary

The CCK was given to college students in the Lake Forest College library and was found to be internally consistent. No significant gender differences in impulsiveness emerged, and there was no significant relationship between age and impulsivity. Because the data were positively skewed, the participants in our study appear, on the whole, not to be very impulsive. Because scores were distributed similarly on both administered scales, a strong correlation between them seemed plausible, and this was confirmed by a correlational analysis, which found a remarkably strong correlation between the CCK and the BIS-11. This strong correlation provides evidence that, when the two scales are given at the same time, the CCK is indeed valid.

Factors that may spuriously increase the validation coefficient

While our validation coefficient is remarkably strong, a number of factors can either inflate or deflate it, including methodology, response styles, criterion contamination, and restriction of range. One factor that may have spuriously increased the validation coefficient is response style. To counter response styles, we used distractor items and reverse keyed some of the items on the CCK. The college students might have shown a social desirability bias by answering the questions on the two scales according to how they thought the experimenters would want them to answer, rather than according to their honest opinions; even without knowing what kind of personality scale we were administering, a participant could still rate each item based on how he or she thought we would want it answered. Students might also have answered every question with the highest or lowest score possible, choosing “Extremely Accurate” or “Extremely Inaccurate” for every item on the CCK, or “Rarely/Never” or “Almost Always/Always” for every item on the BIS-11. The distractor items kept participants from guessing the goal of our study, thereby reducing biased answers; moreover, they allowed us to identify whether a participant answered only the impulsivity items very high or low, or answered every question very high or low. The reverse-keyed questions helped determine whether a participant read each question and still answered very high or low, or whether the impulsivity items received varied scores.

Another factor that can increase the validation coefficient is criterion contamination, in which a score on the criterion measure is influenced by knowledge of the score on the predictor. The peer-rating method is especially vulnerable to criterion contamination: if the peer, serving as the criterion, gets any hint of the participant’s score, that knowledge could influence the peer’s ratings and therefore inflate the correlation between the scores. Since we chose concurrent validation, criterion contamination was not necessarily a problem. However, we cannot entirely rule out the possibility, because participants completed the two scales in close proximity to one another and could have discussed their responses. Even if this occurred, peer rating is far more prone to this limitation.

Factors that may spuriously decrease the validation coefficient

We will now assess three components of the methodology that may have spuriously decreased the validation coefficient: the sample size, the location, and the lack of counterbalancing of the scales. We administered scales to 40 participants and used data from 32 of them. The correlation coefficient could have decreased because we did not collect a larger sample of college students, which would have been more representative of Lake Forest College students. Our location could also have been a major limitation: we only collected data from students working in the library, which could limit our ability to conclude that the CCK is valid. If the two scales had been distributed in other parts of the campus, such as at a college party, the correlation between the scales might have decreased. Not counterbalancing the order of the two scales is another limitation of our methodology. The two scales were printed double-sided on one piece of paper, and each student completed the CCK before beginning the BIS-11. A participant might grow tired or bored after completing one scale, so the second scale might not receive the same amount of attention or thought. Although highly unlikely in our study, the correlation coefficient could decrease because every participant took the CCK first.

Restriction of range is another factor that could decrease the validation coefficient; it occurs when the sample’s scores on the variable, here impulsivity, are too similar (Mitchell & Jolley, 2010, p. 244). As Figure 4 shows, there is a high-end restriction of range in our study: most individuals’ average scores fell between 2.0 and 3.5 on the CCK and between 1.5 and 3.0 on the BIS-11. Aside from one notably impulsive individual, our range includes no average scores above 3.5 on the CCK or above 3.0 on the BIS-11. Despite this high-end restriction of range, the correlation between the two scales remains high, with a correlation coefficient of .826.
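To make the concept concrete, the following sketch simulates how restricting the observed range on one variable shrinks a correlation; the data are simulated and are not meant to reproduce the study’s values.

```python
# Sketch illustrating restriction of range with simulated data: keeping only the
# cases in a narrow band on one variable shrinks the observed correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_score = rng.normal(size=2000)
x = true_score + rng.normal(scale=0.5, size=2000)   # e.g., scores on one impulsivity scale
y = true_score + rng.normal(scale=0.5, size=2000)   # e.g., scores on a second impulsivity scale

full_r, _ = stats.pearsonr(x, y)

mask = x < np.percentile(x, 60)                     # keep only the lower-scoring 60% of cases
restricted_r, _ = stats.pearsonr(x[mask], y[mask])

print(f"full-range r = {full_r:.2f}, restricted-range r = {restricted_r:.2f}")
```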

General Conclusions

Taking all of these factors into consideration, we conclude that our scale likely remains valid relative to the BIS-11, given the steps we took to protect the validation coefficient. Criterion contamination likely did not occur, because we can generally rule out the possibility of participants discussing their scores on the two scales; thus, the validation coefficient was not spuriously inflated. The high-end restriction of range did not hurt the validation coefficient, which is still .826. The three methodological components (sample size, location, and counterbalancing) could have either strengthened or weakened the relationship between the CCK and BIS-11 scales, but given the analyses in the previous two sections, we conclude that they did not greatly inflate or deflate that relationship. Essentially, the validation coefficient remained high despite any factor that could decrease it, and methodological safeguards prevented it from being spuriously increased; thus, the CCK is found to be valid in this study, although further research is required to establish the scale’s convergent, discriminant, and predictive validity.

Ideal Study

An ideal future study would focus on addressing the limitations of the original study and would include other forms of validation. To address the concern about location and the possible impact of restriction of range, data would be collected from a variety of locations on campus: the library, the cafeteria, dorm rooms, the mailroom, the theatre, the gym, classrooms, and the student center. This expanded set of locations would provide a much larger sample of the Lake Forest College population. Although the high-end restriction of range does not greatly decrease the validation coefficient, a larger sample could help eliminate any restriction-of-range problem in the current study and allow us to generalize the results to the Lake Forest College student body.

To further develop the validity of the CCK, it would be beneficial to use additional criterion-related validity techniques, such as distributing multiple scales, predictive validation, peer ratings, and contrasted groups. The multiple scales might include both the BIS-11 and the Eysenck Impulsiveness Questionnaire (I7) (Claes et al., 2000). We could distribute the CCK at the beginning of the school year and then distribute the BIS-11 and I7 to the same students at the end of the school year, or we could correlate average scores on the CCK with behaviors known to reflect impulsivity later in life. If the students obtained similar scores at the end of the year, or exhibited the corresponding impulsive behaviors, we could conclude that the CCK predicts either impulsive scores or impulsive behavior. To use peer ratings, we would contact one hundred students from each class year. After receiving the phone numbers of two friends and permission to meet with them, we would distribute the CCK to the participants and then to their friends. Each friend would take the scale in a separate study room to ensure that the participant did not influence the friends’ scores and that the friends did not influence one another. Peer ratings would eliminate problems with self-report or social desirability on the part of the participant. The final criterion-related validity technique we would use is contrasted groups. This technique could be useful if we compared the CCK with the BIS-11 among major organizations or groups at the school, such as sororities, fraternities, sports teams, academic majors, and other clubs on campus. For instance, if a fraternity known to be impulsive scored high on the CCK while honors students scored low, we could conclude that the scale detects differences in impulsivity between the two groups. Overall, the addition of other criterion-related validity techniques could further establish the validity of our scale.

References

Anastasi, A. (1954). Psychological testing (3rd ed.). Oxford England: Macmillan.

Claes, L. L., Vertommen, H. H., & Braspenning, N. N. (2000). Psychometric properties of the Dickman Impulsivity Inventory. Personality and Individual Differences, 29(1), 27-35. doi:10.1016/S0191-8869(99)00172-5

Hashimoto, H., & Shiomi, K. (2002). The structure of empathy in Japanese adolescents: Construction and examination of an empathy scale. Social Behavior and Personality, 30(6), 593-602. doi:10.2224/sbp.2002.30.6.593

Kirby, K. N., & Finch, J. C. (2010). The hierarchical structure of self-reported impulsivity. Personality and Individual Differences, 48(6), 704-713. doi:10.1016/j.paid.2010.01.019

Mitchell, M. L., & Jolley, J. M. (2010). Research design explained (7th ed.). Wadsworth Cengage Learning. (ISBN: 978-0-495-60221-7)

Patton, J. H., Stanford, M. S., & Barratt, E. S. (1995). Factor structure of the Barratt Impulsiveness Scale. Journal of Clinical Psychology, 51(6), 768-774. doi:10.1002/1097-4679(199511)51:6<768::AID-JCLP2270510607>3.0.CO;2-1

Reynolds, B., Ortengren, A., Richards, J. B., & de Wit, H. (2006). Dimensions of impulsive behavior: Personality and behavioral measures. Personality and Individual Differences, 40(2), 305-315. doi:10.1016/j.paid.2005.03.024

Stanford, M. S., Mathias, C. W., Dougherty, D. M., Lake, S. L., Anderson, N. E., & Patton, J. H. (2009). Fifty years of the Barratt Impulsiveness Scale: An update and review. Personality and Individual Differences, 47(5), 385-395. doi:10.1016/j.paid.2009.04.008

Whiteside, S. P., & Lynam, D. R. (2001). The Five Factor Model and impulsivity: Using a structural model of personality to understand impulsivity. Personality and Individual Differences, 30(4), 669-689. doi:10.1016/S0191-8869(00)00064-7

Appendix

Thank you for taking the time to complete this scale.  Please be aware that completion of this scale is entirely voluntary, meaning you may decline to participate or stop at any time.  All responses will be kept confidential.

Gender (circle one):  M       F Age:  ___

Please indicate on the scale provided below to what extent you agree or disagree with the following statements.

[Image: CCK scale items]
