Several posts ago I wrote about a large settlement that FedEx entered into. Part of the case involved a challenge to their promotional exam.
In that post I identified the test in question as PSI's Basic Skills Test. This was incorrect. According to representatives from PSI, a testifying expert in the case confirmed that the test being used was not from PSI. In fact, no PSI product has ever been successfully challenged in a legal proceeding.
I sincerely apologize for the error. PSI is a reputable organization that offers high-quality instruments. In fact, they are one of the few companies whose products I routinely recommend to customers.
Feel free to comment with any questions or concerns.
Monday, April 30, 2007
Wednesday, April 25, 2007
Many people use MySpace, Facebook, and other social networking sites to create personalized web pages that often include information we might call part of a "digital first impression"--music they like, books they read, TV shows they watch, etc.--the types of things that come up in conversation and that we use to judge someone's similarity to us and what type of person they are.
From a recruiting and assessment perspective, the most obvious use of this information is to identify potential candidates and gather additional information about applicants to judge their qualifications and fit. This is already happening and these "digital selves" may increase in importance with the introduction of new web services.
Aside from legal concerns, an important issue is the accuracy of these judgments. Does the fact that someone has a link to classical music on their page mean anything? How about whether or not they provide contact information?
Much of the work done on how physical and digital displays relate to personality has been done by Dr. Sam Gosling at the University of Texas and Dr. Peter Rentfrow at the University of Cambridge. In addition, several studies have been done in both applied and university settings. Most of these studies have used the Big 5 as a way to describe adult personality.
Overall, research suggests people are fairly accurate in making personality judgments about people based on their web pages--this is particularly true when judging openness to experience and extraversion. Specifically, here's what different aspects of a web page have been linked to:
- Including information about one's hometown/region: Higher on agreeableness and conscientiousness, lower on neuroticism
- Expressing a lot of personal beliefs/emotions: Higher on neuroticism and openness to experience, lower on agreeableness and conscientiousness
- Having a blog: Higher on extraversion and openness to experience
- Seeking explicit feedback (e.g., comments): Lower on neuroticism
- Having links to Internet/computer sites: Lower on extraversion
- Having links to visual arts, having a music lyrics category (strongest relationship): Higher on openness to experience
- Having a webcam: Lower on agreeableness
- Linking to one's resume/vitae, posting family pictures, or having a visitor counter: Higher on conscientiousness
What about specific aspects of a person, such as personal tastes? Here's what research has uncovered about specific preferences:
Much of the work done on how personal tastes reflect personality has focused on musical preferences. Here's what various musical tastes tend to correlate with (you can take this assessment here):
- Reflective and complex (e.g., jazz, classical): Higher on openness to experience (strongest overall relationship), higher verbal ability
- Intense and rebellious (e.g., rock, alternative): Higher on openness to experience, higher verbal ability
- Upbeat and conventional (e.g., country, pop): Higher on extraversion, agreeableness, and conscientiousness; lower verbal ability
- Energetic and rhythmic (e.g., hip-hop, electronica): Higher on extraversion
Here's a sample of how people who liked certain books, movies, and social activities described themselves:
- Fiction & Literature: Creative
- Business: Attractive, successful
- Science Fiction: Intelligent, weird
- Fantasy: Weird
- Cooking: Lovable
- Adventure: Fun, lovable
- Comedy: Funny
- Independent: Creative, unique
- Science Fiction: Kind, weird
- Clubbing: Attractive, fun, socially adaptable
- Bowling: Funny
- Computer Gaming: Intelligent, weird
The findings are intriguing, but we desperately need more research in this area, particularly looking at aspects of the observer that influence accuracy and research into additional forms of information linkages, such as tags. Hopefully over time we'll uncover more about how well (or poorly) the online self matches up with who you meet in person.
"e-Perceptions: Personality impressions based on personal websites"
"The do re mi's of everyday life: The structure and personality correlates of music preferences"
"Message in a ballad: The role of music preferences in interpersonal perception"
"Personality impressions based on Facebook profiles"
"Personality in cyberspace: Personal websites as media for personality expressions and impressions"
"A social network caught in the web"
Tuesday, April 24, 2007
Good, if brief, interview with Dr. Ann Howard, Chief Scientist at DDI and a well-known name in the field of I/O psychology. Her contributions include editing The Changing Nature of Work and co-authoring the much-discussed recent DDI report, Slugging Through the War for Talent.
Dr. Howard talks about assessment centers, leadership, how she got into I/O psychology, and her marriage to Doug Bray, another leader in the field.
The interview includes two audio clips.
Monday, April 23, 2007
In a previous post I talked about some of the products showcased in the program for the 2007 SIOP Conference.
In this post I'd like to highlight some of the more interesting (to me--and to you too, I hope!) presentations being made. Because there are so many interesting presentations, I'll use several posts to cover a number of them.
Gender and letters of recommendation: Agentic and communal differences (249-8)
Here's a study that should make you think twice about those letters of recommendation you review (if you don't already). After analyzing nearly 700 letters of recommendation for psychology faculty positions, the researcher found that women in these letters tended to be described as more affectionate, warm, and kind, while men were described as more ambitious, dominant, and self-confident. In addition, letters for women contained more references to their physical appearance (insert shudder here).
Data trends in open mode, online, unsupervised cognitive ability testing (61-28)
Personality testing online (unsupervised) and paper and pencil (supervised) (103-21)
Retest effects on an unproctored Internet-based GMA test (205-29)
DFIT analysis of web-based and paper-based versions of the WPT (261-21)
These four studies all looked at online testing in some way or another. The first three provide some support for online testing; they suggest that online general mental ability (GMA) test scores are relatively stable over time and the psychometric qualities of a personality test were consistent regardless of whether the test was taken online & unproctored or in person & proctored. Before we get too excited, however, the last study found that the paper-and-pencil and online versions of the WPT were not completely identical. It also found that WPT-Q scores differed between proctored and unproctored settings. So overall, mixed support for online testing. Chances are other factors (e.g., physical environment, Internet self-efficacy) play major roles.
Fancy job titles: Effects on recruitment success (261-25)
Chief Fun Officer. Brand Evangelist. Some organizations are coming up with creative job titles in an attempt to lure candidates who may find "Marketing Executive" a tad dull. But does it work? In this study, Dr. Klaus Templer presented nearly 400 marketing students with four hypothetical job ads using various titles, including fancy (e.g., Global Brand Insights Manager) as well as traditional (e.g., Marketing Officer). Results? Attitude toward the job was significantly higher with the fancier title, as was the extent to which the job was recommended to a friend. Why? Templer hypothesizes that fancier job titles lend the job more prestige, making it more attractive. Interesting follow-up question I have: Does the response vary between high-potentials and low-potentials? Also, we should keep in mind that surveys suggest job titles may have less of an impact on retention.
More conference goodness in upcoming posts!
Friday, April 20, 2007
Yes, I said Spock. No, not as in Star Trek.
I posted about Spock last year--they are a "people search" website that will allow users to search for, say, 'Engineer', and instead of getting links to Wikipedia or BLS, will get actual people who are engineers. Spock has, according to its website, indexed over 100 million people.
What's the big deal? Well, imagine how powerful this could be for both recruiting AND assessment. Looking for a Unix developer in San Antonio, TX? Plug it in and see who comes up! Interested in hiring Sally Garcia but need more information about her? Plug in her name and see what comes up!
To address concerns about accuracy (see this series from George Lenard), Spock also incorporates the "wise crowd" idea, a la eBay or Wikipedia.
This has the potential to be a more objective, and much more comprehensive, version of MySpace or any other social networking site. More like a universal social catalog.
So what's new? A couple things. First, they gave a very well-received demo at the Web 2.0 Expo in San Francisco this week. Second, they announced their invitation-only beta (more on that soon I hope).
So what's it look like? If you do nothing else, watch this screencast. There's so much here it's hard to put in a single post--widgets, search bars, the whole kit 'n caboodle. I guarantee it will raise your eyebrows.
How soon till it's up and running? Two months, according to reports. Keep an eye on this one, folks!!
Thursday, April 19, 2007
Ever wonder how much it costs an organization to defend itself from an employment discrimination lawsuit? Check out this graph from a recent Business Week article.
Here are the numbers:
- $10,000 if the suit is settled
- $100,000 if it's resolved through summary judgment or other pre-trial ruling
- $175,000 if it goes to trial
- $250,000 if the trial is won by the plaintiff(s)
- $300,000 if the plaintiff victory survives appeal
The numbers vary depending on the type of case, with age and disability suits often the most costly. A good source of this type of data is Jury Verdict Research. The EEOC also has detailed information on their website about the amount they've obtained through both litigation and other avenues.
The good news for employers? Very few suits go to trial--only 6%. The vast majority are settled.
Of course this doesn't take into consideration the impact on PR, recruiting, time wasted, etc. etc...
Thanks to Michael Harris for pointing out the Business Week article.
Wednesday, April 18, 2007
Can job applicants fake personality tests? This topic is one of the hottest in assessment research these days (in fact, there's a whole book about it), and there's quite a bit of research indicating these tests can be "faked", meaning people can figure out how they should answer given the job. But most research compares applicants to incumbents or asks the same subjects to be honest then "fake good" (with some exceptions)--this may or may not be the best way to investigate "faking."
Now, one of the editors of the above-mentioned book, Richard L. Griffith, has co-authored a new study titled "Do applicants fake? An examination of the frequency of applicant faking behavior."
Why is this study different from previous ones? Because they used a within-subjects design, meaning they compared scores of people taking exams as applicants to scores of the same individuals at a later point in time.
What did they find? That 30-50% of the folks elevated their scores when applying. In addition, this elevation resulted in significant changes in rank-ordering, which impacted hiring decisions.
The $400 million question that arises out of this is: Does it matter? I've talked a little bit about this before, but the major research done in this area (such as this and this) indicates it doesn't make much difference. Personality tests can still be very effective in predicting job performance--sometimes impressively so. Nevertheless, many people are still highly skeptical of personality tests. This may be one of those situations where some decision-makers accept the risks inherent in using the tool because of its proven value, while others simply don't want to go there. In reality, the same could be said about practically any assessment method.
My advice? Look to the job analysis. If personality factors are indicated as a major influence on job performance, don't ignore personality tests as a possible tool. There's too much riding on each hiring decision. Still interested, but not comfortable? Try giving a personality test as a research instrument (i.e., don't consider it for hiring purposes) and gather post-hire performance data--you might be surprised at how accurately it predicts success.
Monday, April 16, 2007
Please note: This is an updated post that corrects an inaccurate fact I had posted, namely that the test in question was PSI's Basic Skills Test. This was not true. I relied on information from another blog, which turns out to have been incorrect. I sincerely apologize about the error on my part. No harm was intended.
FedEx Corp. has agreed to pay $54.8 million to settle a class action employment discrimination claim (Satchell v. FedEx Express) filed on behalf of black and Hispanic workers who claimed systematic discrimination in performance evaluations, promotion, compensation, and discipline throughout the Western Region. This is one of the larger settlements for this type of claim (the grand prize of $508 million goes to Hartman v. Powell).
The part of the suit involving a test claimed that it resulted in disparate impact against class members and that FedEx failed to produce the validation evidence to back it up. The settlement requires that FedEx discontinue the use of the test for courier, ramp transport driver, and customer service positions. The suit claimed that 86% of white employees had passed the test, compared to 47% of black employees and 62% of Latino employees--ratios well below the "4/5ths rule" threshold.
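For those unfamiliar with the mechanics, the 4/5ths (80%) rule compares each group's selection rate to the rate of the highest-scoring group; a ratio under 0.80 is treated as evidence of adverse impact. Here's a quick sketch using the pass rates reported in the suit:

```python
# Four-fifths (80%) rule check using the pass rates reported in the suit.
pass_rates = {"White": 0.86, "Black": 0.47, "Latino": 0.62}

highest = max(pass_rates.values())  # the most favorable group's rate

for group, rate in pass_rates.items():
    ratio = rate / highest
    flag = "ADVERSE IMPACT" if ratio < 0.80 else "ok"
    print(f"{group}: selection ratio = {ratio:.2f} ({flag})")
```

Running this shows ratios of roughly 0.55 (black) and 0.72 (Latino) against the white pass rate--both comfortably below the 0.80 threshold, which is why the plaintiffs' numbers were so damaging.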
Unfortunately at this point we don't have any details about the validation efforts (or lack thereof) that FedEx Express took to ensure that the test was being used appropriately. What this case does emphasize is the need for organizations to make sure they can stand behind their selection practices--particularly ones that result in adverse impact.
Some questions to ponder:
Do you track your applicant flow/adverse impact statistics (basic example here)?
If so, what do you do with the results?
If not, well...take this as a warning.
- Motion for class certification
- Plaintiff's reply memorandum in support of class certification
- Order approving class certification
- Notice to potential class members
Media articles about this case can be found (among other places) here and here.
George Lenard has discussed this case here, here, and here. Michael Harris discusses it here.
Sunday, April 15, 2007
The Spring 2007 issue of Public Personnel Management is here (IPMA-HR membership required for full access), and it's got some great stuff inside. Let's take a look at the two articles specifically focused on recruitment/assessment...
First, "The validity of assessment center ratings and 16PF personality trait scores in police Sergeant promotions: A case of incremental validity" by Love & DeArmond. As the title suggests, what the authors looked at here was the ability of job-related personality test scores (measured by the 16PF) to add incremental validity above and beyond assessment center (AC) scores in predicting performance as a police Sergeant. (Not familiar with ACs? Check out this great intro by Bill Waldron and Rich Joines). Anyhoo, here's the rundown:
Sample: 54 candidates, 48 male, all Caucasian, from small and medium-size agencies
AC: Five work sample exercises
Previous hurdle: All had passed a written job knowledge exam
Criterion: Supervisor ratings
Results: AC performance dimensions significantly predicted performance ratings (R-square of .16, p<.01) and 16PF scores accounted for additional unique variance (change in R-square of .08, p<.05). When entered first in the regression, however, personality did NOT significantly predict performance ratings. Hmmmm....
Concerns I have: The reliabilities for the five 16PF factors investigated were poor (alphas ranged from .04 to .55), and the sample was small and not particularly diverse (a fact the authors acknowledge as a limitation). And, um...I'm not particularly impressed with that incremental validity, although it's pretty par for the course.
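For readers who haven't run an incremental validity analysis, the basic move is a hierarchical regression: fit the criterion on the first predictor block, then add the second block and look at the change in R-squared. A minimal sketch with entirely synthetic data (the composites, effect sizes, and variable names here are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 54  # same n as the study, but the data below are simulated

ac = rng.normal(size=(n, 1))    # assessment center composite (synthetic)
pers = rng.normal(size=(n, 1))  # personality composite, e.g., 16PF (synthetic)
perf = 0.4 * ac + 0.3 * pers + rng.normal(size=(n, 1))  # supervisor ratings

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_ac = r_squared(ac, perf)                             # step 1: AC only
r2_full = r_squared(np.column_stack([ac, pers]), perf)  # step 2: AC + personality
print(f"Incremental validity (delta R^2) = {r2_full - r2_ac:.3f}")
```

The delta R-squared is what the authors report as .08; whether that's worth the cost of administering a second instrument is the practical question.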
Second, in "Legal issues for HR professionals: Reference checking/background investigations" William Woska provides a great overview of why conducting reference and background checks is so important, what an employer's obligations are, and the importance of focusing on job-related factors. Aside from simply being plain ol' best practice, avoiding a negligent hiring (tort) lawsuit is a great reason to always do reference checks. Woska also covers avoiding violating an applicant's privacy and/or the Fair Credit Reporting Act, and the nature of waivers. This is one of those "save and put in your files" articles.
By the way, there's some other great stuff in here, including legal analyses of constructive discharge and affirmative action, a look at turnover in jails, and an essay on (among other things) Civil Service reform in Florida.
Friday, April 13, 2007
A little Friday fun for ya. Just finished a trip up and down the west coast on I-5, and happened upon the little town of Talent, Oregon.
Saw some interesting road signs, snapped a couple, thought you might find them amusing (I tried to add these to my blog layout and failed miserably!):
Have a great weekend!
Deborah Whetzel and George Wheaton's 1997 book, Applied measurement methods in industrial psychology was extremely dog-ear worthy, with wonderfully practical material by a variety of well-known authors on topics such as job analysis, interviews, low-fidelity simulations, and rating job performance.
Their updated volume, Applied measurement: Industrial psychology in human resource management, promises to be equally worthy of a place in your library. Dr. Whetzel presented an overview at last year's IPMAAC conference, and the book is due out tomorrow.
So what's inside? More great material by experts in our field. Here's a sample of content areas covered:
- Job analysis
- Measurement plans
- Cognitive ability
- Training & experience measures (new to this volume)
- Background data (biodata)
- Situational judgment tests
- Assessment centers (new)
- Performance measurement
- Test validation
- Developing legally defensible content valid selection procedures (new)
Price? All that for a very reasonable $39.95. I think I might have to pick one up!
Monday, April 09, 2007
I've said before that I think the "holy grail" of selection will be matching quality measures of candidate values, interests, and competencies with those required for a position in a particular organization. (And I'm not alone here)
While job search sites and applicant tracking software provide generic matching capability--say, letting candidates or employers search by full-time/part-time, geographic preference, etc.--I hadn't seen anything more impressive, even though we know quite a bit about person-job and person-organization fit and its importance. Until now.
Jobfox (previously Market10), developed by CareerBuilder founder and former CEO Rob McGovern, is taking things a step further by allowing job seekers and employers to find each other using a variety of "dimensions" (10, to be specific), including some of the aforementioned:
- Previous employment
- Skills (tied to particular employment)
- Desired salary
- Willingness to travel
Pretty basic stuff, right? Well here's where things get interesting. Look at the additional pieces of information a candidate can enter:
- Growth stage of company you want to work for (Start-up/Growing/Established)
- Dress code preference (Business Professional/Business Casual/Casual)
- Size of employer preferred (<500 / >500)
- Location you work best from (home, work, part of each)
- Employer type desired (for profit, not for profit, government)
- Benefits desired (yes this seems basic but I don't recall seeing this before)
The other thing that's very different from traditional job boards is that the job seeker doesn't SEARCH for jobs. Instead, results are generated based on the data the individual enters/attaches and how well it matches specific opportunities.
From the employer's perspective, they get (for a fee) candidates whose profile best matches position/organization needs.
Less clutter to sift through for both sides--I love it. I see this as a great step toward eventually doing a much better job of matching candidates and employers.
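Jobfox hasn't published its matching algorithm, so this is purely my guess at the flavor of the approach: score each opening against a seeker's stated dimensions and surface the best matches, rather than making the seeker run searches. All names and dimensions below are made up for illustration:

```python
# Hypothetical sketch of dimension-based matching; Jobfox's actual
# algorithm is not public, and these dimensions are illustrative.
seeker = {"dress_code": "Casual", "employer_type": "for profit",
          "growth_stage": "Start-up", "location": "home"}

openings = [
    {"title": "Unix Developer", "dress_code": "Casual",
     "employer_type": "for profit", "growth_stage": "Start-up",
     "location": "home"},
    {"title": "Marketing Officer", "dress_code": "Business Professional",
     "employer_type": "for profit", "growth_stage": "Established",
     "location": "work"},
]

def match_score(seeker, opening):
    """Fraction of the seeker's dimensions the opening satisfies."""
    dims = [k for k in seeker if k in opening]
    hits = sum(seeker[k] == opening[k] for k in dims)
    return hits / len(dims)

ranked = sorted(openings, key=lambda o: match_score(seeker, o), reverse=True)
for o in ranked:
    print(o["title"], match_score(seeker, o))
```

A real system would presumably weight dimensions (salary is not negotiable the way dress code might be), but even this equal-weight version shows why "results come to you" beats keyword search for fit-based matching.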
Results are looking positive with various employers signed up. Unfortunately right now they only serve the Atlanta and Washington, D.C. markets but they will be expanding to the San Francisco Bay Area in May, and Boston by June.
Downside? This is still active job seekers and employers. The next big challenge will be to not only have a database of valid individual assessment results (and accurate job descriptions) but to have this information for passive as well as active job seekers. And not just current job openings, but information for all employers. What would this look like? I'm not sure. Maybe an outgrowth of a popular social networking site, such as MySpace, mixed with organizational data from, say, Vault? All I know is it will be fun to watch!
Employer brochure available here. Other information, including press releases, here.
Wednesday, April 04, 2007
Getting the SIOP Conference program in the mail is kind of like getting the toy catalog as a kid. At least for me it is. I look forward to peering inside at all the hundreds of different presentations and topics. I won't be going this year but I get a heck of a lot out of simply reviewing nearly 200 pages of glossy-covered goodness.
So it was with great interest that I cracked open this year's conference program to see what was inside.
There is SO much content at a SIOP conference (too much, some say) that I won't even attempt to cover it all--heck, there are over 30 items (presentations, poster sessions, panel discussions, etc.) listed under "Personality" alone--but I'm going to hit some high points. Mostly stuff I find interesting.
The first thing I'm going to talk about isn't conference content at all--it's the ads in the program. Specifically, the test products mentioned in the ads. In addition to the program being a great overview of what's going on in the world of I/O research, it's a good way to find out about new tests and/or consulting firms. I won't be covering some, like the 16PF or HPI, because, well, they've been around for so long that most folks know about 'em (or should). Instead my eyes were drawn to the new kids on the block--at least new to me.
First up, the personality inventory for integrity assessment (PIA) from S & F Personalpsychologie. The ad states, "With...PIA, you can identify honest employees and decrease bullying, theft, and absenteeism." The website claims PIA is "the first ever German integrity test." What else do we know about it? Not a lot. The test presents candidates with various questions on "behavior conforming with contractual agreements, on reliability, and on their willingness to take risks." Apparently the test has been in use since 2000, and they are currently conducting a validation study comparing answers given by inmates with those of non-detained individuals.
Next, the Sales Leader Navigator from Wilson Learning. This is a 360-degree feedback instrument designed to be used for promotions in sales positions. The tool is tied to 76 competencies and "character elements" and can be tailored to specific organizational needs. The leadership roles focused on include visionary, tactician, facilitator, and contributor. Ratees can request feedback from raters via an online system. Other details about the assessment (say, item/scale type, reliability & validity data) will hopefully be forthcoming. For more general content info, see their "point-of-view whitepaper."
Last, two products from Saville Consulting--the Wave Professional Styles, designed to be used at the manager/director level, and a shorter version, Wave Focus, which apparently can be used with a wider applicant population. According to the website, the Wave "measures motivation, talent and preferred culture." It also apparently maps to the Big Five personality factors. The assessment presents a series of six statements, e.g., "I am an optimist," and uses a 9-point scale ranging from "Very Strongly Disagree" to "Very Strongly Agree." If two statements receive the same rating, they may be presented again, forcing the candidate to differentiate between them. More information regarding development can be found here, including some reliability and validity information (without a description of the samples, unfortunately).
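Based purely on that description (I haven't seen the instrument itself), the rate-then-rank-ties flow might look something like this sketch. The statements and administration logic below are my invention, not Saville's:

```python
# Illustrative sketch of a rate-then-rank-ties administration flow.
# Statements and logic are hypothetical, not Saville Consulting's.
ratings = {  # candidate's 9-point ratings of a block of statements
    "I am an optimist": 8,
    "I enjoy analyzing data": 8,
    "I like leading teams": 6,
}

def find_ties(ratings):
    """Group statements that received the same rating."""
    by_score = {}
    for stmt, score in ratings.items():
        by_score.setdefault(score, []).append(stmt)
    return {s: stmts for s, stmts in by_score.items() if len(stmts) > 1}

ties = find_ties(ratings)
# In a live administration, each tied group would be re-presented and
# the candidate forced to rank-order the statements within it.
print(ties)
```

The appeal of this design is that the normative ratings preserve absolute level while the forced comparisons recover rank-order information that ties would otherwise throw away.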
In future posts I'll cover some of the content of the presentations, including personality and cognitive ability testing. For those of you who are curious, the entire program is searchable by going to this link.
Registration is now open for the 31st annual IPMAAC Conference in St. Louis on June 10-13.
Along with the annual SIOP conference, this is THE professional event to attend to learn about innovations, best practices, and the latest research in personnel assessment.
- Dr. Wayne Cascio on Do employee behaviors matter? Some economic effects.
- Dr. Robert Hogan on What we know about leadership.
- Dr. Nancy Tippins on Unproctored testing.
- Using logic-based testing to develop powerful measures of reasoning ability.
- Oral examinations.
- Examination planning.
- Situational judgment test: Development and applications.
- Adverse impact: Pitfalls, Pollyanna, and practical advice for practitioners.
Unlike some conferences, IPMAAC is very doable--there are a lot of presentations but you can generally hit most that you're really interested in. Another benefit? It's cheap. Registration is only $345 for IPMAAC members, $445 for non-members. Still not convinced? Take a look at some of the presentations from previous annual conferences.
I'll post more about the content when the full program is published. Registration brochure is here. Highly recommended!
Tuesday, April 03, 2007
Yesterday I posted about an assessment that Monster.com offers job seekers in order to help them understand their strengths, values, etc. I mentioned that I wished that Monster offered a more valid tool that employers could use to match position requirements to job seekers.
A comment to my post yesterday pointed out that Monster offers such a tool to employers, the Monster Performance Assessment. This "online behavioral screening application" was developed in conjunction with DDI and the idea is that an invitation to the assessment is e-mailed to potential candidates and the results are stored by Monster. The tool can be built into Monster job postings and is designed to integrate with several ATSs. The tool is available for several job families, such as customer service, health care, IT, and sales.
Cost? $100 per posting--very cheap when you think about price per applicant.
Completion time? 10-15 minutes--very important given short attention spans.
Items? Looks like situational judgment--thank goodness it's not just T&E.
In terms of getting the right people, I'm generally a fan of this formula: brand management + realistic job previews + constant recruiting (especially through referrals). But if ya have a situation where you've got a heck of a lot of active candidates, this is a pretty elegant way to go, although I'd like to see a technical report. Could also use biodata or personality tests, but those are trickier. I also like that they explicitly measure face validity.
A brochure on the product is available, as is a very informative webinar (Internet Explorer only).
Monday, April 02, 2007
A friend of mine sent me a link to an "assessment" that Monster is using (through MyMonster) called JASPER--Job Assets and Strengths Profiler. According to the website:
"Based on over 60 years of research, this fun and enlightening test will uncover your job strengths and preferences and help you use them to your advantage.
- Discover your work and leadership style
- Gain confidence in your job related skills
- Enhance your ability to work with others
- Improve your resume, job search and more."
The assessment was developed by Tickle, which appears to be one of these outfits that has a number of "pop" tests of questionable validity, some of which apparently are "PhD-Certified" (whatever that means).
Anyway, it does appear some work went into the development of this test (including factor analysis of survey results, which apparently gives the test "content validity, face validity, and reliability"!), although it looks to be a perfect example of atheoretical test development. From Tickle's website:
"Led by a team of Ph.D. behavioral scientists, the research team determined which psychological and career concepts were supported by research and would also provide test takers and employers with the most accurate, useful, and in-depth feedback. The team turned to general career research as well as referenced tests such as the Self-Directed Search (SDS), the Strong Interest Inventory, the Myers-Briggs Type Indicator (MBTI), and Campbell's Interest and Career Survey (CISS). "
To their credit, they aren't claiming the assessment is the be-all and end-all:
"JASPER does not have what is called predictive validity, meaning that this test doesn't claim to predict one's future success. Rather, its purpose is to accurately assess the test taker's strengths as it relates to the work environment. "
So why am I even talking about this? Two reasons. First, Monster, one of the most trafficked job sites, is using it. Second, it does make use of a variety of different testing formats and measures, including (literally) sliding scales, picture comparisons (wonder how they would deal with accessibility issues?), and scrolling words that must be selected. Nothing earth-shattering, but it's nice to see some exploration of different methods.
Of course I HAD to take the test. Results? Apparently my type is "Mentor", which 10% of test-takers are. My leadership style is "Innovative", my work personality "Rousing", my universal skill "Communication", and my work style "Questioning." What, you thought my results would say "Crazy megalomaniac time-waster?" (Yes, this is a whole heck of a lot of different measurement concepts tossed into a single assessment. Yes, this is very Myers-Briggs-like in that you are a "type." No, reliabilities are not reported anywhere.)
One last thing: the results report is more substantial than many "pop" assessments and makes for interesting reading. This type of thing would be a lot more useful if it were actually tied to specific job requirements, which seems like something Monster could take advantage of for job matching...
Additional information includes this whitepaper ("technical manual") from Tickle and this blog post from Monster.