Malleability of Soft-Skill Competencies
Development with First-Term Enlisted Experience
Laura G. Barron and Mark R. Rose
U.S. Air Force Air Education and Training Command
Abstract
As in the labor force as a whole, military recruits typically begin their careers with deficiencies in at least some soft skills relevant for workplace success (e.g., accepting feedback, collaboration, integrity, self-awareness). While many may assume that soft skills can be developed naturally as enlistees gain experience on the job, certain soft skills may prove more resistant to change than others. The current study quantitatively compares the relative malleability of distinct soft-skill competencies in an Air Force context. Specifically, we evaluate the development of first-term airmen on 17 distinct organizationally valued soft-skill competencies based on the perceived proficiency of first-term airmen (in aggregate) at two career milestones. Estimated change in competency proficiency is based on survey ratings from 1,059 technical training instructors and 6,894 first-line supervisors on the competency proficiency of first-term airmen (a) upon completion of technical training and (b) at the end of their four-year enlistment. Results are interpreted through a framework of developmental difficulty theory previously tested only in the context of the short-term development of midcareer professionals. Implications for the prioritization of certain competencies in screening and personnel selection are discussed.
When employers cannot select employees who are proficient on all competencies needed for highly effective performance, they face difficult prioritization decisions: they must distinguish competencies that are not readily amenable to change without extensive intervention (and may therefore need to be targeted in personnel selection) from competencies that employees can more readily develop on the job as they gain experience. This prioritization may be particularly important in tight labor markets, especially for all-volunteer militaries that recruit for a fixed term of enlistment and often cannot be as selective in hiring as many private organizations.
As noted by many authors (see Campion et al., 2011; Schippmann et al., 2000), “competencies” is a broad term and can refer to any knowledge, skills, abilities, or other characteristics needed for effective job performance in a given context. U.S. military organizations similarly define the term broadly and often distinguish technical, occupationally specific competencies from competencies that are nontechnical and intended to apply across (military) occupations (see U.S. Department of Defense, 2016). The latter type of competencies, our focus in this article, has also been termed “soft skills” and has been defined by previous researchers and human resources practitioners as encompassing both interpersonal (people) skills and personal (career) attributes (Robles, 2012; see also Meeks, 2017).
Evaluations of competency change have generally provided encouraging evidence that many soft-skill competencies can be developed (e.g., Gibbons et al., 2006; Martin-Raugh et al., 2019; Mueller-Hanson et al., 2005; Straus et al., 2018). However, questions as to which soft-skill competencies are more or less amenable to development than others remain largely unanswered. With few exceptions (e.g., Gibbons et al., 2006), there has been little attempt to meaningfully classify competencies to evaluate their malleability based on the underlying structure of what is to be learned.
Relative Malleability of Distinct Soft-Skill Competencies
The few studies that have explored competency malleability have typically either proposed theories without presenting empirical evidence or presented preliminary empirical findings with significant methodological limitations. We describe three theoretical models that have been proposed (Brush & Licata, 1983; Hellervik et al., 1992; Waters, 1980) and the limited empirical evidence to date.
Waters (1980) distinguished four types of managerial skills based on the expected time required for their development and the degree of behavioral specificity: practice (short time interval, behaviorally specific), insight (short time interval, behaviorally nonspecific), context (long time interval, behaviorally specific), and wisdom (long time interval, behaviorally nonspecific). He argued that practice skills (e.g., active listening, oral presentation) are the most malleable, whereas wisdom skills (e.g., charisma, “working the hierarchy”) are the least. Insight skills, which emerge through the gradual acquisition of insight rather than from planned practice (e.g., working in groups, dealing with ambiguity), and context skills (e.g., building commitment and motivation) were believed to have intermediate levels of malleability.
Similarly, Brush and Licata (1983) proposed that skills that primarily depend on acquiring specific knowledge or following set procedures are the easiest to develop, whereas skills requiring interpersonal interaction or noncognitive elements (i.e., changes in attitudes, dispositions, or values) are more challenging to develop. They noted that it is easier to change cognitive processes (e.g., a process or technique for dealing with a complaining customer) than emotional ones (e.g., reaction to conflict).
Hellervik et al. (1992) proposed that malleability is driven by the complexity of the behavior to be learned. In this framework, behaviors that are highly complex, requiring high levels of cognitive ability, are less malleable, whereas behaviors that are less complex, requiring lower levels of cognitive ability, should be easier to learn. Hellervik et al. suggested example behaviors from Campbell’s (1990) taxonomy of job performance that might be easier (e.g., “write a grammatical sentence free of spelling errors”; p. 840) or more difficult (e.g., “prepare a scientific treatise”; p. 840) to learn.
Few studies have provided empirical evidence of the relative malleability of distinct competencies. One study identified 16 competencies consistently described as critical for managerial performance, derived from 1,095 dimensions from 65 sources (Gibbons et al., 2006). For all 16 competencies, Gibbons et al. found some evidence of malleability. However, because the studies varied considerably in method, population, length, and dimension definitions, it was impossible to directly compare the magnitude of change across competencies. More recent studies of objective competency change using competencies such as leadership (e.g., Avolio et al., 2009), communication (e.g., Barth & Lannen, 2011), teamwork (e.g., Salas et al., 2008), and other interpersonal skills (e.g., Klein, 2009) present similar challenges. These studies provide objective measures of change but lack sufficient similarity to allow for an appropriate comparison between competencies on their extent of malleability.
Other research has gathered subject-matter expert ratings of competency malleability (see Gibbons et al., 2006; Smith & Brummel, 2013), with results generally showing some alignment between these subjective ratings and studies of objective change. For example, Gibbons et al. (2006) surveyed 139 managers in several organizations on their beliefs regarding the developability of 16 competencies. Most dimensions received ratings significantly above the neutral point for perceived developability, with creativity and motivation as the only exceptions. Written communication, planning and organizing, teamwork, and information seeking received the highest development ratings, indicating that respondents believed these skills could be developed with appropriate training.
Despite the theoretical models and studies of competency malleability to date, early calls (e.g., Brush & Licata, 1983) to microanalyze competencies to better understand their malleability and how malleability affects training effectiveness have largely gone unanswered. One exception from the multisource, multirater literature is described next.
Longitudinal Development of Soft-Skill Competencies With Feedback and Experience
In contrast to research on training interventions, studies tracking longitudinal change on multisource, multirater competency assessments (“360-degree feedback” ratings) could provide a useful source of information on the relative malleability of distinct competencies. In these studies, the same limited intervention—that is, feedback on the results of a competency assessment, in some cases supplemented by workplace coaching—is potentially applicable to improvement on a very broad range of competencies. Further, all competencies are assessed directly in the same standardized manner, based on coworkers’ natural observations over an extended time period.
Even within this literature, however, studies comparing longitudinal improvement on distinct competencies have been limited. For example, at the time of a 2005 meta-analysis, 24 studies had evaluated longitudinal change in multisource, multirater competency ratings (Smither et al., 2005), but only four of those studies distinguished results by competency, and none classified competencies by their underlying type. While there was evidence that managers improved more on competencies they self-set as goals for improvement, there was no evidence of greater improvement on competencies designated by the organization as critical, competencies explicitly targeted for development, or competencies on which managers were rated lowest (Avery, 2000; Hezlett & Ronnkvist, 1996; Nemeroff & Cosentino, 1979; Quadracci, 1995).
We could identify only one study that sought to distinguish the extent of longitudinal improvement on the basis of the underlying type of soft-skill competency (Dai et al., 2010). In the study, 78 midcareer managers working within a financial services company were provided with an executive coach and incentivized to submit progress reports documenting their efforts to improve on target competencies; the managers were then rerated on the same multirater competency assessment one to two years later.
The Dai et al. (2010) analysis sought to validate a commercial “developmental difficulty index” intended to indicate how hard it is for managers to develop on 67 leadership competencies included on the multirater assessment (Lombardo & Eichinger, 1995). The index (1 = easiest to 5 = hardest) was derived from rational coding of each competency by two psychologists, Lombardo and Eichinger (1995), but had not been previously empirically validated. The index developers theorized that competencies are, by their underlying nature, more difficult to develop if they are more (i) likely to involve, engage, or trigger emotions; (ii) closely related to attitudes, values, opinions, and beliefs of the individual; (iii) closely related to intellectual abilities (“cognitive complexity”); and/or (iv) complex in terms of the sheer number of rules and processes involved. While more tautological, the index was also based on ratings of the extent to which the competency was (v) viewed as more innate—that is, closely related to predispositions or natural tendencies (“human makeup”)—and (vi) viewed as requiring more experience to develop. Lombardo and Eichinger’s index bears a close resemblance to the theoretical models described in the previous section (e.g., Brush & Licata, 1983; Hellervik et al., 1992; Waters, 1980), with each element represented in at least one of the earlier studies. Despite the limited sample size, Dai et al. (2010) reported that the average extent of longitudinal improvement on 54 managerial competencies (those that one or more study participants selected for targeted development) was negatively correlated with the developmental difficulty index (r = -.27).
Current Study
The current study contributes to the limited literature on competency malleability in two ways. First, using ratings from a large sample of U.S. Air Force trainers and supervisors, we document the extent of improvement on 17 organizationally valued competencies based on aggregate ratings of enlisted airmen (a) at the time of graduation from initial technical training (as rated by technical training instructors and the immediately gaining supervisors of new graduates) and (b) at the end of four years of military service (as rated by their supervisors). Our study extends previous research (see Dai et al., 2010) through our focus on longer-term development of entry-level recruits in diverse military occupations rather than the short-term development of (a very small sample of) midcareer managers in the private sector. Second, recognizing that no list of competencies is likely to be exhaustive, we empirically test the extent to which each of Lombardo and Eichinger’s (1995) theorized “developmental difficulty” criteria explain the relative extent of competency improvement. To do so, multiple psychologists rated each competency on the six theoretical criteria. We then relate the instructor- and supervisor-observed extent of cohort improvement on each competency to the psychologist ratings to evaluate which of Lombardo and Eichinger’s (1995) theoretical criteria (if any) empirically explain the relative malleability of the range of competencies.
Method
Study Overview
The current study surveyed (a) technical training instructors (TTIs) responsible for training U.S. Air Force enlistees immediately prior to their first duty assignment and (b) supervisors of new enlistees on the job. Our use of an Air Force sample, in which all new enlistees assigned to a given career field complete the same training prior to job assignment, allowed for the use of two independent sources to establish a baseline for initial competency proficiency: (a) TTIs rated competency proficiency upon graduation and (b) supervisors rated competency proficiency when new graduates report to their first duty assignment, typically within a few weeks of technical training graduation. Although instructors and supervisors both rated the competency proficiency of airmen at the same career milestone (technical training graduation), supervisors’ ratings were at least partially retrospective.
Supervisors also rated the competency proficiency of enlistees at the end of four years of military service (mandatory minimum active duty service commitment), which typically coincides with new enlistees’ first duty assignment tenure. Within this context, competency improvement between technical training graduation and the end of four years of service can be largely attributed to individual development rather than cohort changes due to attrition. Unlike in private organizations, enlistees typically cannot voluntarily separate during their service commitment, and first-term separations after technical training graduation are rare (~2% total attrition postgraduation).
Organizationally Valued Soft-Skill Competencies
Official Air Force doctrine has defined “institutional competencies” that are expected across job types during an Air Force career (U.S. Air Force, 2014). While these institutional competencies have been used as a basis for the Basic Military Training curriculum (prior to technical training), enlistees are not explicitly trained or developed on these competencies during technical training or within their first four years on the job. (Air Force enlisted personnel receive formal instruction on institutional competencies later in their careers as part of professional military education, typically after approximately five years of service.) For the present study, 29 competencies were defined based on observable behaviors identified in AFMAN 36-2647 as expected at lower ranks (U.S. Air Force, 2014). Of the 29 competencies, we focus on the 17 that the majority of first-line supervisors identified as expected of all new enlisted members in their career field upon reporting to their first job.
Focal Population (Ratees): New Enlistees
For enlistment eligibility, recruits must meet physical, medical, and cognitive aptitude requirements and possess a high school degree or equivalent (Matthews, 2017). After completing Basic Military Training, new enlistees immediately complete technical training for their assigned career field; the length of technical training varies substantially by career field, ranging from approximately six weeks (e.g., the personnel career field) to 72 weeks (for certain types of cryptologic language analysts). All technical training includes practical training and requires trainees to demonstrate proficiency in performing specific work tasks.
Study Participants (Raters)
TTIs. All 3,727 current (at the time of the study) Air Force enlisted TTIs with at least six months' experience were invited to participate in the online survey. Of these, 1,158 completed the survey. TTIs in career fields open only to retrainees were excluded, yielding an analysis sample of 1,059. TTIs typically held the rank of E-5 (25.5%), E-6 (40.0%), or E-7 (17.9%) and averaged 30 months’ experience as a TTI; 86.9% were male and 82.3% identified as White.
Supervisors of First-Term Airmen. Supervisors were invited to participate in the survey if at least one of their current first-line supervisees had enlisted within the past four years and worked in their same primary career field. Of the 54,957 people invited to participate, 8,519 completed the survey. As an additional criterion for ratings quality, we limited analyses to respondents who had supervised at least five members of their career field for at least six months (N = 6,894).
Within this sample, the typical respondent had five years of experience supervising members of the career field and had supervised 12 such members. The supervisors most commonly held the rank of E-5 (39.4%), E-6 (44.8%), or E-7 (13.8%). The sample was predominantly male (83.8%) and White (76.0%). Respondents represented more than 140 distinct enlisted career fields, and no single career field accounted for more than 8% of the sample. The largest career fields represented were security forces (N = 515), munitions systems (N = 356), aerospace medical (N = 204), and aircraft armament systems (N = 202).
Survey Measures
The focal questions appeared at the beginning of a longer survey soliciting recommendations for improving recruiting. TTIs were presented with each competency definition (Table 1) and responded to the following question: “Upon graduation from the [career field] training pipeline, how many trainees possess the competency to the level that should be expected in their first duty assignment?” Supervisors responded to two parallel versions of the question based on the same competency definitions: “How many new enlisted accessions in your career field possess the competency to the level that should be expected in their first assignment?” (a) “Upon reporting to their first duty assignment” and (b) “At the end of four years.” Note that, because supervisors’ ratings were at least partially retrospective, their judgments were typically based on a somewhat earlier cohort of first-term airmen than the cohort rated by TTIs. TTIs and supervisors responded on the same scale: “All or Nearly All” (5), “Most” (4), “Some” (3), “Few” (2), or “None or Nearly None” (1). Raters who had no basis for rating a given competency, or who did not believe the competency should be expected at the start of the first duty assignment, could leave that item blank and continue to provide ratings in other areas of the survey.
Psychologist Competency Ratings
Five experienced psychologists with a broad range of backgrounds (research psychologists with training and personnel selection expertise; military operational psychologists) independently rated the extent to which each of the 17 competencies met the six Lombardo and Eichinger (1995) criteria using the following scale: 0 = Not at All, 1 = To a Small Extent, 2 = To Some Extent, 3 = To a Moderate Extent, 4 = To a Great Extent, and 5 = To a Very Great Extent.
Results
Development Based on Supervisor Ratings (Same Rater Pre- and Post-)
Across competencies, supervisor ratings of enlistees’ competency proficiency at the end of four years were significantly greater than their ratings of enlistees’ proficiency at the start of their first job (paired samples t-tests, p < .001 for all competencies). Competencies that showed the greatest improvement were decision-making (d = 1.41), problem-solving (d = 1.28), and upward communication (d = 1.14). Competencies that showed the least improvement were integrity (d = .41), professionalism (d = .53), and accepting feedback (d = .61). See Table 2.
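To make the analytic approach concrete, the sketch below (Python, with entirely hypothetical ratings rather than the study data) illustrates how a paired samples t-test and a repeated-measures effect size could be computed for a single competency; the exact effect-size formula used in the study is not reported here, so the difference-score formulation shown is only one common choice.

```python
# Illustrative sketch only (hypothetical data, not the study data).
import numpy as np
from scipy import stats

# Hypothetical ratings from the same supervisors at the two milestones (1-5 scale).
start_of_first_job = np.array([3, 2, 3, 4, 2, 3, 3, 2, 4, 3])
end_of_four_years = np.array([4, 3, 4, 5, 3, 4, 4, 3, 4, 4])

# Paired samples t-test across the same raters.
t_stat, p_value = stats.ttest_rel(end_of_four_years, start_of_first_job)

# One common repeated-measures effect size: mean difference divided by the
# standard deviation of the difference scores.
diffs = end_of_four_years - start_of_first_job
cohens_d = diffs.mean() / diffs.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```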
Development Based on Instructor Ratings Upon Technical Training Graduation and Supervisor Ratings at End of Four Years
Comparing instructor ratings of enlistees immediately before reporting to their first job to supervisor ratings at the end of four years showed more limited evidence of cohort improvement but a similar pattern in terms of which competencies were most and least malleable. Cohort improvement on 12 of the 17 competencies was statistically significant (independent samples t-tests, p < .05). Competencies that showed the greatest improvement were upward communication (d = .42), speaking (d = .38), and cultural awareness (d = .32). Competencies that showed the least improvement were accepting feedback (d = .03), integrity (d = .04), and professionalism (d = .04). See Table 3.
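Because instructors and supervisors are different rater groups, this comparison treats the two sets of ratings as independent samples. A minimal sketch under the same caveats as above (hypothetical data; the pooled-standard-deviation form of Cohen's d is one common choice and is not necessarily the formula used in the study):

```python
# Illustrative sketch only (hypothetical data, not the study data).
import numpy as np
from scipy import stats

instructor_at_graduation = np.array([3, 3, 4, 2, 3, 4, 3, 3, 2, 4])
supervisor_at_four_years = np.array([4, 3, 4, 3, 4, 4, 3, 4, 3, 4])

# Independent samples t-test across the two rater groups.
t_stat, p_value = stats.ttest_ind(supervisor_at_four_years, instructor_at_graduation)

# Cohen's d with a pooled standard deviation.
n1, n2 = len(supervisor_at_four_years), len(instructor_at_graduation)
pooled_sd = np.sqrt(
    ((n1 - 1) * supervisor_at_four_years.var(ddof=1)
     + (n2 - 1) * instructor_at_graduation.var(ddof=1)) / (n1 + n2 - 2)
)
cohens_d = (supervisor_at_four_years.mean() - instructor_at_graduation.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```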
Psychologist Competency Ratings
The 17 competencies were rated as covering a wide range on each of the six criteria. A developmental difficulty index based on the average across all six criteria would identify timeliness, professionalism, and followership as easiest to improve and openness to alternative views, collaboration, decision-making, and self-awareness as hardest to improve (see Table 1).
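To illustrate how such an index can be formed, the sketch below (hypothetical ratings covering only a subset of competencies and criteria) averages the psychologists' 0-5 ratings across criteria for each competency and ranks competencies from theoretically easiest to hardest to develop.

```python
# Illustrative sketch only (hypothetical ratings, not the study data).
import pandas as pd

# Each row is one psychologist's rating of one competency on one criterion.
ratings = pd.DataFrame({
    "competency": ["integrity", "integrity", "speaking", "speaking",
                   "timeliness", "timeliness"],
    "criterion": ["beliefs", "emotions", "beliefs", "emotions",
                  "beliefs", "emotions"],
    "rating": [5, 4, 1, 1, 1, 0],
})

# Developmental difficulty index: mean rating across all criteria (and raters)
# for each competency; lower values indicate theoretically easier development.
difficulty_index = ratings.groupby("competency")["rating"].mean().sort_values()
print(difficulty_index)
```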
Summarizing Results by Developmental Difficulty Criteria
Consistent with Lombardo and Eichinger’s (1995) theoretical predictions regarding developmental difficulty, competencies that were (a) most dependent on the attitudes, values, opinions, and beliefs of the individual and (b) most likely to involve, engage, or trigger emotions were the least amenable to change among enlistees. Contrary to their theoretical predictions, competencies that were cognitively complex or otherwise highly complex overall were actually the most amenable to change among enlistees. See Table 4, which relates instructor- and supervisor-rated improvement on the competencies (Tables 2 and 3) to psychologist-rated developmental difficulty criteria.
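In essence, this summary amounts to correlating, across the 17 competencies, the observed improvement effect sizes (Tables 2 and 3) with the mean psychologist rating on each difficulty criterion. The sketch below uses placeholder values (not the study data) to show the form of that analysis.

```python
# Illustrative sketch only (placeholder values, not the study data).
import pandas as pd

competency_summary = pd.DataFrame({
    "improvement_d": [0.40, 0.55, 1.40, 1.25, 1.10],      # observed change per competency
    "beliefs": [4.6, 4.0, 2.2, 2.0, 1.8],                 # mean psychologist ratings
    "emotions": [4.2, 3.6, 2.4, 2.1, 2.0],
    "cognitive_complexity": [1.5, 1.8, 4.4, 4.2, 3.0],
})

# Correlation of each difficulty criterion with observed improvement across competencies.
print(competency_summary.corr()["improvement_d"].drop("improvement_d"))
```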
Discussion
Practitioners have emphasized the importance of distinguishing competencies that must be addressed in recruiting and personnel selection from those that employees can develop on the job (Hallenbeck & Eichinger, 2006). To the extent that most employees enter the labor market with deficiencies in at least some soft skills important for workplace success, our study sought to distinguish competencies that are harder for entry-level employees to develop with experience (less amenable to change) than others (Hart Research Associates, 2015). The U.S. Air Force context, in which new enlisted members are provided with no formal training on organizationally valued soft skills during their first duty assignment, provided a unique opportunity to evaluate the malleability of distinct competencies in the absence of formal intervention.
Our results suggest that soft-skill competencies such as integrity, professionalism, and accepting feedback are among the most difficult to develop and should potentially be prioritized in screening and personnel selection. Comparison of the instructor-rated proficiency of new enlistees upon technical training graduation to supervisor-rated proficiency at the end of four years showed no significant improvement in integrity, professionalism, accepting feedback, or openness to alternative views. Our results do not imply that effective training interventions could not be designed for these competencies, merely that enlistees showed little to no improvement over the natural course of gaining on-the-job experience. In contrast, results showed substantial improvement on competencies such as upward communication, speaking, decision-making, problem-solving, and collaboration in the absence of formal training intervention specifically targeting these competencies.
The results are potentially more broadly informative regarding the relative difficulty with which other soft-skill competencies can be learned through experience. The results partially support the developmental difficulty model proposed by Lombardo and Eichinger (1995). As theorized, beliefs (the extent to which a competency depends on the attitudes, values, opinions, and beliefs of the individual) and emotional involvement (the extent to which performing the competency involves, engages, or triggers emotions) related negatively to competency change. Contrary to the theory, however, skill complexity (the extent to which highly complex skills are needed for competency performance) and cognitive complexity (the extent to which performing the competency requires complex parallel processing of incomplete information) related positively to competency change. Although the 17 competencies evaluated in this study are not a comprehensive list of those valued across all organizations, our findings more broadly suggest that among other commonly valued competencies (Tett et al., 2000), those most closely related to one’s personal beliefs (e.g., loyalty, rule orientation) and emotions (e.g., compassion) would likely be more resistant to change than other soft skills.
Limitations and Recommendations for Future Research
Previous authors have emphasized that a key feature of soft skills is that such skills are continually developed both inside and outside of the workplace (Robles, 2012). As a result, interpreting our findings as specifically due to workplace experience is inappropriate. Rather, our results likely reflect a combination of lifespan development of young adults and workplace experience. While we do not view this as necessarily a study limitation—that is, employers benefit from development of their employees, regardless of the cause of that development—we do note that the study findings may be less likely to generalize to older populations for this reason.
Our study has several methodological limitations that we hope future studies will address. First, although our use of two independent sources of information (technical training instructors and first-line supervisors) is a methodological strength, future studies should include instructor/supervisor ratings of individual trainees/subordinates rather than global ratings of a cohort. Ratings of individual trainees/subordinates would also allow for a true longitudinal study rather than one in which (current) supervisors provided ratings that were partially retrospective.
Additionally, it would be helpful for future studies to distinguish among different types of problem-solving and decision-making competencies, some of which may be more resistant to change than others. The definitions of problem-solving and decision-making in the current study were broad and could have been interpreted in terms of technical, procedural, domain-specific problem-solving and decision-making rather than problem-solving and decision-making in situations that are more novel, dynamic, and complex.
Finally, while we sought to show the extent to which first-term enlistees develop on organizationally valued competencies in the absence of formal, explicitly targeted interventions, development on some competencies may have been (tacitly or directly) emphasized more than others. Although all competencies included in analyses were identified by a majority of supervisors as important from the start of a new enlistee’s first job, it is possible that some supervisors focused on specific competencies. For example, scholars have theorized that employees are more likely to engage in developmental activities when they believe that such development would result in recognition by managers, and studies have found that supervisor support relates positively to employees’ participation in developmental activities (Dubin, 1990; Farr & Middlebrooks, 1990; Kyndt & Baert, 2013).
The views expressed in this article are those of the authors and are not necessarily those of the U.S. government, the Department of Defense, or the U.S. Air Force.
References
Avery, K. (2000). The effects of 360 degree feedback over time [Unpublished doctoral dissertation]. Chicago School of Professional Psychology.
Avolio, B. J., Reichard, R. J., Hannah, S. T., Walumbwa, F. O., & Chan, A. (2009). A meta-analytic review of leadership impact research: Experimental and quasi-experimental studies. Leadership Quarterly, 20(5), 764–784. https://doi.org/10.1016/j.leaqua.2009.06.006
Barth, J., & Lannen, P. (2011). Efficacy of communication skills training courses in oncology: A systematic review and meta-analysis. Annals of Oncology, 22(5), 1030–1040. https://doi.org/10.1093/annonc/mdq441
Brush, D. H., & Licata, B. J. (1983). The impact of skill learnability on the effectiveness of managerial training and development. Journal of Management, 9(1), 27–39. https://doi.org/10.1177/014920638300900104
Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (pp. 687–732). Consulting Psychologists Press.
Campion, M. A., Fink, A. A., Ruggeberg, B. J., Carr, L., Phillips, G. M., & Odman, R. B. (2011). Doing competencies well: Competency modeling best practices. Personnel Psychology, 64(1), 225–262. https://doi.org/10.1111/j.1744-6570.2010.01207.x
Dai, G., De Meuse, K. P., & Peterson, C. (2010). Impact of multi-source feedback on leadership competency development: A longitudinal field study. Journal of Managerial Issues, 22(2), 197–219.
Dubin, S. S. (1990). Maintaining competence through updating. In S. S. Dubin (Ed.), Maintaining professional competence (pp. 9–45). Jossey-Bass.
Farr, J. L., & Middlebrooks, C. L. (1990). Enhancing motivation to participate in professional development. In S. S. Dubin (Ed.), Maintaining professional competence (pp. 195–213). Jossey-Bass.
Gibbons, A. M., Rupp, D. E., Snyder, L. A., Silke Holub, A., & Eun Woo, S. (2006). A preliminary investigation of developable dimensions. The Psychologist-Manager Journal, 9(2), 99–123. https://doi.org/10.1207/s15503461tpmj0902_4
Hallenbeck, G. S., & Eichinger, R. W. (2006). Interviewing right: How science can sharpen your interviewing accuracy. Korn Ferry International.
Hart Research Associates. (2015). Falling short? College learning and career success: Selected findings from online surveys of employers and college students conducted on behalf of the Association of American Colleges and Universities. https://www.aacu.org/sites/default/files/files/LEAP/2015employerstudentsurvey.pdf
Hellervik, L. W., Hazucha, J. F., & Schneider, R. J. (1992). Behavior change: Models, methods, and a review of evidence. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (pp. 823–895). Consulting Psychologists Press.
Hezlett, S. A., & Ronnkvist, A. M. (1996). The effects of multi-rater feedback on managers’ skill development: Factors influencing behavior change [Paper presentation]. 11th Annual Conference of the Society for Industrial and Organizational Psychology, San Diego, CA, United States.
Klein, C. R. (2009). What do we know about interpersonal skills? A meta-analytic examination of antecedents, outcomes, and the efficacy of training [Unpublished doctoral dissertation]. University of Central Florida.
Kyndt, E., & Baert, H. (2013). Antecedents of employees’ involvement in work-related learning: A systematic review. Review of Educational Research, 83(2), 273–313. https://doi.org/10.3102/0034654313478021
Lombardo, M. M., & Eichinger, R. W. (1995). The CAREER ARCHITECT® user’s manual. Lominger Limited.
Martin-Raugh, M. P., Williams, K. M., & Lentini, J. E. (2019, April 4–6). The malleability of workplace-relevant non-cognitive constructs: Empirical evidence from 39 meta-analyses and reviews [Poster presentation]. 34th SIOP [Society for Industrial and Organizational Psychology] Annual Conference, National Harbor, MD, United States.
Matthews, M. (2017). Assessing the use of employment screening for sexual assault prevention (RR-1250-AF). RAND Corporation. https://www.rand.org/pubs/research_reports/RR1250.html
Meeks, G. A. (2017). Critical soft skills to achieve success in the workplace [Unpublished doctoral dissertation]. Walden University.
Mueller-Hanson, R. A., White, S. S., Dorsey, D. W., & Pulakos, E. D. (2005). Training adaptable leaders: Lessons from research and practice (Research Report 1844). U.S. Army Research Institute for Behavioral and Social Sciences. https://apps.dtic.mil/sti/pdfs/ADA440139.pdf
Nemeroff, W. F., & Cosentino, J. (1979). Utilizing feedback and goal setting to increase performance appraisal interview skills of managers. Academy of Management Journal, 22(3), 566–576. https://doi.org/10.5465/255745
Quadracci, R. H. (1995). The effects of 360 degree feedback, method of feedback and criticality of skill sets on changes in self-perception [Unpublished doctoral dissertation]. California School of Professional Psychology-Los Angeles.
Robles, M. M. (2012). Executive perceptions of the top 10 soft skills needed in today’s workplace. Business Communication Quarterly, 75(4), 453–465. https://doi.org/10.1177/1080569912460400
Salas, E., DiazGranados, D., Klein, C., Burke, C. S., Stagl, K. C., Goodwin, G. F., & Halpin, S. M. (2008). Does team training improve team performance? A meta-analysis. Human Factors: Journal of the Human Factors and Ergonomics Society, 50(6), 903–933. https://doi.org/10.1518/001872008x375009
Schippmann, J., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., Kehoe, J., Pearlman, K., Prien, E. P., & Sanchez, J. I. (2000). The practice of competency modeling. Personnel Psychology, 53(3), 703–739. https://doi.org/10.1111/j.1744-6570.2000.tb00220.x
Smith, I. M., & Brummel, B. J. (2013). Investigating the role of active ingredients in executive coaching. Coaching: An International Journal of Theory, Research, and Practice, 6(1), 57–71. https://doi.org/10.1080/17521882.2012.758649
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58(1), 33–66. https://doi.org/10.1111/j.1744-6570.2005.514_1.x
Straus, S. G., McCausland, T. C., Grimm, G., & Giglio, K. (2018). Malleability and measurement of leader attributes: Personnel development in the U.S. Army (RR-1583). RAND Corporation.
Tett, R. P., Guterman, H. A., Bleier, A., & Murphy, P. J. (2000). Development and content validation of a “hyperdimensional” taxonomy of managerial competence. Human Performance, 13(3), 205–251. https://doi.org/10.1207/S15327043HUP1303_1
U.S. Air Force. (2014). Institutional competency development and management (Air Force manual 36-2647). https://www.ang.af.mil/Portals/77/documents/force_dev/AFD-150528-002.pdf?ver=2016-09-21-092900-540
U.S. Department of Defense (DOD). (2016). DoD civilian personnel management system: Civilian strategic human capital management planning (SHCP) (DOD Instruction 1400.25). U.S. Government Publishing Office.
Waters, J. A. (1980). Managerial skill development. Academy of Management Review, 5(3), 449–453. https://doi.org/10.5465/amr.1980.4288876
Laura G. Barron, PhD, is an industrial/organizational psychologist at Air Education and Training Command. She received her PhD in industrial/organizational psychology from Rice University and previously served as a senior personnel research psychologist and chief of Strategic Research and Assessments at Air Force Personnel Center. She conducts research and analysis in the areas of job analysis/competency modeling, assessment development, program evaluation, and personnel screening and classification. She has published in numerous peer-reviewed journals such as Military Psychology, Human Performance, and International Journal of Selection and Assessment.
Mark R. Rose, PhD, is a senior psychologist and technical director at Air Education and Training Command, within the Studies and Analysis Squadron Airmen Advancement flight. He received his PhD in industrial-organizational psychology from the University of South Florida in 1997. He has conducted several recent projects that involved gathering input on task and competency requirements from subject-matter experts across the Air Force to determine potential enhancements to Air Force screening processes. His current role involves conducting research and serving as an advisor to Air Force leaders on enlisted promotion testing.