Increasing Aptitudes


  • This topic has 3 replies, 3 voices, and was last updated 2 months, 1 week ago by Jennifer Griffiths.
  • #2034
    Jennifer Kengis
    Participant

    What are the guidelines for increasing aptitudes with training? There was some training provided that suggested that all aptitudes can be increased by only 1 level (with appropriate training), and that V, N, S cannot be increased above the G.

    Can anyone share their best practices when looking at increasing aptitudes?

    #2035
    Francois Paradis
    Participant

    Hello Jennifer,

    This is a complex question without a straightforward answer. Aptitudes, as I understand them, represent innate abilities that are generally stable over time, with only minimal impact from training. However, training and practice can indeed improve performance, particularly in specific areas like math or language.

    Aptitude tests, such as the GATB, CAPS, CareerScope, or CFIT, typically focus on speed and accuracy rather than a broad knowledge base. For instance, Numerical aptitude tests often consist of arithmetic items and don’t include higher-level math questions.

    I believe that training might lead to noticeable improvements in aptitude testing at lower levels (e.g., from low to low average or to an average level). However, the impact of training is likely negligible at higher levels (e.g., from high average to high).

    While I don’t have specific research or literature to reference, I think it would be reasonable to consider raising a person’s aptitude score if they successfully undergo related training, but I would caution against raising it beyond the average level.

    #2037
    Francois Paradis
    Participant

    Hello again Laura,

    I decided to look further into your question and I thought I would share a conversation I had this week with the kind folks at EDITS, the publisher of the COPES system. The bottom line being that aptitudes themselves don’t increase significantly with training. Rather, a person may underperform when weak literacy or numeracy skills get in the way of assessing their true ability levels. As always, feel free to comment!

    On August 21, 2024 at 12:51 PM, francois@career-options.ca (francois@career-options.ca) wrote:

    Good afternoon,

    I am hoping you can address this question regarding the impact of training on aptitude scores. I am a vocational evaluator based in Toronto, and a colleague of mine recently asked a pertinent question as follows:

    Are there any guidelines for increasing aptitudes with training? My colleague reportedly received training suggesting that all aptitudes can be increased by only 1 level (with appropriate training), and that V, N, S cannot be increased above the G.

    I provide below my thoughts on this and was told there is a Dr. Lee at your office that might have more insight:

    This is a complex question without a straightforward answer. Aptitudes, as I understand them, represent innate abilities that are generally stable over time, with only minimal impact from training. However, training and practice can indeed improve performance, particularly in specific areas like math or language.
    Aptitude tests, such as the GATB, CAPS, CareerScope, or CFIT, typically focus on speed and accuracy rather than a broad knowledge base. For instance, numerical aptitude tests often consist of arithmetic items and don’t include higher-level math questions.

    I believe that training might lead to noticeable improvements in aptitude testing at lower levels (e.g., from low to low average or to an average level). However, the impact of training is likely negligible at higher levels (e.g., from high average to high).

    While I don’t have specific research or literature to reference, I think it would be reasonable to consider raising a person’s aptitude score if they successfully undergo related training, but I would caution against raising it beyond the average level.

    I should add that I am the person in charge of professional development with CAVEWAS, the Canadian association of vocational evaluators. Your input would be of help to multiple people. Thank you in advance for your help.

    Sincerely,

    Francois Paradis, M.A., CVE, CCVE, ICVE

    From: EdITS Customer Service <service@edits.net>
    Sent: Wednesday, August 28, 2024 4:35 PM
    To: francois@career-options.ca <francois@career-options.ca>
    Cc: Lisa Lee <lisalee@edits.net>
    Subject: Re: The impact of training on aptitude testing

    Francois,

    Thanks for reaching out! My name is Andrew Tricarico and I have discussed this with Lisa, my direct supervisor. I have a master’s degree in industrial-organizational psychology (where I accumulated a solid amount of testing and measurement expertise), so we both can offer some insight.

    My background is in Industrial Psychology, and we typically classify job-related attributes as “knowledge,” “skills,” “abilities,” or “other characteristics.” Below are some relevant definitions as I understand them:

    • Knowledge is an attribute that was purely developed through learning (biology, physics, the English language, etc.).
    • A skill is an attribute that improves with practice (persuasion, judgment and decision making, time management, etc.).
    • An ability is a more innate quality that can’t as easily be practiced (physical flexibility, deductive reasoning, reaction time, etc.).
    • Aptitude (in terms of psychology) is closest in definition to ability, but specifically concerns one’s potential.

    While I am not intimately familiar with the GATB, I was able to find some literature discussing its origin and structure that may have direct bearing on the training your colleague participated in. Scores on V and S certainly couldn’t be higher than G, as V is measured by Test 4 and S is measured by Test 3, while G is simply a composite score of Tests 3, 4, and 6. N is measured by Tests 2 and 6, so someone could potentially score very high on Test 2 and surpass G, but it is probably a rare occurrence (and could be impossible depending on how the scores are calculated). Perhaps some math was done in the test development process to ensure that neither V, N, nor S could surpass the G composite score, but I do not have access to a manual.
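    The subtest structure described above can be sketched numerically. The actual GATB scoring formulas are not public to me, so simple averages are assumed here purely for illustration; the scenario shows how a very high Test 2 score could let N surpass the G composite, as noted above.

    ```python
    # Hypothetical sketch of the GATB structure described above: S from
    # Test 3, V from Test 4, N from Tests 2 and 6, and G as a composite
    # of Tests 3, 4, and 6. Simple averages are an illustrative
    # assumption only, not the published scoring formula.

    def mean(*scores: float) -> float:
        return sum(scores) / len(scores)

    # A candidate with a very high Test 2 score.
    test2, test3, test4, test6 = 140, 100, 100, 100

    s = test3                      # Spatial (Test 3)
    v = test4                      # Verbal (Test 4)
    n = mean(test2, test6)         # Numerical (assumed average of Tests 2 and 6)
    g = mean(test3, test4, test6)  # General learning ability composite

    # N can surpass the G composite through Test 2 alone, while V and S
    # stay at or below G in this example.
    print(g, n)  # 100.0 120.0
    ```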

    I agree that the question of aptitude versus ability and the possibility of improving scores is a very complex one. From my perspective, I think your definition is closest to “ability,” though again, aptitude and ability are quite similar. I browsed a couple of abstracts and did find a meta-analysis discussing how education can increase general mental ability (Ritchie & Tucker-Drob, 2018). Kulik et al. (1984) specifically focused on participants taking practice versions of aptitude tests, finding that ability moderated the effect of practice tests on aptitude test performance. Specifically, those with high ability scores experienced higher score gains, the opposite of what is proposed in your email. Of course, even though it is a meta-analysis, it’s an older one, and I have not read enough to offer a thorough critique. Additionally, a method of training that isn’t general schooling or a practice test could certainly moderate the relationship differently; I would need to compile a true literature review to know for sure. Also, yes, numerical aptitude assessments are typically focused on speed and accuracy on lower-level problems, because using higher-level math problems would start to lean the assessment towards a measure of high-level math knowledge rather than general math-related aptitude.

    I don’t understand what was meant by “it would be reasonable to consider raising a person’s aptitude score, (but) would caution raising it beyond the average level,” though. To me, the wording implies modifying scores on an individual basis after an assessment has been taken, which should never happen (unless you’re deleting the data entirely because you know the person cheated, didn’t finish the assessment, or something along those lines). The only way to determine whether one’s aptitude (as measured by a particular assessment) has changed is to have them take the same assessment again. Practice effects can certainly influence the validity of these assessments (particularly if assessment-specific training is received), but editing results is a guaranteed threat to validity. The CAPS (and most other aptitude assessments) are designed to take a snapshot of a participant’s aptitude at a specific point in time.

    A participant’s score is just what the assessment gives you; it’s your *interpretation* of that score that should change. If you know the participant has undergone training (and especially if that training is specific to the content of a particular test), you also know that their results *might* be a little less valid. On the other hand, especially if the previous administration of the assessment and the subsequent training occurred years prior, perhaps the training did manage to increase one’s aptitude (or, simply, more experience was gained in general).

    In short, if you’re leveraging a validated psychometric instrument like the GATB or the CAPS, you shouldn’t ever consider raising (or lowering) scores due to outside factors, because these instruments were not designed for post-hoc modification. If someone who scored far below average later scored far above average on an aptitude assessment, I’d certainly look into it, but I would always use the scores the assessment presented as the basis of my interpretation.

    Overall, test batteries such as the CAPS offer a fairly accurate measure, at this point in time, of a client’s ability levels. Test scores can improve through additional training, but we have no way to tell just how much improvement can be achieved.

    I hope I have addressed your question! Regardless, I am happy to discuss this further over the phone or through email.

    References:
    Kulik, J. A., Kulik, C.-L. C., & Bangert, R. L. (1984). Effects of practice on aptitude and achievement test scores. American Educational Research Journal, 21(2), 435–447. https://doi.org/10.2307/1162453

    Ritchie, S. J., & Tucker-Drob, E. M. (2018). How much does education improve intelligence? A meta-analysis. Psychological Science, 29(8), 1358–1369. https://doi.org/10.1177/0956797618774253
    Sincerely,
    Andrew
    EdITS LLC
    Educational & Industrial Testing Service

    On August 30, 2024 at 9:53 AM, francois@career-options.ca (francois@career-options.ca) wrote:

    Good morning Andrew and Lisa, and thank you, Andrew, for your detailed response. I am including one of my colleagues, Laura, who initiated this discussion. Laura, please feel free to jump in on this conversation. I think it is an important one, and I want our forum members to benefit from it.

    Regarding my previous statement (“it would be reasonable to consider raising a person’s aptitude score, (but) would caution raising it beyond the average level,” ), it was not meant to say that we would raise a person’s test score. Rather, we may recommend an occupation one level above a person’s scores if we have reasonable expectations those scores are an under-representation of their true ability levels due to literacy/numeracy issues. This does not mean that person has increased those aptitudes but rather, we are getting a more accurate measurement of their true aptitude levels by removing said literacy/numeracy barriers.

    Let’s imagine for a moment a typical scenario to illustrate this: an immigrant with a bachelor’s degree in engineering and previous work experience as a mechanical engineer comes to Canada and wishes to get certified as a mechanical engineering technician. He now has to complete 3 years of college studies in engineering technology to qualify for a license here. Let’s say this man’s English literacy skills are at about a grade 8 equivalent. Let’s also assume we gave him an aptitude test and that he achieved a low average level of verbal aptitude and an average level of general learning ability. We know, from the occupational profile of a mechanical engineering technician, that a high average level of general learning ability and an average level of verbal aptitude are required. Can we say that this person is likely to fail in becoming an engineering technician in Canada? I think not. Rather, we can expect that his true aptitude levels are higher than demonstrated on testing, given his background; his scores are depressed by his lower literacy skills. In this scenario, we are not raising his aptitude scores but assuming they are under-representations of his true ability levels. We may recommend a career requiring one aptitude level above his current scores, with the assumption that he successfully completes academic upgrading to bring his literacy up to a point where he can succeed in an engineering technology program.

    If we were to test this person again after completion of his training, it would be reasonable to expect that his G and V aptitude scores would have increased to the required level for his occupation. So, to be clear, we did not raise his initial test scores but made an assumption that his true ability levels would be revealed once the literacy barrier is removed.
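    The reasoning in the scenario above amounts to a simple decision rule: compare tested levels against the occupation’s requirements, allowing a one-level upward adjustment only when a literacy/numeracy barrier is believed to be depressing the observed scores. The level names and the one-level adjustment come from the scenario; everything else in this sketch is an illustrative assumption.

    ```python
    # Sketch of the decision rule in the scenario above. Level names
    # follow the scenario; the function itself is an illustrative
    # assumption, not a published CAVEWAS or GATB procedure.

    LEVELS = ["low", "low average", "average", "high average", "high"]

    def meets_requirement(tested: str, required: str, barrier_suspected: bool) -> bool:
        t, r = LEVELS.index(tested), LEVELS.index(required)
        if barrier_suspected:
            # Credit one level when literacy/numeracy issues are believed
            # to be depressing the observed score, as in the scenario.
            t = min(t + 1, len(LEVELS) - 1)
        return t >= r

    # The engineer in the example: V tested "low average" against an
    # "average" requirement, with a literacy barrier suspected.
    print(meets_requirement("low average", "average", barrier_suspected=True))   # True
    print(meets_requirement("low average", "average", barrier_suspected=False))  # False
    ```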

    In summary, I would advise against recommending occupations above a person’s aptitude scores, unless there are specific circumstances and remediations to support it.

    Thank you for your time and insights!

    Sincerely,

    Francois Paradis, M.A., CVE, CCVE, ICVE


    Francois,

    I totally understand what you’re getting at now! I was looking at it from the wrong angle: test bias/Type II error obfuscating a true score is a massive issue. In short, Lisa and I 100% agree with you here. We recommend that our CAPS examinees draw a line to the stanine above and below their given score, creating a crude “confidence interval” that is probably more representative of their true ability. Of course, language difficulties (affecting the non-language aspects of the assessment) may contaminate an observed score more than “typical error” would. This all emphasizes the importance of making an effort to ensure that assessments are, at the very least, culture-fair. It also certainly helps to get assessments back-translated and made available in various languages so these threats to validity can slowly be stamped out (but of course, doing so is quite expensive).

    Regards,

    Andrew Tricarico

    EdITS LLC

    Educational & Industrial Testing Service

    #2041
    Jennifer Griffiths
    Participant

    Great discussion and research on this topic, guys! Thanks for making the effort, which will benefit our mutual practices!
