Article Version of Record

Empirical ensemble equating under the NEAT design inspired by machine learning ideology

Author(s) / Creator(s)

Jiang, Zhehan
Han, Yuting
Zhang, Jihong
Xu, Lingling
Shi, Dexin
Liang, Haiying
Ouyang, Jinying

Abstract / Description

This study proposes an empirical ensemble equating (3E) approach that collectively selects, adopts, weighs, and combines outputs from different sources to take combined advantage of equating techniques across various score intervals. The ensemble idea was demonstrated and tailored to equating under the Non-Equivalent groups with Anchor Test (NEAT) design. A simulation study based on several published settings was conducted. Three outcome measures (average bias, its absolute value, and root mean square difference) were used to evaluate the selected methods’ performance. The 3E approach outperformed its counterparts in most of the given conditions; cautions regarding its use, such as tuning the weights and assuming plausible application scenarios, are also addressed.

Keyword(s)

ensemble learning; equating; machine learning; NEAT; educational assessment

Persistent Identifier

https://hdl.handle.net/20.500.12034/9141
https://doi.org/10.23668/psycharchives.13661

Date of first publication

2023-06-30

Journal title

Methodology

Volume

19

Issue

2

Page numbers

116–132

Publisher

PsychOpen GOLD

Publication status

publishedVersion

Review status

peerReviewed

Is version of

https://doi.org/10.5964/meth.10371

Citation

Jiang, Z., Han, Y., Zhang, J., Xu, L., Shi, D., Liang, H., & Ouyang, J. (2023). Empirical ensemble equating under the NEAT design inspired by machine learning ideology. Methodology, 19(2), 116-132. https://doi.org/10.5964/meth.10371
  • PsychArchives acquisition timestamp
    2023-11-23T11:52:09Z
  • Made available on
    2023-11-23T11:52:09Z
  • ISSN
    1614-2241
  • Persistent Identifier
    https://hdl.handle.net/20.500.12034/9141
  • Persistent Identifier
    https://doi.org/10.23668/psycharchives.13661
  • Language of content
    eng
  • Is version of
    https://doi.org/10.5964/meth.10371
  • Is related to
    https://doi.org/10.23668/psycharchives.12949
  • Dewey Decimal Classification number(s)
    150
  • DRO type
    article
  • Visible tag(s)
    Version of Record