Issues related to racial equity and unfair bias were at the heart of every listening session we held. In particular, we heard a conversation increasingly attuned to data quality and to the consequences of using poor or inappropriate data in AI systems for education. AI models are developed from datasets, and when those datasets are non-representative or contain undesired associations or patterns, the resulting models may act unfairly in how they detect patterns or automate decisions. Systematic, unwanted unfairness in how a computer detects patterns or automates decisions is called "algorithmic bias." Left unaddressed, algorithmic bias can diminish equity at scale through unintended discrimination. As discussed in the Formative Assessment section, this is not a new conversation. For decades, constituents have rightly probed whether assessments are unbiased and fair.
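To make the idea of algorithmic bias concrete, the sketch below (using entirely hypothetical data and a hypothetical `selection_rates` helper) shows one common audit technique: comparing a system's rate of favorable automated decisions across demographic groups. A large gap between groups can signal bias inherited from non-representative or skewed training data.

```python
# Minimal illustrative sketch, not a prescribed method: compute the rate of
# favorable automated decisions per group and the gap between groups.

def selection_rates(decisions):
    """decisions: list of (group, outcome) pairs; outcome 1 = favorable."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit log of automated decisions for two groups, A and B.
log = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
       ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

rates = selection_rates(log)
gap = max(rates.values()) - min(rates.values())
print(rates)  # group A is favored 75% of the time, group B only 25%
print(gap)    # a 0.50 gap would prompt further investigation
```

An audit like this is a starting point, not a verdict: a gap in decision rates invites scrutiny of the underlying data and model, and such checks need to be repeated as a system is used over time.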
Just as with assessments, whether an AI model exhibits algorithmic bias or is judged to be fair and trustworthy is critical as local school leaders make adoption decisions about using AI to achieve their equity goals. We highlight the concept of "algorithmic discrimination" in the Blueprint. Bias is intrinsic to how AI algorithms are developed using historical data, and it can be difficult to anticipate all impacts of biased data and algorithms during system design. The Department holds that biases in AI algorithms must be addressed when they introduce or sustain unjust discriminatory practices in education. For example, in postsecondary education, algorithms that make enrollment decisions, identify students for early intervention, or flag possible student cheating on exams must be interrogated for evidence of unfair discriminatory bias, not only when systems are designed but also later, as systems become widely used.