
Education Dept. offers guidance on developing AI for the classroom

The Department of Education is the latest federal agency to release a guide helping stakeholders and developers build artificial intelligence solutions that are safe and effective for academic environments.

Released on Thursday, the guidance builds on Education’s 2023 AI Report and responds to President Joe Biden’s October 2023 AI Executive Order. The new guidance sharpens its focus to five areas officials recommend AI software developers keep in mind when building AI and machine learning edtech capabilities: designing for teaching and learning; providing evidence of rationale and impact; advancing equity and protecting civil rights; ensuring safety and security; and promoting transparency and earning trust.

Education continues to use a metaphor from its earlier AI resource to frame the agency’s perspective on AI: thinking of AI software as an e-bike.

“Teachers and students should be in control as they use the capabilities of AI to strengthen teaching and learning,” the press release announcing the new guidance says. “Just as a cyclist controls direction and pace but preserves energy with the assistance of an e-bike’s drivetrain, so should participants in education remain in control and be able to use technology to focus time and energy on the most impactful interactions and activities.”

Education hopes to see partnerships between software developers and education professionals, both in the initial design phase of the technology and in field tests conducted throughout the product’s lifecycle.

The guidance also introduces multiple frameworks to serve as visual guides for AI governance. One is the concept of a Responsibility Stack for AI edtech development. Outside the education sector, Responsibility Stacks are AI governance frameworks, applicable to a broad range of fields, that help ensure the safety and security of an AI product. The framework introduced in Education’s new guidance divides AI software management into two categories: the “responsibility stack” and the “innovation stack.”

Privacy protection, bias identification and mitigation, and transparency are among the pillars in the “responsibility stack” column, while the parallel “innovation stack” highlights oversight of deployment, model training and construction, and data creation and curation.

This model, which the report authors note should be tailored to individual entity needs, underscores the larger mission of the guidance: to design safe and thoughtful AI systems while prioritizing educational needs and outcomes.

The guidance also highlights the need to integrate common principles of academic ethics into AI model design. Education officials noted that, during listening sessions held while crafting the new guidance, software developers flagged ethics as a critical focus area.

This feedback prompted Education to list general ethics themes important to AI edtech, such as transparency, fairness, privacy and beneficence, along with education-specific ethical considerations that should be incorporated into AI edtech solutions: pedagogical appropriateness, children’s rights, AI literacy, teacher wellbeing and student needs.

“Product leads and their teams should not only be aware of ethical concerns but should also find ways in which ethics can be interwoven throughout the life cycle of product development,” the report reads. 

Above all, the report noted, building a sense of trust in both the AI model’s capabilities and the humans overseeing the software’s functionality is of “paramount importance.”
