Here at OCW, we’ve spent ten years demonstrating how educational materials can be shared at scale. In that time, Web 2.0 technologies have matured to support interaction at scale as well, and projects like OpenStudy and Peer 2 Peer University have used those tools to support educational interactions.
Part of the story behind the story of MOOCs is the development of assessments that can be administered at scale. In this piece, the New York Times chronicles one aspect of that story, the development of automated essay-grading systems for edX:
The EdX assessment tool requires human teachers, or graders, to first grade 100 essays or essay questions. The system then uses a variety of machine-learning techniques to train itself to be able to grade any number of essays or answers automatically and almost instantaneously.
The software will assign a grade depending on the scoring system created by the teacher, whether it is a letter grade or numerical rank. It will also provide general feedback, like telling a student whether an answer was on topic or not.
Dr. [Anant] Agarwal said he believed that the software was nearing the capability of human grading.
“This is machine learning and there is a long way to go, but it’s good enough and the upside is huge,” he said. “We found that the quality of the grading is similar to the variation you find from instructor to instructor.”
These essay-grading systems are just one of a class of sophisticated online assessments, including Q&A engines, simulations, and immersive environments, that are poised to offer meaningful, immediate feedback to large numbers of learners through MOOCs and other educational environments.
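The workflow the Times describes, where instructors hand-grade a seed set of essays and a model then learns to score new submissions, can be sketched in miniature. The sketch below is purely illustrative and is not edX's actual system (which the article says uses "a variety of machine-learning techniques"); here we stand in a simple bag-of-words k-nearest-neighbor regressor, and all function names, the `k` parameter, and the tiny training set are hypothetical.

```python
# Illustrative sketch of "train on human-graded essays, then grade new ones."
# Hypothetical stand-in technique: k-nearest-neighbor regression over
# bag-of-words cosine similarity, using only the Python standard library.
import math
from collections import Counter

def bag_of_words(text):
    """Lowercased word counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(graded_essays):
    """'Training' here just stores the human-graded (text, score) pairs."""
    return [(bag_of_words(text), score) for text, score in graded_essays]

def grade(model, essay, k=3):
    """Score an essay as the similarity-weighted mean of its k nearest graded neighbors."""
    vec = bag_of_words(essay)
    nearest = sorted(model, key=lambda p: cosine(vec, p[0]), reverse=True)[:k]
    weights = [cosine(vec, v) for v, _ in nearest]
    if not sum(weights):
        # No overlap with any graded essay: fall back to the mean grade.
        return sum(s for _, s in model) / len(model)
    return sum(w * s for w, (_, s) in zip(weights, nearest)) / sum(weights)

# Hypothetical human-graded seed set (scores on a 0-100 scale).
training = [
    ("photosynthesis converts light energy into chemical energy", 95),
    ("plants use sunlight to make food through photosynthesis", 90),
    ("the essay is off topic and discusses weather", 40),
]
model = train(training)
print(round(grade(model, "plants convert light into chemical energy", k=2)))
```

A real system would need far richer features than word counts (the article notes the software can also tell "whether an answer was on topic or not"), but the core loop is the same: a small hand-graded corpus defines the target, and the model interpolates from it instantaneously for each new submission.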
Does anyone know when a demo might be available?
We don’t, sorry.