Use of big data techniques for Massive Open Online Courses

MOOC
Massive open online courses (MOOCs) have gained popularity recently with the widespread use of internet-connected devices and a growing comfort level with taking courses online. So far, the biggest challenge faced by creators of MOOCs has been assessing and grading each student. The problem has mostly been addressed with multiple-choice questions, which work well for math and some other subjects but fall short for language, writing, journalism, and other disciplines.

Some researchers and teachers have been looking to big data techniques for help. As many of you know, unstructured data (such as a piece of writing by a student) is an important part of big data, and there are many tools and techniques for analyzing it. A simple sketch of what such analysis can look like follows below.
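As a minimal illustration of analyzing unstructured text, here is a short Python sketch that computes a few descriptive metrics for a writing sample. It uses only the standard library; the sample text and the choice of metrics are made up for illustration and are not any particular MOOC platform's method.

```python
# Minimal sketch: descriptive metrics for an unstructured writing sample.
# The sample text and metrics are illustrative only.
import re
from collections import Counter

def analyze_writing(text: str) -> dict:
    """Compute a few simple descriptive metrics for a writing sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    word_counts = Counter(words)
    return {
        "word_count": len(words),
        "unique_words": len(word_counts),
        "vocabulary_richness": round(len(word_counts) / max(len(words), 1), 2),
        "avg_sentence_length": round(len(words) / max(len(sentences), 1), 1),
        "most_common": word_counts.most_common(5),
    }

sample = ("MOOCs make courses available to millions of students. "
          "Grading their writing at that scale is hard, so automated "
          "analysis of the text itself becomes attractive.")
print(analyze_writing(sample))
```

Metrics like these are obviously crude on their own, but combined with user profiles and many samples per student they become the raw material for the kind of analytics described in the excerpt below.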

The following is from a recent paper titled "Data Analytics and Assessment of Student Writing," published by the Center for Technology Innovation at the Brookings Institution.

Automated scoring represents one way to make digital assessment scalable to millions
of students. Massive open, online courses (MOOCs) employ automated tools as do a
growing number of private educational companies. This takes the form of style checks,
spelling checkers, grammar tools, vocabulary tests, and pop-up quizzes, among other
things. The best of the tools identify what is wrong but also help students figure out how
to fix their mistakes. When combined with user profiles, these types of analytical tools
enable researchers to parse student learning by year, gender, subject area, styles of
learning, and many other dimensions.

Big data analytics don’t answer every educational challenge. So far, they do best when
applied to tasks such as mathematics or multiple choice tests where machines can easily
distinguish correct from incorrect answers. But it has been more challenging to assess
higher-level functions such as problem-solving or critical thinking. Those tasks require
context and nuances that remain difficult to assess through automated tools.

Increasingly, though, instruction is combining digital content delivery with embedded
assessment. That helps students see how much progress they have made and enables
teachers to determine where they should devote their efforts. It is this progress towards
personalized learning that represents the most promising development in computerized
assessment.

Students learn in very different ways so we need metrics to evaluate those differences. One
of the virtues of online systems is that they encourage students to engage literature at a
personal level. The designers of educational programs encourage students to read books,
interact with friends, and find their own voice in the process. We should develop curricula
that encourage higher order skills. Appropriately designed formative assessments can
fulfill the promise of accountability reforms without narrowing instruction or spoiling the
success of high quality educational programs.
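To make the idea of automated scoring discussed in the excerpt more concrete, here is a loose sketch of one common approach: learning to predict human-assigned scores from text features. This assumes scikit-learn is available; the tiny example essays and rubric scores are invented for illustration and are not from the Brookings paper or any real grading system.

```python
# Sketch of learned essay scoring: TF-IDF features plus a linear model,
# trained on a handful of (hypothetical) human-graded essays.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

graded_essays = [
    "The experiment shows a clear causal link supported by evidence.",
    "I think it is good because it is good and people like it.",
    "The author's argument rests on two assumptions, both examined below.",
    "Stuff happens and then more stuff happens in the story.",
]
human_scores = [5.0, 2.0, 5.0, 1.5]  # hypothetical rubric scores

# Turn each essay into a TF-IDF feature vector.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(graded_essays)

# Fit a regularized linear model mapping features to scores.
model = Ridge(alpha=1.0)
model.fit(features, human_scores)

# Score a new, unseen essay.
new_essay = ["The evidence is weighed carefully before a conclusion is drawn."]
predicted = model.predict(vectorizer.transform(new_essay))
print(f"Predicted score: {predicted[0]:.1f}")
```

A real system would train on thousands of graded samples and add style, spelling, and grammar signals, but the sketch shows why such tools handle surface features well while higher-level qualities like critical thinking remain hard to capture.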

Image Credit: By Elliot Lepers [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons