The comments and feedback on my post just before summer gave me a lot to think about and reflect on. Thanks to everyone who shared their thoughts.
While there are advances happening in places, such as those Paul mentioned at the Open University of the Netherlands, they still seem to be isolated efforts. In the past few months, I attended two major events that drew my attention to some fantastic research happening in various research groups, which would really benefit our education once/if adopted widely. I noticed that while there is a lot of focus on shifts in pedagogy alongside technology, cognitive abilities have started to attract the attention of researchers far more than ever before (more on this perhaps in a future discussion). Learning analytics is also gaining momentum.
The first event was IMS Learning Impact 2015, where I saw an actual "product" emphasis on competency-based learning. Roger Hartley led the development of a report for the IEEE Technical Committee on Learning Technology a few years ago, suggesting a de-facto curriculum for learning technology with a particular focus on competencies, but now we are finally seeing some products that may start to change actual educational practices on a large scale. The IMS extended transcript effort is one example. There was an interesting, albeit brief, discussion on whether Bloom's Taxonomy applies only to the knowledge component, or whether it could also be used for evidence of skills. What evidence can portfolios provide that could be attached to transcripts? One of the questions I have is: considering that different disciplines have different types of competencies, how do we create something that works across multiple disciplines, so that it can actually be used at the institutional level? Any examples?
The issue of assessment remains a tricky one. How many institutions around the world are thinking of aligning the assessment process with how learning actually takes place (or should take place) in the ecosystem of education (or smart learning, if I can still use that term)? The reason I had assessment in parentheses in my previous post, as Clare pointed out, is that very little is being done in this area to change common practices, as far as I can see, even at the research level. There is lots of work on creating mobile and ubiquitous systems that could combine micro-learning and micro-assessment, but we still end up putting our students through two- or three-hour exams in exam rooms of some sort (whether invigilated or take-home). Even presentation-based assessments tend, most of the time, to follow the same summative format they are intended to replace!
The second event was the International Conference on Smart Learning Environments, where learning analytics ruled the day! Rob Koper delivered a fantastic keynote (which sparked heated discussion as the talk progressed, something I had not experienced in a long time). He mentioned that his work so far has made learning faster but not better. The more I think about it, the more I see this to be the case all around. It may be that the assessment instruments we use are primarily geared to measure the speed of learning rather than the quality of learning, or perhaps it is difficult to create a rubric for assessing improvement in the quality of learning? Thoughts?
Here are a couple more questions that have been bothering me recently:
* While classroom-based learning has been around for centuries, it has come under scrutiny in recent years. For example, the whole flipped classroom approach is trying to change the nature and role of the classroom. So, is there still an effective role that the classroom can/should play?
* While education in most parts of the world is subject-based, Finland has recently announced a move from subject-based learning to topic-based learning. Do subjects still have a role in education? In real life, we do not encounter problems confined to a particular subject. So, where do/should we teach the skills for integrating knowledge and skills across those subjects?
I look forward to learning from the discussion…