Math: Maybe we’re doing something right?

First day I'm a TA for the Research and Assessment course for teacher candidates, and what question was top of mind? "Why are we doing so poorly in math?" The school board where this Faculty of Education is located sits near the bottom of provincial rankings for mathematics on the annual achievement testing conducted by Ontario's Education Quality and Accountability Office (EQAO). Based on my own teaching and research experience in this board, there may be a problem with the messaging. Through numerous professional learning sessions, teachers are being directed to spend more time on inquiry-based mathematical reasoning than on the basic numeracy practice our curriculum documents also tell us we should be working on. And I don't believe this problem is unique to this board: I have met teachers from other school boards who also feel they need to keep basic math practice under the table.

However, as I thought about my research on how different forms of assessment position us to learn in different ways, I started to wonder if maybe we are doing something right. Teachers in this school board are spending a lot of time helping students do collaborative, inquiry-based mathematical reasoning, because that's what the curriculum tells us 21st century learning should look like. But if we want students to do better at writing paper-and-pencil math tests independently, that's what we need to spend a whole lot more class time doing. It's as simple as that. So when Ontario's Ministry of Education signalled at the beginning of this school year that it is looking to reform curriculum, report cards, and EQAO to better reflect 21st century learning, the Peel District School Board began a petition to put EQAO on ice. We may not all agree on what mathematical skill should look like in the future, and we need to have those debates. But it's time we stop asking ourselves what teachers are doing wrong.


To All the Grades I’ve Known Before


Did I just use you for your looks? Did I blow you off if you weren’t good enough? Did I trust you?

I just read two papers on faculty/student perceptions of assessment in higher ed: Fletcher et al., 2012 and Hernández, 2012. Like many authors on assessment, they share a concern with the problem of using the same assessment both formatively, as feedback to improve one's work, and summatively, as a grade. Both studies were large-scale surveys of faculty and student perceptions of assessment in higher education, but they were conducted in different ways and locales. However, they came to a similar finding:

Students generally don’t find marks useful for learning or trustworthy.

Fletcher and colleagues surmise that faculty need to make their assessment practices more transparent to students and help students see how feedback can be useful. I concur wholeheartedly. Hernández dug a little deeper, questioning whether a mark with written commentary can be considered feedback at all, especially if it arrives at the end of a term. Both cite Carless, 2007, who contends that faculty need to develop strategies for effective feedback to students and take a learning-oriented (assessment for learning) approach to assessment. Sounds good to me.

But it must also be recognized that feedback happens in post-secondary education in many more ways than written commentary on an assignment. Meetings with students, answering emails, tweeting, office hours, reviewing drafts, commenting on online forums, performance coaching…as Harry Torrance points out, the workload for ‘assessment as learning’ can be just as burdensome or even heavier for faculty as it is for students. Torrance wonders if such learning is really learning at all. Many have bewailed ‘grade inflation’ due to all the extra help students get these days. Here is a link to The Good Enough Professor, who defends the fact that many of her students earn an A precisely because of the intensive feedback she gives them.

Epic Win? Feedback From Game-Play For L2 Learning


A teacher educator who works with EFL teachers recently asked me about strategies for formative assessment. Second language (L2) learners are often reluctant to speak in class for fear of making mistakes in front of their peers. How can we get L2 learners responding in a way that gives teachers the evidence they need to focus instruction without shutting the conversation down? I immediately thought about using Kahoot! [1]

What the heck is a kahoot?

Kahoot is a free[2] platform designed to motivate students by creating 'winning moments' that leverage emotion, competition, and the shared experience of game-play. Teachers can find and repurpose games, or create their own multiple-choice or survey-style games in minutes using a simple template. They illustrate each question by selecting images, videos, or animations from Kahoot's library, or embed their own. Kahoot generates a projectable timed quiz game with music, visuals, and countdowns. Students use their own devices to select from four responses for each question. The winners of the game are posted on the classroom screen. Promotional videos of classes 'kahooting' show a lot of fist-pumping, shouting, game-playing action.


Should I use Kahoots in my Second Language Classroom?

Kahoots get students reading and responding quickly, and we know that reading and fluent recall are important for second language acquisition.[3]

A kahoot could be an excellent tool for introducing or reviewing defined content such as vocabulary, language chunks, and grammatical forms, and the visuals support comprehension. The game design motivates students because it is fun, everybody plays, and students receive immediate corrective feedback on their personal devices without feeling centred out. A kahoot can also give teachers real-time feedback to focus instruction by projecting the number of responses for each choice on the screen. After the lesson, teachers who created and hosted the game can also access individual results for students.

Are there down-sides to using kahoots? The forced choice and silent nature of the response design means language knowledge is simplified and decontextualized from language use. And then there is the competition question. Some students may feel demoralized if they never ‘win’ the game.

How can I get the most out of Kahoot for my L2 students?

Think of a kahoot as a conversation starter. Used at the beginning of a lesson, a multiple choice kahoot provides feedback on group comprehension which can be used to focus instruction and design communicative opportunities. Structured as a survey, a kahoot can springboard open-ended discussions. Overall, while the skills Kahoot!™ targets are relatively simple, it is the accessibility of the response and feedback design which is attractive to language learners. The most powerful way to add complexity and winnable moments may be to get students designing their own kahoots.

[1] This post has not been solicited by Kahoot! and the author receives no financial benefit from Kahoot!

[2] Teachers should know that ‘free’ online content always comes at a data collection cost. However, getkahoot.com does have a privacy policy which protects users under the age of 13 from third party data sharing.

[3] Lightbown, P., & Spada, N. (2013). How languages are learned (4th ed.). Oxford: Oxford University Press.

The Urban Legend of Formative Assessment

What's the problem with formative assessment?

I happen to agree with the pedagogical principles outlined in Black and Wiliam's Theory of Formative Assessment.[i] As a teacher, I much prefer the work of facilitating student learning to evaluating it. And assessment for learning promised to be a panacea for improving student achievement. Governments around the world raced to implement reforms calling for the sort of formative assessment espoused by 'Inside the Black Box: Raising Standards through Classroom Assessment' (Black & Wiliam, 1998a) in the journal of educational taste-making, Phi Delta Kappan. In this article, Paul Black and Dylan Wiliam summarized their large review of empirical studies on disparate practices of classroom formative assessment,[ii] claiming standards could be raised if teachers helped students understand what success looks like for a given task or learning goal, took note of students' learning processes to make instructional adjustments, and provided non-evaluative feedback before summative grading so that students could refine their performance along the way.

Somehow, someway, this concrete assertion of common sense pedagogy touched a nerve. Maybe the needle wasn't moving far enough after the education accountability movement of the 80s and 90s swept the globe; maybe assessment researchers were looking for a Trojan horse to breach the fortress large-scale standardized testing had built in the hearts and minds of policy makers. Ballooning to mythic proportions, the review became a 'meta-analysis,' and the effect sizes reported in the literature morphed from large (0.4) to mind-blowing (1.0).[iii] As Randy Bennett (2011) points out, even the more modest claim of a 0.4 effect size would mean roughly double the typical gains U.S. elementary students show on standardized tests in a year. Taking stock, Bennett called out "respected testing experts" Popham and Stiggins (2011, p. 10) for exaggerating the claims and effect sizes; he also critiqued the inclusion process and findings of the original review, equating the entire research narrative around formative assessment to urban legend:

In their review article, then, Black and Wiliam report no meta-analysis of their own doing, nor any quantitative results of their own making. The confusion may occur because, in their brief pamphlet and Phi Delta Kappan position paper, Black and Wiliam (1998a, 1998b) do, in fact, attribute a range of effect sizes to formative assessment. However, no source for those values is ever given. As such, these effect sizes are not the ‘quantitative result’, meta-analytical or otherwise, of the 1998 Assessment in Education review but, rather, a mischaracterisation that has essentially become the educational equivalent of urban legend (Bennett, 2011, p. 12).
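For readers who want the arithmetic behind those headline numbers made concrete, here is a minimal sketch of Cohen's d, the standardized mean difference that figures like 0.4 and 1.0 refer to. The score values below are hypothetical, invented only to illustrate the scale; they are not from Black and Wiliam's review or Bennett's critique.

```python
import math

def cohens_d(mean_treat, mean_control, sd_treat, sd_control, n_treat, n_control):
    """Standardized mean difference between two groups,
    using the pooled standard deviation as the unit."""
    pooled_var = (((n_treat - 1) * sd_treat**2 + (n_control - 1) * sd_control**2)
                  / (n_treat + n_control - 2))
    return (mean_treat - mean_control) / math.sqrt(pooled_var)

# Hypothetical example: a 6-point gain on a test whose scores
# have a standard deviation of about 15 points.
d = cohens_d(mean_treat=506, mean_control=500,
             sd_treat=15, sd_control=15, n_treat=100, n_control=100)
print(round(d, 2))  # prints 0.4, the "large" end of the reported range
```

On this scale, an effect of 0.4 moves the average student to roughly the 66th percentile of the comparison distribution, which is one way to see why even the modest end of the claimed range would be a remarkable result for a classroom intervention.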

But 'Inside the Black Box' had become the little black dress of the education world, 'assessment for learning' an easy fix to dress up any research or policy initiative. Rick Stiggins is often credited with coining the terms assessment for, as, and of learning to help teachers over the logical hurdle of how and when to use judgements of student work both formatively and summatively. The term assessment literacy also took on a life of its own when Stiggins and colleagues (2006) said there was a 'right' way to do assessment for learning.[iv] Take this claim with a grain of salt: as Morrissette (2011) points out, research which states teachers are failing at formative assessment, or any other practice, often has more to do with how teachers measure up to idealized norms in interviews and on surveys than with what they are actually doing.

Two decades on, the staking out of research territory on ways to improve assessment for learning continues,[v] along with a rather cranky commentary from Paul Black, who notes that if people had attended to his program of research after 1998, they would have realized the claims were overstated in the first place:

The 1998 review was too optimistic where it said there was enough evidence to justify applying the research findings to practical action (Black, 2015, p. 163).

There are lessons here for teachers, system leaders, and policy makers about understandings of formative assessment. Black argues that he and his collaborators always made the case that improving student achievement went hand in hand with improving teacher pedagogy: a messy, fragile and contingent process (2015, p. 163). Formative 'assessment' was never really about assessment at all in the common sense of judging students, nor was it intended to replace evaluation. It was about developing teacher judgement in the collection and use of evidence of student learning to guide further instruction.[vi] And judgement, in the messy, contingent work of teaching, cannot be boiled down to one-size-fits-all strategies (Bennett, 2011). There is another side to the story, too. There is more than one approach to formative assessment, despite the dominant narrative. Forms of teacher collaborative inquiry such as pedagogical documentation, learning stories, and video inquiry[vii] can also be considered genres of formative assessment.

But the legend of formative assessment is also a morality tale. How should I, as a researcher, conduct myself? How should we, as users of research, respond to claims? The critiques of formative assessment by Bennett and Black remind us to go back to the basics:

  • Don’t jump off the bridge because everyone else is
  • Learn from the past but stay in the present
  • Don’t make promises you can’t deliver
  • Consider the source
  • Show your work
  • Question everything


Norwegian Lessons

Google "Finnish lessons," and the top hits are not apps for learning the language. In 2000, Norway came 13th among OECD nations on PISA (the Programme for International Student Assessment). Finland, their next-door neighbour, was number 1. Like Canada losing to the USA in an Olympic hockey game, Norwegians were gobsmacked. Nothing less than an entire overhaul of their education system was in order. What happened next can teach us a lot about the need for assessment pedagogy.

Leading from the middle: Why we still have large scale assessment in Canada

Screen Shot 2016-09-15 at 2.54.46 PM.png

According to Volante and Ben Jaafar (2008), there is a tension in Canada in how large-scale assessments (LSAs) are used to give feedback to the systems they serve. In theory, an LSA such as Ontario's EQAO assessments of reading, writing, and mathematics can provide teachers with information on individual student achievement that can be used to adapt instruction; however, "the policies regarding the use of LSA call for a system-wide improvement strategy, creating an inherent tension between the logic of the testing structure and the demands of the policy" (p. 207). In practice, leading happens from the middle because at the board level, leaders "communicate both up and down the hierarchy" in their spheres of immediate influence: advocating for funding and directing teacher practice (p. 208). A model of this flow of power illustrates why the balance tips in favour of using LSAs to guide teacher development and allocate resources. If leading were happening at the teacher level or the ministry level, how might the problems and priorities of testing change?

Citation:

Volante, L., & Ben Jaafar, S. (2008). Profiles of education assessment systems worldwide: Educational assessment in Canada. Assessment in Education, 15(2), 201.


Grey Assessment


What are your grey areas of assessment?

On the first day of high school, my daughter was initiated into a secret: her marks were subject to negotiation and self-advocacy. “Let’s just say you need a 90 to get a scholarship, and you have an 88…if you come to me, and you’ve been a hard worker and not given me any problems, I’m going to be more likely to give you that extra 2 percent…” said the bored teacher leaning up against a file cabinet to his neophytes.

So on my first day of grad school, as I thumb through the syllabi and plot out my assessments and due dates, I ponder the participation marks, which typically range from 10 to 20% of the total evaluation. Rubrics generally say attendance and participation in class discussions are the requirements. Some professors delve deeper into collaborative engagement and critical reflection. But really, what takes someone from an 88 to a 90 in participation? How much is just showing up weighted over quality of interaction? In all honesty, how much of the participation mark is reserved for maintaining control?


Assessment as Narrative

The stories we tell about learning are consequential, but they are in the end only stories. Thinking about assessments as narrative constructions is a way to critically explore not just what the stories are telling, but what is telling the stories. This blog is a cafe conversation about assessment where both teachers and academics can find a seat at the table, and more stories can be told. Join the conversation by commenting and sharing.