Having given my first quiz on the first two skills of the year, I’m faced with a dilemma. I’ve tried to make a really big deal out of “scaling both axes consistently, starting from zero,” because I know how important it will be later on in the course. Along these lines, I’ve worded the “X” description of my “skill rubric” as follows: “The scale I used wasn’t consistent, or didn’t start at zero, or many required parts of the graph were missing.”

This sounds all well and good, but according to this rubric wording, all four of these graphs have earned the same grade:

Doesn’t seem totally right, does it? The student who made the first graph clearly knows what a consistent scale is, but thought that 0 to 90 didn’t really count, so do they really deserve the lowest possible grade for their graph? On the other hand, graphing this way won’t make it easy to find “y-intercepts” when the time comes. Foremost on my mind is the very fragile emotional relationship to failure that my students are bringing into my class (see yesterday’s post for details!), and I worry that students like the one who made the top left graph, and there are MANY, will cross a threshold of frustration that’s unhealthy for the beginning of the year.

Am I just being nitpicky about the importance of “starting from zero”? Now that the rubric has been written, does it even matter whether this is picky? Or is this first and foremost another chance to talk about setbacks as learning opportunities?

I’d love any thoughts you have, in the comments below or via Twitter at @josephlkremer! We’re doing our self-assessment on Monday, so I have until then to make up my mind about how to handle it.

#graphing #skills #setbacks #physicsfirst


I have similar dilemmas with my students (we teach physics first to all freshmen). I end up assigning points for each separate portion: one point for starting the axes at 0, one point for each axis that is appropriately scaled, some points for plotting the points properly, points for labeling the axes, a point for the title, and so on. It softens the blow and highlights where they need to improve. It really has to be a bit nitpicky, because many of these errors either lead to big errors in slope calculations or point out some troublesome misconceptions about best-fit lines, y-intercepts, and the like. What I tend to do is emphasize graphing techniques in our first labs and whiteboard sessions, so that by the time they’re taking their first graphing quiz they’ve had at least five in-class opportunities to plot and get feedback. Another thing I have found that helps is to not use any computer plotting programs until we get to constant acceleration; the kids seem to turn their graphing brains off when the computer turns on. Hope this helps; I’m muddling through the same dilemmas.
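For what it’s worth, the point-per-portion scheme described in that comment can be sketched as a simple checklist. The criterion names and the one-point weights below are illustrative assumptions, not the commenter’s actual rubric:

```python
# Illustrative sketch of a point-based graphing rubric, one point per
# criterion. Criterion names are hypothetical, for illustration only.

CRITERIA = [
    "axes_start_at_zero",
    "x_axis_scaled_consistently",
    "y_axis_scaled_consistently",
    "points_plotted_correctly",
    "axes_labeled",
    "has_title",
]

def score_graph(checks):
    """Return one point for each rubric criterion the graph satisfies."""
    return sum(1 for c in CRITERIA if checks.get(c, False))

# Example: the "0 to 90" graph from the post. The scale is consistent
# after the jump, but the axes don't start at zero.
result = score_graph({
    "x_axis_scaled_consistently": True,
    "y_axis_scaled_consistently": True,
    "points_plotted_correctly": True,
    "axes_labeled": True,
    "has_title": True,
})
# result is 5 of 6: the jump costs one point, not the whole grade.
```

The payoff of this style over a binary X/P is visible in the example: the 0-to-90 graph loses a single point instead of dropping to the lowest grade.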

I think that marking all three of these graphs as “X” would be reasonable. Learning how to graph in a way that accurately represents the data, from which students can accurately estimate y-intercepts and get a sense for the “shape” of the data, is (to my mind) an essential skill. It’s an area where attention to detail is important and “almost right” is not “right.” It’s a skill that I’ve found takes a few rounds for students to consistently do correctly, but I think it’s worthwhile to stick to it until they do.

When you choose a binary grading system, you’re almost by definition choosing a system where students with differing degrees of “almost” will get the same grade of “X” or “No” or “Not Yet” or whatever you call it. So, I wouldn’t be overly concerned about them all getting the same grade.

It might also be useful to check in with the math department and see what their expectations are in this area. Perhaps you can reinforce what they’re learning there and/or be explicit about what’s different.

I would modify the rubric and separate “starting the scale at zero” from consistent scaling. It is possible to produce a graph that doesn’t start at zero but still “works,” as long as the y-intercept crosses the y-axis within the consistently scaled region.

Wow. A real dilemma. Good comments above, too. Maybe you could record the “almost” scores with some encouraging symbols, to lighten the blow while still making the point that something critical was missing. I’m thinking of X+, X+++, or the like.

Thanks for the comments, folks! I’m sorry for not being more prompt in my reply, but now I can tell you how things have all turned out…

Anthony, I ended up doing something similar to your suggestion, and you’re right that I should modify the rubric for later on. I gave the graphs with the 0-90 mistake P’s, and circled that jump.

Having now seen their reassessments on this quiz, I can report a lot of improvement! The 0-90 issue came up much less often, because those students now know that it won’t work for physics class. But scaling with numbers taken straight from the data table was still very common, and I’m wondering how to fix it. I think I actually need some space in the X description to DESCRIBE what I’m calling inconsistency, for the kids who are really having trouble making the connection.

Thanks again for all your feedback and suggestions!!