Diagnosis is an important concern in the field of medicine. For some reason it is not considered so important in education. It is not unknown in education - in fact it is done every day. However I think it is often given little thought and done poorly. There are a number of reasons for this. Many teachers are acting on the performance perspective of teaching and therefore are not attuned to diagnosis as a part of teaching except in a very limited or superficial way. At other times teachers simply don't know what to look for, or are diagnosing at the wrong level. The purpose of this chapter is to analyze and explain diagnosis, in at least some depth. I will first analyze the different levels of diagnosis, and then try to show what to look for in diagnosing learning problems.
Most of the examples I use in this chapter are from my experience as a math teacher in a prison school (which explains why the students have names like Smith and Jones instead of Johnny and Jane). Because this is not a typical classroom situation I will take a minute to explain the setting more fully. I taught in a minimum security prison for young first offenders. Most of the inmates were age 17 to 20. They were serving sentences from two years to six years. With good behavior an inmate might serve as little as one third of his sentence before being released on parole. Also they were credited with time spent in county jail awaiting trial. Thus some of my students would be with me only a few months before being paroled. Others would stay as long as a year or two. The majority of the inmates had not graduated from high school and were assigned a half day of work and a half day of school. Every week I would get several new students, and each week several students would leave, usually because they were paroled, but sometimes because they finished a unit of math credit and would transfer to another class.
This situation, and the fact that classes were kept small - rarely as many as twelve students - dictated individualized instruction. A student in my class would work through a preset course at his own pace. I would grade his daily assignments, provide help when he needed it, and give him chapter tests when he was ready. When he completed the course I would notify the office and he would be transferred to another class.
I began to realize, after I was in this situation a few months, that I was seeing much more about teaching and learning than I could see in the classroom situations I had previously experienced. Not all that I discuss in this chapter can be applied to a normal classroom situation, but I think the general principles are valid.
I will start with two examples from my experience that bring up the question of diagnosis. The first is a conversation I had with a fellow teacher. The second is a routine bit of interaction from one of my classes.
Example 1:
Colleague: "We give them a pre-test and a post-test. That way we can tell what they are learning, what they need to learn, what we need to stress the next time we teach the course".
Rude: "That sounds good, and what do these tests tell you?"
C: "Well they usually show that they need more work on the basics. Some of them can't even add right. So we give them quite a bit of extra practice."
R: "Well I guess they can't complain much about that."
C: "Oh, they do. Some of these smart alecks think they know it all. 'When are we going to get into algebra?' they ask. I just tell them, 'If you think you know it all, what are you doing in this class?' That stops them!"
Example 2:
Rude: "Anderson, there's one thing wrong with this problem. Do you know how to divide decimals?"
Anderson: "Sure, you draw that little line in two places. You gotta do it on both the inside and the outside."
R: "Yes, I see you've done that, but it's not quite all there is to it. You've got to move the decimal point the same number of places on the inside and the outside. It's not always just two places. You move it outside to the end of the number, and you move it inside the same number of places."
A: "Ya, you move it all the way here."
R: "Yup."
A: "So then you move it all the way inside here."
R: "Nope."
A: "But I thought you move it both ways. . . ."
R: "Yes, but always the same number of places inside and out.
Both of these examples illustrate diagnosis of learning, but they are on vastly different levels. The first is very broad. My colleague was talking about diagnosis of a whole class over the whole field of mathematics. The second is very narrow. I am looking for the mistake on one particular problem and pointing it out to the student.
I will use the term "placement level of diagnosis" to refer to the broad levels of diagnosis such as my colleague was talking about in the first example. The purpose of this diagnosis is to find out where the students are in the subject matter, to find out where they should be placed, or to find out if they are wrongly placed. In contrast to this I will use the term "correction level of diagnosis" to refer to the detailed types of diagnosis involved in getting a student over a particular stumbling block in his day-to-day learning. The second example illustrates the correction level of diagnosis.
These two levels of diagnosis can be compared to the coarse and fine adjustments of a microscope. The coarse adjustment permits one to quickly bring the specimen into rough focus, and then the fine adjustment allows precise adjustment of the focus. Correspondingly, the placement level of diagnosis permits one to place the student at the proper level in a subject, and the correction level of diagnosis permits one to help him make progress at that level.
Each of these two levels could be endlessly subdivided, and the two levels merge at the boundary between them. The placement diagnosis my colleague was talking about involved the whole class. Determining whether a particular student belongs in a particular class is another level of placement diagnosis, a lower level. When a state board of education seeks to determine if the students of the state are being adequately prepared for "the computer age", a higher level of placement diagnosis is involved. Similar distinctions could be made on the correction level. One minute a teacher might be trying to pin down a misconception that she suspects is shared by the entire class and is making the current chapter particularly difficult. The next minute she might be seeking a calculation error that is throwing one student off on one particular problem. Thus instead of a two part system of classification of diagnosis, we might have a multipart system. For simplicity I will stick with the two part classification in this chapter.
Both levels of diagnosis are needed, and one level cannot substitute for the other. Without the placement level of diagnosis a student may be placed in an impossible situation, either because he is overplaced and cannot do the work expected of him, or because he is underplaced and therefore his time is being wasted. In either case he will make little progress. Once he is correctly placed then he is in a position to make progress, and then the correction level of diagnosis can come into play. He will run into difficulties, and without help these difficulties can be insurmountable. Help consists of finding out what his problem is and helping him overcome it, and then repeating the process as many times as needed.
The correction level of diagnosis is a part of day-to-day teaching if one uses the management perspective of teaching. It is not a part of everyday teaching in the performance perspective. The placement level of diagnosis is not a part of day-to-day teaching in either perspective. Rather it is done before teaching begins, and perhaps never done again.
Having defined these two levels of diagnosis I will next discuss each in some detail.
Placement diagnosis may be done formally or informally. When a teacher or counselor tells a student "I really don't think you ought to take chemistry" based on his subjective judgment, he is acting on informal placement diagnosis. A slightly more formal kind of placement diagnosis occurs at the end of the school year when teachers and principals consider holding back some students who are not ready to proceed to the next grade. Informal placement diagnosis also occurs at the beginning of the year as teachers size up their new classes and perhaps make some changes in their plans or expectations. Placement diagnosis may be simply left undone in many cases, resulting in students being left in classes which are not appropriate for them. At the other extreme, placement diagnosis may be done formally and conscientiously, with diagnostic tests, conferences, etc. Larger schools which practice homogeneous grouping may put considerable effort into correctly placing each student.
I didn't have homogeneous grouping when teaching at the prison school. However with individual instruction I felt I had an obligation to do careful individual placement diagnosis on each new student. I will describe my approach to this diagnosis, as it leads to some further observations. Each time I got a new student in one of my classes I would go to the files and look up his standardized test results. If, for example, he scored 5.7 (meaning seventh month of fifth grade) on arithmetic reasoning and 7.2 on arithmetic computation then I would have some idea of what to expect of him in math. I would expect him to know how to add, subtract, and multiply whole numbers. I would expect him to make a few common errors in division of whole numbers. I would expect him to know that he must get a common denominator when adding fractions, but I would expect many mistakes in applying that knowledge. I would expect that he has pretty much forgotten all he ever learned about multiplying and dividing fractions, that he knows decimals are done quite a bit like whole numbers except for putting the decimal point in the right place, that he probably doesn't usually know just where that place is, that percents are pretty much of a mystery to him, and so on.
All this could be predicted from his standardized test results. However, I would not act on these predictions. Standardized tests can be misleading at times, and they can be just plain wrong. Even if the tests are accurate they do not tell how a student came up with his score. Before setting the student to work in a course of study I would like to know just what combination of knowledge and ignorance resulted in his score. I would like to know if there are any quirks or abnormalities in his knowledge of mathematics.
I devised my own diagnostic test and duplicated it. This test consisted of about five pages of mathematics problems starting with simple addition and subtraction of whole numbers and ending with a few problems in basic algebra. This test just repeated what the standardized tests had done, of course, but with one important difference - it was my test. I knew it intimately, especially after I had used and revised it over a period of time. Therefore I could come up with more than just a score. By going over the test thoroughly I could get some idea of the content of a new student's knowledge of math.
Usually my test would only confirm, in general terms, what the standardized tests had indicated. It was not uncommon, however, to encounter something unexpected. One situation that occurred several times was that a student would start out on the test and make a few mistakes on the easy problems, suggesting that he would make progressively more mistakes until he finally got stuck solid on the second or third page. But this would not happen. Instead he would just keep going, always making enough mistakes to cast doubt on his knowledge of the topic, but always making enough progress to indicate that he had some idea of what he was doing. In this way he would come up with a poor score, perhaps corresponding to about seventh grade level, but with good evidence that he was somehow beyond that level.
There are two general explanations for this kind of situation. In the first case the student has an understanding of the principles involved, but has never gotten all the details straight. This may be because his previous teachers were so concerned that he understand the problems that they neglected to make sure he knew the "how to." This situation would be aggravated if the student were intelligent and even interested, but not a steady worker and not motivated to make good grades. I will call this type of person an "intelligent detail dropper."
A second explanation for this type of situation is that the student is conscientious and has diligently learned the "how to" of many types of problems, but is not too capable and has never gained the understanding that would normally be expected to underlie this knowledge. I will call this type of person a "conscientious algorithmist" (an algorithm being a mathematical recipe to follow, with or without understanding).
These two situations are opposites, and would call for opposite approaches from the teacher. Therefore when faced with such a student I would have to extend my placement diagnosis a bit further. I would have to keep digging in his mind until I figured out in which category he belonged. I would not do this extended diagnosis formally, however. Rather I would just look more closely than usual at his diagnostic test, and I would watch his progress more closely than usual for a few weeks. Soon I would figure out what type of student I was dealing with.
Most students at the prison school would fit the standard mold. Their standardized test scores would be accurate indications of their ability and knowledge. A few would seem to fit the intelligent-detail-dropper or conscientious-algorithmist mold. And then there were a few that seemed unique. One student, who I first thought was not too bright, turned out to be intelligent, but in a strange way. I could devote pages to describing him, but I cannot explain him. He remains a puzzle to me to this day. There was another student who I thought was of average intelligence at first. He proved to be of average intelligence all right, but the longer I knew him the more I realized that there was something different about him. He seemed to have a personality that I couldn't figure out, and a way of learning to match. Again I could devote pages to describing him, but I cannot explain him.
I feel I did justice to these two students, and a few other "oddballs", simply because I was able to place them on approximately their level and make good use of their time. However I surely cannot claim that my methods of placement diagnosis were totally adequate. Placement diagnosis for such students must continue indefinitely.
Compare the individual placement diagnosis that I have been describing to the group placement diagnosis my frustrated colleague was doing in the example at the beginning of this chapter. It was my colleague's frustration that made me remember the conversation and think about these matters. The diagnosis was made on a group basis, not an individual basis. There was a wide variety of students thrown together. No grouping of individuals could be done on the basis of the diagnosis. The course was a night class that met twice a week for a few months and was meant to help the students pass the math part of the high school equivalency test. Circumstances did not permit homogeneous grouping or much individual help, so a considerable degree of frustration is not unexpected. The teacher was probably correct in thinking that he was teaching on the right level for the class, but that did not ease the frustration of those who were overplaced and lost, or of those who were underplaced and bored. However I don't know what could be done in this situation. I tried teaching such a course myself for a time, but gave it up as too frustrating.
The individual placement diagnosis that I used, first by standardized test, then by my own diagnostic test, and then on an informal basis for as long as might be needed, is time consuming. It can be justified only if the teacher is in a position to act on this diagnosis. This was the case in the prison school. Most of my students were in general math, but when I found a student who could study algebra, then I taught him algebra, or geometry, trigonometry, or even, for one student, calculus. I could act on my diagnosis. But that was not the case with any other teaching situation I had, and it surely is not the case for most teachers. When I taught science in a public school I had little flexibility. The seventh graders were in seventh grade science, the eighth graders in eighth grade science, the ninth graders in earth science, the tenth graders in biology, and I had a few eleventh and twelfth graders in chemistry. Placement was set before I arrived on the scene. Therefore I had no reason to devote time and effort to formal placement diagnosis.
I will now turn to the correction level of diagnosis, which is illustrated by the second example given at the beginning of this chapter. Correction diagnosis is usually done on an individual basis, though at times it may be done on a group basis if the whole class gets hung up on the same problem. This diagnosis has one object in mind: to get the student over a particular problem. It might seem that if a student is working on the right level and has the benefit of a clear presentation of information, then things should not go wrong. He should not get stuck. However, experience shows that students do get stuck. They get stuck so solidly that their learning would totally stop if they did not get help. I think this is probably the main reason that few people do any great amount of serious learning on their own without a teacher, and it is the reason that the management perspective of teaching is necessary at any level below college.
Correction diagnosis is done primarily by verbal communication with the student, by eliciting his thoughts on the topic in question and discovering how these thoughts are going wrong. Correction diagnosis, unlike placement diagnosis, cannot be done formally. Correction diagnosis is concerned with details, and there are far too many details in any subject for all of them to be put on a diagnostic test. Rather the teacher must simply "go over" the material with the student until he discovers what the problem is. At times this can be time consuming, but not always. Sometimes, especially if the teacher has a little experience and knows what to look for, it can go very quickly.
A common fault of teachers in doing correction diagnosis is that they want to do all the talking. Instead of really finding out what the student's problem is, they jump right in and repeat their explanation of the topic. Since the student didn't understand this explanation when it was first given he can hardly be expected to understand it if it is simply repeated. In fact he may be less likely to understand it because he recognizes it as being a repetition and therefore "tunes it out."
Repetition itself is not always bad; in fact it is often needed. But it is not diagnosis. Notice in the second example at the beginning of this chapter that I twice repeated the explanation that Anderson had available to him in his workbook - " . . . You've got to move the decimal point the same number of places on the inside and the outside." However I did this in the context of a particular problem, I did it in response to the student's statements, I elaborated on it, and we stuck with the problem until he understood it. Repetition in order to elicit a response, or repetition for emphasis, can be valuable at times. But blind repetition is seldom valuable, and it is not diagnosis.
There are a number of things to look for when doing correction diagnosis. I have found that most difficulties students encounter can be classified into one of the following categories:
- erroneous elements
- structural gaps
- fragile structure
- hidden assumptions
- wrong priorities
An erroneous element can simply be a wrong fact, such as when a student "knows" that Columbus discovered America in 1643. This can be exposed by a little digging. It is one of the benefits of recitation. When the teacher asks questions over the material and calls on students to answer them it is quite common to uncover errors. A student exclaims "Oh, I thought . . . . .!" and the teacher reaffirms the correct fact.
A structural gap is simply a bit of missing information that prevents other bits of information from making sense. In the going-to-the-movies example of the previous chapter the omission of any of the first eight statements would constitute a structural gap. Most structural gaps are rather subtle, as illustrated in this example:
Student: "How do you get this one? It says to find the premium of a $5000 policy if the rate is 28 cents per $100.
Teacher: "Do you know what insurance is?"
S: "Sure, that's like the building burns down so the insurance company pays for it."
T: "Right, and do you know what rate means?"
S: " . . . . . . uh . . . ."
T: "Twenty eight cents is what you would pay if you wanted to be paid $100 when the building burns down. If you wanted to be insured for $200 you'd pay fifty-six cents."
S: "Okay, then how much for $5000?"
T: "Do you know how many 100's there are in 5000?"
S: "Well . . . five . . . no, . . . divide, . . .50."
T: "Right, you have fifty of them, and each one costs twenty-eight cents. So what do you do?"
S: "Okay, I get it, fifty times 28 cents."
In this case the student did not quite know what "twenty-eight cents per hundred dollars" meant. He had a gap in his structure of knowledge. His structure contained the ideas of division, multiplication, insurance, money, and so on (which is a rather extensive body of knowledge, when one starts to analyze it). All I had to do was fill in one gap. Then it all made sense. I easily found this gap. It was the second thing I checked. The first thing I checked was whether or not he had the basic idea of what insurance is. When his response indicated that he did, then my next guess was that he didn't understand the rate. This proved to be the case. When he got this missing bit of knowledge, when he closed this gap in his structure of knowledge, he was able to put it all together.
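Spelled out, the arithmetic the student finally saw is simple (the $14.00 total is my own figure; the dialogue stops at setting up the multiplication):

$$\$5000 \div \$100 = 50 \text{ hundreds}, \qquad 50 \times \$0.28 = \$14.00$$

All of the difficulty was in the meaning of "per $100," not in the computation itself.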
A structural gap is often easy to find if the teacher is very familiar with the students' thinking. It can be very hard to find when the teacher is not familiar with the students' thinking. I had the student in the above example in my class for several months and had helped him on problems almost every day. Further, I had a coherent course of study laid out for him, and had helped many other students through the same problems. Thus it only took a minute or so to help him. But in a different situation the same difficulty might be hard to find. A friend or neighbor might come to me and tell me that his insurance agent computed the rate on his fire insurance, but that "it just doesn't make sense, it doesn't click". I would certainly try to help my friend, but it wouldn't be as easy as helping a student. There would be a much broader body of knowledge to investigate. When I lay out a coherent course of study for a student I know that the knowledge and skills needed for today's work have been covered in the previous lessons. This greatly narrows the range of possible problems that I must check out. When a friend or neighbor comes to me with the same problem I would have no idea what knowledge and skills he might possess. Therefore my job would be much larger. In such a situation I might be able to be of help, but it is also quite possible that I could do little more than confirm that the insurance agent had indeed done his computations correctly. It is perhaps even more possible that the actual data is not available. The insurance agent did the calculations back in his office and did not really present figures that could be checked out. The gap in my friend's structure of knowledge cannot be filled.
I suspect that structural gaps are seldom found by college professors (or by high school teachers who only lecture). As a college student I often observed a student ask a question in class and the professor conscientiously try to answer it, but with little success. The professor's answer would miss the mark, so the student would rephrase the question. The professor would try again, but again with little success. After several attempts like this they would disengage, for neither student nor professor would want to tie up the whole class for very long. This is not to say that college professors do not teach well. Their type of teaching is simply different from the teaching that is expected at lower levels. College professors are paid to give a coherent presentation of the subject, not to dig into every student's brain. They are not expected to apply the management perspective of teaching except in a very superficial way.
A "fragile structure" is somewhat similar to a structural gap. The material doesn't quite make sense to the student. Something is missing. However in a fragile structure it is not possible to find one bit of information that is definitely missing. Rather each essential bit of information has been learned, but only superficially and cannot be depended on to be remembered when needed. The student has all the parts of knowledge that he needs but he just can't "get it all together." As an example of a fragile structure I will modify the example given above. The student might bring me a problem something like this:
"I just can't seem to make it come out right. First I did this:
"That didn't work out, but then I figured out what was wrong. So:
" And then I saw the 500 was wrong, so:
"But that's not right. I messed up the 50 again. So then I got this far:
"And that's close, but it's not what the answer book says. So then I tried this:
"But I know that's not it either . . . . ."
If you go through this example carefully you will find a different error in each computation. There is no single structural gap to be found. I've met this type of fragile structure many times in my teaching, though most students don't have the persistence to go this far on their own. The cure for such a problem is basically more practice, as I will discuss in the next chapter.
A fragile structure cannot be deeply hidden, but neither can it be pinpointed. A structural gap, in contrast, can be deeply hidden but once it is found then there is no question about it. When a structural gap is found and filled in the student goes back to work eagerly, for the topic finally makes sense. When a fragile structure is found the student goes back to work with considerably less enthusiasm, for his work is only beginning.
Fragile structure and structural gaps can grade into one another. In some cases it may be hard to say whether the trouble is caused more by one particular bit of information being entirely missing or by a lot of information being halfway learned. In such a case it is not crucial whether one calls it a structural gap or a fragile structure. The correction consists of filling the gaps as they become apparent and giving more practice. The example I gave of a structural gap, that the student didn't realize how rate works, might be considered a fragile structure. Understanding the rate is a rather complex structure of knowledge in itself. Obviously the student had some understanding of rate. I called it a structural gap because the student had not identified the rate as the part of the problem that didn't make sense.
Fragile structure should not always be considered a "difficulty", for it is also just a regular part of the process of learning. If I carefully read over a chapter in a history book I may feel that I have learned it. However if I wait a day or so and then take a test on it I will find that I hadn't learned it after all. I will find that I have forgotten a lot of facts, and I can't fit together satisfactorily the facts that I have retained. My structure of knowledge of that chapter was very fragile, in other words. That should be no surprise. Of course it was fragile. I may have built the structure of knowledge in my mind adequately when I first read over the chapter, but it takes more than one careful reading to really learn something. Thus teachers assign homework and they have recitation in class. In other words they make sure the students get some practice in what they learn.
Without practice in using the ideas one learns the structure of knowledge will remain fragile. Failure to recognize this is what I call "assumption of fluency", and is a very common error among teachers. In fact I believe it is the number one systematic error made by teachers.
This error, assumption of fluency, came to my attention one day when I was a student in a college chemistry class. The professor had been lecturing most of the hour when a student raised his hand to ask a question. I don't remember what the question was, or even what the general topic was, but I remember well the professor's response - "Oh, but I covered that at the beginning of the hour. Didn't you get it?" The professor was honestly taken aback. He thought he had delivered a coherent lecture, and indeed he had. He therefore assumed that the information he covered was at our fingertips. It wasn't. It was in our notes, of course, and in our textbook, but it was not at our fingertips. The professor was assuming fluency in the knowledge that he had covered. This fluency would come eventually, after we had gone over our notes, read the textbook, done the problems at the end of the chapter, and so on, but not until then.
Obviously teachers at any level must assume some degree of fluency with what has been previously taught or no progress could ever be made. The problem occurs only when the teacher assumes an unrealistic or unattainable degree of fluency. That is like trying to build a house on a concrete foundation before the concrete has begun to harden. The whole structure will come tumbling down. The solution is to provide for fluency by providing practice, and then by expending the effort to get feedback and to act on that feedback.
Structural gaps and fragile structure do not involve any extra elements in the structure of knowledge, elements that do not belong and thereby cause trouble. This is certainly not always the case though. It is quite possible for a student to "know" something that isn't true. This may simply be an erroneous element in the structure, such as I previously described. But it may also be more than this. There may be a structural element in the form of an assumption or premise that is totally unexpected by the teacher. Consider this example, which is hypothetical but very representative of what I am talking about:
Student: "This problem, 'Find the sum of 6x - 3y and 2x + y.' I don't know how to set that up."
Teacher: "Well you just add the X's and then add the Y's, like adding apples and oranges. How many X's altogether?"
S: "Well, 6x and 2x, that's 8x. I understand that."
T: "Right, then how many Y's?"
S: "Okay, -3y and y, that makes -2y."
T: "Sure, so what's your final answer?
S: (writing on paper) "6x - 3y + 2x + y = 8x -2y"
T: "Yes, . . . . but to write it down you probably ought to put parenthesis around the 2x + y."
S: "Like this?" (again writing) "6x - 3y + (2x + y) = 8x - 2y"
T: "Right, that's how you do it."
S: " . . . . . okay . . . . ."
The student is very dissatisfied with something. What is it? He has the right answer. He will probably be able to do the rest of the problems on the page. Why is he dissatisfied?
The problem here is a hidden assumption. But this hidden assumption is not apparent from what I have presented in this example. The student "knows" that every problem starts with an equation, and then the equation must be solved. He learned this back in the third chapter, when every problem did start with setting up an equation, and every equation then had to be solved. This hidden assumption would serve him well for much of algebra, but not all. When polynomials are introduced we do not start by setting up an equation. We learn to manipulate polynomials. Equations are only peripherally involved. The student's hidden assumption is not valid. It will become valid again in the next chapter when polynomials are used in more advanced equations.
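The contrast the student is missing can be put side by side. The first line uses the numbers from the dialogue; the second equation is my own illustration, not from his textbook:

$$(6x - 3y) + (2x + y) = 8x - 2y \qquad \text{(an expression is simplified; there is nothing to solve for)}$$

$$6x - 3 = 2x + 1 \;\Rightarrow\; 4x = 4 \;\Rightarrow\; x = 1 \qquad \text{(an equation is solved)}$$

Until he sees that the first kind of statement is complete as it stands, the answer 8x - 2y will keep feeling unfinished to him.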
This assumption is an extra element in the structure of knowledge. It needs to be deleted. It is easy to delete when it is addressed directly, but not so long as it remains unrecognized and unverbalized. In the example above it is unrecognized and unverbalized. The only clue to its existence is the student's dissatisfaction, his puzzled expression that says that something is not quite right. If the teacher will say, "Are you thinking you set up an equation and solve it, like we did in chapter three?", or if the student will say, "Okay, we've set up the equation. Now we solve it to find out what x is, right?" then the hidden assumption, the extra element, can be quickly rooted out. But if neither teacher nor student has any clue to this hidden assumption, it may remain and cause confusion indefinitely. Hidden assumptions can be much harder to find than structural gaps or fragile structures.
Here is another example from my experience:
Smith: "It says here to find the perimeter of this triangle. The sides are 25, 25, and 30, and the altitude is 20. So I added 25 and 30 and multiplied by 2 . . . . . . . . . . But that doesn't seem right. . . . . . . , but that's the way I did it back on this page. . . . . . . . . "
Rude: "Yes, but that's the formula for the perimeter of a rectangle. This is a triangle, so you add all three sides together . . . "
S: "Then what do I do with the 20? And why should a triangle be any different than a rectangle? . . . The formula says . . . . ."
R: "Perimeter is the distance around. The distance around is the sum of the three sides. Add 25, 25, and 30."
S: "okay . . . . but the formula . . . "
This is similar to the last example. The student is trying to do today's problems with yesterday's methods. However it is a little deeper this time, and it took a while before it began to dawn on me what was going on. Smith was assuming that you always find the right formula and apply it. In spite of the fact that I had just said, "perimeter is the distance around," and in spite of the fact that the textbook had said the same thing, Smith's definition of perimeter would be, "the number you get when you work the formula." The text had not given a formula for the perimeter of a triangle because it is so simple. Therefore Smith was assuming that he should use the last formula that was given. More importantly, he was assuming that the topic was abstract, not concrete, that the problem was to be worked by following special mathematical rules and methods, not by common sense. Therefore he had laid his common sense aside.
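For the record, here is the arithmetic Smith needed, along with the only place the altitude would have entered (the area computation is my own aside, not part of his assignment):

$$P = 25 + 25 + 30 = 80, \qquad A = \tfrac{1}{2} \times 30 \times 20 = 300$$

The 20 is a distractor in a perimeter problem; it matters only if the question asks for area.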
If Smith laid aside his common sense for this problem, doesn't that mean he probably had laid aside his common sense in the previous lesson where he got all the answers right by applying the formula?
Compare these last two examples. The faulty element in the previous example - the rule that you set up an equation and solve it - was a good and valid element in another context. It simply didn't belong in the present context. It was an extra element that had to be deleted. But it was concrete enough to be identified if one would only put forth a little effort to look into the student's thinking. Smith's faulty element - the assumption that you apply rules, formulas, and techniques, but not common sense and understanding - is not correct in any context. Further, it is rather abstract. It is not easily verbalized and was totally foreign to the teacher (me). Thus it was very hard to uncover and deal with.
Smith's faulty assumption brings up another common error that teachers make, an error that I call the "assumption of right thinking". This is the assumption that if the answer is right then the thinking behind it must also be right. Obviously this is not always the case. The thinking can be dead wrong and the answer may still come out right. It is for this reason that math teachers demand that the work be shown on math problems. Smith, in the above example, had been doing adequate work. This example brought out the fact that he had been doing it in a mechanical way with little understanding. His thinking was not totally "right", even though most of his answers were.
I believe the assumption of right thinking is probably systematic error number two, right after the assumption of fluency. This is especially true in technical subjects such as math and science (subjects with a structure of implication, as opposed to a structure of accretion, as I will discuss in another chapter). But I suspect it could occur in about any subject.
Structural gaps, fragile structures, and hidden assumptions can all occur together and be hard to disentangle. In fact, out of the many times a day that a teacher helps students with problems, only a few times can the problem be definitely assigned to one of these categories. Usually when I help a student with a problem I never know exactly what the problem is. Rather I dig in and try to find the problem, but after a minute or so the student says, "Oh, I get it . . . . . ," and proceeds to show that, whatever his problem was, he now has it straightened out. The exact nature of the problem usually does not have to be known or verbalized.
The last problem that I will talk about is the matter of wrong priorities. As a student reads a book or listens to a lecture he constantly evaluates each sentence or idea that is presented. Some points are filed away in his mind under first priority. Other points are filed away, but not given much attention. They support the main idea, perhaps, but are not too important in themselves. Still other points are of only momentary relevance and are quickly dismissed. This prioritization is usually done with little effort; sometimes it is done almost unconsciously. It is done in response to cues planted in the text by the author or lecturer, and in response to the context of one's previous knowledge of the subject. Whenever a teacher says, "The main point is . . .," or "for example . . ." or "forget all that, the main point is . . . " he or she is giving cues of priority. When a textbook uses boldface print, or sets something apart in a box, or uses other graphic punctuation, it is giving cues of priority.
There are times, however, when a student's process of prioritization is faulty. Top priority can be given too indiscriminately, or it can be given too sparingly. Or the student may have no idea of what should get high or low priority. Correction of faulty prioritization is usually verbal. The teacher elicits feedback from the student and discovers that he is not giving the proper priorities. The correction is simply to tell the student what the proper priorities are.
Prioritization problems are not always vertical (high priority versus low priority). There can also be "horizontal" prioritization problems. In history, for example, a student may have trouble on examinations because he is attuned to political history but the teacher is more concerned with economic history. Or in literature a student might be attuned to the aesthetics of a piece of poetry while the teacher puts technical questions about meter and form on the test. The cure for all these problems is again to elicit feedback from the student, identify the problem, and give verbal correction or assistance.
In this chapter I have discussed diagnosis of learning problems with little reference to correction. What do you do with problems once they are diagnosed? That, of course, is the subject of the next chapter.