From Now On
The Educational Technology Journal


  Vol 7|No 5|February|1998

Emerging from the Smog:
Making Technology Assessment Work for Schools
by Jamie McKenzie


Why do assessment? What's the pay-off?

There are at least a dozen explanations or reasons . . .

  • Focus
  • Encouragement
  • Learning
  • Invention
  • Steering
  • Denial
  • Marketing
  • Credibility
  • Accountability
  • Prioritizing
  • Grant-Seeking
  • Staff Development

Focus

Some folks maintain that "What gets measured gets taught."

One argument on behalf of robust technology assessment is the opportunity to focus attention on the prime goals.

  • What should students be able to do with these tools?
  • What should strong performance look like?

The assessment program actually helps classroom teachers understand what the program is all about.

You want them to make inferences from Census Data and show their findings with dramatic charts?

You want them to write a new law based upon their electronic research? (GPISD example)

The goals written into a district technology plan or curriculum document may gather dust on a shelf and pass unnoticed into oblivion, while a robust assessment program may cause folks to sit up and pay attention.


Encouragement

We have substantial evidence that task commitment increases when people feel they are being watched. Researchers call this phenomenon the "Hawthorne Effect."

If substantial resources are devoted to assessment and if the activities are sustained, many classroom teachers are likely to take the project and the desired outcomes seriously.

They genuinely seem to care about student research skills because they keep measuring how the students are doing.

Encouragement is much better than enforcement because it is more likely to promote genuine efforts rather than mere compliance.


Learning

Assessment data can teach us which strategies and activities work best and which need to be left behind because they are ineffectual. If we do not gather such data, we cannot understand very fully what is actually happening. We cannot shed our early versions. We cannot rise from caterpillar to butterfly.

Especially when it comes to pioneering efforts [such as the use of networked information to support student research efforts], we cannot rely upon time-honored practice or conventional wisdom. Building a program is, by definition, a discovery process.

Some of the discovery process is "trial-and-error." We cannot discard our least valuable efforts if we do not even know which reached the target and which fell short.


Invention

Assessment empowers us to modify, customize and improve the program midstream. We can create a new version each time we gather new data and see what wants changing. We can adapt and adjust key elements until we are happy with the mix. Louder? Softer? Faster? Slower? Deeper? More challenging? More independent?

We "pilot" a program to see how it needs modification. It's a bit like a good strong sauce cooking on the stove. We taste from time to time to see if we need to change the seasonings. If we could not taste the sauce, we'd be "cooking in the dark" and have no good way to know how much salt, paprika or pepper to add.


Steering

To keep our program "on course" we rely on the equivalent of radar, GPS (global positioning system) and depth finders to tell us where we are and where we are headed. If we fail to collect data, we may be victims of "drift." The winds of chance may combine with unsuspected currents to block our efforts or distract us from our purpose.

The best assessment provides rapid, frequent and "user friendly" feedback to the innovators so that adjustments can be made while the program is underway. The assessment data are used along with other information to navigate past obstacles and problems, steering the program forward in a sound manner.

Unfortunately, much of the educational research of the past has emphasized summative evaluation: assessments near the end of a project which indicate whether or not the project met its goals. Formative evaluation, research which helps the participants change direction and steer the program more wisely, is rare, and yet it is likely to be the kind of research most helpful to a school council, a technology committee and a group of innovators. Teachers can learn from day to day what is working and what needs changing.

Another important source of insight is qualitative research which applies the perspectives of anthropology to information such as interviews, journals and observations rather than relying upon quantitative research which springs primarily from numerical measures. The tools associated with qualitative research are more accessible to school practitioners than are the statistical models associated with quantitative research.


Denial

Some schools and districts are quite content with denial, proceeding merrily along as if the program is excellent in all respects. They would simply rather not know if anything is going wrong. Having spent millions of dollars on equipment, no news is good news.

See no evil . . . hear no evil . . .

While it may be comfortable in the short run, denial ends up painful and self-defeating. The most skillful practitioners of denial (the ostrich family), who hide their heads in the sand, ultimately end up rear-ended (or served as a sirloin substitute).

Sound assessment practices put an end to denial as well as the accompanying fog and smog which can leave programs shielded from scrutiny.

Exactly what is happening here?

Are we doing anything good at all?

Is anybody learning anything?

Those who interfere with cultures thriving on denial, take heed!

Remember what happened to the mirror in Snow White and the Seven Dwarfs when it refused to tell the Queen that she was still "the Fairest of them all."

There are significant political dangers associated with assessment, as some school leaders would rather have no news at all than risk the possibility of bad news. The way to reassure them is to create internal assessment activities which can thrive in relative obscurity (and safety) during stages of maximal risk and experimentation.

Properly implemented, assessment might actually strengthen the program and increase the chance of reporting impressive results while denial offers a serious risk of discovery and embarrassment.


Marketing

Funding robust technology programs requires astute marketing skills to raise money from local citizens and other sources. Assessment data can show the community a healthy "return on investment," provided student outcomes are actually being met.

How do I know students are learning any better because of all this new technology?

Why should I give you even more money?

If we expect to sell expansion, enhancement and continuation of our programs, we need to demonstrate impressive results. Our marketing must proceed well beyond vague promises. We must bring the program to life in the Public Mind as a thriving, remarkably effective experience for our students, but we must do so without relying upon hot air publicity balloons. The benefits must be

  • real
  • palpable
  • visible
  • demonstrable


Credibility  

Authentic assessment makes our program and our goals believable (and sustainable). We can move past rhetoric and vague promises. As we establish a track record of achieving powerful results, our words, our forecasts, our proposals and our requests for support win a different kind of audience as well as a higher level of respect than we might achieve without a history of observable results.

Too many folks rely upon hot air to win support or respect. Assessment data sustains program development and growth.


Accountability

What should happen if someone is not "pulling their own weight" and contributing to the success of students and program? What if some staff are working vigorously while others are ignoring the program and refusing to involve their students?

Does anybody know?

Does anybody care?

Assessment across a grade or a school raises the stakes for the entire group to take the venture seriously while not placing a harsh spotlight directly on individuals in a punishing manner. If results are disappointing, denial may be thwarted and the team must rally to the cause. If they "stonewall," they risk losing the credibility and respect mentioned earlier.

Introducing accountability to school cultures offers dangers as well as opportunities. A heavy hand may do more damage than denial itself.

Program assessment may include staff surveys and performance measures [such as the Mankato Scale] which show growth and progress over time. In the best of cases, these measures are valued by the staff, most of whom see themselves on a journey toward powerful use of new technologies with students.


Prioritizing  

You gotta know when to hold 'em
And know when to fold 'em

How often do we have more resources than we need? Ever?

Programs need to shed elements which are not paying off in order to place more emphasis on those elements which are working well.

Assessment data tell us where to prune and where to invest. Without data, the debate rages in the dark with nothing more than personal preference and hunches to guide choices.

Grant-Seeking

Numbers help win grants. They feature prominently in several sections of a grant proposal. Applicants who can present numbers to show a need, clarify goals and measure progress are more likely to attract sponsors and patrons than those who rely upon platitudes.

1. Statement of Need - In this section the applicant shares data showing that there is a gap of some kind between the "desired" condition and current status.

Writing samples of our middle school students show that 73 per cent fall below the "proficiency" level when judged for logic, persuasiveness and the use of the Six Traits of Effective Writing.

2. List of Expected Outcomes - In this section the applicant predicts in numerically measured terms what the program "effects" will be.

During the three years of this intensive electronic writing and researching program, the percentage of participating middle school students falling below the "proficiency" level will decline at a rate of 15-20% each year until fewer than 20 per cent remain below "proficiency." The percentages scoring at "advanced" and "strong" will grow at a rate of 7-8% each per year. Control groups will not show comparable growth. (A simple arithmetic sketch of this projection appears after the list below.)

3. Evaluation Design - In this section the applicant demonstrates familiarity with good research design as well as instruments which may already possess proven validity and reliability.
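To make the arithmetic behind such an outcome statement concrete, here is a minimal sketch in Python. It assumes the 15-20% annual decline is read as percentage points measured against the 73 per cent baseline from the Statement of Need, and it uses 18 points as an illustrative midpoint of that range; both are assumptions for illustration, not figures from any particular district.

    # A minimal sketch of the projection in the "List of Expected Outcomes" above.
    # Assumption: the 15-20% annual decline is read as percentage points measured
    # against the 73 per cent baseline cited in the Statement of Need.
    # The 18-point figure is an illustrative midpoint, not district data.

    baseline_below_proficiency = 73.0   # per cent of students below "proficiency" (year 0)
    annual_decline_points = 18.0        # assumed midpoint of the 15-20 point range

    share = baseline_below_proficiency
    for year in range(1, 4):
        share -= annual_decline_points
        print(f"Year {year}: {share:.0f}% still below proficiency")

    # Expected output:
    # Year 1: 55% still below proficiency
    # Year 2: 37% still below proficiency
    # Year 3: 19% still below proficiency  (meets the "fewer than 20 per cent" target)

If the decline were read as a relative rate instead, 73 per cent would only fall to roughly 37-45 per cent after three years, which is why the percentage-point reading is the one consistent with the stated target.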

Vague promises are unlikely to win funding or support. When it comes to grant writing, numbers can speak more eloquently than words.


Staff Development  

When teachers participate in assessment activities, they may learn as much or more by observing the students wrestling with an authentic problem-solving challenge than they would by participating in an adult "training" session of some kind.

I first became aware of assessment as staff development when classroom teachers in Bellingham became observers charged with the task of measuring the students' group behaviors, logic and persuasiveness. For many, the experience made the goals of the technology program come to life in palpable terms while dramatizing what work still needed to be done. Many commented that they had learned more by watching than they normally learned when charged more officially with learning.


Why has assessment been neglected?
Previous articles suggest explanations.




