Vol 5 . . . No 4 . . . December, 1995


"Did anybody learn anything?"
Assessing Technology Programs and the Learning Accomplished

Introduction

Ever since microcomputers came ashore around 1980, schools have been scooping them up by the millions as if they represented some great panacea for the dozens of crippling issues raised by A Nation at Risk and countless other critical reports.

Fifteen years and millions of dollars later, what evidence can we present to justify the investment?

"Did anybody learn anything?" is the essential question. Has this just been another great educational bandwagon or boondoggle? Or has the introduction of new technology made a substantial difference in the learning of students?

The most substantial research into student learning with technologies has examined performance on lower order tasks and basic skills. And much of that research was highly biased and seriously flawed. In all too many cases, the findings were generated under vendor contracts and the research failed the independence test. A careful review of professionally conducted research provides little evidence that growth in skill persists beyond the initial "gadget stage." The impact of technology upon such skills is rarely contrasted with alternative strategies such as training teachers to teach reading more effectively. Given several hundred thousand dollars, what's the best way to provoke dramatic student gains?

Too little work has been done measuring gains in higher order skills. We have few studies which explore the growth in student group problem-solving skills, for example. How does the power of student communication improve when they are taught to compose essays with a word processor - when they are taught to wield the computer as an idea processor rather than a glorified typewriter? How well can they "crunch" data in order to gauge relationships between variables? Can they conduct explanatory research rather than mere descriptive research? Or are they simply more powerful word movers?

How do new information technologies enhance student learning? Does e-mail make for stronger writers and communicators? Does access to the Internet encourage a global perspective? How well can our students manage info-glut, info-garbage, info-tactics and cyberporn?

For decades now, many educators have shied away from measuring progress on essential learning tasks. Recent attention to "student outcomes" has brought the challenge to center stage, but much of the early work has been either frustrating to teachers or seriously flawed. Those who have pushed for standards and assessment of outcomes have often found themselves on the defensive as various groups have launched assaults (Example: Go by Web to Kossor Newsletter - http://www.voicenet.com/~sakossor/pe1_3.html) against the movement.

The premise of this article is that "deep" assessment is central to both program growth and student progress. Time has come to measure results. We have the tools and the models. (Example: an abstract of the article, Computer-Mediated Collaborative Learning: An Empirical Evaluation, MIS Quarterly, (18:02), June 1994, pp. 159-174, written by Maryam Alavi, College of Business and Management, University of Maryland - http://www.bmgt.umd.edu/Business/AcademicDepts/IS/Learning/misq1802.html) Now we must "face the mirror."

On the other hand, if we cannot look at reality, we will be left with virtual success, which tastes, when all is said and done, about as appetizing as virtual lunch.

The Sad and Sorry State of Technology Program Assessment

In September of 1992, From Now On explored this topic in considerable depth. Go to the full text of that issue. At that time it was clear that little was being done to measure or report student gains. That finding holds true three years later.

Failure to Make the Connection:
Integrating Technologies into Classroom Learning

Before turning specifically to the status of program assessment, we need to confront the evidence that fifteen years after the attempted infusion of technology into schools, we have failed to make much progress. The federal OTA (Office of Technology Assessment) recently published a disheartening report describing the huge gap between the promise and the reality of technology use by teachers. Key findings are quoted below:

TEACHERS AND TECHNOLOGY: MAKING THE CONNECTION
SUMMARY OF KEY FINDINGS

(Go by Web for full text of report - http://otabbs.ota.gov/T128/)

o Projections suggest that by spring 1995, schools in the United States will have 5.8 million computers for use in instruction--about one for every nine students. Almost every school in the country has at least one television and videocassette recorder, and 41 percent of teachers have a TV in their classrooms. Only one teacher in eight has a telephone in class and less than 1 percent have access to voice mail. Classroom access to newer technologies like CD-ROM and networking capabilities are also limited. While 75 percent of public schools have access to some kind of computer network, and 35 percent of public schools have access to the Internet, only 3 percent of instructional rooms (classrooms, labs, and media centers) are connected to the Internet.

o Despite technologies available in schools, a substantial number of teachers report little or no use of computers for instruction. Their use of other technologies also varies considerably.

o While technology is not a panacea for all educational ills, today's technologies are essential tools of the teaching trade. To use these tools well, teachers need visions of the technologies' potential, opportunities to apply them, training and just-in-time support, and time to experiment. Only then can teachers be informed and fearless in their use of new technologies.

o Using technology can change the way teachers teach. Some teachers use technology in traditional "teacher-centered" ways, such as drill and practice for mastery of basic skills, or to supplement teacher-controlled activities. On the other hand, some teachers use technology to support more student-centered approaches to instruction, so that students can conduct their own scientific inquiries and engage in collaborative activities while the teacher assumes the role of facilitator or coach. Teachers who fall into the latter group are among the most enthusiastic technology users, because technology is particularly suited to support this kind of instruction.

Lack of Actual Evaluation Reports

As I discovered when I did the research for the 1992 issue on this topic, very little actual evaluation of technology is being reported each year. ERIC has listed fewer than 40 such articles in every year since 1980, and the pattern has held in the three years since that issue appeared.

Number of K-12 Educational Technology
Evaluation Articles Reported
by ERIC Each Year, 1980-95

1980 - 8
1981 - 16
1982 - 10
1983 - 29
1984 - 25
1985 - 22
1986 - 32
1987 - 27
1988 - 27
1989 - 27
1990 - 29
1991 - 36
1992 - 13 - ERIC search results
1993 - 9 - ERIC search results
1994 - 11 - ERIC search results
1995 - 2 (incomplete year)

Perhaps 10 percent of the reports cited share actual student performance data. Most are anecdotal or testimonial program evaluations.
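
For readers who want to work with these figures directly, here is a minimal sketch in Python (purely illustrative and not part of the original ERIC search; the counts are simply transcribed from the table above) which tallies the totals and confirms the under-40 ceiling:

    # Counts of K-12 educational technology evaluation articles,
    # transcribed from the ERIC tallies listed above.
    eric_counts = {
        1980: 8,  1981: 16, 1982: 10, 1983: 29, 1984: 25, 1985: 22,
        1986: 32, 1987: 27, 1988: 27, 1989: 27, 1990: 29, 1991: 36,
        1992: 13, 1993: 9,  1994: 11, 1995: 2,  # 1995 is incomplete
    }

    complete = {y: n for y, n in eric_counts.items() if y != 1995}
    print("Peak year:", max(complete, key=complete.get))
    print("Mean per complete year:", sum(complete.values()) / len(complete))
    print("Every year under 40?", all(n < 40 for n in eric_counts.values()))

Run against the table, the sketch reports a peak of 36 articles (1991) and a mean of roughly 21 per complete year - hardly a flood of evaluation.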

The abstracts from two 1994 exceptions to this tendency are quoted below. Both New York City and Austin, Texas, report no significant gains from their (expensive) integrated learning systems.

ED379305
Instructional Technology in AISD, 1993-94. Publication Number 93.06.
Curry, Janice; Sabatino, Melissa
-ABSTRACT- During the 1993-94 school year, the Office of Research and Evaluation of the Austin Independent School District (AISD) (Texas) conducted a districtwide evaluation of instructional technology. The evaluation consisted first of an accurate count of all computers in AISD schools, and then of an in-depth evaluation of the integrated learning systems of the Computer Curriculum Corporation (CCC) and Jostens Learning. The over 11,000 computers in the Austin schools are more than twice the amount present 3 years ago. Of these, 39% are considered old. This amounts to six students for every one computer in the district. Gains in student achievement have not been significant enough to declare either of the integrated learning systems effective, but the gains made at some schools warrant their continued use.

ED381133
Educational Systems Integrators/Integrated Learning System Project: Titan Schools 1993-94. OER Report.
-ABSTRACT-
The 1993-94 Integrated Learning System (ILS) project, a means of delivering individualized instruction through a computer network, involved approximately 70 schools from New York City school districts. To help schools learn about and operate the technology in an ILS, districts were given the option of hiring one of the following companies (referred to as education systems integrators): Instructional Systems Inc., Jostens, the Waterford Institute, and Titan. Of the four integrators, Titan elected to have the Office of Educational Research (OER) evaluate its program. Titan, who was chosen as integrator by six schools, contracted with Computer Networking Specialists (CNS) on Long Island to perform the integration services, and with the Waterford Institute to provide teacher training. Two of the six schools were part of the grantback phase and the other four were in the capital phase of the project. Problems resulting from the asbestos crisis in New York City public schools and delayed deliveries and installations affected both phases of the project, but especially the capital phase. Half of the schools were very satisfied with the teacher training they received, while the other half voiced dissatisfaction with the initial training. Opinions about the software programs were mixed; one area of dissatisfaction was the schools' involvement in decision making about the ILS project. Student achievement scores showed no significant differences in reading between program participants and the rest-of-school population. Recommendations include: reexamine teacher training; clarify the roles of CNS and Waterford; and consider how the program expects schools to integrate the use of the ILS lab.

Do we have a case of the Emperor's New Clothes?

Hypotheses for the Sad and Sorry State

In this section we will explore a number of hypotheses which might serve to explain the "glass ceiling" keeping annual ERIC evaluation reports under 40 since 1980. Following each hypothesis will appear a rationale, most of which will be conjecture.

Hypothesis #1: Most school districts do not have the expertise or the resources to conduct solid evaluation studies.

Most of the existing studies have been completed by large districts, vendors or universities. Few districts have personnel with formal evaluation skills or the specific assignment to conduct such evaluations. Research is rarely conducted as part of the decision-making process. The collection and analysis of data, a cornerstone in the Total Quality movement, is rare in many school districts. In times of scarce resources, these are the kinds of budgets and projects first cut.

Hypothesis #2: Program proponents have a vested interest in protecting new programs from scrutiny.

Those who push new frontiers and encourage large expenditures are always taking a considerable risk, especially when there is little reliable data available to predict success in advance. Careful program evaluation puts the innovation under a magnifying glass and increases the risk to the pioneers.

Hypothesis #3: Accountability is sometimes counter-culture.

Many school districts have been careful to avoid data collection which might be used to judge performance.

Hypothesis #4: There is little understanding of formative evaluation as program steering.

Since most program evaluation in the past has been summative (Does it work?), few school leaders have much experience with using data formatively to steer and modify programs. While this kind of data analysis would seem to be more useful and less threatening than summative evaluation, lack of familiarity may breed suspicion.

Hypothesis #5: Vendors have much to lose and little to gain from following valid research design standards.

Districts are unlikely to pour hundreds of thousands of dollars into computers and software which will produce no significant gains. Careful research design tends to depress some of the bold results associated with gadgetry and the Hawthorne effect. Amazing first year gains, for example, often decline as programs enter their third year. In some cases, vendors report only the districts or schools with the best results and remain silent about those which are disappointing.

Hypothesis #6: School leaders have little respect for educational research.

Many school leaders joke that you can find an educational study to prove or disprove the efficacy of just about any educational strategy. Studies have shown that such leaders typically consult little research as they plan educational programs.

Hypothesis #7: Technology is often seen as capital rather than program.

Some school leaders do not associate technology with program. They view technology as equipment not requiring program evaluation. Equipment may be evaluated for speed, efficiency and cost but not learning power.

Hypothesis #8: Evaluation requires clarity regarding program goals.

Unless the district is clear about its learning objectives in terms which are observable and measurable, as was done by the RBS study, it will be difficult to design a meaningful evaluation study. In some districts, the technology is selected before a determination is made regarding its uses.

Hypothesis #9: Adherence to evaluation design standards may create political problems.

In addition to increasing risk by spotlighting a program, evaluation can also anger parents as some students are involved in experimental groups while others must put up with the traditional approach. Random selection can anger people on either side of the innovation, participating teachers included. Voluntary participation, on the other hand, immediately distorts the findings.

Hypothesis #10: Innovative programs are so demanding that launching an evaluation at the same time may overload the system.

Many schools are perennially stable and conservative organizations with a preference for first order change (tinkering) rather than second order change (fundamental change). The need for stability conflicts with innovation, since change is seen as threatening and pain-producing. Because the potential for resistance runs high in such organizations, many leaders may trade off evaluation just to win buy-in for a change.

These hypotheses originally appeared in the September, 1992 issue of From Now On. Explore the full text of the original article. (http://fno.org/fnosept92.html)

Why Bother? What's the Pay-Off?

Assessment may assist schools and teachers with the challenge of Making the Connection between classroom practice and new technologies. Assessment may help schools replace the heroics of a few with a more broadly based adoption of sound practice.

In all too many places, as OTA's "Making the Connection" confirms, the daily realities of technology programs are quite different from the published goals and plans. Many teachers simply avoid technology and refuse to consider its possibilities. (Go by the Web to check out Steven Hodas' excellent article on Technology Refusal). It is rare to find a system or a school where technology is comfortably blended into the daily life of all classrooms. The most one can hope for is heroics.

With sound assessment in place, any gap between goals and practice becomes quite evident and may inspire a school staff to ponder the following questions:

  1. Is there a gap?
  2. Are we surprised?
  3. Why is there a gap?
  4. What could we do differently to close the gap?
  5. Who are the key players?
  6. How do we bring everybody on board?
  7. What resources will we need to make progress?
  8. What can we learn from other schools and districts?

Unfortunately, denial provides fertile ground for technology refusal. As long as nobody takes notice of how many hours the computers are being used, as long as nobody measures what the students are capable of doing and as long as everybody maintains "a separate peace," the prospects for Making the Connection are limited.

As "Crossing the Great Divide: Adult Learning for Integrative and Innovative Use of Technologies with Students," the September, 1995 issue of From Now On argued:

Time has come to cross the Great Divide. We need adult learning experiences which will enable teachers to move beyond what Mandinach and the ACOT researchers have called the Survival and Mastery stages (where the task is learning the equipment and the software) through the Impact and Innovation stages (where the task is employing such tools to restructure the learning environment to support student investigation, problem-solving and decision-making).

As we move forward with our technology initiatives, staff development in the broadest sense (organizational development and cultural change) will be the deciding factor in whether our projects are real or virtual. These adult learning experiences must include deep and authentic assessment of programs underway. Classic training models tend to fixate on technology skills or the learning of applications rather than how to make use of these technologies to promote and support student learning. (Go to "Skills Fixation" FNO-Oct93). They also leave intact the cultural divisions which gridlock the school in a conflict between pioneers and sages. (Go to "Staff Balkanization" FNO-Sept93) Assessment sets in motion a different kind of learning - an experimental approach intent upon finding effective strategies.

The Centrality of Clear Goals and Outcome Statements

Just what is it we wish our students to be able to accomplish using these new technologies? Is it enough that they spend time with computers? Can we be satisfied with drill-and-kill ILS (integrated learning systems)? Are we after 100 WPM keyboarding skills?

Why are we spending all this money?

In many previous articles and several books I have argued strenuously for the primacy of student learning as a focus for a district technology plan. Unless the student outcomes are stated with specificity, it is easy for the technology program to wander hither and yon, never fulfilling any especially significant purposes.

I was drawn to the Bellingham Public Schools because a staff committee had crafted a Technology Plan which paid far more attention to student learning than to hardware. It stated three clear learning priorities (communicating, analyzing data and solving problems) and listed student outcomes for each level of the district.

Go by Web to Bellingham Public Schools Technology Plan

Clear goals and outcome statements lead naturally to authentic and deep assessment. The assessment instruments we have constructed match the outcomes, and as we begin to gather and analyze data on what students can accomplish, we apply the resulting insights to program changes.

Assessment for Navigation

The best assessment provides rapid and frequent feedback to the innovators so that adjustments can be made while the program is underway. The assessment data is used along with other information to navigate past obstacles and problems, steering the program forward in a sound manner.

Unfortunately, much educational research of the past emphasized summative evaluation, assessments near the end of a project which indicate whether or not the project met its goals. Research which helps the participants change direction and steer the program more wisely, formative evaluation, is rare, and yet this is likely to be the kind of research most helpful to a school council, a technology committee and a group of innovators. Teachers can learn from day to day what is working and what needs changing.

Another important source of insight is qualitative research, which takes the perspective of anthropology, drawing judgments from interviews, journals and observations rather than from the numerical measures of quantitative research. The tools of qualitative research are also more accessible to school practitioners than the statistical models quantitative research requires.

Authentic assessment, as used by the Coalition of Essential Schools, refers to student activities which demonstrate learning through means other than traditional paper-and-pencil exams and standardized tests. The student reveals insight and skill by providing a performance or portfolio at the end of the unit which requires personal translation of key ideas.

Technology-rich learning may ask teachers to re-write old scripts. Instead of relying upon patterns that have worked in the past, they are inventing new versions. As Making the Connection points out, the technology suggests an adjustment toward student-centered classrooms. The "sage on the stage" becomes "the guide on the side."

Assessment makes it possible to peek over the horizon and steer a safe course.

Self-Assessment Instruments

Surveys are one sound strategy for measuring progress. Staff and students rate their levels of competence on various skills using rubrics. This was a primary research approach for the OTA's Making the Connection report.

While such instruments may inflate the actual skill levels because participants tend to provide socially desirable responses, in the case of educational technology there is little risk that students will report high skill levels and frequent technology learning opportunities where they do not exist.

The Bellingham Schools owe a debt of gratitude to Doug Johnson and the Mankato (MN) Public Schools (Go by Web to Mankato WWW Site) for their Technology Scale developed several years ago to measure staff progress on technology-related competencies. This scale has been modified to assess progress by both staff and students.

The Bellingham Schools began administering the Mankato Scale to all staff members at a site level in June of 1994. This instrument is now repeated at least once each year to see how staff is moving along on its journey and to provide a basis for the planning of adult learning.
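
How might such survey data be summarized from one administration to the next? The sketch below, in Python, is purely hypothetical - the competency names, the four-point scale and the ratings are invented for illustration and do not reproduce the actual Mankato instrument:

    # Hypothetical summary of staff self-ratings (1 = beginner, 4 = fluent)
    # across two administrations of a Mankato-style survey.
    surveys = {
        "June 1994": {"word processing": [2, 3, 1, 2], "e-mail": [1, 1, 2, 1]},
        "June 1995": {"word processing": [3, 3, 2, 3], "e-mail": [2, 3, 2, 2]},
    }

    for administration, competencies in surveys.items():
        print(administration)
        for skill, ratings in competencies.items():
            mean = sum(ratings) / len(ratings)
            print(f"  {skill}: mean self-rating {mean:.2f}")

Even a summary this simple shows a staff moving along on its journey and points the planners of adult learning toward the competencies which lag behind.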

The 1995-96 Bellingham version of the Mankato Scale for staff is available for downloading and use from the district's WWW site at http://www.bham.wednet.edu/policies.htm (Go by Web to Bellingham Public Schools).

A student version of the Mankato Scale has been created for each level of the district (elementary, middle and high school). These are also available for downloading and use from the district's WWW site at http://www.bham.wednet.edu/policies.htm (Go by Web to Bellingham Public Schools).

Performance Assessment Instruments

The ultimate question is whether or not students can perform, at a reasonably high standard, the kinds of tasks identified as essential in the student outcomes section of the learning plan. Performance assessment requires action beyond paper-and-pencil tests.

The Bellingham Public Schools successfully implemented technology-related performance assessment with samples of 5th graders from all 12 elementary schools in Fall of '95. Middle and high school versions will be introduced in Spring of '96.

Because the District Technology Plan (Go on the Web to the Technology Plan) calls for the development of three main skills (communicating, analyzing data and solving problems), the performance task asks teams of four students to spend three hours studying data about accidents in order to prepare a multimedia report recommending action to the government. When the teams are done, the quality of each performance is rated for 1) cooperative group problem-solving behaviors, 2) thoughtful analysis of the data, and 3) persuasiveness of the presentation.

During this three-hour exercise, students must crunch numbers with a spreadsheet, create charts, cull critical information from an electronic encyclopedia to place in a database and gather all their findings and recommendations into presentation software. (Go on the Web to the BPS Performance Assessment Instrument)
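
To make the rating scheme concrete, here is a hypothetical sketch in Python - the five-point scale, the two raters and the sample scores are invented for illustration and are not the district's actual instrument - of how one team's performance might be recorded and summarized across the three rated dimensions:

    # Hypothetical ratings for one four-student team on the three
    # dimensions named in the district plan (1-5 scale, two raters).
    team_a = {
        "cooperative group problem-solving": [2, 3],
        "thoughtful analysis of the data": [3, 3],
        "persuasiveness of the presentation": [4, 4],
    }

    def summarize(team_scores):
        """Return the mean rating per dimension across raters."""
        return {dim: sum(s) / len(s) for dim, s in team_scores.items()}

    for dimension, mean in summarize(team_a).items():
        print(f"{dimension}: {mean:.1f}")

Averaged across teams and schools, ratings of this sort make it easy to see a pattern like the one reported below: stronger marks for the presentation than for the teamwork and analysis behind it.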

What did we learn from the first assessment of 5th graders? In most cases, the schools were pleased with the students' technology proficiencies (creating a chart with a spreadsheet, for example) but were disappointed in their attempts at teamwork and analysis. The Fall assessment has challenged the schools to find ways to bring classroom experiences more in line with the goals of the district plan so that 5th graders will grow enough by June to demonstrate enhanced skills.

When all is said and done

Thanks to the Columbia Book of Quotations on Microsoft Bookshelf, we can enjoy the thinking of four writers with regard to authentic and deep assessment . . .

It is a pleasure to stand upon the shore, and to see ships tost upon the sea: a pleasure to stand in the window of a castle, and to see a battle and the adventures thereof below: but no pleasure is comparable to standing upon the vantage ground of truth . . . and to see the errors, and wanderings, and mists, and tempests, in the vale below.

Francis Bacon (1561-1626)

Most of the change we think we see in life
Is due to truths being in and out of favor.

Robert Frost (1874-1963)

One measure of a civilization, either of an age or of a single individual, is what that age or person really wishes to do. A man's hope measures his civilization. The attainability of the hope measures, or may measure, the civilization of his nation and time.

Ezra Pound (1885-1972)

Measure not the work
Until the day's out and the labour done,
Then bring your gauges.

Elizabeth Barrett Browning (1806-61)

Resources

Columbia's Reform Readings
A collection of articles in LiveText covering dozens of essential topics regarding educational change and restructuring.
Developing Educational Standards
An excellent annotated list of those sites which have educational standards documents prepared by various states and professional organizations.
Educational Technology Taxonomy
"Educational Technology: Tools for Inquiry, Communication, Construction,and Expression" - This paper presents a taxonomy of educational technology applications organized in terms of the ways they support integrated, inquiry-based learning.
IS World Net Teaching & Learning
Evaluation and Measurement Page - Higher Ed resources to assess growth in technology-related skills.
Kossor Education Newsletter
Talking to legislators about OBE; attacks OBE.
Making the Connection - Abstract
Federal study of why few classroom teachers have made substantial use of new technologies.
Making the Connection - Summary of Findings
Summary of the Office of Technology Assessment findings.
New Times Demand New Ways of Learning
This section of a report from NCREL details the indicators that educators and policymakers can use to measure the effectiveness of technology in learning.
Technology
NCREL's technology-planning resource page
Technology and School Reform
A wonderfully annotated list of resources provided to you by the folks at Armadillo, one of the great educational lists on the WWW.
What Does Research Say About Assessment?
More great information about assessment from NCREL.



Copyright Policy: Materials published in From Now On may be duplicated only in hard copy format for educational, non-profit school district use only. All other uses, transmissions and duplications are prohibited unless permission is granted expressly. Showing these pages remotely through frames is not permitted.
FNO is applying for formal copyright registration for articles.

