spring 2004 focus on... assessment
In Literacies #2, fall 2003, we raised questions about how the IALS (International Adult Literacy Survey) relates to literacy practice. The ensuing online discussion included a lively thread about how tests that try to establish skill levels can lead to accountability processes that do not describe what actually happens in literacy programs.
The discussion participants wanted to explore these questions further, so Literacies brought together writings that describe national research about recent changes to funding and accountability frameworks, look at ways of documenting broader literacy outcomes, and question how the field can engage in discussions about policy. We have also included articles describing how these issues have developed in Sweden, Scotland, the UK and the US. A perspective from South Africa is available online.
In the web forum, we started with questions such as:
- What dilemmas or issues about assessment do learners, programs and policy-makers face?
- What are some possible solutions to these dilemmas?
- How can Canada learn from experiences in other countries?
spring 2004 forum report
What counts? Focus on assessment
The web forum once again proved that when literacy workers come together, the discussion is always thoughtful and thought-provoking. We started with a chain of questions: What does progress look like, and can we define it? If we can define it, should we measure it? If we should measure it, how should we measure it? And what conditions need to be present in the program, in the community and in learners’ lives for learning and progress to happen?
We discussed the tension that arises as learners, tutors, instructors, program workers, program administrators, funders and policy makers each try to define what counts as success. We came back to “Literacy for what?” and “Policy for what?” and the fact that each of these groups, and the individuals within them, will answer these questions differently.
An understanding of literacy practices, or of the learning needs of a community a learner wants to participate in, may be a guiding principle for program planning and assessment, but will this be enough evidence of progress for funders? If the best form of assessment is the simplest kind, assessment based on anecdotes and human stories in which learners determine how much they benefit from the program, how do we develop “the criteria, a matrix, or some kind of a standard to justify funding decisions and long-term resources?”
Literacy workers who try to deliver “literacy for the soul” are concerned about measuring success in a way that meets the requirements for quantitative data without falling into the trap of “codification not development.” We talked about the difference between assessment of learning and assessment for learning, and about who is most interested in which assessments.
Perhaps we are using ‘literacy’ as a metaphor for the acquisition of knowledge, and this is why revealing the complexity of literacy work, and the gains learners make outside the technologies of reading and writing, is such a challenge. It was suggested that a shift from thinking about “literacy education for adults” to “educational opportunities for adults with low literacy” frees us from socialized ideas of adult literacy.
How do literacy workers move the policy discourse away from the return-on-investment analysis of delivering literacy programs? Do we have the capacity to promise progress, or to measure it, in under-funded programs without full-time, permanent staff? How can we promise learners the reward of improved opportunities when the research shows that the impact of race, gender and class supersedes that of education as a determinant of economic success?
It may seem that we asked more questions than we answered. That would be a fair assessment of the discussion, but as we shared examples of how we think about these dilemmas, the forum moved us closer to some of the answers, or at least to better questions.