10 No. 2
Quantifying the quality of an Emory education
Selected Results from the National Survey of Student Engagement
It is a rallying cry to the crisis in U.S. higher education. Or it is a ridiculous right-wing parody of social science. It will expose a generation of college graduates lacking even the most basic skills and knowledge. Or it will impose the kind of stifling, lockstep testing regimen on higher education that the No Child Left Behind Act brought to K–12 education.
Even though the phrase “student learning outcomes assessment” might elicit more blank stares than impassioned defenses, the topic is hotly debated in academe. And the debate is heating up at Emory.
Arguing that colleges and universities are not held accountable for the quality of their graduates and that the U.S. system of higher education is slipping in global competitiveness, the U.S. Department of Education and others have pushed for an accreditation process that emphasizes cost, efficiency, productivity, and standardized measures of how much students are learning, rather than “inputs” such as resources, faculty quality, and facilities. Calls for greater accountability from organizations like the American Council of Trustees and Alumni, the growing emphasis on measurement driven by the influence of U.S. News and World Report and the National Research Council, and, most recently, the report of Education Secretary Margaret Spellings’s Commission on the Future of Higher Education have helped put the business of standardized, measurable results in the foreground of undergraduate education. (And what a business it is: corporations like the Educational Testing Service, which owns the SAT and the GRE, among others, are growing and consolidating at an unprecedented rate.)
The resulting pressures have landed squarely on the shoulders of institutions like Emory, which is already looking ahead to its next ten-year reaccreditation review in 2014. Accrediting agencies such as the Southern Association of Colleges and Schools (SACS) and its regional counterparts, which serve as the gatekeepers between institutions and federal student aid, are calling for far more evidence than ever before of what students are actually learning. But how to ask this question—and exactly what question needs asking—is proving elusive.
“There is a crescendo of requests for more accountability and evidence of learning outcomes from accreditation agencies, but also from the federal government,” says Daniel Teodorescu, Emory’s director of institutional research. “Rather than waiting for the Feds to impose their methods, private research universities are starting the discussion on our end to develop our own goals, objectives, and metrics for assessment.”
Professor of English Mark Bauerlein, who gained some familiarity with large population studies during his stint from 2003 to 2005 as director of research for the National Endowment for the Arts, believes there is a straightforward answer. “What I would advocate, quite simply, is that we have low-stakes exit exams, ‘low stakes’ meaning they’re anonymous—nothing at stake for the students, but they complete a series of questions and tasks that do test knowledge and skills in their discipline. It’s for colleges to examine themselves, not so much to examine the students.”
Others say it’s not so simple. “I’m not afraid of assessment,” says Wendy Newby, assistant dean for undergraduate education in Emory College and director of faculty resources for inclusive instruction. “But when it comes to a liberal arts education I’m not sure any of us can define it in a way that is easily or even adequately measurable. How do you define what you will measure? The question eventually becomes, What is the goal of a liberal arts education and, when measuring components, are you capturing the essentials of the experience?”
A growing number of standardized surveys, such as the National Survey of Student Engagement (NSSE) and the Collegiate Learning Assessment (CLA), attempt to frame a response. Emory College took part in the NSSE, which gauges nationwide student participation in learning and personal development activities, for the first time in 2006, and Oxford College began participating every other year in 2005. Teodorescu says one assessment tool Emory might consider is the CLA (which aims to test reasoning and written communication skills by having students analyze complex material), but he adds that it is costly and does not gauge critical skills such as creative thinking and the ability to collaborate. “I don’t think we should invest too much in standardized tests,” he says. “The main benefit is they allow you comparison with other institutions.”
Bauerlein emphasizes, however, that they also offer a longitudinal view of student development. With the NSSE, for example, “Emory should be able to look at its freshmen scores from four years ago and then look at the seniors and see how the same group changed its habits. Do they read more books on their own as seniors? Or did the number go down? Do they go to more performing arts events? It’s not to test students; it’s to see how we are changing the intellectual lives of these kids from the time they get here to the time they leave. And the results may not be very nice.”
Such revelations notwithstanding, many balk at the notion of measurable standards for qualities that are tough to quantify. “Can any art history departments agree on a defined set of fundamental facts that have to be known?” asks Associate Professor of Biology Bill Kelly. “What biology departments are going to completely agree on such a set? Do we want to start teaching how to take a standardized test at the collegiate level? That’s what’s happening at the high school level.”
Psychology professor Steven Nowicki adds, “My background is measurement, and I learned from the very get-go that there are certain things you cannot measure objectively. To me, it’s the difference between online education and face-to-face education. Online education is probably a much more efficient way to convey facts. But when you’re face to face with somebody, there’s teaching that goes on nonverbally between members of a class and a teacher. I don’t think that can be easily captured in a test.”
Indeed, write Richard Shavelson and Leta Huang of the Stanford University Education Assessment Laboratory in a 2003 Change magazine article, “The common one-size-fits-all mentality is likely to reduce the diversity of learning environments that characterizes and gives strength to our national higher education system.”
Some Emory administrators are encouraging a compromise approach that would keep assessment local, but that would also place much responsibility for it on faculty. “Departments need to go back and look carefully at what they’re doing,” Newby says. “They need to discuss what they think they want their students to know, which courses are really lending themselves to that kind of exploration, and how to measure the outcome. What is the capstone experience going to be?”
“The faculty should be the owners of this process,” Teodorescu says, “because they are the ones who will use the results to improve teaching. It should be a continuous, sustainable process, and to be sustainable you have to have some champions in each program—faculty who like to innovate and test hypotheses related to their teaching.”
But not without support, Newby and Teodorescu are both quick to add. “Ideally, there will be a center for teaching and learning staffed by professionals who can help examine both curricula and instructional practices with faculty,” Newby says. “Not a large professional force, but experienced professionals who could facilitate discussions of what we could do better.”
Just the facts?
What exactly, then, is up for scrutiny with assessment of learning outcomes? There is no easy consensus. “I think it’s important for students to know a lot of facts, figures, dates, stories, biographies,” says Bauerlein. “Rote memorization. I have seen too many students—too many people—come into a room and start talking in abstract, conceptual, theoretical terms about subjects, and when you ask them about basic facts, they can’t answer the question. If you haven’t acquired that basic, empirical knowledge, then I think the structure of reflection you build upon collapses. I try not to cringe when people talk about developing higher-order thinking skills or critical thinking; you can’t do much critical thinking about rights in America if you don’t know the rights contained in the First Amendment.”
On the other hand, the facts can be slippery. “My students might not go into psychology, but they might be able to go into a completely other set of disciplines and be successful because they’ve learned a way of thinking and approaching problems, not because they have the facts you learn in psychology,” Nowicki argues. “I mean, the facts change every five years. What do we do then? The best teaching I do is generally when I’m walking with some kid after class, and we’re talking, and he sees the way I think, and I see the way he responds to something, and it has nothing to do with what’s going to be on the final.”
For Newby, the question of “outcomes” leads back to the greater social purpose of education. “How do you turn people into self-motivated learners who enjoy reading, who pursue knowledge for enjoyment and its value to society? You assume that after students have gone through certain classes that they have learned certain facts and information and can apply them in specific situations. Ultimately our goal is to create individuals who can solve tomorrow’s problems. And you do that by helping them develop strategies for lifelong learning and values toward those goals.”
Either way, Teodorescu says, “The departments should be given the flexibility to set their own methods. Assessment should not be solely driven by SACS accreditation requirements. It should be for improving teaching. The main benefit is better knowledge about your program and how well your students are learning.”—A.O.A.