Avasha Rambiritch of the University of Pretoria and I have just written a chapter for a book edited by John Read (Post-admission Language Assessment of University Students, Springer, 2016). It shows how making sufficient information available about the conception, design, development, refinement and eventual administration of a test of language ability (in other words, "telling the story of a test") is the first step towards ensuring accountability for such tests. The test in question, the Test of Academic Literacy for Postgraduate Students (TALPS), is used to determine the academic literacy of prospective postgraduate students. For the full reference, see https://albertweideman.wordpress.com/research-on-tests-of-academic-literacy-in-south-africa/.
In another chapter in the same volume, written with Rebecca Patterson and Anna Pot, we argue that accountability and fairness are supported in the first instance by carefully attending to the definition of what gets measured.
That chapter, "Construct refinement in tests of academic literacy", stresses not public or social accountability, as the one on TALPS does, but rather how we may reinforce the theoretical defensibility of the construct on which we base our tests.
The book has many other excellent contributions on post-admission language assessment at universities globally. I do hope that it will contribute towards the debate I have wished to stimulate on these pages. Tell us what you think!