Nothing quite beats gathering subject experts together in a room and hearing diverging perspectives and (preferably heated) discussions. If anything, last week’s E-ATP London Conference proved just that. In 2.5 days, we discussed exciting happenings in the assessment industry: what is working well for testing organisations, their struggles, and e-assessment’s most significant opportunities – now and in the future.

In 2022, exam security and fairness are still high on everybody’s agendas. Similarly, AI and VR remain hot topics and are already proving their worth in the form of time savings and more realistic, simulation-based assessments. However, they tend to come with their own ethical dilemmas around privacy, bias and more.
What can we learn from innovations in learning?
Kirstie Donnelly, MBE, CEO at City & Guilds, and a panel of international assessment experts kicked off the conference in style with the opening keynote.
“Why do innovations in assessment tend to lag behind those made in learning?” This was one of the interesting questions raised by Ms Donnelly. She suggested that not only should we take inspiration from advances made in learning, but that there are also many benefits to be gained from connecting, or even fusing, learning and assessment.
Most importantly, aligning testing to learning would allow us to utilise upfront diagnostics: data from assessment performance could be used to inform and improve learning, and vice versa. This way, we could offer learners personalised learning paths and customise the entire journey to fit their cognitive learning styles.

The consensus in the room was that in the future, we’ll likely move away from long-term formal courses and toward more short-term portfolio courses. These shorter courses will allow test takers to build the skills they need at that moment. This has two advantages: First, it helps test takers demonstrate competency to perform tasks or gain employment. Second, these flexible learning paths help keep candidates’ skills up to date.
Another interesting point was put forward by one of the panellists, Saskia Wools of CitoLab, who noted that, in learning, much effort goes into improving student motivation. We make no such effort in assessment; we simply accept the notion that ‘everybody hates exams’. But are there ways we can make them more ‘fun’? One approach was proposed by Liberty Munson from Microsoft: personalised, skill-based measurement, or as she called it, “choose-your-own-adventure” testing. This could increase test taker motivation by 1) taking the pressure off and 2) improving alignment with the specific job role the test taker has in mind.
The best predictor of test score is still postcode
In ‘Test Fairness: Taking a Global Perspective,’ a group of panellists from diverse backgrounds presented their views on fairness:
For starters, it is essential to differentiate between the scientific viewpoint of fairness – Equality – and the social viewpoint – Equity. From the scientific perspective, fairness is determined by:
- Treatment during testing
- Freedom from bias
- Access (to technology, language, and opportunities to acquire knowledge)
- Validity of score interpretation (the absence of Construct Irrelevant Variance, i.e. score accuracy is not affected by poorly constructed questions, item bias, etc.).
In this view, different groups scoring differently does not, in itself, indicate unfairness.
In contrast, the social viewpoint of fairness – Equity – recognises that each person has different social, geographical and political circumstances. Therefore, to reach an equal outcome, resources and opportunities must be adjusted for each individual. In this view, there is no single way to measure everyone, and group differences in test scores do indicate unfairness.
Unfortunately, the best predictor of test performance is still postcode. Achieving fairness globally is a vast undertaking that won’t be solved overnight. However, as Andre Allen from Fifth Theory suggested, we can make progress by focusing on iteration rather than a single solution: don’t expect to solve the issue immediately, but keep improving fairness in small increments.
So what are some of the ways we can reduce bias?
- Make sure test delivery is the same for all candidates (not just the test itself, but also travel time and distance to the test location, for example).
- Set up the marking process to minimise bias (one possibility is using AI as a comparability check; see the sketch after this list).
- Ensure your test takers have access to accessibility and inclusion tools.
- Keep the learner front and centre.
- Provide a safe place to experiment with ways to reduce bias. This is not always easy in a high-stakes testing environment, but it is an essential requirement for change.
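To make the ‘AI for comparability’ idea a little more concrete: one simple pattern is to use an automated scorer as a second marker and flag large human–machine discrepancies for moderation. The sketch below is a hypothetical illustration of that pattern in Python (the candidate IDs, marks and threshold are all invented), not a description of any specific vendor’s marking process.

```python
# Hypothetical sketch: flag human marks that diverge sharply from an
# automated second marker, so a moderator can review them for bias.
# Scores are assumed to be on a 0-100 scale; the threshold is arbitrary.

def flag_for_moderation(human_marks, ai_marks, threshold=10):
    """Return (candidate_id, discrepancy) pairs where the human and AI
    marks differ by more than `threshold` points."""
    flagged = []
    for candidate_id in human_marks:
        diff = abs(human_marks[candidate_id] - ai_marks[candidate_id])
        if diff > threshold:
            flagged.append((candidate_id, diff))
    # Largest discrepancies first, so moderators see them sooner.
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

human = {"C001": 72, "C002": 55, "C003": 88}  # invented data
ai    = {"C001": 70, "C002": 41, "C003": 86}
print(flag_for_moderation(human, ai))  # [('C002', 14)]
```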
Who’s the cheater?
During the ‘2022 ATP Security Survey Preview’, we got a sneak peek at some of the results from this survey, completed by 64 ATP testing organisation members. The full results will be presented at ATP 2023, but here are a few noteworthy preliminary findings:
- Even though half the respondents were still using pen and paper, over 70% were using some form of online testing, and more than 50% were also using online proctoring. The latter is up from 17% in 2018.
- Multiple choice was by far the most used question type across exam creation platforms.
- Organisations use a wide range of tools to combat cheating, with the top three being internet searches, legal counsel and making students take additional tests.
- The amount spent on these various tools varies greatly by organisation, from under $25,000 to upwards of $5 million.
- About 63% of the organisations had experienced a test security violation in the last 2 years. These were often detected by data forensics (a simplified example follows this list), web and social media monitoring, and good old-fashioned tip-offs.
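As an aside, ‘data forensics’ covers a broad family of statistical checks. One classic screen looks for pairs of candidates who share an improbably high fraction of identical wrong answers. The Python sketch below illustrates the idea with invented answer strings and an arbitrary review threshold; real forensic methods are considerably more sophisticated and account for chance agreement.

```python
from itertools import combinations

# Illustrative data-forensics screen (not any vendor's actual method):
# flag candidate pairs who share an unusually high fraction of
# identical *wrong* answers - an indicator worth human review.

responses = {  # hypothetical answer strings, one letter per item
    "C001": "ABDCABBDAC",
    "C002": "ABDCABBDAC",
    "C003": "BBACDABCAD",
}
answer_key = "ABCCABBDAD"

def identical_wrong_share(a, b, key):
    """Fraction of items, among those at least one candidate got wrong,
    where both candidates gave the same wrong answer."""
    wrong_matches = total_wrong = 0
    for x, y, k in zip(a, b, key):
        if x != k or y != k:          # at least one of the pair is wrong
            total_wrong += 1
            if x == y and x != k:     # both wrong, in the same way
                wrong_matches += 1
    return wrong_matches / total_wrong if total_wrong else 0.0

for p, q in combinations(responses, 2):
    share = identical_wrong_share(responses[p], responses[q], answer_key)
    if share > 0.8:  # hypothetical review threshold
        print(f"Review pair {p}/{q}: {share:.0%} identical wrong answers")
```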
A fine balancing act
In ‘Cracking the Code: Balancing Candidate Experience with Exam Integrity’, Casey Kearns of Pearson VUE posed the question: “How can we keep our exams secure while maintaining a good candidate experience?”
These days, there is a multitude of measures we can take to protect exam integrity and prevent cheating:
- At the assessment level: using pool-based exams (such as the LOFT and blueprint options in Cirrus; a simplified sketch follows this list), restricting access to content items throughout the process, and moving to higher-order assessment objectives that get candidates to demonstrate knowledge (such as simulations, lab tests, etc.).
- Agreements, such as getting candidates to agree to an honour code.
- Identification, i.e. verifying that the right candidate is taking the exam.
- Exam delivery techniques, such as lockdown browsers, live proctoring, etc.
- Post-exam data analysis (AI, web and social media monitoring, etc.)
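To illustrate the pool-based approach mentioned above: in linear-on-the-fly testing (LOFT), each candidate receives a unique form assembled at random from an item pool, while a blueprint fixes how many items come from each topic. Here is a minimal Python sketch of the idea, with a hypothetical pool and blueprint; it is not a representation of Cirrus’s actual implementation.

```python
import random

# Minimal LOFT sketch (hypothetical pool and blueprint).
# Each candidate receives a unique form: items are drawn at random
# from a pool, but the blueprint fixes how many come from each topic,
# so every form covers the same content at comparable breadth.

item_pool = {
    "anatomy":  [f"ANA-{i:03d}" for i in range(1, 41)],
    "pharmacy": [f"PHA-{i:03d}" for i in range(1, 31)],
    "ethics":   [f"ETH-{i:03d}" for i in range(1, 21)],
}
blueprint = {"anatomy": 10, "pharmacy": 8, "ethics": 2}  # items per topic

def build_loft_form(pool, blueprint, seed=None):
    """Assemble one exam form satisfying the blueprint counts."""
    rng = random.Random(seed)          # per-candidate seed -> unique form
    form = []
    for topic, count in blueprint.items():
        form.extend(rng.sample(pool[topic], count))
    rng.shuffle(form)                  # mix topics in delivery order
    return form

# Two candidates get different forms covering the same blueprint.
print(build_loft_form(item_pool, blueprint, seed="candidate-001")[:5])
print(build_loft_form(item_pool, blueprint, seed="candidate-002")[:5])
```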
All these measures introduce a certain amount of friction to the exam experience. This, in turn, may impair the candidate’s journey, satisfaction and even performance. Therefore, Mr Kearns recommends balancing the amount of friction against the stress candidates experience during the exam process. Each stage of the process (Preparation, Registration & Scheduling, Exam Delivery and Post-Delivery) carries a certain amount of stress, with a reasonable amount of stress being both expected and accepted. (Interestingly, though, it appears that stress tolerance in general has decreased over the years.)

So when we decide which tools to use, we need to consider the stress levels during that stage and weigh them against the value provided. He further suggested some ways to reduce friction for candidates:
- Streamline your tools. That is, can you reduce friction or shift it to other stages? For example, get candidates to familiarise themselves with the live proctoring requirements before exam day, so they don’t have to do it while they’re already stressed during the exam.
- It’s also imperative to set expectations: overcommunicate to your test takers the tools in place to prevent cheating and what is expected of them at each stage of the process. And if you think you’ve communicated enough, do it one more time for good measure.
Setting your candidates up for success
The ‘Best Practices to Prepare Candidates to Perform Their Best’ session also considered the testing process from the test taker’s perspective and echoed the previously mentioned ideas. Sonja O’Reilly and Nicola Kurtz from Chartered Accountants of Ireland shared their best practices for preparing test takers for a smooth online proctored testing experience.
CAI moved to digital testing on the Cirrus platform in 2019 and has achieved 100% candidate onboarding and increased candidate satisfaction. According to CAI, this is mainly due to two factors:
- Intensive and early preparation: Candidates can try out both the assessment platform and the proctoring solution during mock exams before the actual exam. This prevents any surprises on testing days when nerves are already frayed enough.
- Support during the exams: CAI supervisors can monitor students during the exam through dedicated Cirrus and ProctorU dashboards. This way, they can solve problems as and when they occur, send messages to candidates to stop them from panicking and even allot extra time if needed.
These are useful lessons for testing organisations, especially considering the positive feedback CAI has received from candidates about both the new assessment process and the new opportunities offered by digital testing.
Taming the digital dragon
The closing keynote, delivered by James Plunkett, a one-time advisor to Gordon Brown, left us with an optimistic view of the future.
New developments like remote proctoring, AI, social networks and Big Data have opened up tremendous opportunities in recent years. However, with these opportunities come many challenges in using them in an ethical, unbiased and fair way. Primarily, they have left governments and regulatory authorities floundering and playing catch-up.
For example, during COVID, remote proctoring, AI and other exam integrity measures allowed us to continue delivering exams in locked-down societies, but there are issues of privacy and bias that still need to be addressed.
However, Mr Plunkett reckons we can “tame this digital dragon”. He drew parallels between today and society during the industrial revolution at the end of the 19th century. Then, too, society was changing at an unprecedented rate, and governments were struggling to keep up. Eventually, though, they caught up: laying down standards for railways, for example, organising services like sewage systems on a national scale, and more.
And so can we in the 21st century, but it will mean learning new skills, technologies and methods of governing. As Mr Plunkett advised, it’s important to consider ethics – what is right – at a time of great change: start with the desired outcome and then work back to achieve it. Overall, an optimistic prospect – where will we be ten years from now?
He left us with an inspiring example of how we might solve our current skills gap. The post-WW2 GI Bill offered free education to US soldiers returning from the war. This helped smooth their reintegration into society and is credited with helping the US achieve a powerful economic position after the war, thanks to a workforce with relevant and up-to-date skills. Perhaps something to consider for the post-pandemic generation?
All in all, a very informative couple of days. We’re looking forward to the next conference!