What if you could keep exams secure by delivering unlimited unique tests?
Imagine a world where content theft and cheating on online exams are a thing of the past, where exams are like fingerprints: no two of them alike. While this technology isn’t widely available yet, e-assessment platforms are closing in on breakthroughs that could spell the end of academic dishonesty.
The current state of exam security is a topic of much concern, as the shift to online learning and remote testing has highlighted the need for robust security measures to protect academic integrity. One area of e-assessment security that has received a great deal of attention in recent years is online proctoring, which aims to prevent cheating and ensure the authenticity of test-takers via the use of webcams, screensharing, or AI and machine learning to detect suspicious behavior.
In addition to keeping exams safer and more secure, online proctoring has a wide array of benefits. Chief among them is convenience: candidates can take exams from the comfort of their own home or another location of their choosing, eliminating the need to travel to a physical testing centre. It’s also more flexible, as candidates can take exams at a time that suits them, rather than being limited to a specific testing window.
Online proctoring also has its drawbacks. One of the main issues is the invasiveness of the technology, which can make test-takers feel uncomfortable and self-conscious at the prospect of being “watched”.
It’s important to strike a balance when it comes to exam security, as relying on a single solution, such as proctoring, isn’t enough. As online proctoring gets smarter, so do ways of cheating on online exams.
This is where LOFT, or Linear-On-the-Fly Testing, comes in. LOFT is being talked about more and more recently, but it’s nothing new. At its most basic, LOFT generates a unique form for every test-taker by selecting a set number of questions from a large item pool. And the LOFT 3.0 upgrade takes that up another level, ensuring that each unique test form is equal in terms of difficulty and knowledge measured.
What is LOFT?
Designing effective and suitable exams for a qualification or course requires a significant amount of time and effort. This includes a thorough review of all questions to ensure they are clear, accurate, and cover all aspects of the course and the knowledge needed to receive a certification. Once completed, the official exam paper is finalised and strict security measures are implemented to protect it. In some cases, a backup paper is also prepared as a precaution. As you can imagine, this entire process is expensive and resource-consuming.
The emergence of computer-based testing has led many organisations to adopt the Linear-on-the-Fly Testing (LOFT) model. This approach involves creating a “pool” of approved questions, and using a computer-based assessment platform to generate a unique exam for each candidate in real-time, based on predetermined question selection rules. The bigger the pool of items, the more versions of the exam you can create. The ultimate goal is that for an entire set of candidates sitting an exam, each and every one receives a completely different set of questions just for them – completely unique, randomised tests, but of equal difficulty across the board.
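At its simplest, that selection step can be sketched in a few lines of Python. The item pool, topic names, and blueprint below are invented for illustration and are not any platform’s actual data model; a real system would apply far richer selection rules.

```python
import random

# Illustrative item pool: each item is tagged with the topic it assesses.
# (Topics and counts here are made up for the sake of the example.)
item_pool = [
    {"id": f"{topic}-{i}", "topic": topic}
    for topic in ("anatomy", "pharmacology", "ethics")
    for i in range(1, 11)          # 10 approved items per topic
]

# A simple blueprint: how many questions to draw from each topic.
blueprint = {"anatomy": 2, "pharmacology": 2, "ethics": 1}

def generate_form(pool, blueprint, seed=None):
    """Generate one unique exam form by sampling the pool per the blueprint."""
    rng = random.Random(seed)      # a per-candidate seed makes forms auditable
    form = []
    for topic, count in blueprint.items():
        candidates = [item for item in pool if item["topic"] == topic]
        form.extend(rng.sample(candidates, count))
    rng.shuffle(form)              # randomise question order as well
    return form
```

Even this toy pool of 10 items per topic yields thousands of distinct five-question forms, which is why pool size matters so much.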
How LOFT can transform online testing: key benefits and challenges
When used alongside online proctoring, linear-on-the-fly testing delivers a number of benefits and makes cheating on online exams virtually impossible:
Improved efficiency and effectiveness
One of the main benefits is a more efficient and effective exam administration process: once item banks and blueprints are created, maintaining an existing question bank requires far less effort than creating a new exam paper for every sitting.
Improved security and cheating prevention
With LOFT, the exam is not fixed and cannot be leaked, reducing the risk of cheating or exam security breaches. The LOFT model eliminates the need to rely on a single exam paper, which greatly reduces security risks. By generating a unique exam for each candidate in real-time, it ensures that even if an exam is compromised, it does not affect the integrity of all exams.
Unique, randomised exams can be very effective in preventing cheating, as they make it more difficult for candidates to share answers or collude on the exam. Randomised exams also make it harder for candidates to cheat by memorising specific answers and regurgitating information without understanding the material, as they are less likely to encounter the same questions as other candidates.
If every single candidate gets a completely individual exam and is proctored to ensure they aren’t using unauthorised sites such as ChatGPT to answer their questions for them, you’ve got yourself an un-cheat-able exam.
As LOFT technology develops, so will the fairness of exams, as well as students’ scores. Linear on-the-fly testing can make exams fairer by providing a more randomised selection of test items for each individual student. This helps ensure that each student has an equal opportunity to demonstrate their knowledge, regardless of their prior test-taking experience or familiarity with the test items. Although this is still developing technology, the ability to auto-rank items as “difficult” or “easy” will be available soon, meaning that while students will all get a unique exam, the number of easy and difficult questions will be exactly the same, ensuring that all students are tested fairly.
By controlling and minimising item exposure, the validity of scores is also protected from potential issues caused by fixed sets of items becoming known within the test-taking community. This ensures that candidates who achieve high scores on the exam did so fairly and without prior knowledge of any of the questions. It also means that exam fairness is defensible in court, should it ever come to that.
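To illustrate, a minimal exposure-control policy can be sketched in Python: always draw from the least-exposed items first, so no single question circulates widely enough to become common knowledge. The function and data below are hypothetical, not part of any specific platform.

```python
import random

def select_with_exposure_control(candidates, exposure, count, rng):
    """Pick `count` items, preferring those that fewest candidates have seen.

    `exposure` maps item id -> number of times the item has been delivered;
    it is updated in place as forms are generated.
    """
    pool = list(candidates)
    rng.shuffle(pool)                                  # random tie-breaking
    pool.sort(key=lambda item: exposure.get(item, 0))  # least-exposed first
    picked = pool[:count]
    for item in picked:
        exposure[item] = exposure.get(item, 0) + 1
    return picked
```

Over repeated sittings this keeps exposure nearly uniform across the pool; real exposure-control schemes add hard caps and retire over-used items entirely.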
LOFT isn’t without its challenges…
LOFT has undergone, and will continue to undergo, many makeovers before it comes close to perfection. Until then, there are some limitations and challenges to keep in mind.
Difficulty in ensuring equal difficulty
Although this is currently being solved, some older LOFT models do not make it possible to rank question difficulties, meaning that instructors must manually tag and categorise items. Not only is this time-consuming, but it leaves room for human error and biases.
The need for a large item bank
For LOFT to truly create one-of-a-kind exams, unique to each student, a large item bank is required. Developing a large number of high-quality test items that are aligned with the content and skills being tested can be a time-consuming and complex process that requires expertise in both the subject matter and item development. Not only do these items take time to develop, but they also need to be reviewed and double-checked by SMEs for accuracy and fairness, all of which is a significant initial time commitment. However, this initial investment will pay off in the long run, thanks to the increased security of your item bank.
LOFT 3.0, the latest iteration, will ensure that randomly generated test forms are equal in terms of difficulty and knowledge measured, so that no test-taker is unintentionally (dis)advantaged. How? By including question difficulty and ‘friend and enemy’ questions in the question-selection algorithm. Cirrus and Blees.ai are due to start work on perfecting this algorithm shortly.
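The details of that algorithm are still being worked out, but the core idea can be illustrated with a simplified Python sketch: enforce a fixed quota of easy and hard questions on every form, and never place two “enemy” items (questions that give away each other’s answers) on the same form. The difficulty tags, quotas, and enemy pairs below are invented for illustration.

```python
import random

def select_balanced_form(pool, quota, enemies, seed=None):
    """Build a form with a fixed difficulty mix and no enemy pairs.

    pool    -- list of {"id": ..., "difficulty": "easy" | "hard"} items
    quota   -- e.g. {"easy": 3, "hard": 2}: the difficulty mix for every form
    enemies -- set of frozensets of item ids that must not appear together
    """
    rng = random.Random(seed)
    form, banned = [], set()
    for difficulty, count in quota.items():
        candidates = [i for i in pool if i["difficulty"] == difficulty]
        rng.shuffle(candidates)
        taken = 0
        for item in candidates:
            if item["id"] in banned:
                continue                  # conflicts with an earlier pick
            form.append(item)
            taken += 1
            # Ban this item's enemies from the rest of the form.
            for pair in enemies:
                if item["id"] in pair:
                    banned |= pair
            if taken == count:
                break
    rng.shuffle(form)
    return form
```

This greedy sketch omits what a production algorithm must handle, such as backtracking when the quota cannot be met, but it shows how difficulty balance and enemy constraints can both feed into the same selection step.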
LOFT 3.0 has the potential to transform testing and deliver the promise of secure, remote and on-demand assessments, making cheating virtually impossible.
Would you like to learn more about how it all works and how it can benefit your organisation? Follow us on LinkedIn or sign up to our newsletter below to get the low-down. Coming up:
- Getting started with LOFT – Question selection & best practices.
- Case study – A real-world example of an organisation delivering on-demand exams with LOFT.
- Combining LOFT and parameterisation: A match made in heaven.