To LOFT or not to LOFT, that is the question
Advice on the Development of a LOFT Testing Model
Historically, a huge effort has gone into ensuring that the exams associated with any qualification or course are effective and fit for purpose. This typically includes a detailed review of all questions, not only to ensure that they are clear and correct, but also that they adequately represent all parts of the syllabus or learning outcomes on which candidates are to be tested.
Often many people are involved in the exam creation procedure, which can be quite lengthy, and at the end of the process an official exam paper is produced with all questions set in stone. Significant levels of security are then put in place to ensure that this highly valued paper is not compromised. Indeed, it is often the case that a reserve or fall-back paper is also created “just in case”.
LOFT, or Linear-on-the-Fly Testing
With the advent of computer-based testing, many organisations are now taking the plunge and moving to a Linear-on-the-Fly Testing (LOFT) model. This means that a bank of approved questions is set up at the start of the process, and every time a candidate sits the exam, the computer-based assessment system generates a new exam paper in real time specifically for them, based on the question “picking” rules that are defined within the system.
Some of the factors used to determine which questions are picked for an exam in the LOFT model
Learning outcomes:
Questions can be associated with one or more learning outcomes, based on whatever pedagogies or taxonomies have been set up. For example: select 2 questions on topic 1 and 4 questions on topic 2.
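As a rough sketch of how such a rule might be expressed (the bank structure, field names and picking logic below are illustrative assumptions, not any particular system’s API):

```python
import random

# Hypothetical question bank: each question is tagged with a learning outcome.
QUESTION_BANK = [
    {"id": "Q1", "outcome": "topic1"},
    {"id": "Q2", "outcome": "topic1"},
    {"id": "Q3", "outcome": "topic1"},
    {"id": "Q4", "outcome": "topic2"},
    {"id": "Q5", "outcome": "topic2"},
    {"id": "Q6", "outcome": "topic2"},
    {"id": "Q7", "outcome": "topic2"},
    {"id": "Q8", "outcome": "topic2"},
]

# The picking rule from the example above: 2 questions on topic 1, 4 on topic 2.
RULES = {"topic1": 2, "topic2": 4}

def pick_by_outcome(bank, rules):
    """Randomly select the required number of questions per learning outcome."""
    paper = []
    for outcome, count in rules.items():
        pool = [q for q in bank if q["outcome"] == outcome]
        paper.extend(random.sample(pool, count))
    return paper

print([q["id"] for q in pick_by_outcome(QUESTION_BANK, RULES)])
```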
Question metrics based on previous performance (can incorporate Item Response Theory – IRT):
In this case the system tracks various metrics on a question, such as the facility index (the proportion of candidates who have answered the question correctly in the past). These metrics can then be used to select questions; for example, select 3 questions that are difficult (low facility index) and 1 that is easy (high facility index).
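A minimal sketch of metric-based picking, assuming a stored facility index per question and hypothetical difficulty thresholds (a real system would calibrate these, or fit a full IRT model instead):

```python
import random

# Hypothetical metrics store: facility = proportion answering correctly so far.
QUESTION_METRICS = [
    {"id": "Q1", "facility": 0.21},
    {"id": "Q2", "facility": 0.35},
    {"id": "Q3", "facility": 0.38},
    {"id": "Q4", "facility": 0.30},
    {"id": "Q5", "facility": 0.85},
    {"id": "Q6", "facility": 0.91},
]

# Assumed cut-offs -- illustrative only, not a recommendation.
HARD, EASY = 0.40, 0.80

def pick_by_difficulty(bank, n_hard=3, n_easy=1):
    """Select n_hard low-facility questions and n_easy high-facility ones."""
    hard = [q for q in bank if q["facility"] < HARD]
    easy = [q for q in bank if q["facility"] > EASY]
    return random.sample(hard, n_hard) + random.sample(easy, n_easy)

print([q["id"] for q in pick_by_difficulty(QUESTION_METRICS)])
```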
Custom fields:
This is where individual questions are labelled in a certain way. For example, let’s say that an exam is for the most part delivered by computer-based assessment, but occasionally certain cohorts of candidates need to take the exam via pen and paper. In this case a custom field could be used to indicate whether or not the question is suitable to be selected for paper-based exams. If the question requires a video to be watched, it obviously cannot be selected for a printed exam.
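A sketch of how such a flag might restrict the picking pool before any other rules are applied (the paper_suitable field name is an assumption for illustration):

```python
# Hypothetical custom field: flag questions unsuitable for printed papers,
# e.g. anything that needs a video to be watched.
QUESTION_BANK = [
    {"id": "Q1", "media": None,    "paper_suitable": True},
    {"id": "Q2", "media": "video", "paper_suitable": False},
    {"id": "Q3", "media": None,    "paper_suitable": True},
]

def eligible_pool(bank, delivery_mode):
    """Filter the bank down to questions valid for this delivery mode."""
    if delivery_mode == "paper":
        return [q for q in bank if q["paper_suitable"]]
    return list(bank)  # computer-based delivery can use the whole bank

print([q["id"] for q in eligible_pool(QUESTION_BANK, "paper")])  # ['Q1', 'Q3']
```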
Friend or Enemy questions:
Friend questions are always picked together (for example, questions sharing a common case study), and enemy questions are never picked together (for example, where one question would give away the answer to another).
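One way this constraint could be enforced, sketched with hypothetical friend/enemy lookup tables; a production picker would re-pick rather than simply fail, and would take the full transitive closure of friend groups:

```python
# Hypothetical constraint data: friends must appear together,
# enemies must never share a paper.
FRIENDS = {"Q1": {"Q2"}}                 # Q1 and Q2 share a case study
ENEMIES = {"Q3": {"Q4"}, "Q4": {"Q3"}}   # Q4 gives away the answer to Q3

def apply_constraints(paper_ids):
    """Expand friend groups, then check that no enemy pair survived."""
    selected = set(paper_ids)
    for qid in list(selected):            # pull in one level of friends
        selected |= FRIENDS.get(qid, set())
    for qid in selected:                  # veto any enemy pairing
        if ENEMIES.get(qid, set()) & selected:
            raise ValueError(f"enemy conflict involving {qid}: re-pick needed")
    return selected

print(apply_constraints(["Q1", "Q3"]))   # {'Q1', 'Q2', 'Q3'}
```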
Benefits of the LOFT or Linear-on-the-Fly Testing Model
A big benefit of the LOFT model is that once good question banks are set up, the ongoing effort is then centred on maintenance. This is often a less arduous process than creating completely new papers for every exam sitting.
There is also the benefit of not being reliant on a single exam paper, which carries significant security risks. Using the LOFT model, every candidate gets a different set of exam questions, so even if one of those papers is somehow leaked, it does not compromise every exam.
There are also various options to reduce the risk of collusion, such as the use of variables in questions. For example, one candidate might see a question stem that starts, “Acme corporation produces high-quality leather goods and is based in France”, while another would see, “Beta corporation produces high-quality cosmetics and is based in Spain”.
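A minimal sketch of this kind of variable substitution, using plain string templates (the field names and variant sets are illustrative; a real item bank would also vary any numbers used in the working):

```python
import random

# Hypothetical templated stem with per-candidate variable sets.
STEM = "{company} corporation produces high-quality {product} and is based in {country}."

VARIANTS = [
    {"company": "Acme", "product": "leather goods", "country": "France"},
    {"company": "Beta", "product": "cosmetics",     "country": "Spain"},
]

def render_stem(template, variants):
    """Pick one variable set at delivery time so candidates see different stems."""
    return template.format(**random.choice(variants))

print(render_stem(STEM, VARIANTS))
```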
In addition, Trojan horse questions can be included to help identify compromises such as the leaking of questions. This is where an incorrect answer is deliberately recorded as correct. For example, take the question “What colour is crimson?” with the options (1) red, (2) blue and (3) green. If (3) is recorded as the correct answer on the system, then anyone answering (3) may potentially have had access to a leaked answer key.
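A sketch of how responses might be screened against such a planted key (the item identifiers and data shapes are hypothetical):

```python
# Hypothetical Trojan horse item: option 3 ("green") is deliberately recorded
# as the key, even though crimson is obviously a shade of red.
TROJAN_ITEMS = {"Q_CRIMSON": 3}  # question id -> planted (wrong) "correct" option

def flag_suspect_responses(responses):
    """Return candidates who chose the planted answer on any Trojan item.

    responses: list of (candidate_id, question_id, chosen_option) tuples.
    A hit is circumstantial evidence warranting investigation, not an
    automatic finding of misconduct.
    """
    return sorted({cand for cand, qid, opt in responses
                   if TROJAN_ITEMS.get(qid) == opt})

print(flag_suspect_responses([
    ("C001", "Q_CRIMSON", 1),   # answered red: knows the answer
    ("C002", "Q_CRIMSON", 3),   # answered green: may have seen a leaked key
]))  # ['C002']
```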
Creating a good and effective question bank
As mentioned above, there are a lot of benefits to the LOFT model; however, the effort to create a good, effective question bank (or to refine an existing question bank so that it can be used with the LOFT model) should not be underestimated. The success of the LOFT model depends heavily on the quality of the question metadata. Has each question been associated with the correct learning outcomes? Have friend and enemy questions been correctly flagged? And so on.
Our advice is to make sure you allow adequate time to fully sanitise question metadata as part of the move to LOFT. In our experience, the bulk of the effort usually goes into ensuring that questions properly and effectively test the required objectives. With the LOFT approach, however, ensuring the quality of question metadata should be given the same level of priority, resources and time.
As an interim solution, most good assessment systems can generate randomised papers for cohorts of candidates. This is where every candidate sitting the paper at the same time gets the same set of randomly generated questions, shuffled so that everyone sees them in a different order. Once the random paper has been generated by the system, it can be reviewed to ensure the picking is correct and the question metadata has performed as expected, and question substitutions can be made if necessary. This allows organisations to test the picking rules effectively and transition to LOFT in a phased approach.
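A sketch of this phased approach, assuming a seeded random pick so that the cohort paper is reproducible and can be reviewed before the sitting (the function names and seeding scheme are assumptions for illustration):

```python
import random

def generate_cohort_paper(bank_ids, paper_size, sitting_seed):
    """One fixed random pick for the whole cohort, reviewable before the exam."""
    rng = random.Random(sitting_seed)
    return rng.sample(bank_ids, paper_size)

def candidate_order(paper, candidate_id):
    """Same questions for everyone, shuffled into a per-candidate order."""
    rng = random.Random(candidate_id)  # deterministic, so it can be reproduced
    shuffled = list(paper)
    rng.shuffle(shuffled)
    return shuffled

paper = generate_cohort_paper([f"Q{i}" for i in range(1, 21)], 5, "june-sitting")
print(candidate_order(paper, "C001"))
print(candidate_order(paper, "C002"))
```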
Whatever your thoughts on LOFT, it offers far more flexibility than the traditional static approach to exam paper generation and mitigates much of the associated risk.
If you’d like to read more about how to run online exams, download our 5-Step Guide to Online Examinations here.