The 4 biggest problems with remote proctoring
When you shift your gaze to a distant object, the ciliary muscles encircling the lenses of your eyes relax, flattening the lenses and bringing the object into focus. Looking at a close object does the opposite: the muscles contract and the lenses thicken. This stretching and squeezing can happen up to 100,000 times a day.
This ability is crucial for an invigilator watching over a high-stakes exam, who must constantly shift their gaze from student to student. Combined with sharp human judgment, it’s what has made in-person invigilators the gold standard for supervising important exams.
But COVID changed things. With millions of students locked down, an urgent digital solution was needed to allow them to take their high-stakes exams at home. For many companies and institutions, remote proctoring software (paired with an assessment tool) provided the answer. The software adopted the role of digital invigilator, monitoring and recording students during their tests and using its algorithms to flag potential violations for human review. Key assessments could continue during lockdowns.
But not everything is rosy with this kind of testing. Remote proctoring has its flaws, and in this article, we provide an honest account of what we believe are the biggest – the problems that should be made clear to any institution or company thinking of adopting it.
We hope this information will help you make an informed decision about whether remote proctoring is the right move for you.
1. Remote proctoring software feels intrusive
Remotely-proctored tests require a software application to be downloaded and installed. And there’s no getting around it – the app feels intrusive, and IT security professionals are hesitant to install it.
Among other things, it can:
- Access students’ webcams and microphones.
- Take control of the computer to shut down tests.
- Block other software from opening.
- Open and close network ports to send live video.
- Bypass firewall restrictions.
These functions are necessary for the software to do its job. Without being able to “watch” and “listen” to students during their tests, or send the video streams to live proctors (if they’re employed), it would be a useless digital invigilator.
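For the curious, here’s a minimal sketch of how a browser-based proctoring client might implement the “watch” and “listen” part. The server URL, session ID, and chunk interval are our own illustrative assumptions, not any vendor’s actual API:

```typescript
// A minimal sketch of webcam and microphone capture with live streaming in a
// browser-based proctoring client. The server URL and chunk interval are
// illustrative assumptions, not any vendor's actual API.
async function startProctoringStream(sessionId: string): Promise<() => void> {
  // Ask the student for camera and microphone access.
  const media = await navigator.mediaDevices.getUserMedia({
    video: { width: 640, height: 480 },
    audio: true,
  });

  // Stream encoded chunks to a (hypothetical) server for live proctors.
  const socket = new WebSocket(`wss://proctor.example.com/sessions/${sessionId}`);
  const recorder = new MediaRecorder(media, { mimeType: "video/webm" });

  recorder.ondataavailable = (event) => {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data);
    }
  };
  recorder.start(1000); // emit a chunk roughly every second

  // Return a cleanup function so monitoring stops when the test ends.
  return () => {
    recorder.stop();
    media.getTracks().forEach((track) => track.stop());
    socket.close();
  };
}
```

Notably, a purely browser-based client like this can’t block other software or bypass firewall restrictions – capabilities like those are part of why a full desktop application usually has to be installed.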
But giving the app this kind of power is still risky. It has so much control over so many sensitive features and functions that some IT professionals refer to it as “legitimate spyware.” Janison Account Director and remote proctoring expert Wesley Toms says “You’re essentially giving a third party that you’ve never met access to your system to install functionality.”
So why do companies take the risk? Often, it’s because remote proctoring is cheaper, faster, requires no temporary staff, and is less work for existing employees. When investigating whether remote proctoring is right for their business, they weigh the time and money savings against the risk of privacy breaches – and often, the savings win. Their IT security teams are then left with one essential question to answer: can we trust the software?
Here are just a handful of the questions that feed into that answer (a sketch of how a security team might record them follows the list):
- What does the software need to access and do? (e.g. control the computer’s webcam)
- What data does the software collect? Who has access, and who else is it shared with?
- How is data kept secure? How strong is this security?
- How long is data kept before being deleted?
- What processes are in place for data breaches?
- How long does the software need to be installed?
- What does the trust chain behind the software look like?
- What security standards and privacy laws does the app’s company comply with?
- What is the reputation of the app’s company? What do reviews from other businesses say?
- What other risk mitigation processes are in place to protect data?
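To make this concrete, here’s one illustrative way an IT security team might record its answers as a structured checklist. The field names and example values are our own assumptions, not a standard template:

```typescript
// An illustrative checklist an IT security team might fill in while
// answering the questions above. Field names and example values are
// assumptions, not a standard template.
interface ProctoringVendorReview {
  vendor: string;
  permissionsRequested: string[]; // e.g. webcam, microphone, process control
  dataCollected: string[];
  dataSharedWith: string[];
  encryption: { inTransit: string; atRest: string };
  retentionDays: number; // how long recordings are kept before deletion
  breachResponsePlan: boolean;
  certifications: string[]; // e.g. ISO 27001, SOC 2
  privacyLawsCovered: string[];
  approved: boolean;
}

const review: ProctoringVendorReview = {
  vendor: "ExampleProctor Pty Ltd", // hypothetical vendor
  permissionsRequested: ["webcam", "microphone", "block other apps"],
  dataCollected: ["video", "audio", "screen activity"],
  dataSharedWith: ["institution review staff"],
  encryption: { inTransit: "TLS 1.3", atRest: "AES-256" },
  retentionDays: 90,
  breachResponsePlan: true,
  certifications: ["ISO 27001"],
  privacyLawsCovered: ["GDPR", "Australian Privacy Act"],
  approved: false, // flip to true only when every answer is satisfactory
};
```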
The same data security concerns exist for institutions that ask students to use their own devices to complete assessments, and it’s the institution’s responsibility to ensure the software is trustworthy. Students’ concerns should be addressed too – they usually welcome skipping the trudge to an exam centre, and their trust in the remote proctoring software matters, but the heavy lifting must be done by the institution setting the assessment.
A clear and transparent trust chain is critical. It needs to be bulletproof, and for everyone in it to be accountable.
Wesley Toms
Account Director, Janison
When the institution confirms the app and company behind it are trustworthy, the invasiveness of the app becomes much less of a problem.
2. Remote proctoring software may not work on low-spec computers
Remote proctoring software constantly records, caches, and uploads video while running, which uses plenty of the computer’s resources. Budget computers with slow CPUs, limited RAM, and low disk space may fail to run the software, or worse, crash during a student’s test.
This risk is amplified when the internet connection is unstable. If it drops out for long periods, the test’s video recording is cached on the computer until it can be uploaded, which can overwhelm a machine with inadequate resources. Some IT environments add further load by running the software inside resource-heavy virtual machines – a standard practice for securing devices. To handle the software and these conditions, a computer needs to be at least mid-spec (e.g. a 2.4GHz dual-core CPU and 8GB of RAM).
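As a rough illustration, a browser-based client could run a pre-flight check against an indicative spec like the one above. This sketch uses real browser APIs (navigator.hardwareConcurrency, navigator.deviceMemory, and navigator.storage.estimate()), though note that deviceMemory is Chromium-only and reports at most 8:

```typescript
// A rough pre-flight device check before starting a remotely-proctored test.
// Thresholds mirror the indicative spec above; a real client would use the
// vendor's actual requirements.
async function deviceMeetsSpec(): Promise<boolean> {
  const cores = navigator.hardwareConcurrency ?? 0;
  // deviceMemory is Chromium-only and caps out at 8 (GB); it's undefined
  // on Firefox and Safari, so this check would fail there.
  const memoryGb = (navigator as any).deviceMemory ?? 0;

  // Estimate how much local storage is free for caching video during dropouts.
  const { quota = 0, usage = 0 } = await navigator.storage.estimate();
  const freeBytes = quota - usage;

  return cores >= 2 && memoryGb >= 8 && freeBytes > 2 * 1024 ** 3; // ~2 GB free
}
```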
It becomes a kind of double whammy if you’ve got a low-spec device and a dodgy internet connection. If the device’s RAM and CPU are fully utilised by the time you’re starting a remotely-proctored test, you’re asking for trouble.
Wesley Toms
Account Director, Janison
For institutions that provide devices to students, upgrading a mostly low-spec fleet (common at universities) can be expensive. They need to determine whether the money and time saved by switching to remotely-proctored tests will recoup the cost.
Companies that ask students to use their own devices for tests shift that expense onto a group that isn’t exactly bursting with cash. Some students can’t afford more powerful laptops, and devices may need to be provided to them. Companies should also put processes in place to confirm students’ devices are up to spec, such as mock exams and technical specification checks.
With the right processes in place, companies and institutions can be confident that each device will handle the remote proctoring software.
3. Remote proctoring software raises false flags
Remote proctoring software uses AI-driven algorithms to detect cheating. Behaviours it flags include a student looking away from the screen for long periods, another voice or person being detected in the room, and many others. The more tests the AI monitors, the better it gets at detecting cheating.
But while the AI’s algorithms are impressive, they’re far from perfect. Their persistent analysis of objects, movement, and sound in the student’s test environment means they also flag plenty of innocent behaviour that merely resembles cheating, including:
- A student reading a question aloud, mumbling to themselves in thought, or looking away from the screen to think.
- Video games being played in an adjacent room, a dog barking, a sudden hailstorm – any loud sound that could be interpreted as illicit audio.
- A roommate forgetting about the test and bursting into the room. Or a toddler in a walker doing the same.
- A cat obstructing the webcam or knocking it off with its paw.
Organisations that care about academic integrity would rather the software flag too much than miss genuine cheating. But until the algorithms reach a god-like level of proctoring, flagging the guilty also means flagging the innocent, and a human needs to spend time telling them apart.
Over time, the expectation is to train large language models to be more discerning.
Wesley Toms
Account Director, Janison
The key to making this process efficient is having detailed quality assurance (QA) guidelines on what constitutes a genuine violation – a descriptive list of the various cheating methods, and what needs to happen for each. For example: how long and how many times can a student look away from the test screen before you accuse them of reading notes? Should someone coming into the room for half a minute be an instant fail?
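To picture what such guidelines might look like once written down, here’s a hypothetical encoding of them as reviewable rules. Every threshold below is invented for illustration, not taken from a real rubric:

```typescript
// A hypothetical encoding of QA guidelines as reviewable rules. Every
// threshold below is invented for illustration – a real rubric would be
// agreed between the institution and the proctoring company.
type Verdict = "dismiss" | "needs-human-review" | "instant-fail";

type Flag =
  | { kind: "look-away"; durationSeconds: number; occurrences: number }
  | { kind: "person-entered-room"; durationSeconds: number }
  | { kind: "second-voice-detected" };

function applyGuidelines(flag: Flag): Verdict {
  switch (flag.kind) {
    case "look-away":
      // Brief glances while thinking are normal; long, repeated
      // look-aways suggest reading notes off-screen.
      if (flag.durationSeconds < 5) return "dismiss";
      return flag.occurrences > 3 ? "needs-human-review" : "dismiss";
    case "person-entered-room":
      // This rubric's (invented) answer to "should half a minute in the
      // room be an instant fail?"
      return flag.durationSeconds > 30 ? "instant-fail" : "needs-human-review";
    case "second-voice-detected":
      // Audio alone can't tell a helper from a barking dog or a video
      // game next door, so always escalate to a human.
      return "needs-human-review";
  }
}
```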
Once these extensive guidelines are in place, companies can customise which scenarios the AI shouldn’t flag at all, like background noise. But these customisations tend to be broad – you can’t currently set refined parameters such as a certain level or length of background noise. It’s either flagged or not.
The time people spend reviewing footage can be a big problem. But it can be offset by customising what the AI flags as a violation. This is critical for making the process efficient.
Steve Smith
Head of Global Sales for Solutions, Janison
With these customisations in place, after students have completed their tests and the system has raised its flags, reviewing can start.
To make things more efficient, some institutions share the load with the remote proctoring company itself, using a two-stage review process:
1. The remote proctoring company’s support staff review the initial list of flags, dismissing anything that isn’t a violation and using common sense when something comes up that isn’t covered by the guidelines. This massively reduces the number of flags.
2. They pass the filtered list on to the institution or company, whose staff review the remaining flags and decide whether they’re true violations.
This two-stage approach prevents the institution from having to review hundreds (or even thousands) of false flags. But it only works with a clear, thorough QA procedure for both parties to follow.
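In code terms, the two stages might look something like this sketch. The type and function names are ours, purely for illustration:

```typescript
// A sketch of the two-stage review described above. Type and function
// names are ours, purely for illustration.
interface RaisedFlag {
  studentId: string;
  description: string;
  clearlyInnocent: boolean; // e.g. background noise judged harmless in stage 1
}

// Stage 1: the proctoring company's support staff dismiss obvious non-violations.
function vendorTriage(flags: RaisedFlag[]): RaisedFlag[] {
  return flags.filter((flag) => !flag.clearlyInnocent);
}

// Stage 2: the institution's staff make the final call on whatever remains.
function institutionReview(flags: RaisedFlag[]): RaisedFlag[] {
  return flags; // in practice a manual step – the list is just far shorter now
}

const raw: RaisedFlag[] = [
  { studentId: "s-101", description: "dog barking next door", clearlyInnocent: true },
  { studentId: "s-102", description: "second voice reading answers aloud", clearlyInnocent: false },
];

console.log(institutionReview(vendorTriage(raw))); // only the suspicious flag survives
```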
If you’re not using human QA, you’ll be getting a lot of false flags because the algorithms are only so accurate. They’re intended to pick up a wide net of things, and the human QA refines those down to something manageable for the client. What it comes down to is communication between the two teams on setting the standards for QA.
Wesley Toms
Account Director, Janison
4. Online proctored exams provide fresh cheating opportunities
Despite the AI’s tireless vigilance, remotely-proctored exams give students new cheating opportunities. We covered these in a recent blog, so we’ll only summarise them here:
- Proxy test-taking. Paying someone else to take your test is big business. But it’s tackled by facial recognition tech.
- Notes and other devices. Students may be tempted to attach notes to a wall or place their mobile phones out of sight. The AI catches these with a room scan.
- Fake internet issues. Students might try to buy extra time by sabotaging their internet connection. To counter this, the remote proctoring software should keep working during dropouts (see the sketch after this list).
- Helper in the room. Someone can quietly enter the room and hold up notes. The AI catches this by watching for suspicious eye movements, like staring away from the screen.
- Earphones. Discreet earphones can play audio notes. The remote proctoring software can scan students’ ear canals for these devices before the test starts.
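On the “fake internet issues” point, here’s a minimal sketch of dropout-tolerant uploading: the client keeps caching recorded chunks locally and flushes the backlog when the connection returns. The in-memory queue and endpoint are simplifying assumptions – a real client would more likely persist chunks to disk or IndexedDB:

```typescript
// Cache-first uploading so a dropped connection neither loses nor pauses
// the recording. Chunks come from a MediaRecorder like the one sketched
// earlier; the endpoint is hypothetical.
const pending: Blob[] = [];

function handleChunk(chunk: Blob, socket: WebSocket): void {
  pending.push(chunk); // always cache first, so nothing is lost offline
  flush(socket);
}

function flush(socket: WebSocket): void {
  // Upload the backlog in order while the connection is up.
  while (pending.length > 0 && socket.readyState === WebSocket.OPEN) {
    socket.send(pending.shift()!);
  }
}

// When the browser reports connectivity has returned, reconnect and retry.
window.addEventListener("online", () => {
  const socket = new WebSocket("wss://proctor.example.com/resume"); // hypothetical
  socket.addEventListener("open", () => flush(socket));
});
```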
Tests held at exam centres have their weaknesses, and remotely-proctored at-home tests have theirs: data security concerns, false-flag reviews, and the other issues covered above. But for many companies and institutions, these weaknesses are overshadowed by the vast amounts of time and money this type of testing saves them.
The question is: are the weaknesses a dealbreaker for your company or institution? Hopefully, we’ve made the answer a little clearer for you.