Responsible use of AI with Proctor / Guidance for clients

Constructor Technology is committed to developing AI systems that are safe, trustworthy, and compliant with applicable laws, including the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (AI Act).


In the context of Proctor, our remote proctoring service, Constructor acts as an AI system provider under the AI Act and is responsible for ensuring that Proctor meets the applicable provider obligations.


We recognize that our clients may have different roles under the AI Act. In some cases, an institution using Proctor directly in an exam environment could be considered an AI system deployer. Under the AI Act, a deployer is the entity that decides to put an AI system into use and is responsible for how it is operated. In other cases, our clients may not themselves qualify as deployers but may work alongside or provide services to one. This guidance is therefore written on the assumption that you may act as a deployer, while acknowledging that this will not apply in every contractual relationship. Our aim is to provide sufficient transparency on Proctor’s design so that, wherever a deployer is involved in the chain, they have the information needed to understand their compliance responsibilities.


Why Human-in-the-Loop (HITL) oversight matters


Proctor is designed to support, not replace, human judgment. Its AI flags potential anomalies during exams, but final determinations must always be made by qualified human staff. This human-in-the-loop (HITL) approach is central to Constructor’s system design and a required safeguard under our contractual terms. It ensures that no student is subject to a decision based solely on automated processing that could significantly affect them (in line with Article 22 of the GDPR), and that Proctor functions as a supportive tool, providing preparatory signals and evidence, rather than making autonomous determinations.


Oversight considerations


Where an organization qualifies as an AI system deployer, human oversight should be actively embedded into exam workflows. Based on Proctor’s intended use, this typically includes:
•    reviewing AI-generated flags before applying sanctions or academic decisions,
•    ensuring clear escalation and appeal routes for students,
•    providing transparency to exam-takers on the role of AI and human oversight,
•    periodically auditing flagged incidents for fairness and bias, and
•    documenting oversight processes to support accountability.
These points are provided as guidance only. It remains at the discretion of your organization to determine how best to fulfill applicable legal requirements, taking into account your specific role and responsibilities.


Constructor’s commitment


Constructor will continue to test, validate, and update Proctor’s AI models in line with robust data governance practices. We provide documentation on the system’s functionality, limitations, and accuracy to support your compliance obligations. In addition, we maintain appropriate technical and organizational measures (TOMs) to safeguard personal data and ensure the secure operation of Proctor.


Shared goal


AI in proctoring should augment, not replace, human oversight. By combining Constructor’s provider obligations with the oversight responsibilities of those using Proctor, we can ensure that Proctor is used in a lawful, fair, and trustworthy manner, safeguarding both exam integrity and students’ rights.


Disclaimer


This guidance is provided for informational purposes only and does not constitute legal advice. Constructor makes no representation as to whether your organization qualifies as an AI system deployer under the AI Act. Compliance with applicable laws and regulations remains the responsibility of each organization.