• Running high-stakes certification or national exams using a quiz module that was built for formative classroom checks, not secure, proctored, high-volume exam delivery?

  • Spending hours on manual grading and item analysis that should be automated, with no reliable data on which questions are actually measuring what they are supposed to measure?

Online Assessment Platform Development

Custom assessment platforms for educational institutions, certification bodies, test prep companies, and corporate training teams -- question banks, adaptive testing engines, secure exam delivery, automated grading, and item-level analytics built around the specific requirements of your exam programme.

Built for organisations running high-stakes exams, professional certification programmes, or national testing where a standard LMS quiz module cannot meet the requirements for randomisation, proctoring, identity verification, and detailed performance reporting.

  • Structured question bank with item metadata, question type support, and editorial review workflow

  • Computer adaptive testing engine with IRT scoring and configurable stopping rules

  • Secure exam delivery with browser lockdown, answer randomisation, and remote proctoring integration

  • Automated grading, rubric-based marking workflows, and item-level analytics for question quality review

RaftLabs builds custom online assessment platforms for educational institutions, test prep companies, professional certification bodies, and corporate training teams. Projects cover question bank management with item metadata and review workflows, computer adaptive testing with IRT scoring, secure exam delivery with browser lockdown and remote proctoring integration, automated grading for objective question types and rubric-based grading for constructed responses, identity verification and AI-assisted anomaly detection, and item-level analytics for question quality review. Most assessment platform projects deliver in 12-24 weeks depending on scope, at a fixed cost with full source code ownership.

Vodafone
Aldi
Nike
Microsoft
Heineken
Cisco
Calorgas
Energia Rewards
GE
Bank of America
T-Mobile
Valero
Techstars
East Ventures
100+ products shipped
5+ EdTech clients
Fixed-cost delivery
12-14 week delivery cycles

Assessment is the most technically demanding part of a learning platform. Generic quiz tools were not built for it.

Standard LMS quiz modules handle basic multiple-choice questions for formative classroom checks. High-stakes exams, adaptive assessments, professional certification programmes, and national testing have a different set of requirements entirely. Randomised question delivery from a calibrated item bank. Browser lockdown preventing access to other applications during the exam. Identity verification at session start. Anti-cheat mechanisms that flag suspicious behaviour without producing false positives. Instant scoring with detailed result breakdowns. Item-level analytics that tell you whether a question is actually discriminating between strong and weak candidates -- or whether it is just hard for the wrong reasons. A standard quiz module cannot meet these requirements without significant workarounds that introduce their own problems.

Custom assessment platform development builds the question bank, exam delivery engine, proctoring integrations, and analytics layer around the specific requirements of your exam programme -- whether that is a corporate compliance assessment for 10,000 employees, a professional certification exam for a licensing body, or an adaptive diagnostic for a test prep product.

What we build

Question bank and item management

Structured question bank with item metadata covering subject, topic, subtopic, difficulty level, cognitive level, and question type for precise filtering and adaptive selection. Question types supported include multiple choice, multi-select, short answer, extended essay, drag-and-drop ordering, hotspot image selection, and audio or video response items. Rich text editing for questions and answer options, with support for mathematical notation, diagrams, and code blocks. Version control so question edits create a new version rather than overwriting the item, preserving historical performance data linked to the original. Editorial review workflow where new and revised questions pass through a review and approval process before they enter the live item pool.
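
To make the versioning model concrete, here is a minimal sketch of how a versioned question bank item might be structured. The field names and types are illustrative assumptions, not a production schema.

```ts
// Illustrative data model for a versioned question bank item.
// All field names are assumptions for this sketch, not a real schema.

type QuestionType =
  | "multiple_choice" | "multi_select" | "short_answer" | "essay"
  | "drag_and_drop" | "hotspot" | "media_response";

interface ItemMetadata {
  subject: string;
  topic: string;
  subtopic?: string;
  difficulty: 1 | 2 | 3 | 4 | 5;                        // editorial difficulty band
  cognitiveLevel: "recall" | "application" | "analysis";
  type: QuestionType;
}

interface QuestionItem {
  itemId: string;     // stable identity across versions
  version: number;    // incremented on every edit
  status: "draft" | "in_review" | "live" | "retired";
  metadata: ItemMetadata;
  stem: string;       // rich text; may embed maths or code blocks
  options?: string[];
  correct?: number[]; // indices into options, for objective types
}

// Edits never overwrite: a revision creates a new version, so historical
// performance data stays linked to the version candidates actually saw.
function reviseItem(current: QuestionItem, changes: Partial<QuestionItem>): QuestionItem {
  return {
    ...current,
    ...changes,
    itemId: current.itemId,       // identity preserved
    version: current.version + 1, // history preserved
    status: "draft",              // revisions re-enter the review workflow
  };
}
```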

Adaptive testing engine

Computer adaptive testing engine that adjusts question difficulty in real time based on candidate performance during the exam session. Item response theory scoring that estimates candidate ability after each response, with standard error tracked alongside the ability estimate so the engine knows when it has enough information to stop. Item selection algorithm that chooses the next question to maximise information gain at the current ability estimate rather than following a fixed sequence. Configurable stopping rules based on a target standard error, a maximum item count, or a fixed time limit. Content balancing to ensure the delivered exam covers the required topic distribution regardless of where the adaptive algorithm takes the ability estimate.
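
The core of the adaptive loop is compact. The sketch below shows the standard 2PL item response theory maths: the unused item that maximises Fisher information at the current ability estimate is selected next, and the session stops once the standard error falls below a target. The parameter names and thresholds are illustrative assumptions; a production engine would add content balancing and exposure control on top.

```ts
// Sketch of 2PL IRT item selection and stopping logic.

interface CalibratedItem {
  id: string;
  a: number; // discrimination parameter
  b: number; // difficulty parameter
}

// Probability of a correct response under the 2PL model.
const pCorrect = (theta: number, item: CalibratedItem): number =>
  1 / (1 + Math.exp(-item.a * (theta - item.b)));

// Fisher information the item contributes at ability estimate theta.
const information = (theta: number, item: CalibratedItem): number => {
  const p = pCorrect(theta, item);
  return item.a ** 2 * p * (1 - p);
};

// Choose the unused item that is most informative at the current estimate.
function selectNextItem(theta: number, pool: CalibratedItem[], used: Set<string>): CalibratedItem | undefined {
  let best: CalibratedItem | undefined;
  for (const item of pool) {
    if (used.has(item.id)) continue;
    if (!best || information(theta, item) > information(theta, best)) best = item;
  }
  return best;
}

// Standard error of the ability estimate is 1 / sqrt(total information).
// Stop once it reaches the target or the item cap is hit.
function shouldStop(administered: CalibratedItem[], theta: number, targetSE = 0.3, maxItems = 40): boolean {
  const totalInfo = administered.reduce((sum, item) => sum + information(theta, item), 0);
  const se = totalInfo > 0 ? 1 / Math.sqrt(totalInfo) : Infinity;
  return se <= targetSE || administered.length >= maxItems;
}
```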

Secure exam delivery

Browser lockdown mode that prevents candidates from switching tabs, opening other applications, copying text, or using browser developer tools during the exam session. Randomised question delivery order per candidate from the item pool, with answer option randomisation for multiple-choice items, so no two candidates see the same exam in the same order. Per-candidate time limits with automated submission when time expires so no exam remains open past the allowed duration. Exam session token validation so only authorised candidates with a valid registration can access the exam. Configurable item review and flag behaviour matching the rules of your specific exam -- some programmes allow review of answered items, others do not.
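
One detail worth making concrete: randomisation has to be reproducible for audit, so the shuffle is seeded per session rather than purely random. A sketch using the well-known mulberry32 PRNG with a Fisher-Yates shuffle; the seed source is an assumption for illustration.

```ts
// Deterministic per-candidate shuffling: seeding with the exam session
// ID gives every candidate a different order that can be reproduced
// exactly if the session is ever audited.

function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher-Yates shuffle driven by the seeded generator.
function seededShuffle<T>(items: readonly T[], seed: number): T[] {
  const rand = mulberry32(seed);
  const out = [...items];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Same seed, same order -- different candidates get different seeds.
const questionOrder = seededShuffle(["Q1", "Q2", "Q3", "Q4"], 20250117);
```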

Automated grading and instant results

Automated scoring for all objective question types -- multiple choice, multi-select, drag-and-drop, hotspot -- with results available immediately on submission. Configurable scoring rules including negative marking for incorrect responses, partial credit for partially correct multi-select responses, and domain-weighted scoring where different topic areas carry different weights in the total score. Rubric-based grading workflow for constructed-response and essay items, with blind marking support so markers score without seeing the candidate's identity or prior scores. Score release rules configurable by the exam programme -- immediate release, delayed release on a set date, or held pending moderation.
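
As an illustration of how configurable scoring rules compose, the sketch below applies partial credit and negative marking per multi-select item, then domain weights across the total. The rule shapes and caps are assumptions; real programmes define their own.

```ts
// Sketch of composable objective scoring rules.

interface ScoringRules {
  penaltyPerWrongOption: number;         // e.g. 0.25 deducted per incorrect selection
  domainWeights: Record<string, number>; // topic area -> weight in the total score
}

interface MultiSelectResponse {
  domain: string;
  correctOptions: Set<number>;
  selected: Set<number>;
}

// Partial credit: fraction of correct options chosen, minus a penalty per
// incorrect option, floored so one item cannot cost more than its own weight.
function scoreItem(r: MultiSelectResponse, rules: ScoringRules): number {
  const hits = [...r.selected].filter((o) => r.correctOptions.has(o)).length;
  const misses = r.selected.size - hits;
  const raw = hits / r.correctOptions.size - misses * rules.penaltyPerWrongOption;
  return Math.max(raw, -1);
}

// Domain-weighted total across all responses.
function totalScore(responses: MultiSelectResponse[], rules: ScoringRules): number {
  return responses.reduce(
    (sum, r) => sum + scoreItem(r, rules) * (rules.domainWeights[r.domain] ?? 1),
    0,
  );
}
```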

Proctoring and identity verification

Identity verification at session start using document scan and facial recognition match before the exam unlocks. Live remote proctoring integration with platforms including Proctorio and ProctorU for human-supervised high-stakes exams. Recorded session proctoring for exams where live supervision is not required but a recording for audit purposes is. AI-assisted anomaly detection flagging behaviours such as looking away from the screen, second face detection, or audio anomalies, with flags sent to human reviewers rather than triggering automatic disqualification. Incident log per exam session recording all flags with timestamps for review by the exam authority.
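
The incident log is simpler than it sounds: every flag is a timestamped record routed to a human reviewer, never an automatic verdict. A minimal sketch, with illustrative field names:

```ts
// Per-session incident log: flags accumulate for human review, and
// no single flag disqualifies a candidate on its own.

type FlagType = "gaze_away" | "second_face" | "audio_anomaly" | "tab_switch";

interface IncidentFlag {
  sessionId: string;
  type: FlagType;
  at: Date;           // timestamp within the exam session
  confidence: number; // detector confidence, 0 to 1
  reviewed: boolean;  // set once an exam authority reviewer has ruled on it
}

const incidentLog: IncidentFlag[] = [];

function recordFlag(flag: Omit<IncidentFlag, "reviewed">): void {
  incidentLog.push({ ...flag, reviewed: false });
}
```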

Assessment analytics and reporting

Item-level analytics for each question in the bank: difficulty index showing the proportion of candidates answering correctly, discrimination index measuring how well the item separates high-scoring from low-scoring candidates, and distractor analysis showing the selection rate for each incorrect answer option to identify plausible vs. implausible distractors. Candidate performance reports broken down by topic, cognitive level, and question type so score reports carry actionable diagnostic information beyond a total score. Cohort-level score distribution with pass rate, mean score, and standard deviation for each exam administration. Exportable reports formatted for accreditation bodies, regulatory authorities, or institutional stakeholders who need structured data rather than a dashboard view.
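
The two headline item statistics are straightforward to compute from response data. A sketch of the classical versions -- difficulty as the proportion correct, discrimination as a point-biserial correlation between item correctness and total score -- assuming the score distribution has non-zero variance:

```ts
// Classical item statistics from per-candidate response data.
// `correct` and `totalScores` are parallel arrays, one entry per candidate.

interface ItemResponses {
  correct: boolean[];
  totalScores: number[];
}

// Difficulty index: proportion of candidates answering correctly.
const difficultyIndex = (r: ItemResponses): number =>
  r.correct.filter(Boolean).length / r.correct.length;

// Point-biserial discrimination: how strongly getting this item right
// correlates with overall performance. High values mean the item separates
// strong candidates from weak ones; values near zero mean it is hard
// (or easy) for the wrong reasons.
function discriminationIndex(r: ItemResponses): number {
  const n = r.totalScores.length;
  const mean = r.totalScores.reduce((s, x) => s + x, 0) / n;
  const sd = Math.sqrt(r.totalScores.reduce((s, x) => s + (x - mean) ** 2, 0) / n);
  const p = difficultyIndex(r);
  const correctScores = r.totalScores.filter((_, i) => r.correct[i]);
  const meanCorrect = correctScores.reduce((s, x) => s + x, 0) / correctScores.length;
  return ((meanCorrect - mean) / sd) * Math.sqrt(p / (1 - p));
}
```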

Frequently asked questions

How does a computer adaptive test differ from a fixed-form exam?

A fixed-form exam delivers the same set of questions to every candidate in the same order. A computer adaptive test selects questions individually for each candidate based on their performance so far in the session. The practical difference is efficiency: an adaptive test can reach a reliable ability estimate with fewer questions than a fixed-form exam because every question is chosen to give the maximum amount of information at the candidate's current ability level. For certification exams, this means a shorter, more precise exam. For diagnostic assessments, it means a more accurate picture of where a student sits in the ability range without requiring them to answer questions far above or below their level. Adaptive testing requires a larger, calibrated item bank than a fixed-form exam -- the engine needs enough items at each difficulty level to select from.

What anti-cheat measures can be built into the platform?

Anti-cheat measures operate at several levels. At the browser level, lockdown mode prevents tab switching, copy-paste, screen capture, and access to other applications during the exam. At the item level, question order randomisation and answer option randomisation mean candidates sitting near each other see different question sequences. At the identity level, facial recognition verification at session start confirms the registered candidate is the person at the keyboard. At the session monitoring level, AI-assisted proctoring flags anomalous behaviour -- looking away from the screen, second face detection, environmental audio -- for human review. For high-stakes programmes, live human proctors can monitor sessions in real time with the ability to pause or terminate a session if required. The right combination for your exam programme depends on the stakes involved and the proctoring model you use.
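
At the browser level, page-scoped JavaScript can detect and report many of these signals, even though true prevention requires a locked-down browser or extension. A sketch of the detection side, where reportFlag is a hypothetical client function:

```ts
// Browser-level detection signals reported to the proctoring log.
// Page listeners can detect, not prevent; full lockdown needs a
// dedicated secure browser or extension.

declare function reportFlag(type: string, at: Date): void; // hypothetical client API

// Candidate switched tabs or minimised the exam window.
document.addEventListener("visibilitychange", () => {
  if (document.hidden) reportFlag("tab_switch", new Date());
});

// Exam window lost focus to another application.
window.addEventListener("blur", () => reportFlag("window_blur", new Date()));

// Attempt to copy question text out of the exam.
document.addEventListener("copy", (event) => {
  event.preventDefault();
  reportFlag("copy_attempt", new Date());
});
```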

Can the same platform handle both formative and summative assessment?

Yes. Formative assessment -- low-stakes checks for learning, module quizzes, practice tests -- and summative assessment -- high-stakes certification exams, end-of-course assessments, national tests -- have different requirements and the platform is configured accordingly. Formative assessments typically allow immediate feedback after each question, unlimited attempts, and no proctoring. Summative assessments typically restrict feedback until results are released, limit attempts, and apply security controls appropriate to the stakes. A single platform can host both assessment types with different configuration applied per exam. Many organisations use the same question bank for both formative practice assessments and the summative certification exam, with the summative exam drawing from a restricted subset of the bank.
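
In practice this is a per-exam configuration object rather than two platforms. A sketch of what the switchable settings might look like; the keys are illustrative:

```ts
// Per-exam configuration: one platform, two assessment modes.

interface ExamConfig {
  feedback: "after_each_question" | "on_score_release";
  attempts: number | "unlimited";
  proctoring: "none" | "recorded" | "live";
  scoreRelease: "immediate" | "scheduled" | "held_for_moderation";
}

const formativePractice: ExamConfig = {
  feedback: "after_each_question",
  attempts: "unlimited",
  proctoring: "none",
  scoreRelease: "immediate",
};

const summativeCertification: ExamConfig = {
  feedback: "on_score_release",
  attempts: 1,
  proctoring: "live",
  scoreRelease: "held_for_moderation",
};
```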

How long does an assessment platform take to build, and what does it cost?

A core assessment platform -- question bank, fixed-form exam delivery, automated scoring for objective items, and basic candidate reporting -- typically delivers in 12-14 weeks. A full-featured platform with a computer adaptive testing engine, remote proctoring integration, identity verification, rubric-based grading workflow, and item-level analytics typically runs 18-24 weeks. Cost depends on question type complexity, adaptive engine requirements, proctoring integration scope, and the reporting formats required by accreditation or regulatory bodies. We scope every project and give you a fixed cost before development starts.

Talk to us about your EdTech project.

Tell us the type of assessment programme you are running, your candidate volume, your current platform limitations, and the compliance or accreditation requirements that apply. We will scope the right platform and give you a fixed cost.