Using a quiz builder that supports multiple choice and true/false but can't support the structured written assessment, the scenario-based questions, or the adaptive difficulty adjustment that your qualification specification requires?
Marking written assessments manually in Word documents with a separate spreadsheet for recording grades -- a process that doesn't scale, produces inconsistent marking decisions, and can't support moderation without emailing files back and forth?
Assessment and Testing Software Development
Standard quiz builders handle multiple choice. When your assessment model requires adaptive testing that adjusts difficulty based on responses, structured written assessment with rubric marking and moderation, simulation-based scenarios, or psychometric testing with validated scoring algorithms, the quiz builder stops being useful and you need an assessment engine built for the specific question types and scoring logic your qualification requires.
We build custom assessment software for professional certification bodies, universities, training providers, and corporate L&D teams whose assessment requirements can't be met by a configured quiz plugin on their existing LMS.
Question bank management with your question types, difficulty ratings, tagging taxonomy, and psychometric item data
Adaptive testing engine that adjusts question difficulty based on learner responses to produce accurate competency estimates
Rubric-based marking for written and open-ended assessments with moderation workflow and inter-rater reliability reporting
Results reporting with item analysis, cohort performance statistics, and competency framework mapping
RaftLabs builds custom assessment and testing software for professional certification bodies, education providers, and corporate L&D teams whose assessment requirements go beyond what a standard quiz builder can handle. We build question bank management systems, adaptive testing engines, rubric-based marking platforms for written assessment, competency mapping tools, online proctoring integrations, and results reporting for qualifications, skills assessments, and compliance testing. Most assessment software projects are delivered in 10 to 16 weeks at a fixed, agreed cost.
100+ Software products shipped
· Fixed-cost delivery
· 10-16 week delivery cycles
· WCAG 2.1 AA accessibility compliance
When assessment requirements outgrow the quiz builder
The assessment requirements for a professional qualification are different from the assessment requirements for a corporate compliance module. A professional qualification may require adaptive testing that adjusts question difficulty based on the candidate's responses, producing a more precise competency estimate with fewer questions than a fixed-form test. It may require written assessment with structured rubric marking and a second-marker moderation process. It may require scenario simulations where the candidate's decisions in a branching narrative are scored against a marking guide. None of these are requirements a standard quiz builder was designed to meet.
We build custom assessment software for organisations whose assessment model requires a purpose-built tool. We have built assessment platforms for professional certification bodies, competency testing systems for regulated industries, and rubric-based marking platforms for higher education. We understand the psychometric principles that underpin valid and reliable assessment -- standard error of measurement, item difficulty and discrimination, inter-rater reliability -- and we build systems that produce assessment outcomes that stand up to scrutiny.
What we build
Question bank management
Question bank built around your question taxonomy -- the subjects, topics, subtopics, learning objectives, competency dimensions, and difficulty levels that characterise your item library. Question types configured to your assessment model: multiple choice with single and multiple correct answer options, true/false, matching, ordering, short answer, extended written response, hotspot, drag-and-drop, and scenario-based items with branching logic. Item metadata for each question -- the correct answer, the difficulty level, the discrimination index from previous administrations, the revision history, and whether the item has been retired from active use. Item review workflow for new questions written by subject matter experts -- the draft item reviewed by a second expert, the review comments recorded, and the item approved for use or returned for revision. Item usage tracking showing every assessment in which each question has appeared and the item statistics from each administration.
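As a rough illustration, the item record behind a bank like this usually carries the taxonomy tags, calibration statistics, status, and usage history together. The sketch below is a minimal Python data model; every field, type, and function name is illustrative rather than a fixed schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ItemStatus(Enum):
    DRAFT = "draft"
    IN_REVIEW = "in_review"
    APPROVED = "approved"
    RETIRED = "retired"


@dataclass
class BankItem:
    """One question in the bank, with the metadata used for selection, review, and reporting."""
    item_id: str
    question_type: str                     # e.g. "multiple_choice", "extended_response"
    taxonomy: dict[str, str]               # subject, topic, learning objective, competency dimension
    difficulty: float                      # difficulty rating or calibrated value from prior use
    discrimination: float | None = None    # discrimination index, None until first administration
    status: ItemStatus = ItemStatus.DRAFT
    revisions: list[tuple[date, str]] = field(default_factory=list)   # (date, change note)
    administrations: list[str] = field(default_factory=list)          # assessments the item appeared in


def approve_item(item: BankItem, reviewer_note: str) -> None:
    """Move a reviewed draft into active use, keeping the review decision in the revision history."""
    item.revisions.append((date.today(), f"approved: {reviewer_note}"))
    item.status = ItemStatus.APPROVED
```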
Adaptive testing engine
Adaptive testing algorithm configured to your psychometric model -- Item Response Theory (IRT) for professional certification assessments where the ability estimate needs to be both precise and comparable across administrations, or simpler adaptive branching for corporate skills assessments where a pass/fail decision at a defined threshold is the primary output. Item selection logic that chooses the next question from the bank based on the candidate's current ability estimate and the information value of available items -- presenting harder questions to candidates performing above the threshold and easier questions to candidates performing below it. Standard error of measurement tracking throughout the assessment so the test terminates when the estimate is sufficiently precise rather than after a fixed number of questions. Test blueprint constraints ensuring the adaptive algorithm selects items that cover the required content domains and cognitive levels regardless of the ability-based item selection. Results report showing the final ability estimate, the confidence interval, the pass/fail decision, and the content domain performance profile for the candidate.
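To make the selection and termination logic concrete, here is a minimal sketch of maximum-information item selection with a standard-error stopping rule, using a two-parameter logistic (2PL) information function for brevity. The parameter names, SEM target, and item cap are illustrative assumptions; a production engine would also apply the blueprint constraints described above.

```python
import math


def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response at ability theta (a = discrimination, b = difficulty)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))


def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)


def select_next_item(theta: float, remaining: list[dict]) -> dict:
    """Choose the unused item that is most informative at the current ability estimate.
    A real engine would first filter the candidates by content-domain blueprint constraints."""
    return max(remaining, key=lambda item: item_information(theta, item["a"], item["b"]))


def should_stop(administered: list[dict], theta: float,
                sem_target: float = 0.30, max_items: int = 40) -> bool:
    """Terminate when the standard error of measurement (1 / sqrt(total information))
    drops below the target, or when the item cap is reached."""
    total_info = sum(item_information(theta, i["a"], i["b"]) for i in administered)
    sem = 1.0 / math.sqrt(total_info) if total_info > 0 else float("inf")
    return sem <= sem_target or len(administered) >= max_items
```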
Rubric-based marking and moderation
Rubric marking interface presenting the candidate's written response alongside the marking rubric -- the criteria, the performance descriptors for each mark level, and the mark range for each criterion -- so the marker applies the rubric consistently rather than making overall impression judgements. Mark entry with a justification comment required for each criterion, creating a marking record that can be reviewed in the moderation process and used as evidence in any appeal. Second-marker moderation workflow where a second examiner marks the response independently, without seeing the first marker's marks or comments, then the two marks are compared and any significant discrepancy is flagged for senior examiner resolution. Inter-rater reliability reporting at the item and cohort level showing the degree of agreement between markers -- the data that examination boards use to evaluate marking consistency and inform marker training. Moderated grade release so candidate results are only released after the moderation process is complete, preventing provisional marks from being communicated before they are confirmed.
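As an illustration of the reliability reporting, the sketch below flags mark discrepancies beyond an agreed tolerance and computes Cohen's kappa for two markers. The tolerance value and function names are assumptions; a real deployment would use whichever agreement statistic your examination board specifies.

```python
from collections import Counter


def flag_discrepancies(first_marks: list[int], second_marks: list[int],
                       tolerance: int = 2) -> list[int]:
    """Indices of responses where the two markers differ by more than the agreed tolerance."""
    return [i for i, (m1, m2) in enumerate(zip(first_marks, second_marks))
            if abs(m1 - m2) > tolerance]


def cohens_kappa(first_marks: list[int], second_marks: list[int]) -> float:
    """Cohen's kappa for two markers using the same mark levels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(first_marks)
    observed = sum(m1 == m2 for m1, m2 in zip(first_marks, second_marks)) / n
    freq1, freq2 = Counter(first_marks), Counter(second_marks)
    expected = sum(freq1[k] * freq2.get(k, 0) for k in freq1) / (n * n)
    if expected == 1.0:   # degenerate case: both markers used a single mark level throughout
        return 1.0
    return (observed - expected) / (1.0 - expected)
```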
Online proctoring integration
Proctoring integration for assessments where the remote examination setting requires identity verification, browser lockdown, and invigilation. Integration with proctoring service providers -- ProctorU, Examity, or Proctorio -- so the proctored session is initiated from within the assessment platform rather than requiring the candidate to navigate to a separate service. Browser lockdown for assessments where tab switching, copy-paste, and access to other applications need to be restricted during the session. Identity verification at the start of the assessment session -- photo ID capture and webcam image matched to the registered candidate. Session recording for assessments where the proctoring provider reviews the session recording for suspected breaches rather than providing live invigilation. Invigilation alert system for automated flag detection -- candidate looking away from screen, another person visible in the webcam, phone use -- that generates alerts for review by a human invigilator.
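One common way to keep the platform vendor-neutral is to put the proctoring service behind an internal adapter interface, with one concrete implementation per provider. The sketch below is hypothetical and does not reflect the actual APIs of ProctorU, Examity, or Proctorio; all class and method names are illustrative.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ProctoredSession:
    session_id: str
    launch_url: str      # the candidate is redirected here to start the invigilated attempt


class ProctoringProvider(ABC):
    """Internal adapter boundary: the assessment platform calls this interface,
    and a concrete implementation translates the calls to a specific vendor's API."""

    @abstractmethod
    def create_session(self, candidate_id: str, assessment_id: str) -> ProctoredSession:
        """Register the attempt with the proctoring service and return the launch details."""

    @abstractmethod
    def fetch_flags(self, session_id: str) -> list[dict]:
        """Return automated incident flags (gaze away, second person, phone use) for human review."""
```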
Results and performance reporting
Candidate results report produced immediately on test completion for pass/fail assessments where the result is available automatically, or on release after moderation for assessments with a review process. Results report format configured to your qualification's requirements -- overall score and pass/fail decision, section scores and performance profiles, competency dimension ratings, or a narrative summary for coaching and development assessments. Cohort performance report for examination administrators showing the pass rate, the mean score, the score distribution, and the performance on each item or section for the current cohort compared to historical cohorts. Item analysis report showing the difficulty, discrimination, and distractor analysis for each item in the question bank, updated after each administration to support item review and bank maintenance decisions. Awarding and certification integration where the results feed directly into the certification system so successful candidates are awarded their qualification without a manual step between the assessment result and the certificate.
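For the item analysis piece, the classical statistics are straightforward to compute from response data. The sketch below derives item difficulty (proportion correct) and point-biserial discrimination; the function names are illustrative and assume dichotomously scored items with a mix of correct and incorrect responses.

```python
import statistics


def item_difficulty(item_scores: list[int]) -> float:
    """Classical difficulty (p-value): proportion of candidates answering the item correctly."""
    return sum(item_scores) / len(item_scores)


def point_biserial(item_scores: list[int], total_scores: list[float]) -> float:
    """Point-biserial discrimination: correlation between the 0/1 item score and the total score.
    Values near zero or negative flag items that fail to separate stronger from weaker candidates."""
    p = item_difficulty(item_scores)
    q = 1.0 - p
    correct = [t for s, t in zip(item_scores, total_scores) if s == 1]
    incorrect = [t for s, t in zip(item_scores, total_scores) if s == 0]
    sd = statistics.pstdev(total_scores)
    return (statistics.mean(correct) - statistics.mean(incorrect)) / sd * (p * q) ** 0.5
```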
Competency mapping and skills assessment
Competency framework integration for corporate skills assessments where the assessment results are mapped to specific competency dimensions rather than producing a single overall score. Assessment design tools for L&D teams building skills assessments aligned to their organisation's competency model -- the competency dimensions, the proficiency levels, and the question types for each dimension configured by the L&D team without developer involvement. Skills gap reporting for individuals and teams showing the current proficiency level against the target level for each competency dimension, identifying the specific areas where development is needed. Learning path recommendation from the skills assessment results -- the development resources and courses recommended based on the gap analysis for each learner. Cohort skills map for managers and L&D teams showing the aggregate proficiency across the team for each competency dimension, identifying team-level development priorities alongside individual development plans.
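As a simple illustration of the gap reporting, the sketch below compares assessed proficiency against the target level per competency dimension and aggregates a cohort view. The dictionary-based structures and function names are assumptions for illustration, not a fixed data model.

```python
def skills_gap(assessed: dict[str, float], targets: dict[str, float]) -> dict[str, float]:
    """Gap per competency dimension: target proficiency minus assessed proficiency, floored at zero."""
    return {dim: max(targets[dim] - assessed.get(dim, 0.0), 0.0) for dim in targets}


def cohort_skills_map(individual_scores: list[dict[str, float]]) -> dict[str, float]:
    """Team-level view: mean assessed proficiency per dimension across all assessed individuals."""
    dimensions = {dim for scores in individual_scores for dim in scores}
    return {dim: sum(s.get(dim, 0.0) for s in individual_scores) / len(individual_scores)
            for dim in dimensions}
```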
Frequently asked questions
When do we need custom assessment software rather than the quiz tool in our LMS?
LMS quiz tools handle formative assessment and simple summative assessment well -- multiple choice questions, automatic marking, and a pass/fail decision against a set score threshold. Custom assessment software is the right answer when the assessment model requires question types the quiz builder doesn't support, when the psychometric requirements of the qualification demand features like adaptive testing or standard error of measurement tracking, when written assessment needs a rubric marking and moderation workflow, or when the assessment data needs to feed into a competency framework or a certification system in a specific format. For professional qualification bodies, the validity and reliability requirements of high-stakes assessment typically require a purpose-built tool.
How do you implement adaptive testing?
Adaptive testing for professional qualifications typically uses Item Response Theory to model the relationship between a candidate's latent ability and their probability of answering each item correctly. We implement the IRT model specified by your psychometric team -- most professional certification applications use the three-parameter logistic model or the Rasch model -- and the termination criterion based on the standard error of measurement reaching an acceptable level. The item bank requires sufficient items at each difficulty level to support adaptive administration, and we advise on the minimum bank size and composition during the design phase. The adaptive algorithm is validated against real item bank data before go-live, and the implemented model is documented for review by your psychometric consultants.
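For reference, the 3PL response function mentioned above fits in a few lines; this is a generic statement of the model rather than a production implementation.

```python
import math


def p_3pl(theta: float, a: float, b: float, c: float) -> float:
    """Three-parameter logistic (3PL) model: probability of a correct response at ability theta.
    a = discrimination, b = difficulty, c = pseudo-guessing lower asymptote.
    The Rasch model is the special case with a = 1 and c = 0."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```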
Can the assessment platform integrate with our LMS or student information system?
Yes. LMS integration connects the assessment platform to the learner's course record so assessment scores and completion status are recorded in the LMS alongside the other course activities. SIS integration connects assessment results to the student record for grade management, regulatory reporting, and certification. Integration with an external learning record store (LRS) using xAPI allows assessment results to be captured alongside other learning events in a unified record. We document the integration specification before development begins, including the data that flows, the direction, and the trigger events.
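As an illustration of the xAPI piece, an assessment result reaches the LRS as a statement with an actor, verb, object, and result. The sketch below builds a minimal statement using the standard ADL "passed" and "failed" verbs; the function name and the fields chosen beyond that core structure are illustrative.

```python
def assessment_result_statement(candidate_email: str, activity_id: str,
                                raw_score: float, max_score: float, passed: bool) -> dict:
    """Minimal xAPI statement recording an assessment result, ready to POST to the LRS."""
    verb = "passed" if passed else "failed"
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{candidate_email}"},
        "verb": {"id": f"http://adlnet.gov/expapi/verbs/{verb}",
                 "display": {"en-US": verb}},
        "object": {"objectType": "Activity", "id": activity_id},
        "result": {"score": {"raw": raw_score, "max": max_score,
                             "scaled": raw_score / max_score},
                   "success": passed,
                   "completion": True},
    }
```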
How much does custom assessment software cost?
A question bank and assessment delivery platform for a single qualification type -- multiple question formats, automatic marking, and a results report -- typically runs $30,000 to $60,000. Adding adaptive testing, rubric marking with moderation workflow, and proctoring integration typically brings the total to $60,000 to $120,000. Competency mapping and skills assessment tools for corporate L&D typically run $35,000 to $75,000 depending on the complexity of the competency framework and the reporting requirements. We scope every project before pricing. Fixed cost only.
Talk to us about your assessment software project.
Tell us about your assessment model -- the question types, the scoring logic, the marking process, and the qualification requirements. We'll scope an assessment platform built for how your qualification actually works.