Trust, Compliance, & Responsible Innovation

AuctusIQ is committed to the ethical and responsible use of our predictive talent assessments. We provide clear information on our approaches to AI, data handling, and compliance for Legal, HR, and Compliance professionals. This page outlines our dedication to transparency, fairness, and legal adherence.

Our Human-Centric, Research-Driven Approach

Our assessments are built on extensive scientific research, not machine learning. Our core algorithms are the product of a human-led development process grounded in established research principles. Our assessments draw from a robust database of questions that has been refined over many years, based on how participants' responses in each role relate to their performance on the job.

The questions, response options, and scoring have been validated across studies to measure the talents and competencies critical for success. This human-centric research approach guides how we address trust & fairness concerns.

Addressing Key Concerns

1. Unintended Bias or Discrimination

We have completed a rigorous, independent third-party audit confirming that our assessments do not introduce bias and that they meet stringent legal requirements. This audit is crucial for mitigating risk, especially in high-stakes applications such as selection. We can also provide technical reports on assessment validity, reliability, and fairness to ensure decisions are sound and equitable.

2. Lack of Model Transparency / Explainability

Our algorithms are research-backed and expert-developed, so their logic is transparent rather than a "black box." They focus on identifiable talents and competencies and are scored in a fully transparent, reviewable manner. Upon completing an assessment, participants can also receive personalized reports with clear insights into their strengths and development areas.
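
To illustrate what rule-based, reviewable scoring can look like in practice, the sketch below applies pre-defined point values to each response option and maps the total to a competency band. The item names, point values, and bands are hypothetical examples for illustration only, not AuctusIQ's actual scoring rules.

    # Hypothetical illustration of rule-based, reviewable scoring.
    # Item names, point values, and bands are invented for this sketch
    # and do not reflect AuctusIQ's actual assessment content.
    SCORING_RULES = {
        "item_01": {"A": 0, "B": 1, "C": 2, "D": 3},  # fixed points per response option
        "item_02": {"A": 3, "B": 2, "C": 1, "D": 0},
    }
    COMPETENCY_BANDS = [(0, 2, "Developing"), (3, 4, "Proficient"), (5, 6, "Strength")]

    def score_responses(responses):
        """Apply fixed lookup-table rules; no inference or model adaptation."""
        total = sum(SCORING_RULES[item][answer] for item, answer in responses.items())
        band = next(label for low, high, label in COMPETENCY_BANDS if low <= total <= high)
        return total, band

    print(score_responses({"item_01": "C", "item_02": "A"}))  # -> (5, 'Strength')

Because every point value is drawn from a fixed, pre-defined table, each score can be traced back to the published rules, which is what makes this style of scoring reviewable by a third party.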

3. Data Leakage Risks

We minimize data collection, typically requiring only a name and email address to administer an assessment. Secure verification processes are in place, and access to results is controlled. We adhere to industry-standard security controls (e.g., SOC 2 principles) and data protection regulations (e.g., GDPR principles). Participant data can be deleted upon request and is anonymized in storage over time. Participant data is never shared with any AI/LLM vendors.
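
As a minimal sketch of the data-handling practices described above, the example below shows a participant record being deleted on request and stripped of direct identifiers once a retention window has passed. The field names and the retention period are assumptions made for illustration and do not describe AuctusIQ's actual systems.

    # Hypothetical sketch of data minimization, deletion on request,
    # and time-based anonymization; field names and the retention
    # window are illustrative assumptions only.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    RETENTION = timedelta(days=365)  # assumed window before identifiers are removed

    @dataclass
    class ParticipantRecord:
        name: str | None
        email: str | None
        completed_at: datetime
        scores: dict = field(default_factory=dict)  # results retained for validation research

    def anonymize_if_expired(record, now):
        """Strip direct identifiers once the retention window has elapsed."""
        if now - record.completed_at > RETENTION:
            record.name = None
            record.email = None
        return record

    def delete_on_request(store, participant_id):
        """Honor a participant's deletion request by removing the full record."""
        store.pop(participant_id, None)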

4. Regulatory Compliance Uncertainty

We proactively manage hiring, assessment, and AI regulatory needs. We understand evolving Automated Employment Decision Tool (AEDT) laws and demonstrate our commitment through successful third-party compliance audits. This proactive validation offers clients enhanced legal defensibility and a compliance advantage.

Additionally, we have conducted a comprehensive assessment of our system's alignment with the recently enacted EU AI Act. Based on the Act's definitions and the guidance provided, particularly concerning what constitutes an "AI system," we have concluded that our solution is not subject to its provisions. Our assessments function solely on the basis of explicit, pre-defined rules and do not employ inferential or adaptive capabilities, which are key characteristics of AI systems as defined by the Act.

Our Commitment to You

  • Our third-party audits and validation studies provide strong legal defensibility.
  • We partner with clients to ensure responsible use of our tools in line with evolving laws.
  • We will continue to proactively navigate regulations to meet evolving compliance requirements.

We welcome discussions about our Trust & Compliance framework and can provide supporting documentation for your due diligence.
