Safehire.ai is an AI-powered background screening service that scans deep and dark web sources to identify hidden risks in potential candidates. Our platform enhances safer recruitment practices and helps schools and organisations stay compliant with regulations.
Regulated activity is a legal term with real safeguarding weight. If your staff or volunteers are in regulated activity with children, they are working in roles so critical to safety that the law prohibits barred individuals from performing them.
In plain terms: regulated activity is the type of work no barred individual may ever do. It covers roles involving regular, direct contact with children – like teaching, training, caring for, or supervising them – whether that happens in a school, nursery, children’s home, or even while driving a school vehicle.
DBS checks surface past convictions, but they miss digital footprints. Safehire.ai uncovers deeper online risks, such as:
- Radical or extremist associations
- Access to child sexual exploitation (CSE) material
- Hate speech or abusive digital behaviour
It complements, not replaces, the Safer Recruitment process.
🐋 ‘Echo’ is our custom-trained AI search engine, developed by former military intelligence analysts. It uses machine learning to correlate identifiers across hard-to-reach parts of the internet.
While DBS (or police) checks are invaluable, they primarily identify past criminal activity. Safehire.ai complements this by examining publicly and commercially available online data through non-intrusive searches, uncovering risks that traditional checks might miss. Together, they provide a comprehensive approach to safer recruitment.
Yes. While we’re focused on safeguarding in schools and childcare settings, Safehire.ai is suitable for any sector that prioritises trust, safety, and reputational risk, including, but not limited to, the legal sector, healthcare, charities, NGOs, and sporting and youth organisations.
Yes. We can support rolling re-checks or targeted reviews for existing staff, governors, contractors, or volunteers with our ‘Workforce Assurance’ plan. This proactive approach helps organisations stay ahead of emerging risks and reinforces a culture of continuous safeguarding.
Red Flags require careful, structured handling; they’re not automatic disqualifiers. You should:
- Review the report with your DSL or compliance lead
- Conduct a fair and open interview with the candidate
- Seek legal advice if needed
- Document every step of your decision-making
- Report concerns to authorities if appropriate
In the event you receive a Red Flag Report, there is detailed guidance on the platform to support you.
The deep & dark web is vast and unquantifiable, far beyond anything a Google search can reach. We search billions of open source records curated by threat intelligence experts, surfacing hidden risks that traditional checks miss, especially for those working with children.
We use Echo – our AI-powered e-whale to dive into digital depths most never reach. Echo scans data lakes curated by cybersecurity experts who know exactly where bad actors lurk. Think shady forums, hidden marketplaces, and encrypted platforms. These aren’t your average Google searches. By mapping associations across these dark corners, Echo helps surface potential risks that traditional checks simply miss.
Once you provide candidate information, our platform searches for related digital activity that may raise safeguarding concerns. We analyse the results and produce a report. If there are any Red Flags, we’ll walk you through the next step. Safeguarding-first and legally sound.
We combine automation with expert oversight to deliver trusted, safeguarding-first results. Our threat intelligence analysts have years of experience in online risk assessment. Every red flag report is manually reviewed before being released, ensuring accuracy, relevance, and legal compliance.
Green reports – where no concerns are found – are processed automatically through our filtering system, then sample-checked to maintain high standards and spot anomalies.
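The triage described above can be sketched as a simple decision flow. This is an illustrative sketch only, not Safehire.ai’s actual implementation; the function name, inputs, and sample rate are hypothetical:

```python
import random

def triage_report(findings, sample_rate=0.05):
    """Illustrative triage: reports with potential red flags are routed
    to an analyst for manual review before release; clean ('green')
    reports are auto-released, with a random sample routed for
    quality-assurance spot checks."""
    if findings:                       # any potential red flags found
        return "manual_review"         # analyst reviews before release
    if random.random() < sample_rate:  # sample-check a share of green reports
        return "sample_check"
    return "auto_release"

# A report with one flagged association goes to manual review
print(triage_report(["flagged_forum_association"]))
```

The key design point mirrored here is that no red-flag result reaches a customer without human review, while clean results flow through automatically subject to spot checks.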
We regularly conduct bias audits of both the AI and our analysts.
Yes. We monitor Telegram channels for potential red flags related to paedophilia, radicalisation, and extremist or hate-based content. While Telegram is a legal platform, it is widely used by individuals involved in illegal or harmful activities due to its encrypted and anonymous nature. By analysing these channels, we can detect associations or behaviours that may indicate a safeguarding risk.
It’s unreliable and often misleading. Scraping personal social media lacks context and opens the door to bias and misjudgement. We focus on verifiable dark web sources where genuine safeguarding risks are more likely to hide. No assumptions, just evidence.
Checks should be run only on shortlisted candidates, ideally before interview. This allows you to:
- Discuss any concerns during the interview
- Avoid unconscious bias in early screening
- Keep the process proportionate and fair
This approach supports both ethical hiring and legal compliance.
Yes. Safehire.ai operates fully within UK law and aligns with South Africa’s Protection of Personal Information Act (POPIA). We use only publicly available and commercially traded intelligence, with no hacking, intrusion, or deception. Think of it as shining a light into harder-to-reach parts of the web, to surface lawful and ethical safeguarding information.
Yes. Safehire.ai complies with UK GDPR and POPIA requirements. Only publicly available and commercially traded data is retrieved. Customers (schools and governing bodies) retain full control over how flagged results are managed in their recruitment process.
Yes. Transparency is key. As part of your safer recruitment process, candidates are informed and give consent for digital background screening, in line with both GDPR and POPIA principles of fairness and lawfulness. We can provide you with candidate-facing language and support to ensure the process is clear, fair, and compliant.
Yes! Safehire.ai aligns with Keeping Children Safe in Education (KCSIE) guidance, which recommends online background checks for education sector recruitment.
For South African customers:
While South Africa does not have the exact equivalent of the UK’s KCSIE, Safehire.ai supports best practice in safe recruitment, helping schools meet their duty of care obligations and enhance safeguarding measures.
No. Safehire.ai respects the right to privacy while ensuring safer recruitment practices. Our searches only use publicly available data, aligning with legal and ethical standards.
For South African customers:
No. Safehire.ai respects the constitutional right to privacy while balancing the duty to protect learners. Our searches only use lawful, publicly available data, consistent with POPIA’s conditions for lawful processing of personal information.
Absolutely not! Safehire.ai never engages in hacking or unauthorised system access. We only use:
- Publicly available data sources
- Commercially traded, lawful data sources
- Strict ethical and legal compliance frameworks
No. Our intelligence partners may observe exposed information on open forums to identify risks, but they do not engage in or facilitate criminal activity. Access is passive and strictly limited to information already in the public domain, consistent with lawful threat intelligence practices.
We act purely as a data processor, handling candidate data strictly under customer instruction and in line with GDPR and POPIA requirements for security and data minimisation. Our intelligence partners follow international best practice, ensuring the integrity, confidentiality, and lawful processing of candidate information.
No. GDPR/POPIA and fair recruitment practices require that candidates are treated lawfully and fairly. Safehire.ai results must always be considered as part of a wider recruitment process that includes:
- DBS checks (UK customers)
Safehire.ai operates on an annual subscription model: you purchase a plan with prepaid background check reports. Extra searches are available as pay-as-you-go top-ups, priced according to your selected plan.
Add-ons such as Cyber Exposure reports are available for purchase via our payment portal on the website or the platform.
Yes! If you use all your reports, you can upgrade to a higher plan at any time. You’ll simply pay the new rate moving forward.
Don’t forget, you can also opt for pay-as-you-go top-ups if you just need a few more reports and don’t want to upgrade.
Unused reports expire at the end of your 12-month subscription period. You’ll need to renew your subscription to continue using Safehire.ai.
Our proprietary AI automation and streamlined processes allow us to deliver high-quality threat intelligence at a cost-effective price, while human analysts review any concerns raised by our AI-driven dark web searches. We pass these savings on to schools and organisations to ensure affordability without compromising quality.
