Since 1 September 2022, Keeping Children Safe in Education (KCSIE) has advised schools and colleges that they "should consider" carrying out online searches, including social media, as part of their due diligence on shortlisted candidates. In KCSIE terms, "should" means the advice is to be followed unless there is a good reason not to, and that reason is documented.

So what do most schools do? Default to informal 'Googling': scanning the first few pages of results for anything that catches an untrained eye. For most, it is the best they can do, but it is highly subjective, raises data protection concerns, and opens the door to bias and legal challenge. Realising this, some have turned to commercial background checks that scan the surface web - offering a sense of safety without real depth. The result? Better than a Google search, but one that often misses the critical red flags that end in a front-page headline and lifelong harm to a child.
TL;DR
- KCSIE makes online vetting a clear expectation, but not all screening is created equal.
- Standard social media tools often miss high-risk content and create legal exposure.
- Safehire.ai offers intelligence-led, safeguarding-first digital vetting built for schools.
The Illusion of Safety
Those who wish to gain access to children or vulnerable adults for nefarious purposes often know how to play the game. They curate clean, professional-looking online personas - sometimes even aspirational ones. A surface web search can paint the picture of a 'teacher of the year' candidate, while concealing an entirely different reality beneath.
This is one of the biggest risks with standard checks: they assess what the candidate wants to show, not what they may be trying to hide.
Most social media vetting companies rely on automated scrapers. They scan public posts on Facebook, Instagram and X (formerly Twitter), flagging keywords.
It sounds professional. But in reality…
- It is Surface Level: Most vetting services are limited to the "surface web" - the parts of the internet indexed by standard search engines. They scrape what is publicly visible. If a candidate has a digital footprint on the deep web, or participates in toxic forums on the dark web, a standard social media scraper will simply miss it.
- It is Easy to Hide: This is the "locked profile" problem. If a candidate sets their social media accounts to private, a standard screening service will often return a report saying "no entries found." This gives schools a false sense of security. The absence of evidence is not evidence of safety; it often just means the candidate is good at privacy settings.
- It is Prone to Bias and "Noise": Many screening services provide reports based on broad keywords - flagging swearing, alcohol use, or strong opinions. While unprofessional, these are often not safeguarding issues. Receiving a report full of "lifestyle" noise can prejudice a hiring manager without providing any real insight into whether the candidate is a danger to children.
- Exposure to Discrimination Claims: Even when using a third party, you are the data controller. If the report you receive includes information about a candidate’s protected characteristics (such as a disability or their sexuality) which the screening company scraped from a bio or photo, your recruitment process is legally compromised. You have seen information you cannot unsee.
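To see why broad keyword matching generates "lifestyle" noise, consider a minimal sketch. This is purely illustrative (it is not any vendor's actual system, and the word list is invented): a naive flagger marks a harmless celebration post while a genuinely ambiguous post containing no listed keyword passes clean.

```python
# Toy illustration of naive keyword flagging (hypothetical word list,
# not any screening vendor's real implementation).
FLAG_WORDS = {"wine", "beer", "damn", "party"}

def flag_post(post: str) -> list[str]:
    """Return the keywords a naive scanner would flag in a post."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return sorted(words & FLAG_WORDS)

print(flag_post("Celebrated my degree with a glass of wine!"))        # ['wine']
print(flag_post("Met up with the group from the private forum again"))  # []
```

The first result is exactly the kind of irrelevant "lifestyle" hit that clutters reports, while the second, arguably more noteworthy, post is invisible to the scanner - keyword presence is a poor proxy for safeguarding risk.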
How Safehire.ai is Different
Safehire.ai is not a social media screening company. We are an open-source intelligence platform built specifically for safeguarding.
Here is how our platform differs from standard vetting services:
We Look Where Scrapers Can't
We do not rely on a candidate having a public Instagram profile. Safehire.ai accesses open source data from the deep and dark web. We identify risks that bad actors actively try to hide - such as links to extremist content, hate speech forums, or associations with child sexual abuse material (CSAM). These are the risks that actually matter to schools, and they rarely appear on the first five pages of Google search results or in social media scrapes.
We Filter the "Noise" and Protect Against Bias
Standard screening reports can be overwhelming and irrelevant. Safehire.ai uses a combination of AI and human analysts to filter out "lifestyle" noise. We are driven by data minimisation and focus purely on safeguarding-relevant behaviours.
We Verify the Identity
Purely automated services often struggle with common names, leading to false positives (flagging the wrong person) or false negatives (missing the right one). Safehire.ai’s AI cross-references billions of open source data points to map aliases and usernames, ensuring the digital footprint actually belongs to your candidate. A human analyst then checks its homework.
The Shift to Intelligence-Led Vetting
The requirement to perform online checks is an opportunity to tighten the net around those who seek to harm children. But relying on basic social media screening is like trying to catch water in a sieve: it misses the dangerous elements while catching information that may land you in data protection hot water.
Deciding to outsource your checks to professionals is a good first step. The next is ensuring that the supplier you have chosen is actually looking for the right things.
Standard social media screening is often just a "digital reference check" - looking for bad behaviour in public. Safehire.ai is a safeguarding tool - looking for hidden risks in the shadows. When the safety of children is the priority, a surface-level scan is simply not enough.
Safehire.ai offers a sophisticated layer of protection that matches the gravity of the safeguarding duty: ensuring that the person walking into your classroom is exactly who they say they are, with no hidden history that could put your children at risk.
Ready to see how it works? Book a demo with Safehire.ai today.





