London-based AI-powered visual moderation company Unitary announced on Tuesday that it has secured $15M (€14.31M) in a Series A funding round.
Led by Creandum, the round also attracted participation from Plural and Paladin Capital Group.
“We’ve always focused on how we might be able to use AI to ensure a safer online experience, and keep up with the pace of internet content,” says Sasha Haco, Co-Founder and CEO of Unitary.
The startup, co-founded by Haco and James Thewlis, has developed AI technology that comprehends videos and images much like humans do, analysing multiple signals to understand content and context.
“This funding enables us to stay at the forefront of video understanding research and fulfil our mission of making the internet better,” Haco adds.
“Unitary has emerged as clear early leaders in the important AI field of content safety, and we’re so excited to back this exceptional team as they continue to accelerate and innovate in content classification technology,” says Gemma Bloemen, Principal at Creandum and Unitary board member.
The funding coincides with Unitary’s expansion across multiple languages, the doubling of its team size, and a threefold increase in daily video classification, from 2 million to 6 million videos. It will support research and development, further team expansion, and partnerships with leading social platforms and brand safety organisations.
Video content, which already makes up 80 per cent of internet traffic, is expected to grow tenfold between 2020 and 2025 — a volume and complexity that human reviewers alone cannot keep pace with.
“In an online world, there’s an immense need for a technology-driven approach to identify harmful content,” says Christopher Steed, Chief Investment Officer of Paladin Capital Group.
Unitary’s machine learning solution combines visual, aural, and textual content analysis to help platforms tackle the challenge of online content moderation. This capability is crucial for platforms adapting to new regulations, such as the UK’s Online Safety Bill and the EU’s Digital Services Act, which demand more proactive content moderation.
The firm has already achieved “seven figures” of annual recurring revenue, according to Ian Hogarth, Partner at Plural and Unitary board member. That momentum prompted the quick follow-up round, coming only months after the company raised $8M (€7.63M) in March.
“From the start, Unitary had some of the most powerful AI for classifying harmful content,” says Hogarth. “We’re confident that this is the team that is set to redefine how we ensure visual content safety in the digital age.”
Multimodal model utilisation
Unitary AI’s innovation lies in its multimodal models. Multimodal AI research has been ongoing for years, but it’s now gaining more practical applications. Unitary is positioned at the intersection of advanced research and real-world applications in this evolving field.
“Rather than analysing just a series of frames, in order to understand the nuance and whether a video is [for example] artistic or violent, you need to be able to simulate the way a human moderator watches the video. We do that by analysing text, sound and visuals,” says Haco in a recent interview.
Unlike existing tools that focus on parsing data of one type at a time, Unitary’s approach simulates how a human moderator watches videos, reducing false flags and improving accuracy. Customers can customise parameters for moderation and often use Unitary alongside human teams, relieving moderators of some of their workload.
While visual-only models have been effective to some extent, the introduction of multimodal moderation presents a growth opportunity in the market, addressing the ongoing content moderation challenges faced by social platforms, games companies, and digital channels.