Who we are
Founded by a team of cybersecurity experts and AI researchers, Cyberoo.AI has quickly become a trusted name in scam prevention, known for our proactive approach to scam detection and eradication, and our commitment to staying ahead of cybercriminals.
-
INTEGRITY
We uphold the highest ethical standards in all our operations, ensuring trust and transparency in our relationships with clients and partners.
-
INNOVATION
We continuously push the boundaries of what's possible in cybersecurity, leveraging the latest advancements in AI and machine learning.
-
COLLABORATION
We believe in the power of teamwork and partnership, working closely with our clients and the wider cybersecurity community to create safer digital ecosystems.

Our stance on social responsibility
At Cyberoo.AI, we believe that organisations have a fundamental responsibility to protect their customers from scams and fraud, and safeguard their data against threats. This belief drives our mission and shapes our products and services.
We are committed not only to providing top-tier solutions, but also to educating the public about online safety and advocating for stronger digital protection measures. We aim to empower individuals to identify and report potential threats, and businesses to actively combat scams, fraud, and digital risks, contributing to a safer online environment for all.
Research Top Picks
Ransomware Reloaded:
Re-examining Its Trend, Research and Mitigation in the Era of Data Exfiltration
We observed that ransomware no longer exists simply as an executable file, nor is it limited to encrypting files (data loss): data exfiltration (data breach) is the new norm, espionage is an emerging theme, and the industry is shifting its focus from technical advancements to cyber governance and resilience. We proposed prioritising data exfiltration over data encryption, treating ransomware in a business-practical manner, and recommended closer research collaboration with industry.
From COBIT to ISO42001:
Evaluating cybersecurity frameworks for opportunities, risks, and regulatory compliance in commercializing large language models
This study investigated the integration readiness of four predominant cybersecurity Governance, Risk and Compliance (GRC) frameworks – NIST CSF 2.0, COBIT 2019, ISO 27001:2022, and the latest ISO 42001:2023 – in addressing the opportunities, risks, and regulatory compliance requirements of adopting Large Language Models (LLMs), using qualitative content analysis and expert validation.
From Gemini to OpenAI Q*:
A Survey of Reshaping the Generative Artificial Intelligence (AI) Research Landscape
This comprehensive survey explored the evolving landscape of generative Artificial Intelligence (AI), with a specific focus on the transformative impacts of Mixture of Experts (MoE), multimodal learning, and the speculated advancements towards Artificial General Intelligence (AGI).
Harnessing GPT-4 for generation of cyber security GRC policies:
A focus on ransomware attack mitigation
This study investigated the potential of Generative Pre-trained Transformers (GPTs), a family of state-of-the-art large language models, in generating cybersecurity policies to deter and mitigate ransomware attacks that perform data exfiltration. Our findings demonstrated that GPT-generated policies can outperform human-generated policies in certain contexts, particularly when provided with tailored input prompts.