MAS Information Paper on Cyber Risks Associated with Deepfakes (18 September 2025)

The paper aims to raise financial institutions' (FIs') awareness by providing a comprehensive overview of the emerging threats and risks posed by deepfakes, their potential impact on the financial sector, and possible mitigation measures that FIs could implement to address these evolving risks. Deepfakes affect FIs across several risk areas, such as market risk, cyber risk, fraud risk, regulatory risk and reputational risk.

The paper focuses on three key areas, namely: (i) defeating biometric authentication, (ii) carrying out social engineering for impersonation and scams, and (iii) facilitating the dissemination of misinformation and disinformation.

  1. Defeating Biometric Authentication
    • Risk – Deepfakes can defeat biometric authentication systems (e.g., facial, fingerprint, palm) during account creation or login. Attackers use synthetic images, videos, or audio to impersonate real users, create fake identities/accounts and generate fraudulent documents.
    • Possible Mitigation Measures
      • Document Verification: Request a video upload instead of validating a static image of ID documents. Accept only high-resolution images. Apply image forensic techniques.
      • Watermarking and Fingerprinting: Embed hidden markers in media to detect tampering.
      • Liveness Detection: Use motion analysis, texture analysis, thermal imaging, 3D depth analysis and behavioural analysis. Prompt the user to perform specific actions or respond to stimuli during verification.
      • Security Testing: Regularly test resilience of biometric systems.
      • Endpoint Protection: Detect and block fake media injected during live verification.
      • Encryption: Use encryption protocols to secure biometric data during transmission, storage, and use.
      • Cancellable Biometrics: Store biometric data in a way that prevents reuse if compromised.
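The cancellable-biometrics measure above can be illustrated with a minimal sketch. The `enroll`/`verify` helpers and the quantisation step below are illustrative assumptions, not a design prescribed by the paper: the idea is simply to store a salted one-way transform of the biometric features, so a compromised template can be revoked and re-issued with a new salt rather than exposing the biometric itself.

```python
import hashlib
import secrets

def _transform(feature_vector: list[float], salt: str) -> str:
    # Quantise features so small capture-to-capture noise maps to the same
    # bucket, then hash together with the user-specific salt. (Real systems
    # use dedicated template-protection schemes; this is only a sketch.)
    quantised = ",".join(str(round(x, 1)) for x in feature_vector)
    return hashlib.sha256((salt + quantised).encode()).hexdigest()

def enroll(feature_vector: list[float]) -> tuple[str, str]:
    """Return (salt, template) for storage; the raw vector is discarded."""
    salt = secrets.token_hex(16)
    return salt, _transform(feature_vector, salt)

def verify(feature_vector: list[float], salt: str, template: str) -> bool:
    """Re-derive the template from a fresh capture and compare."""
    return _transform(feature_vector, salt) == template
```

Because the stored template depends on the salt, re-enrolling the same user with a new salt produces an entirely different template, which is what makes the biometric "cancellable".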
  2. Social Engineering and Impersonation Scams
    • Risk – Deepfakes can impersonate trusted individuals in video calls, voice messages, or emails to trick people into transferring money or revealing sensitive information.
    • Possible Mitigation Measures
      • Awareness Training: Educate staff and customers to spot signs of deepfakes (e.g., unnatural face movements, distorted audio).
      • Requester Verification: Use a separate channel to verify and confirm identities.
      • Endpoint Detection Tools: Install software on devices to flag manipulated visual and audio media.
      • Controls for High-Risk Roles: Use multi-factor authentication and split responsibilities to prevent unauthorized actions.
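The requester-verification measure above can be sketched as a simple out-of-band challenge (the `issue_challenge`/`confirm` helpers are hypothetical names, not from the paper): before acting on a high-value instruction received over one channel, such as a video call, a one-time code is delivered via the requester's pre-registered second channel, and the instruction proceeds only if the requester reads it back. A deepfake caller without access to that second channel cannot complete the check.

```python
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a short one-time code to deliver via the separate channel."""
    return f"{secrets.randbelow(10**6):06d}"

def confirm(issued_code: str, spoken_code: str) -> bool:
    """Constant-time comparison of the code read back by the requester."""
    return hmac.compare_digest(issued_code, spoken_code)
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels when comparing secrets; the code should also expire after a short window in a real deployment.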
  3. Misinformation and Disinformation
    • Risk – Deepfakes can spread false news or impersonate executives, causing reputational damage, market manipulation, and loss of public trust.
    • Possible Mitigation Measures
      • Monitoring Tools: Scan digital platforms for fake content and impersonation attempts.
      • Incident Response Plans: Include deepfake scenarios in crisis protocols and communication strategies.
      • Collaboration: Engage with regulators, industry peers and intelligence-sharing networks to share threat information.
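As one small illustration of the monitoring idea, a hypothetical lookalike-name check (the helpers and threshold below are assumptions for the sketch) can flag domains or handles confusingly similar to an FI's official name, a common setup for distributing fake "executive" announcements.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_lookalike(candidate: str, official: str, max_distance: int = 2) -> bool:
    """Flag names that are close to, but not exactly, the official name."""
    if candidate.lower() == official.lower():
        return False
    return levenshtein(candidate.lower(), official.lower()) <= max_distance
```

In practice such a check would run over newly registered domains and social-media handles, feeding matches into the incident response process described above.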

FIs should assess the risks of deepfakes to their specific business operations and customer interactions and implement appropriate defensive measures accordingly.

For the full details, refer to https://www.mas.gov.sg/regulation/circulars/cyber-risks-associated-with-deepfakes

Disclaimer: The information, views or opinions expressed are provided for general information and should not be relied upon as legal or professional advice.
