The AI Centre is developing approaches to ensure the transparency and reliability of decisions made by AI, particularly in critical areas such as governance, medicine, and law. Its specialists are creating verification tools for the libraries and modules in use, confirming the absence of vulnerabilities and enabling the safe integration of AI into essential infrastructure components. In parallel, the team is working on methods of protection against potential abuses of AI technologies, preventing their use for unlawful purposes.
Safe AI: Algorithmic Support and Tools
Tasks
Development of MLSecOps methods and tools for building secure AI systems, together with algorithmic and software solutions for embedding digital watermarks into digital data with increased resilience to destructive impacts.
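To make the watermarking goal concrete, here is a minimal Python sketch of the classic least-significant-bit (LSB) scheme. It illustrates only the embed/extract interface: the function names and the 8-bit grayscale NumPy representation are assumptions for illustration, and LSB itself is fragile; the robust schemes this task targets would typically embed in a transform domain (e.g. DCT coefficients) instead.

```python
import numpy as np

def embed_lsb_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed a bit sequence into the least significant bits of an 8-bit image.

    A classic fragile watermark, shown here only to illustrate the interface;
    robust schemes embed in a transform domain to survive compression/noise.
    """
    flat = image.flatten().astype(np.uint8)  # copy; original stays intact
    if bits.size > flat.size:
        raise ValueError("watermark longer than cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | (bits & 1)
    return flat.reshape(image.shape)

def extract_lsb_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits least significant bits."""
    return image.flatten()[:n_bits] & 1

# Usage: hide the ASCII string "AI" (16 bits) in a random cover image.
cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
payload = np.unpackbits(np.frombuffer(b"AI", dtype=np.uint8))
stego = embed_lsb_watermark(cover, payload)
assert np.array_equal(extract_lsb_watermark(stego, payload.size), payload)
```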
Practical results
- A software solution for the automatic detection and/or neutralisation of attacks on the data used to train AI models, covering anomalies, data poisoning, unbalanced samples, attacks on the data distribution, outliers, and hidden patterns (see the detection sketch after this list).
- Datasets for testing and evaluation.
- A software solution for protecting AI models of various classes, including defence against model stealing attacks (see the hardening sketch after this list).
- A recommendation model for mitigating threats.
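As an illustration of the data-attack detection item above, the sketch below flags suspicious training samples with scikit-learn's IsolationForest. The synthetic data, the `contamination` bound, and the drop-or-review policy are assumptions for illustration, not the Centre's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical training set: 500 clean samples plus 10 "poisoned" points
# planted far from the clean distribution.
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
poison = rng.normal(loc=6.0, scale=0.5, size=(10, 4))
X = np.vstack([clean, poison])

# Fit an unsupervised outlier detector; `contamination` is an assumed
# upper bound on the fraction of poisoned samples.
detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(X)  # -1 = flagged as anomalous, 1 = inlier

suspect_idx = np.where(labels == -1)[0]
print(f"flagged {suspect_idx.size} of {X.shape[0]} samples")

# Flagged rows would be dropped or sent for manual review before training.
X_clean = X[labels == 1]
```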
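For the model-protection item, one common mitigation against model stealing is to limit how much information each query returns. The sketch below wraps a classifier so its API exposes only the top-1 label with a coarsely quantized confidence; the `HardenedPredictor` name and its interface are hypothetical, not the Centre's actual mechanism.

```python
import numpy as np

class HardenedPredictor:
    """Wrap a model's prediction API to make extraction attacks harder.

    Two lightweight defenses are combined here: return only the top-1
    label, and coarsely quantize the reported confidence, so each query
    leaks less signal to an attacker training a surrogate model.
    """

    def __init__(self, model, score_decimals: int = 1):
        self.model = model              # any object exposing predict_proba
        self.score_decimals = score_decimals

    def predict(self, x: np.ndarray):
        probs = self.model.predict_proba(x)
        top1 = probs.argmax(axis=1)
        # Quantized confidence leaks far less than full probability vectors.
        conf = np.round(probs.max(axis=1), self.score_decimals)
        return list(zip(top1.tolist(), conf.tolist()))

# Usage with any scikit-learn-style classifier exposing predict_proba:
#   safe_api = HardenedPredictor(trained_clf)
#   safe_api.predict(queries)
```

Returning labels with quantized scores degrades the surrogate models an attacker can distil from query responses, at a small cost in utility for legitimate clients.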
Effects
An increased level of security and transparency in the operation of AI-based solutions.