Topic Name & Synopsis
How to build secure artificial intelligence
Artificial intelligence and machine learning, alongside the benefits they bring, carry certain risks, like any other technology. Of particular importance are the risks associated with the security of these technologies, which, if left unaddressed, can have far-reaching negative consequences. For example, a significant proportion of today’s cyberthreats stem from the fact that engineers did not give proper thought to cybersecurity when originally designing many computer systems. Artificial intelligence technologies, although already bringing many benefits, are still at a stage of development where it is possible to shape them to be secure in the future. A number of measures and techniques need to be applied when developing AI- and ML-based systems in order to eliminate many security risks and make the resulting systems reliable and robust. This talk will focus on proposed measures of this kind, which are also used in our own development processes.
Anton Ivanov heads the company’s research and development department. Anton joined Kaspersky in 2011 as a malware analyst.
For five years, he worked as a senior malware analyst in the company’s Heuristic Detection Group. In 2016, Anton started to lead the Behavioral Detection Team, whose main focus was proactively protecting customers from different kinds of malware threats, including ransomware. In 2018, Anton became Head of the Advanced Threats Research and Detection Team, responsible for researching Advanced Persistent Threat (APT) attacks and improving the detection quality of anti-targeted-attack products. In 2019, he was appointed VP of Threat Research, responsible for overseeing a number of strategically important tasks at the company, including the promotion and development of Kaspersky’s Threat Research Team and building up the company’s technology stack.