
Machine Learning: What It Is And What Are The Advantages Applied To CyberSecurity


The use of machine learning in cybersecurity applications for identity and access management systems is becoming an increasingly common practice. Will algorithms prove to be the best choice for user authentication and authorization?

Machine learning is a form of automated learning that, through pattern recognition and the use of specific algorithms, branches into a series of different methodologies corresponding to the different scientific communities that developed them.

To better understand the meaning and value of machine learning, it helps to note that computational statistics, artificial neural networks, adaptive filtering, data mining, and image processing and recognition systems are all part of machine learning: in general, any system that allows a computer to learn without having been explicitly programmed.

In recent decades, algorithms that build inductive models from sample data in order to interpret it and make predictions have paved the way for machine learning, especially in those fields of computer science where designing and programming explicit algorithms is impractical.

Machine Learning To Support Cybersecurity

A case in point is email filtering to block spam, along with the detection of network intrusions or data breaches. With one caveat: even the most experienced data scientists cannot determine which algorithm will work best before trying it out. In machine learning there is no perfect, absolute algorithm, because the choice depends on the size of the data to be processed, the time available, and the quality and nature of the data.
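To make the “try before you choose” point concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset standing in for spam or intrusion features; the candidate models and scoring metric are illustrative choices, not a recommendation:

```python
# A minimal sketch of the "try several algorithms" idea: compare a few
# off-the-shelf classifiers on the same (synthetic) detection task and keep
# the one that actually scores best on this data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for email/network features labelled spam vs. legitimate.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "naive_bayes": GaussianNB(),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```

On a different dataset a different candidate may come out on top, which is exactly why the comparison has to be run rather than decided in advance.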

Suppose you have a problem accessing a mission-critical application and the system asks you to prove that your login identity is valid: what if, instead of calling network administrators or IT managers for support, you were dealing with a machine learning algorithm? According to experts, this scenario will soon become more and more frequent.

The much-discussed machine learning model has already taken root in the cybersecurity sector and is now about to take another step forward. Several vendors have embraced this technology to improve malware and threat detection and to replace traditional signature-based detection; now the path of identity and access management, as well as privileged access governance (PAG), is also opening up to machine learning.

Why Machine Learning Is Useful For Cybersecurity

Precisely in light of these innovations, during the recent 2017 Cloud Identity Summit several experts discussed the role that machine learning will play in cybersecurity applications for identity management systems, evaluating its risks and benefits.

Understanding why machine learning appears attractive to those involved in cybersecurity is simple: identity and access management increasingly relies on a growing number of factors (from physical and behavioral biometrics to geolocation data), and companies use algorithms to process them.

According to experts, the interactions required to authenticate and securely confirm identities will soon be so numerous that it will no longer be possible to delegate the entire management of the system to human beings alone.

With a view to intelligent protection, some of the work will necessarily be done by machines: most authentications will be handled by machine learning, while human judgment will be reserved for specific, borderline cases.
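A hypothetical sketch of that division of labor, with factor names and thresholds that are purely illustrative assumptions: a risk score decides the clear-cut logins automatically, and only the ambiguous middle band is escalated to a person.

```python
# Hypothetical sketch: a model's risk score handles most logins on its own,
# and only ambiguous cases are escalated to a human analyst.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool        # device seen before for this account
    geo_velocity_kmh: float   # implied travel speed since last login
    behavior_score: float     # 0.0 (matches user's habits) .. 1.0 (anomalous)

def risk_score(attempt: LoginAttempt) -> float:
    score = 0.0
    if not attempt.known_device:
        score += 0.3
    if attempt.geo_velocity_kmh > 900:   # faster than a plane: suspicious
        score += 0.4
    score += 0.3 * attempt.behavior_score
    return min(score, 1.0)

def decide(attempt: LoginAttempt) -> str:
    score = risk_score(attempt)
    if score < 0.3:
        return "allow"                 # handled entirely by the machine
    if score > 0.7:
        return "deny"                  # handled entirely by the machine
    return "escalate_to_human_review"  # the rare case reserved for people

print(decide(LoginAttempt(known_device=True, geo_velocity_kmh=5, behavior_score=0.1)))
```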

To give an idea of the volume of activity that identity management systems are facing, consider that Microsoft currently counts, every single day, 115.5 million blocked login attempts and 15.8 million account takeover attempts, as stated at the summit by Alex Simons, director of program management at Microsoft’s Identity Division.

Identity And Access Management: From Authentication To Recognition

Like Microsoft, IBM already uses machine learning applications for identity and access management activities. According to Eric Maass, director of cloud IAM strategy at IBM, we are about to witness the transformation from authentication to recognition, and what will make this possible is the application of machine learning combined with biometric authentication mechanisms.

Maass said pattern recognition for physical and behavioral biometrics will be able to provide continuous authentication. The model would be similar to the one based on human behavior, where trust is built over time and through many different factors.
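A toy sketch of what “trust built over time” could look like in code, under the assumption (mine, not Maass’s) that each behavioral observation yields a similarity score that nudges a running trust value up or down:

```python
# Illustrative sketch of "continuous authentication": trust is not a one-off
# yes/no but a score updated with every behavioral observation (keystroke
# timing, mouse movement, etc.). The update rule and threshold are
# assumptions chosen for clarity.
class TrustScore:
    def __init__(self, alpha: float = 0.2, start: float = 0.5):
        self.alpha = alpha   # how quickly new evidence moves the score
        self.value = start

    def update(self, similarity: float) -> float:
        """similarity: 0.0 (unlike the enrolled user) .. 1.0 (a close match)."""
        self.value = (1 - self.alpha) * self.value + self.alpha * similarity
        return self.value

    def is_trusted(self, threshold: float = 0.6) -> bool:
        return self.value >= threshold

trust = TrustScore()
for s in [0.9, 0.8, 0.85, 0.2, 0.9]:   # stream of behavioral matches
    print(round(trust.update(s), 3), trust.is_trusted())
```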

Despite this important introduction of machines, experts are keen to emphasize that human contribution cannot be eliminated from identity and access management activities due to the complexity of this process. Authentication is a complex operation precisely because it is difficult to prove who you are to a machine.

It is difficult to write an endless list of Access Control Lists and different types of authorization and rights policies to define who can do what, on which systems, when, where, and how.
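A toy example of why such policies sprawl: even a single rule has to pin down the actor, the action, the system and the time window, and real environments need thousands of rules like it. The names and fields below are hypothetical.

```python
# Toy policy check: every rule spells out who, what, where and when.
from datetime import time

RULES = [
    # (role, action, system, allowed_hours)
    ("dba",      "restart", "billing-db", (time(22, 0), time(6, 0))),
    ("helpdesk", "reset_password", "directory", (time(8, 0), time(18, 0))),
]

def is_allowed(role: str, action: str, system: str, at: time) -> bool:
    for r_role, r_action, r_system, (start, end) in RULES:
        if (role, action, system) != (r_role, r_action, r_system):
            continue
        if start <= end:                       # window within one day
            return start <= at <= end
        return at >= start or at <= end        # window crossing midnight
    return False                               # default deny

print(is_allowed("dba", "restart", "billing-db", time(23, 30)))   # True
print(is_allowed("dba", "restart", "billing-db", time(12, 0)))    # False
```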

The Risks Of Machine Learning In Cybersecurity Applications

Regarding the limits of artificial intelligence as seen through popular culture, the experts evoked the famous HAL 9000 malfunction scene in the film 2001: A Space Odyssey, in which the supercomputer goes so far as to eavesdrop on the conversation between the two astronauts, Bowman and Poole. An evocative scene that still today stirs in the general public a sort of suspicion and fear towards this technology applied to identity and access management activities.

Experts believe that machine learning will become the prevailing option in access management activities, but that current identity and access management systems may not yet be ready for this revolution. Many IAM systems rely on what the speakers have called weak signals – such as user names, passwords, security questions – that can be stolen, guessed, or falsified, rather than on strong signals such as a biometric model linked to an encryption key.

So what’s the problem? Weak signals keep being added to raise the level of security, and managing all these elements during authentication is becoming a particularly complex calculation: it would be advisable to implement fewer but stronger signals in the first place.

But the catch with IAM systems, the experts noted, is that machine learning requires large amounts of data to be effective. An example? Facebook and Google have extremely accurate neural networks because those companies hold so much data; a small local start-up could not gather as much information and would therefore not be able to reach the same level of accuracy.

Data Mining And Data Analytics

To improve accuracy, according to experts, data mining will become a critical part of identity and access management and machine learning: instead of relying on static profiles built on fixed, unchanging data, access management systems will keep extracting information about users, not only to authenticate them but also to monitor their access and behavior and flag risks or potential threats.
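As a hedged illustration of that behavior-mining idea, here is a sketch assuming an unsupervised anomaly detector (scikit-learn’s IsolationForest) fitted on synthetic stand-in features for a single user’s normal access history; the features and model choice are assumptions made for the example:

```python
# Fit an anomaly detector on a user's historical access features and flag
# departures from that profile.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: login hour, session length (min), data downloaded (MB), a
# synthetic stand-in for one user's normal history.
rng = np.random.default_rng(0)
history = np.column_stack([
    rng.normal(9, 1, 500),      # usually logs in around 09:00
    rng.normal(45, 10, 500),    # ~45 minute sessions
    rng.normal(50, 15, 500),    # ~50 MB per session
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

usual   = [[9.5, 40, 55]]
strange = [[3.0, 300, 5000]]    # 3 a.m. login, huge download
print(detector.predict(usual))    # expected [ 1]: consistent with the profile
print(detector.predict(strange))  # expected [-1]: flagged as a potential risk
```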

Summit speakers pointed out, however, that predictive analytics related to user behavior may not always be accurate: if the baseline for good behavior is set incorrectly, identity management systems will learn it incorrectly and will make mistakes.

Another potential problem for identity and access management driven by machine learning is the enormous, potentially endless accumulation of data in identity management systems, which will make machine learning applications in cybersecurity extremely complex.

And more difficult to manage for human professionals. If clusters of data related, for example, to a user’s typing patterns or mouse movements begin to accumulate alongside other behavioral analytics, it becomes harder and harder to navigate this sea of information. With too much information, and therefore too many calculations, these systems can become black boxes in which it is no longer possible to be sure of what is happening.

This complexity can be avoided, experts agree: just because machines can capture every bit of data and keep it forever does not mean they have to. The human brain is very good at prioritizing important information, and machine learning should be able to work the same way.
