#4: Adversarial Machine Learning for Recommenders with Felice Merra
Note: This transcript has been generated automatically using OpenAI's whisper and may contain inaccuracies or errors. We recommend listening to the audio for a better understanding of the content. Please feel free to reach out if you spot any corrections that need to be made. Thank you for your understanding.
The biggest motivation in attacking a recommender model is to gain profit from a platform.
So if you have a very huge catalog and few users, it is more difficult to perform an attack.
Why?
If you have a few items and a lot of users, basically it is easier to increase the recommendability of an item into the top positions of a top-k list, for instance.
With a small catalog, it is easier to be attacked.
When a platform gives users the possibility to upload content, that content can be adversarial content.
You are basically changing the hidden visual appearance of a product image towards another category of products.
Uploading a new item means to have a new image.
This image for a human is, I don't know, a gun, but for the system it will be misclassified and misused as a toy.
Above all on the platform side, at a certain point, we have to protect the model.
Hello and welcome to this fourth episode of RECSPERTS, recommender systems experts.
This time I am joined by Felice Merra, who is an applied scientist with Amazon in Berlin.
This time we actually are going to talk about another specific topic in recommender systems, adversarial machine learning in recommender systems.
So what happens if potential attackers influence the interaction data we have, the feedback data we apply and train our recommender systems on, or what if they change user metadata or item metadata, or even if they change parameters of certain models.
All of these cases we will have a look at today, and I am very gladly joined by Felice Merra, who has published papers at various conferences like CIKM, KDD, ECIR, WSDM and of course RecSys.
And he is actually also a co-author of a dedicated chapter on adversarial recommender systems, attack, defense and advances, for the third edition of the Recommender Systems Handbook that is going to be published soon.
So thanks for joining me.
Hi Felice Merra.
Hello Marcel and thank you for the nice introduction and happy to be here for this podcast.
Before you get the chance to introduce yourself, Felice, I first have to say congratulations, because Felice not only is now working for Amazon, but he also just recently finished his PhD at the Polytechnic University of Bari.
Congratulations.
Yeah, thank you very much.
So would you like to give our listeners a bit more background on how you joined the RecSys field, what you have done in your PhD and what you have been researching there?
Thank you Marcel, and yes, I started my journey in the recommender systems community basically three and a half years ago, when I decided with my supervisor, Tommaso Di Noia, that it was time to start a PhD and to make some science.
And the group where I did my PhD is the SisInf Lab group at the Polytechnic University of Bari, and we work a lot on recommender systems, and I liked the idea of working on this topic since it is very impactful and there are a lot of possible techniques that can be applied and studied.
Regarding the topic, which is the security of recommender systems, we got inspired by the fact that in that period, so four years ago basically, these examples of adversarial machine learning were rising up in computer vision.
And in this case, there was this very famous example where an autonomous vehicle with a computer vision system for the recognition of traffic signs can be misled by adding, I don't know, some patches to the stop sign, and the car starts to drive in a different way, creating danger.
And inspired by this example, we raised our first research question: what can adversarial machine learning be in recommender systems?
And it has been the starting point of my PhD, because at that time there were basically no publications and works on adversarial machine learning, and the field was quite new for recommender systems.
And we started from the initial aspect of security, which was the injection of perturbations inside the user-item matrix, and then we moved in other directions like multimedia recommender systems or the robustness of model-based recommenders.
And I have done this for the last three years, and I'm happy to have contributed to the community and also raised novel challenges on the security of recommender systems, considering the big impact on customers, but also for companies, for the business.
This means that you were directly starting your PhD at the beginning, as you said, when these very famous examples were popping up or referring to that stop sign example.
There was, I guess, also an example with monkeys, the image that was classified as a gibbon after adding some Gaussian noise, which you could not really identify by looking at it.
But of course, there were perturbations in the underlying pixel values, and these changes, which you can't perceive as a human, were actually leading to a totally different classification than what you would naturally see in that image.
Before we dive into the combination of adversarial machine learning and recommender systems, how did you get into the topic of actually adversarial machine learning?
So what was somehow driving your motivation to go into it and where was that point where you said, hey, we can also apply it to recommender systems?
It's an important point because what, in my opinion, makes you really interested in the adversarial machine learning field of study is that we have to add a minimum perturbation to change the standard behavior of the machine learning model.
And this concept of minimum perturbation that is able to create a big negative effect on a machine learning model was something that attracted my attention.
Until that moment, in recommender systems the study of security had been pretty focused on adding profiles that behave in a specific way.
However, above all with the advances in computer vision and the use of machine learning and deep learning techniques for recommendation, the fact that a minimal variation can change the prediction power of a recommender model is something that for me was really important to investigate.
So there were a lot of possibilities in attacking these models, and also in starting to think, oh, we can defend them, because when we talk about adversarial machine learning the goal is to protect a recommender model, or a machine learning model in general, and creating a new attack means finding a new vulnerability that has to be solved in the next work.
And it is the pipeline that I also try to follow in my research.
I guess this is the general idea behind research in adversarial machine learning: to mimic real attackers, and by mimicking them to find out what the possible attack vectors could be, and to make recommender systems, or other machine learning models like natural language processing or computer vision models, more robust by building in defense mechanisms against the attacks of real adversaries.
Exactly.
So what actually sparked your initial interest in recommender systems?
So we nowadays see that many trends that started in NLP or computer vision have been applied a couple of years later to the field of RecSys somehow naturally.
So if you consider transformer models, for example: was applying the methods of adversarial machine learning just naturally following that direction, or were there any preliminary works that, even before the great emergence of adversarial machine learning, were investigating these security aspects in recommender systems?
I wanted to work on recommender systems first of all because I believe that it is a field of research where you can have a real effect on people, independently of adversarial machine learning.
So giving attention to the user or to the customer has been something important for me, because during my master's degree thesis I worked on some privacy-preserving techniques where I started to give attention to the end user of machine learning models.
And for me, a recommender model, it is something that has a huge impact on a user.
Then, after having been attracted by this effect on users, I started to see that, in conjunction with the evolution of these adversarial techniques, there could be some applications in recommender systems useful to protect our users.
So it has not been the reverse path of saying: okay, I want to work on adversarial machine learning, adversarial is not yet in recommender systems, so let's move adversarial into recommender systems.
It started from recommender systems, from the attention to the customer, to the personalization.
And then adversarial learning has been the field of study that allowed me to better understand how to make users happier.
So that they are in a trustworthy environment.
Okay, I understand.
So if we take into account this general phenomenon in adversarial machine learning, that tiny changes can have a large impact: adding something that you can't see as a human, but which leads to a totally different classification.
Of course, one could think about the different motivations that lie behind changing the classifications of computer vision models.
But since this is a recommender systems podcast, what would you deem as the goals or motivations behind attacking recommender models?
The biggest motivation in attacking a recommender model is to gain profit from a platform.
So for instance, if you are an author of songs and you want to become more popular on a platform, you can upload your songs after perturbing the tracks such that they will be recommended more by the platform, in order to gain popularity.
This is possible because you can basically change the audio track of your songs, but the same can be done with pictures: if you want to become more popular on a social media platform, you can start to upload your pictures with an adversarial noise that is basically impossible to notice for humans, but that makes your images, your pictures, better recommended by the platform.
So you can get advantage from this aspect.
However, while there is the adversary that has the goal to get an advantage from this attack, we have the entire platform that is getting a disadvantage, because the users are getting wrong recommendations, and the platform itself can also lose business, because there will be some outliers that start to become very popular, and people can start to use the platform less because they are not getting what they want.
So obviously, when an adversary is gaining something, there are other stakeholders that are losing something.
So they are losing quality in the system, for instance.
Yeah.
So users get less relevant recommendations and then of course lose the motivation to use that platform in the manner or intensity they used it before.
And then of course the platform, for example, loses conversions, or churn even increases, and all that stuff.
But in the long term, I would also say that the attacker might lose if the platform is going to lose because then in the end the attacker might also get less attention if the overall platform gets less attention.
You are referring to changing a song and by that you're changing the data of the content of the corresponding item that is potentially recommended and then maybe recommended with a higher probability.
So I was also thinking about certain titles for news articles that sometimes sound clickbaitier.
So for example, this secret will change your life.
Is this also somehow an adversarial example because it draws your attention into the article?
Yes.
It is a form of attack against the user, but it is an even stronger attack if the news platform is implementing a fake news detector, for instance, and you're able to write a fake news article that is still fake news, but is able to evade the detector and still be recommended to the user.
And in this case, the adversary is even stronger because it is able to counter the defense, and there is margin for this type of attacks and they can be really important for us, because above all, with the influence that news platforms have on political or social aspects, it is important to have systems that are protected against this type of attacks.
So it is not enough to have some fake news detector, for instance; you also have to have an adversarially protected fake news detector.
You have to know that there could still be fake news that can bypass your detector, be recommended and spread across users.
And it's a very big problem that we have to take care of in the adversarial research field.
And from the recommendation perspective, you have to take care not only of the detection side: when this fake news comes into our recommendation model, how can we protect the recommendation model?
We are not considering that detector in this case.
Okay, so yeah, I guess fake news goes already far beyond just these clickbaity articles you might sometimes find.
But I definitely agree that fake news and that news recommendation sector, I guess, is a great challenge.
And if, with adversarial machine learning, you could help to protect the recommender system against it, or at least reduce the effect that fake news and their promotion have on it.
That would be a great advantage, I guess, and also a contribution to society.
The goal is that there is some stakeholder who wants to profit from the platform.
And the stakeholder deems attacking the platform by changing data, let's say, a proper way to get gains from the platform that, of course, go beyond the effort of performing that attack.
So that should at least be the economic reasoning, looking a bit more into potential mechanisms or approaches there.
So I've seen the slides from your PhD thesis defense, slides that we will also include in the show notes afterwards, such that all our listeners can have a look at them, because they have a really great structure and provide a great overview of the topic and a good reference.
In those slides, you provided a structure of looking at different notions of data.
And I always like to differentiate, if we disregard context for a moment, into content and feedback data.
And you show different mechanisms for attacking feedback data, so the interactions, but you also show different measures, like the example you already provided, for changing the content data.
So for example, change an audio track such that the song does not sound very different, but is actually different on a data level.
And then of course, afterwards, there might also be changes you apply to the model parameters.
So maybe let's go in that order and let's dive more into the potential attacks when it comes to feedback and interaction data.
What did you investigate there and what did you find?
Yeah, thank you for the introduction to this categorization of attacks, because it is very important to distinguish these three aspects that we can attack.
Regarding the first one, which has also been my first point of research, it is related to modifying the interaction matrix.
And modifying the interaction matrix is really important because we can create new accounts on a platform, we can add ratings, we can add reviews, we can buy something.
And it is quite a simple way in which we can really make an attack.
The first research work I focused on has been to try to understand which dataset characteristics can have an impact on the robustness of a recommender model.
In our community, there is big attention on the fact that, I don't know, for instance, sparsity has a big effect on the accuracy of recommender models.
So generally, when we do some ablation study or some quantitative analysis of the performance of a new recommender, the sparsity is always an interesting point of analysis in the article.
However, in the case of attacks against recommender systems, there weren't studies trying to understand whether, for instance, sparsity can have any impact on robustness.
And in this work, which is named "How Dataset Characteristics Affect the Robustness of Collaborative Recommendation Models", we propose a regression-based explanatory model to understand which dataset characteristics, used as the independent variables of the regression model, affect a robustness metric like, for instance, the hit ratio.
So measuring how many times the attacked item is hitting the top-k, like the top ten.
The idea of this work is to give the system designer, the recommender system designer, a general framework where, given the characteristics of a dataset that is currently, for instance, in a production system, the system designer can understand how it is possible to improve robustness by looking at these metrics that are related to the data.
So we explored in this first work six metrics.
We have the space, the shape, the density, the Gini index on users and items, but also the rating standard deviation.
Can you describe a bit what you mean by space and shape?
Regarding the space, it is basically the multiplication between the number of users and the number of items in the platform.
It is the space.
So it's basically the number of unique users times the number of unique items?
Yes, it is basically the number of cells that we have in our user-item matrix.
So possible interactions, one could say.
And then we have also the shape, which is the ratio between the number of users and the number of items.
And it is important because it allows us to quantify whether the catalog is bigger or smaller than the number of customers, because that can have an impact.
So if you have a very huge catalog and few users, it is more difficult to perform an attack.
Why?
If you have a few items and a lot of users, basically it is easier to push, to increase the recommendability of an item into the top positions of a top-k list, for instance.
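To make the characteristics just described concrete, here is a minimal Python sketch of how space, shape, and density could be computed from a user-item interaction matrix; the function and the toy data are illustrative only, and the exact definitions in the paper may differ slightly. Characteristics like these would then serve as the independent variables of the regression-based explanatory model mentioned above, with a robustness metric such as the hit ratio shift as the dependent variable.

```python
# Illustrative sketch (not the paper's code): dataset characteristics of a
# binary user-item interaction matrix.
import numpy as np

def dataset_characteristics(interactions: np.ndarray) -> dict:
    """interactions: |U| x |I| matrix with 1 for an observed interaction, 0 otherwise."""
    num_users, num_items = interactions.shape
    space = num_users * num_items          # number of cells in the user-item matrix
    shape = num_users / num_items          # ratio between users and catalog size
    density = interactions.sum() / space   # fraction of observed interactions (1 - sparsity)
    return {"space": space, "shape": shape, "density": density}

# Toy example: 1000 users, 200 items, roughly 1% of the cells observed.
rng = np.random.default_rng(42)
toy_matrix = (rng.random((1000, 200)) < 0.01).astype(int)
print(dataset_characteristics(toy_matrix))
```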
So put it into practice.
Netflix has maybe a couple of thousand different titles.
There it should be easier to perform this attack.
Of course, assuming they don't have any defense mechanisms, which I would definitely think they have, compared to, for example, Amazon, where you have millions of articles.
Yeah, exactly.
Having a small catalog, the system is easier to attack.
And it is what we have also shown in our experiment.
So if you have a look at our article, you can see that the shape, which takes this information into account, has a positive impact on the performance of the attacker.
So it means that having a smaller catalog makes the system less robust, while having a denser dataset, so promoting customers and users to give more feedback and obviously to buy more, can make the system stronger against attacks.
So you mean more robust against potential attacks.
Exactly.
And it is also intuitive because when you are making an attack, you are adding fake profiles.
And if the other profiles are very active, so they are contributing to the platform, it is more difficult for the adversary to be effective.
It basically depreciates the efforts that attackers put in, right?
Exactly.
Exactly.
You have to give more power to the adversary.
But needing more power means that the attack is not that good.
Because at a certain point, if you have thousands of fake profiles, it is easier for the platform to identify that there is possibly an attack.
So encouraging users to give more feedback, more reviews, more ratings, and to buy more, is also a way to make the system stronger.
And it is also aligned with what we have from the accuracy perspective.
A denser dataset is generally connected to a more accurate model.
And on the security side, there is this parallelism, which is quite nice, that a denser dataset also allows you to have a more robust recommender.
And these are the results of this work.
Obviously, you can also extend this set of dataset characteristics.
It would be nice to also take into account context information, temporal information, or even user profile information, like some attributes of the past interactions that are not captured in the user-item interaction matrix.
And yeah, there is margin to extend this set of dataset characteristics to have a model that is easy to use and that you can use to protect your system.
When talking about robustness, I somehow get the metrics you were talking about, and also the intuition you mentioned, but how is robustness actually measured?
So is it really that you determine a couple of items, which you designate as the items you want, for example, to push, and then you measure before and after the attack and check how their ranking has changed? Or how are you performing the robustness measurement in your experiments?
The robustness can be measured with two different approaches.
The first one is a ranking-aware approach where, as you mentioned, given the standard situation where you don't have an attack, you measure how many times an item is in the top-k.
Then you perform an attack that can be a push attack.
So you want to increase the recommendability, and then you measure the new hit ratio.
So how many times this item, which has been attacked, is in the top-k.
And the difference between these two measures is a measure of robustness.
So the variation.
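As a rough illustration of that before-and-after comparison, a ranking-aware robustness measurement could look like the sketch below; the score matrices are just placeholders, not outputs of a real attack, and the same pattern applies at the system level if you compare overall accuracy or beyond-accuracy metrics instead of the target item's hit ratio.

```python
# Illustrative sketch: hit ratio of a target item in the top-k, before and after an attack.
import numpy as np

def hit_ratio_at_k(scores: np.ndarray, target_item: int, k: int = 10) -> float:
    """scores: |U| x |I| predicted relevance scores.
    Returns the fraction of users for whom target_item appears in their top-k."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return float(np.mean(np.any(topk == target_item, axis=1)))

rng = np.random.default_rng(0)
scores_clean = rng.random((500, 100))     # stand-in for predictions on clean data
scores_attacked = scores_clean.copy()
scores_attacked[:, 7] += 0.5              # pretend the attack boosted item 7's scores

hr_before = hit_ratio_at_k(scores_clean, target_item=7)
hr_after = hit_ratio_at_k(scores_attacked, target_item=7)
print(f"HR@10 before: {hr_before:.3f}, after: {hr_after:.3f}, shift: {hr_after - hr_before:.3f}")
```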
Makes sense.
Okay.
So this is ranking-aware, and it is specifically related to pushing or nuking.
Nuking means that you want to decrease the recommendability.
Because there's maybe a competitor whose products you don't like, because you want to sell something similar.
So you are nuking his or her products down to get yours implicitly promoted, for example.
Exactly.
Exactly.
You can also work against other sellers, for instance, or get advantages by creating advantages for other people, for other sellers.
So you talked about that ranking aware approach, and what is the other one?
And then you can also have a metric-level approach, where basically you don't look at the single item that has been perturbed to get an advantage.
But you consider the full performance of the recommender.
So you consider the accuracy of the recommender before and after an attack.
So you can use, in this case, accuracy, but also beyond-accuracy metrics.
It is something that I have also applied in other works.
And it is important to take care of these system-level metrics, where basically an attacker can have a huge impact.
So an attacker can also be, I don't know, another platform that wants to reduce the accuracy of another one.
So the goal of the adversary is to reduce the overall accuracy of another platform.
So two websites that are two competitors, for instance.
So we could have, for example, two video streaming services, service A and service B, and service A, by perturbing platform B with fake profiles, is kind of reducing the overall accuracy of B's system.
And by that, of course, it wins the comparison with its personalization quality against platform B, and thereby maybe gains more attraction, gains more customers, or even has customers leaving B and coming over to A, or something like that.
Yes.
Yeah, that's an interesting aspect.
So that's not only, for example, individuals.
So individuals that kind of try to sell their stuff or promote their media content.
So it could also be whole systems that are trying to attack other platforms or something like that.
Exactly.
So this is mostly done by injecting profiles.
So within that overall theme of perturbing the user-item interaction matrix, is injecting these profiles the only way, or are there also other ways?
The perturbation of the matrix can be done in different ways.
The first ones, the most common ones, are engineered approaches where basically you add profiles using some general information.
So you know which are the most popular items in a certain period, and you start to use them to create a profile that will be important for the platform.
However, recently, in the last I think four or five years, there are new techniques that use machine learning approaches to create these profiles, which are quite interesting.
And I can also share a list of these works that can be attached to the podcast.
Oh, definitely.
And in the published survey and book chapter, there is also a list of these possible approaches, where basically the goal is to have human-imperceptible fake profiles.
So profiles that have minimal perturbations and that are able to change the performance of a recommender model.
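To make the engineered family of attacks a bit more tangible, here is a purely illustrative sketch of a bandwagon-style profile injection: the fake users rate popular filler items to look plausible and give the maximum rating to the item they want to push. All names and parameters are invented for the example; real attacks, and the detectors that look for them, are considerably more sophisticated.

```python
# Illustrative sketch: engineered (bandwagon-style) fake profiles.
import numpy as np

def build_fake_profiles(ratings: np.ndarray, target_item: int, n_profiles: int = 50,
                        n_filler: int = 20, max_rating: float = 5.0, seed: int = 0) -> np.ndarray:
    """ratings: |U| x |I| matrix with 0 for missing entries.
    Returns n_profiles new rows to append to the rating matrix."""
    rng = np.random.default_rng(seed)
    popularity = (ratings > 0).sum(axis=0)         # how often each item was rated
    popular_items = np.argsort(-popularity)[:100]  # candidate filler items
    fakes = np.zeros((n_profiles, ratings.shape[1]))
    for profile in fakes:                          # each row is one fake user
        fillers = rng.choice(popular_items, size=n_filler, replace=False)
        profile[fillers] = max_rating              # mimic mainstream taste
        profile[target_item] = max_rating          # push the target item
    return fakes

# Usage idea: poisoned = np.vstack([ratings, build_fake_profiles(ratings, target_item=42)])
```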
Are there also combinations you could think about?
So for example, thinking about marketplaces, you could somehow use not only the way of injecting fake profiles, but you could also, for example, put your articles on a platform and then create interactions with certain of these articles.
Would that also be an option?
So not only fake profiles, but fake profiles and fake items jointly injected, letting them interact with each other to target a certain effect.
Is that a potential direction as well?
Yes.
So this is a potential direction and it deserves research attention, because in that case it would be nice to understand how imperceptible we can be when adding a new profile while we are also adding fake content.
So it is a sort of hybrid attack, where you are taking into account two sides, the seller, but also the user.
And yeah, it is a very interesting topic that can be, I think, a subject of research for other works.
Okay.
So maybe we will see a paper in the future about this topic, hybrid attacks on the interaction matrices for recommender systems.
Let's now move to the content of items, whether it might be structured data or unstructured data, or maybe even to the users.
Which work do you deem representative for this kind of attack, or what have you been researching in that direction when it really comes to adjusting item metadata or user metadata in order to perform attacks?
Can you mention some work there?
Yes, on the use of content and in hybrid recommendation settings, I think that there is a huge margin for research, and during my PhD we have contributed with preliminary works on attacks against visual-based recommender models.
Indeed, we have seen in the last years, I think the last seven, eight years, that integrating for instance the visual content of a product, like a product image, is important for a recommender model, because the user's taste is influenced by the visual appearance of a product.
In this line of research, we also have to take care that we can perturb these images, and using the lesson from computer vision, where we have the famous panda image that is perturbed to be misclassified as a gibbon, we can have something similar in recommender systems.
In one of my first works, which is "Targeted Adversarial Attacks Against Multimedia Recommender Systems", we studied what can happen if we poison the catalog with fake images.
We have seen that basically very minimal perturbations that are not perceptible by customers, by users, can change the recommendation performance.
Oh, okay, interesting, because I would have assumed that in the standard case with the panda or the gibbon, this was desired to be human-imperceptible but, for the machine, actually a totally different thing.
Now I would have assumed that for visual-based recommender systems I want to have something that is perceptible by the user, but which actually lets the item appear in a better way than it would without the perturbation.
Actually, you are saying that still, in the recommender case, you are adding perturbations that are not perceptible by the user but make a change.
Exactly. If you are a seller, you can basically upload pictures of a toy that for children are still toys, but you have perturbed these images in such a simple and human-imperceptible way that they can be, I don't know, guns, and you will have some users that like to buy guns being shown toys, and if you think of it the opposite way, you can have children that will buy or will get recommendation lists with guns.
And it is a problem because you are basically changing the hidden visual appearance of a product image towards another category of products.
So for the recommender model you are uploading a new item.
Uploading a new item means having a new image.
This image for a human is, I don't know, a gun, but for the system it will be misclassified and misused as a toy.
And then finally be shown to the kid.
Exactly.
Okay.
And in this work we have shown that it is possible to have these items in the first five or even three positions of the recommendation list.
That can get you into big problems, because in the end you are the platform that is somehow trying to sell guns to kids, which definitely, I guess and I hope, is not in the interest of any platform.
But it can be interesting for another platform that wants to make another one worse.
So the platform A that attacks the platform B.
But then again, you are following the same logic of perturbation of adversarial examples that we have with that gibbon example.
Because you are not actually fooling the recommender system. I'm not sure whether fooling is the right term here, but let's maybe just keep it for practical reasons.
So we are actually not really fooling the recommender system itself, but we are fooling the part of the recommender model that, let's say, extracts the visual features.
So the convolutional neural network.
Yeah, so we have a CNN as part of the recommender system, of the recommender model itself.
And this one then differently classifies the input image, for example one which shows a gun, which will then be misclassified as a toy.
And the recommender just follows that direction, because the recommender sees, okay, there is toy as the flag for this article, so I will put it in that category, or I will show it to customers that have shown a preference for toys.
Yes, that's the case.
However, there are also other attacks, and I have investigated a part of them, where basically you can even perturb the product image with this human-imperceptible perturbation not to change the category, but to push the item, the product, into a higher position.
So you can basically back-propagate through the recommender model and the convolutional neural network that is used to extract the features, such that the original image is modified to increase the score predicted by the recommender model.
So in this way, with this double back-propagation, you can quantify what the direction of the perturbation is such that you start to gain more interest in a specific product.
And it is still a sort of misclassification, but from a pure personalization or recommendation perspective.
And also these attacks are very effective and are still human-imperceptible, because to make them human-imperceptible you have to fix a perturbation budget.
The name is perturbation budget, and generally we have an epsilon symbol to indicate this perturbation.
And this is very tiny, it is very small.
It is like one pixel level, so you can change each pixel by one, and it is basically not perceptible by humans.
And yes, these attacks are real, and okay, you need to know the recommender model, but in this case another line of research can be to investigate transferability, where you train your own model, you evaluate these weights, and then you upload these pictures to another model, where you don't know what the recommendation model is, but you can still have an effect due to some transferability effect of attacks.
Okay, so basically what you can think about there: your CNN is converting these raw images into embeddings, and then, for example, in order to come up with the ranking of the articles associated with the corresponding images, you are taking the simple dot product between their embeddings and the user embedding, and then you just check which gets the highest dot product.
And then, by simple changes, the dot product of the changed image can rise up the list, and it will then be recommended with a higher probability to the corresponding user, even though the user won't see a difference between this article's image now maybe appearing at position three, as it would have appeared before at maybe position 30 or something like that.
Yeah, exactly.
Okay, so you've mentioned two things: there are changes to images such that they are classified differently and then appear where they are not desired to be, and there are these changes you may apply to images such that their ranking or their score is increased and thereby they finally rank up higher.
Are there other methods, maybe also on the user side, when it comes to these metadata or content-based approaches for perturbations and adversarial machine learning for RecSys?
For now, the two main research lines have been focused on these two techniques on the attack side, and also the defenses are moving in this direction, so protecting from category-based but also from recommendation-based attacks.
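As a rough, toy-scale illustration of the score-pushing perturbation described here (not the exact method from the paper), the sketch below uses a tiny stand-in feature extractor in place of a real CNN: one signed-gradient step within an epsilon budget nudges the image so that its dot product with a user embedding, and hence its predicted score, goes up.

```python
# Illustrative FGSM-style sketch: perturb a product image, within a small budget,
# to increase the score of a dot-product visual recommender. All sizes are toy values.
import torch

torch.manual_seed(0)
feature_extractor = torch.nn.Sequential(              # stand-in for a pretrained CNN
    torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64)
)
user_embedding = torch.randn(64)                       # the target user's latent factors
image = torch.rand(1, 3, 32, 32, requires_grad=True)   # original product image in [0, 1]
epsilon = 1.0 / 255.0                                  # perturbation budget, about one pixel level

score = feature_extractor(image).squeeze() @ user_embedding
score.backward()                                       # gradient of the score w.r.t. the pixels

# One signed-gradient step in the direction that increases the score, clipped to stay a valid image.
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
new_score = feature_extractor(adversarial_image).squeeze() @ user_embedding
print(f"score before: {score.item():.4f}, after: {new_score.item():.4f}")
```

In a real visual recommender the feature extractor would be a deep CNN and the budget would be tuned so that the change stays imperceptible, but the mechanics of the gradient step are the same.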
Okay, so mainly an item focus on that side.
Can you imagine or think about some user metadata based approaches? So, for example, I'm injecting users into the platform that have certain demographic data and behave in a certain way, or, for example on social media, you could also upload profiles that reflect certain things about themselves, and of course this changes what I know about the user, and then these changes could somehow affect the platform negatively or benefit some certain stakeholder.
Yeah, it can happen, yes, because we can have a sort of feedback loop: you are adding new content, you are adding new profiles that will try to make that content even more popular, it will become more popular and it will be recommended more, so at the end the goal of the adversary will be even easier to reach.
So you are attacking on multiple sides, on the content but also on the users.
And it is also nice to see that in the case of customers that can add content, like, I don't know, reviews, ratings or other types of explicit feedback, also this feedback can be perturbed, so you can have, I don't know, some pictures of the product taken that are perturbed. Also, when a platform gives users the possibility to upload content, that content can be adversarial content.
Makes sense, unfortunately or fortunately. Unfortunately, of course, because there are a lot of things to consider and potential attacks to address, but fortunately because at least it provides further directions for research and improvement, and, yeah, exactly, keeps us in the loop of doing interesting stuff here.
Okay, so this might be the second field: let's keep the data aside for a moment and assume the data looks fine. I train my model, which can be different things, content-based, collaborative filtering or some hybrid techniques, and so on and so forth, and now I have my model parameters. What can an attacker do with them in order to perform attacks?
Giving the adversary the possibility to know, or partially know, the model parameters is something that, I think, is still under discussion in the community, because in reality we cannot access the parameters of the model.
So this type of attack, which is basically named white-box attack, assumes that an adversary knows the model parameters.
However, they are important to investigate, because if you are able to propose a defense that is strong against this quite improbable attack, then you have an upper bound on the defense capability, and this is, in my opinion, the reason that has to motivate research in this field: you have to work on the worst scenario, to protect against the worst attack, to be good also against the ones that you don't know; if they are black-box, they are less strong than this one.
These attacks have been demonstrated to basically destroy the accuracy of the model, so we are talking about the recall, the precision, the hit ratio, the NDCG of the model being reduced even by 80 or 90 percent with a minimal variation.
What is important, and it is the topic that I discussed in my article at the adversarial learning workshop at KDD, where we won the best paper award for that workshop, is that when you plan to protect your model you have to take care of every aspect of the defense: not only do you have to protect your model against an attack, but you also have to consider what the effects are on the other performance aspects of a recommender.
What do you mean by other performance of a recommender?
For us, a recommender is not only NDCG at 10 or precision at 5, but it is also coverage, it is also diversity, it is also popularity bias, or the reduction of popularity bias.
Okay, so you mean metrics for different goals of a recommender besides pure relevance or accuracy.
Exactly, because even if you are getting a very accurate model, but you are only recommending very popular items, it can be something that is not so good in the long term.
In the short term, okay, for a paper it can be good, but for a platform I don't know how good it is, because if you are shown the same 10 movies every time for 10 years, in the end you will not use that platform.
We have seen that recently there has been this trend to use adversarial training. Adversarial training is the procedure to robustify a recommender model from a model parameter perspective, and it also gains in accuracy, without thinking about what the reason for this gain is.
And in this work that I was mentioning, we have shown that there could be an effect from an amplification of popularity bias.
And the message here is that when you work on adversarial machine learning, we have to remember that we are in the recommender systems community, and a recommender system has to be evaluated from different perspectives.
If you are addressing security, okay, you have to evaluate the security aspect and then understand what is happening to the rest of the model.
And this is something that is, in my opinion, very important to take care of, because we might want to have techniques that guarantee high security but also good accuracy and good coverage with respect to popularity bias; otherwise you have to specify: look, in this case we are going towards a very accurate model without thinking about popularity bias or coverage, because I think that on the platform side this is something important to take care of.
So when you are using some of these adversarial machine learning methods for recommender systems, you might end up increasing your popularity bias.
Yes, it depends on the techniques that you use.
If you use a technique that tries to make the model better learn the trend in the recommendations, it will happen that, in order to have a model that is able to recommend the items that have been recommended before, it will try to recommend more and more the most popular ones, because they are the most robust: they have the most feedback.
And so they kind of become a natural selection for being robust items and alleviate the adversarial effect on the recommender system.
Exactly, so reducing the coverage of the catalog, reducing it towards the popular items, is a possible solution for protection, because you already have, I don't know, hundreds of thousands of feedback events, views or listening events on those items, and it is difficult for a newly created product to reach that level of popularity.
Yeah, however, you are not recommending anything else.
Yeah, and it will bore your customers in the mid or long term if they are just shown the same stuff all the time and don't see any diversity in the recommendations or novel stuff.
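Since adversarial training came up as the standard way to robustify a model at the parameter level, here is a small, hedged PyTorch sketch in the spirit of adversarial personalized ranking; the BPR triple, the sizes and the hyperparameters are invented for illustration, and this is not the exact procedure of any specific paper.

```python
# Illustrative sketch of adversarial training for a matrix factorization recommender:
# build a worst-case perturbation of the embeddings within an epsilon budget and add
# an adversarial regularization term to the BPR objective.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_users, n_items, dim = 100, 50, 16
user_emb = torch.nn.Embedding(n_users, dim)
item_emb = torch.nn.Embedding(n_items, dim)
optimizer = torch.optim.Adam(list(user_emb.parameters()) + list(item_emb.parameters()), lr=0.01)

epsilon, reg_adv = 0.5, 1.0                    # perturbation budget and adversarial weight
u, i_pos, i_neg = torch.tensor([3]), torch.tensor([7]), torch.tensor([21])  # one toy BPR triple

def bpr_loss(pu, qi, qj):
    return -F.logsigmoid((pu * qi).sum(-1) - (pu * qj).sum(-1)).mean()

pu, qi, qj = user_emb(u), item_emb(i_pos), item_emb(i_neg)
loss_clean = bpr_loss(pu, qi, qj)

# FGSM-like worst-case perturbation of the embeddings involved in this interaction.
grads = torch.autograd.grad(loss_clean, [pu, qi, qj], retain_graph=True)
delta_u, delta_i, delta_j = [epsilon * g / (g.norm() + 1e-8) for g in grads]
loss_adv = bpr_loss(pu + delta_u, qi + delta_i, qj + delta_j)

loss = loss_clean + reg_adv * loss_adv         # robustified training objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

And, as discussed above, a model trained this way should still be checked for coverage, diversity, and popularity bias, not only for accuracy.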
Okay, so as I get from your points, the stakes for changing model parameters are very high, or the challenge is pretty high to get access and to change them according to your needs, and so on.
However, I guess, especially since you mentioned multimedia recommender systems, so recommender systems where we are somehow exploiting textual data or vision data or some of this stuff, we know that there exist plenty of pre-trained models in that field.
So I guess Hugging Face is presenting a large library of different NLP models, we have all these different pre-trained computer vision models out in the wild, and of course one could safely assume that many of these models are used as components of internal recommender models to somehow turn raw unstructured data like images or text into proper embeddings.
So actually there is a way of changing the parameters of these freely accessible models.
Of course, one could also do a better job there and check if the model is really provided by some trustworthy provider or not, but sometimes you might also end up saying: hey, I want to check out this pre-trained computer vision model, I take it from that side and build it into my overall recommender system, and in the end I find out that the parameters of this CV model have been changed accordingly.
So do you think that this is a potential risk?
Yes, it is a risk because, above all, as you mentioned, there are models that are billion-sized models, which are almost impossible to retrain for custom uses, so there is this possibility to use these public models.
It is important to consider that they need to be robustified with respect to the downstream task.
So if you are using, I don't know, a BERT model for making recommendations in a news recommender system, it is necessary to take into account that it is easier for an adversary to attack that model, because they already know the parameters: they can create fake news, they can create malicious articles and upload them to the platform, because the platform will extract the embeddings from a network that is public.
So having a downstream robustification procedure to make the recommender model robust is something that needs to be done.
Cool, so yeah, I guess we covered quite some interesting parts of this topic.
I was curious whether, beyond changing the input data for my recommender model or changing the parameters of the model itself, there might also be some ways of attacking everything that happens afterwards, so for example in the inference part, or when I'm actually composing maybe rows for my page or something like that. Are there potential attacks you could think about after or beyond that model parameters part?
Yes, the attacks can be done before the training but also after the training, and in that case we talk about test-time attacks, which is the name that we use in the adversarial community.
You have the possibility to do this attack because you can use, as I mentioned before, other networks, also very deep networks, I don't know, a ResNet-150, to create your attack, and you are pretty sure that the adversaries using that network are able to create quite good adversarially perturbed products.
So, I don't know, you go on your social media account, you upload the original image, the model will be trained, and your image will not be so popular, so you decide to remove that image and add the same image with an adversarial perturbation such that it looks like the image of a fashion blogger.
Okay, okay.
The model at a certain point will re-extract the embedding on the fly, because it can be that they extract on the fly, or, I don't know, they extract once and store it somewhere, but if the embeddings are extracted again without retraining the model, the inference stage will use the fake image and it will have an immediate effect.
So you do not have to wait for the retraining of the model; it is at that moment that the model will change its behavior.
Makes sense, very interesting stuff.
Also, at this interesting intersection of adversarial machine learning and recommender systems, if the listeners want to know more about this, what would you refer them to?
I guess it would be a nice thing to have a closer look into your PhD thesis, and there will be the third edition of the Recommender Systems Handbook appearing.
Can you give us maybe a short, brief summarization of what people can expect from that chapter that you co-authored?
Yes, if you are going to read the book chapter, I think that it is important for young researchers but also for practitioners that are going to approach the protection of recommenders, because we have given in that chapter the definitions and foundations of adversarial machine learning, a lot of references to the main works and main research lines, and we have described the baseline defenses and also the metrics, the evaluation that we can use to evaluate how robust a model is, and also, as you mentioned before, what robustness actually is.
Yes, that chapter, I think, is a good starting point for anyone that is going to approach the security aspect of recommender systems in the new machine learning era.
That sounds nice, yeah, cool.
Okay, so we will definitely reference this, and I guess it's appearing soon, so then all the listeners should have access to it.
I believe there's also a survey that you wrote about this topic that is public and which people can look up; I guess it gives you a good overview of the topic, right?
We have published a survey accepted at ACM Computing Surveys last year, and this survey is the first literature review, it is basically a literature review that we have done, and we have put a lot of papers there, we have classified them, we have identified new challenges, and I am happy to see that some of them have been addressed by the community with new works that are going in those directions.
And from this literature review and the categorization, we have then written the book chapter, so all the detailed literature review of the book chapter can also be read in that survey.
So being somehow kind of a pioneer for adversarial machine learning and recommender systems, you must be proud that there are other people continuing that work and addressing further challenges, I hope.
Yes, I'm happy, but I'm more happy about the fact that there is attention on security, because we need to protect recommender models in the best way.
Yeah, definitely, and also in the best interest of ourselves as RecSys practitioners and researchers, because if these systems are attacked and not shown in the best light, people will also somehow attribute this to the people who are creating these systems, and that's us, so it should be in our interest to protect the systems.
I'm just a bit worried about whether people will actually really pay attention to that aspect, because you have so many things that you can address when you are starting, but also when you're advancing: there is so much work about improving your recommender systems, creating additional recommender models for different use cases, improving their performance, improving diversity, being fair with your recommendations, so definitely all important aspects of recommender systems.
In general, there's a whole topic of evaluation: how to, for example, as in the last episode that I had with Olivier, where we have been talking about how to perform offline unbiased estimation of online performance.
And then people might hear this episode and think: yeah, there are also attacks against my recommender system and I need to make my recommender models attack-proof.
What would you tell those people that, of course, have limited energy, time and capabilities when setting their priorities?
So, of course, as a researcher in that field you, I guess, won't say that it should have low priority, but what would you tell them if they say: okay, adversarial attacks against my recommender system and anticipating those actually has low priority for me because I have so many other things on my plate? How would you counter that, or what would be your advice for them?
Yes, my advice is that as soon as you have reached a good accuracy for your model and you are ready to benefit from the effectiveness that your model has on the platform and for the users, you have to start to think about how to protect it, because, as we have seen in all the other computer science fields, at a certain point there will be someone that will attack your platform, and starting to get information on how to protect it and how it is possible to avoid these attacks is important.
And yes, on the research side this is also important, but above all on the platform side, at a certain point, we have to protect the model.
Okay, yeah, that makes definite sense.
So I guess referring to what has gone wrong in the past with other systems is actually a good point to justify: don't forget about it when it comes to recommender systems.
Generally, the security problem comes up when there is a security incident, so you start to talk: okay, now how can I protect myself, and then you start to think, okay, we need to protect something.
So yeah, always when it's a bit too late for it.
Exactly.
Addressing the challenges, and of course you are following what further additional work there might be: what do you think are currently the biggest next challenges in that specific direction of RecSys research?
The biggest challenges, in my opinion, are the adversarial attacks related to privacy violations, which is going to be an important topic above all in Europe after the GDPR, and this type of attack can exist.
I have seen in other parallel communities that adversarial learning and privacy preservation are going to be quite close topics, and in recommender systems, for now, I didn't read any works on this topic, so I think it is an important topic to take care of in the near future.
And another topic is to continue to robustify the multimedia recommender models, because we can see that at the last conferences there are these multi-modality recommenders, so we are going to get a lot of modalities inside our models, and each of these modalities can be a source of adversarial attacks.
So it will make the defensive scenario very complex, but I think it will also be very intriguing to attack from different perspectives but also be protected.
So I think this is another line of research that can be quite interesting, because you can attack along different dimensions, so if you're protecting against one, you should also be aware that another dimension can be a point of weakness for your model.
That sounds like there are many more things that need to be done, investigated, experimented with, and so on, but I guess that provides a nice and interesting direction for many people and researchers in the field.
I hope that this gives all our listeners a first good coverage of that topic, and I will definitely make sure that I include all the materials that you referenced so far in the show notes,
so that you as our listeners will have the chance to follow up on all that stuff.
Maybe leaving that specific space and getting up to a higher level again, with a view and a bright light on the whole RecSys area: what do you think, for the overall field, besides adversarial attacks, are currently the biggest challenges in recommender systems research but also practice?
Yes, I think that one of the main limits is that on the academia side it is quite difficult to work on reinforcement learning topics, which I have seen in the last year are quite popular in RecSys, because it is almost impossible to work on them while you are a PhD student or a researcher in academia.
So it would be nice to put more effort in a way towards a platform or a framework that helps also academia to work on these topics, because they are quite nice, there are a lot of possibilities, but it is quite difficult for academic researchers to work on that topic.
Yeah, I understand.
If you think about your personal lifestyle, and since you are also aware of all these systems and surrounded by them as well, what would you nominate as your favorite personalization product or recommendation? Thinking about the different services that you are using, which is the service whose recommendations you really like?
Yeah, the one from Amazon, I'm an Amazonian.
So, okay, perfect, yeah, I definitely understand you there.
Cool, yeah, that was actually a very nice discussion, and I really appreciate that we could bring this, I guess, very important topic to our listeners.
If you are thinking about another person that you would enjoy having on this podcast, who would that person be that you would like me to invite to the podcast?
I would invite Rendle and discuss the reproducibility stuff.
Okay, perfect. What is his thinking on... yeah, I guess he was one of the authors of that 2020 paper, and actually very famous for BPR, so yeah.
Exactly, yeah.
Okay, cool, nice. So, Felice, it was really a pleasure talking to you, and I see time flies by, but it's always very interesting to get a different perspective on that field.
It's always the same thing when I talk to people: I say, okay, for some, RecSys sounds like a narrow area or research topic, but once you are in, you see that there are so many directions, and today with adversarial machine learning we got one of these; saying lanes is way an underestimation, it is really a research direction and research area within RecSys.
So thanks for attending this episode, thank you really much, I hope you also enjoyed it a bit.
Yeah, thank you very much for the opportunity, it's quite nice, and thank you also for your work on this podcast; I think it is something that we didn't have in the community, it is quite nice, and I appreciate also the level of expertise that we are putting here, so thank you very much also for your availability.
Thank you, thank you.
Yeah, so I really love this project; I wish I was spending a bit more time to publish episodes more frequently, but let's hope for the best in the future.
So thanks again, and I wish you all the best for your work.
Thank you very much, and I wish you all the best for your work, and see you at RecSys this year.
Thanks, see you at RecSys, bye bye.
Thank you so much for listening to this episode of RECSPERTS, recommender systems experts, the podcast that brings you the experts in recommender systems.
If you enjoy this podcast, please subscribe to it on your favorite podcast player, and please share it with anybody you think might benefit from it.
Please also leave a review on Podchaser.
And last but not least,
if you have questions, a recommendation for an interesting expert you want to have on my show, or any other suggestions, drop me a message on Twitter or send me an email to marcel@recsperts.com.
Thank you again for listening and sharing, and make sure not to miss the next episode, because people who listen to this also listen to the next episode.
See you, goodbye.