#28: Multistakeholder Recommender Systems with Robin Burke

Note: This transcript has been generated automatically using OpenAI's whisper and may contain inaccuracies or errors. We recommend listening to the audio for a better understanding of the content. Please feel free to reach out if you spot any corrections that need to be made. Thank you for your understanding.

We don't actually take providers seriously as users, as people whose experience we care about.
Basically every recommender system is a multi-stakeholder recommender system.
It's just a question of how do you evaluate it?
How do you think about it and design it?
Kind of what considerations do you take into account?
There have to be benefits, right?
You have to have benefits from participating as a seller.
You have to have benefits from participating as a buyer.
There are lower transaction costs, yes, but it has to do the things that you need in order for it to be satisfactory.
And recommender systems, it's like, okay, this is this great tool for improving the matching function in those settings.
There's a lot of appeal to working on a simple problem.
We know in recommender systems, it's very domain specific anyway.
So, you know, recommending apartments, recommending partners on a dating app, recommending music, recommending jobs, these are all very, very different kinds of things.
And an algorithm that works well in one domain may not be the one that works best in a different domain.
Recommendation, the sort of semantics of recommendation is we're connecting people with items out of an existing catalog.
Generation is not that, right?
Generation, that might be perfectly fine, but it's not recommendation.
It needs to work mathematically.
So go on and do these lists of recommendations.
Hello and welcome to a new episode of RECSPERTS, Recommender Systems Experts. For today's episode I have invited another luminary from the field, actually from the academic side of things, but also, I guess, with plenty of perspectives on industrial recommender systems and practice. So a great person to talk to when it comes to the comprehensive topic of multi-stakeholder recommender systems. A topic that we might have already touched on in some of our past episodes, for example when talking about popularity bias in recommender systems, or also when talking about the role of fairness in recommender systems. But today we are going to take a much broader perspective and discuss the different stakeholders and their objectives in RecSys, and also how this has evolved over time. And since I'm joined by a great researcher today, I guess that he can also share quite a lot about RecSys history and how the whole community has evolved over time. Some of you might already be guessing who's my guest for today, and it's Professor Robin Burke. Hello and welcome to the show.
Hello, pleased to be here.
Yeah, it's nice that you joined me and that you are up for this episode where we can dive a bit deeper into the topic of multi-stakeholder recommender systems. As always, I will just start with a short introduction of my guest and then hand over to him to provide some more insights and talk about his journey in RecSys so far, especially his research, which will of course be the centerpiece for today's episode. So Robin Burke is a professor of information science at the University of Colorado in Boulder. Prior to that role, he was a professor at DePaul University.
He is also the director of That Recommender Systems Lab and has been doing research on fairness, accountability and transparency in recommender systems. And, I hope I'm not doing the wrong thing saying this, but he is really well known for developing and shaping the field of recommender systems, especially when it comes to multi-stakeholder recommender systems.
Over his career, he has published many papers at conferences such as CIKM, UMAP and of course RecSys, and these are not the only ones, just to provide an example. And from 2017 to 2020, he has also been the chair of the RecSys steering committee. But Robin, you're the better person to talk about yourself, so please help me fill in the missing gaps and tell our listeners about your journey in RecSys, where it started and what's driving your interest in the field.
Sure, thanks. So, you know, I was thinking about this, how did I get started in recommender systems, and I often tell my students that I started working in recommender systems before it was called that. But it really started in my PhD program. So my PhD advisor was Professor Roger Schank, and at that time he was founding the Institute for the Learning Sciences, and we were working on what we'd basically call intelligent tutoring systems. And the thing that I was interested in, and that eventually became my dissertation, was this idea of personal stories that have pedagogical value. So you can imagine the scenario, and I think this is something that people very commonly experience: you can talk about something theoretically, but then when the professor says, oh, and this is this thing that happened when I was working in industry, or when I was trying to solve this problem and I did this and it worked this way and I learned that that's not possible, these sort of concrete instances of things that happened really attract people's attention. It's very informative, and especially for jobs that we expect people to learn by doing, kind of apprenticeship kind of jobs. So we had this system, and the somewhat mundane context was actually that we were teaching people how to sell. Our partner at that time was the local phone company. I mean, these things don't exist anymore, but they were trying to teach people how to sell yellow pages advertising. So basically business advertising. Many people who hear this don't even know what I'm talking about, but there used to be a book that people would put on your doorstep that was the yellow pages, and it was like the list of all the local businesses. So you could have a little listing in the yellow pages, or you could pay money to have a quarter-page ad or whatever.
It's actually hard to sell yellow pages advertising because it's kind of expensive and it's kind of abstract. Like, you're paying a lot of money for something where, six months from now, this ad will appear in this book, and then maybe somebody will go through the pages looking for plumbers or whatever and they'll see your ad and they'll call you, right? So you're selling something kind of abstract. It was hard to do. Some people were really good at it. A lot of people did it for six months and then left. And so they wanted to teach people how to do it better.
So we basically built a simulation so that people could be in these simulated customer interactions and discover, try out different tactics for meeting customer objections and so forth. But the simulation was very limited, because there's only so much you can put in there. And so one of the things that we did was we recorded interviews with people who were good at it, people who had been on the job a long time. And what we prompted them to do was to tell stories about what they had done on the job, what had worked, what hadn't worked, different kinds of people that they had encountered, different kinds of problems that they had run into. And we knew these were super interesting to the people who were learning. It's the kind of thing where on the job, apprenticing with somebody, you might go out to the bar after work and then people start telling these stories, right? Here's all the things that I've encountered and here's what happened.
And so the challenge, and this was my dissertation, was: we have all these stories we want to tell, and we don't want to make a documentary movie; we want to interject the stories into the user's experience in the simulation at just the right time. So let's say you try a particular sales tactic and in the simulation it works. That might be a good time to bring up the story from a salesperson saying, yeah, that's a good sales tactic, but it doesn't always work.
Here's this time when I tried that and it failed spectacularly or something like that, right?
It helps you understand, okay, what are the limits of these things? What are the limits of the simulation? It sort of expands the learning beyond what we could capture in the simulation. That was the idea. And at that time I was very interested in human memory, thinking about how do people remember the right thing at the right time? And this was very much tied in with my advisor's research, this whole area of case-based reasoning, which is where I started in my graduate school career and my AI career. So the idea of how do I remember the right thing at the right time? If you look at psychological theories of how memory works, they're kind of frustratingly simplistic, especially at that time. They just say, well, sort of activation spreads in memory and encounters things and then they're brought to consciousness. But if you look at this kind of question of how do I find this pedagogically relevant story at the right time, it's not just because it's similar, it's because it actually makes a point, right? And if I remember the story about the time the tactic failed, it's actually not similar to the time the tactic succeeded. It's different, but it's different in a pedagogically crucial way. And so I was developing this kind of model of reminding or remembering that was more goal-driven, that was driven by pedagogical needs rather than just this associative idea of memory. So this is what I was interested in. So I got my PhD and I went to work as a postdoc at the University of Chicago, working with Kristian Hammond. He's now at Northwestern, but at that time he was at the University of Chicago. And he was looking at what we would now sort of think of... So you could think of my pedagogical system as a recommender system in some sense, right? At the University of Chicago, we were looking at systems that look more like today's recommender systems. But Kris was also someone whose background was in case-based reasoning. We sort of had this in common. And we were thinking of it as this idea of taking some of the core concepts in case-based reasoning around retrieval and applying those to product spaces. And so we actually looked at movies, we looked at cars, we looked at a few other product-type domains. And the question was, how do you find this mapping between what somebody needs and the items that are in the catalog? And nowadays we would call that a recommender system, but that term didn't exist at that time. And so we called these FindMe systems.
And we built a number of them, and restaurants was one. We started developing these, what we would now call knowledge-based, techniques for doing recommendation.
And one of the, I think, key contributions from that era of our work was this idea of interactive recommendation through critiquing. The restaurant domain is the one where we did the most with it. And the idea was, well, let's say I am traveling to Chicago, I'm coming from some other city. Here's my favorite restaurant in Denver. What restaurants are like that in Chicago? And so you can view that as kind of a matching problem of trying to find something similar, which is a classic case-based reasoning problem.
And so we developed a sort of ontology of restaurants and cuisines and things like that.
And then what do you do if that thing that it recommends is not the thing that you want?
And so our solution to that was this idea of critiquing. You basically say, I want something like this, but different in a particular way, sort of a kind of conversational kind of interaction.
So you might say, oh, that's a great recommendation, but that's too expensive. I want a cheaper version of that. And we identified, I think, maybe half a dozen different dimensions along which you could press a button and sort of critique the restaurant and move to a different part of the space. And one of the things that we discovered in doing that is that critiquing is very powerful.
People used it to kind of learn about the space. So you're like, well, what is the cheapest restaurant in this area? You can kind of poke the button a couple of times and see what happens.
You can kind of try to start to understand what the trade-offs are between these different dimensions. People really liked it. What time was that, approximately? So I got my PhD in 1993 and started working at the University of Chicago right around then. I actually started the job a little bit before I was finished. And so I was working on this job and also working on my dissertation at the same time, which I do not recommend. Yeah, so we're talking about mid-90s.
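To make the critiquing loop concrete, here is a minimal, purely illustrative sketch in Python. None of this is the original FindMe code; the restaurants, the similarity weighting and the critique dimensions are all invented for illustration.

```python
# Toy critiquing loop: each critique filters the catalog relative to the
# currently shown item and returns the most similar item that satisfies it.
# All restaurants, attributes and weights here are invented.

restaurants = [
    {"name": "Chez Luc",    "cuisine": "french", "price": 4},
    {"name": "Le Petit",    "cuisine": "french", "price": 2},
    {"name": "Bistro 21",   "cuisine": "french", "price": 3},
    {"name": "Thai Orchid", "cuisine": "thai",   "price": 2},
]

def similarity(a, b):
    """Crude similarity: shared cuisine matters most, then price closeness."""
    return (a["cuisine"] == b["cuisine"]) * 2.0 - abs(a["price"] - b["price"]) * 0.5

def critique(current, direction, items):
    """Return the item most similar to `current` that satisfies the critique."""
    if direction == "cheaper":
        candidates = [i for i in items if i["price"] < current["price"]]
    elif direction == "more_expensive":
        candidates = [i for i in items if i["price"] > current["price"]]
    else:  # further critique dimensions (formality, noise, ...) would go here
        candidates = list(items)
    candidates = [i for i in candidates if i["name"] != current["name"]]
    return max(candidates, key=lambda i: similarity(current, i), default=None)

# "That's a great recommendation, but too expensive. Something cheaper?"
print(critique(restaurants[0], "cheaper", restaurants)["name"])  # Bistro 21
```

Even in the toy, the design point is visible: each button press is a constraint plus a similarity search, which is what lets users explore the trade-offs in the space.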
And then at a certain point, and so again, we were writing about this and going to conferences and things like that. And we started to hear about other people who were doing similar sorts of things. Some of the early recommender systems papers, some of the movie recommendation stuff, started to be published in '95. And Tapestry, possibly. Yeah, there was the Tapestry paper.
And then I think it was in 1995 that the Communications of the ACM article came out where recommender systems was on the cover. And it was Resnick and Varian kind of putting out this terminology and defining the field. And we looked at that and we're like, yeah, that's kind of what we're doing. But before that, there had been other terms that people had used. The idea of agents was very big then. And so you had this idea of information agents, which again is very much tied to this idea of recommendation. So anyway, then this term coalesced and we started to see more similarities across these different kinds of work. I must say, it shows how good I am at predicting the future of different ideas. I remember when I first encountered the first work on collaborative filtering. We had very much focused on these knowledge-based ideas. It's like, okay, building ontologies, understanding what really makes one thing similar to another, how to map user needs to particular features in kind of classic knowledge-based, inferential ways. And then people were like, oh, we're just going to use this data of users and ratings. And I'm like, that's it? You're never going to be able to get very far with that.
So you are talking about things like the Netflix Prize or such things?
Well, no, this is much before that. So there were the people who were...
The first movie recommender came from an industry lab that did share their data.
Talking about GroupLens and MovieLens?
No, no, no. It was even before GroupLens.
Yeah, maybe I'll remember it. But anyway, I'm like, there's no knowledge here.
How is this actually going to work? So I greatly underestimated the power of that collaborative signal at that time. But I kept working. We kept working in this area, and then eventually that turned into a startup company. And that company had a bunch of different names. The last name that it had was VIRB, and that company disappeared in the dot-com crash.
So I worked at this postdoc for a while, and then I moved to California. I didn't really have a job there, so I worked full time for the company for a while. And then I got a grant-supported job working at the University of California, Irvine. And then I made my way back to academia with a position at Cal State Fullerton. And then we moved back to Chicago in 2002. And that's when I started at DePaul University. During this whole time, you know, in '95, we were really seeing the web and the internet take off. And it's very clear, again, we're starting up this company, we're doing all these things, it's very clear that this is the great application for recommender systems. You have to remember when we started, we were thinking of things like, oh, there's this kiosk in your Blockbuster that helps you find movies that you want, right? Much more concrete environments. And then it's like, oh, actually the web is this perfect place for a recommender system. You can aggregate lots of content. You can very easily get people to use a system that's built that way. And we did build this restaurant recommender system that was in use for three or four years, the Entree Chicago system. That was my first experience in building something that had live users in it. And we learned a whole lot of stuff. Stuff that we should have thought about before we put it up there.
So I learned a lot by doing those things. But the other thing that I learned, and this is, you know, I'm still in academia now, I kind of learned that making a good product and having a good idea in terms of a research contribution are really very, very different things. And there's a lot of hard work that goes into making a good product, which doesn't have anything to do with the science. And I realized I was more interested in the science. And so I went back to academia with the idea, it's like, yeah, I want to concentrate on this kind of stuff, as opposed to all the necessary, but less interesting to me, work of turning something into a product. Yeah, yeah. But it was very frustrating to me that this critiquing idea, which we had developed a lot, we'd made a lot of technical progress on, this is what VIRB was built around, you know, we had customers, we had a prototype that was working, we'd done a lot. And then there was just no way to raise money in the dot-com crash. And then the whole thing just evaporated. It was kind of shocking to me. And even to this day, you don't really see critiquing being used. I think we could have had more impact with that idea than we did. There were issues with timing and, you know, anyway. I will definitely take that reminder with me, because this is something that we both have in common. You have built restaurant recommender systems back in those days. And I'm actually building restaurant recommender systems nowadays, if you want to put it that way, even though our product managers and also our organization would see it a bit more broadly and not only tied to restaurants. But this is how the business that I'm nowadays involved with started, with recommending the right restaurants to people, or at least that's something that I can safely say publicly: this is one of our main things, or at least part of my daily work. And as you just mentioned, the approach that you took with critiquing, like having that back and forth loop with the user, or at least the recipient of the recommendations, to better find out what they are interested in, and actually also what are the things that they dislike, seems very much resonating. And maybe, yeah, I think conversational recommendation is kind of coming back again. Something that nowadays would be more referred to as conversational recommender systems, if I'm not mistaken. Especially, I guess, with the rise of a lot of LLMs that may be very good at supporting this. Okay, so then you moved back to Chicago to then also become a researcher at DePaul University, where you then also throughout that time became a full professor.
How have you actually then developed and found your area of speciality within recommender systems?
I mean, just as you were mentioning a bit about the history, I had to check out that paper because I was just curious when it was presented, this Amazon recommendations paper on item-to-item collaborative filtering, which was actually 2003. So at least from my point of view, I would say still quite early in the RecSys days; some stuff had already happened, the dot-com bubble burst was over, and things were maybe getting traction again, also getting more attention from the commercial and industrial side of stuff. So how has the journey developed from there, where also things were better on the application side? Well, so I mean, in terms of my particular research trajectory, I've focused on a number of different things. So this critiquing idea was something we worked on for a long time and kind of developed, and that turned into the company I worked at. And when I got to DePaul, I worked very closely with Bamshad Mobasher, who was already there. And his background is a little bit different from mine, more in the data mining space, but it was actually a great collaboration. And we worked together for a long time; it was a very successful partnership. And some of the things we worked on: we worked on robustness, so dealing with attacks against recommender systems, characterizing attacks and thinking about how algorithms would respond. I also worked on, and one of the things I may be best known for was, this idea of hybrid recommender systems, which now seems a little bit quaint. But people sort of need to remember that at the time I was working on this, in the late 90s, early 2000s, there was this very stark division of content-based versus collaborative; these were considered completely different techniques that didn't have anything to do with each other. And the content-based people would say, well, you guys doing collaborative stuff, you don't know what you're recommending, you don't know what you're talking about, you can't handle cold start issues, you can't ask people questions about their actual tastes that people can understand, you can't explain what you're doing. And then the collaborative people would be like, well, you're totally dependent on noisy human-generated tags or labels for what things are actually about, and those don't appear in the real world, and our method works better at scale, et cetera. So people treated these as completely different things. Nowadays, people take whatever data they can get and throw it all into some model and don't worry about it. But at that time, these were considered two completely different things. And so what I had kind of thought about was much more like, how do these things play together? And so that had been my research when I was still working for the company to some extent, but trying to pursue my own research agenda, I was thinking about, well, what if you were able to add... and I was particularly interested in how some of these collaborative techniques could actually address some of the problems that we had faced in the knowledge-based space. So one of the things that we found with the restaurant recommender is that oftentimes you have items that you want to present where basically the system can't really tell which one is better than another. So you have a bunch of things that are all kind of in the same bin. And you kind of just end up having to pick arbitrarily something to show. And I'm like, hey, you know what?
I bet that collaborative signal could be useful, you know, in ordering these things and not having to make an arbitrary choice. And I'm like, you know what, actually, probably those techniques could play together in a lot of different ways. So that was one thing I worked on, hybrid recommendation.
So this example that you just mentioned, was it then more like two stages? So that maybe the first stage, which was solely content-based, provided some filtering so that you got candidates out of it. And then you were using the collaborative signal in order to sort among, or to impose a ranking on, those candidates. That's right. And so at that time I called that a cascade hybrid.
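To make the cascade idea concrete, here's a minimal sketch in Python. The stages, data and scores are all invented for illustration; this is not VIRB's or Entree's actual logic.

```python
# Sketch of a cascade hybrid on invented data: a first, knowledge/content-based
# stage filters down to acceptable candidates; a second, collaborative stage
# orders the items that the first stage could not distinguish.

def content_stage(items, query):
    """Keep only items matching the user's stated constraints."""
    return [i for i in items
            if i["cuisine"] == query["cuisine"] and i["price"] <= query["max_price"]]

def collaborative_stage(candidates, popularity):
    """Break ties among survivors with a collaborative signal, stubbed here
    as a precomputed score derived from other users' interactions."""
    return sorted(candidates, key=lambda i: popularity.get(i["name"], 0.0),
                  reverse=True)

items = [
    {"name": "A", "cuisine": "thai",   "price": 2},
    {"name": "B", "cuisine": "thai",   "price": 2},
    {"name": "C", "cuisine": "french", "price": 3},
]
popularity = {"A": 0.4, "B": 0.9}  # hypothetical collaborative scores

ranked = collaborative_stage(
    content_stage(items, {"cuisine": "thai", "max_price": 2}), popularity)
print([i["name"] for i in ranked])  # ['B', 'A']
```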
So basically you kind of make decisions in sequence, and I laid out different kinds of hybridization that one could have. And I also identified some that I thought were not possible, like, you know, combinations of techniques that were not possible. And then people went ahead and built systems that worked that way. So I learned that my intuition about what was possible and what was not was also not necessarily so great. And so we worked on recommender system robustness. I got very interested in social network type models and types of data, and thinking about what we would now call graphs. You know, now there are graph neural networks; those things didn't exist then. We were looking at alternate ways of building recommendations using, again, multiple kinds of signals, collaborative ones, but also other ones, as the data format changed and we were dealing with networks.
And then I started thinking about these ideas of fairness and, you know, we've zoomed far ahead in time now, maybe 2015 or so. And I was reading some of this literature about fairness in machine learning, you know, sort of starting to emerge as a question. And I was thinking, well, you know what, I'm sure this is an issue with recommender systems too. And started to do some work kind of exploring that question. And, you know, to get us to our present day topic, one of the things that I was challenged by at the time, thinking about fairness, it's like, okay, it's clear that there are different kinds of fairness, but fairness towards what we would now call item providers is something that we care about. And I couldn't figure out how to really fit this into recommender systems evaluation. So the way we had been talking about recommender systems, basically from the very beginning, was talking about it purely in terms of the user-centered view, the consumer of recommendations: what do they want?
Are they getting the things that they want? How do we make sure that we do a better job at getting the things that they want? Like, the entire focus was this relationship and evaluating that. And when I started to talk about fairness, I'm like, well, wait, this isn't a user-centered property, right? This is a more systemic property. And we don't really have a place to put those kinds of things in our discourse about recommender systems, in our evaluation methodologies and so forth. And that issue kind of nagged at me. And then at around the same time, and I need to look this up because I'm not remembering exactly when Hong Kong RecSys was. But I was at RecSys in Hong Kong. That was 2013. Okay, so this is actually even a little bit before I was at the conference. And one of the keynote speakers was Andrei Broder. So Andrei Broder is extremely well known as somebody who worked on computational advertising at Google. And I was actually at his talk, and I saw there were many things that were related to recommender systems, this idea of personalization and so forth. But there were a lot of things that were not. They were all about the advertisers and the places where things were appearing and the bidding system and all of this stuff. And it kind of confused me. I was like, well, what does this really have to do with recommendation? Yeah, there's this personalization piece, but there are all these other things.
And I thought about this a lot. And I actually eventually started teaching a class on computational advertising. Chicago is a big center for the advertising industry in the US, and this was a class that I thought would be appealing. And a lot of students were interested. There are a lot of kind of interesting microeconomic aspects to this. And it did tie into personalization, which I was interested in. So as I was thinking more about that, and thinking about fairness and so forth, I sort of came to this realization of what I would now call the multi-stakeholder idea. I didn't necessarily have that idea right away.
But with this idea, it's like, well, actually, the constraints on recommendation, the sort of properties that you want recommendations to have, are not just a function of what it is that the user wants, what it is that makes the user happy; there is a broader set of considerations.
You know, and when I wrote the first paper on that at UMAP, about recommender systems as being these multi-sided, multi-stakeholder environments, people were like, oh yeah, we know that already. Because of course, in a business setting, yeah, as soon as the first marketing person was told that recommender systems existed, they were thinking about that not as something that makes the user happy, but as something that can contribute to their goals as marketers, you know, enable the business to sell products, to connect people with products, as business goals rather than necessarily as the user wants. And I think when you were talking to Himan Abdollahpouri, I was listening to that podcast, he was a student of mine, and he was talking about groceries, right? So this idea that something might be perishable; like, hey, I'm going to sell this, that's a priority I have as a business. Users don't necessarily care about that, but it does matter that there's less waste and that I can sell things that I need to sell. So anyway, this idea of these sort of business priorities goes way back, right? So this goes back to the very first industry implementations of recommender systems, and we of course encountered that when we were commercializing stuff at VIRB, but that had kind of always been thought of as not really within the realm of recommender systems. It's like, oh, here's this extra thing that businesses are trying to do to make money, but that's not really recommender systems, because it's not about the user, right? It's like this extra thing.
Let me just step in there. Wasn't it really this looking at the objectives of a different, as we would nowadays say, stakeholder of the recommender system, in your case the businesses that were featured on the platform, that made people actually aware of the fact that so far recommender systems had been pretty user-centric? Or, as you say, alluding to fairness:
I mean, there are also aspects of fairness that you could define on a consumer level. Like, for example, being fair in terms of the quality of your recommendations towards different groups of consumers. Was this something that was also already a concern at the time? Or was it more like, detour is not the right word, but thinking about a different stakeholder when thinking about fairness, bringing you back to: hey, there are actually multiple stakeholders?
So was it like that? Or was fairness for consumers also a starting point, or did it come later? Well, so I think probably I was thinking about providers first. And then I had to think about, well, where do we associate that objective? And kind of the only way to do that is to broaden the frame, right? And say, oh, well, you know, these item providers matter too. And then I realized, oh, in fact, people have been talking about that before and writing papers about that before. But it's often considered as secondary, or considered as not really part of recommender systems research. And so I think I was kind of frustrated with that framing, because it meant that certain things would be, you know, the real recommender systems research and other things wouldn't be. And so my argument was basically: all of this is recommender systems research. It's like, all of these things matter. All these things are things that are going to appear in recommender systems in real settings. It's happening now. We shouldn't pretend this is not part of our remit as researchers to study this.
And it's not outside of the realm. So how do we talk about those concerns? And so that's where the sort of stakeholder idea comes from. Oh, and by the way, if we adopt this perspective, it provides us a way to talk about fairness. It provides a natural home for these concerns that otherwise it's hard to know how to incorporate. And so then when I sat down to think about, okay, where does fairness enter into recommender systems? You know, naturally you think about looking, for example, at examples from machine learning fairness. A lot of it is interested in the people who are getting results, you know, their credit being denied or whatever it is, a sort of standard kind of fairness situation. It's like, oh yeah, I could see how that might happen on the recommender systems consumer side, too. Right. So you can imagine, and of course there's been plenty of research on this now, but we were sort of thinking about, okay, where might all the various fairness concerns lie in the recommender system context? Yeah. As we already use some of those wordings, like provider, consumer, and allude to those aspects, could you maybe provide a quick intro of what a multi-stakeholder recommender system is about, or when it is actually a multi-stakeholder recommender system, or what these different aspects mean, so that our listeners have an understanding of what we refer to when we use those terms? Sure. So basically, the idea of stakeholder comes from the business literature. It's this idea that individuals or groups that are impacted by a company's decisions are stakeholders. And there are ways in which the company may need to think about impacts of what they do beyond, say... and this term emerged in contrast to the term shareholder, right? So shareholders, like, okay, these are the people who really own the company, and a lot of business theory of management is all about, okay, we're trying to provide a good return for shareholders. And so the term stakeholder arrives to say, well, actually, yes, shareholders are certainly important, but there are these other kinds of individuals who have a stake in what happens. And so that term gets translated into recommender systems.
So who are the stakeholders in a recommender system? Now, I would argue basically every recommender system is a multi-stakeholder recommender system. It's just a question of how do you evaluate it? How do you think about it and design it? Kind of what considerations do you take into account? But classically, we think about there being three key stakeholders in a recommender system. So I wanted to get away from the term user, because sort of everybody is a user in this setting. So I landed on the term consumer. I'm still not sure that was necessarily the best choice, but I'm stuck with it. People think of a consumer as somebody who buys something, but that's not always the case.
But you're consuming recommendations. So the system is delivering recommendations to you.
I'm a Spotify listener. I'm a Netflix viewer. I'm, you know, somebody who's buying, you know, going to an e-commerce site and buying something like I am getting recommendations from the system.
So I'm the recommendation consumer. And then on the other side, you have the people who are providing the items that are getting recommended. And some people use the word producer. I don't like that because not all people who are providing items are the ones who produce them. You can imagine, say, it's the record label that is providing the items to the music platform, but in fact, the artist is the one producing them. So there's complicated relationships.
So anyway, providers, just this generic term that means you're the individual who is sort of putting the items into the system and you get some benefit when those items are recommended. And then you have systems that are kind of more symmetric. So you have something like, say, social media platform, right, where I'm consuming recommendations in terms of some curated list that it provides me, but then I'm also creating my own posts. Those go into the system to be recommended to other users.
Usually, you know, only a small number of people are producing a lot of posts in social media, but still both of those roles are available. And then the third stakeholder is the system itself.
And so somebody's created that system for a reason, presumably. So Spotify has a recommender system for a reason. And there may be specific business objectives that might be different from everybody else's. Right. So you can kind of think about this in terms of the diversity of objectives.
What do people want out of the recommender system? You know, the providers want their items to be seen. The users want to get things that are relevant to them. And then the platform, well, they may have various reasons, depending on their business model, why the recommender system is important. Maybe it's something that increases sales. Maybe it's something that increases user satisfaction so people stay subscribed to it. A variety of reasons. But you may evaluate those things differently than you evaluate, you know, the impacts on other stakeholders. So these are the key ones. And I think you could also identify that there are other stakeholders. And in a recent paper that I've been working on, we actually identify three additional classes of stakeholders.
So upstream stakeholders would be individuals who are not actually directly providing items, but have some stake in those items. So an example might be, let's say, a songwriter, right? I get some royalty, maybe not very much, but I get something from the stream of a song or something like that. I'm not the one who has the contact with the recommender system platform, but I'm upstream of that. And so I care about what the platform does. You might have downstream stakeholders who aren't the actual consumers of the recommendations, but whom it impacts. So for example, children, if parents buy on Amazon? Sure, exactly, right. And I think about educational materials: maybe the teacher is getting recommendations, but then the children are the ones who actually get the books or whatever. I know you've talked to Sole Pera about those kinds of recommender systems. And then the other group is a generic group of third-party stakeholders.
And here we can think of, say, governmental organizations, regulators, folks like that, where there may be some societal stake in what the recommender system is doing. Think about employment, for example: there may be some equal employment laws, there may be regulatory agencies that govern how systems can advertise jobs and match people with jobs and so forth. So that's a kind of bigger spectrum of stakeholders. But the key ones, usually we think of the consumers of recommendations, the providers of items, and the system that's providing the platform where things happen.
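As a toy illustration of how this taxonomy changes evaluation, here is a small sketch in Python (my own framing, not from the paper): the same recommendation lists are scored once per stakeholder class, each with its own metric. All data and names are invented.

```python
# Toy per-stakeholder evaluation: one set of recommendation lists, scored
# separately from the consumer view and the provider view.

from collections import defaultdict

def consumer_metric(rec_lists, relevant):
    """Mean precision: the classic consumer-side view of quality."""
    scores = [len(set(recs) & relevant[u]) / len(recs)
              for u, recs in rec_lists.items()]
    return sum(scores) / len(scores)

def provider_metric(rec_lists, item_provider):
    """Exposure per provider: how recommendation slots are distributed."""
    exposure = defaultdict(int)
    for recs in rec_lists.values():
        for item in recs:
            exposure[item_provider[item]] += 1
    return dict(exposure)

rec_lists = {"u1": ["i1", "i2"], "u2": ["i1", "i3"]}  # recommendations shown
relevant = {"u1": {"i2"}, "u2": {"i1", "i3"}}          # what users truly liked
item_provider = {"i1": "p1", "i2": "p1", "i3": "p2"}   # who supplied each item

print(consumer_metric(rec_lists, relevant))       # 0.75
print(provider_metric(rec_lists, item_provider))  # {'p1': 3, 'p2': 1}
```

The consumer score here looks fine, while the provider view shows p2 getting a quarter of the exposure; that asymmetry is exactly the kind of systemic property a purely user-centered evaluation never surfaces.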
And I guess all of these stakeholders might have somewhat different goals. What I also found is that we should actually not mix up multi-objective recommender systems with multi-stakeholder recommender systems, which might sometimes be treated as the same. So there could actually be one stakeholder of the system, let's say the consumer, who might have different objectives. And this would already qualify as a multi-objective system if I were to design and evaluate for these different objectives of a single stakeholder.
But on the other hand, and maybe this is becoming too much terminology play from my side.
If I do acknowledge there are different stakeholders, and they have different objectives, then most of the time when I have a multi-stakeholder system, I do have multiple objectives, and therefore also a multi-objective recommender system if I actually design and evaluate for those. Would this be a way of putting it, or where do you draw the line, and how do you put those two terms together? Well, I think of multi-stakeholder recommendation as a way of thinking about the recommender system and sort of the ecology around it. And so you don't necessarily have to optimize your system relative to the different stakeholders' objectives in order to care about those things. So you may not have a way to tweak the system to improve, you know, property X or property Y relative to a particular stakeholder, but it doesn't mean you can't measure it or think about them as stakeholders. So it's really how you think about the involvement of different parties, that's the multi-stakeholder view. And then the decision to optimize the system in a particular way, that's making a choice of objectives, deciding how you're going to design algorithms to meet those objectives and so forth. And as you're saying, I could say, well, I know people like diverse recommendation lists, I know people want recommendations that are personalized to them. There's some tension between those objectives. I may build a system that has a little bit of diversity, a little bit of personalization, accuracy, and now I have multiple objectives that I'm trying to meet. They're kind of orthogonal. But once you get into this realm where, say, you really are very much focused on particular objectives, say on the provider side, you're necessarily going to have to think about it in a multi-objective way.
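One classic way to make such a multi-objective trade-off concrete is greedy list construction in the spirit of maximal marginal relevance (MMR): each slot balances an item's relevance against its redundancy with the items already picked. A minimal sketch with invented scores and weights:

```python
# Greedy MMR-style list construction (all scores and weights invented):
# each slot trades an item's relevance against its maximum similarity to
# the items already chosen, so the list stays relevant but varied.

def greedy_diverse(items, relevance, sim, k=3, alpha=0.5):
    """Pick k items maximizing alpha*relevance - (1-alpha)*redundancy."""
    chosen, pool = [], list(items)
    while pool and len(chosen) < k:
        def mmr(i):
            redundancy = max((sim(i, c) for c in chosen), default=0.0)
            return alpha * relevance[i] - (1 - alpha) * redundancy
        best = max(pool, key=mmr)
        chosen.append(best)
        pool.remove(best)
    return chosen

relevance = {"a": 0.9, "b": 0.85, "c": 0.2}
same_genre = {("a", "b"), ("b", "a")}  # a and b are near-duplicates
sim = lambda x, y: 1.0 if (x, y) in same_genre else 0.0
print(greedy_diverse(["a", "b", "c"], relevance, sim, k=2))  # ['a', 'c']
```

With alpha tuned toward relevance, the near-duplicate b would win the second slot; tuned toward diversity, the less relevant but different c does. That single knob is the "choice of objectives" Robin describes.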
Yeah, the listeners should all, I guess, be well aware of that chapter in the RecSys handbook on multi-stakeholder recommender systems. There's also a great survey from 2019 that we will attach to the show notes. And I guess in the survey, you also talk about three areas, and one of those areas you have already elaborated a bit on, which is the fairness aspect. But there were also two others, which is not the complete picture, but providing somewhat more insights, which are value-aware RecSys, and the other one would be reciprocal recommender systems. And I'm still in a more passive process of finding really the person in RecSys who could support me in an episode on reciprocal recommender systems. But these two aspects of value-aware and reciprocal: so maybe first, what do they mean? And how do they embed into this thinking more about the ecology of a recommender system when it comes to multiple stakeholders? Could you say something about that? I think of those as examples. We were trying to make the point that, in fact, again, multi-stakeholder recommendation wasn't really new, but it was a way of thinking about existing practices, existing research areas, as falling under this umbrella. And so those were things not necessarily like, these are the most important things in recommender systems, but these are existing areas of work that we can now, with this multi-stakeholder concept, see as really falling under the same rubric. So with reciprocal recommendation, this is a situation, and I think the two most prominent ones are, say, online dating and job recommendation, where the thing that you're recommending has kind of limited capacity in some sense, right? So this is not so much an issue of like, I can stream as many Taylor Swift songs as I want, right? There's no limit. But I'm only going to go out on so many dates in a week. I'm only going to respond to so many people on a dating app. I'm only going to interview so many candidates for a job or whatever it is. And so it matters a lot that the person to whom the recommendation is delivered is acceptable to the provider, right? So if I deliver your job ad to somebody that you would never hire, that is a waste of everybody's time, right? If I deliver your dating profile to somebody that you would never, ever in a million years date, again, that's a waste of everybody's time, right? And so in some domains this doesn't matter, but for some domains it does. So in employment and in online dating, the provider objectives, it's very clear that those things map. And you can't build a successful system unless you think about that. So those were areas... if you look back in the history of recommender systems, well before anybody was thinking about fairness or these other properties that got me thinking about multi-stakeholder issues, people were thinking about, okay, how do we deal with this issue of acceptability, which is basically a provider objective, again, not thinking about it in those terms, but how do we think about that and incorporate that into our algorithms? So reciprocal recommendation always had a multi-stakeholder flavor; it just hadn't really been talked about that way.
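A common device in the reciprocal recommendation literature, for example in RECON-style dating recommenders, is to aggregate the two directional preferences with a harmonic mean, so that a one-sided match scores near zero. A small sketch with invented probabilities:

```python
# Reciprocal scoring sketch (invented numbers): a match is only good if the
# preference runs both ways. The harmonic mean strongly punishes one-sided
# matches, unlike an arithmetic mean, which would average them away.

def reciprocal_score(p_seeker_likes, p_provider_accepts):
    """Harmonic mean of the two directional preference estimates."""
    if p_seeker_likes == 0 or p_provider_accepts == 0:
        return 0.0
    return 2 * p_seeker_likes * p_provider_accepts / (p_seeker_likes + p_provider_accepts)

# Job looks great to the candidate, but the employer would never hire them:
print(reciprocal_score(0.9, 0.05))  # ~0.09 -> effectively filtered out
# Moderately attractive to both sides:
print(reciprocal_score(0.6, 0.5))   # ~0.55 -> a much better recommendation
```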
And then value-aware recommender systems, this was kind of leaning more into the objectives coming, say, from the system side, and talking about issues of, say, profitability, looking a little bit at the marketing literature. Not a lot of RecSys people look at the marketing literature, but there are definitely people in that space who think about recommender systems and their business function and how these things work. And definitely when you talk about, say, computational advertising, it's like, well, I have to think about what's the right target market. If I am selling luxury cars, I want to make sure that the people who get those ads are likely customers, not people who just like to look at pretty cars, but people who actually might go out and buy a Lamborghini or something. So what's the value of making the connection? And as a system, if I'm operating the recommender system platform, I may think differently about the value of connecting users with products, just based on my own business objectives, whatever those are. And so when we think about the marketing function of recommendation, really, we're talking about the value-aware idea, in the sense of business value. So the reason those things are in the article is because I was getting together people that were already working in this area and saying, you know, let's talk about the multi-stakeholder framing of recommender systems, and do it in a way that will be familiar to people by pointing back to, yes, there are these strains of research that already exist in recommender systems. People have already thought about these things, and now we can see that these areas of research really have something in common. And in some ways, they have similar problems, but they've cropped up in kind of independent streams of research over the years. All right. Going back to that aspect: you talk also a bit about the economic side, or aspects from research in economics. And I guess one very important thing that you mentioned throughout those surveys and the article was that economics was providing a term to explain something that had appeared even before, which was multi-sided platforms, to also explain the success of platforms that arose, like, for example, Amazon, or also music platforms, or these economies where you do have a platform that provides, as main benefits, a reduction in transaction and search costs in order to let participants on the platform interact with each other. Which is also something that stems a lot from leveraging recommender systems for that purpose, be it in social media, in entertainment systems, like, for example, we mentioned Spotify, or be it, for example, in the e-commerce domain. So all of these things where I go somewhere and buy something quite easily without having to do all my research and then go to, for example, some offline store, or also with delivery platforms. How was the conceptualization of multi-stakeholder systems actually helping or supporting in that sense? Or actually, why would you say there was a need to have this established as its own field? Okay, a couple of different questions there, I think. So one is, I think, the relationship with the sort of multi-sided platform literature.
And yeah, this was a big influence on me. Again, when I was thinking about developing some of the ideas, I read some of this work, some of this literature in economics; I had been pointed to it by some of the computational advertising work. And when I was reading it, it was very clear to me, it's like, aha, this is exactly what we can describe many e-commerce systems and many media systems as being. And, you know, a key idea there in this multi-sided platform literature is: there have to be benefits, right? You have to have benefits from participating as a seller, you have to have benefits from participating as a buyer. There are lower transaction costs, yes, but it has to do the things that you need in order for it to be satisfactory. And recommender systems, it's like, okay, this is this great tool for improving the matching function in those settings. So yeah, I think there is a strong connection between recommender systems and this multi-sided platform theory. There are some ways in which the economics literature doesn't quite match up. I mean, in economics, you'll find the literature mostly focuses on matching problems where the goods are rivalrous. So, you know, if you get something, it means I don't get that thing. And that makes for mathematical proofs and problems, but it's not realistic in a lot of recommender systems domains where, for digital goods, you essentially have infinite supply. So anyway, the problem ends up being a little bit different; what gets studied ends up being a little bit different, but the concept was very important. Now, the second part of the question: so why do we need this idea? So one thing that was a strong motivation for me was just intellectual honesty, because there was this gap between what we were studying, what we were calling the field of recommender systems, and what systems actually were about and what they were actually doing. And if on the research side we didn't have a way to talk about that, then this gap would just get bigger, right? And we wouldn't be studying the kinds of phenomena that were actually happening out there, but also the things that people were calling recommender systems in the real world would just not be the same animals that we were dealing with in our research work. And it bothered me too that you would get people who are CEOs, and they would talk about the recommendation function as this sort of purely user-centered function. So it's all about the user experience. It's like, well, I know that's not really true, because if it was really true, I wouldn't see ads. And I do see ads. So I know that's not really true. And yet we didn't have a way to talk about it, right? We didn't have a way to study it. We didn't have a language for it. And so it just seemed like it was not reflecting the reality of recommender systems work. So part of the idea was just to say, look, recommender systems is all this stuff, not just the things that we used to think it was about. It's this broader frame. And making that admission lets us then study more of the real phenomena, the real impact. So in terms of fairness, it's like, okay, now we have a way to talk about the fairness of the recommender system related to different entities. We have a way to talk about the impact of the system beyond just the way that a user gets recommendations from it.
The impacts of the system spread far beyond that. Let's talk about the whole picture, right? And not just what historically has been the main focus of the field. I thought that that was important.
And so there's a lot of appeal to working on a simple problem, right? You know, it's like you have a dataset. It's got three columns: user, item, rating. You try to do cool mathematical things with it. The problem's well defined. You can compare your stuff against others. You can claim that it extends to other datasets. You can do lots of comparisons. Like, there's a way in which it's possible to make rapid progress when the problem is defined in a kind of narrow way.
And we see this, you know, in other areas of computer science too, where it's like people fix on a challenge problem and then, you know, everybody sort of iterates on that for a while.
But oftentimes what you find out is, like, okay, you've gotten as far as you can go there, but you might not necessarily have really addressed the real problem. The simplified problem might not translate into the real world so well. And for a long time in recommender systems, you know, the Netflix Prize, that kind of model of what it is that we were trying to do, was a dominant one. Nowadays I think people are more accepting of the idea of, like, okay, there's going to be a lot of content information. You know, now we have better tools, embeddings and so forth, where we can actually handle that content information. And we now have datasets, some of which include that kind of information. So now we get research that looks at richer input. One of the things that's very hard about multi-stakeholder recommendation research is that it's hard to get insight into the perspectives of all the stakeholders that you might consider. And it's also super domain specific. So we know in recommender systems, it's very domain specific anyway. So, you know, recommending apartments, recommending partners on a dating app, recommending music, recommending jobs, these are all very, very different kinds of things. And an algorithm that works well in one domain may not be the one that works best in a different domain. And that's even more so when you start taking the stakeholders into account. So you could have a music app, something like Bandcamp, that's super indie, and the providers are all indie artists, SoundCloud-type artists uploading their own songs, right? And so the stakeholders have a certain character on the provider side, as opposed to, say, a mainstream industry platform where the providers are mainly record labels or something, right? And so those are different stakeholders; they might have different needs, different expectations, different objectives. And that's even within the single domain of music streaming. And so understanding what it is that various stakeholders might want, might expect, what their objectives might be, that's going to be challenging.
And it's not the kind of information that appears in standard recommender systems datasets.
So a lot of people, myself included, use the MovieLens dataset, right? There is no information in the MovieLens dataset about what movie directors or movie studios want from movie recommendations; that's just not part of it, right? We have this information about user preferences that's implicit in the things that people have rated and so forth. But there's just no information about what these other stakeholders might want. In other cases, think about, say, business objectives. It's like, how does Netflix make money? How does Spotify make money? How does a, say, travel recommendation site make money? It's like, well, they're not eager to tell me the details of their business model and all the deals they have with various providers and all of these kinds of stuff.
That's super sensitive, commercially valuable information. So that doesn't appear, right? And so there isn't a good way to assess, oh, is this choice good for the business objective? It's hard to get your hands on that from businesses. So all of those things make this kind of work harder to pursue, and you end up either having to make partnerships... you know, we've had a successful partnership for a number of years with a nonprofit organization that does crowdsourced microlending. I think we were extremely lucky to kind of happen into that relationship. I've had other collaborations with various business organizations over the years with different degrees of success. Oftentimes you have to just kind of make stuff up. Like, you have to say, okay, let's imagine that the business's objectives are like that, or let's make some reasonable assumptions about what's valuable to providers or things like that. But that gap between being able to really understand what the objectives of different stakeholders might be, it's bigger once you get beyond the standard user preference data that we're used to dealing with in recommender systems. So it's a strictly harder problem. And then I think if you want to develop systems in this way, then you actually have to think about, okay, start talking to your HCI colleagues. It's like, okay, how do we do participatory design? How do we involve stakeholders in a meaningful way in thinking about the parameters of the system, thinking about the objectives of the system?
How do we elicit ideas from people who aren't AI experts about, say, fairness, or about some distribution of utility, when they're not really necessarily thinking about things in those terms? They're thinking about: how do I get my stuff out there to the people who need to see it? So these are all problems that we're not traditionally trained to talk about. You don't see RecSys papers about this, although we probably should. And so again, the work itself is strictly harder and kind of takes longer; it requires more interaction with real people, as opposed to data, than a lot of other approaches to recommendation.
I would argue that not thinking about a recommender system as naturally being a multi-stakeholder recommender system also leads to suboptimal system configurations, and thus also suboptimal results, when developing recommender systems. And I have encountered this a couple of times: when you are solely focused on the relevance of recommendations for users, or, if you want to take it in a broader sense, on the quality of recommendations, be it also including the novelty or the diversity of recommendations, but solely focusing on users. And then there are business partners coming and saying like, yeah, we want to change it in this way, we want to push that content, or we want to highlight this, and the organization running the system wants to benefit from that system in a certain sense. And you say like, yes, and there is a whole realm of research and also applications in this domain. But it seems like your stakeholders within an organization are less aware of the fact that a recommender system can serve these demands, and that it does not necessarily only have to be about serving the consumer of recommendations. From my experience, in some of these domains what you typically do is you try to alter the recommendations after your dedicated model has provided some ranked list of items for a specific user in a personalized fashion, and then you apply some re-ranking and generally move stuff around. But you already feel like this is not the right thing to do; you should embed this. And I found this again in that survey, and I'm reading out the title so that people can look it up in the show notes: it's called "Beyond Personalization: Research Directions in Multistakeholder Recommendation". And it was exactly that point later on in that survey where you were actually touching on these research directions, and you were confronting the two categories in which we could see a multi-stakeholder approach taking place. On the one side: situate the multi-stakeholder problem within the core recommendation generation function. And the other one, which I would actually pose against it, is re-ranking: apply multi-stakeholder considerations after an initial set of recommendations has been generated. Would you agree that this re-ranking is necessarily suboptimal compared to the integrated perspective? And that thinking about this as a developer of systems is important, so that whatever you develop is optimizing for different objectives, so that nobody comes and says, yeah, but could you please move around these recommendations, because we also need to think about this and that? And we could actually model these objectives and take them into account and trade them off within the core recommendation generation function.
So what is your take on that? Well, so I'm actually a big fan of re-ranking. And there's a couple of reasons why. One is that I think, as you get more and more objectives, it gets to be quite difficult to cram all of those into your loss function, to cram all of that into the model building. With the properties that you want of a nice convex loss function, it gets to be difficult to achieve a good integration of a wide variety of objectives all at the same time. The other reason is that, and you kind of pointed this out even in your question, these are often things that are a little bit dynamic.
And so the weight that you might want to assign to one of these concerns or another might vary from time to time. And that means I have to start over and retrain my model if I don't have the balance quite right. And so I think there is value in letting the recommender do its thing, training it to do personalization as best it can, and then figuring out ways that you're going to integrate these other kinds of concerns as a post hoc process. You're more likely to be able to have multiples of them, and it's more likely to be able to be dynamic. And we've done some work on this. I know one of the concerns is like, well, you know, how many recommendations do I need to make sure that I get the diversity that I want, and things like that.
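To illustrate the post hoc approach being described here, a minimal sketch follows; the names, weights, and scoring rule are hypothetical rather than any specific production system. The point is that the base model's relevance scores stay fixed while per-objective weights can be adjusted at serving time without retraining:

```python
# Minimal sketch of score-based re-ranking; names and weights are
# hypothetical. The base model is trained purely for personalization;
# other objectives are blended in after the fact.

def rerank(candidates, objective_bonus, weight=0.2, k=10):
    """candidates: list of (item_id, relevance) pairs from the base model.
    objective_bonus: item_id -> bonus in [0, 1] for a business or
    fairness objective. Returns the top-k item ids by combined score."""
    combined = [
        (item, relevance + weight * objective_bonus.get(item, 0.0))
        for item, relevance in candidates
    ]
    combined.sort(key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in combined[:k]]

# Because `weight` is just a serving-time parameter, the balance between
# personalization and the other objective can change from day to day
# without touching the trained model.
```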
In the work that we've done, one of the metrics that we look at in a re-ranking system is how far down the list the farthest-down item sits that still makes it into the final set of recommendations. And it rarely gets beyond like 35, 40. Once you go deeper into the recommendation list than that, you're starting to lose a lot of accuracy. And so you actually don't need a ton of candidates in order to have a diverse pool of things to choose from. At least in my work, I haven't seen that this is a big obstacle. I think there is work still to be done. For example, this question of: do I maybe want to optimize my learned model differently, knowing that I'm going to re-rank?
Right? I haven't seen any research on that.
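As an aside, the depth metric just mentioned, how far down the base ranking the re-ranker has to reach to fill the final list, can be computed directly; a hypothetical sketch:

```python
def deepest_rank_used(base_ranking, final_list):
    """base_ranking: item ids in the base model's order, best first.
    final_list: the item ids actually delivered after re-ranking.
    Returns the 1-based position of the farthest-down base item used."""
    position = {item: i + 1 for i, item in enumerate(base_ranking)}
    return max(position[item] for item in final_list)

# In the work described above this value rarely exceeded roughly 35-40,
# suggesting the re-ranker needs only a modest candidate pool.
```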
So I think that re-ranking kind of gets a bad rap. I think partly because, you know, you get more credit for building a cool model with cool objectives than you get by just scoring things and sorting them. But as a practical matter, if you look at a lot of real systems, I think you'll see that's exactly how they work. And you get a lot of flexibility by incorporating objectives via re-ranking. And in fact, the work I've been doing most recently specifically goes after this problem, looking at dynamics, meaning: what is the state of the system right now? We're looking specifically at fairness. Say, if the recommendations that I've given out over the past month or the past week or the past day, whatever my window is, have been unfair in one way or another, I need to do something about that right now. Just because I have tuned the system in a particular way doesn't necessarily mean that its outcomes in real time meet the fairness criteria that I want. It needs to be more of a feedback-oriented system, where I see: okay, this is what I've done so far, I need to maybe correct this going forward. And there isn't really a good way to do that in a model that's based purely on learning; you'd have to be constantly retraining with shifting objectives. Re-ranking is just more natural in that case. And then the other issue is having those multiple objectives there. In my recent work, we've been looking at social choice as a paradigm for incorporating multiple concerns, mostly fairness concerns, but coming from different perspectives, different heterogeneous definitions of fairness relative to different groups and so forth. How do I put all that stuff together? It's hard to throw it all into a regularizer, but with social choice, we can actually make use of the repeated interactions that happen with a recommender system, make use of the dynamic information, allocate fairness concerns to particular recommendation opportunities, and then promote fairness over time. That's how I've been thinking about it. It's hard for me to see how you would do the same thing if you were trying to put everything into the model. The paper that you're alluding to is called "Social Choice for Heterogeneous Fairness in Recommendation". It was just published at this year's RecSys. Yes. And there's actually a journal article that just came out. So this sort of architecture that I'm talking about is called SCRUF: Social Choice for Recommendation Under Fairness. There's an article in ACM Transactions on Recommender Systems that outlines exactly how that system works. And then the latest results using SCRUF are in the RecSys paper. Can you describe the "how" of it? Because I read that you're using different agents that help you in doing so, in a multi-agent framework. How actually can we perceive that? So what is happening there, actually? Yeah. So here's just an outline of how SCRUF works.
So the idea is we're solving two problems. One is we want to have multiple fairness objectives.
Those fairness objectives could be defined in a lot of different ways. We don't want to a priori say, oh, fairness means this, like matching some distribution or equal numbers of x and y.
Like we don't want to have to specify what those are in advance, because fairness is a complex concept. It could turn out a lot of different ways once you analyze it in a given domain. And we are also talking about different stakeholders here again, so, like, fairness for different stakeholders of the system? Exactly. Exactly. And so the idea is, imagine, and it's the kind of standard re-ranking idea, that the recommender system is going to produce recommendations. And now the question is: how do I turn those into things that are sensitive to the system's fairness concerns? The fairness concerns, for purposes of visualizing this, or sort of conceptualizing this, we think of them as agents.
And so we have two questions to answer. One question is: which fairness concerns are allocated to a given recommendation opportunity? A recommendation opportunity is when a user comes along and the system is going to deliver them some recommendations. So which fairness concerns are allocated to that opportunity? It could be all of them, but you may want to allocate some subset.
So that's question number one, what agents are allocated and maybe with what weight or something like that. And then the second question is how do those concerns then interact with the existing recommendations in order to produce a final result set that goes to them? And so we think about this in terms of social choice. So people in economics and game theory have been thinking about these kinds of problems for a long time. Allocation problems. I have some resource. I have agents who want it, which agent gets it. And there are different mechanisms. So in our case, what makes a recommendation opportunity valuable to the system or to a particular agent has two key parameters.
One is: how compatible is the user with that fairness concern? So again, we've done a lot of work in this micro-lending domain. People are choosing which loans they might want to provide money to support. Maybe I am somebody who is very interested in agriculture, and I give loans frequently to agricultural entrepreneurs. When I come along, the system maybe sees that agriculture is one of these areas that it needs to pay more attention to. It's like, oh, user 65, they're really compatible with this fairness concern; let's allocate this agent now, because maybe these kinds of people don't come along that often. So there's this compatibility question: what agents are compatible with the opportunity? And then there's the question of: what is the historical fairness situation? So maybe I'm actually pretty good in terms of recommending agriculture loans recently. But I have other areas, other concerns; say, for example, particular countries haven't gotten as much capital as they need recently. And so I need to focus on loans to Uzbekistan, and that's more important than the agriculture objective, even though both those objectives are there. So I have this question of: what is the historical status of the different fairness concerns? How have I addressed them, over whatever window matters to me in terms of the application? So: what's the need to allocate the agent, and what's the likelihood of success, the value of this opportunity? That information can then be combined across all the different agents to decide how they get allocated. And you can imagine different allocation mechanisms. Because it's a repeated choice situation, meaning a new user comes along and gets recommendations regularly, I might just pick one agent at a time and say, okay, whatever the most urgent thing is, I'm going to allocate you. And then I'll allocate somebody else when the next user comes. So repeated choice actually makes this problem a lot easier in a lot of ways.
So there are different mechanisms, and this is what our research has been exploring: these different mechanisms and what difference they make. You can look at the paper for some of those details.
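To make the allocation phase concrete, here is a rough sketch under stated assumptions. The names and the product-scoring rule are hypothetical; the papers explore several actual allocation mechanisms. It combines an agent's compatibility with the user and how underserved the concern has recently been:

```python
from dataclasses import dataclass

@dataclass
class FairnessAgent:
    """One fairness concern, e.g. exposure for a sector or a country."""
    name: str

    def compatibility(self, user_affinity):
        """How well this user matches the concern, in [0, 1]; e.g. a
        frequent agriculture lender scores high for an agriculture agent."""
        return user_affinity.get(self.name, 0.0)

    def need(self, exposure_share):
        """How underserved the concern has been over the recent window of
        delivered recommendations (0 = fully served, 1 = ignored)."""
        return 1.0 - exposure_share.get(self.name, 0.0)

def allocate(agents, user_affinity, exposure_share):
    """'One agent at a time' mechanism: pick the agent whose combination
    of urgency and compatibility is highest for this opportunity."""
    return max(
        agents,
        key=lambda a: a.compatibility(user_affinity) * a.need(exposure_share),
    )

# Hypothetical example: an agriculture-leaning user arrives, but loans to
# Uzbekistan have been badly underexposed lately, so that agent wins.
agents = [FairnessAgent("agriculture"), FairnessAgent("uzbekistan")]
chosen = allocate(agents,
                  user_affinity={"agriculture": 0.9, "uzbekistan": 0.4},
                  exposure_share={"agriculture": 0.8, "uzbekistan": 0.1})
```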
So I have now a set of allocated agents. Now the question is: how do those interact with the recommender? You can think about it in social choice terms: what the recommender is doing is producing a set of preferences, like this item is preferred over that, et cetera, on down the list.
And that's kind of intended to represent the user's preferences. You can argue about how well the recommender system really does represent the user's preferences, but that's the aim, right? The recommender represents the user's preferences. What social choice is all about, in terms of preference aggregation, is taking preferences from different folks and producing a societal aggregation. And this is what we think about when people think about voting. It's like, okay, different people have different preferences, and there's some mechanism.
Everybody's preferences go into the mechanism, and out comes a choice that we all sort of have to live with. And this is the same idea. In go the preferences from the recommender system. In go the preferences from the fairness agents, whichever ones are allocated, perhaps weighted in different ways. And then out comes a sort of societal preference, an overall preference coming out of the whole system. And again, with 100 years of social choice research, there are a lot of mechanisms for doing this. And part of our research has been to explore: okay, what are the trade-offs involved in choosing different mechanisms for doing this aggregation? And what are the best combinations of things, both in terms of the allocation piece and the preference aggregation piece? So that's kind of what SCRUF is in a nutshell. It's sort of a two-phase social choice thing.

It has some interesting properties from a social choice perspective. I have a colleague who's a computational social choice expert, Nick Mattei, who's working on this with me. So, for example, in a lot of the theoretical work on computational social choice, huge scale is a big issue, right? You want to know how well a choice mechanism works in computational complexity terms: like, what if I have millions of preferences? But we're not going to have that in the kinds of systems that we're talking about. You're going to have a handful of fairness concerns. And so some of those constraints are not as binding in the context that we're talking about. Another thing that they worry about in social choice is strategic behavior. Like, you may not want to reveal your preferences to the other people in the market. But again, in our case, we're thinking about a system which is operated by one company, one organization. And so maybe we don't care so much about preference revelation. People know that those fairness concerns are there; it's not a mystery. So strategic issues like that don't arise so much. So anyway, it's got some interesting characteristics from a social choice perspective.

And then I think it offers some valuable things from the multi-stakeholder recommender systems perspective, which is to say: I'm not imposing any constraints on what the fairness objective looks like for each of the fairness agents; it could be completely different for each agent. It's easy for me to integrate lots of different concerns, just, you know, add another agent, and to have different configurations of those concerns over time. And then there's also the dynamic aspect, which we often don't evaluate so much in recommender systems, though you see it a little more in the reinforcement learning paradigms: how does the system adapt over time to different things that might happen? And we see this, for example, in the micro-lending domain where we've been working: things happen in the world. So, for example, a war breaks out in Ukraine, right? Suddenly people want to lend to Ukrainian entrepreneurs or whatever, right? Because there's news, right? It's in the news, or there's a natural disaster somewhere.
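To make the aggregation phase concrete as well, here is a minimal, hypothetical sketch using a weighted Borda count, one classic social-choice rule among the many mechanisms the project explores; the recommender and each allocated fairness agent "vote" with a ranking:

```python
from collections import defaultdict

def weighted_borda(rankings, weights, k=10):
    """rankings: one list of item ids per voter (the recommender plus each
    allocated fairness agent), best first. weights: one weight per voter.
    Returns the top-k items of the aggregate ('societal') preference."""
    scores = defaultdict(float)
    for ranking, weight in zip(rankings, weights):
        n = len(ranking)
        for rank, item in enumerate(ranking):
            scores[item] += weight * (n - rank)  # Borda points, weighted
    return sorted(scores, key=scores.get, reverse=True)[:k]

# E.g., the recommender's ranking might carry weight 1.0 while an
# allocated fairness agent's preference ordering carries weight 0.3.
```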
To stay with that adaptivity point: people in the other parts of the world still have needs, right? So you don't necessarily want that signal, the popularity signal (getting back to popularity bias), to overwhelm everything.
And so having a system that's adaptive can actually help: it's sort of adapting in real time, attending to the different priorities. And that's relatively easy to tweak in small ways as needed; there's value in that. Great. Okay, that sounds like an interesting and very up-to-date read. And actually, personally, for me, it's also nice that, as we kind of saw when talking about your journey in RecSys, a couple of things reappear across fields. Especially in this episode, we also saw there are many things from the economics domain that play a role in RecSys, back and forth. I mean, not a big surprise, but nevertheless, it's good to always remind ourselves of that. Apart from that paper, there's one thing that you also reminded me of a couple of times already. And this was also one of your more recent works. It's different from a normal paper.
And you might already guess what I'm alluding to: that manifesto on post-userist recommender systems. The first thing, when you were giving that keynote as part of the AltRecSys workshop, was that I was at first treating it as, like, multi-stakeholder recommender systems just put in a different way.
But this is definitely not just about putting it in a different way; it's about putting things into perspective, given the very much growing use of LLMs that we have seen recently. And you thought about this further and came to a conclusion in that, which I would say borrows a bit from multi-stakeholder RecSys but is not only about it. Can you tell us or share a bit what this manifesto is about, and what you claim, and why? Okay. You know, I was very surprised that they invited me to give that keynote on the basis of what was kind of an extended version of a late-night rant. But anyway, here we are. So the genesis of this paper came from me seeing work where people were talking about generating content as a kind of recommendation. So: I can do personalized generation of content, and people were calling that recommendation. And this, like, made the hair stand up on the back of my neck. I'm like, why does this bother me so much? And so when you have that reaction, you sort of interrogate it. It's like, well, why do I feel this way? Why do I think this is wrong? And so, anyway, I try to distinguish in the writing here that recommendation, the sort of semantics of recommendation, is that we're connecting people with items out of an existing catalog. Generation is not that, right? Generation, that might be perfectly fine, but it's not recommendation. But then I started thinking about, well, from the user point of view, you can't really tell the difference in some sense. It's like, well, I get the things that I like; why does it matter that there's some different semantics there?
And then this got me thinking about something which, you know, comes from one of my colleagues here at CU Boulder; I can't take credit for the term post-userist. That comes from Jed Brubaker, from a paper that he had at CHI a few years ago. He does work on digital legacy: what happens to your data after you die? And this creates a problem kind of like the multi-stakeholder problem that we talked about earlier. It's like, well, the user's not there. How do we do HCI research when the user's not there? How do we do recommender systems research if we just think about it in terms of the user, but actually, you know, there's this broader ecosystem? So then I started thinking about, well, what would recommender systems mean if we really, really took very seriously this idea that it's really about this ecosystem? It's not just about the user. What does that ecosystem need?
Right. And so that's kind of the idea of post-userist recommendation: really thinking about, yes, there's this relationship between the user and the system, which is giving them access to content.
But if we take the ecosystem as a whole, and we take that seriously as what recommender systems should be doing, we come across a completely different set of issues than the ones that we think about right now in the particular domain of recommender systems.
So for example, how does a recommender system create a connection between the recommendation consumer and the provider? Like, how does a music app build a fan base for a musical artist? Right? We have not thought about that as a recommender systems question. But if you take this kind of post-userist perspective, it's like: the connection between the user and the item, that's just this tiny piece of the ecosystem. We want to think about this in a much broader way. And my argument is that if we just stick with this connection, especially if we're talking about digital goods (restaurants may be different), then this battle is lost already. The generative people win. But if we think about the ecosystem, then we actually have a job to do as recommender systems.
That was kind of the point of that paper. This is kind of related to my more recent research and just kind of the thrust of what I've been doing with this sort of multi-stakeholder idea.
I think it presents some interesting challenges for us as researchers to say, what does it really mean to take these other parties seriously as users of the recommender system? What is the provider's interface to the recommender system? Now, I know these things exist. Like there's YouTube Creator Studio, there's some kinds of interfaces that TikTok or Spotify have for the people who are creating content. I have never seen a paper about that at a recommender systems conference. Why not? We don't think that's a part of the recommender system.
We don't actually take providers seriously as users, as people whose experience we care about. And I think we should. So I think this ecosystem perspective is very important. And I'm very interested in this question: what should a provider's interface to a recommender system be? And if you talk to people who are building these systems, they're like, well, if we tell people too much about how the system works, we open it up to manipulation; people can manipulate the algorithm. And I want to reframe that, because what other people call manipulation, I call people giving information to the system about what their content is about.
And that seems like a reasonable thing: for people to give the system information about who their audience is and what their content is about. So the only reason you would call that manipulation is if you don't think that the perspective of those stakeholders should matter. Again, if you take this perspective and you take it seriously, it turns some of the received wisdom about how recommender systems should work on its head. All right. One important thing that I have my problems with, or actually with accepting, is that you think this battle is already lost. So for those who are creating digital goods: as long as there are only user-centric recommender systems, there is no value for creators of digital goods, as long as we don't start going beyond that user perspective? Well, again, it was a rant. It was a position maybe a little bit more extreme than what I actually believe, but I actually don't know how extreme it is. So, for example, there was a report that came out, I don't have the link right here handy, predicting that due to the rise of generative content, revenues to individual musical artists would decline by 75 percent over the next five years. All right. Okay. So, you know, I'm an amateur musician, not a professional musician. But if that was my source of income, I would be worried about that. And actually, as somebody who cares about that music ecosystem, who cares about the arts, I don't want those folks to disappear. And I think that that's a problem. So I think that the pressures that are there will move us in this direction for some kinds of media, for some kinds of items, in some kinds of settings.
I think that will start to happen more. And then I think the question is: if we want recommender systems to remain valuable, what do we have to do in terms of thinking about what properties the system needs to have, so that it is promoting human creativity and human flourishing, as opposed to making that more difficult? Great question. And I guess also a great call-out to all of those who are listening, to think about that. It makes people think, I guess, or hopefully it does. I think that's what AltRecSys was for: to ask some questions that otherwise aren't asked. Actually, this is somewhat leading us to the closing of the session and of this interview. Some of the questions that I frequently ask my guests are not only about where they see the field going, but you already provided me with a good way of putting it: what are other things that make the hair on the back of your neck stand up? I mean, you have around 30 years of experience in recommender systems. And even though it was not always called recommender systems, you kind of contributed to, and accompanied, recommender systems turning into recommender systems, at least in terms of the naming. So you have seen a lot, you have researched a lot.
Given this manifesto, we already see one outlook. What are other directions you see the field going, or things where you would say: hey, we should do this and we should do that?
And: we should pay more attention to this. Yes, I do have one other thing to add here. And this is around the question of transparency slash explanation. I think the work on explanation has really focused, especially more recently, on how do I provide some plausible connection between the item that I've recommended and the user's profile, such that the user will trust me enough to click on this thing and buy it, listen to it, whatever. So this kind of justification use for explanation. And what I find problematic about that is that, again, most systems are multi-stakeholder. Most systems have a variety of objectives going on. And that kind of explanation perpetuates the idea that it's all about you, when in fact it really isn't all about you. And so if you think that there's a transparency element to explanation, that I should be telling you something about how the system works, then I should not be hiding its multi-stakeholder aspects from you. And with this question, you run quickly into the ideology, right? So if I tell you, oh, I'm recommending this stuff for you, but part of it's also because I have this fairness issue and blah, blah, blah, people are going to be like: wait, what? It's not all about me? People expect that. They haven't been taught, it hasn't been demonstrated to them, that there are these other considerations. And when we've done work on this, people get kind of upset about that.
They're like, what do you mean? What do you mean it's not all about me? So anyway, I think otherwise we're just deceiving people, and this bothers me a lot. So when we think about recommendation transparency, I think we need to not sweep the multi-stakeholder aspects under the rug. We need to be upfront about what the objectives are. And I think we need to think much more broadly about what it means for a recommender system to have explanations and to be transparent. So, for example: why am I getting recommended this music now, when I was getting recommended different music last month? Like, I don't know; I've never seen a recommender systems paper that thinks about transparency in that way. Or think about, again, the ecological point of view: what do explanations for providers look like? If I want to know why the recommender is recommending my stuff to users in Germany more than it's recommending it to users in France, there's no system I know of that answers questions like that, or even research that would help understand how a system might do that. So I think we have a very narrow view of explanation and transparency, and it's very much tied to perpetuating what I think is an incorrect notion of what the recommender system is actually doing. That's something that I would like to see more work on. It does raise all these questions about people not wanting to reveal too much about how the system works, but that means it's a problem to be solved, not a reason not to start exploring it. Yeah, good way of putting it. This also connects to a talk that you gave, which was called "Recommended for You is a Lie", that I was unfortunately not able to find on the internet anymore. Maybe I did a bad job of researching it. No, it was on a panel, and there wasn't a paper associated with it or anything. But yeah, this is exactly what I was talking about, right? This idea that we've accustomed people to thinking about recommender systems in non-multi-stakeholder ways. And if we understand them to be multi-stakeholder systems, then we've got to explain them differently. Yeah, makes sense. So: being transparent, improving explanations, and being honest in those explanations that we provide to the consumers of recommendations. I think it's a challenge. Like, we're not there yet in terms of this kind of work, but, you know, it doesn't mean that we shouldn't be pursuing it.
We haven't gotten there yet. Yeah. For this podcast, we have already seen many folks that you collaborated with: Himan, for example, or Michael Ekstrand. Are there other folks that you would like to listen to on this podcast that you can think of now? Well, you know, I have been working recently with somebody whose name might not be familiar to you, but his name is Karl Higley.
Yeah, he is. He is. Okay. But anyway, he's been working with us on a project building a recommender system platform. It's called POPROX. That would be a whole other interview in itself. But as somebody who's interested in research and also has a lot of experience with recommender system applications, he is also somebody who has a lot of ideas about, you know, here are some real problems in recommender system implementation and application that haven't seen a lot of exploration, that we don't think of as important RecSys problems. And I think he's got some good ideas about where some interesting open problems lie. Very good. Sounds like a great recommendation. The reason why I'm saying "he is", at least that I'm aware of him, is that Even Oldridge, in one of the very early episodes, also mentioned him as a very valuable person to talk to on this show. Since then, maybe like 20 episodes have passed. So I should definitely make sure to reach out to him sooner rather than later. So yeah, thanks for that recommendation. And Robin, I thank you very much for sharing all of these insights and your experience, and also for all the patience that you brought, especially toward the end of the podcast, where we were already overstretching a bit. So yeah, I guess people will definitely appreciate that you shared and participated in this project.
Well, thanks for inviting me. And thanks for, you know, doing this podcast and bringing in lots of different voices, research and application. Great. Thank you. Cool, then have a wonderful day, and yeah, see you soon, at the latest at the next RecSys. Okay, bye.
Thank you so much for listening to this episode of RECSPERTS, Recommender Systems Experts, the podcast that brings you the experts in recommender systems.
If you enjoy this podcast, please subscribe to it on your favorite podcast player and please share it with anybody you think might benefit from it. If you have questions, a recommendation for an interesting expert you want to have in my show, or any other suggestions, drop me a message on Twitter or send me an email. Thank you again for listening and sharing and make sure not to miss the next episode, because people who listen to this also listen to the next episode. Goodbye.
