#14: User Modeling and Superlinked with Daniel Svonava
Note: This transcript has been generated automatically using OpenAI's whisper and may contain inaccuracies or errors. We recommend listening to the audio for a better understanding of the content. Please feel free to reach out if you spot any corrections that need to be made. Thank you for your understanding.
Recommendations and content discovery are kind of the problems that user modeling, I think, is useful for.
And yeah, we see increasingly that the real-time aspect is key in order for your product to make decisions.
So you can frame each page view or each interaction or each notification that happens to your user in your product as a decision.
Decision on where to send them, what to show them.
All of these decisions should be navigated from the place of understanding what the user wants, what sort of their traits might be.
And you should always have some objective that you are optimizing for.
You should be taking everything the user gives you about themselves as a hint to what they might like.
It really helps with the cold start.
So it's all about this, right?
You have maybe some onboarding form to your application or to your event or whatever it is.
You should utilize what people tell you in that form.
And if they can bring some social profile with them, you can bootstrap the initial version of the user model from this and then refine it based on behavior.
Right.
So I can show you a couple of interesting items, users, whatever, and then refine this in a kind of real-time feedback loop.
That's the magic, right?
We need to figure out a way to really show that we are on the side of the end user and that we are on the side of the platform owner.
And somehow reframe this push for arbitrary engagement at any cost into meaningful engagement that's actually helpful for people.
Hello and welcome to this new episode of RECSPERTS, Recommender Systems Experts.
For today's episode, I'm really pleased to be joined by Daniel Svonava.
Daniel is the CEO and co-founder of Superlinked.
And he has also been working for more than five years as a senior software engineer and tech lead at Google, where he was working on forecasting and pricing systems for YouTube ads, as well as on user modeling, which will be one of the topics that we are going to address in today's episode.
But this is not all that Daniel has done in the past.
He has also founded several startups and he holds a master's degree in software engineering from the Slovak University of Technology.
User modeling will be one of today's topics.
We will also go into depth about real-time personalization.
We will have a glimpse into the current ML tooling landscape and we will also definitely go into the current endeavor that Daniel is involved in, which is Superlinked.
So hello, Daniel. Welcome to the show.
Hey, hey, I'm happy to be here.
Long time fan of the podcast.
I really recommend everybody to check out, for example, the episode on adversarial attacks on recommender systems; that really blew my mind.
Thank you for having me and let's get this started.
Thanks. I mean, it's always nice if we can also relate back to some of the episodes that we have already had in the past.
And I mean, there are already plenty of them.
I'm trying to do my best to come up with more and more on a monthly basis.
I will try my best to keep up with it.
Daniel, I already mentioned a couple of points about you, but I guess you are the best person to talk about yourself.
So can you introduce yourself to our listeners?
Yeah, for sure. So, you know, I would consider myself to be a pretty technical person by training, by background, you know, doing coding competitions since high school.
You mentioned the Slovak Technical University, which I'm pretty sure most people don't know.
But I would say, you know, broadly, the listeners should imagine this kind of Eastern European technical upbringing.
Right. But then it was clear to me that I will never be the best in coding competitions.
And so I switched to working on different projects and doing internships.
So I did the Google and IBM research internship during university.
And I realized that actually you can have a real world impact with algorithms.
And then I started my first company on the back of that thought. It was two engineers starting a company together, which typically ends in tears, because you build some very cool stuff, but then the business side usually doesn't keep up.
And so after maybe a year of working on a system that produces summaries of people's vacations, so it compiles all kinds of information from the photos and all kinds of data from a trip and then creates a story out of that, and after doing a few pilot deployments and so on, my co-founder and I went back to work at a bigger company, and we both ended up at YouTube.
So, you know, I would actually recommend people do that out of university, maybe not for seven years, like I did, but maybe for three years.
I think that's the kind of maximum ROI because you learn how, you know, this stuff works in practice.
And then it's just so much easier to go and do stuff on your own, which is what I did after I left YouTube.
And, yeah, now it's Superlinked. We are trying to turn into a product what I built there in a very custom way, right?
And that has to do with understanding the users of your product and creating a more engaging and safer experience for them.
Okay.
I see.
So definitely something that we are going to cover.
However, there are actually two questions that are popping up in my mind when you have been talking about this.
So the first one is: what are the main things that you learned back then, when you were founding a company for the first time, that you can use today for what you are doing at Superlinked?
Go read the Lean Startup book.
You know, that's a good starting point.
And then Jobs to be Done.
That's a good package, because we were building for a long time towards something we thought should exist.
As opposed to, you know, interacting from the early days with enough people, not just a very close circle of initial users but a bit broader, actually realizing where people are in their heads, and then building towards that.
Right.
It's very hard to create a completely new technology and at the same time change how people do stuff.
So I think in general you want to pick one degree of freedom where you are changing something and then fix everything else basically.
Basically, back then you somehow missed the point of testing your market or your customer assumptions as early as possible.
Yeah, pretty much.
And going into way too much engineering for the sake of engineering; we were coming up with new algorithms to solve problems that we then realized were actually not the main problems.
Definitely worth making that experience.
I mean, if you have gone through that yourself, then it's much easier to adapt to it and do that or learn from it in the later stage of your career and take it from there.
Actually, the second question is: I mean, you have spent more than five years at Google, but you mentioned that if you could have the same experience again, you would rather recommend staying for about three years.
Why actually three?
So is it that the returns somehow diminish by then, or what is the reason why you recommend people rather stay three years in such a company?
Obviously, it's a ballpark.
And for me, it maybe adds up to something like seven years.
The way to think about that, and probably about any job, is that in the first couple of half-years of the job, or the first couple of quarters, the expectations of the people around you, the responsibility, and the types of problems you are solving tend to increase or grow in scope very quickly. Right. So every half a year, the size of the problem I was looking at would double.
And, you know, back then we were like writing MapReduce by hand and doing all those types of things in those early times.
This kind of expansion hides the other problems, right?
Like there are other problems about working at the big company where things might not be moving as fast as you might want.
There are all these other aspects of making progress, not just creating the best product or best internal tool to kind of manage everybody that's involved in that process.
You know, there are difficulties, but this is in the early years, for me at least, offset by this growth of scope and sort of reinvention of the job every half a year if you want, if you really are focused.
But this doubling just slows down inevitably, right?
And I think this happens around those sort of three, four years on the job.
I think it starts to sort of plateau or get some diminishing returns and then you have to work so much more to advance by a little bit because there is just more competition there on the top.
And the availability of those really interesting problems kind of gets scarcer and scarcer.
When you think about it, basically you end up with some sort of estimate, like four years. It's a good time to, you know, do a random walk with restart, basically.
Okay, but the time that you spent working for YouTube was definitely not spent unwisely.
I mean, you learned many things and you contributed a lot.
And you also mentioned that you worked a lot on user modeling back at the time, which is the first topic for this episode today.
Can you give us a first overview of the problems you were dealing with while working for YouTube, and also how user modeling related to this?
So just broadly, you know, user modeling, I think, is quite an ambiguous term.
So the way I think about it is that you have this problem of understanding your users, individual single users, in order for your product to make decisions.
So you can frame each page view or each interaction or each notification that happens to your user in your product as a decision, decision on where to send them, what to show them.
And all of these decisions should be navigated from the place of understanding what the user wants, what sort of their traits might be.
And you should always have some objective that you are optimizing for in mind.
And so for me, broadly, the problem of user modeling is collecting the data that users generate while using your product.
So that's, you know, the behavioral data.
But there is also metadata that they might create to describe themselves.
They might be creating content and, you know, there might even be third party sources that they, for example, bring with them.
So, you know, for example, in the Web3 space, people sign into services with wallets and bring in almost like a little passport of data with them, intentionally.
Right. That's kind of the difference.
And then you should use that input to better understand them and offer a better service.
So collecting all the data and then running some models on top to make the data useful.
I like the saying that data is like oil because oil is useless if you don't refine it. Right.
So actually by itself, it's useless.
So you need to organize it in a way that makes it useful.
Or refine it in a way that makes it then consumable by certain downstream services, cars, buses, trucks.
So what are the buses and trucks?
So I think the buses and trucks is kind of how you actually get value out of it. Right.
And you do that by piping the data back, or piping the insights from the data back, into the product experience. Right.
So not just having a dashboard about what the users might be doing, which is certainly the best first step. Right.
Monitoring analytics is where a lot of the data science effort goes to make sense of this data.
But then, OK, you are tracking the KPIs. Good. But how do you move the KPIs?
And that happens by actually taking those insights about users and putting them back into the product in the form of recommendations or in general relevance.
Right. That's kind of one side of the coin and then the other side might be safety.
So suppression of bad behaviors or, you know, removal of spam.
For example, at YouTube, you know, on the sort of safety side, you can take the example of YouTube comments.
So many, many years ago, YouTube comments were not the best place to be, let's put it that way.
And a lot of work went into changing that and surfacing the helpful comments with the right vibe, which is not a very precise way to put it.
But I think if you have a bunch of user generated content in your product, you are in charge of setting the tone of what the code of conduct on your platform should be. Right.
And then somehow you have to figure out the way to encode it in the system, because you have, you know, a million comments generated per day, easily.
So there is kind of no way to really moderate that other than aligning the machine with what you think is a good idea to highlight.
OK, I see. How does that actually relate to using user models for personalization, or more specifically for recommendations?
So I already get that for building user models, for creating user models, you use different sources of information.
You use the behavioral data, you use what users do to describe themselves, or their created content, from which it might be possible to draw some conclusions about them, because they basically show who they are through the content they create, and also the third-party data.
So different sources of data, different notions of data, somehow bound also to your platform when it comes, for example, to the behavioral data.
So now we have all these different sources of user data, and we do have certain, let's say, use cases for personalizing content on our page. So how do you connect these two worlds? What is in the middle of it?
Vector embeddings. Lots of vector embeddings. Yeah, basically.
Right. So here may be a good reference for people who haven't seen it yet.
I would go read the TikTok Monolith paper. Right. So TikTok has a cloud offering now, and a conversation can be had around the company as such.
But I think what is not disputable is the quality of their recommender infrastructure, especially with an eye towards the real-time nature of it. Right.
So I think this is something they do quite well and people have stuff to learn from them.
One of the aspects there, for example, is not only having vector embeddings, or these representations that help you understand what's related and what's not related, for the content, which is normally the strategy, but also having them for users. And then you have a choice of doing it only at query time. Right.
So user comes to your platform and now you have all these sources of data that you are reconciling into a picture of what the user is about.
And you can do that at query time and just create a vector and search into the vector space of the content to find a recommendation. Right.
This is kind of the two-tower approach. But what, for example, we at Superlinked are thinking about, and the way we approach it, is that we actually also store the user embeddings. So we actually materialize them into a database, because then it's not only a tool to search the content database, but it's useful in and of itself.
If you have real time updated user embeddings of everybody in your system, then you can do things like clustering. You can identify segments of users that maybe need different treatment in your product or would benefit from being addressed by a differently tuned recommendation model.
For example, you can do label transfer. Right.
So if you have labels for your users, let's say accounts that have been marked as spam, you might want to transfer those labels onto previously unexamined accounts, and you can do that by proximity in the embedding space.
Yeah, there's like a bunch of benefits to actually also materializing the user side of the two towers.
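To make the label-transfer idea concrete, here is a minimal sketch of nearest-neighbor label propagation over materialized user embeddings; all names, sizes, and thresholds are illustrative assumptions, not anything Superlinked or YouTube prescribes:

```python
import numpy as np

def transfer_labels(user_embs, labeled_idx, labels, k=10, threshold=0.5):
    """Propagate a label (e.g. 'spam') from labeled users to everyone
    else by majority vote among the k nearest labeled neighbors."""
    # Normalize rows so dot products become cosine similarities.
    normed = user_embs / np.linalg.norm(user_embs, axis=1, keepdims=True)
    labeled = normed[labeled_idx]               # (n_labeled, dim)
    sims = normed @ labeled.T                   # (n_users, n_labeled)
    nearest = np.argsort(-sims, axis=1)[:, :k]  # k most similar labeled users
    votes = labels[nearest].mean(axis=1)        # fraction of 'spam' neighbors
    return votes >= threshold                   # predicted label per user

# Usage sketch: 1,000 users with 64-dim embeddings, 100 of them labeled.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))
labeled_idx = np.arange(100)
labels = (rng.random(100) > 0.9).astype(float)  # 1.0 means spam
predicted_spam = transfer_labels(embeddings, labeled_idx, labels)
```

The same materialized matrix would also feed the clustering and segmentation use cases mentioned above, which is exactly the benefit of storing the user side rather than computing it only at query time.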
Okay, I see. So you have been working on user modeling during your time at YouTube. Can you think about further challenges that you have been facing during that time when it came to user modeling?
How actually was user modeling connected to the forecasting of ads and their performance, for example?
Yeah, so I'll give you two completely opposite extreme examples of forecasting.
So it covers the space.
Exactly. With everything else in between. So, two examples of behavioral prediction of slightly different types. First problem: imagine that you have many hundreds of hours of content uploaded to your platform every day, and you have to, at upload time, make a decision whether to transcode the uploaded video into 20 different versions to serve on 20 different devices, or to do that transcoding when a view actually comes and somebody wants to view the content.
The real time transcoding is much more expensive than the kind of batch transcoding that you can do as the content is being uploaded.
So basically, you have to decide, you have to predict: will there be enough views for this video that is just being uploaded? So you have no kind of prior on the performance of that video. You know, of course, about the author, you know about the metadata and so on.
And you have to make a decision.
Which means it's not that it's too expensive in general, but it might be too expensive for the given reach that I'm predicting a video to have, right?
And by the time you realize this, because a bunch of views is coming in, you are doing the real-time transcoding, and it takes you a while to realize, because it's kind of a distributed system. So at some point you are 100 times more expensive compared to just taking the decision initially: okay, let's transcode this one to all the formats, let's push that into a CDN, and, you know, we will save on all that network transfer, all that real-time compute.
And it's kind of a funny binary decision, right? So you might be working with some probability distributions, but at the end of the day, there are just these two actions that you can take, with kind of a limited prior.
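As a toy illustration of that binary decision (all costs invented): batch transcoding pays a fixed upfront price, on-demand transcoding pays per view, so the call reduces to comparing the predicted view count against a break-even point.

```python
# Toy cost model for the transcode-at-upload-or-on-demand decision.
# Both constants are invented for illustration.
BATCH_COST = 1.0                # transcode into all formats at upload time
ON_DEMAND_COST_PER_VIEW = 0.05  # real-time transcode plus extra network/compute

def should_batch_transcode(predicted_views: float) -> bool:
    """Batch-transcode when the expected on-demand cost exceeds the
    fixed upfront cost; break-even under these numbers is 20 views."""
    return predicted_views * ON_DEMAND_COST_PER_VIEW > BATCH_COST

print(should_batch_transcode(5))    # False: cheaper to transcode on demand
print(should_batch_transcode(500))  # True: transcode everything upfront
```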
Yeah, so this is kind of an example of short term prediction with not so much information. And then for the ad performance prediction, right? So this is kind of what I was mostly focused on during my time there.
Imagine you have this interface through which people buy ads. Okay, so people come in and they buy $10 billion worth of ads every year.
And the way they do that is they come in and they start to create a campaign. And the campaign has tens of different types of settings that it can have. It can target keywords, it can target different aspects of the video, it can target different aspects of the user model.
So the model of the viewer of the video, so interests, things like this.
So basically just many different levers that let you specify your audience that you want to reach with ads, right?
Down to the IDs of videos your ad should run on. Okay, so there is a whole industry of third parties that identify subsets of videos. And this can be 10,000 videos, for example.
And then these companies would sell that list and then advertisers buy those lists and use them to target the ads because the seller claims that this set of videos has certain type of audience or certain type of quality.
And so then the configuration targets specific video IDs, basically. So it's very complicated. Long story short.
But as the user is doing that, right, as they're creating the campaign, they expect that, in real time, they'll see a set of estimates of what the campaign will do when you run it.
What do you mean by what will it do?
How many clicks it will win. So, you know, the campaign has a budget and a bid. And there is always some goal that the buyer has in mind.
I want to get this many clicks, you know, on a certain type of audience. And so the forecasting system has to take into account a very complicated description of the campaign.
And it needs to be predicting: OK, if this runs in a week and competes with everything else that will be running in that week, which can be literally 100,000 other campaigns, how will this campaign perform in the competition for this future traffic?
Right. I see. And you need to sort of compute that real time. That's the first problem, because it's a part of the iteration of the buying flow.
So the user tweaks the campaign, sees the result. OK, not enough clicks. OK, I need to relax some constraints somewhere.
OK, now the number increased, I like this, right? Which, by the way, is an important detail: the user has expectations about what happens when they change one of the constraints in a certain way. The number goes up or down as it should. Right. There are these kinds of expectations of monotonic behavior. Like, if I relax a constraint, there is no reason this should become more expensive.
Yeah. So now you need to be also consistent across that search space of different campaigns. This is hard, but the most extreme case of it is when they buy one year ahead.
And when the campaigns are not auction campaigns but reservation campaigns, you basically have to say: I will deliver a million impressions, and I promise this to you for this price.
And then you sign a contract. Right. So you are now committing the platform to serving that campaign of very specific volume at very specific price a year in the future.
And if it targets specific video IDs, you are now trying to predict how many views there will be of those videos in a year, and then how this will compete with all the other campaigns you sold.
This is the opposite extreme. Right. So in the first problem, we were predicting something that happens in an hour or even less.
In the other example, we are predicting something that happens in a year, which can create a multi-million-dollar liability per campaign for your company.
So, in both cases, the first being the video upload case, I will just refer to it as that for the moment, and the second one being the ads performance forecast.
So I do understand that both of these cases have really high real time performance requirements.
And by performance, I mean really the latency. So you almost expect, especially in the latter case, almost instantaneous feedback because you don't want to wait for minutes to get the result of your changed settings in terms of audience reach.
You want to have it in, I would assume, sub-seconds, so that you can almost instantaneously get this back and then pull the levers as long as you need until you reach the certain audience you want to get.
However, I miss the aspect of the user model because given the video upload case, I mean, there is a user. We know something about the user.
We might also know how successful, in terms of reach or other factors, the content that the user created before was.
And is it now the case that you are exploiting all these different sources of information about a user to make the batch decision and have the video saved everywhere when it needs to be served? Or is it not really far-reaching content, so that I will just decide to do the transcoding in real time? Where is the user model actually coming into play when it comes to the video upload case?
Yeah, so on one hand, you have the author of the video, right? And their past performance, you know, this then acts as a feature in the prediction for this specific use case. So one way to think about this is like a stack. Some people would call it, you know, a feature store, where you represent the uploader description.
And then, you know, you have some general understanding of the viewer appetite, let's say, and that one is not, let's say, per user, but it's an aggregated model that tries to predict, OK, how many views at this time a video of these properties might get.
So maybe that one is not per user, but it's still some aggregation of what you expect from the users of your platform. OK, it's created from basically all these different sources of data that you might have, that you try to integrate.
You know, that's the uploader side. And just to make it a bit more specific for the ad performance forecasting: because the campaigns are so complex, you can't really train a model that takes the campaign as the input and spits out the forecast, pretty much. The constraints of these campaigns are not really smooth.
So, you know, it's something that you have to actually simulate rather than try to estimate from a description of a campaign, because these campaigns will be competing in a certain way.
You have to simulate that competition. Where you can do modeling is in predicting the traffic. Right.
So in a simple way, this could be a time series prediction problem, right? You would somehow segment your inventory or your traffic and then you would extrapolate some time series based statistics over those segments.
And then you would try to somehow feed your campaigns into those segments. Unfortunately, this doesn't really work because, again, the campaigns are too specifically targeted.
You would never generate those segments fine-grained enough to figure out how they satisfy the campaigns. And so what you have to do is generate almost like a future log.
So you have to sort of create like a future event of people coming to your platform and actually interacting with it. And this is where the user model is useful because it can, you know, nowadays generative AI is the big topic.
But basically, you know, it's kind of running a model the opposite way. Right. So we are feeding it some random vector and it's spitting out users. And then, you know, you have a bunch of future users and now you can simulate your campaigns against this.
And, you know, then each user represents a thousand actual users, right? But they have all the properties like a normal user would have. And therefore, then the campaigns which target users work with this representation as opposed to segments.
Right. So we're trying to kind of create future users and do the simulation on top of them.
Let me just come up with a very simplified example just to understand whether I'm getting the gist of it. So let's say there's a world in which you can sell chocolate and ice cream. And you know that let's assume 75 percent of people are interested in chocolate.
25 percent are interested in ice cream. And now I want to run a campaign that is advertising ice cream. So now I just want to know: okay, I do have these two segments, which are somehow aggregations of users, but using the same model as I use for a single user, just to represent a cohort of users.
So now I'm basically checking how many of these users I will reach with my ice cream ad, and then I can say: hey, since I assume that 25 percent of my whole user base will be interested in ice cream for the next 12 months, somehow averaged or something like that, I can say, okay, 25 percent times the number of overall users, so these are the people I'm going to reach, times, let's say, a daily average of clicks or something like that.
So if you could do it this way, this would be nice. And you could: if you had two segments, your Venn diagram basically has three regions, and so you can sort of model each region and you're good to go.
Unfortunately, this is not the case. You have a Venn diagram with thousands of various little overlapping sets. And therefore you have to kind of pretend you are creating actual individual users.
Yes, each simulated user represents hundreds or thousands of actual future people, but they are like fully fledged people. So they have interests, they have all these combinations of the various sets in the Venn diagram, in proportions that are realistic for the actual population.
And you have enough of them to get a good sense of how the Venn diagram basically looks and what intersects with what, but you can't model the actual overlaps directly. It's almost like Monte Carlo, a little bit, because you are kind of sampling from all the users and then you are projecting that somehow into the future. And the additional benefit is that these people are compatible with your campaign descriptions, because your campaigns target actual real people, not segments.
And so therefore your simulator can then pick up a campaign, pick up the future user traffic made out of individual actual simulated people, and then it can sort of figure out how these campaigns will compete and what's going to be going on.
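A stripped-down sketch of that Monte Carlo idea, sticking to the chocolate-and-ice-cream toy world (all probabilities and sizes invented): sample individual synthetic users, let each one stand in for many real people, and evaluate a campaign's targeting against them directly instead of against segments. For simplicity, the interests below are drawn independently; a real system would have to sample from the joint distribution so that the Venn-diagram overlaps come out realistic.

```python
import numpy as np

rng = np.random.default_rng(42)
INTERESTS = ["chocolate", "ice_cream", "skateboards"]
MARGINALS = np.array([0.75, 0.25, 0.10])  # invented interest probabilities

N_SAMPLES = 10_000
USERS_PER_SAMPLE = 1_000  # each synthetic user stands in for 1,000 real ones

# Each row is one "fully fledged" synthetic user: a boolean interest vector.
users = rng.random((N_SAMPLES, len(INTERESTS))) < MARGINALS

def forecast_reach(targeting: np.ndarray) -> int:
    """Estimate how many real users a campaign targeting any of the
    flagged interests would reach (any-match semantics, for illustration)."""
    matched = (users & targeting).any(axis=1)
    return int(matched.sum()) * USERS_PER_SAMPLE

ice_cream_campaign = np.array([False, True, False])
print(forecast_reach(ice_cream_campaign))  # roughly 25% of 10M simulated people
```

The simulator described in the conversation would then run the auction competition between campaigns over this synthetic future traffic, which is the part a segment-level time series cannot capture.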
So I tried to heavily simplify it, but it's not nearly as easy as that. I mean, it's not like people have a preference for either ice cream or chocolate.
They might be having a preference for both of them. And then there is not a world where there are only two things I could be interested in.
But there are thousands of things that I'm interested in and that I'm somehow distributing my energy, time, and attention over. Those were two very specialized examples, very extreme, specialized examples, which a platform like YouTube would have.
Maybe to kind of bring this home a little bit, we can talk about something in the middle of the road that most platforms might be dealing with.
And there, you know, it's, I would say, content discovery. So for YouTube, 60-plus percent of watch time is driven by the recommended videos there on the side. YouTube Shorts launched after I left the company.
So I don't know if they published some stats for this, but it's all basically recommendation-driven. There is no search; it's 100 percent driven by recommendations. And TikTok is nowadays also a very clear representation of content discovery mattering a lot.
Right. And then I would also mention the trend of social commerce. So we have these apps coming from Asia; Shein, for example, they have their own app as well. And it's kind of recommender-based shopping as opposed to search-based.
So I think in the West, we really think about shopping as kind of search first, right? I see something somewhere or get an idea, go search for it. And then we all worry about search relevance. Right.
And then there is some purchase further down the line. But it seems that this is shifting a little bit towards the recommender engine based situation where you are kind of window shopping or you are getting inspired or you are consuming content.
And as a side effect of this, you have the opportunity to buy stuff. In that case, you know, recommendations and content discovery are kind of the problems that user modeling, I think, is useful for.
And yeah, we see increasingly that the real-time aspect is key. And the reason for that, I think, is that you really want to close that feedback loop. Right. So basically, your system figures out that this user should absolutely see that, you know, cat-on-skateboard video. OK. And then there is the time it takes you between realizing this in the system, the user seeing the video, and the user responding, so, OK, watching it or not, or, you know, even liking it.
And then how long does it take you to take that response and update your understanding of, OK, what does the user want? Right. And then showing this to them again.
So it's almost like, you know, the user, together with the system, is converging to something that they really like. And if you can run these loops really fast, then within one user session, they get to where they want to go.
As opposed to having this kind of batch process that every hour updates all of these click counts, and then you serve recommended videos based on which video you are on right now, or recommended products based on which product you are viewing right now.
If you can shortcut that feedback loop from half an hour to a second, it's a completely different thing, basically.
You personalize before you run the risk of losing the user, because otherwise the user just feels like the current session is not tailored to what the user's assumed intent is, to describe it like that, or something in that direction.
Yeah, absolutely. So I think that's really key in order to have a chance in this fight for attention nowadays. Right. And then I think also this whole idea of real time data infrastructure changes how we build systems.
It changes what user experiences we build. So this is, for example, if somebody is thinking, oh, at which point of time should I start worrying about this? Right.
You might end up building a different system if you go with this paradigm. Right. So these apps that we see succeed and hit first place in the US app store charts.
They're designed around a system that can personalize the content in real time. Right. So I would say I would really take it from the start and build that into the core design of the product.
So that's on the product decision side. And then on the engineering side, I think it also changes how we work with these systems because you can change something and immediately see the effect of the change.
You can, you know, let's say you are building a data pipeline that updates your embeddings. Right. If every time you want to test it, a Spark cluster has to boot up and stuff needs to happen, and then something somewhere updates all the vectors at once.
So basically the batch scenario, right?
Yeah. In the batch scenario, you kind of work with this on and off. Right. It's not kind of a flow type of work.
But if you can, you know, inject a data point on the input, see it go through the system sort of instantly and create an update and then you kind of see, OK, what is this doing?
It's almost like going from, you know, working in C++ and building stuff for a long time, to kind of hot updates in JavaScript, where you can change stuff and you see it right away.
I think it makes the developers maybe a bit lazy. I'm kind of an old school guy.
But at the same time, I think it's super clear that the productivity just skyrockets.
When checking the Superlinked website, I also found a blog post in which you claimed that real-time personalization, or real time, is going to be the buzzword in 2023.
So I definitely see or share your perspective on the added value of real time.
And you already said that it has an impact on how you need to think about how you need to integrate with the product, but also, of course, in terms of technology.
So I have been in a situation in the past where I've been fighting a lot for having certain real-time recommenders in place, or near real-time, so that you really, within a session, update your knowledge, and not only the knowledge, but also what comes out of the knowledge.
So what you, for example, said would be a vector representation of the users that is updated and can then be used to serve the next video in a row.
My question is as follows. Do you think that this is of equal importance in various industries, use cases and domains where recommendations are adding value?
Should everybody use real-time personalization? When should they use it, whether they are mature or not as mature? Or what do you think would be the right time and the right fit for real-time personalization?
So one heuristic that I think is useful for the answer here is looking at the systems around the system you are building.
So let's say the kind of system you are working with, the one you are inserting a recommender into, is made out of a bunch of batch pipelines, right?
And your system would be called from one such pipeline because that's how the system already surfaces its results somewhere further downstream.
Then it is probably not a good idea to start with real-time recommendation, or real-time user modeling in general.
You can still use the same techniques on the level of algorithms. So the same embedding models, the same way you define which features are important and so on.
This you can reuse, but you can run it in a batch workload, right? In order to, let's say, annotate something in your pipeline, label users or new recommendations or whatever it might be.
I think a good sort of rule of thumb is whether you can get the actual end user to interact with the result of your recommendations, right?
So you are powering some sort of front end that could establish this feedback loop.
So it could be that the user is interacting with the results or the results affect something in the environment that you can then measure if this sort of action yielded a positive result.
I think this is where the feedback loop can then be shortened and you can converge faster. So for me, that's kind of how I look at it.
So for example, for us at Superlinked, some of the clients want to personalize emails and email campaigns. This in and of itself is a batch use case, right? Because you are sending out a bunch of emails.
So on the surface, batch will do. Then you start to ask, OK, so we send the email that will have some call to action, right?
The user clicks and goes somewhere. Now they're in your app or on your website.
They have entered the region of being able to generate this feedback. And so there, you know, you might want to actually respond to what's going on. Right.
So which button in the email they clicked and then what they're doing once they land, are they interacting with something that's right there or did they start to scroll?
And then it's more towards the real time situation.
So that means in that very example that the email campaign itself, so selecting how I'm going to display my content or what I want to sell or whatever.
Let's say you are doing offers, right? You are doing offers.
Let's say I have the possibility to have three offers in every email that I send out to my customers.
And coming up with the top three offers for each client is basically a process that I do in batch. But this is the batch world, and there it's fine to stay in the batch world.
But am I getting you right that you argue for taking the feedback towards the email itself?
So, for example, I'm now clicking something, the identifier is coming in, and I know at a certain point in time that the client clicked this and is now on my page. And that this is the right time to already take that email click feedback into account, of course not for the batch model, but for a different model that uses or applies real-time personalization.
Yep, exactly. So already the first page you render after they are landing on your website from the email, you have one of those decisions of what to do with this user attention.
And it's a waste if you don't optimize that.
So I think that makes sense. Now you have a choice, right? Do you want to maintain two systems? Do you want to have your batch system and then the real-time system?
And then I would ask, OK, how are you keeping them consistent?
I think this is a big topic, by the way. In the data world, people came up with something called the customer data platform: a set of tools designed purely to aggregate customer data from different sources, identify who is who across different systems, get everything in one place as a single source of truth for your customer data, and connect this with APIs to everything else.
We have seen this evolve for many, many years; there are now big companies doing this. We have no equivalent like this, I think, in the machine learning community, because of how all the models we build work. So it is kind of a potentially controversial take, right? But for me, I think people build use-case- and objective-specific models to optimize certain interactions, let's say.
But these models are not really sharing their understanding of what the user is about.
They might be sharing training data, right? They might be sharing maybe some features extracted from things the users interacted with.
But they're not really by design consistent across themselves. And so then the user experience could feel disconnected, right?
Because it's kind of different models taking different decisions in different points in the application.
I don't think we have this idea that all these neural nets, for example, should have a couple of layers that are actually shared across all my user modeling tasks for my product. Right. And then I'm bolting on parts that kind of make it task-specific. And by the way, now the community is realizing that for the large language models, we will have to do this, because nobody wants to pay for actually training that first half of the network, basically.
We might do some sort of fine tuning on top, but the first big chunk, we can't keep retraining that, right?
But there's a point where I would definitely disagree. Doing real time personalization for me does not necessarily mean that I need to retrain the whole thing in real time.
So since you have already mentioned two-tower models, I guess this is a nice example to illustrate this with. Let's say we train a sequential model for users where we use an adaptation of Word2Vec, so Prod2Vec, coming up with representations for users and for items in the same space, so everything's fine.
Or we use basically the two towers for it to come up with representations.
So now we use the latest user vector representation, aka embedding, and store it in some feature store. When the user arrives at the platform, we look up this embedding and use it, together with some candidates, as input to the ranker, come up with a ranking of items, and pick the topmost ones.
And now the user clicks on a certain item; then I would assume I could take that click and the data the click is associated with, so the corresponding item embedding, to update the user embedding. That would result in a changed user embedding, which then could also result in a changed order in my next step or something like that.
So, I mean, this is something where you would say: okay, my individual components, so what creates the user or the item embedding, or the ranking net, they stay the same; they won't be retrained each and every time.
But what is basically the input will be changed. So how do you think about this?
This is the right trade off, I think. And these are the two levers that you have, right? You kind of have the embedding model. And I totally agree. This one should be more static, but I would still say you need to keep it up to date, but not real time. Totally agreed.
The thing that's kept updated in real time, indeed, is the user vector: the first half of the network produces the user embedding. And then, you know, are you making sure that when you feed that user vector as an input to all your different downstream ranking models, let's say, this is somehow consistent, right? That the user vector is produced in a way that the perceived experience down the line is consistent for the user across all the different ranking sub-problems that you might have: suggesting other users to interact with on the platform, suggesting content, highlighting content you might have missed, which is a different objective than recommending content in a feed, right?
You have all kinds of different settings. How is it consistent across those settings? Because, okay, you insert the user vector at the input, but this ranking model, if it's a complex one, you have no bounds on what it might figure out to do. And this, I think, is somehow a problem.
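A minimal sketch of the in-session loop described just above, assuming frozen towers, normalized embeddings, and an exponential-moving-average update of the materialized user vector; the decay factor alpha is an arbitrary illustrative choice, not a recommendation from the conversation:

```python
import numpy as np

def update_user_vector(user_vec, clicked_item_vec, alpha=0.2):
    """Nudge the stored user embedding towards the embedding of the item
    the user just clicked; no tower is retrained, only the vector moves."""
    new_vec = (1 - alpha) * user_vec + alpha * clicked_item_vec
    return new_vec / np.linalg.norm(new_vec)

def rank(user_vec, item_vecs):
    """Score candidates by dot product and return indices, best first."""
    return np.argsort(-(item_vecs @ user_vec))

# In-session loop: every click immediately reshapes the next ranking.
rng = np.random.default_rng(1)
items = rng.normal(size=(500, 32))
items /= np.linalg.norm(items, axis=1, keepdims=True)
user = items[0].copy()  # bootstrap from a first interaction

for _ in range(3):
    order = rank(user, items)
    clicked = order[1]  # pretend the user clicks the second-ranked item
    user = update_user_vector(user, items[clicked])
```

The consistency concern raised here would then be about feeding this one shared `user` vector to every downstream ranking model, rather than letting each model maintain its own drifting picture of the user.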
Can you elaborate a bit more on what you specifically mean with consistency there?
Have you ever had an experience where, for example, on YouTube, so I can make fun of YouTube because hopefully I earned the permission.
I mean, you are a YouTube user, you are always permitted to make a joke on YouTube. Yes, that's true. That's part of the game.
That's true. So, you know, you are watching a YouTube video and you are somehow deep into a session where you are learning about some specific topic and suddenly you get the ad that's kind of completely off topic and kind of breaks the flow of the user experience.
Or you get some recommendation that's completely off, and it's off in some wrong way, right? It's not that the model is, like, exploring some neighborhood here, you know.
Yeah, yeah. So somehow spurious. Yeah. So like something that just seems to sort of take a whole new path, you know, and kind of breaks the flow of the session.
This is kind of a vague and abstract way to say it. But I think it's these kinds of discontinuities in the user experience that then cause the session abandonment, which, by the way, might be a whole other model that you are running to predict, you know, the likely outcome of all these different choices that you are making, in terms of probability.
So, yeah, the same way in the data world, we figured out how to only keep one latest phone number for the customer across all our different tools and use cases in a big company.
The data world figured out how to model these entities, and somehow figured out cross-organizationally how to reconcile and then always use the latest phone number.
And in the ML world, I think the teams are still siloed. It's a completely different team doing ad selection modeling from the one doing content recommendation. It's a completely different thing, likely using, you know, all kinds of different features on the input as well as different model architectures.
And these people sort of maybe sometimes talk, but there is no shared understanding of the user between those two models. Then you get issues, basically.
Okay, because as part of that session, there's of course not only that single model involved, which, if used alone, could provide that consistency.
But due to the fact that there are several models involved in creating the environment in which I experienced my session, aka the website or you name it.
And knowledge is not shared or treated in the same way. This might yield certain inconsistencies between what I as a user expect and what I'm confronted with.
Would that be the right way?
It's guaranteed to generate this, because there are many more ways to get this wrong than to get it right. Like, if you have a chatbot on your website now, with ChatGPT being used these days.
Imagine you would have a different sort of model of what tone the user prefers, so friendly or professional. You would have a different model on each sub page.
And then sometimes the bot would be talking like a Western cowboy, sometimes it would be talking like, you know, a professional in a bank. This would be breaking the experience.
So this is maybe one way to highlight the problem in a slightly made-up, weird scenario. This is kind of, I think, what we are dealing with.
And in big companies, there are hundreds of different models making these decisions. If you don't force a shared understanding of the user, they will diverge and the experience will be inconsistent.
So this is why I'm kind of passionate about user modeling as a category of problem, because normally the problems of detecting bots, recommending videos, and doing some labeling of the users are all viewed as separate problems.
And I think that's a missed opportunity, because I think there can be a foundational layer under all of those that just tries to understand and aggregate as many signals about the user as possible and really understand deeply the behavioral patterns and all of the traits and things that are worth knowing. And then, of course, there is privacy, which I think is also very important. Right.
For example, we don't take personally identifiable information into our system. And, you know, as the advertising world is moving from third-party cookies to first-party cookies, companies have to do this in-house, and they can't go behind the scenes and join, you know, user data using these identifiers. Right. And we are designing a system, you know, to work in that first-party data world, privacy first. But that said, the idea is that you should be taking everything the user gives you about themselves as a hint to what they might like or what they might be about.
And then, taking all of this in, doing a good centralized job of deriving representations that are useful for all the downstream tasks, you have a much easier job solving those individual downstream tasks, because your user representation is already super rich, basically.
So what is popping up in my mind as we are talking about this are multitask models. So is this somehow addressing the problem properly or at least an answer to provide a higher level of consistency since you at least have the same basic model, but then on top of it, you have several heads for the different tasks that you want to perform?
I think so. The more you can push from the task-specific model down into the foundational user model, the more consistent the output should be. And then I think objectives are an interesting topic as well, because actually, when I was interviewing at YouTube back in the day, I was interviewing both as an engineer and as a PM.
And you can maybe tell that I like to talk a lot and so on. And I worked with a lot of PMs and I understand their desire to come in and have influence over these systems, right? Have editorial influence over what sort of things should we highlight for the user?
What are the ways to encode this information, right? There are kind of all kinds of ideas coming from the product org and oftentimes there is tension, I think, between data science and product.
Definitely. Because of this, because, I mean, as an engineer or data scientist or whatever, you want to base things on empirical facts. And I don't want to say or imply that product managers don't know their business well; they should. And I would also say they do, but there's always somehow a clash between whether you want to do certain stuff manually or whether you rather want to do it data-driven.
So here is the secret trick, right? I think it's important to be able to express objectives and be intentional about what the model should do and so on.
So this kind of completely unsupervised strategy of, yeah, let's just maximize the clicks or whatever. I don't think that works because the search space is just too big.
Like you have to have some insight into some feature engineering, some more refined objective setting and so on. And these things are much simpler if you have a simple model.
And what enables you to have a simple model is to hide all that complexity in that user model, right? So push all the complexity there. And then on top of it, now you can play with objectives.
Now you can kind of have something where you collaborate with the PM because this is now meant to achieve a specific goal. And you have separated that from the general, let's just create some embedding that really describes what our users are about, right?
You now get to have this big project down there in the basement, where it's all data-driven, all about kind of not really having strong opinions.
And then you have this layer that takes that rich user signal and marries it with the objective for the product. And here is where we collaborate. This is much simpler to retrain. It's a much thinner network. So much easier time, right?
So this is the hack for dealing with PMs.
Which is then definitely still data-driven. So for me, it just sounds like: allow the user model to have a high complexity, to be generally able to represent all these different notions of the user's intent, which you can then leverage or exploit with different task-specific models.
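In code, that separation of concerns might look like the following sketch (PyTorch, with invented layer sizes and task names): one heavy shared encoder produces the rich user embedding "down in the basement", while each product objective only gets a thin, cheap-to-retrain head on top.

```python
import torch
import torch.nn as nn

class SharedUserEncoder(nn.Module):
    """The heavy, slowly retrained foundation: raw user signals in,
    one rich user embedding out, shared by every downstream task."""
    def __init__(self, in_dim: int = 256, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class TaskHead(nn.Module):
    """A thin, objective-specific layer: the part you retrain quickly
    in collaboration with the PM."""
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, user_emb: torch.Tensor) -> torch.Tensor:
        return self.head(user_emb)

encoder = SharedUserEncoder()
heads = {
    "feed_ranking": TaskHead(),
    "spam_score": TaskHead(),
    "notification_ctr": TaskHead(),
}

signals = torch.randn(8, 256)  # a batch of raw user signals
user_emb = encoder(signals)    # one shared representation
scores = {name: head(user_emb) for name, head in heads.items()}
```

Because every head reads the same `user_emb`, the tasks cannot drift apart in their picture of the user, which is the consistency argument made above.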
Okay, that's interesting. So, how to combine user modeling with real-time personalization: I guess we talked quite a lot about why it's important, and also, on a, let's say, methodological side, about how to do it.
But staying with Simon Sinek, of whom I guess you're also a great fan: what is the "what"? In terms of the ML tooling landscape, I see that you are heavily involved with these kinds of questions; just yesterday you were at a conference, speaking and discussing vector similarity search.
How are we supported with the current landscape in order to perform user modeling and real time personalization correctly? Or how do you do it?
So for the listeners that are maybe not running right now, they should check the show notes; we will add a map of this kind of machine learning and data vendor landscape, which is this browsable, zoomable, completely crazily complicated map of tools.
And it tells you that, yeah, there are just a lot of options. I think, you know, traditionally, this is the whole problem of buy versus build. Right. So as a company that faces all kinds of other challenges besides user modeling and personalization, you have to figure out, okay, what's the right trade-off, how to navigate the problem.
And for me, basically, there are two steps to the process. And I think it starts with the problem. Right. That's something I learned from my first startup, basically, and I think it's the case for any, you know, product management exercise.
If you don't understand the problem you're trying to solve, then tool selection is the wrong activity to be doing.
I really, really like what you're saying, because you hear it so often that people are basically shouting for others to be solution-driven. I rather think about it as: be problem-driven. Because if you really make sure that you understand the problem inside and out, and correctly, the solution is sometimes kind of self-evident, or at least much easier.
I guess there is a quote by Albert Einstein, who said something like: if I'm presented with a problem, I would invest 95% or 98% of the time in understanding the problem, and then the last 2% is what I need for solving it.
Like sharpening the axe and then chopping the tree in the last minute, or something.
That's a good one.
Yeah. So especially if you talk to vendors, obviously they'll tell you they solve all the problems. Right. And so why do you need to understand the problem if they can solve them all.
But I think in reality, anybody who has been doing this for at least a little while realizes that it's good to experiment on your own and just feel things out using the tools you already have.
Feel the problem out. Right. And maybe be agile, experiment, see where the value is going to come from in terms of performance. You know, what sort of KPI do you actually want to move?
That's kind of my luck, that my co-founder also comes from a software background, but then went into McKinsey and kind of strategic advisory, basically. And so he's always the one who says: all right, what's the KPI we are actually moving for the company? You know, let's skip the jargon and just ask: how are we making our clients make more money?
So and then kind of work backwards from that. And if you are in a company, you have the same problem, right? Like, how are we achieving our goals that we set out to do this quarter? Right.
And then kind of work backwards from that. So that's the first step.
And then I think the second one is you are looking for tools that ideally help you get going really fast because the worst thing is to pitch this big project and then go and spend six months kind of burning through some budget without having anything to show for it.
And it doesn't matter that it's strategic investment, that, you know, it needs best practice, whatever. Nobody cares. After six months, you don't have results.
Your manager is not going to like this. The other thing, like you want tools that help you get started and get some quick wins and quick validation.
But then they can kind of grow with you. So this is the one-two punch, right? Easy start, get some wins. And then: can I come in and start tweaking stuff?
Can I override the stuff that's important for me to keep iterating on so that I can get even more performance so that I can make this fit my product better because each product is a bit different.
This is a tension, right? Because there are lots of tools out there, even for personalization, for example, that are this kind of black box, right?
So they help you put all the data in, some recommendations come out, helps you get started. But then, you know, suddenly you have some idea, let's prioritize this feature or something.
Let's engineer a whole new feature and then you might be having a challenge.
The opposite extreme, of course, is just getting completely general compute platform and building, let's say, a real time recommender system or any other kind of full fledged complicated system completely from scratch, right?
It's the opposite problem of six months of work and possibly not moving past that proof of concept stage.
Okay, I see. So it's basically a two step approach. The first step is getting a proper problem and goal understanding.
And the second stage is to start with a tool with a technology that enables you to collect evidence feedback very fast.
But that is also capable of being extended, scaled out if it proves to be useful for you.
This is the holy grail. Yes, perfect summary. Okay.
Let's say you are confronted with that overall landscape: why have you come up with that approach? What is it that makes you need it?
Is it something that you need at Superlinked or is it something that has proven to be useful in your past experience or where and why did you come up with that?
So actually, obviously, as a part of market research for Superlinked, we are talking with a lot of data scientists and a lot of PMs that have to deal with data scientists, which is always very interesting.
And we have seen this sort of bimodal distribution, right? We have seen that teams are either on the side of "yes, we are doing this ourselves from scratch".
And then they struggle to deliver on time and on budget and to actually move the KPI. And then we have seen, especially in personalization, which by the way is quite an interesting application of user modeling,
lots of people with almost like a PTSD, you know, from deploying some third-party solution that was a black box: getting some initial validation that personalization is a good idea, which, okay, yeah, that's good,
but then running into problems of wanting to tweak the solution down the line and not being able to, of having to change your product around the recommender engine rather than the system changing around the product.
And then you are kind of stuck, which is also not ideal. And, you know, it's not just personalization.
I think there for any task nowadays, you know, you either have the API that just does it, prime example, large language models, right?
But then pretty quickly, somebody wants to tweak something that you can't tweak, basically.
Once you start to get the real feedback from real customers, real users, this is guaranteed to happen.
And if it's not happening, then you need to go talk to them, right? Because they definitely have that. You just need to listen.
So there is either this, like, "here's the API and it solves your problem". That's kind of a property of that whole landscape; it's kind of fun to look at the landscape and go: which extreme is this company on, right?
Is it just giving me APIs and everything is fine? Or is it giving me like, put your Python code here or a Rust code or whatever, and then make your system and we don't, you know, that's it.
Good luck, right? And they don't necessarily bundle all the tools that you need for this. So you have to rebuild.
Okay, now you have to rebuild the evaluation. You have to figure out how to bring human into the loop, right? For safety.
There is some problem that you'll encounter while reinventing all these wheels that will just make you not deliver.
I think, yeah, so there are these two camps and I have just seen that with my co-founder and we said, let's pick a use case and let's deliver something that is easy to start with, but then also you can adjust.
I mean, this sounds pretty obvious, but somehow, yeah, it seems that people kind of end up on one or the other side.
I guess it's really hard if you want to deliver something really general. If you want to deliver a platform for real-time machine learning of any kind, or machine learning in general, you can't get this property of a super easy start out of the box where you then tweak exactly only the parts that need to be tweaked.
So it helps to pick something, right? Pick a use case. It can be anything but, you know, a use case and then build around the use case.
I think that's what we'll see more and more as this landscape keeps exploding in complexity and people have to figure out how to integrate 10 different tools to make something.
I think we'll see these kinds of vertically integrated solutions that look at the task and take you from zero data scientists in the company, early stage, where you can onboard because it's pre-configured and the stuff is good,
all the way to 100 million users and a data science team, where these people can still work within that framework on that specific problem.
Yeah. And I mean, you have taken your experiences with different frameworks, and the conversations that you had, into a product, which is Superlinked, and that's definitely also something that we should cover here.
So I have seen Superlinked described as a personalization engine as a service, or, to put it differently, as a way to evaluate, launch, and operate real-time machine learning personalization for consumer apps.
Quite a mouthful. This is something that I picked up, but is it really reflecting what you would describe Superlinked to be? Or, to put it differently:
Daniel, please enlighten us. What is Superlinked, and what is the problem that you are going to solve with it?
We are already solving it! That's kind of the news from late last year: we onboarded our first production customer.
Oh, concrete.
We were running our own app on top of the infrastructure for the last year. But in December we felt that, okay, it's ready for the next step, and so we onboarded a social network.
They are using Superlinked to personalize the feed, which is the main part of the social network. We have five or six customers in the pilot design phase.
So, you know, writing the integration code and many more in the pipeline. And the problem that we are solving is user modeling.
Unsurprisingly. You kind of heard me talk about how I think there is this foundational layer of user modeling from the point of view of an application builder or a company.
So our solution is relatively simple. It basically has that vector embedding component that takes in the data we talked about: user data from the different sources, behavioral, self-declared, metadata, and also the third-party sources that the user shares with you as the platform.
We help you convert all of these things into their vector representations, given a model configuration. So everything is basically running from a config.
So kind of platform-as-code, or whatever you want to call it. The config defines end to end what's going on. And what's going on is: the data gets converted into vectors, and those vectors get aggregated into real-time representations of the users and the content.
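To make the config-as-code idea a bit more tangible, here is a minimal sketch of what such a declarative setup could look like. This is not Superlinked's actual API; all names (VectorPart, recency_embed, and so on) are hypothetical, and the vectorizers are toy stand-ins:

```python
# Hypothetical sketch of a config-driven embedding pipeline.
# None of these names come from Superlinked's real API.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class VectorPart:
    name: str
    source: str                           # which raw signal feeds this part
    embed: Callable[[dict], np.ndarray]   # vectorizer referenced from code
    weight: float = 1.0                   # relative priority of this part

def recency_embed(event: dict) -> np.ndarray:
    # One-dimensional "embedding": exponentially decayed age in days.
    return np.array([np.exp(-event["age_days"] / 30.0)])

def text_embed(event: dict) -> np.ndarray:
    # Stand-in for a real text model; here just a fixed-size token hash.
    vec = np.zeros(8)
    for token in event["text"].split():
        vec[hash(token) % 8] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

CONFIG: List[VectorPart] = [
    VectorPart("recency", source="metadata", embed=recency_embed, weight=0.5),
    VectorPart("content", source="behavioral", embed=text_embed, weight=1.0),
]

def build_vector(event: dict) -> np.ndarray:
    # The config defines, end to end, how raw data becomes one vector.
    return np.concatenate([p.weight * p.embed(event) for p in CONFIG])

print(build_vector({"age_days": 3.0, "text": "great hiking trip"}))
```

The point of the sketch is only the shape of the idea: the config assembles weighted parts, while the actual vectorizers live in code.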
And then on top of this, we have a bunch of APIs that you can use to query those representations, and those APIs are use-case specific. So there is one for exposing a feed that you can paginate, where each next page recomputes based on what the user was doing on the previous pages.
So: a real-time personalized feed; user-to-user recommendations for the social context, you know, follow recommendations; and content recommendations for the email use case, like "content you missed" type of stuff.
And really anything. For example, we are now in conversation with a very big platform about bot labeling, which is this kind of label-transfer use case, right? And it's supervised.
So they have a bunch of labels for accounts that were flagged by moderators as bots. And the real-time problem is when a comment comes in. This platform provides a solution for comments under articles for publishers, including some of the very big ones.
So a comment comes in and you want to figure out: is this comment spam? Do we want to suppress the distribution of this comment within the product? And this is real time, because if you don't, and it's a very popular article that's getting millions of views, this spam comment, maybe pointing somewhere else or propagating some idea you don't want to have there, gets huge exposure, right?
So you have to make very quick decisions. And it's something where you basically use user embeddings, because you can have a look at the behavioral neighbors of this account.
You can estimate the probability that this account has a certain label, like whether it is a bot, based on the presence of that label in the neighborhood.
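As an illustration of that neighborhood idea, here is a minimal sketch of estimating a bot probability from labeled behavioral neighbors. It assumes unit-normalized user embeddings and moderator labels are already available; the function name and the similarity-weighted voting are just one plausible choice, not the platform's actual method:

```python
# Sketch: estimate P(bot) for an account from the labels of its
# behavioral neighbors in embedding space. Names are illustrative.
import numpy as np

def bot_probability(account_vec: np.ndarray,
                    user_vecs: np.ndarray,   # (n_users, dim), unit-normalized
                    is_bot: np.ndarray,      # (n_users,) 0/1 moderator labels
                    k: int = 20) -> float:
    # Cosine similarity reduces to a dot product on unit vectors.
    sims = user_vecs @ account_vec
    neighbors = np.argsort(sims)[-k:]        # k most similar accounts
    # Similarity-weighted fraction of labeled bots in the neighborhood.
    weights = np.clip(sims[neighbors], 0.0, None)
    if weights.sum() == 0.0:
        return float(is_bot.mean())          # fall back to the base rate
    return float(weights @ is_bot[neighbors] / weights.sum())

rng = np.random.default_rng(0)
vecs = rng.normal(size=(1000, 16))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
labels = (rng.random(1000) < 0.05).astype(float)
print(bot_probability(vecs[0], vecs, labels))
```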
So that means Superlinked is actually not only offered for personalization use cases, but already also for something that you might not have anticipated from the very beginning, but for which it turned out to be useful or applicable as well.
Yeah. And for us, you know, if you look at superlinked.com, we say "build your own personalization engine", and that's a choice of go-to-market strategy, right?
Because if we said "build your own user modeling system", it would be too broad for most people, basically. And so we are going with personalization, this kind of recommender engine use case, as the first one.
But yeah, we are already exploring other uses for this foundational user understanding. So whether you want to do just these neighborhood queries or feed this as an input into another model, we can make that available.
So, you know, that's the serving part of this. Based on that config, we do these updates out of the box: we do online updates, we do batch updates, we do model retraining.
But, as you correctly said, not all of them all the time, right? It's the vectors that are kept at sub-second update latency. And then we also bundle experimentation.
How do you do that?
So you basically activate multiple models in your system, in your account, and then you define traffic split between them. You can also just request certain models, you can request which vectors you want to access in the query.
That's the neat aspect of this: we are already getting all the behavioral events, right, to update the user representation. And so we can also use those same events to help evaluate the downstream effects of the recommendations.
So we can attribute: okay, you used this set of vectors to generate whatever recommendation or some sort of action, right?
We can attribute it back to: okay, then that user generated this and that event. We can run some predicate over that and evaluate how well the model is achieving the goal that you set out.
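A minimal sketch of what such bundled experimentation could look like: a deterministic, hash-based traffic split across model variants, with downstream events credited back to the variant that served the user. All names are hypothetical; this is not Superlinked's actual experimentation API:

```python
# Sketch: deterministic traffic split across model variants, plus
# attribution of downstream events to the serving variant.
import hashlib
from collections import defaultdict

VARIANTS = [("model_a", 0.5), ("model_b", 0.5)]  # name, traffic share

def assign_variant(user_id: str) -> str:
    # Stable hash so a user always sees the same variant.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 1000 / 1000
    cum = 0.0
    for name, share in VARIANTS:
        cum += share
        if bucket < cum:
            return name
    return VARIANTS[-1][0]

# The same event stream that updates user vectors also feeds evaluation:
# each downstream event is credited to the variant that served the user.
outcomes = defaultdict(lambda: {"impressions": 0, "goal_events": 0})

def log_event(user_id: str, event_type: str) -> None:
    variant = assign_variant(user_id)
    outcomes[variant]["impressions"] += event_type == "impression"
    outcomes[variant]["goal_events"] += event_type == "goal"

for uid, ev in [("u1", "impression"), ("u1", "goal"), ("u2", "impression")]:
    log_event(uid, ev)
print(dict(outcomes))
```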
Looking at the landscape of RecSys tooling specifically, Superlinked is not the very first offering that allows you to purchase personalization as a service, if you want to call it like that. So are there competitors who are doing the same? Or, to put it differently: what are you doing better, or what is your main differentiator?
So I think if you look at the space of recommender engines as a service, more on the, let's say, black-boxy side, meant to be used by the general long tail of companies, those solutions have existed for maybe a decade, maybe longer.
And they basically all focus on e-commerce, and on very specific tasks in e-commerce, which makes sense, because that's where the attribution of better recommendation performance to more money is probably the easiest, or one of the easiest.
Maybe except ads, where it's even more precise. But there are fewer companies that have to deal with the ad-serving problem, and many more that have to figure out which products to highlight in their Shopify store.
And so that's why there is a bunch of vendors for that, and we don't really want to compete with those. Where I think we are different is in the amount of configuration you can do, this kind of micromanaging of the embedding models.
You know, if you look at something like Algolia, for example, as one of the big players in the e-commerce-leaning world, you can define there what the priority of different features or different events is.
But I don't think you can really train a new embedding model there based on some quite abstract objective. And what we focus on is combining different embedding models together.
So in our config, you basically define the parts of the user's vector embedding to be made out of the outputs of different embedding models. And they can be very simple. They can be, you know, how recent is the document, right?
Which is literally one scalar, so a very simple one-dimensional embedding. And it goes all the way to, let's say, something like TF-IDF, or a semantic embedding, or an embedding that's trained specifically on the customer's dataset with some set of labels in mind, right?
So in the bot detection use case, you have labels saying this user is a spam account. And now you want to take those and train a new embedding model that clusters spam accounts closer together, right?
To better separate the users with and without the label, or class A and class B users, in the space, so that the inference has better confidence, basically.
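To sketch that supervised part: one very simple way to get an embedding dimension that separates labeled from unlabeled accounts is to train a logistic regression and use the projection onto its weight vector as a new one-dimensional vector part. This is purely illustrative; a real setup would presumably use a richer model:

```python
# Sketch: train a tiny supervised "separation" part that pulls
# labeled (e.g. bot) accounts together along one learned direction.
import numpy as np

def train_separation_direction(X: np.ndarray, y: np.ndarray,
                               lr: float = 0.1, steps: int = 500) -> np.ndarray:
    # Logistic regression; the weight vector is the learned direction.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))                       # existing embeddings
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(float)    # synthetic labels
direction = train_separation_direction(X, y)
# The new one-dimensional vector part: projection onto the direction.
separation_part = X @ direction
print(separation_part[:3])
```

Along that learned axis, accounts sharing the label end up close together, which is exactly the kind of separation that gives the downstream inference better confidence.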
So, you know, you can't do this in something that you would traditionally call the recommender engine as a service, right? You can't override embedding models.
So I would say, you know, that's an adjacent space for us, but we don't directly compete. And then on the other side, you have machine learning MLOps platforms, right?
There are now some popping up for real-time ML or real-time feature extraction, but they are still general, right? It's any use case.
And there are things that are specific to the user modeling case: you have objects that change often, namely the users.
You want to embed sequences of actions, and then you want to do this evaluation, right? Because the events you are getting in are user actions.
So now you want to have something built in that goes and kind of aggregates what the user is doing after having been exposed or not to an experiment.
There is just a whole bunch of tooling around this that, because we focus on this relatively narrow task, we can bundle into the product.
Okay, so that means you are highly compatible with existing solutions and tools in the MLOps landscape, which also makes it much easier to opt in to Superlinked, since it's not narrow but broader, in the sense that it allows a more holistic experience of, let's say, deploying personalization for certain use cases.
So for me, it sounds like the secret sauce of what you're doing is basically to keep it as generic and configurable as possible.
And by that, what you kind of unlock is to be able to be used in different industries for different use cases.
I mean, you already mentioned that there is that use case of bot detection. But on the other hand, I have just listened to another podcast.
In that podcast, you talked a bit about your experience with remote tooling for conferences. And I guess you also mentioned that it has unfortunately been a pretty shitty experience so far on certain platforms.
And I share your point there, because those platforms were actually not adapting to the people using them, to allow for better connections that reflect what these people are interested in and what they are like, so that they can find like-minded people or people they would match with.
Can you elaborate a bit on that? Because it sounded to me like Superlinked, or at least one of its very first use cases, was somehow driven by that disappointment in how platforms, especially during the pandemic, were handling that problem.
Yes, you're right. So this app that I mentioned, our own app that was the first client for the infrastructure, is actually a networking app for professional communities.
We have this deployed in some machine learning groups as well, and we have more coming up. And what it does is super simple.
You just opt in to getting introduced to another member within the community, and then it uses your LinkedIn account, which you declare upon sign-up, to match you with relevant other members.
So on Monday you get something like, "Hey, Marcel, do you want to be introduced to another ex-Googler?", and then you click yes, and on Wednesday you get introduced over email.
But unlike many other similar tools, you don't fill in a questionnaire with a million questions that are obsolete by the time you answer them. The LinkedIn profile is the kind of declared third-party source that the infrastructure supports, and it gets used here.
And we can find, you know, hidden gems in these communities that you should definitely meet. And an obvious consumer for this is the event industry.
Indeed, I think as COVID moved conferences to the virtual world, the quality of relationships you could create went down, which is totally understandable.
It's much more difficult to connect with people digitally, especially if you, as you said, like, with many of these platforms, especially if you just get exposed to just random matches, right?
Yeah, we have been quite involved with a few virtual event platforms. Obviously, that industry is now going through a change, because events are coming back to the real world.
So they have to figure out how they want to marry the online and the offline and all that. But yeah, it has been a big influence for us, and I think it may be part of the reason why we actually persist the user vectors, right?
Because you want to find other compatible users, not just content. Yeah, that was one of the origin points. But yeah, since then, basically, when I talk with people with different problems, having user embedding that reflects differences between users in interesting ways that's also up to date seems to be a good input to many different things.
Yeah, I would still say there is a benefit in having that in the future as well, with people returning to more in-person venues, because I guess there will be a long-term effect after people have experienced that they don't need to be present.
However, thinking about the last RecSys conference, which was a hybrid conference: we had one of these tools in place, coined the RecSys Hub. However, it didn't have such a feature, or at least I wasn't able to find it, and maybe I was also not too interested in using the hub since I was there in person.
But I would love to have such a function. Even though you might have interests that go beyond your pure professional interests, it's always easier to connect with people on the basis of, maybe, shared hobbies.
So there would be people who are into hiking, and they could maybe first talk about hiking and then talk about RecSys models or something like that. So it might definitely be cool to use or exploit this to make connections between people.
Yeah, RecSys needs a RecSys. I think there is a big opportunity, because events are kind of an attractor of attention, and there is this energy that comes together. If something helps you find, properly utilized, the 5% of people coming to the event that are, for you, absolutely the most interesting to talk to, I think that's a huge unlock. And maybe I would just add that, of course, your whole personality is not reflected in your LinkedIn account.
We also, by the way, support Twitter, which is maybe a different type of the personality. But I think it really helps with the cold start. So it's all about this, right? You have maybe some onboarding form to your application or to your event or whatever it is, you should utilize what people tell you in that form.
And if they can bring with them some social profile, bootstrapping the initial version of the user model from this and then refining based on behavior. So I can show you a couple of interesting items, users, whatever.
And then refining this kind of real-time feedback loop. That's the magic, right? Then you go to an event with a couple of thousand people, and within one user session you converge on the 10 people you absolutely need to meet. That's the dream.
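A minimal sketch of that bootstrap-then-refine loop, assuming a profile embedding is available at sign-up: start the user vector from the declared profile and then nudge it toward the items the user engages with, here with a simple exponential moving average (one of several plausible update rules, not necessarily the one used in practice):

```python
# Sketch: bootstrap a user vector from a declared profile, then
# refine it in real time from behavioral events. Illustrative only.
import numpy as np

ALPHA = 0.2  # how fast behavior overrides the bootstrap

def bootstrap_user(profile_vec: np.ndarray) -> np.ndarray:
    # Cold start: the profile embedding is the whole model of the user.
    return profile_vec / np.linalg.norm(profile_vec)

def update_user(user_vec: np.ndarray, item_vec: np.ndarray) -> np.ndarray:
    # Exponential moving average toward items the user engages with.
    new = (1 - ALPHA) * user_vec + ALPHA * item_vec
    return new / np.linalg.norm(new)

rng = np.random.default_rng(2)
user = bootstrap_user(rng.normal(size=8))   # e.g. a LinkedIn profile embedding
for _ in range(5):                          # one short session of interactions
    user = update_user(user, rng.normal(size=8))
print(user)
```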
And then all the other stuff. So when we chat with virtual event platforms about working with us, they have all kinds of other problems. They have to personalize the agenda for the event, right? Which sessions might be interesting.
There is all kinds of content that they generate that they use to activate people to actually show up and kind of think about the event afterwards. They are doing communities of alumni of events now, right? To kind of keep the event going virtually afterwards.
And in all of those moments across that whole attendee lifecycle, there are these opportunities for just being relevant. It's a no-brainer.
So what can we expect to hear next from Superlinked? What is on the roadmap, and what further developments or challenges are ahead of you?
Good question. I think our main challenge, as you said, we are all about configurability, right? So having configurations for common use cases in place as a starting point and then forking that off and then going your own direction from there.
The challenge is in exposing the right amount of flexibility in this, right? And exactly how we should do that. So right now, we basically have a configuration language for this. It helps you define the various parts of these embedding vectors, and you reference vectorizers that are created in code.
One thing I really hate is when somebody tries to make such an expressive config that it would be easier to actually write code. I think this happens often and it's wrong.
I think the purpose of the config is to assemble parameters and assemble kind of the big picture view of what you are pursuing and then it references code, right?
And then the code is the catalog of vectorizers that we have available, and we're building new ones because of the diversity of the use cases we work with. Recommending jobs, for example, is a big one: matching candidates to jobs, there is a whole industry just about this.
You need to be able to engineer features of the jobs problem, right? Seniority, career progression. There is a lot there, right?
So we have vectorizers that use general machine learning concepts, and then you configure them to capture something like seniority, for example by setting up some sort of keyword classifier to do that, right?
So then you reference that in the config. But I think our biggest challenge, the main thing we have to get right, is to make this config easy to make and easy to iterate on, and to have the support of the environment to give you feedback.
Give you feedback before you actually run traffic through the experiment, right? Can we have some proxies that we can highlight, that help you navigate the problem of what to set there in the config?
Kind of like my problem with YouTube campaigns, right? You are creating this campaign, and it's almost like coding. It's so complex.
Actually, that's a fun thought. I should explore if the YouTube campaigns are Turing-complete.
They might be. They might be. So, how to enable people, even without a lot of data science expertise, to get started? I think 2023 is hopefully about finally making AI accessible.
And not just by doing this Blackbox APIs, but somehow really helping people feel like they own the solution, right? That is not some magical thing, but they have enough inputs into creating it so that they actually understand what's going on.
I mean, the whole explainability aspect of this is huge, right? If you can say, okay, I want my embedding to be made out of these parts and I value them in these relative ways, then I can attribute which part drove how much of the cosine distance when I'm doing the recommendation.
Now you can start to attribute on that level. And that's even if some of those embedding models are black boxes, which they definitely will be, because it's deep learning and so on.
At least you know a little bit where it's coming from, right? So this is a way to start unpacking the problem, because you define the vector part in our config.
Now the evaluation framework we have can attribute not only to the model version, but also to the vector part, right? This is what you get for focusing on the use case and then building around it.
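This per-part attribution falls out naturally if the embedding is a concatenation of parts: the dot product of two concatenated vectors is the sum of the per-part dot products, so each part's share of the cosine similarity can be reported exactly. A minimal sketch, with hypothetical part names:

```python
# Sketch: attribute a cosine similarity score to the vector parts it
# is composed of. Works because the dot product of concatenated
# vectors is the sum of per-part dot products. Names are illustrative.
import numpy as np

PARTS = {"recency": slice(0, 1), "content": slice(1, 9)}  # part -> dims

def attribute_similarity(q: np.ndarray, v: np.ndarray) -> dict:
    denom = np.linalg.norm(q) * np.linalg.norm(v)
    return {name: float(q[s] @ v[s]) / denom for name, s in PARTS.items()}

rng = np.random.default_rng(3)
q, v = rng.normal(size=9), rng.normal(size=9)
contrib = attribute_similarity(q, v)
print(contrib, "total:", sum(contrib.values()))  # parts sum to the full cosine
```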
I think it's kind of this accessibility sort of giving people the insight into how the sausage is made, basically.
So accessibility on the one side, and on the other side helping people develop and maintain a mental model that is consistent between the levers that I pull and what the outcome will be.
And then, imagine: we had that conversation around how product management should participate in the process of tuning these systems, because they definitely care, since the systems are promised to improve their metrics, right?
If you have some shared language for this, it helps, right? And so we view our config as potentially a way to establish this connection between the data scientists and the product side.
The data scientists are deep into the individual embedding models, but the way you combine them, the way you prioritize between them and how you define that can actually be readable by a product manager. And now you can have a conversation about that.
So basically having a shared language between business and technology.
Yeah. And then, you know, connect this to the eval, right? So you know which vector part can be attributed to how much of the downstream effect.
This is, I think, a path towards giving everybody enough insight to be nice to each other.
Sounds like you won't be bored with this, and maybe not in the upcoming years either.
Daniel, it's really great talking with you about all these challenges. What are, for you, some more general challenges, or maybe the challenge, for the recommender systems field?
I mean, where to start?
Maybe just pick one.
Well, one thing is pretty clear to me when we talk with companies about recommender engines. The people who are supposed to benefit from all this research and all this work have kind of a negative starting point when they're thinking about recommender engines, because of what is going on in, you know, social media: filter bubbles, people spending one and a half hours on average on TikTok per day. The algorithm has a very bad reputation. I think the recommender engine community needs to somehow rebrand.
Or I don't know. I don't know what we will do. But I think the algorithms can be used to make stuff interesting, right? That's like the point of a recommender engine.
In order for platforms to be able to compete with the TikToks of the world, they need to not be afraid to adopt a recommender engine in the first place, because otherwise their engagement is down, their retention is down, and they have a problem, right?
And maybe not only adopt, but also to iterate, right?
Exactly. So, yeah, I think one challenge is branding. We need to figure out a way to really show that we are on the side of the end user and that we are on the side of the platform owner and somehow reframe this push for arbitrary engagement at any cost into meaningful engagement that's actually helpful for people.
I like that one.
This needs to, I think, come from the community. And I think a lot of the research that's being done supports this, right? So trying to remove biases in the models, stuff around the safety, the adversarial stuff, super interesting.
I think there is a lot of research going in exactly the right direction, but we have to find ways to communicate that and make people more comfortable with the technology in and of itself, and separate that from what it's used for and how some companies managed to weaponize it because nobody else really had it to that extent, right?
So that's kind of my goal with getting this into as many hands of people building platforms as possible: we can remove that part of the equation, and then the rest is what matters.
Okay, is it actually interesting stuff that you are building there? Are you going towards having a platform that educates users and helps them achieve their goals? Then you just need this as a part of your toolkit.
Otherwise, people are just spoiled, and if they don't get what they want from you, they'll just go back to TikTok.
Really happy that you are bringing this up, because I think that, as a community of RecSys practitioners and scientists, we already have a shared understanding that we need to go far beyond pure optimization of relevance, that relevance is not satisfaction, and so on and so forth.
We have covered a couple of these questions in previous episodes. But yeah, that is only the first step in a sequence of steps; we should also bring this to the users, to the people who should benefit from it in the end and with whom the whole journey basically starts.
So how do we rebrand there and get a more positive image? Because people are still using these systems, and there are good reasons to use certain recommendations, personalization and so on, because in certain things it makes all our lives easier.
But there are also, as always, downsides, and we should make sure to say: hey, we are working on it, and this is how it looks after we worked on it, and it's now better.
It's all about the objective that you set. So, for example, at YouTube I went through a transition: at first we optimized for views, for how many views a video gets, and this resulted in clickbait video names and thumbnails.
Then there was a big switch towards watch time optimization.
Yeah, I remember that paper by Covington back then, I guess in 2016, where they said: okay, if I only go for clicks, I encounter clickbait, so we now only take full video watches into account.
Yeah. And that's a small example of having the right objective. That's where everything starts, basically, and everything derives from this, from how you train the embedding model to how you evaluate your hyperparameter tuning.
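As a hedged illustration of how much the objective changes things: Covington et al. (2016) describe weighting positive training examples by watch time in a logistic objective. The sketch below contrasts a plain click objective with a watch-time-weighted one; the exact weighting scheme here is simplified and is not YouTube's actual recipe:

```python
# Sketch: the same logistic loss under two different objectives.
# Weighting positives by watch time (in the spirit of Covington et
# al., 2016) shifts the model away from pure click optimization.
import numpy as np

def weighted_log_loss(p: np.ndarray, y: np.ndarray, w: np.ndarray) -> float:
    eps = 1e-9
    return float(-(w * (y * np.log(p + eps)
                        + (1 - y) * np.log(1 - p + eps))).mean())

p = np.array([0.9, 0.8, 0.7])          # predicted engagement probabilities
y = np.array([1.0, 1.0, 0.0])          # clicks
watch_minutes = np.array([30.0, 0.5, 0.0])

click_objective = weighted_log_loss(p, y, np.ones_like(y))
# Positives weighted by watch time: a clickbait example (clicked but
# barely watched) stops paying off; long, engaged watches dominate.
watch_objective = weighted_log_loss(p, y, np.maximum(watch_minutes, 1.0))
print(click_objective, watch_objective)
```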
Everything goes back to the objective, and this is basically fully in the hands of the platform owners. They are the ones who set out to make a better social platform, or a learning platform that's also fun, or a media system, or a marketplace, whatever.
Their ability to describe this objective is, I think, super critical, and it's part of this shared-language problem, right? And then we get into alignment and all that stuff.
How do you help people describe their objective for how the platform should be more meaningful, better than the more simplistic, you know, click optimization, and then translate that objective into something you can train the neural network against?
That's what we have to figure out. Otherwise, we're always going to stick to the lowest common denominator of a click or watch time, which, okay, works better at removing clickbait video names, but it still optimizes for you being stuck to the screen as long as possible, right?
Not necessarily, hey, how did this feel at the end of the session? Hey, are you happy about how you spent the last hour? Right?
That's definitely a very relevant and critical point, and one that matters for the field far beyond pure relevance.
As we are approaching the end of this episode, I want to wrap up with two follow-up questions that should not be too much of a surprise for you.
One of them is: looking at the different products that you use in your daily life, what is the one where you really enjoy personalization? And you can't say YouTube, because you're biased.
I was afraid you'd say I can't say Spotify, because everybody says that one.
And yeah, I'm somewhat afraid that I indeed have to say Spotify. I'm a huge fan of music, and I really like how they do the discovery of less popular stuff. This is something we also think about a lot: how to surface new things.
So I like this aspect of their kind of tuning. Also, the amount of content they publish for the community is just amazing. So yeah, I always kind of look up to the Spotify guys.
I think I'll leave it there. And maybe, to counter all the negativity about YouTube, I will fix that by saying that with YouTube Shorts, I think the platform is actually doing a very good job.
So yeah, kudos to my colleagues.
So talking about colleagues or other people in the field who are working on recommender systems or doing research there, is there a specific person that you would like me to feature in the show?
Okay, so I have, I think, potentially a suggestion.
My suggestion for a guest would be Lisa Colvin. I don't know if you have heard of her, but she's the ex-Pandora personalization lead, with product insight as well.
I think I'll be chatting with her next week or so. And I enjoyed our exchanges so far. So yeah, I think that should be quite interesting and also quite practical.
So maybe in a broader sense: people who are applying these techniques and who are grappling with the real-world, messy stuff around this domain.
I would love to hear more from them as well, to motivate, you know, all the research that is going on.
Okay, then I will add her to my guest list. And Lisa, expect my invitation.
Yeah, it was really great talking to you, Daniel, especially since we went, I would say, in an almost philosophical direction, which is always good.
These technologies are somehow shaping and influencing our daily lives, so it definitely makes sense to also think a bit about the responsibility that comes with them.
Yeah, so thanks for sharing all your experience and your thoughts on the show.
Thank you, Marcel. It was really awesome spending this time with you. And yeah, I hope that some of those thoughts resonated.
And if somebody in the audience has a question or a follow-up thought, I'm very happy to chat. So feel free to reach out; I think we'll put some links, my LinkedIn and so on, in the show notes.
So yeah, feel free to reach out.
How can people preferably reach out to you? LinkedIn, Twitter?
I think I actually spend most of my time nowadays on LinkedIn, of these kinds of big networks. Otherwise, more in smaller communities. But from the big networks it's LinkedIn, so I'm most likely to respond there.
Great. Then, as always, we will put the corresponding links in the show notes, and then feel free to reach out to Daniel.
Cool. Then yeah, thanks again for joining and have a nice day.
Thank you. Have a good one. Bye.
Bye.
Thank you again for listening and sharing and make sure not to miss the next episode because people who listen to this also listen to the next episode. See you. Goodbye.