[MUSIC PLAYING] STEFANO PASQUALI: So what I’m going to present today is my personal journey in liquidity research, which started almost 10 years ago, and in particular the journey at BlackRock, where I started to run this project three years ago. Liquidity is a typical financial problem that is very difficult to solve. That’s why papers and research on liquidity arrived pretty late in the industry and in the standard methodology: because of sparse data– sparse, noisy data– and multidimensionality. So this is exactly the type of problem where machine learning can help, OK? What I’m going to show you is, first of all, what AI means for us– what I think are the right uses of machine learning and AI in finance, in particular for my specific project. Why and when to use it in finance, in particular in liquidity. The problem of hunting for performance while being aware of black boxes. And I will present some models: I will review the models we already implemented and give you a tour of the models we are testing in the lab. So, a little bit of naming first. For me, machine learning is making sense of large data sets, recognizing complex patterns in the data, being able to account for non-linearity, and getting rid of other mathematical assumptions, like assuming linearity of some behaviors or features, or normality, all that kind of stuff– the typical mathematical assumptions that were made in what I call Quant Finance 1.0. I prefer to use the words “machine learning” rather than “AI.” So far, my machine is not intelligent; it’s just a machine that can crunch data and extract patterns from that data. And some of these patterns are useful to me as features in the job of forecasting liquidity at the security level, at the portfolio level, redemption at risk, or whatever we do. There are some cool names here. These are the ones dear to me, because many years ago I started to leverage and apply
cluster analysis in these types of problems. Pattern recognition is what I call regression, because now everybody is ashamed to say “regression.” No, we still use regression. And I’m not ashamed that last quarter we rolled out three models to production, two based on machine learning approaches– and one was a linear regression with two variables. It does its job. No shame in that. I am a guy looking for innovation; I am dreaming of being able to build a serious deep learning model for liquidity. Interpretability is a problem I’m going to mention to you. But if you have something that does its job and is a simple, interpretable model– two variables, linear regression– take it. Now, machine learning is not that; it’s not a robot, in particular in finance. Think about finance: we have a fiduciary role to our clients. I’m building a model, providing analytics to a portfolio manager or a trader, and this guy is making business decisions with other people’s money. So we cannot use black boxes, things that we cannot understand. For me, fully autonomous machine learning is a [INAUDIBLE] scenario. Maybe one day, but not now. And when you recognize patterns in financial data, you can recognize the wrong pattern. There is a famous spurious correlation between interest rates and the height of the Fed chairs. You know better than me that if you plug these features into a machine, whatever your regression is, they will come out as significant features. But if you then pretend to know the height of the next Fed chair, you are claiming to forecast interest rates. So you must be careful about this type of pattern that the machine will include; in particular, in the case of deep learning, the machine will use this pattern without telling you. That just sets the stage, right in the middle between myth and reality of applying machine learning in finance. Explainable AI, XAI. People love acronyms. This is a new acronym– I mean, I started to see these
acronyms a few years ago. We must enable the user to understand what the algorithms do. This is a beautiful chart, [INAUDIBLE] from a presentation by DARPA, and it’s beautiful because it represents the trade-off between explainability and learning performance. Clearly, in our case, we are moving in this area; the dream is to get to deep learning, but we still have some things to do before we get there. But for sure, we are able to prove that traditional machine learning methodologies– if we call random forests machine learning, and they are– can add important improvements in performance, particularly now that we have a lot of data to train the models. This picture is our guideline for our daily job when, in the lab, we push on innovation. So, our journey in machine learning applied to liquidity. Again, a few years ago when I started to work on liquidity– people in finance here know this better than me– there were few people working on fixed income in particular. Ten years ago, there was no data. TRACE was a system where trade data were reported for US corporate bonds: highly noisy, incomplete, et cetera. And there was almost nothing in Europe, nothing in emerging markets. In equities, you had a lot of data, but highly noisy. And there wasn’t an established practice of models to forecast the liquidity of a security. So you have one situation where we have a lot of data, highly noisy, so we need to distill the signal from the noisy data; or you have entire markets where we have a decent amount of data on only a small part of the market, and we have to come out with a forecast for the rest of the securities in that market. The principle here– every trader, every portfolio manager does exactly this: taking a bond, for example, putting the bond in Excel, putting in Excel some features downloaded from information providers. Then, based on experience, they define a list of similar bonds based on some features– rating, coupon, amount outstanding, et cetera,
and trying to pick up– to have a sense of– the price and the liquidity of the target bond by looking at similar bonds. The beautiful thing about machine learning– cluster analysis, random forests– is that it basically, more or less, does that same thing, and you can scale and automate it. That, I think, was the first adventure. Now, what do we do here? This is the description of the entire overall project. Because of this problem of estimating liquidity, liquidity research arrived very late. If you look at best practice for credit models, default probability models– we can agree or disagree with those models, but there are plenty of models from the ’80s, deep mathematical models, very complex models. In liquidity, there is almost nothing– a couple of papers playing with the same three concepts back and forth. Is that because there was no data, et cetera? And so people were keeping liquidity a little bit outside the risk management process: I do market risk, I do credit risk, I do portfolio optimization with a good tracking error, and then, on the side, I try to figure out, to have a sense of, what the liquidity in my portfolios is. Now regulators gave me a job, because starting 10 years ago– on the sell side first, then on the buy side– they started to push for people to adopt liquidity metrics to control liquidity risk. In particular, BlackRock decided to take a broader vision, to really embed liquidity in the risk management process. And many years ago, when I was trying to convince my former company to invest in this type of research, I was saying– you should think about value at risk. Why has it been used by everybody for 30 years, maybe 25 years?
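To make the value-at-risk baseline he appeals to concrete, here is a minimal sketch of plain historical VaR– the percentile of past losses– as my own toy illustration, not any firm's implementation:

```python
import numpy as np

def historical_var(returns, confidence=0.95):
    """One-day historical VaR: the loss threshold exceeded on only
    (1 - confidence) of past days, taken from the empirical distribution."""
    # Loss is the negative of return; VaR is its upper percentile.
    losses = -np.asarray(returns)
    return np.percentile(losses, 100 * confidence)

# Toy example: simulated daily returns of a position with ~1% daily vol.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, size=1000)
var_95 = historical_var(daily_returns, 0.95)
print(f"95% one-day VaR: {var_95:.4f}")
```

The whole calculation rests on the assumption he then attacks: that every historical price was a price at which the entire position could have been sold immediately.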
It’s not necessarily the right risk measure, but it creates a baseline around which compliance people, portfolio managers, and risk managers can talk based on the same numbers. I don’t want to address how good or bad value at risk is; the point is that value at risk creates this link. And if you think about it, embedded in value at risk there is one crazy assumption– that you have a historical [INAUDIBLE] of the price from which you generate your distribution. For every single position, at the price of 20 days ago, you could supposedly have sold your entire position that day, immediately in the market, at that price. That’s simply not true. Because when you want to unwind a position, you need to decide the venue, decide how you want to sell the position; and if your position is big, you’re going to have a cost. So a lot of people started to become aware of the concept of liquidity-adjusted value at risk. This was a way to plug liquidity into the risk management process from the beginning. So what we had to do was build a gigantic infrastructure. Unfortunately for my job, at every company I went to, I was hired to build models, and I started building models after 18 months, 24 months. The first three years were building databases, cleaning the data, going around the company hunting for data; if the data weren’t there, convincing the big guys to fund the data; pulling the data together. It was necessary to create this environment, and that’s exactly what we did here. For liquidity, it was necessary to unify the data we had– internal data from BlackRock, external vendors, pulling in [INAUDIBLE] external information– and crunch this with high-performance computing; in this case, we used a lot of GPUs. We created our research environment, necessarily based on machine learning. We called this, actually, our Liquidity Vault. We have all this magic data to build the liquidity models. Then we go here. For the first job, we wanted to build transaction cost models. We wanted to simulate that, because of portfolio optimization, rebalancing, or whatever, we want to sell a specific volume of a position– it can be a bond, an equity, whatever– and we want an idea of how much it is going to cost us. And if we wait more time or less time, how is the price going to change?
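The liquidity-adjusted value-at-risk idea he mentions can be sketched as plain VaR plus an exit-cost term. This is my own simplified toy formulation of the concept, not the production model; the spread and impact parameters are made up:

```python
import numpy as np

def liquidity_adjusted_var(returns, spread_bps, participation, confidence=0.95):
    """Toy liquidity-adjusted VaR: plain historical VaR plus an exit-cost
    add-on that grows with how much of the daily volume we must consume
    to unwind the position (a crude square-root impact term)."""
    losses = -np.asarray(returns)
    var = np.percentile(losses, 100 * confidence)
    # Exit cost in return units: half the bid-ask spread plus sqrt impact.
    exit_cost = (spread_bps / 2e4) * (1 + np.sqrt(participation))
    return var + exit_cost

rng = np.random.default_rng(1)
rets = rng.normal(0, 0.01, 1000)
plain = np.percentile(-rets, 95)
lvar = liquidity_adjusted_var(rets, spread_bps=20, participation=0.25)
print(plain, lvar)  # the liquidity add-on makes LVaR strictly larger
```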
So we want to build this liquidity transaction cost surface. Then we wanted to aggregate liquidity at the portfolio level, and this is tricky, because of the concept of correlation of liquidity; the typical way people aggregate at the portfolio level is trickier than it looks. If you start to sell a bond in a portfolio, and you hold a bond of the same issuer in the same portfolio, when you go to sell the second bond, maybe the liquidity profile of that bond has changed because of the first sale you did. So here, I think, there is a better way to approach this problem, which I’m going to mention to you. Then there is the other side of liquidity, and that, for us as asset managers, is very important. Assuming my portfolios are liquid, or under control for liquidity, I can receive redemptions from my clients on a fund, and I need to be able to unwind positions at a reasonable cost– because I’m controlling liquidity risk– to give cash back to the people redeeming the fund without hurting the people still in the fund. And I discovered that redemption at risk is one of the most difficult problems ever. Being able to forecast that, in this particular fund, I will have these outflows next week or next month is very difficult, because it depends on market conditions and on idiosyncratic information about the fund. The behavior of people depends on whether the fund is widely used as a pension vehicle for another company; if there is a moment when a lot of people retire, we’re going to have structural outflows from the fund. There are so many things to account for. So basically, this is a case where we tried to clean the message with a linear regression, a gradient-boosting machine, lasso regression– but the only way we found decent performance was using neural networks. So, the liquidity analytics as built at the beginning: we plugged this into a portfolio optimization, and our dream is to arrive at this concept, where we have– this is a portfolio. Actually, it’s a
chart, but it’s a real simulation of a real fund, where you have the cash raised and the cost, and you want to control the tracking error– you want to sell pro-rata, or allow the portfolio to optimize the liquidity profile, and you want to basically control the risk. So you try to get the highest possible cash raise with the lowest possible cost; you want to be in this area of the curve. And when you have this, you can finally make everybody happy. You serve, with one centralized engine, portfolio construction, trading support, liquidity-adjusted risk– what I mentioned before– and regulatory reporting. For some things, we are at a good stage of the project to serve these users; we are still working on coverage in order to implement– to plug into this optimizer in our platform. But this gives you a sense of the broader project we are doing on liquidity. To honor my hosts– the Google people who kindly invited me here– I’ll give you a sense of what we are doing here. We are using the cloud. My team was one of the first users of the Google Cloud in BlackRock. There are many other people using the cloud now, but we are basically using the cloud for proofs of concept, and that’s where I found the benefit. I have a couple of guys who are very expert in machine learning, but they have a day job: liquidity. My job is not building neural networks; my job is providing liquidity analytics. So sometimes we don’t have the bandwidth to experiment. Having these guys able to experiment– bringing up 40-plus GPUs to train a neural network, or using the high-CPU instances with the 96 cores that were available– enabled us to speed up our POC process, [INAUDIBLE] idea is a bad idea, saving time and actually letting us start to invest in this area. Now, this is the role of machine learning in our research. The problems we attacked with machine learning– first of all, we are doing ML on small or medium data sets for the moment; we are expanding now to larger data sets. I’m mentioning this because you will not see a deep learning model here, because, by definition, to do deep learning you need a massive data set. So here we are working with small and medium data sets– on transaction cost, redemption at risk, and tradable volume. Tradable volume was our big success story of last year: as one component of transaction cost, we must be able to forecast the T-plus-1 tradable volume of a specific [INAUDIBLE]. And we found a lot of benefit in models leveraging machine learning, and the concept of security similarity went into use. The machine learning tools we used: primarily regularized linear regression, random forests, and gradient-boosting trees. On top of that, we, in the lab, now have a neural
network implementation of some of these models. But the neural network implementations, despite improving performance with respect to random forests– though not dramatically, because the data set is still medium-sized– have the problem of interpretability. So we preferred, for the moment, to roll out the random forest approach; the neural network we are keeping in the lab, and we are still researching it. As you know, synthesis is not the best quality of Italians, so I speak a lot, but I will try to save five minutes for Q&A at the end. Now, here is a hand drawing to show the typical problem we need to solve. Bid-ask spread: a straight line. Useless, because it tells you that you have the same cost independent of how much you want to sell. The theory– every, every transaction cost model in every paper, more or less– uses a sublinear approach, the typical square root function; or you can spend hours recalibrating the exponent to 0.6 or 0.4 within that approach. Those models are very poor, because the blue points are more or less how the data actually look: very noisy. And actually, the cost goes down as you trade [INAUDIBLE] more volume, then it’s almost flat around the bid-ask spread, then it starts to spike up. And where it spikes up, typically, you don’t have many observations– I’m talking about three, four observations per bond in a quarter. So how can you fit a model in that area?
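The parametric shape he criticizes– cost proportional to a power of volume– is easy to write down. Here is a minimal sketch of fitting such a sublinear curve, with entirely made-up data points standing in for the noisy blue dots; this is my own illustration, not the talk's model:

```python
import numpy as np

# Toy observed costs (bps) vs. traded volume as a fraction of daily volume.
# In reality these points are few and very noisy, which is his point:
# the parametric red curve fits the empirical green shape poorly.
volume = np.array([0.01, 0.05, 0.10, 0.25, 0.50, 1.00])
cost_bps = np.array([6.0, 5.0, 5.5, 8.0, 14.0, 30.0])

# Fit cost = a * volume**b by linear regression in log space.
b, log_a = np.polyfit(np.log(volume), np.log(cost_bps), 1)
a = np.exp(log_a)
print(f"fitted exponent b = {b:.2f}")  # a sublinear exponent well below 1

def sqrt_law_cost(v, spread_bps=4.0, k=20.0):
    """The classic square-root impact law: half-spread plus k * sqrt(v)."""
    return spread_bps / 2 + k * np.sqrt(v)
```

The recalibration he mentions– trying 0.6 or 0.4 instead of 0.5– is just moving the exponent `b` around while keeping the same rigid functional form.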
So the real curve is the green one; the theory says to calibrate a model using the red one. And we think that a neural network approach, deep learning, can help a lot in the future to go non-parametric and fit, and forecast, this curve much better. Then there is typically a 3D problem: I want a model that can also account for the slippage of the curve as a function of time. If I have a cost of 10 basis points starting at 10 million, and I wait more time, I will be able to lower the cost. That’s definitely a problem people are trying to solve. And the similarity concept– this is very important, because it’s something you must account for when you build liquidity models. Bond 1, Bond 2. Assume this is a good curve. Stay with me now. I’m selling something in Bond 1. Eventually, I’m going to shift the curve of Bond 2, so when I go to sell Bond 2, I have a different transaction cost curve. This problem can be attacked using security similarity, making a linkage between securities. Google has the Knowledge Graph; this is a similar concept– making links between entities, in this case securities in general, or bonds. Now, the data. First of all, let me go back. Whatever equation or curve you use, you have observed costs, and you are aiming to train a model on those costs. People coming from other sectors– that’s why I asked the question before– think: OK, a historical series of something that I observe, that’s the Bible, that’s the truth, and now I can train my model, whatever model it is. In finance, I’m sorry, the situation is not like this. To provoke you into understanding this concept: here is my transaction cost, call it a function of volume traded, bid-ask spread, [INAUDIBLE] volume, and other features. The dependent variable is this thing called implementation shortfall: the differential between a trade that I did and a benchmark price. The market was at this level; did I trade at this level, or this level?
This is the definition of the cost that I observe; I look at the cost, I train the model, and I try to forecast this cost in the future. Here is a very liquid bond– very easy, a telecommunications bond, between 3% and 4% coupon, maturity 2022. And to calculate this implementation shortfall, our trades [INAUDIBLE] three different information providers, everybody claiming, I give you the right price. Look at the three curves. Now you’re training– whatever it is, a linear regression or a neural network– and when you are here, you are in trouble, because you don’t know what the truth is. So I told my guys, I’m sorry, guys, but 60% of your job is [INAUDIBLE] and cleaning data. This is something we have to do internally. That’s one of the difficulties of applying machine learning in finance– there are many reasons, including the fact that we are dealing with real money, but also this type of problem: data to train the models are often not available or, now that they’re becoming available, can be very noisy, very inconsistent. Now, what do we do here?
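Implementation shortfall as he defines it– executed price versus a benchmark price– is a one-line computation once you agree on the benchmark, which is exactly the part the three disagreeing providers make hard. A minimal sketch, with made-up prices:

```python
def implementation_shortfall_bps(exec_price, benchmark_price, side):
    """Cost of a trade vs. a benchmark price (e.g. a composite mid),
    in basis points. side = +1 for a buy, -1 for a sell; a positive
    result means we did worse than the benchmark."""
    return side * (exec_price - benchmark_price) / benchmark_price * 1e4

# A sell executed below the benchmark mid shows up as a positive cost:
print(implementation_shortfall_bps(99.80, 100.00, side=-1))  # 20.0 bps
```

Swap in a different provider's benchmark price and the "observed" cost changes, which is why he says the training target itself is uncertain.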
This was the implementation shortfall I mentioned before. We have our benchmark price, we execute a trade, and the differential is our observed cost, from which we train our model, whatever methodology we use. And our model is defined on many important features. Fixed cost– typically the bid-ask spread. The volume forecast: if I know that this bond has a high level of tradable volume, whatever I want to sell, I have a lower cost; for a bond with lower tradable volume in the market, as I grow closer to that level, I’m going to have a spike in the cost. And other features. I show this diagram because, for the moment, instead of going wild with deep learning and all that kind of stuff, within the entire transaction cost model we used machine learning for only one component; now we are attacking the other parts. Here we used a random forest; we have an implementation in the lab with a neural network, with the same problem– interpretability. And this is very important, because this model is now in production. It works very well; so far our customers, the traders, are very happy. The problem I think about– let’s assume I have the perfect model. I don’t, by the way, but I have decent performance. Tomorrow, I forecast for a specific bond a big drop in tradable volume. As a signal, that is a very severe warning for our traders; they need to do something about it, or not. Kind of quarantine the bond: if you have to unwind some position, not that one; or start to sell, to implement a strategy to hedge away this liquidity risk on this bond. If I say there is a drop of 50% in the tradable volume, it’s a big thing. The next thing the guy is going to ask me is– why?
And I cannot tell him, I have a beautiful deep-learning model, I have it tested, I’m cool. My guys, as smart computer-science people, would take it; but this guy is making decisions, trading real people’s money. That’s why, in a neural network– sorry, in a random forest– being able to pick up some sort of interpretability matters. Now I can say to the guy– I have an example here– why the model is moving, why it’s forecasting something different. So, only on the volume forecast, here is the framework where we are applying neural networks. Now, I don’t want to go into too much detail– I’m happy to provide details offline– but what we do here is train two random forests: one on empirical data, using empirical volume data, and the other based on features, in the absence of transactional data. So we are able to make a guess at the tradable volume of a bond that never shows volume. A bond that never shows volume, for example, doesn’t necessarily mean it’s illiquid: it may be a bond that [INAUDIBLE] trades but that you could still go and trade easily, or it may be a signal that the bond is untradeable. And then we have the concept of a probability of trade. We separated the model: first we define the probability that this bond is going to trade, and then, given that it trades, the expected value of the volume. This model so far is working pretty well, and it’s the one that has a parallel neural network approach. Transaction cost overall– I’ll tell you the story of the progress of this model; it’s very important. We had an initial model, a linear regression model based on intraday data. The X means very poor [INAUDIBLE] performance. I can tell you, only between us, that the [INAUDIBLE] performance of this model was 1%, so pretty low. The reason I’m putting these dots and not the numbers: some people in my company warned me– listen, you’re going in front of Google people, and there will be a lot of non-financial people, and you’re going to brag about a performance number on the last line that, in finance, is a very big number but, to non-financial people, would look like a ridiculous number. Well, not ridiculous, but a low number. So, just to avoid credibility issues with you, I hid the numbers. So, here is the story of what we discovered, no?
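The two-stage idea he describes– a probability-of-trade model times an expected-volume-given-trade model– is a classic hurdle setup. Here is a minimal sketch of the combination on entirely synthetic data; the simple bucket averages stand in for the two random forests of the talk, so this is my own toy illustration, not the production model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Toy bond features: log amount outstanding, days since last trade.
log_outstanding = rng.normal(20, 1, n)
days_since_trade = rng.integers(0, 60, n)

# Synthetic ground truth: larger, recently-traded bonds trade more often.
p_trade = 1 / (1 + np.exp(-(log_outstanding - 20 - 0.05 * days_since_trade)))
traded = rng.random(n) < p_trade
# Volume is observed only when the bond actually trades.
volume = np.where(traded, np.exp(log_outstanding - 14 + rng.normal(0, 0.5, n)), 0.0)

# Hurdle forecast for one feature bucket: P(trade) * E[volume | trade].
bucket = days_since_trade < 30
p_hat = traded[bucket].mean()                      # stage 1: probability of trade
ev_hat = volume[bucket][traded[bucket]].mean()     # stage 2: volume given a trade
expected_volume = p_hat * ev_hat
print(p_hat, ev_hat, expected_volume)
```

The separation matters for exactly the reason he gives: zero observed volume can mean "never asked to trade" rather than "untradeable", and only the first stage carries that distinction.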
So the first one was very bad. When I went to intraday data, and did some work on defining better the implementation shortfall I mentioned before, the model was OK. Then we started partitioning the universe: we started to separate buys and sells; then, if you think about it, we started to do a kind of manual clustering, bucketing the universe into more and more granular groups and fitting the model within each group. And if you think about it, a random forest will do that in one shot. Now we are working a lot on generalized additive models and non-parametric models like random forests, and we are, as well, testing a neural network in the lab– a neural network for the entire transaction cost, not only one component. We expect that if we train the model on a big amount of data, the performance will be very high. I don’t know yet; at the next event, you will know how the model works. So far, we’ve rolled out to production the model based on this regression approach, based on the partition of the universe; now we are generalizing this with cluster analysis and random forests, and that’s our next step. But so far we’ve rolled out to production this model, using the ADV based on a random forest, and so far it performs pretty well. To give you some story of the evolution of our experiments, this is an example from November 2000– OK, my company was very lucky not to have, in the last four years, any serious liquidity stress. For me, it was a damnation, because I had nothing on which to train the models for liquidity stress. I say this against the interest of my company now: I needed some bad moment so I could verify my model; otherwise, everything’s good, my model is good. Something like this happened in November: there was some stress in the market, particularly in the corporate bond market. And we were very happy to see that, from the beginning of November up to Thanksgiving, when the stress became very evident, there was a situation– that’s why we have a job– a situation where
portfolio managers were ordering– OK, I need to unwind this position, and that’s fine, because the bid-ask spread is not moving– and the traders were saying, I’m sorry, but when I go to trade these, it’s more difficult, I have a higher cost. And there was this disconnect between the two. The concept of liquidity doesn’t depend only on the observable bid-ask spread; it can depend on many other features. So across the entire model, we are able to– this is something that you cannot do in a neural network, so I was inviting [INAUDIBLE] spotlight, Google, to find a solution; this is something that you can do in a random forest. This is the evolution, for a group of three bonds of the same issuer, of the average behavior of the transaction cost

across November. The transaction cost went up a lot in terms of basis points; what you could unwind here was very different from what you could unwind at the end of the month. And you see that duration times spread was backing this up, as expected. The bid-ask spread was almost flat– actually, the bid-ask spread in that month went down, even counter-intuitively; it was 0.2 basis points, so nothing. But ADV was dropping, and, interestingly, the number of dealer runs was dropping. One of the features we use simply counts how many dealer runs are contributing to this bond; it’s a measure of interest. I know there are some Google people here who are going to have a lot of ideas about [INAUDIBLE] and stuff like this, but for the moment, for us, counting how many dealer runs are contributing to a bond is an indication of interest around this bond, of making bids around this bond. Now, this number dropped; for us, it’s a warning, and the model reacted to it. So the bid-ask spread, which is technically the main driver of transaction cost, wasn’t moving, but the other, less important features– a lot of them– were moving, and in the end the transaction cost model was doing a good job. I did my test; I claimed the model was done. Now I don’t want any more stress in the market, because it was a very painful month for me. Anyway– this is exactly the reason we are doing this job. Now we can provide a signal [INAUDIBLE] to the traders, and we can attribute it to the input features. Ideas for next steps. Very quickly: I mentioned that we have, in the lab, generalized additive models that we tried, and for three different groups of markets, we improved the performance of the model by 6%, 22%, and 11%. And that’s very natural: it’s a kind of fitting of a sublinear model, a different model for every band of volumes– it’s the orange curve. If you remember, it’s exactly similar to the empirical shape of the data in the first slide I showed you: going down, then up. Then one thing that, with my guys, we
will start soon in the lab: leveraging reinforcement learning at the transaction cost level. We think this is going to perform very, very well, technically, and be particularly useful for stress testing. I have nothing to show you yet, because we need to start; we are configuring the environment to do this experiment. But this is an evolution of the transaction cost work we’re going to do. Similarity– I’ve been bragging about this and talking about this for 10 years. We now have different methodologies to define, given a first bond– in this case, I picked oil company bonds– the similar bonds, completely done by the machine. Here, I can use cluster analysis, I can use gradient-boosting trees, you can use neural networks– and again, if you use a neural network, you’re going to have an interpretability problem. But we are very happy with this. And you see that two of these bonds are very similar: if you do some trading activity on the first one, when you go to make a trade on the second one, you’re going to have a different liquidity profile. So we are defining this similarity based on how strongly the securities are connected, and there is a model we are implementing for this. In security similarity, we have distance-based metrics– that is done. We are testing gradient-boosting trees, and next we’re going to have semi-supervised variational auto-encoders and potentially a knowledge graph. I do believe that the knowledge graph concept– I don’t know yet– will be the answer in terms of a methodology to do this job. Again, to honor my hosts, about interpretability: in this case, interpretability becomes even more important than in other cases, because this will be used widely, technically, even for portfolio construction. Many people have been investigating this, and these two papers were done by Google people– DeepMind people, I think. So this is what we are doing in the lab now. The title here is
Redemption at Risk: The Beginning of Our Artificial Neural Network and Cloud Adventure. Because if you think about the regression models and random forests– based on, now, a lot of data, but not petabytes of data– you can work with internal infrastructure and databases. But this was a difficult problem, and immediately we started to run into performance issues. Then we opened a personal account on the Google Cloud. Literally– the first one was a personal account. We started to make experiments, then we opened a corporate account. And basically, the story here: we did the first experiment without the data, just to understand how the cloud was working; then we got approval to upload some of our data to the cloud officially, as a company, under contract. And we were able to do this experiment using an embarrassing number of factors– because forecasting the redemptions of a fund involves a crazy number of factors– using tree ensembles to filter the features and then fitting our networks. And the model we are building tries to forecast a CDF that we can transform into a PDF. It tells us the probability of– we are not forecasting the size of the redemption; I would be a very rich guy if I could do that. We are trying to forecast the probability of large flows, where large means a flow bigger than a certain threshold. So basically, you take your slice of this 3D picture, and then you use that distribution to do your forecast– a distribution conditional on all the input factors. This was the only way we could get somewhat decent performance from this model. Not crazy big, but good performance, something you can use in daily life– and it was the only way to do that. Now we are working on trying to make sense of this model for day-to-day users, because of interpretability. Another case we are working on: I don’t have to tell the people in finance here that bonds, in particular, trade over the counter. This data is unstructured, and there is no quantitative way to determine the accuracy of dealer quotes. Dealer quotes are the big source for everybody, us included– every vendor I’m using– because trades are very rare, sparse; the information from dealers is what people use to price these bonds. And as you saw before, three different providers, 30 years in the market, have severe disagreements on an easy bond. There is no way to determine which one is better. For a lot of it,
we have to rely on the subjective expertise of the traders And one reason I joined BlackRock– because I was at a company with a big trading desk, because of the amount of work the company has So I was learning a lot I thought I was an expert in pricing bonds and then, given the right price to estimate liquidity, I realized, no And I learned a lot from our traders So now, how do we transform this know-how without automatizing the [INAUDIBLE] know-how? So you should remember the benchmark price problem, no? And what we are trying to do here is basically collect and weight different pricing sources, learning how to weight them from the feedback that we receive from our traders in a UI, and trying to apply NLP to their comments So really, when they disagree on a price, they tend to explain, in English, why– or they put it in English better than mine And then with NLP, we try to extract features to adjust our model We are at the beginning of this journey, but this is cool stuff It was an idea enacted [INAUDIBLE] two of my guys last year, and now we are trying to make it a reality This is just to introduce what I think is the final answer that I mentioned yesterday in the AI spotlight, where I see the biggest improvements, the biggest impacts of machine learning on finance, something that I'm not an expert in So maybe I'm shooting myself and keeping myself out of a job– that is, NLP I'm a user of NLP I'm not building models in NLP But if you think of what finance is, it's collecting data from the market, collecting information from providers, and trying to do what? To forecast people's behavior So there is the assumption that a volume published in the market, a trade that happened on a stock exchange, a rating, the fact that a bond is in this sector or not in this sector, is something that will enable you to forecast the behavior of the market Why don't you go to the source when you see people talking about the market? 
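[Editor's note: the redemption model described a few moments earlier– tree ensembles to filter a very large factor set, then a model producing a conditional distribution of flows– could be sketched roughly as below. The data, thresholds, and model choices are illustrative assumptions, not BlackRock's actual pipeline; the conditional CDF is approximated here by fitting one exceedance-probability model per threshold.]

```python
# Sketch of the two-stage setup: (1) a tree ensemble ranks a large set of
# candidate factors, (2) per-threshold models estimate P(flow > t | factors),
# which together trace out a conditional CDF. All names/values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_factors = 2000, 50
X = rng.normal(size=(n, n_factors))
# Synthetic daily flows driven by a couple of factors plus noise
flows = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.3 * rng.normal(size=n)

# Stage 1: filter the "embarrassing number of factors" with ensemble importances
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, flows)
top = np.argsort(forest.feature_importances_)[-10:]   # keep the top-10 factors
X_sel = X[:, top]

# Stage 2: for each threshold t, model the exceedance probability
# P(flow > t | selected factors) – one point of the conditional CDF per t
thresholds = [0.5, 1.0, 1.5]
models = {t: LogisticRegression().fit(X_sel, (flows > t).astype(int))
          for t in thresholds}

x_today = X_sel[:1]   # one scenario: today's factor values
exceedance = {t: float(m.predict_proba(x_today)[0, 1])
              for t, m in models.items()}
```

The point of the slicing remark above is that `exceedance` is conditional on the inputs: two different `x_today` vectors give two different distributions, and the forecast is a probability of a large flow, never a predicted flow size.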
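[Editor's note: the quote-weighting idea– collecting different pricing sources and learning how to weight them from trader feedback– could be sketched as a simple multiplicative-weights update, where each source is penalized according to its error against the trader's mark. The source names, learning rate, and loss function are hypothetical; this is not the actual BlackRock model.]

```python
# Hypothetical sketch: blend dealer-quote sources with weights learned
# from trader feedback, using a multiplicative-weights (Hedge-style) update.
import math

sources = ["vendorA", "vendorB", "vendorC"]   # hypothetical providers
weights = {s: 1.0 for s in sources}
eta = 0.5                                     # learning rate

def blended_price(quotes):
    """Weighted average of the source quotes."""
    total = sum(weights.values())
    return sum(weights[s] * quotes[s] for s in sources) / total

def update(quotes, trader_price):
    """Penalize each source by its squared error vs. the trader's mark."""
    for s in sources:
        loss = (quotes[s] - trader_price) ** 2
        weights[s] *= math.exp(-eta * loss)

# Simulated feedback loop: vendorB consistently tracks the trader's marks best
for _ in range(50):
    quotes = {"vendorA": 101.0, "vendorB": 100.1, "vendorC": 99.0}
    update(quotes, trader_price=100.0)

# After feedback, the blend sits close to the most reliable source
print(round(blended_price({"vendorA": 101.0, "vendorB": 100.1, "vendorC": 99.0}), 2))
# → 100.1
```

The NLP part mentioned in the talk would sit on top of this: features extracted from traders' free-text comments would feed the update step, not just the numeric mark.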
So doing NLP and textual analysis SPEAKER 1: Good afternoon The expo floor will be closing in 15 minutes STEFANO PASQUALI: OK SPEAKER 1: Please make your way to the escalators and the main lobby Thank you STEFANO PASQUALI: Thank you OK So basically, guys, if you want me to stop, you could have a less violent way to stop me, no? I see the time here, so it was fine Anyway So before they call security to kick me out– these are the applications in the market, and some of them in BlackRock All the applications are NLP

So people are applying NLP to the transcriptions of company conference calls Identifying less obvious linkages between stocks– the similarity Text-based industry classification And the last one, what I think is going to be a dramatic, revolutionary input in my modeling– a news sentiment index for liquidity If somebody builds an engine to scrape textual news and tell me what the sentiment on liquidity is, I'm pretty sure– I have no proof of this– I'm pretty sure that this feature will drive the vast majority of the performance of the model So far, nobody has been able to produce very high-quality sentiment on liquidity for me, but the technology and the people are evolving And I'm really hopeful about NLP So to conclude, then I'll give seven, eight, six minutes to Q&A This is recapping our [INAUDIBLE]– this picture with the cyborg man, the robot machine shaking the hand of the human being That's exactly the concept of pushing for innovation BlackRock has a big technology culture as well It's always been pushing for innovation I get bored if I don't play with some fancy tools So we push for innovation, but not in a reckless way, because we have the fiduciary role to our investors So it's always about finding the balance between performance of the model and transparency, interpretability The model surveillance that we are observing is becoming more severe And that is– what is the ideal topic for us? 
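[Editor's note: no high-quality liquidity-sentiment engine exists yet, per the talk, but a toy illustration of what such a feature might look like is a lexicon-based scorer over news text. The word lists below are invented for illustration; a real system would use trained NLP models, not hand-picked keywords.]

```python
# Toy lexicon-based "liquidity sentiment" score for a news snippet.
# Returns a value in [-1, 1]; negative means deteriorating liquidity.
import re

NEGATIVE = {"illiquid", "frozen", "widening", "selloff", "redemptions", "stress"}
POSITIVE = {"liquid", "tightening", "inflows", "depth", "active"}

def liquidity_sentiment(text: str) -> float:
    """Score = (positive hits - negative hits) / total hits, 0 if no hits."""
    tokens = re.findall(r"[a-z]+", text.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(liquidity_sentiment("Bond market frozen as redemptions spike, spreads widening"))
# → -1.0
print(liquidity_sentiment("Strong inflows and active trading keep the market liquid"))
# → 1.0
```

A daily average of such scores over liquidity-related headlines would be the kind of "news sentiment index for liquidity" feature the talk hopes someone will eventually produce at high quality.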
Clearly, the infrastructure, security similarity, NLP, reinforcement learning, and transparent learning are the things we want to investigate, to see if we can squeeze some performance out of our models So be careful Machine learning is not the holy grail And we want to use it only when it's advantageous We are able to prove that there are already some cases where we're adding value We have [INAUDIBLE] in the lab More to come The big push will be when there are some interpretability tools Then you won't have any problem going wild and deep and trying everything you want I'll stop here OK, so it's been a pleasure It's been a long journey This is the first time an Italian finished before the time, the deadline So note this as a big event, and thank you very much for your attention We'll see you soon [MUSIC PLAYING]