[MUSIC PLAYING] SHASHANK JOSHI: My name is Shashank Joshi I’m a specialist leader in the cloud engineering practice of Deloitte Consulting STEPHEN ELLIOTT: And I’m Stephen Elliott I’m a product manager on Cloud Billing SHASHANK JOSHI: All right, so here is the quick agenda We’ll start off with the introduction– what cloud cost management is and why it is important And then we’ll look at a real-life enterprise example where we helped build a cloud cost management solution Then Stephen will talk about what tools and services Google Cloud provides for cost management Then we’ll see a couple of demos to walk you through what you’ll hear in the session today And we’ll wrap up with Q&A. And just a reminder, you can go to your Dory app, ask a question, or vote on a question that you like All right, so let’s get started What is cloud cost management? Fundamentally, it’s about monitoring, controlling, allocating, and optimizing your cost So cloud promises accelerated innovation and reduced cost But in reality, those cost savings are not always realized And another big change from your traditional CapEx model is that, in a traditional environment, you have your budgeting cycle done once a year, or once a quarter, right?
But in an OpEx environment, you have to do it on an ongoing basis So you need a solution that can do that for you So let’s look at why cloud cost management is important And you decided to attend this session That means you understand why it’s important But still, let’s go through some of the analyst research on this one So when asked, what is your greatest barrier for cloud adoption, the second greatest barrier was cited as cloud cost management, second only to data security, which is the number one barrier And within cloud cost management, when asked what their top pain is, 61% of people said that the predictability of cloud cost is the greatest pain point for them And what happens if you don’t have an effective cloud cost management solution? 69% would overspend up to 25% of their cloud budget So this is a significant number And if you don’t have a plan for cloud cost management at all, you would overspend more than 70% of your budget So these are really big numbers So it’s really important to have a cost management plan and a cost management solution in place So now that we know why cloud cost management is important, you would want to start with what your solution has to answer So what are the questions you’d want to answer using your cloud cost management solution? The first fundamental question is, how much does your app cost? And depending on what level you are looking at, it can be one app, a bunch of apps, or all of the apps in your organization So you would want to know how much it takes for you to deliver value to your business through your apps The second question is, what are your cost trends? You can be in a seasonal industry or a regular industry But you should know what your trends are, overall and also for individual apps And finally, what are your cost drivers? So what is driving your cost?
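Just to make those three fundamental questions concrete, here is a minimal sketch in Python of the kind of rollup a cost management solution has to produce– total cost, month-over-month trend, and ranked cost drivers– from per-service cost records The service names, months, and dollar amounts below are made up for illustration

```python
from collections import defaultdict

def summarize_costs(records):
    """Summarize billing records: total cost, ranked cost drivers
    (cost per service), and a month-by-month trend.

    Each record is a dict like {"service": ..., "month": ..., "cost": ...}.
    """
    total = sum(r["cost"] for r in records)
    by_service = defaultdict(float)
    by_month = defaultdict(float)
    for r in records:
        by_service[r["service"]] += r["cost"]
        by_month[r["month"]] += r["cost"]
    # Cost drivers: services sorted by spend, highest first
    drivers = sorted(by_service.items(), key=lambda kv: kv[1], reverse=True)
    # Trend: total spend per month, in chronological order
    trend = [by_month[m] for m in sorted(by_month)]
    return total, drivers, trend

# Hypothetical records for two months
records = [
    {"service": "Compute Engine", "month": "2019-02", "cost": 900.0},
    {"service": "BigQuery",       "month": "2019-02", "cost": 150.0},
    {"service": "Compute Engine", "month": "2019-03", "cost": 1200.0},
    {"service": "Cloud Storage",  "month": "2019-03", "cost": 300.0},
]
total, drivers, trend = summarize_costs(records)
```

Here, `total` answers how much the apps cost, `trend` shows the direction of spend, and `drivers` ranks the top spend areas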
What are your top five spend areas? How would you drill down into that? So that’s another important question Even though these are fundamental questions– and Google Cloud provides a lot of solutions out of the box that can answer these questions for you in a matter of minutes, as Stephen will showcase later on– these are not all the questions that you need to answer These are the fundamental questions, but based on your organization, you may have many more questions So let’s look at one example where we helped build a billing data analysis solution for a Fortune 500 retailer So just to give a quick background on this customer– a couple of years ago, they started on a really big cloud transformation journey using GCP We helped build a business case for them And it was a very exhaustive business case in terms of initiatives and milestones, going down all the way to the applications And very early on in the journey, we realized the importance of a cloud cost management solution And as we were building the foundation, getting ready for the bigger transformation, we also started building the solution required for cloud billing data analysis and reporting So as part of the cloud cost management team, we were responsible for oversight, analysis, and reporting
for all aspects of cloud cost management So we needed a system which could meet all these challenges So the first set of challenges were organizational challenges And I’m sure all bigger organizations would face these So every organization will have some kind of a structure For example, they’ll have lines of business, vice presidents, departments, applications, and so on So how will you overlay all of this data, your organization structure, onto your GCP billing? Because you may have to showcase all of that back using that structure And the second question was, how will you combine multiple data sources? So GCP Billing is not the only source of cost in your overall system There are so many other systems– licensing, other third-party tools, colo and other cloud providers as well So how will you combine all those multiple data sources? Your solution has to be robust enough to combine all of these sources and derive meaningful insights from that And finally, in terms of organization, there are different levels of stakeholders So how will you break down the reporting or analysis based on different stakeholders? And how will you control access, right? You wouldn’t want line of business 1 to look at line of business 2’s data, for example And in terms of data challenges, working with the billing export schema is really important The sooner your team becomes familiar with the billing export schema and BigQuery overall, the better And how would you distribute shared services, right? There can be shared services like networking and monitoring So how will you distribute that cost? And some of the credits come at the organization level How will you distribute those credits back to the respective folks?
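One common way to handle the shared-services and credits questions is simple proportional allocation: distribute a shared cost, or an organization-level credit, across lines of business in proportion to their direct spend That is only one possible policy– a minimal sketch, with hypothetical numbers:

```python
def allocate_proportionally(shared_amount, direct_spend):
    """Distribute a shared cost (positive) or an org-level credit
    (negative) across cost centers in proportion to direct spend."""
    total = sum(direct_spend.values())
    if total == 0:
        raise ValueError("no direct spend to allocate against")
    return {center: shared_amount * spend / total
            for center, spend in direct_spend.items()}

# Hypothetical direct spend per line of business
direct = {"lob1": 6000.0, "lob2": 3000.0, "lob3": 1000.0}

# $500 of shared networking/monitoring cost, distributed proportionally
shared = allocate_proportionally(500.0, direct)

# A -$200 organization-level credit, distributed the same way
credit = allocate_proportionally(-200.0, direct)
```

The same function covers both cases: shared costs add to each LoB’s bill, credits subtract from it, and the allocations always sum back to the original amount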
And finally, every large organization would already have some kind of a data analysis and reporting solution for cost So whether you choose to extend that solution for GCP, or you want to build a brand new one– that’s another question you need to answer So with these challenges in mind, this is an overview of the architecture that we built If you see, on the left-hand side, there are a bunch of data sources Of course, GCP billing export was a major data source And as a best practice, you should enable GCP billing export on day one And then there is other GCP data For example, we actually had a lot of other requirements in terms of getting more data from GCP– for example, how many instances are running in our environment– any other platform-related data So we had custom scripts to capture all of that data and dump it into BigQuery And then organizational data, right– all of your org structure, your VPs, your departments, your budgets– all of that organizational data has to be brought in as well for meaningful insights And we’ll see that in the demo later on You can have third-party tools’ data And you can have other clouds’ or colo providers’ data as well You would want one single solution to visualize all of that, right, instead of going to multiple different solutions For storage and analysis, we used BigQuery Initially, we started off with an on-prem solution, but later on, because of challenges related to cost and functionality, we decided to get everything into Google Cloud– using BigQuery as our storage and analysis solution And we’ll see some examples in our demo of how we actually used that And we had a bunch of automation and scheduling Because all of the org data that you see here, all the platform data that is coming in– all of that data was automated So there were scripts running all the time, scheduled And there was the organizational data
coming into CSV files And whenever there was a change in a file, the automation pipeline would update the table And finally, we used a bunch of BigQuery views to join all of these different data sources And for visualization, we used Tableau, because that was the tool of choice So before I go to the next one, I just want to highlight a couple of key highlights of the solution So for cost organization, we used labels We decided on a labeling strategy very early on And as a best practice, you should spend time on deciding the labeling strategy, and not just for today or the near future, but maybe slightly longer than that Because it’s really hard to change once you decide on a labeling strategy That was one important part And because of that labeling strategy, we were able to drill down not just by the VP, but going down to the department, going down to the application, going down to the environment
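That drill-down is essentially a group-by over label keys A minimal sketch, using the lob/app/env label scheme from the talk and hypothetical cost rows shaped like labeled billing-export line items:

```python
from collections import defaultdict

def rollup_by_labels(rows, keys):
    """Roll billing rows up by a sequence of label keys, so the same
    data can be viewed at the LoB, app, or environment level."""
    totals = defaultdict(float)
    for row in rows:
        # Rows missing a label fall into an "unlabeled" bucket,
        # which also makes labeling gaps visible
        group = tuple(row["labels"].get(k, "unlabeled") for k in keys)
        totals[group] += row["cost"]
    return dict(totals)

# Hypothetical labeled line items
rows = [
    {"cost": 40.0, "labels": {"lob": "lob1", "app": "phill-good", "env": "dev"}},
    {"cost": 60.0, "labels": {"lob": "lob1", "app": "phill-good", "env": "prod"}},
    {"cost": 25.0, "labels": {"lob": "lob2", "app": "checkout",   "env": "prod"}},
]

by_lob = rollup_by_labels(rows, ["lob"])                    # top level
by_app_env = rollup_by_labels(rows, ["lob", "app", "env"])  # full drill-down
```

In the real solution this grouping happens in BigQuery views over the export table, but the shape of the computation is the same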
So there was visibility going across, all the way down The second interesting thing we did was merge the billing export data with so many other data sources And we’ll see how that is useful later on in the demo So don’t be afraid to merge different data sources with your billing data to get more insights And finally, we created a lot of custom fields as well And we’ll see one example of that If you have seen your BigQuery export, for every service, you have multiple SKUs And for compute, for example, there are thousands of SKUs So if you wanted to jump from service to SKU, that’s a much bigger jump So we came up with a field called SKU categories We wanted to categorize the SKUs into something higher-level and more meaningful Because not everybody would want to go down to the actual SKUs And we’ll see how that helps later on as well And in terms of impact– the major impact was the visibility from the get-go We could track the spend at an overall business case level And the leadership could see how they were doing and make adjustments So this was a great tool for the overall leadership, to look at and see how well we were performing vis-a-vis the business case And not just that– because the reports were available all the way down to the app owners and developers, it really helped make that mindset change from CapEx to OpEx As the granular-level data was available, they could see the impact of their actions If they designed an application in a certain way, they could see a positive or negative impact on the cost almost in real time So that really helped create the CapEx to OpEx mindset And in terms of analysis, because everybody could see what their budget is and how their spend is going on a daily, weekly, monthly basis, it created a culture of accountability across the organization And as everybody could see what their top areas of spend are, they could manage those areas, do something about it, and
kind of control that And overall, it helped improve the forecasting and predictability Because everybody was able to see how much their application cost yesterday and today, and based on the load variations, what it would cost tomorrow And finally, in terms of insights, it helped identify a lot of optimization opportunities And a couple of examples I can give– when we realized the compute cost was much higher, we dug deeper into that We saw, in one instance, the licensing cost was much higher So we decided to look at why it was higher, and whether there was some way we could address that Maybe using BYOL, right, we could reduce the cost Another example was we identified a lot of storage-related costs in compute And that’s where the custom scripts came in handy Because we wrote custom scripts that could go through all of your projects and find orphan disks– disks which are not attached to any instance So you’re paying for storage for those disks and not using them So that was another opportunity– to delete them, take a snapshot, or do something about them And finally, other opportunities came from making cost optimization part of your culture And to do that, we had regular governance meetings with our lines of business on trends and recommendations So rather than making it a once-a-quarter or long-term activity, we made it a habit through our governance to do it on a biweekly basis And one interesting anecdote I remember from one of those meetings was when we were helping predict the cost for one of the LoBs Because it’s a retail customer, there were peaks and troughs in their usage And we were trying to estimate how much it would cost when usage goes to a peak So obviously, we had a unit cost on a normal basis But during a peak, your overall on-demand usage goes up, right?
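Coming back to the orphan-disk scripts for a moment– the core logic is just a filter over the disk inventory: a persistent disk with no attached instances is pure storage cost A minimal sketch with hypothetical disk records (a real script would list disks across all projects via the Compute Engine API, where attached disks carry a non-empty `users` list of instance URLs):

```python
def find_orphan_disks(disks):
    """Return disks that are not attached to any instance.

    Each disk record mirrors the shape of a Compute Engine disk
    resource: attached disks have a non-empty "users" list.
    """
    return [d for d in disks if not d.get("users")]

# Hypothetical disk listing
disks = [
    {"name": "web-1-boot",    "sizeGb": 50,  "users": ["instances/web-1"]},
    {"name": "old-scratch",   "sizeGb": 500, "users": []},
    {"name": "migrated-data", "sizeGb": 200},  # no "users" key at all
]

orphans = find_orphan_disks(disks)
# Total capacity being paid for without serving any instance
wasted_gb = sum(d["sizeGb"] for d in orphans)
```

From here the script can recommend deleting the disk or snapshotting it first, as described above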
So you have certain commitments for a lower cost, but you cannot make commitments for a short peak period So during the peak, your on-demand instance usage goes up And that actually pushes your unit price for compute up as well So when predicting for a peak, you not only have to account for the increased capacity, you have to use the increased unit cost as well Keep that in mind So that insight was really useful for some of the LoBs to make accurate predictions about how
the peak would look like And finally, before closing out this use case, what are some of the key takeaways that you can take back to your billing systems? The first one is, billing analysis and reporting is not a one-and-done kind of solution, right? We actually developed this solution in a very agile, iterative fashion So don’t try to collect all your requirements for all your dashboards very early in the cycle You should start building reports and keep improving on them on a biweekly basis If you do it in an iterative fashion, that’s the best way to do it The second one is, decide your data pipeline and reporting stack early on We had to make corrections and adjustments midway But you should always do some due diligence to decide your data pipeline stack early on And identifying optimization opportunities should not be a rarely occurring thing– it should happen much more regularly That way, you can control your cost So with that, I’ll hand it over to Stephen to talk about the cloud cost management tools on GCP STEPHEN ELLIOTT: All right, thank you, Shashank This is actually kind of funny, because I never thought I would get to present Billing in an IMAX theater So it’s an exciting day, because I get to check that off my thrilling bucket list of life goals So we talked to lots of customers, big and small And there are several key themes– and Shashank kind of alluded to these– that we want to address with our cost management tools And these are what’s on the screen right now So visibility– the basic ability to view your costs in a way that’s meaningful to you Accountability– once you know the cost of some meaningful, logical grouping– say, the fully-loaded cost of the application I just shipped last week– now, who do I hold accountable for the cost of that application? Who do I reward for having delivered it efficiently? And who do I have a meeting with who did not deliver it efficiently?
But more seriously, what is the corrective action if you see costs starting to run away? And then control– now, this is a big one, because there’s both the proactive, explicit control– the tools that you use to control costs– and then there’s just the sense of control And that one’s hard to put a finger on But often, a bunch of different tools can give you the sense and the feeling of being in control That’s often just as important for you and your leadership You know, if you’re a CIO, or a VP, or you’re an engineering manager, it’s that sense of control that lets you be confident that you’re doing the right thing And then intelligence– you expect to get insights about your costs, suggestions about where you can optimize what you’re spending, and get those from the platform, and get that easily from the data So those are the four priorities we have on Billing And I’m going to talk through the tools, and then show you the tools, and then demo the tools So there’ll be three different ways, depending on what kind of learner you are So we have built-in reports that come straight out of the box They’re ready to slice and dice in the Billing console If you want access to the detailed data, we export all the detailed data, like Shashank mentioned, to BigQuery, which is a really powerful tool to get really fast query results on large amounts of data So you won’t be limited by BigQuery if you want to join it against rich operational data We have recommendations So various products have different recommendations– for example, Compute Engine has VM rightsizing recommendations to show you where you might want a larger instance, or a smaller instance, depending on how you’re actually using the cores and memory in your VMs And then customizable dashboards– so this would be, in the case of a Google product, Data Studio plugged into BigQuery Or if you have your own solution– you know, you’re using Tableau, or Looker, or some other solution– plug that into BigQuery
as well On the accountability and control side of the house, again, this is all about giving you the tools you need to feel like you are in control, and can see costs, and don’t get surprised by costs as your teams are iterating and kind of operating autonomously So one is just billing access control, being able to set the rights to view costs for the right users By default, billing account administrators and project owners and users get to see the costs of what they’re doing Then we have budget alerts and automated actions So you can set notification thresholds on your spending in a project or in a billing account so that you receive an email if spending goes over a threshold Or you get a Pub/Sub message sent to some programmatic function that responds to it and takes corrective action And then there’s the resource hierarchy So this is just how you want to organize your resources, your usage into projects, have multiple projects under a folder, have all of that under your domain, and then set standard policies at the right level of the hierarchy And then finally here, there’s quota And so one way to think about quota is the proactive control of usage, setting, in advance,
what someone’s allowed to use And then budgets and alerts– that’s notification So that’s a lagging indicator, to respond to things after they’ve exceeded a certain threshold So this graph makes me laugh, just because I stubbornly oversimplify it despite the most vigorous feedback from my engineers So on the left, there’s usage metering from GCP products So if you’re using Compute Engine or Storage, it’s emitting usage to what I’ve called, here, the pricing engine, which doesn’t exist This is a chimera It’s not a real thing But it’s a really complex system that takes the usage, figures out the right pricing, credits, and discounts to apply to it, and packages that to send to downstream systems One downstream system is important to us– that’s payments That’s how we make money But for this talk, the yellow items and the red items are what’s important So we send the cost and the usage downstream to all the cost management tools– billing reports, export, dashboards, and budgets and alerts So if you run into Jonathan Thorndycraft or anyone else, and you talk to them, just don’t mention the words “pricing engine” Talk about something else But this is basically what it does There’s a great talk about resource management– actually, later today, I’ll give you the talk number– by Greg and Max, teammates But at a high level, the items that you should think about are, how do you set policies and users in your domain and organization? And then, how do you pay for your projects with a billing account?
Usually, most customers have one billing account, so this shouldn’t require too much thought The payment profile– that’s just something to be aware of: make sure you have the right users set on it, so they receive the invoices and are able to pay them And then projects, folders, and labels, which Shashank referred to and I mentioned, which are the ways to organize your resources and be able to identify the costs of the same group So this is the part where I’m just going to show you what all these things are before I demo them So reports look like this They work out of the box You don’t have to set anything up And anytime you log in, you can see a report that lets you slice and dice the costs for easy visibility And then billing export– if you go to BigQuery and you have billing export enabled, you can just look at the table where all the costs are going and either run queries inline here, via the command line, or via an API, or plug it into Data Studio or Tableau, and then maybe have an experience where you don’t even need to write the query For example, in Data Studio, you can just do a what-you-see-is-what-you-get configuration of the charts that pulls directly from billing export and then gives you something like this And we actually have a template– Shashank will show that in a demo– where you can just copy the template, plug in your export table, and then get it to start populating real data from your own account And budgets and alerts– this is per billing account or project And then you can easily set multiple alert thresholds based on the actual spending level or a forecasted spending amount And that’s a new release We came out with that just a couple weeks ago So that’s a nice one that I’ll talk about And then just the mental model for how programmatic budget alerts and programmatic consumption of those alerts work– the manual method is that we send an email to the billing administrator and say, your spending is
approaching your threshold You might want to do something about it– or it’s just a heads-up Programmatically, if we send the same message to a Pub/Sub topic, you could have something like a Cloud Function ingest the Pub/Sub topic message anytime it arrives, and then decide what to do with it So for example, one solution would be to just send a Slack message to a Slack group and say, hey, just FYI, the spending is trending up Or if it’s a sandbox project and you’re completely comfortable blowing it away, you could have a more draconian programmatic response that says, OK, you spent more than $500 That was your test account Your test account’s been shut down And this is just kind of a small pitch for Cloud Functions When I say it’s easy to write the programmatic consumption of the Pub/Sub message, Cloud Functions are really short snippets of code in common languages So here we have a very short function that does a quick budget check and then kind of pushes the big red button and says disable billing, because we exceeded the budget So for budgets and quota, again, keep in mind, budgets are your lagging indicator It’s when the spending has happened, which means the usage happened before that And quota is your proactive control And the quota policies can be set for different products You can decide whether or not a user should be able to run a lot of BigQuery or run a lot of instances Budgets are kind of a catch-all If the project spends to a certain level, that captures all the products in that project If a billing account spends to a certain level, that captures all of the spending and activity in the billing account So quotas are also more targeted to the actual products being used
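The budget-check function shown on the slide is Node.js, but the decision logic itself is tiny and can be sketched in any language Here is a minimal Python version– the `costAmount` and `budgetAmount` fields follow the budget notification payload, and the actual disable-billing call (which would go through the Cloud Billing API) is injected as a stub so the logic stays self-contained:

```python
import json

def handle_budget_alert(message_data, disable_billing):
    """Decide what to do with a budget notification.

    message_data: JSON string from the Pub/Sub message body.
    disable_billing: callback invoked when spend exceeds the budget
    (a real function would detach the billing account via the
    Cloud Billing API; it is injected here so the logic is testable).
    """
    notification = json.loads(message_data)
    cost = notification["costAmount"]
    budget = notification["budgetAmount"]
    if cost > budget:
        disable_billing()  # the "big red button"
        return "disabled"
    elif cost > 0.9 * budget:
        return "warn"  # e.g., post a heads-up to Slack
    return "ok"

# Hypothetical notification: $520 spent against a $500 budget
msg = json.dumps({"budgetDisplayName": "demo-budget",
                  "costAmount": 520.0, "budgetAmount": 500.0})

actions = []
status = handle_budget_alert(msg, lambda: actions.append("disable"))
```

Only use the disable-billing path on sandbox projects you are comfortable shutting down, as noted above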
So another important aspect to cost management– and many of you may be familiar with this– is that if you get into a good rhythm of building the dashboards, as Shashank mentioned, and having teams accountable for their costs, and regularly checking these, it actually does something interesting: it makes the culture more operationally in tune with what’s going on We’ve had customers– like Dale Birch from Vendasta, here– say that once they put up their cost management dashboards, they identified things they probably wouldn’t have identified otherwise So they have a great talk where they’re talking about runaway storage spending Usually, storage spending that runs away slowly is something that’s kind of hard to catch You might notice it over months, but you would have to actually look at the long-term trend and have someone proactively looking at that But if you just have a dashboard that shows year-to-date spending, and you see a trend that’s going up, it’s hard to miss, just because humans are so great at identifying a visual trend And usually, cost is a great proxy for operations So if there’s a deviation in cost, or there’s a trend in cost that doesn’t look right– it just looks like it deviated from the past pattern– usually that correlates to something interesting or wrong in operations And it’s just a really nice proxy that everyone understands, whether it’s the VP or the engineer, that something has started to change from the status quo Great, so for the demo, while Shashank sets that up, what we’re going to do is an end-to-end walkthrough to show you how we recommend using the different tools, starting from the simplest use case to the more involved customer use case that Shashank was talking about with his engagement with the retail customer So right now, we’re in the Cloud console This is the home page for a project And there are two ways to get to your cost data The first one is just the Billing card And here, the project spent $235 so far–
it’s all the way over there– $235 so far this month And I can either go to Billing reports here– but before I do that, I just want to show you that you can also get to the Billing page in the console by coming to the Billing menu and then going to Reports down here But for now, I’m going to go to it via the card So the first question I might ask is, where’s that $235 coming from? Is it under control? Does it look like a reasonable amount of spend? So if I click on View detailed charges– View detailed charges– great, OK, so now I’m here This is amazing I’m going to zoom out This is the most high-powered projector, and yet somehow– all right So once you’re in reports, as we were mentioning, you get an at-a-glance view of where your spending has come from and where it’s going Is this the right size, or should I zoom in a bit more? It’s fine? OK So you immediately get a sense of what your overall spending is on the left, to date– $235 We actually spun this up for the demo So that’s a big increase over the previous 10 days And then we have a projection of what we think we’re going to spend by the end of the month– $250 by the end of the month Now, that’s curious You would think that if we spent $235 in the first 10 days of the month, the projection should be something more like $600 or $700 by the end of the month But the way we’re doing the projection is we’re looking at– I’ll just do a custom date range– the historical spend trend here So if I went back to January and asked, what’s the trend in spending– well, historically, the last time we used this project, we did a big burst and then shut it down And so that’s why the projection thinks that we’re not going to exceed $250 So that’s just this project And what I can see is that the spending is coming from some SKUs So N1 predefined instance core– that’s a Compute Engine core, part of a VM And then it must be running Windows, because there’s a Windows Server license on it,
Data Center Edition, and then some RAM for those instances But that might be too granular, so let’s see– group by product And it’s all Compute Engine And there’s some Stackdriver Logging at zero cost So the next thing I might want to do is say, well, I’m the billing account administrator So if I wanted to see all of the spending in the billing account, I could just select all projects Right now, it shows me just one of the 18 in this billing account, because I’d come to it from the project home page So show me everything And let me know how my entire team is doing spending-wise So now that forecast has changed, because the spending across the billing account, across all the projects, is different So I might group by project, just to see what the high-spending projects are And so far, the highest-spending project this month is Phills Amazing Application And the one we had come from was the Phill Good App–
Dev And again, where’s this trend line coming from? Do a custom range, maybe take it back to January So you can see, as we’re ramping up for Next, we’ve been doing some more demo testing, spinning up some VMs So Shashank can show you the demo later in this session And then the forecast is actually saying, well, typically, on this day of the month, you’ve been spending this much And you’ve been increasing month over month So we think you’ll have kind of a big spike in spend around here, and then it’ll go down again So the nice thing about the quick slice-and-dice ability of reports is, now you can start to answer some real questions really quickly, even operational ones So one thing I might want to know is– I see that the Phill Good App– Dev suddenly started having usage It didn’t have usage before mid-March So I might want to focus just on that one and ask, well, where’s that spending coming from? And why is it getting started? So if I focus on that, it looks like there was a bunch of spending that happened around March 12, March 13 And again, I can drill back in We saw what that was– it was Compute Engine And when we looked at it even more, we saw that it was an N1 predefined instance core So once we’ve identified something interesting– and this only takes a matter of minutes– you can go to Stackdriver and just see, in the logs, well, who’s responsible? What’s responsible for the spending? We know, generally speaking, it’s a predefined instance core So if we were to go to Stackdriver– does this go here?
Stackdriver Logging– I’m still in the same project– Phill Good App– Dev And I’m interested in these VM instances– who’s running them, where they came from Let’s look at the activity logs So we’ve got the compute activity log And then here, we had a date range of interest So it was March– let’s say March 13 can be the start date, and then maybe look at the whole day there Set the end date And so now I’m looking at the logs And I can immediately start to see some instance insertions So here, it looks like we’ve been having some activity And some of the activity– so here, there are some config changes by Shashank And if I looked further, I’d see some instances getting stopped by Shashank, and then some instances getting started, right? So now I’ve got a sense of the accountability It’s my partner, Shashank That’s great It could be me, because I am logged in as him But I’m going to pretend that it’s him So the next thing I want to do is set a budget on that So actually, I’ll show you how to get there I’d say, well, you know, I trust him to an extent But I wouldn’t trust him with my entire demo So go to Billing, and then go to Budgets And say, well, let’s set some notifications on the billing account So the project was Phill Good App– Dev Go to the linked billing account for that And then set a budget for it So now I’m going to create a budget And this is pretty fast So, budget– don’t let Shashank spend more– budget And let’s see, Billing reports demo, that’s the project And the budget amount might be something like $1,000 So I don’t ask for more than $1,000 I’m comfortable with $1,000 for the demo, for the month, but not much more And now, the auto-suggested thresholds are: notify me at 50%, 90%, 100% Another interesting thing that we launched is you could say, I would also like to be notified when I am projected– oops, it has to be unique So I’m just going to delete that and use the last one– when I’m projected, or forecasted,
to exceed the budget And so you don’t have to wait until you exceed a threshold Just say, when we forecast that you’re going to exceed the budget, even if it’s only the 10th of the month or the 15th of the month, we should alert on that And then finally, you could either send that via email or connect it to a Pub/Sub topic if you want to respond to it programmatically And because you can have multiple budgets, you could set one this just sends an email, and then set an identical one then sets– that sends a Pub/Sub– Pub/Sub alert So I think the topic is in this project You just have to select a Pub/Sub topic to send the message to So alert email– save that And then you have a budget that helps me stay in control of my demo spending And then, again, the Cloud Function

itself could be very simple. If you go into Cloud Functions– oops, I'll just cheat and go over to the tabs– you'll get a list of all the functions in your project. This one is very simple: it responds to any message sent to the Pub/Sub topic, alert-email. In this case, it's a function that just forwards the message. It's not truly an email; it forwards it to Slack. If you're using a standard language like Node.js, you just include the Slack library and post a message to it. So this is a fake token, but it posts the contents of your Pub/Sub message to the Slack channel. And that is your whole function; you don't have to do anything else. When you hit Deploy, that function will start responding to the budget alert. So it's a really powerful tool, and you can learn more about it in a deep-dive talk that my colleague Phill Coletti is going to give. So now that I've gone through how I control Shashank's spending, I'll let him talk through billing export and building dashboards from that data.

SHASHANK JOSHI: Sure, thanks, Stephen. I really hoped you wouldn't catch my pet project, but you did. So the tools are pretty good. I'm going to talk about how you overlay some of the org data that I spoke about in the enterprise example with your billing data, and how we can use that. Just to set an example, let's say you have multiple LoBs in your environment– three lines of business– and multiple VPs. If these names look familiar, they come from the Docker container name generator. So these are the VPs, and each VP has one or more applications. Each application has environments, and each environment lives in a specific project. To overlay this structure on your billing usage, you have to have a labeling strategy. I'm using labels such as environment (dev, test, prod), lob (lob1, lob2, lob3), and app (phill-good, all-is-well). You get the idea; we are using some interesting names here.

So this is good, but how do you really implement it in BigQuery? Billing export is one data set, and you can create another data set for the org data. So we have a table for the LoB-to-VP mapping; let me show you that. It maps each LoB back to a VP name, and we'll see how to join that with your billing export. Let's go to the next level of detail. You may have budgets for your individual VPs, so you can create another table that maps each VP to a month and a budget. All of these details can be stored in CSV files, and you can build an automation pipeline: check the CSV files into your source code repository, and whenever there is a change– say a new VP joins, a VP leaves, or a budget changes month over month– you update the CSV file, and that automatically updates your BigQuery table. So that's the org data to overlay on your billing data.

Now let's look at some of the other data. I mentioned SKU categories as a custom field that we used. We created a table that takes your services and all their SKUs and adds a SKU category, just for simplicity. Let me show you how effective that is. For Compute Engine, just look at licensing: as I mentioned, there are thousands of SKUs overall, and within Compute, just for licensing, there are hundreds. In this case, there are 16 different licensing SKUs in my usage alone. And for reporting, I may just want to group all of them together as licensing, right?
So that's what I'm doing here. Then let's look at the other important thing I mentioned: the platform data. In this case, I created a Python script using the Google APIs. It goes through all of your projects, gets all of your instance data, and dumps it into a BigQuery table. You can schedule this script to run every two hours, or every day, based on what you need. And this is just one example; the APIs are so extensive and useful that you can write as many custom scripts as you want. This particular script gets the project it's running in, the instance name, the region and zone it's running in, and how many cores and how much RAM the instance has.
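The core of such a script is flattening each instance resource into a row for BigQuery. A minimal sketch: the input dict mimics the shape of an instance returned by the Compute Engine v1 API (fields like `zone`, `machineType`, and `status` are real API fields, returned as full URLs), while the output row layout and the sample values are assumptions; the real script would fetch instances with the Google API client and stream the rows into BigQuery.

```python
# Flatten one Compute Engine instance resource into a BigQuery-ready row.
def parse_instance(project, instance):
    zone = instance["zone"].rsplit("/", 1)[-1]            # e.g. us-central1-a
    machine_type = instance["machineType"].rsplit("/", 1)[-1]
    # For standard machine types the vCPU count is the trailing number
    # (n1-standard-4 -> 4); custom types would need a machineTypes.get call.
    suffix = machine_type.split("-")[-1]
    cores = int(suffix) if suffix.isdigit() else None
    return {
        "project": project,
        "name": instance["name"],
        "zone": zone,
        "region": zone.rsplit("-", 1)[0],                 # e.g. us-central1
        "machine_type": machine_type,
        "cores": cores,
        "status": instance["status"],
    }

# Hypothetical instance resource, as instances.aggregatedList would return it.
sample = {
    "name": "phill-good-web-1",
    "zone": "https://www.googleapis.com/compute/v1/projects/demo/zones/us-central1-a",
    "machineType": "https://www.googleapis.com/compute/v1/projects/demo/zones/us-central1-a/machineTypes/n1-standard-4",
    "status": "RUNNING",
}
row = parse_instance("phill-good-app-dev", sample)
print(row["zone"], row["region"], row["cores"])  # us-central1-a us-central1 4
```

Because the row carries the project, it can later be joined through the project labels back to LoB and VP, which is what makes the per-VP instance dashboards possible.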

And we'll see how that data is used. Because we are using project labels to map to lines of business, you can now map all of these instances back to their VP. So now we have created the structure: there is a billing table, and there are a bunch of other tables for your org data and the other data. Now let's see how we can join all of this to make some meaningful dashboards. For that, we create a bunch of views; let's look at those. The first view you see here is the billing daily view. What we are joining here– I hope the text is visible– is the main billing table with the SKU categories and with the LoB mapping. As a result, you can map your billing usage by VP and by SKU category; you're joining those three tables. The other view is the billing monthly budget view, where you join your billing table with your LoBs as well as your monthly budgets. And finally, the instance-data-and-VP view joins the instance data with the VP table and the LoB table. So this is just an example of how you can join different tables together with your billing export. You created the additional tables, and of course you had the billing export; then you create a bunch of views to join those tables; and finally, you create the dashboards.

As Stephen mentioned, it's really easy to build Data Studio dashboards. If you just search for "GCP Billing Data Studio sample" on Google, you'll find it– you'll see Stephen's photo here. I think it's a really nice talk from last year, so the YouTube video is a must-see. You just click on this link, and it takes you to a page with details on how Data Studio works. But more interestingly, there is "start working with your sample billing report." You click on that link, and it takes you to a sample dashboard. That gives you many dashboards out of the box, right? Analysis by projects, analysis by resources, spending trends, App Engine, BigQuery– all of these reports are available to you. And it's so easy to use in your environment: you just click Make a Copy and give it your data source, your billing export. In my case, it's Billing_Export_Table. I click Copy Report, and all those dashboards you see are available for my environment out of the box. So that's how easy it is.

That gives you the out-of-the-box reports, but how do you make some of the custom reports that we saw? So many tables, so many views– how do you consume them? Let's go to the custom cost management reports– and a shout-out to David Wright, who helped create some of these reports and views. The first view we had combined the VP and LoB tables with our billing export. Because of that– let me increase the font size here– you can see the spend by VP. You can see that Amazing Aryabhata is spending most of the money. The other insight here is that there are some unassigned costs as well: these are the projects which you have not labeled. It's really important, as an organization, to see which of your costs are not labeled, not allocated, right? And it's clearly visible from here.

Now, we had budgets for everybody, right? So let's look at how they are doing against those budgets. This is the view where we combine the budgets table with the billing table. You can see, for example, that Relaxed Richie, one of our VPs, is below his budget, so he can continue to stay relaxed. Let's look at somebody else. Mr. Trusting Torvalds is actually exceeding his budget, so we'll have to have a discussion with him about why that is. You can see how they're doing against both their daily and their monthly budgets: daily spend and monthly spend versus budget. Coming back to the SKU categories: because these are categories, they are much higher-level abstractions. You see licensing fees here as the second-biggest cost driver, which gives you a really good view of where your cost is going. So instance vCPU is definitely number one,

but licensing is number two, so you would want to drill down. And as Stephen saw– I mean, he caught me with my pet project– that's a bunch of Windows machines carrying Windows licenses, so you may want to look at that as a cost optimization. And finally, let's look at some of the custom GCP scripts. We had the custom script getting all the instance data, and it's useful combined with your other org data. Now you can see, for each VP, how many different types of instances they are running, right? Why is that useful? Your developers may be running a lot of custom instances, and that is an optimization opportunity: moving to standard instances can save some cost. Instances per zone may be important from an availability perspective, if you're oversubscribing one zone or another– for example, in this case, Amazing Aryabhata's team is prioritizing one zone over the others. And these dashboards of cores and RAM per zone are really useful for making informed decisions about how many commitments you want to buy in each of the zones, or each of the regions. And finally, instance status– running, terminated. How many instances were just shut down and left like that? You would want to delete those. These are just a few examples, but you can make as many custom reports as you want and combine them together. STEPHEN ELLIOTT: OK, thanks, Shashank. Can we switch back to the slides?
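The "shut down and left like that" report Shashank mentions boils down to a filter over the exported instance table. A minimal sketch with made-up rows (the field names and values here are illustrative; TERMINATED is the status Compute Engine reports for a stopped instance):

```python
# Find cleanup candidates: instances that were shut down and left around.
# These rows stand in for the instance table the custom script exports.
instances = [
    {"name": "phill-good-web-1", "vp": "amazing-aryabhata", "status": "RUNNING"},
    {"name": "all-is-well-batch-7", "vp": "relaxed-richie", "status": "TERMINATED"},
    {"name": "old-experiment-2", "vp": "trusting-torvalds", "status": "TERMINATED"},
]

cleanup = [i for i in instances if i["status"] == "TERMINATED"]
for inst in cleanup:
    print(f'{inst["vp"]}: {inst["name"]} can likely be deleted')
```

In practice this would be a WHERE clause in the BigQuery view feeding the dashboard, but the logic is the same either way.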
All right, so just to wrap up, we mentioned some best practices. These slides will be available online later, but they are worth reiterating. On day one, you've seen the value of having billing export enabled, so you should turn it on as soon as you have a billing account; that way, the historical data is always available to you. In my demo, I showed how the billing reports are a very easy tool for quickly finding a date range and a product or SKU of interest, and then using that as your jumping-off point, whether into Stackdriver or the detailed logs. If you know the tools, that exploration can take a matter of minutes to figure out what's driving something. It's also really worth putting in some effort to think about how you organize your work and how you want to manage your costs, and then build custom dashboards from the detailed billing data. Data Studio lets you share those dashboards easily, so you can spread them across your company. Related to that is developing your cost management strategy: thinking about how you organize your resources and projects, and how you label cross-cutting spend across different projects and resource types, so that you can easily see fully loaded application costs or fully loaded team costs. And finally, if you do all this right, really work at building that culture of cost ownership– making sure people actually use the cost dashboards and are iteratively improving them, as Shashank was mentioning earlier– so that teams get both the cost accountability and the engineering excellence that can come out of being really strong cost owners. [MUSIC PLAYING]