Hello, good morning or good evening, wherever your day finds you, and welcome to the Amazon Web Services partner webinar series. Our topic today is deploying your business-critical SQL Server apps on Amazon EC2. I'm your host, Sherri Sullivan; I'm with the partner marketing team in Seattle. Before we get started, a couple of housekeeping items: we are recording today's webcast and will make it available post-event. I'd also like to remind folks that in your browser you have the opportunity to submit questions at any time during this webcast, and we'll take a few minutes at the close to respond to them, so please feel free to submit as the webcast moves along. We also make our content available on the AWS YouTube channel as well as SlideShare. I'm pleased to be joined today by Tony Tomarchio, director of field engineering with SIOS, one of our AWS technology partners, as well as our own Miles Ward, a senior solutions architect here at AWS, who joins us today from Tokyo, Japan. We'll begin with Miles providing an overview of enterprise infrastructure on AWS, followed by Tony, who will give us some SQL Server insights and a demo that showcases a SANless SQL Server failover cluster on AWS. So with that, Miles, I'll go ahead and transition to you. Take it away.

Thank you very much, Sherri. This is Miles Ward; I'm a solutions architect with Amazon Web Services, and I'm excited to speak with everybody today. One of the core things we hear from the AWS customer base is a real focus and drive to adopt technologies that are disruptive and innovative and that drive real value into their businesses. The interesting thing we've heard is that they're able to do that with AWS precisely because of how costly and complex the history of on-premises physical infrastructure has been. Any kind of business that built infrastructure out of parts and tin (computers, wiring, cabling, a data center facility) faced really large capital expenditures, plus the risk of very low utilization of those systems. Prices for top-quality devices continue to be high, and on top of the actual capital purchases there has always been management and administration, both the maintenance and the procurement process itself: acquiring all of that equipment and justifying it. Given the expense and the time required to build these systems, you also see real reluctance and structural resistance to building things that can scale to the demands of the business. What we've heard from folks like Gartner is that the core IT teams of even the biggest and most efficient enterprises often spend eighty percent or more of their time just keeping the lights on, just running the basic power and infrastructure, rather than delivering business value: figuring out how to deliver better value to shareholders, build more innovative products, and come to market faster. At AWS we set out to build a set of services that deliver some really significant, disruptive business benefits. We wanted to build a system that demanded no upfront investment; we don't want customers to miss the business opportunities that are made possible by eliminating the capital expenditure component of an infrastructure design. We also want to bring the total cost of ownership for infrastructure way, way down. These computers are supposed to be a commodity now, and what a cloud regime does is turn them into a utility, and we want, like all utility consumers, to lower prices as often as possible. We also think it makes sense for businesses to pay only for what they use. Just like your power bill, you don't pay for how much compute capacity you might need; you pay for how much you're actually using, typically by the hour for AWS services. We also knew there would be a huge number of customers wanting to use a system like this.
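The pay-for-what-you-use idea Miles describes can be sketched with a toy cost model. All prices and figures below are hypothetical, purely for illustration; they are not actual AWS rates:

```python
def on_demand_cost(hourly_rate, hours_used):
    """Utility-style billing: pay only for the hours actually consumed."""
    return hourly_rate * hours_used

def on_premises_cost(capex, monthly_admin, months):
    """Upfront capital purchase plus ongoing management and administration."""
    return capex + monthly_admin * months

# Hypothetical: a $0.50/hr instance run 8 hours a day for a year,
# versus a $10,000 server with $200/month of care and feeding.
cloud = on_demand_cost(0.50, 8 * 365)       # 1460.0
onprem = on_premises_cost(10_000, 200, 12)  # 12400
print(cloud, onprem)
```

The point of the sketch is the shape of the two curves: the on-demand cost tracks actual usage hour by hour, while the on-premises cost is paid whether the capacity is used or not.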
You really have to be able to operate in a totally self-service way. Who wants to wait for the vendor to get things set up, or for things to get shipped in the mail, or for builds to complete? With AWS, everything is an API or a CLI or an easy web interface away, so deployment is as fast as you can work through the process. We've also heard from customers that because of those advantages (low cost, really only paying for what you use, and being able to control what you use) you can really grow, and sometimes even more importantly shrink, to accommodate the actual business need for infrastructure. How different that is from the old way of doing things. Because you have that ability to grow and shrink, you can now capitalize on opportunities, because you've lowered the risk of experimentation: you've made it easier to follow a small experiment with a big, scaled infrastructure if the market demands it. So we thought that was a good idea, and the market has validated it. I've seen lots of Magic Quadrant charts (Gartner does a good job of evaluating lots of different technical spaces), but I haven't seen very many of them with that large a gap between the market leader and the also-rans. We're hearing from a lot of customers that we've built something valuable, and we're continuing to invest in this business and see it grow. So how does it work? A couple of different ways. The first thing is that we've taken a very open approach to building systems: we want to make sure you aren't attached or locked into any specific software vendor or stack. We have SDKs in seven different programming languages today, and we support applications not only from open-source and free-market vendors but also from the biggest commercial software vendors in the world, who have chosen AWS as a great platform to run their most sophisticated applications. We also know that those applications are being used across businesses in every vertical, in 190 countries around the world,
delivering value from the core architecture components that run line-of-business applications, and the other kinds of core modules that drive the beating heart of big business, through big, high-scalability web applications, social media systems, and gaming infrastructure. There are also really exciting developments in big data analysis and high-performance computing, things like computational fluid dynamics, where the cloud, because of its ability to deliver high-performance infrastructure at a reasonable price in a way that's controllable directly by your developers, has enabled really unique solutions across each of these verticals. And it wouldn't just be folks in Silicon Valley; it would be folks in Silicon Beach and Silicon Prairie and Silicon Desert who all want to be able to use it, so we started to design and deploy the same AWS infrastructure across the world. We now have nine distinct regions available on four different continents, so at this point we're really able to serve customer workloads in a global way, without the kind of undifferentiated heavy lifting of the contract blocking-and-tackling and currency exchanges and all the other work it takes to build a system like this yourself: it's the same infrastructure, with the same APIs, everywhere. On top of that globally distributed infrastructure, with all its benefits for latency, locality of data, legal compliance, and geographical penetration into different customer bases, we've also built a really broad set of services. Not only do you have foundational components like EC2, where you rent computers by the hour, or S3, where we store data with phenomenally high durability, we also have higher-level aggregated services: things like Amazon CloudSearch, where you can easily create a search backend, or Elastic MapReduce, where it's easy to run and manage
Hadoop clusters on top of AWS. One of the most important categories of services, a place where there are lots of different approaches, is running database software. The biggest applications in the world depend on structured storage, and there are a myriad of different ways of building databases in the cloud. You can run standard, off-the-shelf relational database software directly on EC2, where you have root-level or administrator access to the individual computers you rent by the hour, which means you can design and deploy exactly the way you want, without any constraints on your access to the controls and the design. We've also heard from some customers that some of that is complex and that they want more managed solutions, so we built a service called RDS, where we manage relational databases on demand for customers. We've also done a bunch of the foundational work and delivered products around the NoSQL market space, both hosting NoSQL databases on EC2 and Amazon-managed NoSQL services like DynamoDB and SimpleDB. We've heard from those customers running databases on EC2 that they want to run big databases. At first, a lot of customers were powering web applications or social media systems or games, but now we're talking about the core databases that power the heart of the enterprise. So you have things like solid-state disk local to the instance, up to two terabytes on our hi1.4xlarge, as well as 244 GB of memory on the cr1.8xlarge instances for data warehousing applications; there are even instances with 48 terabytes of local disk. These are very powerful instances, all of which you can rent by the hour to deliver high performance for online and offline database workloads. We also worked really hard on our product where we host and manage databases as a service for you: RDS has all these functions for pre-configuring, automatically deploying, and managing database servers at high scale. We had support for MySQL and for Oracle, and we heard from customers that they wanted SQL Server as well, so we built this managed service to deliver and deploy those systems too. The big
advantage of RDS is that a bunch of that undifferentiated heavy lifting (installing and patching and optimizing and managing each of those basic functions) gets handled for you. One of the things we've heard from customers around RDS is that there are more advanced replication topologies, high-availability designs, and other approaches they want to be able to take, to ensure that in the most critical business scenarios they have all of the options they might want for building a highly replicated, highly available database. I wrote the SQL Server on AWS white paper a couple of years ago, and back at that time the state of the art for running SQL Server on AWS used, frankly, some somewhat older replication technologies from Microsoft: you used log shipping, database mirroring with witnesses, and a little transactional replication as the approaches for maintaining high availability for SQL Server. These are systems you can build today on EC2, where you control the SQL Server deployment and management directly, but these functions aren't directly available when building on RDS; there isn't an automatic replicated model using the managed database services. We've definitely heard from customers that there's demand for more advanced, higher-performing, more consistent access to the newest and best parts of the SQL Server story around high availability. So when we heard what SIOS had to offer, we were pretty excited, not only for our customers and partners but for the businesses they represent and all of the different systems they'll be able to build at high availability and high performance because of the technologies SIOS has brought to market. So with that, I'd love to hand it over to the SIOS team; I'm excited for them to show you how it works.

Excellent, thanks, Miles. So as Miles mentioned, what I'm going to do is review how you can achieve native Microsoft clustering within Amazon EC2. But before I get into too many of the
specifics of how you can actually make this happen, I wanted to provide a brief background on SIOS Technology: who we are and what we do. SIOS Technology, formerly known as SteelEye Technology, focuses specifically on high-availability and data-replication solutions. We've been around for well over a decade, protecting thousands and thousands of mission-critical servers and applications worldwide. What we're going to talk about today is the concept of SANless clustering, so let's jump into those details right now. For years, the native Microsoft clustering feature built right into Windows Server has been the de facto standard for high availability, primarily in physical environments. One of the challenges in deploying it is that it requires shared storage, such as a Fibre Channel or iSCSI SAN, and these storage arrays can be very costly and complex to set up and maintain. Certainly, as Miles mentioned, one of the advantages of moving to the cloud is that you don't have to worry about those infrastructure details anymore. Plus, when you look at how a traditional Microsoft failover cluster works, the shared storage is technically a potential single point of failure: if the storage goes down, you've lost your entire cluster. And as you make the move to the cloud, shared storage generally isn't available, so you aren't able to set up a native Microsoft cluster without additional software. This is exactly the piece of the puzzle that SIOS solves: we provide real-time data replication software that is fully cluster-integrated and fully cluster-aware. The next slide shows what I call a SANless cluster, also known as a shared-nothing clustering architecture. What we have here are two Windows servers, and our DataKeeper Cloud Edition technology is providing real-time, block-level data replication to keep the nodes in sync. We support both synchronous and asynchronous block-level replication, and because we're doing this at the block level, the replication engine is very efficient: very low overhead, minimal impact on the systems and the performance of your application. Just like you would expect in your traditional SAN-based cluster, you've got multiple nodes in the cluster and SQL Server running on your active node, and let's say there's a failure. Your primary server fails; as you would expect, the cluster fails over and brings your database back online on the standby cluster node. Now, because the DataKeeper replication engine is fully integrated with Microsoft clustering, it handles all of the mirror reversal and the data-replication redirection accordingly: as the primary server fails and your SQL database moves to the other node in your cluster, it starts sending data back in the opposite direction. You don't have to worry about manually keeping things in sync; that just happens automatically. By setting up a SANless cluster like this you get the best RTO, or recovery time objective, because you get very fast failover times, made possible by the native Windows Server failover clustering technologies; but you also get an excellent RPO, or recovery point objective, because with real-time replication you're not going to lose any data in the event of a failure of one of your clustered systems. Now, this diagram shows a simple two-node cluster configuration, but that's not the only way you can set this up; you can build larger clusters if you want more robust failover topologies and better protection. Here's an example of a three-node cluster where we're replicating to multiple locations (we do support multi-target replication). I've got my primary node, node 1, at the bottom left of the screen; it's the active system, and data is being replicated in real time both to node 2, at the bottom right of the screen, and to a third node for DR purposes. If the primary server fails, as you would expect, SQL fails over to the next server in the priority list, which becomes the new source of your data replication, and sources and targets are automatically reversed.
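The failover behavior just described (the next node in the priority list takes over, and replication sources and targets flip to follow it) can be sketched as a toy model. This is illustrative Python only, not the actual DataKeeper or Windows clustering API:

```python
class SANlessCluster:
    """Toy model: one active node replicates to every other node;
    when the active node fails, the next surviving node in priority
    order becomes the new replication source automatically."""

    def __init__(self, priority):
        self.nodes = list(priority)   # e.g. ["node1", "node2", "node3"]
        self.active = self.nodes[0]

    def replication_targets(self):
        # The active node is the source; every other node is a target.
        return [n for n in self.nodes if n != self.active]

    def fail(self, node):
        self.nodes.remove(node)
        if node == self.active:
            # Failover: next node in the priority list takes over,
            # and sources/targets are reversed to match.
            self.active = self.nodes[0]

cluster = SANlessCluster(["node1", "node2", "node3"])
print(cluster.replication_targets())   # ['node2', 'node3']
cluster.fail("node1")                  # primary fails
print(cluster.active)                  # node2 is the new source
print(cluster.replication_targets())   # ['node3']
```

The same model covers the two-node case: with two nodes, the single surviving node simply becomes the source with an empty target list until its partner returns.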
Targets and sources are reversed automatically to match where the workload is being run within your cluster. So let's look a little more specifically at how this is typically implemented up in EC2. For maximum availability, you want to place your cluster nodes in different Availability Zones, so here I have an example where I've got a two-node cluster, instance one and instance two, in different Availability Zones, with the data replicated in real time. Now, with a shared-nothing, or SANless, cluster architecture like this, the native Microsoft failover clustering technology leverages a file share witness. This is basically to protect against split-brain, so you don't have multiple systems trying to bring the database online at the same time. For maximum availability you ideally want to deploy this in a three-Availability-Zone configuration, and if your primary instance fails for some reason, the database fails over to the other node in the cluster and everything stays up for you. Now, some of the key advantages of deploying a SANless cluster, especially up in EC2: first, it's very easy to set up and maintain. Most folks who have been deploying SQL Server in physical environments for a number of years have already set up a Microsoft failover cluster, so you're not having to reinvent the wheel or relearn anything. It's all based on the standard Microsoft failover clustering technology that you know and love, yet we let you do this in any environment, without the shared-storage requirement. And because it's fully integrated with the native failover clustering technologies, think of it as the "set it and forget it" principle: you just set up the replication, and from there you really don't have to think about it or manage it; that's all handled through the cluster. This gives you the flexibility to enable native Microsoft clustering in AWS, and it just makes it that much easier to adopt moving mission-critical applications into an EC2 environment. Now, a little more about the technology that makes this possible. What we're talking about today is our DataKeeper Cloud Edition product. We're focusing on SQL Server today; however, because this technology provides real-time, block-level replication, it can be used to protect and cluster any application or service you would normally protect within a Windows Server failover cluster: SQL, SharePoint, Microsoft Dynamics, Microsoft Lync, you name it.
Highly available file servers and more. This is a very optimized, high-performance replication engine. We're doing this at the block level, as I mentioned, so there's very low overhead; we don't have to worry about things like file permissions, locked files, or open files. We sit beneath the file system, and any data written to the source system is replicated in real time to one or more targets that are part of the cluster. We support both synchronous and asynchronous replication, so you can select the mode that makes the most sense based on your business's recovery time and recovery point objectives. It's also very easy to use and set up; I'll show you during the upcoming demonstration, but we have a very simple MMC (Microsoft Management Console) interface, and it's literally a three-step wizard to set up the data replication and get that mirror integrated into your normal failover cluster. Now, I often get asked what type of overhead is involved with replication, and we've done a lot of optimization to make it very minimal. The diagram up here on the slide compares three things. First is a standalone SQL instance without any replication, basically non-clustered. Then, with the SIOS DataKeeper replication (this is with synchronous replication turned on), there is minimal overhead: we're looking at ballpark ten percent or so for full data protection and full clustering. Another technology a lot of folks look at is the AlwaysOn Availability Groups feature built into SQL Server 2012, and with the same workload, using their version of synchronous replication, it has significant overhead: we're talking sixty to seventy percent, ballpark, depending on the workload. So one of the key advantages, aside from the ease of use and the cluster integration, is that with DataKeeper Cloud Edition you get the best of both worlds.
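The synchronous/asynchronous trade-off mentioned above can be made concrete with a toy model: a synchronous mirror applies each write to the target before acknowledging it, so no committed write is lost, while an asynchronous mirror acknowledges immediately and ships the write later, so in-flight writes can be lost if the source fails. This is illustrative only, not DataKeeper internals:

```python
class Mirror:
    """Toy model of a replicated volume in synchronous or async mode."""

    def __init__(self, synchronous):
        self.synchronous = synchronous
        self.source, self.target, self.in_flight = [], [], []

    def write(self, block):
        self.source.append(block)
        if self.synchronous:
            self.target.append(block)     # ack only once the target has it
        else:
            self.in_flight.append(block)  # ack now, replicate later

    def drain(self):
        """Asynchronous replication catching up in the background."""
        self.target.extend(self.in_flight)
        self.in_flight.clear()

    def writes_lost_on_crash(self):
        # RPO in blocks: committed writes the target never received.
        return len(self.source) - len(self.target)

sync_m, async_m = Mirror(True), Mirror(False)
for block in range(5):
    sync_m.write(block)
    async_m.write(block)

print(sync_m.writes_lost_on_crash())   # 0 -- synchronous: RPO of zero
print(async_m.writes_lost_on_crash())  # 5 -- async: in-flight writes at risk
```

The roughly ten percent overhead quoted in the talk is the price of that synchronous acknowledgment round trip; asynchronous mode trades a small, usually transient, data-loss window for lower write latency.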
You get the full data protection and full availability for your mission-critical SQL environment, but you're not sacrificing performance in order to achieve availability and data protection. There are a number of additional benefits to going with a failover-clustered instance of SQL with DataKeeper as opposed to Availability Groups. First of all, because we're clustering and protecting the entire instance, it supports unlimited databases, and it also protects your system databases: things like master and msdb, your SQL logins, your SQL Agent jobs; all of that is replicated and protected. If you go down the route of Availability Groups, you're going to have to maintain those things on your own (your SQL logins, your SQL Agent jobs, you'll have to keep those in sync yourself), so there's higher administrative overhead. By going with a failover cluster instance and DataKeeper, all of that is just handled for you. It also replicates data outside of SQL Server itself: Availability Groups replicate only certain databases, whereas with DataKeeper, because we're doing disk-level, block-level replication, we replicate whole data volumes on that Windows server (your D drive, your E drive, your F drive, whatever you set up to replicate), so if your application has data that lives outside of the SQL database, that's protected as well. And the final bullet point here covers replication efficiency; as we mentioned on the previous slide, with a block-level disk replication technology you get much less overhead and much better overall system performance. Now, we have a number of customers who have adopted this model, so I'm going to take you through a quick case study, and then we'll jump into a live demonstration. One of our customers is a software provider; they provide software primarily aimed at retailers and distributors, and they launched a new application. One of the requirements was that they wanted to move this workload up into EC2, but they were familiar, from a number of years of deploying traditional physical servers with back-end SAN-based SQL clusters, and they wanted to leverage the existing skill set and expertise they had built up over the
years. That said, during this transition, this is a mission-critical system: they needed to maintain and ensure the availability of their SQL 2012 back end at all times. One of the motivators for moving to this environment was to eliminate the capital-expenditure-type costs (servers, SANs, power, cooling, all of that) while at the same time ensuring maximum availability. So they looked for a solution that would enable the native Microsoft failover clustering they know and love and have been using for years, but make it possible up in EC2, and that's why they turned to SIOS to help make that happen. What they came up with was a multi-Availability-Zone cluster. This diagram will look fairly familiar from what we've seen before, but they've got a two-node, multi-Availability-Zone, multi-subnet SQL 2012 cluster, and they're actually running multiple SQL instances: one SQL instance running in the first Availability Zone and a second SQL instance running on the other cluster node in a different Availability Zone, so there's replication happening in both directions; each node, if you will, is a backup for the other. Then, in the third Availability Zone, they have their file share witness, which is the second opinion, or the traffic cop if you will, to ensure that failover happens at the appropriate times. So what they implemented is a two-node SANless cluster up in EC2: the primary server fails, the SQL instance moves to the standby system, and data replication happens in the opposite direction. Now, during the time one of your systems is down, DataKeeper maintains a bitmap that tracks the data being changed, so at any point in time we know exactly what data has changed and what hasn't. When the failed node comes back online, data replication picks up where it left off: a partial resync is automatically initiated to get that failed node back up to speed and in sync as quickly as possible. So the solution they implemented, again, the specifics: a two-node Windows Server failover cluster across multiple Availability Zones, running two SQL instances in an active-active configuration (one SQL instance on node A, a different SQL instance on node B), with a file share witness in their third Availability Zone, leveraging DataKeeper Cloud Edition as the real-time replication engine, fully integrated with the failover clustering, to keep the cluster nodes in sync. Essentially, DataKeeper Cloud Edition is making those local, independent disks look like a single piece of cluster storage. Some of the key benefits they were able to realize: certainly a much better total cost of ownership, since they didn't have to invest in new hardware; they're actually achieving a higher level of availability than before, because the shared storage array is no longer a potential single point of failure; and because of this they're able to easily and flexibly deploy new applications and services in this environment, with the peace of mind that they are ensuring maximum uptime and will have highly available failover clusters whenever they deploy new applications and databases. And this was all done very quickly and easily; the whole solution was deployed in well under a day. So, to recap: we've reviewed the methods for ensuring high availability and forming failover clusters using native Microsoft clustering technology up in Amazon EC2; we've talked about the DataKeeper Cloud Edition product, which provides real-time, block-level data replication fully integrated with native Microsoft clustering; we've talked about the different availability options and the differences, if you're looking to protect SQL 2012 specifically, between failover cluster instances and AlwaysOn Availability Groups; and we've reviewed a case study. What I'm going to jump into next is a brief live demonstration, so I'm going to go ahead and share out my screen; this will take just a moment to activate.
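The bitmap-driven partial resync described a moment ago can be sketched like this. Again, illustrative Python, not the DataKeeper implementation:

```python
class BitmapMirror:
    """Toy model: while the target is offline, the source marks changed
    blocks in a bitmap; when the target returns, only those blocks are
    resent (a partial resync) instead of recopying the whole volume."""

    def __init__(self, num_blocks):
        self.source = [0] * num_blocks
        self.target = [0] * num_blocks
        self.target_online = True
        self.dirty = set()               # bitmap of changed block numbers

    def write(self, block_no, value):
        self.source[block_no] = value
        if self.target_online:
            self.target[block_no] = value   # normal real-time replication
        else:
            self.dirty.add(block_no)        # remember the change for later

    def reconnect(self):
        """Partial resync: ship only the blocks that changed, then
        resume real-time replication. Returns the blocks resent."""
        resent = len(self.dirty)
        for block_no in self.dirty:
            self.target[block_no] = self.source[block_no]
        self.dirty.clear()
        self.target_online = True
        return resent

mirror = BitmapMirror(1000)
mirror.write(1, "a")                   # replicated in real time
mirror.target_online = False           # standby node goes down
mirror.write(2, "b")
mirror.write(3, "c")
print(mirror.reconnect())              # 2 -- only two blocks resent, not 1000
print(mirror.target == mirror.source)  # True -- volumes back in sync
```

The payoff is proportional cost: bringing the failed node back up to date costs time proportional to the data changed during the outage, not to the size of the volume.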
So as it activates, you should be seeing a screen that says "connecting to server," and it should be loading the desktop we'll be sharing during the demonstration. I'm going to show you a real, live, two-node Windows Server failover cluster running up on EC2 that is DataKeeper Cloud Edition enabled: two independent instances, no shared storage involved, with DataKeeper Cloud Edition providing that real-time, block-level data replication. If we take a look at the screen, I've got my Failover Cluster Manager open, and down at the bottom I have the DataKeeper interface open. What we're looking at is a two-node cluster, node A and node B, and I've got a couple of clustered resources. First, I have the SQL Server clustered instance itself (that's the top resource in this list), and then I'm also clustering the Microsoft DTC. This is an important differentiator, because a number of applications depend upon DTC if they leverage distributed transactions, and this is one reason why, if your application does depend upon distributed transactions, then for maximum protection you're going to need to deploy a failover cluster instance as opposed to an Availability Group: Availability Groups are not compatible with distributed transactions at this point in time. So here I've got my two-node cluster. If I take a look at the SQL Server, this is what Microsoft would call a multi-site cluster, because the different nodes are in different Availability Zones and reside on different subnets, so as failover happens, it also updates DNS to redirect to the appropriate IP. Now, one difference you'll notice, if you've ever set up a traditional failover cluster in the past, is the disk drive section. In your traditional physical SAN-based cluster world, this would be a cluster disk, also known as a physical disk resource, essentially your shared LUN on that SAN. Now, as
I mentioned before with what the data keeper cloud edition does is it abstracts the storage and it makes two or more pieces of independent storage look like a single cluster disk and that’s what we see here I’ve got this data keeper volume H which looks like a single piece of cluster storage but yet it’s an H Drive on node a and an equally sized h drive on node b and we are keeping those two volumes in sync in real time data is being replicated right now from be back to server a because the current owner of the database it’s

running on node B right now in the available storage pool you see the same thing so here you can see the clustered disks again the only difference here from your you know physical fan base cluster is you’re going to see these data keeper volumes as opposed to your physical disk resources and then if we take a look at the cluster summary screen be one other difference the second difference here is the quorum configuration so when you’re in you’re in a traditional sand-based cluster you would set this up with node and discs majority so it would use a small lawn on that sand as the third vote towards quorum in this case because there are no shared discs we’re doing a stainless cluster configuration we leverage one of the other built-in options in the cluster which is node and file share majority so this case there’s a simple file share witness which in this case we placed in a third availability zone from maximum protection and uptime simple file share that’s part of the cluster and it acts as the third vote towards cluster quorum here so for all intents and purposes you know this looks feels and acts like your traditional sand-based cluster but we’re now making this all possible with software based replication that’s fully cluster integrated so I’m going to switch we’re going to take a look now at the data keeper interface and then we’ll go back and all will will do a failover of the clustered sequel instance from one cluster node to another show how the data replication is automatically reversed so the data keeper interface is a very easy to use MMC snapping and what we’re looking at here is what we call the server overview report where you can see the different systems involved so here I can see my node a and my node B and the different drive letters that are attached I’ve got a d drive and an H Drive and you can see the state of the mirror they’re both actively being mirrored and you can see what their role is so what you’ll notice here actually is i have 
What you'll notice is that I actually have replication happening in both directions: my H drive is replicating from B to A, and the D drive from A to B, so I really have an active-active cluster. My SQL instance is on one node, the DTC is on a different node, and they're each acting as a backup for one another; you can have different drive letters replicating in different directions, and that's perfectly fine. If you drill down into a job, which is what we call a mirror here, you can see the specifics: for any given mirror you can see the source, the target, the drive letter being mirrored, the IP endpoints, and the mirror state, so you get full information on what is happening with data replication. The nice thing about this technology is that to set up replication you just create a job; it's a three-step wizard: you pick your source server, you pick your target server, and you pick a couple of mirror options, such as whether you want synchronous or asynchronous replication. From there you really don't have to go back into the DataKeeper interface if you don't want to; going forward, it all gets driven through the standard Failover Cluster Manager interface.

So let me go back now and do a switchover: I want to move my SQL instance from node B back to node A. Just like you would normally, you right-click and say move this to node A, and it initiates a switchover. As part of this it's also going to update the IP, because we're failing over to a SQL instance running in a different subnet. And because of the integration with DataKeeper Cloud Edition, the cluster tells DataKeeper to automatically switch the direction of replication: instead of the H volume replicating from B to A, the mirroring is now happening from A over to B, with A as the new current owner. Everything took a few seconds here, but everything is back online and my SQL database is safely and happily living on node A.
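The switchover behavior just demonstrated can be modeled in a few lines. This is a hypothetical sketch of the idea, with invented names (`Mirror`, `switchover`), not the actual DataKeeper interface:

```python
# Hypothetical model of how a cluster-integrated mirror reverses its
# replication direction on switchover. Class and field names are
# illustrative only -- this is not the actual SIOS DataKeeper API.
from dataclasses import dataclass

@dataclass
class Mirror:
    volume: str               # drive letter being mirrored, e.g. "H"
    source: str               # node currently accepting writes
    target: str               # node receiving replicated blocks
    synchronous: bool = True  # sync vs. async, as picked in the wizard

    def switchover(self, new_owner: str) -> None:
        """When the cluster moves ownership to the target node,
        swap source and target so replication runs the other way."""
        if new_owner == self.target:
            self.source, self.target = self.target, self.source

# The demo's H volume: owned by node B, replicating B -> A.
h = Mirror(volume="H", source="NodeB", target="NodeA")

# Failover Cluster Manager moves the SQL instance to node A ...
h.switchover("NodeA")

# ... and the mirror now replicates A -> B automatically.
assert (h.source, h.target) == ("NodeA", "NodeB")
```

The key point is that the reversal is driven by the cluster's choice of owner, so an administrator never has to touch the replication settings during a failover.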
If I go back into the DataKeeper interface, you'll see that for this same mirror it now shows node A as the source and node B as the target, so we're just reconfirming that the replication was switched over automatically through the seamless cluster integration. And with that, that wraps up the demonstration; I'll go ahead and stop desktop sharing and we'll bring it back to the slides. We just want to thank everybody for their time and interest today in learning how to deploy mission-critical SQL-based applications up in EC2: we've reviewed the DataKeeper Cloud Edition technology as well as gone through a live demonstration. So I guess at this point, Sherri, I'll turn things

back over to you. Fantastic, thank you so much, Tony; that was a great overview and a great demo really showcasing the SIOS solution there, and Miles, thank you for the overview on AWS. Folks, if you joined us midstream, we do invite you to submit questions, and we've got some queued up, so let's jump in. I'm going to point this back to you, Tony: the question is, is a domain required to implement this type of configuration or solution?

Excellent question. The configuration we looked at here today with DataKeeper Cloud Edition is based on the native Microsoft failover clustering technology, and one of the requirements of a Windows Server failover cluster is that it needs to live in a domain environment, so the short answer is yes. If you have specific needs to do this in an environment without Active Directory, we can take that offline and talk through some different options with you.

Great. Another question I'll point back to you, Tony: does the SIOS solution support SQL 2008 R2 Standard?

Yes. With the DataKeeper solution we're really application-agnostic: we're providing real-time block-level replication at the disk level. As we saw in the demonstration today, we were replicating the D drive and the H drive, but honestly the data inside those drives could be anything; any application's data can be supported. So really it comes down to: can you cluster it within failover clustering? If so, DataKeeper can replicate that data for you.

OK, great. Is there a maximum to the number of nodes?

That's all controlled through failover clustering, so whatever Windows Server 2008 or 2012 has as its maximum number of nodes, that would be your upper limit. It's really controlled through Microsoft failover clustering, not through DataKeeper.

OK, great. Tony, another question coming your direction, and feel free to jump in, Miles: does this
technology support the Distributed Transaction Coordinator on Amazon EC2?

Great question, and the answer is yes; that's what we demoed here today. I had a SQL cluster and we were protecting both a clustered SQL instance and the Microsoft DTC, the Distributed Transaction Coordinator.

All right. Are there alerts or notifications that the SIOS application can kick off for certain events?

Yes, great question, and the answer is yes. All of the activity gets logged into the Event Viewer through the standard Windows logs, and each event carries a specific event ID, so you can key off various event IDs to be notified when those types of events happen. We just integrate with the native Windows logging, and from there you can key off IDs and get notified.

And this is a question where I think there's sometimes confusion around availability zones from an AWS perspective and what that means, but the question here is: can cluster nodes span regions as well as availability zones on AWS? An example would be US West Coast and East Coast, different geos.

Sure thing, yes. In that case it's really just a matter of setting up the appropriate connectivity between the different locations, so generally that would be done through a VPN tunnel or some type of direct connection between the locations.

This is Miles, real quick, with one more component there: Amazon Web Services has recently changed the pricing associated with data transfer at the regional level. Normally, if you built a system that spans multiple regions, you'd be paying the normal internet egress charges for outbound data replication from one region to another; now we've reduced those prices, in some cases very significantly. The standard internet egress rate, for example from US East, the east coast of the United States, starts at twelve and a half cents a gigabyte; if you are moving data into another region, it's only two cents a gigabyte, which is a vast improvement. So these cross-regional replicated topologies not only function using this SIOS technology, they're also cost-effective.

Great point, Miles, thanks. Another question I'll go ahead and put back to you, Tony: is unlimited databases really unlimited? For example, does it work with thousands of databases on a single server? Maybe you can clarify for folks what those limitations might be.

Sure, sure. From a DataKeeper perspective, again, we're doing block-level replication, so whether there's one database or a thousand databases, it's all the same to us. Really, when we say unlimited databases, it's however many databases you can run on SQL Server itself. I don't know off the top of my head what the upper limit on the number of databases in SQL itself is, but with this configuration you could protect however many databases you would run on SQL in a non-clustered or standalone environment; the same rules apply here. One of the reasons we brought it up that way and phrased it as unlimited is that with Availability Groups there are limits to the number of databases that can be protected in that configuration.

Thanks for clarifying. Another question here, sort of in the same genre: what's the smallest size EC2 instance supported with DataKeeper?

Sure. The configuration we were looking at here today, let me pull that up; I believe it was, and Miles, correct me if I'm saying anything wrong here, an m1.small. When setting up a cluster, you assign all the cluster-related IPs, so the cluster IP, the IP of the clustered SQL instance, and so on, as secondary IP addresses on the instance, and the micro, I believe, only supports one or two secondary IPs; Miles can probably correct me there.
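To put the two transfer rates Miles quoted a moment ago into perspective, here's a quick back-of-the-envelope calculation (Python, with the webinar-era prices hard-coded; current AWS pricing will differ, so always check the pricing pages):

```python
# Back-of-the-envelope comparison of the quoted transfer rates:
# standard internet egress from US East versus the reduced
# region-to-region rate. Prices are as quoted in this webinar and
# change over time -- check current AWS pricing before planning.

INTERNET_EGRESS_PER_GB = 0.125  # USD/GB, quoted starting internet rate
INTER_REGION_PER_GB = 0.02      # USD/GB, quoted region-to-region rate

def monthly_cost(gb_replicated: float, rate_per_gb: float) -> float:
    """Cost of replicating a given volume of changed data in a month."""
    return gb_replicated * rate_per_gb

# Replicating 500 GB of changed blocks per month between two regions:
assert monthly_cost(500, INTER_REGION_PER_GB) == 10.0     # cross-region
assert monthly_cost(500, INTERNET_EGRESS_PER_GB) == 62.5  # via internet
```

At these rates, feeding a cross-region mirror costs roughly one sixth of the equivalent internet egress.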
If you need more secondary IPs than that, then you'd have to go up to the next size, which I believe is the m1.small.

Yeah, Tony, you got that exactly right. Also, because of the CPU timing behavior of the t1.micro instance type, it can be a very useful place to practice installation or system remediation, workloads where you're training users on SQL on top of micro instances, but there are very few cases we've seen in production where SQL servers are running on the micro class. So we recommend at a minimum the small instance type, and even that only delivers 1.7 gigabytes of RAM to the workload, which is less than we see most production databases operating on; but there shouldn't be a reason on the SIOS side why you couldn't run on any of the specific instance types. And the demo we saw today ran the cluster nodes, node A and node B, as m1.smalls.

OK, great. A follow-on question specifically around DataKeeper, and I'll point this back to you, Tony: is DataKeeper limited to AWS?

No, it's not. You can also use and deploy DataKeeper in your own data center, on physical servers or on virtual servers; it's a very flexible technology. It works very nicely, and we have many customers doing this up in AWS, but you could do a very similar thing with two physical systems in your own data center, so DataKeeper works both on-premises and in the cloud, in both physical and virtual environments.

Right, and we had another question which you just alluded to: can the solution cluster to a non-cloud server? The answer to that is yes. Again, any time multi-site configurations and things like that are involved, it's really a matter of ensuring you've got the appropriate network connectivity, VPN tunnels, and bandwidth between the sites.

With a real-time replication solution, it's not so important how big your data set is; it's more important how rapidly the data changes, what I call the rate of change. You want to ensure that you've got sufficient bandwidth so that as quickly as the data gets written on your active server, it can be replicated across the network in real time for maximum data protection, so it really comes down to rate of change versus bandwidth.

All right, thank you. Folks, we've got time for just a couple more questions, and Tony, this one is back around licensing: can you provide some insight into how the licensing for DataKeeper works?

Sure, definitely. DataKeeper is licensed on a per-cluster-node basis, so for the demonstration we saw here today, where I had a two-node cluster, that would be two licenses of the DataKeeper software. If you want more specifics, or any quotes, things of that nature, I'd suggest you email amazon@us.sios.com, and a representative would be very happy to help you out with all that information.

Terrific. Well, I'd like to thank you both, Tony and Miles, for a great presentation today, really providing detailed information around SANless SQL failover clustering on AWS. And I'd like to invite folks, before you jump off our webcast today, to take a very brief survey; we do appreciate your feedback, as it helps us improve our webinar series. So with that, I'd like to thank Tony from SIOS, our technology partner, as well as Miles, our AWS solutions architect, for a great presentation today, and thank you, attendees, for your time; we appreciate it.

Thank you. Thank you very much.
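As a closing technical footnote, the rate-of-change-versus-bandwidth rule from the Q&A boils down to a one-line check. This is a hypothetical helper, and the 70% headroom factor is an assumption for illustration, not a SIOS recommendation:

```python
# Quick feasibility check for real-time replication: the inter-site
# link must carry data at least as fast as the active server writes
# it, or the mirror will fall behind. The 0.7 headroom factor is an
# illustrative assumption reserving capacity for overhead and other
# traffic -- it is not an official sizing figure.

def link_keeps_up(change_rate_mbps: float, link_mbps: float,
                  headroom: float = 0.7) -> bool:
    """True if the peak write rate fits in a conservative share of the link."""
    return change_rate_mbps <= link_mbps * headroom

# A 100 Mbps site-to-site link comfortably absorbs 40 Mbps of changes ...
assert link_keeps_up(40, 100)
# ... but a sustained 90 Mbps of changes would fall behind.
assert not link_keeps_up(90, 100)
```

Note that the check uses the rate at which data changes, not the total size of the data set, exactly as described above.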