NOTE: This was from my old A&L Enterprises blog – but I thought it was interesting…
Recently I attended a KC DAMA event where Karen Lopez (@datachick) presented on NoSQL and the cloud – in terms of database design. Among the many concepts she covered (some of which I may blog on later) were the implications of Microsoft Azure – SQL Server in the cloud.
There are compelling advantages to the idea of putting your database in the cloud – so that Microsoft manages the SQL Server database, not you. This is in line with the broader trend of commoditizing technology – making it more of an appliance than a complex installation of hardware and software. Many IT departments (and the vendors that support them) are aiming to reduce operating costs by automating and outsourcing many routine items.
When an entire service is in the cloud – such as Workday, Evernote, Twitter, etc. – you typically interact with it via a user interface and/or a set of APIs (which hopefully are somewhat stable). You are somewhat insulated from the details, because that interaction layer is intentionally there to shield you from them. In many cases the changes they make are something you either don't notice or get used to.
It’s a little different when a piece of your infrastructure is now out of your control. In a perfect world you shouldn’t care which database it is or what version it is, because you interact with it through an interface that doesn’t care either. In reality that’s never true – the idea of being able to easily swap out a database is just that, an idea. It’s rarely done in practice, because your applications typically depend on specific features of that particular database.
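As a toy illustration of how that dependency creeps in (my own sketch, not anything from the talk): even something as routine as paging an ordered result set is written differently per database engine, so "portable" application code ends up generating vendor-specific SQL somewhere.

```python
# Hypothetical sketch: the same "one page of rows" request rendered for
# different engines. The function and dialect names are my own invention.

def paging_clause(dialect: str, offset: int, limit: int) -> str:
    """Return the dialect-specific suffix for paging an ordered query."""
    if dialect in ("sqlserver", "oracle"):
        # SQL Server 2012+ and Oracle 12c+ support this ANSI-style syntax
        return f"OFFSET {offset} ROWS FETCH NEXT {limit} ROWS ONLY"
    if dialect in ("mysql", "postgres", "sqlite"):
        return f"LIMIT {limit} OFFSET {offset}"
    raise ValueError(f"unknown dialect: {dialect}")

base = "SELECT id, name FROM customers ORDER BY name "
print(base + paging_clause("sqlserver", 20, 10))
print(base + paging_clause("mysql", 20, 10))
```

Multiply that by stored procedures, hints, and proprietary data types, and the "easily switch databases" concept quickly evaporates.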
The same can often be true of a new version of a database – a lot can change. It’s one thing if they improve existing features and/or add new ones – but what if they deprecate features you rely on? What if components on your side – drivers, middleware, etc. – have to be upgraded to use it? What if you have to do conversion and setup work just to keep using the database?
What do you do today when a new version of a database comes out? Do you install it immediately, or do you wait a while? Do you skip versions, or install every one? Do you test extensively, so that you understand the implications across your enterprise?
If your database is now in the cloud, you don’t control when the upgrade happens – the provider does. You don’t get to control how long you have to test it. You need to follow their plan, because they’re in charge of that infrastructure. You will need to keep up or things will stop working. And you will have to deal with whatever consequences upgrading one component of your infrastructure has on the rest of it (it could be a cascade of interdependent components).
It could get really fun if you had different parts of your infrastructure in the cloud, each upgrading on its own schedule. Managing that could be genuinely challenging, with all kinds of implications. It could be that keeping up with all these changes costs more than you save by outsourcing the infrastructure.
There is a bright side to this – in that it may change the way you operate:
- You may do a better job keeping things up to date – so that you can innovate more quickly.
- You may be more intentional about which features of the database you use – so you reduce your dependency.
- You may intentionally use isolating layers (either internally developed or third party) as a rule instead of as an exception.
It may be that this approach pays more dividends over the long run – but it starts with a loss of control you may not be prepared for.
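To make the "isolating layer" idea concrete, here is one minimal sketch (the interface and names are my own, not from the talk): application code talks only to a small repository interface, and only the concrete implementation knows which database sits behind it. I use Python's built-in sqlite3 as a stand-in backend; a SQL Server or Azure implementation would be a drop-in replacement.

```python
import sqlite3
from typing import Optional, Protocol

class CustomerRepository(Protocol):
    """The only surface the application sees – no SQL, no driver types."""
    def add(self, name: str) -> int: ...
    def find_name(self, customer_id: int) -> Optional[str]: ...

class SqliteCustomerRepository:
    """One concrete backend. Swapping databases means writing another
    class with this shape – the application code never changes."""
    def __init__(self) -> None:
        self._conn = sqlite3.connect(":memory:")
        self._conn.execute(
            "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name: str) -> int:
        cur = self._conn.execute(
            "INSERT INTO customers (name) VALUES (?)", (name,)
        )
        self._conn.commit()
        return cur.lastrowid

    def find_name(self, customer_id: int) -> Optional[str]:
        row = self._conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return row[0] if row else None

# The application depends on the Protocol, not on any vendor
repo: CustomerRepository = SqliteCustomerRepository()
new_id = repo.add("Karen")
print(repo.find_name(new_id))
```

The layer doesn't make switching free – someone still has to write and test the new backend – but it confines the vendor-specific surface to one place, which is exactly what a forced cloud upgrade makes valuable.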