
August 23, 2010

Data Infrastructure Architecture [2/2]

Yesterday I wrote a fairly long blog post which started to delve into some of the questions addressed by data infrastructure architecture.  After writing more than 2000 words I decided it was time to give you all a break and for me to have some time away from the keyboard.

So here’s the second instalment in which I’ll outline the remaining concerns for the data infrastructure components of data architecture.

Data platform performance management

One of the key success factors for a stable DBMS is that it gives business application users the data they need in a timely manner.  Making sure the data platform has the system bandwidth required to handle its workload is therefore very important, and managing this requires supporting infrastructure and processes that can answer a number of questions.

Is the server sized to cope with the existing workload?  What components of the physical server environment are under stress?  What are the system bottlenecks that are causing performance stress?  What sort of query latency are users having to endure?  Is the server running into deadlocks or managing huge blocking trees?  Is server stress a cyclic or ongoing phenomenon?  Is the storage layout of databases appropriate for the workload?  What is the business impact of data platform stress?  Is data platform performance meeting service level agreements?  What (if any) are the penalties for service level breaches?  How do you balance the cost of service penalties against the cost of prevention?  If database queries have to be tuned, who is responsible for this?  Who manages the engagement of third party ISVs whose queries are running slowly?  Do they have SLAs to comply with or service penalties against them?  How is all this being measured?  Are these measurements sampled from real-time data-points or are you sampling averages?  Do you have a performance data warehouse in place?  Can you detect trends in performance over time?  Can trends be linked to any specific configuration changes or business changes?

Not bad for a single stream of consciousness… even my head’s spinning.  Suffice it to say that having some structure about how performance is measured and managed is a key component of selling data platform infrastructure architecture as a service to your clients.
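To make that a little more concrete, here’s a minimal sketch of the kind of plumbing involved, using SQL Server’s dynamic management views.  The DBAPerf database and WaitStatsHistory table are hypothetical names of my own invention; the idea is simply that scheduled snapshots of wait statistics give you the raw material for a performance data warehouse, while a point-in-time query over current requests shows blocking as it happens.

-- Minimal sketch only: DBAPerf is a hypothetical stand-in for wherever
-- you keep performance history.
USE DBAPerf;
GO

-- History table for periodic wait-stats snapshots.
IF OBJECT_ID('dbo.WaitStatsHistory') IS NULL
    CREATE TABLE dbo.WaitStatsHistory (
        capture_time   DATETIME     NOT NULL DEFAULT GETDATE(),
        wait_type      NVARCHAR(60) NOT NULL,
        waiting_tasks  BIGINT       NOT NULL,
        wait_time_ms   BIGINT       NOT NULL,
        signal_wait_ms BIGINT       NOT NULL
    );
GO

-- Capture a snapshot; schedule this (e.g. via SQL Agent) so trends can
-- later be correlated with configuration or business changes.
INSERT INTO dbo.WaitStatsHistory (wait_type, waiting_tasks, wait_time_ms, signal_wait_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;
GO

-- Point-in-time view of blocking: who is blocking whom right now?
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;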

Data platform security & access control

Data platform security design ultimately answers two questions.  Who gets to access your data?  And under what circumstances do they get to enjoy that privilege?  Of course, it’s not that simple.  There are a number of far more detailed questions that need to be answered, but who/what and where/when/how are the items that business people understand.  That is: Who is able to query the database?  What data can each person access and what can they do to it?  Are there restrictions on where the user can access their data (e.g. home/work/mobile access scenarios), when they can access it (e.g. limitations on end-user access during batch processing or database maintenance windows) or how they access it (e.g. from named applications vs. end-user tools such as MS Access or Excel)?

At a more technical level we can talk about attack surfaces, defensive coding mechanisms, authentication, identity maintenance and management, exploit management, application-level query validation, roles, schemas, encryption, directory integration, firewalls, DMZs, code/query execution policies, identity impersonation, SQL injection, exception logging and auditing, patch management and more.  Ultimately these technical terms all address the fundamentals of who/what and where/when/how.
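As a small illustration of how those terms map back to who/what, here’s a sketch using roles and schemas.  All of the names (SalesReaders, the Sales schema, DOMAIN\jbloggs) are made up for the example.

-- Illustrative sketch: the "who" becomes a role, the "what" becomes
-- schema-level permissions.  All names here are hypothetical.
CREATE ROLE SalesReaders;

-- Read-only access to one schema, and nothing more.
GRANT SELECT ON SCHEMA::Sales TO SalesReaders;

-- Explicitly deny destructive operations so membership in other roles
-- can't accidentally widen the attack surface.
DENY INSERT, UPDATE, DELETE ON SCHEMA::Sales TO SalesReaders;

-- Windows authentication keeps identity management in the directory.
CREATE LOGIN [DOMAIN\jbloggs] FROM WINDOWS;
CREATE USER [DOMAIN\jbloggs] FOR LOGIN [DOMAIN\jbloggs];
EXEC sp_addrolemember 'SalesReaders', 'DOMAIN\jbloggs';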

Data platform backup & recovery

Once you have a bunch of people happily transacting away in your database, they’re going to have some expectations about that data still being there when they come in to work tomorrow.  I know… just plain unreasonable, isn’t it!  Sadly, IT professionals just have to get over the fact that business people actually want to use the applications we IT people build for them, which means that data still being around the next day matters.  To this end, data platforms need to be backed up on a regular basis, as data loss can and does occur.  Backup planning requires that you have the capacity to store at least some of your backups for a long period of time, that the data in those backups is not compromised by page-level corruptions, and that you understand the full path your data travels between your database and its final resting place – typically a third party tape storage centre.  You need to make sure that if your backup medium changes, old backups are migrated to the new medium.  You also need to make sure that backup media are secured, both in storage and in transit, through the entire journey.  You need to define your roles and responsibilities around backup management.  Are your DBAs responsible for changing tapes?  Or is this something that gets managed by your server team?  What about archive management tools?  Are you going to use SQL Server specific backup agents?  Or will you back up to disk and then back the files up to tape?  Compression and encryption are additional factors in your choice of backup solution design.
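By way of example, here’s what the disk-first option might look like in its simplest form.  The database name and path are hypothetical, and WITH COMPRESSION assumes an edition that actually supports backup compression.

-- Sketch only: a full backup to disk with page checksum verification.
-- SalesDB and the path are hypothetical names.
BACKUP DATABASE SalesDB
TO DISK = N'G:\Backups\SalesDB_Full.bak'
WITH CHECKSUM,    -- verify page checksums as pages are read
     COMPRESSION, -- smaller files to sweep off to tape afterwards
     STATS = 10;  -- progress reporting every 10%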

However, it’s not enough just to do backups on a regular basis.  I have learnt from bitter experience that you need to be able to verify that your backup solution can be restored as well.  What are the processes for getting your backup media back from offsite storage?  What does that process cost?  Who is responsible for mounting the tapes for restores?  What impact does restoring data have on ongoing operational processes?  Do you need to do your restores over the top of existing data, or even to a copy of the database on the same server?  Is there enough disk space to support the restore?  What is the objective of the restore?  Are you trying to get at data that went missing in one table?  Or is the whole of the original database compromised?  What is the maximum acceptable data loss in your service level agreement?  What is your SLA for maximum acceptable time to system restoration?  How is the decision to perform a restore made?  Who makes it?  What are acceptable grounds for database restores?
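As a hedged sketch of the “copy alongside production” approach, the following verifies a backup and then restores it under a new name so a missing table can be recovered without touching the original.  All database, logical file and path names are hypothetical – check your actual logical file names with RESTORE FILELISTONLY first.

-- Confirm the backup is readable before relying on it.
RESTORE VERIFYONLY
FROM DISK = N'G:\Backups\SalesDB_Full.bak'
WITH CHECKSUM;

-- Restore to a side-by-side copy rather than over the top of production.
RESTORE DATABASE SalesDB_Recovery
FROM DISK = N'G:\Backups\SalesDB_Full.bak'
WITH MOVE 'SalesDB'     TO N'G:\Data\SalesDB_Recovery.mdf',
     MOVE 'SalesDB_log' TO N'H:\Logs\SalesDB_Recovery.ldf',
     RECOVERY;

-- Once the copy is online, pull the missing rows back across with an
-- ordinary INSERT ... SELECT against the recovered copy.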

Backup and Recovery Services (BURS) is a complex area of technology where the broad sweeping designs aren’t what will cause you pain.  It’s all the niggly little details that hurt you, and it’s important to ensure that you’ve got the right heads focused on getting them right.

Data platform license management

This is kind of an odd one… it sort of belongs under data platform selection but it also has implications for support and for total solution costing.  The choice of licensing models is an important one, and has much broader reach than you might think.

How do we go about selecting a licensing model then?  Well… let’s start with features.  Different editions of your data platform will have different features.  If there’s a killer feature that you MUST have, or that pays for the price difference between editions, then you should make the business case and request the more capable edition.  If the feature is not going to provide sufficient ROI, then you need to make sure the project understands the additional costs that will be incurred in delivering the same required results.

Another important consideration is existing agreements.  For example, if your business has a Microsoft Enterprise Agreement that gives everyone SQL Server CALs as part of their bundle, then you can license SQL Server in “Per Server” mode rather than “Per CPU”.  This is usually much cheaper, but has some limitations – namely, that you cannot then use that server to store data accessible from the internet unless you also buy an Internet Connector license.

Do you need multiple editions of MS SQL Server on the same physical host?  If so, you may find that some editions require a separate license for each DBMS instance, while others might cover any number of instances up to some named maximum.  Carefully weigh up the costs per instance to make sure that you have the right level of coverage.
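A quick way to take stock of what’s actually installed on a host is to query SERVERPROPERTY on each instance.  Note that LicenseType frequently just returns DISABLED on recent versions, so treat it as informational at best.

-- Edition/version inventory for one instance; run against each instance
-- on the host when weighing per-instance licence coverage.
SELECT SERVERPROPERTY('MachineName')    AS host_name,
       SERVERPROPERTY('InstanceName')   AS instance_name,  -- NULL = default instance
       SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS service_pack,
       SERVERPROPERTY('LicenseType')    AS license_type;   -- often just DISABLED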

You also need to consider whether you buy the licenses up-front, or whether you buy your software through a vendor who can offer you monthly subscription charges.  Either way, this choice will have corresponding cash-flow impacts on the business.  If you’re working for a cashed-up enterprise then the monthly charge might be more attractive, as they already have sufficient savings, credit-worthiness or cash-flow to be able to sustain monthly licensing.  However, if you are working for a start-up, they may elect to go for the up-front single payment and treat it as an investment in not having to deal with a recurrent cash-flow impact.

The final licensing factor, which is often ignored or poorly understood, is Software Assurance and support.  In the case of SQL Server, Software Assurance is reasonably expensive, but saves you from having to pay later to upgrade to a new version of the product.  TechNet and MSDN licenses should also be considered, as these give you access to development and support resources which might not otherwise be available – e.g. a certain number of Customer Support Services cases per year, or early access to pre-release editions of SQL Server for regression testing and review.

Data platform application & integration interfaces

This final aspect of data platform infrastructure architecture is often neglected by DBAs, and is typically a good way to tell a DBA from a data architect.  Every data platform has a set of supporting technologies that allow data consumers to access data from applications, or have data fed from one data source into another.  Putting corporate standards in place for how the supporting infrastructure for these activities hangs together is a critical success factor for any data platform design.  In the case of SQL Server there are a lot of different options for connecting to databases, ranging from true legacy libraries such as Embedded SQL and DB-Library, through to the latest and greatest SQL Server Native Client libraries, ADO.NET versions and the abstraction layers that can sit above them in the application tier – including ORMs such as EntitySpaces, LINQ to SQL, LLBLGen Pro and NHibernate, or more model-driven architectures such as Entity Framework.  Each of these solutions has a batch of client-side requirements that need to be installed on application servers (for thin client/N-tier applications) and/or client desktops (for thick-client applications).  Data architects should be a driving force in selecting these technologies and should provide guidance on how they are used in applications.
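If you want to know which of those client libraries are actually in use before you standardize, the sessions DMV is a handy starting point – a sketch only, but everything in it ships out of the box.

-- Audit which client libraries and programs are hitting this instance.
SELECT client_interface_name, -- e.g. '.Net SqlClient Data Provider', 'ODBC'
       program_name,
       COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1     -- skip system sessions
GROUP BY client_interface_name, program_name
ORDER BY session_count DESC;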

Integration technologies are an even bigger can of worms.  You have SSIS and SQL Server native replication as immediately obvious options.  Native-ish “application” interfaces such as SQLCMD, PowerShell, Service Broker and Sync Services are also solid data integration paths for the SQL Server platform.  However, after you get past those you have literally hundreds of different ETL tools, data integration platforms, heterogeneous replication engines, client-side applications and messaging solutions that can all quite happily talk to SQL Server.  Working through the noise to standardize on a subset of harmonious and complementary technologies is not an easy task, but it should be considered a high priority for any data platform architect.
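To give a feel for the native options, here’s the smallest possible Service Broker conversation – a hedged sketch with entirely hypothetical object names.  A real design would also need conversation security and routing for cross-instance messaging, and the database must have broker enabled (ENABLE_BROKER).

-- Minimal Service Broker round-trip within one database.
CREATE MESSAGE TYPE [//Demo/Msg] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Demo/Contract] ([//Demo/Msg] SENT BY INITIATOR);
CREATE QUEUE dbo.InitiatorQueue;
CREATE QUEUE dbo.TargetQueue;
CREATE SERVICE [//Demo/Initiator] ON QUEUE dbo.InitiatorQueue ([//Demo/Contract]);
CREATE SERVICE [//Demo/Target]    ON QUEUE dbo.TargetQueue    ([//Demo/Contract]);
GO

-- Send a message from initiator to target.
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Demo/Initiator]
    TO SERVICE '//Demo/Target'
    ON CONTRACT [//Demo/Contract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE [//Demo/Msg] (N'<hello/>');
GO

-- Pick the message up on the target side.
RECEIVE TOP (1) CONVERT(XML, message_body) AS payload
FROM dbo.TargetQueue;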

Of course the usual questions of roles and responsibility then enter the two problem spaces above, as does the ongoing maintenance of your technical policy choices.  Sometimes version m of tool X is better than version n of tool Y, but version m+1 of X is nowhere near as capable as version n+1 of Y.  Do you switch horses when this happens and hope that X eventually catches back up in version m+2 or m+3?  If you don’t, then at what point do you switch to Y?  Straight away at n+1?  After ongoing market leadership has been established at n+2?  These are all important questions, and they require insight not only into the discrete capabilities of the technologies, but also into the visions and directions of their respective manufacturers.


Well… there you have it… a quick (4100 word) overview of data platform infrastructure architecture.  By no means have I been exhaustive!  But I hope you’ve got something useful out of these entries.  In the next blog I’ll start talking about database infrastructure design in more detail.  Once I’ve finished talking about some of these architecture pieces at a high level I’ll come back and revisit the detail, wrapping some more structure around the questions that need to be answered and the design options available.  Until next time – code well, code often!
