Wednesday, December 29, 2010

Audit 2

After feedback and opinions from co-workers and others in the business, I reviewed my audit codebase.  I finally got a few hours to finish it, and this is the result.

The Goals

First of all, let me review what I set out to achieve.  Looking at a previous post (Audit Again), here is a summary of what I wanted to do:

  1. Implement a process log system
  2. Remove the reliance on specific databases (that is the core app could be anywhere in relation to the archive and audited tables)
  3. Refactor the code to make it more readable and also break it down into more re-usable parts
  4. Use Visual Studio 2010 Database Projects to implement these changes

It isn't a large list of improvements, but a list that I thought was worthwhile.  So, now for the report card: how did I go?

Implement a Process Log system

When looking at the log entries that were being generated, I got to thinking: what would happen if two (or more) management processes were running at the same time, for example an audit archive and a re-compile?  In theory they should not affect one another, so they should be able to run at the same time.  But the logging would be all over the place; log entries from both processes could intermingle, making debugging that much harder.

This was very simple to implement.  I updated the Log Entry table to include a BigInt column (called Process), created a table called Process, and linked its ID to the Process field in the log entry.  After that it was a straightforward process of updating the code to cater for it.
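In table terms, the change looks roughly like this (a minimal sketch; the exact table and column names are my assumptions, not necessarily those in the source):

-- One row per management process run (archive, re-compile, etc.).
CREATE TABLE dbo.Process (
    ID bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ProcessName varchar(255) NOT NULL,
    StartedOn datetime NOT NULL DEFAULT (GETDATE())
);

-- Each log entry now records which process run it belongs to.
ALTER TABLE dbo.LogEntry ADD Process bigint NULL;
ALTER TABLE dbo.LogEntry ADD CONSTRAINT FK_LogEntry_Process
    FOREIGN KEY (Process) REFERENCES dbo.Process (ID);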

Decouple archive from main database

This one required more thought, but in the end was simple enough.  I wanted the main tables for the audit in one database, with the audited tables and their immediate audit entries able to sit in a second database and the archived data in a third.  I am not saying that this is how I would use it, but it means the audit management tables and stored procedures can live in either database: the archive database can hold just the archive while the application is maintained in the main database, or the audit tables can sit in the main database with the audit management and archive tables and code in another database.

To do this, I created an entry in the configuration table to identify the location of the audit archive tables, then refactored the code to use that location instead of the database that the management code and tables live in.  And that was it.
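A minimal sketch of the idea (the configuration table layout, key name and archive table name are my assumptions):

-- Record where the archive tables live.
INSERT INTO dbo.Configuration (ConfigKey, ConfigValue)
VALUES ('ArchiveDatabase', 'AuditArchiveDb');

-- The management code reads the location and builds its dynamic SQL from it,
-- rather than assuming the archive sits in the current database.
DECLARE @archiveDb nvarchar(255), @sql nvarchar(max);
SELECT @archiveDb = ConfigValue FROM dbo.Configuration WHERE ConfigKey = 'ArchiveDatabase';
SET @sql = N'SELECT COUNT(*) FROM ' + @archiveDb + N'.dbo.adtArchive_Customer;';
EXEC sp_executesql @sql;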

Code Refactor

With a Visual Studio 2010 Database Project I was able to do this very easily.  It treats the database project just like any other, which means that when you change the name of a table or stored procedure, it will change all of the references to that database object within the project.  That said, I had other issues with Visual Studio that I will touch on below.

Visual Studio 2010 Database Project

And here we come to the part where I didn't achieve what I intended.  I had hoped to use the Visual Studio Database Project to manage the code base, which would have been better in the long run by increasing maintainability and improving deployment.  I won't rehash my previous blog (here), but while it was great for maintaining the code, it (in my mind at least) failed at deployment.  As a result I didn't use it for anything other than changing the names of the tables and stored procedures.

Audit 2

So, that was how I went about what I aimed to do.  If you want the source code, please click here.  It is provided as is, and you use it at your own risk.  If you have any questions, please leave a comment.

Saturday, May 22, 2010

The First One

So, I have renamed the data structures.  I used the database project template in Visual Studio 2010, and to be honest I wasn't impressed.  As a database management tool, I found it almost impossible to commit the updated changes back to the database.

But first, let me look at what I did, and what I liked and didn't like at each step.  My first step was to take a clean install of the project and import it into the database project.  That worked pretty well.  It created one file for each database object (stored procedure, table, etc.), which I thought was great.  It created a good structure which was pretty easy to understand (for a non-DBA like me).

[Screenshot: VS2010 SQL Project folder structure]

To me, this structure is very similar to the structure that you see in SQL Server Management Studio, which is a good thing in my mind.

Then I tackled one of the items on my version 2 list for the project: renaming the database tables.  The refactoring was very much in line with the way code gets refactored in VS2010.  It was quick and easy, and every reference was renamed.  Needless to say, I liked that.

Now to the parts I didn't like.  I wanted to mark one of the tables so that its data would be saved as part of the project, since it held project data rather than the dynamic data of system usage.  I could not work out how to do this.  I would have thought that a database project would be able to allocate default data to tables, if not mark some tables as having all their data loaded from the project into the deployed database.

This leads to my next problem.  I imported the source data structure from a database, made a relatively simple if somewhat extensive change, and then wanted to update the database with the changes.  It would not update; it kept returning a user error or something similar.  I am going to be honest here: this, to me, is the core of a database project.  Writing scripts is one thing, and the ability to use a GUI to manage the code is awesome, but the UX for updating the data structures is not.

As a result, I am not going to use the database project for my changes at this time, at least until I am able to resolve the two issues I have encountered so far.  So, at this stage I have renamed the database tables, and am about to tackle another of the items on the list.

Sunday, May 16, 2010

Audit Again

So, I am reviewing the audit code that I have written.  Following a friend's review of some of my posts, I am going to write the next version of the audit functionality that I finished a while back (a whole 2-4 weeks ago), using that as a basis and maintaining the aims set out in the previous version (for a refresher, go here).

Now, the aims for the next version are as follows:

  • Process logging: not just a generic log table, but one keyed by process, so the logging system can differentiate between two processes running at the same time.  Also increased logging for better debugging of issues.
  • Allowing the core parts of the audit to live in the line-of-business database while still archiving to a separate database.  The audit should be configurable so that it can run out of the archive database, the line-of-business database, or both (where the business has two audited databases there should only be one archive database).
  • Rename the data objects so that they make logical sense.
  • Further refactor the code so that there are as many re-usable parts as possible.
  • Use the Visual Studio 2010 Database project template to write the code in.

That is all I can think of as improvements for now.  If I think of anything else that I would like to add to the next version, I will let you know.

Saturday, May 15, 2010

A Breath Before the Start

Well, I had a friend look at my blog the other day.  He is someone I respect in the technical world, knows heaps, and is just an all-round good guy.  (For those wondering, the man is Richard Banks, who you can find here.  He is also worth a follow on Twitter, over here.)  Anyway, he pointed out some spelling and grammar issues, which I was expecting.  He also gave me some tips and tricks for blogging, like code colouring and where I can host files without any ads, which I will incorporate into my last few posts.  I'm not going to tidy every post, just the last two.  I hope to do that this weekend.

Then he ended his advice with a “but”. He wanted to know why I was looking at using prefixes on table names, and my thoughts on a "decoupled" application. And he recommended that I review some other patterns, such as the repository pattern. Taking this advice, I did some searching on the web to look into the repository pattern, and as he suggested I reviewed my thoughts and the post content.  Following this, I have decided that he is right (as he usually is).

As a result, I am going to make some changes to my approach.  Firstly, I am going to write an updated version of my audit code, one without name prefixes, that makes more sense with regards to naming conventions.  (As you can see, I am dropping the prefix naming convention on the data layer.)

Secondly, I am going to write a decoupled application correctly: one that has its layers decoupled, not the modules within each layer.  This means some changes to my intentions, but I can make that work.  I am still going to maintain the audit functionality in a separate database from the line-of-business database, and I will explain my reasoning when I release a new version of that little project.

Thirdly, I am going to write code that has a designated purpose, something I can use as a vehicle to learn.  It has been said that without vision people perish, probably from not having a goal and aimlessly wandering around until they did.  I am hoping that with a goal to aim for, I can be more focused on doing and learning rather than wandering down a dark alley.

As an aside, this is the point of me coding and writing blogs about what I am doing and my thoughts on it all.  I want to learn, and I just happen to be in a position where I can ask people to check my blog and code.  If one stops being teachable, stops wanting to learn, then they are, in my opinion, not worth listening to.  I don't want to be one of those people.  Other than fixing formatting issues and typos, I won't be altering or removing the previous posts.  They are a reminder of mistakes I have made, to teach me not to make them again.

So, my next post will detail what I am aiming to do with my audit scripts, which I will try to get out as soon as I can.

Monday, May 10, 2010

Names and Meanings

As you might expect, I am going to use the Autofac IoC container to write a decoupled application that does "stuff".  I haven't defined much of what I want this application to do; as I have mentioned in an earlier post, I just want to learn.

One of the things rattling around in my head is how to name the database objects.  It seems like something trivial; I mean, what's in a name?  But names have meaning, and just as with naming children, changing them later in life can cause more problems than it is worth!

My options, as I see them, are to give the database objects names that match the entities they represent, or to add a prefix to the table names while still keeping some semblance of the entity's name.  Let me go through my thoughts.

My first option is to give the tables names that reflect the objects they represent: names like user, settings and individual.  This would help in matching the data store to the entity layer, and any custom queries would be easy to read.

Select * from [user]
inner join individual on [user].id = individual.id

This is easier to read and understand at a glance.  It returns the user and the individual that it is attached to.  Simple.  But here is my problem with that.  I am attempting to write a decoupled application.  What if I want to move one of the parts to another set of servers?  For example, if the core of the app gets more use than the rest, wouldn't I be better off, performance-wise, moving it if I could?  And I don't just mean the presentation layer or the model, but the data layer too.  The problem with this naming approach is that I can't tell, from a data layer perspective, which table belongs to which module.

The second option is to attach a prefix to the table names to signify which part of the application each table is a part of.  It turns the earlier SQL statement into

Select * from acsUser
inner join coreIndividual on acsUser.id = coreIndividual.id

This is not as readable, but I can tell which module each part comes from.  For a simple example like this it isn't too bad, but for more complex queries it can get rather difficult to manage.

I guess the question is how decoupled do I want the code to be? I was speaking to a friend and he said that a decoupled application was rarely going to be decoupled at the data layer. How many places can afford to have multiple database servers in production (that are not replicated and the like)?

Also, if I decouple the data layer, that creates its own issues: how to search across multiple parts of the app, how to maintain data integrity across multiple sources, and how to maintain data access security across those different modules?

After much thought, I have decided to write an extremely decoupled app, one that is decoupled from the data layer up.  There are two reasons for this.  One is that I am doing this to learn how to improve my coding.  And two, I like a challenge.  So I am going to write each "module" with its own database and web service, and then write integrative UX environments that bring it all together!  So now I need to review the architecture of the app, to make sure that it is in line with this.

Sunday, May 2, 2010

Autofac 2.2 and Me: My First Look at an IoC

Well, since I have the database audit working to my satisfaction (not at a production, multi-server, high-volume level, but working enough), it is now time to start working on my C# skills.  I am not aiming at a specific project like I was for the audit, but rather making sure I play with various technologies: C#, WCF, Silverlight, maybe some WPF, and MVC.  If there are any other things I should be trying, just let me know.

So, my first learning exercise is going to be the concept of an IoC container.  Here are the simple steps I took to implement it.  Please note I do not have any direct experience with IoC containers, only a week's worth of reading here and there on Autofac and IoC containers.  This basically means that I have gotten it working, but it isn't necessarily the best implementation, so please, any feedback would be appreciated.

But before I explain how I got it working, let me explain (to my understanding) why you should be using an IoC container.  One of the hardest things to do is make any major change to an existing system, because systems are usually written with a single objective in mind: to fulfil the requirements.  When you want to change a small part of your code, it is simple.  However, if you want to replace an entire section of the code (for example, change the database type), it gets harder.  Using interfaces has helped with this: you write to an interface, and when you want to change the implementation you create another version of the interface's implementation and substitute it for the old one.  The downside is that you need to find all the references and update them.  Here is where IoC containers come into play.

With an IoC container, you change the implemented class in one place, where the container is built, rather than having to find every instantiation of the old class and change it to the new one.  This can lead to a more decoupled application, one where as many of its parts (code-wise) as possible rely not on the other parts but on definitions of what the code is meant to do.  As a result, these parts can be interchanged with greater ease.

Now, back to the implementation of the Autofac IoC container.

The Steps to IoC Goodness

  1. Downloaded the source
  2. Added the resources to my projects
  3. Created the Container
  4. Updated my code to use the container

Downloaded the source
The first step was to download the source from the project home.  The homepage of the Autofac project is here, and the download page is here.  Please note that there is a .NET 3.5 version and a 4.0 version, so choose the version that best suits you (or best suits your code, to be more precise).

Added the resources to my projects
Next, I added the DLL files to my projects.  I added them to my code project and also the test project: the code project because that is where object interfaces and the like are registered with the IoC container, and the test project because that is where the container is tested to make sure that I have gotten it working properly (the whole point of a test).

Created the Container
Next, in my project, I added the container to my code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Autofac;

namespace Nyx.Core.Universal
{
    public sealed class AutofacContainer
    {
        // Added a private constructor to stop the auto creation of a public one
        private AutofacContainer()
        {
        }

        public static IContainer BuildContainer()
        {
            var builder = new ContainerBuilder();

            // Register the concrete session against its interface, and the
            // test consumer as itself.
            builder.RegisterType<DatabaseSession>().As<IDatabaseSession>();
            builder.RegisterType<DataAccessTest>();

            var container = builder.Build();
            return container;
        }
    }
}

Please note that the basis of this is to prove a test, to make sure that the code is functioning properly.  The constructor for the DataAccessTest object has a dependency on IDatabaseSession (see the DataAccessTest code below).  Now, note that the two register statements are different.  In the first case, I am telling Autofac that when it sees the interface IDatabaseSession it should use the concrete class DatabaseSession.  The second register line just registers the class itself, with no interface reference; that is enough here, because the test resolves the concrete DataAccessTest class directly rather than through an interface.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Nyx.Core.Entities;
using Nyx.Core.Universal;
using NHibernate.Linq;

namespace Nyx.Core
{
    public class DataAccessTest
    {
        // The session locator is supplied by Autofac via constructor injection.
        IDatabaseSession _sessionLocator;

        public DataAccessTest(IDatabaseSession sl)
        {
            _sessionLocator = sl;
        }

        public User GetUser()
        {
            User result = null;

            using (var session = _sessionLocator.OpenSession())
            {
                // Fetch the user with ID 1 via the NHibernate LINQ provider.
                var query = from user in session.Linq<User>()
                            where user.UserId.Equals(1)
                            select user;
                result = query.FirstOrDefault();
            }
            return result;
        }
    }
}

The database session is just a connection that, through Fluent NHibernate, connects to a Microsoft SQL Server 2008 Express Edition database, and I won't bore you with that code.

Updated my code to use the container
So, I have updated the code by adding the IoC container to the core and creating a test class to prove my point.  I have also written a unit test to show that it is all working.

[TestMethod]
public void Autofac_database_integration_test_should_return_valid_user()
{
    using (var container = AutofacContainer.BuildContainer())
    {
        // Resolving DataAccessTest also resolves its IDatabaseSession dependency.
        var testObject = container.Resolve<DataAccessTest>();
        var result = testObject.GetUser();

        Assert.AreEqual(1, result.UserId);
        Assert.AreEqual("lkoutzas", result.UserName);
    }
}

Now, this code builds the Autofac container, then resolves the DataAccessTest class, automatically supplying the DatabaseSession dependency along the way.


And there you have it.  I haven't included everything that I have done, just the highlights needed to get Autofac working.  Again, this is me getting it working, not a model implementation of a pattern or anything.


Happy Containing!

Monday, April 26, 2010

SQL Audit Part 9: Wrapping It Up

So, to wrap up, we have a SQL-level audit that fires a trigger to save the audited data.  You will notice that I capture the primary key, so with various tools you could set up the UX to notice that the underlying objects are audited and then return the audit trail for a specific object.

Anyway, the "install" file is located at the following location:
MediaFire File Download: CompleteSQLDataAudit_v1.0.sq

Do let me know your thoughts, or if you have any questions about the source code.  You should be able to open it in MS SQL Server Management Studio and run it in any 2008 database (as for other versions, which I don't have, see the first post).  The audit control tables and stored procedures do not need to be in the same database as the tables to be audited.  And you can audit tables in more than one database.

To start auditing a table, just run the StartAuditFor stored procedure; as long as the account has access to the source table, it will all work.

To re-compile the audit structure after updating a source table structure, execute the ReCompileAuditStructure stored procedure and it will check all currently audited tables.

To archive the old audit records, run the ArchiveAuditTables stored procedure.  I would recommend doing this in a SQL Agent job, run before the nightly backup of your SQL database (might as well clean up the data before you back it up).
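Put together, the three entry points might be called like this (a sketch; the parameter names are my guesses based on the earlier posts in this series):

-- Start auditing a table (the account needs access to the source table).
EXEC dbo.StartAuditFor @databaseName = 'MyAppDb', @tableName = 'Customer';

-- After changing the structure of any audited source table.
EXEC dbo.ReCompileAuditStructure;

-- From a nightly SQL Agent job, just before the backup.
EXEC dbo.ArchiveAuditTables;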

SQL Audit Part 8: Data Archiving Script

The archiving of the records is a simple affair.  All we need to make sure of is that we don't archive something twice, and that we don't delete something we haven't archived.  For high-volume tables this can get interesting, especially if you want to remove the audit records as soon as they are written (that is, every time the archive script is run).


So, to get around that, we first mark all of the records we want to copy, then copy all the marked records, then delete the audit records that are marked.  That makes it nice and neat.  While it adds an extra query, it is a safeguard that is worth it; when playing with audits you can never be too careful.
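As a sketch of the pattern (the table names and status values here are my invention, not the actual script):

-- 1. Mark the audit rows that are due to be moved.
UPDATE dbo.adt_Customer
SET AuditStatus = 1
WHERE AuditDate < DATEADD(day, -30, GETDATE())
  AND AuditStatus = 0;

-- 2. Copy only the marked rows into the archive.
INSERT INTO ArchiveDb.dbo.adtArchive_Customer
SELECT * FROM dbo.adt_Customer WHERE AuditStatus = 1;

-- 3. Delete only the rows that were marked (and therefore copied).
DELETE FROM dbo.adt_Customer WHERE AuditStatus = 1;

Any audit rows written mid-run keep their unmarked status, so they are neither copied nor deleted until the next pass.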


So that about wraps up the project, more or less.  The last part in the series details where you can find the complete source code and some other odds and ends.

SQL Audit Part 7: Audit Table Maintenance Script

Now that we have the audit started and working, what is needed is to manage it.  By manage, I mean that when we change the source table structure, we don't want to have to manually alter the tables that store the audit data.

Rather than recreate slightly modified code to manage the tables, I coded the creation script so that it will do both.  Yes, I know this is cheating (I planned the development that way), but coding is meant to be done smartly, not just with brute force but also finesse.  So, to handle both in the one script, there are a few principles to use.

1. Check before create.
Before you do what you need to, check that it hasn't already been done.  In some cases this will mean that if it is there you do nothing; in other cases you will delete it so that you can recreate it; and in others you update instead of delete.  That is a decision you can make based on your code and what you're trying to achieve.

2. Understand both requirements.
Updating code and creating code can be combined, but there are some cases where this is not advisable.  You need to understand both how the creation needs to work and how the update needs to work to know whether the functions can be combined, and then the best way to achieve that.

3. Make your code manageable.
This is more of a generic coding principle, but it still applies.  When you combine two functionalities into one piece of code, it becomes that much more complex.  The best way to manage the complexity is to make sure your code is readable, so that someone else can come along and understand it without you pointing out what's what.  On this one I am continually learning, but I think I am getting better.

Then, with that in mind, I can just create a cursor to loop through the tables that have been started, re-run the creation stored procedure over each one, and we're done.  A sketch of that loop is below.
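A sketch of the re-compile loop (the audited-table list and stored procedure names here are assumptions based on this series):

DECLARE @dbName nvarchar(255), @tblName nvarchar(255);

DECLARE auditedTables CURSOR FAST_FORWARD FOR
    SELECT DatabaseName, TableName FROM dbo.AuditedTables;

OPEN auditedTables;
FETCH NEXT FROM auditedTables INTO @dbName, @tblName;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Re-running the creation procedure brings the audit and archive
    -- tables (and the trigger) back in line with the source table.
    EXEC dbo.StartAuditFor @databaseName = @dbName, @tableName = @tblName;
    FETCH NEXT FROM auditedTables INTO @dbName, @tblName;
END
CLOSE auditedTables;
DEALLOCATE auditedTables;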

SQL Audit Part 6: Creating The Trigger

So, now we have the database tables created for the audit and the archive; next we need to create the trigger that performs the audit.  The code for this, based on what I have done to date, is what you would expect.

Set @sql = '';
Set @sql = @sql + N'IF EXISTS (SELECT name FROM sysobjects
WHERE name = ''adt_tgr_'+@auditTableName+''' AND type = ''TR'')
DROP TRIGGER adt_tgr_'+@auditTableName
SET @sql = 'EXEC '+cast(@databaseName as nvarchar(255))+'.sys.sp_executesql N'''+REPLACE(@sql,'''','''''')+'''';
exec sp_executesql @sql;

The part shown here is the check for an existing trigger.  We can't drop a trigger that doesn't exist, but where it does exist we want to drop it so that we can recreate it.  It is pretty basic, but it gets the job done, and it also shows the use of sp_executesql in the source database.
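For completeness, the trigger body itself is built up in dynamic SQL and isn't shown here.  As a rough idea only, the generated result for a hypothetical Customer table might be shaped something like this (the table, column and audit field names are my assumptions):

CREATE TRIGGER adt_tgr_Customer ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows present in inserted carry the new or updated values.
    INSERT INTO dbo.adt_Customer (AuditAction, AuditDate, Id, Name)
    SELECT 'IU', GETDATE(), Id, Name FROM inserted;
    -- Rows only in deleted were removed from the source table.
    INSERT INTO dbo.adt_Customer (AuditAction, AuditDate, Id, Name)
    SELECT 'D', GETDATE(), d.Id, d.Name
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.Id = d.Id);
END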

With this in place, the audit is now running.  Any insert, update or delete command on the source database table will create the audit record in the audit table.  But there are still two things missing from the original brief: the ability to maintain the structures of the audit and archive tables at the database level, and the ability to archive the audit records after a set time so that we don't keep records in the database that we don't need.  These will be covered in the next two posts.

SQL Audit Part 5: Altering the Tables - Source Defined Fields

So, we now have a manageable setup that allows us to add the audit-required fields to the audit and archive tables.  Next we need to replicate the source table structure in those tables.  The basics are the same as with the audit fields, but we get the structure from the source table.

CREATE TABLE #adtSourceTable (
FieldName varchar(255),
FieldType varchar(255),
FieldSize int
);
SET @sql = N'Insert into #adtSourceTable
Select COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH from '+@databaseName+'.information_schema.columns
where TABLE_NAME like '''+@tableName+'''';
EXEC sp_executesql @sql;

If you look above you will see how we get the list of fields and their definitions.  We query information_schema.columns to get the information that we need.  This view holds more columns than we actually need, so we select only the ones we do need: the name, type and size (or length).

The rest of this part of the code is almost an exact mirror of the code that creates the audit fields in both of the tables.

Sunday, April 25, 2010

SQL Audit Part 4: Altering the Tables - Audit Defined Fields

So, we have our tables created with an ID field; now we need to add to those tables the basic fields that store information about the audit of each record, such as action, date time, status and so on.  For ease of altering the set of required fields, I have placed the field names and specifications into another table; when I need to add a new field, I can just add a new record to that table, along the lines of the sketch below.
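A sketch of what that driver table might hold (the table and column names here are invented for illustration):

-- Sketch only: the real table and column names may differ.
CREATE TABLE dbo.AuditRequiredField (
    FieldName    varchar(255) NOT NULL,  -- e.g. 'AuditAction'
    FieldType    varchar(255) NOT NULL,  -- e.g. 'varchar'
    FieldSize    int NULL,               -- NULL for fixed-size types
    DefaultValue varchar(255) NULL       -- used when building DF_ default constraints
);

INSERT INTO dbo.AuditRequiredField (FieldName, FieldType, FieldSize, DefaultValue)
VALUES ('AuditAction', 'varchar', 10, NULL),
       ('AuditDate', 'datetime', NULL, 'GETDATE()'),
       ('AuditStatus', 'int', NULL, '0');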


SET @sql = N'IF EXISTS(SELECT * FROM '+@databaseName+'.INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_SCHEMA='''+@dbSchema+''' AND CONSTRAINT_NAME=''DF_'+@auditTableName+'_'+@FieldName+''' AND TABLE_NAME='''+@auditTableName+''')
BEGIN
alter table '+@databaseName+'.'+@dbSchema+'.'+@auditTableName+' drop constraint DF_'+@auditTableName+'_'+@FieldName+';
END';
SET @sql = 'EXEC '+cast(@databaseName as nvarchar(255))+'.sys.sp_executesql N'''+REPLACE(@sql,'''','''''')+'''';
EXEC sp_executesql @sql;


One of the issues I needed to address was adding constraints to the audit table in the source database.  To do that, I found out that the sp_executesql command is actually an object in each database, so to call the instance of the stored procedure in another database you can use <database>.sys.sp_executesql, passing the nvarchar batch you want executed.

Yes, I know that for some this isn't earth-shattering, but for me, doing this to improve my SQL skills, it was a revelation.  It even helped me refactor my earlier code into the form it is in now (more reliable and better performing).  I used the same idea when getting the table list to see if the table existed in the database.

So, the above code will remove a constraint on a table in a different database.  A very handy technique for cross-database management.
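The pattern in isolation, with placeholder names:

-- Runs the batch inside SomeOtherDb's context rather than the current database.
EXEC SomeOtherDb.sys.sp_executesql
    N'ALTER TABLE dbo.SomeTable DROP CONSTRAINT DF_SomeTable_SomeField;';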

Again, I will try to find somewhere to upload all of the code, but for now I am just pointing out the smart (in my opinion) aspects of each functionality.

Saturday, April 24, 2010

SQL Audit Part 3: Creating the Tables

The first thing I needed to do was determine that the source table does indeed exist.  Now, assuming that @databaseName and @tableName are inputs to the stored procedure, look at the following code:


CREATE TABLE #adtMyTableListCheck (
dbName varchar(255),
dbSchema varchar(255),
tblName varchar(255),
tblType varchar(255)
);

SET @sql = N'Insert into #adtMyTableListCheck
Select * From '+@databaseName+'.INFORMATION_SCHEMA.TABLES
where TABLE_NAME like '''+@tableName+'''';

EXEC sp_executesql @sql;
SELECT TOP 1 @dbSchema = IsNull(dbSchema,''), @dbTableType = IsNull(tblType,'') FROM #adtMyTableListCheck;
DROP TABLE #adtMyTableListCheck;


First we create a temp table to store our results.  Then we search the database in question for the table we are after.  If an object is found, we get the type and schema of the object (the object owner).  The object could be a view or a table, so later on we check that using the table type.  But why do we get the object owner?  Using the object owner in the queries allows the query execution plan to be cached.  Now, for the queries we are talking about, that is probably .002 of a second at most, but if you do that 100 times a second you start to get some savings; 1000 times a second, even more, and so on.

That is about all I need to explain from the start, apart from the overall flow of the stored procedure, which is:


  • Gather inputs, sort out table names to use etc.
  • Determine existence of the source table
  • Insert a record into the audited table list
  • Make sure that the database tables for the audit and archive tables have the required audit fields
  • Make sure that the tables have the correct fields for the source table
  • Create the trigger for the audit


So, we have created the audit and archive tables, and checked that the source table exists; now we need to create the fields required for the database audit.

SQL Audit Part 2: Overall Description of the Flow of Data

Now that we have set the ground rules for our little discussion, let me take some time to explain how I wanted the data to flow.

The image above shows what I aim to do.  On inserting, deleting or updating data in the Source Table, I want to store that change in the Source Database (the Audit Table in the Source Database above).  Then, for "old" audit entries, I want to move them off the source database and onto an archive database (in this case called the Audit Controller Database), so that they don't take up space in the source database and could, in theory, be saved to disk and kept for posterity.  "Old" is a variable period, in my mind calculated in days; some source tables would have a small "old" value whilst others would have a large one, all in the same application.

All of this is managed by triggers (to audit the data and store it in the audit table) and by stored procedures: to create the audit and archive tables, to create the triggers, to maintain the triggers and tables, and to archive the audit table data to the controller database.

SQL Audit Part 1: The Aims

It has been a long time since my last post, but I think some people will find it worth the wait.  Over the past few months, when I have had the time, I have worked on improving my SQL skills and understanding of databases.  Now, if you're like me and hate just reading things with no hands-on practice, you will understand that I gave myself a little project definition to help me in my learning.  I decided to write a SQL audit project.

The requirements of the project were as follows:
  • Managed by stored procedures;
  • Database level auditing (for speed);
  • Always accessible via SQL (that is, I wanted a store somewhere I could review the audited information without having to involve a DBA);
  • I wanted the ability to archive the data after a set time.
I am just finishing up some final refactoring and logging parts, but I think I have something that will achieve this.

Some disclaimers, I think, are in order before I continue.  I am not a SQL DBA, just a developer wanting to improve his database skills.  The code I have written has not been thoroughly tested, nor would I recommend using it in a production environment without some form of testing in your staging or user acceptance testing environments, both for speed and to make sure it doesn't break existing systems.  I wrote this against Microsoft SQL Server 2008 (Express Edition); other editions (such as Standard or Enterprise) have features that might break the code in this project, but I have not tested against them, so I am unaware of any issues.  Whilst you are free to use this for yourself, you do so at your own risk.

Now that that is out of the way, let me detail the parts of the "series" that I am going to write:

  • Part 2: Overall Description of the Flow of Data
  • Part 3: Creating the Tables
  • Part 4: Altering the Tables - Audit Defined Fields
  • Part 5: Altering the Tables - Source Defined Fields
  • Part 6: Creating The Trigger
  • Part 7: Audit Table Maintenance Script
  • Part 8: Data Archiving Script
  • Part 9: Wrapping It Up

I will try to find a location to upload the various files to create and review the various parts of what I have done; I am just looking into which source repository I should use, Codeplex or code.google.com.

OK, now without further ado let me begin!