tag:blogger.com,1999:blog-74376828999112047642024-02-20T01:57:32.765+11:00RamblingsThe ramblings of a developer, from anything to anything that enters my mind...Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.comBlogger41125tag:blogger.com,1999:blog-7437682899911204764.post-30432019156281157812015-07-02T21:59:00.000+10:002015-07-02T21:59:34.306+10:00ASP .NET and CookiesRecently I found out something rather cool about Request and Response cookies in ASP .NET. The basics of cookie management in ASP .Net are that cookies get read from the Request.Cookies collection and written to the Response.Cookies collection. However, did you know that as soon as you write a cookie out in the response, it is automatically available in the request? I didn’t, and when I worked it out I thought it was rather cool, and thought I would share.Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-91015203943338331522013-01-08T01:09:00.000+11:002016-12-17T14:40:36.602+11:00Using log4net: some observations when logging is under loadSo, here is the first of what I hope will be many posts coming out of my new project. As with most .NET developers, I use log4net to log to text files and to a database. It is one of the more flexible logging frameworks and is relatively easy to set up. However, recently I have found what I think is a bug in log4net, which I will explain how to reproduce and also how to overcome.<br />
<br />
Firstly, some basic configuration details. Below is my basic configuration for log4net.
<br />
<xmp>
<log4net>
  <appender name="LogFileAppender" type="log4net.Appender.FileAppender">
    <file value="logs\web.log" />
    <appendToFile value="true" />
    <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message %exception%newline" />
    </layout>
  </appender>
  <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
    <bufferSize value="1" />
    <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
    <connectionString value="data source=<server>;initial catalog=<database>;integrated security=true;persist security info=True" />
    <commandText value="INSERT INTO CodeLog ([Date],[Thread],[Level],[Logger],[Message],[Exception]) VALUES (@log_date, @thread, @log_level, @logger, @message, @exception)" />
    <parameter>
      <parameterName value="@log_date" />
      <dbType value="DateTime" />
      <layout type="log4net.Layout.RawTimeStampLayout" />
    </parameter>
    <parameter>
      <parameterName value="@thread" />
      <dbType value="String" />
      <size value="255" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%thread" />
      </layout>
    </parameter>
    <parameter>
      <parameterName value="@log_level" />
      <dbType value="String" />
      <size value="50" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%level" />
      </layout>
    </parameter>
    <parameter>
      <parameterName value="@logger" />
      <dbType value="String" />
      <size value="255" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%logger" />
      </layout>
    </parameter>
    <parameter>
      <parameterName value="@message" />
      <dbType value="String" />
      <size value="4000" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%message" />
      </layout>
    </parameter>
    <parameter>
      <parameterName value="@exception" />
      <dbType value="String" />
      <size value="2000" />
      <layout type="log4net.Layout.ExceptionLayout" />
    </parameter>
  </appender>
  <root>
    <level value="ALL" />
    <appender-ref ref="LogFileAppender" />
    <appender-ref ref="AdoNetAppender" />
  </root>
</log4net>
</xmp>
Basically, I have set up log4net to log to a text file and also to a database. <br />
<br />
Now, as far as I am aware there is nothing specifically wrong with this configuration. It does indeed log to both a database and a text file. However, under load some curious things happen. First of all, I/O race condition errors can appear regularly in the logs, because under load two threads can try to write to the text file at the same time. To fix this, remove the text file appender by updating the root node as below:
<br />
<xmp> <root>
  <level value="ALL" />
  <!-- <appender-ref ref="LogFileAppender" /> -->
  <appender-ref ref="AdoNetAppender" />
</root>
</xmp>
This then leaves only the database logger configured. So far so good. Let's say that for production you want to turn off logging entirely; the following configuration will work:
<br />
<xmp> <root>
  <level value="OFF" />
  <!-- <appender-ref ref="LogFileAppender" /> -->
  <appender-ref ref="AdoNetAppender" />
</root>
</xmp>
Note that we have now turned all logging OFF. If you're like me, you would think that this would be the end of it. You have turned off logging, end of story. But this is actually where things get interesting. Let's say your site or web service comes under heavy load and requires data access outside of the logging framework (even to a different database). Over time, your web site stops working: all queries to the database fail. Yes, you can restart IIS and everything will go back to working again. But this is not an actual solution; it is something you would do when you had nothing else to try and needed to get the site back up and running as soon as possible. <br />
<br />
So, what caused the issue? From what I can tell, if you have an ADO appender as in the example above, you can't just turn off logging. From what I have experienced, it will still connect to the database AND WON'T RELEASE THE CONNECTION UNTIL THE CONNECTION EXPIRES. And that is the key thing here. If your database connections time out after 2 minutes, you can see how the connection pool can get clogged with open logging connections that do nothing. What you need to do is remove or comment out the ADO appender from the list of appenders as below:
<br />
<xmp> <root>
  <level value="OFF" />
  <!-- <appender-ref ref="LogFileAppender" /> -->
  <!-- <appender-ref ref="AdoNetAppender" /> -->
</root>
</xmp>
From what I have noticed, if logging is turned on, then the issue does not arise. It is only when logging is off and an ADO appender is still configured that this is an issue. I ran some tests against a web service, and with logging turned off and the ADO appender commented out, I could get 516k requests through the web service over the course of an hour with no adverse effects. As soon as I uncommented the ADO appender, the test ran for 5 minutes and then started experiencing major issues, with all database connectivity stopping soon after.<br />
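Based on the behaviour described above, a belt-and-braces option is to strip the appenders in code at application start-up whenever logging is meant to be off, instead of relying on the OFF level alone. This is only a sketch of one possible approach, and the class and method names are my own invention, but LogManager.GetRepository and RemoveAllAppenders are part of the standard log4net API:

```csharp
using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;

public static class LoggingSafetyNet
{
    // Call once at application start-up (e.g. Application_Start) when logging
    // should be completely off. Setting the root level to OFF is not enough on
    // its own; removing the appenders as well ensures the AdoNetAppender never
    // opens a database connection at all.
    public static void DisableLogging()
    {
        var hierarchy = (Hierarchy)LogManager.GetRepository();
        hierarchy.Root.Level = Level.Off;
        hierarchy.Root.RemoveAllAppenders();
    }
}
```

This achieves the same effect as commenting the appender out of the config, with the advantage that a single app setting can drive it.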
<br />
<b>Side note:</b><br />
If you want to use the above configuration to log to a database, below is the SQL you need to create the table. Just make sure your connection string is correct and that the user running the website has access to the database the table is in.
<br />
<xmp>
CREATE TABLE [dbo].[CodeLog](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Date] [datetime] NOT NULL,
[Thread] [varchar](255) NOT NULL,
[Level] [varchar](50) NOT NULL,
[Logger] [varchar](255) NOT NULL,
[Message] [varchar](4000) NOT NULL,
[Exception] [varchar](2000) NULL
) ON [PRIMARY]
</xmp>
Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-36240862571660714302013-01-04T11:24:00.001+11:002013-01-04T11:25:39.751+11:00A New Challenge<p dir=ltr>It has been a while since my last post, having been really busy at work getting some projects completed.  However, as it's the start of a new year, I'm thinking I should get back to blogging.</p>
<p dir=ltr>So, I'm going to start work on a new self-learning project, like the Java OData and the SQL audit tools I wrote previously. While I'm not going to share what the project is, I will try to share what I learn while doing the project.  What's the point of working on a project? Well, I like having a purpose for doing something. I like to have a goal and try to meet that goal. Also, I am working on my project from the beginning, to see thoughts turn into requirements and requirements into functionality. But when it comes down to it, I just like a challenge.</p>
<p dir=ltr>Now the basic frameworks/languages/processes will be MVC 4, C#, Razor, agile, HTML5, CSS3, WCF and MS SQL 2012.</p>
<p dir=ltr>I already have some ideas for some future posts, so watch this space.</p>
Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-32178721984519093682012-03-20T22:44:00.000+11:002012-03-20T22:46:27.348+11:00nHibernate, Lazy Loading and AutoMapperI know it's been a while since I last posted, and unfortunately this post is going to be short, but I have just gotten something working and thought that I would share it with you all.<br /><br />
Here is the scenario: you have nHibernate (in my case fluent nHibernate) and you have some collections (and properties) to be lazy loaded. You also have your entities mapped to a DTO entity for sending over the wire. Now, my problem was that the lazy loading was not working: when I monitored the SQL trace I could see every member and every collection being loaded regardless of what I had configured in nHibernate. This was causing around 400 SQL statements to be executed against the database, and I could not work out why.<br /><br />
Now, the answer seems simple now that I know what it is, but needless to say it took a few hours to work out what the issue was, and then another few hours to actually get a solution that I was happy with. But I think I have a solution I can be happy with, especially now that I have gotten the 400 or so SQL statements down to about 50 for the first load, and 5 (I am hoping I can get that down to 3) for every other load in the nHibernate session.<br /><br />
So, I hear you ask, what was causing this blow out? It was the AutoMapper checking the value of the entity that was triggering the lazy loading of the collections and properties, which then resulted in every member in the object being loaded, then every member in every child entity, and so on until the entire entity tree was being loaded. Now, I am sure that I might have a configuration setting not set up properly, or I might have something wrong with my entity definition, but from what I have found this is pretty standard behaviour.<br /><br />
Now for the fix. It is really neat, and is done in the AutoMapper configuration when you create your maps between your entity and DTO object. I had basic mappings set up (my DTO’s are closely aligned with my entities), so a sample mapping would be:<br />
<pre class="brush: csharp;">
Mapper.CreateMap<Entity, EntityDto>();
</pre>
<br />
Now, that is fine, but to stop the lazy loading being triggered I needed to add a bit to the mapping definition when mapping the entity to the DTO (I don’t care about the mapping from the DTO to the entity, no data access there). So my mapping from above became:<br /><br />
<pre class="brush: csharp;">
Mapper.CreateMap<Entity, EntityDto>()
    .ForMember(dest => dest.Collection,
        opt => opt.Condition(src => NHibernateUtil.IsInitialized(src.Collection)));
</pre>
<br />
As you can see, this simple change is nice and neat, and stops AutoMapper triggering the lazy loading of the entities. And as we stop the lazy loading, we stop the extra, unneeded calls to the database.<br />
<br /><br />
Anyway, just thought I would mention it and hopefully help someone else having the same fun as I was. Let me know your thoughts.Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com1tag:blogger.com,1999:blog-7437682899911204764.post-24277010593971544142011-10-30T00:16:00.001+11:002011-10-30T00:24:11.668+11:00JOCU – The Alpha<p>I’ve been working on an OData reader in Java for some time now, and I am ready to show people the first alpha. There are some pieces missing, and I would prefer better ways of accessing some of the data. But here it is.</p> <p>The release only has the underived version of the JOCU project. It allows for most of the functionality from the OData protocol (the main one missing is the data format: this uses the XML data source as opposed to the JSON format).</p> <p>Here is a code snippet I wrote that uses the API from the JAR library.</p><pre class="brush: java;">import java.sql.Timestamp;<br />import java.util.Date;<br />import java.util.List;<br /><br />import lkoutzas.jocu.shared.AbstractCatelogueData;<br />import lkoutzas.jocu.shared.AbstractEntity;<br />import lkoutzas.jocu.underived.ODataEntity;<br />import lkoutzas.jocu.underived.ODataLink;<br />import lkoutzas.jocu.underived.ODataRepository;<br /><br /><br />public class RunJOCUDemo {<br /><br /> public static void main(String[] args) {<br /><br /> System.out.println("Welcome to the demo of the JOCU odata reader!!");<br /> System.out.print("Catelogues Start: ");<br /> System.out.println(new Timestamp(System.currentTimeMillis()));<br /> <br /> // create the odata repository<br /> ODataRepository oData = new ODataRepository();<br /> // set the URI for the location of the repository<br /> oData.SetURI("http://services.odata.org/OData/OData.svc/");<br /> // get oData Catelogue<br /> List<AbstractEntity> categlogues = oData.FetchODataCatelogues();<br /> System.out.print("End: ");<br /> System.out.println(new Timestamp(System.currentTimeMillis()));<br /> // dump out results 
to show it works...<br /> System.out.println("Catelogues...");<br /> for(int i=0;i<categlogues.size();i++)<br /> {<br /> AbstractEntity local = categlogues.get(i);<br /> System.out.println(local.getAttribute("href"));<br /> }<br /> System.out.println();<br /> System.out.print("Entities Start: ");<br /> System.out.println(new Timestamp(System.currentTimeMillis()));<br /> // create the odata repository<br /> ODataRepository oData2 = new ODataRepository();<br /> // set the URI for the location of the repository<br /> oData2.SetURI("http://services.odata.org/OData/OData.svc/");<br /> // set the catelogue for the oData repositpory<br /> oData2.SetCatelogue("Categories");<br /> // query or filter the data request (need to detail full list)<br /> oData2.getQuery().top("2").select("ID,Name");<br /> // get entities<br /> AbstractCatelogueData entities = oData2.FetchEntities();<br /> System.out.print("End: ");<br /> System.out.println(new Timestamp(System.currentTimeMillis()));<br /> <br /> System.out.println("entities...");<br /> for(int i=0;i<entities.getEntities().size();i++)<br /> {<br /> ODataEntity local = (ODataEntity) entities.getEntities().get(i);<br /> System.out.println(local.field("ID") + ": " + local.field("Name") + " links are: ");<br /> for(int j=0;j< local.getLinks().size();j++)<br /> {<br /> ODataLink odataLink = local.getLinks().get(j);<br /> System.out.println(odataLink.getTitle() + " - " + odataLink.getHref() + " ("+ odataLink.getRelationship()+")"); <br /> }<br /> }<br /> }<br /><br />}<br /><br /></pre><br /><p>As you can see, the API is pretty easy to use and I think pretty good. So, for the “bits not in the alpha”.</p><br /><p>Firstly, there is a bug that will mean that XML structured elements (CDATA as an example) are not parsed correctly. They get broken down into separate elements rather than a single blob.</p><br /><p>Also, I would like to improve the manner that the data is being accessed. It feels a bit clunky to me. 
I am not sure how I can improve it, but I would like to see what I can think of.</p><br /><p>The last missing bit / bug relates to expanded entities in the querystring. At the moment I think it would fail badly. The API will allow it, but the data will likely either cause the call to fail or it will be ignored.</p><br /><p>Once these items are in the code, I am going to work on the derived version, which will allow strongly typed objects and (hopefully) reduce the size of the downloaded packet.</p><br /><p>The link for the alpha is <a href="http://dl.dropbox.com/u/15397896/jocu.jar" target="_blank">here</a>. Let me know your thoughts.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-44211227153850010922011-09-25T16:59:00.001+10:002011-09-25T16:59:57.111+10:00Rolling back a change set<p>Recently, I was working on a TFS Team Project that had 2 branches. I was continuously merging my trunk with each of these branches, as all three projects moved forward. Recently I was given the go ahead to merge one of the branches into the trunk. After checking and rechecking, I did this.</p> <p>About an hour after I had merged one of the branches into the trunk, I was, as usually happens, asked if I could make a change ASAP and release (in a way that would have the least impact on testing resources). This meant that I needed to release a version of the code that mirrored the current Production environment as a base with the necessary changes. My first thought was that I would grab the change set, update the code, build on my local and release that. However, there was a gotcha: there was a requirement to have the build that was deployed done on the build server (with the TFS auditing of changes etc).</p> <p>Anyways, I grabbed the required change set, made the changes, tested it locally, then checked it in, and released the result from the build server. 
After checking that the release worked, I handed it over to the testers. I then re-merged the code with the other 2 branches (luckily I hadn't deleted the one that I had been told to move into the trunk) so that they had the latest change. Now, in TFS, merges use code on the server to merge, not code on your local (at least as far as I can tell). As I was reviewing the changes and resolving the conflicts, I was seeing code from the trunk that included the branch I had merged into it. Needless to say I got to wondering what had been built and then released to UAT. I grabbed a tool to review the binaries that were in the release, and it was based on the project that I had merged into the trunk! There was code there that I knew wouldn't be tested, and as such could not be released, as this would increase the risk the change posed (testing only the features of any change can mean that other impacts of the change are not identified until a full regression test is done).</p> <p>So, I was left with a dilemma: how can I revert to a change set and release that change set, using the build server and all appropriate auditing in TFS, without having to manually remove all untested code and create a greater risk? Then, after some googling, I came across a post by Mike Fourie, which details one of the new features in the August 2011 release of the Team Foundation Server Power Tools. TFS 2010 has the ability, through the command line, to perform a roll back. This extension brings that functionality into Visual Studio 2010 with a usable UI. Let me just say that this saved my weekend from being an entire coding wreck. I was able to roll back to the required version, then re-build and re-release only what was desired. 
I won't re-write how it works, Mike does that well enough, but here are some links:</p> <p><a href="http://mikefourie.wordpress.com/2011/08/20/using-rollback-in-the-team-foundation-server-power-tools/" target="_blank">Mike’s Post: Using Rollback in the Team Foundation Server Power Tools</a></p> <p><a href="http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f" target="_blank">Team Foundation Server Power Tools August 2011</a></p> <p>I hope this helps others out there who are wanting to (or having to) roll back code to a previous version on the server.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-29885868651548528242011-08-14T21:46:00.000+10:002011-08-14T21:50:28.178+10:00Securing CMS<p>Only a short one today. At work we had an issue that we needed to secure our CMS to only be available for internal access. I had a few ideas, but after some research I found something very cool that is part of IIS (6 and 7). There is an option to secure a site or directory (or application) to only certain IPs. It's called "IPv4 Address and Domain Restrictions". 
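To give a concrete idea of the feature, here is a sketch of what the restriction can look like in an IIS 7 web.config. The addresses below are placeholders only; on IIS 6 this is configured through the site properties UI instead, and on IIS 7 the ipSecurity section may first need to be unlocked at the server level:

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- Deny everyone by default, then allow only specific addresses -->
      <ipSecurity allowUnlisted="false">
        <!-- internal network range (placeholder) -->
        <add ipAddress="192.168.0.0" subnetMask="255.255.0.0" allowed="true" />
        <!-- a single trusted external address (placeholder) -->
        <add ipAddress="203.0.113.10" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```

Placed in the web.config of the CMS admin directory, this locks the login area down while leaving the public site untouched.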
<p>The reason I'm blogging about this is that it is an easy way of adding an extra layer of security to a normal CMS. Most CMSs can be run locally, which means they can be easily accessed and their code analysed. From a security point of view, this is a concern. To mitigate this, securing the main administration directories (login etc.) using the IP restrictions in IIS adds an extra layer to the security of the application.</p> <p>The easiest way to use this is to deny all access, and then only allow the IPs that you want to access the directory (usually internal and one or two external IPs).</p> <p>Hope this helps.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-9592848508200593482011-08-06T01:58:00.001+10:002011-08-06T02:02:34.557+10:00Changes to Facebook APIToday we had something interesting happen at work. We had just implemented some Facebook functionality on a project that was ready for release, and just before it was signed off today, a final end-to-end test was completed. Part of that test was to test some functionality with social networks. When the Facebook functionality was tested, it failed.<br />
<br />
The functionality was meant to pre-populate the message box with a message that the user could then update. On Wednesday this was working, and today it wasn't. Let's just say that this caused a few questions to be asked and a few worried people moving through the corridors. How can functionality that was working one day break the next when we made no changes to the code? After some searching of the web (Google is mostly always your friend), I found my answer. Let me break it down for you.<br />
<br />
Firstly, we were using the Javascript Library, specifically the FB.ui functionality. Using the Feed dialog box, we would place our user augmented message into the message box, which the user could then update and publish on their wall. Now looking at the reference from the Facebook developers page (seen here <a href="http://developers.facebook.com/docs/reference/dialogs/feed/">http://developers.facebook.com/docs/reference/dialogs/feed/</a> ) the message property is what we used to pre-fill the dialog box. As you can see there is a note on the property that states that the field will be ignored on the 12th July, 2011. So why was it still working two days ago (3/8 to be specific)?<br />
<br />
Before I go into that let me first look at the policy for Apps in Facebook. If you go to the policy page (seen here <a href="http://developers.facebook.com/policy/">http://developers.facebook.com/policy/</a> ) and read section 4 sub section 2 it states that an app will not pre-fill any fields. (Then in the following sub section it states that the user can give permission for an app to post on their behalf. That is a task for next week.)<br />
<br />
So back to the question: why did it just stop working? If I were to guess, and that is all I would be doing, I would say that Facebook just got fed up with people not reading and following the policies. Apps should only pre-fill the form when the user has done it somewhere else in the workflow. I would say that there were plenty of apps that just didn't follow the rules, and Facebook decided to enforce them. And to be honest, I have no problem with them enforcing their policy.<br />
<br />
Now, what is the lesson learned from all this? Is it that we should not trust Facebook? I would say not. For me, the lesson is that when you are using third-party services, you need to be aware that you don't control what happens with them. You should periodically test the services, just to make sure that they work as expected, if you want your sites and applications to maintain the functionality that you built them with.<br />
Hope this helps!Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-79522155454111502122011-04-24T15:42:00.001+10:002011-04-24T15:42:09.291+10:00Something Different<p>It has been some time since my last blog post, and this one is not going to be in the same vein as normal. I will try to post something relating to my completing the underived version of the JOCU project, but work has slowed to almost a crawl as I am in the midst of work around the house (my computer is currently in pieces in the hallway, and I am working mainly through my phone, not ideal for coding).</p> <p>So, if I'm not going to relate something code related, I hear you ask what do I have to say? Well, I have been playing with my phone (I have a Samsung Galaxy S i9000) and came across Darkys ROM. Now, never having looked at a custom ROM before, I was cautious as to whether this was a good idea or not.</p> <p>Well, I must say I am impressed. I have always known that my phone had more grunt than it used, like it was being restrained. After setting up Darkys ROM, I can see a definite performance improvement. There is instant response to touch input, and as a result of using the 9.5 ROM I am on Android 2.2.1, which has its own improvements. Overall, I am very happy with my phone. I mean, I was happy before, but now more so. I even still have access to the built-in Samsung Apps.</p> <p>Something to note though, doing this has its risks. The Galaxy S is one of the few Android phones on the market that you can brick. I also performed the update on my little brother's phone, and while the end result was the same, the process was not the greatest; a few times I thought I had bricked his phone. But, in the end I got it working. 
Doing this is at your own risk.</p> <p>Now for the links:</p> <p>Darkys Rom Home: <a href="http://www.darkyrom.com">http://www.darkyrom.com</a><br>2.2 – 2.2.1 install process: <a href="http://www.darkyrom.com/community/index.php?threads/guide-installing-darkys-rom-on-2-2-2-2-1-i9000-i9000m-only.1041/">http://www.darkyrom.com/community/index.php?threads/guide-installing-darkys-rom-on-2-2-2-2-1-i9000-i9000m-only.1041/</a></p> <p>I did an install from 2.1 to Darkys 9.5, but that one wasn't so smooth (my little brother's phone), so I won't post that here; I recommend that you use Google yourself.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-48754504455440693672011-03-06T00:18:00.001+11:002011-03-06T00:18:55.635+11:00JOCU – The Final Stretch<p>So, I have the basics of an XML parser completed; now I need to implement this to deliver what I am wanting, which is the Java OData Consuming Utility. I am going to work on the loosely typed or underived version first, then I will implement the derived version.</p> <p>Briefly, let's look at the functionality we can expect from the repository. 
</p> <blockquote> <p><em>string GetSource() / SetSource(string)<br></em>This will define what the source of the oData is.</p> <p><em>bool GetMetadataRequired() / void SetMetadataRequired(bool)<br></em>This will set or get if the metadata is required for this repository</p> <p><em>ODataEntities Fetch()<br></em>This will get the entities from the oData query</p> <p><em>IODataQuery Filter(string)<br></em>Applies the filter URI convention to the source</p> <p><em>IODataQuery SetEntityCollection(string)<br></em>This will define which collection to get</p> <p><em>IODataQuery GetEntity(string)<br></em>Gets a single entity from the collection</p> <p><em>IODataQuery SetOrderBy(string)<br></em>Sets what the list is ordered by</p> <p><em>IODataQuery Top(string)<br></em>Sets the number of entities that we want returned</p> <p><em>IODataQuery Expand(string)<br></em>Sets which related entities to expand</p> <p><em>IODataQuery Format(string)<br></em>Sets the format to return the data in (not used really for what I want, should I scrap it?)</p> <p><em>IODataQuery Select(string)<br></em>Identifies which entity items to select, and related items</p> <p><em>IODataQuery InlineCount(string)<br></em>Sets the inline count property on the data source</p> <p><em>IODataQuery CustomURIElement(string)<br></em>Sets a custom element to the query that is sent to the OData source</p> <p><em>string GetMetadata(string)<br></em>This will get the OData collection metadata entry</p></blockquote> <p>And, from the point of view of the entity we have the following:</p> <blockquote> <p><em>string Get(string)<br></em>Gets the value of the key passed through</p> <p><em>string GetMetadata(string)<br></em>Gets the metadata value for the key passed through</p></blockquote> <p>Most of this is derived from the OData URI conventions, with some basic access functions thrown in. I think that this is all I need to implement, but to be honest I’m not sure. 
I have a feeling I will be uncovering more than I expect as I start to tackle the implementation of the repository access, but I need to start somewhere, and this is the best place. I am aiming to finish this off in the next month or so, but as I am doing this in my spare time to have some fun we shall see what happens.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-57849820134949572082011-02-28T15:50:00.001+11:002011-02-28T16:01:54.420+11:00Dynamic Layout with MVC 3 and Razor<p>The past weekend has seen me at Sydney’s Webcamp. It was a great 2 days of looking at some of Microsoft's Web Technologies as well as having a play with them in the labs. Towards the end of the second day, after doing the labs I wanted to, I had a bit of a play and wrote a simple site that changed its layout based on the querystring in MVC 3/Razor. I thought I would share it with you.</p> <p>First, make sure you have the MVC 3 templates, which you can get from Microsoft <a href="http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d2928bc1-f48c-4e95-a064-2a455a22c8f6&displaylang=en" target="_blank">here</a>. These can take some time to install, so be patient. You can also get it via the Web Platform Installer. (Note: I am working from Visual Studio 2010. I am not sure of the compatibility of MVC 3 with previous versions of Visual Studio.)</p> <p>Once that is installed, open Visual Studio, select File –> New –> Project. Open the Visual C# set of Projects, select Web, and then select the ASP.NET MVC 3 Web Application. Also, enter a name for the application (I have used MVC3DynamicLayout). 
Then hit OK.</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/1.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/1.jpg" width="240" height="167"></a> </p> <p>The Project isn’t created yet. On the next screen, you have the options around the type / template of MVC 3 Project to create. For our purposes select the Empty Application and the Razor view engine. Hit OK and your project will be created.</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/2.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/2.jpg" width="240" height="223"></a> </p> <p>So, we have our MVC 3 Project; now to get something that will execute. Below is a picture of the file structure that we will use as a reference.</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/3.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/3.jpg" width="227" height="240"></a> </p> <p>First, we create our initial layout. Right click on the Shared folder (that is under the Views folder), select Add –> New Item. Select the MVC 3 Layout Page (Razor) and enter site_1.cshtml as the Name.</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/4.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/4.jpg" width="240" height="167"></a> </p> <p>This will create a new cshtml file, into which I have added some render sections and an &lt;h1&gt; tag to show that this is layout 1. 
Note, the inline css is just to make the point of the exercise easier, not something you should do.</p><pre class="brush: csharp;"><!DOCTYPE html><br /><br /><html><br /><head> <br /> <title>@ViewBag.Title</title><br /></head><br /><body><br /> <div style="width:700px;"><br /> <div style="border: red solid 1px;"><h1>This is layout one</h1></div> <br /> <div style="border: green solid 1px;"><br /> @RenderSection("Header",false)<br /> </div><br /> <div><br /> <div style="width:100px; border: black solid 1px; float: left;">@RenderSection("Menu",false)</div><br /> <div style="width:596px; border: purple solid 1px; float: right;">@RenderBody()</div><br /> </div><br /> <div style="clear: both; border: blue solid 1px;">some footer text</div><br /> </div><br /></body><br /></html><br /></pre><br /><p>Now we need to add a controller. So go to Controllers in Solution Explorer, right click and select Add –> Controller, and call it HomeController. Do not select the check box to add action methods for create, update, delete and details scenarios.</p><br /><p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/5.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/5.jpg" width="240" height="106"></a> </p><br /><p>And finally, we need to create the view to show with the correct layout. In the home controller, right click anywhere in the Index function and select Add View. 
Make sure you check the “Use a layout or master page” box and have selected the site_1.cshtml layout that we have created.</p><br /><p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/6.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/6.jpg" width="240" height="240"></a> </p><br /><p>Now, let's add the content to the view so that something is displayed. That is in the code below.</p><pre class="brush: csharp;">@{<br /> ViewBag.Title = "Index";<br /> Layout = "~/Views/Shared/site_1.cshtml";<br />}<br /><br />@section Header{<br /><h3>This is the head</h3><br />}<br /><br />@section Menu{<br /><ul><li>item 1</li><li>item 2</li></ul><br />}<br /><br />And here is the rendering of the body<br /></pre><br /><p>Now, if you hit F5 you should get the following webpage.</p><br /><p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/7.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/7.jpg" width="240" height="145"></a> </p><br /><p>So, at this stage you should have a working MVC 3 site, with some really cool looking layout and great UI. (OK, I joke… :D) But we are not done yet, that was only half of what we want. Now for the other half: to be able to change the page layout dynamically. For our purposes, “dynamically” will mean via a querystring value.</p><br /><p>So, first let's create our second page layout. Right click on the Shared folder in the Views folder, then add a new item. As before, MVC 3 Layout Page (Razor) but this time enter site_2.cshtml as the Name. 
Below is the code for site_2.cshtml</p><pre class="brush: xml;"><!DOCTYPE html><br /><br /><html><br /><head> <br /> <title>@ViewBag.Title</title><br /></head><br /><body><br /> <div style="width:700px;"><br /> <div style="border: green solid 1px;"><br /> @RenderSection("Header",false)<br /> </div><br /> <div style="border: red solid 1px;"><h1>This is layout two</h1></div> <br /> <div><br /> <div style="width:596px; border: purple solid 1px; float: left;">@RenderBody()</div><br /> <div style="width:100px; border: black solid 1px; float: right;">@RenderSection("Menu",false)</div><br /> </div><br /> <div style="clear: both; border: blue solid 1px;">some footer text</div><br /> </div><br /></body><br /></html><br /></pre><br /><p>Now, we need to update the controller to accept the querystring argument and then, if that argument is a “y”, let the view know that it needs to change the view layout.</p><pre class="brush: csharp;"> public ActionResult Index(string canIChangeTheLayout)<br /> {<br /> ViewBag.canIChangeTheLayout = (canIChangeTheLayout == "y")?"":"y";<br /> ViewBag.Layout = "~/Views/Shared/site_1.cshtml";<br /> if (canIChangeTheLayout == "y") ViewBag.Layout = "~/Views/Shared/site_2.cshtml";<br /> return View();<br /> }<br /></pre><br /><p>And then we need to make 2 changes to the view: one to alter the layout (from the viewbag rather than hardcoded), and the other to create a link to control the querystring. The new view code is below:</p><pre class="brush: xml;">@{<br /> ViewBag.Title = "Index";<br /> Layout = ViewBag.Layout;<br />}<br /><br />@section Header{<br /><h3>This is the head</h3><br />}<br /><br />@section Menu{<br /><ul><li>item 1</li><li>item 2</li></ul><br />}<br /><br />And here is the rendering of the body<br /><br /><br /><br />Link: @Html.ActionLink("change layout", "index", new { canIChangeTheLayout = ViewBag.canIChangeTheLayout })<br /><br /><br /></pre><br /><p>And voilà! 
If you click on the link you should alternate between the first layout and the layout below:</p><br /><p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/8.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/MVC3RazorDynamicLayout/8.jpg" width="240" height="145"></a> </p><br /><p>Well, I hope you liked this basic review of the MVC 3 Projects and how to change the layout in code.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com2tag:blogger.com,1999:blog-7437682899911204764.post-63809812359976738622011-02-15T00:54:00.001+11:002011-02-15T00:54:01.899+11:00Test Driven Development<p>It has been some time since my last post, work has been rather hectic of late. My downtime has been just that, downtime. However, when I finally got back to my Java oData Consumer I ended up having a good experience in Test Driven Development that I thought I would share.</p> <p>The next small chunk that I wanted to complete was the XML namespace to package name piece, and then applying that package in my code when trying to get a type of node from a known namespace. The first part was easy: I wrote my test, got the code working, and voilà, I had my XML namespace to package converter. Not the best work, it has hardcoded parts that I would love to remove, but it does the job. It is something I can refactor later once I have a complete oData consumer. (A side note, I can add various namespaces and XML definitions easily, so I am thinking of also expanding this to allow for a simple RSS consumer in Java, but that is way down the track and I digress.)</p> <p>The next part I thought was simple. Combine the package name with the expected class name and end up with a complete class name that I can reference in my code when I am trying to create a new node in my tree. 
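</p><p>To give a feel for the shape of these two pieces (the mappings and names below are illustrative only, not my actual code):</p>

```java
// Illustrative shape only: the hardcoded mappings and the naming rule are
// examples, not the real converter. First XML namespace -> package name,
// then package name + element name -> fully qualified class name.
import java.util.HashMap;
import java.util.Map;

public class NamespaceClassNames {
    private static final Map<String, String> KNOWN = new HashMap<String, String>();
    static {
        // The hardcoded part: namespaces we expect to meet in an OData feed.
        KNOWN.put("http://www.w3.org/2005/Atom", "jocu.atom");
        KNOWN.put("http://schemas.microsoft.com/ado/2007/08/dataservices", "jocu.dataservices");
    }

    public static String packageFor(String namespaceUri) {
        String known = KNOWN.get(namespaceUri);
        if (known != null) return known;
        // Fallback: reverse the host name, e.g. http://example.com/ns -> com.example
        String host = namespaceUri.replaceFirst("^[a-z]+://", "").split("/")[0];
        String[] parts = host.split("\\.");
        StringBuilder pkg = new StringBuilder();
        for (int i = parts.length - 1; i >= 0; i--) {
            if (pkg.length() > 0) pkg.append('.');
            pkg.append(parts[i]);
        }
        return pkg.toString();
    }

    // Combine the package with the expected class name for an element.
    public static String classFor(String namespaceUri, String elementName) {
        String className = Character.toUpperCase(elementName.charAt(0)) + elementName.substring(1);
        return packageFor(namespaceUri) + "." + className;
    }
}
```

<p>With something of that shape, classFor("http://www.w3.org/2005/Atom", "entry") gives jocu.atom.Entry, which can then be handed to Class.forName when creating nodes in the tree.</p><p>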
I wrote my test, got it working, then because I had tests that checked the rest of the code I ran those and lo and behold they broke. So, I fixed them up making sure that I was not breaking anything else, and my code went back to being robust.</p> <p>The reason why I am saying that this is a good experience in Test Driven Development (TDD) is that if I had done this the way I am used to, just manually testing my code, there is almost no chance that I would have picked up the bugs that I created. By using TDD I had a repository of tests that I have coded against to meet the requirements (each in and of itself a small requirement). Now I might not have hundreds and thousands of tests to run against my code, but as my codebase grows, the number of tests expands as well to cater for the new code. No new code should be written except to make a test pass.</p> <p>However, proceed with caution here. Tests need to be useful and meaningful. Make sure that you are testing what you think you're testing. Your tests form part of your codebase, so make sure you check that they pass regularly, and also make sure that you know what they are testing. Having heaps of tests that are checking the same thing is pointless. If you have one test to check that your function is working in a set scenario, then move on. You should write a new test for each scenario. Also, when you get a bug you should be able to write a test to show that the bug is in the code, then you can debug your code to make sure that the fix is working, and that it hasn't accidentally broken anything else.</p> <p>A tip on testing that I got from Tatham Oddie (<a href="http://twitter.com/tathamoddie" target="_blank">@tathamoddie</a> on Twitter) is to use the following template for writing a test. It works in any language. First create the test (the name and container in code). Then add 3 comments: Arrange; Act; Assert. 
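</p><p>Laid out in code, the template looks like this (plain Java so the snippet stands alone; in JUnit the same body sits inside a @Test method, and the class under test here is just a stand-in):</p>

```java
// The Arrange / Act / Assert template. The subject under test (StringBuilder)
// is only a stand-in; the three-comment skeleton is the point.
public class ArrangeActAssertExample {
    public static void reverseReversesTheText() {
        // Arrange: set up the object under test (and any mocks or inputs).
        StringBuilder builder = new StringBuilder("odata");

        // Act: perform the one function or method that this test is about.
        String reversed = builder.reverse().toString();

        // Assert: check that what you wanted to be the result was the result.
        if (!reversed.equals("atado")) {
            throw new AssertionError("expected atado but got " + reversed);
        }
    }

    public static void main(String[] args) {
        reverseReversesTheText();
    }
}
```

<p>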
The first part of your test should be to Arrange the objects that you're testing, with mocks, creating the object that you want to test. The second is the Act part: this is where you perform the function or method that you want to test. The third is the Assert part, where you check that what you wanted to be the result was indeed the result.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-22407074318253218982011-01-22T14:22:00.001+11:002011-01-22T14:23:48.803+11:00Agile vs Waterfall<p>Recently I found myself coding less and “planning” more. It took me a while to work out why, and when I did I thought I would write this so that others may avoid the trap I was in.</p> <p>Firstly, some background. As you may or may not be aware, I am writing an API in Java for reading an OData source. Now, I am using TDD principles to code the API (basically write a test case and then write the code that makes this test pass). This means that I need to think in small increments, write a test for the small increment, then write a small increment that satisfies that test, then write another test for another small increment, then write that small increment and so on, you get the idea.</p> <p>Now back to the present. I encountered the next part of my code requirements, implementing XML namespaces within my code. This is where my problems started. I did the right thing by researching some things on XML namespaces, to make sure that I understood how they were implemented and make sure my code didn’t vastly differ from what it should be. Then I did the wrong thing: I tried to plan the entire implementation of the XML Namespace component, rather than looking at the small piece I was dealing with (in this case pulling the namespace and storing it in the required format).</p> <p>I was planning the reading of DTDs and metafiles and how that would work, looking at the manner in which DTDs could be delivered. 
How would I know which was an internal namespace and which was an external? How would I register the external namespace in my code (for example the $metadata of an OData source)? And so on and so forth. That is not how I am wanting to write my code. I want to write a test for a small part, then write that small part. In this case the small part was identifying the namespaces and putting them into a useful format.</p> <p>This got me thinking about agile and waterfall practices. Agile methodologies would say that you get the requirement and break it down into smaller pieces, and those into smaller pieces until you have something that can be done in a defined period of time (for example, in scrum you would break it down into smaller pieces that would be done within an iteration). Whilst you know what the requirements are, you don’t design the entire solution for what may or may not happen, nor do you plan everything out at the start. You break the requirement into smaller chunks, and those into still smaller chunks, until you have a requirement that you can complete within a week or 2.</p> <p>Waterfall is very different to this. In the waterfall approach, you define every requirement that you can think of, plan the end result to the finest detail, then you start coding to the fine details that you have planned to deliver. You don't start coding until you have the UX defined, the web services documented and the classes worked out, along with what each class's properties and methods are. </p> <p>Now, without going into a discussion about waterfall vs agile development practices, I will say my thoughts. While there are parts of one in the other and vice versa, the two cannot coexist. For example, when you're doing agile you need to know the end result, what the stakeholder is asking for. That you can break down into smaller requirements and so on until it is small enough to complete in an iteration or sprint. 
And conversely in waterfall development, you cannot say “deliver the whole project at once”; you need to break it down into smaller chunks that a developer can work on.</p> <p>The main point of difference for me is that in my opinion, agile is about meeting the requirements that you have been asked to deliver. No more, no less. You can't tell the stakeholder what they want, you can only guide them along as you code and as they see the work. This doesn't mean that if they say they want a field you don't do any error checking on it, or that you just make it work and not deliver a quality product. What it does mean is that you don't enlarge the requirements to what you can see being needed down the track. You meet the current need. If the database table needs a title, description and timestamp, don't add a link to a status that is then part of a different requirement. By the same token you can guide the requirements along by asking questions like “Where is this going to be reported?”, “Is there a need to report on this?”, “Who should be able to edit this?”</p> <p>With waterfall projects, on the other hand, you can vastly change the requirements because you are part of the planning process. Nothing gets done until everything is defined and some form of specification documented. You research to make sure that it is future proof, so that you can work on another project and this one won't need any attention. You tend to guide the requirements and specifications, rather than the development. But, once it is locked in, it takes a lot to change one line of the specifications that you're working to.</p> <p>Personally, I prefer agile development processes over waterfall processes because you can be flexible: if you code something that the user changes their mind on, you can adapt. (Hence, why it is called agile.) You also only deliver what is needed, not what you think is best. 
The issue with stakeholders is that they don’t always communicate what they want well to developers, project managers or product owners. Hence the need to ask questions and listen to the answers. You also end up with leaner code (it does what is needed, and nothing it doesn't) rather than bloated code with parts that are rarely, if ever, used.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-79994024305598122242011-01-16T15:14:00.001+11:002011-01-16T15:23:56.460+11:00BlackBerry DevCon<p>I had the opportunity to attend the BlackBerry DevCon Asia over the last few days in Bali. There was a lot there, but I thought I would share a few of the highlights for me. [Note: I’m a developer, not a reporter, so this is just my thoughts, not an article for a publication.]</p> <h2>Playbook</h2> <p>There is a new tablet coming to the market called the BlackBerry Playbook. Over the conference I had the opportunity to play with one and I must say I was very impressed. The graphics on the device look amazing; even with 3 rendering engines being used at the same time, there is little discernible lag in playback in any of them. It has a 7” screen which might be a tad on the small side, but it works for the device.</p> <p>Also, the OS is based on QNX which is a different approach to the other 2 main OSes used on tablets, iOS for iPad and Android for other tablets. I like this move because it keeps the tablet and the phone separate. They might be similar, but to use the same OS is like using Windows XP as a server. It can be done, but it isn't what it was designed to do.</p> <p>My main criticism is that there is no bar along the top or bottom when you're in an application. For example, in the Android OS there is the top bar that has the time and notifications, and also has the ability to be pulled down to interact with those notifications. This is missing in the device. 
Once you open an application the entire bar goes and the application is full screen. In some cases like games and remote desktop applications that is ideal, but most of the applications (not games) that I use leave that bar there.</p> <p>Why is this important? Let me give you a real life example. I'm sitting down and turn on my PlayBook, open Twitter and my emails, then start composing a blog post. As there is no notification bar I can't see if I get any emails or any Twitter DMs or mentions that I might want to have a look at. To check, I have to go out of one app and into the other. Today people are used to reviewing the notifications area to see if there is something that they need to deal with. That said, that is basically my only criticism. I think that this device will be one to watch as it gets closer to the release date. The specs are changing, but the current specs can be found <a href="http://us.blackberry.com/playbook-tablet/BlackBerry_PlayBook_v5.pdf" target="_blank">here</a>.</p> <h2>SuperApps</h2> <p>This is an oldie but still one that was pushed heavily over the conference. The idea is that the applications that you write are linked to the native operating system, contextual application menus (both location based and from the app's point of view), and linking with other applications as well. An example of this is a translating application. As a normal app, this would be a notepad-like application that you would open, type in one language and then translate the text, then copy the text translation to the program that you want.</p> <p>With super apps, you would create a menu item in your email application. The user would then type the text of the email in their language, in the email program. 
Then the user would go to the menu, select the translation software (which might pop up a dialogue to ask which language to translate to), and it would do the translation for you and put the text in your email.</p> <p>I guess a new part of this is that you can do this in third party apps. For example, with the translation application, you could tell the OS that it should add the Translate contextual menu item for every text field it displays. Then when your timesheet program fires up, every text box will be able to be translated to a different language.</p> <h2>WebWorks Programs</h2> <p>This one is rather interesting. What this allows for is the ability for a developer to use HTML and JavaScript (including jQuery and some other third-party JavaScript APIs) to write an app, then the WebWorks API will bundle that into a native installable file. It is hard to tell the difference between a Java based app and a WebWorks based app. This, in my opinion, allows for speed of development to increase (it's just HTML and JavaScript, which, while tricky, are not unique to any device, so debugging becomes easier). Using AJAX you can perform client/server operations, and some native functionality has been exposed to the JavaScript API so that you can make your application run as though it were a Java based one.</p> <p>There are 2 other things to note. Firstly, the WebWorks environment will be on both the BlackBerry mobile phone OSes (I think 5.0 and up, not sure about 4.6) and the PlayBook (along with Adobe Air). The other is more exciting. Using CSS and JavaScript, you can write the one application and build one version for the PlayBook, and one version for the Phone OS, and the application will determine which device you are using and allow you to customize the experience for each. (Just like we do for different browsers at the moment.) 
Yes, I guess that you would expect that from something based in HTML, but I still think it is exciting (and powerful) nonetheless.</p> <h2>HTML Debugging</h2> <p>As a side note to this, you can also debug HTML in a browser on an emulated phone through Chrome or Firefox/Firebug dynamically on your desktop. Makes working out the odds and ends of the HTML issues easier to deal with. This uses the WebKit engine on both to link them together.</p> <p>These were just some of the things that I heard and saw that I think will be cool to use when you need to. Look out for more information on that PlayBook, I think it has the capability to be a game changer for the tablet device market.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-50935851815413560262011-01-08T16:17:00.001+11:002011-01-09T00:15:25.742+11:00Running a single test with JUnit and Eclipse<p>So, I had a slight problem with using TDD in Eclipse. Not a problem as such, more an obstacle that I needed to overcome. What I wanted to do was to write a test, get that working, then re-run all the tests, check that the changes to the code hadn’t caused another error to correct, then move on to the next test. I found a few odds and ends about how this could be done, but nothing seemed to work.</p> <p>After some looking around I was able to get this working. Here’s how I did it. For this example, I am using JUnit 4 and Eclipse 3.6.1 (Helios). 
Please note, to see the images more clearly just click on the image.</p> <p>Firstly, go to Run –> Run Configurations</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/RunConfigurationMenu.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/RunConfigurationMenu.jpg" width="189" height="240"></a> </p> <p>Then, right click the JUnit node in the tree and select New</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/RunConfigurationsDialogueBox.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/RunConfigurationsDialogueBox.jpg" width="240" height="231"></a> </p> <p>Enter the name for the run configuration (I have called mine XMLSourceRepository_GetTokensTest)</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/NewRunConfigurationsOptions.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/NewRunConfigurationsOptions.jpg" width="240" height="233"></a> </p> <p>Select the Method that you would like to test (default is all methods).</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/MethodSelection.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/MethodSelection.jpg" width="240" height="221"></a> </p> <p>Now, I have Android JUnit and Eclipse JUnit launchers, so down the bottom I need to press the hyperlink and the following dialogue box appears. 
Select the “Use configuration specific settings” option and then the Eclipse JUnit Launcher.</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/LauncherSelector.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/LauncherSelector.jpg" width="240" height="240"></a> </p> <p>And you're done. Press the run button to run the test, and now it is in your run and debug button menus.</p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/DebugOptions_withSingleTest.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/DebugOptions_withSingleTest.jpg" width="240" height="152"></a></p> <p><a href="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/RunOptions_withSingleTest.jpg" target="_blank"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://dl.dropbox.com/u/15397896/BlogImages/2011/JUnitEclipse/RunOptions_withSingleTest.jpg" width="240" height="151"></a> </p> <p>So there you have it. Using JUnit and Eclipse Helios (3.6.1) I have been able to select a single test to run without the overhead of other tests being run as well.</p> <p>Now, to me, this is rather complex and time consuming: you write a test, change the run configurations to add or alter an existing configuration, make a few changes to the code, then have to do the whole process again. It would be easier to be able to do this via the source or a test list of some kind. I am not saying that this isn’t possible, but I couldn’t get it to work. 
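</p><p>For completeness, JUnit 4 does expose a programmatic route — org.junit.runner.Request.method(SomeTest.class, "someTestMethod") run through JUnitCore — though I haven't managed to wire that into the Eclipse workflow. The dependency-free sketch below (with made-up class and method names) shows the same idea using plain reflection, just to illustrate what running one test from the source would look like:</p>

```java
// A dependency-free sketch of "run just this one test method" driven from
// code, using reflection. JUnit 4's Request.method(Class, String) is the
// real version of this; the class and method names here are made up.
import java.lang.reflect.Method;

public class SingleTestRunner {
    // A stand-in test class; imagine this is the real JUnit test class.
    public static class XMLSourceRepositoryTests {
        public void getTokensReturnsSomething() {
            String tokens = "a,b,c";                      // pretend result
            if (tokens.isEmpty()) throw new AssertionError("no tokens");
        }
        public void someSlowOtherTest() {
            throw new AssertionError("should not have been run");
        }
    }

    // Instantiate the test class and invoke exactly one named method.
    public static String runSingle(Class<?> testClass, String methodName) throws Exception {
        Object instance = testClass.getDeclaredConstructor().newInstance();
        Method test = testClass.getMethod(methodName);
        try {
            test.invoke(instance);
            return methodName + ": passed";
        } catch (Exception e) {
            return methodName + ": failed (" + e.getCause() + ")";
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runSingle(XMLSourceRepositoryTests.class, "getTokensReturnsSomething"));
    }
}
```

<p>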
If you know how to do that, please let me know so that I can correct this post with the easier way of managing tests!</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com2tag:blogger.com,1999:blog-7437682899911204764.post-1146852447715012062011-01-02T16:27:00.001+11:002011-01-02T16:37:55.875+11:00Java Open Data Protocol API<p>Recently, I have played with Open Data Protocol (OData) using Java and I am not entirely happy with the APIs that I have had to use. So, in true developer style I am going to attempt to write my own. I have called it JOCU (Java OData Consuming Utility). I figure that by doing this I will learn the OData Protocol, touch up on my Java coding, practice TDD and JUnit, and maybe play with Git or some other source control.</p> <h2>Open Data Protocol</h2> <p>Now for some understanding. The Open Data Protocol was put together by various organisations to surface data in a consistent way that was manageable cross platform. It allows for updating and retrieving of data to the data store via the URI conventions. I won't go into it in detail because I am likely to get it wrong, and that isn't the idea of this post. For more information, go to <a href="http://www.odata.org/">http://www.odata.org/</a>.</p> <h2>So, on with the show… my first hurdle</h2> <p>I have started the API and have already come across something worth noting down. I had written a function to get the source code for a file, called GetSource. 
The original code is below:</p><pre class="brush: java;">public String GetSource(String URIForSource) throws MalformedURLException, IOException {<br /> <br /> URL url = new URL(URIForSource);<br /> HttpURLConnection URLConn = (HttpURLConnection)url.openConnection();<br /> BufferedReader in = new BufferedReader(new InputStreamReader(URLConn.getInputStream()));<br /> <br /> String str;<br /> String result = "";<br /> while ((str = in.readLine()) != null) <br /> result = result + str;<br /> in.close();<br /> return result;<br />}<br /></pre><br /><p>Now for HTML files and for static XML files this is fine. However, when you use this for an OData source, it dies (returns a 400 Bad Request error). I then thought that there was something wrong with my URL so I copied it into a browser and it worked! I was left with the question: why would the URL work in a browser yet not in code? After some investigating, I found that I needed to identify the types of responses that I would accept. As a result, I added one line to my code and came up with the below:</p><pre class="brush: java;">public String GetSource(String URIForSource) throws MalformedURLException, IOException {<br /> <br /> URL url = new URL(URIForSource);<br /> HttpURLConnection URLConn = (HttpURLConnection)url.openConnection();<br /> URLConn.setRequestProperty("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");<br /> BufferedReader in = new BufferedReader(new InputStreamReader(URLConn.getInputStream()));<br /> <br /> String str;<br /> String result = "";<br /> while ((str = in.readLine()) != null) <br /> result = result + str;<br /> in.close();<br /> return result;<br />}<br /></pre><br /><p>As you can see in line 5, I set the connection to accept xml, along with other types that I needn’t go into. 
As a result, the code is now working perfectly (under what tests I have written to date, that is).</p><br /><p>Next is to manage the tokens that the XML returns.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-44961876804084285212010-12-29T15:13:00.001+11:002011-01-02T15:56:07.618+11:00Audit 2<p>After some feedback and opinions from some co-workers and others in the business, I reviewed my audit codebase. Finally I got a few hours to finish it, and this is the result.</p> <h4>The Goals</h4> <p>First of all, let me review what I was setting out to achieve. Looking at a previous post (<a title="Previous Blog Post - Audit Again" href="http://lkoutzas.blogspot.com/2010/05/audit-again.html" target="_blank">Audit Again</a>) here is a summary of what I was wanting to do:</p> <ol> <li>Implement a process log system <li>Remove the reliance on specific databases (that is, the core app could be anywhere in relation to the archive and audited tables) <li>Refactor the code to make it more readable and also break it down into more re-usable parts <li>Use Visual Studio 2010 Database Projects to implement these changes</li></ol> <p>It isn't a large list of improvements, but a list that I thought was worthwhile. So, now for the report card: how did I go?</p> <h4>Implement a Process Log system</h4> <p>When looking at the log entries that were being generated I got to thinking: what would happen if 2 management processes (or more, for that matter) were running at the same time? For example, an audit archive and a re-compile? They should (in theory) not affect one another, so should be able to run at the same time. But the logging would be all over the place. There could be an intermingling of log entries from both processes that would make debugging that bit harder.</p> <p>This was very simple to implement. 
I just updated the Log Entry table to have a BigInt in it (called Process), then created a table called Process and linked its ID to the Process field in the log entry. After that it was a straightforward process of updating the code to cater for it.</p> <h4><strong>Decouple archive from main database</strong></h4> <p>This one required more thought, but in the end was simple enough. I wanted to have the main tables for the audit in one database, and then be able to have the audited tables and their immediate audit entries in a second database and the archived data in a third. I am not saying that this is the way that I would use this, but it means that you can have the audit management tables and stored procedures in either database. That way the archive database can just have the archive, and the application is maintained in the main database; or the audit tables can be in the main database, and the audit management and archive tables and code can be in another database.</p> <p>To do this, I just created an entry in the configuration table to identify the location of the audit archive tables, then refactored the code to use that location instead of the database that the management code and tables are in. And that was it.</p> <h4>Code Refactor</h4> <p>With a Visual Studio 2010 Database Project I was able to do this very easily. It treats the database project just like any other, which means that when you change the name of a table or stored procedure, it will change all of the references to that database object within the project. That said, I had other issues with Visual Studio that I will touch on below.</p> <h4>Visual Studio 2010 Database Project</h4> <p>And here we come to the part where I didn’t achieve what I intended to. I had hoped that I would be able to use the Visual Studio Database Project to manage the code base. This would have been better in the long run, increasing maintainability and improving deployment.
I won't rehash my previous blog (<a href="http://lkoutzas.blogspot.com/2010/05/first-one.html" target="_blank">here</a>), but while it was great for maintaining the code, it (in my mind at least) failed at deployment. As a result I didn't use it for anything other than changing the names of the tables and stored procedures.</p> <h4>Audit 2</h4> <p>So, that was how I went about what I aimed to do. If you want the source code, please click <a href="http://dl.dropbox.com/u/15397896/NyxAudit.zip" target="_blank">here</a>. It is provided as is, and you use it at your own risk. If you have any questions, please leave a comment.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-9946278399447741042010-05-22T21:17:00.001+10:002010-05-22T21:58:34.198+10:00The First One<p>So, I have re-named the data structures. I used the database project template in Visual Studio 2010, and to be honest I wasn’t impressed. As a database management tool, I found it almost impossible to commit the updated changes back to the database.</p> <p>But first, let me look at what I did, and what I liked and didn’t like at each step. My first step was to take a clean install of the project and get that imported into the database project. That worked pretty well. It created one file for each database object (stored procedure, table etc.), which I thought was great.
It created a good structure which was pretty easy to understand (for a non-DBA like me).</p> <p><a style="border-right-width: 0px; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9duaqLPiBq4M2-m384JUXh21fwsn8cPnPWNYMCbgOV7c-Qi4JGQohNu4WeFdeewaFg4O1C-sRLBJMuTY6RtktokNjYtIwp8GNIULozoiJeBNIfFNaUo5nXPG0jyiJ5gO_HRst3S8YtV_U/s1600-h/FolderStructure%5B11%5D.jpg"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="VS2010 SQL Project FolderStructure" border="0" alt="VS2010 SQL Project FolderStructure" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWevQsuFnBVXzSOeTsjUpwTyEHqvJL6ljmlbk4eziSGMloPD4jYfbmBRrTMGOUKuRTf83NtNQVGlXIptaRZ98dJD_htUI3RZ0r40ZfRoQfjMn2Mv5ra1i2rHaWogzOudn_AQxyHvfYuBmJ/?imgmax=800" width="214" height="240"></a></p> <p>To me, this structure is very similar to the structure that you see in SQL Server Management Studio, which is a good thing in my mind.</p> <p>Then I did one of the items on my version 2 list for the project: renaming the database tables. The refactoring was very much in line with the way code gets refactored in VS2010. It was quick and easy, and every reference was renamed. Needless to say I did like that.</p> <p>Now to the parts I didn't like. I wanted one of the tables to save its data as part of the project, since it was part of the project rather than the dynamic data of the system usage. I could not work out how to do this. I would have thought that a database project would have been able to allocate some default data to the tables, if not mark some tables as having all their data loaded from the project into the implementation of the project.</p> <p>This leads to my next problem. I got the source data structure from a database. I made a relatively simple if somewhat extensive change.
I then wanted to update the database with the changes and it would not update. It kept on returning a user error or something. I am going to be honest here. This, to me, is the core of a database project. Writing scripts is one thing. The ability to use a GUI to manage the code is awesome. The UX for updating the data structures is not.</p> <p>As a result, at this time I am not going to use the database project for my changes, at least until I am able to resolve the 2 issues that I have encountered so far. So, at this stage I have renamed the database tables, and am about to tackle another of the items on the list.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-64211454691320760682010-05-16T15:48:00.001+10:002010-05-16T15:49:17.924+10:00Audit Again<p>So, I am reviewing the audit code that I have written. Following the review of some posts by a friend, I am going to write the next version of the audit functionality that I finished a while back (a whole 2-4 weeks ago). I will use that as a basis, ensuring that I maintain the aims set out in the previous version (for a refresher, go <a href="http://lkoutzas.blogspot.com/2010/05/sql-audit-part-1-aims.html" target="_blank">here</a>).</p> <p>Now, the aims for the next version are as follows:</p> <ul> <li>Process logging: not just a generic log table, but one that is keyed to each process so that the logging system can differentiate between 2 processes running at the same time. Also increased logging for better debugging of issues.</li> <li>Allowing for the core parts of the audit to be in the line of business database, but still archiving to a separate database.
The audit should be configurable so that it can run out of either the archive database or the line of business database, or both (where the business has 2 audited databases, there should only be one archive database).</li> <li>Rename the data objects so that they make logical sense.</li> <li>Further refactor the code so that there are as many re-usable parts as possible.</li> <li>Use the Visual Studio 2010 Database solution template to write the code in.</li></ul> <p>That is all I can think of as improvements for now. If I think of anything else that I would like to add to the next version, I will let you know.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-5964797899004940162010-05-15T22:49:00.001+10:002013-01-04T16:10:29.845+11:00A Breath Before the Start<p>Well, I had a friend look at my blog the other day. He is someone whom I respect in the technical world, knows heaps, and is just an all-round good guy. (For those wondering, the man is Richard Banks, who you can find <a href="http://www.richard-banks.org/" target="_blank">here</a>. He is also worth a follow on twitter, <a href="http://twitter.com/rbanks54" target="_blank">over here</a>.) Anyway, he pointed out some spelling and grammar issues, which I was expecting. He also gave me some tips and tricks for blogging, like the code colouring and where I can host files without any ads, which I will incorporate into my last few posts. I'm not going to tidy every post, just the last two. I hope to do that this weekend.<br><br>Then he ended his advice with a “but”. He wanted to know why I was looking at using prefixes on table names, and my thoughts on a "decoupled" application. And he recommended that I review some other patterns, such as the repository pattern. Taking this advice, I did some searching on the web to look into the repository pattern, and as he suggested I reviewed my thoughts and the post content.
Following this, I have decided that he is right (as he usually is).<br><br>As a result, I am going to make some changes to my approach. Firstly, I am going to write an updated version of my audit code, one without name prefixes, that makes more sense with regard to naming conventions. (As you can see, I am dropping the prefix naming convention on the data layer.)</p> <p>Secondly, I am going to write a decoupled application correctly: one that has its layers decoupled, not the modules within each layer. This means some changes to my intentions, but I can make that work. I am still going to maintain the audit functionality in a separate database from the Line of Business database, and I will explain my reasoning for that when I release a new version of that little project.</p> <p>Thirdly, I am going to write code that has a designated purpose. I am going to achieve something that I can use as a vehicle to learn. It has been said that without vision people perish. Probably from not having a goal and aimlessly wandering around until they did. I am hoping that with a goal to aim for, I can be more focused on doing and learning rather than wandering down a dark alley.</p> <p>As an aside, this is the point of me coding and writing blogs about what I am doing and my thoughts on it all. I want to learn, and I just happen to be in a position where I can ask people to check my blog and code. If one stops being teachable, stops wanting to learn, then they are, in my opinion, not worth listening to. I don’t want to be one of those people. Other than fixing formatting issues and typos, I won't be altering or removing the previous posts.
It is a reminder to me of mistakes I have made, to teach me not to make them again.</p> <p>So, my next post will detail what I am aiming to do with my audit scripts, which I will try to get out as soon as I can.</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com1tag:blogger.com,1999:blog-7437682899911204764.post-30738567615700905922010-05-10T14:33:00.006+10:002010-05-14T22:41:39.563+10:00Names and MeaningsAs you might expect, I am going to use the Autofac IoC container to write a decoupled application that does "stuff". I haven't defined much of what I want this application to do; as I have mentioned in an earlier post, I just want to learn.<br><br>One of the things rattling around in my head is how to name the database objects. It seems like something trivial, I mean, what's in a name? But names have meaning, and just as with naming children, changing them later in life can cause more problems than it is worth!<br><br>My options as I see them are to give the objects names that match the tables, or to add a prefix to the table names while still keeping some semblance of the name of the object. Let me go through my thoughts.<br><br>My first option is to give the tables names that reflect the objects they represent. Names like user, settings and individual. This would help in matching the data store to the entity layer, and any custom queries would be easy to read.<br><br><pre class="brush: sql;">Select * from [user]<br> inner join individual on [user].id = individual.id<br></pre><br>This is easier to read and understand at a glance. It returns the user and the individual that it is attached to. Simple. But here is my problem with that. I am attempting to write a decoupled application. What if I want to move one of the parts to another set of servers? For example, if the core of the app gets more use than the rest, then performance-wise wouldn't I be better moving it if I could?
And I don't just mean the presentation layer or the model, but the data layer also? The problem with this naming approach is that I can't tell, from a data layer perspective, which table belongs to which module.<br><br>The second option is to attach a prefix to the table names, which could signify what part of the application each is a part of. It turns the earlier SQL statement into<br><br><pre class="brush: sql;">Select * from acsUser<br> inner join coreIndividual on acsUser.id = coreIndividual.id<br></pre><br>which is not as readable, but I can tell the module that each part comes from. Now, for a simple example it isn't too bad, but for more complex examples it can get rather difficult to manage.<br><br>I guess the question is how decoupled do I want the code to be? I was speaking to a friend and he said that a decoupled application was rarely going to be decoupled at the data layer. How many places can afford to have multiple database servers in production (that are not replicated and the like)?<br><br>Also, if I decouple the data layer, that can create its own issues, like how to search across multiple parts of the app? How to maintain data integrity across multiple sources? And another issue: how do I maintain data access security across those different modules?<br><br>After much thought, I have decided to write an extremely decoupled app, one where it is decoupled from the data layer up. There are 2 reasons for this. One is that I am doing this to learn how to improve my coding. And two, I like a challenge. So I am going to write each "module" with its own database and web service, and then write integrative UX environments that bring it all together! So now I need to review the architecture of the app, to make sure that it is in line with this.
<br /> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-22693512291818992892010-05-02T00:59:00.001+10:002010-05-16T01:34:19.017+10:00Autofac 2.2 and Me: My First Look at an IoCWell, since I had the database audit working to my satisfaction (not to a production multi-server, high-volume level, but working enough), it is now time to start working on my C# code skills. While I am working to get a system going, I am not aiming at a specific project like I was for the audit, but rather making sure I am playing with various technologies. Things like C#, WCF, Silverlight, maybe some WPF, and MVC. If there are any other things I should be trying, just let me know.<br><br>So, my first learning is going to be the concept of an IoC container. Here are the simple steps I took to implement it. Please note I do not have any direct experience with IoC containers, and only a week's worth of reading here and there on Autofac and IoC containers. This basically means that I have gotten it working, but it isn't necessarily the best implementation, so please, any feedback would be appreciated.<br><br>But before I explain how I got it working, let me explain (to my understanding) why you should be using an IoC Container. One of the hardest things to do is make any major changes to an existing system. This is because they are usually written with a single objective in mind: to fulfil the requirements. Now, when you want to change a part of your code, it is simple. However, if you want to replace an entire section of the code (for example, change the database type), this gets harder. Using interfaces has helped with this, in that you can write to an interface, and when you want to change the implementation you can create another implementation of the interface and substitute it for the old version. But the downside is that you need to find all the references and update them.
Here is where IoC Containers come into play.<br><br>With an IoC container, in one place you can change the implemented class, where the IoC is built, rather than having to find every instantiation of the old class to change it to the new one. This can lead to a more decoupled application, one that has as many of its parts (code wise) not reliant on the other parts, but on definitions of what the code is meant to do. As a result, these parts can be interchanged with greater ease.<br><br>Now, back to the implementation of the Autofac IoC container.<br><br><b>The Steps to IoC Goodness</b><br><br> <ol> <li>Downloaded the source <li>Added the resources to my projects <li>Created the Container <li>Updated my code to use the container</li></ol> <div style="margin: 0px"><b>Downloaded the source</b></div> <div style="margin: 0px">The first step was to download the source from the project home. The homepage of the Autofac project is <a href="http://code.google.com/p/autofac/">here</a>, and the download page is <a href="http://code.google.com/p/autofac/downloads/list">here</a>. Please note that there is a .Net 3.5 version and a 4.0 version, so choose the version that best suits you. (Or best suits your code, to be more precise.)</div> <div style="margin: 0px"><br></div> <div style="margin: 0px"><b>Added the resources to my projects</b></div> <div style="margin: 0px">Next, I added the dll files to my projects. I added them to my code project and also the test project. 
The code project because that is where object interfaces and the like are registered with the IoC Container, and also my test project because that is where the container is tested to make sure that I have gotten the container working properly (the whole point of a test).</div> <div style="margin: 0px"><br></div> <div style="margin: 0px"><b>Created the Container</b></div> <div style="margin: 0px">Next, in my project, I added the container to my code.</div><pre class="brush: csharp;">using System;<br />using System.Collections.Generic;<br />using System.Linq;<br />using System.Text;<br />using Autofac;<br /><br />namespace Nyx.Core.Universal<br />{<br /> public sealed class AutofacContainer<br /> {<br /> // Added a private constructor to stop the auto creation of a public one<br /> private AutofacContainer()<br /> {<br /> }<br /><br /> public static IContainer BuildContainer()<br /> {<br /> var builder = new ContainerBuilder();<br /><br /> builder.RegisterType<DatabaseSession>().As<IDatabaseSession>();<br /> builder.RegisterType<DataAccessTest>();<br /> <br /> var container = builder.Build();<br /> return container;<br /> }<br /> <br /> }<br />}</pre><br /><div>Please note that the basis of this is to prove a test, to make sure that the code is functioning properly. The constructor for the <span style="font-family: 'Courier New', courier, monospace" class="Apple-style-span"><span style="font-size: small" class="Apple-style-span">DataAccessTest </span></span>object has a dependency on <span style="font-family: 'Courier New', courier, monospace" class="Apple-style-span"><span style="font-size: small" class="Apple-style-span">IDatabaseSession </span></span>(see the <span style="font-family: 'Courier New', courier, monospace" class="Apple-style-span"><span style="font-size: small" class="Apple-style-span">DataAccessTest </span></span>code below). Now, note that the 2 register statements are different.
This is because, in the first case, I am telling Autofac that when it sees the interface IDatabaseSession it should use the concrete class DatabaseSession. This is not actually required, but as I am just learning I explicitly code it that way. If you look at the second register line, you will see that it is just registering the class, with no interface reference. This is how I could have written the first line, the interface being inferred.</div><pre class="brush: csharp;">using System;<br />using System.Collections.Generic;<br />using System.Linq;<br />using System.Text;<br />using Nyx.Core.Entities;<br />using Nyx.Core.Universal;<br />using NHibernate.Linq;<br /><br />namespace Nyx.Core<br />{<br /> public class DataAccessTest<br /> {<br /> IDatabaseSession _sessionLocator;<br /><br /> public DataAccessTest(IDatabaseSession sl)<br /> {<br /> _sessionLocator = sl;<br /> }<br /><br /> public User GetUser()<br /> {<br /> User result = null;<br /><br /> using (var session = _sessionLocator.OpenSession())<br /> {<br /> var query = from ticketingOffer in session.Linq<User>()<br /> where ticketingOffer.UserId.Equals(1)<br /> select ticketingOffer;<br /> result = query.FirstOrDefault();<br /> }<br /> return result;<br /> }<br /> }<br />}</pre><br /><div>The database session is just a connection that, through Fluent NHibernate, connects to a MS SQL 2008 Express edition database, and I won't bore you with that code.</div><br /><p><strong>
Updated my code to use the container</strong></p><br /><p>So, having updated the code by adding the IoC container to the core and created a test class to prove my point, I have written a unit test to show that it is all working.</p><pre class="brush: csharp;">[TestMethod]<br />public void Autofac_database_integration_test_should_return_valid_user()<br />{<br /> using (var container = AutofacContainer.BuildContainer())<br /> {<br /> var TestObject = container.Resolve<DataAccessTest>();<br /> var result = TestObject.GetUser();<br /> <br /> Assert.AreEqual(1, result.UserId);<br /> Assert.AreEqual("lkoutzas", result.UserName);<br /> }<br />}<br /></pre><br /><p>Now, this code creates the Autofac IoC container, and then it resolves the DataAccessTest class, automatically resolving the DatabaseSession class.</p><br /><p>And there you have it. I haven't added everything that I have done, just the highlights to get Autofac working. Again, this is me getting it working, not the implementation of a pattern or anything.</p><br /><p>Happy Containing!</p> Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-29137565559842281472010-04-26T17:07:00.000+10:002010-04-26T17:07:37.872+10:00SQL Audit Part 9: Wrapping It UpSo, to wrap up, we have a SQL-level audit that fires a trigger to save the audited data. You will notice that I get the primary key, so using various tools you could set up the UX to notice that the underlying objects are audited and then return the audit trail for a specific object.<br />
<br />
Anyway, the "install" file is located at the following location:<br />
<a href="http://www.mediafire.com/file/mnvnm3t1zx2/CompleteSQLDataAudit_v1.0.sql">MediaFire File Download: CompleteSQLDataAudit_v1.0.sql</a><br />
<br />
Do let me know your thoughts, or if you have any questions about the source code. You should be able to open it in MS SQL Server Management Studio and run it in any 2008 database (whether it works in other versions I can't say, as I don't have them; see the first post about that). The audit control tables and stored procedures do not need to be in the same database as the tables to be audited. And you can audit tables in more than one database.<br />
<br />
To start auditing a table you can just run the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">StartAuditFor </span>stored procedure, and so long as the account has access to the source table it will all work.<br />
<br />
To re-compile the audit structure after updating a source table structure, execute the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">ReCompileAuditStructure </span>stored procedure and it will check all currently audited tables. <br />
<br />
To archive the old audit records, run the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">ArchiveAuditTables </span>stored procedure. I would recommend this be done in a SQL Job, which I would run before the nightly backup for your SQL database (might as well clean up the data before you back it up).Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-73765659341414962502010-04-26T15:56:00.001+10:002010-04-26T17:10:33.582+10:00SQL Audit Part 8: Data Archiving ScriptThe archiving of the records is a simple affair. All that we need to make sure of is that we don't archive something twice, and that we don't delete something we haven't archived. For high-volume tables, this can get interesting, especially if you want to remove the audit records as soon as they are written. (That is, every time that the archive script is run.)<br />
<br /><br />
So, to get around that, we mark all of the records we want to copy first, then we copy all the marked records, then delete the audit records that are marked. That makes it nice and neat. While it creates an extra query, it is a safeguard that is worth it. When playing with audits, you can never be too careful.<br />
<br /><br />
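The mark, copy, delete sequence above can be sketched outside SQL. The in-memory version below (the `ArchiveSketch` class and its names are mine for illustration, not the stored procedure itself) shows why the marking step matters: rows written after the mark are untouched by both the copy and the delete, so nothing is archived twice or deleted unarchived.

```java
import java.util.ArrayList;
import java.util.List;

public class ArchiveSketch {

    // An audit row: payload plus a flag used to mark it for archiving.
    static class AuditRow {
        final String data;
        boolean marked;
        AuditRow(String data) { this.data = data; }
    }

    final List<AuditRow> audit = new ArrayList<>();
    final List<String> archive = new ArrayList<>();

    // Step 1: mark everything currently in the audit table.
    void markAll() {
        for (AuditRow row : audit) row.marked = true;
    }

    // Step 2: copy only the marked rows into the archive.
    void copyMarked() {
        for (AuditRow row : audit) {
            if (row.marked) archive.add(row.data);
        }
    }

    // Step 3: delete only the marked rows. Rows inserted after the
    // mark (by a busy application) survive for the next run.
    void deleteMarked() {
        audit.removeIf(row -> row.marked);
    }
}
```

An audit row that arrives between the mark and the delete is neither copied nor removed, which is exactly the safeguard the extra query buys.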
So that about wraps up the project, more or less. The last part in the series details where you can find the source code (complete) and some other odds and ends.Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0tag:blogger.com,1999:blog-7437682899911204764.post-89032083962076314152010-04-26T12:17:00.000+10:002010-04-26T12:17:30.766+10:00SQL Audit Part 7: Audit Table Maintenance Script<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Now we have the audit started and working, what is needed now is to manage it. By manage, I mean that when we change the source table structure, we don't want to have to manually alter the tables that store the audit data.</span></span><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Rather than recreate slightly modified code to manage the tables, I coded the creation script so that it will do both. Yes, I know that this is cheating (I planned the development that way), but coding is meant to be done smartly, not just with brute force but also finesse. So, to manage both in the one script, there are a few principles to use.</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><b>1. Check before create.</b></span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Before you do what you need to, check that it hasn't already been done. In some cases this will mean that if it is there you do nothing, in other cases you will delete it so that you can recreate it, and in others you update instead of delete. That is a decision that you can make based on your code and what you're wanting to achieve.</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><b>2. Understand both requirements.</b></span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Updating code and creating code can be combined, but there are some cases where this is not advisable. You need to understand both how the creation needs to work and how the update needs to work to be able to know if the functions can be combined, and then the best way to achieve that.</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><b>3. Make your code manageable.</b></span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">This is more of a generic coding principle, but it still applies. When you combine 2 functionalities into the one piece of code, it makes it that much more complex. The best way to manage the complexity is to make sure that your code is readable, and to make sure that another developer can come along and understand it without you pointing out what's what. On this one I am continually learning, but I think I am getting better.</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
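The check-before-create principle is what lets one script serve both creation and maintenance: every step first inspects the current state and only does the missing work, so running it a second time is harmless. A minimal sketch of such an idempotent "ensure" step (the `EnsureSketch` class and an in-memory map standing in for the real database catalog are my own illustration, not the actual scripts):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class EnsureSketch {

    // Maps an audit table name to its column list, standing in for
    // the real database catalog the scripts would query.
    final Map<String, List<String>> tables = new HashMap<>();

    // Creates the audit table if it is missing, rebuilds it when the
    // source columns have changed, and does nothing when it already
    // matches. Safe to run for both first creation and a re-compile.
    void ensureAuditTable(String name, List<String> sourceColumns) {
        List<String> existing = tables.get(name);
        if (existing != null && existing.equals(sourceColumns)) {
            return; // already up to date: nothing to do
        }
        tables.put(name, List.copyOf(sourceColumns));
    }
}
```

Because the step is a no-op when the state already matches, the re-compile procedure can simply loop over every audited table and call the same step again.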
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Then, with that in mind, I can just create a cursor to loop through the tables that have been started, re-run the creation stored procedure over each, and we're done.</span></span></div>Lee Koutzashttp://www.blogger.com/profile/07547096773286540799noreply@blogger.com0