Sunday, October 30, 2011

JOCU – The Alpha

I’ve been working on an OData reader in Java for some time now, and I am ready to show people the first alpha.  There are some pieces missing, and I would prefer better ways of accessing some of the data, but here it is.

The release only has the underived version of the JOCU project.  It allows for most of the functionality of the OData protocol (the main piece missing is the data format option: it uses the XML data source as opposed to the JSON format).

Here is a code snippet I wrote that uses the API from the JAR library.

import java.sql.Timestamp;
import java.util.Date;
import java.util.List;

import lkoutzas.jocu.shared.AbstractCatelogueData;
import lkoutzas.jocu.shared.AbstractEntity;
import lkoutzas.jocu.underived.ODataEntity;
import lkoutzas.jocu.underived.ODataLink;
import lkoutzas.jocu.underived.ODataRepository;


public class RunJOCUDemo {

    public static void main(String[] args) {

        System.out.println("Welcome to the demo of the JOCU odata reader!!");
        System.out.print("Catelogues Start: ");
        System.out.println(new Timestamp(System.currentTimeMillis()));

        // create the odata repository
        ODataRepository oData = new ODataRepository();
        // set the URI for the location of the repository
        oData.SetURI("http://services.odata.org/OData/OData.svc/");
        // get the oData catelogues
        List<AbstractEntity> catelogues = oData.FetchODataCatelogues();
        System.out.print("End: ");
        System.out.println(new Timestamp(System.currentTimeMillis()));

        // dump out the results to show it works...
        System.out.println("Catelogues...");
        for (int i = 0; i < catelogues.size(); i++) {
            AbstractEntity local = catelogues.get(i);
            System.out.println(local.getAttribute("href"));
        }
        System.out.println();

        System.out.print("Entities Start: ");
        System.out.println(new Timestamp(System.currentTimeMillis()));

        // create the odata repository
        ODataRepository oData2 = new ODataRepository();
        // set the URI for the location of the repository
        oData2.SetURI("http://services.odata.org/OData/OData.svc/");
        // set the catelogue for the oData repository
        oData2.SetCatelogue("Categories");
        // query or filter the data request (need to detail full list)
        oData2.getQuery().top("2").select("ID,Name");
        // get the entities
        AbstractCatelogueData entities = oData2.FetchEntities();
        System.out.print("End: ");
        System.out.println(new Timestamp(System.currentTimeMillis()));

        System.out.println("entities...");
        for (int i = 0; i < entities.getEntities().size(); i++) {
            ODataEntity local = (ODataEntity) entities.getEntities().get(i);
            System.out.println(local.field("ID") + ": " + local.field("Name") + " links are: ");
            for (int j = 0; j < local.getLinks().size(); j++) {
                ODataLink odataLink = local.getLinks().get(j);
                System.out.println(odataLink.getTitle() + " - " + odataLink.getHref() + " (" + odataLink.getRelationship() + ")");
            }
        }
    }

}


As you can see, the API is pretty easy to use and, I think, pretty good.  So, on to the “bits not in the alpha”.


Firstly, there is a bug that means XML structured elements (CDATA, for example) are not parsed correctly.  They get broken down into separate elements rather than kept as a single blob.


Also, I would like to improve the manner in which the data is accessed.  It feels a bit clunky to me.  I am not sure how I can improve it yet, but I would like to see what I can come up with.


The last missing bit / bug is the handling of expanded entities within the querystring.  At the moment I think it would fail badly.  The API will allow the request, but the expanded data will likely either cause the call to fail or simply be ignored.


Once these items are in the code, I am going to work on the derived version, which will allow strongly typed objects and (hopefully) reduce the size of the downloaded packet.


The link for the alpha is here.  Let me know your thoughts.

Sunday, September 25, 2011

Rolling back a change set

Recently, I was working on a TFS Team Project that had 2 branches.  I was continuously merging my trunk with each of these branches as all three projects moved forward.  Recently I was given the go-ahead to merge one of the branches into the trunk.  After checking and rechecking, I did this.

About an hour after I had merged one of the projects into the trunk, I was, as usually happens, asked if I could make a change ASAP and release it (in a way that would have the least impact on testing resources).  This meant that I needed to release a version of the code that mirrored the current Production environment as a base, with the necessary changes.  My first thought was that I would grab the changeset, update the code, build on my local machine and release that.  However, there was a gotcha: there was a requirement to have the deployed build done on the build server (with the TFS auditing of changes etc.).

Anyways, I grabbed the required changeset, made the changes, tested it locally, then checked it in and released the result from the build server.  After checking that the release worked, I handed it over to the testers.  I then re-merged the code with the other 2 branches (luckily I hadn't deleted the one that I had been told to move into the trunk) so that they had the latest change.  Now, in TFS, merges use code on the server, not code on your local machine (at least as far as I can tell).  As I was reviewing the changes and resolving the conflicts, I was seeing code from the trunk that included the branch I had merged into it.  Needless to say, I got to wondering what had been built and then released to UAT.  I grabbed a tool to review the binaries that were in the release, and it was based on the trunk with the branch merged into it!  There was code there that I knew wouldn't be tested, and as such it could not be released, as this would increase the risk the change was raising (testing only the features of a change can mean that other impacts of the change are not identified until a full regression test is done).

So, I was left with a dilemma: how could I revert to a changeset and release that changeset, using the build server and all the appropriate auditing in TFS, without having to manually remove all the untested code and create a greater risk?  Then, after some googling, I came across a post by Mike Fourie which details one of the new features in the August 2011 release of the Team Foundation Server Power Tools.  TFS 2010 has the ability, through the command line, to perform a rollback.  This extension brings that functionality into Visual Studio 2010 with a usable UI.  Let me just say that this saved my weekend from being an entire coding wreck.  I was able to roll back to the required version, then re-build and re-release only what was desired.  I won't rewrite how it works, Mike does that well enough, but here are some links:

Mike’s Post: Using Rollback in the Team Foundation Server Power Tools

Team Foundation Server Power Tools August 2011
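For reference, the raw TFS 2010 command line that the Power Tools UI wraps looks something like the following (a sketch with a made-up changeset number; run tf rollback /? for the full set of options):

tf rollback /changeset:12345

Run from within a mapped workspace, this pends changes that undo the given changeset, which you can then review and check in.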

I hope this helps others out there who want to (or have to) roll back code to a previous version on the server.

Sunday, August 14, 2011

Securing CMS

Only a short one today.  At work we had an issue where we needed to secure our CMS to be available for internal access only.  I had a few ideas, but after some research I found something very cool that is a part of IIS (6 and 7).  There is an option to restrict a site or directory (or application) to only certain IPs.  It's called "IPv4 Address and Domain Restrictions".

The reason I'm blogging about this is that it is an easy way of adding an extra layer of security to a normal CMS.  Most CMSs are available for running locally, which means they can be easily accessed and their code analysed.  From a security point of view, this is a concern.  To mitigate this, securing the main administration directories (login etc.) using the IP restrictions in IIS adds an extra layer to the security of the application.

The easiest way to use this is to deny all access, and then allow only the IPs that you want to be able to access the directory (usually internal, plus one or two external IPs).
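In IIS 7 the same restriction can also be expressed in web.config.  Here is a minimal sketch, assuming the feature is installed and the ipSecurity section has been unlocked (the IP range is just an example):

<system.webServer>
  <security>
    <!-- deny everyone by default, then allow only the internal range -->
    <ipSecurity allowUnlisted="false">
      <add ipAddress="192.168.0.0" subnetMask="255.255.0.0" allowed="true" />
    </ipSecurity>
  </security>
</system.webServer>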

Hope this helps.

Saturday, August 6, 2011

Changes to Facebook API

Today we had something interesting happen at work.  We had just implemented some Facebook functionality on a project that was ready for release, and just before it was signed off today, a final end-to-end test was completed.  Part of that test was to check some functionality with social networks.  When the Facebook functionality was tested, it failed.

The functionality was meant to pre-populate the message box with a message that the user could then update.  On Wednesday this was working, and today it wasn't.  Let's just say that this caused a few questions to be asked and a few worried people to move through the corridors.  How can functionality that was working one day break the next when we made no changes to the code?  After some searching of the web (Google is almost always your friend), I found my answer.  Let me break it down for you.

Firstly, we were using the JavaScript library, specifically the FB.ui functionality.  Using the Feed dialog box, we would place our user-augmented message into the message box, which the user could then update and publish on their wall.  Now, looking at the reference on the Facebook developers page (seen here: http://developers.facebook.com/docs/reference/dialogs/feed/ ), the message property is what we used to pre-fill the dialog box.  As you can see, there is a note on the property stating that the field would be ignored from the 12th of July, 2011.  So why was it still working two days ago (3/8 to be specific)?

Before I go into that, let me first look at the policy for apps on Facebook.  If you go to the policy page (seen here: http://developers.facebook.com/policy/ ) and read section 4, sub-section 2, it states that an app must not pre-fill any fields.  (Then, in the following sub-section, it states that the user can give permission for an app to post on their behalf.  That is a task for next week.)

So, back to the question: why did it just stop working?  If I were to guess, and that is all I would be doing, I would say that Facebook just got fed up with people not reading the policies and following them.  Apps should only pre-fill the form when the user has entered the content somewhere else in the workflow.  I would say that there were plenty of apps that just didn't follow the rules, and Facebook decided to start enforcing them.  And to be honest, I have no problem with them enforcing their policy.

Now, what is the lesson learned from all this?  Is it that we should not trust Facebook?  I would say not.  For me, the lesson is that when you are using third-party services, you need to be aware that you don't control what happens with them.  You should intermittently test the services, just to make sure that they work as expected.  That is, if you want to make sure that your sites and applications maintain the functionality that you built them with.
Hope this helps!

Sunday, April 24, 2011

Something Different

It has been some time since my last blog post, and this one is not going to be along the same vein as normal.  I will try to post something about completing the underived version of the JOCU project, but work on it has slowed to almost a crawl as I am in the midst of work around the house (my computer is currently in pieces in the hallway, and I am working mainly through my phone, which is not ideal for coding).

So, if I'm not going to relate something code-related, what, I hear you ask, do I have to say?  Well, I have been playing with my phone (I have a Samsung Galaxy S i9000) and came across Darkys ROM.  Now, never having looked at a custom ROM before, I was cautious as to whether this was a good idea or not.

Well, I must say I am impressed.  I have always known that my phone had more grunt than it used, like it was being restrained.  After setting up Darkys ROM, I can see a definite performance improvement.  There is instant response to touch input, and as a result of using the 9.5 ROM I am on Android 2.2.1, which has its own improvements.  Overall, I am very happy with my phone.  I mean, I was happy before, but now more so.  I even still have access to the built-in Samsung Apps.

Something to note though: doing this has its risks.  The Galaxy S is one of the few Android phones on the market that you can brick.  I also performed the update on my little brother's phone, and while the end result was the same, the process was not the greatest; a few times I thought I had bricked his phone.  But in the end I got it working.  Doing this is at your own risk.

Now for the links:

Darkys Rom Home: http://www.darkyrom.com
2.2 – 2.2.1 install process: http://www.darkyrom.com/community/index.php?threads/guide-installing-darkys-rom-on-2-2-2-2-1-i9000-i9000m-only.1041/

I did an install from 2.1 to Darkys 9.5 as well (on my little brother's phone), but that one wasn't so smooth, so I won't post the process here; I recommend that you Google it yourself.

Sunday, March 6, 2011

JOCU – The Final Stretch

So, I have the basics of an XML parser completed; now I need to implement this to deliver what I am after, which is the Java OData Consuming Utility.  I am going to work on the loosely typed or underived version first, then I will implement the derived version.

Briefly, let's look at the functionality we can expect from the repository.

string GetSource() / SetSource(string)
This will define what the source of the oData is.

bool GetMetadataRequired() / void SetMetadataRequired(bool)
This will set or get if the metadata is required for this repository

ODataEntities Fetch()
This will get the entities from the oData query

IODataQuery Filter(string)
Applies the filter URI convention to the source

IODataQuery SetEntityCollection(string)
This will define which collection that it is desired to get

IODataQuery GetEntity(string)
Gets a single entity from the collection

IODataQuery SetOrderBy(string)
Sets what the list is ordered by

IODataQuery Top(string)
Sets the number of entities that we want returned

IODataQuery Expand(string)
Sets which related entities to expand

IODataQuery Format(string)
Sets the format to return the data in (not used really for what I want, should I scrap it?)

IODataQuery Select(string)
Identifies which entity items to select, and related items

IODataQuery InlineCount(string)
Sets the inline count property on the data source

IODataQuery CustomURIElement(string)
Sets a custom element to the query that is sent to the OData source

string GetMetadata(string)
This will get the OData collection metadata entry

And, from the point of view of the entity we have the following:

string Get(string)
Gets the value of the key passed through

string GetMetadata(string)
Gets the metadata value for the key passed through

Most of this is derived from the OData URI conventions, with some basic access functions thrown in.  I think this is all I need to implement but, to be honest, I'm not sure.  I have a feeling I will uncover more than I expect as I start to tackle the implementation of the repository access, but I need to start somewhere, and this is the best place.  I am aiming to finish this off in the next month or so, but as I am doing this in my spare time to have some fun, we shall see what happens.
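To make the list concrete, here is a rough sketch of how I picture the underived API being used, built only from the method names above (hypothetical code; the classes and signatures may well change once I start implementing):

// hypothetical usage of the planned underived repository
ODataRepository repository = new ODataRepository();
repository.SetSource("http://services.odata.org/OData/OData.svc/");
repository.SetMetadataRequired(false);
// the URI convention methods from the list above
repository.SetEntityCollection("Categories");
repository.Top("2");
repository.Select("ID,Name");
// execute the query and get the loosely typed entities
ODataEntities entities = repository.Fetch();
// then, on an individual entity:
// String name = entity.Get("Name");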

Monday, February 28, 2011

Dynamic Layout with MVC 3 and Razor

The past weekend has seen me at Sydney's Webcamp.  It was a great 2 days of looking at some of Microsoft's web technologies, as well as having a play with them in the labs.  Towards the end of the second day, after doing the labs I wanted to, I had a bit of a play and wrote a simple site that changed its layout based on the querystring in MVC 3/Razor.  I thought I would share it with you.

First, make sure you have the MVC 3 templates, which you can get from Microsoft here.  These can take some time to install, so be patient.  You can also get it via the Web Platform Installer. (Note: I am working from Visual Studio 2010.  I am not sure of the compatibility of MVC 3 with previous versions of Visual Studio.)

Once that is installed, open Visual Studio, select File –> New –> Project.  Open the Visual C# set of Projects, select Web, and then select the ASP.NET MVC 3 Web Application.  Also, enter a name for the application (I have used MVC3DynamicLayout). Then hit OK.

The project isn’t created yet.  On the next screen, you have options around the type / template of MVC 3 project to create.  For our purposes, select the Empty Application and the Razor view engine.  Hit OK and your project will be created.

So, we have our MVC 3 Project, now to get something that will execute.  Below is a picture of the file structure that we will use as a reference.

First, we create our initial layout.  Right click on the Shared folder (that is under the Views folder), select Add –> New Item.  Select the MVC 3 Layout Page (Razor) and enter site_1.cshtml as the Name.

This will create a new cshtml file, into which I have added some render sections and an <h1> tag to show that this is layout 1.  Note, the inline CSS is just to make the point of the exercise easier to see, not something you should do.

<!DOCTYPE html>

<html>
<head>
    <title>@ViewBag.Title</title>
</head>
<body>
    <div style="width:700px;">
        <div style="border: red solid 1px;"><h1>This is layout one</h1></div>
        <div style="border: green solid 1px;">
            @RenderSection("Header", false)
        </div>
        <div>
            <div style="width:100px; border: black solid 1px; float: left;">@RenderSection("Menu", false)</div>
            <div style="width:596px; border: purple solid 1px; float: right;">@RenderBody()</div>
        </div>
        <div style="clear: both; border: blue solid 1px;">some footer text</div>
    </div>
</body>
</html>

Now we need to add a controller.  So go to Controllers in Solution Explorer, right click and select Add –> Controller, and call it HomeController.  Do not select the check box to add action methods for create, update, delete and details scenarios.



And finally, we need to create the view to show with the correct layout.  In the HomeController, right click anywhere in the Index function and select Add View.  Make sure you check Use a layout or master page and have selected the site_1.cshtml layout that we created.



Now, let's add the content to the view so that something is displayed.  That is in the code below.

@{
ViewBag.Title = "Index";
Layout = "~/Views/Shared/site_1.cshtml";
}

@section Header{
<h3>This is the head</h3>
}

@section Menu{
<ul><li>item 1</li><li>item 2</li></ul>
}

And here is the rendering of the body

Now, if you hit F5 you should get the following webpage.



So, at this stage you should have a working MVC 3 site, with some really cool looking layout and great UI. (OK, I joke… :D)  But we are not done yet; that was only half of what we want.  Now for the other half: being able to change the page layout dynamically.  For our purposes, “dynamically” will mean based on a querystring value.


So, first let's create our second page layout.  Right click on the Shared folder in the Views folder, then add a new item.  As before, select MVC 3 Layout Page (Razor), but this time enter site_2.cshtml as the Name.  Below is the code for site_2.cshtml.

<!DOCTYPE html>

<html>
<head>
    <title>@ViewBag.Title</title>
</head>
<body>
    <div style="width:700px;">
        <div style="border: green solid 1px;">
            @RenderSection("Header", false)
        </div>
        <div style="border: red solid 1px;"><h1>This is layout two</h1></div>
        <div>
            <div style="width:596px; border: purple solid 1px; float: left;">@RenderBody()</div>
            <div style="width:100px; border: black solid 1px; float: right;">@RenderSection("Menu", false)</div>
        </div>
        <div style="clear: both; border: blue solid 1px;">some footer text</div>
    </div>
</body>
</html>

Now, we need to update the controller to accept the querystring argument and then, if that argument is a “y”, let the view know that it needs to change the layout.

public ActionResult Index(string canIChangeTheLayout)
{
    ViewBag.canIChangeTheLayout = (canIChangeTheLayout == "y") ? "" : "y";
    ViewBag.Layout = "~/Views/Shared/site_1.cshtml";
    if (canIChangeTheLayout == "y") ViewBag.Layout = "~/Views/Shared/site_2.cshtml";
    return View();
}

We then need to make 2 changes to the view: one to alter the layout (taking it from the ViewBag rather than hardcoding it), and the other to create a link to control the querystring.  The new view code is below:

@{
ViewBag.Title = "Index";
Layout = ViewBag.Layout;
}

@section Header{
<h3>This is the head</h3>
}

@section Menu{
<ul><li>item 1</li><li>item 2</li></ul>
}

And here is the rendering of the body
<br /><br />
Link: @Html.ActionLink("change layout", "index", new { canIChangeTheLayout = ViewBag.canIChangeTheLayout })



And voilà!  If you click on the link you should alternate between the first layout and the layout below:



Well, I hope you liked this basic review of MVC 3 projects and how to change the layout in code.

Tuesday, February 15, 2011

Test Driven Development

It has been some time since my last post; work has been rather hectic of late.  My downtime has been just that: downtime.  However, when I finally got back to my Java OData consumer, I ended up having a good experience with Test Driven Development that I thought I would share.

The next small chunk that I wanted to complete was the XML namespace to package name piece, and then applying that package in my code when trying to get a type of node from a known namespace.  The first part was easy: I wrote my test, got the code working, and voilà, I had my XML namespace to package converter.  It is not my best work, and it has hardcoded parts that I would love to remove, but it does the job.  It is something I can refactor later once I have a complete OData consumer.  (A side note: I can add various namespaces and XML definitions easily, so I am thinking of also expanding this to allow for a simple RSS consumer in Java, but that is way down the track and I digress.)

The next part I thought was simple: combine the package name with the expected class name and end up with a complete class name that I can reference in my code when I am trying to create a new node in my tree.  I wrote my test and got it working.  Then, because I had tests that checked the rest of the code, I ran those, and lo and behold they broke.  So I fixed them up, making sure that I was not breaking anything else, and my code went back to being robust.

The reason I am calling this a good experience in Test Driven Development (TDD) is that if I had done this the way I am used to, just manually testing my code, there is almost no chance that I would have picked up the bugs I had created.  By using TDD I had a repository of tests that I had coded against to meet the requirements (each in and of itself a small requirement).  Now, I might not have hundreds and thousands of tests to run against my code, but as my codebase grows, the number of tests expands as well to cater for the new code.  No new code should be written except to make a test pass.

However, proceed with caution here.  Tests need to be useful and meaningful.  Make sure that you are testing what you think you're testing.  Your tests form part of your codebase; make sure you check that they pass regularly, and also make sure that you know what they are testing.  Having heaps of tests that check the same thing is pointless.  If you have one test that checks your function works in a set scenario, then move on; you should write a new test for each scenario.  Also, when you get a bug, you should be able to write a test that shows the bug is in the code; then you can debug your code to make sure that the fix works and that it hasn't accidentally broken anything else.

A tip on testing that I got from Tatham Oddie (@tathamoddie on twitter) is to use the following template for writing a test.  It works in any language.  First create the test (the name and container in code).  Then add 3 comments: Arrange; Act; Assert.  The first part of your test should Arrange the objects that you're testing: setting up mocks and creating the object that you want to test.  The second is the Act part; this is where you perform the function or method that you want to test.  The third is the Assert part, where you check that what you wanted to be the result was indeed the result.
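Here is a minimal JUnit 4 sketch of that template, using my XMLSourceRepository as the subject (the method's return type and the asserted value are assumptions for illustration):

import static org.junit.Assert.assertEquals;

import java.util.List;

import org.junit.Test;

public class XMLSourceRepository_GetTokensTest {

    @Test
    public void getTokens_returnsASingleToken_forASingleElement() {
        // Arrange: create the object under test
        XMLSourceRepository repository = new XMLSourceRepository();

        // Act: run the method being tested (return type assumed)
        List<String> tokens = repository.GetTokens("<feed/>");

        // Assert: check the result is what we expected (expected value assumed)
        assertEquals(1, tokens.size());
    }
}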

Saturday, January 22, 2011

Agile vs Waterfall

Recently I found myself coding less and “planning” more. It took me a while to work out why, and when I did I thought I would write this so that others may avoid the trap I was in.

Firstly, some background.  As you may or may not be aware, I am writing an API in Java for reading an OData source.  Now, I am using TDD principles to code the API (basically: write a test case, then write the code that makes that test pass).  This means that I need to think in small increments: write a test for a small increment, then write the small increment that satisfies that test, then write another test for another small increment, then write that small increment, and so on; you get the idea.

Now back to the present.  I encountered the next part of my code requirements: implementing XML namespaces within my code.  This is where my problems started.  I did the right thing by researching XML namespaces, to make sure that I understood how they were implemented and that my code didn't vastly differ from what it should be.  Then I did the wrong thing: I tried to plan the entire implementation of the XML namespace component, rather than looking at the small piece I was dealing with (in this case, pulling out the namespace and storing it in the required format).

I was planning the reading of DTDs and metafiles and how that would work, and looking at the manner in which DTDs could be delivered.  How would I know which was an internal namespace and which was external?  How would I register the external namespace in my code (for example, the $metadata of an OData source)?  And so on and so forth.  That is not how I want to write my code.  I want to write a test for a small part, then write that small part.  In this case the small part was identifying the namespaces and putting them into a useful format.

This got me thinking about agile and waterfall practices.  Agile methodologies say that you take the requirement and break it down into smaller pieces, and those into smaller pieces, until you have something that can be done in a defined period of time (for example, in Scrum you would break it down into pieces that can be done within an iteration).  Whilst you know what the requirements are, you don't design the entire solution for what may or may not happen, nor do you plan everything out at the start.  You break the requirement into smaller chunks, and those into smaller chunks, until you have a requirement that you can complete within a week or two.

Waterfall is very different.  In the waterfall approach, you define every requirement that you can think of, plan the end result to the finest detail, and then start coding to the fine details that you have planned to deliver.  You don't start coding until you have the UX defined, the web services documented, and the classes worked out, along with each class's properties and methods.

Now, without going into a full discussion of waterfall vs agile development practices, I will say my thoughts.  While there are parts of one in the other and vice versa, the two cannot coexist.  For example, when you're doing agile you still need to know the end result: what is the stakeholder asking for?  That you can break down into smaller requirements, and so on, until each is small enough to complete in an iteration or sprint.  And conversely, in waterfall development you cannot say “deliver the whole project at once”; you need to break it down into smaller chunks that a developer can work on.

The main point of difference for me is that, in my opinion, agile is about meeting the requirements that you have been asked to deliver.  No more, no less.  You can't tell the stakeholder what they want; you can only guide them along as you code and as they see the work.  This doesn't mean that if they say they want a field you don't do any error checking on it, or that you just make it work and don't deliver a quality product.  What it does mean is that you don't enlarge the requirements to cover what you can see being needed down the track.  You meet the current need.  If the database table needs a title, description and timestamp, don't add a link to a status that is part of a different requirement.  By the same token, you can guide the requirements along by asking questions like “Where is this going to be reported?”, “Is there a need to report on this?”, “Who should be able to edit this?”

In waterfall projects, on the other hand, you can vastly change the requirements because you are part of the planning process.  Nothing gets done until everything is defined and some form of specification is documented.  You research to make sure that it is future-proof, so that you can work on another project and this one won't need any attention.  You tend to guide the requirements and specifications, rather than the development.  But once it is locked in, it takes a lot to change even one line of the specifications that you're working to.

Personally, I prefer agile development processes over waterfall processes because you can be flexible: if you code something that the user changes their mind on, you can adapt (hence why it is called agile).  You also only deliver what is needed, not what you think is best.  The issue with stakeholders is that they don't always communicate their needs well to developers, project managers or product owners; hence the need to ask questions and listen to the answers.  You also end up with leaner code (it does what is needed, and nothing more) rather than bloated code with parts that are rarely, if ever, used.

Sunday, January 16, 2011

BlackBerry DevCon

I had the opportunity to attend BlackBerry DevCon Asia over the last few days in Bali.  There was a lot there, but I thought I would share a few of the highlights for me.  [Note: I'm a developer, not a reporter, so this is just my thoughts, not an article for a publication.]

Playbook

There is a new tablet coming to the market called the BlackBerry PlayBook.  Over the conference I had the opportunity to play with one, and I must say I was very impressed.  The graphics on the device look amazing; even with 3 rendering engines being used at the same time there is little discernible lag in playback in all three.  It has a 7” screen, which might be a tad on the small side, but it works for the device.

Also, the OS is based on QNX, which is a different approach from the other 2 main OSs used on tablets: iOS for the iPad and Android for other tablets.  I like this move because it keeps the tablet and the phone separate.  They might be similar, but using the same OS is like using Windows XP as a server.  It can be done, but it isn't what it was designed to do.

My main criticism is that there is no bar along the top or bottom when you're in an application.  For example, in the Android OS there is the top bar that shows the time and notifications, and that can also be pulled down to interact with those notifications.  This is missing on the device.  Once you open an application the entire bar goes and the application is full screen.  In some cases, like games and remote desktop applications, that is ideal, but most of the applications (not games) that I use leave that bar there.

Why is this important?  Let me give you a real life example.  I sit down and turn on my PlayBook, open Twitter and my emails, then start composing a blog.  As there is no notification bar, I can't see if I get any emails, or any Twitter DMs or mentions that I might want to have a look at.  To check, I have to go out of one app and into the other.  These days people are used to reviewing the notifications area to see if there is something they need to deal with.  That said, that is basically my only criticism.  I think this device will be one to watch as it gets closer to the release date.  The specs are changing, but the current specs can be found here.

SuperApps

This is an oldie, but still one that was pushed heavily over the conference.  The idea is that the applications you write are linked to the native operating system, to contextual application menus (both location-based and from the app's point of view), and to other applications as well.  An example of this is a translating application.  As a normal app, this would be a notepad-like application that you would open, type in one language, translate the text, and then copy the translation to the program that you want.

With super apps, you would create a menu item in your email application.  The user would then type the text of the email in their language, in the email program. Then the user would go to the menu, select the translation software (which might pop up a dialogue to ask which language to translate to), and it would do the translation for you and put the text in your email.

I guess a new part of this is that you can do this in third-party apps.  For example, with the translation application, you could tell the OS that it should add the Translate contextual menu item for every text field it displays.  Then when your timesheet program fires up, every text box will be able to be translated to a different language.

WebWorks Programs

This one is rather interesting.  It allows a developer to use HTML and JavaScript (including jQuery and some other third-party JavaScript APIs) to write an app, and then the WebWorks API will bundle that into a native installable file.  It is hard to tell the difference between a Java-based app and a WebWorks-based app.  This, in my opinion, allows the speed of development to increase (it's just HTML and JavaScript, which, while tricky, are not unique to any device, so debugging becomes easier).  Using AJAX you can perform client/server operations, and some native functionality has been exposed to the JavaScript API so that you can make your application run as though it were a Java-based one.

There are 2 other things to note.  Firstly, the WebWorks environment will be on both the BlackBerry mobile phone OSs (I think 5.0 and up, not sure about 4.6) and the PlayBook (along with Adobe AIR).  The other is more exciting: using CSS and JavaScript, you can write the one application and build one version for the PlayBook and one version for the phone OS, and the application will determine which device you are using and allow you to customise the experience for each (just like we do for different browsers at the moment).  Yes, I guess you would expect that from something based in HTML, but I still think it is exciting (and powerful) nonetheless.

HTML Debugging

A side note to this: you can also debug HTML in a browser on an emulated phone through Chrome or Firefox/Firebug dynamically on your desktop, which makes working out the odds and ends of HTML issues easier to deal with.  This uses the WebKit engine on both sides to link them together.

These were just some of the things that I heard and saw that I think will be cool to use when you need to.  Look out for more information on the PlayBook; I think it has the capability to be a game changer for the tablet device market.

Saturday, January 8, 2011

Running a single test with JUnit and Eclipse

So, I had a slight problem using TDD in Eclipse.  Not a problem as such, more an obstacle that I needed to overcome.  What I wanted to do was write a test, get that working, then re-run all the tests, check that the changes to the code hadn't caused another error to correct, then move on to the next test.  I found a few odds and ends about how this could be done, but nothing seemed to work.

After some looking around I was able to get this working.  Here's how I did it.  For this example, I am using JUnit 4 and Eclipse 3.6.1 (Helios).  Please note, to see the images more clearly just click on the image.

Firstly, go to Run –> Run Configurations

Then, right click the JUnit node in the tree and select New

Enter the name for the run configuration (I have called mine XMLSourceRepository_GetTokensTest)

Select the Method that you would like to test (default is all methods).

Now, I have both Android JUnit and Eclipse JUnit launchers, so down the bottom I need to press the hyperlink, and the following dialogue box appears.  Select Use configuration specific settings and then Eclipse JUnit Launcher.

And you're done.  Press the run button to run the test, and it is now in your run and debug button menus.

So there you have it.  Using JUnit and Eclipse Helios (3.6.1) I have been able to select a single test to run without the overhead of other tests being run as well.

Now, to me, this is rather complex and time consuming: if you write a test, you have to change the run configurations to add or alter a configuration, make a few changes to the code, then do the whole process again.  It would be easier to be able to do this via the source or a test list of some kind.  I am not saying that this isn't possible, but I couldn't get it to work.  If you know how to do that, please let me know so that I can correct this post with the easier way of managing tests!

Sunday, January 2, 2011

Java Open Data Protocol API

Recently, I have been playing with the Open Data Protocol (OData) using Java, and I am not entirely happy with the APIs that I have had to use.  So, in true developer style, I am going to attempt to write my own.  I have called it JOCU (Java OData Consuming Utility).  I figure that by doing this I will learn the OData protocol, touch up on my Java coding, practise TDD and JUnit, and maybe play with Git or some other source control.

Open Data Protocol

Now for some understanding.  The Open Data Protocol was put together by various organisations to surface data in a consistent way that is manageable cross-platform.  It allows for updating and retrieving data in a data store via its URI conventions.  I won't go into it in detail because I am likely to get it wrong, and that isn't the idea of this post.  For more information, go to http://www.odata.org/.

So, on with the show… my first hurdle.

I have started the API and have already come across something worth noting down.  I had written a function to get the source code for a file, called GetSource.  The original code is below:

public String GetSource(String URIForSource) throws MalformedURLException, IOException {

    URL url = new URL(URIForSource);
    HttpURLConnection URLConn = (HttpURLConnection) url.openConnection();
    BufferedReader in = new BufferedReader(new InputStreamReader(URLConn.getInputStream()));

    String str;
    String result = "";
    while ((str = in.readLine()) != null)
        result = result + str;
    in.close();
    return result;
}

Now, for HTML files and static XML files this is fine.  However, when you use it for an OData source, it dies (returns a 400 Bad Request error).  I then thought that there was something wrong with my URL, so I copied it into a browser and it worked!  I was left with the question: why would the URL work in a browser yet not in code?  After some investigating, I found that I needed to identify the types of responses that I would accept.  As a result, I added one line to my code and came up with the below:

public String GetSource(String URIForSource) throws MalformedURLException, IOException {

    URL url = new URL(URIForSource);
    HttpURLConnection URLConn = (HttpURLConnection) url.openConnection();
    URLConn.setRequestProperty("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
    BufferedReader in = new BufferedReader(new InputStreamReader(URLConn.getInputStream()));

    String str;
    String result = "";
    while ((str = in.readLine()) != null)
        result = result + str;
    in.close();
    return result;
}

As you can see in line 5, I set the connection to accept XML, along with other types that I needn't go into.  As a result, the code is now working perfectly (under the tests I have written to date, that is).
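As a quick usage sketch (a hypothetical harness; substitute whichever class GetSource actually lives in):

// fetch the raw XML for a collection on the public demo OData service
XMLSourceRepository repository = new XMLSourceRepository();
String feedXml = repository.GetSource("http://services.odata.org/OData/OData.svc/Categories");
System.out.println(feedXml);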


Next is to manage the tokens that the XML returns.