javascript done right

Well, it’s been a while since I last used JavaScript, but I’m back at it with a bang. However, it would appear that people are still writing JavaScript as badly as ever. I’m not claiming to be a JavaScript expert, but I see people do things in JavaScript that they wouldn’t even consider doing in languages like C++, C#, Java, and so on. Why is this the case? I can only presume that they don’t know any better. With this in mind, let’s take a look at a few not-so-nice bits of JavaScript, namely global variables and scope, and see how we can make the situation better.

With C (and C++) we have global variables, that is, a variable which is available in each and every scope of your program. Why is this bad? Well, because such a variable has no sense of locality, it can be changed anywhere in your code. Thus, there can be a large number of dependencies on such a variable – something which all good developers know is bad. It can also lead to name clashes, resulting in unpredictable behaviour. So why is JavaScript so bad? Well, it bases its programming model on global variables. Damn.
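To see how easily this happens in JavaScript, here is a tiny sketch (the names are purely illustrative):

// Declared at the top level of a script: a global variable.
var counter = 0;

function increment() {
    total = counter + 1; // no "var" here, so (outside strict mode) total silently becomes a global too
    return total;
}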

So what are the problems with scope in JavaScript? Well, JavaScript does not have block scope – this may appear odd to those used to C-style syntax. What does this actually mean and what’s the problem with it? Well, I’ll steal an example from the book JavaScript: The Good Parts by Douglas Crockford where you can see how confusing this can be:

var foo = function () {
    var a = 3, b = 5;
 
    var bar = function () {
        var b = 7, c = 11;
 
        // At this point, a is 3, b is 7, and c is 11
        a += b + c;
 
        // At this point, a is 21, b is 7, and c is 11
    };
 
    // At this point, a is 3, b is 5, and c is not defined
    bar();
 
    // At this point, a is 21, b is 5
};

However, despite these nuances in the JavaScript language, with a little thought and engineering we can get round this.

First, let me say that JavaScript really is a great language – don’t underestimate it. Many languages have bad features; that does not mean you have to use them.

Now, getting round the global variable problem is pretty simple: we create a single global object and place all our JavaScript code within that object. The code below shows an example of how to do this.

var APPLICATION_NAMESPACE = {};
 
APPLICATION_NAMESPACE.my_object = {
    my_name : "Name",
    my_function : function () { alert("ALERT"); }
};

This style of coding in JavaScript should be familiar to users of jQuery – instead of APPLICATION_NAMESPACE you use $ or jQuery. There we have it: a nice way to get round those pesky global variables.

Now, if, like me, you are used to a more object-oriented (OO) style of software development, then JavaScript can appear a little awkward at first. With no classes or classical inheritance, the typical OO developer is left wondering what to do. This need not be the case.

One development pattern that comes to the rescue for OO developers has been proposed, once again, by Douglas Crockford and is known as the module pattern. So how can we use it? Well, say we want to create a “class” Person that has a “public” method fullname, which in turn uses a “private” method concat. Using the module pattern, this can be written as follows:

function Person() {
    function concat(firstname, surname) {
        return firstname + " " + surname;
    }

    return {
        fullname: function (firstname, surname) {
            return concat(firstname, surname);
        }
    };
}
 
// To use Person we do the following
var person = new Person();
var name = person.fullname("Davie", "Jones");

As you can see, this allows for sensible scoping of variables and functions, and at the same time makes a JavaScript object look like a class. Also, this method of developing objects in JavaScript just about eliminates the need for global objects.
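Putting the two ideas together, here is a minimal sketch (visits and counter are names I’ve made up for illustration) of the module pattern living inside the namespace object from earlier, so that nothing except APPLICATION_NAMESPACE ever touches the global scope:

// Reusing the APPLICATION_NAMESPACE object defined earlier.
// The immediately-invoked function keeps counter private;
// only the two returned methods can see it.
APPLICATION_NAMESPACE.visits = (function () {
    var counter = 0;

    return {
        record: function () { counter += 1; },
        total: function () { return counter; }
    };
}());

// Usage
APPLICATION_NAMESPACE.visits.record();
alert(APPLICATION_NAMESPACE.visits.total()); // alerts 1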

I’m a little hesitant to use JavaScript as a fully blown OO language – as this is not how it was intended. Instead we should embrace the alternative style of programming that JavaScript gives us. After all, if you were using a functional language like Haskell, F#, Scala and so on, you wouldn’t expect to write code as you would in a language that supports the typical OO constructs.

The fact is that JavaScript is here to stay for a long, long time, so you may as well learn to love it 😀.

charting success

In my continued exploration of C# I recently had the opportunity to investigate the Google Chart API using Google Chart Sharp. Both the C# API and Google Chart itself are great. For those unfamiliar with Google Chart here is a quick summary.

Google Chart allows you to dynamically generate many of the popular types of chart – currently the API includes support for Line Charts, Bar Charts, Pie Charts, Venn Diagrams, plus many more. Check out the full list of chart types here.

Charts are generated via a GET request using a simple URL, where various GET parameters can be supplied to customize the chart. For example, to construct a pie chart that describes web browser usage for a particular page in which FireFox has 60%, and IE 40% of the total views, we use the following URL:

http://chart.apis.google.com/chart?cht=p3&chd=t:60,40&chs=250x100&chl=FireFox|IE
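Breaking the query string down: cht=p3 selects a 3D pie chart, chd=t:60,40 supplies the data values (in the simple text encoding), chs=250x100 sets the chart size in pixels, and chl=FireFox|IE provides the slice labels.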

Typing this URL into your browser will show the chart below:

Pie Chart with two data points

I’m sure many of you will recognise the chart design from Google Analytics.

Not only is the Chart API free to use (but not to abuse!), it also generates pretty nifty looking charts.

The charts can also be customised in many different ways, including size, colours, axes, etc. Check out the documentation for this on the Google Chart API page linked to above. To show how simple it is to change the colours, take a look at the chart below:

Pie Chart with custom colours

Here the only change required to add custom colours is an additional GET parameter – I simply added chco=9913CE,D3A4E5 to the end of the URL given above.
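In full, the URL becomes:

http://chart.apis.google.com/chart?cht=p3&chd=t:60,40&chs=250x100&chl=FireFox|IE&chco=9913CE,D3A4E5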

Now on to the C# side of things.

As you can probably guess, the C# wrapper for Google Chart simply generates a URL. For example, to create the chart above using our new colour scheme we do the following:

PieChart chart = new PieChart(250, 100, PieChartType.ThreeD);
chart.SetData(new int [] { 60, 40 });
chart.SetPieChartLabels(new string [] { "FireFox", "IE" });
chart.SetDatasetColors(new string [] { "9913CE", "D3A4E5" });
string url = chart.GetUrl();

You can then use the URL directly in an HTML document or download the image locally using the following code:

// WebRequest/WebResponse are in System.Net; Image is in System.Drawing.
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
Image image = Image.FromStream(response.GetResponseStream());
image.Save("mychart.png");

Yep, it’s that easy. The other chart types can be created just as quickly, both with the Google Chart API directly and with the C# wrapper. So if you are looking to create good-looking charts and don’t want to pay much (any) money, I suggest you check this out.

how not to develop websites

It amazes me time and time again how certain websites are designed/administered. In particular, government websites. Let it be known my rant starts here.

Laughing in the face of the credit crunch, I recently booked a trip to Australia. As it turned out, it was almost as cheap to go via the US with a stopover in San Francisco, which I duly accepted. However, as I was now staying in the US for a day or two, I had to apply for an ESTA – this has to be done at least 72 hours before travel. Fair enough.

So off I trotted to Google and searched for “ESTA”. The search showed that the most likely place to complete my ESTA was at the link https://esta.cbp.dhs.gov/esta. However, clicking on the link returned the page shown in the screenshot below. Hence, I thought the site must be down and that I’d come back later. I then tried later that day and it still wasn’t working, so I ended up leaving it for a while (by a while I mean a month or so).

ESTA website error screenshot

Then last night I thought I’d better try again, but nothing, the same page. It was late, so I thought I would leave it again until the morning. However, the morning greeted me with the same error page once again. With my suspicions now raised and due to the departure date looming over me, I searched on Google to see if anyone else was getting this problem. Nothing, no mention of anything. Strange.

It then dawned on me that it might be the browser. So I loaded up the page in IE7 and, bang, it worked. I mean, how shite is that? They should at least have given a warning that my browser (Chrome) wasn’t supported. Instead I wasted time trying to figure out what the hell was wrong.

Maybe I should have tried an alternative browser sooner, but I was given no indication that this might be the problem. In my eyes this is a colossal FAIL. In fact, the only reason I decided to check in another browser was that the UK Government was considering a similar stance on browsers it did not believe it should support. The following excerpt is taken from the Boagworld podcast #135:

Apparently, the UK government believes we should test on the basis of popularity. In a draft document advising public sector websites, it suggests that if a browser appears in visitor logs as being below an arbitrary percentage of total “unique visitors”, then it should not be listed as being “fully supported”.

As the guys over at Boagworld stated, this may seem like a sensible move, but in fact it’s not, and I refer you to the comment quoted by Jon Hicks.

It isn’t clear how the supported browser list would be enforced, but I’m concerned that this approach will encourage browser sniffing, a move that will exclude browsers like OmniWeb, Shiira and iCab, simply because their name isn’t ‘Safari’. They share the exact same rendering engine, and therefore require no further testing. You can be more inclusive without spending any extra resources.

So this brings us round to the issue of graceful degradation and progressive enhancement ([1][2]), as championed by Yahoo. For those who don’t fancy following the links, here are the definitions of each concept:

Graceful degradation: Providing an alternative version of your functionality or making the user aware of shortcomings of a product as a safety measure to ensure that the product is usable.

Progressive enhancement: Starting with a baseline of usable functionality, then increasing the richness of the user experience step by step by testing for support for enhancements before applying them.

Surely following either one of these approaches is the way forward? Developing a website in this way will also improve the site in terms of accessibility – the sooner someone gets prosecuted here under the Disability Discrimination Act for this stuff, the better. However, this methodology seems totally lost on the US government (and the UK government for that matter). Instead, others will wonder why the hell they can’t get on the site from their “obscure” browsers. Hopefully this post may help them. Or maybe the US government will do the sensible thing and tell us why we can’t view the page – I won’t hold my breath though.

[1] http://dev.opera.com/articles/view/graceful-degradation-progressive-enhancement/

[2] http://accessites.org/site/2007/02/graceful-degradation-progressive-enhancement/

dynamic queries with LINQ

So, I’m a master C# programmer having been using it for a grand total of 4 weeks! My learning process began by ignoring the provincial and proverbial “Hello World” program, and instead I decided to learn the language features that are particular to C#.

As it happens, C# has quite a few modern language features that Java is sadly lacking (I won’t go into this here, but a comparison between the two is something I plan to blog about in the future). Along with many other new features, C# 3.0 gave us LINQ, which will be the topic of this post.

LINQ, I think, is reasonably straightforward to understand and use. Basic queries are extremely simple to create, and LINQ almost gives C# a functional programming feel. An example LINQ query is shown below:

from s in students where s.courseyear == "2008" && s.classyear == "2" select s;

That said, not everything is plain sailing with LINQ. The application I was writing required that the where condition in my LINQ statement be dynamic. That is, a column specified in the where condition is only known at runtime. Achieving this using the syntax shown above is not possible (if you try it out, I’m sure it will be clear why). To achieve this functionality, the learning curve soon ramped up.

If you have searched on Google for dynamic queries in LINQ, you probably found these two articles: ScottGu’s Dynamic Linq and tomasp.net’s Building LINQ Queries at Runtime in C# – there were others, but few proved to be much help. ScottGu’s article was very good, and in essence solved my problem, but it didn’t really help me understand the problem. The second article just went way over my head. Why? Well, it came down to another C# feature that I didn’t know about – expressions and expression trees – which, as it turns out, proved to be the key to solving this problem.

In addition to the query syntax shown above, LINQ queries can also be written using method syntax – calling the standard query operators, which are simply extension methods such as Where. The example given above can be rewritten as follows using method syntax:

students.Where(s => s.courseyear == "2008" && s.classyear == "2");

On first looking at this example, it is still not obvious how queries can be generated at runtime using this syntax. However, taking a look at the type of the argument passed to Where helps us out:

Expression<Func<TSource, bool>> predicate

After some thought, this begins to make sense. The Where method takes a lambda expression whose parameter is of type TSource and whose return value is a bool. Therefore, surely we can create such a lambda expression dynamically by building an expression tree for it? And yes, this is exactly what you are supposed to do.

C# experts are probably howling at me now and can’t understand why it took me so long to get there. However, this article is not aimed at you; it’s aimed at a newbie like me, and is simply trying to detail my thought process.

So now it was time to learn about Expression Trees.

Although I understood the concept of expression trees, they took a bit of getting used to – just don’t give up if you are struggling with them. In order to show how we can use expression trees to create dynamic queries, I will jump straight in with the code:

// Requires: using System.Linq.Expressions;
public Expression<Func<student, bool>> GetWhereLambda(string courseyear,
                                                      string classyear,
                                                      string myDynaColumn)
{
    ParameterExpression param = Expression.Parameter(typeof(student), "s");
 
    Expression courseExpr = GetEqualsExpr(param, "courseyear", courseyear);
    Expression classExpr = GetEqualsExpr(param, "classyear", classyear);
    Expression cond = Expression.And(courseExpr, classExpr);
 
    // This is where we create the expression for the dynamic column.
    // Obviously the value could have been dynamic as well but it 
    // saves a little visual complexity this way.
    cond = Expression.And(cond, GetEqualsExpr(param, myDynaColumn, "YES"));
 
    return Expression.Lambda<Func<student, bool>>(cond, param);
}
 
private Expression GetEqualsExpr(ParameterExpression param,
                                 string property,
                                 string value)
{
     Expression prop = Expression.Property(param, property);
     Expression val = Expression.Constant(value);
     return Expression.Equal(prop, val);
}

Seeing the code makes this look pretty simple, but there are quite a few concepts that a programmer new to C# has to get the hang of. However, let’s just step through the code – well one clause anyway, the rest follows from that.

Let’s look at how the clause (s.courseyear == "2008"), from our original where condition, converts to an Expression. We first create a parameter expression with Expression.Parameter(typeof(student), "s"), that is, the argument (parameter) supplied to the lambda expression (this parameter must be used for all sub-expressions that make up the lambda expression). Next, we need to create expressions to represent the left-hand side (s.courseyear) and right-hand side ("2008") of our clause. Since courseyear is a property of student (property as in C# property – think neat getters and setters) we create a property expression, i.e. prop = Expression.Property(param, "courseyear"), and as "2008" is really a constant, we create a constant expression using val = Expression.Constant("2008"). Finally, to express the fact that the property should be equal to the constant in our query, we use Expression.Equal(prop, val). That’s it. Essentially, what expressions do is model our code as data.
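If it helps, here is a minimal, self-contained sketch of just that one clause being built and then compiled into a delegate we can actually call – the cut-down student class here is only a stand-in for the real entity class used above:

using System;
using System.Linq.Expressions;

public class student { public string courseyear { get; set; } }

public class ExpressionDemo
{
    public static void Main()
    {
        // Build: s => s.courseyear == "2008"
        ParameterExpression param = Expression.Parameter(typeof(student), "s");
        Expression prop = Expression.Property(param, "courseyear");
        Expression val = Expression.Constant("2008");
        Expression body = Expression.Equal(prop, val);

        Expression<Func<student, bool>> lambda =
            Expression.Lambda<Func<student, bool>>(body, param);

        // Compile() turns the expression tree into a delegate we can run.
        Func<student, bool> test = lambda.Compile();
        Console.WriteLine(test(new student { courseyear = "2008" })); // True
        Console.WriteLine(test(new student { courseyear = "2009" })); // False
    }
}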

Now it becomes obvious how we can represent a column in the where clause that is supplied at runtime – we simply create a property expression, as GetEqualsExpr does with Expression.Property(param, property), where property is the dynamic column name specified at runtime.

To combine the clauses in the where condition we simply use Expression.And, as GetWhereLambda does twice above. The resulting expression and the parameter are then used to create a lambda expression, which is in turn passed as an argument to Where, e.g.

Expression<Func<student, bool>> myLambda = GetWhereLambda("2008", "2", "active");
IEnumerable<student> filtered = students.Where(myLambda);
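Note that passing an Expression<Func<student, bool>> to Where works because students here is an IQueryable<student> (for example a LINQ to SQL table); if students were an ordinary in-memory collection (IEnumerable<student>), you would need to call myLambda.Compile() first and pass the resulting Func<student, bool> instead.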

There you have it. It’s not that straightforward for those new to C# like me, but hopefully this article at least gives some code that can easily be used by people in the same position as I was.

The S in SOLID

Right, I’m back onto the SOLID principles again, I apologise to those who are bored of this.

So the other day I was listening to the most recent Scott Hanselman podcast with Uncle Bob Martin, where the conversation was about the recent handbag fight between Bob Martin and Joel Spolsky – as I discussed in a previous blog post. In this podcast Bob Martin talks about an issue Joel Spolsky raised with him on the StackOverflow podcast #41. An extract from the discussion on the StackOverflow (SO) podcast #41 is given below:

Spolsky: Bob, can you explain again the Single Responsibility principle? Because I don’t think I understand it right.

Martin: The Single Responsibility principle is actually a very old principle, I think it was coined by Bertrand Meyer a long time ago. The basic idea is simple, if you have a module or a function or anything it should have one reason to change. And by that I mean that if there is some other source of change… if there are sources of change out there one source of change should impact it. So a simple example, we have an employee, this is the one I use all the time…

Spolsky: Wait, wait, hold on, let me stop you for a second. Change, you mean like at run-time?

Martin: No. At development time.

Spolsky: You mean changes of code. There should be one source of entropy in the world which causes you to have to change the source code for that thing.

Martin: Yeah. That’s the idea. Do you ever achieve that? No. 

Martin: You try to get your modules so that if a feature changes a module might change but no other feature change will affect that module. You try and get your modules so partitioned that when a change occurs in the requirements the minimum possible number of modules are affected. So the example I always use is the employee class. Should an employee know how to write itself to the database, how to calculate its pay and how to write an employee report. And, if you had all of that functionality inside an employee class then when the accountants change the business rules for calculating pay you’d have to change the employee, you’d have to modify the code in the employee. If the bean counters change the format of the report you’d have to go in and change this class, or if the DBAs changed the schema you’d have to go in and change this class. I call classes like this dependency magnets, they change for too many reasons.

Spolsky: Is that… how is that bad?

I kind of feel the same as Joel with this one and I’m afraid that Bob is not really convincing me here. Why? Let’s pursue an example for a second.

Suppose that I’m creating a class that outputs a file. In this class I extract data from a database, convert various bits of the data, then output this data to a file. So how many reasons has my class got to change? Well it looks like 3: the database can change, how I convert the data can change, and how the file is output can change. Hence according to the S in SOLID, each of these operations should be contained in a separate class. Let’s just remind ourselves what the S in SOLID stands for:

A class should have one, and only one, reason to change. [1]

Now, if I am forced into making changes to the database, the data conversion, or the file format, I make the changes to my class. What is wrong with that? I just compile the project again and distribute the binary. Simple. I can’t see any benefit in breaking this class up into 4 separate classes; there just seems no point, but according to the SOLID principles there is. Arguments from Bob like “Yeah. That’s the idea. Do you ever achieve that? No” are nonsense. It clearly undermines the need for such rules in the first place. There is no point having rules if you are not going to stick to them. Fudging a rule every time it doesn’t fit your argument implies that the rule doesn’t make sense in general.
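To make the example concrete, here is roughly the kind of class I have in mind – the names and the stubbed-in bodies are made up purely for illustration:

using System.Collections.Generic;
using System.IO;
using System.Linq;

// One class, three "reasons to change": the database schema,
// the conversion rules, and the output file format.
public class StudentFileExporter
{
    public void Export(string path)
    {
        IEnumerable<string[]> rows = LoadRows();        // changes if the schema changes
        IEnumerable<string> lines = ConvertRows(rows);  // changes if the conversion rules change
        WriteFile(path, lines);                         // changes if the file format changes
    }

    private IEnumerable<string[]> LoadRows()
    {
        // Stand-in for the real database query.
        yield return new[] { "Davie", "Jones", "2008" };
    }

    private IEnumerable<string> ConvertRows(IEnumerable<string[]> rows)
    {
        return rows.Select(row => string.Join(",", row));
    }

    private void WriteFile(string path, IEnumerable<string> lines)
    {
        File.WriteAllLines(path, lines.ToArray());
    }
}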

The discussion in SO #41 then moves on to the WordPress plugin system, where Bob comments on how having independent plugins is a great example of the single responsibility principle (I must stress it was not Bob’s example, he just went with it). Eh, NO. I think that it sometimes becomes easy to forget what the S means. Let me just state it again: a CLASS should have one, and only one, reason to change. The plugin itself, which is represented by a class, may have lots of reasons to change. Again, in the subsequent Hanselminutes podcast Bob uses the term module, in the sense of, say, a jar file in Java or a DLL in .NET, to describe building a set of components that adhere to the S in the SOLID principles. This is once again deviating away from the actual meaning of the S.

OK, am I being pedantic about the definition? I don’t think so. If it doesn’t mean what it says, then change it; there are people out there that will follow these rules blindly, and having such a specific definition only encourages this.

I think a far better rule would state that a class should only be responsible for a single task, in my example that task is creating the file. This is completely different from a class that has only one reason to change. OK, it’s a lot less specific, and maybe harder to judge, but any decent programmer will have a feel for what a task is. Not only does this seem like a more sensible option, it’s what the plugin and the module argument ostensibly encourage. It seems to me this is the only true generalisation you can extract from Bob’s arguments and I think it would be much better if this is what the rule suggested.

Don’t get me wrong, there are times when you want a class to have only one reason to change. This is especially true when you are developing a framework where the source code is essentially immutable to the outside world; in that case class design becomes far, far more important. However, the SOLID principles are intended as a set of rules for every programmer to use, and as it stands, I’m sure that the S is causing more confusion, code bloat, and misguidance than it is providing benefit to those adhering to these rules.

Having rules is a great idea, but rules also encourage people to follow them word for word. It’s easy to say they should not be followed religiously, but I’m sure that is what is expected. This, coupled with developers with poor judgement, little experience, and lots of blind faith, can leave us all wondering why we are even entertaining all this.

References

[1] http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod