Archive

Posts Tagged ‘javascript’

Closures vs Objects: FIGHT

September 23rd, 2011 11 comments

In JavaScript, there are two main patterns for creating objects with state: plain JavaScript objects and closures. In this post I’m going to highlight the similarities and consider the pros and cons of the two approaches.

An object is an entity with state and methods to access/modify that state. There are two main approaches to this, which I’ll be calling “objects” and “closures”. For the rest of this post I’ll only use the word object to refer to plain JavaScript objects, in an attempt to avoid confusion.

//object
function Person(name) {
  this.name = name;
}

Person.prototype.sayHi = function () {
  console.log('Hi there, my name is ' + this.name);
};


//closure
function person(name) {
  //using object literal but state held in closure, not in object
  return {
    sayHi: function () {
      console.log('Hi there, my name is ' + name);
    }
  };
}

Now, there are infinite variations to the above. I’m using native JavaScript constructors in the object version, but it doesn’t have to be that way. For example, I could make a function that returns an object, without having to use new:

//alternative object
function person(name) {
  return {
    name: name,
    sayHi: function () {
      console.log('Hi there, my name is ' + this.name);
    }
  };
}

Alternatively, I could come up with a more convoluted example that caches the sayHi method so it’s shared between instances.

The point is, using objects, state is shared through this. Whenever you call a method on an object, e.g.

var dave = new Person('Dave');
dave.sayHi();

this within the method will be equal to the object it was called upon. When using closures however, state is shared through the lexical scope. This highlights the first key difference between objects and closures: access to the internal state.

There are three ways to mutate the internal state of an object in JavaScript. I’ll illustrate with some examples:

//Obtain a reference to the object and assign new properties
function changeName(object, newName) {
  object.name = newName;
}

changeName(dave, 'Bob');


//Attach a function to the object and call it as a method on the object
function changeName(newName) {
  this.name = newName;
}

dave.changeName = changeName;
dave.changeName('Bob');


//call/apply a function with the object as the context
changeName.call(dave, 'Bob');

With closures, on the other hand, there is only one way to mutate the internal state – be inside the scope of the closure:

function person(name) {
  //to change `name`, the code doing the changing *must* be defined somewhere inside this function

  return {
    sayHi: function () {
      console.log('Hi there, my name is ' + name);
    }
  };
}

This can be a blessing and a curse. The advantage of objects over closures is that you’re not limited in the functionality you can add to an object by location in the source code. If you decide that you need to add more functionality to an object, you can do this at any point in your codebase. With a closure the only way to add functionality (with access to the internal state) is to define it somewhere inside the function that creates the closure.

The advantage of closures over objects is the same as the disadvantage but from the perspective of third party code. With an object, anybody can add or change functionality on your object, and access its internal state. With a closure, the internal state is private – it can’t be accessed from outside the closure without the use of accessor functions.

Another advantage that objects have is in terms of memory usage. With a closure, by definition, for a function to have access to the internal state it must be defined inside the closure. That means that each new closure created must have its own version of the function. Objects on the other hand have no such limitation. A function that reads from and writes to this need only be defined once – it can then be added to any object, either shared via the prototype system or through other means.

An advantage that closures have is that you don’t need to keep track of this. With closures it’s simple – you’re either in the correct scope or you’re not. With objects, if you’re in a method called on an object then this is that object, but if the method gets detached from its object (e.g. when passed as an argument), or if you have a nested function inside the method, it loses its binding. Then you need to start messing about with call, apply, bind and other fun stuff.

So, when should you use objects and when closures? If you’re making hundreds of object-type things then they should probably be objects. If there are only a few and you have security concerns then closures are a better bet.

Privacy isn’t just a security issue – it’s useful for creating a clean separation between the public API and the private implementation. Some developers would say that JavaScript doesn’t provide privacy, so you should get used to writing objects with everything public, and tell users of your code to just not touch certain properties (for example those with a leading underscore). I think it’s useful being able to enforce privacy – it makes sure that no-one will ever write code that’s reliant on an implementation detail of yours.

Because of this, I tend to favour closures, falling back to objects when memory becomes a concern, but that’s largely a personal preference.

Understanding typeof, instanceof and constructor in JavaScript

September 18th, 2011 9 comments

They say in JavaScript “everything is an object”. They’re wrong. Some types in JavaScript are so-called “primitive types”, and they don’t act like objects. These types are:

  • Undefined
  • Null
  • Boolean
  • Number
  • String

The confusion comes from the fact that the boolean, number and string types can be treated like objects in a limited way. For example, the expression "I'm no object".length returns the value 13. This happens because when you attempt to access properties or methods on a primitive value, JavaScript instantiates a wrapper object temporarily, just so you can access its methods. ‘Cause JavaScript’s nice like that. I’m not going to go into more details here, but Angus Croll wrote about The Secret Life of JavaScript Primitives, so that would be a good place to learn more.

typeof

typeof is a unary operator, just like the ! operator. It returns a string representing the type of its operand. Here are some examples:

typeof 3; // returns "number"
typeof 'blah'; //returns "string"
typeof {}; //returns "object"
typeof []; //returns "object"
typeof function () {}; //returns "function"

typeof has its idiosyncrasies. For example, typeof null returns "object", and in some browsers typeof /[a-z]/ returns "function" (even though the spec says regular expressions are objects). Again, Angus Croll has written more on this subject than I have space for here.

So, basically typeof is used for telling apart the different primitive types (as long as you don’t care about null). It’s no use for telling different types of object apart though – for most objects typeof will return "object".

constructor

constructor is a property available on all objects’ prototypes, and it is a reference to the constructor function used to create the object. So, ({}).constructor returns the Object constructor function (the parentheses are needed to clarify a syntactic ambiguity) and [].constructor returns the Array constructor function. Likewise, it will return your custom constructor function:

function Person(name) {
  this.name = name;
}

var dave = new Person('Dave');
dave.constructor === Person; //true

Remember that unlike the typeof operator, constructor returns a reference to the actual function. Another gotcha: because constructor is part of the prototype, if you reassign the prototype to a constructor function, e.g. Person.prototype = {};, you’ll lose the constructor property.
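For example:

```javascript
function Person(name) {
  this.name = name;
}

// Reassigning the prototype wholesale...
Person.prototype = {
  sayHi: function () {
    console.log('Hi there, my name is ' + this.name);
  }
};

var dave = new Person('Dave');
dave.constructor === Person; // false!
dave.constructor === Object; // true – the object literal's constructor
```

(You can put it back manually with Person.prototype.constructor = Person;.)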

instanceof

instanceof is a binary operator – its syntax is instance instanceof Constructor. So, to continue the above example:

dave instanceof Person; //true

The difference between instanceof and the constructor property (apart from the obvious syntactic difference) is that instanceof inspects the object’s prototype chain. So, going back to our friend dave again:

dave instanceof Object; //true

This is because Person.prototype is an object, so Object.prototype is in dave’s prototype chain, therefore dave is an instance of Object.

Wrap-up

So, if you’re dealing with primitive values, use typeof to distinguish them. Because typeof returns "function" for functions, it can also be useful for checking if an object member or a function argument is a function. If you’re working out the constructor of an object, use its constructor property. And if you’re dealing with lengthy inheritance chains, and you want to find out whether an object inherits from a certain constructor, use instanceof.

Categories: Programming Tags: ,

Making inheritable objects in JavaScript without objects

August 14th, 2011 1 comment

For a long time I’ve found the subject of the equivalence of closures and objects fascinating. The problem with creating objects using closures is that there isn’t a way to do inheritance (although you should always favour composition over inheritance anyway!). I thought it would be fun to see if it was possible to implement an object system in JavaScript with inheritance, without using any plain JavaScript objects.

To do this, I set myself two rules:

  1. Don’t create any plain old JavaScript objects (functions, arrays etc. are allowed)
  2. Don’t use the dot operator.

For that to work, I wrote some wrappers for the JavaScript methods I would need, like Array.prototype.slice.call (this becomes callSlice). Apart from those wrappers, there are no dots used anywhere. To dispatch methods based on a string name, instead of using an object (which would have been much easier but would have missed the point of the exercise) I’m passing around functions which dispatch methods using conditional constructs (if, else and switch).

Introducing funcobj!!!

Yeah I know, it’s a rubbish name. Anyway, the Github repo is here and I’ve got a running example on JsFiddle here.

Here’s some examples of how it works:

//call the doSomething method on myObject, with the argument 500
myObject('doSomething')(500);

//call the doSomethingElse method on the object's super object
myObject('doSomethingElse', true)();

To make an object like this, call objMaker with a function that defines and dispatches methods:

To inherit, pass a third argument to objMaker, which is the object to delegate to if the new object does not contain the called method (or: if the new object cannot respond to the message). Have a look at this long example in JsFiddle to see how that works:
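To give a flavour of the idea, here’s a hypothetical sketch – simplified, and not the actual funcobj code (the real objMaker’s signature differs). An “object” is just a dispatching function, its state lives entirely in closures, and inheritance is delegation to another dispatching function:

```javascript
function objMaker(define, superObj) {
  var dispatch;
  var self = function (methodName, useSuper) {
    if (useSuper && superObj) {
      return superObj(methodName);
    }
    var method = dispatch(methodName);
    if (method) {
      return method;
    }
    if (superObj) {
      return superObj(methodName); // delegate unknown messages upwards
    }
    throw new Error('no method called ' + methodName);
  };
  dispatch = define(self);
  return self;
}

// A counter whose state is held in a closure, not an object
var counter = objMaker(function (self) {
  var count = 0;
  return function (name) {
    switch (name) {
      case 'increment': return function () { count += 1; };
      case 'count': return function () { return count; };
    }
  };
});

// "Inheritance" by delegating to another dispatch function
var doubler = objMaker(function (self) {
  return function (name) {
    if (name === 'incrementTwice') {
      return function () {
        self('increment')(); // falls through to counter
        self('increment')();
      };
    }
  };
}, counter);

counter('increment')();
doubler('incrementTwice')();
counter('count')(); // 3
```

Note that this sketch delegates to an instance rather than to anything prototype-like, so doubler shares counter’s state – the real thing is more involved.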

This is stupid

Yeah, I know. It’s not meant to be used for anything. It’s probably hopelessly inefficient, and the syntax leaves a lot to be desired. But it does work, which was what I was going for. It might be a stupid idea, but it demonstrates the power of JavaScript closures. It also gives the possibility of dynamically responding to unknown messages, like in Ruby’s method_missing.

This is what objMaker looks like. There’s some fairly gnarly code in there, but if you can get your head around what’s doing what, you’ll hopefully learn something new about JavaScript!


Easy functional programming in JavaScript with Underscore.js — part 2

July 29th, 2011 3 comments

In the previous post I showed how good use of Underscore.js‘s map function can substantially increase the quality of your code. This time I’ll explore some other functional collection favourites: reduce, select and all.

Reduce that array over a low heat until thick and creamy

Have you ever had to do something like this?
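Something along these lines (a representative example):

```javascript
// Building up a running total with a loop
var numbers = [1, 2, 3, 4];
var total = 0;
for (var i = 0; i < numbers.length; i++) {
  total = total + numbers[i];
}
console.log(total); // 10
```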

Basically, this is going through some kind of list of values, calculating a new value from each one and accumulating some kind of answer. It’s a common pattern, and there’s an Underscore function for it! This is exactly what _.reduce was built for:

_.reduce takes a collection of values, a function, and a starting value. In this case, the starting value is 0 because we’re building up a running total of numbers. The function is passed the running total (which starts at 0) and each element of the collection. The return value of the function becomes the next value for the running total. _.reduce doesn’t just need to be used on numbers:

In this case, I’m reducing an array of animal objects into a single string. Of course there are plenty of ways to do the above, but this shows you the flexibility of _.reduce.

Bo Selecta!

Imagine you have an array of strings, but you want a new array of strings that start with a capital letter. You might do that like this:
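Perhaps something like this:

```javascript
var names = ['Dave', 'sheila', 'Bob', 'keith'];
var capitalised = [];

for (var i = 0; i < names.length; i++) {
  if (names[i].charAt(0) === names[i].charAt(0).toUpperCase()) {
    capitalised.push(names[i]);
  }
}

console.log(capitalised); // ["Dave", "Bob"]
```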

Now, you won’t be surprised to learn that there’s an Underscore function for doing just that: introducing _.select!

Isn’t that much cleaner? Again, we’re abstracting away the minutiae of selecting elements from a list, and just declaring that we want the elements that start with a capital letter. Simples!

All the single ladies

Sometimes you just want to know a yes or no answer about a collection. For example, are all my ladies single?

Hell yes they are. You can read _.all as “is this statement true about all the elements in my collection?” In the same way, _.any can be read as “is this statement true about any of the elements in my collection?”

What about this?

Here’s a useful little tip. When you’re working with nested functions in an object context, you’ll find that this in the nested function doesn’t refer to the object’s context, but the global context. The standard way to deal with this is a line like var that = this; or var self = this;. There’s a nicer way to do this with Underscore. Nearly all the functions for working on collections take an optional context argument, which specifies what this will be inside the function. Here’s an example:

Because we’re passing the value of this to _.select, the function passed to _.select is working with the correct this. Otherwise, this.maxAge would be undefined.

Learning more

I’ve only gone into some of the functionality that Underscore offers. The docs are very approachable and clear – I urge you to go and read them.

@SamirTalwar on Twitter recommended that I show the alternative syntax for Underscore, as described in the docs. Basically, instead of doing:

_.map(collection, func);

you can do:

_(collection).map(func);

And, coupled with .chain() you can do awesome chaining things like this:

Pretty cool. Anyway, use whichever feels right to you. Enjoy!


Easy functional programming in JavaScript with Underscore.js — part 1

July 27th, 2011 6 comments

So, you’ve been developing in JavaScript for a while, you’re getting on quite well with your for and while loops, when somebody comes along and tells you you shouldn’t be doing that, that JavaScript is a functional programming language and there are much better ways to solve most problems than with loops. Whaaa…? Well, that’s what I’m here to tell you anyway.

What is functional programming?

Functional programming (FP) is all about the idea of functions as a mapping from one value to another. An FP function just turns one value into another – it doesn’t have side effects. Side effects are anything that happens within the function that changes something outside of the function. For example, changing the value of a variable not defined in the function, or logging output, or showing an alert message … basically anything that affects the world outside the function. The only thing an FP function does is take in a value and return a new value.

Another interesting thing about FP is the use of higher-order functions. These are functions that either take or return functions as values (or both). I’m mainly going to be looking at functions that take other functions as parameters, because there are some interesting things you can do with that.

An underscore, courtesy of Wikipedia

Functional programming in JavaScript is made a lot easier with a suite of functions called Underscore.js all packed into a minified script of just 3kb. Underscore provides a selection of common FP functions for working on collections, like map, each, reduce, select etc. I’ll walk through some of these to demonstrate the power of FP. Note that functional programming isn’t just about working with collections – there are lots of other exciting things you can do – but for now I’ll concentrate on the collections.

Mapping over an array

One of the most widely used Underscore functions is _.map. This takes an array and a transforming function, and creates a new array based on the return value of the transforming function. That’s a bit abstract so I’ll give you an example:

So, _.map is passed an array, and a function. That function is called on each element of the array in turn, and the return value of the function gives the corresponding element in the new array. Note that the original array is left intact. The _.map function takes an array, and returns another array – it’s a mapping from one value to another.

You can do much more interesting things with _.map than just multiplying numbers though. You could transform an array of strings into an array of cats!

I hope you’ll forgive me for using two Underscore functions there. First off, we’ve got _.map. In this case, we’re going through the array of cat names, and creating an array of cats. The second is _.each. This, like _.map, goes through each element of the array, passing them into the function, but it doesn’t care about the return value. It just executes the function and carries on with its life. Now, if you’re concentrating, you may have noticed that _.each isn’t actually a mapping from one value to another, and you’d be right. _.each is only useful if you allow side effects, for example console.log-ing. A JavaScript program isn’t much fun without any side effects, because you’d have no idea if anything happened otherwise. The key to functional programming is minimising the side effects.

What’s great about _.map is that once you’ve learned to read it, and understand what it does, it really succinctly explains the meaning of your code. In that cat example, you can see that all the _.map function is doing is turning strings into cats. That’s all you need to know. The old-school JavaScript technique would look like this:
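Something like this (again, the Cat constructor is just for illustration):

```javascript
function Cat(name) {
  this.name = name;
}

var catNames = ['Trevor', 'Fluffy'];
var cats = [];

for (var i = 0, length = catNames.length; i < length; i++) {
  cats.push(new Cat(catNames[i]));
}
```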

In comparison, isn’t that just damn ugly? I mean, to read that you need to keep track of the array, and the array index, and the length of the array, and you have to make a new blank array to contain the new cats, and … you get the point. It’s mixing up the how with the what. All I care about is that I’m getting an array of cats, from an array of cat names. Don’t bother me with insignificant details like array indices.

Here’s another example: constructing a query string:

This one uses an object, but the same principles apply. From a list of key/value pairs, we’re constructing an array of ‘key=value’ strings. Then all you need to do is join the array with &s. This describes how a query string is formed much more declaratively than the standard

append the key, then an equals, then the value, then an ampersand, then the next key, then the next value, then another ampersand, then – oooh we’re at the end – better get rid of that last ampersand, whoops!

Go forth and map-ify!

I hope that’s given you some ideas as to how functional programming can be used in JavaScript. And I really hope that you can begin to see how much cleaner your code can get when you cast out the for loops and embrace the _.map. If all you’re after is map and each, and you’re using jQuery, you’ll be glad to know both functions have jQuery equivalents: jQuery.map and jQuery.each, albeit with a slightly different API. Next time I’ll be looking at some of Underscore’s other functions, for the times when _.map just won’t cut it.


Part 2 is now up!


Test-drive your JavaScript!

July 23rd, 2011 5 comments

Testcard

JavaScript is such an important language today. It’s stopped being a toy scripting language, and become a serious programming language. Unfortunately, the vast majority of JavaScript developers don’t unit test their code.

Testing is a vital part of modern development. Code without tests isn’t code – it’s random scribblings that may or may not be executable. Even if they are executable, the only way you can tell if they all work is by loading your application or website, and trying out every single thing that a user could possibly do.

Test-driven development (TDD) is a really nice way of developing software by writing tests before you write the code. If you’re not used to this way of working then it sounds like a weird way to do things, but hear me out.

Imagine you’re building a web app. You wouldn’t just write the whole thing, then open up a browser and see if it works. You’d write a small bit, open the browser and see if it does the expected thing. If it didn’t, then you’d work out why, then fix it, and look again. Then you’d do the next bit.

TDD is just like that, but easier. With TDD, instead of going to the browser each time and seeing if what you’ve just written works, you write a bit of test code that checks that your code does the expected thing. Once it does, you write another test that checks the next thing you’d like it to do. The advantage of working this way is that every time you run your tests, you’re running all of the previous tests you’ve written. That way you can see if something you’ve written to get the most recent thing working ended up breaking something you did earlier. If you were going to do this the old way, you’d either have to check everything every time you made a change, or risk breaking other things that you’re not concentrating on right now. A good test suite will run in under a second, so you get instant feedback whenever you make a change.

Using TDD also means that you end up thinking about the application in a much more sensible way. It forces you to limit the interactions between different bits of the code, which makes the code much easier to modify at a later date.

A side-effect of TDD is that you end up with a comprehensive test suite that can be used to verify the behaviour of your app.

Ok – I’m sold!

Now, TDD takes some practice. You need to learn the tools and the techniques so you can get to the point where you actually write code faster when you’re test-driving it. So, how do you get there?

Test-Driven JavaScript Development

The best book I know on the subject is Test-Driven JavaScript Development by Christian Johansen. This book works on the assumption that you’ve never done TDD before, and gives an excellent grounding in both TDD in general, and how to do it in JavaScript. This is done through example code that you can write and test as you read the book. By the time you finish the book you’ll know how to do TDD, you’ll have some awesome tools for testing JavaScript, and you’ll have written a Node.js app!

If you’re not sure whether you want to get the book yet, have a look at the author’s introductory testing article on Script Junkie.

Testing tools

Although the author of Test-Driven JavaScript Development advocates the use of Google’s JSTestDriver, I much prefer to use Jasmine. If you work with Ruby you’ve likely used RSpec – Jasmine is basically RSpec for JavaScript, but with some ideas of its own as well. Jasmine is a BDD framework, not TDD. BDD (behaviour-driven development) is like TDD, but with a different emphasis and slightly different philosophy (and terminology). To find out more about BDD have a look at the Wikipedia page, as it’s outside the scope of this article.

Finally

I hope I’ve convinced you by this point that testing is worth it – even in JavaScript. I also hope that you’ll give TDD a go, as I think it’s (a) a great way to develop software, and (b) a brilliant way to create a test suite for your code. It does take a bit of time to set everything up the first time, and to learn the right techniques, but that time will pay for itself over and over again, when you have applications that actually do what they’re supposed to do, and don’t break when you change them. So please, do give it a go, and let me know how you get on!


JavaScript, JSON and modern web MVC

June 2nd, 2011 7 comments

As web developers we’re working through a transitional period. New technologies are becoming widespread enough to be usable in real web apps – not just toys. After years of stagnation in JavaScript engines, Google Chrome started an arms race in speed, with all the major browser makers competing to make the fastest JavaScript interpreter. These changes are opening up a new world of web app development that hasn’t been fully explored yet, but this means we may have to rethink some of the best practices we’ve been following for the past decade.

MVC today

In a modern web application framework, MVC is well defined: requests get routed to a controller, which requests data from the model (which is probably using an ORM) and instantiates a templated view with this data. This view is sent as HTML to the browser which creates a DOM from the HTML, and renders the DOM (with CSS and images) as a web page.

JavaScript then operates on the DOM, adding a layer of interactivity to the page. As things advance, the JavaScript layer has taken on more importance, reducing the number of page reloads by selectively reloading parts of the page with Ajax.

As the complexity of JavaScript on the client has increased, it’s been easy to end up with unstructured spaghetti code based around the jQuery model of “find something, do something with it”. As more state has moved to the browser, MVC frameworks for JavaScript have stepped in to add structure to the code. The problem is, now we have two MVC stacks:

Current JavaScript solutions suffer from "Double MVC". You need both server and client-side MVC stacks. Conceptual complexity is very high.

— DHH (@dhh)

MVC of the future

MVCMVC or MMVVCC aren’t just un-catchy, they’re a rubbish idea. If we’re going to make rich apps that rely on JavaScript (and as I said previously, I think we should), we need to lose some redundancy.

The model will usually be accessing some shared database, so in most cases it makes sense for this component to be on the server (authentication usually lives here as well). The view is the DOM, and the controller is the JavaScript. There’ll also need to be model code in the JavaScript – the extent to which the model logic is split between client and server depends on the application.

All the server needs to do is provide JSON responses when the client asks for more data, ideally using simple RESTful resources.

One benefit of the server being a simple JSON store is that it enables us to easily add redundancy to our apps – if server1 fails to deliver a response, fall back to server2.

No more HTML

The DOM is the means by which we display our app to the user. The browser thinks in terms of the DOM. HTML is just a serialisation of the DOM. Modern web app developers shouldn’t be thinking in terms of HTML. HTML is the assembly language of the web, and we should be working at a higher level of abstraction if we want to get stuff done. Most of the time we should be thinking in terms of layouts, widgets and dialogs, like grown-up UI designers.

The problem at the moment is that HTML is being used as a combined data and presentation format, and it should be obvious that this is a Bad Thing™.

Of course at some point we’ll need to go down to the level of the HTML, and at that point we should be using templates. jQuery.tmpl is one (see my slides and presentation on the subject), but there are many others too. One interesting development (thanks to James Pearce for putting me onto this) is Model-driven Views (MDV).

MDV links JavaScript objects to the DOM in such a way that any changes to the object are automatically reflected in the DOM, and likewise, any user changes to the DOM are automatically reflected in the model objects. Hopefully this will become standard in modern browsers, as it provides a clean and easy way to keep the DOM up-to-date with the model objects without endless wiring, which again helps to insulate us from having to deal with too much HTML.

I’m not suggesting that HTML will go away, or that developers won’t need to learn it – just that most of the time, most developers won’t need to be worrying about it, just like most of the time, most C developers don’t need to worry about assembly code.

Conclusion

Most web-apps are currently halfway there in terms of moving functionality to the client. The problem is that we have ended up in the worst of both worlds as app developers. Our serverside MVC framework provides a nice template-based abstraction over the HTML, which we then break apart on the client. We’re forced to work at too many different levels of abstraction simultaneously, which is never a good thing.

Things were simple when everything happened on the server. Things will be simple again when everything happens on the client. Over time new tools and best practices will emerge to encourage this new way of working. In the meantime, it’s going to take a bit of extra effort, but it’ll be worth it.

The end of progressive enhancement revisited

May 29th, 2011 23 comments

This is a follow-up post to JavaScript and the end of progressive enhancement - you may want to read that first if you haven’t already. The comments are worth a read, as well as the Reddit thread.

The app/doc divide

What I didn’t make clear in my original post (because I wasn’t fully clear about it at the time) is the distinction between document-based web sites, and web applications. Documents are made to be read, whereas apps are made to be interacted with. James Pearce wrote a great description in the comments:

Actually what we’re maybe observing here is a clash of civilisations: the documentistas vs the applicationistas; web-as-a-medium vs web-as-a-technology-stack; designers vs programmers. It’s no wonder that progressive enhancement is a powerful tenet for the former of each pair to rally around… but not necessarily the most architecturally important consideration of the latter.

[James has since written a great, and highly relevant, post on the subject of URLs in JS apps.]

It goes without saying that apps and documents are two completely different things (even though there is a huge amount of overlap between the two). Trying to treat web design/development as a single discipline is a mistake. Anything that is primarily meant for reading, such as a newspaper website or a blog, exists as a collection of documents. I should be able to link to another blogger’s blog post and expect that in a week or a year that content will still be available at that address. I shouldn’t need to have JavaScript enabled in order to visit that address.

Web apps aren’t the same thing. A web app is an application that happens to use web technologies. Most web apps could be desktop apps – what makes a web app a web app is that it is accessed using a browser. No-one expects to be able to hyperlink to a particular view in Microsoft Excel – what would that even mean? In the same way, I can’t share a link to a particular email in Gmail. Web apps don’t always exist in the same web as everything else – many apps are islands.

Historically, web apps have been built in the document-based paradigm, using links to move to different parts of the app and forms to submit data for processing. The only reason they were made like this was because this was all the technology allowed. Now that technology has advanced, first with Ajax and now with HTML5, we aren’t tied to this old practice.

Amy Hoy summed it up nicely:

"graceful degradation" (and likewise, p.e.) died the moment we moved away from document design and into app design.
@amyhoy
Amy Hoy

The problem is that because everyone building stuff online is using the same technologies, there can be a tendency to think that there is one true way of using those technologies, independent of what they’re being used for. This is clearly false.

The end of progressive enhancement?

Progressive enhancement is still the way to make document-based sites on the web – there’s no denying that. But with applications the advantages are less clear. If we can accept the fact that the document-based paradigm isn’t the best way to make web apps, then we should also accept the fact that progressive enhancement doesn’t apply to apps.

JavaScript and the end of progressive enhancement

May 4th, 2011 53 comments

Progressive enhancement is the Right WayTM to do things in web development. It works like this:

  1. Write the HTML for your content, 100% semantically (i.e. only the necessary tags to explain the meaning of the content).

  2. Style the HTML using CSS. You may need to add hooks to your HTML in the form of classes and ids for the CSS to target.

  3. Add JavaScript enhancements to the interface, but only enhancements.

Parallel to this, the functionality of the site/app is progressively enhanced:

  1. All navigation must happen via links and form submissions – don’t use JavaScript for navigation.

  2. Add JavaScript enhancements to navigation, overriding the links and form submissions, for example to avoid page reloads. Every single link and form element should still work when JavaScript is disabled.

But of course you knew that, because that’s how you roll. I’m not sure this is the way forward though. To explain why I’ll need to start with a bit of (over-simplified) history.

A history lesson

Static pages

Originally, most sites on the web were a collection of static HTML pages saved on a server. The bit of the URL after the domain gave the location of the file on the server. This works fine as long as you’re working with static information, and don’t mind each file having to repeat the header, footer, and any other shared HTML.

Server-side dynamism

In order to make the web more useful, techniques were developed to enable the HTML to be generated on the fly, when a request was received. For a long time, this was how the web worked, and a lot of it still does.

Client-side dynamism

As JavaScript has become more prevalent on the client, developers have started using Ajax to streamline interfaces, replacing page loads with partial reloads. This either works by sending HTML snippets to replace a section of page (thus putting the burden of HTML generation on the server) or by sending raw data in XML or JSON and letting the browser construct the HTML or DOM structure.

Shifting to the client

As JavaScript engines increase in speed, the bottleneck in Ajax applications becomes the transfer time. With a fast JavaScript engine it’s quicker to send the data to the client in the lightest way possible and let the client construct the HTML, rather than constructing the HTML on the server to save the browser some work.

This raises an issue – we now need rendering code on the client and the server, doing the same thing. This breaks the DRY principle and leads to a maintenance nightmare. Remember, all of the functionality needs to work without JavaScript first, and only be enhanced by JavaScript.

No page reloads

If we’re trying to avoid page reloads, why not take that to its logical conclusion? All the server needs to do is spit out JSON – the client handles all of the rendering. Even if the entire page needs to change, it might be faster to load the new data asynchronously, then render a new template using this new data.
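As a rough sketch of that approach (a hypothetical `renderPost` function, with the JSON inlined here to stand in for an Ajax response):

```javascript
// Sketch: the server only sends raw JSON; the client builds the HTML.
// In a real app the JSON string would arrive via an Ajax request;
// here it is inlined for illustration.
function renderPost(post) {
  return '<article><h1>' + post.title + '</h1>' +
         '<p>' + post.body + '</p></article>';
}

// As received from the server:
var json = '{"title": "Hello", "body": "World"}';

var html = renderPost(JSON.parse(json));
console.log(html); // '<article><h1>Hello</h1><p>World</p></article>'
```

The server never touches HTML at all; the same rendering code can be reused for every view on the client.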

Working this way, a huge amount of functionality would need to be written twice; once on the server and once on the client. This isn’t going to work for most people, so we’re left with two options – either abandon the “no reload” approach or abandon progressive enhancement.

The end of progressive enhancement

So, where does this leave things? It depends on how strongly you’re tied to progressive enhancement. Up until now, progressive enhancement was a good way to build websites and web apps – it enforced a clean separation between content, layout, behaviour and navigation. But it could be holding us back now, stopping the web moving forward towards more natural interfaces that aren’t over-burdened by its history as something very different to what it is today.

There are still good reasons to keep using progressive enhancement, but it may be time to accept that JavaScript is an essential technology on today’s web, and stop trying to make everything work in its absence.

Or maybe not

I’m completely torn on this issue. I wrote this post as a way of putting one side of the argument across in the hope of generating some kind of discussion. I don’t know what the solution is right now, but I know it’s worth talking about. So let me know what you think!

Appendix: Tools for this brave new world

Backbone.js is a JavaScript MVC framework, perfectly suited for developing applications that live on the client. You’ll want to use some kind of templating solution as well – I use jQuery.tmpl (I’ve written a presentation on the topic if you’re interested in learning more) but there are lots of others as well.

Sammy.js (suggested by justin TNT) looks like another good client-side framework, definitely influenced by Sinatra.

If anybody has any suggestions for other suitable libraries/frameworks I’ll gladly add them in here.


Edit: I’ve now written a follow-up to this post: The end of progressive enhancement revisited.

Closures explained with JavaScript

April 22nd, 2011 18 comments

Last year I wrote A brief introduction to closures which was meant to help people understand exactly what a closure is and how it works. I’m going to attempt that again, but from a different angle. I think with these kinds of concepts, you just need to read as many alternative explanations as you can in order to get a well-rounded view.

First-class functions

As I explained in Why JavaScript is AWESOME, one of the most powerful parts of JavaScript is its first-class functions. So, what does “first-class” mean in a programming language? To quote Wikipedia, something is first-class if it:

  • can be stored in variables and data structures
  • can be passed as a parameter to a subroutine
  • can be returned as the result of a subroutine
  • can be constructed at runtime
  • has intrinsic identity (independent of any given name)
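JavaScript functions tick every one of those boxes. A quick illustration (my own example, not from the original post):

```javascript
// Stored in a variable:
var square = function (n) { return n * n; };

// Passed as a parameter:
function twice(fn, x) { return fn(fn(x)); }

// Constructed at runtime and returned as a result:
function makeAdder(n) {
  return function (x) { return x + n; };
}

console.log(twice(square, 3));  // 81
console.log(makeAdder(2)(5));   // 7
```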

So, functions in JavaScript are just like objects. In fact, in JavaScript functions are objects.

The ability to nest functions gives us closures. Which is what I’m going to talk about next…

Nested functions

Here’s a little toy example of nested functions:
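(The embedded fiddle isn’t shown here, so this is a reconstruction consistent with the discussion that follows:)

```javascript
function f(x) {
  // g is created anew on each call to f, and closes over f's argument x
  function g() {
    return x;
  }
  return g;
}

var g5 = f(5); // one function g, remembering x = 5
var g1 = f(1); // a different function g, remembering x = 1

console.log(g5()); // 5
console.log(g1()); // 1
```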

The important thing to note here is that there is only one f defined. Each time f is called, a new function g is created, local to that execution of f. When that function g is returned, we can assign its value to a globally defined variable. So, we call f and assign the result to g5, then we call f again and assign the result to g1. g1 and g5 are two different functions. They happen to share the same code, but they were executed in different environments, with different free variables. (As an aside, we don’t need to use a function definition to define g and then return it. Instead, we can use a function expression which allows us to create a function without naming it. These are called ‘anonymous functions’ or lambdas. Here’s a version of the above with g converted to an anonymous function.)

Free variables and scope

A variable is free in any particular scope if it is defined within an enclosing scope. To make that more concrete, in the scope of g, the variable x is free, because it is defined within the scope of f. Any global variables are free within the scopes of f and g.

What do I mean by scope? A scope is an area of code where a variable may be defined, without the enclosing scope knowing about it. JavaScript has ‘function scope’, so each function has its own scope. So, any variable defined within f is invisible outside of f. Scopes can be nested, so in the above example, g has its own scope which is contained within the scope of f, which is contained within the global scope. Whenever a variable is accessed, the JavaScript interpreter first looks within the current scope. If the variable is not found to be defined within the current scope, the interpreter checks the enclosing scope, then the enclosing scope of the enclosing scope, all the way up to the global scope. If the variable is not found, a ReferenceError is thrown.
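To make the scope chain concrete, here’s a small illustration (my own sketch):

```javascript
var a = 1; // global scope

function f() {
  var b = 2; // f's scope
  function g() {
    var c = 3; // g's scope
    // a and b are found by walking up the enclosing scopes
    return a + b + c;
  }
  return g();
}

console.log(f()); // 6
// console.log(b); // would throw a ReferenceError: b is invisible outside f
```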

Closures are functions that retain a reference to their free variables

And this is the meat of the matter. Let’s look at a simplified version of the above example first:
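(Again, the embedded example isn’t shown here; this is a reconstruction matching the description below, with g called inside f rather than returned:)

```javascript
function f(x) {
  function g() {
    return x; // x is free in g: it comes from f's scope
  }
  return g(); // g is called inside f, not returned from it
}

console.log(f(5)); // 5
```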

It’s no surprise that when you call f with the argument 5, g has access to that argument when it is called. What’s more surprising is that if you return g from the function, the returned function still has access to the argument 5 (as shown in the original example). The part that can be mind-blowing (and, I think, the main reason people have such trouble understanding closures) is that the returned g actually remembers the variable x that was defined when f was called. That might not make much sense, so here’s another example:
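(The embedded fiddle isn’t shown here, so this is a reconstruction consistent with the discussion that follows:)

```javascript
function person(name) {
  function get() {
    return name;
  }
  function set(newName) {
    name = newName; // mutates the same free variable that get reads
  }
  return [get, set];
}

var result = person('Dave');
var getDave = result[0], setDave = result[1];

var result2 = person('Mary');
var getMary = result2[0], setMary = result2[1];

setDave('Bob');
console.log(getDave()); // 'Bob'
console.log(getMary()); // 'Mary'
```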

When person is called, the argument name is bound to the value passed. So, the first time person is called, name is bound to ‘Dave’, and the second time, it’s bound to ‘Mary’. person defines two internal functions, get and set. The first time these functions are defined, they have a free variable name which is bound to ‘Dave’. These two functions are returned in an array, which is unpacked on the outside to get two functions getDave and setDave. (If you want to return more than one value from a function, you can either return an object or an array. Using an array here is more verbose, but I didn’t want to confuse the issue by including objects as well. Here’s a version of the above using an object instead, if that makes more sense to you.)

And this is the magic bit. getDave and setDave both remember the same variable name, which was bound to ‘Dave’ originally. When setDave is called, that variable is set to ‘Bob’. Now when getDave is called, it returns ‘Bob’ (Dave never liked the name ‘Dave’ anyway). So getDave and setDave are two functions that remember the same variable. This is what I mean when I say “Closures are functions that retain a reference to their free variables”. getDave and setDave both remember the free variable name. Even though person has now returned, the variable name lives on, because it is referenced by getDave and setDave.

The variable name was bound to ‘Dave’ when person was called the first time. When person is called a second time, a new version of name comes into existence, as well as new versions of get and set. So the functions getMary and setMary are completely different to the functions getDave and setDave. They execute identical code, but in two different environments, with different free variables.

Summary

In summary, a closure is a function called in one context that remembers variables defined in another context – the context in which it was defined. Multiple closures defined within the same context remember that same context, so changes they make are shared between them. Every time a function is called that creates closures, a new, shared context is created for the new closures.

The best way to learn is to play, so I’d recommend editing the fiddles above (by clicking on the + at the top right) and having a fiddle. Enjoy!


Addendum: Lasse Reichstein had some useful comments on this post on the JSMentors mailing list – read them here. The important thing to realise is that a closure actually remembers its environment rather than its free variables, so if you define a new variable in the environment of the closure after the closure’s definition, it will be accessible inside the closure.
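A tiny sketch of that point (my own example): the closure sees the variable even though it is assigned after the closure is defined, because both live in the same environment.

```javascript
function f() {
  function g() {
    return x; // x is assigned below, after g is defined
  }
  var x = 5; // same environment as g, so g sees this assignment
  return g;
}

console.log(f()()); // 5
```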