Archive

Posts Tagged ‘progressive-enhancement’

JavaScript, JSON and modern web MVC

June 2nd, 2011 7 comments

As web developers we’re working through a transitional period. New technologies are becoming widespread enough to be usable in real web apps – not just toys. After years of stagnation in JavaScript engines, Google Chrome started an arms race in speed, with all the major browser makers competing to build the fastest JavaScript engine. These changes are opening up a new world of web app development that hasn’t been fully explored yet, and they mean we may have to rethink some of the best practices we’ve been following for the past decade.

MVC today

In a modern web application framework, MVC is well defined: requests get routed to a controller, which requests data from the model (which is probably using an ORM) and instantiates a templated view with this data. This view is sent as HTML to the browser which creates a DOM from the HTML, and renders the DOM (with CSS and images) as a web page.
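In code, that round trip looks something like the sketch below – purely illustrative, assuming an Express-style router, with Post.find and the posts/show template standing in for whatever ORM and template engine you happen to use:

```javascript
// Hypothetical sketch of the classic server-side MVC flow.
var express = require('express');   // assuming an Express-style framework
var app = express();

app.get('/posts/:id', function (req, res) {
  // the controller asks the model for data (Post.find is a made-up ORM call)...
  Post.find(req.params.id, function (err, post) {
    // ...then instantiates a templated view and sends it to the browser as HTML
    res.render('posts/show', { post: post });
  });
});

app.listen(3000);
```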

JavaScript then operates on the DOM, adding a layer of interactivity to the page. As things have advanced, the JavaScript layer has taken on more importance, reducing the number of page reloads by selectively reloading parts of the page with Ajax.
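A typical enhancement of that kind is only a few lines of jQuery – something like this (selector and URL are invented for illustration):

```javascript
// Find something, fetch a fragment, swap it into the page.
$('#load-more').click(function (e) {
  e.preventDefault();                       // enhancement: skip the full page load
  $('#comments').load('/comments?page=2');  // server returns an HTML snippet
});
```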

As the complexity of JavaScript on the client has increased, it’s been easy to end up with unstructured spaghetti code based around the jQuery model of “find something, do something with it”. As more state has moved to the browser, MVC frameworks for JavaScript have stepped in to add structure to the code. The problem is, now we have two MVC stacks:

Current JavaScript solutions suffer from "Double MVC". You need both server and client-side MVC stacks. Conceptual complexity is very high.
– DHH (@dhh)

MVC of the future

MVCMVC or MMVVCC aren’t just un-catchy, they’re a rubbish idea. If we’re going to make rich apps that rely on JavaScript (and as I said previously, I think we should), we need to lose some redundancy.

The model will usually be accessing some shared database, so in most cases it makes sense for this component to be on the server (authentication usually lives here as well). The view is the DOM, and the controller is the JavaScript. There’ll also need to be model code in the JavaScript – the extent to which the model logic is split between client and server depends on the application.

All the server needs to do is provide JSON responses when the client asks for more data, ideally using simple RESTful resources.
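In other words, the server shrinks down to something like this sketch – illustrative only, with an Express-style route, a made-up Message model on the server, and the client pulling JSON with jQuery:

```javascript
// Server: expose a plain RESTful JSON resource (Message.all is hypothetical).
app.get('/api/messages', function (req, res) {
  Message.all(function (err, messages) {
    res.json(messages);                 // no HTML, just data
  });
});

// Client: ask for more data whenever it's needed.
$.getJSON('/api/messages', function (messages) {
  renderMessages(messages);             // client-side rendering (hypothetical helper)
});
```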

One benefit of the server being a simple JSON store is that it enables us to easily add redundancy to our apps – if server1 fails to deliver a response, fall back to server2.
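Here’s a rough illustration of that kind of fallback – the host names are made up, and a real app would want timeouts and retries, but the shape is simple:

```javascript
// Try the primary API host; if it fails, retry against a mirror.
function fetchJSON(path, success) {
  $.ajax({
    url: 'https://api1.example.com' + path,
    dataType: 'json',
    success: success,
    error: function () {
      // server1 failed to deliver a response – fall back to server2
      $.ajax({
        url: 'https://api2.example.com' + path,
        dataType: 'json',
        success: success
      });
    }
  });
}

// usage: fetchJSON('/messages', renderMessages);
```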

No more HTML

The DOM is the means by which we display our app to the user. The browser thinks in terms of the DOM. HTML is just a serialisation of the DOM. Modern web app developers shouldn’t be thinking in terms of HTML. HTML is the assembly language of the web, and we should be working at a higher level of abstraction if we want to get stuff done. Most of the time we should be thinking in terms of layouts, widgets and dialogs, like grown-up UI designers.

The problem at the moment is that HTML is being used as a combined data and presentation format, and it should be obvious that this is a Bad Thing™.

Of course at some point we’ll need to go down to the level of the HTML, and at that point we should be using templates. jQuery.tmpl is one (see my slides and presentation on the subject), but there are many others too. One interesting development (thanks to James Pearce for putting me onto this) is Model-driven Views (MDV).
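To make the jQuery.tmpl approach concrete, here’s a tiny, purely illustrative example – a named template compiled once and then rendered against plain data objects:

```javascript
// Compile and cache a named template (the ${...} placeholders are jQuery.tmpl syntax).
$.template('userTmpl', '<li>${name} (${email})</li>');

// Render it against data and append the result to the page.
$.tmpl('userTmpl', [
  { name: 'Ada',  email: 'ada@example.com' },
  { name: 'Alan', email: 'alan@example.com' }
]).appendTo('#user-list');
```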

MDV links JavaScript objects to the DOM in such a way that any changes to the object are automatically reflected in the DOM, and likewise, any user changes to the DOM are automatically reflected in the model objects. Hopefully this will become standard in modern browsers, as it provides a clean and easy way to keep the DOM up-to-date with the model objects without endless wiring, which again helps to insulate us from having to deal with too much HTML.

I’m not suggesting that HTML will go away, or that developers won’t need to learn it – just that most of the time, most developers won’t need to be worrying about it, just like most of the time, most C developers don’t need to worry about assembly code.

Conclusion

Most web-apps are currently halfway there in terms of moving functionality to the client. The problem is that we have ended up in the worst of both worlds as app developers. Our server-side MVC framework provides a nice template-based abstraction over the HTML, which we then break apart on the client. We’re forced to work at too many different levels of abstraction simultaneously, which is never a good thing.

Things were simple when everything happened on the server. Things will be simple again when everything happens on the client. Over time new tools and best practices will emerge to encourage this new way of working. In the meantime, it’s going to take a bit of extra effort, but it’ll be worth it.

The end of progressive enhancement revisited

May 29th, 2011 23 comments

This is a follow-up post to JavaScript and the end of progressive enhancement - you may want to read that first if you haven’t already. The comments are worth a read, as well as the Reddit thread.

The app/doc divide

What I didn’t make clear in my original post (because I wasn’t fully clear about it at the time) is the distinction between document-based web sites, and web applications. Documents are made to be read, whereas apps are made to be interacted with. James Pearce wrote a great description in the comments:

Actually what we’re maybe observing here is a clash of civilisations: the documentistas vs the applicationistas; web-as-a-medium vs web-as-a-technology-stack; designers vs programmers. It’s no wonder that progressive enhancement is a powerful tenet for the former of each pair to rally around… but not necessarily the most architecturally important consideration of the latter.

[James has since written a great, and highly relevant, post on the subject of URLs in JS apps.]

It goes without saying that apps and documents are two completely different things (even though there is a huge amount of overlap between the two). Trying to treat web design/development as a single discipline is a mistake. Anything that is primarily meant for reading, such as a newspaper website or a blog, exists as a collection of documents. I should be able to link to another blogger’s blog post and expect that in a week or a year that content will still be available at that address. I shouldn’t need to have JavaScript enabled in order to visit that address.

Web apps aren’t the same thing. A web app is an application that happens to use web technologies. Most web apps could be desktop apps – what makes a web app a web app is that it is accessed using a browser. No-one expects to be able to hyperlink to a particular view in Microsoft Excel – what would that even mean? In the same way, I can’t share a link to a particular email in Gmail. Web apps don’t always exist in the same web as everything else – many apps are islands.

Historically, web apps have been built in the document-based paradigm, using links to move to different parts of the app and forms to submit data for processing. The only reason they were made like this was because this was all the technology allowed. Now that technology has advanced, first with Ajax and now with HTML5, we aren’t tied to this old practice.

Amy Hoy summed it up nicely:

"graceful degradation" (and likewise, p.e.) died the moment we moved away from document design and into app design.
@amyhoy
Amy Hoy

The problem is that because everyone building stuff online is using the same technologies, there can be a tendency to think that there is one true way of using those technologies, independent of what they’re being used for. This is clearly false.

The end of progressive enhancement?

Progressive enhancement is still the way to make document-based sites on the web – there’s no denying that. But with applications the advantages are less clear. If we can accept the fact that the document-based paradigm isn’t the best way to make web apps, then we should also accept the fact that progressive enhancement doesn’t apply to apps.

JavaScript and the end of progressive enhancement

May 4th, 2011 53 comments

Progressive enhancement is the Right Way™ to do things in web development. It works like this:

  1. Write the HTML for your content, 100% semantically (i.e. only the necessary tags to explain the meaning of the content).

  2. Style the HTML using CSS. You may need to add hooks to your HTML in the form of classes and ids for the CSS to target.

  3. Add JavaScript enhancements to the interface, but only enhancements.

Parallel to this, the functionality of the site/app is progressively enhanced:

  1. All navigation must happen via links and form submissions – don’t use JavaScript for navigation.

  2. Add JavaScript enhancements to navigation, overriding the links and form submissions, for example to avoid page reloads (see the sketch after this list). Every single link and form element should still work when JavaScript is disabled.
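As a sketch of step 2, a search form might be enhanced like this with jQuery – without JavaScript it still works as an ordinary submission (selectors and URLs are invented for illustration):

```javascript
// Hijack the form submission and update the page in place.
$('#search-form').submit(function (e) {
  e.preventDefault();                     // enhancement only: skip the full reload
  $.get($(this).attr('action'), $(this).serialize(), function (html) {
    $('#results').html(html);             // server still renders the HTML fragment
  });
});
```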

But of course you knew that, because that’s how you roll. I’m not sure this is the way forward though. To explain why I’ll need to start with a bit of (over-simplified) history.

A history lesson

Static pages

Originally, most sites on the web were a collection of static HTML pages saved on a server. The bit of the URL after the domain gave the location of the file on the server. This works fine as long as you’re working with static information, and don’t mind each file having to repeat the header, footer, and any other shared HTML.

Server-side dynamism

In order to make the web more useful, techniques were developed to enable the HTML to be generated on the fly, when a request was received. For a long time, this was how the web worked, and a lot of it still does.

Client-side dynamism

As JavaScript has become more prevalent on the client, developers have started using Ajax to streamline interfaces, replacing page loads with partial reloads. This either works by sending HTML snippets to replace a section of page (thus putting the burden of HTML generation on the server) or by sending raw data in XML or JSON and letting the browser construct the HTML or DOM structure.
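Side by side, the two approaches look something like this (URLs and selectors are invented for illustration):

```javascript
// 1. The server renders an HTML snippet; the client just swaps it in.
$('#sidebar').load('/sidebar');

// 2. The server sends raw JSON; the client builds the DOM itself.
$.getJSON('/api/activity', function (items) {
  var list = $('<ul>');
  $.each(items, function (i, item) {
    list.append($('<li>').text(item.title));
  });
  $('#activity').empty().append(list);
});
```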

Shifting to the client

As JavaScript engines increase in speed, the bottleneck in Ajax becomes the transfer time. With a fast JavaScript engine it’s quicker to send the data to the client in the lightest form possible and let the client construct the HTML, rather than constructing the HTML on the server to save the browser some work.

This raises an issue – we now need rendering code on the client and the server, doing the same thing. This breaks the DRY principle and leads to a maintenance nightmare. Remember, all of the functionality needs to work without JavaScript first, and only be enhanced by JavaScript.

No page reloads

If we’re trying to avoid page reloads, why not take that to its logical conclusion? All the server needs to do is spit out JSON – the client handles all of the rendering. Even if the entire page needs to change, it might be faster to load the new data asynchronously, then render a new template using this new data.

Working this way, a huge amount of functionality would need to be written twice: once on the server and once on the client. This isn’t going to work for most people, so we’re left with two options – either abandon the “no reload” approach or abandon progressive enhancement.

The end of progressive enhancement

So, where does this leave things? It depends on how strongly you’re tied to progressive enhancement. Up until now, progressive enhancement has been a good way to build websites and web apps – it enforces a clean separation between content, layout, behaviour and navigation. But it could be holding us back now, stopping the web from moving towards more natural interfaces that aren’t over-burdened by its history as something very different from what it is today.

There are still good reasons to keep using progressive enhancement, but it may be time to accept that JavaScript is an essential technology on today’s web, and stop trying to make everything work in its absence.

Or maybe not

I’m completely torn on this issue. I wrote this post as a way of putting one side of the argument across in the hope of generating some kind of discussion. I don’t know what the solution is right now, but I know it’s worth talking about. So let me know what you think!

Appendix: Tools for this brave new world

Backbone.js is a JavaScript MVC framework, perfectly suited for developing applications that live on the client. You’ll want to use some kind of templating solution as well – I use jQuery.tmpl (I’ve written a presentation on the topic if you’re interested in learning more) but there are lots of others as well.
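For a flavour of what that looks like, here’s a minimal, purely illustrative Backbone sketch – a model synced with a RESTful JSON resource, and a view that re-renders from a jQuery.tmpl template (assumed to be registered elsewhere as 'messageTmpl') whenever the model changes:

```javascript
// Model backed by a RESTful JSON resource (the URL is made up).
var Message = Backbone.Model.extend({
  urlRoot: '/api/messages'                  // fetch()/save() talk JSON to this resource
});

// View that re-renders whenever its model changes.
var MessageView = Backbone.View.extend({
  initialize: function () {
    _.bindAll(this, 'render');              // underscore helper: keep `this` bound
    this.model.bind('change', this.render);
  },
  render: function () {
    $(this.el).html($.tmpl('messageTmpl', this.model.toJSON()));
    return this;
  }
});

var message = new Message({ id: 1 });
new MessageView({ model: message });
message.fetch();                            // GET /api/messages/1, then the view re-renders
```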

Sammy.js (suggested by justin TNT) looks like another good client-side framework, definitely influenced by Sinatra.

If anybody has any suggestions for other suitable libraries/frameworks I’ll gladly add them in here.


Edit: I’ve now written a follow-up to this post: The end of progressive enhancement revisited.