Lion Taming For Beginners

1 Sep

What results in a successful piece of software? Is it the power of the software itself? Is it the range of features it has? Or is it the interface design that allows a user to access those powerful features?

It’s a bit of everything really, but that would make for a very short and dull post, and you’d feel like you’d wasted your time if I finished there, so let me explain.

I’ve just started a new role working for a software development company. Their flagship product is an immensely powerful data management tool – and ‘tool’ is an understatement; it doesn’t _begin_ to do justice to the level of complexity this bad boy has. If you’re an ordinary user you can view and generate reports and charts based on data from either an OLAP or relational (SQL Server in this case) database. If you’re a developer you can design custom forms and reports, get down and dirty with your own SQL, and do a wide variety of other frighteningly techy things I’m too right-brained to get into right now. Take it from me, this is one powerful piece of kit.

And it’s driven through the thinnest of clients – a web browser. When I first saw it working, it blew my ‘cool’ rating up to 11. It’s the first time I’ve ever seen anything this powerful working in a standard-install web browser.

But as Spidey’s dead Uncle once said, “with great power comes great responsibility”, and that’s where this colossus falters just a _little_ bit. It’s too easy to get lost in, and the learning curve is very steep. We know that, and it’s one of the reasons they took me on – to put an interface on it that is easy to navigate and make it work like the very best web-based applications, such as Rojo.com (http://www.rojo.com) – a big powerful beast with an interface that tames it wonderfully.

I suspect I may have a bit of understandable resistance to overcome. There are a lot of people who have invested a lot of time in this product, and it’ll take some time to convince them that I also want what’s best for it. I’m hoping I can find a way to let them see its potential without treading on anyone’s toes.

Semantics Is The New Black

30 Jul

Every year around January, the design/development community makes a few predictions as to what will be the big thing for the upcoming year. Predictions range from popular colours, site types and font choices to more esoteric things such as concepts (AJAX was touted as the coming thing this year, with some apparent justification). A few things become popular due to events or industry leaders making them news – for example, Andy Clarke’s recent post about accessibility and societal control, and SiteMorse’s recent foot-shooting debacle, have placed accessibility back at the forefront of the community’s collective mind.

And then some things quietly and unobtrusively instill themselves into our design/development lives with scarcely a ripple.

The ongoing movement towards semantics on the web is something that seems to pass by even those of us in the community responsible for its promotion. I want to take a look at a few things that we might not even have thought of as examples of web-based semantics, and how they are affecting us on a daily basis.

What we mean by semantics, as applied to the web, is the principle of the ‘thing’ itself having meaning as well as the message that the ‘thing’ is overtly conveying. A prime example of this:

<p>This is a paragraph.</p>

You can’t get much more semantic than that! We use a ‘paragraph’ element to convey the covert meaning of the section in question, as well as to display the text in that element overtly.
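To sketch the difference (the class name below is purely illustrative), compare a non-semantic and a semantic way of marking up the same content:

```html
<!-- Non-semantic: the markup tells a machine nothing about what this is -->
<div class="text">This is a paragraph.</div>

<!-- Semantic: the element itself carries the meaning 'this is a paragraph' -->
<p>This is a paragraph.</p>

<!-- The same principle scales up: anything reading this markup knows it is
     a heading followed by a list, without ever seeing the page rendered -->
<h1>Semantics Is The New Black</h1>
<ul>
  <li>Search</li>
  <li>Blogging</li>
</ul>
```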

But these days, semantics cover a much wider range of possibilities and meanings than a simple markup element. Let’s take a look at search.

Search engines such as Google, Yahoo and MSN will place an increasing amount of importance on semantics. This process is already underway – I’ve discussed before how Google are implementing a process called Latent Semantic Indexing – and will only increase pace. But what does semantics mean for search engines? It can mean lots of things. Firstly there is the semantic relationship between the search word/phrase you use to generate results and the actual results themselves. Obviously, the better that match is the more accurate your SERPs (Search Engine Results Pages) will be.

From a web developer’s point of view, semantics affect our sites’ relationship with search engines in two major ways. Firstly, if you want to promote the phrase ‘bad credit loans’ on your site, then creating phrases that share a semantic meaning with that phrase or the words in it is a good idea: ‘bad credit loans’ could be semantically matched with ‘debt consolidation’, ‘secured loans’ or ‘credit worries’. The second way semantics are important to us comes in terms of the sites that link to us. If I’m in charge of a travel insurance website, my automatic assumption might be to get lots of backlinks from finance-related sites. However, the semantic way of looking at the relationship would be to get links from sites that share a common or similar theme – holiday sites, airline sites and so on.

A more intriguing and tantalizing possibility regarding semantics and search engines is the possibility that search engines are capable of determining the _type_ of site. By this I mean is the site an e-commerce site? Is it a forum? Is it a basic brochure site? Is it a blog? This semantic relationship between the underlying code of a site, its structure and its overall purpose does seem detectable by engines albeit in a fairly basic ‘brute force’ way – so far.
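Nobody outside the engines knows how that detection actually works, but a minimal, purely illustrative sketch of a ‘brute force’ approach might be pattern matching against the page source. The signal phrases below are my own assumptions for the sake of the example, not anything a real engine is known to use:

```javascript
// Purely illustrative 'brute force' site-type guesser. The signal phrases
// are assumptions for the sketch, not real search-engine heuristics.
function guessSiteType(html) {
  const h = html.toLowerCase();
  if (/add to (cart|basket)|checkout|in stock/.test(h)) return 'e-commerce';
  if (/\b(thread|replies|forum|last post)\b/.test(h)) return 'forum';
  if (/\b(permalink|trackback|archives|blogroll)\b/.test(h)) return 'blog';
  return 'brochure'; // nothing matched: assume a basic brochure site
}
```

A real engine would presumably weigh structural signals too – RSS feeds, date-based URLs, recurring markup patterns – rather than bare strings, which may be why the detection still feels fairly crude.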

Moving away from search a little, we should take a look at how blogging has powered a massive increase in constructing a semantic structure for its particular environment. Sites like Technorati, which are essentially search engines for blogs, have as core functionality a list of all the other sites a particular site is receiving links from. In the blogosphere, links are awarded by bloggers who feel the linkee shares a common goal/spirit/language/understanding with them, and hence Technorati’s Cosmos feature is a foundation of semantics – communication going beyond just the overt. With blogs becoming increasingly popular, it’s no wonder the big search engines are interested in matching the semantic influence of sites like Technorati.

Then of course there are the blogs themselves – categorisable and taggable as sites never have been before, and capable of creating a vast community based not just on what each blogger finds interesting but on the way that blogs store, produce and display information. Again, the way it’s said is as important as what’s actually _being_ said. And as new formats and new offshoots appear (del.icio.us and Flickr, for example), that semantic relationship between blogs that share no visual similarities and _who might not even be aware of each other_ builds and builds. Flickr and del.icio.us can be fed into a lot of blogs, and blogs can export their content in meaningfully rich ways via RSS.

So, semantics – it’s the new black. As our understanding grows of what can be achieved by writing to a common format, and of how relationships between codable structures fire relationships between people, so will our ability to have a web that can finally begin to bring things to us with increasing accuracy. The future isn’t Search; the future is Delivery.

The Design Of Amazon: Happy 10th Anniversary!

3 Jul

On the tenth anniversary of Amazon being an online retailer (well, the actual date is July 16th but what the hell) I thought it might be fun to take a look at why they’re so successful in design terms.

Let’s be honest. For most designers, Amazon ain’t too pretty. In fact it looks grim in places. The code is also an unsemantic mess of nested tables and inappropriate code choices. These two things alone are enough to make any standards-based designer grit his or her teeth. It also won’t validate, makes no effort to be accessible and makes lots of schoolboy coding errors.

But it works brilliantly. It’s the embodiment of usability and a role model for how effective Information Architecture can be when researched and used well. The design elements it does use are used well, and it’s fast-loading and a joy to navigate.

So let’s have a look at a typical Amazon product page (warning: 162KB image) and some of the things Amazon does right.

Firstly, there’s the overall layout of the page. Everything is structured in order of importance _to the user_ from top to bottom (most important at the top, least at the bottom).

Most importantly of all we have the main navigation area. The (clickable, of course) logo is left-aligned in the traditional manner – the area studies indicate users look at first – thus ensuring the user knows exactly where they are and who they’re dealing with. These things add to the user’s comfort level. Next to the logo are the main site overview and personal options – your account, your wish list, your basket, how you get help – which again reassure the user that _they_ are at the centre of the process.

Next is the site navigation. The cream bar holds top-level section areas and the blue bar holds secondary-level drill-down options. Note the second cream bar under the blue bar, which carries search options but carries on the ‘navigation’ colours of cream and blue (blue of course being a good choice, as it mirrors the colour a user traditionally associates with progress and action – the blue of an unstyled hyperlink).

Underneath that we have one of the things that Amazon does so well – internal advertising. Bright red to reflect the traditional colours of a UK sale, and note that the word ‘sale’ is in the biggest typeface on the whole page. There’s no way you can miss it, especially as it’s incorporated well with the rest of the page rather than floating about as an ineffectual banner.

Next is the hub of this page template – the product detail area. Everything about this area reeks of highly effective information architecture and copywriting. Look how _sparse_ the details are. Not a thing is wasted, and yet not a significant detail is overlooked. In that relatively small area you get the price, terms and conditions, availability, the option to purchase a second-hand copy, release date, the number of items associated with the product, the label, ASIN and catalogue numbers, the option to add the product to your shopping basket and/or wish list, any associated special offers, how Amazon customers rate the product, the opportunity to rate the product in order to hone your personal recommendations (Amazon appreciates latent semantics as much as Google!) and, oh yeah, a picture of the product itself.

Immediately below this area, Amazon offers a textbook example of user goals supporting business goals. Of course Amazon wants you to spend more money, so they offer a great way to get you to part with more cash – but in a way that’s so damn useful you don’t really mind: the famous ‘customers who like this may also like’ areas. The first, interestingly, contains links to offsite selling areas (Ticketmaster and Yahoo on this example page). Underneath the track listing (note that these two areas buttress the all-important track listing, which is something most people want to see, so both land in the viewing area too) is the ‘customers who bought this also bought’ area, which is a genius piece of viral marketing – allowing your customers to dictate the fashion will always bring more sales than dictating fashion _to_ the customer.

Next up we have customer reviews. Something of a double-edged sword, I’d guess, but Amazon keeps these because they enhance its reputation as transparent and actively _useful_ to customers. That’s worth more than any one negative review could potentially lose them in revenue. A brand associated with implicit trust _and_ usefulness is worth its weight in gold.

After this is the sectional bottom navigation for long pages like these, sparing users the need to scroll back up and maybe get bored in the process.

Last on the page is the stuff that Amazon correctly judges its users will need least often. It’s not trying to hide the process of returning things if you need to, but Amazon realises that such an eventuality will be a relative rarity for any one customer, and so they place that information where it’s fairly easy to find but doesn’t impinge on the essential page activities of reading about and buying product.

All that said, I do find it disappointing that Amazon’s site isn’t particularly accessible and that they use non-validating, non-semantic code. These are the things that elevate great sites into superlative sites. Amazon would save money on bandwidth, have even better download times and have a perfect base for any future rebranding – why stop with the job only 50% done, Amazon? Why not start the next ten years with a substantial under-the-bonnet change and reap the benefits enjoyed by your equally large contemporaries like Multimap, Yahoo and MSN?

Open Letter To The Accessibility Task Force

29 Jun

Colleagues,

Your joint appointment to the ATF is a visible, positive indicator that the concept of web accessibility is maturing. I think all the choices for this task force are inspired, and that between you, you have an excellent pool of academic and practical experience.

That said, I think you have a tricky task ahead of you. I note the positive steps you’ve taken in asking on your personal sites what we as designers and developers think are important steps, and I’ve spoken my piece as part of that process. I also note with some concern that a basic concept is in danger of being ignored in some of the replies I’ve read (including my own).

Whilst it’s true that it’s important that CMSs handle content better and that screen readers work with browsers as opposed to against them (to take two highlighted examples), I think we first need a task force that can put the house of accessibility in order.

I’m not talking about anything adversarial with the WAI, but it seems to me that the most common issues to do with the concept of accessibility revolve around what it actually *is*. This is an issue that both the WAI and GAWDS have totally failed to address, and yet without this basic, fundamental understanding our comprehension in this respect is being steered with a warped rudder.

Even our so-called ‘gurus’ occasionally have odd ideas about accessibility and what/who it encompasses. I recently read a comment from one of the biggest names in the field chastising someone who suggested content should be accessible as well as the code and interface design. Obviously this ‘guru’ is unaware of the issues affecting those with a cognitive disability.

Another big name claims that accessibility should only be about removing barriers, and that pages scripted to take account of users’ real-life needs fail to grasp the ethos of accessibility.

Obviously there is substantial confusion not just about tools and technique but at a much more fundamental level. To that end, I think item 1 on your agenda should be defining accessibility for web developers, and all sub-tasks of this item should be clarifications of who the main user base is, the software tools they may use, how we can currently level the playing field for some of these users, and the steps we need to take to provide enhanced interfaces for others.

We also need a redefinition of the main user base. Currently and historically, the perception has been of users with a visual impairment. The majority of debate revolves around these users and, to a lesser extent, users with mobility issues. This situation ignores a third of those covered by the UK DDA. I’d like to see the task force question this emphasis – what’s the point of a concept of accessibility that only caters to two-thirds of its customers?

I’d also like to see a full and frank discussion of WCAG 1.0 and 2.0, and an extensive debate on their shortcomings. It’s obvious that the WAI aren’t going to do this, and I think you guys are ideally placed to highlight these issues.

Accessibility is a noble goal that deserves better treatment than it’s so far received.

The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.

Tim Berners-Lee, W3C Director and inventor of the World Wide Web

That’s the quote on the WAI home page. So far, due to uncertainty, poor definition and poor propaganda, the WAI have utterly failed to convince either developers or business. One could argue that’s beyond their remit, just as one could argue that what I’m propounding here is beyond *your* remit, but in the absence of any constructive leadership from the WAI it’s possibly up to WaSP and this task force to define accessibility in ways that are universally comprehensible by developers *and* business, that don’t exclude large sectors of the client base the overall concept is supposed to empower, and at the same time to put pressure on the WAI to do the same. I think these things are vital before we can even think of more detailed agendas touching on implementations such as CSS, software (screen readers _or_ CMSs) or the necessity or otherwise of validation.

Accessibility, The Law and Social Responsibility

17 Jun

In Andy Clarke’s recent post on why he believes accessibility shouldn’t be enforceable by law, he made reference to Foucault’s theories of social control and how such controls relieve individuals of the responsibility to protest against what they feel to be injustices.

On the surface, Andy’s argument is spot on. Foucault says (in essence) that once a law governing an aspect of human behaviour is codified, it’s akin to society saying ‘we don’t need to care about that anymore’, and that the resultant law can never be as well intentioned as the masses refusing to accept a certain behaviour. Again, it’s all true.

Unfortunately, it’s a bad argument to use to support the position that no laws are better than laws when it comes to accessibility. Why? Firstly, because of the people this law seeks to make responsible. Unlike individuals, businesses are primarily concerned with profit. And that’s as it should be: if we accept that we live in a capitalist society, then the aim of generating profit is a good thing. However, as we all know, businesses take this aspect too far on occasion. For us to take Foucault seriously in the context that Andy has used him, we have to believe that business is responsible and concerned a large percentage of the time. Ask yourself if you believe that to be the case. We also have to believe that big business is capable of large-scale acceptance of the needs of minority groups. Ask yourself if you believe that to be the case.

The truth is that big business will by and large voluntarily do nothing that might impinge on their profit margin (except in instances where a ‘loss leader’ is seen as a viable option). Do we believe that if enough people protested outside the London office of Nike about their appalling practice of exploitation that they would stop? Or do we believe that they would simply ignore it and/or relocate their offices or possibly make cursory gestures and trumpet them loudly in glossy ad campaigns?

The second reason it’s a bad argument is to do with the people this affects. And let’s make no mistake – this affects people. When we say we build accessible websites, I think we sometimes forget that the phrase hides a whole section of society. I’ve grown increasingly worried about the number of people who strive to get their site to validate with Bobby or Cynthia or even the WCAG but, in the urge to make it pass, totally forget that it’s supposed to benefit people. In this sense, accessibility is vastly more important than web standards. Whilst I totally accept that web standards are vital, they don’t have the same day-to-day impact on people that the concept of accessibility does.

And when we do actually get around to talking about the people behind the phrase, we talk mainly about people with a visual disability – a group that is numerically vastly less significant than all but two other groupings of disability in the UK. Why do we do this? Is it because it’s simply the easiest problem to address? Is it because that’s the emphasis the WAI put on WCAG 1.0? Please consider the numbers below:

According to statistics provided by the DDA, the breakdown of people with a disability who would fall under the act is as follows (broken down by type of disability):

Type of Disability Number of People (millions)
Lifting and carrying 7m
Mobility 6m
Physical co-ordination 5.6m
Learning and understanding 3.9m
Seeing and hearing 2.5m
Manual dexterity 2.3m
Continence 1.6m
Total 28.9m

From these statistics I feel we can safely remove ‘lifting and carrying’ and ‘continence’; neither of which would add to the difficulties of using a website. Doing this leaves us with:

Type of Disability Number of People (millions)
Mobility 6m
Physical co-ordination 5.6m
Learning and understanding 3.9m
Seeing and hearing 2.5m
Manual dexterity 2.3m
Total 20.3m

As we can then see, there are two main groupings: physical and learning-based disabilities. In our adjusted group, those with a learning disability equal just over 19% of the overall total of people in the UK who fall under the jurisdiction of the DDA and who we would also expect to be adversely affected by inaccessible websites. Even if a strong enough argument could be made for including the two sub-groups I removed from the equation, this percentage would still be a little over 13%. Whichever way one looks at it, it’s a high percentage – much higher than, say, the percentage of people who use Mozilla or Opera, and yet strenuous efforts are made by standards-aware designers not to exclude those users. At the recent @media presentation, why did attendees witness someone using a screen reader? Why didn’t they witness a user with Down’s Syndrome trying to use a website?
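As a quick sanity check on those percentages, with the totals computed from the listed category figures (in millions):

```javascript
// Sanity-checking the shares: learning disabilities as a proportion of the
// adjusted group and of all seven DDA categories (figures in millions,
// taken from the tables above).
const categories = {
  liftingCarrying: 7, mobility: 6, coordination: 5.6,
  learning: 3.9, seeingHearing: 2.5, dexterity: 2.3, continence: 1.6,
};
const fullTotal = Object.values(categories).reduce((a, b) => a + b, 0);
const adjustedTotal = fullTotal - categories.liftingCarrying - categories.continence;

const adjustedShare = categories.learning / adjustedTotal * 100;
const fullShare = categories.learning / fullTotal * 100;
console.log(adjustedShare.toFixed(1)); // ≈ 19.2
console.log(fullShare.toFixed(1));     // ≈ 13.5
```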

One of the features of some learning disabilities is that the behaviour governed by the condition leaves the person in no position to advocate. I am not saying they are incapable due to a lack of intelligence; rather, some people are not able to advocate in as public a way as might be required for businesses to take note – or even, if we are honest, for society at large to take note. An example: for some autistic people, their condition makes them very uncomfortable around others, and they find social interaction distressing and actively painful. It is simply unreasonable to expect someone in this situation to commit to a prolonged media campaign of advocacy and awareness-raising. In which case, we should do it for them, right? These are people with whom we share a common belief in accessibility for all – but aside from putting compliance badges on our websites and saying how jolly nice it would be if everyone cared as much as we do, what have we actually done?

It is these people, this 19 or so percent, for whom legislation is vital. I am not claiming that the law in the UK is any good – it’s not; it’s far too ambiguous. Neither am I saying that we should abandon our social responsibility to others and assume the law will take care of everything for us. What I am saying is that whilst the law is not very well implemented and needs to be much tighter and more specific, the underlying aim of this particular law is a good one, and in the absence of businesses growing a social conscience or web designers joining disability rights marches it serves a purpose: to protect the rights and freedoms of a section of people who are most vulnerable.

On Having An Entry Rejected By A CSS Gallery

2 Jun

OK. I’m going to have to word this post very carefully. I want to make it clear upfront that this entry is in no way just a great big sulk about not getting listed on StyleGala or CSS Beauty/Vault or Unmatched Style. Seriously, it’s not. Stop sniggering.

There has to be a line of quality that differentiates what gets listed from what doesn’t. That’s as it should be – not every design can make it in. But let’s get the awkward question out of the way: do I think this design should be listed? The honest answer is that I’m ambivalent. It would be nice, but I’m not going to lose any sleep over it if it isn’t. So what’s the point of this post?

I know that at least one of these sites has come calling, just by checking my referrer stats. I can further see that I haven’t been listed (that’s not to say I won’t be, of course – I really don’t know if anyone’s even submitted the design, let alone whether the other gallery sites have come calling to have a look). So, OK – as I said above, it’s nice to be listed, but at the end of the day if I’m not then I’m not. I’m not gagging for a listing – my self-confidence has grown enough since I stopped doing just Flash work that I’m not too worried what others think. But what would be great would be some kind of indication of the factors that led to me not being listed.

Imagine how useful that would be. All designers are interested in pushing themselves and growing past their current boundaries, right? What better way than to get some kind of feedback from the people who made the call as to why it didn’t make the cut? It wouldn’t necessarily have to be massively in-depth – just a sort of tickbox affair stating what the main issue(s) were and an indication of what needed to be worked on. A paragraph of text at most. StyleGala does this for successful designs – wouldn’t that effort be better placed offering constructive critiques to those that didn’t make it in?

Now I know the guys running these galleries are busy people but wouldn’t that be a great thing for them to do? I know I would really appreciate constructive criticism on a design from some of the best in the business.

Whose Site Is It Anyway?

31 May

There’s been an upturn this year in third-party apps interacting with websites in ways previously unthought of. Two of the best known are the Google Content Rewriter and now Greasemonkey.

What both of these things do is alter aspects of a website. The emphasis in each case is slightly different: Google does it mostly, it seems, for Google, whilst Greasemonkey does it for users. The end result is the same, however – sites are altered.
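For anyone who hasn’t seen one, a Greasemonkey user script is just JavaScript run against pages you visit. This hypothetical sketch (the script name and behaviour are my own invention, not a real published script) rewrites one aspect of a page by marking external links:

```javascript
// ==UserScript==
// @name        Mark external links (hypothetical example)
// @include     *
// ==/UserScript==

// True if an href points off the given host; relative URLs resolve on-site.
function isExternal(href, host) {
  try {
    return new URL(href, 'https://' + host).host !== host;
  } catch (e) {
    return false; // unparseable href: leave it alone
  }
}

// In the browser, Greasemonkey runs this against every matching page.
if (typeof document !== 'undefined') {
  for (const a of document.querySelectorAll('a[href]')) {
    if (isExternal(a.getAttribute('href'), location.host)) {
      a.style.borderBottom = '1px dashed currentColor';
    }
  }
}
```

The point isn’t this particular script; it’s that any script a user installs has the same free rein over markup, design and content.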

This for me throws up lots of interesting questions about sites and the various aspects that can be ‘owned’ and who the various people are that ‘own’ these aspects.

At its most basic level a web page is comprised of three things: markup, design elements and content. Greasemonkey and Google can alter all three of these things. The question then is: should they be allowed to do so? Who ‘owns’ the site?

I take the view that all aspects of my site are owned by me, but that by explicitly allowing others to access the content and the design, they have at least part ownership of that aspect of the page(s). Once it’s downloaded to their browser, they should be able to alter it as they see fit to suit their needs. In this respect I have less of an issue with Greasemonkey than I do with Google. Greasemonkey is there to help users; Google’s little tool is there to help Google – they can make money from my content.

You’ve probably noted that in the above paragraph I didn’t include markup in my list of what I personally find acceptable to alter. This is because I don’t find it acceptable for anyone other than me to alter the code used on my site. Changing my code can have implications for how the site does or doesn’t work at the most basic of levels. For example, I use semantic code because I think it’s important, and I use various bits of JavaScript to do various things (the comment form spell checker, for example). Whilst I can appreciate that users might want more control, I don’t think that altering my code is acceptable.

At some point there has to be an element of trust. I don’t serve ads, I don’t track anyone with cookies or sessions (except in as much as I do for the comment form details). I don’t use hidden frames and I don’t think users have the right to alter the most basic building block of how I choose to serve content.

I serve content in two formats – markup and RSS. Those, to me, are your options as a user. I think a mutually respectful relationship where I trust you not to make unwanted changes to the functionality of my site and you trust me not to flood you with spam or other ads is important.

Dean Edwards recently wrote a script to disable Greasemonkey.

…GreaseMonkey broke my site. I didn’t realise what the problem was at first. I use a JavaScript syntax highlighter to make code on this site look slightly less boring. It uses some regular expressions and a little bit of DOM scripting. After installing GreaseMonkey I noticed that some of my code samples were completely broken in Firefox.

I entirely agree with Dean here. OK, he’s talking about design elements, which I don’t have such an issue with, but the fact remains that this Greasemonkey script is interfering with coded elements of the page Dean published. That, to me, is not acceptable. I find it as annoying as Google’s attempt to rewrite my content – even if it is much more benign in intent.

There are also a few scripts here that apparently totally disable Greasemonkey. I don’t think anyone really wants that but I can sympathise with site owners wanting their sites to be presented as they intended them.

As a point of note, I installed the script to disable Google rewriting my content straight away. I’m still wavering as to whether or not to install a Greasemonkey killer. I’d be very interested in others opinions on this.

Spam: On The Wane?

3 May

Is it just me or has the unremitting flood of spam, um, remitted?

I’m aware that as the proud user of one of the world’s leading CMSs I have a plethora of excellent spam-fighting hacks, plug-ins and built-in tools at my disposal – from centrally managed blacklists through to advanced comment moderation and server configuration tools – but even so, it does seem slightly quiet on the western front.

I’m used to the odd one or two slipping through the net – where a particularly dedicated spammer has visited me in person to negotiate Gatekeeper, or they’ve added my Gatekeeper keys to some evil spam-database-cracking system – but it’s been zip, nada, zilch, zero, bupkis, fuck-all.

Anyone else getting this or am I extremely lucky and yet have obviously just jinxed myself?

Cutting Edge – Why?

25 Apr

Like most of us in this line of work/hobby, I stop by the four main CSS galleries every few days to see what’s new and make the odd comment if I particularly like a design.

I’ve noticed a trend over the last few months on all these sites: a few of the commenters remark that a showcased site isn’t using ‘cutting edge’ CSS techniques, or that there’s nothing new to see aesthetically. Without fail it puzzles me who is making these comments and why – some of these commenters even say a design isn’t ‘worthy’ of the gallery it’s been submitted to.

What is the big deal about using cutting-edge techniques? Let’s not forget that all the designs showcased on these sites are in live production, often serving a commercial purpose. It strikes me as incredibly dangerous to use cutting-edge techniques (CSS or otherwise) on clients’ live sites. The reason these techniques are cutting edge is that they are new, unproven and possibly unstable. The only place I personally would consider utilising them is in a designated experimentation area. I can’t imagine paying customers being overjoyed to find designs failing because unstable techniques came apart in unforeseen ways due to lack of testing. It’s really very unfair to judge a showcased design on its lack of cutting-edge technique – by not using it, the designer has proven themselves not only a good designer but also a responsible one.

And what about all those ‘seen it all before’ comments? What exactly are people unhappy with when they make these? If one site is a carbon copy of another then fair enough, but if a site merely happens to use a similar information design then I don’t see the issue. One example is the number of complaints that come in when a blog is posted: ‘it just looks like a blog’ is the recurring comment. Well, no shit, Sherlock – guess what? That’s because it is a blog! Blogs are structured the way they are because, over time, that’s how user goals have shaped the design. At bottom, all e-commerce sites look pretty much the same too – not aesthetically, but in terms of flow. Why? Because this is how the design of an e-commerce site has evolved, with the needs of the site user to the fore as opposed to the needs of the site designer to showcase their skillset.

So, does this mean I think all designs should be the same? No way. We need cutting-edge technique, we need innovation and we need people to push the boundaries, but we also need to realise that there is a time and a place for these things. We also need to realise that a good design is much more than a cutting-edge style sheet and lots of graphics.

What sort of site am I talking about? What sort of site is great looking but doesn’t use anything which might impact negatively on the user experience? Well, John’s recent redesign of Joshuaink is a perfect example. It looks fantastic but at heart it’s a very simple, solid blog design. Its beauty is not only skin deep; it goes beyond into the semantics, usability and flow of information. Another is Garrett Dimon’s recent design. In terms of aesthetics it’s nowhere near the same as Joshuaink, but look at the semantics and the information flow and it’s plain to see that what we have here is ‘just another blog’ – but, just like John’s redesign, it has a beauty and style that can be appreciated for what it is: a great design, executed perfectly.

Let’s not get caught up in a need to be cutting edge merely for the sake of being cutting edge. Instead, let’s appreciate good design for what it is. If it doesn’t float your boat then fine, but don’t fault a good design merely because it’s not using cutting-edge technique.