Audiences, Outcomes, and Determining User Needs

by Corey Vilhauer

Every website needs an audience. And every audience needs a goal. Advocating for end-user needs is the very foundation of the user experience disciplines. We make websites for real people. Those real people are able to do real things. Everyone is happy.

But, it’s not really that easy, is it?

The issue, of course, is that we cannot advocate for those whom we do not know—or, even worse, those whom we assume we know. So we go to the source: we interview, we learn, and we determine who, exactly, these mystery users are. In doing so, we answer the two most important questions of the discovery stage: who are our audiences, and what do they want from our website?

Then—and only then—can we begin the process toward better content.

Defining the process

End users are a funny thing. They begin as amorphous blobs of assumed stereotypes. As we learn more about them, they become more refined. They develop characteristics and quirks. The more we learn about the end user, the closer we get to a sort of Pinocchio scenario: they become Real Users!

The process of creating Real Users is what we at Blend call our Audiences and Outcomes process. It happens before any other part of the project, and is based on C. David Gammel’s book Online and On Mission, in which Gammel pushes the need to identify and prioritize audiences before you develop any strategy.

In this case, Gammel defines an audience as:

Any group of people with some measurable characteristic in common which influences how relevant and significant they are to your specific outcomes.

Likewise, Gammel defines an outcome as:

A measurable change, action or behavior that you wish a visitor to take or experience.

In other words, just stating a goal is not enough. Outcomes must be measurable; otherwise they’re not goals—they’re aspirations. Without considering how an outcome will be measured, we cannot accurately represent the benefits—or the viability—of a user outcome.

Finally, we use audiences and outcomes to create user personas. We use these personas until the project is complete. Because we can’t run every decision past a field of actual end users, we rely on personas to do the work for us. They become our friends. We refer to them by name in meetings. It would all be very weird, if it wasn’t so necessary.

If you’re confused about how this differs from your standard discovery meeting, with people meeting in a room and answering questions and all of that, the answer is: it doesn’t. Not really. You may already do something like this without being so deliberate, or you may define audiences and outcomes elsewhere in your process.

That’s cool. We’ve found that tackling audiences and outcomes at the very beginning makes our content inventory more relevant (by allowing us to pair pages with audiences) and saves a step in our qualitative audit (by giving us context for content needs).

What’s more, it clarifies our goals from day one. This clarification is important. For example, if we’re building a site to sell mail-order diapers, we can’t just say, “We’re building a diaper delivery site, and mothers will come to the site to buy diapers, so let’s start writing copy.” I’m not a mother. And if I were, I certainly wouldn’t be EVERY mother. I know damn well that fathers and other caregivers will come to the site, too. So if I move forward with the diaper-buying mother stereotype in mind, I’m doing a disservice to a giant percentage of the site’s users.

We are not the audience. We can only assume what our user thinks. Which is where the audiences and outcomes process comes into play, allowing us to narrow down who the user REALLY is through stakeholder input, persona development, and persona confirmation.

Buy-in: everyone wants to play

These proxies serve an added purpose: they help us get buy-in and help us back up assumptions. Erin Kissane sums it up perfectly in The Elements of Content Strategy, when she says:

The personas or other user proxies that you or your colleagues have created are the best backup you could hope for. Return to those tools when you need to validate opinions—yours or someone else’s.

By pulling stakeholders into a room and getting them to talk about their product, their audiences, their issues, etc., we’re giving them a way to buy in. No longer is this a consultant-driven process—it’s a company-driven process, where the consultant serves more as facilitator than dictator. As I’ve mentioned before, we don’t have all of the answers, and we do our clients a disservice by assuming we do.

When stakeholders get involved early, they are less likely to hold things in. They’re also less likely to object to changes in their content or process because, after all, we’ve all agreed on our audiences.

So, how do we do it?

The audiences and outcomes process

Let’s assume that, like us, you define audiences and outcomes at the very beginning of a project. It’s how we throw a project into context: it gives us a high-level view of who we’re dealing with and provides background for the qualitative content audit. It’s a “getting to know you” period of two or three weeks, broken into three steps:

  • Step 1: The discovery meeting
  • Step 2: User interviews
  • Step 3: Project deliverables: audiences, outcomes, and personas

The discovery meeting

There’s one goal: to get this group talking. Invite a small number of people—five to seven—and make sure someone from the front line is there. Ask them to let go of preconceptions. If an executive is present, make sure they don’t take over the conversation.

From your side, bring two people: one to lead the discussion, the other to document it and provide an extra brain for asking and answering questions.

The meeting should be an open forum for discussing the website’s needs and goals. Schedule no more than an hour, but don’t be surprised if it goes a little longer. Begin the meeting by introducing everyone, introducing the process, and explaining how it fits into the overall site plan.

Bring big markers, whiteboards, and anything else that gets ideas up in front of the group. Then mark out two huge spaces under two headers: Audiences and Outcomes.

Then it’s time to start asking questions.

THE QUESTIONS

These are the questions that we use at Blend to define audiences and outcomes. They aren’t law. In fact, we’ve never made it all the way through this list. (Note: you’ve probably seen some of these questions asked elsewhere. Of course you have—we created this list, like any good list, by stealing and adapting ideas for our own use.)

AUDIENCES

  • Who do you feel are your site’s audiences?
  • What are the demographics of these audiences?
  • How comfortable with technology is this audience?
  • Who is currently visiting the site? What makes their visit a success in their eyes? In yours?
  • Who else is competing for their attention?

OUTCOMES

  • What do you want to persuade your audience to do?
  • What assumptions do you make concerning your audiences? Example: do you assume your audience is of a certain socioeconomic group, or that they are familiar with certain aspects of your organization?
  • What drives your business, and how does your audience help achieve positive results?
  • What metrics do you want to keep track of?

COMPANY VOICE

  • What is your company’s ultimate mission? (Not a mission statement, but a more organic, real-world one-sentence answer to “Why do you do what you do?”)
  • What message do you need to get across?
  • What is the company’s voice and personality?
  • What has worked in the past? What hasn’t worked in the past? What were the stumbling blocks?
  • What attributes does your company have that help to gather attention? For example: “Our company is nationally known,” “Our company employs former movie stars,” “Our company is well respected in the field.”
  • What topics can we take advantage of? Example: if you are an automobile manufacturer, are there government rebates we can promote?
  • What topics are off limits?

METHODS

  • How do you currently communicate with your audiences? How often? (Related: can we have copies of your past materials?)
  • Who creates the content?
  • How does your audience prefer to communicate with you?
  • What other functionality will you need?

CONTENT MANAGEMENT

  • What is the current content workflow?
  • Who currently creates content?
  • Who will write it in the future?
  • Who approves content?
  • What stumbling blocks are in place that make it difficult for the content to get published?
  • Who in the company connects with customers most naturally?

As you push your way through a group’s initial fear of discussion, new questions will flow naturally. The specific questions aren’t what matters, as long as you remember to:

  1. Ask the client who their audiences are and what those audiences want.
  2. Listen for clues that expose secondary audiences.
  3. Dive into those clues. Make them work. Ask follow-up questions. Get people talking.

To create an extremely basic example, imagine we’re running a meeting with a fictional airline: On-Time Air. After a very general discussion, we’ve got this written on the whiteboard:

Audiences:

  • Passengers
  • Airports

Outcomes:

  • Find accurate flight information
  • Book a flight easily
  • Learn about baggage fees
  • Locate gates and flight times

Digging into the audiences, we ask what else passengers look for. One person mentions that, yesterday, someone asked for a chart depicting the airline’s on-time percentage. This may not be a high-level outcome, but it reminds us that when something newsworthy occurs, the press may look for information on the airline. When a potentially damaging article is about to come out, it’s the airline’s policy to let its employees know ahead of time. What about new employees? What about future employees? How do they apply?

As you can see, this line of thinking led us to three new audiences (the press, new employees, and future employees) and three new outcomes (finding company news and information, looking for explanations of the airline’s problems, and locating job applications).

What’s more, we can start to see sub-audiences: employees could be separated by type (pilots, front-line staff at airport desks, those who take phone reservations) or by status (new, potential, veteran). What about the separation between current passengers (those who have tickets) and potential passengers (those looking for tickets)? Some audiences will share outcomes. Both a pilot and a current passenger may be looking for related or identical information.

NOTE: You’ll notice that those last groups of questions aren’t really audience/outcome related. That’s okay. We’re opening up here, and your stakeholders will be in the mood to talk. This gives us a chance to grab a little extracurricular research. Taking Tiffani Jones Brown’s Making Things Hard post a step further, we’re making things easier by asking hard questions at a time when the client is more receptive to those questions.

User interviews

The initial strategy meeting is designed to help determine whom the site is for and the goals that need to be addressed; in other words, the meeting shapes the audiences for the site, as well as the desired outcomes related to each audience. The next step is to talk to actual site users to determine whether these audiences and outcomes are accurate.

This happens early in the process for a reason: we need to know who the user is, and we want to use their opinions throughout the project. So we ask our clients for a list of contacts, and we talk to past customers. Or, we solicit opinions from a related industry group. Regardless, we ask questions.

For example, if we’re talking to an audience of building managers, we could ask:

  • How do you secure funding for a project?
  • How many companies are you required to look at during a bid?
  • Who provides post-build service for a project?

But we could also ask questions that help gauge an audience’s personal and technological habits:

  • What kind of mobile phone do you use?
  • Where do you live and how large is your organization?
  • How often are you on the internet for non-work purposes?

Then, we compare their needs and perceived outcomes with the ones our client mentioned. If they match, then awesome. If not, even more awesome: time to bring it back to the client and say “This is what people want.” We’re already learning, people!

Project deliverables

Whether we like it or not, a huge part of content strategy is delivering documents, and the audiences and outcomes process is no different.

First, prioritize each audience and assign numbers to each user outcome. Because every audience could make a case for being the most important, it’s up to you and your client to determine which audiences really are. Not only does this provide a handy cheat sheet for solving design hierarchy problems (think: should our home page focus on existing members or new members?), it also helps you determine which personas will get more space at the theoretical persona table.

(Not to mention: these numbers will come in handy during content auditing, where you can match content with goals. Saying, “Outcome 3.2a” is a lot shorter and easier than typing out the entire outcome.)

You might need to split some audiences into more manageable categories. These “sub-audiences” share the same overall goals as the parent audience, but feature some additional needs and goals.

For example: A physician’s website may have three major audiences—referring physicians, patients, and staff. Patients could be further split into three categories: new patients, current patients, and family members of patients—all three will have the same overall goals as the patient audience, but with additional goals dependent upon further classification.

Giving the document structure

In the beginning, the structure of the audiences and outcomes document might look like this:

  1. Introduction
  2. Summary of findings, including user interview findings
  3. Audiences and outcomes (example below)

AUDIENCE 1: CUSTOMERS

Customers are those who are either thinking about purchasing an airline ticket or who have already purchased an airline ticket. They are the main source of income for the airline, and represent the airline’s most important audience.

We can split customers into two distinct sub-audiences: potential customers and current customers.

Sub-Audience 1.a: Potential Customers

Potential customers are those who have yet to purchase a ticket. They’re visiting the site because they’re interested in traveling, and may be researching current ticket prices, flight schedules, or both. Their mindset depends on context—they could be voluntarily researching a trip for leisure, or they could be locked into a trip and simply need the cheapest or best flight.

  • 1a.1 – Find and compare flights by price, date, and flight details.
  • 1a.2 – Purchase desired flights with little resistance.
  • 1a.3 – Locate and understand flight rules—check-in time, weight restrictions, etc.

Sub-Audience 1.b: Current Customers

Current customers are those who have already paid for their ticket. For the most part, they are no longer comparing prices—they’re on the site to confirm existing flight information or make changes. Research at this point has shifted from discovery to confirmation.

  • 1b.1 – Access flight and ticket information.
  • 1b.2 – Contact On-Time Air for flight changes or questions.
  • 1b.3 – Locate and understand flight rules—check-in time, weight restrictions, etc.

Notice that both potential and current customers share the outcome “Locate and understand flight rules.” This is common. Audience desires always overlap, though there can be differences in how we measure these outcomes.
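
Outcome references like these are easy to carry into a spreadsheet or script later on. As a minimal sketch (in Python, using the hypothetical On-Time Air audiences above; the structure and the outcome() helper are purely illustrative, not a prescribed format), the hierarchy might look like this:

    # A sketch of the audiences-and-outcomes hierarchy as plain data.
    # Audience names and outcome IDs mirror the hypothetical On-Time Air example.
    audiences = {
        "1": {
            "name": "Customers",
            "sub_audiences": {
                "1a": {
                    "name": "Potential Customers",
                    "outcomes": {
                        "1a.1": "Find and compare flights by price, date, and flight details.",
                        "1a.2": "Purchase desired flights with little resistance.",
                        "1a.3": "Locate and understand flight rules.",
                    },
                },
                "1b": {
                    "name": "Current Customers",
                    "outcomes": {
                        "1b.1": "Access flight and ticket information.",
                        "1b.2": "Contact On-Time Air for flight changes or questions.",
                        "1b.3": "Locate and understand flight rules.",
                    },
                },
            },
        },
    }

    def outcome(ref: str) -> str:
        """Look up an outcome by its short reference, e.g. outcome("1b.1")."""
        audience_id = ref[0]           # single-digit audience IDs assumed in this sketch
        sub_id = ref.split(".")[0]     # e.g. "1b"
        sub = audiences[audience_id]["sub_audiences"][sub_id]
        return sub["outcomes"][ref]

    print(outcome("1b.1"))  # Access flight and ticket information.

A lookup like this is all it takes for a content inventory to say “Outcome 1b.1” instead of repeating the full sentence, and the same IDs give metrics somewhere to hang.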

Speaking of measurement…

Making things measurable

Because outcomes should be measurable, we need to bring analytics into the mix. Determining these metrics helps us understand what’s important to the site and, more importantly, how we determine whether a content plan is working as imagined.

We’ll take our original document framework and add these metrics to the desired outcomes. For example, for desired outcome 1b.1 above, you may say:

  • 1b.1 – Access flight and ticket information.
      • Metric #1 – Lower page views per task.
      • Metric #2 – Fewer customer service calls for flight and ticket information.

You can see that there are two avenues to measure better access to flight and ticket information. One is to measure how many pages a user goes through before finally finding the information. The other is to track customer service calls to see if the number of calls for flight and ticket information decreases.
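
If those numbers already live somewhere (pages per task in your analytics package, call volume in a customer service log), a quick comparison shows whether an outcome is trending the right way. Here is a small sketch; the figures are invented purely for illustration:

    # Hypothetical before/after figures for outcome 1b.1.
    # The numbers are invented for illustration, not pulled from a real account.
    metrics_1b1 = [
        {"name": "Page views per task", "baseline": 5.8, "current": 3.2, "want": "lower"},
        {"name": "Service calls about flight/ticket info", "baseline": 410, "current": 365, "want": "lower"},
    ]

    for metric in metrics_1b1:
        if metric["want"] == "lower":
            improved = metric["current"] < metric["baseline"]
        else:
            improved = metric["current"] > metric["baseline"]
        status = "improving" if improved else "needs attention"
        print(f'{metric["name"]}: {metric["baseline"]} -> {metric["current"]} ({status})')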

Just as the outcomes themselves overlapped, your metrics will overlap as well. Don’t worry. That’s normal.

Eric T. Peterson’s The Big Book of Key Performance Indicators is a good resource for determining which metrics to use for each outcome.

PROJECT PERSONAS

With metrics in place, the next step is to create personas for the major audiences.

The persona process has been well documented by nearly everyone, it seems, and at Blend we return to a few favorite resources again and again.

The number of personas you create depends on the number of unique audiences you’ve determined. In the case of On-Time Air, let’s say we have six unique audiences, one of which is sub-audience 1.a: potential customers. We would then create a persona to represent a potential customer:

Martin Hunt

Age: 46

Occupation: Architect

Family: Married with one child (19)

Education: Architecture degree from St. Cloud State University

Habits: Martin uses the internet every day, but has never been a heavy user. He relies on aides for most of his online information and research, and spends a good chunk of time answering email, but outside of that he’s a novice. He has a Facebook account and has signed up for Twitter, but has never posted. He has a Blackberry.

Assumptions: Martin is wary of the unofficial nature of bargain travel sites like Kayak.com and Priceline.com. He prefers to order tickets directly from the airline. Because he lives in Minneapolis, he must often fly Delta, but he is increasingly interested in On-Time Air’s direct flights from Minneapolis to Orlando and Minneapolis to Las Vegas.

Martin doesn’t care about price if the difference between two airlines is close—he’s more concerned about how comfortable he’ll be on the flight. He values relationships, and assumes the airline best positioned to win his business will be the one that does the best job of selling a unique experience. At the same time, he has already racked up a considerable amount of airline miles with Delta.

“I would rather fly comfortable than fly cheap, but I won’t accept an exorbitant price.”

Naturally, the details will be different depending on the project. And remember: personas are made of real people. All that questioning and interviewing you did earlier in the process? That helps inform and define your personas. For now. Until you do more interviewing and your personas become stronger and more agile. Better cooks, even.

Look at the personas you’ve created. Do they accurately represent the outcomes you’ve determined? In writing them, have new outcomes turned up? Go back to the beginning and make sure everything levels out. Your audiences and outcomes determine your personas, and your personas should validate your audiences and outcomes.

You can do one of two things in terms of deliverables—you can add these to the Audiences and Outcomes document, or you can present them on their own. We feel that they are so closely tied to audiences and outcomes that we include them in that document.

Next steps

With your audiences and outcomes in place, you no longer have any excuse for not knowing what your project’s goals are. The document informs the content audit, where you can begin assigning relevancy to every page on an existing site. Every piece of content on the site should relate back to a specific outcome, and if not, it needs to be reviewed for relevance.

From there, the audiences and outcomes will help you choose participants for user interviews and user testing. Find a group willing to help you through the entire process, and you’ve got a valuable resource not only for testing prototypes, but also for performing card sorts, validating your information architecture through tree testing, and serving as a de facto advisory board.

Finally, the audiences and outcomes help remind you what really matters in web design and development: the end user. We place an overview of a project’s audiences and outcomes at the beginning of every wireframe, style guide, and specifications document. Doing so gives us easy access to our user’s needs. After all—without the end user, we’d have no one to impress from here on out.


About the Author

Corey Vilhauer is User Experience Strategist at Blend Interactive in Sioux Falls, South Dakota. He writes about content strategy at Eating Elephant, and writes about things that aren’t content strategy at Black Marks on Wood Pulp. He’s on Twitter: @mrvilhauer.

A Guide To Heuristic Website Reviews

In this article, we’ll explore a scoring system for rating and comparing websites, we’ll visualize those ratings using infographics, and we’ll see what data and structure this method provides for reviewing websites.

HOW TO TELL WHETHER A WEBSITE IS JUNK

We are all reviewers. We review many websites every day, and many of us are experts at it. We don’t realize it because the whole process occurs in moments.

That’s how it is. We use websites; we judge websites. Even if we don’t know we’re doing it, we make judgements about trustworthiness, credibility, competency, reliability, design and style within seconds of arriving on a Web page. After looking around, we also get a pretty good feel for the user experience and usability.

CONSULTANCY REVIEWS

For many years, the agency I work for has conducted detailed reviews of its clients’ websites. As part of the consultancy process, we offer recommendations for any redesign or redevelopment work that is necessary.

Snap judgments may be useful and unavoidable, but when it comes to reviewing websites professionally, we need to be more organized and thorough, and we do this by using a review methodology. It also pays, in both time and effort, to be formulaic and consistent in our approach, because there are so many things to look at when considering a website.

To make this easier, we use a set of heuristics to score websites, along with a simple method to quickly visualize any weaknesses. My own set, based on original work by User Focus, is one I have edited and updated over time to suit the projects I work on.


Heuristics


A heuristic is just a fancy word for a measure of something that can’t readily be quantified (i.e., when there are no actual numbers to judge whether item A is better or worse than item B). In a 100-meter sprint, the winner is easily identified by concrete data. In ice dancing, the contestants are judged against a set of technical and artistic criteria, which yields a set of scores.

ALL THAT GLITTERS IS NOT GOLD

We might be swayed by something that looks good, but we all know that beauty is only skin deep. As with everything that glitters, the job of the reviewer is to poke about and see if they really have struck gold.

Conversely, some websites that are judged harshly for their graphic design are successful beyond measure — I’m looking at you, Amazon, eBay, Craigslist and even Google. These websites aren’t much to look at, but functionally speaking, they do their job well and have evolved over the years to precisely meet their customers’ needs.

As designers, we’re asked to redesign websites that, on the whole, already look better and better. It has reached the point where we find ourselves questioning the need for a redesign at all. But the problems usually aren’t immediately obvious in the visuals, layout, or code. Sometimes a website is just wrong for the client’s brand, or the experience of performing tasks on it is unpleasant. Sometimes, a website just doesn’t work.

You can’t tell by looking. You need to dig deeper by really using the website, setting yourself tasks and trying things out. Only then will you experience what is really going on. Realizing just how much rethinking, redesigning and redeveloping a website needs often takes a while.

METRICS FOR SUCCESS

The success of most websites can be measured by some metric, be it the number of sales, uploads, downloads, clicks, comments, or sign-ups. But a website can be successful in sales and still have problems; it might be doing well because of excellent marketing, its offline reputation (as in the case of high-street brands), or having cornered the market. Many more websites, though, have no quantifiable metrics by which we can determine how good or bad they actually are. Judging these websites is more difficult and requires a bit more legwork.

A Many-Layered Cake


When reviewing a website in detail, we have to explore many layers, both on the surface and below, including the following:

  • Task orientation and website functionality,
  • Navigation and information architecture,
  • Forms and data entry,
  • Trust and credibility,
  • Quality of writing and content,
  • Search,
  • Help, feedback and error tolerance,
  • Page layout and visual/aesthetic design,
  • Accessibility and technical design.

Taking these broad categories, we can devise a list of questions to explore each and get to the heart of the website. This formalizes the process and ensures that the same thought process can be repeated the next time. It also serves as a checklist, ensuring that nothing is forgotten. For example, when looking at the layout and visual design of a website, our questions could include these:

  • Are standard elements (such as page titles, website navigation, page navigation and privacy policy) easy to locate?
  • Is there a good balance between information density and white space?
  • Does the website have a consistent and clearly recognizable look and feel that will engage users?

For accessibility, we could formulate questions such as these:

  • Is the color contrast across the website enough to make all of the content accessible?
  • Does the website work comfortably at lower resolutions (e.g. 1024 × 768 pixels)?
  • Does the CSS validate with the W3C’s validation services?

Regarding the written copy, our questions could include:

  • Are the pages simple to scan on screen? Are they broken up by headings and subheadings? Are the paragraphs short?
  • Are acronyms and abbreviations defined when first used?
  • Does the website favor maps, diagrams, graphs, flow charts and other visuals over long blocks of text?
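
Keeping the checklist in a reusable form makes it easy to repeat the same process on the next review. Here is a minimal sketch that groups a few of the example questions above by category (the grouping is illustrative, not a canonical list):

    # A reusable checklist: categories mapped to the heuristic questions asked
    # during a review. Only a few example questions are included here.
    checklist = {
        "Page layout and visual design": [
            "Are standard elements (page titles, navigation, privacy policy) easy to locate?",
            "Is there a good balance between information density and white space?",
            "Is the look and feel consistent and clearly recognizable?",
        ],
        "Accessibility and technical design": [
            "Is the color contrast sufficient to make all content accessible?",
            "Does the website work comfortably at lower resolutions?",
            "Does the CSS validate with the W3C's validation services?",
        ],
        "Quality of writing and content": [
            "Are the pages simple to scan on screen?",
            "Are acronyms and abbreviations defined when first used?",
            "Are visuals favored over long blocks of text?",
        ],
    }

    for category, questions in checklist.items():
        print(f"{category}: {len(questions)} heuristics")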

DEPTH

Although relatively easy to conduct, a heuristic review is not quick to perform. However, we can decide just how much depth to go into and how many questions to ask in order to get a feel for the website. The more heuristic measures we use, the longer the process will take; the fewer we use, the less informative the results will be. It’s a matter of striking a balance between the time available and the quality of returns. Selecting heuristics that get to the heart of each category can significantly reduce the amount of effort you need to put in.

Devising A Scoring System

To get a yardstick for each heuristic, assign a simple score: 0 points if the website falls short, 1 point if it’s halfway there, and 2 points if it does the job. So, if acronyms or abbreviations are defined in some sections but not in others, that heuristic would score only 1 point. If the website works comfortably at 1024 × 768 pixels, it would receive 2 points.

These points can be totalled across each category to give a quantifiable sense of what’s going on across the website, as shown here:

Totals of heuristic data across categories.
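
To make the arithmetic concrete, here is a small sketch of that 0/1/2 scheme in Python. The category names come from the list earlier in the article; the individual scores are invented for illustration. Expressing each total as a percentage of its maximum keeps categories with different numbers of heuristics comparable:

    # Each heuristic scores 0 (falls short), 1 (halfway there) or 2 (does the job).
    # The individual scores below are invented for illustration.
    scores = {
        "Task orientation and functionality": [2, 1, 2, 1, 2],
        "Navigation and information architecture": [1, 1, 2, 2],
        "Forms and data entry": [0, 1, 1],
        "Trust and credibility": [2, 2, 1],
        "Quality of writing and content": [2, 2, 2, 1],
        "Search": [1, 0],
        "Help, feedback and error tolerance": [1, 1, 0],
        "Page layout and visual design": [2, 1, 2],
        "Accessibility and technical design": [1, 2, 1, 1],
    }

    def category_percentages(scores):
        """Each category's total as a percentage of its maximum (2 points per heuristic)."""
        return {cat: 100 * sum(vals) / (2 * len(vals)) for cat, vals in scores.items()}

    for category, pct in category_percentages(scores).items():
        print(f"{category}: {pct:.0f}%")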

VISUALIZATION

Representing this data visually helps us quickly identify problem areas and makes it easier to compare websites.

Radar diagrams are perfect for this kind of analysis, because they give a recognizable shape based on the score. The more circular the radar, the more balanced the score; the spikier the radar, the more variation in the score. The size of the radar plot on the axes indicates the score percentage itself, showing good and bad areas, as seen in the examples below:

A radar plot showing a website that performs well across all heuristic categories.

A radar plot showing poor performance across all heuristic categories.

A radar plot showing a website that performs well in all areas but one.

A radar plot showing a website that performs poorly in all areas but one.
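
One way to draw these radar (or spider) plots is with matplotlib's polar axes. The sketch below assumes matplotlib is installed, and the category percentages are illustrative (for example, the output of the scoring sketch above):

    import math
    import matplotlib.pyplot as plt

    # Illustrative category percentages, e.g. the output of the scoring sketch above.
    categories = {
        "Task orientation": 80, "Navigation & IA": 75, "Forms & data entry": 33,
        "Trust & credibility": 83, "Writing & content": 88, "Search": 25,
        "Help & feedback": 50, "Layout & design": 83, "Accessibility": 63,
    }

    def radar(ax, labels, values, **plot_kwargs):
        """Draw one closed radar polygon on a polar axis (values in percent)."""
        angles = [n * 2 * math.pi / len(labels) for n in range(len(labels))]
        angles += angles[:1]              # repeat the first angle to close the shape
        values = list(values)
        values += values[:1]
        ax.plot(angles, values, **plot_kwargs)
        ax.fill(angles, values, alpha=0.15)
        ax.set_xticks(angles[:-1])
        ax.set_xticklabels(labels, fontsize=8)
        ax.set_ylim(0, 100)

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    radar(ax, list(categories), list(categories.values()), label="Example site")
    ax.legend(loc="lower right")
    plt.show()

The more even the scores, the more circular the resulting polygon; a weak category shows up immediately as a dent in the shape.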

Competitor Reviews

By combining the heuristic results of different websites, we can create a visual comparison of competing websites in a market segment. This is particularly good for getting a feel for which websites fail and which succeed in certain respects. Analyzing multiple websites can, of course, take a lot of work, so stripping your heuristics down to the essentials is a good idea.

A DIRECT COMPARISON

As a real-world example, below is a comparison of two similar websites: Smashing Magazine and Webdesigner Depot. We can see that both lack a little in most of the categories, apart from quality of writing and content, which is what we would expect from content-rich blogs. (Please note that I work for neither website and stand as an impartial bystander!)

Both websites score a little higher in page layout and visual design, but they have rather weak home pages, being in the format of a traditional, basic blog. Their calls to action score quite poorly (other than the advertising!). Smashing Magazine scores marginally better in navigation because it has the tabs on top to distinguish major content areas, whereas Webdesigner Depot almost loses the navigation below the advertising in the right-hand column. Smashing Magazine scores slightly higher in accessibility for a number of minor heuristics, such as the clarity of the text, spacing and contrast.

Webdesigner Depot falls behind a little on trust and credibility because of details such as the basic link to an email address in the footer (compared to the well-considered contact form on Smashing Magazine), and also for the very brief copy in the “About us” section. However, Webdesigner Depot picks up slightly more points in visual design for its colorful style. Of course, like the presentation scores in ice dancing, any process used to score aesthetics or design will always be subjective, so having a wide range of criteria for various aspects of design is a good idea.

A heuristic analysis of Smashing Magazine.

A heuristic analysis of Webdesigner Depot. Note that Webdesigner Depot does not really have or require form inputs, so it scores 0 by default in the “Forms and data entry” category; this score can be either ignored or removed altogether if so wished.

To emphasize the differences in the heuristic measurements, we can overlay one radar plot on the other:

Overlaying one radar diagram on the other to enhance visualization.
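
The overlay itself is simply a second polygon drawn on the same polar axes. Continuing the sketch above (it reuses the radar() helper and the imports defined there; both score sets are invented for illustration, not the real figures for either magazine):

    # Reuses radar(), math, and matplotlib.pyplot from the previous sketch.
    # Both score sets are invented; keep the keys in the same order for both sites.
    site_a = {"Task orientation": 80, "Navigation & IA": 75, "Forms & data entry": 33,
              "Trust & credibility": 83, "Writing & content": 88, "Search": 25,
              "Help & feedback": 50, "Layout & design": 83, "Accessibility": 63}
    site_b = {"Task orientation": 60, "Navigation & IA": 63, "Forms & data entry": 0,
              "Trust & credibility": 67, "Writing & content": 88, "Search": 25,
              "Help & feedback": 33, "Layout & design": 92, "Accessibility": 50}

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    radar(ax, list(site_a), list(site_a.values()), label="Site A")
    radar(ax, list(site_b), list(site_b.values()), label="Site B")
    ax.legend(loc="lower right")
    plt.show()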

Conclusion

When reviewing a website, we can’t rely on subjective snap judgments. We can do justice to a website only with a detailed test drive: performing tasks and looking in detail at the various components on and below the surface. Heuristic scoring is a useful process for quantifying and visualizing a website’s quality when other measures are not appropriate or available. This formal process reveals problem areas while focusing the discussion at the start of a redevelopment phase.

Resources

Based on work done by User Focus. Discover more and download a free template to get started creating your own heuristic reviews.


Leigh Howells

Leigh is a designer with 15 years’ experience, now working in user experience. He’s been helping websites look better, be more organised, and work better since grey backgrounds were the norm. He’s a jack of all trades, from video to music, and is still trying to master at least one of them. He survives on coffee and custard creams and blogs occasionally from his own planet at leighhowells.com.