- works for Amazon Creative Services. Responsible for offering UX services and designs to organisations.
- we oversimplify ‘us’ v ‘them’. It’s not just us and them (designers v customers)
- we must accept from the outset that there are designers, devs, support, testers and CEOs who will all feel their opinion counts
- when we oversimplify, the designers and customers get squashed and lost in the debate. This is dangerous as it leads to design by committee
- we then create the divide. HR Rep: “I don’t like the logo and color”. Designer: “why the fuck doesn’t he like the logo, who is he?”
- what they’re expressing is fear. MDs and CEOs won’t agree to something they don’t understand
- a leader is required to ensure decisions are not made based on fear
- leaders are not necessarily bosses, managers or other authority figures
- a leader is someone who makes you want to do well and to do better, because the leader is good at what they do
Leaders don’t always have:
- the right answer
2 things leaders do:
- build confidence and trust
- listening first allows you to build confidence and trust later
- we have to be open to do the right thing and open to go in a new direction
- failing to use a good idea because you think it may make your design look worse, or because you don’t agree with it, makes you a bad designer
How to respond to fear
- “I hear you, I understand your concern, but we have tested this approach and we feel really confident and good about it”
- this shows you are on their side, and will help them accept an idea that they may not necessarily understand
- Cross Channel Experience is designing the process for all touch points regardless of device
- 90% of businesses say Cross Channel Experience is key
- 3 types of touch points exist:
1. Static – PCs, mobile devices etc. Devices that cannot be altered once in the hands of the user.
2. Interactive – Internet, websites etc.
3. Human – physical interaction between human to human, customer service etc.
- customers don’t look at your business in the context of one of these channels. They look at it as a holistic experience.
Methods for a better CCE
- observe how people use it (watch them first hand)
- observe it in the context of its use
- attention to detail counts
- look for hacks; ask why they came up with a new way of doing something the product already does
- follow entire engagement from purchasing, un-packaging/downloading to use
Tools to help achieve a better CCE
Audience – those there to use the product
Onstage – the app, the product etc
Backstage – behind the scenes, warehouse, etc. this is the bit the user doesn’t see.
Support systems – the ticket support system, hosting services etc.
Note to self: look up the nForm experience map
- Cross-pollinate rules – extend knowledge by sharing information across different departments within the organisation to help create a common collective goal
- aim to create a unified vision
- abruptness = bad UX
- things that appear from nowhere are confusing; they require more cognitive effort.
- transitions should communicate what is happening, just like the minimise-to-dock animation on the Mac.
- transitions are a trade off as they often cause delays. Delays in user interaction are bad.
- advanced users don’t want delays, they want a quick, snappy product.
- once the user is familiar with an animation, it is not always required after.
- transitioning to height: auto removes the animation, as the browser cannot calculate the intermediate values.
- CSS Images Level 4 will allow animation on background images
- transitions can be used to persist state by using transition-delay on the :active CSS rule
- steps() is a timing function that can be added to an animation to step through still image frames
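As a rough sketch of the transition-delay trick for persisting state (selectors and timings here are hypothetical, not from the talk):

```css
/* Sketch: the 1s delay on the base rule holds the expanded state
   briefly after :active ends; the :active rule expands immediately.
   max-height is animated because height: auto cannot be transitioned. */
.panel {
  max-height: 2em;
  overflow: hidden;
  transition: max-height 0.3s ease 1s; /* delay applies when collapsing */
}
.panel:active {
  max-height: 20em;
  transition-delay: 0s; /* no delay when expanding */
}
```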
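A minimal sketch of steps() driving a sprite sheet of still frames (the file name and frame dimensions are assumptions for illustration):

```css
/* steps(8) jumps through 8 discrete frames instead of
   interpolating smoothly across the sprite sheet */
@keyframes walk-cycle {
  from { background-position: 0 0; }
  to   { background-position: -800px 0; } /* 8 frames, each 100px wide */
}
.sprite {
  width: 100px;
  height: 100px;
  background-image: url("walk-sprite.png"); /* hypothetical sprite sheet */
  animation: walk-cycle 1s steps(8) infinite;
}
```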
- responsive is not fluid. Using a media query for the new iPhone 5, or a media query for resolutions between 1024 and 768, is not fluid. This is just another form of fixed design. It’s restricted.
- think evolution. At what point should the designer step in and alter the design, regardless of window? Content should always be readable regardless of resolution.
- designing for fixed resolutions also gives us ugly break points.
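As an illustrative sketch of the fluid approach (the values are made up), relative units let the layout flex continuously, with a breakpoint only where the content itself stops being readable:

```css
/* Fluid-first: widths in % and em flex with the viewport;
   a media query steps in only where readability breaks down */
.content {
  width: 90%;
  max-width: 40em; /* keep line length comfortable */
  margin: 0 auto;
}
@media (max-width: 30em) {
  .content {
    width: auto;
    padding: 0 1em;
  }
}
```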
Peripheral vision refers to the vision that lies outside the center of the field of sight (the fovea) and consists of three parts. The part nearest the fovea is called near-peripheral, the area slightly further out is called mid-peripheral, and the outermost area is called far-peripheral.
Compared to many animals, humans have fairly weak peripheral vision, especially when it comes to distinguishing colors and shapes. Our peripheral vision has adapted to recognise general shapes in order to feed our brain a general impression of a situation; the fovea is far better adapted to recognising fine detail and color. We know this to be true because, in order to read a paragraph of text, we have to move our central field of vision back and forth across the page over the text.
Differences within the anatomy of cells which make up the retina are the reason for the differences between central and peripheral vision. The area of the retina where central vision occurs is heavily packed with cone cells. Cone cells are used to perceive colors and fine lines but make up only a small minority of cells within the retina. The rest of the cells are known as rod cells and rod cells are responsible for taking in coarser and more general information. The far-peripheral area of vision is filled with mainly rod cells and these organize light from broad scenes and large objects and convert them into nerve impulses, which reach the brain via the optic nerve at the back of the eye.
Between the fovea (central vision) and peripheral vision is the parafovea. The parafovea surrounds the fovea and helps us distinguish things close to the central vision. We know this because, when reading text, we can quickly understand the next couple of words before our central vision focuses on them. The parafovea is what allows us to read text rapidly.
The fact that our vision is more precise at the center of our field of view does not make our peripheral vision inferior; it simply accomplishes a different purpose. If our entire field of view were as precise as it is at the fovea, our eyes would have to send far more information to our brain, demanding more energy to process.
As mentioned above, our peripheral vision is responsible for recognising a scene or situation. GUIs traditionally set the scene by placing header, navigation and footer elements at the outer edges of the application. Our peripheral vision recognises these broader shapes and guides our central vision to the center, where finer and more compact data exists. Not only do these elements act as scene setters, but they tend to be less content-heavy, reducing the need for our central vision to move towards them.
Twitter is a good example of this. It sets the scene by placing the main navigation, which consists of only seven components, at the very top of the page. Our central vision immediately focuses on the main content.
White space is also an important aspect of GUIs that prevents the central vision from being distracted too easily. If there are no striking objects within the view of the parafovea, our central vision has no reason to move. Adding white space around objects keeps the user focused on a specific task or part of the page, and invisible pathways can be created by careful use of white space. By not adding too much white space, the GUI can be designed so users notice multiple elements at a time without being distracted from their main task.
Have you ever walked into a room where the TV has caught your eye, so you immediately turn to see it? Detail and color are limited in the periphery, so we turn our central vision to focus on the detail. This is why notifications in GUIs work so well. Today, notifications come in many different forms, but mainly they are a toast or Growl style animation at the edge of the screen. This animation is processed by the peripheral vision, so we turn towards it and our central vision can focus on the content. The same applies to the red notifications used in Facebook. Color is limited in the periphery, so a red color block will immediately engage our fovea.
Applying these principles will help build engaging GUIs that focus attention and form uninterrupted user pathways while also highlighting other important areas.
Extracts from http://webaim.org/articles/laws/usa/rehab
There are many myths surrounding the realities of US law and web accessibility. For example, as far as the ADA (Americans with Disabilities Act of 1990) is concerned, there is no such thing as an accessible website: Title IV—Telecommunications does not include any laws regarding the internet. Internet coverage was introduced in the 1998 amendments to the Rehabilitation Act.
The Rehabilitation Act of 1973 was the first major legislative effort to provide a range of services for persons with physical and cognitive disabilities. Since its inception this act has had two amendments, one in 1993 and most recently in 1998. Two sections within the Rehabilitation Act, as amended, have an impact on accessible web design: Sections 504 and 508.
Section 504 provides the context of the law while Section 508 provides the direction.
Section 504 is a civil rights law. Included as an amendment to the Rehabilitation Act of 1973, the message of this section is concise:
No otherwise qualified individual with a disability in the United States… shall, solely by reason of her or his disability, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving Federal financial assistance.
Therefore, programs receiving federal funds may not discriminate against those with disabilities based on their disability status. All government agencies, schools, and postsecondary entities such as state colleges and universities fall into this category.
The Reauthorized Rehabilitation Act of 1998 included amendments to Section 508. This section prohibits the Federal Government from procuring electronic and information technology goods and services that are not fully accessible to those with disabilities. This includes the service of web design since the Internet was specifically mentioned.
Section 508 directed the Access Board (The Architectural and Transportation Barriers Compliance Board) to create binding, enforceable standards that clearly outline and identify specifically what federal government means by “accessible” electronic and information technology products. The first set of accessibility standards for Federal E&IT were published on December 21, 2000.
Although limited to federal agencies, Section 508 is an extremely influential piece of legislation. There are 4 reasons why this is so.
A breakdown showing IE9’s HTML5 support
WAI-ARIA, the Accessible Rich Internet Applications specification from the W3C’s Web Accessibility Initiative, provides a way to add the missing semantics needed by assistive technologies such as screen readers. ARIA enables developers to describe their widgets in more detail by adding special attributes to the markup. Designed to fill the gap between standard HTML tags and the desktop-style controls found in dynamic web applications, ARIA provides roles and states that describe the behaviour of most familiar UI widgets.
The ARIA specification is split up into three different types of attributes: roles, states, and properties.
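As a hedged illustration (the widget, ids and labels below are hypothetical), a custom tab widget might combine all three attribute types: role="tablist"/"tab" describe behaviour, aria-selected is a state, and aria-controls is a property:

```html
<!-- roles describe what each element is, the aria-selected state
     tracks the active tab, and aria-controls links tab to panel -->
<div role="tablist">
  <button role="tab" id="tab-1" aria-selected="true" aria-controls="panel-1">
    General
  </button>
  <button role="tab" id="tab-2" aria-selected="false" aria-controls="panel-2">
    Privacy
  </button>
</div>
<div role="tabpanel" id="panel-1" aria-labelledby="tab-1">…</div>
<div role="tabpanel" id="panel-2" aria-labelledby="tab-2" hidden>…</div>
```

A screen reader can then announce “General, tab, selected, 1 of 2” rather than just “button”, which is the gap ARIA is designed to fill.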
Continue reading on developer.mozilla
I recently read this comment by a graphic designer in response to the new iOS 7 icons, and I agree with what he says:
“…all design – particularly communication arts – must be viewed in the context of its surroundings. In this case, the new icons lack the rich colors and details that the Retina display is capable of showing. Line weights and uniformity of colors make it difficult to quickly distinguish function at such a small size. The overly simple designs lack warmth and look as though they could have been purchased from one of any number of stock image sites. Most importantly, though, some of the icons do not adequately represent their applications’ function.