[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: FESCo Meeting Summary for 20090424


On Sun, May 3, 2009 at 12:56 PM, Callum Lerwick <seg haxxed com> wrote:
> On Sat, 2009-05-02 at 01:55 +0100, Matthew Garrett wrote:
>> In the end we still come down to making
>> decisions based on the opinions of people we deem to be experts in the
>> field. And if you don't trust the desktop team to make the appropriate
>> decision in this case then it would be helpful for you to say so
>> plainly.
> Well, to put it bluntly, no. We don't trust the desktop team and we
> *shouldn't* trust the desktop team.
> So they're "expert in their field". Fine. Cool. We need experts. But the
> problem with experts is they're totally consumed by their field, and
> lack the ability to view their piece of the puzzle in the overall
> picture.
> There's a reason we have release managers and FESCo. Someone has to
> manage the Big Picture, and sometimes that requires the "experts" to
> compromise their Perfect Vision for the good of the overall project.
> ... And are we seriously designing UI based purely on "the opinions of
> experts"? Is there any actual end user testing in the loop?
> Are we honestly expected to trust the desktop team based purely on
> "We're experts and we say so!" How reasonable is that? No, trusting
> experts is for sheep. We want to see data. Show us usability studies, or
> meta-studies of bug reports. *Something* more than "we say so". We trust
> data, not experts.
> As it is, the people in the trenches doing QA and user support have way
> more convincing data than the desktop team.
> Designing HCI without Humans in the loop is bogus beyond belief.

I think you are conflating user experience design with usability.  Our
user experience design and interface design take place very early in
the Fedora development cycle.  They are primarily informed by our
experiences, research, and vision.  Currently, user testing is
done primarily after a Fedora release.  However, we don't have a
process in place to collect trustworthy, actionable data.  Ubuntu has
actually just completed a round of real user testing and will be
presenting the results at GUADEC.  So, that should be interesting.

It would be great if we organized something similar.  A word of
caution, though: this is hard to do correctly, and it is quite easy to
collect misleading or confusing data.  The exact testing process
matters a great deal.  FWIW, I'm hoping to do something like this in
the next few months.

