As novices we slowly and laboriously sift through a chaotic flood of minutiae. To experts the significant details are obvious. Irrelevant details fade to the background. The novice receives a jumble of meaningless impressions; the expert sees patterns and meaning.
Somehow experts have made the trek from “How could you possibly tell?” to “How could you not?”. And they probably can’t tell you how they got there.
This talk examines the topic of perceptual learning through the lens of theory and practice—research and anecdotes—and speculates how it can be deployed strategically to train new experts.
We typically think about expertise and mastery in terms of:
- knowing facts
- explicitly verbalizing concepts
- executing specific sequences of steps
A lot of expertise, however, defies this tidy classification. The skill of experts is reliable and consistent, but they’re often unable to explain what they do or how they know.
The field of Perceptual Learning is the study of the conditions under which the brain develops the ability to make snap judgements. The fundamental building block of this ability is differentiation, the process of determining which dimensions of variation are important and which are not.
The brains of novices and experts process perceptual information differently in two important ways:
- discovery effects: how they extract information
- fluency effects: how efficiently they extract information
Discovery effects consist of:
- units: novices perceive unrelated pieces of data, whereas experts see chunks, patterns, and higher-order relations
- selectivity: novices process relevant and irrelevant data alike; as we gain expertise, our brains begin to attenuate or filter out irrelevant characteristics and amplify relevant ones
Fluency effects consist of:
- search type: novices process serially, while experts are much more likely to process in parallel
- speed: slowly vs. quickly
- attentional load: novices are capacity-constrained, whereas experts extract information without draining cognitive resources
Dr. Kellman’s work shows that this haphazard process can be made deliberate, and that the learning which happens naturally over years can be compressed into far less time: train people with short interactive trials targeting a specific perceptual skill, using a huge, messy, complex data set with known answers. It’s very important that the data set is large, free of duplicates, and rich in variation. The variation should include both relevant and irrelevant characteristics (noise and distractors). This is what allows the brain to identify the meaningful diagnostic structures and underlying invariants.
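The trial structure described above can be sketched in code. This is a hypothetical toy illustration, not Dr. Kellman’s actual training software: the `dataset`, `classify` callable, and parity-based categories below are all invented for the example. The essential shape is many short trials, each drawing one item from a large varied pool, eliciting a snap judgement, and giving immediate right/wrong feedback.

```python
import random

def run_trials(dataset, classify, n_trials=100, seed=0):
    """Run short classification trials over a large, varied dataset.

    dataset:  list of (example, correct_category) pairs -- ideally large,
              duplicate-free, with both relevant variation and distractors.
    classify: callable standing in for the learner's snap judgement.
    Returns the fraction of correct responses.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        example, answer = rng.choice(dataset)  # draw one trial at random
        guess = classify(example)              # fast judgement on the example
        correct += (guess == answer)           # immediate right/wrong feedback
    return correct / n_trials

# Toy usage: the "perceptual category" is just the parity of a number,
# and the "learner" already knows the rule, so accuracy is perfect.
data = [(n, n % 2) for n in range(1000)]
accuracy = run_trials(data, classify=lambda n: n % 2)
```

In a real training module the `classify` callable would be a human response captured through an interface, and accuracy over blocks of trials would be tracked to measure when the perceptual skill has been acquired.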