The program used for the simulations reported in Norris, D. (2006). The Bayesian Reader: Explaining word recognition as an optimal Bayesian decision process. Psychological Review, 113, 327-357, is available here:
Source Code - The source code should compile OK with g++ > 3.2 or MS Visual C.
With g++, compile using: g++ -O3 *.cpp -o BayesVisual
It works for me on Red Hat Enterprise 3, Fedora Core 4 & 5, and Solaris.
Documentation - leaves much to be desired, but ready-cooked simulations are available below
Simulated lexical decision times.
The Bayesian Reader is very computationally intensive. Simulations of
a single experiment can take days to run on a single computer. As I can
now run simulations in parallel on a pool of Linux boxes and PCs, I have
simulated lexical decision times for all of the 12545 words in the CELEX
lexicons used in the simulations reported in the paper. The simulations
are the average of 50 runs for each word. This represents several months
of CPU time.
Simulated RTs for words 4, 5, and 6 letters long are contained in the files
LDT4.txt, LDT5.txt, LDT6.txt
Each line in the files begins with the word and its CELEX frequency.
There are then 6 sets of three numbers corresponding to simulations
with the following thresholds (for both Yes and No responses):
0.8, 0.85, 0.9, 0.95, 0.97, 0.99.
For each threshold there is the RT, the number of 'Yes' responses, and
the number of 'No' responses. Note that there is also the possibility
that the model did not respond within the available time, so the sum
of 'Yes' and 'No' responses need not be 50.
The simulations of words of different lengths are quite independent.
As explained in the paper, one shouldn't attempt to compare absolute RTs
for words of different lengths. The best way to determine the model's mean
simulated RT for all of the items in an experiment, when there are different
numbers of items of different lengths in different conditions, is to calculate
unweighted means over the different word lengths.
The most recent version of the Bayesian Reader model is described in
Norris (2009), Putting it all together: A unified account of word recognition.
This version of the model can simulate RT distributions and masked priming, and can also incorporate noise in letter position as well as letter identity. The source code, a Windows executable, and the documentation are available here: