Noisy channel Bayesian Reader
The code for the noisy-channel version of the Bayesian Reader model can be found here in zip or tar.gz format,
along with some examples and a README.txt file containing the information on this page.
Older versions of the Bayesian Reader are here.
This is the code for the noisy channel version of the Bayesian Reader (BRnc).
On a Unix system you can compile this simply by typing "make -B" at
the command line. I've only tried it on CentOS 5.6 with gcc 4.1.2 so
far, but there are no dependencies on non-standard libraries, so it
should work on any flavour of Unix. There isn't a Windows
version, but if you're familiar with, say, Visual C++, it shouldn't be
too difficult to compile it yourself.
To compile the version for running simulations in French, type
make -B -f MakefileFrench
and this will create BRFnc, which needs to be used with the French
lexicon. Simulations using Dutch are run with the English version.
The model is run from the command line as
BRnc script.txt arg1 arg2 ...
where script.txt contains the parameter settings, response thresholds,
and possibly the names of input and output files.
Inside script.txt, references to $1, $2, etc. are replaced by arg1, arg2, etc.
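To illustrate the argument substitution, a hypothetical script fragment might look like the following (the parameter names and the name/value syntax here are illustrative assumptions, not taken from the actual script files):

```text
InputFile $1
OutputFile $2
Lexicon $3
```

so that running BRnc with three extra arguments substitutes the first argument for $1, the second for $2, and the third for $3.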
Two example scripts are provided, one to run straight lexical
decision, and one to do masked priming. There are also two lexicons,
one based on the English Lexicon Project and one based on the French
Lexicon Project. For example,
BRnc LDscript.txt stims.txt LDout.txt ELPlexicon.txt
will run the program with the parameter settings in LDscript.txt,
taking a list of input words in the first column of stims.txt, writing the
output to LDout.txt and using ELPlexicon.txt as the lexicon.
BRnc MPscript.txt stims.txt MPout.txt ELPlexicon.txt
will run a simulation of masked priming. stims.txt contains 2 columns
of stimuli, the target followed by the prime. You can put all the
conditions for an experiment in different columns of one input file
and then use the parameters PrimeFieldPosition and TargetFieldPosition
to select which columns you want to use.
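For illustration, a hypothetical three-column stims.txt might look like this, with targets in column 1, related primes in column 2, and unrelated primes in column 3 (the words and the column assignment are just an example):

```text
judge   court   spoon
table   chair   grape
nurse   doctor  plank
```

TargetFieldPosition and PrimeFieldPosition could then be set in the script to pick out, say, columns 1 and 2 for the related condition and columns 1 and 3 for the unrelated condition.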
You don't really need to run these examples (they take about 90 minutes
each on a 3.0GHz machine) as I've included the output files LDout.txt
and MPout.txt.
There are quite a few parameter settings available that we played with
during development but never really used; the simplest solutions
always seemed to work best. However, all of the parameter settings
always get printed out.
The most critical parameter settings are the response thresholds, the
prime duration, and the identity/position noise.
The program is very slow. It can take about two weeks on a single CPU
to simulate a single masked priming experiment. Obviously it will take
far longer to simulate all of the data from one of the lexicon
projects. I run everything under Condor on a large number of Linux
boxes in parallel. I've made the full results of the lexicon project
simulations available.
If you just want to play with the Bayesian Reader, but aren't worried
about using a mix of different-length stimuli and allowing for
insertions and deletions, I suggest you try one of the older models
here instead. They run much faster.
If you want to use this for research purposes, it's probably worth
contacting me for advice. If you want to simulate some data, and the
data look interesting, I might be able to help you run the simulations.
Format of output files.
If you run the scripts above you get a model RT for each pair of
threshold values you use. Each line looks like this:
P_a_WordThreshold 0.99 0.001 Y 24 613 N 1 363 T 0 ytf 0 ntf 0
The first two numbers are the Yes and No thresholds.
Y 24 613 - Number of Yes responses and their RTs
N 1 363 - Number of No responses and their RTs
T 0 ytf 0 ntf 0 - number of timeouts (responses longer than MaxSteps),
yes-too-fast: Yes responses faster than MinSteps, and
no-too-fast: No responses faster than MinSteps.
The overlong name - P_a_WordThreshold - is there just for compatibility
with the output of the earlier models.
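If you want to read these output lines into an analysis script, a minimal Python sketch is given below. It assumes only the fixed token layout shown in the example line above; the dictionary keys are my own naming, not anything the model itself uses.

```python
def parse_brnc_line(line):
    """Parse one BRnc output line of the form:
    P_a_WordThreshold 0.99 0.001 Y 24 613 N 1 363 T 0 ytf 0 ntf 0
    """
    t = line.split()
    return {
        "yes_threshold": float(t[1]),   # Yes threshold
        "no_threshold": float(t[2]),    # No threshold
        "yes_count": int(t[4]),         # number of Yes responses
        "yes_rt": int(t[5]),            # RT of the Yes responses
        "no_count": int(t[7]),          # number of No responses
        "no_rt": int(t[8]),             # RT of the No responses
        "timeouts": int(t[10]),         # responses longer than MaxSteps
        "yes_too_fast": int(t[12]),     # Yes responses faster than MinSteps
        "no_too_fast": int(t[14]),      # No responses faster than MinSteps
    }

line = "P_a_WordThreshold 0.99 0.001 Y 24 613 N 1 363 T 0 ytf 0 ntf 0"
print(parse_brnc_line(line))
```

Running every line of LDout.txt or MPout.txt through this gives one record per threshold pair, which is convenient for plotting model RTs against the thresholds.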