Our publication database contains 7985 publications dating back to 1943. You can browse some of the most recently added entries below, or you can:
- Search for particular publications
- See publications whose data is available from our data repository
- Contact us to request a reprint (reprints may not be available for all publications)
Recently Added Publications
Convergent neural signatures of speech prediction error are a biological marker for spoken word recognition
Authors:
Sohoglu, E., Beckers, L., DAVIS, M.H.
Reference:
Nature Communications
Year of publication:
In Press
CBU number:
9091
Abstract:
We use MEG and fMRI to determine how predictions are combined with speech input in superior temporal cortex. We compare neural responses to words in which first syllables strongly or weakly predict second syllables (e.g., “bingo”, “snigger” versus “tango”, “meagre”). We further compare neural responses to the same second syllables when predictions mismatch with input during pseudoword perception (e.g., “snigo” and “meago”). Neural representations of second syllables are suppressed by strong predictions when predictions match sensory input but show the opposite effect when predictions mismatch. Computational simulations show that this interaction is consistent with prediction error but not alternative (sharpened signal) computations. Neural signatures of prediction error are observed 200 ms after second syllable onset and in early auditory regions (bilateral Heschl’s gyrus and STG). These findings demonstrate prediction error computations during the identification of familiar spoken words and perception of unfamiliar pseudowords.
https://osf.io/wjd4s/
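As a purely illustrative aside (not the authors' simulation code), the toy sketch below contrasts the two schemes named in the abstract: a prediction-error response tracks the mismatch between input and prediction, so strong predictions suppress responses to matching input but enhance responses to mismatching input (a crossover interaction), while a simple sharpened-signal toy scales the response with the input-prediction overlap and produces no such crossover. All quantities are made-up scalars.

```python
import numpy as np

def prediction_error(inp, pred):
    """Toy prediction-error response: magnitude of the unexplained input."""
    return np.abs(inp - pred)

def sharpened_signal(inp, pred):
    """Toy sharpened-signal response: input scaled by prediction overlap."""
    return inp * pred

# Hypothetical activation strengths: 1.0 = second syllable matches the input,
# 0.2 = mismatching second syllable; 0.9 / 0.3 = strong / weak prediction.
cases = [("match,    strong prediction", 1.0, 0.9),
         ("match,    weak prediction",   1.0, 0.3),
         ("mismatch, strong prediction", 0.2, 0.9),
         ("mismatch, weak prediction",   0.2, 0.3)]

for label, inp, pred in cases:
    print(f"{label:29s}  PE={prediction_error(inp, pred):.2f}  "
          f"sharpened={sharpened_signal(inp, pred):.2f}")
# Only the prediction-error column reverses its strong-vs-weak ordering between
# the match and mismatch rows, mirroring the interaction reported in the abstract.
```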
A neurocognitive pathway for engineering artificial touch
Authors:
Nisky, I., MAKIN, T.
Reference:
Science Advances
Year of publication:
In Press
CBU number:
9090
Abstract:
Artificial haptics has the potential to revolutionise the way we integrate physical and virtual technologies in our daily lives, with implications for teleoperation, motor skill acquisition, rehabilitation, gaming, interpersonal communication, and beyond. Here we delve into the intricate interplay between the somatosensory system and engineered haptic inputs for perception and action. We critically examine the sensory feedback’s fidelity and the cognitive demands for interfacing with these systems. We examine how artificial touch interfaces could be redesigned to better align with human sensory, motor, and cognitive systems, emphasising the dynamic and context-dependent nature of sensory integration. We consider the various learning processes involved in adapting to artificial haptics, highlighting the need for interfaces that support both explicit and implicit learning mechanisms. We emphasise the need for technologies that are not just physiologically biomimetic, but also behaviourally and cognitively congruent with the user, affording a range of alternative solutions to users’ needs.
The dimensionality of neural coding for cognitive control is gradually transformed within the lateral prefrontal cortex
Authors:
CHIOU, R., DUNCAN, J.D., Jefferies, E., LAMBON RALPH, M.
Reference:
Journal of Neuroscience
Year of publication:
In Press
CBU number:
9089
A checklist for assessing the methodological quality of concurrent tES-fMRI studies (ContES checklist): a consensus study and statement.
Authors:
Ekhtiari, H., Ghobadi-Azbari, P., Thielscher, A., Antal, A., Li, L.M., Shereen, A.D., Cabral-Calderin, Y., Keeser, D., Bergmann, T.O., Jamil, A., Violante, I.R., Almeida, J., Meinzer, M., Siebner, H.R., Woods, A.J., Stagg, C.J., Abend, R., Antonenko, D., Auer, T., Bächinger, M., Baeken, C., Barron, H.C., Chase, H.W., Crinion, J., Datta, A., DAVIS, M.H., Ebrahimi, M., Esmaeilpour, Z., Falcone, B., Fiori, V., Ghodratitoostani, I., Gilam, G., Grabner, R.H., Greenspan, J.D., Groen, G., Hartwigsen, G., Hauser, T.U., Herrmann, C.S., Juan, C.H., Krekelberg, B., Lefebvre, S., Liew, S.L., Madsen, K.H., Mahdavifar-Khayati, R., Malmir, N., Marangolo, P., Martin, A.K., Meeker, T.J., Ardabili, H.M., Moisa, M., Momi, D., Mulyana, B., Opitz, A., Orlov, Ragert, P., Ruff, C.C., Ruffini, G., Ruttorf, M., Sangchooli, A., Schellhorn, K., Schlaug, G., Sehm, B., Soleimani, G., Tavakoli, H., Thompson, B., Timmann, D., Tsuchiyagaito, A., Ulrich, M., Vosskuhl, J., Weinrich, C.A., Zare-Bidoky, M., Zhang, X., Zoefel, B., Nitsche, M.A., Bikson, M.
Reference:
Nature Protocols, 04 Feb 2022, 17(3):596-617
Year of publication:
2022
CBU number:
9088
Abstract:
Low-intensity transcranial electrical stimulation (tES), including alternating or direct current stimulation, applies weak electrical stimulation to modulate the activity of brain circuits. Integration of tES with concurrent functional MRI (fMRI) allows for the mapping of neural activity during neuromodulation, supporting causal studies of both brain function and tES effects. Methodological aspects of tES-fMRI studies underpin the results, and reporting them in appropriate detail is required for reproducibility and interpretability. Despite the growing number of published reports, there are no consensus-based checklists for disclosing methodological details of concurrent tES-fMRI studies. The objective of this work was to develop a consensus-based checklist of reporting standards for concurrent tES-fMRI studies to support methodological rigor, transparency and reproducibility (ContES checklist). A two-phase Delphi consensus process was conducted by a steering committee (SC) of 13 members and 49 expert panelists through the International Network of the tES-fMRI Consortium. The process began with a circulation of a preliminary checklist of essential items and additional recommendations, developed by the SC on the basis of a systematic review of 57 concurrent tES-fMRI studies. Contributors were then invited to suggest revisions or additions to the initial checklist. After the revision phase, contributors rated the importance of the 17 essential items and 42 additional recommendations in the final checklist. The state of methodological transparency within the 57 reviewed concurrent tES-fMRI studies was then assessed by using the checklist. Experts refined the checklist through the revision and rating phases, leading to a checklist with three categories of essential items and additional recommendations: (i) technological factors, (ii) safety and noise tests and (iii) methodological factors. The level of reporting of checklist items varied among the 57 concurrent tES-fMRI papers, ranging from 24% to 76%. On average, 53% of checklist items were reported in a given article. In conclusion, use of the ContES checklist is expected to enhance the methodological reporting quality of future concurrent tES-fMRI studies and increase methodological transparency and reproducibility.
URL:
Using Earables Platforms to Study Verbal Communication: Introducing earables to psycholinguistic research
Authors:
Fernández, A.P., DAVIS, M.H.
Reference:
2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp/ISWC ’22 Adjunct)
Year of publication:
2022
CBU number:
9087
Abstract:
Earables provide a new opportunity to study conversation in the wild. They uniquely allow (i) accurate head motion tracking recorded synchronously with the speech signal and (ii) multiple people to simultaneously receive and stream conversational speech that is unconstrained by body movement. Here, our general aim is to introduce the use of earables for conducting psycholinguistic studies requiring audio and movement data jointly collected during verbal interaction in a natural setting. Specifically, we propose using earables platforms to address the relationship between head movement, speech and meaning transmission from single and multiple-person perspectives.
URL:
Designing remote synchronous auditory comprehension assessment for severely impaired individuals with aphasia
Authors:
Robson, H., Thomasson, H., DAVIS, M.H.
Reference:
International Journal of Language & Communication Disorders, 06 Nov 2023, 59(3):1232-1242
Year of publication:
2023
CBU number:
9086
Abstract:
Background
The use of telepractice in aphasia research and therapy is increasing in frequency. Teleassessment in aphasia has been demonstrated to be reliable. However, neuropsychological and clinical language comprehension assessments are not always readily translatable to an online environment and people with severe language comprehension or cognitive impairments have sometimes been considered to be unsuitable for teleassessment.
Aim
This project aimed to produce a battery of language comprehension teleassessments at the single word, sentence and discourse level suitable for individuals with moderate-severe language comprehension impairments.
Methods
Assessment development prioritised response consistency and clinical flexibility during testing. Teleassessments were delivered in PowerPoint over Zoom using screen sharing and remote control functions. The assessments were evaluated in 14 people with aphasia and 9 neurotypical control participants. Modifiable assessment templates are available here: https://osf.io/r6wfm/.
Main contributions
People with aphasia were able to engage in language comprehension teleassessment with limited carer support. Only one assessment could not be completed for technical reasons. Statistical analysis revealed above chance performance in 141/151 completed assessments.
Conclusions
People with aphasia, including people with moderate-severe comprehension impairments, are able to engage with teleassessment. Successful teleassessment can be supported by retaining clinical flexibility and maintaining consistent task demands.
URL:
Can speech perception deficits cause phonological impairments? Evidence from short-term memory for ambiguous speech
Authors:
Smith, H.J., Gilbert, R.A., DAVIS, M.H.
Reference:
Journal of Experimental Psychology: General, 14 Dec 2023, 153(4):957-981
Year of publication:
2023
CBU number:
9085
Abstract:
Poor performance on phonological tasks is characteristic of neurodevelopmental language disorders (dyslexia and/or developmental language disorder). Perceptual deficit accounts attribute phonological dysfunction to lower-level deficits in speech-sound processing. However, a causal pathway from speech perception to phonological performance has not been established. We assessed this relationship in typical adults by experimentally disrupting speech-sound discrimination in a phonological short-term memory (pSTM) task. We used an automated audio-morphing method (Rogers & Davis, 2017) to create ambiguous intermediate syllables between 16 letter name–letter name (“B”–“P”) and letter name–word (“B”–“we”) pairs. High- and low-ambiguity syllables were used in a pSTM task in which participants (N = 36) recalled six- and eight-letter name sequences. Low-ambiguity sequences were better recalled than high-ambiguity sequences, for letter name–letter name but not letter name–word morphed syllables. A further experiment replicated this ambiguity cost (N = 26), but failed to show retroactive or prospective effects for mixed high- and low-ambiguity sequences, in contrast to pSTM findings for speech-in-noise (SiN; Guang et al., 2020; Rabbitt, 1968). These experiments show that ambiguous speech sounds impair pSTM, via a different mechanism to SiN recall. We further show that the effect of ambiguous speech on recall is context-specific, limited, and does not transfer to recall of nonconfusable items. This indicates that speech perception deficits are not a plausible cause of pSTM difficulties in language disorders.
URL:
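For readers unfamiliar with audio morphing, the sketch below is a crude, hypothetical stand-in for the idea of generating ambiguous intermediate syllables; the study itself used the automated method of Rogers and Davis (2017), not this code. It simply interpolates the magnitude spectrograms of two recordings (e.g., "B" and "P") at a chosen morph ratio.

```python
import numpy as np
from scipy.signal import stft, istft

def morph(sig_a, sig_b, fs, ratio=0.5, nperseg=512):
    """Blend two equal-length recordings by interpolating magnitude spectra.

    ratio=0.0 returns (approximately) sig_a, ratio=1.0 returns sig_b,
    and ratio=0.5 gives a maximally ambiguous intermediate token.
    """
    _, _, A = stft(sig_a, fs=fs, nperseg=nperseg)
    _, _, B = stft(sig_b, fs=fs, nperseg=nperseg)
    mag = (1.0 - ratio) * np.abs(A) + ratio * np.abs(B)
    phase = np.angle(A if ratio < 0.5 else B)  # borrow phase from the nearer endpoint
    _, out = istft(mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return out

# Placeholder "recordings" (one second of noise each); real use would load
# time-aligned recordings of the two syllables to be morphed.
fs = 16000
rng = np.random.default_rng(0)
token_b, token_p = rng.standard_normal(fs), rng.standard_normal(fs)
high_ambiguity = morph(token_b, token_p, fs, ratio=0.5)
low_ambiguity = morph(token_b, token_p, fs, ratio=0.1)
```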
Intelligibility improves perception of timing changes in speech
Authors:
Zoefel, B., Gilbert, R.A., DAVIS, M.H.
Reference:
PLOS ONE, 12 Jan 2023, 18(1):e0279024
Year of publication:
2023
CBU number:
9084
Abstract:
Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is solely derived from acoustic properties (e.g., rapid amplitude changes), or if it is also influenced by the linguistic units (syllables, words, etc.) that listeners extract from intelligible speech. Here, we present three experiments in which participants were asked to detect an irregularity in rhythmically spoken speech sequences. In each experiment, we reduce the number of possible stimulus properties that differ between intelligible and unintelligible speech sounds and show that these acoustically-matched intelligibility conditions nonetheless lead to differences in rhythm perception. In Experiment 1, we replicate a previous study showing that rhythm perception is improved for intelligible (16-channel vocoded) as compared to unintelligible (1-channel vocoded) speech, despite near-identical broadband amplitude modulations. In Experiment 2, we use spectrally-rotated 16-channel speech to show that the effect of intelligibility cannot be explained by differences in spectral complexity. In Experiment 3, we compare rhythm perception for sine-wave speech signals when they are heard as non-speech (for naïve listeners), and subsequent to training, when identical sounds are perceived as speech. In all cases, detection of rhythmic regularity is enhanced when participants perceive the stimulus as speech compared to when they do not. Together, these findings demonstrate that intelligibility enhances the perception of timing changes in speech, which is hence linked to processes that extract abstract linguistic units from sound.
URL:
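As background for the vocoding manipulation in Experiment 1, here is a minimal noise-vocoder sketch (an assumed textbook-style implementation, not the stimulus code used in the study): the signal is split into log-spaced frequency bands, each band's amplitude envelope is extracted and used to modulate band-limited noise, and the bands are summed. With 16 channels enough spectral detail survives for speech to remain intelligible; with 1 channel only the broadband envelope is preserved.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    """Envelope-modulated-noise vocoder with log-spaced band edges."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    carrier = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band_env = np.abs(hilbert(sosfiltfilt(sos, signal)))  # amplitude envelope of this band
        out += band_env * sosfiltfilt(sos, carrier)           # modulate noise filtered to the same band
    return out / (np.max(np.abs(out)) + 1e-12)

# Placeholder one-second "speech" signal; real use would load a recording.
fs = 16000
speech = np.random.default_rng(1).standard_normal(fs)
vocoded_16ch = noise_vocode(speech, fs, n_channels=16)  # intelligible for real speech input
vocoded_1ch = noise_vocode(speech, fs, n_channels=1)    # broadband envelope only
```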
Temporal lobe perceptual predictions for speech are instantiated in motor cortex and reconciled by inferior frontal cortex
Authors:
Cope, T.E., Sohoglu, E., Peterson, K.A., Jones, P.S., Rua, C., Passamonti, L., Sedley, W., Post, B., Coebergh, J., Butler, C.R., Garrard, P., Abdel-Aziz, K., Husain, M., Griffiths, T.D., Patterson, K., DAVIS, M.H., ROWE, J.B.
Reference:
Cell Reports, 24 Apr 2023, 42(5):112422
Year of publication:
2023
CBU number:
9083
Abstract:
Humans use predictions to improve speech perception, especially in noisy environments. Here we use 7-T functional MRI (fMRI) to decode brain representations of written phonological predictions and degraded speech signals in healthy humans and people with selective frontal neurodegeneration (non-fluent variant primary progressive aphasia [nfvPPA]). Multivariate analyses of item-specific patterns of neural activation indicate dissimilar representations of verified and violated predictions in left inferior frontal gyrus, suggestive of processing by distinct neural populations. In contrast, precentral gyrus represents a combination of phonological information and weighted prediction error. In the presence of intact temporal cortex, frontal neurodegeneration results in inflexible predictions. This manifests neurally as a failure to suppress incorrect predictions in anterior superior temporal gyrus and reduced stability of phonological representations in precentral gyrus. We propose a tripartite speech perception network in which inferior frontal gyrus supports prediction reconciliation in echoic memory, and precentral gyrus invokes a motor model to instantiate and refine perceptual predictions for speech.
URL:
Complex speech-language therapy interventions for stroke-related aphasia: the RELEASE study incorporating a systematic review and individual participant data network meta-analysis
Authors:
Brady, M.C., Ali, M., VandenBerg, K., Williams, L.J., Williams, L.R., Abo, M., Becker, F., Bowen, A., Brandenburg, C., Breitenstein, C., Bruehl, S., Copland, D.A., Cranfill, T.B., di Pietro-Bachmann, M., Enderby, P., Fillingham, J., Galli, F.L., Gandolfi, M., Glize, B., Godecke, E., Hawkins, N., Hilari, K., Hinckley, J., Horton, S., Howard, D., Jaecks, P., Jefferies, E., Jesus, L.M.T., Kambanaros, M., Kang, E.K., Khedr, E.M., Kong, A.P.H., Kukkonen, T., Laganaro, M., LAMBON RALPH, M.A., Laska, A.C., Leemann, B., Leff, A.P., Lima, R.R., Lorenz, A., MacWhinney, B., Shisler Marshall, R., Mattioli, F., Maviş, İ., Meinzer, M., Nilipour, R., Noé, E., Paik, N.J., Palmer, R., Papathanasiou, I., Patrício, B.F., Martins, I.P., Price, C., Price, C., Jakovac, T.P., Rochon, E., Rose, M.L., Rosso, C., Rubi-Fessen, I., Ruiter, M.B., Snell, C., Stahl, B., Szaflarski, J.P., Thomas, S.A., van de Sandt-Koenderman, M., van der Meulen, I., Visch-Brink, E., Worrall, L., Wright, H.H.
Reference:
Review from National Institute for Health and Care Research, Southampton (UK), 21 Dec 2022
Year of publication:
2022
CBU number:
9082
Abstract:
Background
People with language problems following stroke (aphasia) benefit from speech and language therapy. Optimising speech and language therapy for aphasia recovery is a research priority.
Objectives
The objectives were to explore patterns and predictors of language and communication recovery, optimum speech and language therapy intervention provision, and whether or not effectiveness varies by participant subgroup or language domain.
Design
This research comprised a systematic review, a meta-analysis and a network meta-analysis of individual participant data.
Setting
Participant data were collected in research and clinical settings.
Interventions
The intervention under investigation was speech and language therapy for aphasia after stroke.
Main outcome measures
The main outcome measures were absolute changes in language scores from baseline on overall language ability, auditory comprehension, spoken language, reading comprehension, writing and functional communication.
Data sources and participants
Electronic databases were systematically searched, including MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature, Linguistic and Language Behavior Abstracts and SpeechBITE (searched from inception to 2015). The results were screened for eligibility, and published and unpublished data sets (randomised controlled trials, non-randomised controlled trials, cohort studies, case series, registries) with at least 10 individual participant data reporting aphasia duration and severity were identified. Existing collaborators and primary researchers named in identified records were invited to contribute electronic data sets. Individual participant data in the public domain were extracted.
Review methods
Data on demographics, speech and language therapy interventions, outcomes and quality criteria were independently extracted by two reviewers, or available as individual participant data data sets. Meta-analysis and network meta-analysis were used to generate hypotheses.
Results
We retrieved 5928 individual participant data from 174 data sets across 28 countries, comprising 75 electronic (3940 individual participant data), 47 randomised controlled trial (1778 individual participant data) and 91 speech and language therapy intervention (2746 individual participant data) data sets. The median participant age was 63 years (interquartile range 53–72 years). We identified 53 unavailable, but potentially eligible, randomised controlled trials (46 of these appeared to include speech and language therapy). Relevant individual participant data were filtered into each analysis. Statistically significant predictors of recovery included age (functional communication, individual participant data: 532, n = 14 randomised controlled trials) and sex (overall language ability, individual participant data: 482, n = 11 randomised controlled trials; functional communication, individual participant data: 532, n = 14 randomised controlled trials). Older age and a longer time since aphasia onset predicted poorer recovery. A negative relationship between baseline severity score and change from baseline (p < 0.0001) may reflect the reduced improvement possible from high baseline scores. The frequency, duration, intensity and dosage of speech and language therapy were variously associated with auditory comprehension, naming and functional communication recovery. There were insufficient data to examine spontaneous recovery. The greatest overall gains in language ability [14.95 points (95% confidence interval 8.7 to 21.2 points) on the Western Aphasia Battery-Aphasia Quotient] and functional communication [0.78 points (95% confidence interval 0.48 to 1.1 points) on the Aachen Aphasia Test-Spontaneous Communication] were associated with receiving speech and language therapy 4 to 5 days weekly; for auditory comprehension [5.86 points (95% confidence interval 1.6 to 10.0 points) on the Aachen Aphasia Test-Token Test], the greatest gains were associated with receiving speech and language therapy 3 to 4 days weekly. The greatest overall gains in language ability [15.9 points (95% confidence interval 8.0 to 23.6 points) on the Western Aphasia Battery-Aphasia Quotient] and functional communication [0.77 points (95% confidence interval 0.36 to 1.2 points) on the Aachen Aphasia Test-Spontaneous Communication] were associated with speech and language therapy participation from 2 to 4 (and more than 9) hours weekly, whereas the highest auditory comprehension gains [7.3 points (95% confidence interval 4.1 to 10.5 points) on the Aachen Aphasia Test-Token Test] were associated with speech and language therapy participation in excess of 9 hours weekly (with similar gains noted for 4 hours weekly). While clinically similar gains were made alongside different speech and language therapy intensities, the greatest overall gains in language ability [18.37 points (95% confidence interval 10.58 to 26.16 points) on the Western Aphasia Battery-Aphasia Quotient] and auditory comprehension [5.23 points (95% confidence interval 1.51 to 8.95 points) on the Aachen Aphasia Test-Token Test] were associated with 20–50 hours of speech and language therapy. Network meta-analyses on naming and the duration of speech and language therapy interventions across language outcomes were unstable. Relative variance was acceptable (< 30%). Subgroups may benefit from specific interventions.
URL: