ADME
ADME: absorption, distribution, metabolism, excretion.
Pharmacology: the treatment of diseases. The microbiome regulates metabolism.
admissible
ADReSS Challenge
The ADReSS Challenge is an Alzheimer’s Dementia Recognition through Spontaneous Speech challenge built on data available from DementiaBank.
ADReSS Literature Survey Index
The ADReSS Literature Survey is a survey of the results published for the ADReSS Challenge.
- Antonsson 2021: disfluency + SVF features trained on SVM: lexical > narrative qual.
- Chlasta 2021: features extracted from VGGish on SVM; also trained new CNN from .wav.
- Sadeghian 2021: Used GA for feature sel., achieved 94% w/ MMSE alone; dev’d ASR tool.
- Martinc 2021: CBOW (text) + ADR (sound) features late-fused with BERT; ablated over features.
- Meghanani 2021: spontaneous speech transcripts with fastText and CNN; 83.33% acc.
- Yuan 2021: ERNIE on transcripts with pause encoding; 89.6% acc.
- Jonell 2021: Developed a kitchen sink of diag. tools and correlated it with biomarkers.
- Laguarta 2021: multimodal model (OVBM) to embed auditory info + biomarkers for clsf.
- Shah 2021: late fusion of n-gram and OpenSMILE features on std. classifiers; see the late-fusion sketch after this list.
- Lindsay 2021: Cross-linguistic markers shared for AD patients between English and French.
- Zhu 2021: late fusion on the CTP task for AD clsf. w/ transf., MobileNet, YAMNet, Mockingjay.
- Guo 2021: used WLS data to augment the CTP data from the ADReSS Challenge and trained a BERT on it.
- Balagopalan 2021: lexical and syntactic features trained on BERT and other models.
- Mahajan 2021: a bimodal model on speech/text with GRU on speech and CNN-LSTM on text.
- Parvin 2020: exercise scheme effects on theta/alpha ratio and brain-wave frequency.
- Luz 2021: review paper presenting the ADReSSo challenge and current baselines.
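Several entries above (e.g., Shah 2021, Zhu 2021, Mahajan 2021) rely on late fusion: a separate classifier is trained per modality and their outputs are combined at prediction time. Below is a minimal sketch of that pattern, assuming generic text and audio feature matrices; the arrays are random placeholders, not ADReSS data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder feature matrices, one row per recording (not real ADReSS data).
X_text = rng.normal(size=(100, 300))   # e.g., n-gram / transcript embedding features
X_audio = rng.normal(size=(100, 88))   # e.g., OpenSMILE acoustic features
y = rng.integers(0, 2, size=100)       # 1 = AD, 0 = control (synthetic labels)

# Late fusion: train one classifier per modality independently...
clf_text = SVC(probability=True).fit(X_text, y)
clf_audio = SVC(probability=True).fit(X_audio, y)

# ...then combine their predicted probabilities (here, a simple average).
p_text = clf_text.predict_proba(X_text)[:, 1]
p_audio = clf_audio.predict_proba(X_audio)[:, 1]
y_pred = ((p_text + p_audio) / 2 > 0.5).astype(int)
```

Early fusion would instead concatenate the two feature matrices before training a single classifier.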
advantage function
an advantage function scores an action in a state by how much additional value it provides compared to the greedy choice:
\begin{align} A(s,a) &= Q(s,a) - U(s) \\ &= Q(s,a) - \max_{a'}Q(s,a') \end{align}
that is, how much your action’s value \(Q(s,a)\) differs from the value of the utility-maximizing action.
A greedy policy always picks the action that maximizes \(Q\), so its chosen action has \(A = 0\).
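A minimal numeric sketch of this definition, using a small made-up tabular \(Q\)-function:

```python
import numpy as np

# Made-up Q-table: Q[s, a] for 2 states and 3 actions (illustrative values only).
Q = np.array([
    [1.0, 2.5, 2.0],   # state 0
    [0.0, 0.5, 3.0],   # state 1
])

# U(s) = max_a' Q(s, a'): value of the greedy action in each state.
U = Q.max(axis=1, keepdims=True)

# A(s, a) = Q(s, a) - U(s): how much better (or worse) each action is
# than the greedy choice.
A = Q - U
# A[0] == [-1.5, 0.0, -0.5], A[1] == [-3.0, -2.5, 0.0]
# The greedy action in each state has advantage 0 and every other action is
# negative, matching the note's claim that a greedy policy yields A = 0.
```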