Decades of phoniatrics and speech science give us a defensible map between what the voice does and what's happening upstream. Audexia operationalizes that map. Click around — this page is interactive.
Each condition fingerprint is built from peer‑reviewed feature‑to‑condition associations. Audexia scores all of them in parallel from a single 90‑second session.
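One way to read "scores all of them in parallel" is as a set of weighted sums over one shared feature vector. A minimal Python sketch under that reading; the condition names, features, and weights below are invented for illustration and are not Audexia's published associations:

```python
# Illustrative only: these condition fingerprints and weights are made up
# for this sketch; they are not Audexia's actual peer-reviewed associations.
ASSOCIATIONS = {
    "parkinsonian_dysarthria": {"jitter": 0.4, "shimmer": 0.3, "hnr": -0.3},
    "vocal_fold_lesion":       {"jitter": 0.5, "shimmer": 0.5},
}

def score_conditions(z_features):
    """Score every condition fingerprint from one feature vector.

    `z_features` maps feature name -> z-score relative to a normative
    population; each condition score is a weighted sum of those z-scores.
    """
    return {
        cond: sum(w * z_features.get(f, 0.0) for f, w in weights.items())
        for cond, weights in ASSOCIATIONS.items()
    }
```

Because every fingerprint reads from the same vector, one 90-second capture can be scored against all of them at once.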
Each feature has a clean physical interpretation — and a known set of conditions it's sensitive to. Audexia ships them as a single, model‑agnostic feature vector.
This is a one‑task demo of the same engine clinicians use. Pitch, jitter, shimmer, and HNR are computed live from your microphone. Nothing is uploaded — audio stays in your browser tab and is discarded the moment you leave.
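Jitter and shimmer reduce to simple statistics over successive glottal cycles. A minimal offline Python sketch of the standard "local" definitions; in the demo the equivalent math runs in the browser, and the `periods` and `amps` inputs would come from a pitch tracker, which is assumed here:

```python
def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal period durations, normalized by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amps):
    """Local shimmer (%): the same statistic over cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return 100 * (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

def mean_f0(periods):
    """Fundamental frequency (Hz) from period durations in seconds."""
    return 1.0 / (sum(periods) / len(periods))
```

A perfectly steady voice yields zero jitter and shimmer; pathology and fatigue push both upward.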
No microphone? Run a synthetic sample instead.
Each task isolates a different speech subsystem. The clinician picks the bundle; Audexia handles the order, prompts, and quality gates.
Phonatory stability and respiratory support. Drives jitter, shimmer, HNR, F0 stability.
Diadochokinetic rate and regularity. Primary bulbar marker — sensitive to early ALS.
Standardized phonetic content. Rate, prosody, articulation precision in controlled context.
Connected speech under planning load. Articulatory deficits surface here when sustained /a/ looks normal.
Overlearned motor program. Reveals reduced loudness scaling and festination (involuntary acceleration of rate).
Brief verbal fluency or serial 7s. Elicits state‑level affective and cognitive modulation.
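For the DDK task above, rate can be estimated by counting syllable onsets in a short-time energy envelope. A minimal Python sketch under that assumption; the fixed threshold is illustrative, and a production system would gate on more than raw energy:

```python
def ddk_rate(energy, frame_rate_hz, threshold):
    """Diadochokinetic rate: syllable onsets per second.

    `energy` holds one short-time energy value per analysis frame;
    an onset is counted at each upward crossing of `threshold`.
    """
    onsets = sum(
        1 for prev, cur in zip(energy, energy[1:])
        if prev < threshold <= cur
    )
    return onsets * frame_rate_hz / len(energy)
```

Healthy adults typically manage several /pa-ta-ka/ syllables per second; irregularity and slowing in this number is the bulbar signal the DDK card refers to.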
Many factors beyond pathology shape acoustic features — age, sex, individual style, medication, fatigue, microphone, environment, and task choice all leave fingerprints. Audexia is built to augment clinical assessment, not replace it. Every observation we write to the chart includes its provenance, the model version that produced it, and the quality envelope of the underlying capture. Clinicians decide.
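An observation that carries its provenance, model version, and capture quality envelope might look like the following sketch; the field names are illustrative, not Audexia's actual chart schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChartObservation:
    """Sketch of an auditable per-feature observation record.

    All field names here are invented for illustration.
    """
    feature: str        # e.g. "jitter_local_pct"
    value: float
    task: str           # which elicitation task produced it
    model_version: str  # exact model that computed the value
    snr_db: float       # capture quality envelope: signal-to-noise
    clipped: bool       # capture quality envelope: clipping flag
```

Freezing the record keeps the audit trail immutable: a clinician reviewing the chart sees exactly what was measured, by which model, and how clean the capture was.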