As we hear or read a sentence, word by word, we pass through states of uncertainty about which structure the speaker might be intending. Resolving these uncertainties -- say, about the grammatical role that words play -- implies information-processing work. The talk presents a general formalization of this information-processing work and uses it to address the comprehension difficulty of pre-nominal relative clauses in Korean, Japanese, and Chinese. The proposed Entropy Reduction metric correctly derives a processing asymmetry, called the Subject Advantage, that has been observed across languages. Combining corpus-derived attestation frequencies with linguistically motivated Minimalist Grammars, we see how the processing difficulty profile reflects uncertainty over different types of empty or inaudible elements.
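The core quantity can be illustrated with a minimal sketch. Under the Entropy Reduction hypothesis, the difficulty of a word is the drop (if any) in Shannon entropy over the remaining structural analyses. The toy parse distributions below are hypothetical, chosen only to show the calculation; they are not from the corpus study described in the talk.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution over parses."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def entropy_reduction(before, after):
    """Entropy Reduction: difficulty = max(0, H_before - H_after).
    Increases in uncertainty are floored at zero cost."""
    return max(0.0, entropy(before) - entropy(after))

# Hypothetical distributions over structural analyses of a pre-nominal
# relative clause, before and after a disambiguating word is heard:
before = {"subject-RC": 0.5, "object-RC": 0.5}   # fully uncertain: H = 1 bit
after  = {"subject-RC": 0.9, "object-RC": 0.1}   # mostly resolved

print(round(entropy_reduction(before, after), 3))  # → 0.531
```

In the full model, the distributions range over derivations of a Minimalist Grammar weighted by corpus attestation frequencies, rather than over two toy parse labels.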