In the course of comprehension, the human brain makes astounding inferences despite insufficient and ambiguous data, the data mostly being 'words' that frame sentences. Can this neural process of "abstraction of the relevant" be modeled well enough that a computer may be induced to perform the same? This article is our response to that very question: a treatise on our attempt at engineering the basis of such a model, or rather, a methodology of 'relevance cognition'. Drawing inspiration from the way children learn languages, we propose a corpus- and entropy-based methodology that follows a subjectivist approach to granulate and identify statements of consequence in a natural language sample, all within the bounds of a particular context. Besides promoting the language-grasping abilities of a machine by simulating the intuitive process of segregating relevant information, the methodology aims at reducing overall text-processing costs. The suggested scheme considers 'conceptual keywords' to be the basis of sentential understanding and utilizes the principles of Bayes' conditional probability and Shannon's entropy. Experimental results are provided to substantiate our reasoning. Though this article is formulated against the backdrop of the Z-number approach to computing with words (CWW), it nevertheless applies to natural language processing (NLP) research areas such as text summarization, semantic disambiguation, and concept graph generation.
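To convey the flavor of such a scheme, the following is a minimal illustrative sketch, assuming sentences are tokenized into word sets and that a sentence's relevance to a context term is scored by combining a corpus-estimated conditional probability of each of its words given that term with the corresponding Shannon entropy contribution; the function names (conditional_probability, sentence_relevance) and this particular scoring formula are hypothetical and are not the paper's exact formulation.

```python
import math

def conditional_probability(keyword, context_term, corpus):
    """Estimate P(keyword | context_term) over a tokenized corpus:
    the fraction of sentences containing the context term that also
    contain the keyword (a simple conditional-probability estimate)."""
    with_context = [s for s in corpus if context_term in s]
    if not with_context:
        return 0.0
    return sum(1 for s in with_context if keyword in s) / len(with_context)

def sentence_relevance(sentence, context_term, corpus):
    """Score a sentence by the Shannon entropy contributions of its
    words conditioned on the context term; words that co-occur with
    the context neither always nor never contribute the most."""
    score = 0.0
    for word in set(sentence):
        p = conditional_probability(word, context_term, corpus)
        if 0.0 < p < 1.0:
            score += -p * math.log2(p)  # Shannon entropy term for this word
    return score

# Hypothetical usage: each sentence is a set of lowercase word tokens.
corpus = [
    {"entropy", "measures", "uncertainty", "information"},
    {"bayes", "gives", "conditional", "probability", "information"},
    {"children", "learn", "language", "from", "context"},
]
print(sentence_relevance({"entropy", "information"}, "information", corpus))
```

Under these assumptions, the sentences of a sample could be ranked by this score and the highest-scoring ones retained as the statements of consequence for the given context.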