Architectural limits in current state-of-the-art AI prevent it from producing new, valid, and explainable knowledge. Human cognition represents knowledge at least partly explicitly, yet the cognitive sciences have struggled to develop a useful and tractable model of human knowledge acquisition. One major challenge concerns how world experience produces general knowledge. Another, less investigated, problem concerns the agent's general motive for acquiring knowledge.
In this talk I will argue that the native human desire to know might constitute a core principle for any epistemically autonomous system. The knowledge structure acquired from episodic experiences can be mined to identify knowledge gaps that guide further enquiry, as in thinking deeply about inconsistencies and seeking arbitrating evidence. Empirically, I present recent findings from the lab suggesting that human curiosity acts as a rational learning-opportunity signal, driven both by uncertainty in prior knowledge and by the reliability of available information sources, and that it operates as a gain on belief updating. Moreover, the human mind appears to reward itself for reliable operations in tasks as diverse as logical problem-solving and foreign-word recall. Thus, it is not learning itself that is rewarded but the subjective reliability of the learning outcome.
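To make the proposed mechanism concrete, the following is a minimal sketch of my own (not the lab's actual model): a curiosity signal computed as the uncertainty of a Bernoulli belief scaled by source reliability, with that same reliability acting as a gain on the belief update. The function names and the linear gain rule are illustrative assumptions.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli belief with parameter p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def curiosity_signal(p_belief, source_reliability):
    """Hypothetical learning-opportunity signal: curiosity is highest when
    the current belief is uncertain AND a reliable source is available."""
    return entropy(p_belief) * source_reliability

def update_belief(p_belief, observation, source_reliability):
    """Belief update with reliability as gain: a fully reliable source (1.0)
    moves the belief all the way to the observation; an unreliable source
    (0.0) leaves the belief unchanged."""
    target = 1.0 if observation else 0.0
    return p_belief + source_reliability * (target - p_belief)
```

On this toy account, an agent with a confident belief (p near 0 or 1) or access only to noisy sources would show little curiosity, matching the claim that the signal tracks both prior uncertainty and source reliability.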
Imbuing an artificial system with a human-like computational goal of minimizing uncertainty in the predictive models across its knowledge representation, and of computing the expected knowledge gain of exploration, might produce a general artificial intelligence.
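One standard way to operationalize "expected knowledge gain", sketched below under my own assumptions rather than any model from the talk, is the expected reduction in predictive entropy from one more observation of a Bernoulli variable under a Beta(a, b) belief. An agent could then direct exploration toward the question with the largest expected gain.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli distribution with parameter p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_knowledge_gain(a, b):
    """Expected drop in predictive entropy (bits) from observing one more
    outcome of a Bernoulli variable under a Beta(a, b) belief."""
    p = a / (a + b)                       # current predictive probability
    p_if_success = (a + 1) / (a + b + 1)  # posterior mean after a success
    p_if_failure = a / (a + b + 1)        # posterior mean after a failure
    expected_post = p * entropy(p_if_success) + (1 - p) * entropy(p_if_failure)
    return entropy(p) - expected_post

# Illustrative beliefs (hypothetical counts): exploration is drawn to the
# question where one more observation is expected to teach the most.
beliefs = {"well-established fact": (50, 2), "open question": (2, 2)}
best = max(beliefs, key=lambda k: expected_knowledge_gain(*beliefs[k]))
```

Here the "open question" (near-uniform belief) yields a larger expected gain than the well-established fact, so an uncertainty-minimizing agent would choose to investigate it first.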