Cognitive Pragmatics in Human–AI Interaction

Authors

  • Muhammad Natsir, Universitas Negeri Medan
  • Nadya Anggita Lubis, Universitas Negeri Medan
  • Fanny Agnesya Siagian, Universitas Negeri Medan
  • Sry Juniar Limbong, Universitas Negeri Medan
  • Yoanne Simbolon, Universitas Negeri Medan

DOI:

https://doi.org/10.59890/ijir.v4i3.150

Keywords:

Cognitive Pragmatics, Human–AI Interaction, Meaning Inference, Intentional Stance, Politeness Attribution

Abstract

The rapid development of language-based artificial intelligence has transformed the ways humans interact, communicate, and construct meaning, particularly within social, educational, and personal reflective contexts. This transformation raises fundamental questions about how meaning is produced and interpreted when interaction no longer involves a human interlocutor. This study examines human–AI interaction through the lens of cognitive pragmatics, with a specific focus on users’ subjective experiences, inferential meaning-making processes, and relational negotiation. Adopting a qualitative phenomenological approach, the study explores participants’ lived experiences of interacting with language-based AI systems in academic and reflective settings. Data were collected through in-depth semi-structured interviews with active AI users and complemented by document analysis of selected interaction records. The data were analyzed using thematic analysis to identify recurring patterns of meaning construction and emergent pragmatic dynamics. The findings reveal three interrelated themes: the experience of an illusion of understanding, the attribution of intention and politeness to AI, and the negotiation of emotional distance accompanied by ethical reflection. These findings demonstrate that meaning in human–AI interaction is not derived from AI’s communicative capacities but is actively constructed through users’ pragmatic inference and metapragmatic awareness. Theoretically, this study extends the scope of cognitive pragmatics to the domain of human–technology interaction by foregrounding users’ experiential and inferential processes. Practically, the findings offer important implications for digital literacy, education, and the development of more reflective and responsible human–AI relationships.

Published

2026-03-31

Section

Articles