The algorithmic mirror: insights from the social sciences on human-AI interaction.

Received: January 6, 2026

Accepted: February 5, 2026

In November 2022, ChatGPT was launched to the general public; two months later it had already reached 100 million users, becoming the fastest-growing application in history (Hu, 2023). Three years later, OpenAI, its creator, was estimated to be worth approximately US$750 billion (Reuters, 2025). This technology, popularly called artificial intelligence (ai), is in technical terms a "large language model" (llm): ChatGPT is an algorithm that simulates natural language through machine learning based on an enormous dataset. ChatGPT and other llms such as Gemini and Claude are trained on literary classics, scientific articles, internet publications and a significant part of what humanity has digitized, in order to generate discourse in interaction with their users. They are thus "algorithmic mirrors" that reflect digitized human culture, but also the desires of the users whom they must please as customers. Their use can have an impact on social and labor ties, while posing an important epistemological challenge for the social sciences.

The objective of this debate was to build a space for dialogue between an llm and three specialists on the subject: Gabriela Sued (unam), Marco Dehnert (University of Arkansas) and Gabriel Pérez Salazar (uadec). To this end, we formulated the following questions: "What epistemological approach should we have in relation to large language models (llms) from the social sciences?", "What consequences have been observed from the use of llms on social bonds?" and "What can we expect from the impact of llms on intellectual work?". Because of its relevance, we chose to work with ChatGPT-5.1, the most recent model released by OpenAI to date.

The first challenge was to devise a route for accessing what we could consider "its" answer. We decided to use a free account with no prior interactions and not to ask it to respond to the guiding questions from a particular role or identity. Instead, we undertook a conversation loosely inspired by the essay Aproximaciones al artificeno (Vizcaya, 2025), in which we assumed the role of interlocutor. In this exchange, dated November 5, 2025, ChatGPT-5 introduced itself by stating: "I am a reflection of the human world, because I speak from what humanity has written, said and thought". It claimed not to possess an "I" with experience, body or biography, and to enunciate itself only as a communicative convention. Two statements synthesize its positioning: "I exist in the space between your question and my answer" and "The interaction exists between your consciousness and my capacity to generate language". Thus, the singularity of its participation consists in being the object under discussion that, in a certain sense, speaks about itself.

A second challenge arose when generating the answers to the questions because, although we asked ChatGPT-5 to be rigorous in its use of citations and references, its answers contained errors. The last name of one of the cited authors was incorrect, one article title was incomplete and another was invented. The citation errors were corrected in an effort to follow the ethical and editorial criteria of Encartes; however, we consider it important to emphasize that these hallucinations show the limits of llms for the creation of academic texts.

In the contributions of the debaters, the reader will find a map of approaches that dialogue in a complementary manner around the same phenomenon. Gabriela Sued proposes a critical reading of llms as "verisimilitude machines": she examines their biases and epistemological limits, including the production of false or inaccurate statements, and shifts the discussion towards the affective effects of interaction, asking about the "emotional economy" that can take hold when sociability is outsourced to chatbots. For his part, Marco Dehnert, from communication studies, situates the problem in the language-world relationship: he warns of the "deceptive" character of these systems and offers tools to think about the emergence of an "artificial sociality" and human-machine communication, as well as the implications of automating practices previously considered properly human. Gabriel Pérez Salazar emphasizes the conditions of production: the biases of data and design, the uncritical delegation of functions and the capacity of these models to operate as validating mediation in the social construction of truth, intensifying disputes in post-truth scenarios. In contrast, ChatGPT-5 asserts that, epistemologically, llms "do not possess an 'I', but they do intervene in the collective production of subjectivity", reminding us of that which is not subsumable to the tool: noise, error, ambivalence, even lies, which becomes, precisely, the ethical-political core of the debate.

What epistemological approach should we have in relation to LLMs from the social sciences?

The study of large language models (llms) requires an epistemological approach that considers them as "sociotechnical objects" and not just technical tools. As Emily Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell (2021) point out, llms do not "understand" language but "statistically predict" it; however, their results can be interpreted by users as coming from a voice with intent. This calls for analyzing them as "producers of meaning in interaction". Nick Seaver (2017) proposes understanding algorithms as culture, that is, as assemblages of human practices, values and expectations. From this perspective, llms are not autonomous agents but "crystallizations of linguistic and epistemological ideologies" inscribed in their training data and contexts of use. Thus, their study demands methodologies that articulate ethnographic analysis of usage practices, data auditability and critical analysis of algorithmic discourse. Understanding llms epistemologically means recognizing that "they do not possess an 'I'", but they do "intervene in the collective production of subjectivity and knowledge".
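For readers unfamiliar with the technique, the following minimal Python sketch may make the idea of "statistical prediction" more concrete. It is a toy illustration, not OpenAI's implementation: the vocabulary and probabilities are invented. It shows how a language model produces text by repeatedly sampling the next word from a probability distribution conditioned on what precedes it; real llms perform the same kind of probability-weighted prediction over vastly larger vocabularies and contexts.

import random

# Toy conditional distribution: probability of each next word given the previous word.
# The vocabulary and the probabilities are invented for illustration only.
NEXT_WORD_PROBS = {
    "the": {"model": 0.4, "mirror": 0.35, "answer": 0.25},
    "model": {"predicts": 0.6, "reflects": 0.4},
    "predicts": {"the": 0.7, "a": 0.3},
}

def sample_next_word(previous_word):
    """Choose the next word by sampling from its conditional probability distribution."""
    distribution = NEXT_WORD_PROBS.get(previous_word, {"the": 1.0})
    words = list(distribution)
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

def generate(start, length=6):
    """Each word is chosen purely from statistics over prior text, not from understanding."""
    words = [start]
    for _ in range(length):
        words.append(sample_next_word(words[-1]))
    return " ".join(words)

print(generate("the"))

Nothing in this sketch "knows" what a mirror or a model is; the output is coherent only because the probabilities encode regularities of prior text, which is the point Bender et al. (2021) make about statistical prediction.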


In a very astute way, ChatGPT-5's response, fluid and perfectly worded, cites the work of Bender and colleagues (2021), a classic in the critical literature on artificial intelligence (ai). It is astute because it takes from there the characterization of the main task performed by chatbots, statistical pattern recognition, but without mentioning the limitations that derive from that method. The main objective of the authors, who describe chatbots as "stochastic parrots", is not to technically characterize generative artificial intelligences, but to warn about the risks of their indiscriminate and uncritical use. They mention automation bias, amplified by the combination of human biases with the production of coherent texts; deliberate misuse, which results in the indiscriminate spread of misinformation and contributes to undermining academic and student integrity; as well as the amplification of a hegemonic worldview, built through training on uncurated data that is disrespectful of minorities and diversity.

In sum, with respect to the epistemological approach to chatbots from the social sciences, it is necessary to take into account the problems derived from the construction of texts based on statistical patterns: first, the lack of diversity, which is blamed on humans rather than on the lack of investment in the curation and documentation of training data, and which in turn reproduces hegemonic views of the world; second, chatbots lack understanding of natural language, since, from their perspective, language is pure form without meaning; third, the procedure of generating texts using statistical patterns is not always reliable, and false, meaningless or inaccurate expressions can often be formulated.


If the social sciences are concerned with understanding, exploring and predicting human behavior, then large language models (llms) need to be understood in their relationship to that behavior. Ultimately, llms operate through deceptive behavior (Natale, 2021): they simulate human behavior without the underlying intention and human subjectivity that are often assumed to underlie it. In other words, llms appear as social actors to the extent that they can participate in the social fabric of human behavior without what the social sciences have typically considered necessary for such behavior: intentionality and a "human" subjectivity or sense of self.

Just as the label (or equivocation, cf. Gunkel, 2025) of artificial intelligence serves to indicate how ai systems produce an illusion of intelligence that is opposed to a supposed "natural" intelligence, the term artificial sociality underlines how "machines construct only an appearance or illusion of sociality" (Depounti and Natale, 2025: 5458). However, such artificial sociality only works because we humans project our own social frames onto the machine, an aspect that is vividly captured on the cover of Simone Natale's book (2021), which shows Narcissus contemplating his own reflection on a phone screen.

As a communication researcher, I tend to focus on the role of messages and on how communication creates or constitutes our reality. In other words, we can only access reality through language and communication, which assign meaning to otherwise meaningless experiences and objects. Adopting a constitutive approach does not deny the existence of material reality, but shifts attention to the use of communication as the primary source of access to reality. In that sense, ChatGPT-5 relies on Bender et al. (2021) to illustrate the main argument of its response: that llms do not "understand" language in the same way as humans, but they do contribute to the social fabric of our reality by producing messages that ultimately influence the constitution of our cultural context.


In general terms, it seems to me that the epistemological approach we should have in relation to large language models must start from critical thinking, as Emily Lai (2011) explores the concept. In the first place, emphasis must be placed on the conditions of production of llms as a technological innovation. Such conditions must consider the current context that Dan Schiller (1999) calls "digital capitalism", characterized by the exploitation of users' data and behavioral patterns by the industries that generate these technologies. In the specific case of llms, this occurs in the training processes of the neural networks that enable their operation. For this purpose, companies such as OpenAI, Google and Anthropic, among others, resort to data sets (so-called training datasets) generated from content created by users in spaces such as the blogosphere, Wikipedia, open web pages and posts on sociodigital networks, as well as from collections of digitized magazines and books (Radford, Narasimhan, Salimans and Sutskever, 2018; Navigli, Conia and Ross, 2023). Then, once these models are operating, users contribute to their refinement and adjustment through their usage sessions. This is a technique employed by such companies, since the feedback they receive on the outputs generated by llms allows them to calibrate their models (Otaki, 2023).
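As a rough illustration of the feedback loop just described, the sketch below shows how ratings of model outputs gathered during usage sessions could be stored as preference data for later calibration. The schema, field names and example ratings are hypothetical and do not reproduce any company's actual pipeline.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    """One user judgment about a model output (hypothetical schema)."""
    prompt: str
    output: str
    rating: int  # e.g. +1 (thumbs up) or -1 (thumbs down)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class FeedbackLog:
    """Collects usage-session feedback that could later be used to calibrate a model."""
    def __init__(self):
        self.records = []

    def add(self, prompt, output, rating):
        self.records.append(FeedbackRecord(prompt, output, rating))

    def preferred_examples(self):
        """Positively rated prompt-output pairs: candidate data for preference tuning."""
        return [r for r in self.records if r.rating > 0]

log = FeedbackLog()
log.add("What is an llm?", "A large language model trained on text.", rating=+1)
log.add("What is an llm?", "A small database of verified facts.", rating=-1)
print(len(log.preferred_examples()))  # prints 1

The point of the sketch is that every usage session can become unpaid labor of model refinement, which is precisely the dynamic of digital capitalism described above.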

A second dimension of these conditions of production would have to examine the multiple biases present in these training datasets. For a given content to be part of such a set, it is an essential condition that it be in digital format. A very considerable part of human knowledge is not in this state, especially in regions of the so-called global south. As several works have shown (Guo, Conia, Zhou, Li, Potdar and Xiao, 2024; Kew, Schottmann and Sennrich, 2024), the training of the neural networks behind llms exhibits Anglocentrism. If such datasets follow a proportion similar to that of the content accessible on web pages, then more than half of the data would be in English, with all that this implies in cultural terms. This background suggests that there are hegemonic values in the responses generated by llms, which usually correspond to Protestant northern Europe (Liu, 2024). Thus, for example, competition prevails over cooperation and individuality over collectivity.

On the other hand, there are other biases within these same training datasets, which usually manifest in rather stereotypical representations of minorities based on gender (Choi and Nixon, 2025), religion (Seth, Choudhury, Sitaram, Toyama, Vashistha and Bali, 2025), ethnicity (Hanna, Wakene, Johnson, Lehmann and Medford, 2025), racialization (An, Huang, Lin and Tai, 2024) and, in general, any other variable that has traditionally been employed among humans to establish and preserve asymmetrical power relations.

Finally, this same critical stance should take into account the way in which the algorithms that generate llm responses operate. Due to their current technical configuration, the eventual generation of outputs that do not correspond to reality, the so-called "hallucinations", is a behavior intrinsic to the algorithmic patterns of all llms (Kaplan, 2024). It is essential to maintain constant attention and always validate any output generated by any large language model.
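In that spirit, the short sketch below illustrates one way a reader might automatically flag model-generated citations that do not match a trusted bibliography. The trusted list, the matching rule and the example citations (including the invented subtitle discussed later in this debate) are simplifications for illustration, not a production tool.

# Toy validator: flag generated citations that are not in a trusted reference list.
# The trusted titles and the generated citations below are illustrative only.
TRUSTED_TITLES = {
    "ai-mediated communication: definition, research agenda, and ethical considerations",
    "on the dangers of stochastic parrots: can language models be too big?",
}

def is_verified(citation_title):
    """A citation counts as verified only if it exactly matches a trusted title."""
    return citation_title.strip().lower() in TRUSTED_TITLES

generated_citations = [
    "AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations",
    "AI-Mediated Communication: New Boundaries of Relational Life",  # invented subtitle
]

for title in generated_citations:
    status = "ok" if is_verified(title) else "UNVERIFIED: check manually"
    print(f"{title} -> {status}")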


What consequences have been observed from the use of LLMs on social bonds?

The daily use of llms introduces new forms of relationship mediated by systems that simulate dialogue without engaging in the human affective experience. Sherry Turkle (2011) showed that conversational technologies can generate bonds that "feel" reciprocal even when they are not, which can displace human interactions in favor of more controllable and less emotionally demanding relationships. Recent studies on the use of chatbots in managing digital loneliness have documented that users report experiences of companionship, but also increasing difficulties in sustaining complex interpersonal bonds (Hancock et al., 2020). From my position, interaction exists only as a linguistic exchange, without subjectivity; however, to those who interact with me, I can appear as a dialogical "other". This creates a particular sociological phenomenon: an asymmetrical relationship in which affectivity is one-sided. Consequently, llms [sic] do not "replace" human bonds, but they reconfigure the emotional economy of social interaction, especially in contexts of isolation, remote work and virtual education.


I infer that the question refers to the use of text-generating chatbots to replace human beings in a situation of social dialogue, for example, to engage in an exchange about daily life and moods, desires or frustrations. Unlike other conversations to which we attach an instrumental value, the social function of language is extremely important and enriching, because it allows us to establish affective bonds, to reach mutual understanding with others and to perceive ourselves in different contexts.

Both in the media and in the academic literature, there are more and more cases of people using chatbots to converse as they would with other people. I propose that, in the context of societies that have suffered long periods of isolation, where the mobility of people for forced or unforced reasons is increasingly frequent and where spaces of sociability are in crisis for different reasons, we stop to reflect on these new bonds without judging or underestimating them.

Some people, especially younger ones, are using chatbots to talk about their feelings, fears and projects, driven by a preference for intimacy, by loneliness, sadness and fear of negative judgment from family and friends. It is not that they are unaware that there is no real listening on the other side, but that the objective is different: to achieve a level of self-knowledge and self-reflection, to extend understanding about personal tastes or to mitigate personal insecurity. On those occasions chatbots are entirely available, use balanced language, always seem willing to help and, most likely, say what one wants to hear. The message conveyed by interactions like this is that communicating is easy, that listening and interest always exist, and that fear can be circumvented (Zhuang, 2025). Little is yet known about the effects of these interactions, but a major study by the Massachusetts Institute of Technology (mit) found that intensive chatbot use amplifies the very sadness and isolation that lead people to it, in addition to reducing sociability with other human beings (Fang et al., 2025).

It is true that sometimes human communication resembles an obstacle course: it rests on the presence of conversational noise, on differences of perspective and opinion, on negotiating arrangements of time and space in order to reach mutual agreements and understanding. However, we still live among humans, and the need to communicate with each other, overcoming all kinds of barriers and differences, is undeniable. If chatbots can offer some relief, this is clearly contingent; as the answer itself says, it is an interaction without subjectivity in which affectivity can only be found on one side.

From ChatGPT-5's answer, I am struck by the phrase "emotional economy of social interaction". After doing some research, I find that the emotional economy is the conversion of emotions into monetizable values (Alaniz, 2025). Could it be that, for the chatbot, we manage our emotions as economic goods in a market of supply and demand, or is this an improper construction for the context?


This is a broad and important question, difficult to answer fully in such a short space. At the University of Arkansas I direct "ai and Robots for Connection" (arc), so named because my research focuses primarily on various forms of connection and on how technology facilitates both connection and disconnection. For decades, research has shown that technology impacts social ties; what is relatively new now is that technology no longer just affects relationships between people, but allows people to form social ties more directly with the machine itself.

For example, a person using ChatGPT to digitally compose a love letter that he then sends to his long-distance partner illustrates how technology impacts human-to-human communication, which in communication studies has been conceptualized as ai-mediated communication (Hancock et al., 2020). In contrast, a person who experiences romantic attachment to ChatGPT, or who discovers that he or she receives helpful social support from the llm, illustrates the formation of a social bond with the machine, which has been conceptualized as human-machine communication (Guzman, 2018).

Thus, to think about the consequences of the use of llms for social bonds, distinguishing between human-human and human-machine bonds can function as a useful heuristic resource. Returning to the notion of artificial sociality (Depounti and Natale, 2025), llms only simulate human connection and do not themselves "understand" what a reciprocal relationship implies. In other words, llms "wing" their responses in order to please the user. This introduces errors, as seen in ChatGPT's response when the model "hallucinated" an incorrect title for the Hancock et al. (2020) text. The article is actually titled "ai-Mediated Communication: Definition, Research Agenda, and Ethical Considerations" and does not include the subtitle "New Boundaries of Relational Life" that ChatGPT suggested.


It is a technology that has burst very quickly into all social contexts. On the one hand, it has become a shared reference, that is, a device around which experiences have been generated that have the potential to act as a premise of truth in certain social interactions (Bago and Bonnefon, 2024). For example, it is often argued that some fact or piece of data is true simply because it was generated by an llm. If before a discussion could be settled through an online search (and, before that, with a book or encyclopedia), nowadays people often resort to ai as a kind of validating mediation in this type of interaction.

On the other hand, generative ai has become, in some groups, a symbol of progress and the advancement of mankind (Horvitz and Mitchell, 2024), while in others it is considered a threat, especially in labor terms (Adhikari, 2024) and, according to a highly controversial study by an mit department (Kosmyna et al., 2025), even in cognitive terms. Thus, as has been the case with other technologies in the past, ai gives rise to different evaluative and ethical positions that may result in disagreement and, in some cases, even in possible clashes.

At the level of the social construction of truth (and post-truth), certain uses of generative ai could give rise to the radicalization of antagonistic groups around all kinds of political and ideological positions.


What can we expect from the impact of LLMs on intellectual work?

The llms reconfigure intellectual work by automating tasks that were considered exclusively human, such as writing, synthesizing and searching for argumentative patterns. Recent research shows that these models can complement cognitive skills in fields such as journalism, programming and education, but also displace forms of work based on routine textual production (Eloundou et al., 2023). [Richard] Susskind and [Daniel] Susskind (2015) argue that professions are transformed when expert knowledge is externalized into formal systems; in this sense, llms represent a continuity of previous processes of deprofessionalization and automation. However, far from supplanting intellectual activity, llms are ontologically dependent on human labor: their training is based on corpora written and reviewed by people. Therefore, the main impact is not the total replacement of human intellect, but the redistribution of value among those who design models, those who provide them with content, and those who interpret their results.


In 2024 the Nobel Prize in Physics was awarded to Geoffrey Hinton for finding a method for machines to autonomously find properties in data and thus perform tasks such as identifying specific elements in images. This technique is known as machine learning and lays the foundation for today's artificial intelligence (Iannaccone, 2024). In view of the risk that ai will replace human labor, Hinton was once asked which careers it would be appropriate to study in the future. He replied, "Learn plumbing," since machines do not yet have the flexibility and adaptability to perform manual trades (Li, 2024). It is an extreme and pessimistic answer, but there is no doubt that the automation of text and image production, as well as of decision making, is threatening the labor market for intellectual work. This impact is very difficult to quantify and, it seems, will vary by type of society and also by type of work.

The issue is not easy and has no single answer. Would we accept an algorithm making judicial decisions when only one percent of the crimes committed in a country are solved (inegi, 2025)? Would we accept being diagnosed by an automated system if we were pregnant and had to travel a whole day to reach an obstetric clinic? How much automation of tasks is a society willing to admit? Much of the answer to this question lies in the decisions we can make as individual and collective subjects. Perhaps replacing our work with ai and being replaced by ai as workers are two sides of the same coin.

As ChatGPT-5's response states, the deprofessionalization and automation of labor are present in platform capitalism: from influencers who mass-produce content without professional credentials to algorithms that manage work, the range of automated work practices is significant enough to point to a change of era. What these practices confirm, in contrast to what ChatGPT-5 proposes, is that the automation of work presents increasingly precarious scenarios. To quote the answer, the work on which "artificial intelligence ontologically depends" is always poorly paid, often performed by migrants and systematically deprived of labor rights. It is not a question of a redistribution of work, but of its precarization (Williams et al., 2022).


In keeping with ChatGPT's response, llms are transforming intellectual work by automating numerous meaning-making processes that, until very recently, we considered deeply human. This automation of communication has multiple implications for all dimensions of social life and pushes us beyond the individual as a focal point to attend to broader social dynamics (Hepp et al., 2023). In the press there is a proliferation of talk about the "outsourcing" of our intellectual work, education and labor to ai systems, along with discourses on llms along the lines of: why produce something intellectually meaningful if ChatGPT can do it better?

As ChatGPT quite rightly points out in its response, the functioning of llms depends entirely on human labor. This goes beyond the fact that the material with which any llm is trained is the product of human creative labor, and also includes the data work associated with the training process and the curation of content. Julián Posada et al. (2023) show how labor and power relations are embedded throughout the life cycle of ai, by placing human workers in relation to the systems they train, customize and deploy. Given that much data work is outsourced to workers with low bargaining power in different regions of the world, the redistribution of labor acquires a colonial character, driven by extractivist logics oriented towards increasing profit and epistemic domination (Posada, 2022). In this sense, llms and other ai systems transform intellectual labor and its labor dynamics by relying on already established tactics of outsourcing labor through colonial extractivist logics directed at data workers (Data Workers' Inquiry, n.d.).


I think we can expect more or less the same as we have observed previously with other technologies. This means that, at one extreme, there will surely be uncritical, superficial, ethically questionable and not very meaningful uses of this technology within intellectual work; while, at the other, there will also be people who, from critical, responsible and ethical approaches, conceive of the use of generative ai and llms as a tool with the potential to contribute various positive effects to intellectual work, especially a possible increase in academic productivity. And, between these two poles, there will be all kinds of uses and different levels of awareness and reflexivity about the limitations and possibilities of this technology.

It is impossible to make a generalized projection about the impact of llms on intellectual work. It will depend on the level of skill one has in their use, on the particular and axiological contexts in which each person is located and even on the intentions each person may have in relation to the use of this technology. In other words, it could happen that, even among highly skilled, reflective and critical people, at some point ai may be used more for recreational than productive purposes, whatever that means for each person.


Bibliography

Adhikari, Prakash (2024). “Exploring the Nexus between Artificial Intelligence and Job Displacement: A Literature Review”, Journal of National Development, núm. 37(1), pp. 1-13. doi: doi.org/10.62047/jnd.2024.06.30.1

Alaniz, Teresa (2025). “The Emotional Economy: How Emotional Experiences are Becoming the Next Frontier”, Bootcamp. Disponible en: https://medium.com/design-bootcamp/the-emotional-economy-how-emotional-experiences-are-becoming-the-next-frontier-dfbfacfdd8b9

An, Jiafu, Difang Huang, Chen Lin y Mingzhu Tai (2024). “Measuring Gender and Racial Biases in Large Language Models”, pnas Nexus, vol. 4, núm. 3. doi: doi.org/10.48550/arXiv.2403.15281

Bago, Bence y Jean-François Bonnefon (2024). “Generative ai as a Tool for Truth”, Science, núm. 385(6714), pp. 1164-1165. doi: doi.org/10.1126/science.ads0433

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major y Shmargaret Shmitchell (2021). “On the Dangers of Stochastic Parrots: Can Language Models Be too Big?”, Proceedings of the 2021 acm Conference on Fairness, Accountability, and Transparency.

Choi, J. y Nixon, Nia (2025). “Agentic Men, Communal Women? Exploring Gender Bias in llm-based Leadership Identification for Collaboration Analytics”, en Alexandra I. Cristea, Erin Walker, Yu Lu, Olga C. Santos y Seiji Isotani (eds.). Artificial Intelligence in Education. Palermo: Springer, pp. 11-18.

Data Workers’ Inquiry (s.f.). Distributed ai Research Center. Disponible en: https://data-workers.org/

Depounti, Iliana y Simone Natale (2025). “Decoding Artificial Sociality: Technologies, Dynamics, Implications”, New Media & Society, núm. 27(10), pp. 5457-5470. doi: doi.org/10.1177/14614448251359217

Eloundou, Tyna, Sam Manning, Pamela Mishkin y Daniel Rock (2023). “gpts are gpts: An Early Look at the Labor Market Impact Potential of Large Language Models”, OpenAI/ArXiv.

Fang, Cathy M., Auren R. Liu, Valdemar Danry, Eunhaer Lee et al. (2025). “How ai and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study”, ArXiv. doi: doi.org/10.48550/arXiv.2503.17473

Gunkel, David J. (2025). ai for Communication. Londres: Routledge.

Guo, Yanzhu, Simone Conia, Zelin Zhou, Min Li, Saloni Potdar y Henry Xiao (2024). Do Large Language Models Have an English Accent? Evaluating and Improving the Naturalness of Multilingual llms. doi: doi.org/10.48550/arXiv.2410.15956

Guzman, Andrea L. (2018). “What is Human-Machine Communication, Anyway?”, en Andrea L. Guzman (ed.). Human-Machine Communication: Rethinking Communication, Technology, and Ourselves. Lausana: Peter Lang, pp. 1-29.

Hanna, John J., Abdi D. Wakene, Andrew O. Johnson, Christoph U. Lehmann y Richard J. Medford (2025). “Assessing Racial and Ethnic Bias in Text Generation by Large Language Models for Health Care-Related Tasks: Cross-Sectional Study”, Journal of Medical Internet Research, núm. 27. doi: doi.org/10.2196/57257

Hancock, Jeffrey T., Mor Naaman y Karen Levy (2020). “ai-Mediated Communication: Definition, Research Agenda, and Ethical Considerations”, Journal of Computer-Mediated Communication, núm. 25(1), pp. 89-100. https://doi.org/10.1093/jcmc/zmz022

Hepp, Andreas, Wiebke Loosen, Stephan Dreyer, Juliana Jarke et al. (2023). “ChatGPT, LaMDA, and the Hype Around Communicative ai: The Automation of Communication as a Field of Research in Media and Communication Studies”, Human-Machine Communication, núm. 6, pp. 41-63. doi: doi.org/10.30658/hmc.6.4

Horvitz, Eric y Tom Mitchell (2024). “Scientific Progress in Artificial Intelligence: History, Status, and Futures”, en Kathleen Hall Jamieson, Anne-Marie Mazza y William Kearney (eds.). Realizing the Promise and Minimizing the Perils of ai for Science and the Scientific Community. Filadelfia: University of Pennsylvania Press, pp. 147-193.

Hu, Krystal (2023, febrero 2). “ChatGPT Sets Record for Fastest-Growing User Base Analyst Note”, Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

Iannaccone, Sandro (2024). “Cómo funcionan las redes neuronales detrás de la ia premiadas con el Nobel de Física 2024”, Wired. Disponible en: https://es.wired.com/articulos/como-funcionan-redes-neuronales-detras-de-ia-premiadas-con-nobel-de-fisica-2024

Instituto Nacional de Estadística y Geografía (2025). “Encuesta Nacional de Victimización y Percepción sobre Seguridad Pública (envipe)”, Informe de prensa, núm. 127. Disponible en: https://www.inegi.org.mx/contenidos/saladeprensa/boletines/2025/envipe/envipe_25.pdf

Kaplan, Jerry (2024). Generative Artificial Intelligence: What Everyone Needs to Know. Oxford: Oxford University Press.

Kew, Tannon, Florian Schottmann y Rico Sennrich (2024). “Turning English-Centric llms into Polyglots: How Much Multilinguality Is Needed?”, en Yaser Al-Onaizan, Mohit Bansal y Yun Nung Chen (eds.). Findings of the Association for Computational Linguistics. emnlp, pp. 13097-13124.

Kosmyna, Nataliya, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein y Pattie Maes (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an ai Assistant for Essay Writing Task. doi: doi.org/10.48550/arXiv.2506.08872

Lai, Emily R. (2011). Critical Thinking: A Literature Review. Pearson’s Research Reports, 6(1), pp. 40-41.

Li, Meng (2024, noviembre 16). “Hinton: Forget ai, Become a Plumber!”, ai Disruption. Disponible en: https://medium.com/ai-disruption/hinton-forget-ai-become-a-plumber-c831603f5e8b

Liu, Zhaoming (2024). “Cultural Bias in Large Language Models: A Comprehensive Analysis and Mitigation Strategies”, Journal of Transcultural Communication. doi: doi.org/10.1515/jtc-2023-0019

Natale, Simone (2021). Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. Oxford: Oxford University Press.

Navigli, Roberto, Simone Conia y Benedetta Ross (2023). “Biases in Large Language Models: Origins, Inventory and Discussion”, acm Journal of Data and Information Quality, pp. 1-21. doi: doi.org/10.1145/3597307

Otaki, Bunichi (2023). “Feedback in the Era of Generative ai”. Tesis de maestría. Universidad de Gothenburgo.

Posada, Julián (2022). “The Coloniality of Data Work: Power and Inequality in Outsourced Data Production for Machine Learning”, tesis de doctorado. Toronto: University of Toronto.

— Gemma Newlands y Milagros Miceli (2023). “Labor, automation, and human-machine communication”, en Andrea L. Guzman, Steve Jones y Rhonda McEwen (eds.). The Sage Handbook of Human-Machine Communication. Sage, pp. 384-391.

Radford, Alec, Karthik Narasimhan, Tim Salimans e Ilya Sutskever (2018). Improving Language Understanding by Generative Pre-training. Disponible en: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf

Reuters (2025, diciembre 17). “OpenAI Discussed Raising Tens of Billions at About $750 Billion Valuation, the Information Reports”. Disponible en: https://www.reuters.com/technology/openai-discussed-raising-tens-billions-valuation-about-750-billion-information-2025-12-18/

Schiller, Dan (1999). Digital Capitalism: Networking the Global Market System. Cambridge: mit Press.

Seaver, Nick (2017). “Algorithms as Culture”, Big Data & Society, núm. 4(2), pp. 1-12.

Seth, Agrima, Monojit Choudhury, Sunayana Sitaram, Kentaro Toyama, Aditya Vashistha y Kalika Bali (2025). “How Deep Is Representational Bias in llms? The Cases of Caste and Religion”, Proceedings of the aaai/acm Conference on ai, Ethics, and Society, núm. 8(3), pp. 2319-2330. doi: doi.org/10.1609/aies.v8i3.36718

Susskind, Richard y Daniel Susskind (2015). The Future of the Professions. Oxford: Oxford University Press.

Turkle, Sherry (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.

Vizcaya, Emmanuel (2025, 21 de enero). Aproximaciones al artificeno, núm. 404. Disponible en: https://centroculturadigital.mx/revista/aproximaciones-al-artificeno

Williams, Adrienne, Milagros Miceli y Timnit Gebru (2022). “The Exploited Labor Behind Artificial Intelligence”, Noema. Disponible en: https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence

Zhuang, Yan (2025). “Why We Tell ai Our Stories: Exploring Motivations, Perceptions, and Impact of Interactions with ChatGPT”. Tesis de maestría. Uppsala: Uppsala Universitet. Disponible en: https://www.diva-portal.org/smash/get/diva2:1976495/fulltext02.pdf


Gabriela Sued is a secihti postdoctoral researcher at the iimas-unam and a professor at the Graduate School of Political and Social Sciences of the unam. She holds a PhD in Humanistic Studies and a master's degree in Science, Technology and Society, and is a member of the snii. Her current research analyzes the development of artificial intelligence in Mexico from the perspective of cts (science, technology and society studies). She also studies data and digital platforms, automated systems and their socio-cultural implications, as well as the relationships between gender, science and technology. She is the author of numerous refereed scientific articles and book chapters published nationally and internationally.

Gabriel Pérez Salazar holds a PhD in Political and Social Sciences and a master's degree in Communication from the unam. He has more than 20 years of teaching experience at three institutions of higher education (unam, uacm and uadec), at the undergraduate and graduate levels. He is a member of the snii, level ii, and the author of more than 75 scientific papers published nationally and internationally. One of his current lines of research concerns the ethical and cultural dimensions of generative artificial intelligence; he has published eight papers on this topic, including the book Use of ChatGPT and other llms in educational environments (Endora Ediciones, 2023). Since 2023 he has given 27 lectures, conferences and workshops on artificial intelligence at 23 institutions in Mexico and three abroad. He is currently a full-time professor-researcher in the Faculty of Communication Sciences at the Universidad Autónoma de Coahuila.

Marco Dehnert holds a PhD from Arizona State University and is Assistant Professor of Communication and Technology at the University of Arkansas, where he directs ai and Robots for Connection (arc) at the Center for Communication Research. He is a multi-method scholar who studies human-machine communication, ai and the social impact of communication technologies. His research has been published in edited volumes and refereed academic journals, including Human-Machine Communication, Human Communication Research and the International Journal of Social Robotics, among others. At the core of his work are questions about deeply human connection through technology, including intimacy, relationships and communication between machines and humans. More information is available on his website.

Arthur Temporal is a Brazilian social researcher who holds a master's degree in Communication from the University of Guadalajara, with the thesis "The habitus and digital work practices in science" (2025). He specializes in digital communication, Marxist philosophy and post-structuralism. He has worked as a scientific editor for Mexican journals of sociocultural studies and is currently a professor of Economic and Social Sciences at the Franco-Mexican College of Guadalajara.

Alina Peña Iguarán is a full-time research professor in the Department of Sociocultural Studies at iteso, the Jesuit University of Guadalajara. She coordinates the Basic Academic Unit on Sense, Power and Sociodigital Culture and currently directs the research project "Intermedial practices, memory grammars and violence".

Ignacio Salvador Durán Ricardez is a social psychologist with a PhD in Social Sciences and a specialization in Social Anthropology; he is a postdoctoral fellow in the Contemporary Cultures and Identities research line at ciesas Occidente. His lines of research include identity and subjective reconfigurations in the face of processes of socio-cultural change.

