Ethical Aspects and Social Responsibilities in the Use of Artificial Intelligence in Academic Research

Authors

  • Aurelia Glavan, Faculty of Psychology and Special Psychopedagogy, „Ion Creangă” State Pedagogical University of Chisinau, Moldova. Address: 1 Ion Creangă St., Chisinau, Moldova
  • Gabriela Repeșco, Doctoral School of Education Sciences, „Ion Creangă” State Pedagogical University of Chisinau, Moldova. Address: 1 Ion Creangă St., Chisinau, Moldova
  • Vadim Repeșco, Technical University of Moldova

Keywords:

artificial intelligence, ethics, academic research, digital competence

Abstract

The integration of artificial intelligence (AI) into academic research is significantly transforming the ways in which data are generated, analysed, and interpreted. While AI offers numerous benefits, such as the automation of repetitive processes, the analysis of large volumes of information, and the identification of relevant patterns, this technological advancement also raises a series of ethical dilemmas that cannot be ignored. Among the key concerns are the lack of algorithmic transparency, the risk of perpetuating biases through machine learning models, ambiguity regarding responsibility for AI-generated decisions, and issues related to data privacy.    
This article aims to explore these aspects, highlighting the tensions between technological innovation and ethical responsibility. At the same time, it examines a range of initiatives and best practices that can guide the responsible implementation of AI in research activities. By identifying clear courses of action, from the ethical training of researchers to institutional regulation, the article contributes to the development of a reflective framework essential for sustainable and ethically responsible academic research in the digital age.

References

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In: Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, pp.149–159.

Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.

Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms. arXiv preprint.

Esteva, A., et al. (2019). A guide to deep learning in healthcare. In: Nature Medicine, 25(1), pp.24-29.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People - An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. In: Minds and Machines, 28(4), pp. 689-707.

European Commission. (2021). Ethics guidelines for trustworthy AI. Publications Office of the European Union. Retrieved 15.08.2025 from https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines

Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning. fairmlbook.org

Brundage, M., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv preprint.

Brynjolfsson, E., & McAfee, A. (2016). The Second Machine Age. W. W. Norton & Company.

Calo, R. (2015). Robotics and the Lessons of Cyberlaw. In: California Law Review, 103(3), pp. 513-563.

Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. In: Nature, 538(7625), pp. 311-313.

Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprint.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

Lipton, Z. C. (2016). The Mythos of Model Interpretability. arXiv preprint.

Munafò, M. R., et al. (2017). A manifesto for reproducible science. In: Nature Human Behaviour, 1(1), 0021. doi: 10.1038/s41562-016-0021.

Samuelson, P. (2017). Intellectual Property and Data. In: UC Berkeley Public Law Research Paper.

Shokri, R., et al. (2017). Membership Inference Attacks Against Machine Learning Models. IEEE Symposium on Security and Privacy.

Smith, B., et al. (2020). Data Governance and Ethics in AI. In: Journal of AI Research, 68, pp.123-145.

Susskind, R., & Susskind, D. (2015). The Future of the Professions. Oxford University Press.

Voigt, P., & Von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR). Springer.

Baker, M., et al. (2019). Accelerating Scientific Discovery with Artificial Intelligence. In: Nature, 575(7781), pp.5-7.

Jordan, M. I., & Mitchell, T. M. (2015). Machine Learning: Trends, Perspectives, and Prospects. In: Science, 349(6245), pp. 255-260.

Kovanis, M., et al. (2016). The Global Burden of Journal Peer Review in the Biomedical Literature: Strong Imbalance in the Collective Enterprise. PLoS ONE, 11(11), e0166387.

Kroll, J. A., et al. (2016). Accountable Algorithms. In: University of Pennsylvania Law Review, 165(3), pp.633-705.

Luckin, R., et al. (2016). Intelligence Unleashed: An Argument for AI in Education. Pearson Education.

Rolnick, D., et al. (2019). Tackling Climate Change with Machine Learning. arXiv preprint.

Woolley, A. W., et al. (2010). Evidence for a Collective Intelligence Factor in the Performance of Human Groups. In: Science, 330(6004), pp.686-688.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. In: Nature Machine Intelligence, 1(9), pp.389-399.

Morley, J., et al. (2021). The ethics of AI in health care: A mapping review. In: Social Science & Medicine, 260, 113172.

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., ... & Schölkopf, B. (2019). Machine behaviour. In: Nature, 568(7753), pp. 477-486.

Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. In: Nature Medicine, 25(1), pp.44-56.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635.

Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2021). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. In: Science and Engineering Ethics, 27(4), pp.1-29.

Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp.195-200.

Published

2025-10-31

How to Cite

Glavan, A., Repeșco, G., & Repeșco, V. (2025). Ethical Aspects and Social Responsibilities in the Use of Artificial Intelligence in Academic Research. Didactica Danubiensis, 5(1), 356–365. Retrieved from https://dj.univ-danubius.ro/index.php/DD/article/view/3507

Section

Articles