The European Commission, in collaboration with countries of the European Research Area (ERA) and various stakeholders, has introduced a set of guidelines aimed at promoting the responsible use of generative artificial intelligence (AI) within European research.

With their capacity to accelerate the production of text, images and code, generative AI tools represent a valuable resource. However, it is crucial that researchers remain fully aware of their limitations and risks, including plagiarism, the disclosure of sensitive data and the biases inherent in the models. Drawing on the core values of research integrity, the guidelines aim to provide a framework for researchers, research institutions and funders, promoting a consistent approach to these technologies across Europe.

The key principles

The guidelines focus on several key points:

  • Researchers are advised to exercise caution when using generative AI in sensitive activities such as peer review or evaluation, and to ensure that privacy, confidentiality and intellectual property rights are respected.
  • Research institutions are responsible for supporting the informed use of generative AI and for overseeing the adoption and development of these technologies within their organisations.
  • Funding bodies are urged to encourage transparent use of generative AI by funded entities.

Given the rapidly evolving nature of generative AI, the guidelines will be reviewed periodically to incorporate feedback from the scientific community and the stakeholders involved.