I'd like to hear the scientific community talk more about research integrity, in particular, when promoting AI.

Take, for example, the European Code of Conduct for Research Integrity (https://allea.org/code-of-conduct/):

Reliability. Honesty. Respect. Accountability.

(1/4)

The European Code of Conduct for Research Integrity - ALLEA

From https://allea.org/code-of-conduct

“Reliability
in ensuring the quality of research, reflected in the design, methodology, analysis, and use of resources.”

“Honesty
in developing, undertaking, reviewing, reporting, and communicating research in a transparent, fair, full, and unbiased way.”

(2/4)


(…cont. from https://allea.org/code-of-conduct)

“Respect
for colleagues, research participants, research subjects, society, ecosystems, cultural heritage, and the environment.”

“Accountability
for the research from idea to publication, for its management and organisation, for training, supervision, and mentoring, and for its wider societal impacts.”

(3/4)


While I see value in AI tools for correcting spelling, assisting with code, or applying machine learning to well-defined data to find correlations that my three-dimensional reasoning cannot detect, I fail to see how broader AI solutions or autonomous agents can be compatible with the concept of #researchintegrity.

Reliability. Honesty. Respect. Accountability.

Am I just old-fashioned?

(end)