I gave a keynote at the Nordic Data Stewardship Network’s Hybrid Seminar, a full-day event co-organised by CSC and NeIC, held at the Life Science Center in Espoo (and streamed online). The seminar brings together data stewards and research support professionals from across the Nordic countries for talks, workshops, and networking.
My talk was titled “Responsible use of Artificial Intelligence systems in research work and support: Ethics, integrity, and compliance”. The central argument is that AI tools in research are not a monolithic risk to be avoided or embraced wholesale; they need to be evaluated along several distinct axes:
- Ethics: Who benefits and who might be harmed? What biases are embedded in training data? What are the environmental costs?
- Research integrity: Where does AI-generated content sit relative to authorship, fabrication, and attribution norms? How do we maintain reproducibility when using probabilistic tools?
- Compliance: What do GDPR, institutional policies, and funders’ open science requirements actually say about the use of AI in research workflows and research support?
A recurring theme was that research support staff — data stewards, RDM specialists, librarians — are increasingly on the front line of these questions, fielding queries from researchers who are already using these tools and need practical, actionable guidance rather than abstract warnings.
The slides are available here (PDF).