UEFISCDI 4th Meeting. Conference "AI & Research Security".

The side event was attended by 11 stakeholders from the Associated Policy Authority, academia, and representatives of various funding instruments. List of participants:
Public policies expert UEFISCDI
EEA&Norway Grants Coordinator at UEFISCDI
Communication officer UEFISCDI
Open Science Hub Coordinator
PNRR / Smart Labs Coordinator
PEO projects coordinator at UEFISCDI
Media EDU
Ministry of Education and Research / Media EDU
Project expert UEFISCDI
CDREUROPE implementation expert at UEFISCDI
M100 Hub
The Conference
The conference centered on the use of AI in research, with an emphasis on the responsible use of AI tools in generating and evaluating project applications. Another main topic was identifying methods that could lead to the development of an institutional hub in research security – a need that has emerged in the recent international geopolitical context. On the second day of the conference, a special session was dedicated to digital responsibility. A presentation delivered to the audience included a general introduction to the CDR concept and its dimensions (based on the CDR Model), examples of good digital responsibility practices identified in the partners’ regions (open calls from the Tuscany region, the VALI platform in Hungary, the DigiNET project in Finland, cyber vouchers and the Smart Services Platform in France, M100 in Romania), and other cases. The presentation was followed by in-group discussions about the next steps: how can we integrate digital responsibility into future open calls? What is the correct approach to digital and AI regulations?
Main ideas of the discussion:
- The importance of adopting measures to regulate the use of digital technologies in an increasingly dynamic digital world cannot be disputed; however, too much regulation will impede technological progress.
- Competition between the EU, the USA, and China is heavily influenced by the regulations imposed in each country. The EU is a champion of responsible regulation of AI and digital technologies, and EU legislation is transposed into member states’ national legislation. Romania needs to adapt and update its legislation to facilitate the digital transition for SMEs and other organizations.
- The use of AI and digital tools in the evaluation of project applications is not permitted under current legislation, and UEFISCDI must adhere to it.
- Including more detailed digital responsibility criteria in open calls is difficult, as evaluators need competences and expertise in this field. Those with expertise are specialized either in sustainability or in IT/technology.
- Possible ways to include digital responsibility in call texts:
  - Ask applicants to indicate which sections of their project proposals were written with AI tools
  - Indicate in each open call which sections are allowed to be written with AI tools
  - Create a repository of AI tools that applicants can use to write sections of project proposals; the list should comprise only tested tools that have proven effective
  - Include recommendations and guidelines for the responsible use of AI and digital tools, emphasizing that they cannot be fully trusted and that any results must be verified before being included in the proposal
Conclusions
The speakers and the audience discussed the challenges of the current digital reality and the fact that, although regulations are needed to protect citizens and organizations, they must not impede technological progress. Many research organizations face new challenges and are under pressure to pay more attention to the dual use of their research results; there is therefore a need to develop a research security hub at the national level.
Current legislation does not permit the use of AI and digital tools for evaluating project applications. UEFISCDI must comply with existing regulations, even if this limits the efficiency gains that AI tools could bring to the evaluation process.
To ensure responsible AI usage in project applications, clear guidelines must be established across all funding instruments. Potential solutions include requiring applicants to disclose AI-written sections, specifying where and how AI usage is permitted, creating and continually updating a repository of tested AI tools, and providing guidelines on verifying AI-generated content.