Final results 2025 CORE Call

The FNR is pleased to communicate the final results of the 2025 CORE Call. 44 projects have been retained for funding, representing an FNR commitment of 28.5 MEUR. Of the 44 projects, 13 are CORE Junior and 8 are CORE International. The deadline for the 2026 CORE Call is 21 April 2026, 14:00 CET.

CORE is the central programme of the FNR: a multi-annual thematic research programme whose prime objective is to strengthen the scientific quality of Luxembourg’s public research in the country’s research priorities.

Overview by National Research Priority

  • Industrial and Service Transformation: 23 projects
  • Sustainable and Responsible Development: 10 projects
  • Personalised Healthcare: 9 projects
  • 21st Century Education: 2 projects
Acronym: ProMulti
Title: Parents, Educators and Healthcare Professionals promoting Multilingualism
Host institution: University of Luxembourg
PI: Claudine Kirsch
National Research Priority: 21st Century Education
FNR funding: € 500,000
Keywords: Multilingualism, multilingual development, influencing factors, language beliefs, advice, practices, families, education professionals, healthcare professionals
Abstract: More and more children are growing up in multilingual environments and learning multiple languages from a young age on account of globalization and migration. At the same time, parents with a migration background as well as education and healthcare professionals are often found to be unsure of how to support children’s multilingual development. Language learning is influenced by exposure to and use of several languages. This learning environment, in turn, is informed by adults’ language choices, practices and beliefs. To foster language learning, it is therefore important to understand the beliefs of parents and professionals, the advice they seek and give, and the ways in which families apply this advice. The supportive and concerted efforts of parents and professionals have positive implications for children’s multilingual development, which affects their identity, well-being, academic achievement, and future employment and health prospects. In addition, language proficiency eases integration and strengthens social cohesion. There is, however, little research on parents’ and education and healthcare professionals’ language beliefs, management and practices, and the ways in which these interact. In this interdisciplinary research project, a team of researchers from the University of Luxembourg, supported by the Luxembourg Institute of Health, will examine factors that influence multilingual development and the support structures of parents and professionals. The project aims to investigate the language beliefs, management and practices of multilingual families and of education and healthcare professionals, identify similarities and differences, explore the interplay of these factors, and identify promising practices reported in multilingual resources for parents and professionals.
The participants are families with a migration background, children, education professionals (i.e., educators, teachers, SEN teachers) and healthcare professionals (i.e., midwives, pediatricians, speech and language therapists and occupational therapists). Using surveys, observations, interviews and focus groups, the team investigates the ways in which these actors experience, describe and support children’s multilingual development. Quantitative data will be analysed with correlation analyses, while qualitative data will undergo discourse, content and interaction analysis. The project findings will deepen our understanding of the language beliefs, advice and practices that promote or hinder language development, of multilingual development per se, and of the efforts of various actors and sectors to support multilingualism. This knowledge enables parents and professionals to identify children’s specific needs and patterns of effective (or ineffective) support at an early stage. Adults learn to adjust language choices, approaches and materials at home and in education and healthcare settings in ways that further language development efficiently and effectively. The opportunities for language exposure and meaningful interaction they provide and suggest shape children’s learning trajectories and language outcomes. Findings will therefore have significant implications for young children’s language development; for professionals’ understanding of the factors influencing this development; for the tailored, evidence-based support provided to parents; and for contextualized policies and training opportunities. Disseminated through publications, conferences and outreach activities, the findings will impact society by improving the cross-sector communication that addresses existing educational and health inequalities.
Acronym: EDO
Title: Universal public supply of EDucation and long-run Opportunities
Host institution: Luxembourg Institute of Socio-Economic Research (LISER)
PI: Francesco Andreoli
National Research Priority: 21st Century Education
FNR funding: € 766,000
Keywords: Education, inequality of opportunity, preschool, earnings, two-way fixed effects, distributional methods, welfare, sorting, tracking
Abstract: It is well understood from the social science literature that education has a causal impact on productivity, human capital and skills. For this reason, educational programs are at the forefront of the policy debate when it comes to identifying interventions that improve the human capital and skills of future generations and level the playing field. One of the main traits of (early or late) educational programs in Europe is the universality of their provision. The argument in support of universal intervention is the need to balance efficiency objectives with equity concerns, which seems at odds with the evidence on means-tested programs. This project’s goal is to uncover causal evidence about the economic returns of education reforms that take place at early and late stages of the educational career, in systems where education is publicly provided and access is granted universally. We measure returns on children’s human capital using information about their educational career when young and their labour market performance when adult. In this way, we can capture multiple aspects of human capital (such as non-cognitive skills) that could be affected by the education expansions and valued by markets. To do so, we rely on features of education reforms to draw causal evidence about the underlying economic returns. Importantly, the design of the reforms allows us to isolate their causal impact from the effects of confounders characterizing the socio-economic context of the country where the reform takes place. We hence rely on country case studies to identify and estimate empirical effects with causal content, which we use to test three hypotheses of interest. The hypotheses (H) look closely at the relation between parental resources/income, their children’s human capital, and the way education reforms interact with them. H1 is that large universal educational expansions affect human capital, heterogeneously so across parental income groups.
An education reform pre-distributes opportunities for the human capital and income of treated children when it affects children from disadvantaged backgrounds more intensively than children from advantaged backgrounds. H2 is that the pre-distributive implications of education expansions are stronger the earlier the expansion takes place. H3 considers instead whether widening the variety of education curricula in primary education (rather than widening access to, or the duration of, education) leads to efficiency and equity returns on early measures of human capital. The EDO project relies on evidence from a few country case studies in Europe, the choice being guided by the timing of reforms and the extent of accessibility of the education system. Our first case study is Italy, where we exploit the staggered implementation of the preschool state program, Scuola Materna, across cohorts and counties to identify the effect of early education expansion on children’s earnings. The second case study is the large education reform that took place in Sweden, of which we study the intergenerational consequences. Our third case study relies on Luxembourg, where we assess the effects of enlarging the supply of (universally available) primary education curricula on basic competences and the tracking choices of the treated. The EDO project will primarily deliver empirical scientific contributions, based on innovative data that blend social security data, register data and large population surveys. The data platform that will be created for Luxembourg has value not only for the project but also for future analysis of inequality in educational opportunities. This platform is managed by EDO members in close collaboration with the Observatoire National de l’Enfance, de la Jeunesse et de la Qualité Scolaire (OEJQS), which will also lead activities for disseminating evidence across stakeholders and the general public in Luxembourg and abroad.
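The identification strategy described above (a staggered reform rollout across cohorts and regions) is typically estimated with a two-way fixed-effects regression. The following is a minimal illustrative sketch, not EDO's actual code or data: the panel, the rollout pattern, and the effect size are all invented for the example.

```python
import numpy as np

# Hypothetical two-way fixed-effects (TWFE) setup on a balanced panel:
# outcome = unit effect + time effect + beta * treatment.
rng = np.random.default_rng(0)
n_units, n_periods, beta_true = 6, 8, 2.0

units = np.repeat(np.arange(n_units), n_periods)
periods = np.tile(np.arange(n_periods), n_units)

# Staggered rollout: unit i becomes treated from period i+2 onward.
treated = (periods >= units + 2).astype(float)

unit_fe = rng.normal(size=n_units)
time_fe = rng.normal(size=n_periods)
y = unit_fe[units] + time_fe[periods] + beta_true * treated

# OLS with unit and time dummies (drop one of each to avoid collinearity).
X = np.column_stack([
    treated,
    np.eye(n_units)[units][:, 1:],
    np.eye(n_periods)[periods][:, 1:],
    np.ones_like(treated),
])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0][0]
print(round(beta_hat, 6))  # recovers 2.0 in this noiseless setup
```

With noiseless synthetic data the treatment coefficient is recovered exactly; real applications add noise, controls, clustered inference, and checks for heterogeneous effects across parental income groups (H1).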
Acronym: CLARITY
Title: Community-Led Analysis and Reporting to Improve Trust and Transparency
Host institution: University of Luxembourg
PI: Gabriele Lenzini
National Research Priority: Industrial and Service Transformation
CORE International partner agency: DFG
FNR funding: € 694,000
Keywords: Disinformation, community-led fact-checking, crowdsourced fact-checking, social media
Abstract: Community-led fact-checking on social media is an emerging approach that seeks to reduce reliance on external fact-checking organizations by harnessing the collective intelligence of users. In this model, users contribute fact-checking comments and vote on the accuracy of others’ assessments. The most helpful contributions are displayed alongside the original post, providing readers with additional context to inform their opinions. To date, this approach has been fully implemented on only one major platform — Community Notes on X. However, other social media companies, such as Meta and YouTube, are actively exploring the integration of community fact-checking features on their platforms. While early studies highlight the potential of Community Notes in curbing the spread of misinformation and indicate a high level of user trust, critical questions remain. In particular, the long-term reliability of this approach is not yet established. Concerns have also been raised about its resilience to manipulation (e.g. coordinated attempts to label false information as true under the guise of fact-checks). Another critical issue concerns transparency: how should these fact-checking results be presented to ensure users understand and trust the process, while also encouraging critical thinking? The CLARITY project addresses these open challenges from an algorithmic and user-centered perspective. It will investigate effective strategies for implementing community-led fact-checking systems, develop approaches to ensure transparency, and identify best practices for presenting fact-checks in ways that foster user trust and critical engagement. Additionally, it will explore the limits and broader applications of this approach, including its potential use in areas beyond misinformation, such as hate speech detection.
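To make the manipulation-resilience concern concrete, here is a minimal, hypothetical sketch of a bridging-style display rule in the spirit of community fact-checking systems. It is not X's actual Community Notes scoring algorithm; the clusters, threshold, and ratings are invented for illustration.

```python
# Hypothetical rule: a note is displayed only if raters from *both*
# viewpoint clusters rate it helpful, which raises the bar for
# coordinated manipulation by a single group.
def note_status(ratings, threshold=0.6):
    """ratings: list of (cluster, helpful) pairs; clusters are 'A' or 'B'."""
    for cluster in ("A", "B"):
        votes = [helpful for c, helpful in ratings if c == cluster]
        if not votes or sum(votes) / len(votes) < threshold:
            return "needs more ratings"
    return "helpful"

broad = [("A", 1), ("A", 1), ("B", 1), ("B", 1), ("B", 0)]
one_sided = [("A", 1), ("A", 1), ("A", 1), ("B", 0)]
print(note_status(broad))      # helpful
print(note_status(one_sided))  # needs more ratings
```

Even this toy rule shows the trade-off CLARITY studies: requiring cross-cluster agreement resists one-sided manipulation but delays or suppresses legitimate notes when one cluster has not yet rated.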
AcronymTitleHost institutionPINational Research PriorityCORE JuniorCORE InterFNR funding
A(SM)2AAnalysing Security Measures/Security Measures for AnalyzabilityUniversity of LuxembourgMarcus VölpIndustrial and Service TransformationDFG € 564,000
Keywords: Real-time systems, cyber-physical systems, worst-case execution time analysis, security
Abstract: Safety-critical real-time and cyber-physical systems (CPS) regularly undergo stringent offline timing verification (worst-case execution time (WCET) and schedulability analysis) to ensure the timeliness of all interactions with the systems they control. While real-time systems used to be closed, isolated systems, this is no longer true for swarms of interacting robots, collaboratively driving vehicles, the power grid, and industrial automation and medical systems. Unfortunately, the safety of these networked systems, some of which are already part of our critical infrastructure, also depends crucially on their security, and in fact we already see a multitude of increasingly sophisticated cyberattacks by adversarial teams, including nation-state hackers, targeting these systems. The dilemma we are facing is that most of the advanced security measures at our disposal are not suitable for the resource-constrained safety-critical embedded systems in the above applications, which are subject to stringent timing verification. In this project, the Embedded Systems group of Universitaet Augsburg and the Critical and Extreme Security and Dependability group at SnT, University of Luxembourg, join forces to explore the interplay between security measures for resource-constrained safety-critical embedded systems and their timing analyzability. We investigate techniques such as address-space randomization, control-flow integrity mechanisms, and hardware and software solutions against transient execution attacks, to mention a few, and develop security mechanisms tailored to enhance the security of resource-constrained safety-critical systems.
AcronymTitleHost institutionPINational Research PriorityCORE JuniorCORE InterFNR funding
FREELYFrequency-enhanced verification and validation of CPSUniversity of LuxembourgDomenico BianculliIndustrial and Service TransformationFWF € 671,000
Keywords: Software engineering, verification, validation, cyber-physical systems, CPS, formal specifications, temporal logic, frequency-domain properties, runtime verification, monitoring, specification mining, testing, fault localization, failure explanation
Abstract: Cyber-Physical Systems (CPS) are at the core of safety-critical domains such as autonomous transportation, healthcare, smart grids, and industrial automation. Ensuring their correctness and reliability is of fundamental importance, as failures in these systems can lead to severe consequences. The verification and validation (V&V) of CPS relies on specification languages to unambiguously describe the desired behaviour. Using such languages, CPS executions can be automatically verified, thus enabling the automation of many V&V activities, such as testing and fault analysis. However, the use of specification languages for CPS V&V is challenging due to the interdisciplinary nature of CPS. Notably, traditional specification languages have been developed in the field of computer science and focus on time-domain properties, ensuring that events occur in the correct sequence within predefined time constraints. In CPS, however, many phenomena, such as oscillations in control systems, signal distortions in communication networks, and spectral anomalies in power systems, are best described in the frequency domain and cannot be adequately captured by time-based specifications alone. FREELY will fill this gap by investigating methods and techniques that enable the use of Time-Frequency Logic (TFL) for the V&V of CPS. TFL is a formal specification language that extends traditional temporal logics by incorporating frequency-domain operators, making it well suited for describing CPS behaviours that depend on combined time and frequency characteristics. However, TFL has so far only been proposed as a proof of concept, and, owing to the fundamental difference between the time and frequency domains, state-of-the-art time-based V&V methods and techniques cannot be directly applied to it. Thus, we identify the need to further develop the language and to investigate new V&V techniques that can leverage the hybrid time-frequency features of TFL.
FREELY will enable the use of TFL in the V&V of CPS by achieving the following key objectives: (1) development of methods to automatically infer TFL specifications from system execution traces (both template-based and template-free) and from natural language through Large Language Models, (2) design of offline and online monitoring techniques to verify system executions against TFL specifications, (3) development of novel test case generation techniques for verifying CPS against TFL specifications, including falsification testing, coverage-based testing, and property-based testing, and (4) development of methods for fault localization, failure explanation, and automated repair using TFL-based analysis. By enabling the use of TFL for specification inference, runtime monitoring, test case generation and fault analysis of CPS, FREELY will establish the foundation of frequency-enhanced specification-driven V&V of CPS. Methodologically, we will integrate techniques from runtime verification, control engineering, and software engineering to develop a comprehensive TFL-based V&V framework. We will incorporate control-theoretic frequency-based tools such as Bode plots, Nyquist plots, and spectral analysis to identify critical frequency regions linked to system failures. Furthermore, our fault diagnosis framework will map abnormal spectral behaviors to system components, facilitating failure explanation and targeted repair. Our techniques will be implemented in open-source tools and evaluated on real-world CPS applications, ensuring practical applicability. The outcomes will be disseminated through leading conferences and journals in software engineering, formal methods, and CPS, contributing to a broader adoption of frequency-enhanced V&V of CPS in industry and academia.
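As a rough illustration of the frequency-domain side of such monitoring (TFL itself is a formal logic and is not implemented here), the sketch below checks a trace against a simple spectral property: power near a forbidden oscillation frequency must stay below a bound. The signal, band, and threshold are invented for the example.

```python
import numpy as np

# Hypothetical offline monitor for a frequency-domain property:
# "spectral power in [f_lo, f_hi] Hz must not exceed max_power".
def violates_band_limit(trace, fs, f_lo, f_hi, max_power):
    """Return True if power in the band [f_lo, f_hi] Hz exceeds max_power."""
    spectrum = np.abs(np.fft.rfft(trace)) ** 2 / len(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].sum() > max_power

fs = 100.0                       # 100 Hz sampling rate
t = np.arange(0, 2.0, 1.0 / fs)  # 2-second execution trace
clean = np.sin(2 * np.pi * 1.0 * t)                  # 1 Hz nominal behaviour
faulty = clean + 0.8 * np.sin(2 * np.pi * 15.0 * t)  # 15 Hz oscillation fault

print(violates_band_limit(clean, fs, 10, 20, max_power=1.0))   # False
print(violates_band_limit(faulty, fs, 10, 20, max_power=1.0))  # True
```

A TFL-based monitor would additionally localize the violation in time (when the forbidden oscillation occurs), which a whole-trace FFT like this one cannot express.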
AcronymTitleHost institutionPINational Research PriorityCORE JuniorCORE InterFNR funding
TailorFibTailoring Natural Fiber Properties on Demand via Microstructural ReconstructionLuxembourg Institute of Science and Technology (LIST)Carlos Fuentes RojasIndustrial and Service TransformationFWF € 719,000
Keywords: Plant fibres, microstructural reconstruction, bamboo, fibre treatment
Abstract: Natural plant fibres, such as bamboo, offer a compelling combination of renewability and mechanical strength comparable to synthetic alternatives like glass fibres. However, their widespread adoption in structural applications is hindered by two persistent challenges: unpredictable performance due to inherent variability in lignin-cellulose interactions and middle lamella composition, and the limitations of conventional treatments that often degrade cellulose integrity or improve one property at the expense of others. Bamboo, with its uniquely high lignin content and densely packed cellulose microfibrils, exemplifies these issues. While its lignin-rich middle lamella is critical for stress transfer and environmental resilience, its porosity and hydrophilic nature lead to moisture sensitivity, dimensional instability, and thermal degradation—key barriers to high-performance applications. This project addresses these challenges through a novel biomimetic strategy that reconfigures bamboo’s interfibrillar architecture at the molecular level, enabling precise control over both physical and mechanical properties. By integrating advanced imaging and green chemistry, we pioneer a four-stage approach. First, high-resolution 3D digital reconstruction maps elementary fibre and middle lamella distribution at submicron scales, linking microstructural features to macroscale performance. Second, a patented extraction process selectively isolates the middle lamella using enzymatic-solvent treatments, preserving elementary fibre integrity. Third, capillary-driven interfacial engineering rebuilds the middle lamella with chitosan-lignin hybrids, mimicking the natural pectin-lignin synergy to balance rigidity and flexibility—directly tailoring mechanical behaviour. Fourth, targeted lignin glycidylation introduces covalent bonding sites, enhancing interfacial cohesion while enabling tuneable moisture resistance and thermal stability.
Central to this work is unravelling the structural-functional relationships governing cellulose-lignin interactions across hierarchical scales, which underpins the ability to systematically control fibre properties. AI-optimized micro-photogrammetry enables rapid 3D fibre analysis to decode failure mechanisms and correlate microstructural adjustments with performance outcomes. Closed-loop recycling of lignin from processing waste ensures sustainability while maintaining control over material composition. By redefining the middle lamella’s architecture and interfacial chemistry, this approach transforms natural variability into engineered predictability, achieving fibres with on-demand combinations of strength, toughness, hydrophobicity, and thermal resilience.
AcronymTitleHost institutionPINational Research PriorityCORE JuniorCORE InterFNR funding
PCS-GRAPHSIntegrating Situational Awareness, Planning and Control for Autonomous Robots using S-GraphsUniversity of LuxembourgHolger VoosIndustrial and Service Transformation – € 630,000
Keywords: Mobile robotics, situational awareness, motion planning and control, UAV, control engineering, machine learning
Abstract: Autonomous mobile robots must generally operate in complex and dynamic environments for extended periods. Robots operating in such scenarios need to continuously acquire comprehensive situational awareness to enable intelligent decision-making and autonomous task execution, ranging from safe collision-free navigation to object manipulation. Situational awareness (SA) has been a core robotics research area for decades and comprises the perception of the elements in the environment in space and time, the comprehension of their meaning, and the prediction of their future status. Very recent approaches, like our work on Situational Graphs (S-Graphs), have gone beyond traditional metric-semantic Simultaneous Localization and Mapping (SLAM) and enable robots to integrate relational knowledge into the SA model. While earlier approaches involve two steps (i.e., semantic SLAM + scene graph), S-Graphs merge geometric models of the environment generated by SLAM approaches with 3D scene graphs (and related semantic information) into a multilayered, jointly optimizable Factor Graph (FG). However, the S-Graph approach (like other advanced SA solutions) is not yet tightly connected with motion planning and control (MP&C), as required for autonomous robot operation. Some recent planning solutions use either FGs or 3D scene graphs as models of the environment for task and motion planning. With respect to control, recent approaches also combine optimal or model predictive control (MPC) with FGs. However, none of these planning and control approaches directly exploits the multilayered structure of advanced SA solutions like S-Graphs, with geometric information at lower levels and semantic information at higher levels. In addition, most existing solutions that apply graph-based environmental models to robotic MP&C are so far not able to adapt or improve their performance using learning approaches.
Here, the SA models, the MP&C algorithms, or both could in principle be improved by learning from data or from the robot’s real interaction with the environment, using the potential of advanced machine learning techniques such as reinforcement learning (RL). PCS-GRAPHS therefore intends to develop a novel approach for a tight integration of S-Graphs with MP&C algorithms that directly exploits the graph structure as well as the hierarchically layered SA information for a highly efficient, real-time capable solution. As a further extension, we will develop a solution where the S-Graph is converted into a learnable Factor Graph Neural Network (FGNN), and an RL-based approach will further improve the performance of the MP&C. We will test and validate the new solutions in two challenging real-world robotic use cases: (I) autonomous robotic navigation on a construction site, and (II) autonomous aerial grasping using a soft universal gripper on a UAV.
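The jointly optimizable factor graph at the heart of S-Graphs can be illustrated on a 1-D toy problem. This is not S-Graphs code; the poses, factors, and measurements are invented. Each factor contributes one row of a linear system, and all poses are estimated jointly, so a loop closure corrects accumulated odometry drift everywhere at once.

```python
import numpy as np

# Hypothetical 1-D pose chain: poses x0..x2 linked by a prior,
# two odometry factors, and one loop closure. Each factor is a row J x = z.
factors = [
    ((0,), [1.0], 0.0),          # prior: x0 = 0
    ((0, 1), [-1.0, 1.0], 1.0),  # odometry: x1 - x0 = 1.0
    ((1, 2), [-1.0, 1.0], 1.1),  # odometry: x2 - x1 = 1.1
    ((0, 2), [-1.0, 1.0], 2.0),  # loop closure: x2 - x0 = 2.0
]
J = np.zeros((len(factors), 3))
z = np.zeros(len(factors))
for row, (idx, coeffs, meas) in enumerate(factors):
    J[row, list(idx)] = coeffs
    z[row] = meas

# Joint optimization: least-squares over the whole graph at once.
x = np.linalg.lstsq(J, z, rcond=None)[0]
print(np.round(x, 3))  # reconciles odometry (2.1 total) with the loop closure (2.0)
```

Real S-Graphs add rooms, walls, and floors as higher layers of the same optimizable graph; the point here is only that every measurement constrains the joint estimate.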
AcronymTitleHost institutionPINational Research PriorityCORE JuniorCORE InterFNR funding
6G-AURAAI-Driven Architecture for Resilient and Sustainable 6G Unlicensed Radio CommunicationsLuxembourg Institute of Science and Technology (LIST)Ayat Zaki HindiIndustrial and Service TransformationYes € 557,000
Keywords: Wireless communications; 6G; Wi-Fi; Ultra-Wide Band (UWB); medium access; Spiking Neural Networks (SNNs); Information-Centric Networking (ICN)
Abstract: 6G-AURA investigates how unlicensed spectrum can be harnessed to enable the next generation of intelligent, high-performance Wireless Local Area Networks (WLANs) for industrial environments. As licensed spectrum remains costly and limited, the project focuses on sustainable, cost-effective alternatives like the newly opened 6 GHz band. To overcome the performance uncertainty typically associated with unlicensed access, 6G-AURA targets innovation at the MAC and networking layers—developing adaptive, AI-driven solutions that ensure reliability and efficiency under shared spectrum conditions. It integrates energy-efficient intelligence based on Spiking Neural Networks (SNNs), enabling real-time interference prediction and adaptive protocol behavior. The project also explores multi-RAT architectures combining Ultra-Wide Band (UWB) and advanced Wi-Fi technologies (e.g., Wi-Fi 6E/7/8) to support 6G-class services: hyper-reliable low-latency communications (HRLLC), immersive human-machine interaction (AR/XR), and massive IoT. Complementing this, 6G-AURA adopts Information-Centric Networking (ICN) to reduce overhead and enhance scalability in dense deployments. The project contributes to the long-term evolution of unlicensed networks and supports Europe’s transition toward intelligent, resilient, and human-centric Industry 5.0 systems.
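As background on the energy-efficient intelligence mentioned above, the sketch below implements a leaky integrate-and-fire (LIF) neuron, the basic unit of Spiking Neural Networks. It is a textbook illustration rather than project code; the leak factor, threshold, and input currents are invented. The key property is that computation happens only when spikes occur, which is what makes SNN hardware low-power.

```python
# Hypothetical leaky integrate-and-fire neuron: membrane potential leaks,
# integrates input current, and emits a spike (then resets) at threshold.
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return the output spike train for a sequence of input currents."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current  # leaky integration
        if v >= threshold:      # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Weak inputs decay away; a strong burst pushes the neuron over threshold.
print(lif_neuron([0.3, 0.3, 0.3, 0.9, 0.0, 0.0]))  # [0, 0, 0, 1, 0, 0]
```

An interference predictor would feed channel measurements into layers of such neurons, trained with spike-compatible methods, rather than into a conventional ANN.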
AcronymTitleHost institutionPINational Research PriorityCORE JuniorCORE InterFNR funding
ReadyVioSatResource Management in Satellite Communications with Dynamic Topologies: Machine Learning Algorithms for Variable Input-Output DimensionsUniversity of LuxembourgTi Ti NguyenIndustrial and Service TransformationYes € 657,000
Keywords: Satellite communication; resource management; machine learning; scalability; joint optimization and machine learning; machine learning for dynamic systems; algorithms for beamforming, spectrum management, and routing
Abstract: Satellite systems are evolving rapidly and are poised to revolutionize global connectivity. The transition to low Earth orbit (LEO) and very low Earth orbit (VLEO) satellites, along with the increasing deployment of small satellites and multi-orbit networks, presents several challenges. Effective resource management is crucial for maintaining the sustainability and performance of satellite communication (SatCom) networks. However, conventional resource management algorithms, which are designed for static or slow-changing networks, often struggle to handle the dynamic nature of modern satellite systems, which exhibit significant temporal and spatial variability. Although machine learning (ML) has been explored for resource management, many existing ML-based approaches assume that inference and training data have the same dimensions, limiting their adaptability to dynamically changing network conditions. This project, Resource Management in Satellite Communications with Dynamic Topologies: Machine Learning Algorithms for Variable Input-Output Dimensions (ReadyVioSat), aims to develop ML models specifically tailored for highly dynamic satellite networks. These models will accommodate varying network sizes and configurations without requiring complete retraining while remaining robust to changes in node order and network topologies. A key focus of the project is leveraging permutation equivariance (PE) and permutation invariance (PI) to design scalable ML models. However, simply ensuring that an ML model adheres to PE and PI properties does not automatically guarantee generalizability across different network sizes. To address this, the project will establish a structured framework that connects PI/PE principles to scalability. Beyond scalability, ReadyVioSat will also address feasibility, adaptability, generalizability, and efficiency of ML models.
This includes the synergy of ML and optimization, the integration of unified graph neural networks (GNN) and deep reinforcement learning (DRL), and the fusion of unsupervised learning with optimization techniques. By providing universal ML frameworks, ReadyVioSat will improve satellite communication performance, enabling a wide range of critical satellite services. Additionally, the project fosters cutting-edge research and development, driving technological advancement and innovation in Luxembourg.
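The permutation-equivariance idea can be illustrated with a DeepSets-style layer; this is a hypothetical sketch, not the project's models, and the dimensions and data are invented. Because the layer combines a per-node transform with a symmetric aggregate, reordering the nodes reorders the output identically, and the same weights apply to any network size without retraining.

```python
import numpy as np

# Hypothetical permutation-equivariant layer: f(X) = X·W1 + mean(X)·W2.
rng = np.random.default_rng(1)
d_in, d_out = 4, 3
W1 = rng.normal(size=(d_in, d_out))
W2 = rng.normal(size=(d_in, d_out))

def pe_layer(X):
    """Per-node transform plus a shared, order-independent aggregate term."""
    return X @ W1 + X.mean(axis=0, keepdims=True) @ W2

X = rng.normal(size=(7, d_in))  # e.g. 7 satellites, 4 features each
perm = rng.permutation(7)

# Equivariance: permuting the inputs permutes the outputs the same way.
print(np.allclose(pe_layer(X[perm]), pe_layer(X)[perm]))  # True

# The same layer runs unchanged on a different network size.
print(pe_layer(rng.normal(size=(12, d_in))).shape)  # (12, 3)
```

As the abstract notes, this structural property alone does not guarantee that performance generalizes across sizes, which is exactly the gap the project's scalability framework targets.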
AcronymTitleHost institutionPINational Research PriorityCORE JuniorCORE InterFNR funding
EONISEEfficient On-Board Neuromorphic Inference for Satellite-Based Earth Observation – Towards Spiking Vision-Language ModelsUniversity of LuxembourgThanh-Dung LeIndustrial and Service TransformationYes € 648,000
Keywords: Neuromorphic computing, Spiking Neural Networks, vision-language models, satellite Earth observation, low-power edge processing, robust inference, Task-Oriented Integrated Sensing, Computation, and Communication (TISC²)
Abstract: The EONISE project pioneers a new paradigm for satellite-based Earth observation by integrating neuromorphic computing with advanced vision-language processing and constraint-aware communication strategies. The project’s approach is threefold. First, it develops spiking neural network architectures that implement a novel spike-based self-attention mechanism and enable the direct, low-power training of vision-language models using surrogate gradient methods. This innovation promises significantly reduced latency, energy consumption, and memory usage compared to conventional ANN-based systems. Second, the initiative addresses real-world operational challenges by enhancing the robustness and interpretability of spiking VLMs. Through rigorous benchmarking and the incorporation of prediction-powered inference, the project ensures that these models maintain high semantic accuracy even under noisy, occluded, or domain-shifted conditions, while providing transparent, causal insight into their decision-making processes. Finally, EONISE introduces a task-oriented, constraint-aware framework for integrated onboard sensing, computation, and communication (TISC²). By dynamically optimizing image compression and transmission scheduling in line with satellite constraints such as bandwidth and latency, the project aims to significantly reduce downlink volumes without compromising task performance. Collectively, these innovations set the stage for energy-efficient, robust, and semantically rich onboard inference systems that can revolutionize how satellites process and transmit critical Earth observation data.
AcronymTitleHost institutionPINational Research PriorityCORE JuniorCORE InterFNR funding
ANANSIAndroid Malware Defense: Localization-Facilitated Evasive Behavior Analysis and CountermeasuresUniversity of LuxembourgJacques KleinIndustrial and Service Transformation € 721,000
Keywords: Mobile apps, Android applications, malware, app analysis
Abstract: Android’s dominance as the leading smartphone operating system makes it a prime target for malware attacks. Traditional malware detection methods, including rule-based heuristics and conventional machine learning, struggle to keep pace with evolving malware variants that use tactics such as code obfuscation and dynamic behavior triggering. As a result, detection models often lack robustness and fail to generalize to novel threats. ANANSI addresses this critical challenge by developing an adaptive and explainable machine learning pipeline for Android malware detection. The project targets two core limitations in existing systems: (1) poor localization of malicious code, and (2) the opaque, black-box nature of detection models that hinders interpretability and resilience. To overcome these limitations, ANANSI proposes two complementary strategies. The first strategy focuses on understanding malware behavior through precise localization of malicious payloads in Smali code and dynamic analysis of execution conditions. This will uncover evasive behaviors and enable the extraction of behaviorally meaningful features. The second strategy applies systematic robustness testing to assess and enhance model resilience. By combining explainable AI (XAI) with adversarial testing techniques, the project will trace model failures to specific malicious code regions and simulate evasion scenarios to expose decision-making flaws. Together, these strategies will support the development of a unified analysis pipeline that enables continuous improvement of detection models. The pipeline integrates insights from behavior analysis, model explanation, and robustness testing to produce reliable and adaptive malware detection systems. ANANSI’s outcomes will contribute novel tools, benchmarks, and frameworks for the research community and enhance the security posture of mobile platforms.

LAASeR – LLM-based Automated Analysis of Software Requirements
Host institution: Luxembourg Institute of Science and Technology (LIST) | PI: Renzo Degiovanni | National Research Priority: Industrial and Service Transformation | CORE Junior: Yes | FNR funding: € 692,000
Keywords: Natural Language Requirements, Trustworthy AI, Software Reliability and Trust
Abstract: Many major software errors stem from an incorrect understanding of the intended system behaviour. To deal with this issue, practitioners perform requirements analysis, an iterative process that attempts to identify problems and appropriate mitigation actions, with the goal of improving the software requirements prior to project development. Currently, Natural Language (NL) is the most widely adopted medium for capturing requirements, especially in industrial contexts. However, NL requirements are highly prone to quality problems, such as vagueness, ambiguity, inconsistency, and incompleteness. LAASeR will implement a “human-in-the-loop” framework to automatically assist engineers in the assessment and improvement of NL requirements specifications. To do so, LAASeR will rely on the prediction and text-generation capabilities of generative AI models (LLMs) to identify and resolve quality issues in NL requirements. It will combine scalable techniques from software testing, e.g. differential and metamorphic testing, to control LLM outputs, avoiding hallucinations and reducing false positives, both common issues in this context.
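The metamorphic-testing idea can be sketched in a few lines: if a meaning-preserving paraphrase of a requirement changes the analyzer's verdict, the judgement is unstable and should not be trusted. Here the LLM is stubbed by a keyword heuristic; the vague-term list, paraphrase rule, and function names are illustrative assumptions, not LAASeR's design.

```python
# Sketch of metamorphic testing for an LLM-based requirements analyzer.
# The "LLM" is a deterministic keyword stub; everything here is invented.

VAGUE_TERMS = {"fast", "user-friendly", "appropriate", "efficient"}

def llm_flags_vague(requirement: str) -> bool:
    """Stand-in for an LLM judging whether a requirement is vague."""
    words = requirement.lower().replace(",", " ").split()
    return any(t in words for t in VAGUE_TERMS)

def paraphrase(requirement: str) -> str:
    """Meaning-preserving rewrite used as the metamorphic transformation."""
    return requirement.replace("shall", "must")

def metamorphic_check(requirement: str) -> bool:
    """Relation: a meaning-preserving paraphrase must not change the verdict.
    A violation signals an unstable (possibly hallucinated) judgement."""
    return llm_flags_vague(requirement) == llm_flags_vague(paraphrase(requirement))

req = "The system shall respond fast to user queries"
print(llm_flags_vague(req))     # True: 'fast' is vague
print(metamorphic_check(req))   # True: verdict stable under paraphrase
```

With a real LLM in place of the stub, violated relations flag outputs for human review rather than being accepted blindly.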

INQUIRE – INtegrating QUalitative and quantItative REasoning
Host institution: University of Luxembourg | PI: Apostolos Tzimoulis | National Research Priority: Industrial and Service Transformation | CORE Junior: Yes | FNR funding: € 503,000
Keywords: algebraic proof theory, analytic calculi, probabilistic many-valued logic, substructural logics, residuated lattices
Abstract: Integrating qualitative (logical) and quantitative (probabilistic) reasoning has been a major item of the research agenda in logic and mathematics since Boole’s Laws of Thought. Various probabilistic logics have been introduced to capture this connection, which have been invaluable in such areas as Bayesian epistemology, decision theory, knowledge/belief-representation and update, Dempster-Shafer theory, and quantum theory, and which are presently of paramount importance for the development of next-generation AI. The mathematical development of probabilistic logics is uneven: while their model-theoretic and algebraic treatment is advanced, their proof-theoretic treatment is essentially unexplored. The main aim of this proposal is to develop proof calculi for families of logics specifically designed to describe, integrate, and combine qualitative and quantitative reasoning in different contexts, especially those social and technological contexts involving decision making under uncertainty. I will address this aim by further developing, expanding, and combining methodologies pertaining to duality theory, algebraic proof theory, and analytic calculi, which I have developed and successfully deployed for several other logics. The main theoretical benefit of this project is extending the scope of proof theory to provide probabilistic logics with an essential component of their mathematical environment. Its main practical benefit is providing researchers in the social sciences with bespoke formal tools for modelling and analysing complex scenarios involving decision-making under uncertainty, and researchers in AI with bespoke formal tools for, e.g., injecting formal relations on (hyper-)graphs, and thus designing learning algorithms better able to handle rules, social behaviour, decisions, and reasoning.

GALICIA – Graph Active Learning to Improve Causal Inference Applications
Host institution: University of Luxembourg | PI: Georgios Panagopoulos | National Research Priority: Industrial and Service Transformation | CORE Junior: Yes | FNR funding: € 499,000
Keywords: Causal machine learning, graph neural networks, counterfactual estimation, adaptive experimental design, doubly robust estimation
Abstract: The GALICIA project addresses the critical need to extrapolate causal conclusions in limited sample size regimes. Traditional causal inference relies on extensive randomized controlled trials or large-scale observational studies. However, such data collection is costly, time-consuming, or risky in domains like healthcare and e-commerce due to the nature of the interventions. Overcoming this challenge is crucial for advancing experimentation practices and has broad implications for any field where decisions rely on causal inference. GALICIA aims to bridge this gap by developing novel causal machine learning techniques that enhance treatment effect estimation for adaptive experiments with budget constraints and treatment assignment policies with limited supervision. A key component of GALICIA is the integration of graph-based learning into causal machine learning models. Graph-based learning methods, such as graph neural networks, excel in semi-supervised learning by leveraging relational structures to propagate information across samples participating in an experiment and samples that are under consideration. GALICIA will develop doubly robust estimators with graph-based outcome and propensity models to improve causal estimates in adaptive experiments with limited data. Additionally, we will incorporate graph-based methods into causal meta-learners to enhance counterfactual estimation in both observational and experimental settings. Beyond estimation, GALICIA will advance treatment assignment policies by incorporating active learning into decision-making. Existing policy learning methods assume a fixed, well-trained causal effect model, overlooking opportunities for iterative improvement. By integrating active learning, GALICIA will develop adaptive policy learning strategies that balance exploration (identifying informative samples to refine the model) and exploitation (assigning treatments based on the model’s predictions), ultimately enhancing long-term policy performance in dynamic environments. Overall, GALICIA’s contributions will extend the applicability of causal machine learning to real-world settings with budget constraints, influencing both theoretical advancements and practical implementations in healthcare, e-commerce, and beyond. The project aims to address fundamental operational constraints, spanning from hypothesis testing to policy-making and the real-world application of causal conclusions. Our research will be disseminated through open-source libraries, benchmark datasets, and publications in top conferences and journals to maximize impact and adoption.
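The doubly robust construction the abstract mentions is the augmented inverse-propensity-weighted (AIPW) estimator; GALICIA's contribution is to plug graph-based outcome and propensity models into it. A minimal sketch on synthetic data, where the outcome models and propensity are illustrative stand-ins:

```python
# Toy AIPW (doubly robust) estimator of the average treatment effect.
# Data, outcome models mu0/mu1, and propensity e are synthetic.

def aipw_ate(data, mu0, mu1, e):
    """Doubly robust ATE estimate.
    data: list of (x, t, y); mu0/mu1: outcome models; e: propensity model."""
    terms = []
    for x, t, y in data:
        m0, m1, p = mu0(x), mu1(x), e(x)
        terms.append(m1 - m0
                     + t * (y - m1) / p
                     - (1 - t) * (y - m0) / (1 - p))
    return sum(terms) / len(terms)

# Synthetic world in which the true effect is +2 for every unit.
data = [(0.2, 1, 3.2), (0.8, 0, 1.1), (0.5, 1, 3.0), (0.4, 0, 0.9)]
mu0 = lambda x: 1.0   # outcome model under control
mu1 = lambda x: 3.0   # outcome model under treatment
e = lambda x: 0.5     # randomized-experiment propensity

print(round(aipw_ate(data, mu0, mu1, e), 3))   # → 2.1
```

The "doubly robust" property is that the estimate stays consistent if either the outcome models or the propensity model is correct; in GALICIA both would be graph neural networks propagating information across related samples.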

RECALL – Reasoning and Explaining CAusality and Liability through Legal Ai
Host institution: University of Luxembourg | PI: Réka Markovich | National Research Priority: Industrial and Service Transformation | FNR funding: € 636,000
Keywords: AI&Law, computational law, explainable AI, formal argumentation, LLMs, symbolic and subsymbolic AI, causation, liability, legal explanations
Abstract: RECALL is an interdisciplinary research project in computational law and explainable AI, integrating computational argumentation and Large Language Models (LLMs) to reason about and explain causality and liability distribution in legal contexts. Legal liability hinges not only on establishing a direct causal link between actions and the harm but also on interpreting a myriad of non-causal factors, such as whether the harm was foreseeable. Argumentation has been used to provide explainability to AI, offering a step-by-step explanation of how an AI system reaches a decision; it can support reasoning under uncertainty and can find solutions when faced with conflicting information. However, existing techniques used in computational argumentation do not adequately address causality and responsibility, even though human argumentation, especially in the law, often deals with these notions. For instance, formal models of argumentation that capture forms of defeasible reasoning do not use proper representations of causal knowledge and are therefore unable to reason about causality, much less about legal liability. RECALL addresses this research gap by developing a new formal model of argumentation, aimed at emulating and understanding argumentation about causality. The formal model of argumentation will be built on Pearl’s structural causal models. Complementing this, we will use the advanced capabilities of LLMs to comprehend and articulate the main non-causal element critical to legal liability, i.e., foreseeability, thus enriching the argumentation with a deeper, more nuanced understanding of legal liability. By synergizing the explicit, rule-based reasoning of computational argumentation with the contextual, nuanced comprehension of LLMs, this initiative aims to provide a comprehensive, multifaceted approach to legal liability attribution and hence to explanations in the law. To validate our approach, we will conduct a comprehensive evaluation using a dataset of legal cases. This will not only demonstrate the efficacy of our methods but also highlight their potential to advance the field of legal AI.
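The kind of formal model RECALL builds on can be illustrated with a Dung-style abstract argumentation framework and its grounded semantics (accept exactly the arguments whose attackers are all defeated). The arguments and attack relation below are an invented liability toy example, not from the project.

```python
# Minimal Dung-style argumentation framework with grounded semantics.
# Argument names and the attack relation are an invented example.

def grounded_extension(args, attacks):
    """Iteratively accept arguments whose attackers are all defeated;
    arguments attacked by an accepted argument become defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:   # every attacker already defeated
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

args = {"liable", "no_causation", "expert_report"}
attacks = {("no_causation", "liable"),       # causation is disputed...
           ("expert_report", "no_causation")}  # ...but an expert report rebuts
print(sorted(grounded_extension(args, attacks)))   # ['expert_report', 'liable']
```

RECALL's extension would attach structural-causal-model content to such arguments, so that the attack relation itself can track causal (and, via LLMs, foreseeability) reasoning rather than being given by hand.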

BRIDGE – Bug Report Intelligence for software Development in the Generative AI Era
Host institution: University of Luxembourg | PI: Tegawendé François D’Assise Bissyandé | National Research Priority: Industrial and Service Transformation | FNR funding: € 498,000
Keywords: Bug reports, Generative AI, Coding Assistant, Program repair, LLM, Foundation models
Abstract: Software bugs impose massive economic costs and significantly impact user experience. Bug reports are critical communication channels between users and developers, yet they often lack the detail needed for efficient resolution. This research proposes a novel framework leveraging recent breakthroughs in Large Language Models (LLMs) and the Chain of Draft reasoning paradigm to transform bug report processing in the generative AI era. The BRIDGE project introduces innovative approaches to enhance, analyze, and exploit bug reports through: (1) LLM-powered processing that converts verbose, unstructured reports into concise, information-dense representations; (2) bimodal analysis techniques that bridge the semantic gap between natural language descriptions and code elements; (3) adaptive instrumentation guided by LLMs to focus on relevant code sections; and (4) bug localization and repair methods that leverage enhanced bug reports. We hypothesize that these techniques will significantly reduce the time from bug report to fix and establish bug reports as critical feedback mechanisms for improving AI-assisted software development. The project integrates advanced NLP, program analysis, and machine learning to address longstanding challenges in software maintenance. Our comprehensive evaluation methodology will measure improvements in bug resolution efficiency across open-source projects and industrial applications, advancing the state-of-the-art in automated program repair and establishing new paradigms for human-AI collaboration in software engineering.
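For context, the classic information-retrieval baseline for bug localization, which point (2) aims to surpass, simply ranks source files by lexical overlap with the report. A toy version, with invented file names and report text:

```python
# Baseline lexical bug localization: rank files by token overlap with the
# bug report. Report text and file contents are invented examples.

def tokens(text):
    return {w.strip(".,():").lower() for w in text.split() if len(w) > 2}

def rank_files(report, files):
    """Return (file, score) pairs sorted by descending report/file overlap."""
    rep = tokens(report)
    scored = [(name, len(rep & tokens(body))) for name, body in files.items()]
    return sorted(scored, key=lambda p: (-p[1], p[0]))

report = "Crash when parsing empty config file in loadConfig"
files = {
    "config.py": "def loadConfig(path): parse config file contents",
    "ui.py": "def render(button): draw the main window",
}
print(rank_files(report, files)[0][0])   # 'config.py' ranks first
```

The semantic gap is exactly what this baseline misses: a report saying "app freezes" shares no tokens with a file implementing a deadlock, which is where BRIDGE's LLM-based bimodal analysis comes in.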

SHADEREV – Understanding shadow reserves
Host institution: University of Luxembourg | PI: Lidan Zhang | National Research Priority: Industrial and Service Transformation | CORE Junior: Yes | FNR funding: € 579,000
Keywords: International finance, foreign exchange reserve, geopolitics
Abstract: SHADEREV investigates shadow reserves—state-controlled foreign currency assets outside official reserve accounts. These reserves operate outside standard monetary reporting and evade international financial surveillance, despite their potential role in buffering liquidity, stabilising exchange rates, and influencing cross-border capital flows. The project will construct the first cross-country database dedicated to identifying and quantifying shadow reserves by systematically detecting discrepancies between reported foreign exchange reserves and external asset positions drawn from portfolio holdings, banking statistics, and derivatives exposures. SHADEREV will transform detailed financial records and regulatory datasets—specifically the IMF Coordinated Portfolio Investment Survey (CPIS), BIS Locational Banking Statistics (LBS), U.S. Treasury International Capital (TIC) System data, and BIS OTC Derivatives Reports—into coherent, dynamic estimates of off-balance-sheet foreign asset accumulation. This comprehensive empirical analysis will span the period from 2000 to 2023 and will focus on countries that share notable characteristics, such as persistent current account surpluses, state-dominated financial structures, and substantial exposure to geopolitical or commodity-related vulnerabilities. The project evaluates whether shadow reserves serve stabilising functions comparable to official reserves, particularly during periods of financial stress, economic sanctions, and constrained monetary policy environments. It investigates in detail whether shadow reserves mitigate capital flow volatility by providing alternative channels for liquidity management during market disruptions and external shocks. The analysis explores their role in enabling foreign exchange interventions through indirect mechanisms, such as the deployment of derivatives and swap agreements, without direct central bank involvement. Additionally, SHADEREV examines whether shadow reserves help stabilise domestic financial markets by buffering against sudden stops in capital inflows and outflows, thus contributing to broader macroeconomic stability. Furthermore, the project assesses whether governments strategically accumulate shadow reserves to reinforce economic autonomy and enhance bargaining power in international financial diplomacy. SHADEREV explores whether fluctuations in shadow reserve positions are systematically associated with shifts in trade policies, bilateral negotiations, or diplomatic posturing—highlighting their potential role as instruments of financial statecraft and geopolitical strategy. By situating shadow reserves within these broader contexts, SHADEREV seeks to clarify how concealed financial strategies underpin national responses to geopolitical pressures and economic uncertainties. Academically, SHADEREV advances the scholarship on financial opacity, sovereign asset management practices, and international reserve behaviour by introducing an innovative, empirically robust methodology for estimating shadow reserves. It will establish new empirical insights into the macro-financial dynamics of hidden sovereign assets across varying regulatory environments and market conditions. From a policy perspective, SHADEREV enhances macro-financial surveillance capabilities by developing systematic tools for monitoring concealed reserve accumulations. Within the Luxembourg context, the project aligns closely with national priorities aimed at enhancing external risk assessment capacities and financial oversight frameworks, including the monitoring of cross-border financial exposures and anti-money laundering enforcement. Ultimately, the database and analytical toolkit developed by SHADEREV will provide essential resources for researchers, policymakers, regulators, and financial analysts engaged in international finance and geopolitical strategy.
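The core discrepancy measure can be stated very simply: sum the foreign asset positions visible in counterpart data and subtract the officially reported reserve stock. All figures and the country setup below are hypothetical illustrations, not project estimates.

```python
# Stylised version of SHADEREV's discrepancy measure: externally observed
# positions minus reported official reserves. All figures (bn USD) invented.

def shadow_reserve_proxy(official_reserves, external_positions):
    """Sum positions visible in counterpart data (CPIS, BIS LBS, TIC,
    OTC derivatives) and subtract the officially reported reserve stock."""
    observed = sum(external_positions.values())
    return observed - official_reserves

positions = {"cpis_portfolio": 310.0, "bis_bank_deposits": 120.0,
             "tic_treasuries": 95.0, "otc_derivatives": 25.0}
gap = shadow_reserve_proxy(480.0, positions)
print(gap)   # 70.0: unexplained state-linked foreign assets, a shadow proxy
```

The project's actual methodology has to handle valuation effects, coverage gaps, and attribution of state control; this sketch only shows the accounting identity it starts from.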

SYNERGIX – SYstemic firm traNsformation dEteRmined by Green and diGItal innovations: what are the key X-drivers behind this change?
Host institution: Luxembourg Institute of Socio-Economic Research (LISER) | PI: Michela Gianna Bia | National Research Priority: Industrial and Service Transformation | CORE Junior: No | FNR funding: € 664,000
Keywords: Artificial Intelligence; Green Technology; Skills; Organizational Transformation; Workforce Composition; Mediation Analysis
Abstract: The digital and green transitions present profound challenges for firms across Europe, requiring adaptation not only in technology but also in organizational practices, workforce skills, and managerial strategies. Despite growing enthusiasm around artificial intelligence (AI) and green technology, the actual process of adoption remains uneven, and its implications for firm performance and workforce transformation are not yet well understood. The SYNERGIX project explores how firms adopt and integrate AI and green technology, with a particular focus on organizational restructuring, workforce composition, and management practices. Building on lessons from earlier technological shifts (e.g., ICT diffusion) and recent evidence on AI (Agrawal, 2023; Brynjolfsson & Raymond, 2025, among others), we hypothesize that successful adoption requires more than technological deployment—it demands strategic leadership, workforce upskilling, and often a fundamental rethinking of internal structures. While global surveys (e.g., McKinsey, 2025) report that over 75% of firms have adopted some form of AI, closer analysis—especially in sectors like manufacturing—reveals much lower effective adoption rates (e.g., <10% in 2018 in the U.S., McElheran et al., 2024). Adoption is often driven by large firms, typically where top management is directly involved. However, substantial barriers—technological, organizational, and skill-related—still hinder broader diffusion. Using Norway as a case study, this project leverages rich administrative data on technology use, workforce structure, educational profiles, and firm performance. Norway is particularly suitable given its high AI adoption rates (32%, Eurostat 2024), leadership in green technology, and economic similarity to Luxembourg—small, wealthy, and primarily technology-adopting. These parallels make the findings relevant for informing Luxembourg’s national research priorities, which emphasize AI, digital transformation, sustainability, and workforce development. The project is structured around three work packages (WPs): WP1 investigates the key drivers of AI and green technology adoption in the firm, focusing on workforce skills, managerial characteristics, organizational structure, and sectoral dynamics. Building on the identification of these drivers, WP2 estimates the effects of innovation adoption on firm performance, examining whether these technologies are associated with automation, skill upgrading, or strategic organizational change within firms. WP3 explores the impacts on internal firm dynamics, including organizational hierarchies, wage differentials, and the role of managerial expertise, and examines whether these changes also mediate the effects on firm productivity (causal mediation analysis). Finally, it compares the patterns observed between green and AI adopters. By clarifying how firms implement and benefit from these technologies, the research contributes to both academic understanding and practical policymaking. It informs strategies to support effective adoption—such as targeted R&D policies, training programs, and governance reforms—and addresses potential risks like skill mismatches and inequality. For Luxembourg, the findings will support evidence-based policy in key areas of competitiveness, digitalization, and sustainable growth.

BRIDGEN – Extending bridge life with monitoring and a regenerative strengthening technology
Host institution: University of Luxembourg | PI: Numa Bertola | National Research Priority: Industrial and Service Transformation | CORE International: FRQ | FNR funding: € 449,000
Keywords: Infrastructure Management, Structural Identification, Bridge monitoring, UHPFRC, UHP-FRCM, Value of information
Abstract: The management of existing bridges is a crucial challenge, as many structures are reaching their theoretical end of life. As replacing all aging infrastructure is neither environmentally nor economically viable, new regenerative intervention schemes are essential to avoid unnecessary demolition. However, current decision-making relies heavily on subjective visual inspections and simplified recalculations, often resulting in premature replacement. Although recent research has introduced data-informed evaluation methods and advanced strengthening techniques, their adoption in practice remains limited due to the absence of an integrated framework. The BRIDGEN project proposes a holistic approach to bridge management, combining sensor-based inspection, data-driven diagnostics, and regenerative intervention strategies. By leveraging the interdisciplinary expertise of the PIs, the project will establish a full-cycle solution—from monitoring to intervention—using data-informed structural assessment and intervention with Ultra-High-Performance Fiber-Reinforced Cementitious Matrix (UHP-FRCM) materials. First, a structural performance monitoring methodology will be developed to quantify structural capacity accurately. Then, a novel strengthening system targeting the bridge’s bottom chord will be introduced to simultaneously improve durability and load-bearing performance. Sensor datasets collected after the intervention will validate it and allow quantification of its environmental and economic benefits. The framework will be demonstrated through two real-case applications on concrete bridges. This new framework will preserve existing structures and extend their service life without compromising the bridge’s safety. Rather than the conventional deconstruction-reconstruction solution, this new framework envisions that the life of existing structures can be extended for several decades, leading to more sustainable and cost-effective infrastructure management.

UFP4MP – Ultra-fast potentials for mechanical properties
Host institution: Luxembourg Institute of Science and Technology (LIST) | PI: Matthias Rupp | National Research Priority: Industrial and Service Transformation | FNR funding: € 484,000
Keywords: machine learning, molecular dynamics, hardness, machine-learning interatomic potential, plastic deformation, nanoindentation
Abstract: The rational design and optimisation of materials are central to societal progress but limited by long and expensive development cycles. Advances in artificial intelligence and machine learning algorithms and their application to atomistic simulations promise to accelerate the computational discovery, study, and optimisation of materials by enabling their accurate simulation and, thus, prediction of complex material properties in quantitative agreement with experiments. Hardness, a material’s resistance to localised plastic deformation, is extremely important in many industrial applications where materials are subjected to high stresses and/or wear, such as cutting tools, machine parts, bearings, or gears. However, hardness is a complex property that depends on multiple parameters, including composition, crystal structure, grains (size, orientation, boundaries), doping, and residual stress. It is commonly measured by penetrating a material with an indenter made of a harder material and measuring the resulting deformation. Indentation is challenging to simulate, as it depends on the collective behaviour of many atoms, requires extended time and length scales, is sensitive to the local material micro- and nanostructure, and involves different plastic deformation mechanisms, including dislocations, cracks, and phase transformations. So far, molecular dynamics (MD) approaches have been limited to accurate simulations of small systems via ab initio methods or inaccurate simulations of large systems via traditional force fields. ML interatomic potentials (MLIPs) have revolutionised MD simulations by combining near-ab initio accuracy with computational efficiency. Ultra-fast potentials (UFPs) are at the forefront of this development, enabling the simulation of millions of atoms and microsecond-long simulations (of smaller systems). The objective of the UFP4MP (UFPs for Mechanical Properties) project is to evaluate the capability of UFPs to capture the physics of several materials with very different mechanical behaviours. Specifically, UFP4MP will study the system composed of the two chemical species Ti and C because of its simplicity, the strong literature background dedicated to it, and the variety of materials it contains: metallic Ti, diamond, graphite, amorphous carbon (a-C), hard titanium carbides, TiC/a-C nanocomposites, and mixtures of metallic and ceramic phases (Ti/TiC). To address that objective, UFP4MP will combine computations and experiments. On the one hand, different Ti-C coatings will be deposited by magnetron co-sputtering and characterised by different techniques, including nanoindentation. On the other hand, UFP4MP will use UFP-accelerated MD simulations to predict hardness values and plastic deformation mechanisms in quantitative agreement with the nanoindentation experiments and experimental characterisation carried out within the project. UFP4MP will explore fundamental scientific questions on plastic deformation processes in different types of materials, with a clear path towards industrial applications. As such, the UFP4MP project will develop a data-driven computational tool to study materials performance and plastic deformation mechanisms under nanoindentation. The developed approach will be generally applicable to other families of materials and extensible to related phenomena, such as scratching, tensile deformation, and fracture.
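To make the accuracy/speed trade-off concrete: the "traditional force field" end of the spectrum is a fixed analytic pair potential such as Lennard-Jones, evaluated in constant time per atom pair, which UFPs aim to match in speed while approaching ab initio accuracy. A minimal sketch with illustrative parameter values:

```python
# Classical Lennard-Jones pair energy, the kind of cheap analytic force
# field MLIPs/UFPs aim to supersede. epsilon and sigma are illustrative.

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy: 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The well minimum sits at r = 2**(1/6) * sigma with depth -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
print(round(lj_energy(r_min), 6))   # -1.0
print(lj_energy(1.0))               # 0.0 at r = sigma
```

Such fixed functional forms cannot capture bond-making and -breaking in a Ti-C system; an MLIP replaces the analytic expression with a learned function of the local atomic environment, fitted to ab initio data.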

UMLFF – Uncertainty-aware Machine Learning Force Fields
Host institution: University of Luxembourg | PI: Igor Poltavskyi | National Research Priority: Industrial and Service Transformation | FNR funding: € 704,000
Keywords: machine learning, atomistic simulations, uncertainty estimations, active learning, force fields
Abstract: Over the past two decades, significant advancements in machine learning (ML) force fields (FFs) have substantially enhanced the capabilities and scope of atomistic simulations, evolving from simple energy calculations for small molecules to highly accurate reconstructions of atomic forces in complex, multi-element systems. Recently, the emergence of “foundational” MLFF models represents a paradigm shift by providing broadly applicable models trained on extensive and chemically diverse datasets, moving beyond traditional system-specific training. This innovation can revolutionize atomistic simulations, allowing researchers and practitioners, including those without extensive computational resources or specialized expertise, to utilize state-of-the-art MLFF models. Nevertheless, implementing these broadly applicable MLFFs introduces significant new challenges, particularly in uncertainty quantification, which is essential for ensuring model reliability, facilitating robust active learning strategies, and preventing inappropriate use of predictive models. The Uncertainty-aware Machine Learning Force Fields (UMLFF) project addresses this critical challenge by integrating advanced uncertainty estimation methodologies into cutting-edge MLFF architectures, such as MACE and SO3krates. By embedding uncertainty quantification directly into the models’ architectures and associated training procedures, the project aims to significantly enhance the robustness and reliability of MLFFs. In its initial phase, the UMLFF project will concentrate on developing necessary architectural modifications and training protocols for a single, uncertainty-aware MLFF model. Subsequently, the research will extend into active learning strategies across configurational and chemical spaces, further enhancing model generalization and broad applicability. Ultimately, the UMLFF project aims to produce a fully automated software tool and comprehensive workflow, facilitating the efficient training of reliable and widely applicable MLFFs equipped with built-in uncertainty quantification. This advanced toolkit is anticipated to accelerate and support scientific discovery and industrial innovation in areas involving atomistic simulations.
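One standard baseline for the uncertainty quantification UMLFF targets is ensemble disagreement: train several models, use their spread as the uncertainty, and let active learning label the configuration they disagree on most. The project embeds uncertainty in the architecture instead; the toy linear "models" and numbers below are purely illustrative.

```python
# Ensemble-disagreement uncertainty with a greedy active-learning step.
# The "force-field models" are toy linear predictors; all numbers invented.
import statistics

def ensemble_predict(models, x):
    """Mean prediction and sample std-dev across an ensemble."""
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.stdev(preds)

def pick_for_labeling(models, pool):
    """Active learning: choose the configuration the ensemble disagrees on
    most, to be labeled with a new ab initio calculation."""
    return max(pool, key=lambda x: ensemble_predict(models, x)[1])

models = [lambda x: 2.0 * x, lambda x: 2.1 * x, lambda x: 1.9 * x]
pool = [0.1, 1.0, 5.0]
mean, unc = ensemble_predict(models, 1.0)
print(round(mean, 3), round(unc, 3))    # 2.0 0.1
print(pick_for_labeling(models, pool))  # 5.0: largest disagreement
```

Ensembles multiply training and inference cost, which is one motivation for building uncertainty into a single model as UMLFF proposes.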

FastQOPT – Fast Quantum Optimization
Host institution: University of Luxembourg | PI: Andras Grabarits | National Research Priority: Industrial and Service Transformation | CORE Junior: Yes | FNR funding: € 428,000
Keywords: quantum optimization, quantum quenches, many-body quantum systems far from equilibrium
Abstract: The idea of using quantum systems as simulators for computational tasks traces back to Richard P. Feynman’s pioneering vision. Since then, quantum computation and optimization have undergone remarkable advancements, finding successful applications in diverse fields such as portfolio optimization, election forecasting, financial crash prediction, investment strategies, protein folding, and traffic scheduling. Quantum computation leverages the fundamental principles of quantum mechanics to tackle complex combinatorial problems beyond the reach of classical algorithms. The standard approach involves encoding the solution into the ground state of a carefully designed interacting Ising spin Hamiltonian, known as the problem Hamiltonian. The computation begins with an easily prepared ground state of a trivial Hamiltonian, and the goal is to evolve the system towards the final ground state within a time frame shorter than classical methods. While quantum annealing (QA), based on the adiabatic theorem, has gained popularity, its practical implementation faces significant challenges due to long annealing times and hardware limitations. Consequently, a shift toward assisted fast quenching protocols provides a promising alternative, supported by recent technological breakthroughs demonstrating quantum simulations with thousands of qubits. FastQOPT aims to establish a comprehensive analytical framework for fast quantum simulations assisted by approximate counterdiabatic driving while exploring novel optimization schemes for future experimental realizations. The project is structured around three key objectives. First, we investigate universal statistical properties of defects generated during rapid quenches across both first- and second-order quantum phase transitions. Second, we develop innovative optimization strategies for rapid protocols that hold the potential to enhance quantum advantage over classical methods. Third, we analyze defect statistics under approximate counterdiabatic driving and extend our study to noisy driving schedules, characterizing universal statistical properties of defect formation and exploring optimization strategies that leverage noise. Additionally, we provide an analytical characterization of time-dependent minimal subspaces relevant to these quenching protocols. By addressing these challenges, FastQOPT aims to bridge the gap between theoretical advancements and practical implementations, contributing to the next generation of fast quantum optimization techniques.
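For orientation, the defect statistics in the first objective build on two textbook results, quoted here in their standard forms (not project-specific derivations):

```latex
% Kibble-Zurek: for a ramp of duration \tau_Q across a second-order quantum
% phase transition with critical exponents \nu, z in d dimensions, the
% defect density scales as
n_{\mathrm{def}} \;\propto\; \tau_Q^{-\,d\nu/(1+z\nu)} .
% Landau-Zener: for a two-level avoided crossing with minimal gap \Delta
% swept at rate v, the excitation probability after the sweep is
P_{\mathrm{ex}} \;=\; \exp\!\left(-\frac{\pi\,\Delta^{2}}{2\hbar\,v}\right).
```

Counterdiabatic driving aims to suppress exactly these excitations at finite ramp speed, which is why defect statistics under its approximate versions are the project's central diagnostic.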

MacroFluctResp – Far-from-equilibrium macroscopic fluctuation-response theory
Host institution: University of Luxembourg | PI: Massimiliano Esposito | National Research Priority: Industrial and Service Transformation | FNR funding: € 690,000
Keywords: Stochastic thermodynamics, Fluctuation-dissipation, Response theory, Thermodynamic uncertainty relations
Abstract: Close to equilibrium, the fluctuation-dissipation theorem establishes a universal link between a system’s response to perturbations and its equilibrium fluctuations. This foundational result permeates all areas of physics. In recent years, progress in nonequilibrium statistical physics—especially within stochastic thermodynamics, where dynamics are modeled as Markov jump processes—has enabled the study of fluctuations and responses far from equilibrium. Surprisingly, exact relations between fluctuations and responses persist in this regime, although they now involve kinetic properties like dynamical activity. Our group has been at the forefront of these developments, revealing universal trade-offs between response, fluctuations, and dissipation. These results show that, unlike passive systems with limited response capacity, active systems can display enhanced sensitivity or extreme resilience to perturbations—hallmarks of behaviors such as adaptation and homeostasis. However, the implications of these nonequilibrium results in the diffusive and macroscopic limits remain poorly understood. This project aims to bridge that gap. We will explore how these far-from-equilibrium fluctuation-response relations scale up to macroscopic systems, where nonlinear deterministic dynamics are perturbed by Gaussian noise, or even further to fluctuating fields in space-time. These questions are not only of fundamental interest but also have concrete implications for the design and analysis of active materials, gene regulatory networks, metabolic systems, and nanoelectronic devices. By building a unified response theory across scales and models, this project will lay the groundwork for new methods to control and characterize complex nonequilibrium systems.
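The near-equilibrium starting point can be stated compactly; a standard classical form of the fluctuation-dissipation theorem reads:

```latex
% Linear response R(t) of an observable A to its conjugate perturbation,
% fixed by the equilibrium autocorrelation C(t) = <A(t)A(0)> - <A>^2:
R(t) \;=\; -\,\beta\,\theta(t)\,\frac{\mathrm{d}C(t)}{\mathrm{d}t},
\qquad \beta = \frac{1}{k_{B}T}.
```

Far from equilibrium this link breaks, and the exact relations mentioned in the abstract replace it with expressions that additionally involve kinetic quantities such as the dynamical activity.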
Acronym: ThermoPhotoCRN
Title: Nonequilibrium Thermodynamics of Light-Powered Chemical Reaction Networks
Host institution: University of Luxembourg
PI: Léa Annie Evelyne Bresque
National Research Priority: Industrial and Service Transformation
CORE Junior: Yes
FNR funding: € 486,000
Keywords: Chemical Reaction Network, Nonequilibrium thermodynamics, Photosynthesis, Metabolism, Topology, efficiency, free-energy
Abstract: The laws of thermodynamics have long been essential for understanding energy exchanges in macroscopic systems, particularly during the industrial revolution, which leveraged chemical energy from fossil fuels to power thermal machines. These machines rely on exergonic chemical reactions to produce heat, subsequently converted into useful work through temperature gradients. While early thermodynamics focused on reversible processes near equilibrium, most natural phenomena are inherently dissipative and occur far from equilibrium. Living systems, in particular, are quintessentially out of equilibrium, continuously consuming free energy from their environment to sustain order and promote growth. In ecosystems, this energy originates from the Sun, with photosynthetic organisms converting solar energy into chemical potential. Recent advances in molecular biology have elucidated key cellular metabolic pathways, enabling the study of their thermodynamic properties using chemical reaction networks. This framework makes it possible to quantify energy flows, dissipation, and efficiency in complex biochemical systems. Although prior research has successfully analyzed metabolic efficiency in prokaryotes, photosynthetic eukaryotic cells remain less explored. Understanding their energy dynamics is crucial for unraveling the thermodynamic underpinnings of photosynthesis and its role in sustaining life on Earth.
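To make the notion of dissipation in a chemical reaction network concrete, here is a minimal sketch (the function name and the reduction to a single reaction cycle are our illustrative assumptions, not part of the proposal): for a closed cycle of reactions, the products of forward and backward rate constants define a cycle affinity, and a Schnakenberg-style entropy-production measure is non-negative and vanishes exactly at detailed balance.

```python
import math

def cycle_entropy_production(forward_rates, backward_rates):
    """Schnakenberg-style dissipation measure for one reaction cycle
    (illustrative, in units of k_B per unit time): the difference of the
    forward and backward cycle fluxes times the cycle affinity
    ln(J+/J-). It is >= 0 and equals 0 only at detailed balance."""
    j_plus = math.prod(forward_rates)    # forward cycle flux (rate product)
    j_minus = math.prod(backward_rates)  # backward cycle flux (rate product)
    return (j_plus - j_minus) * math.log(j_plus / j_minus)

# At detailed balance (all rates matched) nothing is dissipated;
# driving any one reaction harder makes the cycle dissipative.
equilibrium = cycle_entropy_production([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
driven = cycle_entropy_production([2.0, 1.0, 1.0], [1.0, 1.0, 1.0])
```

In a light-powered network, the photon flux is what holds such cycle affinities away from zero, which is the sense in which photosynthetic metabolism operates far from equilibrium.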
Acronym: OptimalControlTransition
Title: Optimal control of phase transitions in complex materials
Host institution: University of Luxembourg
PI: Etienne Fodor
National Research Priority: Industrial and Service Transformation
FNR funding: € 807,000
Keywords: control theory, phase transitions, nonequilibrium thermodynamics, active matter, machine learning
Abstract: An outstanding challenge in the development of innovative materials is to reliably predict how the material properties can be changed in a finite time at a minimal energy cost. In fact, such a challenge is the first step towards building novel sensors and actuators which efficiently adapt their properties to external cues. The laws of thermodynamics dictate that the least dissipative protocols for passive systems are the quasistatic ones which operate as slowly as possible. Yet, such laws do not provide any predictions for finite-duration protocols. Moreover, for active systems which dissipate energy even at rest (i.e., in the absence of perturbation), quasistatic protocols are no longer optimal. Therefore, there is a dire need to build a comprehensive framework which can guide experiments towards delineating optimal protocols for controlling passive and active systems in finite time. In this proposal, the goal is to optimize protocols inducing phase transitions in complex materials at a minimal energy cost. This is an ambitious project for at least two reasons. First, the challenge is to demonstrate that the tools of stochastic thermodynamics (ST), which extend macroscopic laws to fluctuating environments, can be deployed beyond minimal systems with a few degrees of freedom (for which ST was primarily developed) to now examine many-body systems (for which ST needs to be adapted). Addressing this challenge will open the door to optimizing the rich phenomenology of complex materials. Second, the difficulty is to extend optimization methods (including tools of machine learning) to account for the abrupt changes in material properties inherent to phase transitions. Our recent studies and preliminary results indicate that we are now in a position to propose unprecedented solutions to overcome these issues.
Our control strategies are based on describing complex systems with thermodynamically consistent field theories (TCFTs) which entail a proper account of the material energy budget. Our first objective (O1) is to study phase transitions in TCFTs of passive and active matter which capture the phenomenology of canonical materials (with potential applications to industrial challenges): liquid mixtures, polar materials, and elastic materials. Our second objective (O2) is to build a versatile framework for the optimization of protocols in TCFTs using a weak-noise approximation (valid for large system sizes) and a perturbative treatment (close to quasistatic protocols). Finally, our third objective (O3) is to deploy this framework to delineate concrete strategies, in the form of time-varying protocols for experimentally tunable parameters, for crossing phase transitions. Overall, this theoretical project will provide a versatile thermodynamic framework for the optimal control of complex systems, with direct applications to various control problems. Remarkably, the project will benefit from continuous interaction with experimentalists to ensure the feasibility of our control strategies in actual materials with existing experimental techniques.
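The optimisation problem at the heart of the proposal can be sketched as follows (illustrative notation, not taken from the project): among all protocols λ(t) of fixed duration t_f with prescribed endpoints, one seeks the protocol minimising the dissipated work, i.e. the mean work in excess of the free-energy difference,

```latex
\lambda^{*} \;=\; \operatorname*{arg\,min}_{\lambda(0)=\lambda_i,\;\lambda(t_f)=\lambda_f}
\Bigl( \bigl\langle W[\lambda] \bigr\rangle \;-\; \Delta F \Bigr).
```

For passive systems this quantity vanishes only in the quasistatic limit t_f → ∞; for active systems, which dissipate even at rest, the quasistatic protocol is no longer optimal, which is precisely the finite-time regime the project addresses.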
Acronym: DEPLOY
Title: Deploying a mobile interoceptive training app to reduce somatic symptom distress
Host institution: University of Luxembourg
PI: André Schulz
National Research Priority: Personalised Healthcare
FNR funding: € 760,000
Keywords: somatic symptoms, mobile intervention, ambulatory assessment, interoception, interoceptive training, heartbeat perception, perceptual learning
Abstract: Somatic symptoms have a particularly high prevalence in modern societies and can feed, if persistent and debilitating, into a so-called ‘somatic symptom disorder’ (SSD). Conventional psychotherapy shows only moderate response rates, as concerned individuals typically search for medical explanations for their complaints. Impaired interoception, the processing and perception of signals from inside the body, is discussed as one mechanism behind somatic symptom generation. Accordingly, initial studies showed that SSD patients report reduced somatic symptom severity after an interoceptive training based on heartbeat perception. The overarching aim of the current project is to develop a mobile app based on heartbeat perception training (HBPT) and to test its efficacy in reducing symptom severity in individuals reporting somatic symptoms. This aim can be sub-divided into three sub-aims: Firstly, we aim to reveal whether a HBPT attenuates somatic symptoms in the long term when applied in the laboratory. Secondly, we intend to investigate the mechanisms of this symptom-attenuating effect by elucidating whether it is due to the enhancement of interoceptive abilities or whether it is a side effect related to a shift of the attentional focus from symptoms to heartbeats. Thirdly, it is our purpose to translate this intervention from the laboratory into a mobile application, operated via a smartphone and its built-in camera. That study will be based on ambulatory assessment in real life. In both studies (laboratory study and ambulatory study using a smartphone app) we will investigate individuals of the general public (due to the high prevalence of somatic symptoms) and individuals with a fully manifested SSD. Hence, this project may provide a novel, mobile intervention based on HBPT to reduce somatic symptoms. Furthermore, as it is based on physiological processes (i.e. assessment and perception of heartbeats), concerned individuals may show a higher acceptance of this intervention.
Due to the high societal and individual burden associated with somatic symptoms, such as work absence, disability, early retirement and suicide, the current project has the potential for a significant contribution to public health.
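For context, interoceptive accuracy in heartbeat perception paradigms is commonly scored with the Schandry heartbeat counting formula. The sketch below is a generic illustration of that standard score, not necessarily the exact metric or training protocol this project will use (the function name is ours):

```python
def heartbeat_perception_score(recorded, counted):
    """Schandry-style interoceptive accuracy: for each trial,
    1 - |recorded - counted| / recorded, averaged over trials.
    `recorded` are objectively measured heartbeat counts (e.g. via ECG
    or a smartphone camera), `counted` the participant's silent counts
    over the same intervals. Scores near 1.0 indicate accurate
    cardiac perception."""
    trials = [1 - abs(r - c) / r for r, c in zip(recorded, counted)]
    return sum(trials) / len(trials)

# Two trials, undercounting by 10 beats each time: a moderate score.
score = heartbeat_perception_score(recorded=[50, 100], counted=[40, 90])
```

A training app of the kind proposed would repeatedly present such counting trials with feedback, aiming to push this score upward over sessions.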
Acronym: ACKROS
Title: Atypical Chemokine Receptors in the Regulation of the Opioid System and Pain Signaling
Host institution: Luxembourg Institute of Health (LIH)
PI: Martyna Szpakowska
National Research Priority: Personalised Healthcare
CORE Inter: FRQ
FNR funding: € 377,000
Keywords: ACKR, chemokine, signal transduction, opioid receptors, scavenger, molecular pharmacology, GPCR, drug discovery, pain, opioid crisis
Abstract: Chronic pain remains a major public health challenge, with opioid drugs still representing the standard of care despite their serious side effects, including tolerance, dependence, and fatal overdose. Addressing the limitations of strategies targeting the classical opioid receptors (ORs) is essential for developing safer, more effective analgesics. At the Luxembourg Institute of Health (LIH), the atypical chemokine receptor 3 (ACKR3) was recently identified as an unconventional opioid receptor. Acting as a high-affinity scavenger of endogenous opioid peptides, ACKR3 reduces their availability to ORs and may thereby diminish their natural analgesic effects. Preliminary in-vivo data obtained in collaboration with the University of Sherbrooke suggest that blocking ACKR3 may indeed be a promising alternative approach for pain treatment. However, the validation of ACKR3 as a novel target remains incomplete, and its precise regulatory roles in the opioid system (OS), including the underlying molecular mechanisms, have yet to be fully elucidated. Whether other mechanisms beyond opioid peptide scavenging and degradation are involved remains an open question. This INTER CORE project brings together a multidisciplinary team of experts in GPCR biology, pain pharmacology, and biosensor technologies from Luxembourg, Sherbrooke, and Montreal. It aims to comprehensively investigate ACKR3’s functions in the OS and its therapeutic potential for treating chronic and persistent pain. The project will explore novel hypotheses on the involvement of this unconventional receptor in the regulation of the OS, including (1) the ability of ACKR3 released on small extracellular vesicles (sEVs) to remotely scavenge/sequester opioid peptides, (2) the effect of ACKR3 heterodimerization with classical opioid receptors such as MOR, and (3) the impact of “β-arrestin hijacking”, i.e. the depletion of intracellular transducers by ACKR3, on OR signalling and desensitization.
In parallel, ACKR5, ACKR3’s closest phylogenetic relative, will be examined for its role in the OS through opioid peptide binding. Using cutting-edge in vitro and in vivo models, biosensors, and complementary expertise, this project will elucidate ACKR3’s role in the OS and guide the development of pharmacological modulators. Ultimately, it aims to deliver foundational knowledge and tools that could pave the way for a new class of improved analgesics, addressing an urgent unmet medical need in pain management.
Acronym: MicroAmylo-PD
Title: Microbiome-derived small amyloidogenic proteins as pathogenic agents in Parkinson’s disease
Host institution: University of Luxembourg
PI: Paul Wilmes
National Research Priority: Personalised Healthcare
FNR funding: € 908,000
Keywords: Amyloids; Parkinson’s disease; microbiome; mouse models; small proteins
Abstract: The human microbiome is a complex and diverse ecosystem comprising bacteria, archaea, microeukaryotes, and viruses. Recent studies have shown that this ecosystem plays a crucial role in maintaining host physiology and contributes to various diseases, including neurodegenerative disorders. The classical pathological hallmarks of Parkinson’s disease (PD) involve the aggregation of α-synuclein, an amyloid protein, in enteric neurons as well as in the dopaminergic substantia nigra and other brain regions of the central nervous system. Studies have found that introducing the gut microbiomes of people with Parkinson’s disease into mice overexpressing human α-synuclein exacerbates symptoms compared to transplanting microbiomes from healthy humans. Prior work by us using humanised mouse models of PD has shown that microbiome-derived amyloids such as the protein curli can cause misfolding, aggregation, and spreading of disease-associated α-synuclein aggregates from the gut to the brain, especially in the context of a diet-driven reduction in gut barrier function. An underexplored human gut microbiome complement comprises small proteins (≤ 50 amino acids). We recently discovered that the majority of microbiome-derived small proteins (77.5 %) contain amyloidogenic fragments which, according to in silico modelling, are capable of misfolding, aggregating and triggering the pathological amyloid formation of host proteins, including α-synuclein. These characteristics, including the cross-seeding potential of these microbiome-derived small amyloidogenic proteins, have since been confirmed using in vitro experiments. Here, we will study the impact of the microbiome-derived small amyloidogenic proteins on Parkinson’s disease in vivo. Specifically, we hypothesise that microbiome-derived small amyloidogenic proteins contribute to PD pathophysiology.
To test this hypothesis, we will: (1) Investigate the exposure to microbiome-derived small amyloidogenic proteins in mice and assess their impact on brain penetrance and animal well-being, (2) Study the effect of chronic exposure to these small proteins on PD-linked neurodegenerative phenotypes, and (3) Examine the impact of diet-driven gut barrier defects (driven by a low fibre diet) in relation to amyloid protein accumulation and PD-linked phenotypes. The proposed project is highly relevant to our understanding of Parkinson’s disease and the role of the microbiome in disease causation. The results from this project will contribute to the development of novel prognostic, diagnostic and therapeutic applications to support inter alia early disease-modifying therapies for PD. Crucially, it will also provide insights into other neurodegenerative disorders, e.g. Alzheimer’s disease, and other chronic conditions, e.g. type 2 diabetes, characterised by amyloid formation in susceptible human cell types and tissues.
Acronym: VAMP
Title: Vagus Nerve Stimulation to Modulate Pain
Host institution: University of Luxembourg
PI: Marian Van Der Meulen
National Research Priority: Personalised Healthcare
FNR funding: € 350,000
Keywords: vagus nerve stimulation; pain; conditioned pain modulation; autonomic nervous system
Abstract: Pain has a strong negative impact on overall quality of life, affecting daily activities, social interactions, and mental health. It is estimated that 12-48% of European adults suffer from chronic pain at any one moment. However, current treatments often remain inadequate and are hampered by limitations (e.g., side effects, addiction, tolerance). There is an urgent need for new, non-invasive pain management strategies. Growing evidence provides support for transcutaneous auricular vagus nerve stimulation (taVNS) as a promising new treatment option. Clinical trials and animal studies have shown significant improvements in pain symptoms. However, drawing firm conclusions from these studies is difficult, given the variability in samples, pain outcomes and methodologies. There is a critical lack of systematic and well-controlled studies on the efficacy of taVNS for pain relief in healthy individuals, combined with an urgent need for standardized stimulation protocols and longitudinal assessments. In this project, we address these challenges in a series of three studies that systematically investigate the efficacy and mechanisms of taVNS effects on experimental pain in a healthy population. In Study I, we will identify the optimal stimulation parameters (in particular, frequency and pulse width) resulting in the greatest pain relief. Study II compares different control conditions to disentangle the effects of taVNS from non-specific stimulation as well as expectation effects. Finally, in Study III, we will explore potential long-term effects of taVNS in a longitudinal intervention study using an innovative multi-modal approach, integrating ecological momentary assessments, wearable sensors and repeated self-administered stimulation at home. In all three studies, autonomic nervous system activity will be monitored (e.g., heart rate, electrodermal activity) and psychological factors will be assessed.
We will employ a comprehensive battery of pain outcomes, including several modalities of pain thresholds, as well as dynamic pain measures (e.g., conditioned pain modulation and temporal summation). The primary aim of the project is to provide an evidence-based framework for applying taVNS in pain research and to establish empirically tested standardized protocols for future clinical investigations. We will formulate methodological recommendations regarding study design and stimulation parameters to both the research and clinical community. These will enable more meaningful and reproducible results and facilitate the progress in taVNS research for pain management. Importantly, the project will also provide insight into the underlying mechanisms of taVNS. Addressing the gaps and challenges in the literature on experimental pain in healthy individuals is a critical first step in the translation towards clinical therapeutic taVNS interventions.
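As background on the dynamic pain measures mentioned above, the sketch below illustrates how conditioned pain modulation (CPM) and temporal summation are typically quantified from pain ratings; the function names and scoring windows are illustrative assumptions, not the project's protocol:

```python
def cpm_effect(test_alone, test_with_conditioning):
    """Conditioned pain modulation: change in the rating of a test
    stimulus when a conditioning stimulus (e.g. cold water) is applied
    concurrently. Negative values indicate pain inhibition, the typical
    healthy response; positive values indicate facilitation."""
    return test_with_conditioning - test_alone

def temporal_summation(ratings, window=3):
    """Temporal summation of pain: mean of the last `window` ratings
    minus mean of the first `window`, over a train of identical stimuli.
    Positive values mean pain builds up across the train."""
    return sum(ratings[-window:]) / window - sum(ratings[:window]) / window

# A participant rates a heat stimulus 60/100 alone and 45/100 during
# cold-water conditioning, and shows ratings rising across six pulses.
inhibition = cpm_effect(60, 45)
summation = temporal_summation([3, 3, 3, 5, 6, 7])
```

An effective taVNS intervention would be expected to strengthen inhibition (more negative CPM effect) and dampen summation relative to sham stimulation.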
Acronym: TANDEM
Title: Targeting neuroinflammation in microglia-integrated nigrostriatal assembloids as a novel therapeutic approach for Multiple System Atrophy
Host institution: University of Luxembourg
PI: Emanuele Frattini
National Research Priority: Personalised Healthcare
CORE Junior: Yes
FNR funding: € 913,000
Keywords: Multiple system atrophy; neuroinflammation; neurodegeneration; alpha-synuclein; brain organoids
Abstract: Multiple system atrophy (MSA) is a rapidly progressive neurodegenerative disorder causing severe disability and reduced life expectancy. Two main phenotypic subtypes exist: (1) MSA-P, characterized by rigid-akinetic parkinsonism with poor levodopa response, and (2) MSA-C, presenting with predominant ataxia and imbalance. As the disease advances, these phenotypes merge into a highly disabling condition marked by widespread neuronal degeneration. Post-mortem studies have identified degeneration of GABAergic medium spiny neurons in the striatum and dopaminergic neurons in the substantia nigra as a common neuropathological feature of MSA. The hallmark of MSA pathology is the presence of glial cytoplasmic inclusions (GCIs) in oligodendrocytes, primarily composed of misfolded α-synuclein (α-syn), classifying MSA as an α-synucleinopathy. Additionally, microglial activation, astrogliosis and increased levels of inflammatory molecules in cerebrospinal fluid suggest that neuroinflammation plays a critical role in MSA pathology, though its relationship with α-syn accumulation remains unclear. Current transgenic models, which overexpress α-syn in oligodendrocytes, fail to fully capture the complexity of disease progression, particularly the interactions between neurons and immune cells. Consequently, MSA remains incurable, leading to significant disability and death within 10 years of symptom onset. This project proposes that targeting neuroinflammation by reducing microglia and astrocyte activation may enhance neuronal survival and mitigate α-syn pathology, offering a potential neuroprotective strategy for MSA. To test this hypothesis, this project will generate nigro-striatal assembloids derived from induced pluripotent stem cells (iPSCs) of MSA patients, which will integrate brain-region-specific organoids representing the midbrain and striatum. Patient-derived microglia will be incorporated in the model to study their role in neuronal loss and α-syn toxicity.
Additionally, therapeutic modulation of immune cell activity using compounds targeting different pro-inflammatory pathways will be explored as a strategy to reduce neuroinflammation-driven damage. To evaluate model validity and treatment efficacy, multiple conditions will be compared (e.g., microglia-free vs. microglia-integrated assembloids, healthy vs. MSA microglia, pre- vs. post-treatment effects). Characterization will be performed at defined differentiation stages to assess key MSA features, including neurodegeneration, α-syn aggregation in oligodendrocytes and neurons, astrogliosis and microglia activation. Analytical methods will include SDS-PAGE/Western blot for total and detergent-insoluble α-syn, immunofluorescence and confocal microscopy for α-syn localization in specific cell types, immunohistochemistry for α-syn aggregation markers, phosphorylation and ubiquitination analysis of α-syn inclusions, seeding aggregation assays for ultrasensitive α-syn detection, transmission electron microscopy for ultrastructural fibril characterization and high-content 3D imaging to quantify GABAergic and dopaminergic neuron integrity. By leveraging patient-derived microglia-integrated nigrostriatal assembloids, this project aims to elucidate the mechanisms of neuroinflammation in MSA and explore new therapeutic strategies for this incurable disease.
Acronym: KillER
Title: Understanding and exploiting hard-wired metabolic dependencies of cancer cells during ER stress
Host institution: Luxembourg Institute of Health (LIH)
PI: Johannes Meiser
National Research Priority: Personalised Healthcare
CORE Inter: FNRS
FNR funding: € 776,000
Keywords: Metabolism, ER stress, formate, one carbon metabolism, cancer, mRNA translation
Abstract: Tumor initiation and progression are not only defined by specific mutations and cell-intrinsic properties but are also heavily influenced by the host environment and its specific metabolic landscape in which cancer cells proliferate. Cancer cells are highly plastic and, as such, their metabolism exhibits a great degree of flexibility, allowing tumor cells to adapt to changing nutrient environments, pH levels or oxygen tensions. Yet, limits exist where hard-wired constraints result in certain metabolic liabilities of cancer cells. We have recently characterized and defined hard-wired metabolic dependencies of cancer cells undergoing ER stress. Most prominently, they show a switch from glycine release to glycine consumption. By perturbing the glycine system, we have obtained compelling evidence that this prevents a proper unfolded protein response (UPR), which cancer cells require to overcome ER stress. While our data clearly highlight that intracellular glycine scavenging prevents a proper UPR, mechanistic details remain incomplete. Thus, in KillER we want to (i) explain how glycine controls the UPR at a mechanistic level and (ii) explore the clinical implications of such an approach in the context of triple-negative breast cancer and chemotherapy, a setting in which the onset of ER stress is well known. In summary, we aim to provide causal evidence that glycine is a determinant of UPR execution and to develop a new metabolic concept that can be leveraged to enhance the effectiveness of adjuvant therapies that promote ER stress.
Acronym: RiboPD
Title: The role of ribosome speed control in the pathogenesis of Parkinson’s disease
Host institution: University of Luxembourg
PI: Rico Schieweck
National Research Priority: Personalised Healthcare
FNR funding: € 868,000
Keywords: Translation speed, co-translational protein folding, Parkinson’s disease, neurodegeneration, protein folding
Abstract: Disease biology centres on genetic variants that lead to pathological changes in cells. Although the nature of these variants can vary, they all have in common that they need to be translated into proteins in order to exert their effects. This is where translation comes in. Research in recent decades has shown that the complex interplay between translation speed and (co-translational) protein folding is a prerequisite for protein homeostasis (proteostasis). This is of particular interest for neurodegeneration, which is characterised by impaired proteostasis. Although translation is a crucial factor in maintaining cellular function, very little is known about its role in brain disorders such as Parkinson’s disease (PD). In this project, we will elucidate pathological changes in translation speed and their impact on co-translational protein folding. To this end, we will integrate ribosome interactomics, translatomics and nascent proteomics data to unravel co-translational protein misfolding in human dopaminergic PD neurons. To complement these findings, we will combine these data with genetic patient information to identify critical mutations in translation regulators that may affect the rate of protein synthesis. We will use these findings to screen for small translation modifiers that (i) restore translation speed, (ii) improve protein folding, and (iii) promote survival of PD neurons. Together, our project will establish translation dysregulation as a new concept of PD pathology. These findings will open up the possibility of a new class of drugs targeting the translation machinery in PD.
Acronym: INDA
Title: Integrative Multiscale Neurodegenerative Disease Analysis
Host institution: University of Luxembourg
PI: Alexander Skupin
National Research Priority: Personalised Healthcare
FNR funding: € 1,135,000
Keywords: Neurodegenerative Disease, Parkinson’s disease, Alzheimer’s disease, Common Mechanisms, Multiomics characterization, Data integration, Mitochondrial Dysfunction
Abstract: Neurodegenerative diseases (NDs), such as Parkinson’s disease (PD) and Alzheimer’s disease (AD), are driven by complex molecular and cellular dysfunctions that remain poorly understood. The interdisciplinary “Integrative Multiscale Neurodegenerative Disease Analysis” (INDA) project aims to systematically uncover the mechanisms underlying neuronal vulnerability and resilience by applying a comprehensive systems biology approach across molecular, cellular, and functional scales in a translational manner. Based on our recent analyses and preliminary data, INDA will first combine our available longitudinal multiomics data of patient-based induced pluripotent stem cells (iPSCs) differentiated into midbrain neurons with public data repositories for an integrative analysis across diverse AD and PD conditions, focusing on key perturbations relevant to ND pathogenesis, particularly dysregulation of calcium homeostasis and mitochondrial maintenance. This analysis will be complemented by corresponding perturbation experiments for calcium signalling and the Coiled-Coil-Helix-Coiled-Coil-Helix Domain (CHCHD) gene family expression in iPSC-based neuronal differentiation experiments with multi-omics profiling, high-content imaging, and electrophysiological readouts to investigate disease-relevant phenotypes under controlled conditions. Special emphasis will be on mitochondrial phenotyping using APEX-based proteomics, electron microscopy and Seahorse assays to detail the functional and structural changes on the cellular and mitochondrial level as hallmarks of NDs. The resulting data will be analyzed using our integrated network and pathway modeling approaches to identify common and condition-specific molecular mechanisms of ND development. These findings will be validated by deep phenotyping of corresponding organoid models and by analyses of clinical blood and brain patient samples to pave the way for the development of translational treatment strategies.
Through its multiscale design, the project will deliver (i) new mechanistic insight into neuronal dysfunction, (ii) validated molecular signatures and mitochondrial phenotypes, and (iii) candidate biomarkers and targets for future therapeutic exploration in NDs. With its coordinated and interdisciplinary framework, INDA will deliver not only novel biological insights but also robust, transferable tools for disease modeling and molecular diagnostics, contributing to the broader mission of personalized medicine in neurodegeneration.
Acronym: BODYLINES
Title: Gender, Care and Transnational Experiences in Organ Transplantation
Host institution: University of Luxembourg
PI: Karolina Barglowski
National Research Priority: Personalised Healthcare
FNR funding: € 759,000
Keywords: Organ transplantation, gender, lived experiences, transnational healthcare, Luxembourg
Abstract: Organ transplantation is a medical procedure used when a vital organ fails and is replaced by a healthy one from a living or deceased donor. It saves the lives and improves the quality of life of thousands of patients in Europe each year. In the long term, transplantation is often more cost-efficient than alternatives such as dialysis. Yet, despite significant medical advances and cross-border cooperation through networks like Eurotransplant, the demand for organs continues to exceed the supply. As a result, many patients in need do not receive transplants, and inequalities in both donation and transplantation persist across countries and demographic groups. Among these, gender is one of the most significant factors influencing the entire transplantation pathway—from diagnosis and referral to donation and recovery. Studies show that women are diagnosed later, wait longer, donate more frequently (especially as living donors), and are less likely to receive transplants than men. However, these disparities remain underexplored in their subjective and relational dimensions. The BODYLINES project examines how gendered expectations shape individuals’ experiences of organ donation and transplantation within Luxembourg’s distinctive cross-border healthcare system. Drawing on the country’s unique demographic profile and healthcare system, the project adopts an intersectional framework to analyze how gender, migration status, and socio-economic position influence access to transplantation, caregiving responsibilities, and post-transplant outcomes. Through qualitative interviews, ethnographic observation, policy analysis, and secondary data, the study centres the narratives of donors, patients, and caregivers to reveal how institutional structures and social norms interact to influence organ donation, access to care, and post-transplant outcomes. Luxembourg offers a compelling case for this research.
As a member of Eurotransplant without domestic transplant centres, it has developed a well-integrated system of transnational cooperation for surgical procedures abroad—primarily in Belgium, France, and Germany—while providing pre- and post-operative care at home. This cross-border model creates unique conditions for studying how transnational healthcare systems shape patient experiences. The project focuses on how gendered caregiving roles, mobility constraints, and access to medical resources affect those navigating cross-border healthcare. Women—often both caregivers and donors—may face distinct logistical, emotional, and financial burdens that remain largely unexamined. By redefining transplantation as a socially embedded process rather than a purely clinical intervention, BODYLINES contributes to a deeper understanding of equity in healthcare. It addresses critical gaps in medical and policy discourses by illuminating the gendered, subjective, and relational aspects of donation and transplantation in transnational contexts. The project aligns with the CORE programme’s mission to support excellent, innovative, and socially relevant research. It advances interdisciplinary knowledge at the intersection of medical sociology, gender studies, and health policy. It offers evidence-based insights for developing more inclusive and gender-sensitive transplantation practices in Luxembourg and beyond.
Acronym: P(R)OP2
Title: Property Data Meets Population Data in Past and Present: Migration and Property Relations in the Long Run (1910-2024)
Host institution: University of Luxembourg
PI: Machteld Venken
National Research Priority: Sustainable and Responsible Development
FNR funding: € 904,000
Keywords: housing dynamics, migration, property wealth inequalities, digital migration studies, oral history, contemporary European history
Abstract: P(R)OP2 unites migration historians and social geographers to study the role of housing in the integration and segregation of migrants. It offers the first detailed investigation of the relation between population movements and property relations at the level of a city over a period of more than a century. Whereas migration historians have investigated mobility and integration, and urban historians have analysed housing development, historiographical research on their interrelation is hard to find. Social scientists, on the other hand, have mostly studied current housing affordability issues without including the legacy of historical property ownership structures as a possible determining factor. Going beyond an analysis of migrant access to housing, we interrogate how local owners of land and housing responded and respond to migrant housing demand. The central idea of the project is that housing first needs to be made available to migrants before this housing can be accessed. We thus investigate how property relations – that is, who owns property and how they make it available – have limited or favoured the integration of certain migrants similarly or differently over time, hereby conditioning the distribution of people across urban space throughout generations. Our work centres on the case of Dudelange, Luxembourg’s fourth biggest city, between 1910 and 2024. A rural municipality until 1882, the city then became one of Europe’s preeminent steel production sites. This economic transformation caused a rapid increase in its inhabitants until the early 20th century. The city’s population later stagnated until the early 1990s, when Dudelange started to accommodate those priced out of the capital city and financial centre. Thanks to the past work by project members, a collection of datasets linking owners and occupiers of place is available that has few parallels in Europe: land registry records, census tables, migrant arrival forms and genealogical records.
We cross-link data using a dual approach: a stock perspective, comparing population and ownership data at the beginning and end of the study period, and a flow perspective, capturing population movements and property transactions in the intervening period. In this way, we capture what allows migrants to gain a foothold in a new place (rental housing infrastructure availability, homeownership opportunities) and what renders their stay more precarious (gentrification, outmigration due to housing prices). P(R)OP2 mobilises source criticism to understand what information the datasets offer through a reconstruction and interpretation of the administrative procedures, understandings and practices that have created them. It also complements the quantitative analyses with expert and intergenerational interviews to unravel the lived housing experiences of migrants. Interdisciplinary in form and content, P(R)OP2 significantly scales up ongoing research practices within single academic disciplines. It is co-led by a historian and a social geographer, who each work with a postdoctoral researcher. The eight research questions broached in the project concern long-run developments related to source criticism, integration and segregation through housing, and the lived housing experiences of migrants. Interdisciplinary collaboration is needed to answer these. Each research question yields an article in an international peer-reviewed journal following a presentation at an international conference. The results also feed a special issue, a digital exhibition displaying interview fragments about housing experiences, and roundtable discussions with housing policy makers and archivists. P(R)OP2 promotes a renewal of historical research by including housing as a key factor for integration into migration history, while also training social scientists to broaden the time frame of their studies and to adequately contextualise their data.
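The stock and flow perspectives described above are, at their core, a record-linkage exercise. A minimal sketch in Python, using invented parcels, owners and transactions rather than the project's actual registry schema:

```python
# Stock perspective: ownership snapshots at the two endpoints of the
# study period (parcel -> owner; all names invented for illustration).
registry_1910 = {"A1": "Schmit", "A2": "Weber", "A3": "Thill"}
registry_2024 = {"A1": "Da Silva", "A2": "Weber", "A3": "Thill"}

changed_hands = {
    parcel: registry_1910[parcel] != registry_2024[parcel]
    for parcel in registry_1910
}

# Flow perspective: transactions in the intervening years explain
# *how* the two endpoint states connect.
transactions = [
    ("A1", 1954, "Rossi"),      # (parcel, year, buyer) - invented
    ("A1", 1998, "Da Silva"),
]
sales_per_parcel = {}
for parcel, year, buyer in transactions:
    sales_per_parcel[parcel] = sales_per_parcel.get(parcel, 0) + 1

print(changed_hands)     # {'A1': True, 'A2': False, 'A3': False}
print(sales_per_parcel)  # {'A1': 2}
```

Comparing the endpoints flags which parcels changed hands; the transaction list then reveals the intermediate owners that a pure stock comparison would miss.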
Acronym: GOVLU
Title: Governing the Duchy of Luxembourg within the polycentric monarchy of Habsburg Spain (16th-17th c.). Political actors and communication strategies at the periphery of the Spanish Netherlands
Host institution: University of Luxembourg
PI: Monique Weis
National Research Priority: Sustainable and Responsible Development
FNR funding: € 708,000
Keywords: Duchy of Luxembourg, Habsburg/Spanish Netherlands, polycentric monarchy, government and administration, 16th and 17th century archives, political actors, discursive patterns
Abstract: GOVLU aims to shed new light on how the early modern Duchy of Luxembourg, the southernmost province and a “strategic periphery” of the Low Countries, was governed and administered in the 16th and 17th centuries, under the rule of the Spanish Habsburgs. The project proposes to renew an underdeveloped and outdated historiography by focusing on the dynamic interactions of the provincial institutions (governor, Council, Estates) with the central government of the Habsburg/Spanish Netherlands in Brussels and with the royal administration in Madrid. One of GOVLU’s objectives is to integrate Luxembourg’s early modern past into international research dynamics on polycentric states, power balances and communication strategies. Other aims are to contribute to the transregional history of Luxembourg, to foster new studies about the early modern period and to reach out to the wider public about this little-known subject. The project mobilises a large corpus of neglected archival sources, kept in repositories in Luxembourg, Belgium, France, Spain and Austria. Some of this valuable material will be digitally edited and made available for further research. The GOVLU team, which is composed of the PI, a doctoral candidate and a post-doctoral fellow, will highlight the role of political actors at different levels (governors, counsellors, secretaries, delegates). Who exactly were these “men of government”, most of whom stemmed from the Duchy or the surrounding regions? What were their mobilities between Luxembourg, Brussels and the Holy Roman Empire? By tackling these questions, GOVLU will contribute to the study of early modern offices and their agents. How did the political actors of the Duchy dialogue and negotiate with other decision-makers and with the different components of society? Which discursive and argumentative elements did they use to make requests and to justify their attitudes? What was the influence of Luxembourg’s multilingualism?
GOVLU places the communication strategies developed by the various institutions and their agents, in Luxembourg and beyond, at the centre of its work. It focuses on the discursive and argumentative patterns present in the correspondence and other archival documents. Digital tools of discourse analysis will be used to identify, contextualise and interpret the most significant patterns. The online guide to the Luxembourg institutions and their archives, and the prosopographic dictionary of the political actors, two important deliverables of the project, will be useful tools for other researchers. Besides other international publications, the GOVLU team will prepare a final edited volume, with a comparative dimension, about “Uses of political discourse in the government of peripheral provinces and small states in the early modern period”. At the end of the project, the PhD candidate will deliver a doctoral thesis entitled “Correspondence, mobility and discourse in the government of a strategic periphery of the Spanish Netherlands (16th-17th c.)”. GOVLU is an important contribution to the political history of the polycentric monarchy of Habsburg Spain in the 16th and 17th centuries. While dwelling on the fascinating case of Luxembourg, its aspirations for regional autonomy and its embedding in larger entities, the project also contributes to a better understanding of how plural identities emerge in the long term. Finally, by focusing on the written, spoken and performative word, GOVLU participates in a wider reflection on the uses of political discourse in early modern Europe.
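Digital pattern-spotting of the kind mentioned above can be illustrated with a toy example; the letter snippets and argumentative formulas below are invented, not drawn from the project corpus:

```python
import re
from collections import Counter

# Toy corpus: invented snippets in the style of early modern petitions.
letters = [
    "We humbly beseech Your Majesty to consider the privileges of the Duchy.",
    "The Estates humbly beseech that the garrison costs be moderated.",
    "In consideration of our ancient privileges, we request relief.",
]

# Candidate argumentative formulas to track (also invented).
patterns = ["humbly beseech", "privileges", "request"]

# Count in how many letters each formula occurs (case-insensitive).
counts = Counter(
    p for letter in letters for p in patterns if re.search(p, letter, re.I)
)
print(counts)
```

Even this crude frequency count shows how recurring formulas can be surfaced across a corpus before being contextualised and interpreted by the historian.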
Acronym: E-CoLux
Title: Data-Driven Electromobility Charging Solutions for Local and Cross-border Commuters in Luxembourg and Beyond
Host institution: University of Luxembourg
PI: Gilbert Fridgen
National Research Priority: Sustainable and Responsible Development
FNR funding: € 681,000
Keywords: Electric Vehicles, Charging Infrastructure, Tariff Design, Local and Cross-Border Commuting, Data-Driven Decision Making, Energy System
Abstract: As countries intensify efforts to meet the Paris Agreement targets, the decarbonization of high-emission sectors—particularly transport—has become a global priority. In Europe, where transport accounts for nearly 24% of total CO₂ emissions, electric vehicle (EV) adoption is rising rapidly, with EVs making up 23.6% of vehicle sales in 2023 (EEA, 2024). To support this transition, E-CoLux proposes a data-driven decision support system (DSS) that enables effective infrastructure deployment and tariff design, tailored to both local and cross-border EV commuters. The DSS introduces three key innovations: (1) a focus on distinct EV commuter archetypes (e.g., work vs. non-work commuters), (2) explicit inclusion of cross-border commuting patterns, and (3) a dynamic link between infrastructure deployment and tariff design. These elements are critical for accurate demand forecasting and system-level optimization, particularly in countries like Luxembourg, where cross-border commuters represent 44% of the workforce (MMTP, 2023). EV adoption presents opportunities—new business models, reduced pollution, increased flexibility—but also challenges such as increased peak demand and grid congestion. Charging behavior (home vs. work) and the growth of distributed renewable energy further complicate system management (Fridgen et al., 2021; Pavić et al., 2015). Policymakers and system operators must anticipate where and when demand will arise and act accordingly. E-CoLux addresses this need through a holistic, data-driven approach that combines real-world data (e.g., Luxmobil survey, STATEC statistics) with optimization, machine learning, and behavioural modelling to inform policy and planning. Luxembourg is an ideal testbed: it offers rich mobility data, a compact grid, and high EV potential. However, the DSS is designed for scalability and transferability, incorporating flexible, country-specific parameters for adaptation across regions and use cases.
Built with equity and energy justice in mind, it considers socio-demographic factors and commuter preferences to inform fair tariff structures—even in developing countries. Through its multidisciplinary framework and stakeholder engagement—including grid operator Creos and academic partners—E-CoLux supports the transition to a resilient, sustainable, and efficient energy system prepared for high EV penetration.
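The archetype-based demand forecasting that the DSS builds on can be sketched very simply; the archetypes, headcounts and energy needs below are invented for illustration:

```python
# Each commuter archetype plugs in at a typical hour with a typical energy
# need; scaling by headcount gives a first-cut hourly demand profile.
archetypes = {
    "cross_border_work": {"count": 1000, "arrival_hour": 8, "kwh": 10},
    "local_work": {"count": 600, "arrival_hour": 9, "kwh": 6},
    "non_work": {"count": 300, "arrival_hour": 14, "kwh": 4},
}

demand_kwh = {}  # hour -> total energy requested at plug-in
for profile in archetypes.values():
    hour = profile["arrival_hour"]
    demand_kwh[hour] = demand_kwh.get(hour, 0) + profile["count"] * profile["kwh"]

print(demand_kwh)  # {8: 10000, 9: 3600, 14: 1200}
```

Splitting the fleet into archetypes like this is what lets a planner see, for instance, that cross-border work commuters concentrate demand into a single morning peak.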
Acronym: LUXTAX
Title: Luxembourg’s legacy as a Tax Haven
Host institution: University of Luxembourg
PI: Aikaterini Pantazatou
National Research Priority: Sustainable and Responsible Development
FNR funding: € 442,000
Keywords: Luxembourg, tax haven, tax evasion, wealth, legacy, history
Abstract: The LUXTAX project will employ an interdisciplinary approach to explore Luxembourg’s legacy and actuality as a tax haven. By employing a law and history approach, the LUXTAX project aims to investigate the history of Luxembourg’s tax regime, its beneficial elements through time, as well as the socio-economic context within which this regime, or the successive regimes, developed. It is well known that Luxembourg has figured in academic literature, popular science and even political debates as a tax haven. To understand whether this qualification was (and remains) correct, the project will commence by exploring the history and evolution of tax havens as a whole. It will then focus on the contemporary history of Luxembourg and its fiscal regime. How and why did Luxembourg develop into a ‘friendly tax regime’, and what legal characteristics did this regime, or these successive regimes, have? Who were the architects behind these developments, and how did the international and EU legal framework react at the time to these challenges? The project will then continue by focusing on more contemporary times and by investigating Luxembourg’s current qualification as a tax haven. Can Luxembourg, a founding Member State of the EU, remain a tax haven? Is this even possible after complying with all anti-tax avoidance and transparency secondary EU legislation? The LUXTAX project will attempt to answer this question by relying on the (working) definition of tax havens, as well as presumptions about tax havens and the importance of a country’s legacy in building (or maintaining) such presumptions. By answering these questions from an interdisciplinary perspective, the LUXTAX project aims to offer both normative claims and policy proposals for future tax reforms, with the goal of addressing all elements of harmful tax practices.
Acronym: GreenLAND
Title: The effects of greening landscape practices on environmental values and property market in urban densification
Host institution: Luxembourg Institute of Socio-Economic Research (LISER)
PI: Karolina Zieba-Kulawik
National Research Priority: Sustainable and Responsible Development
CORE Junior: Yes
FNR funding: € 489,000
Keywords: urban development, green infrastructure, environmental economics, climate justice, value of greenery, developer practices, residents’ perception, climate change adaptation, remote sensing, 3D GIS, computer vision
Abstract: In the face of rapid urban densification and growing climate pressures, integrating green spaces (GS) into urban development is vital for creating livable, sustainable cities. Yet, greenery is often undervalued in planning processes—seen as costless or secondary—and equitable access to GS remains largely overlooked. While there is growing interest among investors and developers in nature-based solutions, including marketing green views as desirable features, it remains unclear whether such interest reflects a genuine commitment to urban greening or a strategic use of existing greenery to boost property appeal. This raises important questions about how green infrastructure is perceived, valued, and implemented, particularly in the context of market forces and social equity. The GreenLAND project responds to this challenge by adopting an interdisciplinary approach grounded in environmental justice, urban political ecology, and hedonic pricing. It investigates how residents and real estate developers perceive and incorporate greenery, and how these practices affect social equity, housing market dynamics, and climate adaptation—especially urban heat mitigation. The project’s core objectives are to: (1) assess the distribution of green spaces across socio-economic groups and evaluate whether current urban planning aligns with residents’ needs; (2) analyze developers’ motivations and practices in including greenery, and whether these reinforce spatial inequalities; (3) estimate the socio-economic value of urban green spaces; and (4) evaluate how vegetation type and 3D spatial configuration influence green spaces’ effectiveness in reducing urban heat. To address these objectives, GreenLAND uses a mixed-methods design that integrates advanced spatial analysis with social science methods.
High-resolution remote sensing techniques—including voxel-based 3D vegetation mapping using LiDAR and green view indices derived from computer vision—quantify the volume and visibility of greenery. These objective indicators are combined with resident surveys and qualitative interviews with developers and architects to understand how green spaces are experienced, valued, and implemented. Hedonic pricing models are also used to assess the financial value of greenery in the housing market. This interdisciplinary triangulation ensures that the findings reflect both measurable spatial patterns and the subjective meanings attributed to green spaces, offering a richer, more holistic view than single-method approaches. Luxembourg provides a strategic and globally relevant case study. As a high-income country with intense development pressures and a national commitment to “no net land take” by 2050, it exemplifies the tensions between urban densification and green space preservation. The country’s high property prices and strong sustainability policies create an ideal context for examining how planning frameworks, market incentives, and governance practices shape the implementation of green infrastructure. Insights from this setting will be transferable to other urban areas facing similar challenges. By quantifying the role of vegetation structure in mitigating urban heat and identifying the market value of green amenities, GreenLAND aims to support evidence-based climate adaptation and land-use planning. At the same time, by revealing patterns of unequal access to green space, the project will inform more socially just planning practices. Through its methodological innovation and integrated approach, GreenLAND will generate actionable insights for urban planners and policymakers, while also contributing to academic debates on equitable urban development in the face of the challenges of climate change.
Acronym: VITRENZA
Title: Fundamental studies on enzymatic degradation mechanisms of vitrimers
Host institution: Luxembourg Institute of Science and Technology (LIST)
PI: Pierre Verge
National Research Priority: Sustainable and Responsible Development
CORE International: ARIS
FNR funding: € 648,000
Keywords: Vitrimers, Enzymatic degradation, Polybenzoxazines, Sustainable materials
Abstract: Vitrimers have recently emerged as alternatives to thermosets. They have similar 3D structures resulting from the crosslinking process, allowing for high thermal and mechanical properties. However, in contrast to thermosets, which are composed of irreversible bonds, vitrimers stand out because they bear reversible covalent bonds that can be triggered under a specific stimulus. These dynamic bonds have the advantage of exchanging when the material is exposed above its topology freezing transition temperature, or, in other words, its reworking temperature. Since their discovery by Leibler in 2011, vitrimers have steadily gained popularity, and they are now found in multiple applications. Degradability, recyclability, and self-healing have been demonstrated multiple times. Vitrimers designed to be applied in structural applications—for instance, benzoxazine-based vitrimers—have an extremely high crosslinking density. After their chemical or mechanical recycling, benzoxazine-based vitrimers result in a fine powder of particles owing to the permanence of crosslinking nodes that cannot be degraded, and to their ratio with dynamic bonds. Monomers or oligomers are not recovered after such a process. Thus, their degradation by-products are mainly downcycled rather than reused in similar manufacturing processes, and alternative approaches to conventional chemical and mechanical recycling should be considered. Recently, enzymatic degradation has emerged as a more environmentally friendly method for polymer recycling. Some enzymes, including hydrolases and esterases, have demonstrated effectiveness in decomposing synthetic polymers with ester bonds, such as polyethylene terephthalate, into monomers or oligomers that can be reused in the same production process. Considering that vitrimers relying on transesterification exchanges are composed of ester bonds, similarly to such polymers, this approach could be considered.
However, it remains largely unexplored, as challenges similar to those of thermosets must be overcome: vitrimers for structural applications have a very high crosslinking density, which restricts enzyme access to degradable sites and can reduce enzyme activity as well. To address this issue, specific solvents can be used that increase the susceptibility of the polymers to enzymatic degradation by expanding the network meshes, facilitating access to degradable sites, and increasing the activity of enzymes. In this regard, deep eutectic solvents (DES) show potential as a promising and environmentally benign alternative for reaction media. The goal of this project is to investigate the enzymatic degradation of vitrimers facilitated by DES, and the reuse of their degradation by-products. We aim to identify the chemical structures of vitrimers that are best suited for promoting enzymatic degradation. Based on these findings, new vitrimers will be developed and subjected to enzymatic degradation facilitated by DES. In this approach, DES will offer multiple advantages: they will be used to pre-degrade vitrimers, to swell their networks, and to enhance enzyme activity and recovery. With this approach, we aim to reconcile functional properties that are typically considered mutually exclusive—e.g., high performance and degradability—in a manner that is environmentally viable. The project consortium will bring together two key experts: the Luxembourg Institute of Science and Technology (Luxembourg), with expertise in benzoxazine-based vitrimers, and the National Institute of Chemistry (Slovenia), specializing in enzymatic degradation and deep eutectic solvents.
Acronym: ReStart
Title: Resource Efficient Steel – recycled aggregate concrete – timber Hybrid Slim Floor Systems
Host institution: University of Luxembourg
PI: Qiuni Fu
National Research Priority: Sustainable and Responsible Development
CORE Junior: Yes
FNR funding: € 704,000
Keywords: Steel-concrete composite, Recycled aggregate concrete, Puzzle-shaped dowel, Nonlinear resistance, Slim floor
Abstract: The construction industry is a major contributor to global resource consumption and CO₂ emissions, making it a prime target for circular economy strategies. This project, ReStart, introduces a novel hybrid slim floor system integrating steel, recycled aggregate concrete (RAC), and timber to enhance resource efficiency and reduce carbon emissions. The system utilizes puzzle-shaped composite dowels to connect RAC slabs with steel beams and incorporates timber as both permanent formwork and part of timber-concrete composite slabs. The overarching aim is to enable a reliable and cost-effective nonlinear design of slim composite beams using RAC, with particular attention to time-dependent effects due to creep and shrinkage. The research is structured into three main objectives: (1) characterizing the shear behaviour of steel-RAC composite dowels and developing a mechanics-based resistance model; (2) investigating the impacts of RAC creep on dowel performance and ultimate shear resistance under varying stress levels and loading sequences; and (3) assessing the strain-limited plastic resistance of composite beams using steel-RAC composite dowels at the cross-section and member levels, considering the influence of compact cross-sections with a steel T-profile and long-term effects. The project will deliver new knowledge and analytical tools — including a shear resistance model for steel-RAC composite dowels, a reduction factor for long-term effects, and a moment reduction factor at the cross-section level — along with design recommendations for strain-limited plastic (nonlinear) design of composite beams using composite dowels, all of which will support the development of CEN technical specifications and promote sustainable applications in structural engineering.
Acronym: Ge2Raw
Title: Combined Geochemistry and Ecotoxicology for Critical Raw Materials environmental (risk) assessment
Host institution: Luxembourg Institute of Science and Technology (LIST)
PI: Kahina Mehennaoui
National Research Priority: Sustainable and Responsible Development
CORE Junior: Yes
FNR funding: € 620,000
Keywords: Ecotoxicology, Geochemistry, Gammarus fossarum, Rare Earth Elements, Colloids
Abstract: The European Union (EU) is committed to sustainable development and green innovation, recognizing the critical role of critical raw materials (CRM), including rare earth elements (REE), in advancing renewable energy, electromobility, and high-tech applications. However, the EU’s dependence on REEs remains precarious, and their increasing anthropogenic use has led to environmental concerns, particularly regarding their accumulation in aquatic ecosystems. These concerns are further exacerbated by climate change, which alters aquatic system functioning and pollutant mobility, potentially intensifying their toxicological impact on organisms. To address these challenges, Ge2Raw investigates the geochemical behavior and ecotoxicological effects of REE in freshwater ecosystems, considering temperature fluctuations and water turbidity as key environmental stressors. This project moves beyond conventional ecotoxicology approaches by integrating geochemistry, hydrology, molecular biology and ecotoxicology, to elucidate the mechanisms governing REE bioaccumulation and toxicity in aquatic organisms. A particular focus is placed on colloidal REE species, whose role in REE transport, bioavailability, and toxicity remains poorly understood. By characterizing the interactions between REE and colloids, Ge2Raw aims to refine environmental fate models and improve predictions of REE exposure risks. Building on previous research, Ge2Raw will fill key methodological gaps and provide novel insights into the environmental impact of REE on Gammarus fossarum, a sentinel species for freshwater ecosystem health. Preliminary studies indicate that REE exposure significantly affects osmoregulation, behavior, and life history traits of G. fossarum, with potential cascading effects on ecosystem functioning. However, the underlying geochemical processes influencing REE bioaccumulation and their mechanisms of toxicity remain largely unexplored.
This project will address these knowledge gaps by establishing a Multiscale Biological Response Framework that links molecular, physiological, behavioral, and population-level effects of REE exposure under realistic environmental conditions. The Ge2Raw project will explore the complex interplay between the geochemical dynamics of REE in river systems and their ecological effects on aquatic organisms, focusing on the Alzette River basin in Luxembourg. By combining laboratory and environmental approaches, Ge2Raw will ensure the ecological relevance of its findings, improve risk assessment frameworks for REE contamination, contribute to sustainable resource management, and support the development of regulatory guidelines for REE and colloidal pollutants under changing environmental conditions.
Acronym: ENER-G
Title: Empowering Networks of E-buses for Resilient and Green mobility
Host institution: University of Luxembourg
PI: Francesco Viti
National Research Priority: Sustainable and Responsible Development
FNR funding: € 732,000
Keywords: Electric Buses; Vehicle-to-Grid; Scheduling Optimization; Game Theory
Abstract: The exponential growth of electric vehicles, including electric buses (e-buses), raises challenges for energy production and costs, grid capacity, and operational planning. With Luxembourg set to electrify its entire bus fleet by 2030, a coordinated strategy is essential to ensure this transition supports both sustainable mobility and energy resilience. The ENER-G project aims to develop innovative solutions for a successful transition to large-scale e-bus deployment, proposing an integration of vehicle-to-grid (V2G) technology with local solar energy generation and battery storage systems (BSS). While prior research has explored V2G applications for private vehicles, their deployment in public transport remains underexplored, especially when considered alongside depot-level solar energy generation and grid-connected storage. ENER-G addresses this gap by developing optimization models and algorithms for V2G-enabled electric bus systems, considering local photovoltaic (PV) generation, BSS, and dynamic electricity pricing. At the strategic level, the project will develop optimization models for long-term infrastructure planning, including decisions on fleet size, PV and V2G charging infrastructure planning, and BSS capacity. At the tactical level, it will formulate day-ahead scheduling models for vehicle dispatch and charging, prioritizing cost-efficient use of renewable energy while accounting for constraints such as time-of-day energy prices and battery degradation. At the operational level, the project will develop real-time decision-support algorithms accounting for trip delays and renewable variability. A core innovation of ENER-G lies in its development of game-theoretical incentive mechanisms to align the objectives of transport and energy actors. These incentives ensure that V2G services are economically viable for bus operators while serving grid stability goals.
The project will design dynamic pricing strategies and compensation schemes to incentivize the provision of ancillary grid services. These mechanisms will capture the strategic interactions between public transport operators and energy providers. The developed methodology will be validated using operational and open data with the support of ENOVOS, Luxembourg’s leading energy player, and Sales-Lentz, the country’s biggest private bus operator. By integrating energy and mobility domains through V2G-enabled e-bus systems, ENER-G contributes to Luxembourg’s sustainable development goals and the broader EU Green Deal. The expected outcomes include enhanced economic viability of e-buses, increased uptake of renewables, and improved coordination between energy and transport actors. The project will provide useful decision support tools for public transport operators in their charging infrastructure planning and fleet management. It will enhance the viability of new business and management solutions for public transport electrification by integrating and aligning the objectives of energy supply chain actors through smart V2G, local renewable energy generation, and BSS.
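The day-ahead charging layer described above can be illustrated with a deliberately simplified cost-minimal plan; prices and vehicle figures are invented, and the sketch ignores battery degradation, V2G discharging and grid constraints that the project's models would include:

```python
# Hypothetical day-ahead charging schedule for one e-bus: meet the energy
# requirement at minimum cost by filling the cheapest depot hours first.
def day_ahead_schedule(prices, available_hours, energy_kwh, charger_kw):
    """Greedy cost-minimal plan (optimal when hours are independent)."""
    plan = {}
    remaining = energy_kwh
    for hour in sorted(available_hours, key=lambda h: prices[h]):
        if remaining <= 0:
            break
        dose = min(charger_kw, remaining)  # kWh deliverable in this hour
        plan[hour] = dose
        remaining -= dose
    if remaining > 0:
        raise ValueError("depot window too short to deliver required energy")
    return plan

# Overnight depot window 22:00-05:00, hourly EUR/kWh prices (invented)
prices = {22: 0.30, 23: 0.22, 0: 0.15, 1: 0.12, 2: 0.11, 3: 0.14, 4: 0.25}
plan = day_ahead_schedule(prices, prices.keys(), energy_kwh=250, charger_kw=100)
cost = sum(prices[h] * kwh for h, kwh in plan.items())
print(plan, round(cost, 2))  # {2: 100, 1: 100, 3: 50} 30.0
```

Because each hour here is independent, filling cheapest hours first is exactly optimal; once battery dynamics and dispatch constraints enter, the project's full optimization models are needed instead.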
Acronym: OPTMONITOR
Title: Optimal Monitoring and Predictive Simulation-Based Modeling for Climate-Resilient Slope Stability
Host institution: University of Luxembourg
PI: Arash Alimardani Lavasan
National Research Priority: Sustainable and Responsible Development
FNR funding: € 427,000
Keywords: Monitoring optimisation, sustainable geostructures, risk management, predictive modeling, climate resilience
Abstract: Reliable prediction and mitigation of landslide risks have become increasingly crucial under evolving climate change conditions, characterized by heightened extreme weather scenarios. Effective monitoring systems placed strategically in high-risk regions are essential not merely for reactive purposes but especially to anticipate how climatic factors, such as intense rainfall, prolonged drought, or significant temperature fluctuations, impact slope stability. Developing robust numerical models capable of accurately predicting slope behavior under these changing climatic conditions is vital for creating sustainable and resilient geostructures, thereby securing human lives and infrastructure. To ensure reliability, numerical models require validation through inverse analyses, utilizing field-measured data reflecting landslide-triggering conditions exacerbated by climate change. Optimization algorithms iteratively adjust model parameters to minimize discrepancies between observed field data and model predictions. However, the strategic placement and configuration of monitoring equipment, encompassing sensor type, location, and precision, still remain underexplored. An “Optimal Experimental Design” approach can significantly enhance cost-effectiveness and data accuracy, a concept still underutilized in civil and geotechnical engineering. This research project applies advanced mathematical methods, including global sensitivity analyses and Bootstrap techniques, to systematically optimize monitoring setups for slope stability assessment under climate-induced extreme weather conditions. The project focuses explicitly on two well-monitored slopes in the Italian Alps and Apennines, vulnerable to climate change impacts. Numerical models of these slopes will be established and validated against long-term measurement datasets, enabling identification of critical soil and rock parameters influencing slope behavior.
Artificial datasets generated from numerical simulations will further facilitate parameter identification processes under various monitoring scenarios. Comparing the accuracy of identified parameters across different setups will directly inform the precision and reliability of slope stability predictions, ultimately supporting the design and maintenance of sustainable and climate-resilient geotechnical infrastructure.
