YAGHMA's vision is to embed impact assessment into the DNA of every company, because one cannot improve what one cannot assess. At YAGHMA, we work towards a better use of new technologies. Currently, we focus on AI and operate mainly in the healthcare sector as well as in sectors affected by the energy transition. We strive to make the impact of tech projects visible before undesired consequences become unavoidable. This mission is driven by a fundamental belief in the value of all humans, the importance of free and caring communities, and the positive opportunities new technologies bring.
YAGHMA develops, customises and refines impact monitoring systems that foresee the intended and unintended ethical, societal, legal, environmental and governance impacts of projects involving new technologies, in particular in the Digital Health arena. To provide a complete impact analysis, we combine deep knowledge of digital health technologies and their research and development cycle with materiality assessment, risk assessment and sector insights. To ensure that we remain at the forefront of research, we participate in and propose international research projects in which we develop, implement and evaluate application-specific non-technical impact assessment schemes. In doing so, YAGHMA helps companies and public stakeholders identify relevant impact areas, quantify performance indicators and monitor them throughout the research, innovation, development and deployment of new technologies and products.
Role within REALM
YAGHMA contributes to REALM by assessing the wider impact of the solutions used in the demonstrators. The YAGHMA AI Impact Assessment (AIIA) tool is used to identify unforeseen consequences of the developed technologies. The YAGHMA AIIA tool helps public health stakeholders, as well as health data processors and controllers, to make informed decisions and improve AI governance in the digital health ecosystem in line with ethical, legal and social measures. YAGHMA works on methods and tools to identify, prioritise and counter trustworthiness issues (e.g., explainability, bias and privacy) in multi-modal AI-based decision support for healthcare applications.