#AI4Health

2026-01-26

#Forschung
AI models like AlphaFold3 are considered game changers in the structural elucidation of proteins – but how "law-abiding" are their predictions?
A study by the University of Basel (@unibasel) shows that even the most advanced systems frequently model protein–ligand interactions without accounting for physico-chemical principles. The article by Larissa Tetsch: laborjournal.de/editorials/341

#Laborjournal #LifeSciences #aktuell #Bioinformatik #AI4Health #Proteine #Pharmazie #Basel

Top: a robot against a red background, clasping its forehead.
Bottom: the title of the article linked in the post, "Gesetzeswidrige KI-Modelle" ("Law-breaking AI models"), together with its teaser: "AI has become indispensable in drug development. However, it does not take physico-chemical laws into account." To the left is an icon of a smartphone and a magazine page with the caption "Online lesen!" ("Read online!"). The Laborjournal logo sits in the lower right corner.
First section of the linked article.
UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2026-01-23

🎉 💡 New year, great news: many congratulations to our #AI4Health alumnus William Bolton, PhD, on his latest journal publication in The Lancet Digital Health on AI support for clinicians' antimicrobial prescribing!

Read the paper here: thelancet.com/journals/landig/

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-11-08

We were proud to see our #AI4Health researchers present their research on a major conference stage: MICCAI 2025

Vishal Jain, Fiona Kekwick, Sarah Cechnicka, Connor Daly, Arthur Lefebvre, Hadrien Reynaud (alumnus)

Read the story here:
blogs.imperial.ac.uk/ai4health

2025-10-14

The great unlearning: notes on the Empower Learners for the Age of AI conference

Artificial intelligence is forcing a reckoning not just in our schools, but in how we solve the world’s most complex problems. 

When ChatGPT exploded into public consciousness, the immediate fear that rippled through our institutions was singular: the corruption of process.

The specter of students, professionals, and even leaders outsourcing their intellectual labor to a machine seemed to threaten the very foundation of competence and accountability.

In response, a predictable arsenal was deployed: detection software, outright bans, and policies hastily drafted to contain the threat.

Three years later, a more profound and unsettling truth is emerging.

The Empowering Learners AI 2025 global conference (7-10 October 2025) offered a fascinating vantage point from which to observe how academics – albeit mostly white men from the well-resourced research centres of the Global North – are navigating these troubled waters.

The impacts of AI in education matter because, as the OECD's Stéphan Vincent-Lancrin explained: "performance in education is the learning, whereas in many other businesses, the performance is performing the task that you're supposed to do."

The problem is not that AI will do our work for us.

The problem is that in doing so, it may cause us to forget how to think.

This is not a distant, dystopian fear.

It is happening now.

A landmark study presented by Vincent-Lancrin delivered a startling verdict: students who used a generic, answer-providing chatbot to study for a math exam performed significantly worse than those who used no AI at all.

The tool, designed for efficiency, had become a shortcut around the very cognitive struggle that builds lasting knowledge.

Jason Lodge of the University of Queensland captured the paradox with a simple analogy.

“It’s like an e-bike,” he explained. “An e-bike will help you get to a destination… But if you’re using an e-bike to get fit, then getting the e-bike to do all the work is not going to get you fit. And ultimately our job… is to help our students be fit in their minds”.

This phenomenon, dubbed “cognitive offloading,” is creating what Professor Dragan Gasevic of Monash University calls an epidemic of “metacognitive laziness”.

Metacognition – the ability to think about our own thinking – is the engine of critical inquiry.

Yet, generative AI is masterfully engineered to disarm it.

By producing content that is articulate, confident, and authoritative, it exploits a fundamental human bias known as “processing fluency,” our tendency to be less critical of information that is presented cleanly. 

“Generative AI articulates content… that basically sounds really good, and that can potentially disarm us as the users of such content,” Gasevic warned.

The risk is not merely that a health worker will use AI to draft a report, but that they will trust its conclusions without the rigorous, critical validation that prevents catastrophic errors.

Empower Learners for the Age of AI: the human algorithm

If AI is taking over the work of assembling and synthesizing information, what, then, is left for us to learn and to do?

This question has triggered a profound re-evaluation of our priorities.

The consensus emerging is a radical shift away from what can be automated and toward what makes us uniquely human.

The urgency of this shift is not just philosophical.

It is economic.

Matt Sigelman, president of The Burning Glass Institute, presented sobering data showing that AI is already automating the routine tasks that constitute the first few rungs of a professional career ladder.

“The problem is that if AI overlaps with… those humble tasks… then employers tend to say, well, gee, why am I hiring people at the entry level?” Sigelman explained.

The result is a shrinking number of entry-level jobs, forcing us to cultivate judgment and adaptive skills from day one.

This new reality demands a focus on what machines cannot replicate.

For Pinar Demirdag, an artist and co-founder of the creative AI company Cuebric, this means a focus on the “5 Cs”: Creativity, Curiosity, Critical Thinking, Collective Care, and Consciousness.

She argues that true creativity remains an exclusively human domain. “I don’t believe any machine can ever be creative because it doesn’t lie in their nature,” she asserted.

She believes that AI is confined to recombining what is already in its data, while human creativity stems from presence and a capacity to break patterns.

This sentiment was echoed by Rob English, a creative director who sees AI not as a threat, but as a catalyst for a deeper humanity.

“It creates an opportunity for us to sort of have to amplify the things that make us more human,” he argued.

For English, the future of learning lies in transforming it from a transactional task into a “lifestyle,” a mode of being grounded in identity and personal meaning.

He believes that as the value of simply aggregating information diminishes, what becomes more valuable is our ability “to dissect… to interpret or to infer”.

In this new landscape, the purpose of learning – whether for a student or a seasoned professional – shifts from knowledge transmission to the cultivation of human-centric capabilities.

It is no longer enough to know things.

The premium is on judgment, contextual wisdom, ethical reasoning, and the ability to connect with others – skills forged through the very intellectual and social struggles that generic AI helps us avoid.

Empower Learners for the Age of AI: Collaborate or be colonized

While the pedagogical challenge is profound, the institutional one may be even greater.

For all the talk of disruptive change, the current state in many of our organizations is one of inertia, indecision, and a dangerous passivity.

As George Siemens lamented after several years spent trying to move the needle at higher-education institutions, leadership has been "too passive," risking a repeat of the era when institutions outsourced online learning to "OPMs" (online programme managers), corporations that did not share their values: "I'm worried that we're going to do the same thing with AI, that we're just going to sit on our hands, leadership's going to be too passive… and the end result is we're going to be reliant down the road on handing off the visioning and the capabilities of AI to external partners."

The presidents of two of the largest nonprofit universities in the United States, Dr. Mark Milliron of National University and Dr. Lisa Marsh Ryerson of Southern New Hampshire University, offered a candid diagnosis of the problem.

Ryerson set the stage: “We don’t see it as a tool. We see it as a true framework redesign for learning for the future.” 

However, before any institution can deploy sophisticated AI, it must first undertake the unglamorous, foundational work of fixing its own data infrastructure.

“A lot of universities aren’t willing to take two steps back before they take three steps forward on this,” Dr. Milliron stated. “They want to jump to the advanced AI… when they actually need to go back and really… get the basics done”.

This failure to fix the “plumbing” leaves organizations vulnerable, unable to build their own strategic capabilities.

Such a dynamic is creating what keynote speaker Howard Brodsky termed a new form of “digital colonialism,” where a handful of powerful tech companies dictate the future of critical public goods like health and education.

His proposed solution is for institutions to form a cooperative, a model that has proven successful for over a billion people globally.

“I don’t believe at the current that universities have a seat at the table,” Brodsky argued. “And the only way you get a seat at the table is scale. And it’s to have a large voice”.

A cooperative would give organizations the collective power to negotiate with tech giants and co-shape an AI ecosystem that serves public interest, not just commercial agendas.

Without such collective action, the fear is that our health systems and educational institutions will become mere consumers of technologies designed without their input, ceding their agency and their future to Silicon Valley.

The choice is stark: either become intentional builders of our own solutions, or become passive subjects of a transformation orchestrated by others.

The engine of equity

Amid these profound challenges, a powerfully optimistic vision for AI’s role is also taking shape.

If harnessed intentionally, AI could become one of the greatest engines for equity in our history.

The key lies in recognizing the invisible advantages that have long propped up success.

As Dr. Mark Milliron explained in a moment of striking clarity: “I actually think AI has the potential to level the playing field… second, third, fourth generation higher ed students have always had AI. They were extended families… who came in and helped them navigate higher education because they had a knowing about it.”

For generations, those from privileged backgrounds have had access to a human support network that functions as a sophisticated guidance system.

First-generation students and professionals in under-resourced settings are often left to fend for themselves.

AI offers the possibility of democratizing that support system.

A personalized AI companion can serve as that navigational guide for everyone, answering logistical questions, reducing administrative friction, and connecting them with the right human support at the right time.

This is not about replacing human mentors.

It is about ensuring that every learner and every practitioner has the foundational scaffolding needed to thrive.

As Dr. Lisa Marsh Ryerson put it, the goal is to use AI to “serve more learners, more equitably, with equitable outcomes, and more humanely”.

This vision recasts AI not as a threat to be managed, but as a moral imperative to be embraced.

It suggests that the technology’s most profound impact may not be in how it changes our interaction with knowledge, but in how it changes our access to opportunity.

Technology as culture

The debates from the conference make one thing clear.

The AI revolution is not, at its core, a technological event.

It is a pedagogical, ethical, and institutional one.

It forces us to ask what we believe the purpose of learning is, what skills are foundational to a flourishing human life, and what kind of world we want to build.

The technology will not provide the answers.

It will only amplify the choices we make.

As we stand at this inflection point, the most critical task is not to integrate AI, but to become more intentional about our own humanity.

The future of our collective ability to solve the world’s most pressing challenges depends on it.

Do you work in health?

As AI capabilities advance rapidly, health leaders need to prepare, learn, and adapt. The Geneva Learning Foundation’s new AI4Health Framework equips you to harness AI’s potential while protecting what matters most—human experience, local leadership, and health equity. Learn more: https://www.learning.foundation/ai.

References

Image: The Geneva Learning Foundation Collection © 2025

#AI4Health #ArtificialIntelligence #EmpowerLearnersForTheAgeOfAI #GeorgeSiemens #TheGenevaLearningFoundation

Empower Learners for the Age of AI conference
UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-09-26

Looking forward to welcoming our new AI4Health researchers Ania, John, Piotr, Dhruv, Ollie, Garry, Sadaf, Giorgia, Dillon, Paul, James, Ibrahim, Andreas, starting their PhD journey at our #AI4Health Centre for Doctoral Training this October.

Since 2019, the innovative AI research happening at the #AI4Health Centre has been transforming clinical practice, from cutting-edge algorithms to real-world deployment.

Welcome message; Imperial South Kensington Campus
UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-08-05

AI4Health Industry Day: From Algorithm to Bedside

- Exploring how AI is transforming clinical practice, from cutting-edge algorithms to real-world deployment.
- Expert talks from InterSystems, Oracle, British Standards Institution (BSI), Siemens Healthineers, European Health Data Space (EHDS)

📅 24th September, all day
📍City and Guilds Building, Imperial College London, South Kensington Campus

Register here: lnkd.in/eHv65yGd

#AI4Health #ClinicalAI #HealthcareInnovation #DigitalHealth

2025-06-10

While tuberculosis is treatable, it still kills 1.3M people worldwide every year, with sub-Saharan Africa bearing a heavy share of the burden.

EPFL + CHUV have teamed up to fight back with Ultr-AI: an AI-powered lung ultrasound tool that screens for TB.

📱 Portable. Affordable. Works with a smartphone.

📖 ai.epfl.ch/ai-making-it-easier

#AI4Health

https://ai.epfl.ch/ai-making-it-easier-to-diagnose-tuberculosis-in-sub-saharan-africa/
UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-06-03

May 2025. The historic city of York was the hub of AI innovation: the 2025 edition of the Joint Conference of the UKRI AI for Health Centres brought together the UK's leading Centres for Doctoral Training (CDTs) at the intersection of artificial intelligence and medicine. More than 90 PhD researchers showcased cutting-edge research on topics ranging from explainable AI in oncology and wearable neurotech to machine learning for diagnostics and federated learning for disease detection. #AI4Health

Gaya Mathusuthan's Talk
2025-05-08

Computational modeling in biology is now mature, shifting from universal models to digital twins & virtual cohorts for in silico trials. New publication by our partner Axel Loewe from KIT Institute of Biomedical Engineering:
Loewe, A., Hunter, P., Kohl, P. (2024). Computational modelling of biological systems now and then: revisiting tools and visions from the beginning of the century.
zenodo.org/records/15363208

#biomed_research #DigitalTwin #InSilicoTrials #AI4Health @AnnaJacyszyn @Feelix

Figure 1. Synergy of mechanistic and statistical (data-driven) models. Reproduced from Corral-Acero et al. [12] under the Creative Commons Attribution License 4.0.

The advent of new experimental methods and the rise of machine learning [8–11] have introduced an apparent dichotomy between mechanistic and data-driven approaches. Data-driven models seem to navigate a cloudy summit, while mechanistic models dwell in the village. Both possess the potential for integration, as data-driven models can be hypothesis-proffering (mountain-to-village information transfer), while mechanistic models, especially when bridging multiple scales (temporal, spatial, species, disease, etc.), can offer relevant data input (village-to-mountain information transfer). With the latter being built upon first principles, they are more likely to generalize well and respect fundamental laws of physics. The former, however, are directly linked to real-world observations and thus more likely to capture important phenomena of in vivo (patho-)physiology. Looking to the future, we need to integrate data-driven and mechanistic modelling approaches, as visualized in figure 1, more systematically in order to use the full potential of both.
UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-05-07

Exciting event with Imperial Global USA: #AI4Health Centre Director Professor A. Aldo Faisal chaired a panel with experts from Stanford Health Care, UCSF, and Sanofi, exploring how global ecosystems are shaping the future of healthcare.

Great to see AI4Health innovations bridging Imperial, London and the Bay Area!

#Imperial Global USA, #FaisalLab, Center for Digital Health Innovation at UCSF, #Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), #UKRI

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-03-28

📢 This is an incredibly exciting time to be in AI & healthcare at Imperial. This week #AI4Health researchers showcased their work and networked with MedTech contacts...
- Deep-learning EEG analysis for brain stimulation in Parkinson's disease
- An AI platform for learning to laugh
- An AI clinician for the treatment of paediatric sepsis
- Enhancing stroke care: wearable technology for behavioural insights
- BehaviourGPT: large behaviour models to measure health state in neurological diseases

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-03-28

A team from Imperial, under Professor Faisal's leadership, is driving a world-first initiative to turn the UK's National Health Service (NHS) data libraries into an AI factory. By building a specialised AI model for healthcare, the aim is to match the intelligence of leading AI models while applying their capabilities to medical research, diagnostics, and treatment. The impact of Nightingale AI could be far-reaching...

🔎 Find out more here: lnkd.in/dpA6d65F

#AI4Health #NightingaleAI

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-03-06

Today our #AI4Health cohorts participated in the Active Bystander Training, reinforcing our commitment to fostering a positive, respectful, & inclusive culture across our Centre and networks. From supervisors to professional staff and students, everyone is leading by example, promoting the idea that a diverse and inclusive culture is essential to becoming better (AI) researchers, leaders and innovators. A big thank you to The Active Bystander Company for delivering an engaging experience. #speakup

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2025-03-03

#AI4Health researcher Lucille Cazenave & Team led a teaching workshop on Research Methods & Applied Statistics to the students of the Human and Biological Robotics Master of Science, Imperial College London; Lucille shared that "It was such a great teaching experience, where we went through both theoretical content and a hands-on tutorial." Amazing work.

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2024-12-20

Season's greetings from our AI for Healthcare Centre; looking forward to more #AI4Health ventures in 2025

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2024-12-05

#AI4Health Centre presented at ☕ Imperial Science Café

We were delighted to be invited by the Imperial College London Science Café to lead an evening of talks about AI research for healthcare, with a focus on cancer. Thank you, Kelly Gleason, Imperial's Cancer Research UK (CRUK) Lead Nurse, for giving our #AI4Health researchers a platform to share their innovative AI research with patients. 💡 What truly stood out was the collaborative spirit and shared vision for better health outcomes.

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2024-11-27

Congratulations to our #AI4Health Director Professor Aldo Faisal on his appointment as Co-Director of the School for Convergence Science in Human and Artificial Intelligence @imperialcollege

imperial.ac.uk/news/258734/imp

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2024-11-21

📢 Masterclass: Clinical Academic Training Office (CATO), 19 Nov 2024

CATO runs a number of events to support the development of clinical researchers at @imperialcollege's Academic Health Science Centre, and PhD researchers from our #AI4Health Centre showcased AI innovations and the impact they have on healthcare

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2024-11-19

Exciting Week at #MEDICA 2024!
Our Centre showcased innovative AI solutions aimed at transforming healthcare. PhD researchers Ruoyu Hu and Oskar Fraser-Krauss supported the delivery by talking about their projects: Oskar works on AI tools for the early detection of antimicrobial resistance in hospitals, and Ruoyu on an AI platform for learning to laugh

Let’s keep the conversation going!
#AIInnovation #AI4Health

UKRI AI for Healthcare Centres (@AI4Health@sigmoid.social)
2024-10-28

Self-paced training for clinical researchers – Introduction to AI for Clinical Researchers: in 2022, the #AI4Health Centre team developed an online course on AI in partnership with the NIHR (National Institute for Health and Care Research) Clinical Research Network (CRN) and the Institute of Global Health Innovation (IGHI); it aims to build awareness and knowledge of AI among professionals developing and carrying out research.

nihr.ac.uk/news/artificial-int
