#BiomedicalAI

CSBJcsbj
2025-10-11

🧬 Can quantum intelligence unlock new ways to understand genetic mutations?

πŸ”— Quantum neural networks meet federated learning for DNA mutation prediction. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2025.09.036

πŸ“š CSBJ Quantum Biology & Biophotonics: csbj.org/qbio
CSBJcsbj
2025-10-07

πŸ€–πŸ©Ί Could transparency be the real prescription for ethical AI in healthcare?

πŸ”— AI Model Passport: Data and system traceability framework for transparent AI in health. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2025.09.041

πŸ“š CSBJ Smart Hospital: csbj.org/smarthospital
CSBJcsbj
2025-09-02

πŸ€– Can large language models (LLMs) reliably extract biomedical knowledge from scientific literature?

πŸ”— Developing foundations for biomedical knowledgebases from literature using large language models – A systematic assessment. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2025.07.042

πŸ“š CSBJ: csbj.org/
CSBJcsbj
2025-08-27

🧬 Will smaller, lighter AI models democratize computational biology?

πŸ”— Small, open-source text-embedding models as substitutes to OpenAI models for gene analysis. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2025.07.053

πŸ“š CSBJ: csbj.org/
CSBJcsbj
2025-08-18

🧬 Could the key to better genomic models lie in how we β€œread” sequences?

πŸ”— Tokenization and deep learning architectures in genomics: A comprehensive review. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2025.07.038

πŸ“š CSBJ: csbj.org/
2025-07-23

AI: Explainable Enough

β€œThey look really juicy,” she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.

Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.

Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you? That was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis, and definitely not if you couldn’t explain the details.

What the domain expert user doesn’t want:
– An explanation of how a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and to a doctor as well.

What the domain expert desires: 
– Help at the lowest level of detail that they care about. 
– An AI that identifies features A, B, and C, and conveys that when A, B, and C appear together, disease X is likely.

Most users don’t care how deep learning really works. So if you start giving them details like the IoU score of the object detection bounding box, or whether you used YOLO or R-CNN, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It’s also bad to go to the other extreme. If the AI just states the diagnosis for the whole image, the AI might be right, but the user does not get to participate in the process. Not to mention the regulatory risk goes way up.
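As an illustration of that middle ground, here is a minimal sketch of turning raw detector output into a user-facing view: keep the label and the outline, drop the internal metrics. All field names, thresholds, and example values below are invented for illustration; they are not from any real product or library.

```python
# Sketch: reduce raw object-detection output to an "explainable enough" view.
# The detection dicts and the 0.5 threshold are hypothetical examples.

RAW_DETECTIONS = [
    {"label": "mitotic figure", "score": 0.91, "iou": 0.78, "box": (34, 50, 60, 82)},
    {"label": "necrosis",       "score": 0.42, "iou": 0.55, "box": (110, 20, 150, 64)},
]

def to_user_view(detections, min_score=0.5):
    """Keep the plain-language label and the outline; drop score, IoU, etc."""
    return [
        {"finding": d["label"], "outline": d["box"]}
        for d in detections
        if d["score"] >= min_score  # filter silently; don't surface the number
    ]

# Only the confident finding remains, as a label plus an outline to draw.
print(to_user_view(RAW_DETECTIONS))
```

The point of the sketch is what it withholds: the thresholding still happens, but the user sees only the level of detail they asked for.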

This applies beyond images; consider LLMs. No one with any expertise likes a black box. Why do LLMs today generate code instead of directly doing the thing the programmer asks for? Because the programmer wants to ensure that the code β€œworks”, and they have the expertise to figure out if and when it goes wrong. It’s the same reason vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So, in a Betty Crocker cake-mix kind of way, let the user add the egg.

Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to spit out every detail. Generating high-quality data at that just-right level is very difficult and expensive. Do it right, however, and the effort pays off. The outcome is an AI-human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence towards the final outcome. The deep learning part is still a black box, but the user doesn’t mind because you aid their thinking.
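One way to read that two-stage idea is a report that lists the mid-level findings first and frames the final call as following from them. The sketch below uses invented feature names (A, B, C) and an invented disease label; the mapping table stands in for whatever model produces the final suggestion.

```python
# Sketch of a two-stage, explainable-enough report (all names invented).
# Stage 1: mid-level findings the user can verify themselves.
# Stage 2: a suggestion framed as following from those findings.

FEATURE_TO_DISEASE = {
    frozenset({"A", "B", "C"}): "disease X",
}

def report(features_found):
    """Render findings first, then the suggestion they support."""
    suggestion = FEATURE_TO_DISEASE.get(frozenset(features_found),
                                        "no confident suggestion")
    lines = [f"Found feature {f}" for f in sorted(features_found)]
    lines.append(f"Together these are consistent with: {suggestion}")
    return "\n".join(lines)

print(report({"A", "B", "C"}))
```

Because the user sees each finding before the conclusion, they can disagree at the feature level rather than having to accept or reject a bare diagnosis.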

I’m excited by some new developments like REX, which retrofit causal explanations onto standard deep learning models. As performance improves, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like β€˜juicy’.

#AI #AIAdoption #AICommunication #AIExplainability #AIForDoctors #AIInHealthcare #AIInTheWild #AIProductDesign #AIUX #artificialIntelligence #BettyCrockerThinking #BiomedicalAI #Business #CausalAI #DataProductDesign #DeepLearning #ExplainableAI #HumanAIInteraction #ImageAnalysis #LLMs #MachineLearning #StartupLessons #statistics #TechMetaphors #techPhilosophy #TrustInAI #UserCenteredAI #XAI

CSBJcsbj
2025-07-03

🧬 Are we ready to let algorithms decide which human samples are worth saving for science?

πŸ”— Automated sample annotation for diabetes mellitus in healthcare integrated biobanking. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2024.10.033

πŸ“š CSBJ Smart Hospital: csbj.org/smarthospital
CSBJcsbj
2025-06-18

πŸ”Ž How can AI-powered tools help scientists uncover hidden connections in disease research?

πŸ”— Darling (v2.0): Mining disease-related databases for the detection of biomedical entity associations. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2025.06.025

πŸ“š CSBJ: csbj.org/
CSBJcsbj
2025-06-09

🧬 Can AI fix the chaos in biological sample data?

πŸ”— Annotation of biological samples data to standard ontologies with support from large language models. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2025.05.020

πŸ“š CSBJ: csbj.org/
CSBJcsbj
2025-05-27

πŸ€– How resilient are deep learning models to real-world imperfections in microscopy?

πŸ”— Practical guidelines for cell segmentation models under optical aberrations in microscopy. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2024.09.002

πŸ“š CSBJ Quantum Biology and Biophotonics: csbj.org/qbio
CSBJcsbj
2025-04-07

🧬Can we trust AI in bioinformatics if we don’t understand how it makes decisions?

As AI becomes central to bioinformatics, the opacity of its decision-making remains a major concern.

πŸ”— Demystifying the Black Box: A Survey on Explainable Artificial Intelligence (XAI) in Bioinformatics. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2024.12.027

πŸ“š CSBJ: csbj.org/
Winbuzzerwinbuzzer
2025-03-31

Google DeepMind has released TxGemma, a set of open-source AI models designed to accelerate drug discovery using Gemini-powered workflows.

winbuzzer.com/2025/03/31/googl
