Internship / Ph.D. proposal (w. J-B Fermanian):
"Exploring Conformal Prediction in Long-Tail Scenarios"
Come and work on @plantnet.bsky.social data with us!
🚨 New blog post 🚨
"**Optimal prediction sets for plant identification: an interactive guide**"
https://josephsalmon.eu/blog/long-tail/
Joint work with Tiffany Ding and Jean-Baptiste Fermanian.
#longtail
#PlantNet
#AppliedConformalPrediction
#ConformalPrediction
#statstab #357 Uncertainty Estimation with Conformal Prediction
Thoughts: Haven't parsed this properly yet, but it may be an interesting discussion point. How best to quantify uncertainty?
#conformalprediction #bayesian #confidenceintervals #uncertainty
#statstab #223 Conformal predictions w/ {marginaleffects}
Thoughts: Sometimes you need a range of likely future values. To get a distribution-free prediction interval, use conformal methods.
Some thoughts on #conformalprediction
for #timeseries with Xiaoqian Wang https://robjhyndman.com/publications/cpts.html
In the last couple of weeks I've been learning about #ConformalPrediction, a family of algorithms to measure the uncertainty of predictions made by #MachineLearning models.
Here are a few links to get you started:
- CP course by @ChristophMolnar https://mindfulmodeler.substack.com/p/week-1-getting-started-with-conformal
- Multi-class notebook (in Spanish) https://nbviewer.org/github/MMdeCastro/Uncertainty_Quantification_XAI/blob/main/UQ_multiclass.ipynb
- MAPIE library: https://mapie.readthedocs.io/en/latest/index.html
- TorchCP library: https://github.com/ml-stat-Sustech/TorchCP
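The core recipe behind all of these resources fits in a few lines. Here is a minimal split-conformal sketch for regression; the toy data, least-squares "model", and 90% target are my own illustrative choices, not taken from any of the linked courses or libraries:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + Gaussian noise
n = 3000
x = rng.uniform(0, 1, n)
y = 2 * x + rng.normal(0, 0.1, n)

# Three-way split: fit the model, calibrate, then check coverage
x_tr, y_tr = x[:1000], y[:1000]
x_cal, y_cal = x[1000:2000], y[1000:2000]
x_te, y_te = x[2000:], y[2000:]

# "Model": ordinary least-squares line on the training split
slope, intercept = np.polyfit(x_tr, y_tr, 1)

# Nonconformity scores: absolute residuals on the calibration split
scores = np.abs(y_cal - (slope * x_cal + intercept))

# Conformal quantile targeting 90% marginal coverage
alpha = 0.1
n_cal = len(scores)
qhat = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                   method="higher")

# Interval for a new input: point prediction +/- qhat
x_new = 0.5
lo, hi = slope * x_new + intercept - qhat, slope * x_new + intercept + qhat

# Empirical coverage on held-out data (should be close to 90%)
coverage = np.mean(np.abs(y_te - (slope * x_te + intercept)) <= qhat)
```

The only assumption is exchangeability between calibration and test data; the model itself can be anything.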
Making the rounds again...
...Blackbox #MachineLearning models are now routinely used in high-risk settings, like medical diagnostics, which demand uncertainty quantification to avoid consequential model failures... #ConformalPrediction is a user-friendly paradigm for creating statistically rigorous uncertainty sets/intervals for the predictions of such models...
[1] https://arxiv.org/abs/2107.07511
[2] https://arxiv.org/abs/2106.06137
⏩ Accelerated analytics with Shapelets and conformal prediction
https://fediverse.tv/videos/watch/8c55336e-d713-4829-ac8f-9e4b0178e4bd
See you *today* at our March meetup: ⏩ Accelerated analytics with Shapelets and conformal prediction, this month at The Bridge
https://www.meetup.com/pydata-madrid/events/299749589/
Join us at 19:00! And afterwards, networking 🗣️
#PyDataMadrid #PyData #python #MachineLearning #ConformalPrediction #shapelets
The distinction between marginal and conditional coverage finally clicked for me. #conformalprediction provides the former but not the latter, and for many (most?) real-world use cases in ML one wants the latter.
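A toy numpy sketch (my own construction, not from the post) makes the gap visible: a split-conformal interval hits ~90% coverage on average, but over-covers where the noise is small and under-covers where it is large.

```python
import numpy as np

rng = np.random.default_rng(1)

# Heteroskedastic data: small noise for x < 0.5, large noise for x >= 0.5
n = 4000
x = rng.uniform(0, 1, n)
sigma = np.where(x < 0.5, 0.05, 0.5)
y = rng.normal(0.0, sigma)  # the true regression function is 0 everywhere

# Split conformal with the (correct) constant predictor f(x) = 0
x_cal, y_cal = x[:2000], y[:2000]
x_te, y_te = x[2000:], y[2000:]

alpha = 0.1
scores = np.abs(y_cal)  # absolute residuals as nonconformity scores
n_cal = len(scores)
qhat = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                   method="higher")

covered = np.abs(y_te) <= qhat
marginal = covered.mean()                # close to the nominal 90%
cond_low = covered[x_te < 0.5].mean()    # over-covers (near 100%)
cond_high = covered[x_te >= 0.5].mean()  # under-covers (well below 90%)
```

Marginal coverage averages over both regimes; conditional coverage would require the guarantee to hold in each one separately.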
If it sounds too good to be true...
DeepMind explores the uncertain truths of Artificial Intelligence
#AI,#GroundTruth,#Uncertainty,#DeepMind,#Google,#Annotations,#StatisticalModels,#ConformalPrediction,#MonteCarloCP,#SkinConditionClassification
https://kinews24.de/deepmind-erforscht-die-unsicheren-wahrheiten/
TIL of #conformalprediction, a way to assess the uncertainty of a prediction (from any algorithm, including from #machinelearning). It is used in research to make #autonomousdriving safer by predicting other agents' movements: https://www.youtube.com/watch?v=QvIJH4cZy3E
It does not require an expert model, but in exchange it needs a statistically representative dataset.
Why perform cross-validation (CV) in #MachineLearning? To estimate the generalization error of a trained predictor. This paper uses the idea of a #ProperLoss (called Q-class). It then covers CV, the bootstrap, and Mallows' covariance penalties. It also covers #ConformalPrediction, which is newly popular thanks to Emmanuel Candès' keynote at #NeurIPS 2022
https://doi.org/10.3390/stats4040063
The paper is also a good advertisement for Efron and Hastie's recent book.
🚀 #AWS Fortuna is skyrocketing! 🚀 Just a few days, and so many GitHub stars and forks! ⭐️
Fortuna supports #ConformalPrediction, #BayesianInference and other methods for #UncertaintyQuantification in #DeepLearning.
Try it out and let us know!
https://github.com/awslabs/fortuna
In collaboration with @cedapprox, @andrewgwils and team.
#uncertainty #neuralnetworks #bayesian #conformal #calibration #jax #flax #python #opensource #library #machinelearning #ai
Today, we open sourced Fortuna (https://github.com/awslabs/fortuna) a library for uncertainty quantification.
Deep neural networks are often overconfident and do not know what they don’t know. Quantifying the uncertainty in the predictions they make will help deploy deep learning more responsibly and more safely.
#responsibleAI #ConformalPrediction #BayesianInference #UncertaintyQuantification #deeplearning #opensource
Just shared a more hands-on guide for using the new package for #conformalprediction in #julia that I’ve been working on: https://github.com/pat-alt/ConformalPrediction.jl
“How to Conformalize a Deep Image Classifier” on TDS (https://towardsdatascience.com/how-to-conformalize-a-deep-image-classifier-14ead4e1a5a0) or my blog (https://www.paltmeyer.com/blog/posts/conformal-image-classifier/)
Thoughts and contributions welcome 🤗
One approach to conformal prediction in regression is to use quantile regression (pinball loss). One annoying thing about quantile regression is that if you estimate multiple quantiles, they can cross (and they really shouldn't). This paper proposes a method that prevents crossing (there are other papers that do so too), in particular for conformal prediction.
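The paper's non-crossing method is not sketched here, but plain conformalized quantile regression (CQR), with a crude sort-based patch for crossing, looks roughly like this; the toy data, sklearn gradient-boosting quantile models, and 90% target are my own illustrative choices:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Heteroskedastic toy data: noise grows with x
n = 3000
x = rng.uniform(0, 2, (n, 1))
y = np.sin(3 * x[:, 0]) + rng.normal(0, 0.2 + 0.2 * x[:, 0])

x_tr, y_tr = x[:1500], y[:1500]
x_cal, y_cal = x[1500:2250], y[1500:2250]
x_te, y_te = x[2250:], y[2250:]

# Two quantile regressors (pinball loss) for a nominal 90% band
alpha = 0.1
lo_m = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2,
                                 random_state=0).fit(x_tr, y_tr)
hi_m = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2,
                                 random_state=0).fit(x_tr, y_tr)

def band(xs):
    lo, hi = lo_m.predict(xs), hi_m.predict(xs)
    # Separately fit quantiles can cross; a crude fix is to sort each pair
    return np.minimum(lo, hi), np.maximum(lo, hi)

# CQR scores: signed distance of calibration points outside the band
lo_cal, hi_cal = band(x_cal)
scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)
n_cal = len(scores)
qhat = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                   method="higher")

# Conformalized band: widen (or shrink) both edges by qhat
lo_te, hi_te = band(x_te)
coverage = np.mean((y_te >= lo_te - qhat) & (y_te <= hi_te + qhat))
```

Sorting the pair restores a valid band but distorts the individual quantile estimates, which is exactly the kind of problem non-crossing methods address.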