RE: https://hachyderm.io/@loleg/116041701680591995
Collected impressions of the #OpenSourceLLM summit on my personal blog https://log.alets.ch/121/
The workshop sessions and panels rounded off an excellent day, with thanks to the EPFL AI Center team for organising, and to fellow participants for the #ShareEverything vibes of #OpenSourceLLM
Have everything well documented. Use CI. Work closely together, use resources fairly, and continuously improve shared capabilities across the Swiss AI Initiative. Joost VandeVondele (CSCS) at #OpenSourceLLM
A #Reachy greets visitors at #OpenSourceLLM #EPFL - leading to reflections on the multiple modalities, disruptive industries, and historic experiences of communications technology. See also https://hachyderm.io/@loleg/116040818446611641
Running with Swiss perseverance if not quite precision today 😅 Lots of questions and an avalanche of content shared in the past 4 hours.
The #Qwen team can’t be very specific about their compute capabilities, but they seem ready for the road ahead:
• Hybrid Architecture works
• Multimodal native pretraining + post-training
• Coding with an LLM with vision
• Tackle long-horizon agentic tasks
Alibaba’s #Qwen team, represented today by Junyang Lin, knows the importance of the developer base as a driver of the roadmap #OpenSourceLLM
Quick deep dive 🙆‍♀️ into the GLM series architecture of #ZAI with Yuxuan Zhang (Zhipu AI) #OpenSourceLLM
Matthias Bethge (Tübingen University) proposes #OpenPipeline – inspired by open scientific data formats like #fastq – for more meaningful empirical comparison across LLM training systems #OpenSourceLLM
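As a purely illustrative aside (my own sketch, not part of the talk): a fastq-style interchange format for training runs could be as simple as one line-oriented record per run, with all field names below being assumptions for the example.

```python
# Illustrative sketch of a fastq-like, line-oriented record for comparing LLM
# training runs across systems; every field name here is an assumption, not
# part of the #OpenPipeline proposal itself.
import json

run_record = {
    "run_id": "example-7b-run-001",                      # unique ID, like a read ID in fastq
    "params": 7_000_000_000,                             # model size
    "tokens_seen": 1_200_000_000_000,                    # training tokens consumed so far
    "data_mix": {"web": 0.7, "code": 0.2, "math": 0.1},  # declared data composition
    "eval": {"benchmark": "hellaswag", "score": 0.79},   # paired quality signal
}

# One JSON object per line: records from different training systems can be
# concatenated and compared, the way sequencing reads are pooled in a .fastq file.
print(json.dumps(run_record))
```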
To keep turnaround times fast, Jian Gang Ngui (AI Singapore) shares a long list of lessons learned from the SEA-LION (Southeast Asian Languages in One Network) model – such as a public leaderboard #OpenSourceLLM
With AI talent and momentum concentrated in well-funded industry labs, academic teams need to work closely together despite funding gaps, high PhD turnover, and fragmented grant cycles.
Hector Liu (K2, MBZUAI) shares ideas on how to sustain the long-term commitment needed for frontier model development, including large-scale distributed collaboration, open-source models, and “high TPP validation using small scale proxies”
#OpenSourceLLM
Advancing the state of the art takes a saucy data mix, as illustrated by Kyle Lo of the #Olmo 3 team @allenai #OpenSourceLLM
We don’t want to put too much weight on benchmarks, but after a methodical explanation of the latest improvements in #EuroLLM 📈 the charts are what we want to see. André F. T. Martins (IST Lisbon) presenting at
#OpenSourceLLM
#ShareEverything to work on or support the growing community of Fully Open models: the frontier projects whose representatives have gathered today, and the many smaller ones in the ecosystem of #OpenSourceLLM
Why Build Our Own Models?
- Legal necessity
- R&D autonomy
- Scientific integrity
- Deployment control
Transparent, Responsible, Open Data, On-premise.
#DataSovereignty meets #OpenSourceLLM as Imanol Schlag presents #Apertus
Question from the audience: what about #Successful collaborations across working cultures? A: pushing toward compromise through a shared understanding that we all want the experiment to succeed!
#OpenSourceLLM
The Keys to Success at CERN: « #Successful large collaborations are built on mutual trust, respect, and transparent, fair procedures, with leadership grounded in excellence and expertise. Open and Inclusive governance - through clear structures, representative decision bodies, merit-based evaluation of ideas, consensus building, and collective ownership of major decisions - ensures that all members, including early-career researchers, have a voice and that innovation is encouraged and rewarded » #OpenSourceLLM
The principles and tools of studying models of particle physics can spark our inquiries into the models of AI: intro by Andreas Hoecker (CERN) with valuable lessons from a world-leading international research collaboration, a hub for over 12'000 people #OpenSourceLLM
A cozy venue to discuss the next generation of models with colleagues from around the world today #OpenSourceLLM
Google just integrated Gemini’s auto‑browse feature into Chrome, letting the model surf the web in real time. Meanwhile, Moltbot now offers an always‑on AI experience, powered by open‑source LLMs. Both moves push AI closer to everyday browsing. Curious how these changes could affect your workflow? Read the full story. #GeminiAI #ChromeAutoBrowse #Moltbot #OpenSourceLLM
🔗 https://aidailypost.com/news/google-adds-gemini-autobrowse-chrome-moltbot-gains-alwayson-ai-users
Moonshot AI just released Kimi K2.5, an open‑source LLM that beats the proprietary Opus 4.5 on benchmarks. The model is freely available, community‑ready, and pushes the frontier of accessible AI. Dive into the details to see why Kimi K2.5 could be the next big step for open‑source ML. #MoonshotAI #KimiK2_5 #OpenSourceLLM #Opus45
🔗 https://aidailypost.com/news/moonshot-ai-launches-kimi-k25-opensource-llm-that-outperforms-opus-45