#foundationModels

2026-02-18

Nice set of embodied foundation models this week:

The RynnBrain family: base models in 2B, 8B, and 30B variants, plus RynnBrain‑Plan (manipulation planning), RynnBrain‑Nav (navigation), and RynnBrain‑CoP (spatial reasoning).

alibaba-damo-academy.github.io

ABot-M0: VLA Foundation Model for Robotic Manipulation, github.com/amap-cvlab/ABot-Man

ABot-N0: Unified VLA for embodied navigation, amap-cvlab.github.io/ABot-Navi

#foundationModels #embodied #robotics #robotFoundationModels #embodiedFoundationModels #VLA

jai (@jai_chopra)

Jeff Dean offered an optimistic take that AI may not replace all knowledge work. Adding more domain-specific data to a foundation model can create quality trade-offs in particular domains, which leaves room for incumbents such as SMBs to stay competitive vertical by vertical.

x.com/jai_chopra/status/202312

#jeffdean #foundationmodels #aiimpact

2026-02-11

Meeting Minutes: Pragmatic AI. Foundation models running entirely on-device, using Apple's exclusive iOS 26 AI. No other platform can do this.
#Simple

apps.apple.com/gb/app/meeting-

Ron Alfa (@Ronalfa)

Quoting a piece Melissa Du wrote for the Latent Space podcast, this explains why LLMs alone are not enough for scientific research. The author stresses the need for domain-specific data and foundation models, summing it up as a nuanced article on the limits of LLMs and the importance of domain-specific models and data.

x.com/Ronalfa/status/202156174

#llm #foundationmodels #airesearch #domainspecific

Nico - NFC.cool (nico42)
2026-02-09

The Apple Developer workshop in Madrid last week was very nice. Always good to meet like-minded people 👨🏼‍💻  👩🏻‍💻

2026-02-05

Today, the #FoundationModels (Transformer, LLM, etc.) of #AI predominate in every field of human endeavour. Perhaps a grand-sounding moniker, like the following, is appropriate for this much-revered technology:

Regnal Transformative Foundation Models

And recent studies in various fields have shown conclusively that quick access to #information in the short term erodes the drive to acquire and retain #knowledge for the long term. So this estimable label's #RTFM acronym, which harkens back to that famous 1980s tech aphorism, is equally appropriate.

TechFollow (@TechFollowrazzi)

A mention that the @denny_zhou account now follows TencentHunyuan, with a blurb describing TencentHunyuan as Tencent's powerful foundation model pushing the boundaries of text, image, video, and 3D generation. It signals the growing presence of Tencent's multimodal foundation models.

x.com/TechFollowrazzi/status/2

#tencent #hunyuan #foundationmodels #multimodal

2026-02-04

The #FoundationModels of AI (Transformer, LLM, call it whatever) are troublesome, not because they are deficient, but because they are endowed with #emergent behaviours that suddenly spring from 500-billion-plus trainable parameters and terabytes upon terabytes of human behavioural data. No human could grasp the aggregate meaning of that unanticipated, implicit, emergent behaviour. No human could divine the outcomes of their unforeseeable concoctions. Worst of all, these models harbour biases and prejudices, both intentional and unintentional, embedded deep within these deep-learning models, whose untoward conduct is ever so subtle and complex as to escape casual observation.

The invidious effects of these poorly understood foundation models are insidious. By the time their detrimental behaviour becomes observable, it would be too late for society to countermand them, for these models are already enmeshed within the social-political-economic fabric of the whole of humanity.

It is easy for a lawyer like me to reflexively shout, "#Regulate #AI". But in the present heady mix of delirium, confusion, delusion, and collusion, regulate exactly what and precisely how, pray tell.

Who will save society from AI?

Well, perhaps AI might....🤦‍♂️

Matrix multiplication is not imbued with empathy, conscience, judgement, elation, shame, and the like. Those are innately human characteristics, neither mechanical nor mathematical. Let us not surrender to the silicon-based machines the #ethical duty to be thinking, feeling carbon-based beings.

Artificial Analysis (@ArtificialAnlys)

MBZUAI's Institute for Foundation Models has announced the new K2 Think V2 model. It is reported to be more intelligent than the earlier 70B K2-V2, to hold on to its joint first place on the Openness Index, and to have a low hallucination rate. It is the third UAE model to appear on Artificial Analysis this month, and the model is a dense 70B architecture.

x.com/ArtificialAnlys/status/2

#mbzuai #k2thinkv2 #foundationmodels #opennessindex

Tero Keski-Valkama (tero@rukii.net)
2026-01-27

Multi-agentic foundation models are important for #robotics and #automation in negotiated and adversarial settings such as #traffic and #warfare.

But how to implement them? I have previously drafted a data-centric architecture for decomposing agentic representations for #UniversalEmbodiment in a GitHub repository.

But LLMs have already internalized multi-agentic representations, so why can't we utilize them directly? For example, in text you can easily ask an LLM to describe all the persons or agents present in a scene and their intents.

We can and we must certainly utilize these! But these representations aren't grounded.

What we need to do is to craft robotic foundation model training data to involve scenarios where there are multiple agents present.

First, start acausally from what ultimately happened: how was the scenario negotiated between the multiple participants, who drove first, what attack and evasive patterns were used?

As we then know what happened, we can go back in time and ask the foundation model to identify all the participants in the feed, and complete their intentions with the information from the ultimate outcome.

The foundation model can then utilize all the language space knowledge it has about multi-agent environments, but also anchor this to visual and control signals present in the training data.

This allows the model not only to answer questions about what each participant intends to do, but also to anchor this to multi-modal sensory information, and to project embodiment-related control intents onto all the participants in the scenario, not only the ego.

Ego becomes just a special case in robotic control; the model should learn to generalize, projecting control intents onto all agents present in the data.

Ultimately this allows the foundation model to learn from perceived and projected experiences of others, to learn to imitate or not imitate what it has seen other agents do.

It's all about crafting data, not really about sophisticated model architectures.
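To make that concrete, below is a minimal, hypothetical sketch of the labeling loop: `vlm` stands in for any vision-language model endpoint, and `AgentTrack`, `labelScenario`, and the prompts are invented for illustration, not any real robotics API.

```swift
// Hypothetical sketch of hindsight intent labeling for multi-agent scenarios.
struct AgentTrack {
    let agentID: String
    var intent: String = ""      // completed acausally from the known outcome
    var controls: [String] = []  // control intents projected onto every agent, not just ego
}

func labelScenario(
    vlm: (String) async throws -> String,  // prompt in, model text out
    feedDescription: String,               // the recorded multi-agent feed
    outcome: String                        // what ultimately happened
) async throws -> [AgentTrack] {
    // 1. Identify every participant present in the recorded feed.
    let listing = try await vlm("List every agent present in: \(feedDescription)")
    let agentIDs = listing.split(separator: "\n").map(String.init)

    var tracks: [AgentTrack] = []
    for id in agentIDs {
        var track = AgentTrack(agentID: id)
        // 2. Go back in time: complete each intent using the known outcome.
        track.intent = try await vlm(
            "Given that the episode ended like this: \(outcome), what was \(id) trying to do?")
        // 3. Project control intents for all agents, so ego is just a special case.
        track.controls = try await vlm("Infer the control actions \(id) took, one per line.")
            .split(separator: "\n").map(String.init)
        tracks.append(track)
    }
    return tracks
}
```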

#RoboticFoundationModels #FoundationModels #PhysicalAI #AI #AGI

Yicai 第一财经 (@yicaichina)

Chinese AI startup StepFun is reported to have raised about USD 718.8 million to strengthen foundation model development and terminal-agent applications. The large raise aims to expand R&D and accelerate device- and terminal-level AI integration.

x.com/yicaichina/status/201569

#stepfun #funding #foundationmodels #terminalagents #llm

Yicai 第一财经 (@yicaichina)

Chinese AI model company StepFun announced it has completed a CNY 5 billion (about USD 718 million) Series B+ round, the largest single fundraising in China's LLM sector in the past year. The funds will go toward foundation model R&D and accelerating AI integration across devices.

x.com/yicaichina/status/201560

#stepfun #funding #llm #foundationmodels #china

Tero Keski-Valkama (tero@rukii.net)
2026-01-21

What should software consultancy companies do in the AI transformation?

They hold a special position in the software ecosystem and in AI-driven data-polarization, a position that has traditionally been based on the scarcity of experts. So, when AI transformation is needed across the board, surely the scarcity of AI experts will provide a convective updraft for the software consultancy companies that have them?

Perhaps. But there are many challenges.

First of all, classical software consultancy houses are typically at a disadvantage in attracting the best AI people. The compensation isn't competitive when the domain is capital-intensive and you're structured into a labor-intensive setting. With a large capital concentration, you want to hire the best people no matter the cost, because otherwise your capital doesn't produce value.

There is also the Claude Code effect where businesses won't contract with consultancy companies for small non-core projects because AI can do those as well.

In AI-driven data-polarization, the data is created by applied AI companies, and the models by foundation model providers. Both of these positions are defensible in the medium-term future. But is a position based on talent scarcity defensible, when that talent is hard to attract as well?

I think software consultancy houses should really re-evaluate their strategy. They should focus on AI and on lifting the capability level of specific domains by sharing data management and refinement practices. With that, they become capital-intensive in a way, because data is capital; they become able to attract talent; and they sit in a defensible position in the ecosystem.

Without this AI and data focus, the software consultancies will wither: Claude Code eating their bread, talent choosing to go elsewhere, and a position that is inherently indefensible in the coming AI-driven, data-polarized economy.

#AI #FoundationModels #SoftwareConsultancy #economy

Jordy Witteman (jordywitteman)
2026-01-19

My first iOS app is live on the App Store🎉

SparkyTales lets parents & kids create animal characters and bedtime stories using Apple’s Foundation Models, fully on-device & offline.

Download: apps.apple.com/us/app/sparkyta
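For a sense of what a fully on-device Foundation Models call looks like, here is a minimal sketch against Apple's FoundationModels framework; the function name, prompt, and fallback string are illustrative, not SparkyTales' actual code.

```swift
import FoundationModels

func bedtimeStory(about animal: String) async throws -> String {
    // The on-device model may be unavailable (Apple Intelligence disabled,
    // unsupported hardware), so check before opening a session.
    guard case .available = SystemLanguageModel.default.availability else {
        return "Stories need Apple Intelligence to be enabled."
    }
    // Sessions run entirely on-device; no network round trip.
    let session = LanguageModelSession(
        instructions: "You are a gentle storyteller for young children."
    )
    let response = try await session.respond(
        to: "Tell a short bedtime story about a \(animal)."
    )
    return response.content
}
```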

Logan Kilpatrick (@OfficialLoganK)

Reports that Apple has decided to adopt Google's AI technology as the basis for its own Foundation Models. Apple is said to have judged Google's technology the most capable foundation and to expect it to enable innovative experiences for Apple users.

x.com/OfficialLoganK/status/20

#apple #google #foundationmodels #partnership #ai

AI Leaks and News (@AILeaksAndNews)

Apple announced a multi-year collaboration with Google on its next-generation AI applications. The collaboration builds Apple's next-generation 'Apple Foundation Models' on Google's Gemini models and cloud technology, to be used for Apple Intelligence and a more personalized Siri.

x.com/AILeaksAndNews/status/20

#apple #google #gemini #siri #foundationmodels

Snapp iOS Weekly (ios_newsletter_snapp)
2026-01-12

Struggling to get reliable results from on-device foundation models? This guide reveals critical prompt engineering strategies that reduce hallucinations and maximize performance on smaller models.

🔗: developer.apple.com/documentat
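In that spirit, one of the more effective levers on a small on-device model is constraining its output with guided generation instead of free-form prompting. Here is a sketch using Apple's FoundationModels framework; the `MeetingSummary` type, prompts, and temperature are assumptions for illustration, not taken from the linked guide.

```swift
import FoundationModels

// Guided generation forces the model's output into a typed schema,
// which curbs hallucinated free text on small models.
@Generable
struct MeetingSummary {
    @Guide(description: "At most three bullet points, each under 15 words")
    var keyPoints: [String]
    var followUpNeeded: Bool
}

func summarize(_ transcript: String) async throws -> MeetingSummary {
    let session = LanguageModelSession(
        instructions: "Summarize only what the transcript states. Never invent details."
    )
    // A lower temperature trades creativity for reliability on small models.
    let response = try await session.respond(
        to: "Summarize this meeting: \(transcript)",
        generating: MeetingSummary.self,
        options: GenerationOptions(temperature: 0.3)
    )
    return response.content
}
```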
