#lmstudio

2026-02-22

As I experiment with running local #llm models on my #framework desktop, having 128 GB of RAM certainly gives you lots of options. I can run some large models, but they're generally quite slow.

#lmstudio #ollama #frameworkdesktop #qwen #qwen35
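As a rough back-of-envelope for what 128 GB of RAM can hold (my own sketch, not from the post; the 20% overhead factor for KV cache and runtime buffers is an assumption): a model's weight footprint is roughly parameter count times bits per weight.

```python
def model_memory_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate RAM needed to hold model weights, with ~20% overhead
    for KV cache and runtime buffers (a rough rule of thumb)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 70B model at 4-bit quantization fits comfortably in 128 GB...
print(round(model_memory_gb(70, 4), 1))   # → 42.0
# ...while the same model at 16-bit does not.
print(round(model_memory_gb(70, 16), 1))  # → 168.0
```

This also hints at why big models feel slow on a desktop: every generated token has to stream most of those gigabytes through memory, so CPU/iGPU memory bandwidth, not capacity, becomes the bottleneck.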

2026-02-21

Finally!!! Windows 11 running in my Linux \o/

Next step: share my GPU with the VM, to test local SLMs + @silex desktop for #vibe_coding static sites - on #windows11 for free, 100% open with #opencode and #lmstudio

#vibecode #windows #jamstack #11ty #freesoftware #foss #buildinPublic

Fedora Linux desktop
a window containing a Windows virtual machine
2026-02-18

LM Studio → NotebookLM: building a personalized learning system with free AI tools

Combining NotebookLM and LM Studio lets you build an AI tutor and learning system at zero cost. An introduction to a completely free AI learning pipeline.

aisparkup.com/posts/9324

o lavrovskyloleg@hachyderm.io
2026-02-16

The Parallel Requests feature in #LMStudio 0.4.2 works quite snappily: use the icon on the top right to split your view as in my screenshot of #Apertus lmstudio.ai/docs/app/advanced/

Screenshot of LM Studio showing a split pane with parallel queries to an LLM
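The split-pane view can also be driven programmatically: LM Studio's local server speaks the OpenAI chat-completions API (by default at http://localhost:1234/v1), so parallel requests are just concurrent HTTP calls. A minimal stdlib-only sketch, assuming the server is running with a model loaded (the model name is a placeholder):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server address

def build_request(prompt: str, model: str = "local-model") -> Request:
    """Build an OpenAI-style chat completion request for the local server.
    The model name is a placeholder; LM Studio serves whichever model is loaded."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(f"{BASE_URL}/chat/completions", data=body,
                   headers={"Content-Type": "application/json"})

def ask(prompt: str) -> str:
    """Send one prompt and return the assistant's reply text."""
    with urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Fire two prompts concurrently, mirroring the split-pane parallel view
# (uncomment with the server running and a model loaded):
# with ThreadPoolExecutor(max_workers=2) as pool:
#     for answer in pool.map(ask, ["Summarize RFC 2119.", "What is Apertus?"]):
#         print(answer[:120])
```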
2026-02-15

I'm experimenting with off-loading simpler agent tasks from oh my opencode to locally running models, and built a hook to calculate the savings of running locally instead of on the default cloud LLM.

#ohmyopencode #claude #opencode #framework #frameworkdesktop #lmstudio #ollama #openai
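A minimal sketch of what such a savings hook can compute (my own illustration; the per-million-token prices below are placeholder assumptions, not actual cloud rates): multiply the tokens served locally by what the default cloud model would have charged.

```python
# Placeholder cloud prices in USD per million tokens (assumptions, not real quotes).
CLOUD_PRICE = {"input": 3.00, "output": 15.00}

def local_savings_usd(input_tokens: int, output_tokens: int,
                      local_cost_usd: float = 0.0) -> float:
    """Estimate what a cloud call would have cost for tokens served locally.
    Electricity/hardware amortization can be passed as local_cost_usd."""
    cloud = (input_tokens * CLOUD_PRICE["input"]
             + output_tokens * CLOUD_PRICE["output"]) / 1_000_000
    return cloud - local_cost_usd

# e.g. a 20k-input / 5k-output agent task kept off the cloud:
print(f"${local_savings_usd(20_000, 5_000):.3f}")  # → $0.135
```

Summed over a day of agent traffic, this gives a running total the hook can log alongside each task.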

LeRoy Millerkd8bxp
2026-02-15

One of the LLMs I was playing around with tonight correctly told me all about Star Trek TOS - BUT made the mistake of saying Kirk was in charge of Starfleet and Pike was captain of the Enterprise.
I forget which model I was using then, probably Qwen3.
It still made some mistakes, but it didn't outright create something completely new.

(I actually enjoy talking to the Qwen3 model, it's very conversational)

LeRoy Millerkd8bxp
2026-02-14

Also, Gemma-3-4b seems to think Pavel Chekov was played by George Takei. There isn't even a mention of Sulu at all.

Kirk, I guess, was head of Starfleet.
And Nimoy was the voice of the computer....

I don't think Gemma-3-4b is a very good LLM.

LeRoy Millerkd8bxp
2026-02-14

Wow - I'm playing around with LM Studio and the Gemma-3-4b LLM - and it just created a whole new Star Trek TOS character, "Mr. Spock Jr." - complete with history: Spock Jr. is the son of Spock and "Number One" (Deanna Troi). Yes, you read that right: Spock and Troi. I don't know what to say; it seems so confident, but is so very wrong.
Also, TOS had 178 episodes according to the LLM. (I think TNG had 178; TOS had 79.) It's so confused.

2026-02-14

Today I'm experimenting, trying to understand the smallest open-source language model I can use locally to prompt @silex's new #MCP server :)

(I broke my OS trying to set up GPU acceleration on Fedora, but recovered all right 😅)

I ended up playing with #LMStudio and for now it's really great! I wonder if non-programmers can use it...

the UI of LM Studio
the model tinyllama is loading
AOE, Takashiaoetk@fedibird.com
2026-02-12

So Gemma 3 can handle images. For something like generating alt text, being able to run it locally instead of using a cloud service is nice.

#LMStudio #gemma3

Uploading a photo of a cat and asking gemma-3-12b, running in LM Studio, to come up with alt text. Its suggestions follow.

Short descriptions:
"A cat sitting on a sofa"
"A cat relaxing indoors"

Detailed descriptions:
"A brown tabby cat sitting on a sofa cushion, gazing at the camera. Flower-patterned pillows and curtains are visible in the background."
"A slightly grumpy-looking cat sitting still on a leaf-patterned quilt, with a pair of glasses placed nearby."

Context-dependent descriptions:
(if the cat's name is known) "〇〇 (the cat's name) resting on the sofa"
(if the photo is meant to identify the breed) "A brown tabby cat, thought to be a 'Kōrikii' cat"
2026-02-08

Finally: the new Lenovo Yoga Aura is fully set up under #NixOS.

An exciting side effect of this week of tinkering: total focus, zero distraction from YouTube or TV. The "cat" now purrs exactly the way I want it to.

In the spirit of #Datensouveränität, all backups run fully automated and GDPR-compliant to the German cloud of @mailbox_org.

Next goals:

- Evaluate local LLMs in #LMStudio for usability, cutting the next thread of US dependence.

- For #GrapheneOS, wait for the hardware roadmap.

Step by step out of the golden cages.

#UnplugTrump #Unplugbigtech #did #diday

AOE, Takashiaoetk@fedibird.com
2026-02-08

Since I bought a PC with a powerful GPU on board, I decided to try running a local LLM with LM Studio. I started with gpt-oss-20b: it runs with plenty of headroom, and the response speed is no problem either.

Some of its answers are a bit dubious (it claims to be "based on information up to February 2026", which is clearly wrong), but then it's a small model. I'll go looking for other interesting ones.

#LMStudio

Selecting OpenAI's gpt-oss-20b in LM Studio's model search screen, which shows "full GPU offload possible". In a chat with gpt-oss-20b, asked "Can I think of you as the same as ChatGPT?", it answers after a detailed explanation: "In short, it is certain that I share the same technical foundation as ChatGPT, but you could call me a version optimized for Japanese users."

金のニワトリ (@gosrum)

Installing LM Studio's CLI (lms) is very simple: it can be done with a single command. The author says they plan to try everything from loading a model to making API calls from Claude Code, and will write up a memo of the install and usage process when they find the time.

x.com/gosrum/status/2020438445

#lmstudio #cli #lms #claude #ai

Photogenic Weekend (@PhotogenicWeekE)

Using Claude Code, you can see that many parallel requests are issued even while it writes code. LM Studio recently added support for this, bringing some speedup, and the author shares that introducing vLLM improves response performance dramatically.

x.com/PhotogenicWeekE/status/2

#claude #lmstudio #vllm #inference #ai

2026-02-07

Just entered this as a prompt in #LmStudio:
"Go look up who the current president of the USA is; search Wikipedia for info on it. It is Feb. 2026"

Now it has been reading Wikipedia for a few minutes, one article after the next.

😮

2026-02-07
@hellomiakoda Seriously though, take a look at #openwebui / #LMStudio / #openclaw sorta stuff, or just a simple #ollama server you hit from the terminal. They work well enough for most things, and are completely free.
2026-02-05

@simondueckert Using more and more local models with #LMstudio - Microsoft is out for private use #DIDit - and online I use various ones depending on the task (incl. Perplexity, Mistral…)

cocktail peanut (@cocktailpeanut)

Tried using Scobleizer's AI Community list to fill the 'for you' tab with AI people, and rates the tweet-summarization feature as very useful. The summaries can run locally via LMStudio or Ollama, or through cloud services like ChatGPT or Claude (a local-summarization and tool-comparison use case).

x.com/cocktailpeanut/status/20

#lmstudio #ollama #chatgpt #claude #summarization

LBHustonlbhuston
2026-02-02

Slower than expected on moderately complex prompts, but better than expected results.

Read more 👉 lttr.ai/Anwtq
