Top Picks
Model Updates
- 7.0
Meta Releases Muse Spark: Superintelligence Labs' First Model, Rolling Out Across the Full Product Line
Meta Superintelligence Labs has released Muse Spark, its first model since the reorganization. It is already live in the Meta AI app and on the website, and will roll out to WhatsApp, Instagram, Facebook, Messenger, and Meta's smart glasses in the coming weeks
- 7.2
Anthropic Releases Claude Mythos Preview and Launches the Project Glasswing Cybersecurity Initiative with Nvidia, Google, Apple, and Others
Anthropic has introduced Claude Mythos Preview, a new model purpose-built for discovering security vulnerabilities, and launched the Project Glasswing cybersecurity initiative together with six major tech companies. The model can flag system vulnerabilities with almost no human intervention
- 6.8
Arcee: Why a 26-Person Team's Open-Source LLM Has Captivated the OpenClaw Community
Arcee, a US startup with just 26 people, has built a high-performance open-source large model that is rapidly gaining traction among OpenClaw users, proving that small teams can still break through in the open-model space
- 8.5
Gemma 4: Google Releases Its Strongest Open-Model Family Yet, with Frontier Multimodal Capability Available On Device
Google DeepMind has released the Gemma 4 family of open models in four sizes (1B, 4B, 12B, and 27B) with native multimodal input and output. The models reach frontier performance at their parameter counts, and the 27B version can run on device.
- 6.5
Microsoft Releases Three New Foundation Models, Taking On OpenAI and Anthropic Head-On
Microsoft has introduced three in-house MAI-series foundation models, reducing its dependence on OpenAI.
- 6.0
Downloads Down 65%: How Did Sora Become OpenAI's Castoff?
OpenAI has announced it is shutting down Sora. With downloads down 65%, the AI video generator that once made Hollywood tremble has ended up abandoned.
Engineering Practice
- 5.5
multica-ai / multica
An open-source agent management platform that turns coding agents into real teammates; up 1,680 stars today
- 6.7
Astropad Workbench: A Remote Desktop Built for AI Agents
Astropad has launched Workbench, which reimagines the remote desktop as an AI-agent monitoring tool, offering low-latency control of agents running on a Mac mini from an iPhone or iPad
- 7.2
Compiled AI: A Deterministic Code-Generation Paradigm for LLM Workflow Automation
Proposes a "compiled AI" paradigm: the LLM generates executable code at a compile stage, after which the workflow runs deterministically with no further model calls, making it suitable for high-reliability domains such as healthcare
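The compile-then-run split described above can be sketched as follows. This is an illustrative sketch only; `generate_code` stands in for an LLM call and returns a fixed program here, and none of the names come from the article.

```python
def generate_code(task: str) -> str:
    """Compile phase: in the real paradigm an LLM would emit executable
    code for the task. Stubbed with a fixed program for illustration."""
    return (
        "def run(record):\n"
        "    # Deterministic triage rule, emitted once at compile time\n"
        "    return 'urgent' if record['heart_rate'] > 120 else 'routine'\n"
    )

def compile_workflow(task: str):
    """Turn generated source into a callable; after this point the
    workflow runs with no model calls at all."""
    namespace = {}
    exec(generate_code(task), namespace)
    return namespace["run"]

triage = compile_workflow("triage patient vitals")
# Runtime phase: deterministic and repeatable, same input, same output.
print(triage({"heart_rate": 130}))  # urgent
print(triage({"heart_rate": 72}))   # routine
```

The key property is that every post-compile invocation is a plain function call, so behavior can be reviewed and tested once rather than re-verified per request.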
- 7.0
Auditable Agents: An Architecture for Auditable AI Agents
Proposes an auditable agent architecture that attaches a traceable, verifiable evidence chain to every decision an autonomous AI system makes, addressing trust and compliance concerns in agent deployment
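One common way to realize such an evidence chain is a hash-chained log, where each entry commits to its predecessor so tampering is detectable. The sketch below assumes that design; the actual architecture in the article may differ.

```python
import hashlib
import json

class AuditLog:
    """Hash-chained evidence log: each entry's hash covers the previous
    entry's hash, so altering any earlier step breaks verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self.prev_hash = self.GENESIS

    def record(self, step: str, data: dict) -> str:
        payload = json.dumps(
            {"step": step, "data": data, "prev": self.prev_hash},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"step": step, "data": data, "hash": digest})
        self.prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain from the start; False if any entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(
                {"step": e["step"], "data": e["data"], "prev": prev},
                sort_keys=True,
            )
            if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("plan", {"goal": "refund order"})
log.record("act", {"tool": "payments.refund", "amount": 19.99})
print(log.verify())  # True
```

An auditor can replay `verify()` at any time; editing any recorded step, or reordering steps, makes it return False.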
- 6.8
HYVE: A Hybrid-View Approach to LLM Context Engineering for Machine Data
Proposes a hybrid-view method for machine data (logs, metrics, and the like) that lets an LLM switch flexibly between structured queries and natural-language understanding
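A minimal sketch of the hybrid-view idea, under the assumption that the same records back both an exact query path (for tool calls) and a compact text view (for the model's context window). The record schema and function names are hypothetical, not taken from HYVE.

```python
# Toy machine-data store: one flat list of metric records.
records = [
    {"ts": "2025-04-15T06:00", "service": "api", "metric": "latency_ms", "value": 120},
    {"ts": "2025-04-15T06:01", "service": "api", "metric": "latency_ms", "value": 480},
    {"ts": "2025-04-15T06:01", "service": "db", "metric": "cpu_pct", "value": 35},
]

def structured_query(metric: str, threshold: float) -> list:
    """Structured view: exact filtering, suitable for agent tool calls."""
    return [r for r in records if r["metric"] == metric and r["value"] > threshold]

def text_view() -> str:
    """Natural-language view: a compact textual summary an LLM can read
    directly from its context window."""
    lines = [f'{r["ts"]} {r["service"]} {r["metric"]}={r["value"]}' for r in records]
    return "machine data snapshot:\n" + "\n".join(lines)

print(structured_query("latency_ms", 200))  # the 480 ms latency spike
print(text_view())
```

The point of the hybrid is that the model can answer "what happened?" from the text view, then drill down with precise structured queries over the same underlying records.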
Research Frontier
- 3.7
ABot-M0: VLA Foundation Model for Robotic Manipulation with Action Manifold Learning
arXiv:2602.11236v2 Announce Type: replace-cross Abstract: Building general-purpose embodied agents across diverse hardware remains a central challenge in robotics, often framed as the "one-brain, man…
- 3.7
DyBBT: Dynamic Balance via Bandit-inspired Targeting for Dialog Policy with Cognitive Dual-Systems
arXiv:2509.19695v3 Announce Type: replace Abstract: Task oriented dialog systems often rely on static exploration strategies that do not adapt to dynamic dialog contexts, leading to inefficient explor…
- 3.7
Locate, Steer, and Improve: A Practical Survey of Actionable Mechanistic Interpretability in Large Language Models
arXiv:2601.14004v4 Announce Type: replace Abstract: Mechanistic Interpretability (MI) has emerged as a vital approach to demystify the opaque decision-making of Large Language Models (LLMs). However, …
- 3.3
OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning
arXiv:2502.11271v2 Announce Type: replace-cross Abstract: Solving complex reasoning tasks may involve visual understanding, domain knowledge retrieval, numerical calculation, and multi-step reasoning.
- 3.3
Are Video Reasoning Models Ready to Go Outside?
arXiv:2603.10652v2 Announce Type: replace-cross Abstract: In real-world deployment, vision-language models often encounter disturbances such as weather, occlusion, and camera motion. Under such condit…
- 3.3
BID-LoRA: A Parameter-Efficient Framework for Continual Learning and Unlearning
arXiv:2604.12686v1 Announce Type: new Abstract: Recent advances in deep learning underscore the need for systems that can not only acquire new knowledge through Continual Learning (CL) but also remove…
Industry News
- 5.5
Building trust in the AI era with privacy-led UX
MIT Technology Review examines how privacy-led user-experience design can build user trust in the AI era
- 5.5
The attacks on Sam Altman are a warning for the AI world
· 04/15 06:32 collected
- 6.0
Redefining the future of software engineering
· 04/15 04:31 collected