Filtrar por género

LlamaCast

LlamaCast

Shahriar Shariati

Daily podcast about the published articles in the LLM field.

49 - Marco-o1
0:00 / 0:00
1x
  • 49 - Marco-o1

    🤖 Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions

    The Alibaba MarcoPolo team presents Marco-o1, a large reasoning model designed to excel in open-ended problem-solving. Building upon OpenAI's o1 model, Marco-o1 incorporates Chain-of-Thought fine-tuning, Monte Carlo Tree Search, and innovative reasoning strategies to improve accuracy on complex tasks. The model is trained on a combination of existing and synthetic datasets and shows improvements in accuracy on benchmark datasets, particularly in handling nuanced language translation. Further research focuses on refining the reward system within the Monte Carlo Tree Search and using reinforcement learning to enhance its capabilities. The paper details the model's architecture, training process, and experimental results, highlighting its advancements in open-ended reasoning.

    📎 Link to paper

    Sat, 23 Nov 2024
  • 48 - Scaling Laws for Precision

    ⚖️ Scaling Laws for Precision

    This research paper investigates the impact of precision in training and inference on the performance of large language models. The authors explore how precision affects the effective parameter count and propose scaling laws that predict performance degradation due to low-precision training and post-training quantization. They find that overtrained models are more sensitive to post-training quantization, and that training larger models in lower precision might be computationally optimal. Their unified scaling law accounts for both training and post-training effects and predicts loss in varied precision settings, ultimately suggesting that the standard practice of training models in 16-bit might be suboptimal.

    📎 Link to paper
    🌐 Read their Tweet

    Mon, 18 Nov 2024
  • 47 - Test-Time Training

    ⌛️ The Surprising Effectiveness of Test-Time Training for Abstract Reasoning

    This paper examines how test-time training (TTT) can enhance the abstract reasoning abilities of large language models (LLMs). TTT, which updates model parameters during inference, significantly improves performance on the Abstraction and Reasoning Corpus (ARC) benchmark. Key factors for effective TTT include initial fine-tuning, auxiliary tasks, and instance-specific training. The approach achieves state-of-the-art results on ARC, even matching human averages with program synthesis. This study suggests that dedicating computation at test time, rather than relying on symbolic components, may be essential for complex reasoning tasks.

    📎 Link to paper

    Thu, 14 Nov 2024
  • 46 - Qwen2.5-Coder

    🔷 Qwen2.5-Coder Technical Report

    The report introduces the Qwen2.5-Coder series, which includes the Qwen2.5-Coder-1.5B and Qwen2.5-Coder-7B models. These models are specifically designed for coding tasks and have been pre-trained on a massive dataset of 5.5 trillion code-related tokens. A significant focus is placed on data quality, with detailed cleaning and filtering processes, and advanced training techniques such as file-level and repo-level pre-training. The models were rigorously tested on various benchmarks, including code generation, completion, reasoning, repair, and text-to-SQL tasks, where they demonstrated strong performance, even surpassing larger models in some areas. The report concludes with suggestions for future research, such as scaling model size and enhancing reasoning abilities.

    📎 Link to paper

    Tue, 12 Nov 2024
  • 45 - Attacking Vision-Language Computer Agents via Pop-ups

    😈 Attacking Vision-Language Computer Agents via Pop-ups

    This research paper examines vulnerabilities in vision-language models (VLMs) that power autonomous agents performing computer tasks. The authors show that these VLM agents can be easily tricked into clicking on carefully crafted malicious pop-ups, which humans would typically recognize and avoid. These deceptive pop-ups mislead the agents, disrupting their task performance and reducing success rates. The study tests various pop-up designs across different VLM agents and finds that even simple countermeasures, such as instructing the agent to ignore pop-ups, are ineffective. The authors conclude that these vulnerabilities highlight serious security risks and call for more robust safety measures to ensure reliable agent performance.

    📎 Link to paper

    Sat, 09 Nov 2024
Mostrar más episodios