
LlamaCast

Shahriar Shariati

A daily podcast about newly published articles in the LLM field.

  • 47 - Test-Time Training

    โŒ›๏ธ The Surprising Effectiveness of Test-Time Training for Abstract Reasoning

    This paper examines how test-time training (TTT) can enhance the abstract reasoning abilities of large language models (LLMs). TTT, which updates model parameters during inference on data derived from each test instance, significantly improves performance on the Abstraction and Reasoning Corpus (ARC) benchmark. Key factors for effective TTT include initial fine-tuning on similar tasks, auxiliary task formats and augmentations, and per-instance training. The approach achieves state-of-the-art results on ARC and, combined with program synthesis, matches average human performance. The study suggests that dedicating computation at test time, rather than relying on symbolic components, may be essential for complex reasoning tasks. (A minimal code sketch of the TTT idea appears after the episode list below.)

    📎 Link to paper

    Thu, 14 Nov 2024
  • 46 - Qwen2.5-Coder

    🔷 Qwen2.5-Coder Technical Report

    The report introduces the Qwen2.5-Coder series, which includes the Qwen2.5-Coder-1.5B and Qwen2.5-Coder-7B models. These models are designed specifically for coding tasks and were pre-trained on a massive dataset of 5.5 trillion code-related tokens. The report places a strong emphasis on data quality, with detailed cleaning and filtering pipelines, and on training techniques such as file-level and repo-level pre-training. The models were rigorously tested on a range of benchmarks covering code generation, completion, reasoning, repair, and text-to-SQL tasks, where they demonstrated strong performance, even surpassing larger models in some areas. The report concludes with suggestions for future research, such as scaling model size and enhancing reasoning abilities.

    📎 Link to paper

    Tue, 12 Nov 2024
  • 45 - Attacking Vision-Language Computer Agents via Pop-ups

    😈 Attacking Vision-Language Computer Agents via Pop-ups

    This research paper examines vulnerabilities in vision-language models (VLMs) that power autonomous agents performing computer tasks. The authors show that these VLM agents can be easily tricked into clicking on carefully crafted malicious pop-ups, which humans would typically recognize and avoid. These deceptive pop-ups mislead the agents, disrupting their task performance and reducing success rates. The study tests various pop-up designs across different VLM agents and finds that even simple countermeasures, such as instructing the agent to ignore pop-ups, are ineffective. The authors conclude that these vulnerabilities highlight serious security risks and call for more robust safety measures to ensure reliable agent performance.

    📎 Link to paper

    Sat, 09 Nov 2024
  • 44 - Number Cookbook

    📓 Number Cookbook: Number Understanding of Language Models and How to Improve It

    This research paper examines the numerical understanding and processing ability (NUPA) of large language models (LLMs). The authors create a benchmark that tests LLMs on four numerical representations (integers, floating-point numbers, fractions, and scientific notation) across 17 tasks grouped into four ability categories. They find that, despite strong problem-solving capabilities, LLMs struggle with basic numerical operations. The paper evaluates methods to enhance NUPA during pretraining and finetuning, such as specialized tokenizers, positional encodings, and data formats, and notes the limitations of chain-of-thought techniques for numerical tasks. The authors call for further research to improve LLMs' fundamental numerical capabilities. (An illustrative tokenization snippet appears after the episode list below.)

    📎 Link to paper

    Fri, 08 Nov 2024
  • 43 - Jigsaw Puzzles

    🧩 Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models

    This research paper investigates the vulnerabilities of large language models (LLMs) to "jailbreak" attacks, where malicious users attempt to trick the model into generating harmful content. The authors propose a new attack strategy called Jigsaw Puzzles (JSP), which breaks harmful questions into individually harmless fractions and feeds them to the LLM over multiple turns, bypassing the model's built-in safeguards. The paper explores the effectiveness of JSP across different LLMs and categories of harmful content, analyzing the role of various prompt designs and splitting strategies. The authors also compare JSP's performance to existing jailbreak methods and demonstrate its ability to overcome various defense mechanisms. The paper concludes by highlighting the importance of continued research into more robust defenses against such attacks.

    📎 Link to paper

    Thu, 07 Nov 2024
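
For episode 47, the following is a minimal, illustrative sketch of the test-time training idea: before answering a single test instance, take a few gradient steps on auxiliary examples derived from that instance, then predict with the temporarily adapted weights. It assumes a generic PyTorch model that returns an object with a `loss` attribute (Hugging Face style) and a hypothetical `make_auxiliary_examples` helper; it is not the paper's actual implementation.

```python
# Minimal test-time training (TTT) sketch.
# Assumptions: `model(**batch)` returns an object with a .loss attribute, and
# make_auxiliary_examples() is a hypothetical helper that builds
# instance-specific training batches (e.g. leave-one-out ARC demonstrations).
import copy
import torch

def predict_with_ttt(model, test_batch, make_auxiliary_examples,
                     steps=8, lr=1e-4):
    # Adapt a throwaway copy so every test instance starts from the same
    # fine-tuned checkpoint; the original model is left untouched.
    adapted = copy.deepcopy(model)
    adapted.train()
    optimizer = torch.optim.AdamW(adapted.parameters(), lr=lr)

    aux_batch = make_auxiliary_examples(test_batch)  # per-instance data
    for _ in range(steps):
        optimizer.zero_grad()
        loss = adapted(**aux_batch).loss  # standard LM loss on auxiliary pairs
        loss.backward()
        optimizer.step()

    adapted.eval()
    with torch.no_grad():
        return adapted(**test_batch)      # answer with the adapted parameters
```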
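For episode 44, the snippet below (my own illustration, not from the paper) shows how a standard subword tokenizer segments numbers into irregular chunks; digit-level tokenization, one of the techniques the paper evaluates, would instead map each digit to its own token. It assumes the `transformers` package and uses the public "gpt2" tokenizer purely for convenience.

```python
# Print how a subword tokenizer splits numbers; requires `pip install transformers`.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any public tokenizer works here
for number in ["7", "12345", "12346", "3.14159"]:
    print(f"{number!r:>10} -> {tok.tokenize(number)}")
# Subword splits are irregular across numbers, whereas a digit-level scheme
# would always yield one token per digit.
```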