[AINews] not much happened today

Updated on September 26, 2024


AI Twitter and Reddit Recap

This section provides a recap of recent discussions in the AI community from Twitter and Reddit. AI Twitter Recap features updates on Meta releasing Llama 3.2 models with new variants, performance details, technical aspects, ecosystem support, and open-source availability. Other news includes the departure of OpenAI's CTO and the release of Molmo by Allen AI. Google's improvements to Gemini 1.5 and Meta's announcements about Project Orion and Quest 3S are also highlighted. Discussions on benchmarks, model optimization, and AI safety are mentioned. AI Reddit Recap covers discussions from /r/LocalLlama, showcasing Molmo's unique abilities and challenging proprietary giants. The vision-language models' performance and capabilities are compared, emphasizing Molmo's achievement in reading analog clocks.

Molmo, Ovis 1.6, and Llama 3.2: AI Model Comparisons

Molmo: A family of open state-of-the-art multimodal AI models by AllenAI

  • Molmo models, released in multiple sizes up to a 72B-parameter flagship, excel in visual question answering and image captioning tasks.
  • Molmo's architecture uses OpenAI's ViT-L/14 CLIP model for vision encoding, a choice that outperformed SigLIP in the team's experiments (see the sketch after this list).
  • Molmo offers fully open-source datasets, training code, and plans for future experimentation with different language and vision backbones.
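
As an illustration of that design choice, the following is a minimal sketch of pulling patch features from the same ViT-L/14 CLIP backbone via Hugging Face transformers. It shows only the vision encoder Molmo builds on, not Molmo's actual integration code, and the blank test image is a placeholder.

```python
# Minimal sketch: patch features from OpenAI's ViT-L/14 CLIP, the vision
# encoder Molmo builds on. Not Molmo's integration code; the blank image
# is a stand-in for real input.
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.new("RGB", (224, 224))  # placeholder image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# 257 = 256 image patches (16x16) + 1 CLS token; hidden size 1024 for ViT-L/14.
# A VLM projects features like these into the language model's embedding space.
print(outputs.last_hidden_state.shape)  # torch.Size([1, 257, 1024])
```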

Ovis 1.6 - a Gemma 2-based 10B vision-language model

  • Ovis 1.6, a 10B parameter vision-language model, outperforms larger models like Llama 3.2 11B and GPT-4o-mini on the MMMU benchmark.
  • Users express skepticism and compare Ovis 1.6 to Llama 3.2, discussing the performance differences between the models.
  • Ovis 1.6 is tested via the Spaces demo, and discussions ensue about model capabilities and potential future iterations.

Llama 3.2: Meta's Multimodal Leap in Open Source AI

  • Llama 3.2 models, ranging from 1B to 90B, demonstrate strong performance on multimodal benchmarks, outperforming certain models in tasks like mathematical reasoning and visual question answering.
  • The release of smaller 1B and 3B models alongside larger versions sparks controversy, and discussions arise regarding the unavailability of EU access for the models.
  • Users share experiences of running Llama 3.2 3B on mobile devices, detailing performance metrics and technical aspects of the application.

VRAM Constraints and Large Model Deployment

Conversations highlighted that 24GB GPUs struggle to run 70B models, with members favoring setups that sustain at least 15 tok/s. Solutions such as multi-GPU integration and model quantization were explored to overcome these limits.
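
To make the constraint concrete, here is a back-of-the-envelope sketch, assuming the usual approximation that weight memory is parameter count times bytes per parameter (KV cache and activations add more on top):

```python
# Back-of-the-envelope sketch of why a 70B model strains a 24GB GPU.
# Rough approximation: weight memory ~= parameter count x bytes per
# parameter; KV cache and activations add more on top.
PARAMS = 70e9
VRAM_GB = 24

for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    weights_gb = PARAMS * bytes_per_param / 1e9
    verdict = "fits" if weights_gb <= VRAM_GB else "does not fit"
    print(f"{name}: ~{weights_gb:.0f} GB of weights -> {verdict} in {VRAM_GB} GB")

# fp16 ~140 GB, int8 ~70 GB, int4 ~35 GB: even 4-bit quantization leaves
# a 70B model over 24 GB, hence the interest in multi-GPU setups.
```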

OpenRouter (Alex Atallah) Discord

Vision Llama Hits OpenRouter with Free Endpoint:

  • The first vision Llama is now available on OpenRouter, featuring a free endpoint. In total, five new endpoints have been introduced, powered by multiple providers (a request sketch follows this list).
    • Users are encouraged to enjoy the latest features, marked by the celebratory icon 🎁🦙.
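
For context, here is a hedged sketch of calling a vision model through OpenRouter's OpenAI-compatible API. The base URL is OpenRouter's documented endpoint, but the model slug and image URL are assumptions for illustration; check the OpenRouter model list for the current free endpoint's name.

```python
# Hedged sketch of a vision request via OpenRouter's OpenAI-compatible API.
# The model slug and image URL below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct:free",  # assumed slug
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```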

Gemini Tokenization Simplifies Costs:

  • OpenRouter will transition to counting tokens instead of characters for Gemini models, reducing apparent token counts by a factor of ~4. This aims to normalize billing and cut costs for developers.
    • Per-token prices will double as billing aligns to a per-token model, so with ~4x fewer billed tokens the net cost roughly halves; prices are set to adjust further after October 1.

OpenRouter Credits and Invoice Issues:

  • Users reported difficulties with credit transactions on OpenRouter, noting that credits might take time to appear after payment; a backend delay or provider issues may be disrupting transaction-history views.
    • One user confirmed their credits eventually arrived, though the delay raised concerns about the reliability of the credit system.

Llama 3.2 Restrictions for EU Users:

  • Meta's policy on using their vision models in the EU raises concerns about accessibility and legality for users in that region. Members noted that confusion over provider locations and compliance with Meta's rules could pose problems.
    • This has sparked debate on the implications for inference provision related to Llama 3.2 in Europe.

Request for BYOK Beta Participation:

  • A member inquired about joining the Bring Your Own Key (BYOK) beta test. They offered to provide their email address via direct message to facilitate participation.
    • The member expressed willingness to share personal contact information to assist with the beta process.

Channel Discussions on Discord Communities

In the Discord channels related to AI discussions, various topics were highlighted:

  • Alignment Lab AI: concerns were raised about the boundaries of promotion and a lack of clarity in discussions.
  • Mozilla AI: insights on Mozilla AI being showcased in Nature, advancements in large language models (LLMs), and the rising popularity of the Continue tool for AI-assisted coding.
  • Gorilla LLM (Berkeley Function Calling): user confusion regarding function-calling evaluation and demand for tools to analyze custom datasets effectively.
  • HuggingFace: updates on OpenAI leadership changes, Hugging Face reaching 1 million models, the launch of Gemini's object-detection space, and discussions of Llama 3.2, machine learning in neuroscience, and AI deployment strategies. The computer-vision channel explored multimodal models, VLMs, contrastive learning, and challenges in 3D/4D understanding, while the NLP channel delved into word injection in embeddings and parameter-freezing techniques.

HuggingFace ▷ #diffusion-discussions (2 messages):

  • Colab Free Tier Performance Potential: A user noted the ability to run models in the free tier of Google Colab, sparking a discussion on the specifics of model execution.
    • This raised questions about what the descriptor 'relatively performant' means in that environment.
  • Training Diffusion Models With UNet2D: Queries arose about the default training logic in the Hugging Face diffusers tutorial, specifically unconditional image generation using a UNet2D model to generate butterfly images from the Smithsonian dataset.
    • Links to additional resources were provided for guidance.
  • MSE Loss Confusion in Diffusion Models: A user questioned why the MSE loss is computed against the random Gaussian noise rather than the residual image; the model is trained to predict the noise added at each timestep (see the sketch below).
    • That predicted noise is what gets subtracted from the current noisy image during denoising.
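
To ground that last point, here is a minimal sketch of the epsilon-prediction objective as it appears in a diffusers-style training step. UNet2DModel and DDPMScheduler are real diffusers APIs; the random tensors stand in for an actual image batch.

```python
# Minimal sketch of the epsilon-prediction objective discussed above,
# using Hugging Face diffusers. The random tensors below stand in for
# a real training batch of images.
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DModel

model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)
scheduler = DDPMScheduler(num_train_timesteps=1000)

clean_images = torch.randn(4, 3, 64, 64)   # stand-in for a real batch
noise = torch.randn_like(clean_images)     # the Gaussian noise target
timesteps = torch.randint(0, 1000, (4,))

# Forward-diffuse the clean images to random timesteps.
noisy_images = scheduler.add_noise(clean_images, noise, timesteps)

# The UNet predicts the noise that was added, not the clean image,
# so the MSE target is `noise` itself.
noise_pred = model(noisy_images, timesteps).sample
loss = F.mse_loss(noise_pred, noise)
```

The key line is the loss: the target is `noise`, the Gaussian noise that was added, because the network is trained to recover exactly that quantity so the sampler can subtract it back out at each denoising step.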

Suitable LLMs for Hardware

Members discussed recommendations for LLMs to run on systems with an Intel i7-8750H and 32GB of RAM, suggesting models like Qwen 2.5. Options for using the integrated Intel GPU were also raised, with the caveat that reliance on system RAM limits speed; a sketch of CPU-only inference on this class of hardware follows.
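
Here is a hedged sketch of what that looks like in practice with llama-cpp-python and a quantized GGUF file. The filename is a placeholder, and the thread count assumes the i7-8750H's six physical cores; any small instruct model in the 4-8GB range fits comfortably in 32GB of system RAM.

```python
# Hedged sketch of CPU-only inference on hardware like an i7-8750H,
# using llama-cpp-python with a quantized GGUF model file.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen2.5-7b-instruct-q4_k_m.gguf",  # placeholder filename
    n_ctx=4096,    # modest context to keep RAM usage down
    n_threads=6,   # one thread per physical core is a sane default
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}]
)
print(out["choices"][0]["message"]["content"])
```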

Sonar Huge 405b & PMPP Textbook Discussion

This section covers the Sonar Huge 405B model, whose scratchpad usage and efficient search capabilities streamline exploration. It also addresses the rate-limited O1-Preview option in the Wordware app and a dedicated O1-Preview flow that users can experiment with instead. Finally, it recaps a GPU Mode Discord exchange about identifying scam links and the excitement around using PMPP as a class textbook.

GPU Mode Discussions

Discussions in the GPU Mode channels covered GPU performance optimization, including Metal benchmarks on the iPhone 16 Pro, bfloat16 enablement in ExecuTorch, and the scalable matrix extension for the Apple M4 processor. Users shared insights on enhancing GPU performance, attending events like CUDA Virtual Connect, and making full use of CuPy with Llama 3.2. Links to relevant GitHub repositories were provided for further exploration.

Interconnects (Nathan Lambert) Random Discussions

The Interconnects (Nathan Lambert) random discussions covered various topics, including LLaMA Stack integration tools, the FTC's AI crackdown announcement, NeurIPS's rejection of RewardBench, questions about the relevance of C++ knowledge in academia, and social media reactions. Members debated the significance of accumulating social media followers, voiced skepticism about Molmo's praise relative to Llama 3.2, and clarified Molmo's release timing. Overall, the discussions ranged from software tools to industry regulations, mixing technical insights with community perspectives.

Gemini Tokenization and Upcoming Pricing Adjustments

The section discusses OpenRouter's transition to counting tokens instead of characters for Gemini models, which reduces apparent token counts by a factor of ~4 to cut costs for developers. Per-token prices will double as billing aligns to a per-token model, so the net cost to developers roughly halves, with further adjustments planned after October 1st. Additionally, upcoming Gemini prices will be aligned with AI Studio's lower-tier per-token prices to improve the developer experience and standardization, and anticipated price cuts on October 1st aim to bring further reductions; the arithmetic below illustrates the net effect.
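
A small, purely illustrative calculation of the combined effect; the absolute prices are made up, and only the ratios (~4 characters per token, per-token prices doubling) come from the recap:

```python
# Illustrative arithmetic for the pricing change described above.
chars = 4000                          # prompt size in characters
old_unit_price = 1.0                  # arbitrary price per billed character

tokens = chars / 4                    # token counts come out ~4x lower
new_unit_price = 2 * old_unit_price   # per-token price doubles

old_cost = chars * old_unit_price     # 4000.0, billed per character
new_cost = tokens * new_unit_price    # 2000.0, billed per token
print(new_cost / old_cost)            # 0.5 -> roughly half, if both changes land as described
```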

Understanding VectorStoreIndex

Confusion arose regarding the VectorStoreIndex, with users clarifying the relationship between an index and its underlying vector store. The group discussed how to access an index's vector_store property without needing to initialize a new vector store; a minimal sketch follows.
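
A minimal sketch of that relationship, assuming the llama_index (v0.10+) package; MockEmbedding stands in for a real embedding model so the snippet runs without API keys:

```python
# Minimal sketch: a VectorStoreIndex wraps a vector store, so you reach
# the store through the index rather than constructing a new one.
from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.core.embeddings import MockEmbedding

Settings.embed_model = MockEmbedding(embed_dim=8)  # stand-in embedder

index = VectorStoreIndex.from_documents([Document(text="hello world")])

# The underlying store is already attached to the index; no need to
# initialize a separate vector store to inspect it.
vector_store = index.vector_store
print(type(vector_store).__name__)  # SimpleVectorStore by default
```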

Critiques and Discussions on Tinygrad, LLaMA, and Other Models

This section collects critiques and discussions around Tinygrad, LLaMA, and related technologies. In the Tinygrad threads, users asked about calculations, the impact of data quality on model performance, and data characteristics, then dug into view-mergeability proofs, pairwise versus global mergeability, offsets, masks, view-merging optimization, and view merging as symbolic reduction. Others reported issues with Tinygrad training and Metal errors, discussed limitations on Metal, and compared Tinygrad with PyTorch and CUDA in terms of customization and optimization benefits. On the LLaMA side, members shared insights on image-captioning capabilities, OpenAI's function-calling API, and free access to Llama 3.2 Vision. The section also covers the introduction of MaskBit for image generation, MonoFormer for unified generation processes, recent Hugging Face developments, and discussions of multimodal transformer models.

Footer Information and Acknowledgement

The footer of the page includes social links for finding AI News elsewhere, such as on Twitter and through the newsletter. The content is brought to you by Buttondown, a platform for starting and growing newsletters.


FAQ

Q: What is Molmo by Allen AI?

A: Molmo is a family of state-of-the-art multimodal AI models by Allen AI, excelling in visual question answering and image captioning tasks.

Q: How does the model architecture of Molmo utilize OpenAI's ViT-L/14 CLIP for vision encoding?

A: Molmo uses OpenAI's ViT-L/14 CLIP model as its vision encoder, a choice that outperformed SigLIP in the team's experiments.

Q: What is Ovis 1.6?

A: Ovis 1.6 is a Gemma 2-based 10B parameter vision-language model that outperforms larger models like Llama 3.2 11B and GPT-4o-mini on the MMMU benchmark.

Q: What capabilities and tasks does Ovis 1.6 excel in?

A: Ovis 1.6 excels at the vision-language tasks measured by the MMMU benchmark, where it outperforms larger models like Llama 3.2 11B and GPT-4o-mini.

Q: What are some highlights of Llama 3.2 by Meta in the AI community?

A: Llama 3.2 models by Meta have been praised for strong performance on multimodal benchmarks, including tasks like mathematical reasoning and visual question answering. The release of smaller 1B and 3B models alongside larger versions sparked controversy.

Q: What challenges have EU users faced regarding Llama 3.2 models?

A: EU users have faced accessibility challenges with Llama 3.2 models due to Meta's restrictions and policies, sparking debates on inference provision in Europe.

Q: What changes has OpenRouter implemented regarding tokenization for Gemini models?

A: OpenRouter has transitioned to counting tokens instead of characters for Gemini models, reducing apparent token counts by a factor of ~4 to cut costs for developers.

Q: What are some topics discussed in the AI community Discord channels?

A: Various topics like advancements in large language models, discussions on multimodal models, contrastive learning, and challenges in 3D/4D understanding were explored in the AI community Discord channels.

Q: What tool focuses on scratchpad usage and efficient search capabilities in the AI domain?

A: The Sonar Huge 405b tool focuses on scratchpad usage and efficient search capabilities in the AI domain to streamline the exploration process.

Q: What are some of the challenges discussed in the section concerning Tinygrad and LLaMA models?

A: Challenges discussed include view mergeability proofs, limitations on Metal, comparisons between Tinygrad, PyTorch, and CUDA, and optimizations and customization benefits.
