The multimodal LLM can use parts of images as queries and is trained on the GRIT dataset, which consists of around 1.1 million examples.
Apple Inc., in collaboration with Columbia University’s AI researchers, has quietly introduced an open-source multimodal large language model named “Ferret.” The model, unveiled on GitHub in October, gained significant attention from the AI research community despite no official announcement.
Ferret was trained on eight A100 GPUs with 80 GB of memory each. The dataset used in the project is governed by the CC BY-NC 4.0 licence, which permits non-commercial use only. The key contributions of the project include the Ferret model, the GRIT dataset and Ferret-Bench.
The Ferret model combines a hybrid region representation with a spatial-aware visual sampler to enable fine-grained and open-vocabulary referring and grounding within a multimodal large language model (MLLM). Here, referring means understanding image regions the user points to, while grounding means localising in the image the objects the model mentions. Together, these capabilities let the model understand and respond to complex queries that involve both text and images.
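To make the idea concrete, here is a minimal NumPy sketch of a hybrid region representation: a discrete bounding box (which can be rendered as coordinate tokens in the text prompt) paired with a continuous feature pooled from points sampled inside the region. The function name, the random point sampling and the mean pooling are illustrative assumptions, not Ferret’s actual implementation, whose visual sampler is learned and spatial-aware.

```python
import numpy as np

def hybrid_region_representation(region_mask, feature_map, num_samples=32, seed=0):
    """Illustrative sketch: pair discrete box coordinates with a
    continuous feature pooled from points sampled inside the region.

    region_mask: (H, W) boolean mask of the referred region.
    feature_map: (H, W, C) dense visual features for the image.
    """
    ys, xs = np.nonzero(region_mask)
    # Discrete part: the region's bounding box, which can be rendered
    # as coordinate tokens in the prompt, e.g. "[x1, y1, x2, y2]".
    box = (xs.min(), ys.min(), xs.max(), ys.max())

    # Continuous part: sample points inside the (possibly free-form)
    # region and pool their visual features, so the representation is
    # not limited to rectangular boxes.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(xs), size=min(num_samples, len(xs)), replace=False)
    region_feature = feature_map[ys[idx], xs[idx]].mean(axis=0)
    return box, region_feature

# Toy usage: a 16x16 feature map with 8-dim features and a circular region.
H, W, C = 16, 16, 8
feats = np.random.rand(H, W, C)
yy, xx = np.mgrid[0:H, 0:W]
mask = (yy - 8) ** 2 + (xx - 8) ** 2 < 16
box, feat = hybrid_region_representation(mask, feats)
print("box tokens:", box, "| pooled feature shape:", feat.shape)
```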
The project introduces the GRIT (Ground-and-Refer Instruction Tuning) dataset, which consists of approximately 1.1 million examples. This dataset is designed to support large-scale, hierarchical and robust instruction tuning for grounding and referring tasks, and it serves as a valuable resource for training and evaluating AI models on instructions that are tied to specific image regions.
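As a hedged illustration of what a grounding-and-referring instruction-tuning example could look like, consider the record below. The field names and the coordinate-token format are assumptions made for exposition, not GRIT’s actual schema.

```python
# A hypothetical single training example for grounding/referring
# instruction tuning; the exact GRIT schema may differ.
grit_style_example = {
    "image": "example.jpg",
    "conversation": [
        {
            "role": "user",
            # A referred region is spelled out with coordinate tokens.
            "content": "What is the animal in the region [120, 80, 260, 210]?",
        },
        {
            "role": "assistant",
            # The answer grounds its own mention with a bounding box.
            "content": "It is a red fox [118, 76, 263, 214] resting on a rock.",
        },
    ],
}
print(grit_style_example["conversation"][0]["content"])
```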
Ferret-Bench is a multimodal evaluation benchmark created as part of the project. It is designed to assess the performance of AI models across various dimensions, including Referring/Grounding, Semantics, Knowledge, and Reasoning. The benchmark provides a comprehensive testbed for evaluating models like Ferret in real-world scenarios.
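A minimal sketch of how a benchmark spanning several skill dimensions could be scored is shown below. The dimension names follow the article, but the record format and the exact-match scoring are assumptions; Ferret-Bench itself judges free-form answers with richer methods.

```python
from collections import defaultdict

def evaluate(model_answer_fn, benchmark):
    """Average a 0-1 score per dimension over all benchmark items."""
    totals, counts = defaultdict(float), defaultdict(int)
    for item in benchmark:
        pred = model_answer_fn(item["image"], item["question"])
        # Placeholder scoring: exact string match on the answer.
        score = float(pred.strip().lower() == item["answer"].strip().lower())
        totals[item["dimension"]] += score
        counts[item["dimension"]] += 1
    return {d: totals[d] / counts[d] for d in totals}

# Toy run with a stub model and two items.
bench = [
    {"image": "a.jpg", "question": "What is in [10,10,50,50]?",
     "answer": "a cat", "dimension": "Referring/Grounding"},
    {"image": "b.jpg", "question": "Why is the road wet?",
     "answer": "it rained", "dimension": "Reasoning"},
]
print(evaluate(lambda img, q: "a cat", bench))
```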
Ferret is described as a model that can use parts of images as queries, making it a powerful multimodal AI system. It works by examining a specific region of an image, identifying elements within that region that could be relevant to a query, and drawing bounding boxes around them. It then folds the identified elements into the query and responds in the manner of a traditional language model.
This means that if a user highlights an animal within a larger image and asks what it is, Ferret identifies the species and can use context from other elements in the image to provide further information.
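The flow just described can be sketched end to end as follows. Every function here is a hypothetical stand-in (detect_elements and answer_with_context do not exist in Ferret’s code); the point is the shape of the pipeline, not the real API.

```python
def detect_elements(image, region):
    """Stand-in detector: return (label, box) pairs found inside `region`."""
    return [("fox", (118, 76, 263, 214)), ("rock", (90, 180, 300, 240))]

def answer_with_context(question, elements):
    """Stand-in LLM call: fold the grounded elements into the answer."""
    labels = ", ".join(f"{name} at {box}" for name, box in elements)
    return f"The highlighted animal is a fox. Detected: {labels}."

def ferret_style_query(image, region, question):
    # 1. Examine the user-highlighted region of the image.
    # 2. Identify relevant elements and draw bounding boxes around them.
    elements = detect_elements(image, region)
    # 3. Use the grounded elements as part of the language query.
    return answer_with_context(question, elements)

print(ferret_style_query("scene.jpg", (100, 60, 280, 230),
                         "What is the animal in this region?"))
```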
The release of Ferret is seen as significant because it represents an unexpected level of openness from Apple, a company known for its secrecy. This open-source approach contrasts with Apple’s traditional practices.
One reason for this openness may be Apple’s need to compete in the AI industry, where it faces challenges from rivals like Microsoft and Google. Apple’s infrastructure is not optimised for serving large language models (LLMs) at scale, which puts it at a disadvantage. To address this, Apple must choose between partnering with cloud hyperscalers for AI or sharing its work with the open-source community, a strategy similar to what Meta Platforms Inc. (formerly Facebook) has adopted.
Ferret’s release demonstrates Apple’s willingness to collaborate and contribute to the AI research community, reflecting a shift in its approach to AI development.