MLX Hub

PyPI Package  →  MLX Hub v1.0.0
Source Code   →  g-aggarwal/mlx-hub

MLX Hub is an open-source command-line tool for macOS that helps you search, download and manage MLX AI models. MLX is a framework from Apple machine learning research for efficient and flexible machine learning on Apple silicon.

Features

  • Scan for models already downloaded to your device in the Hugging Face Hub cache.

  • Search for MLX models on the Hugging Face Hub.

  • Suggest MLX models to download.

  • Download MLX models by model ID.

  • Delete MLX models as needed.

  • Interactive Mode for a more convenient, prompt-driven interface.

Installation

You can install MLX-Hub using pip:

pip install mlx-hub

This installs the mlx-hub Python package from PyPI, including mlx-hub-cli and all required dependencies.

Hugging Face: User Access Token

MLX-Hub uses huggingface_hub to interact with MLX models on Hugging Face. Create an access token on Hugging Face and add it to huggingface_hub.

 

Hugging Face Hub documentation: https://huggingface.co/docs/hub/security-tokens

To create an access token, go to your Hugging Face settings: https://huggingface.co/settings/tokens

 

To add your token, log in with huggingface-cli:

huggingface-cli login
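
If you prefer to set the token from Python instead of the command line, huggingface_hub also exposes a login helper. A minimal sketch (the token value shown is a placeholder):

from huggingface_hub import login

# Prompts for a token and stores it in the local Hugging Face
# credential store, same effect as `huggingface-cli login`.
login()

# Alternatively, pass the token directly (placeholder value):
# login(token="hf_your_token_here")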

Quick Start

Command line arguments

MLX-Hub CLI accepts the following command line arguments:

options:
  -h, --help              Show the help message
  --start                 Start Interactive Mode
  --scan                  Scan for downloaded models in the Hugging Face cache
  --search phrase         Search for MLX models using a search phrase
  --suggest               Suggest MLX models to download
  --download model_id     Download a specific model
  --delete model_id       Delete a specific model
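
For readers curious how a flag-to-action mapping like this can be wired up, here is a minimal, illustrative argparse sketch; it is not the package's actual implementation, and the handler functions are hypothetical:

import argparse

# Hypothetical stand-ins for the real action handlers.
def scan():            print("scanning Hugging Face cache...")
def search(phrase):    print(f"searching for '{phrase}'...")
def download(model_id): print(f"downloading {model_id}...")

parser = argparse.ArgumentParser(prog="mlx-hub-cli")
parser.add_argument("--scan", action="store_true", help="Scan for downloaded models")
parser.add_argument("--search", metavar="phrase", help="Search for MLX models")
parser.add_argument("--download", metavar="model_id", help="Download a specific model")
args = parser.parse_args()

if args.scan:
    scan()
elif args.search:
    search(args.search)
elif args.download:
    download(args.download)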

Interactive Mode

Interactive mode lets you run all of the available actions in a user-friendly, prompt-driven environment.

To start interactive mode, use the --start argument:

> mlx-hub-cli --start

 

Starting interactive mode.

 

Available Actions:
    scan                Scan for downloaded models in the Hugging Face cache
    search    phrase    Search for MLX models using a search phrase
    suggest             Suggest MLX models to download
    download  model_id  Download a specific model
    delete    model_id  Delete a specific model
    exit                Exit Interactive Mode
    help                Show this help message

 

Enter Action > scan

 

1 downloaded models:
mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx

 

Enter Action > exit

Goodbye!

To exit the interactive mode, use the exit action:

Enter Action > exit

 

Goodbye!

Actions

Scan

Scans the Hugging Face cache directory and lists all the MLX models that are currently downloaded on your device.

mlx-hub-cli --scan

 

Enter Action > scan

 

1 models found: 

mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx
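
Under the hood, the Hugging Face cache can be inspected with huggingface_hub's scan_cache_dir. The sketch below lists cached repositories and, as an assumption about how MLX models are identified, keeps only repo IDs containing "mlx"; mlx-hub's actual filtering may differ:

from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()  # inspects ~/.cache/huggingface/hub by default
# Assumption: treat any cached repo whose ID mentions "mlx" as an MLX model.
mlx_repos = [r for r in cache.repos if "mlx" in r.repo_id.lower()]

print(f"{len(mlx_repos)} downloaded models:")
for repo in mlx_repos:
    print(f"  {repo.repo_id}  ({repo.size_on_disk_str})")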

Search

Searches for MLX models on the Hugging Face Hub using a specified search phrase. The phrase is one or more substrings that must appear in the model ID.

> mlx-hub-cli --search bert

3 models found:
mlx-community/bert-base-uncased-mlx
mlx-community/bert-large-uncased-mlx
mlx-community/bert-base-multilingual-uncased

Use quotes around the search phrase if it contains multiple substrings.

> mlx-hub-cli --search "whisper v2"

3 models found:
mlx-community/whisper-large-v2-mlx
mlx-community/whisper-large-v2-mlx-8bit
mlx-community/whisper-large-v2-mlx-4bit

In Interactive Mode, you don't need the quotes.

Enter Action > search whisper v2

3 models found:
mlx-community/whisper-large-v2-mlx
mlx-community/whisper-large-v2-mlx-8bit
mlx-community/whisper-large-v2-mlx-4bit
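
The equivalent lookup with the huggingface_hub Python API is a short call to list_models. This sketch assumes that scoping the search to the mlx-community organisation is a reasonable proxy for "MLX models"; mlx-hub's exact filter may differ:

from huggingface_hub import HfApi

api = HfApi()
# Assumption: approximate mlx-hub's search with a Hub search scoped
# to the mlx-community organisation.
results = api.list_models(search="whisper v2", author="mlx-community", limit=10)

for model in results:
    print(model.id)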

Suggest

The suggest action recommends MLX models to download, taken from a predefined list of suggested models.

> mlx-hub-cli --suggest
 

Enter Action > suggest

 

Suggested models:
mlx-community/whisper-medium-mlx
mlx-community/Meta-Llama-3-8B-Instruct-4bit
mlx-community/whisper-large-mlx
mlx-community/WizardLM-2-8x22B-4bit
mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx
mlx-community/Mistral-7B-Instruct-v0.3-4bit
mlx-community/gemma-2-9b-it-4bit
mlx-community/quantized-gemma-2b
mlx-community/gemma-2-9b-it-8bit
mlx-community/Mistral-7B-Instruct-v0.2-4bit

Download

Downloads a specific model from the Hugging Face Hub to your device, identified by its model ID.

> mlx-hub-cli --download mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx

 

Enter Action > download mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx

 

Downloading model: mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx
README.md: 100%|██████████████████████████████████████████████| 648/648 [00:00<00:00, 8.52MB/s]
added_tokens.json: 100%|████████████████████████████████████| 51.0/51.0 [00:00<00:00,  740kB/s]
config.json: 100%|████████████████████████████████████████| 2.41k/2.41k [00:00<00:00, 32.6MB/s]
.gitattributes:███████████████████████████████████████████| 1.52k/1.52k [00:00<00:00, 21.1MB/s]
special_tokens_map.json: 100%|████████████████████████████████| 557/557 [00:00<00:00, 7.74MB/s]
tokenizer_config.json: 100%|██████████████████████████████| 1.37k/1.37k [00:00<00:00, 20.3MB/s]
tokenizer.json: 100%|█████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 15.3MB/s]
tokenizer.model: 100%|██████████████████████████████████████| 500k/500k [00:00<00:00, 6.23MB/s]
weights.00.safetensors: 100%|███████████████████████████████| 807M/807M [00:19<00:00, 42.4MB/s]
Fetching 9 files: 100%|███████████████████████████████████████████| 9/9 [00:19<00:00, 2.18s/it]
Model downloaded successfully.
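
Programmatically, a model can be downloaded into the same cache with huggingface_hub's snapshot_download. A minimal sketch using the model ID from the example above:

from huggingface_hub import snapshot_download

# Downloads (or resumes) all files of the repo into the Hugging Face cache
# and returns the local snapshot path.
path = snapshot_download(repo_id="mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx")
print(f"Model downloaded to: {path}")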

Delete

Deletes a specific model from the Hugging Face cache on your device, identified by its model ID.

mlx-hub-cli --delete mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx

 

Enter Action > delete mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx

 

Deleting model: mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx

Model deleted successfully.
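
Deletion from the cache is also exposed by huggingface_hub: scan the cache, collect the revisions of the target repo, and execute a delete strategy. A minimal sketch:

from huggingface_hub import scan_cache_dir

model_id = "mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx"

cache = scan_cache_dir()
# Find the cached repo matching the model ID, if it is present at all.
repo = next((r for r in cache.repos if r.repo_id == model_id), None)
if repo is None:
    print(f"{model_id} is not in the cache.")
else:
    revisions = [rev.commit_hash for rev in repo.revisions]
    strategy = cache.delete_revisions(*revisions)
    print(f"Freeing {strategy.expected_freed_size_str}...")
    strategy.execute()
    print("Model deleted successfully.")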