
mlx_lm_load_and_generate_text_from_huggingface_model.py


This quickstart demonstrates how to load a model from the Hugging Face Hub and generate text with it using mlx-lm.

from mlx_lm import load, generate

# Download (if necessary) and load the 4-bit quantized model and its
# tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Mistral-7B-v0.1-4bit")

prompt = "hello"

# Generate a completion; verbose=True streams the text to stdout as it
# is produced. The full completion is also returned as a string.
response = generate(model, tokenizer, prompt=prompt, verbose=True)