---
library_name: mlx
model_name: Llama 3.3 Shisa V2.1 70B
license: llama3.3
pipeline_tag: text-generation
language:
- ja
- en
tags:
- transformers
- mlx
- translation
base_model:
- shisa-ai/shisa-v2.1-llama3.3-70b
datasets:
- shisa-ai/shisa-v2.1-sharegpt
---

# mlx-community/shisa-v2.1-llama3.3-70b-mlx-bf16

The model [mlx-community/shisa-v2.1-llama3.3-70b-mlx-bf16](https://huggingface.co/mlx-community/shisa-v2.1-llama3.3-70b-mlx-bf16) was converted to MLX format from [shisa-ai/shisa-v2.1-llama3.3-70b](https://huggingface.co/shisa-ai/shisa-v2.1-llama3.3-70b) using mlx-lm version **0.28.4**.

You can find other translation-related MLX model quants for Apple silicon Macs at https://huggingface.co/bibproj.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the converted model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/shisa-v2.1-llama3.3-70b-mlx-bf16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
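For quick checks without writing Python, mlx-lm also installs command-line entry points. A minimal sketch (assuming the `mlx_lm.generate` CLI from the mlx-lm version above; note this bf16 70B model is large and requires a Mac with ample unified memory):

```shell
# Generate a short completion directly from the command line.
# The model weights are downloaded from the Hugging Face Hub on first use.
mlx_lm.generate \
  --model mlx-community/shisa-v2.1-llama3.3-70b-mlx-bf16 \
  --prompt "hello" \
  --max-tokens 128
```

The CLI applies the tokenizer's chat template automatically when one is present, mirroring the Python snippet above.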