
salamandra_7b_text_generation_with_huggingface_transformers.py


This quickstart demonstrates how to load the Salamandra-7b model and generate text with Hugging Face Transformers.

15d ago · 29 lines · huggingface.co
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model identifier
model_id = "BSC-LT/salamandra-7b"

# Load tokenizer and model (bfloat16 halves memory; device_map="auto"
# places layers on the available GPU(s) or CPU)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Prepare input text (Catalan for "Artificial intelligence is")
prompt = "La intel·ligència artificial és"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate output with top-k / top-p (nucleus) sampling
output_tokens = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)

# Decode and print result
print(tokenizer.batch_decode(output_tokens, skip_special_tokens=True)[0])
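To make the sampling parameters concrete: with do_sample=True, top_k=50 keeps only the 50 most likely next tokens, and top_p=0.95 further restricts sampling to the smallest set of tokens whose cumulative probability reaches 0.95. The following is a minimal pure-Python sketch of that filtering step, using a hypothetical helper name; the actual Transformers implementation operates on logits tensors inside generate(), not on probability lists.

```python
def filter_top_k_top_p(probs, top_k=50, top_p=0.95):
    """Illustrative sketch of top-k / top-p (nucleus) filtering.

    probs: list of next-token probabilities summing to 1.
    Returns a dict mapping surviving token indices to renormalized
    probabilities, from which a token would then be sampled.
    """
    # Sort token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    # Top-k: keep only the k most likely tokens.
    order = order[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the surviving tokens.
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}


# Example: token 4's tail probability is cut off by top_p,
# and the rest are renormalized.
print(filter_top_k_top_p([0.5, 0.3, 0.1, 0.06, 0.04], top_k=3, top_p=0.9))
```

Lowering top_p (or top_k) makes generations more conservative; raising them makes output more diverse but less coherent.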