
ai_edge_litert_tflite_model_load_and_inference_quickstart.py

python

This quickstart demonstrates how to load a LiteRT model (.tflite) and run inference on it.

Source: ai.google.dev
import numpy as np
import ai_edge_litert.interpreter as litert

# Load the LiteRT model and allocate tensors.
interpreter = litert.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Get input and output details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Prepare dummy input data matching the model's required shape and type.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

# Run inference.
interpreter.invoke()

# Retrieve and print the results.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)