Python module
registry
Model registry for tracking various model variants.
PipelineRegistry
class max.pipelines.lib.registry.PipelineRegistry(architectures)
Parameters:

- architectures (list[SupportedArchitecture])
get_active_huggingface_config()
get_active_huggingface_config(huggingface_repo)
Retrieves or creates a cached HuggingFace AutoConfig for the given model configuration.
This method maintains a cache of HuggingFace configurations to avoid unnecessary reloads, each of which would incur a Hugging Face Hub API call. If a config for the given model hasn't been loaded before, a new one is created using AutoConfig.from_pretrained() with the model's settings.
Parameters:

- huggingface_repo (HuggingFaceRepo) – The HuggingFaceRepo containing the model.

Returns:

The HuggingFace configuration object for the model.

Return type:

AutoConfig
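A minimal usage sketch follows. The import path for HuggingFaceRepo and its constructor argument are assumptions for illustration, not part of this reference:

# Import path and constructor argument for HuggingFaceRepo are assumptions.
from max.pipelines.lib import HuggingFaceRepo
from max.pipelines.lib.registry import PipelineRegistry

registry = PipelineRegistry(architectures=[])
repo = HuggingFaceRepo(repo_id="your-org/your-model-name")

# The first call loads the config via AutoConfig.from_pretrained(), which
# hits the Hugging Face Hub; repeat calls return the cached object.
config = registry.get_active_huggingface_config(repo)
config_again = registry.get_active_huggingface_config(repo)
assert config is config_again  # served from the cache, no second Hub call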
get_active_tokenizer()
get_active_tokenizer(huggingface_repo)
Retrieves or creates a cached HuggingFace AutoTokenizer for the given model configuration.
This method maintains a cache of HuggingFace tokenizers to avoid unnecessary reloads, each of which would incur a Hugging Face Hub API call. If a tokenizer for the given model hasn't been loaded before, a new one is created using AutoTokenizer.from_pretrained() with the model's settings.
Parameters:

- huggingface_repo (HuggingFaceRepo) – The HuggingFaceRepo containing the model.

Returns:

The HuggingFace tokenizer for the model.

Return type:

PreTrainedTokenizer | PreTrainedTokenizerFast
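A short sketch of typical use, reusing the registry and repo objects from the previous example:

# The first call creates the tokenizer via AutoTokenizer.from_pretrained();
# later calls for the same repo reuse the cached instance.
tokenizer = registry.get_active_tokenizer(repo)

# The result is a standard Hugging Face tokenizer, so the usual API applies.
token_ids = tokenizer("Hello, world!")["input_ids"]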
register()
register(architecture, *, allow_override=False)
Add a new architecture to the registry.
Parameters:

- architecture (SupportedArchitecture)
- allow_override (bool)

Return type:

None
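A sketch of registering an architecture, where my_architecture is the SupportedArchitecture instance shown later on this page:

registry.register(my_architecture)

# Re-registering under the same name presumably fails unless overriding
# is explicitly allowed (behavior assumed from the parameter name).
registry.register(my_architecture, allow_override=True)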
reset()
reset()
Return type:

None
retrieve()
retrieve(pipeline_config, task=PipelineTask.TEXT_GENERATION, override_architecture=None)
Parameters:

- pipeline_config (PipelineConfig)
- task (PipelineTask)
- override_architecture (str | None)

Return type:

tuple[PipelineTokenizer, PipelineTypes]
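A hedged sketch; the import paths and the PipelineConfig constructor argument shown here are assumptions:

# Import paths and PipelineConfig arguments are illustrative.
from max.pipelines.lib import PipelineConfig, PipelineTask

pipeline_config = PipelineConfig(model_path="your-org/your-model-name")

# Returns the tokenizer together with a constructed pipeline for the task.
tokenizer, pipeline = registry.retrieve(
    pipeline_config, task=PipelineTask.TEXT_GENERATION
)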
retrieve_architecture()
retrieve_architecture(huggingface_repo)
Parameters:

- huggingface_repo (HuggingFaceRepo)

Return type:

SupportedArchitecture | None
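For example:

# Returns the matching SupportedArchitecture, or None if the model's
# architecture is not registered.
arch = registry.retrieve_architecture(repo)
if arch is None:
    print(f"No registered architecture for {repo}")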
retrieve_factory()
retrieve_factory(pipeline_config, task=PipelineTask.TEXT_GENERATION, override_architecture=None)
Parameters:

- pipeline_config (PipelineConfig)
- task (PipelineTask)
- override_architecture (str | None)

Return type:

tuple[PipelineTokenizer, Callable[[], PipelineTypes]]
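As the return type suggests, this returns a zero-argument factory rather than a constructed pipeline, letting callers defer or repeat pipeline construction. A sketch, reusing the pipeline_config from the retrieve() example:

tokenizer, pipeline_factory = registry.retrieve_factory(pipeline_config)

# Construct the pipeline only when it is actually needed.
pipeline = pipeline_factory()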
SupportedArchitecture
class max.pipelines.lib.registry.SupportedArchitecture(name, example_repo_ids, default_encoding, supported_encodings, pipeline_model, task, tokenizer, default_weights_format, multi_gpu_supported=False, rope_type=RopeType.none, weight_adapters=None)
Represents a model architecture configuration for MAX pipelines.
This class defines all the necessary components and settings required to support a specific model architecture within the MAX pipeline system. Each SupportedArchitecture instance encapsulates the model implementation, tokenizer, supported encodings, and other architecture-specific configuration.
New architectures should be registered into the PipelineRegistry using the register() method:
# Import paths below are illustrative and may need adjusting for your
# MAX installation. MyModel is your own PipelineModel subclass, and
# weight_adapters is your own module of weight-conversion helpers.
from max.pipelines.lib import (
    KVCacheStrategy,
    PipelineTask,
    SupportedArchitecture,
    SupportedEncoding,
    TextTokenizer,
    WeightsFormat,
)

my_architecture = SupportedArchitecture(
    name="MyModelForCausalLM",  # Must match your Hugging Face model class name
    example_repo_ids=[
        "your-org/your-model-name",  # Add example model repository IDs
    ],
    default_encoding=SupportedEncoding.q4_k,
    supported_encodings={
        SupportedEncoding.q4_k: [KVCacheStrategy.PAGED],
        SupportedEncoding.bfloat16: [KVCacheStrategy.PAGED],
        # Add other encodings your model supports
    },
    pipeline_model=MyModel,
    tokenizer=TextTokenizer,
    default_weights_format=WeightsFormat.safetensors,
    multi_gpu_supported=True,  # Set based on your implementation capabilities
    weight_adapters={
        WeightsFormat.safetensors: weight_adapters.convert_safetensor_state_dict,
        # Add other weight formats if needed
    },
    task=PipelineTask.TEXT_GENERATION,
)
Parameters:

- name (str) – The name of the model architecture, which must match the Hugging Face model class name.
- example_repo_ids (list[str]) – A list of Hugging Face repository IDs that use this architecture, for testing and validation purposes.
- default_encoding (SupportedEncoding) – The default quantization encoding to use when no specific encoding is requested.
- supported_encodings (dict[SupportedEncoding, list[KVCacheStrategy]]) – A dictionary mapping supported quantization encodings to their compatible KV cache strategies.
- pipeline_model (type[PipelineModel]) – The PipelineModel class that defines the model graph structure and execution logic.
- task (PipelineTask) – The pipeline task type that this architecture supports.
- tokenizer (Callable[..., PipelineTokenizer]) – A callable that returns a PipelineTokenizer instance for preprocessing model inputs.
- default_weights_format (WeightsFormat) – The weights format expected by the pipeline_model.
- multi_gpu_supported (bool) – Whether the architecture supports multi-GPU execution.
- rope_type (RopeType) – The type of RoPE (Rotary Position Embedding) used by the model.
- weight_adapters (dict[WeightsFormat, WeightsAdapter] | None) – A dictionary of weight format adapters for converting checkpoints from other formats to the default format.
tokenizer_cls
property tokenizer_cls: type[PipelineTokenizer]
get_pipeline_for_task()
max.pipelines.lib.registry.get_pipeline_for_task(task, pipeline_config)
Parameters:

- task (PipelineTask)
- pipeline_config (PipelineConfig)

Return type:

type[TextGenerationPipeline] | type[EmbeddingsPipeline] | type[SpeculativeDecodingTextGenerationPipeline] | type[AudioGeneratorPipeline] | type[SpeechTokenGenerationPipeline]
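A sketch, reusing the PipelineTask import and pipeline_config from the retrieve() example:

from max.pipelines.lib.registry import get_pipeline_for_task

# Resolves the concrete pipeline class for the requested task.
pipeline_cls = get_pipeline_for_task(PipelineTask.TEXT_GENERATION, pipeline_config)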