# Architecture overview

This document explains how pydantic2django is organized, what the shared core is responsible for, and what each implementation (Pydantic, Dataclass, TypedClass) has in common.

For the inlined API references below, we use mkdocstrings' Python handler syntax; see "mkdocstrings usage" for details.
## Big picture
- Goal: Convert typed Python models into Django models and make them interoperable in both directions.
- Layers:
    - Core: Shared building blocks (discovery, factories, bidirectional type mapping, typing utils, import aggregation, context handling, code generation base).
    - Implementations: Source-specific adapters that plug into the core (Pydantic, Dataclass, TypedClass).
    - Django base models: Ready-to-extend base classes that store or map typed objects inside Django models.
High-level flow (static code generation path):

- Discover source models in Python packages.
- For each model, convert its fields to Django fields via the bidirectional type mapper.
- Create an in-memory Django model class and collect rendered definitions using Jinja templates.
- Write a `models.py` file including imports, model classes, and optional context classes.
Templates live in `src/pydantic2django/django/templates` and are used by the generator base.
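For orientation, here is what driving that flow might look like from user code. This is a sketch, not the library's documented entry point: the concrete generator name `StaticPydanticModelGenerator` and the `Pydantic2DjangoBaseClass` base are assumptions, while the keyword arguments are `BaseStaticGenerator.__init__`'s own, documented in full below.

```python
# Sketch only: class names here are assumed; the keyword arguments
# mirror BaseStaticGenerator.__init__ (see the API reference below).
generator = StaticPydanticModelGenerator(       # hypothetical concrete subclass
    output_path="myapp/models.py",              # where the rendered file is written
    packages=["myapp.schemas"],                 # packages scanned for source models
    app_label="myapp",                          # Meta.app_label for generated models
    filter_function=lambda m: not m.__name__.startswith("_"),
    verbose=True,
    module_mappings=None,                       # optionally remap imports
    base_model_class=Pydantic2DjangoBaseClass,  # assumed ready-to-extend base model
)
path = generator.generate()  # discover -> convert -> render -> write models.py
```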
## Core responsibilities
### Static generation orchestration

#### `BaseStaticGenerator`

Bases: `ABC`, `Generic[SourceModelType, FieldInfoType]`

Abstract base class for generating static Django models from source models (like Pydantic or Dataclasses).

Source code in `src/pydantic2django/core/base_generator.py`:
```python
class BaseStaticGenerator(ABC, Generic[SourceModelType, FieldInfoType]):
    """
    Abstract base class for generating static Django models from source models
    (like Pydantic or Dataclasses).
    """

    def __init__(
        self,
        output_path: str,
        app_label: str,
        filter_function: Optional[Callable[[type[SourceModelType]], bool]],
        verbose: bool,
        discovery_instance: BaseDiscovery[SourceModelType],
        model_factory_instance: BaseModelFactory[SourceModelType, FieldInfoType],
        module_mappings: Optional[dict[str, str]],
        base_model_class: type[models.Model],
        packages: list[str] | None = None,
        class_name_prefix: str = "Django",
    ):
        """
        Initialize the base generator.

        Args:
            output_path: Path to output the generated models.py file.
            packages: List of packages to scan for source models.
            app_label: Django app label to use for the models.
            filter_function: Optional function to filter which source models to include.
            verbose: Print verbose output.
            discovery_instance: An instance of a BaseDiscovery subclass.
            model_factory_instance: An instance of a BaseModelFactory subclass.
            module_mappings: Optional mapping of modules to remap imports.
            base_model_class: The base Django model class to inherit from.
            class_name_prefix: Prefix for generated Django model class names.
        """
        self.output_path = output_path
        self.packages = packages
        self.app_label = app_label
        self.filter_function = filter_function
        self.verbose = verbose
        self.discovery_instance = discovery_instance
        self.model_factory_instance = model_factory_instance
        self.base_model_class = base_model_class
        self.class_name_prefix = class_name_prefix
        self.carriers: list[ConversionCarrier[SourceModelType]] = []  # Stores results from model factory

        self.import_handler = ImportHandler(module_mappings=module_mappings)

        # Initialize Jinja2 environment.
        # Look for templates in the django/templates subdirectory.
        package_templates_dir = os.path.join(os.path.dirname(__file__), "..", "django", "templates")
        # If templates don't exist in the package, fall back to a path relative to the
        # package structure. This might need adjustment based on packaging/distribution.
        if not os.path.exists(package_templates_dir):
            package_templates_dir = os.path.join(pathlib.Path(__file__).parent.parent.absolute(), "templates")
            if not os.path.exists(package_templates_dir):
                logger.warning(
                    f"Templates directory not found at expected location: {package_templates_dir}. Jinja might fail."
                )
        self.jinja_env = jinja2.Environment(
            loader=jinja2.FileSystemLoader(package_templates_dir),
            trim_blocks=True,
            lstrip_blocks=True,
        )
        # Register common custom filters
        self.jinja_env.filters["format_type_string"] = TypeHandler.format_type_string

        # Add base model import
        self.import_handler._add_type_import(base_model_class)

    # --- Abstract Methods to be Implemented by Subclasses ---

    @abstractmethod
    def _get_source_model_name(self, carrier: ConversionCarrier[SourceModelType]) -> str:
        """Get the name of the original source model from the carrier."""
        pass

    @abstractmethod
    def _add_source_model_import(self, carrier: ConversionCarrier[SourceModelType]):
        """Add the necessary import for the original source model."""
        pass

    @abstractmethod
    def _prepare_template_context(
        self, unique_model_definitions: list[str], django_model_names: list[str], imports: dict
    ) -> dict:
        """Prepare the subclass-specific context for the main models_file.py.j2 template."""
        pass

    @abstractmethod
    def _get_models_in_processing_order(self) -> list[SourceModelType]:
        """Return source models in the correct processing (dependency) order."""
        pass

    @abstractmethod
    def _get_model_definition_extra_context(self, carrier: ConversionCarrier[SourceModelType]) -> dict:
        """Provide extra context specific to the source type for model_definition.py.j2."""
        pass

    # --- Common Methods ---

    def generate(self) -> str:
        """
        Main entry point: Generate and write the models file.

        Returns:
            The path to the generated models file.
        """
        try:
            content = self.generate_models_file()
            self._write_models_file(content)
            logger.info(f"Successfully generated models file at {self.output_path}")
            return self.output_path
        except Exception as e:
            logger.exception(f"Error generating models file: {e}", exc_info=True)
            raise

    def _write_models_file(self, content: str) -> None:
        """Write the generated content to the output file."""
        if self.verbose:
            logger.info(f"Writing models to {self.output_path}")
        output_dir = os.path.dirname(self.output_path)
        if output_dir and not os.path.exists(output_dir):
            try:
                os.makedirs(output_dir)
                if self.verbose:
                    logger.info(f"Created output directory: {output_dir}")
            except OSError as e:
                logger.error(f"Failed to create output directory {output_dir}: {e}")
                raise  # Re-raise after logging
        try:
            with open(self.output_path, "w", encoding="utf-8") as f:
                f.write(content)
            if self.verbose:
                logger.info(f"Successfully wrote models to {self.output_path}")
        except OSError as e:
            logger.error(f"Failed to write to output file {self.output_path}: {e}")
            raise  # Re-raise after logging

    def discover_models(self) -> None:
        """Discover source models using the configured discovery instance."""
        if self.verbose:
            logger.info(f"Discovering models from packages: {self.packages}")
        # Call matching the BaseDiscovery signature
        self.discovery_instance.discover_models(
            self.packages or [],  # Pass empty list if None
            app_label=self.app_label,
            user_filters=self.filter_function,
        )
        # Analyze dependencies after discovery
        self.discovery_instance.analyze_dependencies()
        if self.verbose:
            logger.info(f"Discovered {len(self.discovery_instance.filtered_models)} models after filtering.")
            if self.discovery_instance.filtered_models:
                for name in self.discovery_instance.filtered_models.keys():
                    logger.info(f"  - {name}")
            else:
                logger.info("  (No models found or passed filter)")
            logger.info("Dependency analysis complete.")

    def setup_django_model(self, source_model: SourceModelType) -> Optional[ConversionCarrier[SourceModelType]]:
        """
        Uses the model factory to create a Django model representation from a source model.

        Args:
            source_model: The source model instance (e.g., Pydantic class, Dataclass).

        Returns:
            A ConversionCarrier containing the results, or None if creation failed.
        """
        source_model_name = getattr(source_model, "__name__", str(source_model))
        if self.verbose:
            logger.info(f"Setting up Django model for {source_model_name}")

        # Instantiate the carrier here
        carrier = ConversionCarrier(
            source_model=cast(type[SourceModelType], source_model),
            meta_app_label=self.app_label,
            base_django_model=self.base_model_class,
            class_name_prefix=self.class_name_prefix,
            # Add other defaults/configs if needed, e.g., strict mode
            strict=False,  # Example default
        )

        try:
            # Use the factory to process the source model and populate the carrier
            self.model_factory_instance.make_django_model(carrier)
            if carrier.django_model:
                self.carriers.append(carrier)
                if self.verbose:
                    logger.info(f"Successfully processed {source_model_name} -> {carrier.django_model.__name__}")
                return carrier
            else:
                # Log if model creation resulted in None (e.g., only context fields).
                # Check carrier.context_fields or carrier.invalid_fields for details.
                if carrier.context_fields and not carrier.django_fields and not carrier.relationship_fields:
                    logger.info(f"Skipped Django model class for {source_model_name} - only context fields found.")
                elif carrier.invalid_fields:
                    logger.warning(
                        f"Skipped Django model class for {source_model_name} due to invalid fields: {carrier.invalid_fields}"
                    )
                else:
                    logger.warning(f"Django model was not generated for {source_model_name} for unknown reasons.")
                return None  # Return None if no Django model was created
        except Exception as e:
            logger.error(f"Error processing {source_model_name} with factory: {e}", exc_info=True)
            return None

    def generate_model_definition(self, carrier: ConversionCarrier[SourceModelType]) -> str:
        """
        Generates a string definition for a single Django model using a template.

        Args:
            carrier: The ConversionCarrier containing the generated Django model and context.

        Returns:
            The string representation of the Django model definition.
        """
        if not carrier.django_model:
            # It's possible a carrier exists only for context; handle gracefully.
            source_name = self._get_source_model_name(carrier)
            if carrier.model_context and carrier.model_context.context_fields:
                logger.info(f"Skipping Django model definition for {source_name} (likely context-only).")
                return ""
            else:
                logger.warning(
                    f"Cannot generate model definition for {source_name}: django_model is missing in carrier."
                )
                return ""

        django_model_name = self._clean_generic_type(carrier.django_model.__name__)
        source_model_name = self._get_source_model_name(carrier)  # Get original name via abstract method

        # --- Prepare Fields ---
        fields_info = []
        # Combine regular and relationship fields from the carrier
        all_django_fields = {**carrier.django_fields, **carrier.relationship_fields}
        for field_name, field_object in all_django_fields.items():
            # The field_object is already an instantiated Django field;
            # add a (name, object) tuple directly for the template.
            fields_info.append((field_name, field_object))

        # --- Prepare Meta ---
        meta_options = {}
        if hasattr(carrier.django_model, "_meta"):
            model_meta = carrier.django_model._meta
            meta_options = {
                "db_table": getattr(model_meta, "db_table", f"{self.app_label}_{django_model_name.lower()}"),
                "app_label": self.app_label,
                "verbose_name": getattr(model_meta, "verbose_name", django_model_name),
                "verbose_name_plural": getattr(model_meta, "verbose_name_plural", f"{django_model_name}s"),
                # Add other meta options if needed
            }

        # --- Prepare Base Class Info ---
        base_model_name = self.base_model_class.__name__
        if carrier.django_model.__bases__ and carrier.django_model.__bases__[0] != models.Model:
            # Use the immediate parent if it's not the absolute base 'models.Model'.
            # Assumes single inheritance for the generated model besides the ultimate base.
            parent_class = carrier.django_model.__bases__[0]
            # Check if the parent is our intended base_model_class or something else.
            # This logic might need refinement depending on how complex the inheritance gets.
            if issubclass(parent_class, models.Model) and parent_class != models.Model:
                base_model_name = parent_class.__name__
                # Add import for the parent if it's not the configured base_model_class
                if parent_class != self.base_model_class:
                    self.import_handler._add_type_import(parent_class)

        # --- Prepare Context Class Info ---
        context_class_name = ""
        if carrier.model_context and carrier.model_context.context_fields:
            # Standard naming convention
            context_class_name = f"{django_model_name}Context"

        # --- Get Subclass Specific Context ---
        extra_context = self._get_model_definition_extra_context(carrier)

        # --- Process Pending Multi-FK Unions and add to definitions dict ---
        multi_fk_field_names = []  # Keep track for validation hint
        validation_needed = False
        if carrier.pending_multi_fk_unions:
            validation_needed = True
            for original_field_name, union_details in carrier.pending_multi_fk_unions:
                pydantic_models = union_details.get("models", [])
                for pydantic_model in pydantic_models:
                    # Construct field name (e.g., original_name_relatedmodel)
                    fk_field_name = f"{original_field_name}_{pydantic_model.__name__.lower()}"
                    multi_fk_field_names.append(fk_field_name)

                    # Get corresponding Django model
                    pydantic_factory = cast(PydanticModelFactory, self.model_factory_instance)
                    django_model_rel = pydantic_factory.relationship_accessor.get_django_model_for_pydantic(
                        pydantic_model
                    )
                    if not django_model_rel:
                        logger.error(
                            f"Could not find Django model for Pydantic model {pydantic_model.__name__} referenced in multi-FK union for {original_field_name}. Skipping FK field."
                        )
                        continue

                    # Use string for model ref in kwargs
                    target_model_str = f"'{django_model_rel._meta.app_label}.{django_model_rel.__name__}'"
                    # Add import for the related Django model
                    self.import_handler._add_type_import(django_model_rel)

                    # Define FK kwargs (always null=True, blank=True).
                    # Use strings for values that need to be represented in code.
                    fk_kwargs = {
                        "to": target_model_str,
                        "on_delete": "models.SET_NULL",  # Use string for template
                        "null": True,
                        "blank": True,
                        # Generate related_name to avoid clashes; ensure it is a quoted string
                        "related_name": f"'{carrier.django_model.__name__.lower()}_{fk_field_name}_set'",
                    }

                    # Generate the definition string and add it to the main definitions dictionary
                    fk_def_string = generate_field_definition_string(models.ForeignKey, fk_kwargs, self.app_label)
                    carrier.django_field_definitions[fk_field_name] = fk_def_string

        # --- Prepare Final Context ---
        # Ensure the context uses the potentially updated definitions dict from the carrier.
        # Subclass _get_model_definition_extra_context should already provide this
        # via `field_definitions=carrier.django_field_definitions`.
        template_context = {
            "model_name": django_model_name,
            "pydantic_model_name": source_model_name,
            "base_model_name": base_model_name,
            "meta": meta_options,
            "app_label": self.app_label,
            "multi_fk_field_names": multi_fk_field_names,  # Pass names for validation hint
            "validation_needed": validation_needed,  # Signal if validation needed
            # Include extra context from subclass (should include field_definitions)
            **extra_context,
        }

        # --- Render Template ---
        template = self.jinja_env.get_template("model_definition.py.j2")
        definition_str = template.render(**template_context)

        # Add import for the original source model
        self._add_source_model_import(carrier)

        return definition_str

    def _deduplicate_definitions(self, definitions: list[str]) -> list[str]:
        """Remove duplicate model definitions based on class name."""
        unique_definitions = []
        seen_class_names = set()
        for definition in definitions:
            # Basic regex to find 'class ClassName(' - might need adjustment for complex cases
            match = re.search(r"^\s*class\s+(\w+)\(", definition, re.MULTILINE)
            if match:
                class_name = match.group(1)
                if class_name not in seen_class_names:
                    unique_definitions.append(definition)
                    seen_class_names.add(class_name)
            else:
                # If no class definition is found (e.g., comments, imports), keep the
                # block, assuming it might be needed context/comments.
                unique_definitions.append(definition)
                logger.warning("Could not extract class name from definition block for deduplication.")
        return unique_definitions

    def _clean_generic_type(self, name: str) -> str:
        """Remove generic parameters like [T] or <T> from a type name."""
        # Handles Class[Param] or Class<Param>
        cleaned_name = re.sub(r"[\[<].*?[\]>]", "", name)
        # Also handle cases like 'ModelName.T' if typevars are used this way
        cleaned_name = cleaned_name.split(".")[-1]
        return cleaned_name

    def generate_models_file(self) -> str:
        """
        Generates the complete content for the models.py file.

        This method orchestrates discovery, model setup, definition generation,
        import collection, and template rendering. Subclasses might override this
        to add specific steps (like context class generation).
        """
        self.discover_models()  # Populates discovery instance
        models_to_process = self._get_models_in_processing_order()  # Abstract method

        # Reset state for this run
        self.carriers = []
        self.import_handler.extra_type_imports.clear()
        self.import_handler.pydantic_imports.clear()
        self.import_handler.context_class_imports.clear()
        self.import_handler.imported_names.clear()
        self.import_handler.processed_field_types.clear()
        # Re-add base model import after clearing
        self.import_handler._add_type_import(self.base_model_class)

        model_definitions = []
        django_model_names = []  # For __all__

        # Setup Django models first (populates self.carriers)
        for source_model in models_to_process:
            self.setup_django_model(source_model)  # Calls factory, populates carrier

        # Generate definitions from carriers
        for carrier in self.carriers:
            # Generate Django model definition if model exists
            if carrier.django_model:
                try:
                    model_def = self.generate_model_definition(carrier)  # Uses template
                    if model_def:  # Only add if definition was generated
                        model_definitions.append(model_def)
                        django_model_name = self._clean_generic_type(carrier.django_model.__name__)
                        django_model_names.append(f"'{django_model_name}'")
                except Exception as e:
                    source_name = self._get_source_model_name(carrier)
                    logger.error(f"Error generating definition for source model {source_name}: {e}", exc_info=True)

        # Subclasses might add context class generation here by overriding this method
        # or by generate_model_definition adding context-related imports.

        # Deduplicate definitions and the imports gathered during the process
        unique_model_definitions = self._deduplicate_definitions(model_definitions)
        imports = self.import_handler.deduplicate_imports()

        # Prepare context using subclass method (_prepare_template_context)
        template_context = self._prepare_template_context(unique_model_definitions, django_model_names, imports)

        # Add common context items
        template_context.update(
            {
                "generation_timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                "base_model_module": self.base_model_class.__module__,
                "base_model_name": self.base_model_class.__name__,
                "extra_type_imports": sorted(self.import_handler.extra_type_imports),
            }
        )

        # Render the main template
        template = self.jinja_env.get_template("models_file.py.j2")
        return template.render(**template_context)
```
**`__init__(output_path, app_label, filter_function, verbose, discovery_instance, model_factory_instance, module_mappings, base_model_class, packages=None, class_name_prefix='Django')`**

Initialize the base generator.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `output_path` | `str` | Path to output the generated `models.py` file. | *required* |
| `packages` | `list[str] \| None` | List of packages to scan for source models. | `None` |
| `app_label` | `str` | Django app label to use for the models. | *required* |
| `filter_function` | `Optional[Callable[[type[SourceModelType]], bool]]` | Optional function to filter which source models to include. | *required* |
| `verbose` | `bool` | Print verbose output. | *required* |
| `discovery_instance` | `BaseDiscovery[SourceModelType]` | An instance of a `BaseDiscovery` subclass. | *required* |
| `model_factory_instance` | `BaseModelFactory[SourceModelType, FieldInfoType]` | An instance of a `BaseModelFactory` subclass. | *required* |
| `module_mappings` | `Optional[dict[str, str]]` | Optional mapping of modules to remap imports. | *required* |
| `base_model_class` | `type[Model]` | The base Django model class to inherit from. | *required* |
| `class_name_prefix` | `str` | Prefix for generated Django model class names. | `'Django'` |
**`discover_models()`**

Discover source models using the configured discovery instance.
**`generate()`**

Main entry point: generate and write the models file.

Returns:

| Type | Description |
| --- | --- |
| `str` | The path to the generated models file. |
**`generate_model_definition(carrier)`**

Generates a string definition for a single Django model using a template. One example is shown after this reference.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `carrier` | `ConversionCarrier[SourceModelType]` | The ConversionCarrier containing the generated Django model and context. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `str` | The string representation of the Django model definition. |
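The multi-FK union handling in this method is easiest to see by example. A source field typed as a union of several models cannot map to a single ForeignKey, so it is expanded into one nullable FK per union member. As an illustration (the `Order`, `Book`, and `Magazine` models and the `myapp` label are hypothetical), a Pydantic field `item: Union[Book, Magazine]` would yield field definitions along these lines:

```python
# Illustrative rendered output; the names and kwargs follow the
# f-strings in generate_model_definition above.
item_book = models.ForeignKey(
    "myapp.DjangoBook",
    on_delete=models.SET_NULL,
    null=True,
    blank=True,
    related_name="djangoorder_item_book_set",
)
item_magazine = models.ForeignKey(
    "myapp.DjangoMagazine",
    on_delete=models.SET_NULL,
    null=True,
    blank=True,
    related_name="djangoorder_item_magazine_set",
)
```

The template additionally receives `multi_fk_field_names` and `validation_needed=True`, presumably so the rendered model can check that these sibling FKs are populated consistently.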
**`generate_models_file()`**

Generates the complete content for the `models.py` file. This method orchestrates discovery, model setup, definition generation, import collection, and template rendering. Subclasses might override this to add specific steps (like context class generation).
**`setup_django_model(source_model)`**

Uses the model factory to create a Django model representation from a source model.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `source_model` | `SourceModelType` | The source model instance (e.g., Pydantic class, Dataclass). | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Optional[ConversionCarrier[SourceModelType]]` | A ConversionCarrier containing the results, or None if creation failed. |
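`generate_models_file` calls `setup_django_model` for each discovered model, but it can also be exercised directly. A small sketch (the `Invoice` source model is hypothetical):

```python
carrier = generator.setup_django_model(Invoice)  # hypothetical source model class
if carrier is not None and carrier.django_model is not None:
    # With the default prefix, Invoice becomes DjangoInvoice
    print(carrier.django_model.__name__)
else:
    # None means creation failed or the model had only context fields;
    # the reasons were logged by the method.
    ...
```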
### Model discovery (abstract + per-source dependency graph)

#### `BaseDiscovery`

Bases: `ABC`, `Generic[TModel]`

Abstract base class for discovering models (e.g., Pydantic, Dataclasses). A subclass sketch follows this reference.
Source code in `src/pydantic2django/core/discovery.py`:
```python
class BaseDiscovery(abc.ABC, Generic[TModel]):
    """Abstract base class for discovering models (e.g., Pydantic, Dataclasses)."""

    def __init__(self):
        self.all_models: dict[str, type[TModel]] = {}  # All discovered models before any filtering
        self.filtered_models: dict[str, type[TModel]] = {}  # Models after all filters
        self.dependencies: dict[type[TModel], set[type[TModel]]] = {}  # Dependencies between filtered models

    @abc.abstractmethod
    def _is_target_model(self, obj: Any) -> bool:
        """Check if an object is the type of model this discovery class handles."""
        pass

    @abc.abstractmethod
    def _default_eligibility_filter(self, model: type[TModel]) -> bool:
        """
        Apply default filtering logic inherent to the model type (e.g., exclude
        abstract classes). Return True if the model is eligible, False otherwise.
        """
        pass

    def discover_models(
        self,
        packages: list[str],
        app_label: str,  # Keep for potential use in filters or subclasses
        user_filters: Optional[Union[Callable[[type[TModel]], bool], list[Callable[[type[TModel]], bool]]]] = None,
    ) -> None:
        """Discover target models in the specified packages, applying default and user filters."""
        self.all_models = {}
        self.filtered_models = {}
        self.dependencies = {}

        # Normalize user_filters to always be a list
        if user_filters is None:
            filters = []
        elif isinstance(user_filters, list):
            filters = user_filters
        else:  # It's a single callable
            filters = [user_filters]

        model_type_name = getattr(self, "__name__", "TargetModel")  # Get class name for logging
        logger.info(f"Starting {model_type_name} discovery in packages: {packages}")

        for package_name in packages:
            try:
                package = importlib.import_module(package_name)
                logger.debug(f"Scanning package: {package_name}")
                for importer, modname, ispkg in pkgutil.walk_packages(
                    path=package.__path__ if hasattr(package, "__path__") else None,
                    prefix=package.__name__ + ".",
                    onerror=lambda name: logger.warning(f"Error accessing module {name}"),
                ):
                    try:
                        module = importlib.import_module(modname)
                        for name, obj in inspect.getmembers(module):
                            # Use the subclass implementation to check if it's the right model type
                            if self._is_target_model(obj):
                                model_qualname = f"{modname}.{name}"
                                if model_qualname not in self.all_models:
                                    self.all_models[model_qualname] = obj
                                    logger.debug(f"Discovered potential {model_type_name}: {model_qualname}")

                                # Apply filters sequentially using subclass implementation
                                is_eligible = self._default_eligibility_filter(obj)
                                if is_eligible:
                                    for user_filter in filters:
                                        try:
                                            if not user_filter(obj):
                                                is_eligible = False
                                                logger.debug(
                                                    f"Filtered out {model_type_name} by user filter: {model_qualname}"
                                                )
                                                break  # No need to check other filters
                                        except Exception as filter_exc:
                                            # Attempt to get filter name, default to repr
                                            filter_name = getattr(user_filter, "__name__", repr(user_filter))
                                            logger.error(
                                                f"Error applying user filter {filter_name} to {model_qualname}: {filter_exc}",
                                                exc_info=True,
                                            )
                                            is_eligible = False  # Exclude on filter error
                                            break

                                if is_eligible:
                                    self.filtered_models[model_qualname] = obj
                                    logger.debug(f"Added eligible {model_type_name}: {model_qualname}")
                    except ImportError as e:
                        logger.warning(f"Could not import module {modname}: {e}")
                    except Exception as e:
                        logger.error(f"Error inspecting module {modname} for {model_type_name}s: {e}", exc_info=True)
            except ImportError:
                logger.error(f"Package {package_name} not found.")
            except Exception as e:
                logger.error(f"Error discovering {model_type_name}s in package {package_name}: {e}", exc_info=True)

        logger.info(
            f"{model_type_name} discovery complete. "
            f"Found {len(self.all_models)} total models, {len(self.filtered_models)} after filtering."
        )

        # Hook for subclass-specific post-processing if needed
        self._post_discovery_hook()

        # Resolve forward references if applicable (might be subclass specific)
        self._resolve_forward_refs()

        # Build dependency graph for filtered models
        self.analyze_dependencies()

    @abc.abstractmethod
    def analyze_dependencies(self) -> None:
        """Analyze dependencies between the filtered models."""
        pass

    @abc.abstractmethod
    def get_models_in_registration_order(self) -> list[type[TModel]]:
        """Return filtered models sorted topologically based on dependencies."""
        pass

    # Optional hook for subclasses to run code after the discovery loop but before analysis
    def _post_discovery_hook(self) -> None:
        pass

    def _resolve_forward_refs(self) -> None:
        """Placeholder for resolving forward references if needed."""
        logger.debug("Base _resolve_forward_refs called (if applicable).")
        pass
```
**`analyze_dependencies()`** *(abstract)*

Analyze dependencies between the filtered models.
**`discover_models(packages, app_label, user_filters=None)`**

Discover target models in the specified packages, applying default and user filters.
**`get_models_in_registration_order()`** *(abstract)*

Return filtered models sorted topologically based on dependencies.
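A concrete discovery class mostly has to say what counts as a target model and how to order dependencies. Here is a minimal sketch for Pydantic sources; the real Pydantic implementation is more thorough, and the dependency analysis below is deliberately trivial:

```python
import inspect
from pydantic import BaseModel

class SimplePydanticDiscovery(BaseDiscovery[BaseModel]):
    def _is_target_model(self, obj) -> bool:
        # Pydantic models are classes subclassing BaseModel (excluding BaseModel itself)
        return inspect.isclass(obj) and issubclass(obj, BaseModel) and obj is not BaseModel

    def _default_eligibility_filter(self, model: type[BaseModel]) -> bool:
        # Skip private helper models by convention
        return not model.__name__.startswith("_")

    def analyze_dependencies(self) -> None:
        # A real implementation inspects field annotations to build edges;
        # here every filtered model is treated as independent.
        self.dependencies = {model: set() for model in self.filtered_models.values()}

    def get_models_in_registration_order(self) -> list[type[BaseModel]]:
        # With no edges recorded, any order is a valid topological order.
        return list(self.filtered_models.values())
```

User filters compose with the default filter, e.g. `discovery.discover_models(["myapp.schemas"], app_label="myapp", user_filters=[lambda m: m.__name__.endswith("Schema")])`.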
### Model/field factories and the conversion carrier

#### `BaseModelFactory`

Bases: `ABC`, `Generic[SourceModelType, SourceFieldType]`

Abstract base class for model factories.
Source code in `src/pydantic2django/core/factories.py`:
```python
class BaseModelFactory(ABC, Generic[SourceModelType, SourceFieldType]):
    """Abstract base class for model factories."""

    field_factory: BaseFieldFactory[SourceFieldType]

    def __init__(self, field_factory: BaseFieldFactory[SourceFieldType], *args, **kwargs):
        """
        Initializes the model factory with a compatible field factory.
        Allows subclasses to accept additional dependencies.
        """
        self.field_factory = field_factory

    @abstractmethod
    def _process_source_fields(self, carrier: ConversionCarrier[SourceModelType]):
        """Abstract method for subclasses to implement field processing."""
        pass

    # Common logic moved from subclasses

    def _handle_field_collisions(self, carrier: ConversionCarrier[SourceModelType]):
        """Check for field name collisions with the base Django model."""
        base_model = carrier.base_django_model
        if not base_model or not hasattr(base_model, "_meta"):
            return
        try:
            base_fields = base_model._meta.get_fields(include_parents=True, include_hidden=False)
            base_field_names = {f.name for f in base_fields if not f.name.endswith("+")}
        except Exception as e:
            logger.warning(f"Could not get fields from base model {base_model.__name__} for collision check: {e}")
            return

        all_new_fields = set(carrier.django_fields.keys()) | set(carrier.relationship_fields.keys())
        collision_fields = all_new_fields & base_field_names
        if collision_fields:
            source_name = getattr(carrier.source_model, "__name__", "?")
            msg = f"Field collision detected between {source_name} and base model {base_model.__name__}: {collision_fields}."
            if carrier.strict:
                logger.error(msg + " Raising error due to strict=True.")
                raise ValueError(msg + " Use strict=False or rename fields.")
            else:
                logger.warning(msg + " Removing colliding fields from generated model (strict=False).")
                for field_name in collision_fields:
                    carrier.django_fields.pop(field_name, None)
                    carrier.relationship_fields.pop(field_name, None)

    def _create_django_meta(self, carrier: ConversionCarrier[SourceModelType]):
        """Create the Meta class for the generated Django model."""
        source_name = getattr(carrier.source_model, "__name__", "UnknownSourceModel")
        source_model_name_cleaned = source_name.replace("_", " ")
        meta_attrs = {
            "app_label": carrier.meta_app_label,
            "db_table": f"{carrier.meta_app_label}_{source_name.lower()}",
            "abstract": False,
            "managed": True,
            "verbose_name": source_model_name_cleaned,
            "verbose_name_plural": source_model_name_cleaned + "s",
            "ordering": ["pk"],
        }
        base_meta_obj = getattr(carrier.base_django_model, "Meta", None) if carrier.base_django_model else None
        if base_meta_obj:
            logger.debug(f"Creating Meta inheriting from {carrier.base_django_model.__name__}'s Meta")
            final_meta_attrs = {**meta_attrs}  # Ensure our settings override base
            carrier.django_meta_class = type("Meta", (base_meta_obj,), final_meta_attrs)
        else:
            logger.debug("Creating new Meta class")
            carrier.django_meta_class = type("Meta", (), meta_attrs)

    def _assemble_django_model_class(self, carrier: ConversionCarrier[SourceModelType]):
        """Assemble the final Django model class using type()."""
        source_name = getattr(carrier.source_model, "__name__", "UnknownSourceModel")
        source_module = getattr(carrier.source_model, "__module__", None)
        model_attrs: dict[str, Any] = {
            **carrier.django_fields,
            **carrier.relationship_fields,
            # Set __module__ for where the model appears to live
            "__module__": source_module or f"{carrier.meta_app_label}.models",
        }
        if carrier.django_meta_class:
            model_attrs["Meta"] = carrier.django_meta_class
        # Add a reference back to the source model (generic attribute name)
        model_attrs["_pydantic2django_source"] = carrier.source_model

        bases = (carrier.base_django_model,) if carrier.base_django_model else (models.Model,)

        if not carrier.django_fields and not carrier.relationship_fields:
            logger.info(f"No Django fields generated for {source_name}, skipping model class creation.")
            carrier.django_model = None
            return

        model_name = f"{carrier.class_name_prefix}{source_name}"
        logger.debug(f"Assembling model class '{model_name}' with bases {bases} and attrs: {list(model_attrs.keys())}")
        try:
            # Use type() to dynamically create the class
            carrier.django_model = cast(type[models.Model], type(model_name, bases, model_attrs))
            logger.info(f"Successfully assembled Django model class: {model_name}")
        except Exception as e:
            logger.error(f"Failed to assemble Django model class {model_name}: {e}", exc_info=True)
            carrier.invalid_fields.append(("_assembly", f"Failed to create model type: {e}"))
            carrier.django_model = None

    @abstractmethod
    def _build_model_context(self, carrier: ConversionCarrier[SourceModelType]):
        """Abstract method for subclasses to build the specific ModelContext."""
        pass

    # Main orchestration method
    def make_django_model(self, carrier: ConversionCarrier[SourceModelType]) -> None:
        """
        Orchestrates the Django model creation process.
        Subclasses implement _process_source_fields and _build_model_context.
        """
        model_key = carrier.model_key
        logger.debug(f"Attempting to create Django model for {model_key}")

        # TODO: Cache handling needs refinement - how to access subclass cache?
        # For now, skipping cache check in base class.
        # if model_key in self._converted_models and not carrier.existing_model:
        #     # ... update carrier from cache ...
        #     return

        # Reset results on carrier
        carrier.django_fields = {}
        carrier.relationship_fields = {}
        carrier.context_fields = {}
        carrier.invalid_fields = []
        carrier.django_meta_class = None
        carrier.django_model = None
        carrier.model_context = None
        carrier.django_field_definitions = {}  # Reset definitions dict

        # Core Steps
        self._process_source_fields(carrier)
        self._handle_field_collisions(carrier)
        self._create_django_meta(carrier)
        self._assemble_django_model_class(carrier)

        # Build context only if model assembly succeeded
        if carrier.django_model:
            self._build_model_context(carrier)
```
**`__init__(field_factory, *args, **kwargs)`**

Initializes the model factory with a compatible field factory. Allows subclasses to accept additional dependencies.
**`make_django_model(carrier)`**

Orchestrates the Django model creation process. Subclasses implement `_process_source_fields` and `_build_model_context`. Caching is noted as a TODO and is currently skipped in the base class.
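The carrier is the unit of work flowing through these steps: `make_django_model` resets its result fields, fills them in, and leaves behind either a Django model class or diagnostics. Driving a concrete factory directly might look like this (the `Invoice` model and the `factory` instance are hypothetical):

```python
carrier = ConversionCarrier(
    source_model=Invoice,              # hypothetical source model class
    meta_app_label="myapp",
    base_django_model=models.Model,
    class_name_prefix="Django",
    strict=True,                       # raise on collisions with base model fields
)
factory.make_django_model(carrier)     # factory: a concrete BaseModelFactory
if carrier.django_model is not None:
    print(carrier)                     # e.g. "Invoice -> DjangoInvoice" (see __str__)
else:
    print(carrier.invalid_fields)      # list of (field_name, reason) tuples
```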
- Bases: ABC, Generic[SourceFieldType]

Abstract base class for field factories.
Source code in src/pydantic2django/core/factories.py

```python
class BaseFieldFactory(ABC, Generic[SourceFieldType]):
    """Abstract base class for field factories."""

    def __init__(self, *args, **kwargs):
        # Allow subclasses to accept necessary dependencies (e.g., relationship accessors)
        pass

    @abstractmethod
    def create_field(
        self, field_info: SourceFieldType, model_name: str, carrier: ConversionCarrier
    ) -> FieldConversionResult:
        """
        Convert a source field type into a Django Field.

        Args:
            field_info: The field information object from the source (Pydantic/Dataclass).
            model_name: The name of the source model containing the field.
            carrier: The conversion carrier for context (e.g., app_label, relationships).

        Returns:
            A FieldConversionResult containing the generated Django field or context/error info.
        """
        pass
```
create_field(field_info, model_name, carrier) abstractmethod¶

Convert a source field type into a Django Field.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `field_info` | `SourceFieldType` | The field information object from the source (Pydantic/Dataclass). | required |
| `model_name` | `str` | The name of the source model containing the field. | required |
| `carrier` | `ConversionCarrier` | The conversion carrier for context (e.g., app_label, relationships). | required |

Returns:

| Type | Description |
| --- | --- |
| `FieldConversionResult` | A FieldConversionResult containing the generated Django field or context/error info. |
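A deliberately naive concrete factory makes the contract tangible. This is a sketch under the assumption that the classes are importable from `pydantic2django.core.factories`; the "map everything to TextField" policy is illustrative and not what the real factories do (they consult the bidirectional type mapper).

```python
from typing import Any

from django.db import models

from pydantic2django.core.factories import (
    BaseFieldFactory,
    ConversionCarrier,
    FieldConversionResult,
)


class NaiveFieldFactory(BaseFieldFactory):
    """Maps every source field to a nullable TextField; real factories consult the type mapper."""

    def create_field(self, field_info: Any, model_name: str, carrier: ConversionCarrier) -> FieldConversionResult:
        result = FieldConversionResult(field_info=field_info, field_name=getattr(field_info, "name", None) or "?")
        try:
            result.django_field = models.TextField(null=True, blank=True)
            result.add_import("django.db", "models")
        except Exception as exc:
            # Errors are reported through the result object, not raised.
            result.error_str = f"{model_name}.{result.field_name}: {exc}"
        return result
```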
- Bases: Generic[SourceModelType]

Carrier class for converting a source model (Pydantic/Dataclass) to a Django model. Holds configuration and accumulates results during the conversion process. Generalized from the original DjangoModelFactoryCarrier.
Source code in src/pydantic2django/core/factories.py

```python
@dataclass
class ConversionCarrier(Generic[SourceModelType]):
    """
    Carrier class for converting a source model (Pydantic/Dataclass) to a Django model.
    Holds configuration and accumulates results during the conversion process.
    Generalized from the original DjangoModelFactoryCarrier.
    """

    source_model: type[SourceModelType]
    meta_app_label: str
    base_django_model: type[models.Model]  # Base Django model to inherit from
    existing_model: Optional[type[models.Model]] = None  # For updating existing models
    class_name_prefix: str = "Django"  # Prefix for generated Django model name
    strict: bool = False  # Strict mode for field collisions
    used_related_names_per_target: dict[str, set[str]] = field(default_factory=dict)
    django_field_definitions: dict[str, str] = field(default_factory=dict)  # Added field defs

    # --- Result fields (populated during conversion) ---
    django_fields: dict[str, models.Field] = field(default_factory=dict)
    relationship_fields: dict[str, models.Field] = field(default_factory=dict)
    context_fields: dict[str, Any] = field(default_factory=dict)  # Store original source field info
    context_data: dict[str, Any] = field(default_factory=dict)
    # Stores (original_field_name, union_details_dict) for multi-FK unions
    pending_multi_fk_unions: list[tuple[str, dict]] = field(default_factory=list)
    invalid_fields: list[tuple[str, str]] = field(default_factory=list)
    django_meta_class: Optional[type] = None
    django_model: Optional[type[models.Model]] = None  # Changed from DjangoModelType
    model_context: Optional[ModelContext] = None
    # Removed import_handler from carrier

    def model_key(self) -> str:
        """Generate a unique key for the source model."""
        module = getattr(self.source_model, "__module__", "?")
        name = getattr(self.source_model, "__name__", "UnknownModel")
        return f"{module}.{name}"

    def __str__(self):
        source_name = getattr(self.source_model, "__name__", "UnknownSource")
        django_name = getattr(self.django_model, "__name__", "None") if self.django_model else "None"
        return f"{source_name} -> {django_name}"
```
model_key()¶

Generate a unique key for the source model.
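A short usage sketch: constructing a carrier needs no database, only importable Django and Pydantic. `Invoice` and the `billing` app label are stand-ins, not library fixtures.

```python
from django.db import models
from pydantic import BaseModel

from pydantic2django.core.factories import ConversionCarrier


class Invoice(BaseModel):
    number: str


carrier = ConversionCarrier(
    source_model=Invoice,
    meta_app_label="billing",
    base_django_model=models.Model,
)

print(carrier.model_key())  # "__main__.Invoice" when run as a script
print(str(carrier))         # "Invoice -> None" until a Django model is assembled
```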
- Data structure holding the result of attempting to convert a single source field.
Source code in src/pydantic2django/core/factories.py

```python
@dataclass
class FieldConversionResult:
    """Data structure holding the result of attempting to convert a single source field."""

    field_info: Any  # Original source field info (FieldInfo, dataclasses.Field)
    field_name: str
    # type_mapping_definition: Optional[TypeMappingDefinition] = None  # Keep mapping info internal?
    field_kwargs: dict[str, Any] = field(default_factory=dict)
    django_field: Optional[models.Field] = None
    context_field: Optional[Any] = None  # Holds original field_info if handled by context
    error_str: Optional[str] = None
    field_definition_str: Optional[str] = None  # Added field definition string
    # Added required_imports dictionary
    required_imports: dict[str, list[str]] = field(default_factory=dict)
    # Store the raw kwargs returned by the mapper
    raw_mapper_kwargs: dict[str, Any] = field(default_factory=dict)

    def add_import(self, module: str, name: str):
        """Helper to add an import to this result."""
        if not module or module == "builtins":
            return
        current_names = self.required_imports.setdefault(module, [])
        if name not in current_names:
            current_names.append(name)

    def add_import_for_obj(self, obj: Any):
        """Helper to add an import for a given object (class, function, etc.)."""
        if hasattr(obj, "__module__") and hasattr(obj, "__name__"):
            module = obj.__module__
            name = obj.__name__
            self.add_import(module, name)
        else:
            logger.warning(f"Could not determine import for object: {obj!r}")

    def __str__(self):
        status = "Success" if self.django_field else ("Context" if self.context_field else f"Error: {self.error_str}")
        field_type = type(self.django_field).__name__ if self.django_field else "N/A"
        return f"FieldConversionResult(field={self.field_name}, status={status}, django_type={field_type})"
```
add_import(module, name)¶

Helper to add an import to this result.
add_import_for_obj(obj)¶

Helper to add an import for a given object (class, function, etc.).
- Bidirectional type mapping (Python/Pydantic ↔ Django models.Field)
- Registry and entry point for bidirectional type mapping.
Source code in src/pydantic2django/core/bidirectional_mapper.py
```python
class BidirectionalTypeMapper:
    """Registry and entry point for bidirectional type mapping."""

    def __init__(self, relationship_accessor: Optional[RelationshipConversionAccessor] = None):
        self.relationship_accessor = relationship_accessor or RelationshipConversionAccessor()
        self._registry: list[type[TypeMappingUnit]] = self._build_registry()
        # Caches
        self._pydantic_cache: dict[Any, Optional[type[TypeMappingUnit]]] = {}
        self._django_cache: dict[type[models.Field], Optional[type[TypeMappingUnit]]] = {}

    def _build_registry(self) -> list[type[TypeMappingUnit]]:
        """Discover and order TypeMappingUnit subclasses."""
        # Order matters less for selection now, but still useful for tie-breaking?
        # References mapping units imported from .mapping_units
        ordered_units = [
            # Specific PKs first (subclass of IntField)
            BigAutoFieldMapping,
            SmallAutoFieldMapping,
            AutoFieldMapping,
            # Specific Numerics (subclass of IntField/FloatField/DecimalField)
            PositiveBigIntFieldMapping,
            PositiveSmallIntFieldMapping,
            PositiveIntFieldMapping,
            # Specific Strings (subclass of CharField/TextField)
            EmailFieldMapping,
            URLFieldMapping,
            SlugFieldMapping,
            IPAddressFieldMapping,
            FilePathFieldMapping,  # Needs Path, but Django field is specific
            # File Fields (map Path/str, Django fields are specific)
            ImageFieldMapping,  # Subclass of FileField
            FileFieldMapping,
            # Other specific types before bases
            UUIDFieldMapping,
            JsonFieldMapping,  # Before generic collections/Any might map elsewhere
            # Base Relationship types (before fields they might inherit from like FK < Field)
            ManyToManyFieldMapping,
            OneToOneFieldMapping,
            ForeignKeyMapping,
            # General Base Types LAST
            DecimalFieldMapping,
            DateTimeFieldMapping,
            DateFieldMapping,
            TimeFieldMapping,
            DurationFieldMapping,
            BinaryFieldMapping,
            FloatFieldMapping,
            BoolFieldMapping,
            # Str/Text: Order now primarily determined by `matches` score overrides
            TextFieldMapping,
            StrFieldMapping,
            # Specific Int types first
            BigIntFieldMapping,  # Map int to BigInt before Int
            SmallIntFieldMapping,
            IntFieldMapping,
            # Enum handled dynamically by find method
            EnumFieldMapping,  # Include EnumFieldMapping here for the loop
        ]
        # Remove duplicates just in case
        seen = set()
        unique_units = []
        for unit in ordered_units:
            if unit not in seen:
                unique_units.append(unit)
                seen.add(unit)
        return unique_units

    def _find_unit_for_pydantic_type(
        self, py_type: Any, field_info: Optional[FieldInfo] = None
    ) -> Optional[type[TypeMappingUnit]]:
        """
        Find the best mapping unit for a given Pydantic type and FieldInfo.
        Uses a scoring system based on the `matches` classmethod of each unit.
        Handles Optional unwrapping and caching.
        """
        original_type_for_cache = py_type  # Use the original type as the cache key

        # --- Unwrap Optional ---
        origin = get_origin(py_type)
        if origin is Optional:
            args = get_args(py_type)
            # Get the first non-None type argument
            type_to_match = next((arg for arg in args if arg is not type(None)), Any)
            logger.debug(f"Unwrapped Optional[{type_to_match.__name__}] to {type_to_match.__name__}")
        # Handle X | None syntax (UnionType)
        elif origin is UnionType:
            args = get_args(py_type)
            non_none_args = [arg for arg in args if arg is not type(None)]
            if len(non_none_args) == 1:  # If it's just `T | None`
                type_to_match = non_none_args[0]
                logger.debug(f"Unwrapped Union[{py_type}] with None to {type_to_match}")
            else:
                # Keep the original UnionType if it's Union[A, B, ...]
                type_to_match = py_type
                logger.debug(f"Keeping UnionType {py_type} as is for matching.")
        else:
            type_to_match = py_type  # Use the original type if not Optional or simple T | None

        logger.debug(
            f"Final type_to_match for scoring: {type_to_match} (origin: {get_origin(type_to_match)}, args: {get_args(type_to_match)})"
        )

        # --- Cache Check --- #
        # Re-enable caching
        cache_key = (original_type_for_cache, field_info)
        if cache_key in self._pydantic_cache:
            # logger.debug(f"Cache hit for {cache_key}")
            return self._pydantic_cache[cache_key]
        # logger.debug(f"Cache miss for {cache_key}")

        # --- Literal Type Check (using original type) --- #
        original_origin = get_origin(original_type_for_cache)
        if original_origin is Literal:
            logger.debug(f"Type {original_type_for_cache} is Literal. Selecting EnumFieldMapping directly.")
            best_unit = EnumFieldMapping
            self._pydantic_cache[cache_key] = best_unit
            return best_unit

        # --- Prioritize Collection Types -> JSON --- #
        # Use the unwrapped origin for this check
        # unwrapped_origin = get_origin(type_to_match)
        # if unwrapped_origin in (list, dict, set, tuple):
        #     logger.debug(f"Type {type_to_match} is a collection. Selecting JsonFieldMapping directly.")
        #     best_unit = JsonFieldMapping
        #     self._pydantic_cache[cache_key] = best_unit
        #     return best_unit

        # --- Initialization --- #
        best_unit: Optional[type[TypeMappingUnit]] = None
        highest_score = 0.0
        scores: dict[str, float | str] = {}  # Store scores for debugging

        # --- Relationship Check (Specific Model Types and Lists of Models) BEFORE Scoring --- #
        # Check if the type_to_match itself is a known model
        try:
            is_direct_known_model = (
                inspect.isclass(type_to_match)
                and (issubclass(type_to_match, BaseModel) or dataclasses.is_dataclass(type_to_match))
                and self.relationship_accessor.is_source_model_known(type_to_match)
            )
        except TypeError:
            is_direct_known_model = False

        if is_direct_known_model:
            logger.debug(
                f"Type {type_to_match.__name__} is a known related model. Selecting ForeignKeyMapping directly."
            )
            best_unit = ForeignKeyMapping
            self._pydantic_cache[cache_key] = best_unit
            return best_unit

        # Check if it's a list/set of known models (potential M2M)
        unwrapped_origin = get_origin(type_to_match)
        unwrapped_args = get_args(type_to_match)
        if unwrapped_origin in (list, set) and unwrapped_args:  # Check for list or set
            inner_type = unwrapped_args[0]
            try:
                is_list_of_known_models = (
                    inspect.isclass(inner_type)
                    and (issubclass(inner_type, BaseModel) or dataclasses.is_dataclass(inner_type))
                    and self.relationship_accessor.is_source_model_known(inner_type)
                )
            except TypeError:
                is_list_of_known_models = False
                logger.error(f"TypeError checking if {inner_type} is a known model list item.", exc_info=True)

            logger.debug(
                f"Checking list/set: unwrapped_origin={unwrapped_origin}, inner_type={inner_type}, is_list_of_known_models={is_list_of_known_models}"
            )

            if is_list_of_known_models:
                logger.debug(
                    f"Type {type_to_match} is a list/set of known models ({inner_type.__name__}). Selecting ManyToManyFieldMapping directly."
                )
                best_unit = ManyToManyFieldMapping
                self._pydantic_cache[cache_key] = best_unit
                return best_unit
            else:
                logger.debug(
                    f"Type {type_to_match} is a list/set, but inner type {inner_type} is not a known model. Proceeding."
                )

        # --- Specific Union Handling BEFORE Scoring --- #
        unwrapped_args = get_args(type_to_match)
        # Check for non-model Unions (Model unions handled in get_django_mapping signal)
        if unwrapped_origin in (Union, UnionType) and unwrapped_args:
            logger.debug(f"Evaluating specific Union type {type_to_match} args: {unwrapped_args} before scoring.")
            has_str = any(arg is str for arg in unwrapped_args)
            has_collection_or_any = any(
                get_origin(arg) in (dict, list, set, tuple) or arg is Any
                for arg in unwrapped_args
                if arg is not type(None)
            )
            # Don't handle Union[ModelA, ModelB] here, that needs the signal mechanism
            is_model_union = any(
                inspect.isclass(arg) and (issubclass(arg, BaseModel) or dataclasses.is_dataclass(arg))
                for arg in unwrapped_args
                if arg is not type(None)
            )

            if not is_model_union:
                if has_str and not has_collection_or_any:
                    logger.debug(f"Union {type_to_match} contains str, selecting TextFieldMapping directly.")
                    best_unit = TextFieldMapping
                    self._pydantic_cache[cache_key] = best_unit
                    return best_unit
                elif has_collection_or_any:
                    logger.debug(f"Union {type_to_match} contains complex types, selecting JsonFieldMapping directly.")
                    best_unit = JsonFieldMapping
                    self._pydantic_cache[cache_key] = best_unit
                    return best_unit
                # Else: Union of simple types (e.g., int | float) - let scoring handle it.
                else:
                    logger.debug(f"Union {type_to_match} is non-model, non-str/complex. Proceeding to scoring.")
            else:
                logger.debug(
                    f"Union {type_to_match} contains models. Proceeding to scoring (expecting JsonField fallback)."
                )

        # --- Scoring Loop (Only if not a known related model or specific Union handled above) --- #
        # Use type_to_match (unwrapped) for matching
        # --- EDIT: Removed redundant check `if best_unit is None:` --- #
        # This loop now runs only if no direct selection happened above.
        for unit_cls in self._registry:
            try:  # Add try-except around matches call for robustness
                # Pass the unwrapped type to matches
                score = unit_cls.matches(type_to_match, field_info)
                if score > 0:  # Log all positive scores
                    scores[unit_cls.__name__] = score  # Store score regardless of whether it's the highest
                    logger.debug(
                        f"Scoring {unit_cls.__name__}.matches({type_to_match}, {field_info=}) -> {score}"
                    )  # Added logging
                if score > highest_score:
                    highest_score = score
                    best_unit = unit_cls
                    # Store the winning score as well - Moved above
                    # scores[unit_cls.__name__] = score  # Overwrite if it was a lower score before
                # elif score > 0:  # Log non-winning positive scores too - Moved above
                #     # Only add if not already present (first positive score encountered)
                #     scores.setdefault(unit_cls.__name__, score)
            except Exception as e:
                logger.error(f"Error calling {unit_cls.__name__}.matches for {type_to_match}: {e}", exc_info=True)
                scores[unit_cls.__name__] = f"ERROR: {e}"  # Log error in scores dict

        # Sort scores for clearer logging (highest first)
        sorted_scores = dict(
            sorted(scores.items(), key=lambda item: item[1] if isinstance(item[1], (int, float)) else -1, reverse=True)
        )
        logger.debug(
            f"Scores for {original_type_for_cache} (unwrapped: {type_to_match}, {field_info=}): {sorted_scores}"
        )

        if best_unit:  # Added logging
            logger.debug(f"Selected best unit: {best_unit.__name__} with score {highest_score}")  # Added logging
        else:  # Added logging
            logger.debug("No best unit found based on scoring.")  # Added logging

        # --- Handle Fallbacks (Collections/Any) --- #
        if best_unit is None and highest_score == 0.0:
            logger.debug(f"Re-evaluating fallback/handling for {type_to_match}")
            unwrapped_origin = get_origin(type_to_match)
            unwrapped_args = get_args(type_to_match)

            # 1. Check standard collections first (MOVED FROM TOP)
            if unwrapped_origin in (dict, list, set, tuple) or type_to_match in (dict, list, set, tuple):
                # Re-check list/set here to ensure it wasn't a list of known models handled above
                if unwrapped_origin in (list, set) and unwrapped_args:
                    inner_type = unwrapped_args[0]
                    try:
                        is_list_of_known_models_fallback = (
                            inspect.isclass(inner_type)
                            and (issubclass(inner_type, BaseModel) or dataclasses.is_dataclass(inner_type))
                            and self.relationship_accessor.is_source_model_known(inner_type)
                        )
                    except TypeError:
                        is_list_of_known_models_fallback = False
                    if not is_list_of_known_models_fallback:
                        logger.debug(f"Type {type_to_match} is a non-model collection, selecting JsonFieldMapping.")
                        best_unit = JsonFieldMapping
                    # else: It was a list of known models, should have been handled earlier. Log warning?
                    else:
                        logger.warning(f"List of known models {type_to_match} reached fallback logic unexpectedly.")
                        # Default to M2M as a safe bet?
                        best_unit = ManyToManyFieldMapping
                # Handle dict/tuple
                elif unwrapped_origin in (dict, tuple) or type_to_match in (dict, tuple):
                    logger.debug(f"Type {type_to_match} is a dict/tuple collection, selecting JsonFieldMapping.")
                    best_unit = JsonFieldMapping
            # 2. Check for Any
            elif type_to_match is Any:
                logger.debug("Type is Any, selecting JsonFieldMapping.")
                best_unit = JsonFieldMapping

        # Final Logging
        if best_unit is None:
            logger.warning(
                f"No specific mapping unit found for Python type: {original_type_for_cache} (unwrapped to {type_to_match}) with field_info: {field_info}"
            )
            # Log cache state before potential fallback write
            logger.debug(f"Cache keys before fallback write: {list(self._pydantic_cache.keys())}")

        # Re-enable cache write
        self._pydantic_cache[cache_key] = best_unit  # Cache using original key
        return best_unit

    def _find_unit_for_django_field(self, dj_field_type: type[models.Field]) -> Optional[type[TypeMappingUnit]]:
        """Find the most specific mapping unit based on Django field type MRO and registry order."""
        # Revert to simpler single pass using refined registry order.
        if dj_field_type in self._django_cache:
            return self._django_cache[dj_field_type]

        # Filter registry to exclude EnumFieldMapping unless it's specifically needed? No, registry order handles it.
        # Ensure EnumFieldMapping isn't incorrectly picked before Str/Int if choices are present.
        # The registry order should have Str/Int base mappings *after* EnumFieldMapping if EnumFieldMapping
        # only maps Enum/Literal python types. But dj_field_type matching is different.
        # If a CharField has choices, we want EnumFieldMapping logic, not StrFieldMapping.
        registry_for_django = self._registry  # Use the full registry for now

        for unit_cls in registry_for_django:
            # Special check: If field has choices, prioritize EnumFieldMapping if applicable type
            # This is handled by get_pydantic_mapping logic already, not needed here.
            if issubclass(dj_field_type, unit_cls.django_field_type):
                # Found the first, most specific match based on registry order
                # Example: PositiveIntegerField is subclass of IntegerField. If PositiveIntFieldMapping
                # comes first in registry, it will be matched correctly.
                self._django_cache[dj_field_type] = unit_cls
                return unit_cls

        # Fallback if no unit explicitly handles it (should be rare)
        logger.warning(
            f"No specific mapping unit found for Django field type: {dj_field_type.__name__}, check registry order."
        )
        self._django_cache[dj_field_type] = None
        return None

    def get_django_mapping(
        self,
        python_type: Any,
        field_info: Optional[FieldInfo] = None,
        parent_pydantic_model: Optional[type[BaseModel]] = None,  # Add parent model for self-ref check
    ) -> tuple[type[models.Field], dict[str, Any]]:
        """Get the corresponding Django Field type and constructor kwargs for a Python type."""
        processed_type_info = TypeHandler.process_field_type(python_type)
        original_py_type = python_type
        is_optional = processed_type_info["is_optional"]
        is_list = processed_type_info["is_list"]

        unit_cls = None  # Initialize unit_cls
        base_py_type = original_py_type  # Start with original
        union_details = None  # Store details if it's a Union[BaseModel,...]
        gfk_details = None

        # --- Check for M2M case FIRST ---
        if is_list:
            # Get the type inside the list, handling Optional[List[T]]
            list_inner_type = original_py_type
            if is_optional:
                args_check = get_args(list_inner_type)
                list_inner_type = next((arg for arg in args_check if arg is not type(None)), Any)

            # Now get the type *inside* the list
            list_args = get_args(list_inner_type)  # Should be List[T]
            inner_type = list_args[0] if list_args else Any

            # --- GFK Check: Is the inner type a Union of known models? ---
            inner_origin = get_origin(inner_type)
            inner_args = get_args(inner_type)
            if inner_origin in (Union, UnionType) and inner_args:
                union_models = []
                other_types = [
                    arg
                    for arg in inner_args
                    if not (
                        inspect.isclass(arg)
                        and (issubclass(arg, BaseModel) or dataclasses.is_dataclass(arg))
                        and self.relationship_accessor.is_source_model_known(arg)
                    )
                ]
                union_models = [arg for arg in inner_args if arg not in other_types]

                if union_models and not other_types:
                    logger.debug(f"Detected GFK List[Union[...]] with models: {union_models}")
                    gfk_details = {
                        "type": "gfk",
                        "models": union_models,
                        "is_optional": is_optional,
                    }
                    unit_cls = JsonFieldMapping
                    base_py_type = original_py_type

            if unit_cls is None:
                # --- M2M Check: Is the inner type a known related BaseModel OR Dataclass? ---
                if (
                    inspect.isclass(inner_type)
                    and (issubclass(inner_type, BaseModel) or dataclasses.is_dataclass(inner_type))
                    and self.relationship_accessor.is_source_model_known(inner_type)
                ):
                    unit_cls = ManyToManyFieldMapping
                    base_py_type = inner_type
                    logger.debug(f"Detected List[RelatedModel] ({inner_type.__name__}), mapping to ManyToManyField.")
                else:
                    # --- Fallback for other lists ---
                    unit_cls = JsonFieldMapping
                    base_py_type = original_py_type
                    logger.debug(f"Detected List of non-models ({original_py_type}), mapping directly to JSONField.")

        # --- If not a list, find unit for the base (non-list) type ---
        if unit_cls is None:
            # --- Handle Union[BaseModel,...] Signaling FIRST --- #
            simplified_base_type = processed_type_info["type_obj"]
            simplified_origin = get_origin(simplified_base_type)
            simplified_args = get_args(simplified_base_type)

            logger.debug(
                f"Checking simplified type for Union[Model,...]: {simplified_base_type!r} (Origin: {simplified_origin})"
            )
            # Log the is_optional flag determined by TypeHandler
            logger.debug(f"TypeHandler returned is_optional: {is_optional} for original type: {original_py_type!r}")

            # Check if the simplified origin is Union[...] or T | U
            if simplified_origin in (Union, UnionType) and simplified_args:
                union_models = []
                other_types_in_union = []
                for arg in simplified_args:
                    # We already unwrapped Optional, so no need to check for NoneType here
                    logger.debug(f"-- Checking simplified Union arg: {arg!r}")
                    # Check if arg is a known BaseModel or Dataclass
                    is_class = inspect.isclass(arg)
                    # Need try-except for issubclass with non-class types
                    is_pyd_model = False
                    is_dc = False
                    is_known_by_accessor = False
                    if is_class:
                        try:
                            is_pyd_model = issubclass(arg, BaseModel)
                            is_dc = dataclasses.is_dataclass(arg)
                            # Only check accessor if it's a model type
                            if is_pyd_model or is_dc:
                                is_known_by_accessor = self.relationship_accessor.is_source_model_known(arg)
                        except TypeError:
                            # issubclass might fail if arg is not a class (e.g., a type alias)
                            pass  # Keep flags as False
                    logger.debug(
                        f"  is_class: {is_class}, is_pyd_model: {is_pyd_model}, is_dc: {is_dc}, is_known_by_accessor: {is_known_by_accessor}"
                    )
                    is_known_model_or_dc = is_class and (is_pyd_model or is_dc) and is_known_by_accessor

                    if is_known_model_or_dc:
                        logger.debug(f"  -> Added {arg.__name__} to union_models")  # More specific logging
                        union_models.append(arg)
                    else:
                        # Make sure we don't add NoneType here if Optional wasn't fully handled upstream somehow
                        if arg is not type(None):
                            logger.debug(f"  -> Added {arg!r} to other_types_in_union")  # More specific logging
                            other_types_in_union.append(arg)

                # --- EDIT: Only set union_details IF ONLY models were found ---
                # Add logging just before the check
                logger.debug(
                    f"Finished Union arg loop. union_models: {[m.__name__ for m in union_models]}, other_types: {other_types_in_union}"
                )

                if union_models and not other_types_in_union:
                    logger.debug(
                        f"Detected Union containing ONLY known models: {union_models}. Generating _union_details signal."
                    )
                    union_details = {
                        "type": "multi_fk",
                        "models": union_models,
                        "is_optional": is_optional,  # Use the flag determined earlier
                    }
                    # Log the created union_details
                    logger.debug(f"Generated union_details: {union_details!r}")
                    # Set unit_cls to JsonFieldMapping for model unions
                    unit_cls = JsonFieldMapping
                    base_py_type = original_py_type
                    logger.debug("Setting unit_cls to JsonFieldMapping for model union")

            # --- Now, find the unit for the (potentially complex) base type --- #
            # Only find unit if not already set (e.g. by model union handling)
            if unit_cls is None:
                # Determine the type to use for finding the unit.
                # If it was M2M or handled List, unit_cls is already set.
                # Otherwise, use the processed type_obj which handles Optional/Annotated.
                type_for_unit_finding = processed_type_info["type_obj"]
                logger.debug(f"Type used for finding unit (after Union check): {type_for_unit_finding!r}")

                # Use the simplified base type after processing Optional/Annotated
                base_py_type = type_for_unit_finding
                logger.debug(f"Finding unit for base type: {base_py_type!r} with field_info: {field_info}")
                unit_cls = self._find_unit_for_pydantic_type(base_py_type, field_info)

        # --- Check if a unit was found --- #
        if not unit_cls:
            # If _find_unit_for_pydantic_type returned None, fallback to JSON
            logger.warning(
                f"No mapping unit found by scoring for base type {base_py_type} "
                f"(derived from {original_py_type}), falling back to JSONField."
            )
            unit_cls = JsonFieldMapping
            # Consider raising MappingError if even JSON doesn't fit?
            # raise MappingError(f"Could not find mapping unit for Python type: {base_py_type}")

        # >> Add logging to check selected unit <<
        logger.info(f"Selected Unit for {original_py_type}: {unit_cls.__name__ if unit_cls else 'None'}")

        instance_unit = unit_cls()  # Instantiate to call methods

        # --- Determine Django Field Type ---
        # Start with the type defined on the selected unit class
        django_field_type = instance_unit.django_field_type

        # --- Get Kwargs (before potentially overriding field type for Enums) ---
        kwargs = instance_unit.pydantic_to_django_kwargs(base_py_type, field_info)

        # --- Add Union or GFK Details if applicable --- #
        if union_details:
            logger.info("Adding _union_details to kwargs.")
            kwargs["_union_details"] = union_details
            kwargs["null"] = union_details.get("is_optional", False)
            kwargs["blank"] = union_details.get("is_optional", False)
        elif gfk_details:
            logger.info("Adding _gfk_details to kwargs.")
            kwargs["_gfk_details"] = gfk_details
            # GFK fields are placeholder JSONFields, nullability is based on Optional status
            kwargs["null"] = is_optional
            kwargs["blank"] = is_optional
        else:
            logger.debug("union_details and gfk_details are None, skipping addition to kwargs.")

        # --- Special Handling for Enums/Literals (Only if not multi-FK/GFK union) --- #
        if unit_cls is EnumFieldMapping:
            field_type_hint = kwargs.pop("_field_type_hint", None)
            if field_type_hint and isinstance(field_type_hint, type) and issubclass(field_type_hint, models.Field):
                # Directly use the hinted field type if valid
                logger.debug(
                    f"Using hinted field type {field_type_hint.__name__} from EnumFieldMapping for {base_py_type}."
                )
                django_field_type = field_type_hint
                # Ensure max_length is removed if type becomes IntegerField
                if django_field_type == models.IntegerField:
                    kwargs.pop("max_length", None)
            else:
                logger.warning("EnumFieldMapping selected but failed to get valid field type hint from kwargs.")

        # --- Handle Relationships (Only if not multi-FK union) --- #
        # This section needs to run *after* unit selection but *before* final nullability checks
        if unit_cls in (ForeignKeyMapping, OneToOneFieldMapping, ManyToManyFieldMapping):
            # Ensure base_py_type is the related model (set during M2M check or found by find_unit for FK/O2O)
            related_py_model = base_py_type

            # Check if it's a known Pydantic BaseModel OR a known Dataclass
            is_pyd_or_dc = inspect.isclass(related_py_model) and (
                issubclass(related_py_model, BaseModel) or dataclasses.is_dataclass(related_py_model)
            )
            if not is_pyd_or_dc:
                raise MappingError(
                    f"Relationship mapping unit {unit_cls.__name__} selected, but base type {related_py_model} is not a known Pydantic model or Dataclass."
                )

            # Check for self-reference BEFORE trying to get the Django model
            is_self_ref = parent_pydantic_model is not None and related_py_model == parent_pydantic_model

            if is_self_ref:
                model_ref = "self"
                # Get the target Django model name for logging/consistency if possible, but use 'self'
                # Check if the related model is a Pydantic BaseModel or a dataclass
                if inspect.isclass(related_py_model) and issubclass(related_py_model, BaseModel):
                    target_django_model = self.relationship_accessor.get_django_model_for_pydantic(
                        cast(type[BaseModel], related_py_model)
                    )
                elif dataclasses.is_dataclass(related_py_model):
                    target_django_model = self.relationship_accessor.get_django_model_for_dataclass(
                        related_py_model
                    )
                else:
                    # This case should ideally not be reached due to earlier checks, but handle defensively
                    target_django_model = None
                    logger.warning(
                        f"Self-reference check: related_py_model '{related_py_model}' is neither BaseModel nor dataclass."
                    )

                logger.debug(
                    f"Detected self-reference for {related_py_model.__name__ if inspect.isclass(related_py_model) else related_py_model} "
                    f"(Django: {getattr(target_django_model, '__name__', 'N/A')}), using 'self'."
                )
            else:
                # Get target Django model based on source type (Pydantic or Dataclass)
                target_django_model = None
                # Ensure related_py_model is actually a type before issubclass check
                if inspect.isclass(related_py_model) and issubclass(related_py_model, BaseModel):
                    # Cast to satisfy type checker, as we've confirmed it's a BaseModel subclass here
                    target_django_model = self.relationship_accessor.get_django_model_for_pydantic(
                        cast(type[BaseModel], related_py_model)
                    )
                elif dataclasses.is_dataclass(related_py_model):
                    target_django_model = self.relationship_accessor.get_django_model_for_dataclass(
                        related_py_model
                    )

                if not target_django_model:
                    raise MappingError(
                        f"Cannot map relationship: No corresponding Django model found for source model "
                        f"{related_py_model.__name__} in RelationshipConversionAccessor."
                    )

                # Use string representation (app_label.ModelName) if possible, else name
                model_ref = getattr(target_django_model._meta, "label_lower", target_django_model.__name__)

            kwargs["to"] = model_ref
            django_field_type = unit_cls.django_field_type  # Re-confirm M2MField, FK, O2O type

            # Set on_delete for FK/O2O based on Optional status
            if unit_cls in (ForeignKeyMapping, OneToOneFieldMapping):
                # Default to CASCADE for non-optional, SET_NULL for optional (matching test expectation)
                kwargs["on_delete"] = (
                    models.SET_NULL if is_optional else models.CASCADE
                )  # Changed PROTECT to CASCADE

        # --- Final Adjustments (Nullability, etc.) --- #
        # Apply nullability. M2M fields cannot be null in Django.
        # Do not override nullability if it was already forced by a multi-FK union
        if django_field_type != models.ManyToManyField and not union_details:
            kwargs["null"] = is_optional
            # Explicitly set blank based on optionality.
            # Simplified logic: Mirror the null assignment directly
            kwargs["blank"] = is_optional

        logger.debug(
            f"FINAL RETURN from get_django_mapping: Type={django_field_type}, Kwargs={kwargs}"
        )  # Added final state logging
        return django_field_type, kwargs

    def get_pydantic_mapping(self, dj_field: models.Field) -> tuple[Any, dict[str, Any]]:
        """Get the corresponding Pydantic type hint and FieldInfo kwargs for a Django Field."""
        dj_field_type = type(dj_field)
        is_optional = dj_field.null
        is_choices = bool(dj_field.choices)

        # --- Find base unit (ignoring choices for now) ---
        # Find the mapping unit based on the specific Django field type MRO
        # This gives us the correct underlying Python type (str, int, etc.)
        base_unit_cls = self._find_unit_for_django_field(dj_field_type)

        if not base_unit_cls:
            logger.warning(f"No base mapping unit for {dj_field_type.__name__}, falling back to Any.")
            pydantic_type = Optional[Any] if is_optional else Any
            return pydantic_type, {}

        base_instance_unit = base_unit_cls()

        # Get the base Pydantic type from this unit
        final_pydantic_type = base_instance_unit.python_type

        # --- Determine Final Pydantic Type Adjustments --- #
        # (Relationships, AutoPK, Optional wrapper)

        # Handle choices FIRST to determine the core type before Optional wrapping
        if is_choices:
            # Default to base type, override if valid choices found
            final_pydantic_type = base_instance_unit.python_type
            if dj_field.choices:  # Explicit check before iteration
                try:
                    choice_values = tuple(choice[0] for choice in dj_field.choices)
                    if choice_values:  # Ensure the tuple is not empty
                        final_pydantic_type = Literal[choice_values]  # type: ignore
                        logger.debug(f"Mapped choices for '{dj_field.name}' to Pydantic type: {final_pydantic_type}")
                    else:
                        logger.warning(
                            f"Field '{dj_field.name}' has choices defined, but extracted values are empty. Falling back."
                        )
                        # Keep final_pydantic_type as base type
                except Exception as e:
                    logger.warning(f"Failed to extract choices for field '{dj_field.name}'. Error: {e}. Falling back.")
                    # Keep final_pydantic_type as base type
            # If dj_field.choices was None/empty initially, final_pydantic_type remains the base type
        else:
            # Get the base Pydantic type from this unit if not choices
            final_pydantic_type = base_instance_unit.python_type

        # 1. Handle Relationships first, as they determine the core type
        if base_unit_cls in (ForeignKeyMapping, OneToOneFieldMapping, ManyToManyFieldMapping):
            related_dj_model = getattr(dj_field, "related_model", None)
            if not related_dj_model:
                raise MappingError(f"Cannot determine related Django model for field '{dj_field.name}'")

            # Resolve 'self' reference
            if related_dj_model == "self":
                # We need the Django model class that dj_field belongs to.
                # This info isn't directly passed, so this approach might be limited.
                # Assuming self-reference points to the same type hierarchy for now.
                # A better solution might need the model context passed down.
                logger.warning(
                    f"Handling 'self' reference for field '{dj_field.name}'. Mapping might be incomplete without parent model context."
                )
                # Attempt to get Pydantic model mapped to the field's owner model if possible (heuristically)
                # This is complex and potentially fragile.
                # For now, let's use a placeholder or raise an error if needed strictly.
                # Sticking with the base type (e.g., Any or int for PK) might be safer without context.
                # Use the base type (likely PK int/uuid) as the fallback type here
                target_pydantic_model = base_instance_unit.python_type
                logger.debug(f"Using Any as placeholder for 'self' reference '{dj_field.name}'")
            else:
                target_pydantic_model = self.relationship_accessor.get_pydantic_model_for_django(related_dj_model)

            if not target_pydantic_model or target_pydantic_model is Any:
                if related_dj_model != "self":  # Avoid redundant warning for self
                    logger.warning(
                        f"Cannot map relationship: No corresponding Pydantic model found for Django model "
                        f"'{related_dj_model._meta.label if hasattr(related_dj_model, '_meta') else related_dj_model.__name__}'. "
                        f"Using placeholder '{final_pydantic_type}'."
                    )
                # Keep final_pydantic_type as the base unit's python_type (e.g., int for FK)
            else:
                if base_unit_cls == ManyToManyFieldMapping:
                    final_pydantic_type = list[target_pydantic_model]
                else:  # FK or O2O
                    # Keep the PK type (e.g., int) if target model not found,
                    # otherwise use the target Pydantic model type.
                    final_pydantic_type = target_pydantic_model  # This should now be the related model type
                logger.debug(f"Mapped relationship field '{dj_field.name}' to Pydantic type: {final_pydantic_type}")

        # 2. AutoPK override (after relationship resolution)
        is_auto_pk = dj_field.primary_key and isinstance(
            dj_field, (models.AutoField, models.BigAutoField, models.SmallAutoField)
        )
        if is_auto_pk:
            final_pydantic_type = Optional[int]
            logger.debug(f"Mapped AutoPK field '{dj_field.name}' to {final_pydantic_type}")
            is_optional = True  # AutoPKs are always optional in Pydantic input

        # 3. Apply Optional[...] wrapper if necessary (AFTER relationship/AutoPK)
        # Do not wrap M2M lists or already Optional AutoPKs in Optional[] again.
        # Also, don't wrap if the type is already Literal (choices handled Optionality) - NO, wrap Literal too if null=True
        if is_optional and not is_auto_pk:
            # Check if is_choices? No, optional applies to literal too.
            origin = get_origin(final_pydantic_type)
            args = get_args(final_pydantic_type)
            is_already_optional = origin is Optional or origin is UnionType and type(None) in args
            if not is_already_optional:
                final_pydantic_type = Optional[final_pydantic_type]
                logger.debug(f"Wrapped type for '{dj_field.name}' in Optional: {final_pydantic_type}")

        # --- Generate FieldInfo Kwargs --- #
        # Use EnumFieldMapping logic for kwargs ONLY if choices exist,
        # otherwise use the base unit determined earlier.
        # --> NO, always use base unit for kwargs now. Literal type handles choices.
        # kwargs_unit_cls = EnumFieldMapping if is_choices else base_unit_cls  # OLD logic
        instance_unit = base_unit_cls()  # Use the base unit (e.g., StrFieldMapping) for base kwargs

        field_info_kwargs = instance_unit.django_to_pydantic_field_info_kwargs(dj_field)

        # --- Explicitly cast title (verbose_name) and description (help_text) --- #
        if field_info_kwargs.get("title") is not None:
            field_info_kwargs["title"] = str(field_info_kwargs["title"])
            logger.debug(f"Ensured title is str for '{dj_field.name}': {field_info_kwargs['title']}")
        if field_info_kwargs.get("description") is not None:
            field_info_kwargs["description"] = str(field_info_kwargs["description"])
            logger.debug(f"Ensured description is str for '{dj_field.name}': {field_info_kwargs['description']}")
        # --- End Casting --- #

        # --- Keep choices in json_schema_extra even when using Literal ---
        # This preserves the (value, label) mapping as metadata alongside the Literal type.
        if (
            is_choices
            and "json_schema_extra" in field_info_kwargs
            and "choices" in field_info_kwargs["json_schema_extra"]
        ):
            logger.debug(f"Kept choices in json_schema_extra for Literal field '{dj_field.name}'")
        elif is_choices:
            logger.debug(
                f"Field '{dj_field.name}' has choices, but they weren't added to json_schema_extra by the mapping unit."
            )

        # Set default=None for optional fields that don't have an explicit default
        if is_optional and "default" not in field_info_kwargs:
            field_info_kwargs["default"] = None
            logger.debug(f"Set default=None for Optional field '{dj_field.name}'")
        # Clean up redundant `default=None` for Optional fields handled by Pydantic v2.
        # Only pop default=None if the field is Optional (and not an AutoPK, though autoPK sets default=None anyway)
        elif is_optional and field_info_kwargs.get("default") is None:
            # Check if it's already explicitly default=None from AutoPK handling
            if not is_auto_pk:
                field_info_kwargs.pop("default", None)
                logger.debug(f"Removed redundant default=None for Optional field '{dj_field.name}'")
            else:
                logger.debug(f"Keeping explicit default=None for AutoPK field '{dj_field.name}'")

        logger.debug(
            f"Final Pydantic mapping for '{dj_field.name}': Type={final_pydantic_type}, Kwargs={field_info_kwargs}"
        )
        return final_pydantic_type, field_info_kwargs
```
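A usage sketch for the mapper as a whole. Django settings must be configured before concrete fields are constructed; the field types shown in the comments follow the registry order above (int is mapped to BigInt before Int) but are expectations, not verified output.

```python
from typing import Optional

import django
from django.conf import settings

# Minimal standalone Django configuration (assumption: no project settings exist).
if not settings.configured:
    settings.configure(INSTALLED_APPS=["django.contrib.contenttypes"])
    django.setup()

from pydantic2django.core.bidirectional_mapper import BidirectionalTypeMapper

mapper = BidirectionalTypeMapper()

field_type, kwargs = mapper.get_django_mapping(int)
print(field_type.__name__, kwargs)  # e.g. "BigIntegerField" with null=False, blank=False

field_type, kwargs = mapper.get_django_mapping(Optional[str])
print(field_type.__name__, kwargs.get("null"))  # a string-backed field with null=True
```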
get_django_mapping(python_type, field_info=None, parent_pydantic_model=None)¶

Get the corresponding Django Field type and constructor kwargs for a Python type.
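The tail of this method encodes two small, easily stated rules. Distilled here as a standalone sketch (these helper functions are not library API):

```python
from django.db import models

# Distilled from the final-adjustment block in the source above:
# null/blank mirror the source type's optionality (except for M2M fields),
# and FK/O2O on_delete is keyed off the same flag.
def on_delete_for(is_optional: bool):
    return models.SET_NULL if is_optional else models.CASCADE

def nullability_kwargs(is_optional: bool, is_m2m: bool) -> dict:
    return {} if is_m2m else {"null": is_optional, "blank": is_optional}

assert on_delete_for(True) is models.SET_NULL
assert nullability_kwargs(False, True) == {}  # M2M fields never take null=True
```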
get_pydantic_mapping(dj_field)¶

Get the corresponding Pydantic type hint and FieldInfo kwargs for a Django Field.
Source code in
src/pydantic2django/core/bidirectional_mapper.py
```python
def get_pydantic_mapping(self, dj_field: models.Field) -> tuple[Any, dict[str, Any]]:
    """Get the corresponding Pydantic type hint and FieldInfo kwargs for a Django Field."""
    dj_field_type = type(dj_field)
    is_optional = dj_field.null
    is_choices = bool(dj_field.choices)

    # --- Find base unit (ignoring choices for now) --- #
    # Find the mapping unit based on the specific Django field type MRO
    # This gives us the correct underlying Python type (str, int, etc.)
    base_unit_cls = self._find_unit_for_django_field(dj_field_type)
    if not base_unit_cls:
        logger.warning(f"No base mapping unit for {dj_field_type.__name__}, falling back to Any.")
        pydantic_type = Optional[Any] if is_optional else Any
        return pydantic_type, {}

    base_instance_unit = base_unit_cls()
    # Get the base Pydantic type from this unit
    final_pydantic_type = base_instance_unit.python_type

    # --- Determine Final Pydantic Type Adjustments --- #
    # (Relationships, AutoPK, Optional wrapper)

    # Handle choices FIRST to determine the core type before Optional wrapping
    if is_choices:
        # Default to base type, override if valid choices found
        final_pydantic_type = base_instance_unit.python_type
        if dj_field.choices:  # Explicit check before iteration
            try:
                choice_values = tuple(choice[0] for choice in dj_field.choices)
                if choice_values:  # Ensure the tuple is not empty
                    final_pydantic_type = Literal[choice_values]  # type: ignore
                    logger.debug(f"Mapped choices for '{dj_field.name}' to Pydantic type: {final_pydantic_type}")
                else:
                    logger.warning(
                        f"Field '{dj_field.name}' has choices defined, but extracted values are empty. Falling back."
                    )
                    # Keep final_pydantic_type as base type
            except Exception as e:
                logger.warning(f"Failed to extract choices for field '{dj_field.name}'. Error: {e}. Falling back.")
                # Keep final_pydantic_type as base type
        # If dj_field.choices was None/empty initially, final_pydantic_type remains the base type
    else:
        # Get the base Pydantic type from this unit if not choices
        final_pydantic_type = base_instance_unit.python_type

    # 1. Handle Relationships first, as they determine the core type
    if base_unit_cls in (ForeignKeyMapping, OneToOneFieldMapping, ManyToManyFieldMapping):
        related_dj_model = getattr(dj_field, "related_model", None)
        if not related_dj_model:
            raise MappingError(f"Cannot determine related Django model for field '{dj_field.name}'")

        # Resolve 'self' reference
        if related_dj_model == "self":
            # We need the Django model class that dj_field belongs to.
            # This info isn't directly passed, so this approach might be limited.
            # Assuming self-reference points to the same type hierarchy for now.
            # A better solution might need the model context passed down.
            logger.warning(
                f"Handling 'self' reference for field '{dj_field.name}'. Mapping might be incomplete without parent model context."
            )
            # Attempt to get Pydantic model mapped to the field's owner model if possible (heuristically)
            # This is complex and potentially fragile.
            # For now, let's use a placeholder or raise an error if needed strictly.
            # Sticking with the base type (e.g., Any or int for PK) might be safer without context.
            # Use the base type (likely PK int/uuid) as the fallback type here
            target_pydantic_model = base_instance_unit.python_type
            logger.debug(f"Using Any as placeholder for 'self' reference '{dj_field.name}'")
        else:
            target_pydantic_model = self.relationship_accessor.get_pydantic_model_for_django(related_dj_model)

        if not target_pydantic_model or target_pydantic_model is Any:
            if related_dj_model != "self":  # Avoid redundant warning for self
                logger.warning(
                    f"Cannot map relationship: No corresponding Pydantic model found for Django model "
                    f"'{related_dj_model._meta.label if hasattr(related_dj_model, '_meta') else related_dj_model.__name__}'. "
                    f"Using placeholder '{final_pydantic_type}'."
                )
            # Keep final_pydantic_type as the base unit's python_type (e.g., int for FK)
        else:
            if base_unit_cls == ManyToManyFieldMapping:
                final_pydantic_type = list[target_pydantic_model]
            else:  # FK or O2O
                # Keep the PK type (e.g., int) if target model not found,
                # otherwise use the target Pydantic model type.
                final_pydantic_type = target_pydantic_model  # This should now be the related model type
            logger.debug(f"Mapped relationship field '{dj_field.name}' to Pydantic type: {final_pydantic_type}")

    # 2. AutoPK override (after relationship resolution)
    is_auto_pk = dj_field.primary_key and isinstance(
        dj_field, (models.AutoField, models.BigAutoField, models.SmallAutoField)
    )
    if is_auto_pk:
        final_pydantic_type = Optional[int]
        logger.debug(f"Mapped AutoPK field '{dj_field.name}' to {final_pydantic_type}")
        is_optional = True  # AutoPKs are always optional in Pydantic input

    # 3. Apply Optional[...] wrapper if necessary (AFTER relationship/AutoPK)
    # Do not wrap M2M lists or already Optional AutoPKs in Optional[] again.
    # Also, don't wrap if the type is already Literal (choices handled Optionality) - NO, wrap Literal too if null=True
    if is_optional and not is_auto_pk:
        # Check if is_choices? No, optional applies to literal too.
        origin = get_origin(final_pydantic_type)
        args = get_args(final_pydantic_type)
        is_already_optional = origin is Optional or origin is UnionType and type(None) in args
        if not is_already_optional:
            final_pydantic_type = Optional[final_pydantic_type]
            logger.debug(f"Wrapped type for '{dj_field.name}' in Optional: {final_pydantic_type}")

    # --- Generate FieldInfo Kwargs --- #
    # Use EnumFieldMapping logic for kwargs ONLY if choices exist,
    # otherwise use the base unit determined earlier.
    # --> NO, always use base unit for kwargs now. Literal type handles choices.
    # kwargs_unit_cls = EnumFieldMapping if is_choices else base_unit_cls  # OLD logic
    instance_unit = base_unit_cls()  # Use the base unit (e.g., StrFieldMapping) for base kwargs
    field_info_kwargs = instance_unit.django_to_pydantic_field_info_kwargs(dj_field)

    # --- Explicitly cast title (verbose_name) and description (help_text) --- #
    if field_info_kwargs.get("title") is not None:
        field_info_kwargs["title"] = str(field_info_kwargs["title"])
        logger.debug(f"Ensured title is str for '{dj_field.name}': {field_info_kwargs['title']}")
    if field_info_kwargs.get("description") is not None:
        field_info_kwargs["description"] = str(field_info_kwargs["description"])
        logger.debug(f"Ensured description is str for '{dj_field.name}': {field_info_kwargs['description']}")
    # --- End Casting --- #

    # --- Keep choices in json_schema_extra even when using Literal ---
    # This preserves the (value, label) mapping as metadata alongside the Literal type.
    if (
        is_choices
        and "json_schema_extra" in field_info_kwargs
        and "choices" in field_info_kwargs["json_schema_extra"]
    ):
        logger.debug(f"Kept choices in json_schema_extra for Literal field '{dj_field.name}'")
    elif is_choices:
        logger.debug(
            f"Field '{dj_field.name}' has choices, but they weren't added to json_schema_extra by the mapping unit."
        )

    # Set default=None for optional fields that don't have an explicit default
    if is_optional and "default" not in field_info_kwargs:
        field_info_kwargs["default"] = None
        logger.debug(f"Set default=None for Optional field '{dj_field.name}'")
    # Clean up redundant `default=None` for Optional fields handled by Pydantic v2.
    # Only pop default=None if the field is Optional (and not an AutoPK, though autoPK sets default=None anyway)
    elif is_optional and field_info_kwargs.get("default") is None:
        # Check if it's already explicitly default=None from AutoPK handling
        if not is_auto_pk:
            field_info_kwargs.pop("default", None)
            logger.debug(f"Removed redundant default=None for Optional field '{dj_field.name}'")
        else:
            logger.debug(f"Keeping explicit default=None for AutoPK field '{dj_field.name}'")

    logger.debug(
        f"Final Pydantic mapping for '{dj_field.name}': Type={final_pydantic_type}, Kwargs={field_info_kwargs}"
    )
    return final_pydantic_type, field_info_kwargs
```
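And a hedged sketch of the reverse direction, reusing the `mapper` object from the previous example. Constructing a bare field like this is illustrative only; Django normally binds the field name via the Model metaclass.

```python
from django.db import models

status = models.CharField(
    max_length=10,
    null=True,
    choices=[("draft", "Draft"), ("live", "Live")],
    help_text="Publication status",
)
status.set_attributes_from_name("status")  # stand-in for Model metaclass binding

py_type, info_kwargs = mapper.get_pydantic_mapping(status)
# Per the code above: choices become Literal["draft", "live"], null=True wraps the
# Literal in Optional, and help_text surfaces as the `description` kwarg.
```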
- TypeMappingUnit: Base class defining a bidirectional mapping between a Python type and a Django Field.
Source code in
src/pydantic2django/core/mapping_units.py
```python
class TypeMappingUnit:
    """Base class defining a bidirectional mapping between a Python type and a Django Field."""

    python_type: type[T_PydanticType]
    django_field_type: type[models.Field]  # Use base class here

    def __init_subclass__(cls, **kwargs):
        """Ensure subclasses define the required types."""
        super().__init_subclass__(**kwargs)
        if not hasattr(cls, "python_type") or not hasattr(cls, "django_field_type"):
            raise NotImplementedError(
                "Subclasses of TypeMappingUnit must define 'python_type' and 'django_field_type' class attributes."
            )

    @classmethod
    def matches(cls, py_type: Any, field_info: Optional[FieldInfo] = None) -> float:
        """
        Calculate a score indicating how well this unit matches the given Python type and FieldInfo.

        Args:
            py_type: The Python type to match against.
            field_info: Optional Pydantic FieldInfo for context.

        Returns:
            A float score (0.0 = no match, higher = better match).
            Base implementation scores:
            - 1.0 for exact type match (cls.python_type == py_type)
            - 0.5 for subclass match (issubclass(py_type, cls.python_type))
            - 0.0 otherwise
        """
        target_py_type = cls.python_type
        if py_type == target_py_type:
            return 1.0
        try:
            # Check issubclass only if both are actual classes and py_type is not Any
            if (
                py_type is not Any
                and inspect.isclass(py_type)
                and inspect.isclass(target_py_type)
                and issubclass(py_type, target_py_type)
            ):
                # Don't match if it's the same type (already handled by exact match)
                if py_type is not target_py_type:
                    return 0.5
        except TypeError:
            # issubclass fails on non-classes (like Any, List[int], etc.)
            pass
        return 0.0

    def pydantic_to_django_kwargs(self, py_type: Any, field_info: Optional[FieldInfo] = None) -> dict[str, Any]:
        """Generate Django field constructor kwargs from Pydantic FieldInfo."""
        kwargs = {}
        if field_info:
            # Map common attributes
            if field_info.title:
                kwargs["verbose_name"] = field_info.title
            if field_info.description:
                kwargs["help_text"] = field_info.description
            # Only consider `default` if `default_factory` is None
            if field_info.default_factory is None:
                if field_info.default is not PydanticUndefined and field_info.default is not None:
                    # Django doesn't handle callable defaults easily here
                    if not callable(field_info.default):
                        kwargs["default"] = field_info.default
                elif field_info.default is None:  # Explicitly check for None default
                    kwargs["default"] = None  # Add default=None if present in FieldInfo
            # else: If default_factory is present, do not add a 'default' kwarg.
            # No warning needed as this is now expected behavior.
            # Note: Frozen, ge, le etc. are validation rules, map separately if needed
        return kwargs

    def django_to_pydantic_field_info_kwargs(self, dj_field: models.Field) -> dict[str, Any]:
        """Generate Pydantic FieldInfo kwargs from a Django field instance."""
        kwargs = {}
        field_name = getattr(dj_field, "name", "unknown_field")  # Get field name for logging

        # Title: Use verbose_name or generate from field name
        verbose_name = getattr(dj_field, "verbose_name", None)
        logger.debug(f"Processing field '{field_name}': verbose_name='{verbose_name}'")
        if verbose_name:
            # Ensure verbose_name is a string, handling lazy proxies
            kwargs["title"] = force_str(verbose_name).capitalize()
        elif field_name != "unknown_field" and isinstance(field_name, str):
            # Generate title from name if verbose_name is missing and name is a string
            generated_title = field_name.replace("_", " ").capitalize()
            kwargs["title"] = generated_title
            logger.debug(f"Generated title for '{field_name}': '{generated_title}'")
        # else: field name is None or 'unknown_field', no title generated by default

        # Description
        if dj_field.help_text:
            # Ensure help_text is a string, handling lazy proxies
            kwargs["description"] = force_str(dj_field.help_text)

        # Default value/factory handling
        if dj_field.has_default():
            dj_default = dj_field.get_default()
            if dj_default is not models.fields.NOT_PROVIDED:
                if callable(dj_default):
                    factory_set = False
                    if dj_default is dict:
                        kwargs["default_factory"] = dict
                        factory_set = True
                    elif dj_default is list:
                        kwargs["default_factory"] = list
                        factory_set = True
                    # Add other known callable mappings if needed
                    else:
                        logger.debug(
                            f"Django field '{dj_field.name}' has an unmapped callable default ({dj_default}), "
                            "not mapping to Pydantic default/default_factory."
                        )
                    if factory_set:
                        kwargs.pop("default", None)
                # Handle non-callable defaults
                # Map default={} back to default_factory=dict for JSONField
                elif dj_default == {}:
                    kwargs["default_factory"] = dict
                    kwargs.pop("default", None)
                elif dj_default == []:
                    kwargs["default_factory"] = list
                    kwargs.pop("default", None)
                elif dj_default is not None:
                    # Add non-None, non-callable, non-empty-collection defaults
                    logger.debug(
                        f"Processing non-callable default for '{field_name}'. Type: {type(dj_default)}, Value: {dj_default!r}"
                    )
                    # Apply force_str ONLY if the default value's type suggests it might be a lazy proxy string.
                    # A simple check is if 'proxy' is in the type name.
                    processed_default = dj_default
                    if "proxy" in type(dj_default).__name__:
                        try:
                            processed_default = force_str(dj_default)
                            logger.debug(
                                f"Applied force_str to potential lazy default for '{field_name}'. New value: {processed_default!r}"
                            )
                        except Exception as e:
                            logger.error(
                                f"Failed to apply force_str to default value for '{field_name}': {e}. Assigning raw default."
                            )
                            processed_default = dj_default  # Keep original on error
                    kwargs["default"] = processed_default
                    logger.debug(f"Assigned final default for '{field_name}': {kwargs.get('default')!r}")

        # Handle AutoField PKs -> frozen=True, default=None
        is_auto_pk = dj_field.primary_key and isinstance(
            dj_field, (models.AutoField, models.BigAutoField, models.SmallAutoField)
        )
        if is_auto_pk:
            kwargs["frozen"] = True
            kwargs["default"] = None

        # Handle choices (including processing labels and limiting)
        # Log choices *before* calling handle_choices
        if hasattr(dj_field, "choices") and dj_field.choices:
            try:
                # Log the raw choices from the Django field
                raw_choices_repr = repr(list(dj_field.choices))  # Materialize and get repr
                logger.debug(f"Field '{field_name}': Raw choices before handle_choices: {raw_choices_repr}")
            except Exception as log_err:
                logger.warning(f"Field '{field_name}': Error logging raw choices: {log_err}")
            self.handle_choices(dj_field, kwargs)
            # Log choices *after* handle_choices modified kwargs
            processed_choices_repr = repr(kwargs.get("json_schema_extra", {}).get("choices"))
            logger.debug(f"Field '{field_name}': Choices in kwargs after handle_choices: {processed_choices_repr}")
        # Handle non-choice max_length only if choices were NOT processed
        elif dj_field.max_length is not None:
            # Only add max_length if not choices - specific units can override
            kwargs["max_length"] = dj_field.max_length

        logger.debug(f"Base kwargs generated for '{field_name}': {kwargs}")
        return kwargs

    def handle_choices(self, dj_field: models.Field, kwargs: dict[str, Any]) -> None:
        """
        Handles Django field choices, ensuring lazy translation proxies are resolved.

        It processes the choices, forces string conversion on labels within an active
        translation context, limits the number of choices added to the schema,
        and stores them in `json_schema_extra`.
        """
        field_name = getattr(dj_field, "name", "unknown_field")
        processed_choices = []
        MAX_CHOICES_IN_SCHEMA = 30  # TODO: Make configurable
        limited_choices = []
        default_value = kwargs.get("default")  # Use potentially processed default
        default_included = False

        # --- Ensure Translation Context --- #
        active_translation = None
        if translation:
            try:
                # Get the currently active language to restore later
                current_language = translation.get_language()
                # Activate the default language (or a specific one like 'en')
                # This forces lazy objects to resolve using a consistent language.
                # Using settings.LANGUAGE_CODE assumes it's set correctly.
                default_language = getattr(settings, "LANGUAGE_CODE", "en")  # Fallback to 'en'
                active_translation = translation.override(default_language)
                logger.debug(
                    f"Activated translation override ('{default_language}') for processing choices of '{field_name}'"
                )
                active_translation.__enter__()  # Manually enter context
            except Exception as trans_err:
                logger.warning(f"Failed to activate translation context for '{field_name}': {trans_err}")
                active_translation = None  # Ensure it's None if activation failed
        else:
            logger.warning("Django translation module not available. Lazy choices might not resolve correctly.")

        # --- Process Choices (within potential translation context) --- #
        try:
            all_choices = list(dj_field.choices or [])
            for value, label in all_choices:
                logger.debug(
                    f"  Processing choice for '{field_name}': Value={value!r}, Label={label!r} (Type: {type(label)})"
                )
                try:
                    # Apply force_str defensively; should resolve lazy proxies if context is active
                    processed_label = force_str(label)
                    logger.debug(
                        f"  Processed label for '{field_name}': Value={value!r}, Label={processed_label!r} (Type: {type(processed_label)})"
                    )
                except Exception as force_str_err:
                    logger.error(
                        f"Error using force_str on label for '{field_name}' (value: {value!r}): {force_str_err}"
                    )
                    # Fallback: use repr or a placeholder if force_str fails completely
                    processed_label = f"<unresolved: {repr(label)}>"
                processed_choices.append((value, processed_label))

            # --- Limit Choices --- #
            if len(processed_choices) > MAX_CHOICES_IN_SCHEMA:
                logger.warning(
                    f"Limiting choices for '{field_name}' from {len(processed_choices)} to {MAX_CHOICES_IN_SCHEMA}"
                )
                if default_value is not None:
                    for val, lbl in processed_choices:
                        if val == default_value:
                            limited_choices.append((val, lbl))
                            default_included = True
                            break
                remaining_slots = MAX_CHOICES_IN_SCHEMA - len(limited_choices)
                if remaining_slots > 0:
                    for val, lbl in processed_choices:
                        if len(limited_choices) >= MAX_CHOICES_IN_SCHEMA:
                            break
                        if not (default_included and val == default_value):
                            limited_choices.append((val, lbl))
                final_choices_list = limited_choices
            else:
                final_choices_list = processed_choices

            # --- Store Choices --- #
            kwargs.setdefault("json_schema_extra", {})["choices"] = final_choices_list
            kwargs.pop("max_length", None)  # Remove max_length if choices are present
            logger.debug(f"Stored final choices in json_schema_extra for '{field_name}'")

        except Exception as e:
            logger.error(f"Error processing or limiting choices for field '{field_name}': {e}", exc_info=True)
            kwargs.pop("json_schema_extra", None)
        finally:
            # --- Deactivate Translation Context --- #
            if active_translation:
                try:
                    active_translation.__exit__(None, None, None)  # Manually exit context
                    logger.debug(f"Deactivated translation override for '{field_name}'")
                except Exception as trans_exit_err:
                    logger.warning(f"Error deactivating translation context for '{field_name}': {trans_exit_err}")
```
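To make the subclass contract concrete, here is a standalone sketch. The library ships its own concrete units; `IntFieldMapping` here is a local illustration of the base-class scoring, not necessarily the library's class of that name.

```python
# Illustrative sketch: a minimal concrete unit exercising TypeMappingUnit.matches().
from django.db import models
from pydantic2django.core.mapping_units import TypeMappingUnit

class IntFieldMapping(TypeMappingUnit):
    python_type = int
    django_field_type = models.IntegerField

IntFieldMapping.matches(int)   # 1.0 -- exact type match
IntFieldMapping.matches(bool)  # 0.5 -- bool subclasses int
IntFieldMapping.matches(str)   # 0.0 -- unrelated type
```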
__init_subclass__(**kwargs)¶

Ensure subclasses define the required types.
django_to_pydantic_field_info_kwargs(dj_field)¶

Generate Pydantic FieldInfo kwargs from a Django field instance.
handle_choices(dj_field, kwargs)¶

Handles Django field choices, ensuring lazy translation proxies are resolved.

It processes the choices, forces string conversion on labels within an active translation context, limits the number of choices added to the schema, and stores them in `json_schema_extra`.
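A hedged sketch of what `handle_choices` does to the kwargs dict, reusing the illustrative `IntFieldMapping` unit from above (the method lives on the base class, so any unit works). Without configured Django settings the translation override is skipped with a warning, but the choices are still processed.

```python
# Illustrative only: choices are copied into json_schema_extra and max_length dropped.
from django.db import models

color = models.CharField(max_length=20, choices=[("r", "Red"), ("g", "Green")])
kwargs = {"max_length": 20}
IntFieldMapping().handle_choices(color, kwargs)
# kwargs == {"json_schema_extra": {"choices": [("r", "Red"), ("g", "Green")]}}
# max_length was popped because choices take precedence in the schema.
```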
matches(py_type, field_info=None) classmethod¶

Calculate a score indicating how well this unit matches the given Python type and FieldInfo.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `py_type` | `Any` | The Python type to match against. | *required* |
| `field_info` | `Optional[FieldInfo]` | Optional Pydantic FieldInfo for context. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `float` | A float score (0.0 = no match, higher = better match). Base implementation scores: 1.0 for exact type match (`cls.python_type == py_type`), 0.5 for subclass match (`issubclass(py_type, cls.python_type)`), 0.0 otherwise. |
pydantic_to_django_kwargs(py_type, field_info=None)¶

Generate Django field constructor kwargs from Pydantic FieldInfo.
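For example, again with the illustrative unit from above: titles map to `verbose_name`, descriptions to `help_text`, and non-callable defaults carry over. (In Pydantic v2, `Field(...)` returns a `FieldInfo` instance.)

```python
from pydantic import Field

info = Field(default=3, title="Retry count", description="How many times to retry")
IntFieldMapping().pydantic_to_django_kwargs(int, info)
# -> {"verbose_name": "Retry count", "help_text": "How many times to retry", "default": 3}
```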
- Relationships and cross-model resolution
- RelationshipConversionAccessor: Creates and looks up mappings between source models (Pydantic/Dataclass) and Django models.
Source code in
src/pydantic2django/core/relationships.py
```python
@dataclass
class RelationshipConversionAccessor:
    available_relationships: list[RelationshipMapper] = field(default_factory=list)
    # dependencies: Optional[dict[str, set[str]]] = field(default=None)  # Keep if used

    @classmethod
    def from_dict(cls, relationship_mapping_dict: dict) -> "RelationshipConversionAccessor":
        """
        Convert a dictionary of strings representing model qualified names to a RelationshipConversionAccessor

        The dictionary should be of the form:
        {
            "pydantic_model_qualified_name": "django_model_qualified_name",
            ...
        }
        """
        available_relationships = []
        for pydantic_mqn, django_mqn in relationship_mapping_dict.items():
            try:
                # Split the module path and class name
                pydantic_module_path, pydantic_class_name = pydantic_mqn.rsplit(".", 1)
                django_module_path, django_class_name = django_mqn.rsplit(".", 1)

                # Import the modules
                pydantic_module = importlib.import_module(pydantic_module_path)
                django_module = importlib.import_module(django_module_path)

                # Get the actual class objects
                pydantic_model = getattr(pydantic_module, pydantic_class_name)
                django_model = getattr(django_module, django_class_name)

                available_relationships.append(RelationshipMapper(pydantic_model, django_model, context=None))
            except Exception as e:
                logger.warning(f"Error importing model {pydantic_mqn} or {django_mqn}: {e}")
                continue
        return cls(available_relationships)

    def to_dict(self) -> dict:
        """
        Convert the relationships to a dictionary of strings representing model qualified names
        for bidirectional conversion.

        Can be stored in a JSON field, and used to reconstruct the relationships.
        """
        relationship_mapping_dict = {}
        for relationship in self.available_relationships:
            # Skip relationships where either model is None
            if relationship.pydantic_model is None or relationship.django_model is None:
                continue
            pydantic_mqn = self._get_pydantic_model_qualified_name(relationship.pydantic_model)
            django_mqn = self._get_django_model_qualified_name(relationship.django_model)
            relationship_mapping_dict[pydantic_mqn] = django_mqn
        return relationship_mapping_dict

    def _get_pydantic_model_qualified_name(self, model: type[BaseModel] | None) -> str:
        """Get the fully qualified name of a Pydantic model as module.class_name"""
        if model is None:
            return ""
        return f"{model.__module__}.{model.__name__}"

    def _get_django_model_qualified_name(self, model: type[models.Model] | None) -> str:
        """Get the fully qualified name of a Django model as app_label.model_name"""
        if model is None:
            return ""
        return f"{model._meta.app_label}.{model.__name__}"

    @property
    def available_source_models(self) -> list[type]:
        """Get a list of all source models (Pydantic or Dataclass)."""
        models = []
        for r in self.available_relationships:
            if r.pydantic_model:
                models.append(r.pydantic_model)
            if r.dataclass_model:
                models.append(r.dataclass_model)
        return models

    @property
    def available_django_models(self) -> list[type[models.Model]]:
        """Get a list of all Django models in the relationship accessor"""
        return [r.django_model for r in self.available_relationships if r.django_model is not None]

    def add_pydantic_model(self, model: type[BaseModel]) -> None:
        """Add a Pydantic model to the relationship accessor"""
        # Check if the model is already in available_pydantic_models by comparing class names
        model_name = model.__name__
        existing_models = [m.__name__ for m in self.available_source_models]
        if model_name not in existing_models:
            self.available_relationships.append(RelationshipMapper(model, None, context=None))

    def add_dataclass_model(self, model: type) -> None:
        """Add a Dataclass model to the relationship accessor"""
        # Check if the model is already mapped
        if any(r.dataclass_model == model for r in self.available_relationships):
            return  # Already exists
        # Check if a Pydantic model with the same name is already mapped (potential conflict)
        if any(r.pydantic_model and r.pydantic_model.__name__ == model.__name__ for r in self.available_relationships):
            logger.warning(f"Adding dataclass {model.__name__}, but a Pydantic model with the same name exists.")
        self.available_relationships.append(RelationshipMapper(dataclass_model=model))

    def add_django_model(self, model: type[models.Model]) -> None:
        """Add a Django model to the relationship accessor"""
        # Check if the model is already in available_django_models by comparing class names
        model_name = model.__name__
        existing_models = [m.__name__ for m in self.available_django_models]
        if model_name not in existing_models:
            self.available_relationships.append(RelationshipMapper(None, None, model, context=None))

    def get_django_model_for_pydantic(self, pydantic_model: type[BaseModel]) -> Optional[type[models.Model]]:
        """
        Find the corresponding Django model for a given Pydantic model

        Returns None if no matching Django model is found
        """
        for relationship in self.available_relationships:
            if relationship.pydantic_model == pydantic_model and relationship.django_model is not None:
                return relationship.django_model
        return None

    def get_pydantic_model_for_django(self, django_model: type[models.Model]) -> Optional[type[BaseModel]]:
        """
        Find the corresponding Pydantic model for a given Django model

        Returns None if no matching Pydantic model is found
        """
        for relationship in self.available_relationships:
            if relationship.django_model == django_model and relationship.pydantic_model is not None:
                return relationship.pydantic_model
        return None

    def get_django_model_for_dataclass(self, dataclass_model: type) -> Optional[type[models.Model]]:
        """Find the corresponding Django model for a given Dataclass model."""
        logger.debug(f"Searching for Django model matching dataclass: {dataclass_model.__name__}")
        for relationship in self.available_relationships:
            # Check if this mapper holds the target dataclass and has a linked Django model
            if relationship.dataclass_model == dataclass_model and relationship.django_model is not None:
                logger.debug(f"  Found match: {relationship.django_model.__name__}")
                return relationship.django_model
        logger.debug(f"  No match found for dataclass {dataclass_model.__name__}")
        return None

    def map_relationship(self, source_model: type, django_model: type[models.Model]) -> None:
        """
        Create or update a mapping between a source model (Pydantic/Dataclass) and a Django model.
        """
        source_type = (
            "pydantic"
            if isinstance(source_model, type) and issubclass(source_model, BaseModel)
            else "dataclass"
            if dataclasses.is_dataclass(source_model)
            else "unknown"
        )

        if source_type == "unknown":
            logger.warning(f"Cannot map relationship for unknown source type: {source_model}")
            return

        # Check if either model already exists in a relationship
        for relationship in self.available_relationships:
            if source_type == "pydantic" and relationship.pydantic_model == source_model:
                relationship.django_model = django_model
                # Ensure dataclass_model is None if we map pydantic
                relationship.dataclass_model = None
                logger.debug(f"Updated mapping: Pydantic {source_model.__name__} -> Django {django_model.__name__}")
                return
            if source_type == "dataclass" and relationship.dataclass_model == source_model:
                relationship.django_model = django_model
                # Ensure pydantic_model is None
                relationship.pydantic_model = None
                logger.debug(f"Updated mapping: Dataclass {source_model.__name__} -> Django {django_model.__name__}")
                return
            if relationship.django_model == django_model:
                # Map the source model based on its type
                if source_type == "pydantic":
                    relationship.pydantic_model = cast(type[BaseModel], source_model)
                    relationship.dataclass_model = None
                    logger.debug(
                        f"Updated mapping: Pydantic {source_model.__name__} -> Django {django_model.__name__} (found via Django model)"
                    )
                elif source_type == "dataclass":
                    relationship.dataclass_model = cast(type, source_model)
                    relationship.pydantic_model = None
                    logger.debug(
                        f"Updated mapping: Dataclass {source_model.__name__} -> Django {django_model.__name__} (found via Django model)"
                    )
                return

        # If no existing relationship found, create a new one
        logger.debug(
            f"Creating new mapping: {source_type.capitalize()} {source_model.__name__} -> Django {django_model.__name__}"
        )
        if source_type == "pydantic":
            self.available_relationships.append(
                RelationshipMapper(pydantic_model=cast(type[BaseModel], source_model), django_model=django_model)
            )
        elif source_type == "dataclass":
            self.available_relationships.append(
                RelationshipMapper(dataclass_model=cast(type, source_model), django_model=django_model)
            )

    def is_source_model_known(self, model: type) -> bool:
        """Check if a specific source model (Pydantic or Dataclass) is known."""
        is_pydantic = isinstance(model, type) and issubclass(model, BaseModel)
        is_dataclass = dataclasses.is_dataclass(model)
        for r in self.available_relationships:
            if is_pydantic and r.pydantic_model == model:
                return True
            if is_dataclass and r.dataclass_model == model:
                return True
        return False

    # Add a method to lookup source type by name
    def get_source_model_by_name(self, model_name: str) -> Optional[type]:
        """Find a known source model (Pydantic or Dataclass) by its class name."""
        for r in self.available_relationships:
            if r.pydantic_model and r.pydantic_model.__name__ == model_name:
                return r.pydantic_model
            if r.dataclass_model and r.dataclass_model.__name__ == model_name:
                return r.dataclass_model
        return None
```
available_django_models property¶

Get a list of all Django models in the relationship accessor.

available_source_models property¶

Get a list of all source models (Pydantic or Dataclass).
add_dataclass_model(model)¶

Add a Dataclass model to the relationship accessor.
add_django_model(model)¶

Add a Django model to the relationship accessor.
add_pydantic_model(model)¶

Add a Pydantic model to the relationship accessor.
from_dict(relationship_mapping_dict) classmethod¶

Convert a dictionary of strings representing model qualified names to a RelationshipConversionAccessor.

The dictionary should be of the form: `{"pydantic_model_qualified_name": "django_model_qualified_name", ...}`.
get_django_model_for_dataclass(dataclass_model)¶

Find the corresponding Django model for a given Dataclass model.
get_django_model_for_pydantic(pydantic_model)¶

Find the corresponding Django model for a given Pydantic model. Returns None if no matching Django model is found.
get_pydantic_model_for_django(django_model)¶

Find the corresponding Pydantic model for a given Django model. Returns None if no matching Pydantic model is found.
get_source_model_by_name(model_name)¶

Find a known source model (Pydantic or Dataclass) by its class name.
is_source_model_known(model)¶

Check if a specific source model (Pydantic or Dataclass) is known.
map_relationship(source_model, django_model)¶

Create or update a mapping between a source model (Pydantic/Dataclass) and a Django model.
to_dict()¶

Convert the relationships to a dictionary of strings representing model qualified names for bidirectional conversion. Can be stored in a JSON field and used to reconstruct the relationships.
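Continuing the sketch above, the mapping dictionary is JSON-serializable. Note one asymmetry visible in the source: `to_dict` emits Django names as `app_label.ClassName`, while `from_dict` re-imports both sides with `importlib` as `module.ClassName`, so persisted Django entries must be resolvable as module paths for the round trip to succeed.

```python
import json

stored = json.dumps(accessor.to_dict())
# e.g. '{"__main__.Author": "library.DjangoAuthor"}'
restored = RelationshipConversionAccessor.from_dict(json.loads(stored))
# Entries whose module path cannot be imported are skipped with a warning.
```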
- RelationshipMapper: Bidirectional mapper between source models (Pydantic/Dataclass) and Django models.
Source code in
src/pydantic2django/core/relationships.py
```python
@dataclass
class RelationshipMapper:
    """
    Bidirectional mapper between source models (Pydantic/Dataclass) and Django models.
    """

    # Allow storing either source type
    pydantic_model: Optional[type[BaseModel]] = None
    dataclass_model: Optional[type] = None
    django_model: Optional[type[models.Model]] = None
    context: Optional[ModelContext] = None  # Keep context if needed later

    @property
    def source_model(self) -> Optional[type]:
        """Return the source model (either Pydantic or Dataclass)."""
        return self.pydantic_model or self.dataclass_model
```
source_model property¶

Return the source model (either Pydantic or Dataclass).
- Typing utilities
- TypeHandler: Helpers for naming type annotations, collecting their required imports, and unwrapping Optional/List/Annotated/Literal wrappers.
Source code in
src/pydantic2django/core/typing.py
```python
class TypeHandler:
    PATTERNS = {
        "angle_bracket_class": re.compile(r"<class '([^']+)'>"),
    }

    @staticmethod
    def _add_import(imports: dict[str, list[str]], module: str, name: str):
        """Safely add an import to the dictionary."""
        if not module or module == "builtins":
            return
        # Avoid adding the module itself if name matches module (e.g., import datetime)
        # if name == module.split('.')[-1]:
        #     name = module  # This logic might be too simplistic, revert for now
        current_names = imports.setdefault(module, [])
        if name not in current_names:
            current_names.append(name)

    @staticmethod
    def _merge_imports(dict1: dict, dict2: dict) -> dict:
        """Merge two import dictionaries."""
        merged = dict1.copy()
        for module, names in dict2.items():
            current_names = merged.setdefault(module, [])
            for name in names:
                if name not in current_names:
                    current_names.append(name)
        # Sort names within each module for consistency
        for module in merged:
            merged[module].sort()
        return merged

    @staticmethod
    def get_class_name(type_obj: Any) -> str:
        """Extract a simple, usable class name from a type object."""
        origin = get_origin(type_obj)
        args = get_args(type_obj)

        # Check for Optional[T] specifically first (Union[T, NoneType])
        if origin in (Union, UnionType) and len(args) == 2 and type(None) in args:
            return "Optional"

        if origin:  # Now check for other origins
            if origin in (Union, UnionType):  # Handles Union[A, B, ...]
                return "Union"
            if origin is list:
                return "List"  # Use capital L consistently
            if origin is dict:
                return "Dict"  # Use capital D consistently
            if origin is tuple:
                return "Tuple"  # Use capital T consistently
            if origin is set:
                return "Set"  # Use capital S consistently
            if origin is Callable:
                return "Callable"
            if origin is type:
                return "Type"
            # Fallback for other generic types
            return getattr(origin, "__name__", str(origin))

        # Handle non-generic types
        if hasattr(type_obj, "__name__"):
            return type_obj.__name__

        type_str = str(type_obj)
        match = TypeHandler.PATTERNS["angle_bracket_class"].match(type_str)
        if match:
            return match.group(1).split(".")[-1]

        return str(type_obj)

    @staticmethod
    def get_required_imports(type_obj: Any) -> dict[str, list[str]]:
        """Determine necessary imports by traversing a type object."""
        imports: dict[str, list[str]] = {}
        processed_types = set()

        # Define modules for known Pydantic types that might need explicit import
        pydantic_module_map = {
            "EmailStr": "pydantic",
            "IPvAnyAddress": "pydantic",
            "Json": "pydantic",
            "BaseModel": "pydantic",
            # Add others if needed (e.g., SecretStr, UrlStr)
        }

        def _traverse(current_type: Any):
            nonlocal imports
            try:
                type_repr = repr(current_type)
                if type_repr in processed_types:
                    return
                processed_types.add(type_repr)
            except TypeError:
                # Handle unhashable types if necessary, e.g., log a warning
                pass

            origin = get_origin(current_type)
            args = get_args(current_type)

            if origin:  # Handle Generic Alias (List, Dict, Union, Optional, Callable, Type)
                origin_module = getattr(origin, "__module__", "")
                origin_name = getattr(origin, "__name__", "")

                # Determine the canonical name used in 'typing' imports (e.g., List, Dict, Callable)
                typing_name = None
                if origin is list:
                    typing_name = "List"
                elif origin is dict:
                    typing_name = "Dict"
                elif origin is tuple:
                    typing_name = "Tuple"
                elif origin is set:
                    typing_name = "Set"
                elif origin in (Union, UnionType):  # Handle types.UnionType for Python 3.10+
                    # We don't need to add Union or Optional imports anymore with | syntax
                    typing_name = None
                elif origin is type:
                    typing_name = "Type"
                # Check both typing.Callable and collections.abc.Callable
                elif origin_module == "typing" and origin_name == "Callable":
                    typing_name = "Callable"
                elif origin_module == "collections.abc" and origin_name == "Callable":
                    typing_name = "Callable"
                # Add more specific checks if needed (e.g., Sequence, Mapping)

                # Add import if we identified a standard typing construct
                if typing_name:
                    TypeHandler._add_import(imports, "typing", typing_name)

                # Traverse arguments regardless of origin's module
                for arg in args:
                    if arg is not type(None):  # Skip NoneType in Optional/Union
                        if isinstance(arg, TypeVar):
                            # Handle TypeVar by traversing its constraints/bound
                            constraints = getattr(arg, "__constraints__", ())
                            bound = getattr(arg, "__bound__", None)
                            if bound:
                                _traverse(bound)
                            for constraint in constraints:
                                _traverse(constraint)
                        else:
                            _traverse(arg)  # Recursively traverse arguments

            # Handle Base Types or Classes (int, str, MyClass, etc.)
            elif isinstance(current_type, type):
                module_name = getattr(current_type, "__module__", "")
                type_name = getattr(current_type, "__name__", "")

                if not type_name or module_name == "builtins":
                    pass  # Skip builtins or types without names
                elif module_name == "typing" and type_name not in ("NoneType", "Generic"):
                    # Catch Any, etc. used directly
                    TypeHandler._add_import(imports, "typing", type_name)
                # Check for dataclasses and Pydantic models specifically
                elif is_dataclass(current_type) or (
                    inspect.isclass(current_type) and issubclass(current_type, BaseModel)
                ):
                    actual_module = inspect.getmodule(current_type)
                    if actual_module and actual_module.__name__ != "__main__":
                        TypeHandler._add_import(imports, actual_module.__name__, type_name)
                    # Add specific imports if needed (e.g., dataclasses.dataclass, pydantic.BaseModel)
                    if is_dataclass(current_type):
                        TypeHandler._add_import(imports, "dataclasses", "dataclass")
                    # No need to add BaseModel here usually, handled by pydantic_module_map or direct usage
                elif module_name:
                    # Handle known standard library modules explicitly
                    known_stdlib = {"datetime", "decimal", "uuid", "pathlib"}
                    if module_name in known_stdlib:
                        TypeHandler._add_import(imports, module_name, type_name)
                    # Handle known Pydantic types explicitly (redundant with BaseModel check?)
                    elif type_name in pydantic_module_map:
                        TypeHandler._add_import(imports, pydantic_module_map[type_name], type_name)
                    # Assume other types defined in modules need importing
                    elif module_name != "__main__":  # Avoid importing from main script context
                        TypeHandler._add_import(imports, module_name, type_name)

            elif current_type is Any:
                TypeHandler._add_import(imports, "typing", "Any")
            elif isinstance(current_type, TypeVar):
                # Handle TypeVar used directly
                constraints = getattr(current_type, "__constraints__", ())
                bound = getattr(current_type, "__bound__", None)
                if bound:
                    _traverse(bound)
                for c in constraints:
                    _traverse(c)
            # Consider adding ForwardRef handling if needed:
            # elif isinstance(current_type, typing.ForwardRef):
            #     # Potentially add logic to resolve/import forward refs
            #     pass

        _traverse(type_obj)

        # Clean up imports (unique, sorted)
        final_imports = {}
        for module, names in imports.items():
            unique_names = sorted(set(names))
            if unique_names:
                final_imports[module] = unique_names
        return final_imports

    @staticmethod
    def process_field_type(field_type: Any) -> dict[str, Any]:
        """Process a field type to get name, flags, imports, and contained dataclasses."""
        logger.debug(f"[TypeHandler] Processing type: {field_type!r}")
        is_optional = False
        is_list = False
        metadata: tuple[Any, ...] | None = None  # Initialize metadata with type hint
        imports = set()
        contained_dataclasses = set()
        current_type = field_type  # Keep track of the potentially unwrapped type

        # Helper function (remains the same)
        def _is_potential_dataclass(t: Any) -> bool:
            return inspect.isclass(t) and is_dataclass(t)

        def _find_contained_dataclasses(current_type: Any):
            origin = get_origin(current_type)
            args = get_args(current_type)
            if origin:
                for arg in args:
                    if arg is not type(None):
                        _find_contained_dataclasses(arg)
            elif _is_potential_dataclass(current_type):
                contained_dataclasses.add(current_type)

        _find_contained_dataclasses(field_type)
        if contained_dataclasses:
            logger.debug(f"  Found potential contained dataclasses: {[dc.__name__ for dc in contained_dataclasses]}")

        # --- Simplification Loop ---
        # Repeatedly unwrap until we hit a base type or Any
        processed = True
        while processed:
            processed = False
            origin = get_origin(current_type)
            args = get_args(current_type)

            # 0. Unwrap Annotated[T, ...]
            # Check if the origin exists and has the name 'Annotated'
            # This check is more robust than `origin is Annotated` across Python versions
            if origin is Annotated:
                if args:
                    core_type = args[0]
                    metadata = args[1:]
                    current_type = core_type
                    logger.debug(f"  Unwrapped Annotated, current type: {current_type!r}, metadata: {metadata!r}")
                    processed = True
                    continue  # Restart loop with unwrapped type
                else:
                    logger.warning("  Found Annotated without arguments? Treating as Any.")
                    current_type = Any
                    processed = True
                    continue

            # 1. Unwrap Optional[T] (Union[T, NoneType])
            if origin in (Union, UnionType) and type(None) in args:
                is_optional = True  # Flag it
                # Rebuild the Union without NoneType
                non_none_args = tuple(arg for arg in args if arg is not type(None))
                if len(non_none_args) == 1:
                    current_type = non_none_args[0]  # Simplify Union[T, None] to T
                elif len(non_none_args) > 1:
                    # Use UnionType to rebuild
                    current_type = reduce(lambda x, y: x | y, non_none_args)
                else:  # pragma: no cover
                    # Should not happen if NoneType was in args
                    current_type = Any
                logger.debug(f"  Unwrapped Union with None, current type: {current_type!r}")
                processed = True
                continue  # Restart loop with the non-optional type

            # 2. Unwrap List[T] or Sequence[T]
            if origin in (list, Sequence):
                is_list = True  # Flag it
                if args:
                    current_type = args[0]
                    logger.debug(f"  Unwrapped List/Sequence, current element type: {current_type!r}")
                else:
                    current_type = Any  # List without args -> List[Any]
                    logger.debug("  Unwrapped List/Sequence without args, assuming Any")
                processed = True
                continue  # Restart loop with unwrapped element type

            # 3. Unwrap Literal[...]
            if origin is Literal:
                # Keep the Literal origin, but simplify args if possible?
                # No, the mapper needs the original Literal to extract choices.
                # Just log and break the loop for Literal.
                logger.debug("  Hit Literal origin, stopping simplification loop.")
                break  # Stop simplification here, keep Literal type

        # --- Post-Loop Handling ---
        # At this point, current_type should be the base type (int, str, datetime, Any, etc.)
        # or a complex type we don't simplify further (like a raw Union or a specific class)
        base_type_obj = current_type

        # --- FIX: If the original type was a list, ensure base_type_obj reflects the *List* --- #
        # The simplification loop above sets current_type to the *inner* type of the list.
        # We need the actual List type for the mapper logic.
        if is_list:
            # Determine the simplified inner type from the end of the loop
            simplified_inner_type = base_type_obj
            # Check if the original type involved Optional wrapping the list
            # A simple check: was is_optional also flagged?
            if is_optional:
                # Reconstruct Optional[List[SimplifiedInner]]
                reconstructed_type = list[simplified_inner_type] | None
                logger.debug(
                    f"  Original was Optional[List-like]. Reconstructing List[...] | None "
                    f"around simplified inner type {simplified_inner_type!r} -> {reconstructed_type!r}"
                )
            else:
                # Reconstruct List[SimplifiedInner]
                reconstructed_type = list[simplified_inner_type]
                logger.debug(
                    f"  Original was List-like (non-optional). Reconstructing List[...] "
                    f"around simplified inner type {simplified_inner_type!r} -> {reconstructed_type!r}"
                )
            # Check against original type structure (might be more robust but complex?)
            # original_origin = get_origin(field_type)
            # if original_origin is Optional and get_origin(get_args(field_type)[0]) in (list, Sequence):
            #     # Handle Optional[List[...]] structure
            # elif original_origin in (list, Sequence):
            #     # Handle List[...] structure
            # else:
            #     # Handle complex cases like Annotated[Optional[List[...]]]
            base_type_obj = reconstructed_type
        # --- End FIX --- #

        # Add check for Callable simplification
        origin = get_origin(base_type_obj)
        if origin is Callable or (
            hasattr(base_type_obj, "__module__")
            and base_type_obj.__module__ == "collections.abc"
            and base_type_obj.__name__ == "Callable"
        ):
            logger.debug(
                f"  Final type is complex Callable {base_type_obj!r}, simplifying base object to Callable origin."
            )
            base_type_obj = Callable

        # --- Result Assembly ---
        imports = TypeHandler.get_required_imports(field_type)  # Imports based on original
        type_string = TypeHandler.format_type_string(field_type)  # Formatting based on original

        result = {
            "type_str": type_string,
            "type_obj": base_type_obj,  # THIS is the crucial simplified type object
            "is_optional": is_optional,
            "is_list": is_list,
            "imports": imports,
            "contained_dataclasses": contained_dataclasses,
            "metadata": metadata,
        }
        logger.debug(f"[TypeHandler] Processed result: {result!r}")
        return result

    @staticmethod
    def format_type_string(type_obj: Any) -> str:
        """Return a string representation suitable for generated code."""
        # --- Simplified version to break recursion ---
        # Get the raw string representation first
        raw_repr = TypeHandler._get_raw_type_string(type_obj)

        # Basic cleanup for common typing constructs
        base_name = raw_repr.replace("typing.", "")

        # Attempt to refine based on origin/args if needed (optional)
        origin = get_origin(type_obj)
        args = get_args(type_obj)

        if origin in (Union, UnionType) and len(args) == 2 and type(None) in args:
            # Handle Optional[T]
            inner_type_str = TypeHandler.format_type_string(next(arg for arg in args if arg is not type(None)))
            return f"{inner_type_str} | None"
        elif origin in (list, Sequence):
            # Handle List[T] / Sequence[T]
            if args:
                inner_type_str = TypeHandler.format_type_string(args[0])
                return f"List[{inner_type_str}]"  # Prefer List for generated code
            else:
                return "List[Any]"
        elif origin is dict:
            if args and len(args) == 2:
                key_type_str = TypeHandler.format_type_string(args[0])
                value_type_str = TypeHandler.format_type_string(args[1])
                return f"Dict[{key_type_str}, {value_type_str}]"
            else:
                return "dict"
        elif origin is Callable:
            if args:
                # For Callable[[A, B], R], args is ([A, B], R) in Py3.9+
                # For Callable[A, R], args is (A, R)
                # For Callable[[], R], args is ([], R)
                param_part = args[0]
                return_part = args[-1]
                if param_part is ...:
                    param_str = "..."
                elif isinstance(param_part, list):
                    param_types = [TypeHandler.format_type_string(p) for p in param_part]
                    param_str = f'[{", ".join(param_types)}]'
                else:
                    # Single argument
                    param_str = f"[{TypeHandler.format_type_string(param_part)}]"
                return_type_str = TypeHandler.format_type_string(return_part)
                return f"Callable[{param_str}, {return_type_str}]"
            else:
                return "Callable"
        elif origin in (Union, UnionType):
            # Non-optional Union
            inner_types = [TypeHandler.format_type_string(arg) for arg in args]
            return " | ".join(inner_types)
        elif origin is Literal:
            inner_values = [repr(arg) for arg in args]
            return f"Literal[{', '.join(inner_values)}]"
        # Add other origins like Dict, Tuple, Callable if needed

        # Fallback to the cleaned raw representation
        return base_name.replace("collections.abc.", "")

    @staticmethod
    def _get_raw_type_string(type_obj: Any) -> str:
        module = getattr(type_obj, "__module__", "")
        if module == "typing":
            return repr(type_obj).replace("typing.", "")
        # Use name for classes/dataclasses
        if hasattr(type_obj, "__name__") and isinstance(type_obj, type):
            return type_obj.__name__
        # Fallback to str
        return str(type_obj)
```
format_type_string(type_obj) staticmethod ¶

Return a string representation suitable for generated code.
Source code in
src/pydantic2django/core/typing.py
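For illustration, a short sketch of the strings this produces for a few common annotations (outputs follow the branches in the source above):

```python
from pydantic2django.core.typing import TypeHandler

TypeHandler.format_type_string(list[int] | None)  # "List[int] | None"
TypeHandler.format_type_string(dict[str, int])    # "Dict[str, int]"
TypeHandler.format_type_string(int | str)         # "int | str" (non-optional unions keep | syntax)
```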
get_class_name(type_obj) staticmethod ¶

Extract a simple, usable class name from a type object.
Source code in
src/pydantic2django/core/typing.py
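A quick sketch of the names returned for generic and plain types, per the branch order above (Optional is checked before the generic Union case):

```python
from typing import Optional, Union

from pydantic2django.core.typing import TypeHandler

TypeHandler.get_class_name(Optional[int])    # "Optional"
TypeHandler.get_class_name(Union[int, str])  # "Union"
TypeHandler.get_class_name(list[str])        # "List"
TypeHandler.get_class_name(int)              # "int" (plain classes fall back to __name__)
```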
get_required_imports(type_obj) staticmethod ¶

Determine necessary imports by traversing a type object.
Source code in
src/pydantic2django/core/typing.py
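As a sketch, traversing a nested stdlib type collects both the typing construct and the module-level import, while builtins are skipped:

```python
import datetime

from pydantic2django.core.typing import TypeHandler

TypeHandler.get_required_imports(dict[str, datetime.datetime])
# {'typing': ['Dict'], 'datetime': ['datetime']}  (str is a builtin, so it is skipped)
```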
process_field_type(field_type) staticmethod ¶

Process a field type to get name, flags, imports, and contained dataclasses.
Source code in
src/pydantic2django/core/typing.py
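A sketch of the result dict for an optional list, following the unwrap-and-reconstruct logic above:

```python
from pydantic2django.core.typing import TypeHandler

info = TypeHandler.process_field_type(list[int] | None)
assert info["is_optional"] and info["is_list"]
info["type_obj"]  # list[int] | None -- the list is rebuilt around the simplified inner type
info["type_str"]  # "List[int] | None"
```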
- Import aggregation for generated code
Handles import statements for generated Django models and their context classes. Tracks and deduplicates imports from multiple sources while ensuring all necessary dependencies are included.
Source code in
src/pydantic2django/core/imports.py
```python
class ImportHandler:
    """
    Handles import statements for generated Django models and their context classes.

    Tracks and deduplicates imports from multiple sources while ensuring
    all necessary dependencies are included.
    """

    def __init__(self, module_mappings: Optional[dict[str, str]] = None):
        """
        Initialize empty collections for different types of imports.

        Args:
            module_mappings: Optional mapping of modules to remap (e.g. {"__main__": "my_app.models"})
        """
        # Track imports by category
        self.extra_type_imports: set[str] = set()  # For typing and other utility imports
        self.pydantic_imports: set[str] = set()  # For Pydantic model imports
        self.context_class_imports: set[str] = set()  # For context class and field type imports

        # For tracking imported names to avoid duplicates
        self.imported_names: dict[str, str] = {}  # Maps type name to its module

        # For tracking field type dependencies we've already processed
        self.processed_field_types: set[str] = set()

        # Module mappings to remap imports (e.g. "__main__" -> "my_app.models")
        self.module_mappings = module_mappings or {}

        logger.info("ImportHandler initialized")
        if self.module_mappings:
            logger.info(f"Using module mappings: {self.module_mappings}")

    def add_import(self, module: str, name: str):
        """Adds a single import based on module and name strings."""
        if not module or module == "builtins":
            return

        # Apply module mappings
        if module in self.module_mappings:
            module = self.module_mappings[module]

        # Clean name (e.g., remove generics for import statement)
        clean_name = self._clean_generic_type(name)

        # Check if already imported
        if name in self.imported_names:
            # Could verify module matches, but usually name is unique enough
            logger.debug(f"Skipping already imported name: {name} (from module {module})")
            return
        if clean_name != name and clean_name in self.imported_names:
            logger.debug(f"Skipping already imported clean name: {clean_name} (from module {module})")
            return

        # Determine category
        # Simplistic: If module is known Pydantic, Django, or common stdlib -> context
        # Otherwise, if it's 'typing' -> extra_type
        # TODO: Refine categorization if needed (e.g., dedicated django_imports set)
        import_statement = f"from {module} import {clean_name}"
        if module == "typing":
            self.extra_type_imports.add(clean_name)  # Add only name to typing imports set
            logger.debug(f"Adding typing import: {clean_name}")
        # elif module.startswith("django."):  # Add to a dedicated django set if we create one
        #     self.context_class_imports.add(import_statement)
        #     logger.info(f"Adding Django import: {import_statement}")
        else:  # Default to context imports for non-typing
            self.context_class_imports.add(import_statement)
            logger.info(f"Adding context class import: {import_statement}")

        # Mark as imported
        self.imported_names[name] = module
        if clean_name != name:
            self.imported_names[clean_name] = module

    def add_pydantic_model_import(self, model_class: type) -> None:
        """
        Add an import statement for a Pydantic model.

        Args:
            model_class: The Pydantic model class to import
        """
        if not hasattr(model_class, "__module__") or not hasattr(model_class, "__name__"):
            logger.warning(f"Cannot add import for {model_class}: missing __module__ or __name__")
            return

        module_path = model_class.__module__
        model_name = self._clean_generic_type(model_class.__name__)

        # Apply module mappings if needed
        if module_path in self.module_mappings:
            actual_module = self.module_mappings[module_path]
            logger.debug(f"Remapping module import: {module_path} -> {actual_module}")
            module_path = actual_module

        logger.debug(f"Processing Pydantic model import: {model_name} from {module_path}")

        # Skip if already imported
        if model_name in self.imported_names:
            logger.debug(f"Skipping already imported model: {model_name}")
            return

        import_statement = f"from {module_path} import {model_name}"
        logger.info(f"Adding Pydantic import: {import_statement}")
        self.pydantic_imports.add(import_statement)
        self.imported_names[model_name] = module_path

    def add_context_field_type_import(self, field_type: Any) -> None:
        """
        Add an import statement for a context field type with recursive dependency detection.

        Args:
            field_type: The field type to import
        """
        # Skip if we've already processed this field type
        field_type_str = str(field_type)
        if field_type_str in self.processed_field_types:
            logger.debug(f"Skipping already processed field type: {field_type_str}")
            return

        logger.info(f"Processing context field type: {field_type_str}")
        self.processed_field_types.add(field_type_str)

        # Try to add direct import for the field type if it's a class
        self._add_type_import(field_type)

        # Handle nested types in generics, unions, etc.
        self._process_nested_types(field_type)

        # Add typing imports based on the field type string
        self._add_typing_imports(field_type_str)

    def _add_type_import(self, field_type: Any) -> None:
        """
        Add an import for a single type object if it has module and name attributes.

        Args:
            field_type: The type to import
        """
        try:
            if hasattr(field_type, "__module__") and hasattr(field_type, "__name__"):
                type_module = field_type.__module__
                type_name = field_type.__name__

                # Apply module mappings if needed
                if type_module in self.module_mappings:
                    actual_module = self.module_mappings[type_module]
                    logger.debug(f"Remapping module import: {type_module} -> {actual_module}")
                    type_module = actual_module

                logger.debug(f"Examining type: {type_name} from module {type_module}")

                # Skip built-in types and typing module types
                if (
                    type_module.startswith("typing")
                    or type_module == "builtins"
                    or type_name in ["str", "int", "float", "bool", "dict", "list"]
                ):
                    logger.debug(f"Skipping built-in or typing type: {type_name}")
                    return

                # Skip TypeVar definitions to avoid conflicts
                if type_name == "T" or type_name == "TypeVar":
                    logger.debug(f"Skipping TypeVar definition: {type_name} - will be defined locally")
                    return

                # Clean up any parametrized generic types for the import statement
                clean_type_name = self._clean_generic_type(type_name)

                # Use the original type_name (potentially with generics) for the imported_names check
                if type_name in self.imported_names:
                    logger.debug(f"Skipping already imported type: {type_name}")
                    return

                # Add to context class imports *before* marking as imported
                # Use the clean name for the import statement itself
                import_statement = f"from {type_module} import {clean_type_name}"
                logger.info(f"Adding context class import: {import_statement}")
                self.context_class_imports.add(import_statement)

                # Add the original type name to imported_names to prevent re-processing
                self.imported_names[type_name] = type_module
                # Also add the cleaned name in case it's encountered separately
                if clean_type_name != type_name:
                    self.imported_names[clean_type_name] = type_module

        except (AttributeError, TypeError) as e:
            logger.warning(f"Error processing type import for {field_type}: {e}")

    def _process_nested_types(self, field_type: Any) -> None:
        """
        Recursively process nested types in generics, unions, etc.

        Args:
            field_type: The type that might contain nested types
        """
        # Handle __args__ for generic types, unions, etc.
        if hasattr(field_type, "__args__"):
            logger.debug(f"Processing nested types for {field_type}")
            for arg_type in field_type.__args__:
                logger.debug(f"Found nested type argument: {arg_type}")
                # Recursively process each argument type
                self.add_context_field_type_import(arg_type)

        # Handle __origin__ for generic types (like List, Dict, etc.)
        if hasattr(field_type, "__origin__"):
            logger.debug(f"Processing origin type for {field_type}: {field_type.__origin__}")
            self.add_context_field_type_import(field_type.__origin__)

    def _add_typing_imports(self, field_type_str: str) -> None:
        """
        Add required typing imports based on the string representation of the field type.

        Args:
            field_type_str: String representation of the field type
        """
        # Check for common typing constructs
        if "List[" in field_type_str or "list[" in field_type_str:
            logger.debug(f"Adding List import from {field_type_str}")
            self.extra_type_imports.add("List")

        if "Dict[" in field_type_str or "dict[" in field_type_str:
            logger.debug(f"Adding Dict import from {field_type_str}")
            self.extra_type_imports.add("Dict")

        if "Tuple[" in field_type_str or "tuple[" in field_type_str:
            logger.debug(f"Adding Tuple import from {field_type_str}")
            self.extra_type_imports.add("Tuple")

        if "Optional[" in field_type_str or "Union[" in field_type_str or "None" in field_type_str:
            logger.debug(f"Adding Optional import from {field_type_str}")
            self.extra_type_imports.add("Optional")

        if "Union[" in field_type_str:
            logger.debug(f"Adding Union import from {field_type_str}")
            self.extra_type_imports.add("Union")

        if "Callable[" in field_type_str:
            logger.debug(f"Adding Callable import from {field_type_str}")
            self.extra_type_imports.add("Callable")

        if "Any" in field_type_str:
            logger.debug(f"Adding Any import from {field_type_str}")
            self.extra_type_imports.add("Any")

        # Extract custom types from the field type string
        self._extract_custom_types_from_string(field_type_str)

    def _extract_custom_types_from_string(self, field_type_str: str) -> None:
        """
        Extract custom type names from a string representation of a field type.

        Args:
            field_type_str: String representation of the field type
        """
        # Extract potential type names from the string
        # This regex looks for capitalized words that might be type names
        type_names = re.findall(r"[A-Z][a-zA-Z0-9]*", field_type_str)
        logger.debug(f"Extracted potential type names from string {field_type_str}: {type_names}")

        for type_name in type_names:
            # Skip common type names that are already handled
            if type_name in ["List", "Dict", "Optional", "Union", "Tuple", "Callable", "Any"]:
                logger.debug(f"Skipping common typing name: {type_name}")
                continue

            # Skip if already in imported names
            if type_name in self.imported_names:
                logger.debug(f"Skipping already imported name: {type_name}")
                continue

            # Log potential custom type
            logger.info(f"Adding potential custom type to extra_type_imports: {type_name}")
            # Add to extra type imports - these are types that we couldn't resolve to a module
            # They'll need to be imported elsewhere or we might generate an error
            self.extra_type_imports.add(type_name)

    def get_required_imports(self, field_type_str: str) -> dict[str, list[str]]:
        """
        Get typing and custom type imports required for a field type.

        Args:
            field_type_str: String representation of a field type

        Returns:
            Dictionary with "typing" and "custom" import lists
        """
        logger.debug(f"Getting required imports for: {field_type_str}")
        self._add_typing_imports(field_type_str)

        # Get custom types (non-typing types)
        custom_types = [
            name
            for name in self.extra_type_imports
            if name not in ["List", "Dict", "Tuple", "Set", "Optional", "Union", "Any", "Callable"]
        ]
        logger.debug(f"Found custom types: {custom_types}")

        # Return the latest state of imports
        return {
            "typing": list(self.extra_type_imports),
            "custom": custom_types,
        }

    def deduplicate_imports(self) -> dict[str, set[str]]:
        """
        De-duplicate imports between Pydantic models and context field types.

        Returns:
            Dict with de-duplicated import sets
        """
        logger.info("Deduplicating imports")
        logger.debug(f"Current pydantic imports: {self.pydantic_imports}")
        logger.debug(f"Current context imports: {self.context_class_imports}")

        # Extract class names and modules from import statements
        pydantic_classes = {}
        context_classes = {}

        # Handle special case for TypeVar imports
        typevars = set()

        for import_stmt in self.pydantic_imports:
            if import_stmt.startswith("from ") and " import " in import_stmt:
                module, classes = import_stmt.split(" import ")
                module = module.replace("from ", "")

                # Skip __main__ and rewrite to real module paths if possible
                if module == "__main__":
                    logger.warning(f"Skipping __main__ import: {import_stmt} - these won't work when imported")
                    continue

                for cls in classes.split(", "):
                    # Check if it's a TypeVar to handle duplicate definitions
                    if cls == "T" or cls == "TypeVar":
                        typevars.add(cls)
                        continue

                    # Clean up any parameterized generic types in class names
                    cls = self._clean_generic_type(cls)
                    pydantic_classes[cls] = module

        for import_stmt in self.context_class_imports:
            if import_stmt.startswith("from ") and " import " in import_stmt:
                module, classes = import_stmt.split(" import ")
                module = module.replace("from ", "")

                # Skip __main__ imports or rewrite to real module paths if possible
                if module == "__main__":
                    logger.warning(f"Skipping __main__ import: {import_stmt} - these won't work when imported")
                    continue

                for cls in classes.split(", "):
                    # Check if it's a TypeVar to handle duplicate definitions
                    if cls == "T" or cls == "TypeVar":
                        typevars.add(cls)
                        continue

                    # Clean up any parameterized generic types in class names
                    cls = self._clean_generic_type(cls)

                    # If this class is already imported in pydantic imports, skip it
                    if cls in pydantic_classes:
                        logger.debug(f"Skipping duplicate context import for {cls}, already in pydantic imports")
                        continue

                    context_classes[cls] = module

        # Rebuild import statements
        module_to_classes = {}
        for cls, module in pydantic_classes.items():
            if module not in module_to_classes:
                module_to_classes[module] = []
            module_to_classes[module].append(cls)

        deduplicated_pydantic_imports = set()
        for module, classes in module_to_classes.items():
            deduplicated_pydantic_imports.add(f"from {module} import {', '.join(sorted(classes))}")

        # Same for context imports
        module_to_classes = {}
        for cls, module in context_classes.items():
            if module not in module_to_classes:
                module_to_classes[module] = []
            module_to_classes[module].append(cls)

        deduplicated_context_imports = set()
        for module, classes in module_to_classes.items():
            deduplicated_context_imports.add(f"from {module} import {', '.join(sorted(classes))}")

        logger.info(f"Final pydantic imports: {deduplicated_pydantic_imports}")
        logger.info(f"Final context imports: {deduplicated_context_imports}")

        # Log any TypeVar names we're skipping
        if typevars:
            logger.info(f"Skipping TypeVar imports: {typevars} - these will be defined locally")

        return {"pydantic": deduplicated_pydantic_imports, "context": deduplicated_context_imports}

    def _clean_generic_type(self, name: str) -> str:
        """
        Clean generic parameters from a type name.

        Args:
            name: The type name to clean

        Returns:
            The cleaned type name without generic parameters
        """
        if "[" in name or "<" in name:
            cleaned = re.sub(r"\[.*\]", "", name)
            logger.debug(f"Cleaned generic type {name} to {cleaned}")
            return cleaned
        return name
```
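A minimal usage sketch, assuming the handler is driven by hand rather than by the generator base; my_app.models is a hypothetical remap target:

```python
from pydantic2django.core.imports import ImportHandler

handler = ImportHandler(module_mappings={"__main__": "my_app.models"})

handler.add_import("typing", "Any")               # typing names land in extra_type_imports
handler.add_import("collections", "OrderedDict")  # everything else becomes a context import

handler.extra_type_imports     # {'Any'}
handler.context_class_imports  # {'from collections import OrderedDict'}
```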
__init__(module_mappings=None) ¶

Initialize empty collections for different types of imports.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| module_mappings | Optional[dict[str, str]] | Optional mapping of modules to remap (e.g. {"__main__": "my_app.models"}) | None |
Source code in
src/pydantic2django/core/imports.py
add_context_field_type_import(field_type) ¶

Add an import statement for a context field type with recursive dependency detection.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| field_type | Any | The field type to import | required |

Source code in
src/pydantic2django/core/imports.py
add_import(module, name) ¶

Adds a single import based on module and name strings.
Source code in
src/pydantic2django/core/imports.py
add_pydantic_model_import(model_class) ¶

Add an import statement for a Pydantic model.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model_class | type | The Pydantic model class to import | required |

Source code in
src/pydantic2django/core/imports.py
deduplicate_imports() ¶

De-duplicate imports between Pydantic models and context field types.
Returns:

| Type | Description |
| --- | --- |
| dict[str, set[str]] | Dict with de-duplicated import sets |
Source code in
src/pydantic2django/core/imports.py
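A sketch of the round trip, assuming the snippet runs as a script so the source model's module is __main__ and the remap to a hypothetical my_app.models applies:

```python
from pydantic import BaseModel

from pydantic2django.core.imports import ImportHandler

class User(BaseModel):
    name: str

handler = ImportHandler(module_mappings={"__main__": "my_app.models"})
handler.add_pydantic_model_import(User)
handler.add_import("decimal", "Decimal")

handler.deduplicate_imports()
# {'pydantic': {'from my_app.models import User'},
#  'context': {'from decimal import Decimal'}}
```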
get_required_imports(field_type_str) ¶

Get typing and custom type imports required for a field type.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| field_type_str | str | String representation of a field type | required |

Returns:

| Type | Description |
| --- | --- |
| dict[str, list[str]] | Dictionary with "typing" and "custom" import lists |
Source code in
src/pydantic2django/core/imports.py
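Note that this string-based variant also accumulates unrecognized capitalized names as "custom" types; a sketch:

```python
from pydantic2django.core.imports import ImportHandler

ImportHandler().get_required_imports("Optional[List[Decimal]]")
# The "typing" bucket gains 'Optional' and 'List'; 'Decimal' lands in "custom"
# because its module cannot be resolved from the string alone.
```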
- Context handling for non-serializable fields
Bases: Generic[SourceModelType]
Base class for model context classes. Stores context information for a Django model's fields that require special handling during conversion back to the source object (Pydantic/Dataclass).
Source code in
src/pydantic2django/core/context.py
```python
@dataclass
class ModelContext(Generic[SourceModelType]):  # Make ModelContext generic
    """
    Base class for model context classes.

    Stores context information for a Django model's fields that require
    special handling during conversion back to the source object
    (Pydantic/Dataclass).
    """

    django_model: type[models.Model]
    source_class: type[SourceModelType]  # Changed from pydantic_class
    context_fields: dict[str, FieldContext] = field(default_factory=dict)
    context_data: dict[str, Any] = field(default_factory=dict)

    @property
    def required_context_keys(self) -> set[str]:
        required_fields = {fc.field_name for fc in self.context_fields.values() if not fc.is_optional}
        return required_fields

    def add_field(
        self,
        field_name: str,
        field_type_str: str,
        is_optional: bool = False,
        is_list: bool = False,
        **kwargs: Any,
    ) -> None:
        """
        Add a field to the context storage.

        Args:
            field_name: Name of the field.
            field_type_str: String representation of the field's type.
            is_optional: Whether the field is optional.
            is_list: Whether the field is a list.
            **kwargs: Additional metadata for the field.
        """
        # Pass is_optional, is_list explicitly
        field_context = FieldContext(
            field_name=field_name,
            field_type_str=field_type_str,
            is_optional=is_optional,
            is_list=is_list,
            additional_metadata=kwargs,
        )
        self.context_fields[field_name] = field_context

    def validate_context(self, context: dict[str, Any]) -> None:
        """
        Validate that all required context fields are present.

        Args:
            context: The context dictionary to validate

        Raises:
            ValueError: If required context fields are missing
        """
        missing_fields = self.required_context_keys - set(context.keys())
        if missing_fields:
            raise ValueError(f"Missing required context fields: {', '.join(missing_fields)}")

    def get_field_type_str(self, field_name: str) -> Optional[str]:
        """Get the string representation type of a context field."""
        field_context = self.context_fields.get(field_name)
        return field_context.field_type_str if field_context else None

    def get_field_by_name(self, field_name: str) -> Optional[FieldContext]:
        """
        Get a field context by name.

        Args:
            field_name: Name of the field to find

        Returns:
            The FieldContext if found, None otherwise
        """
        return self.context_fields.get(field_name)

    def to_conversion_dict(self) -> dict[str, Any]:
        """Convert context to a dictionary format suitable for conversion back to source object."""
        # Renamed from to_dict to be more generic
        return {
            field_name: field_context.value
            for field_name, field_context in self.context_fields.items()
            if field_context.value is not None
        }

    def set_value(self, field_name: str, value: Any) -> None:
        """
        Set the value for a context field.

        Args:
            field_name: Name of the field
            value: Value to set

        Raises:
            ValueError: If the field doesn't exist in the context
        """
        field = self.get_field_by_name(field_name)
        if field is None:
            raise ValueError(f"Field {field_name} not found in context")
        field.value = value

    def get_value(self, field_name: str) -> Optional[Any]:
        """
        Get the value of a context field.

        Args:
            field_name: Name of the field

        Returns:
            The field value if it exists and has been set, None otherwise
        """
        field = self.get_field_by_name(field_name)
        if field is not None:
            return field.value
        return None

    def get_required_imports(self) -> dict[str, set[str]]:  # Return sets for auto-dedup
        """
        Get all required imports for the context class fields using TypeHandler.
        """
        imports: dict[str, set[str]] = {"typing": set(), "custom": set()}

        # Process each field
        for _, field_context in self.context_fields.items():
            # Use TypeHandler with the stored type string
            type_imports = TypeHandler.get_required_imports(field_context.field_type_str)

            # Add to our overall imports
            imports["typing"].update(type_imports.get("typing", []))
            imports["custom"].update(type_imports.get("datetime", []))  # Example specific types
            imports["custom"].update(type_imports.get("decimal", []))
            imports["custom"].update(type_imports.get("uuid", []))
            # Add any other known modules TypeHandler might return

            # Add Optional/List based on flags
            if field_context.is_optional:
                imports["typing"].add("Optional")
            if field_context.is_list:
                imports["typing"].add("List")

        # Add base source model import
        source_module = getattr(self.source_class, "__module__", None)
        source_name = getattr(self.source_class, "__name__", None)
        if source_module and source_name and source_module != "builtins":
            imports["custom"].add(f"from {source_module} import {source_name}")

        # Add BaseModel or dataclass import
        if isinstance(self.source_class, type) and issubclass(self.source_class, BaseModel):
            imports["custom"].add("from pydantic import BaseModel")
        elif dataclasses.is_dataclass(self.source_class):
            imports["custom"].add("from dataclasses import dataclass")

        # Add Any import if needed
        if any("Any" in fc.field_type_str for fc in self.context_fields.values()):
            imports["typing"].add("Any")

        return imports

    @classmethod
    def generate_context_class_code(cls, model_context: "ModelContext", jinja_env: Any | None = None) -> str:
        """
        Generate a string representation of the context class.

        Args:
            model_context: The ModelContext to generate a class for
            jinja_env: Optional Jinja2 environment to use for rendering

        Returns:
            String representation of the context class
        """
        # Create a ContextClassGenerator and use it to generate the class
        generator = ContextClassGenerator(jinja_env=jinja_env)
        return generator.generate_context_class(model_context)
```
add_field(field_name, field_type_str, is_optional=False, is_list=False, **kwargs)¶

Add a field to the context storage.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `field_name` | `str` | Name of the field. | *required* |
| `field_type_str` | `str` | String representation of the field's type. | *required* |
| `is_optional` | `bool` | Whether the field is optional. | `False` |
| `is_list` | `bool` | Whether the field is a list. | `False` |
| `**kwargs` | `Any` | Additional metadata for the field. | `{}` |
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
generate_context_class_code(model_context, jinja_env=None) (classmethod)¶

Generate a string representation of the context class.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_context` | `ModelContext` | The ModelContext to generate a class for | *required* |
| `jinja_env` | `Any \| None` | Optional Jinja2 environment to use for rendering | `None` |

Returns:

| Type | Description |
| --- | --- |
| `str` | String representation of the context class |
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
get_field_by_name(field_name)¶

Get a field context by name.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `field_name` | `str` | Name of the field to find | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Optional[FieldContext]` | The FieldContext if found, None otherwise |
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
get_field_type_str(field_name)¶

Get the string representation of a context field's type.
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
get_required_imports()¶

Get all required imports for the context class fields using TypeHandler.
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
get_value(field_name)¶

Get the value of a context field.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `field_name` | `str` | Name of the field | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Optional[Any]` | The field value if it exists and has been set, None otherwise |
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
set_value(field_name, value)¶

Set the value for a context field.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `field_name` | `str` | Name of the field | *required* |
| `value` | `Any` | Value to set | *required* |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the field doesn't exist in the context |
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
to_conversion_dict()¶

Convert the context to a dictionary format suitable for conversion back to the source object.
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
validate_context(context)¶

Validate that all required context fields are present.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `context` | `dict[str, Any]` | The context dictionary to validate | *required* |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If required context fields are missing |
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
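Putting these methods together, a typical round trip looks like the sketch below. The model classes and the `client` object are hypothetical stand-ins, not part of the library; only the `ModelContext` calls are taken from the API above.

```python
# Hypothetical usage sketch of ModelContext; DjangoChatSession, ChatSession,
# and client are illustrative stand-ins for a generated Django model, its
# Pydantic source model, and a non-persistable runtime object.
from pydantic2django.core.context import ModelContext

context = ModelContext(django_model=DjangoChatSession, source_class=ChatSession)

# Declare a field that cannot be stored in the database directly.
context.add_field("llm_client", field_type_str="OpenAI", is_optional=False)

# At conversion time the caller supplies concrete values...
context.validate_context({"llm_client": client})  # raises ValueError on missing keys
context.set_value("llm_client", client)

# ...and the stored values are handed back to the source-object constructor.
kwargs = context.to_conversion_dict()  # {"llm_client": client}
```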
- `ContextClassGenerator`: Utility class for generating context class code from `ModelContext` objects.
Source code in `src/pydantic2django/core/context.py`

```python
class ContextClassGenerator:
    """
    Utility class for generating context class code from ModelContext objects.
    """

    def __init__(self, jinja_env: Any | None = None) -> None:
        """
        Initialize the ContextClassGenerator.

        Args:
            jinja_env: Optional Jinja2 environment to use for template rendering.
        """
        self.jinja_env = jinja_env
        # Initialize imports needed for the context class generation
        self.imports: dict[str, set[str]] = {"typing": set(), "custom": set()}

    def _load_template(self, template_name: str) -> Any:
        """Load Jinja2 template."""
        if self.jinja_env:
            return self.jinja_env.get_template(template_name)
        else:
            # Fallback to basic string formatting if Jinja2 is not available
            # Note: This is a simplified fallback and might not handle complex templates
            # Load template content from file or define as string here
            # Example using basic string formatting:
            # template_content = "... {model_name} ... {field_definitions} ..."
            # return template_content
            raise ImportError("Jinja2 environment not provided for template loading.")

    def _simplify_type_string(self, type_str: str) -> str:
        """
        Simplifies complex type strings for cleaner code generation.
        Removes module paths like 'typing.' or full paths for common types.
        """
        # Basic simplification: remove typing module path
        simplified = type_str.replace("typing.", "")

        # Use TypeHandler to potentially clean further if needed
        # simplified = TypeHandler.clean_type_string(simplified)

        # Regex to remove full paths for nested standard types like list, dict, etc.
        # Define common standard types that might appear with full paths
        standard_types = ["list", "dict", "tuple", "set", "Optional", "Union"]

        def replacer_class(match):
            full_path = match.group(0)
            # Extract the class name after the last dot
            class_name = full_path.split(".")[-1]
            # Check if the extracted class name is a standard type we want to simplify
            if class_name in standard_types:
                # If it is, return just the class name
                return class_name
            else:
                # Otherwise, keep the full path (or handle custom imports)
                # For now, keeping full path for non-standard types
                # self._maybe_add_type_to_imports(full_path)  # Add import for custom type
                return full_path

        # Pattern to find qualified names (e.g., some.module.ClassName)
        # This needs careful crafting to avoid unintended replacements
        # Example: r'\b([a-zA-Z_][\w\.]*\.)?([A-Z][a-zA-Z0-9_]*)\b' might be too broad
        # Focusing on paths likely coming from TypeHandler.get_required_imports might be safer
        # For now, rely on basic replace and potential TypeHandler cleaning
        return simplified

    def generate_context_class(self, model_context: ModelContext) -> str:
        """
        Generates the Python code string for a context dataclass.
        """
        template = self._load_template("context_class.py.j2")
        self.imports = model_context.get_required_imports()  # Get imports first

        field_definitions = []
        for field_name, field_context in model_context.context_fields.items():
            field_type_str = field_context.field_type_str  # field_type is now the string representation

            # Use TypeHandler._get_raw_type_string to get the clean, unquoted type string
            # --- Corrected import path for TypeHandler ---
            from .typing import TypeHandler

            clean_type = TypeHandler._get_raw_type_string(field_type_str)

            # Simplify the type string for display
            simplified_type = self._simplify_type_string(clean_type)

            # Add necessary imports based on the simplified type
            # (Assuming _simplify_type_string and get_required_imports handle this)

            # Format default value if present
            default_repr = repr(field_context.value) if field_context.value is not None else "None"
            field_def = f"    {field_name}: {simplified_type} = field(default={default_repr})"
            field_definitions.append(field_def)

        # Prepare imports for the template
        typing_imports_str = ", ".join(sorted(self.imports["typing"]))
        custom_imports_list = sorted(self.imports["custom"])  # Keep as list of strings

        model_name = self._clean_generic_type(model_context.django_model.__name__)
        source_class_name = self._clean_generic_type(model_context.source_class.__name__)

        return template.render(
            model_name=model_name,
            # Use source_class_name instead of pydantic_class
            source_class_name=source_class_name,
            source_module=model_context.source_class.__module__,
            field_definitions="\n".join(field_definitions),
            typing_imports=typing_imports_str,
            custom_imports=custom_imports_list,
        )

    def _clean_generic_type(self, name: str) -> str:
        """Remove generic parameters like [T] from class names."""
        return name.split("[")[0]
```
`__init__(jinja_env=None)`¶

Initialize the ContextClassGenerator.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `jinja_env` | `Any \| None` | Optional Jinja2 environment to use for template rendering. | `None` |
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
generate_context_class(model_context)¶

Generates the Python code string for a context dataclass.
Source code in `src/pydantic2django/core/context.py` (shown in the class listing above).
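When a Jinja2 environment is supplied, the class source is rendered from the `context_class.py.j2` template. A minimal sketch, continuing the `context` example above; the template directory path here is an assumption for illustration, not a documented API:

```python
import jinja2

# Point a Jinja2 environment at the package's template directory (path assumed);
# generate_context_class loads "context_class.py.j2" from whatever environment is given.
env = jinja2.Environment(loader=jinja2.FileSystemLoader("src/pydantic2django/django/templates"))

# Render the @dataclass source for the collected context fields.
code = ModelContext.generate_context_class_code(context, jinja_env=env)
print(code)
```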
- Serialization helpers
- `serialize_value`: Serialize a value using the most appropriate method.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `value` | `Any` | The value to serialize | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Any` | The serialized value |
Source code in `src/pydantic2django/core/serialization.py`

```python
def serialize_value(value: Any) -> Any:
    """
    Serialize a value using the most appropriate method.

    Args:
        value: The value to serialize

    Returns:
        The serialized value
    """
    method, serializer = get_serialization_method(value)
    if method == SerializationMethod.NONE:
        # If no serialization method is found, return the value as is
        return value
    if serializer is None:
        return value
    try:
        return serializer()
    except Exception:
        # If serialization fails, return the string representation
        return str(value)
```
- `get_serialization_method`: Get the appropriate serialization method for an object.

This function checks for various serialization methods in order of preference:

1. `model_dump` (Pydantic)
2. `to_json`
3. `to_dict`
4. `__str__` (if overridden)
5. `__dict__`
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `obj` | `Any` | The object to check for serialization methods | *required* |

Returns:

| Type | Description |
| --- | --- |
| `tuple[SerializationMethod, Optional[Callable[[], Any]]]` | A tuple of (SerializationMethod, Optional[Callable]). The callable is None if no method is found. |
Source code in `src/pydantic2django/core/serialization.py`

```python
def get_serialization_method(
    obj: Any,
) -> tuple[SerializationMethod, Optional[Callable[[], Any]]]:
    """
    Get the appropriate serialization method for an object.

    This function checks for various serialization methods in order of preference:
    1. model_dump (Pydantic)
    2. to_json
    3. to_dict
    4. __str__ (if overridden)
    5. __dict__

    Args:
        obj: The object to check for serialization methods

    Returns:
        A tuple of (SerializationMethod, Optional[Callable]).
        The callable is None if no method is found.
    """
    # Check for Pydantic model_dump
    if isinstance(obj, BaseModel):
        return SerializationMethod.MODEL_DUMP, obj.model_dump

    # Check for to_json method
    if hasattr(obj, "to_json"):
        return SerializationMethod.TO_JSON, obj.to_json

    # Check for to_dict method
    if hasattr(obj, "to_dict"):
        return SerializationMethod.TO_DICT, obj.to_dict

    # Check for overridden __str__ method
    if hasattr(obj, "__str__") and obj.__class__.__str__ is not object.__str__:
        return SerializationMethod.STR, obj.__str__

    # Check for __dict__ attribute
    if hasattr(obj, "__dict__"):

        def dict_serializer():
            return {
                "__class__": obj.__class__.__name__,
                "__module__": obj.__class__.__module__,
                "data": obj.__dict__,
            }

        return SerializationMethod.DICT, dict_serializer

    return SerializationMethod.NONE, None
```
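A small, self-contained sketch of how the two helpers behave on different objects (the example classes are illustrative):

```python
from pydantic import BaseModel

from pydantic2django.core.serialization import get_serialization_method, serialize_value


class Point(BaseModel):
    x: int
    y: int


class Legacy:
    def to_dict(self):
        return {"kind": "legacy"}


method, _ = get_serialization_method(Point(x=1, y=2))
print(method)                            # SerializationMethod.MODEL_DUMP
print(serialize_value(Point(x=1, y=2)))  # {'x': 1, 'y': 2}
print(serialize_value(Legacy()))         # {'kind': 'legacy'} (falls back to to_dict)
```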
What all implementations have in common¶
Each implementation integrates the same core concepts (a minimal wiring sketch follows this list):

- A `Discovery` that lists eligible source models and computes a safe registration order.
- A `ModelFactory` and `FieldFactory` pair that build Django fields using the shared `BidirectionalTypeMapper`.
- A `Generator` that subclasses `BaseStaticGenerator`, wires up discovery/factories, and prepares template context.
- Use of `RelationshipConversionAccessor` so relationship fields (FK/O2O/M2M) can be resolved across generated models.
- Shared `ImportHandler`, `TypeHandler`, and `ModelContext` mechanisms.
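A minimal sketch of that shared flow, using the Pydantic implementation documented below. The constructor call and package names are assumptions; only the three method calls appear in this document:

```python
from pydantic2django.pydantic.discovery import PydanticDiscovery

discovery = PydanticDiscovery()  # assumed: the inherited __init__ needs no arguments

# 1. Find eligible Pydantic models in the target packages.
discovery.discover_models(packages=["myapp.schemas"], app_label="myapp")

# 2. Build the dependency graph between the filtered models.
discovery.analyze_dependencies()

# 3. Hand models to the factories in an order that satisfies FK/M2M dependencies.
for model in discovery.get_models_in_registration_order():
    print(model.__name__)
```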
Implementations¶
Pydantic¶
- Discovery, factory, and generator:
- `PydanticDiscovery` (Bases: `BaseDiscovery[type[BaseModel]]`): Discovers Pydantic models within specified packages.
Source code in `src/pydantic2django/pydantic/discovery.py`

```python
class PydanticDiscovery(BaseDiscovery[type[BaseModel]]):
    """Discovers Pydantic models within specified packages."""

    # __init__ is inherited and sufficient

    def _is_target_model(self, obj: Any) -> bool:
        """Check if an object is a Pydantic BaseModel, excluding the base itself."""
        return inspect.isclass(obj) and issubclass(obj, BaseModel) and obj is not BaseModel

    def _default_eligibility_filter(self, model: type[BaseModel]) -> bool:
        """Check default eligibility: not abstract and not inheriting directly from ABC."""
        # Skip models that directly inherit from ABC
        if abc.ABC in model.__bases__:
            logger.debug(f"Filtering out {model.__name__} (inherits directly from ABC)")
            return False

        # Skip models that are marked as abstract
        if getattr(model, "__abstract__", False):
            logger.debug(f"Filtering out {model.__name__} (marked as __abstract__)")
            return False

        # Example for potentially filtering Pydantic internal models (uncomment if needed)
        # if model.__module__.startswith('pydantic._internal'):
        #     logger.debug(f"Filtering out internal Pydantic model: {model.__name__}")
        #     return False

        return True  # Eligible by default

    def discover_models(
        self,
        packages: list[str],
        app_label: str,
        user_filters: Optional[
            Union[Callable[[type[BaseModel]], bool], list[Callable[[type[BaseModel]], bool]]]
        ] = None,
    ):
        """Discover Pydantic models in the specified packages, applying filters."""
        # Pass user_filters directly to the base class method
        super().discover_models(packages, app_label, user_filters=user_filters)

    # --- analyze_dependencies and get_models_in_registration_order remain ---

    def analyze_dependencies(self) -> None:
        """Build the dependency graph for the filtered Pydantic models."""
        logger.info("Analyzing dependencies between filtered Pydantic models...")
        self.dependencies: dict[type[BaseModel], set[type[BaseModel]]] = {}

        filtered_model_qualnames = set(self.filtered_models.keys())

        def _find_and_add_dependency(model_type: type[BaseModel], potential_dep_type: Any):
            if not self._is_target_model(potential_dep_type):
                return
            dep_qualname = f"{potential_dep_type.__module__}.{potential_dep_type.__name__}"
            if dep_qualname in filtered_model_qualnames and potential_dep_type is not model_type:
                dep_model_obj = self.filtered_models.get(dep_qualname)
                if dep_model_obj:
                    if model_type in self.dependencies:
                        self.dependencies[model_type].add(dep_model_obj)
                    else:
                        # Initialize if missing (shouldn't happen often with new base discover_models)
                        logger.warning(
                            f"Model {model_type.__name__} wasn't pre-initialized in dependencies dict during analysis. Initializing now."
                        )
                        self.dependencies[model_type] = {dep_model_obj}
                else:
                    logger.warning(
                        f"Inconsistency: Dependency '{dep_qualname}' for model '{model_type.__name__}' found by name but not object in filtered set."
                    )

        # Initialize keys based on filtered models (important step)
        for model_type in self.filtered_models.values():
            self.dependencies[model_type] = set()

        # Analyze fields
        for model_type in self.filtered_models.values():
            for field in model_type.model_fields.values():
                annotation = field.annotation
                if annotation is None:
                    continue

                origin = get_origin(annotation)
                args = get_args(annotation)

                if origin is Union and type(None) in args and len(args) == 2:
                    annotation = next(arg for arg in args if arg is not type(None))
                    origin = get_origin(annotation)
                    args = get_args(annotation)

                _find_and_add_dependency(model_type, annotation)

                if origin in (list, dict, set, tuple):
                    for arg in args:
                        arg_origin = get_origin(arg)
                        arg_args = get_args(arg)
                        if arg_origin is Union and type(None) in arg_args and len(arg_args) == 2:
                            nested_model_type = next(t for t in arg_args if t is not type(None))
                            _find_and_add_dependency(model_type, nested_model_type)
                        else:
                            _find_and_add_dependency(model_type, arg)

        logger.info("Dependency analysis complete.")
        # Debug logging moved inside BaseDiscovery

    def get_models_in_registration_order(self) -> list[type[BaseModel]]:
        """
        Return models sorted topologically based on dependencies.
        Models with no dependencies come first.
        """
        if not self.dependencies:
            logger.warning("No dependencies found or analyzed, returning Pydantic models in arbitrary order.")
            return list(self.filtered_models.values())

        sorted_models = []
        visited: set[type[BaseModel]] = set()
        visiting: set[type[BaseModel]] = set()
        filtered_model_objects = set(self.filtered_models.values())

        def visit(model: type[BaseModel]):
            if model in visited:
                return
            if model in visiting:
                logger.error(f"Circular dependency detected involving Pydantic model {model.__name__}")
                # Option: raise TypeError(...)
                return  # Break cycle

            visiting.add(model)
            if model in self.dependencies:
                # Use .get for safety, ensure deps are also in filtered set
                for dep in self.dependencies.get(model, set()):
                    if dep in filtered_model_objects:
                        visit(dep)
            visiting.remove(model)
            visited.add(model)
            sorted_models.append(model)

        all_target_models = list(self.filtered_models.values())
        for model in all_target_models:
            if model not in visited:
                visit(model)

        logger.info(f"Pydantic models sorted for registration: {[m.__name__ for m in sorted_models]}")
        return sorted_models
```
analyze_dependencies()¶

Build the dependency graph for the filtered Pydantic models.
Source code in `src/pydantic2django/pydantic/discovery.py` (shown in the class listing above).
discover_models(packages, app_label, user_filters=None)¶

Discover Pydantic models in the specified packages, applying filters.
Source code in `src/pydantic2django/pydantic/discovery.py` (shown in the class listing above).
get_models_in_registration_order()¶

Return models sorted topologically based on dependencies; models with no dependencies come first.
Source code in `src/pydantic2django/pydantic/discovery.py` (shown in the class listing above).
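`user_filters` accepts a single predicate or a list of predicates; each receives a model class and returns whether to keep it. An illustrative filter, reusing the `discovery` instance from the sketch above:

```python
from pydantic import BaseModel


def exclude_private(model: type[BaseModel]) -> bool:
    # Keep only models whose class name does not start with an underscore.
    return not model.__name__.startswith("_")


discovery.discover_models(
    packages=["myapp.schemas"],  # hypothetical package
    app_label="myapp",
    user_filters=[exclude_private],
)
```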
- `PydanticModelFactory` (Bases: `BaseModelFactory[type[BaseModel], FieldInfo]`): Creates Django models from Pydantic models.
Source code in `src/pydantic2django/pydantic/factory.py`

```python
class PydanticModelFactory(BaseModelFactory[type[BaseModel], FieldInfo]):
    """Creates Django models from Pydantic models."""

    # Cache specific to Pydantic models
    _converted_models: dict[str, ConversionCarrier[type[BaseModel]]] = {}

    relationship_accessor: RelationshipConversionAccessor
    # No need for field_factory instance here if Base class handles it

    def __init__(self, field_factory: PydanticFieldFactory, relationship_accessor: RelationshipConversionAccessor):
        """Initialize with field factory and relationship accessor."""
        self.relationship_accessor = relationship_accessor
        # Pass the field_factory up to the base class
        super().__init__(field_factory=field_factory)

    # Overrides the base method to add caching and relationship mapping
    def make_django_model(self, carrier: ConversionCarrier[type[BaseModel]]) -> None:
        """Creates a Django model from Pydantic, checking cache first and mapping relationships."""
        model_key = carrier.model_key()
        logger.debug(f"PydanticFactory: Attempting to create Django model for {model_key}")

        # --- Check Cache --- #
        if model_key in self._converted_models and not carrier.existing_model:
            logger.debug(f"PydanticFactory: Using cached conversion result for {model_key}")
            cached_carrier = self._converted_models[model_key]
            # Update the passed-in carrier with cached results
            carrier.__dict__.update(cached_carrier.__dict__)
            # Ensure used_related_names is properly updated (dict update might not merge sets correctly)
            for target, names in cached_carrier.used_related_names_per_target.items():
                carrier.used_related_names_per_target.setdefault(target, set()).update(names)
            return

        # --- Call Base Implementation for Core Logic --- #
        # This calls _process_source_fields, _assemble_django_model_class etc.
        super().make_django_model(carrier)

        # --- Register Relationship Mapping (if successful) --- #
        if carrier.source_model and carrier.django_model:
            logger.debug(
                f"PydanticFactory: Registering mapping for {carrier.source_model.__name__} -> {carrier.django_model.__name__}"
            )
            self.relationship_accessor.map_relationship(
                source_model=carrier.source_model, django_model=carrier.django_model
            )

        # --- Cache Result --- #
        if carrier.django_model and not carrier.existing_model:
            logger.debug(f"PydanticFactory: Caching conversion result for {model_key}")
            # Store a copy to prevent modification issues? Simple assignment for now.
            self._converted_models[model_key] = carrier
        elif not carrier.django_model:
            logger.error(
                f"PydanticFactory: Failed to create Django model for {model_key}. Invalid fields: {carrier.invalid_fields}"
            )

    # --- Implementation of Abstract Methods --- #

    def _process_source_fields(self, carrier: ConversionCarrier[type[BaseModel]]):
        """Iterate through Pydantic fields and convert them using the field factory."""
        source_model = carrier.source_model
        model_name = source_model.__name__

        for field_name_original, field_info in get_model_fields(source_model).items():
            field_name = field_info.alias or field_name_original

            # Skip 'id' field if updating an existing model definition
            # Note: _handle_id_field in field factory handles primary key logic
            if field_name.lower() == "id" and carrier.existing_model:
                logger.debug(f"Skipping 'id' field for existing model update: {carrier.existing_model.__name__}")
                continue

            # Cast needed because BaseFactory uses generic TFieldInfo
            field_factory_typed = cast(PydanticFieldFactory, self.field_factory)
            conversion_result = field_factory_typed.create_field(
                field_info=field_info, model_name=model_name, carrier=carrier
            )

            # Store results in the carrier
            if conversion_result.django_field:
                # Store definition string first
                if conversion_result.field_definition_str:
                    carrier.django_field_definitions[field_name] = conversion_result.field_definition_str
                else:
                    logger.warning(f"Missing field definition string for successfully created field '{field_name}'")
                # Store the field instance itself
                if isinstance(
                    conversion_result.django_field, (models.ForeignKey, models.ManyToManyField, models.OneToOneField)
                ):
                    carrier.relationship_fields[field_name] = conversion_result.django_field
                else:
                    carrier.django_fields[field_name] = conversion_result.django_field
            elif conversion_result.context_field:
                carrier.context_fields[field_name] = conversion_result.context_field
            elif conversion_result.error_str:
                carrier.invalid_fields.append((field_name, conversion_result.error_str))
            else:
                # Should not happen if FieldConversionResult is used correctly
                error = f"Field factory returned unexpected empty result for {model_name}.{field_name_original}"
                logger.error(error)
                carrier.invalid_fields.append((field_name, error))

    def _build_pydantic_model_context(self, carrier: ConversionCarrier[type[BaseModel]]):
        """Builds the ModelContext specifically for Pydantic source models."""
        # Renamed to match base class expectation
        self._build_model_context(carrier)

    # Actual implementation of the abstract method
    def _build_model_context(self, carrier: ConversionCarrier[type[BaseModel]]):
        """Builds the ModelContext specifically for Pydantic source models."""
        if not carrier.source_model or not carrier.django_model:
            logger.debug("Skipping context build: missing source or django model.")
            return

        try:
            model_context = ModelContext(  # Removed generic type hint for base class compatibility
                django_model=carrier.django_model,
                source_class=carrier.source_model,
            )
            for field_name, field_info in carrier.context_fields.items():
                if isinstance(field_info, FieldInfo) and field_info.annotation is not None:
                    optional = is_pydantic_model_field_optional(field_info.annotation)
                    # Use repr() for field_type_str as expected by ModelContext.add_field
                    field_type_str = repr(field_info.annotation)
                    model_context.add_field(
                        field_name=field_name,
                        field_type_str=field_type_str,  # Pass string representation
                        is_optional=optional,
                        annotation=field_info.annotation,  # Keep annotation if needed elsewhere
                    )
                elif isinstance(field_info, FieldInfo):
                    logger.warning(f"Context field '{field_name}' has no annotation, cannot add to ModelContext.")
                else:
                    logger.warning(
                        f"Context field '{field_name}' is not a FieldInfo ({type(field_info)}), cannot add to ModelContext."
                    )
            carrier.model_context = model_context
            logger.debug(f"Successfully built ModelContext for {carrier.model_key()}")  # Call model_key()
        except Exception as e:
            logger.error(f"Failed to build ModelContext for {carrier.model_key()}: {e}", exc_info=True)
            carrier.model_context = None
```
`__init__(field_factory, relationship_accessor)`¶

Initialize with field factory and relationship accessor.
Source code in `src/pydantic2django/pydantic/factory.py` (shown in the class listing above).
make_django_model(carrier)¶

Creates a Django model from Pydantic, checking the cache first and mapping relationships.
Source code in `src/pydantic2django/pydantic/factory.py` (shown in the class listing above).
- `PydanticFieldFactory` (Bases: `BaseFieldFactory[FieldInfo]`): Creates Django fields from Pydantic fields (`FieldInfo`).
Source code in `src/pydantic2django/pydantic/factory.py`

```python
class PydanticFieldFactory(BaseFieldFactory[FieldInfo]):
    """Creates Django fields from Pydantic fields (FieldInfo)."""

    # Dependencies injected
    relationship_accessor: RelationshipConversionAccessor
    bidirectional_mapper: BidirectionalTypeMapper

    def __init__(
        self, relationship_accessor: RelationshipConversionAccessor, bidirectional_mapper: BidirectionalTypeMapper
    ):
        """Initializes with dependencies."""
        self.relationship_accessor = relationship_accessor
        self.bidirectional_mapper = bidirectional_mapper
        # No super().__init__() needed

    def create_field(
        self, field_info: FieldInfo, model_name: str, carrier: ConversionCarrier[type[BaseModel]]
    ) -> FieldConversionResult:
        """
        Convert a Pydantic FieldInfo to a Django field instance.
        Implements the abstract method from BaseFieldFactory.
        Uses BidirectionalTypeMapper and local instantiation.
        """
        # Use alias first, then the actual key from model_fields as name
        field_name = field_info.alias or next(
            (k for k, v in carrier.source_model.model_fields.items() if v is field_info), "<unknown>"
        )

        # Initialize result with the source field info and determined name
        result = FieldConversionResult(field_info=field_info, field_name=field_name)

        try:
            # Handle potential 'id' field conflict
            if id_field := self._handle_id_field(field_name, field_info):
                result.django_field = id_field
                # Need to capture kwargs for serialization if possible
                # For now, assume default kwargs for ID fields
                # TODO: Extract actual kwargs used in _handle_id_field
                result.field_kwargs = {"primary_key": True}
                if isinstance(id_field, models.CharField):
                    result.field_kwargs["max_length"] = getattr(id_field, "max_length", 255)
                elif isinstance(id_field, models.UUIDField):
                    pass  # No extra kwargs needed typically
                else:  # AutoField
                    pass  # No extra kwargs needed typically
                result.field_definition_str = self._generate_field_def_string(result, carrier.meta_app_label)
                return result  # ID field handled, return early

            # Get field type from annotation
            field_type = field_info.annotation
            if field_type is None:
                logger.warning(f"Field '{model_name}.{field_name}' has no annotation, treating as context field.")
                result.context_field = field_info
                return result

            # --- Use BidirectionalTypeMapper --- #
            try:
                django_field_class, constructor_kwargs = self.bidirectional_mapper.get_django_mapping(
                    python_type=field_type, field_info=field_info
                )
            except MappingError as e:
                # Handle errors specifically from the mapper (e.g., missing relationship)
                logger.error(f"Mapping error for '{model_name}.{field_name}' (type: {field_type}): {e}")
                result.error_str = str(e)
                result.context_field = field_info  # Treat as context on mapping error
                return result
            except Exception as e:
                # Handle unexpected errors during mapping lookup
                logger.error(
                    f"Unexpected error getting Django mapping for '{model_name}.{field_name}': {e}", exc_info=True
                )
                result.error_str = f"Unexpected mapping error: {e}"
                result.context_field = field_info
                return result

            # Store raw kwargs before modifications/checks
            result.raw_mapper_kwargs = constructor_kwargs.copy()

            # --- Check for Multi-FK Union Signal --- #
            union_details = constructor_kwargs.pop("_union_details", None)
            if union_details and isinstance(union_details, dict):
                logger.info(f"Detected multi-FK union signal for '{field_name}'. Deferring field generation.")
                # Store the original field name and the details for the generator
                carrier.pending_multi_fk_unions.append((field_name, union_details))
                # Store remaining kwargs (null, blank for placeholder) in raw_kwargs if needed? Already done.
                # Do not set django_field or field_definition_str
                return result  # Return early, deferring generation

            # --- Handle Relationships Specifically (Adjust Kwargs) --- #
            # Check if it's a relationship type *after* getting mapping AND checking for union signal
            is_relationship = issubclass(
                django_field_class, (models.ForeignKey, models.OneToOneField, models.ManyToManyField)
            )
            if is_relationship:
                # Apply specific relationship logic (like related_name uniqueness)
                # The mapper should have set 'to' and basic 'on_delete'
                if "to" not in constructor_kwargs:
                    # This indicates an issue in the mapper or relationship accessor setup
                    result.error_str = f"Mapper failed to determine 'to' for relationship field '{field_name}'."
                    logger.error(result.error_str)
                    result.context_field = field_info
                    return result

                # Sanitize and ensure unique related_name
                # Check Pydantic Field(..., json_schema_extra={"related_name": ...})
                user_related_name = (
                    field_info.json_schema_extra.get("related_name")
                    if isinstance(field_info.json_schema_extra, dict)
                    else None
                )
                target_django_model_str = constructor_kwargs["to"]  # Mapper returns string like app_label.ModelName

                # Try to get the actual target model class to pass to sanitize_related_name if possible
                # This relies on the target model being importable/available
                target_model_cls = None
                target_model_cls_name_only = target_django_model_str  # Default fallback
                try:
                    app_label, model_cls_name = target_django_model_str.split(".")
                    target_model_cls = apps.get_model(app_label, model_cls_name)  # Use apps.get_model
                    target_model_cls_name_only = model_cls_name  # Use name from split
                except Exception:
                    logger.warning(
                        f"Could not get target model class for '{target_django_model_str}' when generating related_name for '{field_name}'. Using model name string."
                    )
                    # Fallback: try splitting by dot just for name, otherwise use whole string
                    target_model_cls_name_only = target_django_model_str.split(".")[-1]

                related_name_base = (
                    user_related_name
                    if user_related_name
                    else f"{carrier.source_model.__name__.lower()}_{field_name}_set"
                )
                final_related_name_base = sanitize_related_name(
                    str(related_name_base),
                    target_model_cls.__name__ if target_model_cls else target_model_cls_name_only,
                    field_name,
                )

                # Ensure uniqueness using carrier's tracker
                target_model_key_for_tracker = (
                    target_model_cls.__name__ if target_model_cls else target_django_model_str
                )
                target_related_names = carrier.used_related_names_per_target.setdefault(
                    target_model_key_for_tracker, set()
                )
                unique_related_name = final_related_name_base
                counter = 1
                while unique_related_name in target_related_names:
                    unique_related_name = f"{final_related_name_base}_{counter}"
                    counter += 1
                target_related_names.add(unique_related_name)
                constructor_kwargs["related_name"] = unique_related_name
                logger.debug(f"[REL] Field '{field_name}': Assigning related_name='{unique_related_name}'")

                # Re-confirm on_delete (mapper should set default based on Optional)
                if (
                    django_field_class in (models.ForeignKey, models.OneToOneField)
                    and "on_delete" not in constructor_kwargs
                ):
                    is_optional = is_pydantic_model_field_optional(field_type)
                    constructor_kwargs["on_delete"] = models.SET_NULL if is_optional else models.CASCADE
                elif django_field_class == models.ManyToManyField:
                    constructor_kwargs.pop("on_delete", None)
                    # M2M doesn't use null=True, mapper handles this
                    constructor_kwargs.pop("null", None)
                    constructor_kwargs["blank"] = constructor_kwargs.get("blank", True)  # M2M usually blank=True

            # --- Perform Instantiation Locally --- #
            try:
                logger.debug(
                    f"Instantiating {django_field_class.__name__} for '{field_name}' with kwargs: {constructor_kwargs}"
                )
                result.django_field = django_field_class(**constructor_kwargs)
                result.field_kwargs = constructor_kwargs  # Store final kwargs
            except Exception as e:
                error_msg = f"Failed to instantiate Django field '{field_name}' (type: {django_field_class.__name__}) with kwargs {constructor_kwargs}: {e}"
                logger.error(error_msg, exc_info=True)
                result.error_str = error_msg
                result.context_field = field_info  # Fallback to context
                return result

            # --- Generate Field Definition String --- #
            result.field_definition_str = self._generate_field_def_string(result, carrier.meta_app_label)

            return result  # Success

        except Exception as e:
            # Catch-all for unexpected errors during conversion
            error_msg = f"Unexpected error converting field '{model_name}.{field_name}': {e}"
            logger.error(error_msg, exc_info=True)
            result.error_str = error_msg
            result.context_field = field_info  # Fallback to context
            return result

    def _generate_field_def_string(self, result: FieldConversionResult, app_label: str) -> str:
        """Generates the field definition string safely."""
        if not result.django_field:
            return "# Field generation failed"
        try:
            if result.field_kwargs:
                return generate_field_definition_string(type(result.django_field), result.field_kwargs, app_label)
            else:
                logger.warning(
                    f"Could not generate definition string for '{result.field_name}': final kwargs not found in result. Using basic serialization."
                )
                return FieldSerializer.serialize_field(result.django_field)
        except Exception as e:
            logger.error(
                f"Failed to generate field definition string for '{result.field_name}': {e}",
                exc_info=True,
            )
            return f"# Error generating definition: {e}"

    def _handle_id_field(self, field_name: str, field_info: FieldInfo) -> Optional[models.Field]:
        """Handle potential ID field naming conflicts (logic moved from original factory)."""
        if field_name.lower() == "id":
            field_type = field_info.annotation
            # Default to AutoField unless explicitly specified by type
            field_class = models.AutoField
            field_kwargs = {"primary_key": True, "verbose_name": "ID"}

            # Use mapper to find appropriate Django PK field if type is specified
            # But only override AutoField if it's clearly not a standard int sequence
            pk_field_class_override = None
            if field_type is UUID:
                pk_field_class_override = models.UUIDField
                field_kwargs.pop("verbose_name")  # UUIDField doesn't need verbose_name='ID'
            elif field_type is str:
                # Default Pydantic str ID to CharField PK
                pk_field_class_override = models.CharField
                field_kwargs["max_length"] = 255  # Default length
            elif field_type is int:
                pass  # Default AutoField is fine
            elif field_type:
                # Check if mapper finds a specific non-auto int field (e.g., BigIntegerField)
                try:
                    mapped_cls, mapped_kwargs = self.bidirectional_mapper.get_django_mapping(field_type, field_info)
                    if issubclass(mapped_cls, models.IntegerField) and not issubclass(mapped_cls, models.AutoField):
                        pk_field_class_override = mapped_cls
                        field_kwargs.update(mapped_kwargs)
                        # Ensure primary_key=True is set
                        field_kwargs["primary_key"] = True
                    elif not issubclass(mapped_cls, models.AutoField):
                        logger.warning(
                            f"Field 'id' has type {field_type} mapping to non-integer {mapped_cls.__name__}. Using AutoField PK."
                        )
                except MappingError:
                    logger.warning(f"Field 'id' has unmappable type {field_type}. Using AutoField PK.")

            if pk_field_class_override:
                field_class = pk_field_class_override
            else:
                # Stick with AutoField, apply title if present
                if field_info.title:
                    field_kwargs["verbose_name"] = field_info.title

            logger.debug(f"Handling field '{field_name}' as primary key using {field_class.__name__}")
            # Instantiate the ID field
            try:
                return field_class(**field_kwargs)
            except Exception as e:
                logger.error(
                    f"Failed to instantiate ID field {field_class.__name__} with kwargs {field_kwargs}: {e}",
                    exc_info=True,
                )
                # Fallback to basic AutoField? Or let error propagate?
                # Let's return None and let the main create_field handle error reporting
                return None
        return None
```
`__init__(relationship_accessor, bidirectional_mapper)`¶

Initializes with dependencies.
Source code in `src/pydantic2django/pydantic/factory.py` (shown in the class listing above).
create_field(field_info, model_name, carrier)¶

Convert a Pydantic FieldInfo to a Django field instance. Implements the abstract method from BaseFieldFactory. Uses BidirectionalTypeMapper and local instantiation.
Source code in
src/pydantic2django/pydantic/factory.py
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229
def create_field(
    self, field_info: FieldInfo, model_name: str, carrier: ConversionCarrier[type[BaseModel]]
) -> FieldConversionResult:
    """
    Convert a Pydantic FieldInfo to a Django field instance.
    Implements the abstract method from BaseFieldFactory.
    Uses BidirectionalTypeMapper and local instantiation.
    """
    # Use alias first, then the actual key from model_fields as name
    field_name = field_info.alias or next(
        (k for k, v in carrier.source_model.model_fields.items() if v is field_info), "<unknown>"
    )

    # Initialize result with the source field info and determined name
    result = FieldConversionResult(field_info=field_info, field_name=field_name)

    try:
        # Handle potential 'id' field conflict
        if id_field := self._handle_id_field(field_name, field_info):
            result.django_field = id_field
            # Need to capture kwargs for serialization if possible
            # For now, assume default kwargs for ID fields
            # TODO: Extract actual kwargs used in _handle_id_field
            result.field_kwargs = {"primary_key": True}
            if isinstance(id_field, models.CharField):
                result.field_kwargs["max_length"] = getattr(id_field, "max_length", 255)
            elif isinstance(id_field, models.UUIDField):
                pass  # No extra kwargs needed typically
            else:  # AutoField
                pass  # No extra kwargs needed typically
            result.field_definition_str = self._generate_field_def_string(result, carrier.meta_app_label)
            return result  # ID field handled, return early

        # Get field type from annotation
        field_type = field_info.annotation
        if field_type is None:
            logger.warning(f"Field '{model_name}.{field_name}' has no annotation, treating as context field.")
            result.context_field = field_info
            return result

        # --- Use BidirectionalTypeMapper --- #
        try:
            django_field_class, constructor_kwargs = self.bidirectional_mapper.get_django_mapping(
                python_type=field_type, field_info=field_info
            )
        except MappingError as e:
            # Handle errors specifically from the mapper (e.g., missing relationship)
            logger.error(f"Mapping error for '{model_name}.{field_name}' (type: {field_type}): {e}")
            result.error_str = str(e)
            result.context_field = field_info  # Treat as context on mapping error
            return result
        except Exception as e:
            # Handle unexpected errors during mapping lookup
            logger.error(
                f"Unexpected error getting Django mapping for '{model_name}.{field_name}': {e}", exc_info=True
            )
            result.error_str = f"Unexpected mapping error: {e}"
            result.context_field = field_info
            return result

        # Store raw kwargs before modifications/checks
        result.raw_mapper_kwargs = constructor_kwargs.copy()

        # --- Check for Multi-FK Union Signal --- #
        union_details = constructor_kwargs.pop("_union_details", None)
        if union_details and isinstance(union_details, dict):
            logger.info(f"Detected multi-FK union signal for '{field_name}'. Deferring field generation.")
            # Store the original field name and the details for the generator
            carrier.pending_multi_fk_unions.append((field_name, union_details))
            # Store remaining kwargs (null, blank for placeholder) in raw_kwargs if needed? Already done.
            # Do not set django_field or field_definition_str
            return result  # Return early, deferring generation

        # --- Handle Relationships Specifically (Adjust Kwargs) --- #
        # Check if it's a relationship type *after* getting mapping AND checking for union signal
        is_relationship = issubclass(
            django_field_class, (models.ForeignKey, models.OneToOneField, models.ManyToManyField)
        )
        if is_relationship:
            # Apply specific relationship logic (like related_name uniqueness)
            # The mapper should have set 'to' and basic 'on_delete'
            if "to" not in constructor_kwargs:
                # This indicates an issue in the mapper or relationship accessor setup
                result.error_str = f"Mapper failed to determine 'to' for relationship field '{field_name}'."
                logger.error(result.error_str)
                result.context_field = field_info
                return result

            # Sanitize and ensure unique related_name
            # Check Pydantic Field(..., json_schema_extra={"related_name": ...})
            user_related_name = (
                field_info.json_schema_extra.get("related_name")
                if isinstance(field_info.json_schema_extra, dict)
                else None
            )
            target_django_model_str = constructor_kwargs["to"]  # Mapper returns string like app_label.ModelName

            # Try to get the actual target model class to pass to sanitize_related_name if possible
            # This relies on the target model being importable/available
            target_model_cls = None
            target_model_cls_name_only = target_django_model_str  # Default fallback
            try:
                app_label, model_cls_name = target_django_model_str.split(".")
                target_model_cls = apps.get_model(app_label, model_cls_name)  # Use apps.get_model
                target_model_cls_name_only = model_cls_name  # Use name from split
            except Exception:
                logger.warning(
                    f"Could not get target model class for '{target_django_model_str}' when generating related_name for '{field_name}'. Using model name string."
                )
                # Fallback: try splitting by dot just for name, otherwise use whole string
                target_model_cls_name_only = target_django_model_str.split(".")[-1]

            related_name_base = (
                user_related_name
                if user_related_name
                else f"{carrier.source_model.__name__.lower()}_{field_name}_set"
            )
            final_related_name_base = sanitize_related_name(
                str(related_name_base),
                target_model_cls.__name__ if target_model_cls else target_model_cls_name_only,
                field_name,
            )

            # Ensure uniqueness using carrier's tracker
            target_model_key_for_tracker = (
                target_model_cls.__name__ if target_model_cls else target_django_model_str
            )
            target_related_names = carrier.used_related_names_per_target.setdefault(
                target_model_key_for_tracker, set()
            )
            unique_related_name = final_related_name_base
            counter = 1
            while unique_related_name in target_related_names:
                unique_related_name = f"{final_related_name_base}_{counter}"
                counter += 1
            target_related_names.add(unique_related_name)
            constructor_kwargs["related_name"] = unique_related_name
            logger.debug(f"[REL] Field '{field_name}': Assigning related_name='{unique_related_name}'")

            # Re-confirm on_delete (mapper should set default based on Optional)
            if (
                django_field_class in (models.ForeignKey, models.OneToOneField)
                and "on_delete" not in constructor_kwargs
            ):
                is_optional = is_pydantic_model_field_optional(field_type)
                constructor_kwargs["on_delete"] = models.SET_NULL if is_optional else models.CASCADE
            elif django_field_class == models.ManyToManyField:
                constructor_kwargs.pop("on_delete", None)
                # M2M doesn't use null=True, mapper handles this
                constructor_kwargs.pop("null", None)
                constructor_kwargs["blank"] = constructor_kwargs.get("blank", True)  # M2M usually blank=True

        # --- Perform Instantiation Locally --- #
        try:
            logger.debug(
                f"Instantiating {django_field_class.__name__} for '{field_name}' with kwargs: {constructor_kwargs}"
            )
            result.django_field = django_field_class(**constructor_kwargs)
            result.field_kwargs = constructor_kwargs  # Store final kwargs
        except Exception as e:
            error_msg = f"Failed to instantiate Django field '{field_name}' (type: {django_field_class.__name__}) with kwargs {constructor_kwargs}: {e}"
            logger.error(error_msg, exc_info=True)
            result.error_str = error_msg
            result.context_field = field_info  # Fallback to context
            return result

        # --- Generate Field Definition String --- #
        result.field_definition_str = self._generate_field_def_string(result, carrier.meta_app_label)

        return result  # Success

    except Exception as e:
        # Catch-all for unexpected errors during conversion
        error_msg = f"Unexpected error converting field '{model_name}.{field_name}': {e}"
        logger.error(error_msg, exc_info=True)
        result.error_str = error_msg
        result.context_field = field_info  # Fallback to context
        return result
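For example, a caller can steer the related_name handling above from the Pydantic side via json_schema_extra. A minimal sketch (model and field names are hypothetical):

from pydantic import BaseModel, Field


class Author(BaseModel):
    name: str


class Book(BaseModel):
    title: str
    # create_field reads json_schema_extra["related_name"] when adjusting
    # relationship kwargs; without it, a default such as "book_author_set"
    # is derived and made unique per target model.
    author: Author = Field(json_schema_extra={"related_name": "books"})

If two fields requested the same related_name on the same target, the uniqueness loop above would suffix the second one (for example, books_1).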
-
Bases:
BaseStaticGenerator[type[BaseModel], FieldInfo]
Generates Django models and their context classes from Pydantic models. Inherits common logic from BaseStaticGenerator.
Source code in
src/pydantic2django/pydantic/generator.py
class StaticPydanticModelGenerator(
    BaseStaticGenerator[type[BaseModel], FieldInfo]
):  # TModel=BaseModel, TFieldInfo=FieldInfo
    """
    Generates Django models and their context classes from Pydantic models.
    Inherits common logic from BaseStaticGenerator.
    """

    def __init__(
        self,
        output_path: str = "generated_models.py",  # Keep original default
        packages: Optional[list[str]] = None,
        app_label: str = "django_app",  # Keep original default
        filter_function: Optional[Callable[[type[BaseModel]], bool]] = None,
        verbose: bool = False,
        discovery_module: Optional[PydanticDiscovery] = None,
        module_mappings: Optional[dict[str, str]] = None,
        base_model_class: type[models.Model] = Pydantic2DjangoBaseClass,
        # Pydantic specific factories can be passed or constructed here
        # NOTE: Injecting factory instances is less preferred now due to mapper dependency
        # field_factory_instance: Optional[PydanticFieldFactory] = None,
        # model_factory_instance: Optional[PydanticModelFactory] = None,
        # Inject mapper instead?
        bidirectional_mapper_instance: Optional[BidirectionalTypeMapper] = None,
    ):
        # 1. Initialize Pydantic-specific discovery
        # Use provided instance or create a default one
        self.pydantic_discovery_instance = discovery_module or PydanticDiscovery()

        # 2. Initialize RelationshipAccessor (needed by factories and mapper)
        self.relationship_accessor = RelationshipConversionAccessor()

        # 3. Initialize BidirectionalTypeMapper (pass relationship accessor)
        self.bidirectional_mapper = bidirectional_mapper_instance or BidirectionalTypeMapper(
            relationship_accessor=self.relationship_accessor
        )

        # 4. Initialize Pydantic-specific factories (pass mapper and accessor)
        # Remove dependency on passed-in factory instances, create them here
        self.pydantic_model_factory = create_pydantic_factory(
            relationship_accessor=self.relationship_accessor, bidirectional_mapper=self.bidirectional_mapper
        )

        # 5. Call the base class __init__ with all required arguments
        super().__init__(
            output_path=output_path,
            packages=packages or ["pydantic_models"],  # Default Pydantic package
            app_label=app_label,
            filter_function=filter_function,
            verbose=verbose,
            discovery_instance=self.pydantic_discovery_instance,  # Pass the specific discovery instance
            model_factory_instance=self.pydantic_model_factory,  # Pass the newly created model factory
            module_mappings=module_mappings,
            base_model_class=base_model_class,
            # Jinja setup is handled by base class
        )

        # 6. Pydantic-specific Jinja setup or context generator
        # Context generator needs the jinja_env from the base class
        self.context_generator = ContextClassGenerator(jinja_env=self.jinja_env)

        # 7. Track context-specific info during generation (reset in generate_models_file)
        self.context_definitions: list[str] = []
        self.model_has_context: dict[str, bool] = {}
        self.context_class_names: list[str] = []
        self.seen_context_classes: set[str] = set()

    # --- Implement Abstract Methods from Base ---

    def _get_source_model_name(self, carrier: ConversionCarrier[type[BaseModel]]) -> str:
        """Get the name of the original Pydantic model."""
        # Ensure source_model is not None before accessing __name__
        return carrier.source_model.__name__ if carrier.source_model else "UnknownPydanticModel"

    def _add_source_model_import(self, carrier: ConversionCarrier[type[BaseModel]]):
        """Add import for the original Pydantic model."""
        if carrier.source_model:
            # Use the correct method from ImportHandler
            self.import_handler.add_pydantic_model_import(carrier.source_model)
        else:
            logger.warning("Cannot add source model import: source_model is missing from carrier.")

    def _get_models_in_processing_order(self) -> list[type[BaseModel]]:
        """Return models in Pydantic dependency order."""
        # Discovery must have run first (called by base generate_models_file -> discover_models)
        # Cast the discovery_instance from the base class to the specific Pydantic type
        discovery = cast(PydanticDiscovery, self.discovery_instance)
        if not discovery.filtered_models:
            logger.warning("No models discovered or passed filter, cannot determine processing order.")
            return []
        # Ensure dependencies are analyzed if not already done (base class should handle this)
        # if not discovery.dependencies:
        #     discovery.analyze_dependencies()  # Base class analyze_dependencies called in discover_models
        return discovery.get_models_in_registration_order()

    def _prepare_template_context(self, unique_model_definitions, django_model_names, imports) -> dict:
        """Prepare the Pydantic-specific context for the main models_file.py.j2 template."""
        # Base context items (model_definitions, django_model_names, imports) are passed in.
        # Add Pydantic-specific items gathered during generate_models_file override.
        base_context = {
            "model_definitions": unique_model_definitions,
            "django_model_names": django_model_names,  # For __all__
            # --- Imports (already structured by base class import_handler) ---
            "django_imports": sorted(imports.get("django", [])),
            "pydantic_imports": sorted(imports.get("pydantic", [])),  # Check if import handler categorizes these
            "general_imports": sorted(imports.get("general", [])),
            "context_imports": sorted(imports.get("context", [])),  # Check if import handler categorizes these
            # It might be simpler to rely on the structured imports dict directly in the template
            "imports": imports,  # Pass the whole structured dict
            # --- Pydantic Specific ---
            "context_definitions": self.context_definitions,  # Populated in generate_models_file override
            "all_models": [  # This seems redundant if django_model_names covers __all__
                f"'{name}'"
                for name in django_model_names  # Use Django names for __all__ consistency?
            ],
            "context_class_names": self.context_class_names,  # Populated in generate_models_file override
            "model_has_context": self.model_has_context,  # Populated in generate_models_file override
            "generation_source_type": "pydantic",  # Flag for template logic
        }
        # Note: Common items like timestamp, base_model info, extra_type_imports
        # are added by the base class generate_models_file method after calling this.
        return base_context

    def _get_model_definition_extra_context(self, carrier: ConversionCarrier[type[BaseModel]]) -> dict:
        """Provide Pydantic-specific context for model_definition.py.j2."""
        context_fields_info = []
        context_class_name = ""
        has_context_for_this_model = False  # Track if this specific model has context

        if carrier.model_context and carrier.model_context.context_fields:
            has_context_for_this_model = True
            django_model_name = (
                self._clean_generic_type(carrier.django_model.__name__) if carrier.django_model else "UnknownModel"
            )
            context_class_name = f"{django_model_name}Context"
            for field_name, field_context_info in carrier.model_context.context_fields.items():
                field_type_attr = getattr(field_context_info, "field_type", None) or getattr(
                    field_context_info, "annotation", None
                )
                if field_type_attr:
                    type_name = TypeHandler.format_type_string(field_type_attr)
                    # Add imports for context field types via import_handler
                    # Use the correct method which handles nested types and typing imports
                    self.import_handler.add_context_field_type_import(field_type_attr)
                    # Remove explicit add_extra_import calls, handled by add_context_field_type_import
                    # if getattr(field_context_info, 'is_optional', False):
                    #     self.import_handler.add_extra_import("Optional", "typing")
                    # if getattr(field_context_info, 'is_list', False):
                    #     self.import_handler.add_extra_import("List", "typing")
                else:
                    type_name = "Any"  # Fallback
                    logger.warning(
                        f"Could not determine context type annotation for field '{field_name}' in {django_model_name}"
                    )
                context_fields_info.append((field_name, type_name))

        return {
            "context_class_name": context_class_name,
            "context_fields": context_fields_info,
            "is_pydantic_source": True,
            "is_dataclass_source": False,
            "has_context": has_context_for_this_model,
            "field_definitions": carrier.django_field_definitions,
        }

    # --- Override generate_models_file to handle Pydantic context class generation ---

    def generate_models_file(self) -> str:
        """
        Generates the complete models.py file content, including Pydantic context classes.
        Overrides the base method to add context class handling during the generation loop.
        """
        # 1. Base discovery and model ordering
        self.discover_models()  # Calls base discovery and dependency analysis
        models_to_process = self._get_models_in_processing_order()  # Uses overridden method

        # 2. Reset state for this run (imports handled by base reset)
        self.carriers = []
        # Manually reset ImportHandler state instead of calling non-existent reset()
        self.import_handler.extra_type_imports.clear()
        self.import_handler.pydantic_imports.clear()
        self.import_handler.context_class_imports.clear()
        self.import_handler.imported_names.clear()
        self.import_handler.processed_field_types.clear()
        # Re-add base model import after clearing
        # Note: add_pydantic_model_import might not be the right method here if base_model_class isn't Pydantic
        # Need a more general import method on ImportHandler or handle it differently.
        # For now, let's assume a general import is needed or handled by template.
        # self.import_handler.add_import(self.base_model_class.__module__, self.base_model_class.__name__)
        # Let's add it back using _add_type_import, although it's protected.
        # A public add_general_import(module, name) on ImportHandler would be better.
        try:
            # This is a workaround - ideally ImportHandler would have a public method
            self.import_handler._add_type_import(self.base_model_class)
        except Exception as e:
            logger.warning(f"Could not add base model import via _add_type_import: {e}")

        # Reset Pydantic-specific tracking lists
        self.context_definitions = []
        self.model_has_context = {}  # Map of Pydantic model name -> bool
        self.context_class_names = []  # For __all__
        self.seen_context_classes = set()  # For deduplication of definitions

        # --- State tracking within the loop ---
        model_definitions = []  # Store generated Django model definition strings
        django_model_names = []  # Store generated Django model names for __all__
        context_only_models = []  # Track Pydantic models yielding only context

        # 3. Setup Django models (populates self.carriers via base method calling factory)
        for source_model in models_to_process:
            self.setup_django_model(source_model)  # Uses base setup_django_model

        # 4. Generate definitions (Django models AND Pydantic Context classes)
        for carrier in self.carriers:
            model_name = self._get_source_model_name(carrier)  # Pydantic model name
            try:
                django_model_def = ""
                has_django_model = False
                django_model_name_cleaned = ""

                # --- A. Generate Django Model Definition (if applicable) ---
                if carrier.django_model:
                    # Check fields using safe getattr for many_to_many
                    has_concrete_fields = any(not f.primary_key for f in carrier.django_model._meta.fields)
                    # Use getattr for safety
                    m2m_fields = getattr(carrier.django_model._meta, "many_to_many", [])
                    has_m2m = bool(m2m_fields)
                    has_fields = bool(carrier.django_model._meta.fields)

                    if has_concrete_fields or has_m2m or (not has_concrete_fields and not has_m2m and has_fields):
                        django_model_def = self.generate_model_definition(carrier)
                        if django_model_def:
                            model_definitions.append(django_model_def)
                            django_model_name_cleaned = self._clean_generic_type(carrier.django_model.__name__)
                            django_model_names.append(f"'{django_model_name_cleaned}'")
                            has_django_model = True
                        else:
                            logger.warning(f"Base generate_model_definition returned empty for {model_name}, skipping.")
                    else:
                        # Model exists but seems empty (no concrete fields/M2M)
                        # Check if it *does* have context fields
                        if carrier.model_context and carrier.model_context.context_fields:
                            context_only_models.append(model_name)
                            logger.info(f"Skipping Django model definition for {model_name} - only has context fields.")
                        else:
                            logger.warning(
                                f"Model {model_name} resulted in an empty Django model with no context fields. Skipping definition."
                            )

                # Continue to next carrier if no Django model AND no context
                if not (carrier.model_context and carrier.model_context.context_fields):
                    continue

                # --- B. Generate Context Class Definition (Pydantic Specific) ---
                has_context = False
                if carrier.model_context and carrier.model_context.context_fields:
                    has_context = True
                    # Generate context class definition string using the context_generator
                    # This also handles adding necessary imports for context fields via TypeHandler/ImportHandler calls within it
                    context_def = self.context_generator.generate_context_class(carrier.model_context)
                    # Determine context class name (needs Django model name)
                    # Use the cleaned name if available, otherwise construct from Pydantic name?
                    base_name_for_context = django_model_name_cleaned if django_model_name_cleaned else model_name
                    context_class_name = f"{base_name_for_context}Context"

                    # Add context class definition if not seen before
                    if context_class_name not in self.seen_context_classes:
                        self.context_definitions.append(context_def)
                        self.context_class_names.append(f"'{context_class_name}'")
                        self.seen_context_classes.add(context_class_name)

                    # Add imports for context fields (should be handled by context_generator now)
                    # self.import_handler.add_context_field_imports(carrier.model_context)  # Example hypothetical method

                # --- C. Update Tracking and Add Source Import ---
                self.model_has_context[model_name] = has_context
                # Add import for the original source model (Pydantic model)
                self._add_source_model_import(carrier)

            except Exception as e:
                logger.error(f"Error processing carrier for source model {model_name}: {e}", exc_info=True)

        # 5. Log Summary
        if context_only_models:
            logger.info(
                f"Skipped Django definitions for {len(context_only_models)} models with only context fields: {', '.join(context_only_models)}"
            )

        # 6. Deduplicate Definitions (Django models only, context defs deduplicated by name during loop)
        unique_model_definitions = self._deduplicate_definitions(model_definitions)  # Use base method

        # 7. Get Imports (handled by base import_handler)
        imports = self.import_handler.deduplicate_imports()

        # 8. Prepare Template Context (using overridden Pydantic-specific method)
        template_context = self._prepare_template_context(unique_model_definitions, django_model_names, imports)

        # 9. Add Common Context Items (handled by base class) - Reuse base class logic
        template_context.update(
            {
                "generation_timestamp": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                "base_model_module": self.base_model_class.__module__,
                "base_model_name": self.base_model_class.__name__,
                "extra_type_imports": sorted(self.import_handler.extra_type_imports),
                # Ensure generation_source_type is set by _prepare_template_context
            }
        )

        # 10. Render the main template
        template = self.jinja_env.get_template("models_file.py.j2")
        return template.render(**template_context)
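A minimal usage sketch, assuming a hypothetical package myapp.schemas containing Pydantic models. The constructor arguments mirror those shown above; since the exact file-writing entry point lives in the base class, the rendered string is written to output_path manually here:

from pydantic2django.pydantic.generator import StaticPydanticModelGenerator

generator = StaticPydanticModelGenerator(
    output_path="myapp/models.py",
    packages=["myapp.schemas"],
    app_label="myapp",
    verbose=True,
)

# generate_models_file() renders models_file.py.j2 and returns the file content.
content = generator.generate_models_file()
with open(generator.output_path, "w") as f:
    f.write(content)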
generate_models_file()¶
Generates the complete models.py file content, including Pydantic context classes. Overrides the base method to add context class handling during the generation loop.
Notes:
- Relies on Pydantic FieldInfo for field metadata and constraints.
- Generates optional per-model context classes when non-serializable fields are detected.
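To make the second note concrete: a field with no Django column equivalent (for example, a callable) is routed into a context class rather than the generated model. A hypothetical sketch:

from collections.abc import Callable

from pydantic import BaseModel


class Job(BaseModel):
    name: str  # storable -> becomes a column on the generated Django model
    callback: Callable[[str], None]  # not storable -> becomes a context field

Generation would emit a DjangoJob model for the storable fields plus a DjangoJobContext class for callback, following the f"{django_model_name}Context" naming scheme used in the generator above.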
Dataclass¶
- Discovery, factory, and generator:
-
Bases:
BaseDiscovery[DataclassType]
Discovers Python dataclasses within specified packages.
Source code in
src/pydantic2django/dataclass/discovery.py
class DataclassDiscovery(BaseDiscovery[DataclassType]):
    """Discovers Python dataclasses within specified packages."""

    def __init__(self):
        super().__init__()
        # Dataclass specific attributes (all_models now in base)

    def _is_target_model(self, obj: Any) -> bool:
        """Check if an object is a dataclass type."""
        return inspect.isclass(obj) and dataclasses.is_dataclass(obj)

    def _default_eligibility_filter(self, model: DataclassType) -> bool:
        """Check default eligibility for dataclasses (e.g., not inheriting directly from ABC)."""
        # Skip models that directly inherit from ABC
        if abc.ABC in model.__bases__:
            logger.debug(f"Filtering out dataclass {model.__name__} (inherits directly from ABC)")
            return False
        # Dataclasses don't have a standard __abstract__ marker like Pydantic
        # Add other default checks if needed for dataclasses
        return True

    # discover_models is now implemented in the BaseDiscovery class
    # It will call the _is_target_model and _default_eligibility_filter defined above.

    # --- analyze_dependencies and get_models_in_registration_order remain ---

    def analyze_dependencies(self) -> None:
        """Build the dependency graph for the filtered dataclasses."""
        logger.info("Analyzing dependencies between filtered dataclasses...")
        self.dependencies: dict[DataclassType, set[DataclassType]] = {}

        filtered_model_qualnames = set(self.filtered_models.keys())

        def _find_and_add_dependency(model_type: DataclassType, potential_dep_type: Any):
            if not self._is_target_model(potential_dep_type):
                return
            dep_qualname = f"{potential_dep_type.__module__}.{potential_dep_type.__name__}"
            if dep_qualname in filtered_model_qualnames and potential_dep_type is not model_type:
                dep_model_obj = self.filtered_models.get(dep_qualname)
                if dep_model_obj:
                    if model_type in self.dependencies:
                        self.dependencies[model_type].add(dep_model_obj)
                    else:
                        logger.warning(
                            f"Model {model_type.__name__} wasn't pre-initialized in dependencies dict during analysis. Initializing now."
                        )
                        self.dependencies[model_type] = {dep_model_obj}
                else:
                    logger.warning(
                        f"Inconsistency: Dependency '{dep_qualname}' for dataclass '{model_type.__name__}' found by name but not object in filtered set."
                    )

        # Initialize keys based on filtered models
        for model_type in self.filtered_models.values():
            self.dependencies[model_type] = set()

        # Analyze fields using dataclasses.fields
        for model_type in self.filtered_models.values():
            assert dataclasses.is_dataclass(model_type), f"Expected {model_type} to be a dataclass"
            for field in dataclasses.fields(model_type):
                annotation = field.type
                if annotation is None:
                    continue

                origin = get_origin(annotation)
                args = get_args(annotation)

                if origin is Union and type(None) in args and len(args) == 2:
                    annotation = next(arg for arg in args if arg is not type(None))
                    origin = get_origin(annotation)
                    args = get_args(annotation)

                _find_and_add_dependency(model_type, annotation)

                if origin in (list, dict, set, tuple):
                    for arg in args:
                        arg_origin = get_origin(arg)
                        arg_args = get_args(arg)
                        if arg_origin is Union and type(None) in arg_args and len(arg_args) == 2:
                            nested_type = next(t for t in arg_args if t is not type(None))
                            _find_and_add_dependency(model_type, nested_type)
                        else:
                            _find_and_add_dependency(model_type, arg)

        logger.info("Dataclass dependency analysis complete.")
        # Debug logging moved inside BaseDiscovery

    def get_models_in_registration_order(self) -> list[DataclassType]:
        """
        Return dataclasses sorted topologically based on dependencies.
        (Largely similar to Pydantic version, uses DataclassType)
        """
        if not self.dependencies:
            logger.warning("No dependencies found or analyzed, returning dataclasses in arbitrary order.")
            return list(self.filtered_models.values())

        sorted_models = []
        visited: set[DataclassType] = set()
        visiting: set[DataclassType] = set()
        filtered_model_objects = set(self.filtered_models.values())

        def visit(model: DataclassType):
            if model in visited:
                return
            if model in visiting:
                logger.error(f"Circular dependency detected involving dataclass {model.__name__}")
                # Option: raise TypeError(...)
                return  # Break cycle

            visiting.add(model)
            if model in self.dependencies:
                # Use .get for safety, ensure deps are also in filtered set
                for dep in self.dependencies.get(model, set()):
                    if dep in filtered_model_objects:
                        visit(dep)
            visiting.remove(model)
            visited.add(model)
            sorted_models.append(model)

        all_target_models = list(self.filtered_models.values())
        for model in all_target_models:
            if model not in visited:
                visit(model)

        logger.info(f"Dataclasses sorted for registration: {[m.__name__ for m in sorted_models]}")
        return sorted_models
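For instance, given the hypothetical dataclasses below, analyze_dependencies records an edge from Order to Customer (both for the direct annotation and after unwrapping Optional), so get_models_in_registration_order returns Customer before Order:

import dataclasses
from typing import Optional


@dataclasses.dataclass
class Customer:
    name: str


@dataclasses.dataclass
class Order:
    customer: Customer  # direct dependency on another discovered dataclass
    tags: list[str]  # builtin element types contribute no edges
    referred_by: Optional[Customer] = None  # Optional[...] is unwrapped before the check

This ordering guarantees that the Django model for Customer exists before the ForeignKey on Order's generated model needs to reference it.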
analyze_dependencies()¶
Build the dependency graph for the filtered dataclasses.
get_models_in_registration_order()¶
Return dataclasses sorted topologically based on dependencies. (Largely similar to the Pydantic version; uses DataclassType.)
-
Bases:
BaseModelFactory[DataclassType, Field]
Dynamically creates Django model classes from dataclasses.
Source code in
src/pydantic2django/dataclass/factory.py
class DataclassModelFactory(BaseModelFactory[DataclassType, dataclasses.Field]):
    """Dynamically creates Django model classes from dataclasses."""

    # Cache specific to Dataclass models
    _converted_models: dict[str, ConversionCarrier[DataclassType]] = {}

    relationship_accessor: RelationshipConversionAccessor  # Changed Optional to required
    import_handler: ImportHandler  # Added import handler

    def __init__(
        self,
        field_factory: DataclassFieldFactory,
        relationship_accessor: RelationshipConversionAccessor,  # Now required
        import_handler: Optional[ImportHandler] = None,  # Accept optionally
    ):
        """Initialize with field factory, relationship accessor, and import handler."""
        self.relationship_accessor = relationship_accessor
        self.import_handler = import_handler or ImportHandler()
        # Call super init
        super().__init__(field_factory)

    def make_django_model(self, carrier: ConversionCarrier[DataclassType]) -> None:
        """
        Orchestrates the Django model creation process.
        Subclasses implement _process_source_fields and _build_model_context.
        Handles caching. Passes import handler down.
        """
        # --- Pass import handler via carrier --- (or could add to factory state)
        # Need to set import handler on carrier if passed during init
        # NOTE: BaseModelFactory.make_django_model does this now.
        # carrier.import_handler = self.import_handler

        super().make_django_model(carrier)

        # Register relationship after successful model creation (moved from original)
        if carrier.source_model and carrier.django_model:
            logger.debug(
                f"Mapping relationship in accessor: {carrier.source_model.__name__} -> {carrier.django_model.__name__}"
            )
            self.relationship_accessor.map_relationship(
                source_model=carrier.source_model, django_model=carrier.django_model
            )

        # Cache result (moved from original)
        model_key = carrier.model_key()
        if carrier.django_model and not carrier.existing_model:
            self._converted_models[model_key] = carrier

    def _process_source_fields(self, carrier: ConversionCarrier[DataclassType]):
        """Iterate through source dataclass fields, resolve types, create Django fields, and store results."""
        source_model = carrier.source_model
        if not source_model:
            logger.error(
                f"Cannot process fields: source model missing in carrier for {getattr(carrier, 'target_model_name', '?')}"
            )  # Safely access target_model_name
            carrier.invalid_fields.append(("_source_model", "Source model missing."))  # Use invalid_fields
            return

        # --- Add check: Ensure source_model is a type ---
        if not isinstance(source_model, type):
            error_msg = f"Cannot process fields: expected source_model to be a type, but got {type(source_model)} ({source_model!r}). Problem likely upstream in model discovery/ordering."
            logger.error(error_msg)
            carrier.invalid_fields.append(("_source_model", error_msg))
            return
        # --- End Add check ---

        # --- Use dataclasses.fields for introspection ---
        try:
            # Resolve type hints first to handle forward references (strings)
            # Need globals and potentially locals from the source model's module
            source_module = sys.modules.get(source_model.__module__)
            globalns = getattr(source_module, "__dict__", None)
            # Revert: Use only globals, assuming types are resolvable in module scope
            # resolved_types = get_type_hints(source_model, globalns=globalns, localns=localns)
            # logger.debug(f"Resolved types for {source_model.__name__} using {globalns=}, {localns=}: {resolved_types}")
            # Use updated call without potentially incorrect locals:
            resolved_types = get_type_hints(source_model, globalns=globalns, localns=None)
            logger.debug(f"Resolved types for {source_model.__name__} using module globals: {resolved_types}")

            dataclass_fields = dataclasses.fields(source_model)
        except (TypeError, NameError) as e:
            # Catch errors during type hint resolution or fields() call
            error_msg = f"Could not introspect fields or resolve types for {source_model.__name__}: {e}"
            logger.error(error_msg, exc_info=True)
            carrier.invalid_fields.append(("_introspection", error_msg))
            return

        # Use field definitions directly from carrier
        # field_definitions: dict[str, str] = {}
        # context_field_definitions: dict[str, str] = {}  # Dataclasses likely won't use this

        for field_info in dataclass_fields:
            field_name = field_info.name
            # Get the *resolved* type for this field
            resolved_type = resolved_types.get(field_name)
            if resolved_type is None:
                logger.warning(
                    f"Could not resolve type hint for field '{field_name}' in {source_model.__name__}. Using original: {field_info.type!r}"
                )
                # Fallback to original, which might be a string
                resolved_type = field_info.type

            # --- Prepare field type for create_field --- #
            type_for_create_field = resolved_type  # Start with the result from get_type_hints

            # If the original type annotation was a string (ForwardRef)
            # and get_type_hints failed to resolve it (resolved_type is None or still the string),
            # try to find the resolved type from the dict populated earlier.
            # This specifically handles nested dataclasses defined in local scopes like fixtures.
            if isinstance(field_info.type, str):
                explicitly_resolved = resolved_types.get(field_name)
                if explicitly_resolved and not isinstance(explicitly_resolved, str):
                    logger.debug(
                        f"Using explicitly resolved type {explicitly_resolved!r} for forward ref '{field_info.type}'"
                    )
                    type_for_create_field = explicitly_resolved
                elif resolved_type is field_info.type:  # Check if resolved_type is still the unresolved string
                    logger.error(
                        f"Type hint for '{field_name}' is string '{field_info.type}' but was not resolved by get_type_hints. Skipping field."
                    )
                    carrier.invalid_fields.append(
                        (field_name, f"Could not resolve forward reference: {field_info.type}")
                    )
                    continue  # Skip this field

            # Temporarily modify a copy of field_info or pass type directly if possible.
            # Modifying field_info directly is simpler for now.
            original_type_attr = field_info.type
            try:
                field_info.type = type_for_create_field  # Use the determined type
                logger.debug(f"Calling create_field for '{field_name}' with type: {field_info.type!r}")
                field_result = self.field_factory.create_field(
                    field_info=field_info, model_name=source_model.__name__, carrier=carrier
                )
            finally:
                # Restore original type attribute
                field_info.type = original_type_attr
                logger.debug("Restored original field_info.type attribute")

            # Process the result (errors, definitions)
            if field_result.error_str:
                carrier.invalid_fields.append((field_name, field_result.error_str))
            else:
                # Store the definition string if available
                if field_result.field_definition_str:
                    carrier.django_field_definitions[field_name] = field_result.field_definition_str
                else:
                    logger.warning(f"Field '{field_name}' processing yielded no error and no definition string.")

                # Store the actual field instance in the correct carrier dict
                if field_result.django_field:
                    if isinstance(
                        field_result.django_field, (models.ForeignKey, models.OneToOneField, models.ManyToManyField)
                    ):
                        carrier.relationship_fields[field_name] = field_result.django_field
                    else:
                        carrier.django_fields[field_name] = field_result.django_field
                elif field_result.context_field:
                    # Handle context fields if needed (currently seems unused based on logs)
                    carrier.context_fields[field_name] = field_result.context_field

            # Merge imports from result into the factory's import handler
            if field_result.required_imports:
                # Use the new add_import method
                for module, names in field_result.required_imports.items():
                    for name in names:
                        self.import_handler.add_import(module=module, name=name)

        logger.debug(f"Finished processing fields for {source_model.__name__}. Errors: {len(carrier.invalid_fields)}")

    # Actual implementation of the abstract method _build_model_context
    def _build_model_context(self, carrier: ConversionCarrier[DataclassType]):
        """Builds the ModelContext specifically for dataclass source models."""
        if not carrier.source_model or not carrier.django_model:
            logger.debug("Skipping context build: missing source or django model.")
            return

        try:
            # Remove generic type hint if ModelContext is not generic or if causing issues
            # Assuming ModelContext base class handles the source type appropriately
            model_context = ModelContext(django_model=carrier.django_model, source_class=carrier.source_model)
            for field_name, field_info in carrier.context_fields.items():
                if isinstance(field_info, dataclasses.Field):
                    # Calculate necessary info for ModelContext.add_field
                    origin = get_origin(field_info.type)
                    args = get_args(field_info.type)
                    is_optional = origin is Union and type(None) in args
                    field_type_str = repr(field_info.type)  # Use repr for the type string

                    # Call add_field with expected signature
                    model_context.add_field(
                        field_name=field_name,
                        field_type_str=field_type_str,
                        is_optional=is_optional,
                        # Pass original annotation if ModelContext uses it
                        annotation=field_info.type,
                    )
                else:
                    # Log if context field is not the expected type
                    logger.warning(
                        f"Context field '{field_name}' is not a dataclasses.Field ({type(field_info)}), cannot add to ModelContext."
                    )
            carrier.model_context = model_context
            logger.debug(f"Successfully built ModelContext for {carrier.model_key()}")  # Use method call
        except Exception as e:
            logger.error(f"Failed to build ModelContext for {carrier.model_key()}: {e}", exc_info=True)
            carrier.model_context = None
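Wiring the pieces together follows the constructor signatures above. A sketch, assuming the accessor and mapper are shared with the rest of the pipeline (the core import paths below are illustrative; only the dataclass factory module is confirmed by this page):

# Illustrative import paths for the shared core components.
from pydantic2django.core.relationships import RelationshipConversionAccessor
from pydantic2django.core.bidirectional import BidirectionalTypeMapper
from pydantic2django.dataclass.factory import DataclassFieldFactory, DataclassModelFactory

accessor = RelationshipConversionAccessor()
mapper = BidirectionalTypeMapper(relationship_accessor=accessor)

field_factory = DataclassFieldFactory(
    relationship_accessor=accessor,
    bidirectional_mapper=mapper,
)
model_factory = DataclassModelFactory(
    field_factory=field_factory,
    relationship_accessor=accessor,
)

Both factories deliberately share one RelationshipConversionAccessor: make_django_model registers each source-to-Django mapping there, which is exactly where create_field's forward-reference resolution later looks up model names.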
__init__(field_factory, relationship_accessor, import_handler=None)¶
Initialize with field factory, relationship accessor, and import handler.
make_django_model(carrier)¶
Orchestrates the Django model creation process. Subclasses implement _process_source_fields and _build_model_context. Handles caching. Passes import handler down.
-
Bases:
BaseFieldFactory[Field]
Creates Django model fields from dataclass fields.
Source code in
src/pydantic2django/dataclass/factory.py
class DataclassFieldFactory(BaseFieldFactory[dataclasses.Field]):
    """Creates Django model fields from dataclass fields."""

    relationship_accessor: RelationshipConversionAccessor  # Changed Optional to required
    bidirectional_mapper: BidirectionalTypeMapper  # Added mapper

    def __init__(
        self, relationship_accessor: RelationshipConversionAccessor, bidirectional_mapper: BidirectionalTypeMapper
    ):
        """Initializes with dependencies."""
        self.relationship_accessor = relationship_accessor
        self.bidirectional_mapper = bidirectional_mapper
        # No super().__init__() needed if BaseFieldFactory.__init__ is empty or handles this

    def create_field(
        self, field_info: dataclasses.Field, model_name: str, carrier: ConversionCarrier[DataclassType]
    ) -> FieldConversionResult:
        """
        Convert a dataclasses.Field to a Django field instance.
        Uses BidirectionalTypeMapper and local instantiation.
        Relies on field_info.metadata['django'] for specific overrides.
        Adds required imports to the result.
        """
        field_name = field_info.name
        original_field_type = field_info.type
        metadata = field_info.metadata or {}  # Ensure metadata is a dict
        django_meta_options = metadata.get("django", {})

        # --- Resolve Forward Reference String if necessary --- #
        type_to_map = original_field_type
        result = FieldConversionResult(field_info=field_info, field_name=field_name)
        if isinstance(original_field_type, str):
            logger.debug(
                f"Field '{field_name}' has string type '{original_field_type}'. Attempting resolution via RelationshipAccessor."
            )
            # Assume string type is a model name known to the accessor
            # Use the newly added method:
            resolved_source_model = self.relationship_accessor.get_source_model_by_name(original_field_type)
            if resolved_source_model:
                logger.debug(f"Resolved string '{original_field_type}' to type {resolved_source_model}")
                type_to_map = resolved_source_model
            else:
                # Critical Error: If it's a string but not in accessor, mapping will fail.
                logger.error(
                    f"Field '{field_name}' type is string '{original_field_type}' but was not found in RelationshipAccessor. Cannot map."
                )
                result.error_str = f"Unresolved forward reference or unknown model name: {original_field_type}"
                result.context_field = field_info
                return result
        # --- End Forward Reference Resolution ---

        logger.debug(
            f"Processing dataclass field {model_name}.{field_name}: Type={original_field_type}, Metadata={metadata}"
        )

        try:
            # --- Use BidirectionalTypeMapper --- #
            try:
                # Pass field_info.type, but no Pydantic FieldInfo equivalent for metadata
                # The mapper primarily relies on the type itself.
                django_field_class, constructor_kwargs = self.bidirectional_mapper.get_django_mapping(
                    python_type=type_to_map,
                    field_info=None,  # Pass None for field_info
                )
                # Add import for the Django field class itself using the result's helper
                result.add_import_for_obj(django_field_class)
            except MappingError as e:
                logger.error(f"Mapping error for '{model_name}.{field_name}' (type: {type_to_map}): {e}")
                result.error_str = str(e)
                result.context_field = field_info
                return result
            except Exception as e:
                logger.error(
                    f"Unexpected error getting Django mapping for '{model_name}.{field_name}': {e}", exc_info=True
                )
                result.error_str = f"Unexpected mapping error: {e}"
                result.context_field = field_info
                return result

            # --- Merge Dataclass Metadata Overrides --- #
            # Apply explicit options from metadata *after* getting defaults from mapper
            constructor_kwargs.update(django_meta_options)

            # --- Apply Dataclass Defaults --- #
            # This logic now handles both `default` and `default_factory`.
            if "default" not in constructor_kwargs:
                default_value = field_info.default
                if default_value is dataclasses.MISSING and field_info.default_factory is not dataclasses.MISSING:
                    default_value = field_info.default_factory

                if default_value is not dataclasses.MISSING:
                    if callable(default_value):
                        try:
                            # Handle regular importable functions
                            if (
                                hasattr(default_value, "__module__")
                                and hasattr(default_value, "__name__")
                                and not default_value.__name__ == "<lambda>"
                            ):
                                module_name = default_value.__module__
                                func_name = default_value.__name__
                                # Avoid importing builtins
                                if module_name != "builtins":
                                    result.add_import(module_name, func_name)
                                constructor_kwargs["default"] = func_name
                            # Handle lambdas
                            else:
                                source = inspect.getsource(default_value).strip()
                                # Remove comma if it's trailing in a lambda definition in a list/dict
                                if source.endswith(","):
                                    source = source[:-1]
                                constructor_kwargs["default"] = RawCode(source)
                        except (TypeError, OSError) as e:
                            logger.warning(
                                f"Could not introspect callable default for '{model_name}.{field_name}': {e}. "
                                "Falling back to `None`."
                            )
                            constructor_kwargs["default"] = None
                            constructor_kwargs["null"] = True
                            constructor_kwargs["blank"] = True
                    elif not isinstance(default_value, (list, dict, set)):
                        constructor_kwargs["default"] = default_value
                    else:
                        logger.warning(
                            f"Field {model_name}.{field_name} has mutable default {default_value}. Skipping Django default."
                        )

            # --- Handle Relationships Specifically (Adjust Kwargs) --- #
            is_relationship = issubclass(
                django_field_class, (models.ForeignKey, models.OneToOneField, models.ManyToManyField)
            )
            if is_relationship:
                if "to" not in constructor_kwargs:
                    result.error_str = f"Mapper failed to determine 'to' for relationship field '{field_name}'."
                    logger.error(result.error_str)
                    result.context_field = field_info
                    return result

                # Sanitize and ensure unique related_name
                user_related_name = django_meta_options.get("related_name")  # Check override from metadata
                target_django_model_str = constructor_kwargs["to"]
                target_model_cls = None
                target_model_cls_name_only = target_django_model_str
                try:
                    app_label, model_cls_name = target_django_model_str.split(".")
                    target_model_cls = apps.get_model(app_label, model_cls_name)
                    target_model_cls_name_only = model_cls_name
                    # Add import for the target model using result helper
                    result.add_import_for_obj(target_model_cls)
                except Exception:
                    logger.warning(
                        f"Could not get target model class for '{target_django_model_str}' when generating related_name for '{field_name}'. Using model name string."
                    )
                    target_model_cls_name_only = target_django_model_str.split(".")[-1]

                related_name_base = (
                    user_related_name
                    if user_related_name
                    # Use carrier.source_model.__name__ for default related name base
                    else f"{carrier.source_model.__name__.lower()}_{field_name}_set"
                )
                final_related_name_base = sanitize_related_name(
                    str(related_name_base),
                    target_model_cls.__name__ if target_model_cls else target_model_cls_name_only,
                    field_name,
                )

                # Ensure uniqueness using carrier's tracker
                target_model_key_for_tracker = (
                    target_model_cls.__name__ if target_model_cls else target_django_model_str
                )
                target_related_names = carrier.used_related_names_per_target.setdefault(
                    target_model_key_for_tracker, set()
                )
                unique_related_name = final_related_name_base
                counter = 1
                while unique_related_name in target_related_names:
                    unique_related_name = f"{final_related_name_base}_{counter}"
                    counter += 1
                target_related_names.add(unique_related_name)
                constructor_kwargs["related_name"] = unique_related_name
                logger.debug(f"[REL] Dataclass Field '{field_name}': Assigning related_name='{unique_related_name}'")

                # Re-confirm on_delete (mapper sets default based on Optional, but metadata might override)
                # Need to check optionality of the original type here
                origin = get_origin(original_field_type)
                args = get_args(original_field_type)
                is_optional = origin is Union and type(None) in args
                if (
                    django_field_class in (models.ForeignKey, models.OneToOneField)
                    and "on_delete" not in constructor_kwargs  # Only set if not specified in metadata
                ):
                    constructor_kwargs["on_delete"] = models.SET_NULL if is_optional else models.CASCADE
                    # Add import using result helper
                    result.add_import("django.db.models", "SET_NULL" if is_optional else "CASCADE")
                elif django_field_class == models.ManyToManyField:
                    constructor_kwargs.pop("on_delete", None)
                    constructor_kwargs.pop("null", None)  # M2M cannot be null
                    if "blank" not in constructor_kwargs:  # Default M2M to blank=True if not set
                        constructor_kwargs["blank"] = True

            # --- Perform Instantiation Locally --- #
            try:
                logger.debug(
                    f"Instantiating {django_field_class.__name__} for dataclass field '{field_name}' with kwargs: {constructor_kwargs}"
                )
                result.django_field = django_field_class(**constructor_kwargs)
                result.field_kwargs = constructor_kwargs  # Store final kwargs
            except Exception as e:
                error_msg = f"Failed to instantiate Django field '{field_name}' (type: {django_field_class.__name__}) with kwargs {constructor_kwargs}: {e}"
                logger.error(error_msg, exc_info=True)
                result.error_str = error_msg
                result.context_field = field_info
                return result

            # --- Generate Field Definition String --- #
            result.field_definition_str = self._generate_field_def_string(result, carrier.meta_app_label)

            return result  # Success

        except Exception as e:
            # Catch-all for unexpected errors during conversion
            error_msg = f"Unexpected error converting dataclass field '{model_name}.{field_name}': {e}"
            logger.error(error_msg, exc_info=True)
            result.error_str = error_msg
            result.context_field = field_info
            return result

    def _generate_field_def_string(self, result: FieldConversionResult, app_label: str) -> str:
        """Generates the field definition string safely."""
        if not result.django_field:
            return "# Field generation failed"
        try:
            # Use stored final kwargs if available
            if result.field_kwargs:
                # Pass the result's required_imports to the serialization function
                return generate_field_definition_string(
                    type(result.django_field),
                    result.field_kwargs,
                    app_label,
                )
            else:
                # Fallback: Basic serialization if final kwargs weren't stored for some reason
                logger.warning(
                    f"Could not generate definition string for '{result.field_name}': final kwargs not found in result. Using basic serialization."
                )
                return FieldSerializer.serialize_field(result.django_field)
        except Exception as e:
            logger.error(
                f"Failed to generate field definition string for '{result.field_name}': {e}",
                exc_info=True,
            )
            return f"# Error generating definition: {e}"
__init__(relationship_accessor, bidirectional_mapper) ¶

Initializes with dependencies.

Source code in `src/pydantic2django/dataclass/factory.py`, lines 50–55 (shown in the class listing above).
create_field(field_info, model_name, carrier) ¶

Convert a `dataclasses.Field` to a Django field instance. Uses `BidirectionalTypeMapper` and local instantiation. Relies on `field_info.metadata['django']` for specific overrides. Adds required imports to the result.

Source code in `src/pydantic2django/dataclass/factory.py`, lines 58–275 (shown in the class listing above).
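To make the defaults handling concrete, here is a minimal sketch (the `Article` dataclass is hypothetical) of the three paths `create_field` takes when translating dataclass defaults into Django field kwargs:

```python
import dataclasses
import uuid


@dataclasses.dataclass
class Article:  # hypothetical source model
    title: str
    # Importable callable: the factory records an import of uuid.uuid4 and
    # uses the function's name as the rendered `default`.
    id: uuid.UUID = dataclasses.field(default_factory=uuid.uuid4)
    # Plain immutable default: passed through unchanged as `default=0`.
    views: int = 0
    # Lambda: its source is inlined via inspect.getsource / RawCode.
    slug: str = dataclasses.field(default_factory=lambda: "untitled")
```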
- Bases: `BaseStaticGenerator[DataclassType, DataclassFieldInfo]`

  Generates Django `models.py` file content from Python dataclasses.
Source code in `src/pydantic2django/dataclass/generator.py`:

```python
class DataclassDjangoModelGenerator(
    BaseStaticGenerator[DataclassType, DataclassFieldInfo]  # Inherit from BaseStaticGenerator
):
    """Generates Django models.py file content from Python dataclasses."""

    def __init__(
        self,
        output_path: str,
        app_label: str,
        filter_function: Optional[Callable[[DataclassType], bool]],
        verbose: bool,
        # Accept specific discovery and factories, or create defaults
        packages: list[str] | None = None,
        discovery_instance: Optional[DataclassDiscovery] = None,
        model_factory_instance: Optional[DataclassModelFactory] = None,
        field_factory_instance: Optional[DataclassFieldFactory] = None,  # Add field factory param
        relationship_accessor: Optional[RelationshipConversionAccessor] = None,  # Accept accessor
        module_mappings: Optional[dict[str, str]] = None,
        # Default base class can be models.Model or a custom one
        base_model_class: type[models.Model] = Dataclass2DjangoBaseClass,  # Use the correct base for dataclasses
    ):
        # 1. Initialize Dataclass-specific discovery
        self.dataclass_discovery_instance = discovery_instance or DataclassDiscovery()

        # 2. Initialize Dataclass-specific factories
        # Dataclass factories might not need RelationshipAccessor, check their definitions
        # Assuming they don't for now.
        # --- Correction: They DO need them now ---
        # Use provided accessor or create a new one
        self.relationship_accessor = relationship_accessor or RelationshipConversionAccessor()
        # Create mapper using the (potentially provided) accessor
        self.bidirectional_mapper = BidirectionalTypeMapper(relationship_accessor=self.relationship_accessor)

        self.dataclass_field_factory = field_factory_instance or DataclassFieldFactory(
            relationship_accessor=self.relationship_accessor,
            bidirectional_mapper=self.bidirectional_mapper,
        )
        self.dataclass_model_factory = model_factory_instance or DataclassModelFactory(
            field_factory=self.dataclass_field_factory,
            relationship_accessor=self.relationship_accessor,  # Pass only accessor
        )

        # 3. Call the base class __init__
        super().__init__(
            output_path=output_path,
            packages=packages,
            app_label=app_label,
            filter_function=filter_function,
            verbose=verbose,
            discovery_instance=self.dataclass_discovery_instance,
            model_factory_instance=self.dataclass_model_factory,
            module_mappings=module_mappings,
            base_model_class=base_model_class,
        )
        logger.info("DataclassDjangoModelGenerator initialized using BaseStaticGenerator.")

    # --- Implement abstract methods from BaseStaticGenerator ---

    def _get_source_model_name(self, carrier: ConversionCarrier[DataclassType]) -> str:
        """Get the name of the original dataclass from the carrier."""
        # Use carrier.source_model (consistent with Base class)
        if carrier.source_model:
            return carrier.source_model.__name__
        # Fallback if source model somehow missing
        # Check if carrier has pydantic_model attribute as a legacy fallback?
        legacy_model = getattr(carrier, "pydantic_model", None)  # Safely check old attribute
        if legacy_model:
            return legacy_model.__name__
        return "UnknownDataclass"

    def _add_source_model_import(self, carrier: ConversionCarrier[DataclassType]):
        """Add the necessary import for the original dataclass."""
        # Use carrier.source_model
        model_to_import = carrier.source_model
        if not model_to_import:
            # Legacy fallback check
            model_to_import = getattr(carrier, "pydantic_model", None)

        if model_to_import:
            # Use add_pydantic_model_import for consistency? Or add_context_field_type_import?
            # Let's assume add_context_field_type_import handles dataclasses too.
            # A dedicated add_dataclass_import or add_general_import would be clearer.
            self.import_handler.add_context_field_type_import(model_to_import)
        else:
            logger.warning("Cannot add source model import: source model missing in carrier.")

    def _prepare_template_context(self, unique_model_definitions, django_model_names, imports) -> dict:
        """Prepare the context specific to dataclasses for the main models_file.py.j2 template."""
        # Base context items are passed in.
        # Add Dataclass-specific items.
        base_context = {
            "model_definitions": unique_model_definitions,  # Already joined by base class
            "django_model_names": django_model_names,  # Already list of quoted names
            # Pass the structured imports dict
            "imports": imports,
            # --- Dataclass Specific ---
            "generation_source_type": "dataclass",  # Flag for template logic
            # --- Keep compatibility if templates expect these --- (review templates later)
            # "django_imports": sorted(imports.get("django", [])),  # Provided by imports dict
            # "pydantic_imports": sorted(imports.get("pydantic", [])),  # Likely empty for dataclass
            # "general_imports": sorted(imports.get("general", [])),
            # "context_imports": sorted(imports.get("context", [])),
            # Add other dataclass specific flags/lists if needed by the template
            "context_definitions": [],  # Dataclasses don't have separate context classes? Assume empty.
            "context_class_names": [],
            "model_has_context": {},  # Assume no context model mapping needed
        }
        # Common items added by base class generate_models_file after this call.
        return base_context

    def _get_models_in_processing_order(self) -> list[DataclassType]:
        """Return dataclasses in dependency order using the discovery instance."""
        # Add assertion for type checker clarity
        assert isinstance(
            self.discovery_instance, DataclassDiscovery
        ), "Discovery instance must be DataclassDiscovery for this generator"
        # Dependencies analyzed by base class discover_models call
        return self.discovery_instance.get_models_in_registration_order()

    def _get_model_definition_extra_context(self, carrier: ConversionCarrier[DataclassType]) -> dict:
        """Provide extra context specific to dataclasses for model_definition.py.j2."""
        # Removed problematic metadata access from original
        # Add flags for template conditional logic
        return {
            "is_dataclass_source": True,
            "is_pydantic_source": False,
            "has_context": False,  # Dataclasses likely don't generate separate context fields/classes
            # Pass the field definitions dictionary from the carrier
            "field_definitions": carrier.django_field_definitions,
            # Add other specific details if needed, ensuring they access carrier correctly
            # Example: "source_model_module": carrier.source_model.__module__ if carrier.source_model else ""
        }
```
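A minimal construction sketch, assuming a hypothetical `myproject.domain` package to scan; the generation machinery itself lives on `BaseStaticGenerator` (its `generate_models_file` helper is referenced in the source above):

```python
from pydantic2django.dataclass.generator import DataclassDjangoModelGenerator

generator = DataclassDjangoModelGenerator(
    output_path="generated/models.py",
    app_label="myapp",              # Django app label for the generated models
    filter_function=None,           # include every discovered dataclass
    verbose=True,
    packages=["myproject.domain"],  # hypothetical package to scan
)
# Discovery, field conversion, and template rendering are then driven by the
# inherited BaseStaticGenerator machinery.
```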
- Notes:
  - Reads field metadata and optional per-field `django` overrides from `dataclasses.Field.metadata` (see the sketch below).
  - Uses `typing.get_type_hints` to resolve forward references.
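For example, a minimal sketch (the `Book` dataclass is hypothetical) of passing per-field Django options through `dataclasses.Field.metadata`; the factory merges anything under the `"django"` key over the mapper's inferred kwargs:

```python
import dataclasses


@dataclasses.dataclass
class Book:  # hypothetical source model
    # Options under the "django" metadata key override the mapper's defaults,
    # e.g. forcing max_length and an index on the generated CharField.
    title: str = dataclasses.field(
        metadata={"django": {"max_length": 200, "db_index": True}}
    )
```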
TypedClass (experimental)¶
- Discovery and factory (subject to change as APIs stabilize):
- Bases: `BaseDiscovery[TypedClassType]`

  Discovers generic Python classes within specified packages.
Source code in `src/pydantic2django/typedclass/discovery.py`:

```python
class TypedClassDiscovery(BaseDiscovery[TypedClassType]):
    """Discovers generic Python classes within specified packages."""

    def __init__(self):
        super().__init__()
        # TypedClass specific attributes, if any, can be initialized here.

    def _is_pydantic_model(self, obj: Any) -> bool:
        """Checks if an object is a Pydantic model."""
        return inspect.isclass(obj) and issubclass(obj, BaseModel)

    def _is_target_model(self, obj: Any) -> bool:
        """
        Check if an object is a generic class suitable for conversion.
        It must be a class, not an ABC, not a Pydantic model, and not a dataclass.
        """
        if not inspect.isclass(obj):
            return False
        if inspect.isabstract(obj):
            logger.debug(f"Skipping abstract class {obj.__name__}")
            return False
        if self._is_pydantic_model(obj):  # Check if it's a Pydantic model
            logger.debug(f"Skipping Pydantic model {obj.__name__}")
            return False
        if dataclasses.is_dataclass(obj):
            logger.debug(f"Skipping dataclass {obj.__name__}")
            return False

        # Further checks can be added here, e.g., must be in a specific list
        # or have certain characteristics. For now, this is a basic filter.
        logger.debug(f"Identified potential target typed class: {obj.__name__}")
        return True

    def _default_eligibility_filter(self, model: TypedClassType) -> bool:
        """
        Check default eligibility for generic classes.
        For example, we might want to ensure it's not an ABC,
        though _is_target_model should already catch this.
        """
        # Redundant check if _is_target_model is comprehensive, but good for safety.
        if inspect.isabstract(model):
            logger.debug(f"Filtering out typed class {model.__name__} (is abstract)")
            return False

        # Add other default checks if needed.
        # For instance, are there specific base classes (not ABCs) we want to exclude/include?
        return True

    def analyze_dependencies(self) -> None:
        """
        Build the dependency graph for the filtered generic classes.
        Dependencies are determined by type hints in __init__ arguments
        and class-level attribute annotations.
        """
        logger.info("Analyzing dependencies between filtered typed classes...")
        self.dependencies: dict[TypedClassType, set[TypedClassType]] = {}

        # Ensure all filtered models are keys in the dependencies dict
        for model_qualname in self.filtered_models:
            model_obj = self.filtered_models[model_qualname]
            self.dependencies[model_obj] = set()

        filtered_model_qualnames = set(self.filtered_models.keys())

        def _find_and_add_dependency(source_model: TypedClassType, potential_dep_type: Any):
            """
            Helper to check if a potential dependency type is a target model
            and add it to the graph.
            """
            # Check if potential_dep_type itself is a class and one of our targets
            if self._is_target_model(potential_dep_type):
                dep_qualname = f"{potential_dep_type.__module__}.{potential_dep_type.__name__}"
                if dep_qualname in filtered_model_qualnames and potential_dep_type is not source_model:
                    dep_model_obj = self.filtered_models.get(dep_qualname)
                    if dep_model_obj:
                        self.dependencies[source_model].add(dep_model_obj)
                    else:
                        logger.warning(
                            f"Inconsistency: Dependency '{dep_qualname}' for typed class "
                            f"'{source_model.__name__}' found by name but not as object in filtered set."
                        )
            # TODO: Handle generics like list[TargetType], dict[str, TargetType], Union[TargetType, None]
            # This would involve using get_origin and get_args from typing.

        for model_type in self.filtered_models.values():
            # 1. Analyze __init__ parameters
            try:
                init_signature = inspect.signature(model_type.__init__)
                for param in init_signature.parameters.values():
                    if param.name == "self" or param.annotation is inspect.Parameter.empty:
                        continue
                    _find_and_add_dependency(model_type, param.annotation)
            except (ValueError, TypeError) as e:
                # Some built-ins or exotic classes might not have inspectable __init__
                logger.debug(f"Could not inspect __init__ for {model_type.__name__}: {e}")

            # 2. Analyze class-level annotations
            try:
                annotations = inspect.get_annotations(model_type, eval_str=True)
                for _, attr_type in annotations.items():
                    _find_and_add_dependency(model_type, attr_type)
            except Exception as e:
                logger.debug(f"Could not get annotations for {model_type.__name__}: {e}")

        logger.info("Typed class dependency analysis complete.")
        # Debug logging of dependencies will be handled by BaseDiscovery.log_dependencies

    def get_models_in_registration_order(self) -> list[TypedClassType]:
        """
        Return generic classes sorted topologically based on dependencies.
        This method can often be inherited from BaseDiscovery if the
        dependency graph is built correctly.
        """
        # For now, assume BaseDiscovery's implementation is sufficient.
        # If specific logic for typed classes is needed, override here.
        return super().get_models_in_registration_order()
```
analyze_dependencies() ¶

Build the dependency graph for the filtered generic classes. Dependencies are determined by type hints in `__init__` arguments and class-level attribute annotations.

Source code in `src/pydantic2django/typedclass/discovery.py`, lines 66–121 (shown in the class listing above).
get_models_in_registration_order() ¶

Return generic classes sorted topologically based on dependencies. This method can often be inherited from BaseDiscovery if the dependency graph is built correctly.

Source code in `src/pydantic2django/typedclass/discovery.py`, lines 124–132 (shown in the class listing above).
- Bases: `BaseModelFactory[TypedClassType, TypedClassFieldInfo]`

  Creates Django model definitions from generic Python classes.
Source code in `src/pydantic2django/typedclass/factory.py`:

```python
class TypedClassModelFactory(BaseModelFactory[TypedClassType, TypedClassFieldInfo]):
    """Creates Django model definitions from generic Python classes."""

    def __init__(
        self,
        field_factory: TypedClassFieldFactory,
        relationship_accessor: RelationshipConversionAccessor,
        # Add 'reckless_mode: bool = False' if implementing that flag
    ):
        super().__init__(field_factory, relationship_accessor)
        # self.reckless_mode = reckless_mode

    def _get_model_fields_info(
        self, model_class: TypedClassType, carrier: ConversionCarrier
    ) -> list[TypedClassFieldInfo]:
        """
        Extracts attribute information from a generic class.
        Prioritizes __init__ signature, then class-level annotations.
        """
        field_infos = []
        processed_params = set()

        # 1. Inspect __init__ method
        try:
            init_signature = inspect.signature(model_class.__init__)
            for name, param in init_signature.parameters.items():
                if name == "self":
                    continue
                type_hint = param.annotation if param.annotation is not inspect.Parameter.empty else Any
                default_val = param.default if param.default is not inspect.Parameter.empty else inspect.Parameter.empty
                field_infos.append(
                    TypedClassFieldInfo(name=name, type_hint=type_hint, default_value=default_val, is_from_init=True)
                )
                processed_params.add(name)
        except (ValueError, TypeError) as e:
            logger.debug(
                f"Could not inspect __init__ for {model_class.__name__}: {e}. Proceeding with class annotations."
            )

        # 2. Inspect class-level annotations (for attributes not in __init__)
        try:
            annotations = inspect.get_annotations(model_class, eval_str=True)
            for name, type_hint in annotations.items():
                if name not in processed_params and not name.startswith("_"):  # Avoid private/protected by convention
                    default_val = getattr(model_class, name, inspect.Parameter.empty)
                    field_infos.append(
                        TypedClassFieldInfo(
                            name=name, type_hint=type_hint, default_value=default_val, is_from_init=False
                        )
                    )
        except Exception as e:  # Broad exception as get_annotations can fail in various ways
            logger.debug(f"Could not get class annotations for {model_class.__name__}: {e}")

        logger.debug(f"Discovered field infos for {model_class.__name__}: {field_infos}")
        return field_infos

    def create_model_definition(
        self,
        model_class: TypedClassType,
        app_label: str,
        base_model_class: type[models.Model],
        module_mappings: Optional[dict[str, str]] = None,
    ) -> ConversionCarrier[TypedClassType]:
        """
        Generates a ConversionCarrier containing the Django model string and related info.
        """
        model_name = model_class.__name__
        django_model_name = f"{model_name}DjangoModel"  # Or some other naming convention

        carrier = ConversionCarrier(
            source_model=model_class,
            source_model_name=model_name,
            django_model_name=django_model_name,
            app_label=app_label,
            module_mappings=module_mappings or {},
            relationship_accessor=self.relationship_accessor,
        )

        # Add import for the base model class
        carrier.add_django_model_import(base_model_class)

        field_definitions = []
        model_fields_info = self._get_model_fields_info(model_class, carrier)

        if not model_fields_info:
            logger.warning(f"No fields discovered for class {model_name}. Generating an empty Django model.")
            # Optionally, add a default placeholder field if empty models are problematic
            # field_definitions.append("    # No convertible fields found")

        for field_info in model_fields_info:
            try:
                field_def_str = self.field_factory.create_field_definition(field_info, carrier)
                field_definitions.append(f"    {field_def_str}")
            except Exception as e:
                logger.error(
                    f"Error creating field definition for {field_info.name} in {model_name}: {e}", exc_info=True
                )
                # Optionally, add a placeholder or skip this field
                field_definitions.append(f"    # Error processing field: {field_info.name} - {e}")

        carrier.django_field_definitions = field_definitions

        # Meta class
        carrier.meta_class_string = generate_meta_class_string(
            app_label=app_label,
            django_model_name=django_model_name,  # Use the generated Django model name
            verbose_name=model_name,
        )

        # __str__ method
        # Heuristic: use 'name' or 'id' attribute if present in field_infos, else default
        str_field = "id"  # Django models get 'id' by default from models.Model
        for finfo in model_fields_info:
            if finfo.name in ["name", "title", "identifier"]:  # common __str__ candidates
                str_field = finfo.name
                break
        carrier.str_method_string = f"    def __str__(self):\n        return str(self.{str_field})"

        logger.info(f"Prepared ConversionCarrier for {model_name} -> {django_model_name}")
        return carrier
```
create_model_definition(model_class, app_label, base_model_class, module_mappings=None) ¶

Generates a ConversionCarrier containing the Django model string and related info.

Source code in `src/pydantic2django/typedclass/factory.py`, lines 213–277 (shown in the class listing above).
- Notes:
  - Targets arbitrary typed Python classes (non-Pydantic, non-dataclass). APIs are evolving; see the eligibility sketch below.
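As an illustration, a plain class like the hypothetical `Sensor` below passes `_is_target_model` (it is neither abstract, a Pydantic model, nor a dataclass), and the factory harvests fields from both its `__init__` signature and its class-level annotations:

```python
class Sensor:
    unit: str = "celsius"  # class-level annotation: becomes a candidate field with a default

    def __init__(self, name: str, threshold: float = 0.5) -> None:
        # __init__ parameters become candidate fields (name, threshold)
        self.name = name
        self.threshold = threshold
```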
Django base models (runtime mapping and storage)¶
These abstract models provide runtime persistence patterns for typed objects.
- Store entire objects as JSON or map their fields to columns. See:
- Bases: `Pydantic2DjangoBase`, `Generic[PydanticT]`

  Base class for mapping Pydantic model fields to Django model fields.
Source code in `src/pydantic2django/django/models.py`:

```python
class Pydantic2DjangoBaseClass(Pydantic2DjangoBase, Generic[PydanticT]):
    """
    Base class for mapping Pydantic model fields to Django model fields.
    """

    class Meta(Pydantic2DjangoBase.Meta):
        abstract = True
        verbose_name = "Mapped Pydantic Object"
        verbose_name_plural = "Mapped Pydantic Objects"

    def __new__(cls, *args, **kwargs):
        """
        Override __new__ to ensure proper type checking.
        Needed because Django's model metaclass doesn't preserve Generic type parameters well.
        """
        # The check itself might be complex. This placeholder ensures __new__ is considered.
        # Proper generic handling might require metaclass adjustments beyond this scope.
        return super().__new__(cls)

    def __getattr__(self, name: str) -> Any:
        """
        Forward method calls to the Pydantic model implementation.
        Enables type checking and execution of methods defined on the Pydantic model.
        """
        # Get the Pydantic model class
        try:
            pydantic_cls = self._get_class()  # Use common method
            if not issubclass(pydantic_cls, BaseModel):
                # This path shouldn't be hit if used correctly, but safeguard
                raise AttributeError(f"Stored class '{self.class_path}' is not a Pydantic BaseModel.")
        except ValueError as e:
            raise AttributeError(f"Cannot forward attribute '{name}': {e}") from e

        # Check if the attribute exists in the Pydantic model
        if hasattr(pydantic_cls, name):
            attr = getattr(pydantic_cls, name)

            # If it's a callable method (and not the type itself), wrap it
            if callable(attr) and not isinstance(attr, type):

                def wrapped_method(*args, **kwargs):
                    # Convert self (Django model) to Pydantic instance first
                    try:
                        pydantic_instance = self.to_pydantic()  # Assuming no context needed here
                    except ValueError as e:
                        # Handle potential errors during conversion (e.g., context missing)
                        raise RuntimeError(
                            f"Failed to convert Django model to Pydantic before calling '{name}': {e}"
                        ) from e
                    # Call the method on the Pydantic instance
                    result = getattr(pydantic_instance, name)(*args, **kwargs)
                    # TODO: Handle potential need to update self from result? Unlikely for most methods.
                    return result

                return wrapped_method
            else:
                # For non-method attributes (like class vars), return directly
                # This might need refinement depending on desired behavior for class vs instance attrs
                return attr

        # If attribute doesn't exist on Pydantic model, raise standard AttributeError
        raise AttributeError(
            f"'{self.__class__.__name__}' object has no attribute '{name}', "
            f"and Pydantic model '{pydantic_cls.__name__}' has no attribute '{name}'."
        )

    @classmethod
    def from_pydantic(cls, pydantic_obj: PydanticT, name: str | None = None) -> "Pydantic2DjangoBaseClass[PydanticT]":
        """
        Create a Django model instance from a Pydantic object, mapping fields.

        Args:
            pydantic_obj: The Pydantic object to convert.
            name: Optional name for the Django model instance.

        Returns:
            A new instance of this Django model subclass.

        Raises:
            TypeError: If the object is not a Pydantic model or not of the expected type.
        """
        # Get class info and check type
        (
            pydantic_class,
            class_name,
            module_name,
            fully_qualified_name,
        ) = cls._get_class_info(pydantic_obj)
        cls._check_expected_type(pydantic_obj, class_name)  # Verifies it's a Pydantic model

        # Derive name
        derived_name = cls._derive_name(pydantic_obj, name, class_name)

        # Create instance with basic fields
        instance = cls(
            name=derived_name,
            class_path=fully_qualified_name,  # Use renamed field
        )

        # Update mapped fields
        instance.update_fields_from_pydantic(pydantic_obj)

        return instance

    def update_fields_from_pydantic(self, pydantic_obj: PydanticT) -> None:
        """
        Update this Django model's fields from a Pydantic object's fields.

        Args:
            pydantic_obj: The Pydantic object containing source values.
        """
        if (
            not isinstance(pydantic_obj, BaseModel)
            or pydantic_obj.__class__.__module__ != self._get_class().__module__
            or pydantic_obj.__class__.__name__ != self._get_class().__name__
        ):
            # Check type consistency before proceeding
            raise TypeError(
                f"Provided object type {type(pydantic_obj)} does not match expected type {self.class_path} for update."
            )

        # Get data from the Pydantic object
        try:
            pydantic_data = pydantic_obj.model_dump()
        except AttributeError as err:
            raise TypeError(
                "Failed to dump Pydantic model for update. Ensure you are using Pydantic v2+ with model_dump()."
            ) from err

        # Get Django model fields excluding common/meta ones
        model_field_names = {
            field.name
            for field in self._meta.fields
            if field.name not in ("id", "name", "class_path", "created_at", "updated_at")
        }

        # Update each Django field if it matches a field in the Pydantic data
        for field_name in model_field_names:
            if field_name in pydantic_data:
                value = pydantic_data[field_name]
                # Apply serialization (important for complex types)
                serialized_value = serialize_value(value)
                setattr(self, field_name, serialized_value)
            # Else: Field exists on Django model but not on Pydantic model, leave it unchanged.

    def to_pydantic(self, context: Optional[ModelContext] = None) -> PydanticT:
        """
        Convert this Django model instance back to a Pydantic object.

        Args:
            context: Optional ModelContext instance containing values for non-serializable fields.

        Returns:
            The corresponding Pydantic object.

        Raises:
            ValueError: If context is required but not provided, or if class load/instantiation fails.
        """
        pydantic_class = self._get_class()  # Use common method
        if not issubclass(pydantic_class, BaseModel):
            raise ValueError(f"Stored class path '{self.class_path}' does not point to a Pydantic BaseModel.")

        # Get data from Django fields corresponding to Pydantic fields
        data = self._get_data_for_pydantic(pydantic_class)

        # Handle context if required and provided
        required_context_keys = self._get_required_context_fields()  # Check if context is needed
        if required_context_keys:
            if not context:
                raise ValueError(
                    f"Conversion to Pydantic model '{pydantic_class.__name__}' requires context "
                    f"for fields: {', '.join(required_context_keys)}. Please provide a ModelContext instance."
                )
            # Validate and merge context data
            context_dict = context.to_conversion_dict()
            context.validate_context(context_dict)  # Validate required keys are present
            data.update(context_dict)  # Merge context, potentially overwriting DB values if keys overlap

        # Reconstruct the Pydantic object
        try:
            # TODO: Add potential deserialization logic here if needed before validation
            instance = pydantic_class.model_validate(data)
            # Cast to the generic type variable
            return cast(PydanticT, instance)
        except TypeError:
            raise  # Re-raise TypeError as it's a specific, meaningful exception here
        except Exception as e:
            # Catch any other unexpected error during Pydantic object creation
            logger.error(
                f"An unexpected error occurred creating Pydantic model for '{pydantic_class.__name__}': {e}",
                exc_info=True,
            )
            raise ValueError(f"Unexpected error creating Pydantic model for '{pydantic_class.__name__}'") from e

    def _get_data_for_pydantic(self, pydantic_class: type[BaseModel]) -> dict[str, Any]:
        """Get data from Django fields that correspond to the target Pydantic model fields."""
        data = {}
        try:
            pydantic_field_names = set(pydantic_class.model_fields.keys())
        except AttributeError as err:
            # Should not happen if issubclass(BaseModel) check passed
            raise ValueError(f"Could not get fields for non-Pydantic type '{pydantic_class.__name__}'") from err

        # Add DB fields that are part of the Pydantic model
        for field in self._meta.fields:
            if field.name in pydantic_field_names:
                # TODO: Add potential deserialization based on target Pydantic field type?
                data[field.name] = getattr(self, field.name)

        # Context values are merged in the calling `to_pydantic` method
        return data

    def _get_required_context_fields(self) -> set[str]:
        """
        Get the set of field names that require context when converting to Pydantic.
        (Placeholder implementation - needs refinement based on how context is defined).
        """
        # This requires a mechanism to identify which Django fields represent
        # non-serializable data that must come from context.
        # For now, assume no context is required by default for the base class.
        # Subclasses might override this or a more sophisticated mechanism could be added.
        # Example: Check for a custom field attribute like `is_context_field=True`
        required_fields = set()
        # pydantic_class = self._get_class()
        # pydantic_field_names = set(pydantic_class.model_fields.keys())
        # for field in self._meta.fields:
        #     if field.name in pydantic_field_names and getattr(field, 'is_context_field', False):
        #         required_fields.add(field.name)
        return required_fields  # Return empty set for now

    def update_from_pydantic(self, pydantic_obj: PydanticT) -> None:
        """
        Update this Django model with new data from a Pydantic object and save.

        Args:
            pydantic_obj: The Pydantic object with updated data.
        """
        # Verify the object type matches first (includes check if it's a BaseModel)
        fully_qualified_name = self._verify_object_type_match(pydantic_obj)

        # Update the class_path if somehow inconsistent
        if self.class_path != fully_qualified_name:
            self.class_path = fully_qualified_name

        self.update_fields_from_pydantic(pydantic_obj)
        self.save()

    def save_as_pydantic(self) -> PydanticT:
        """
        Save the Django model and return the corresponding Pydantic object.

        Returns:
            The corresponding Pydantic object.
        """
        self.save()
        return self.to_pydantic(context=None)
```
__getattr__(name) ¶

Forward method calls to the Pydantic model implementation. Enables type checking and execution of methods defined on the Pydantic model.

Source code in `src/pydantic2django/django/models.py`, lines 1074–1120 (shown in the class listing above).
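In practice the forwarding looks like this sketch, where `record` is an instance of a concrete subclass and `summary()` is a hypothetical method defined only on the stored Pydantic model:

```python
# Attribute lookup misses on the Django model, falls through to __getattr__,
# which loads the Pydantic class, converts `record` via to_pydantic(), and
# calls the method on the resulting Pydantic instance.
text = record.summary()
```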
__new__(*args, **kwargs) ¶

Override `__new__` to ensure proper type checking. Needed because Django's model metaclass doesn't preserve Generic type parameters well.

Source code in `src/pydantic2django/django/models.py`, lines 1065–1072 (shown in the class listing above).
from_pydantic(pydantic_obj, name=None) classmethod ¶

Create a Django model instance from a Pydantic object, mapping fields.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `pydantic_obj` | `PydanticT` | The Pydantic object to convert. | *required* |
| `name` | `str \| None` | Optional name for the Django model instance. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Pydantic2DjangoBaseClass[PydanticT]` | A new instance of this Django model subclass. |

Raises:

| Type | Description |
| --- | --- |
| `TypeError` | If the object is not a Pydantic model or not of the expected type. |
Source code in `src/pydantic2django/django/models.py`, lines 1122–1158 (shown in the class listing above).
save_as_pydantic() ¶

Save the Django model and return the corresponding Pydantic object.

Returns:

| Type | Description |
| --- | --- |
| `PydanticT` | The corresponding Pydantic object. |
Source code in `src/pydantic2django/django/models.py`, lines 1303–1311 (shown in the class listing above).
to_pydantic(context=None) ¶

Convert this Django model instance back to a Pydantic object.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `context` | `Optional[ModelContext]` | Optional ModelContext instance containing values for non-serializable fields. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `PydanticT` | The corresponding Pydantic object. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If context is required but not provided, or if class load/instantiation fails. |
Source code in `src/pydantic2django/django/models.py`, lines 1201–1248 (shown in the class listing above).
update_fields_from_pydantic(pydantic_obj) ¶

Update this Django model's fields from a Pydantic object's fields.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `pydantic_obj` | `PydanticT` | The Pydantic object containing source values. | *required* |

Source code in `src/pydantic2django/django/models.py`, lines 1160–1198 (shown in the class listing above).
update_from_pydantic(pydantic_obj) ¶

Update this Django model with new data from a Pydantic object and save.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `pydantic_obj` | `PydanticT` | The Pydantic object with updated data. | *required* |

Source code in `src/pydantic2django/django/models.py`, lines 1286–1301 (shown in the class listing above).
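Putting the methods above together, a typical round trip might look like the following sketch (`DjangoUser` and `user` are hypothetical: a concrete subclass mapped to a Pydantic `User` model, and an instance of that model):

```python
record = DjangoUser.from_pydantic(user)  # map Pydantic fields onto Django columns
record.save()

user_again = record.to_pydantic()        # rebuild the Pydantic object from the row

user_again.email = "new@example.com"
record.update_from_pydantic(user_again)  # push the changes back and save()
```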
- Bases: `Dataclass2DjangoBase`, `Generic[DataclassT]`

  Base class for mapping Python Dataclass fields to Django model fields.

  Inherits from `Dataclass2DjangoBase` and provides methods to convert between the Dataclass instance and the Django model instance by matching field names.
Source code in `src/pydantic2django/django/models.py`:

```python
class Dataclass2DjangoBaseClass(Dataclass2DjangoBase, Generic[DataclassT]):
    """
    Base class for mapping Python Dataclass fields to Django model fields.

    Inherits from Dataclass2DjangoBase and provides methods to convert
    between the Dataclass instance and the Django model instance by
    matching field names.
    """

    class Meta(Dataclass2DjangoBase.Meta):
        abstract = True
        verbose_name = "Mapped Dataclass"
        verbose_name_plural = "Mapped Dataclasses"

    # __getattr__ is less likely needed/useful for standard dataclasses compared to Pydantic models
    # which might have complex methods. Skip for now.

    @classmethod
    def from_dataclass(cls, dc_obj: DataclassT, name: str | None = None) -> "Dataclass2DjangoBaseClass[DataclassT]":
        """
        Create a Django model instance from a Dataclass object, mapping fields.

        Args:
            dc_obj: The Dataclass object to convert.
            name: Optional name for the Django model instance.

        Returns:
            A new instance of this Django model subclass.

        Raises:
            TypeError: If the object is not a dataclass or not of the expected type.
        """
        # Get class info and check type
        (
            dc_class,
            class_name,
            module_name,
            fully_qualified_name,
        ) = cls._get_class_info(dc_obj)
        cls._check_expected_type(dc_obj, class_name)  # Verifies it's a dataclass

        # Derive name
        derived_name = cls._derive_name(dc_obj, name, class_name)

        # Create instance with basic fields
        instance = cls(
            name=derived_name,
            class_path=fully_qualified_name,
        )

        # Update mapped fields
        instance.update_fields_from_dataclass(dc_obj)

        return instance

    def update_fields_from_dataclass(self, dc_obj: DataclassT) -> None:
        """
        Update this Django model's fields from a Dataclass object's fields.

        Args:
            dc_obj: The Dataclass object containing source values.

        Raises:
            TypeError: If conversion to dict fails.
        """
        if (
            not dataclasses.is_dataclass(dc_obj)
            or dc_obj.__class__.__module__ != self._get_class().__module__
            or dc_obj.__class__.__name__ != self._get_class().__name__
        ):
            # Check type consistency before proceeding
            raise TypeError(
                f"Provided object type {type(dc_obj)} does not match expected type {self.class_path} for update."
            )

        try:
            dc_data = dataclasses.asdict(dc_obj)
        except TypeError as e:
            raise TypeError(f"Could not convert dataclass '{dc_obj.__class__.__name__}' to dict for update: {e}") from e

        # Get Django model fields excluding common/meta ones
        model_field_names = {
            field.name
            for field in self._meta.fields
            if field.name not in ("id", "name", "class_path", "created_at", "updated_at")
        }

        for field_name in model_field_names:
            if field_name in dc_data:
                value = dc_data[field_name]
                # Apply serialization (important for complex types like datetime, UUID, etc.)
                serialized_value = serialize_value(value)
                setattr(self, field_name, serialized_value)
            # Else: Field exists on Django model but not on dataclass, leave it unchanged.

    def to_dataclass(self) -> DataclassT:
        """
        Convert this Django model instance back to a Dataclass object.

        Returns:
            The reconstructed Dataclass object.

        Raises:
            ValueError: If the class cannot be loaded or instantiation fails.
        """
        dataclass_type = self._get_class()
        if not dataclasses.is_dataclass(dataclass_type):
            raise ValueError(f"Stored class path '{self.class_path}' does not point to a dataclass.")

        # Get data from Django fields corresponding to dataclass fields
        data_for_dc = self._get_data_for_dataclass(dataclass_type)

        # Instantiate the dataclass
        try:
            # TODO: Add deserialization logic if needed
            instance = dataclass_type(**data_for_dc)
            # Cast to the generic type variable for type hinting
            return cast(DataclassT, instance)
        except TypeError as e:
            raise ValueError(
                f"Failed to instantiate dataclass '{dataclass_type.__name__}' from Django model fields. "
                f"Ensure required fields exist and types are compatible. Error: {e}"
            ) from e
        except Exception as e:
            logger.error(f"An unexpected error occurred during dataclass reconstruction: {e}", exc_info=True)
            raise ValueError(f"An unexpected error occurred during dataclass reconstruction: {e}") from e

    def _get_data_for_dataclass(self, dataclass_type: type) -> dict[str, Any]:
        """Get data from Django fields that correspond to the target dataclass fields."""
        data = {}
        try:
            dc_field_names = {f.name for f in dataclasses.fields(dataclass_type)}
        except TypeError as err:
            # Should not happen if is_dataclass check passed, but handle defensively
            raise ValueError(f"Could not get fields for non-dataclass type '{dataclass_type.__name__}'") from err

        # Add DB fields that are part of the dataclass
        for field in self._meta.fields:
            if field.name in dc_field_names:
                # TODO: Add potential deserialization based on target dataclass field type?
                data[field.name] = getattr(self, field.name)

        # Context handling is usually Pydantic-specific, skip for dataclasses unless needed
        return data

    def update_from_dataclass(self, dc_obj: DataclassT) -> None:
        """
        Update this Django model with new data from a Dataclass object and save.

        Args:
            dc_obj: The Dataclass object with updated data.
        """
        # Verify the object type matches first (includes check if it's a dataclass)
        fully_qualified_name = self._verify_object_type_match(dc_obj)

        # Update the class_path if somehow inconsistent
        if self.class_path != fully_qualified_name:
            self.class_path = fully_qualified_name

        self.update_fields_from_dataclass(dc_obj)
        self.save()

    def save_as_dataclass(self) -> DataclassT:
        """
        Save the Django model and return the corresponding Dataclass object.

        Returns:
            The corresponding Dataclass object.
        """
        self.save()
        return self.to_dataclass()
```
from_dataclass(dc_obj, name=None) classmethod¶
Create a Django model instance from a Dataclass object, mapping fields.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dc_obj` | `DataclassT` | The Dataclass object to convert. | *required* |
| `name` | `str \| None` | Optional name for the Django model instance. | `None` |
Returns:

| Type | Description |
| --- | --- |
| `Dataclass2DjangoBaseClass[DataclassT]` | A new instance of this Django model subclass. |
Raises:

| Type | Description |
| --- | --- |
| `TypeError` | If the object is not a dataclass or not of the expected type. |
Source code in src/pydantic2django/django/models.py, lines 899–935 (shown in full in the class source above).
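A usage sketch with the hypothetical `Point`/`DjangoPoint` pair from above:

```python
# from_dataclass() copies the matching fields; it does not save the row.
point = Point(x=1.0, y=2.0)
django_point = DjangoPoint.from_dataclass(point, name="origin-offset")
django_point.save()
```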
save_as_dataclass()¶
Save the Django model and return the corresponding Dataclass object.
Returns:

| Type | Description |
| --- | --- |
| `DataclassT` | The corresponding Dataclass object. |
Source code in src/pydantic2django/django/models.py, lines 1044–1052 (shown in full in the class source above).
to_dataclass()¶
Convert this Django model instance back to a Dataclass object.
Returns:

| Type | Description |
| --- | --- |
| `DataclassT` | The reconstructed Dataclass object. |
Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the class cannot be loaded or instantiation fails. |
Source code in src/pydantic2django/django/models.py, lines 977–1007 (shown in full in the class source above).
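Round-tripping back out, continuing the same hypothetical sketch:

```python
restored = django_point.to_dataclass()
assert restored == Point(x=1.0, y=2.0)

# save_as_dataclass() is the save-then-convert convenience wrapper.
restored = django_point.save_as_dataclass()
```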
update_fields_from_dataclass(dc_obj)¶
Update this Django model's fields from a Dataclass object's fields.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dc_obj` | `DataclassT` | The Dataclass object containing source values. | *required* |

Raises:

| Type | Description |
| --- | --- |
| `TypeError` | If conversion to dict fails. |
Source code in src/pydantic2django/django/models.py, lines 937–974 (shown in full in the class source above).
update_from_dataclass(dc_obj)¶
Update this Django model with new data from a Dataclass object and save.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dc_obj` | `DataclassT` | The Dataclass object with updated data. | *required* |

Source code in src/pydantic2django/django/models.py, lines 1027–1042 (shown in full in the class source above).
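Continuing the hypothetical sketch, refreshing a stored row from a new dataclass value:

```python
moved = Point(x=3.0, y=4.0)
django_point.update_from_dataclass(moved)  # copies the fields, then save()s

# update_fields_from_dataclass() performs the same copy without saving.
```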
- Bases: `TypedClass2DjangoBase`, `Generic[TypedClassT]`

Base class for mapping generic Python class fields to Django model fields.
Source code in src/pydantic2django/django/models.py, lines 322–458:

```python
class TypedClass2DjangoBaseClass(TypedClass2DjangoBase, Generic[TypedClassT]):
    """
    Base class for mapping generic Python class fields to Django model fields.
    """

    class Meta(TypedClass2DjangoBase.Meta):
        abstract = True
        verbose_name = "Mapped Typed Class"
        verbose_name_plural = "Mapped Typed Classes"

    @classmethod
    def from_typedclass(
        cls, typed_obj: TypedClassT, name: str | None = None
    ) -> "TypedClass2DjangoBaseClass[TypedClassT]":
        """
        Create a Django model instance from a generic class object, mapping fields.
        """
        obj_class, class_name, module_name, fqn = cls._get_class_info(typed_obj)
        cls._check_expected_type(typed_obj, class_name)
        derived_name = cls._derive_name(typed_obj, name, class_name)

        instance = cls(
            name=derived_name,
            class_path=fqn,
        )
        instance.update_fields_from_typedclass(typed_obj)
        return instance

    def update_fields_from_typedclass(self, typed_obj: TypedClassT) -> None:
        """
        Update this Django model's fields from a generic class object's fields.
        """
        if (
            typed_obj.__class__.__module__ != self._get_class().__module__
            or typed_obj.__class__.__name__ != self._get_class().__name__
        ):
            raise TypeError(
                f"Provided object type {type(typed_obj)} does not match expected type {self.class_path} for update."
            )

        # Extract data. For mapped fields, we prefer __init__ params or direct attrs.
        # Using __dict__ might be too broad here if not all dict items are mapped fields.
        # Let's assume attributes corresponding to Django fields are directly accessible.
        model_field_names = {
            field.name
            for field in self._meta.fields
            if field.name not in ("id", "name", "class_path", "created_at", "updated_at")
        }

        for field_name in model_field_names:
            if hasattr(typed_obj, field_name):
                value = getattr(typed_obj, field_name)
                setattr(self, field_name, serialize_value(value))
            # Else: Field on Django model, not on typed_obj. Leave as is or handle.

    def to_typedclass(self) -> TypedClassT:
        """
        Convert this Django model instance back to a generic class object.
        """
        target_class = self._get_class()
        data_for_typedclass = self._get_data_for_typedclass(target_class)

        try:
            # This assumes target_class can be instantiated with these parameters.
            # Deserialization from serialize_value needs to be considered for complex types.
            # TODO: Implement robust deserialization
            deserialized_data = dict(data_for_typedclass.items())  # Placeholder

            init_sig = inspect.signature(target_class.__init__)
            valid_params = {k: v for k, v in deserialized_data.items() if k in init_sig.parameters}

            instance = target_class(**valid_params)

            # For attributes not in __init__ but set on Django model, try to setattr
            for key, value in deserialized_data.items():
                if key not in valid_params and hasattr(instance, key) and not key.startswith("_"):
                    try:
                        setattr(instance, key, value)  # value is already deserialized_data[key]
                    except AttributeError:
                        logger.debug(
                            f"Could not setattr {key} on {target_class.__name__} during to_typedclass reconstruction."
                        )
            return cast(TypedClassT, instance)
        except TypeError as e:
            raise ValueError(
                f"Failed to instantiate typed class '{target_class.__name__}' from Django fields. Error: {e}"
            ) from e
        except Exception as e:
            logger.error(f"An unexpected error occurred during typed class reconstruction: {e}", exc_info=True)
            raise ValueError(f"An unexpected error occurred during typed class reconstruction: {e}") from e

    def _get_data_for_typedclass(self, target_class: type) -> dict[str, Any]:
        """Get data from Django fields that correspond to the target class's likely attributes."""
        data = {}
        # Heuristic: get attributes from __init__ signature and class annotations
        # This is a simplified version. A robust solution might need to align with
        # TypedClassFieldFactory's discovery.

        # Potential attributes: from __init__
        potential_attrs = set()
        try:
            init_sig = inspect.signature(target_class.__init__)
            potential_attrs.update(p for p in init_sig.parameters if p != "self")
        except (TypeError, ValueError):
            pass

        # Potential attributes: from class annotations (less reliable for instance data if not in __init__)
        # try:
        #     annotations = inspect.get_annotations(target_class)
        #     potential_attrs.update(annotations.keys())
        # except Exception:
        #     pass

        for field in self._meta.fields:
            if field.name in potential_attrs or hasattr(target_class, field.name):  # Check if it's a likely attribute
                # TODO: Add deserialization logic based on target_class's type hint for field.name
                data[field.name] = getattr(self, field.name)
        return data

    def update_from_typedclass(self, typed_obj: TypedClassT) -> None:
        """
        Update this Django model with new data from a generic class object and save.
        """
        fqn = self._verify_object_type_match(typed_obj)
        if self.class_path != fqn:
            self.class_path = fqn
        self.update_fields_from_typedclass(typed_obj)
        self.save()

    def save_as_typedclass(self) -> TypedClassT:
        """
        Save the Django model and return the corresponding generic class object.
        """
        self.save()
        return self.to_typedclass()
```
from_typedclass(typed_obj, name=None) classmethod¶
Create a Django model instance from a generic class object, mapping fields.
Source code in src/pydantic2django/django/models.py, lines 332–348 (shown in full in the class source above).
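A usage sketch; `Job` and `DjangoJob` are hypothetical names, with `DjangoJob(TypedClass2DjangoBaseClass[Job])` assumed to declare `queue` and `priority` Django fields that mirror the class attributes:

```python
class Job:  # hypothetical plain Python class
    def __init__(self, queue: str, priority: int = 0):
        self.queue = queue
        self.priority = priority


job = Job(queue="default", priority=5)
django_job = DjangoJob.from_typedclass(job, name="nightly-report")
django_job.save()
```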
save_as_typedclass()¶
Save the Django model and return the corresponding generic class object.
Source code in src/pydantic2django/django/models.py, lines 453–458 (shown in full in the class source above).
to_typedclass()¶
Convert this Django model instance back to a generic class object.
Source code in src/pydantic2django/django/models.py, lines 378–413 (shown in full in the class source above).
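Note that reconstruction filters the stored values through the target class's `__init__` signature and assigns any leftovers with `setattr`. Continuing the hypothetical `Job`/`DjangoJob` sketch:

```python
restored = django_job.to_typedclass()
assert restored.queue == "default" and restored.priority == 5
```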
update_fields_from_typedclass(typed_obj)¶
Update this Django model's fields from a generic class object's fields.
Source code in src/pydantic2django/django/models.py, lines 350–375 (shown in full in the class source above).
update_from_typedclass(typed_obj)¶
Update this Django model with new data from a generic class object and save.
Source code in src/pydantic2django/django/models.py, lines 442–451 (shown in full in the class source above).
Additional docs:
- Context handling: docs/Architecture/CONTEXT_HANDLING.md
- Pydantic ↔ Django methods: docs/Architecture/README_PYDANTIC_DJANGO_METHODS.md
- Django storage options: docs/Architecture/README_DJANGO_STORAGE_OPTIONS.md
Extending the system¶
- To support a new source type, implement (see the sketch after this list):
  - A `Discovery` subclass of `BaseDiscovery`.
  - A `FieldFactory` and `ModelFactory` pair using `BidirectionalTypeMapper`.
  - A `Generator` subclass of `BaseStaticGenerator` that wires everything together and prepares template context.
- To add or refine type mappings, provide new `TypeMappingUnit` subclasses and ensure `BidirectionalTypeMapper` registry order or match scores select them appropriately.
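As a minimal, hypothetical skeleton of those three pieces (every `Attrs*` name is invented for illustration; verify the import path and the base classes' abstract methods against the package before copying):

```python
# Hypothetical adapter skeleton for a new source type (attrs classes, say).
from pydantic2django.core.base_generator import BaseStaticGenerator


class AttrsDiscovery:  # would subclass BaseDiscovery: find attrs classes in scanned packages
    ...


class AttrsFieldFactory:  # would map one attrs attribute to a Django field via the BidirectionalTypeMapper
    ...


class AttrsModelFactory:  # would assemble the in-memory Django model from the mapped fields
    ...


class AttrsGenerator(BaseStaticGenerator):  # wires discovery + factories, prepares template context
    ...
```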
mkdocstrings usage¶
Each `:::` block above renders live API documentation using mkdocstrings with the Python handler, configured in `mkdocs.yml` with `handlers.python.paths: [src]`. See the official guide: mkdocstrings usage.
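For reference, a directive of this shape renders one of the blocks above; the object path is illustrative:

```markdown
::: pydantic2django.django.models.Dataclass2DjangoBaseClass
    options:
      show_source: true
```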