From issue 409: applying upgrade/downgrade operations to tenant-specific schemas is most easily done using translated schema names (schema_translate_map), which is also how you would normally run the main application in a multi-tenant setup.
Go into env.py:
import logging

from alembic import context
from sqlalchemy import engine_from_config, pool

# config, target_metadata and all_my_tenant_names are assumed to be
# defined earlier in env.py, as in the standard Alembic template
logger = logging.getLogger("alembic.env")


def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        for tenant_schema_name in all_my_tenant_names:
            conn = connection.execution_options(
                schema_translate_map={None: tenant_schema_name})

            logger.info("Migrating tenant schema %s" % tenant_schema_name)
            context.configure(
                connection=conn,
                target_metadata=target_metadata
            )

            # this does each tenant in its own transaction;
            # move this up (outside the for loop) to do all tenants
            # in one giant transaction
            with context.begin_transaction():
                context.run_migrations()
The above will translate the None schema name (i.e. tables with no explicit schema) into the given tenant name. If the application shares tenant-based schemas with a default schema that has global tables, then you'd use some token like "tenant_schema" as the symbol:
for tenant_schema_name in all_my_tenant_names:
    conn = connection.execution_options(
        schema_translate_map={"tenant_schema": tenant_schema_name})
and in migration files refer to "tenant_schema" where the actual tenant-specific schema name goes:
def upgrade():
    op.alter_column("some_table", "some_column", <migration options>, schema="tenant_schema")
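On the application side, the same token is what the Table metadata would spell out. A minimal sketch, with made-up table names, where tenant-specific tables carry schema="tenant_schema" and global tables stay in the default schema:

from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()

# tenant-specific table: the "tenant_schema" token gets translated per
# tenant at runtime via schema_translate_map
some_table = Table(
    "some_table", metadata,
    Column("id", Integer, primary_key=True),
    Column("some_column", String(50)),
    schema="tenant_schema",
)

# global table: no schema= given, so it stays in the default schema and
# is untouched by the translation above
global_settings = Table(
    "global_settings", metadata,
    Column("id", Integer, primary_key=True),
)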
For the "autogenerate" case, the solution @nick-retallack provides has some more of the pieces you would use on that end, namely the use of include_schemas so that autogenerate looks only at a "specimen" schema that represents the latest version of the tenant-specific schema.
In order to set up env.py to use the right system for the right command, the behaviors can be controlled using user-defined options with context.get_x_argument().
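For example, env.py can read the -x options given on the command line; the "tenant" option name below is just an illustration:

# user-defined -x options, e.g.:
#   alembic -x tenant=acme upgrade head   -> migrate a single tenant
#   alembic upgrade head                  -> migrate every tenant
x_args = context.get_x_argument(as_dictionary=True)

tenant = x_args.get("tenant")
if tenant is not None:
    tenant_names = [tenant]
else:
    tenant_names = all_my_tenant_names

The migration loop would then iterate over tenant_names instead of all_my_tenant_names.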