Activity log for bug #2024258

Date Who What changed Old value New value Message
2023-06-16 18:31:23 melanie witt bug added bug
2023-06-16 18:33:36 melanie witt description

Old value:
Observed downstream in a large scale cluster with constant create/delete server activity and hundreds of thousands of deleted instances rows. Currently, we archive deleted rows in batches of max_rows parents + their child rows in a single database transaction. Doing it that way limits how high a value of max_rows can be specified by the caller because of the size of the database transaction it could generate. For example, in a large scale deployment with hundreds of thousands of deleted rows and constant server creation and deletion activity, a value of max_rows=1000 might exceed the database's configured maximum packet size or timeout due to a database deadlock, forcing the operator to use a much lower max_rows value like 100 or 50. And when the operator has e.g. 500,000 deleted instances rows (and millions of deleted rows total) they are trying to archive, being forced to use a max_rows value several orders of magnitude lower than the number of rows they need to archive is a poor user experience and makes it unclear if archive progress is actually being made.

New value:
Observed downstream in a large scale cluster with constant create/delete server activity and hundreds of thousands of deleted instances rows. Currently, we archive deleted rows in batches of max_rows parents + their child rows in a single database transaction. Doing it that way limits how high a value of max_rows can be specified by the caller because of the size of the database transaction it could generate. For example, in a large scale deployment with hundreds of thousands of deleted rows and constant server creation and deletion activity, a value of max_rows=1000 might exceed the database's configured maximum packet size or timeout due to a database deadlock, forcing the operator to use a much lower max_rows value like 100 or 50. And when the operator has e.g. 500,000 deleted instances rows (and millions of deleted rows total) they are trying to archive, being forced to use a max_rows value several orders of magnitude lower than the number of rows they need to archive is a poor user experience and also makes it unclear if archive progress is actually being made.

(An illustrative sketch of the single-transaction batching described above follows this activity log.)
2023-06-16 18:34:22 melanie witt nominated for series nova/xena
2023-06-16 18:34:22 melanie witt bug task added nova/xena
2023-06-16 18:34:22 melanie witt nominated for series nova/antelope
2023-06-16 18:34:22 melanie witt bug task added nova/antelope
2023-06-16 18:34:22 melanie witt nominated for series nova/zed
2023-06-16 18:34:22 melanie witt bug task added nova/zed
2023-06-16 18:34:22 melanie witt nominated for series nova/wallaby
2023-06-16 18:34:22 melanie witt bug task added nova/wallaby
2023-06-16 18:34:22 melanie witt nominated for series nova/yoga
2023-06-16 18:34:22 melanie witt bug task added nova/yoga
2023-06-16 19:08:04 OpenStack Infra nova: status New In Progress
2023-08-21 13:43:46 Christian Rohmann bug added subscriber Christian Rohmann
2023-10-24 23:05:33 melanie witt nova: status In Progress Fix Released
2023-10-24 23:06:38 melanie witt nova/antelope: status New In Progress
2023-10-24 23:07:12 melanie witt nova/zed: status New In Progress
2023-10-24 23:07:41 melanie witt nova/yoga: status New In Progress
2023-10-24 23:08:10 melanie witt nova/xena: status New In Progress
2023-10-24 23:08:38 melanie witt nova/wallaby: status New In Progress
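
The bug description refers to archiving up to max_rows deleted parent rows together with all of their child rows inside one database transaction. Below is a minimal, hypothetical sketch of that pattern; it is not nova's actual code, and the SQLAlchemy table definitions, column names, and the archive_deleted_rows function are illustrative assumptions. It is only meant to show why the transaction size grows with max_rows.

# Hypothetical, simplified sketch of the archiving pattern described in the
# bug report: up to max_rows "parent" rows plus all of their child rows are
# moved to shadow tables inside one database transaction. Table and column
# names are illustrative, not nova's actual schema.
import sqlalchemy as sa

metadata = sa.MetaData()

instances = sa.Table(
    "instances", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("deleted", sa.Integer, default=0),
)
shadow_instances = sa.Table(
    "shadow_instances", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("deleted", sa.Integer, default=0),
)
instance_faults = sa.Table(
    "instance_faults", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("instance_id", sa.Integer, sa.ForeignKey("instances.id")),
)
shadow_instance_faults = sa.Table(
    "shadow_instance_faults", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("instance_id", sa.Integer),
)


def archive_deleted_rows(engine, max_rows):
    """Archive up to max_rows deleted parents and all their children.

    Everything happens in a single transaction, so a large max_rows value
    produces a correspondingly large transaction -- the behavior the bug
    report says can exceed packet-size limits or deadlock timeouts.
    """
    with engine.begin() as conn:  # one transaction for the whole batch
        parent_ids = [
            row.id
            for row in conn.execute(
                sa.select(instances.c.id)
                .where(instances.c.deleted != 0)
                .limit(max_rows)
            )
        ]
        if not parent_ids:
            return 0
        # Copy and delete child rows first to satisfy FK constraints.
        children = conn.execute(
            sa.select(instance_faults).where(
                instance_faults.c.instance_id.in_(parent_ids)
            )
        ).mappings().all()
        if children:
            conn.execute(
                sa.insert(shadow_instance_faults), [dict(c) for c in children]
            )
            conn.execute(
                sa.delete(instance_faults).where(
                    instance_faults.c.instance_id.in_(parent_ids)
                )
            )
        # Then copy and delete the parent rows themselves.
        parents = conn.execute(
            sa.select(instances).where(instances.c.id.in_(parent_ids))
        ).mappings().all()
        conn.execute(sa.insert(shadow_instances), [dict(p) for p in parents])
        conn.execute(sa.delete(instances).where(instances.c.id.in_(parent_ids)))
        return len(parent_ids)

A caller would exercise the sketch with something like engine = sa.create_engine(url); metadata.create_all(engine); archive_deleted_rows(engine, max_rows=1000). In this shape every copy and delete stays inside the single engine.begin() transaction, so raising max_rows directly inflates the transaction size, which is the packet-size and deadlock-timeout problem the bug describes.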