Although this does not fix the underlying issue, it can sometimes be mitigated by archiving old data from the database. The two biggest culprits are the nova and gnocchi tables. You can see the row counts with:
SELECT table_name, table_rows FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'nova'
ORDER BY table_rows;
and
SELECT table_name, table_rows
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'gnocchi'
ORDER BY table_rows;
The nova-cloud-controller charm exposes an action called 'archive-data', which runs "nova-manage db archive_deleted_rows" with a given batch size. This moves deleted rows from the production tables to shadow tables. If there is a large backlog you will need to run it many times.
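For example, under juju the action can be driven in a loop. This is only a sketch: the unit name, the Juju command syntax, and the batch size of 10000 are all assumptions you should adapt to your deployment.

```shell
# Example only: run the charm's archive-data action with a batch
# size of 10000. (Juju 2.x syntax shown; Juju 3.x replaces
# "juju run-action --wait" with "juju run".)
juju run-action --wait nova-cloud-controller/0 archive-data batch-size=10000

# If the backlog is large, repeat the action until its output
# reports that no more rows were moved to the shadow tables.
```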
Unfortunately things are a bit more tricky with gnocchi; as far as I know the only option is to purge it through the API, and that is slow. It took us about three weeks to purge old data. I'll pastebin the script we use, but be warned that it will DELETE INDISCRIMINATELY: https://paste.ubuntu.com/p/gNN6rZbVDj/
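If you would rather roll your own, the same kind of purge can be sketched with the gnocchi CLI (an assumption here: the gnocchiclient package is installed and your OpenStack credentials are sourced). Like the pasted script, this deletes every resource it lists, so narrow the listing before running it for real.

```shell
# DANGER: deletes every gnocchi resource returned by the list.
# Filter the listing (e.g. by resource type or a search query on
# ended_at) before using this against a production cloud.
for id in $(gnocchi resource list -f value -c id); do
    gnocchi resource delete "$id"
done
```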