This issue concerns the performance of DDL statements when the adaptive hash index (AHI) is enabled.
When a DDL statement drops a huge index, the AHI entries for every page of that index that is currently in the buffer pool have to be removed.
This is because once a table or index is dropped, its metadata (dict_table_t or dict_index_t) is gone, and a page can no longer be parsed into individual records. The AHI hash key is built from table_id, index_id, and a record prefix (the PK?).
So the AHI entries have to be removed before a table or index can be dropped, and this takes a lot of CPU when the buffer pool and the indexes are huge.
The idea of this enhancement is to allow lazy deletion of AHI entries even after a table is dropped.
Currently I have two ideas:
1) Use (table_id, index_id) to identify entries in the AHI hash table. This way there is no need to depend on the actual record structure (and the dependency on the table/index structure should disappear), so entries can be removed even after the metadata is gone.
2) Create a minimal dict_index_t that can be used to compute record offsets. So after the real index is gone, this minimal dict_index_t would be used to rebuild the AHI key of each entry that has to be removed.
These are just ideas; a lot more work is required to make them concrete.