The PMM server AMI currently has a ticking time bomb built into its storage configuration.
The DataLV logical volume is built inside a thin pool LV (ThinPool), which consists of a ThinPool_tdata volume and a ThinPool_tmeta volume. The default size of ThinPool_tmeta, where the pool's metadata is stored, is only 16 MiB (perhaps because the default volume is only 8 GB?).
Increasing the size of DataLV/ThinPool does not increase the size of ThinPool_tmeta. Not being familiar with thin pools, I thought I had plenty of disk space after increasing DataLV. I found pmm-server completely locked up today, and only then realized that the metadata volume was full. (This lack of space does not show up in df; only lvs reflects it.)
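For reference, the metadata exhaustion can be spotted with lvs. A sketch of the check (the "VolGroup" name below is a placeholder, not necessarily the volume group name the AMI actually uses):

```
# Show all LVs, including the hidden tdata/tmeta components,
# along with data and metadata usage percentages.
lvs -a -o lv_name,lv_size,data_percent,metadata_percent VolGroup

# A Meta% close to 100 on the ThinPool line means the 16 MiB
# ThinPool_tmeta volume is nearly exhausted, even while df still
# shows free space on the filesystem inside DataLV.
```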
It's not clear to me what benefits the thin pool brings here. At a minimum, this feels like something that should be documented as part of the install process.
The documentation at https://www.percona.com/doc/percona-monitoring-and-management/deploy/server/ami.html#resizing-the-ebs-volume explains how to resize the ThinPool, but says nothing about making sure the metadata volume is also large enough.
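For completeness, here is a hedged sketch of what the missing step might look like. The LV/VG names are placeholders and the size is illustrative, not a recommendation:

```
# Grow the thin pool's data space, as the resize docs describe...
lvextend -l +100%FREE VolGroup/ThinPool

# ...and also grow the pool's metadata LV, which extending the
# pool does not necessarily do on its own. --poolmetadatasize
# extends the ThinPool_tmeta volume.
lvextend --poolmetadatasize +256M VolGroup/ThinPool
```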
As another option, it might be worth enabling thin_pool_autoextend_threshold in /etc/lvm/lvm.conf.
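If autoextend is the chosen fix, the relevant knobs live in the activation section of /etc/lvm/lvm.conf. A sketch (the threshold and percent values are illustrative, and dmeventd monitoring must be active for autoextension to fire):

```
activation {
    # Autoextend the thin pool once it reaches 80% usage...
    thin_pool_autoextend_threshold = 80
    # ...growing it by 20% of its current size each time.
    thin_pool_autoextend_percent = 20
    # Monitoring (dmeventd) must be enabled for autoextend to trigger.
    monitoring = 1
}
```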
Here is the relevant section as it is in the AMI: