perch2_resource_log size
Hi,
I'm using Perch Runway (2.7.6).
The perch2_resource_log table has become very large - over 4 million rows.
There are only a few hundred assets uploaded and the website is not really that large.
Is it correct that this table has become this large?
Just concerned this is going to get bigger and bigger and make migrating the database to the production server troublesome.
Thanks,
Rob
It should not be that large, no.
What can you tell me about how you're using assets?
I'm using a few different Collection templates. They mainly have 2 images - thumbnail and hero image. Then a repeating region (x 4 at most) with an image each. So maybe 7 images total max per Collection item.
It actually seems like this table has become corrupt now - couldn't open it or even run a repair. I've deleted the table and created a fresh one (structure only - no data). Is that ok or will it cause me issues?
I do have a backup of the table, but don't really want to restore 4 million records if not needed.
That will cause you issues. Turn off resource clean up or your images will be removed.
I think the repeaters are the issue - I'm looking into it.
Thought that might be the case, so I'd already put the setting to turn off resource clean-up in my config file.
Just let me know if you need anything from me - can send you a dump of the table if needed.
I've done a bunch of testing and code auditing, and I can't find anything that would cause the resource log to grow more than expected.
Can you show me the output of these two statements for your DB?
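The two statements aren't quoted in this copy of the thread; assuming the default perch2_ table prefix, they would likely have been something along these lines, showing the table definition with its indexes and the current row count:

SHOW CREATE TABLE perch2_resource_log;    -- table definition, including its indexes
SELECT COUNT(*) FROM perch2_resource_log; -- total number of rows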
Sure, here you go (I've changed the table prefix to ptbl)...
I think we've got it.
The idx_uni index, as the name suggests, should be a UNIQUE index. We insert into the resource log with an INSERT IGNORE INTO, expecting many (most, even) of the inserts to fail due to the unique key constraint. It's faster to do that than to search first and only insert if the row doesn't already exist. Your table has the index, but it's not unique, so the insert succeeds every time. I'm not sure how that gets up to 4 million rows - but as we're then into bug territory it almost isn't important. I think there must be exponential duplication happening when a new revision is created.
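A minimal sketch of that pattern, using hypothetical column names since the real schema isn't shown in this thread: with a UNIQUE key in place, a repeated INSERT IGNORE is silently skipped rather than adding another row.

-- Unique key over the logged combination means duplicates are rejected
CREATE TABLE resource_log_sketch (
    resourceID INT NOT NULL,
    itemRowID  INT NOT NULL,
    UNIQUE KEY idx_uni (resourceID, itemRowID)
);

-- The first insert adds a row; the identical second one is silently ignored
INSERT IGNORE INTO resource_log_sketch (resourceID, itemRowID) VALUES (1, 42);
INSERT IGNORE INTO resource_log_sketch (resourceID, itemRowID) VALUES (1, 42);

-- Without the UNIQUE constraint both inserts would succeed, and the table keeps growing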
So, how to fix this for you...
We can't just make the key unique - that will fail as you already have duplicate content. We'll need to de-duplicate it first. I'll see what I can figure out.
Ok, can you make a backup of your database, and then try this:
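The exact statements aren't preserved here, but a sketch of the approach described above - keep one copy of each row, swap the table, then add the missing UNIQUE key - with the same assumed column names:

-- Copy one row per unique combination to one side
CREATE TABLE perch2_resource_log_dedup AS
    SELECT DISTINCT resourceID, itemRowID
    FROM perch2_resource_log;

-- Swap the de-duplicated copy into place, keeping the original as a fallback
RENAME TABLE perch2_resource_log       TO perch2_resource_log_old,
             perch2_resource_log_dedup TO perch2_resource_log;

-- With the duplicates gone, the unique key can now be added
ALTER TABLE perch2_resource_log
    ADD UNIQUE KEY idx_uni (resourceID, itemRowID);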
Thanks Drew.
That's reduced the table to around 3,000 rows.
That sounds much better. Let me know how it goes - it should grow as you edit, but only moderately.