
zingg 0.4.0 in Docker consumes lots of disk space

Open iqoOopi opened this issue 1 year ago • 26 comments

I'm running Zingg in Docker, matching 2.7M records across 10 columns. On my 6-core, 32 GB RAM machine it has been running for 14 hours and is still going. It has already consumed 230 GB of disk space.

For the past few hours it looks like it has been writing to rdd_71_1 nonstop.

Is this normal?


iqoOopi avatar Sep 15 '24 04:09 iqoOopi

That should not happen. What is the labelDataSampleSize and number of matches you have labeled?

sonalgoyal avatar Sep 15 '24 06:09 sonalgoyal

So I trained the model with 60K records first, then ran the match on the same 60K. Everything worked and the results were accurate. labelDataSampleSize for the 60K records was 0.1, and the number of labeled matches was roughly 60.

Then I wanted to try it on 2.7M records (same table schema, just more records), so I changed labelDataSampleSize to 0.001. The "findAndLabel" command worked fine, but "match" never finished and consumed all my disk space (more than 300 GB).

iqoOopi avatar Sep 15 '24 11:09 iqoOopi
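For context, labelDataSampleSize is a fraction of the input that gets sampled for labeling, so the same setting means very different absolute sample sizes as the data grows: 0.1 of 60K is about 6,000 records, while 0.001 of 2.7M is only about 2,700. A minimal sketch of the relevant config.json keys, using the values mentioned in this thread (all other keys omitted):

```json
{
  "labelDataSampleSize": 0.001,
  "numPartitions": 4
}
```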

It is better to run train on a data size close to the one you want to run match on. Can you please run train with the bigger 2.7m dataset and try match post that?

sonalgoyal avatar Sep 15 '24 11:09 sonalgoyal

Oh, I forgot to mention: after I switched to 2.7M, I reran a few rounds of "findAndLabel" and retrained the model as well. Then I started the match.

iqoOopi avatar Sep 15 '24 11:09 iqoOopi

Ok. It seems the blocking model has not been trained well. Zingg jobs are more compute intensive than memory intensive, and the training samples help learn the blocking model, which helps parallelise the job across cores. Do you have many null values in your dataset that may be getting clubbed together?

sonalgoyal avatar Sep 15 '24 12:09 sonalgoyal

Yes, each record has 15 fields, and all of them are nullable. I have quite a few records that only have values in 2 fields (firstName, phone), while the remaining 13 fields are null.

iqoOopi avatar Sep 15 '24 12:09 iqoOopi
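A quick way to quantify this kind of sparsity before training is to count how many records carry signal in only a couple of fields. A plain-Python sketch (the field names mirror the ones mentioned in the thread; the records are made-up stand-ins for the real table):

```python
# Sparsity check: how many records have almost all fields null?
FIELDS = ["firstName", "phone", "email", "dob"]  # illustrative subset of the 15 fields

records = [
    {"firstName": "Ana", "phone": "555-0101", "email": None, "dob": None},
    {"firstName": "Bo", "phone": "555-0102", "email": "bo@x.com", "dob": "1990-01-01"},
    {"firstName": "Cy", "phone": "555-0103", "email": None, "dob": None},
]

def non_null_count(rec):
    """Number of fields in FIELDS that actually hold a value."""
    return sum(1 for f in FIELDS if rec.get(f) is not None)

sparse = [r for r in records if non_null_count(r) <= 2]
print(f"{len(sparse)} of {len(records)} records have signal in at most 2 fields")
```

Running this kind of count on the real 2.7M-row table would show what share of the data offers the blocking model almost nothing to work with.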

Ah. That’s a tricky one. Are the null fields important to the match?

sonalgoyal avatar Sep 15 '24 12:09 sonalgoyal

Yes, they are important when they have a value, like SIN number, DOB, email address, etc.

iqoOopi avatar Sep 15 '24 12:09 iqoOopi

I suspect you will need a lot more training data in terms of matching pairs for combinations of these null values. Nulls are tough to block as they have zero signal. Maybe add more labeled pairs through trainingSamples and see if that changes things?

sonalgoyal avatar Sep 15 '24 12:09 sonalgoyal
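The "zero signal" point can be illustrated with a toy blocking sketch. This is not Zingg's actual algorithm (Zingg learns its blocking functions); it is a naive keyed-grouping example in plain Python showing how null-valued fields all collapse into one oversized block:

```python
from collections import defaultdict

# Toy blocking: group records by a key derived from a field. Every record
# whose field is null lands in the same block, which then forces a large
# number of pairwise comparisons.
records = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},
    {"id": 3, "email": None},
    {"id": 4, "email": "b@x.com"},
    {"id": 5, "email": None},
]

blocks = defaultdict(list)
for r in records:
    key = r["email"]  # a real blocking key would be a hash/prefix of the field
    blocks[key].append(r["id"])

# Pairs compared within a block of size n is n*(n-1)/2, so one oversized
# null block dominates the total work.
pairs = {k: len(v) * (len(v) - 1) // 2 for k, v in blocks.items()}
print(pairs)
```

With only 3 of 5 records null, the null block already accounts for all 3 of the comparisons; at 2.7M records with 13-of-15 fields null, that skew explains runaway disk usage during matching.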

> Seems it’s not trained the blocking model.

Will try training more. Btw, how did you figure out it is not using the blocking model?

iqoOopi avatar Sep 15 '24 12:09 iqoOopi

I have seen it in the past on one dataset which did not have a lot of values populated. If there is no signal, it is hard to learn. I think we should build some way to let users know this while running Zingg.

sonalgoyal avatar Sep 15 '24 13:09 sonalgoyal

Also, when running Zingg, how can I tell it is analyzing something? Nothing shows in the log, so it is hard to tell whether it is working normally or whether I need to abort the current task.

iqoOopi avatar Sep 16 '24 04:09 iqoOopi

These warnings are ok. If the logs are not moving at all, that may be a sign.

sonalgoyal avatar Sep 16 '24 04:09 sonalgoyal

If you are familiar with Spark, you could look at the Spark UI.

sonalgoyal avatar Sep 16 '24 04:09 sonalgoyal

Btw, in the config.json file I have "trainingSamples" and "data" sections, both pointing to SQL Server tables. I'm wondering whether the schema order matters. In trainingSamples I have "schema": A string,B string,C string, but in the data section the "schema" is A string,C string,B string. Since they are SQL tables, I think the order does not matter, but I just want to confirm.

iqoOopi avatar Sep 16 '24 04:09 iqoOopi


I just restarted the match after retraining with around 100 labeled matches. In the Spark UI I saw only 1 active job with 3 tasks. Is this normal?

iqoOopi avatar Sep 16 '24 16:09 iqoOopi

What is the numPartitions setting? For better parallelisation, you want it to be 4-5 times your number of cores.

sonalgoyal avatar Sep 16 '24 16:09 sonalgoyal
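The 4-5x rule of thumb is simple arithmetic; a tiny helper (hypothetical, not part of Zingg) makes it concrete:

```python
# Rule of thumb from the thread: numPartitions should be roughly 4-5x the
# core count, so every core keeps work queued while skewed tasks finish.
def suggested_num_partitions(cores: int, factor: int = 4) -> int:
    """Suggested partition count for a given number of cores."""
    return cores * factor

print(suggested_num_partitions(8))     # 32
print(suggested_num_partitions(8, 5))  # 40
```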

Ah, thanks @sonalgoyal for the quick reply. In the config.json file it is only 4. My CPU has 4 cores and 8 threads (the Docker interface shows 8 CPUs), so should I give it 4 x 5 or 8 x 5? Also, in the zingg.conf file I saw a "spark.default.parallelism" setting that is commented out; what should that value be?

iqoOopi avatar Sep 16 '24 16:09 iqoOopi

> btw, in the config.json file I have "trainingSamples" and "data" sections, both pointing to SQL Server tables: does the schema order matter?

@sonalgoyal also, how about this question?

iqoOopi avatar Sep 16 '24 16:09 iqoOopi

I set both numPartitions and spark.default.parallelism to 20. Does this look normal?


iqoOopi avatar Sep 16 '24 17:09 iqoOopi
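For the zingg.conf question, a sketch of what uncommenting the property might look like. The value of 32 is an assumption, not a verified recommendation: it just applies the same 4-5x-cores heuristic mentioned in the thread to an 8-core machine.

```properties
# zingg.conf (sketch): spark.default.parallelism is a standard Spark
# property; 32 assumes 8 cores x 4, mirroring the numPartitions heuristic.
spark.default.parallelism=32
```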

> btw, in the config.json file I have "trainingSamples" and "data" sections, both pointing to SQL Server tables: does the schema order matter?

@sonalgoyal also, how about this question?

I think the SQL Server dataframe should be read correctly, but I am not 100% sure as that's a case we have not tested. Are you seeing an issue?

sonalgoyal avatar Sep 16 '24 17:09 sonalgoyal

> I think the SQL Server dataframe should be read correctly, but I am not 100% sure as that's a case we have not tested. Are you seeing an issue?

No, I have not seen any issues. From the findAndLabel command results, I can see the model is doing its job.

iqoOopi avatar Sep 16 '24 17:09 iqoOopi

how did it go @iqoOopi ?

sonalgoyal avatar Oct 01 '24 13:10 sonalgoyal

No success yet; it never finished the scan of our 2.7M records.

iqoOopi avatar Oct 02 '24 19:10 iqoOopi

can you share the complete logs?

sonalgoyal avatar Oct 02 '24 19:10 sonalgoyal

This is likely a case of a poorly formed blocking model. That can happen due to too little training data, but the user has mentioned that they have labelled sufficiently. Hard to say more without logs or sample data. @iqoOopi, would you be open to a debug session on this?

sonalgoyal avatar Oct 16 '24 15:10 sonalgoyal

The verifyBlocking phase should now give the ability to inspect the blocking tree.

sonalgoyal avatar Jan 22 '25 05:01 sonalgoyal