Syncing at this rate takes months
I have tried it on a VPS with 4 CPUs and with 8 CPUs, and it kept going as quickly as it is going now.
At the time of writing it is at block 15216; it needs to get to block 386980.
Now I see that there are chains with blocks in the millions using this type of explorer; how did they sync it?
At the moment it does about 90 blocks in 20 seconds.
https://i.gyazo.com/5b3c94a4322d48c1da2ab242e9bf1467.mp4
This is a video of the transaction indexing speed.
Could someone help?
A number of factors can lead to the slow sync:
- If you are on a version lower than 1.6.2, the explorer was not coded for parallel sync and is missing a number of enhancements that were brought in recently.
- If you're on the recent version and your coind is running remotely, there can be latency between you and the RPC response.
- If you're on the older version, the sync had to hit the explorer's /api, which means you're also waiting on additional HTTP/TCP traffic before even querying your coind. In newer versions we added "use_rpc" to bypass /api and hit the coind's RPC directly (see the settings sketch after this list).
- If you have a large number of txes per block, that'll severely slow you down.
- If you have a large number of vins and vouts (inputs/outputs) per tx per block.
- If mongo is on a remote server, or your server has slow I/O and write speeds, that will slow things down.
Probably the best way to figure out how to help would be the following information:
- Version of Iquidus, and what repo you obtained it from
- Coin being synced
- Is everything on a single VPS, or are there any remote servers?
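In case it helps, the relevant keys look roughly like this in the settings file (a sketch only; where exactly they sit in your own settings file may differ):

```js
// disable saving blocks & TXs via API during indexing.
"lock_during_index": false,

// wallet settings
"use_rpc": true,  // query the coind's RPC directly instead of going through /api
```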
Thanks for your quick response!
They are all on the same VPS. I started over today, since it was only at block 130k after almost a month of indexing and I saw that there were supposed to be improvements.
The version is Iquidus Explorer v1.7.3.
In the settings I have:
//heavy (enable/disable additional heavy features)
"heavy": false,
//disable saving blocks & TXs via API during indexing.
"lock_during_index": false,
I didn't know what heavy does.
I also put:
// wallet settings
"use_rpc": true,
I thought this would be the thing that allows going directly to RPC instead of through the API.
This is a small part (trimmed) of the output from the forever-running npm start:
GET /ext/summary 304 24.784 ms - -
GET /ext/getlasttxsajax/0?draw=10&... 200 237.746 ms - 1964
GET /ext/summary 304 19.722 ms - -
GET /ext/getlasttxsajax/0?draw=11&... 200 301.790 ms - 1983
GET /ext/summary 304 26.404 ms - -
GET /ext/getlasttxsajax/0?draw=12&... 200 232.297 ms - 1990
GET /ext/summary 304 15.535 ms - -
GET /ext/getlasttxsajax/0?draw=13&... 200 462.325 ms - 1972
GET /api/getpeerinfo 200 7.700 ms - 8163
GET /ext/summary 304 16.552 ms - -
GET /ext/getlasttxsajax/0?draw=14&... 200 328.477 ms - 1964
GET /ext/summary 304 13.057 ms - -
GET /ext/getlasttxsajax/0?draw=15&... 200 202.136 ms - 1947
GET /ext/summary 200 25.629 ms - 159
GET /ext/getlasttxsajax/0?draw=16&... 200 228.463 ms - 1971
GET /ext/summary 200 63.031 ms - 159
GET /ext/getlasttxsajax/0?draw=17&... 200 427.290 ms - 1964
GET /api/getpeerinfo 200 5.606 ms - 8218
I personally don't see a lot in this.
Thanks again.
Yep, so that output is normal explorer output, unrelated to the sync process and instead related to the fact that you're getting page loads on the explorer (or have just a single tab open with JavaScript doing its normal table refreshes).
So that's good: if that's the output while indexing, then you are definitely hitting RPC and bypassing /api.
Are you still slow as molasses now after the restart on the 1.7.3 codebase?
If no improvement, answer the other questions please and I'll do a test run myself.
It is 100% quicker; I am at 54k blocks now. So it's faster, but it will still take multiple days. Would be nice if it was quicker.
It started off way slower, probably because there are more transactions there. I wonder what will happen in the part where there were transaction attacks for about 10k blocks.
Yea, I believe there could be one more setting that @TheHolyRoger built in with his update, to add more tasks per run, that may help. I'll have to take a look at settings.js for what it is.
Edit: https://github.com/iquidus/explorer/blob/56e37523263bd81d57dfc420416c1b0426b875a7/lib/settings.js#L125
"block_parallel_tasks" try setting that to 4. Don't go extreme with it, but tinker around and see if it helps at all.
What I see in htop when I use an 8-core server is that every core is only being used at around 10-20%; with a 4-core it's a bit more, and with a 2-core it's at around 60%.
Maybe it's a way to max out the cores while syncing?
Check my edit above just in case you don't get a notification that I updated
The thing is, I'm afraid that if I stop it, it will need to start over from scratch.
Okay, I just did it and put the number at 6, but I feel like it's skipping blocks; it's going crazy fast.
https://i.gyazo.com/e7f52212f6d06b9ec75620f562ec03df.mp4
Check this vid ^
And it is all while staying at around 20-30% CPU usage.
https://i.gyazo.com/edd9b7d51f9259a7cd3bfa04402e11b9.png
I feel like it's skipping blocks or transactions or something.
Is there some way I can check in the database whether it's all going correctly?
In a way it is skipping. Basically, that setting says that by default we only want 1 worker to work from 0 to the block height (at the time of execution). By changing it to 6, we have now set up 6 jobs, where job 1 starts at block 1, job 2 at block 2, and so on, but they all start at the same time and move up through the blocks from there. So if Job2 has a block with, say, 10 transactions, and Job3 has a block with 4, then Job3 will finish and move on to the next block while Job2 is still working.
If you look at your video, and stop it, you'll see something like the following:
64987: txhash
64998: txhash
64997: txhash
64993: txhash
65000: txhash
64997: txhash
64999: txhash
64831: txhash
64993: txhash
65001: txhash
So if I were to guess and put this into perspective against my first paragraph's description:
JobA 64987: txhash
JobB 64998: txhash
JobC 64997: txhash
JobD 64993: txhash
JobE 65000: txhash
JobC 64997: txhash
JobF 64999: txhash
JobF 64831: txhash
JobA 64993: txhash
JobB 65001: txhash
Now obviously the order of which job is on which block is unimportant and just for illustration purposes. But this also shows us that JobF on block 64831 probably has a crap ton of txes (or that process is just severely behind) because it's still trucking through while the others are already moving up.
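To illustrate the idea roughly in code (a simplified sketch of the concept only, not the actual explorer implementation; indexBlock is a hypothetical stand-in for the real fetch-and-save work per block):

```js
// Each of N parallel tasks starts at a different height and strides by N,
// so a tx-heavy block only holds up its own task, not the whole sync.
const TASKS = 6; // e.g. "block_parallel_tasks": 6

async function indexBlock(height) {
  // stand-in for: getblockhash + getblock via RPC, then saving the block's txes to mongo
  console.log('indexing block', height);
}

async function runTask(taskId, endHeight) {
  for (let height = taskId + 1; height <= endHeight; height += TASKS) {
    await indexBlock(height);
  }
}

async function syncInParallel(endHeight) {
  // all tasks start at the same time and walk up their own slice of the chain
  await Promise.all(Array.from({ length: TASKS }, (_, id) => runTask(id, endHeight)));
}

syncInParallel(100).catch(console.error);
```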
As for a test you can do to make sure nothing is being skipped: my mongo is pretty shit, but I'd presume it's something along the lines of db.txes.distinct("blockindex"), which will give you everything from block 1 to the highest you have (you may need a .sort({ blockindex: -1 }) to get it in order?). Anyway, that will output each one. Then you can either write a simple for-loop to scan over each one and see if anything is missing, or put it into a spreadsheet in column A, put 1 to the highest number in column B, and check that they line up (look at the last number/row).
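Something roughly like this in the mongo shell would do that check (a sketch, assuming the collection is txes and the field is blockindex as in the query above; a gap could also simply be a block whose txes weren't stored for some other reason):

```js
// List every indexed blockindex and report any heights missing between the lowest and highest.
var heights = db.txes.distinct("blockindex").sort(function (a, b) { return a - b; });
var missing = [];
for (var i = 0; i < heights.length - 1; i++) {
  for (var h = heights[i] + 1; h < heights[i + 1]; h++) {
    missing.push(h); // height sits between two indexed blocks but has no txes stored
  }
}
print("indexed heights: " + heights.length + " (lowest " + heights[0] + ", highest " + heights[heights.length - 1] + ")");
print("missing heights in between: " + missing.length);
if (missing.length > 0) printjson(missing.slice(0, 20)); // show the first few gaps
```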
It's going way faster now. When I check it in the browser I can see it all happening, and it looks like it isn't skipping, since they are all in the correct order.
Thanks for the help!
@uaktags I hope you are able to help or clear this up as well.
You can check on usa.pacglobalexplorer.com or on eu.pacglobalexplorer.com; both instances are indexing and both have their own node on their server (the EU one was set up when I opened this issue).
Now they have gone from insanely fast to slow as a snail. These blocks have quite a lot of transactions in them, since there was a transaction attack going from block 140k to about 150k.
At the moment I have set it to 10 "workers" in the settings, and the CPUs on both machines are around 30-50%.
Is there a way to get through this part quicker?
CPU usage: https://i.gyazo.com/d3df8eeb2430cf4f9150f5e24dee68c8.png
I have PAC syncing now (1 hr to go for the coind to be updated) and I've just started Iquidus.
This really goes very slowly; you can check on the websites.
So I've run into the same, which removes the idea of your VPS being the culprit. I think it has to do with your coin's allowance of these extremely low tx amounts. Some blocks have numerous txes, and the txes are to the same address but with what look like dust amounts. So we're querying through and have to calculate through it all, which appears to be taxing. I haven't yet broken this out to start echoing where the bottleneck is, but I'm afraid I'm probably not going to be able to resolve this for the time being.
I'll keep a branch on my local running with pac for various testing in the meantime.
Yeah, at that time there were "transaction attacks" where someone was spamming the chain and filling up the blocks.
Fun stuff. It may be a point to look at to see what can be refactored in our transaction calculations to speed up the process. Unfortunately it's just going to have to wait at the moment I think. Let's get other updates pushed, everything stable, and then try tuning.
@koentjeappel you can try increasing this number https://github.com/iquidus/explorer/blob/master/lib/database.js#L790
But I'm 99% sure you'll end up with missing transactions/errors.
The save_tx logic (https://github.com/iquidus/explorer/blob/master/lib/database.js#L124) is too complex to be combined into one mongo .aggregate call
I'm in agreement. I went through and added some timers to see where the chugging is, and it's definitely within the sync loop over vins/vouts within save_tx.
150482: 22b3532c63da27688f88366c904df54f97d490707c59527517ab8ab7b1d5bf31
find tx: a7f66c08b5a3390f3b8e50e98c819c24fe058687f0bd35d3985f8f61e0a6d230
save tx: a7f66c08b5a3390f3b8e50e98c819c24fe058687f0bd35d3985f8f61e0a6d230
prepare vin a7f66c08b5a3390f3b8e50e98c819c24fe058687f0bd35d3985f8f61e0a6d230
prepare vout a7f66c08b5a3390f3b8e50e98c819c24fe058687f0bd35d3985f8f61e0a6d230
Elapsed: 2 seconds
sync loop over vin a7f66c08b5a3390f3b8e50e98c819c24fe058687f0bd35d3985f8f61e0a6d230 [ { addresses: 'PQjPFiHgMLeVxkVwSLHiNViScyCye2KpUR', amount: 60500 } ]
Elapsed: 0 seconds
sync loop over vout a7f66c08b5a3390f3b8e50e98c819c24fe058687f0bd35d3985f8f61e0a6d230 [ { addresses: 'PQjPFiHgMLeVxkVwSLHiNViScyCye2KpUR', amount: 44200 } ]
Time: 1579120431325 Elapsed: 2 seconds
Calculate Total a7f66c08b5a3390f3b8e50e98c819c24fe058687f0bd35d3985f8f61e0a6d230 [ { addresses: 'PQjPFiHgMLeVxkVwSLHiNViScyCye2KpUR', amount: 44200 } ]
Time: 1579120431.327 Elapsed: 2 seconds
Time Finished: 1579120431328 Elapsed: 2 seconds
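For reference, the timers were just quick instrumentation along these lines (a sketch, not the exact code that was added; prepareVin/prepareVout below are stand-ins for the real functions):

```js
// Time each phase of handling one transaction to see where it chugs.
function timed(label, fn) {
  const start = Date.now();
  const result = fn();
  console.log(label + ' elapsed: ' + (Date.now() - start) / 1000 + ' seconds');
  return result;
}

// Hypothetical stand-ins for the real prepare_vin / prepare_vout work.
const prepareVin = (tx) => tx.vin.map((v) => v.txid);
const prepareVout = (tx) => tx.vout.map((v) => v.value);

const tx = { vin: [{ txid: 'abc123' }], vout: [{ value: 1.5 }] };
const vins = timed('prepare vin', () => prepareVin(tx));
const vouts = timed('prepare vout', () => prepareVout(tx));
```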
If we change the prepare_vin and prepare_vout functions to use objects rather than arrays, and get rid of the expensive is_unique function, it should speed this up quite dramatically.
https://github.com/iquidus/explorer/blob/master/lib/explorer.js#L503
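Roughly what that change could look like (just a sketch of the idea, not the current prepare_vout code; it assumes the usual getrawtransaction vout layout with scriptPubKey.addresses and value):

```js
// Accumulate amounts per address in an object keyed by address, so each vout is an
// O(1) lookup instead of an is_unique() scan over an ever-growing array.
function aggregateVouts(vouts) {
  const byAddress = {};
  for (const vout of vouts) {
    if (!vout.scriptPubKey || !vout.scriptPubKey.addresses) continue; // skip empty/nonstandard vouts
    const address = vout.scriptPubKey.addresses[0];
    byAddress[address] = (byAddress[address] || 0) + vout.value;
  }
  // Convert back to the [{ addresses, amount }] shape seen in the timing output above.
  return Object.keys(byAddress).map((a) => ({ addresses: a, amount: byAddress[a] }));
}

// Example:
console.log(aggregateVouts([
  { scriptPubKey: { addresses: ['PQjPFiHgMLeVxkVwSLHiNViScyCye2KpUR'] }, value: 2 },
  { scriptPubKey: { addresses: ['PQjPFiHgMLeVxkVwSLHiNViScyCye2KpUR'] }, value: 3 },
]));
// => [ { addresses: 'PQjPFiHgMLeVxkVwSLHiNViScyCye2KpUR', amount: 5 } ]
```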
My indexing explorers are almost through that part :) So for me it won't change anything, but it would be cool if you guys could improve it.
Nice that people are working on this explorer; it looks good :)
Yea, I think it'll be important to have, even for you, as who knows whether a future update may require a reindex again (and that's likely to occur). At least until some plan is formed for how to check/repair a database. That'd be fun, haha.
On topic though: yea @TheHolyRoger, this is where I really started to fumble last year, trying to improve this save_tx as the main headache in using clustering in Node. I'll have to find some old examples/trials I had.
@uaktags @TheHolyRoger
Now it is past those slow blocks and it went like a train, but then it ran into this error, which I can't find on the GitHub:
/root/explorer/node_modules/bluebird/js/release/async.js:49
fn = function () { throw arg; };
^
TypeError: Cannot read property 'scriptPubKey' of undefined
at /root/explorer/lib/explorer.js:523:19
at Object.next (/root/explorer/lib/explorer.js:365:24)
at Object.syncLoop (/root/explorer/lib/explorer.js:376:10)
at Object.prepare_vout (/root/explorer/lib/explorer.js:497:20)
at /root/explorer/lib/database.js:129:13
at /root/explorer/lib/explorer.js:605:14
at Object.next (/root/explorer/lib/explorer.js:365:24)
at Object.syncLoop (/root/explorer/lib/explorer.js:376:10)
at Object.prepare_vin (/root/explorer/lib/explorer.js:582:20)
at /root/explorer/lib/database.js:128:11
at /root/explorer/lib/explorer.js:207:16
at Client.
@koentjeappel can you paste the block height or TX hash?
there's something weird going on in your vins somewhere
You can check the explorer for info you might need on pacglobalexplorer.com
Block 330,000 is where we swapped from PoW to PoS.
@koentjeappel the problem transaction is this one:
https://pacglobalexplorer.com/api/getrawtransaction?txid=9d5e10d4bd2bc486b918a87019746c096d76cea4612e34eee228262ccafec28d&decrypt=1
It has no vins or vouts... It's an easy fix, but I don't know the history of this TX?
What's the reason for no vins/vouts? Why does it have this "qcTx" attribute?
To fix: add a check for scriptPubKey https://github.com/iquidus/explorer/blob/master/lib/explorer.js#L523
if (vout.length > 0 && vout[0].hasOwnProperty("scriptPubKey") && vout[0].scriptPubKey.type == 'nonstandard') {
I have no idea why that is, but I will ask our dev.
{ "txid": "9d5e10d4bd2bc486b918a87019746c096d76cea4612e34eee228262ccafec28d", "size": 342, "version": 3, "type": 6, "locktime": 0, "vin": [ ], "vout": [ ], "extraPayloadSize": 329, "extraPayload": "01002209050001000143d1ac7fa6984b5d9ff7051d001cb7f2d2b1abd6e8cdee990d90abe433f96e81320000000000000032000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", "qcTx": { "version": 1, "height": 330018, "commitment": { "version": 1, "llmqType": 1, "quorumHash": "816ef933e4ab900d99eecde8d6abb1d2f2b71c001d05f79f5d4b98a67facd143", "signersCount": 0, "validMembersCount": 0, "quorumPublicKey": "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" } }, "instantlock": false, "instantlock_internal": false, "chainlock": false }
With your fix it is syncing again. I added it wrong at first, but after looking at it a couple of times I saw what I did wrong, and now it's syncing.
But I see quite a few 0 PAC transactions; how would that be possible?