
Not loading transactions #367

Closed
buzzkillb opened this issue Apr 11, 2020 · 6 comments

Comments

@buzzkillb

I am using the latest iquidus on Denarius. When I click an address I see the processing image, then it errors out and doesn't show anything. The chain is currently over block 3mil. During the initial sync I can see those transactions before the explorer syncs to block 1mil, but somewhere after block 1.2mil this error starts to happen. I am running a dual-core Celeron (4 threads) with 8GB RAM. What CPUs are you suggesting for this? Is this just too slow a CPU to work on an older blockchain, or is there anything else I can try before moving this to a faster PC?

@uaktags
Collaborator

uaktags commented Apr 11, 2020

The question is not actually an easy one, but hopefully the answer is clear.

There are two very separate concerns to evaluate: syncing the chain and serving visitors.

The dual core, while nothing special, is totally fine for syncing. Syncing is mostly a single-threaded process as it is, and you're limited by the I/O of your coind and the writes to Mongo rather than by computation.

However, serving visitors as a web server is where you're at a big disadvantage. Currently iquidus uses all available threads for the webserver, so whether you have 4 threads or 64 threads, it will use them up as needed for visitors.

So you specifically, with only 4 threads, can only serve 4 processes, and those also have to share a thread with syncing. That's where the bottlenecks come in.

Now, that has nothing to do with your issue, imo; I just hope it paints a picture of the CPU aspect.

Your issue feels like something happened around the 1.2mil area and your data got corrupted. Post the errors you're getting.

@buzzkillb
Author

After looking at PassMark scores for that Celeron, I decided to move over to a 3 vCPU VPS (this one runs on Epyc) to see what happens, mainly as a test so I have something to compare against. Up until this most recent version I was never able to fully index the chain, even on my 2950X Threadripper PC.

Your feedback helps a lot more than you realize, as it wasn't clear where the bottlenecks are; the Celeron never showed 100% usage on any thread. I am still syncing on this VPS, and depending on how that goes I'll give some feedback so others have an idea of my issues. Then maybe I'll move to a more powerful CPU to compare again. Hopefully just a few more days to fully sync.

@uaktags
Collaborator

uaktags commented Apr 14, 2020

Depending on your IT expertise (no offense intended) and the project's financial constraints, you may opt to use a more powerful server to run the sync while connecting to your less powerful one. There's no reason the MongoDB, the sync, and the explorer all have to be on the same machine competing for resources (the all-in-one approach is also one of my larger complaints with the current design).

Instead, you can clone the repo onto a server that has a lot of threads (even slower ones) and SSDs, and use it to run the sync and the MongoDB. That way you can punch through your blockchain until you hit parity (or get close). Then, again depending on financial constraints, you can either leave that server up and have your webserver/explorer instance connect to it remotely (over a VPN or VLAN), or move the MongoDB over to the webserver and run it locally, since at that point you're in "maintenance" mode (if you will) rather than initial sync.
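As a sketch of the split setup described above: iquidus is configured through a settings.json whose dbsettings block can point at a remote Mongo host instead of localhost. The address and credentials below are placeholders, not real values:

```json
"dbsettings": {
  "user": "iquidus",
  "password": "changeme",
  "database": "explorerdb",
  "address": "10.0.0.2",
  "port": 27017
}
```

The mongod on the sync box must also be configured to listen on that interface (net.bindIp in mongod.conf), and it should only be reachable over the VPN or VLAN, since an internet-exposed MongoDB is an easy target.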

A few things to keep in mind, though. As I've said, the sync is less CPU-bound and more I/O-bound, so I'd stay away from conventional container-based VPSes and go with cloud servers or true VM-hypervisor VPSes. For example, if your VPS is a container (think Parallels Virtuozzo or even a dockerized container), or a VM hosted on a single bare-metal server rather than a cluster, then there are probably no quality-of-service guarantees built in: you get free resources only until the hypervisor comes under load from other tenants, or until you reach your own cap (whichever comes first).
True VM solutions on clustered hypervisors allocate their resources to you and tend to have better guarantees, so what other guests do on the server doesn't affect your performance.

So: a true VM solution or a bare-metal server so you're not sharing resources, check. Then also make sure your MongoDB is on SSDs rather than HDDs. Cloud server providers typically make it very clear that they're giving you SSDs, whereas older VPS offerings were often just HDDs. MongoDB writes heavily to disk, so I/O performance is a huge factor.
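One rough way to sanity-check that (assuming a Linux box, and that you run it from the filesystem holding your Mongo data) is a sequential write test with dd. Sustained hundreds of MB/s suggests SSD/NVMe-class storage; tens of MB/s, or wildly varying numbers between runs, suggests spinning or oversold disks:

```shell
# Write 256MB with a final fdatasync so the reported speed reflects
# the disk rather than the page cache; clean up the test file after.
dd if=/dev/zero of=./iotest bs=1M count=256 conv=fdatasync
rm ./iotest
```

Note this only measures sequential writes, while Mongo's workload is more random, so treat it as a coarse signal rather than a benchmark.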

One thing to be careful with is your coin daemon. It's not at all uncommon to find that moving blocks from one machine to another fails miserably because of version differences in the OS and so on. I've had greater success with block portability on Windows than on Linux (unless I use my own Linux image with libraries pre-built for the purpose). This matters because you don't want to resync the damn coin on top of resyncing iquidus, which adds to the delays; doing both at once will definitely hurt performance, since the coin and iquidus are using up threads at the same time.

I think that's all the recommendations I have off the top of my head.

@buzzkillb
Author

Think of me as having zero experience in general, and I'm sure this also helps others trying to get the best bang for the buck running this. The VPS hit a similar problem overnight while still syncing. Thankfully the Denarius blockchain is portable between OSes on the same architecture. Now to try a much faster PC. This is the error message:

DataTables warning: table id=recent-table - Ajax error. For more information about this error, please see http://datatables.net/tn/7

@buzzkillb
Author

Went to a faster single-core VPS with 2GB RAM as I work my way through different hosts to test on. Once fully synced, loading the main page was very slow, including going to transactions and then to a specific address.

I made the change from #363 and the page is very usable as a backup explorer: https://explorer.denarius.pro/

@buzzkillb
Author

After a night of everything settling, the explorer is nice and quick. And it's very good to know I'll need more cores if people start using this backup explorer.
