NEP-541: Transaction priority fee #541
base: master
Conversation
Looks pretty solid already. I left some thoughts in the comments as I was reading through.
```
while gas_used < gas_limit:
    delayed_receipt_head = if delayed_receipts.empty() { -Inf } else { delayed_receipts.top() }
    incoming_receipt_head = if incoming_receipts.empty() { -Inf } else { incoming_receipts.top() }
    receipt = None
    if delayed_receipt_head.priority > incoming_receipt_head.priority:
```
Are prioritized delayed receipts executed before local receipts? What about prioritized local receipts?
Today's order among receipts is
- local receipts
- delayed receipts
- new incoming receipts
Note that if we don't execute local receipts in the first chunk, they end up in the delayed receipts queue one chunk later.
Yes, they are executed before local receipts. The way to think of it is as follows: priority execution always happens first, and regular execution happens afterwards. During regular execution, we preserve the order of execution we have today (local receipts, delayed receipts, and incoming receipts). We could also change the order within regular execution, but that is orthogonal to this proposal.
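The two-phase order described above can be sketched as follows. This is illustrative Python, not nearcore code; representing a receipt as a `(priority, name)` tuple with priority 0 meaning "regular" is an assumption for the sketch:

```python
def execution_order(local, delayed, incoming):
    """Sketch: drain prioritized receipts first (highest priority wins),
    then fall back to today's order: local, delayed, incoming."""
    # Receipts are (priority, name) tuples; priority 0 means a regular receipt.
    all_receipts = local + delayed + incoming
    # Priority phase: highest priority first, across all three queues.
    prioritized = sorted((r for r in all_receipts if r[0] > 0),
                         key=lambda r: -r[0])
    # Regular phase: list concatenation above preserves today's
    # local -> delayed -> incoming order.
    regular = [r for r in all_receipts if r[0] == 0]
    return [name for _, name in prioritized + regular]
```

For example, a prioritized delayed receipt would execute before a regular local receipt, but after a local receipt with a higher priority.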
neps/nep-0541.md (outdated)

## Future possibilities

This NEP should be combined with [NEP-539](https://github.com/near/NEPs/pull/539). Together they redefine how congestion is handled and how users can still send transactions during congestion. It is possible to explore more complex mechanisms for priority fees when there is congestion. For example, the protocol could require that transactions to a congested shard attach a priority fee, and even place a minimum on the priority fee based on the previous chunk's priority fees.
Oh and how are we dealing with priority in the outgoing buffers that NEP-539 currently proposes?
I would probably suggest that, at least initially, draining the outgoing buffers should follow a strict FIFO order. The priority already helped to get inside faster, so maybe the "cross-shard delay" can be fair.
Alternatively, we can add more priority queues (one extra for each receiving shard, per shard) and give truly fast speed even to cross-contract calls during congestion. It would certainly be a better user experience, since otherwise one can still wait a long time for a cross-contract call during congestion, no matter how much one pays.
I just feel perhaps an iterative approach would be better than changing so much all at once, without the experience of how congestion control works in practice.
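The "strict FIFO drain" suggested above can be sketched as follows (illustrative Python, not nearcore code; the dict-based receipt shape and the per-receiving-shard `deque` are assumptions):

```python
from collections import deque

def drain_outgoing_fifo(buffer: deque, gas_limit: int) -> list:
    """Drain an outgoing buffer for one receiving shard in strict FIFO order.

    Priority already helped the receipt get *into* the buffer faster; the
    cross-shard delay itself is first-come, first-served."""
    sent = []
    gas_used = 0
    while buffer and gas_used + buffer[0]["gas"] <= gas_limit:
        receipt = buffer.popleft()  # FIFO: receipt priority plays no role here
        gas_used += receipt["gas"]
        sent.append(receipt["id"])
    return sent
```

The alternative in the comment above would replace the `deque` with a priority queue per receiving shard, at the cost of more protocol state.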
yeah I think that makes sense
How do we determine the 'priority fee'? Is the amount preset by the protocol? Does a user have to guess and attach an arbitrary amount, hoping it will be high enough?
It is an arbitrary amount decided by the user.
It's a good starting point, but it may not be intuitive enough. How can a user ensure that their txn will be 'prioritized' with the minimum amount of premium? Or is that out of our scope?
The protocol working group met today and had a lively discussion about this NEP along with #539. The primary concern raised with this proposal is that if there are no protocol limits on priority fees, then validators can collude to create an effective minimum fee (by censoring transactions that are below their desired fee). But we think this can be addressed by combining the idea of a priority fee with the congestion metrics present in #539. It would work something like this:

Below a certain threshold the system is considered "not congested" and receipt priority is ignored. Beyond this threshold, the priority is used as the back-pressure mechanism by making the minimum required priority higher as the queue fills up. If a receipt does not have a high enough priority to be added to the target shard's incoming queue, it remains in the outgoing queue of its source shard. If a new transaction's priority fee is not high enough for its initial receipt to be added to the shard's queue, the transaction is rejected. To prevent receipts from being stuck forever, the priority of receipts in outgoing queues can increase over time, so that eventually either the congestion alleviates or the priority becomes high enough for the target shard to accept the receipt.

As part of this proposal, it was suggested that 100% of the priority fee be burned so that validators have no incentive to artificially keep the system in a congested state.
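The threshold-plus-back-pressure idea sketched above could look like the following. This is a hedged illustration, not the protocol: the threshold constant, the linear ramp, and the 0-100 priority scale are all assumptions made for the example.

```python
CONGESTION_THRESHOLD = 0.5  # assumed fill ratio at which congestion starts

def min_required_priority(queue_len: int, queue_capacity: int) -> int:
    """Minimum priority a receipt needs to enter the target shard's queue."""
    fill = queue_len / queue_capacity
    if fill < CONGESTION_THRESHOLD:
        return 0  # not congested: priority is ignored
    # Linear ramp from 0 to 100 as the queue fills; the real curve is an
    # open design question in the discussion above.
    over = (fill - CONGESTION_THRESHOLD) / (1 - CONGESTION_THRESHOLD)
    return int(over * 100)

def accept_receipt(priority: int, queue_len: int, queue_capacity: int) -> bool:
    # A rejected receipt stays in the source shard's outgoing queue
    # (or the transaction is rejected, if this is the initial receipt).
    return priority >= min_required_priority(queue_len, queue_capacity)
```

The time-based priority bump for stuck receipts would then be a separate rule that increments `priority` each chunk the receipt waits in an outgoing queue.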
@birchmd I am not quite sure I understand the concerns the protocol wg is trying to address.
If validators collude, the entire system is compromised, isn't it? Maybe I am missing what attacker model you assumed for your discussion.
But then chunk producers have no incentive to actually include higher-priority-fee transactions over normal-fee transactions they simply "like" more for one reason or another. Keep in mind, chunk producers ultimately hold the power over which transactions are included on chain at all. If we don't give any of the priority fees to them, they can extract that value in other ways. For example, they could offer subscriptions to prioritize transactions from certain accounts and make extra profit that way. With the right pricing, this would also be cheaper for users, so it's a win-win situation for chunk producers and transaction senders, at a loss for the protocol. If we give rewards to the chunk producer, they are more incentivised to just follow the rules without off-chain shenanigans.
The level of cooperation between validators in this case is lower than something extreme like including an incorrect state transition. It's more like price fixing in oligopoly situations. Each validator can notice the minimum priority fee being accepted by other validators and raise the minimum fee they accept to be in line. In this way the validators can come to an unspoken agreement to profit at users' expense.
Yes, this is the whole idea with price fixing we are thinking about. The validators would simply censor incoming transactions lower than the minimum priority fee they have chosen; including in the case where such a fee is not needed to control congestion.
Yes, this also came up during our discussion. And I agree we do not want any kind of secondary market on blockspace. But at the same time, I think it is important to recognize that if validators profit off congestion, then they will have an incentive to create congestion (perhaps even artificially, by sending the transactions themselves). Instead of giving no part of the priority fee to validators, another idea we had was to make the proportion of the priority fee the validator receives a function with diminishing returns (e.g. log or sqrt), so that maybe there would be an optimal amount of congestion for validators. More detailed analysis is needed to see if this makes sense. We also talked about how Ethereum faced this same issue with their gas price auction, and they moved to a base-fee + tip model to make gas price fixing by validators less of a concern. We can't apply Ethereum's unsharded, synchronous execution setting directly to Near, of course, but I think it does provide strong evidence that a pure auction is not the ideal model for users.
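The diminishing-returns split floated above could be sketched like this. Purely illustrative: the choice of `sqrt` and the burn-the-remainder rule are assumptions from the comment, not a worked-out proposal.

```python
import math

def split_priority_fee(fee: float) -> tuple:
    """Give the validator sqrt(fee) of the priority fee and burn the rest.

    Doubling the fee less than doubles the validator's reward, which is
    meant to weaken the incentive to manufacture congestion."""
    validator_share = min(fee, math.sqrt(fee))  # never exceed the fee itself
    burned = fee - validator_share
    return validator_share, burned
```

Under this curve a 100x larger fee only pays the validator 10x more, while the burned portion grows almost linearly with the fee.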
To add to @birchmd's point, validators don't need to collude. They can raise the minimum accepted fee, and as long as enough of them have done that, submitters using a low fee will notice a lower throughput.
One of the ideas discussed is that the system will operate at a desired capacity (which could be half of the total capacity). To include a transaction that pushes usage above the desired capacity, a fee needs to be attached. The required fee increases exponentially as more transactions go beyond the desired capacity.
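A minimal sketch of that exponential pricing idea, with all constants assumed for illustration (the desired-capacity ratio, the base fee unit, and "doubles per extra 10% of utilization" are not from the proposal):

```python
DESIRED_CAPACITY = 0.5  # assumed: operate at half of total capacity
BASE_FEE = 1.0          # assumed fee unit at the desired-capacity boundary

def required_fee(utilization: float) -> float:
    """Fee required to include a transaction at a given utilization level."""
    if utilization <= DESIRED_CAPACITY:
        return 0.0  # below desired capacity: no priority fee required
    excess = utilization - DESIRED_CAPACITY
    # Fee doubles for every extra 10% of capacity used beyond the target.
    return BASE_FEE * 2.0 ** (excess / 0.1)
```

This is the same shape as a base-fee mechanism: cheap or free below target load, and rapidly self-limiting above it.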
A first step towards near/NEPs#541: this introduces a priority field in both transaction and receipt. This is not entirely trivial due to the need to maintain backward compatibility. This PR accomplishes backward compatibility by leveraging the account id serialization and implementing manual deserialization for the new transaction and receipt structures. While this PR appears to be quite large, most of the changes are trivial. The core of the changes is the serialization/deserialization of transactions and receipts. While this change introduces the new versions, they are prohibited from being used in the current protocol until the introduction of the protocol change that leverages priorities.
Proposal to add transaction priority fee to the protocol