Implement LLO (data streams/mercury v1.0) #5

Merged

Conversation

@samsondav samsondav (Contributor) commented Dec 5, 2023

No description provided.

@samsondav samsondav marked this pull request as ready for review February 6, 2024 20:51
@samsondav samsondav changed the title MERC 1516 v 1 0 multichain mercury parallel composition Implement LLO (data streams/mercury v1.0) Feb 6, 2024
@samsondav samsondav force-pushed the MERC-1516-v-1-0-multichain-mercury-parallel-composition branch 5 times, most recently from 50663d4 to da9c0a0 Compare February 6, 2024 21:41
@samsondav samsondav force-pushed the MERC-1516-v-1-0-multichain-mercury-parallel-composition branch from da9c0a0 to 35308d3 Compare February 6, 2024 22:05

// Indicates whether a report can be generated for the given channel.
// Returns nil if channel is reportable
func (out *Outcome) IsReportable(channelID commontypes.ChannelID) error {
@akuzni2 akuzni2 commented Feb 6, 2024
Is it necessary to have this at the channel level rather than the stream level?

I see this:

	for _, streamID := range channelDefinition.StreamIDs {
		if out.StreamMedians[streamID] == nil {
			return errors.New("IsReportable=false; median was nil")
		}
	}

Does that mean that if a single stream is failing, the entire channel fails to report? Is there a reason why that must happen?

samsondav (Contributor Author)
Hmm, good question. I think it probably does need to be at the channel level, because otherwise what would the outcome be?

Imagine you try to create a report and one of the streams is bad. The datatype has no way to mark this. Do you just write zeroes? That could lead to some really unexpected and bad results. I think we cannot bake a report at all if any one of the streams is bad.
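
To illustrate the point, here is a minimal sketch of what the channel-level check amounts to; the types below are hypothetical, simplified stand-ins and not the actual plugin code:

```go
package example

import (
	"fmt"
	"math/big"
)

// Hypothetical, simplified stand-ins for the real types.
type ChannelDefinition struct {
	StreamIDs []uint32
}

type Outcome struct {
	StreamMedians map[uint32]*big.Int // nil entry => no consensus median for that stream
}

// isReportable mirrors the check quoted above: a report has no way to encode a
// missing stream value, so a single nil median makes the whole channel
// unreportable rather than silently reporting zeroes.
func (out *Outcome) isReportable(def ChannelDefinition) error {
	for _, streamID := range def.StreamIDs {
		if out.StreamMedians[streamID] == nil {
			return fmt.Errorf("IsReportable=false; median for stream %d was nil", streamID)
		}
	}
	return nil
}
```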

timestampsNanoseconds = append(timestampsNanoseconds, observation.UnixTimestampNanoseconds)

for channelID := range observation.RemoveChannelIDs {
removeChannelVotesByID[channelID]++
@akuzni2 akuzni2
Should we verify the channel ID exists in removeChannelVotesByID? I don't know if it's possible for a faulty/malicious observation to come in and break this flow by adding a non-existent channel ID.

@samsondav samsondav (Contributor Author) Feb 7, 2024
We don't need to verify it exists; in fact, this map starts empty, so none of them will exist.

We need to handle the scenario where the node executing this code:

  1. Is honest
  2. May have an outdated/incorrect view of channel definitions

We come to consensus on which channels should be added/removed. It requires that at least 2f+1 (I think) nodes are honest and have a correct view of the channel definitions. So even if a malicious node puts whatever garbage it wants here, it won't be applied unless 2f+1 nodes are in agreement.
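
A rough sketch of that tallying-and-quorum step, with hypothetical names and types; the 2f+1 threshold here just follows the figure above and is illustrative, not the actual implementation:

```go
package example

// Hypothetical stand-ins; the real plugin defines its own types.
type ChannelID uint32

type Observation struct {
	RemoveChannelIDs map[ChannelID]struct{}
}

// applyRemovals tallies removal votes across all observations and removes a
// channel only once the quorum is reached. Votes for garbage channel IDs from
// a minority of faulty oracles never reach the threshold and are ignored.
func applyRemovals(observations []Observation, defs map[ChannelID]struct{}, f int) {
	removeChannelVotesByID := map[ChannelID]int{} // starts empty; keys created as votes arrive
	for _, obs := range observations {
		for channelID := range obs.RemoveChannelIDs {
			removeChannelVotesByID[channelID]++
		}
	}
	for channelID, votes := range removeChannelVotesByID {
		if votes >= 2*f+1 { // illustrative quorum
			delete(defs, channelID)
		}
	}
}
```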

/////////////////////////////////
// outcome.LifeCycleStage
/////////////////////////////////
if previousOutcome.LifeCycleStage == LifeCycleStageStaging && validPredecessorRetirementReport != nil {
@akuzni2 akuzni2

What if we don't want to promote to LifeCycleStageProduction? Is there a way to cancel? Is that just cancelling the job and creating new configs on-chain?

@samsondav samsondav (Contributor Author) Feb 7, 2024
The LifeCycleStage code currently does nothing, since it is not even implemented on the contract. It was copied directly from the sketch and I didn't touch any of it. It will be implemented in https://smartcontract-it.atlassian.net/browse/MERC-3386

for streamID, observations := range streamObservations {
sort.Slice(observations, func(i, j int) bool { return observations[i].Cmp(observations[j]) < 0 })
if len(observations) <= p.F {
continue
@akuzni2 akuzni2 Feb 7, 2024
Should we be checking for 2f+1 observations?

samsondav (Contributor Author)
It's a good question. This probably needs to be refactored into an aggregate function like we do for mercury here: https://github.com/smartcontractkit/chainlink-data-streams/blob/master/mercury/aggregate_functions.go#L23

We have a ticket open for refactoring this entire file and adding more tests: https://smartcontract-it.atlassian.net/browse/MERC-3524

I have pushed a change with a comment that explains this check.
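
For reference, a hedged sketch of what the check in question amounts to; this is a hypothetical helper, not the actual aggregation code (which is being refactored under the tickets above):

```go
package example

import (
	"fmt"
	"math/big"
	"sort"
)

// medianFromObservations is an illustrative stand-in for the aggregation being
// discussed. Requiring strictly more than f observations guarantees at least
// one value came from an honest oracle (at most f oracles are faulty); whether
// a stricter 2f+1 quorum is needed is the question raised above.
func medianFromObservations(observations []*big.Int, f int) (*big.Int, error) {
	if len(observations) <= f {
		return nil, fmt.Errorf("insufficient observations: got %d, need more than f=%d", len(observations), f)
	}
	sort.Slice(observations, func(i, j int) bool {
		return observations[i].Cmp(observations[j]) < 0
	})
	return observations[len(observations)/2], nil
}
```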

@akuzni2 akuzni2
Nice - thanks for opening that thread with the research; it clarified the check here.

@akuzni2 akuzni2 left a comment
Left some comments - some of them are just questions. I'm a little confused about the "retirement" process that is tied to observations. Happy to just approve it though so you can proceed with benchmarking.

@samsondav samsondav merged commit 2032ee6 into master Feb 9, 2024
6 checks passed