Asynchronous networking - delay vs persistence

Hello,
an uneducated practitioner here, hoping to be enlightened :) The asynchronous network model assumes messages can be arbitrarily delayed, but they cannot be lost.
I wonder if there is any practical difference between these two assumptions.

Here is the best I could think of: a model where messages cannot be lost is stronger than one where they can be lost, hence results in this model are stronger too. But is the model where messages can be arbitrarily delayed, but not lost, really stronger than the one where they can be lost? Why? What makes it stronger?

Is there anything completely different I’m missing? Perhaps the model where messages cannot be lost is just simpler to reason about (and leads to the same results)?

Is message ordering implied in the model?
If the delay is infinite, it may be counted as lost.

That’s a good question.

This is from the FLP paper. It does not assume ordering.

If this model defines the async network, then it’s not allowed to have infinitely delayed (i.e. lost) messages: receive can come back empty only finitely many times before a given message is delivered, and the paper says explicitly that all messages are eventually delivered given unboundedly many calls to receive. In practice a lost message would be something determined by the failure detector, but you probably want to stay in the theoretical model while reading.
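
To make that concrete, here is a toy sketch of that message-buffer abstraction (my own code, not from the paper; the class name and the max_stalls knob are made up): send puts a message in the buffer, receive may come back empty for a while, but every message is delivered after finitely many calls, in no particular order.

```python
import random
from collections import defaultdict

class MessageBuffer:
    """Toy model of the FLP-style message system: messages can be delayed for
    an arbitrary but finite number of receive() calls, and are never lost."""

    def __init__(self, max_stalls=5):
        # pending[dest] -> list of [msg, stalls_left]; stalls_left is how many
        # more receive() calls may pass before the message becomes deliverable
        self.pending = defaultdict(list)
        self.max_stalls = max_stalls

    def send(self, dest, msg):
        # arbitrary (but finite) delay, chosen nondeterministically
        self.pending[dest].append([msg, random.randint(0, self.max_stalls)])

    def receive(self, dest):
        deliverable = [e for e in self.pending[dest] if e[1] == 0]
        # every call burns down the stall budget of the still-delayed messages,
        # so no message can be postponed forever, i.e. nothing is ever lost
        for e in self.pending[dest]:
            if e[1] > 0:
                e[1] -= 1
        if deliverable:
            entry = random.choice(deliverable)  # no ordering guarantee
            self.pending[dest].remove(entry)
            return entry[0]
        return None  # the "empty" answer

buf = MessageBuffer()
buf.send("p1", "hello")
msg = None
while msg is None:          # keep calling receive; delivery is guaranteed
    msg = buf.receive("p1")
print(msg)                  # hello
```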

Also, it seems there is no ordering over the messages.

Arbitrary delay == loss

There is no difference in the limit

At some point, you have to give up on it

And since you can’t distinguish a long enough delay from actual loss as a receiver, it’s a distinction without a difference IMO
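
A receiver only ever sees “nothing has arrived within my timeout”, something like this (toy sketch, names are mine):

```python
import queue

def wait_for_reply(inbox: queue.Queue, timeout_s: float):
    """From the receiver's point of view, a message delayed past the timeout
    and a message that was dropped on the floor look exactly the same."""
    try:
        return inbox.get(timeout=timeout_s)
    except queue.Empty:
        # Lost, or merely slow? We can't tell. We act as if it's lost
        # (retry, escalate, ...) and must be ready for it to show up later.
        return None
```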

That’s what my intuition tells me as well: there is no practical difference between a model where messages can just be arbitrarily delayed and a model where they can be lost. I was hoping someone would prove me otherwise :slightly_smiling_face:

Perhaps the problem is me mixing up the words “practical” and “model”.

Even in modeling, it’s not that useful

Assume you really could differentiate between delay and loss

The amount of delay you’re willing to accept is directly proportional to the memory required to handle said delay

Eventually you run out of memory :wink:
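
For a rough sense of scale (my numbers, back-of-the-envelope): a sender that retransmits until acked has to keep everything in flight buffered, so the buffer grows with the send rate times the delay you are still willing to wait out.

```python
def retransmit_buffer_bytes(send_rate_bps: float, max_tolerated_delay_s: float) -> float:
    """The unacked-message buffer must hold everything sent during the longest
    delay you are still willing to ride out before declaring loss."""
    return send_rate_bps / 8 * max_tolerated_delay_s

# e.g. 100 Mbit/s with a 60 s delay budget -> ~750 MB of buffered, unacked data
print(retransmit_buffer_bytes(100e6, 60))  # 750000000.0 bytes
```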

Yeah, well, my model could have unbounded memory. That’s the beauty of models, I guess :slightly_smiling_face:

The arbitrary delay with no ordering is interesting because it forces you not to assume messages are lost: a message you assumed was lost may still arrive at a later point in time. It more or less forces you to assume there may be garbage on the line.
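
In code terms the receiver has to be written so that a message it already wrote off showing up late (or twice) is harmless, for example by remembering what it has applied (toy sketch, names are mine):

```python
class IdempotentReceiver:
    """Applies each message at most once, so a late or duplicate delivery of
    something we had already given up on does no damage."""

    def __init__(self):
        # in a real system this set has to be bounded/garbage-collected somehow
        self.applied_ids = set()

    def on_message(self, msg_id, payload, apply):
        if msg_id in self.applied_ids:
            return  # stale or duplicate delivery: already handled, ignore it
        self.applied_ids.add(msg_id)
        apply(payload)
```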

There is a difference, which is kind of what the FLP paper is all about: if we only assume messages can be arbitrarily delayed, we can write algorithms that eventually terminate with 100% probability.

However, if a node can also fail in a different way, by failing to send or deliver any messages, it’s impossible to write always terminating algorithms for a certain class of problems.

So I think it’s a real distinction, one that is very important theoretically. It’s not so relevant practically because we don’t tend to have 100% reliable delivery, so we’re always dealing with arbitrary delay + message loss.

I need some time to process what you wrote, but it’s already useful! Thank you!