Average time complexity takes the mean over many possible inputs to a single operation. The bound may be exceeded for a few “hard” inputs, but it holds for the vast majority of inputs. For example, think of quick-sort, where the average time complexity is $O(n \log n)$ even though the worst case is $O(n^2)$.
The aggregate method is a simple, not very precise method of amortized analysis. It is based on this relation:
$$\sum_{\text{operations}} \text{amortized cost} \ \ge\ \sum_{\text{operations}} \text{actual cost}$$
This relation says that we can find an upper bound on the total cost by adding up the cost of all the operations (cheap and expensive ones) and then dividing by the number of operations.
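As a sketch of the aggregate method, consider a dynamic array that doubles its capacity when full (the function name and cost model below are illustrative assumptions): summing the actual cost of $n$ appends and dividing by $n$ gives a small constant.

```python
# Aggregate-method sketch: total actual cost of n appends to a doubling
# array, then divide by n to get the amortized cost per operation.

def total_append_cost(n):
    """Sum the actual cost of n appends: 1 per append, plus the number
    of elements copied whenever the array doubles."""
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:      # expensive append: resize and copy
            cost += size          # moving `size` elements to the new array
            capacity *= 2
        cost += 1                 # the cheap write itself
        size += 1
    return cost

n = 1000
print(total_append_cost(n) / n)  # amortized cost per append stays below 3
```

The copies at sizes $1, 2, 4, \dots$ sum to less than $2n$, so the total is below $3n$ and the amortized cost per append is $O(1)$.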
The concept is that you can 'store' some value during the analysis of the operations. For example, an insertion pays one coin for itself and stores another coin; later, during a deletion, I can consume the coins stored. Obviously all of this is purely a bookkeeping device for the analysis: no coins are actually stored in the data structure.
An example of this could be table doubling.
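The accounting argument for table doubling can be checked mechanically. A common charge (an assumption here, not fixed by the notes) is 3 coins per insertion: 1 pays for the write, 2 are banked to pay for future copies. The sketch below verifies that the bank never goes negative.

```python
# Accounting-method sketch for table doubling (charge of 3 coins per
# insertion is an illustrative choice): the "bank" of stored coins must
# never go negative, otherwise the charge is too low.

def bank_never_negative(n, charge=3):
    capacity, size, bank = 1, 0, 0
    for _ in range(n):
        bank += charge            # amortized payment for this insertion
        if size == capacity:
            bank -= size          # spend coins to copy existing elements
            capacity *= 2
        bank -= 1                 # pay for the write itself
        size += 1
        if bank < 0:
            return False
    return True

print(bank_never_negative(10_000))  # True: 3 coins per insert suffice
```

Since 3 coins per operation cover every expensive copy, the amortized cost of insertion is $O(1)$.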
Taking the accounting method and evolving it, we obtain the 'potential method', which is basically the same idea but in 'physics style'. It consists of:
- First, find the potential function which fits best
- the amortized cost of each operation is the actual cost plus the change in potential $\Delta\phi$ that the operation caused:

$$\hat c = c + \phi(i+1)-\phi(i)$$
So:
We would like to have zero potential energy at the "start" of the data structure (for example, when it is empty). Then, based on the operations that we perform, different potentials correspond to the energy we have 'stored' in the data structure or the energy we have used and subtracted from it.
For example, in a binary counter, the potential function could be the number of 1 bits: the more there are, the closer you are to a long carry, which will cause a 'big change' in the system.
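The binary counter example above can be sketched as follows, with $\phi$ = number of 1 bits (the `increment` helper and the 8-bit counter are illustrative assumptions). Each increment flips $t$ trailing 1s to 0 and one 0 to 1, so $c = t + 1$ and $\Delta\phi = 1 - t$, giving $\hat c = c + \Delta\phi = 2$.

```python
# Potential-method sketch for a binary counter, with phi = number of 1s.
# Verify that actual cost + delta(phi) is at most 2 for every increment.

def increment(bits):
    """Increment a little-endian bit list; return the number of bit flips."""
    flips = i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0               # carry: each flipped 1 was "paid for" earlier
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)            # counter overflowed: grow by one bit
    else:
        bits[i] = 1
    return flips + 1              # one final 0 -> 1 flip

bits = [0] * 8
phi = sum(bits)                   # potential: number of 1s (starts at 0)
for _ in range(200):
    cost = increment(bits)
    new_phi = sum(bits)
    amortized = cost + (new_phi - phi)   # c-hat = c + delta(phi)
    assert amortized <= 2                # every increment is amortized O(1)
    phi = new_phi
print("amortized cost of every increment <= 2")
```

So even though a single increment can flip many bits, the potential absorbs the expensive carries and the amortized cost is constant.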