There's also a partitioning-based algorithm that runs in linear time (in the number of values) with little extra space, which may be more flexible if you're trying to apply this to variants of the problem that the mathematical approach doesn't work well on. It requires mutating the underlying array and has worse constant factors than the mathematical approach. More specifically, I believe the time and extra space costs, in terms of the total number of values $n$ and the number of duplicates $d$, are $O(n \log d)$ and $O(d)$ respectively, though proving it rigorously will take more time than I have at the moment.
## Algorithm
Start with a list of pairs, initially containing just the range over the whole array, i.e. $[(1, n)]$ if 1-indexed.
Repeat the following steps until the list is empty:
1. Take and remove any pair $(i, j)$ from the list.
2. Find the minimum and maximum, $\min$ and $\max$, of the denoted subarray.
3. If $\min = \max$, the subarray consists only of equal elements. Yield its elements except one and skip steps 4 to 6.
4. If $\max - \min = j - i$, the subarray contains no duplicates. Skip steps 5 and 6.
5. Partition the subarray around $\frac{\min + \max}{2}$, such that elements up to some index $k$ are smaller than the separator and elements above that index are not.
6. Add $(i, k)$ and $(k + 1, j)$ to the list.
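Here's a minimal sketch of the above in Python, just to make the bookkeeping concrete. It assumes, as the "no duplicates" test in step 4 requires, that the input contains every integer between its minimum and maximum at least once (as in the usual "1 to n plus duplicates" formulation); the function name and the use of a generator are only for illustration. It works with half-open index ranges rather than the inclusive 1-indexed pairs above, and rounds the separator up so that both sides of each partition are non-empty.

```python
def find_duplicates(a):
    """Yield one value per extra occurrence in `a`, mutating `a` in place.

    Assumes every integer between min(a) and max(a) occurs at least once,
    which is what makes the "max - min == length - 1" test a valid
    no-duplicates check (step 4 above).
    """
    if not a:
        return
    # Work list of half-open index ranges [i, j) still to be processed.
    ranges = [(0, len(a))]
    while ranges:
        i, j = ranges.pop()
        # Steps 1-2: find the minimum and maximum of a[i:j].
        lo = hi = a[i]
        for m in range(i + 1, j):
            lo, hi = min(lo, a[m]), max(hi, a[m])
        # Step 3: all elements equal; every element but one is a duplicate.
        if lo == hi:
            for _ in range(j - i - 1):
                yield lo
            continue
        # Step 4: as many distinct values as elements, so no duplicates.
        if hi - lo == j - i - 1:
            continue
        # Step 5: partition around the midpoint value; rounding up keeps
        # both sides non-empty because lo < sep <= hi.
        sep = (lo + hi + 1) // 2
        k = i
        for m in range(i, j):
            if a[m] < sep:
                a[k], a[m] = a[m], a[k]
                k += 1
        # Step 6: queue both halves.
        ranges.append((i, k))
        ranges.append((k, j))
```

For example, `sorted(find_duplicates([1, 3, 4, 2, 2, 5, 3]))` returns `[2, 3]`. Popping from the end of the work list is just one way of "taking any pair"; the order doesn't matter for correctness.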
## Cursory analysis of time complexity
Steps 1 to 6 take $O(j - i)$ time, since finding the minimum and maximum and partitioning can be done in linear time.
Every pair $(i, j)$ in the list is either the first pair, $(1, n)$, or a child of some pair whose corresponding subarray contains a duplicate element. There are at most $d\lceil \log_2 n + 1 \rceil$ such parents, since each split halves the range of values in which a duplicate can lie, so there are at most $2d\lceil \log_2 n + 1 \rceil$ pairs in total when including pairs over subarrays with no duplicates. At any one time, the size of the list is no more than $2d$.
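To make those bounds concrete (taking the counting argument above at face value), with, say, $n = 10^6$ and $d = 10$:

$$2d\,\lceil \log_2 n + 1 \rceil = 20 \cdot \lceil 20.93\ldots \rceil = 420$$

pairs are ever created in total, while the list itself never holds more than $2d = 20$ of them at once.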
Consider the work to find any one duplicate. This consists of a sequence of pairs over an exponentially decreasing range, so the total work is the sum of the geometric sequence, or $O(n)$. This produces an obvious corollary that the total work for $d$ duplicates must be $O(nd)$, which is linear in $n$.
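Spelling out that geometric sum, under the assumption that the ranges on the path to a duplicate at least halve each time:

$$n + \frac{n}{2} + \frac{n}{4} + \cdots \;\le\; n \sum_{k \ge 0} 2^{-k} \;=\; 2n \;=\; O(n).$$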
To find a tighter bound, consider the worst-case scenario of maximally spread out duplicates. Intuitively, the search takes two phases: one where the full array is traversed at each level, in progressively smaller parts, and one where the parts are smaller than $n/d$, so only portions of the array are traversed. The first phase can only be $\log d$ levels deep, so it has cost $O(n \log d)$, and the second phase has cost $O(n)$ because the total area being searched is again exponentially decreasing.
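A back-of-the-envelope version of that two-phase estimate, under the same halving assumption: roughly $\lceil \log_2 d \rceil$ levels in which the whole array is scanned, followed by $d$ independent searches that each start from a part of size about $n/d$ and shrink geometrically:

$$\underbrace{\sum_{\ell = 0}^{\lceil \log_2 d \rceil} n}_{\text{phase 1}} \;+\; \underbrace{d \sum_{\ell \ge 0} \frac{n}{d}\, 2^{-\ell}}_{\text{phase 2}} \;=\; O(n \log d) + O(n) \;=\; O(n \log d).$$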