Smearing the leap second is a solution Google came up with for their own data centers, after they realized that too many of their protocols didn't know how to handle UTC leaps properly. It really cannot be applied generally unless everyone can agree on exactly how to do it.
In my own test code, I did the smearing over a 24-hour period, centered on the leap event. The main argument is about how to determine an optimal smearing function: you want a gradual ramp-up, then a mostly constant-slope period, before a gradual ramp-down near the end; a sketch of such a profile follows below.
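Something like the following C sketch captures that shape. It is purely illustrative, not my actual test code: the trapezoidal rate profile, the 24-hour window, the 2-hour ramps and the function name are all arbitrary example choices.

/* Illustrative trapezoidal smear profile: the slew rate ramps up
 * linearly over RAMP seconds, stays constant, then ramps back down,
 * and the accumulated offset reaches exactly 1.0 s at the end of
 * the WINDOW.  Both constants are arbitrary example values.         */
#include <stdio.h>

#define WINDOW  86400.0   /* total smear interval, seconds          */
#define RAMP     7200.0   /* ramp-up / ramp-down time at each end   */

/* Peak slew rate chosen so the trapezoid's area is exactly 1 s:
 * 1 = peak * (WINDOW - RAMP).                                       */
static const double peak = 1.0 / (WINDOW - RAMP);

/* Smear offset in seconds, t seconds after the window starts.       */
double smear_offset(double t)
{
    if (t <= 0.0)
        return 0.0;
    if (t >= WINDOW)
        return 1.0;
    if (t < RAMP)                        /* quadratic ease-in        */
        return 0.5 * peak * t * t / RAMP;
    if (t < WINDOW - RAMP)               /* constant-slope middle    */
        return peak * (t - 0.5 * RAMP);
    /* quadratic ease-out, mirror image of the start                 */
    double r = WINDOW - t;
    return 1.0 - 0.5 * peak * r * r / RAMP;
}

int main(void)
{
    printf("peak rate: %.1f ppm\n", 1e6 * peak);
    for (double t = 0.0; t <= WINDOW; t += 10800.0)   /* every 3 h   */
        printf("t = %6.0f s  offset = %.6f s\n", t, smear_offset(t));
    return 0;
}

With these particular constants the peak rate works out to about 12.6 ppm, i.e. well inside the limits discussed below.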
There are, however, several potential problem areas with smearing:
a) ntpd works within a maximum adjustment rate of 500 ppm, most of which must be reserved for correcting the local clock's own drift, leaving maybe 100 ppm as the maximum smearing rate. At 100 ppm it will take about 10000 seconds (close to 3 hours) to smear a full second; reducing the maximum smear rate to around 20 ppm is compatible with a 24-hour adjustment. (Some worked numbers follow after this list.)
b) Very stable clients will only poll the server(s) every 1024 seconds, or even less often (every 2K/4K/8K seconds), and to detect a change in the reference clock a client needs 4 consecutive polls showing a drift away from the previous stable value.
c) ntpd considers an offset of 128 ms to be effectively infinite; at that point it will restart the protocol engine, losing sync until everything has stabilized against the current smearing rate. It should be obvious that a smearing setup which drops sync at both ends of the process would be really bad.
d) If you can force the poll interval down from 1024 s to the standard minimum of 64 s (ntpd's minpoll/maxpoll server options, given as log2 of the interval in seconds, can pin it there), then it becomes much easier to track/follow a smearing server.
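To put the numbers from a) through d) together, here is another purely illustrative check: for a few smear rates and poll intervals it prints how long a full second takes to smear, how much offset can build up between two polls before the client's frequency has adapted, and whether that worst case stays below the 128 ms step threshold. The specific rate and poll values are just the ones mentioned above.

/* Rough sanity check of the numbers in a) through d): smear duration
 * per rate, worst-case offset growth between polls while the client
 * has not yet adapted its frequency, and the 4-poll detection delay.  */
#include <stdio.h>

int main(void)
{
    const double step_threshold = 0.128;                    /* c)        */
    const double rates_ppm[]    = { 500.0, 100.0, 20.0 };   /* a)        */
    const double polls_s[]      = { 64.0, 1024.0, 8192.0 }; /* b) and d) */

    for (unsigned i = 0; i < sizeof rates_ppm / sizeof rates_ppm[0]; i++) {
        double rate = rates_ppm[i] * 1e-6;
        printf("%3.0f ppm: 1 s takes %6.0f s (%4.1f h) to smear\n",
               rates_ppm[i], 1.0 / rate, 1.0 / (rate * 3600.0));

        for (unsigned j = 0; j < sizeof polls_s / sizeof polls_s[0]; j++) {
            double per_poll = rate * polls_s[j]; /* worst-case offset growth   */
            double detect   = 4.0 * polls_s[j];  /* b): 4 polls to see a change */
            printf("   poll %4.0f s: %6.1f ms/poll, ~%5.0f s to detect  %s\n",
                   polls_s[j], per_poll * 1e3, detect,
                   per_poll < step_threshold ? "" : "<-- exceeds 128 ms");
        }
    }
    return 0;
}

The point of d) shows up immediately: at 64 s polls even a 500 ppm smear only moves 32 ms between polls, while at 8192 s even the gentle 20 ppm smear accumulates roughly 164 ms before the next sample and so risks tripping the 128 ms limit.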
Terje