My method is not limited to horizontal surfaces either, and is fairly simple.
The main problem is that the Z data are really not accurate to better than 1 meter, and almost everything finer than that in the .flt file is random junk. I tried stepping through forced floating-point precisions of 0.125, 0.25, 0.5 and 1 meter. Only 1 meter removes all the junk, but even 0.125 meter is better than no precision control at all. A very small number of the bizarre contours remain, but by 0.5 meter forced precision they are not too obnoxious.
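In case it helps anyone, here is a minimal sketch of that "forced precision" step, assuming the .flt file is a plain little-endian float32 grid (as USGS gridfloat usually is) with the row/column counts taken from the accompanying .hdr file; the file name, grid shape, and nodata value below are placeholders, not anything from my actual run.

```python
import numpy as np

def quantize_elevations(flt_path, nrows, ncols, step=1.0, nodata=-9999.0):
    """Snap every elevation to the nearest multiple of `step` meters."""
    z = np.fromfile(flt_path, dtype="<f4").reshape(nrows, ncols)
    valid = z != nodata
    z[valid] = np.round(z[valid] / step) * step   # forced precision
    return z

# Example: step=1.0 removes essentially all of the spurious micro-relief;
# try step=0.5 or 0.25 to keep more genuine detail at the cost of a few
# leftover noise contours.
# dem = quantize_elevations("n36w115.flt", nrows=3612, ncols=3612, step=1.0)
```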
I check each 3x3 block by fitting a plane to the nine cells (the normal equations are very simple) and look for the standard deviation of the residuals about the plane (a chi-squared of sorts) being larger than the gradient of the plane; blocks whose gradient exceeds the SD are deemed fine (they never show the odd contours anyway, based on visual inspection).
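A sketch of that test as I do it, in NumPy rather than hand-rolled normal equations; the cell size and the choice to compare the SD (meters) against the elevation change the plane implies over one cell are my own conventions, so treat them as assumptions rather than the one true criterion.

```python
import numpy as np

def plane_check_3x3(block, cell_size=10.0):
    """Fit z = a*x + b*y + c to a 3x3 block; flag it if the scatter about
    the plane exceeds the plane's own relief across one cell."""
    ys, xs = np.mgrid[0:3, 0:3] * cell_size                 # local coords (m)
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(9)])
    coef, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)  # least-squares plane
    a, b, _ = coef
    resid = block.ravel() - A @ coef
    sd = resid.std()                        # scatter about the fitted plane (m)
    relief = np.hypot(a, b) * cell_size     # gradient expressed as m per cell
    return sd, relief, sd > relief          # True = likely contour-noise area

# A nearly flat block with ~0.002 m of reported "detail" gets flagged,
# while any block with real slope passes.
# sd, relief, suspicious = plane_check_3x3(1234.0 + 0.002 * np.random.randn(3, 3))
```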
It's an area with sand dunes and lava flows, so at first I thought the odd contours were real (perhaps reflecting individual dunes or ridges in the flows). But they merely reflect the contouring algorithm trying to thread contours through areas with very small differences in reported (gridfloat) elevation; I've seen those areas in aerial and ground photos, and there is rarely any correspondence between the contours and reality.
Maybe the data started off as being good to just one meter, and then the USGS post-processed it to add corrections (perhaps for a new geoid/datum), leaving very small deviations. For example, in this area the geoid change added about 1 m to the local elevations; the amount added changes very slightly over ten miles or so, so the corrections might locally be (purely made up!) 1.001, 1.0015, 1.002, 1.0025, 1.003... etc. That exactness is meant for monumented sites, whose accuracy is established. But if you apply that precision in a correction to all the data in between monumented points (mostly derived by photogrammetry), you get a false sense of precision on elevations that are probably known to a few meters of accuracy at best.