The basic idea is that you render objects like the clouds into a smaller texture, then composite that texture into the full scene render, using linear filtering to smooth out the lower resolution clouds as they are drawn. This is a big performance saver because the cloud system draws far fewer pixels than it would at full screen resolution. There is a catch, though: a smaller texture rendered over the scene like this will have artifacts wherever the texture meets another part of the scene, even with the linear filtering. Unfortunately I didn't capture a screenshot of that for this post, and it isn't worth setting everything up again, but if you check out the linked article they have good examples of the artifacts I'm talking about. They are most noticeable when the camera is moving around.
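To make the compositing step concrete, here is a minimal CPU sketch of the idea in Python with NumPy. The real thing happens on the GPU in a shader, and the function names (`upsample_bilinear`, `composite`) are my own invention for illustration; this just shows a half-resolution buffer being bilinearly upsampled and alpha-blended over the full-resolution scene, using single-channel buffers for simplicity.

```python
import numpy as np

def upsample_bilinear(img, factor=2):
    """Upsample a 2D buffer with edge-clamped bilinear filtering,
    mimicking what the GPU's linear texture filter does for free."""
    h, w = img.shape
    # Sample positions in source space, centered on texel centers.
    ys = (np.arange(h * factor) + 0.5) / factor - 0.5
    xs = (np.arange(w * factor) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

def composite(scene, cloud_lo, alpha_lo):
    """Blend the upsampled low-res cloud layer over the full-res scene."""
    cloud = upsample_bilinear(cloud_lo)
    alpha = upsample_bilinear(alpha_lo)
    return cloud * alpha + scene * (1.0 - alpha)
```

The performance win comes from the cloud shader itself: at half resolution per axis it runs over only a quarter of the pixels, and the upsample is essentially free on hardware.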
So fixing the artifacts requires detecting the edges in the low res buffer and drawing the full resolution clouds only in the places where edges are detected.
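The edge detection step can be sketched as follows, again as a hedged NumPy illustration rather than the actual shader. The assumption here is that edges are found by comparing each low-res pixel against its four neighbors (here on a depth buffer, though the same test works on alpha) and flagging it wherever the difference exceeds a threshold; flagged pixels are the ones that then get the full resolution cloud render. The function name and threshold value are mine.

```python
import numpy as np

def edge_mask(depth, threshold=0.1):
    """Flag low-res pixels whose depth differs sharply from any of their
    four neighbors; these are where the upsampled clouds would leak."""
    pad = np.pad(depth, 1, mode='edge')  # clamp at the border
    diffs = np.maximum.reduce([
        np.abs(depth - pad[1:-1, :-2]),  # left neighbor
        np.abs(depth - pad[1:-1, 2:]),   # right neighbor
        np.abs(depth - pad[:-2, 1:-1]),  # neighbor above
        np.abs(depth - pad[2:, 1:-1]),   # neighbor below
    ])
    return diffs > threshold
```

On the GPU this would typically be written to a stencil or mask texture, so the expensive full-resolution cloud pass only shades the flagged pixels.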
The thumbnails below all link to the 1080p versions.
So, here is the original scene:
|The original scene with clouds rendered at full resolution.|
|The edge detection mask rendered over the scene.|
|The low res buffer without drawing the full resolution edges.|
|The low res buffer composited with the full resolution clouds drawn only where edges were detected.|