the Twarchive

This is a record of a Twitter thread, originally posted in 2021

Thew
@AmazingThew

my friends!

I have done it at last

BEHOLD:
REAL TIME SHITTY HDR

Thew
@AmazingThew

Reveal! subtle details lost in overexposed regions, then Annihilate them once again with lurid ringing artifacts!

Transform! natural textures into Microwaved blue noise!

Give! yourself a migraine at Sixty Frames Per Second!

Thew
@AmazingThew

so if you, like me, have always been kinda fascinated at a technical level by Shitty HDR, wondering what sort of math could ruin a photograph in such an upsetting way, and what produces all those bizarre halo artifacts:

I have now learned it so you don't have to

Thew
@AmazingThew

If you want to skip ahead, I'm pretty sure this is the actual approach that Photoshop uses, although the bilateral filter might be written differently:

Adobe added "Local Adaptation" in 2010, which is why Shitty HDR was such a phenomenon circa 2011-13ish

attached image

https://people.csail.mit.edu/fredo/PUBLI/Siggraph2002/
Fast Bilateral Filtering for the Display of High-Dynamic-Range Images

Thew
@AmazingThew

Interestingly, it turns out there's not really one specific algorithm for Shitty HDR, which made researching it a pain. It's more like SSAO or TAA, where the basic *approach* is established in a couple papers, but the implementation details vary a ton, and advance constantly

Thew
@AmazingThew

The core idea is just: frequency separation. A paper in 1993 noted that the eye can see detail in both bright and dark regions, and so an eye-like tonemapping operator should preserve high-frequency detail while lowering the impact of large, low-frequency brightness variation

Thew
@AmazingThew

so the idea is: split the image into high-freq "detail" and low-freq "illumination" components, tonemap only the lower band, then merge the extracted detail back over top of the tonemapped illumination

i.e.
blur image
divide by blur
tonemap blurred part
multiply back together
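Those four steps, as a minimal numpy sketch (the box blur and the Reinhard curve here are just stand-ins for whatever low-pass filter and tonemap you'd actually use):

```python
import numpy as np

def box_blur(img, r):
    """Crude low-pass stand-in: average over a (2r+1)^2 window, edge-clamped."""
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += padded[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def reinhard(x):
    return x / (1.0 + x)  # simple stand-in tonemap curve

def freq_sep_tonemap(hdr, radius=4, eps=1e-6):
    illum  = box_blur(hdr, radius)   # low-freq "illumination"
    detail = hdr / (illum + eps)     # high-freq "detail"
    return reinhard(illum) * detail  # tonemap the lows, merge detail back on top

hdr = np.random.default_rng(0).uniform(0.05, 8.0, (32, 32))  # fake HDR luminance
ldr = freq_sep_tonemap(hdr)
```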

Thew
@AmazingThew

You can also choose to boost the intensity of the high freq component before combining, which can bring out additional detail

if you're familiar with image processing, what I have just described is LITERALLY a sharpening filter, so that's why Shitty HDR looks badly sharpened lol
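You can check the sharpening claim directly: boosting the multiplicative detail band with an exponent (calling it `k` here; that name is mine, not an established parameter) is exactly an unsharp mask applied in log space:

```python
import numpy as np

rng  = np.random.default_rng(0)
img  = rng.uniform(0.1, 8.0, (16, 16))  # fake HDR values (positive)
blur = np.full_like(img, img.mean())    # stand-in for any low-pass result
k    = 2.5                              # detail boost factor

# boost the detail band, then merge back over the low frequencies
boosted = (img / blur) ** k * blur

# same thing, written as a classic unsharp mask in log space:
#   log(out) = log(img) + (k - 1) * (log(img) - log(blur))
log_unsharp = np.exp(np.log(img) + (k - 1) * (np.log(img) - np.log(blur)))

assert np.allclose(boosted, log_unsharp)
```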

Thew
@AmazingThew

This approach has a problem with severe ringing artifacts. Changes in intensity are surrounded by halos, and the halo's strength is related to the intensity delta

this becomes EXTREMELY severe around large changes, like the edge of the sun's disc, even worse if we boost detail

Thew
@AmazingThew

while the result is undeniably Shitty, it is not true Shitty HDR. Photoshop doesn't have this problem, so it must be doing something else

Thew
@AmazingThew

The "something else" has been an ongoing research thing since the 90s. The core idea of separating surface detail/texture from lighting, then tonemapping only the lighting, is very powerful. There are LOTS of papers proposing different ways of improving this separation

Thew
@AmazingThew

After realizing there are tons of papers on this subject and Shitty HDR isn't just, like, one random filter like Render Clouds or whatever, I was like "Wait, why do all of this? Why's there so much research here when you could just use a tonemap curve like Reinhard or ACES?"

Thew
@AmazingThew

I'm pretty sure the answer is "phones"

Games and film use tonemapping functions bc we want a "filmic" response curve. Film has a high dynamic range but rolls off highlights in an aesthetically pleasant way

This is very much NOT the state of the art for computational photography
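(The simplest example of that kind of curve is Reinhard's x/(1+x): near-linear in the darks, highlights rolled off smoothly toward white instead of clipping)

```python
import numpy as np

x = np.array([0.1, 1.0, 10.0, 100.0])  # scene-referred intensities
print(x / (1.0 + x))                   # highlights compress toward 1.0
```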

Thew
@AmazingThew

Phones can shoot massive dynamic range now, and nobody working on them is interested in emulating film. If you point an iphone at a sunset, or a skyscraper at night, they do everything possible to emulate EYESIGHT

they want the image on the screen to look like what your eyes see

Thew
@AmazingThew

This was also the motivation stated in the papers I read. The whole Shitty HDR approach of splitting lighting apart from texture was explicitly motivated by how eyes work, and how artists can paint intensely realistic images that look nothing like film

Thew
@AmazingThew

this is, unexpectedly, really fascinating?

thinking about ways to approach tonemapping from this direction in a game engine feels like it could produce some really neat stuff

Thew
@AmazingThew

but, back on subject

the Durand/Dorsey paper I linked above suggests a way to mitigate the ringing artifacts: Use a bilateral filter for the low frequencies, instead of a blur

As near as I can tell this is what Photoshop does

Thew
@AmazingThew

a bilateral filter works like a regular blur, except in addition to the gaussian kernel you also have a SECOND kernel that reduces the contribution from samples with large luminance differences

if you zero the Detail slider in PS, killing the high freqs, yep sure looks bilateral
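Per sample, the two kernels look like this (the sigma values are made-up illustration numbers):

```python
import math

def bilateral_weight(dx, dy, lum_sample, lum_center,
                     sigma_spatial=4.0, sigma_range=0.5):
    # ordinary gaussian blur kernel over distance
    w_spatial = math.exp(-(dx * dx + dy * dy) / (2 * sigma_spatial ** 2))
    # second kernel: punishes large luminance differences
    w_range = math.exp(-(lum_sample - lum_center) ** 2 / (2 * sigma_range ** 2))
    return w_spatial * w_range

# a neighbor with similar luminance contributes almost fully...
print(bilateral_weight(1, 0, 0.52, 0.50))
# ...while a sample from across a hard edge contributes essentially nothing
print(bilateral_weight(1, 0, 10.0, 0.50))
```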

Thew
@AmazingThew

So here's my version:

- Bilateral filter
- Detail extracted by dividing original image by the bilateral'd version
- Tonemap bilateral (not shown but I just used ACES)
- Boost intensity of detail to improve Shittiness
- Multiply detail and tonemapped values back together
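Roughly, in numpy (the ACES curve here is Narkowicz's well-known fitted approximation; the brute-force bilateral and the `detail_boost` parameter are my own stand-ins, not whatever Photoshop actually ships):

```python
import numpy as np

def aces(x):
    # Narkowicz's ACES filmic approximation
    return np.clip(x * (2.51 * x + 0.03) / (x * (2.43 * x + 0.59) + 0.14), 0.0, 1.0)

def bilateral(img, radius, sigma_s, sigma_r):
    h, w = img.shape
    p = np.pad(img, radius, mode="edge")
    num, den = np.zeros_like(img), np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            s = p[radius + dy : radius + dy + h, radius + dx : radius + dx + w]
            wgt = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                   * np.exp(-(s - img) ** 2 / (2 * sigma_r ** 2)))
            num += wgt * s
            den += wgt
    return num / den

def shitty_hdr(hdr, radius=4, sigma_s=3.0, sigma_r=0.5,
               detail_boost=2.0, eps=1e-6):
    illum  = bilateral(hdr, radius, sigma_s, sigma_r)  # low-freq base
    detail = hdr / (illum + eps)                       # extracted detail
    detail = detail ** detail_boost                    # improve Shittiness
    return aces(illum) * detail                        # merge back together

hdr = np.random.default_rng(1).uniform(0.05, 16.0, (32, 32))
ldr = shitty_hdr(hdr)
```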

Thew
@AmazingThew

Also: big thanks to @Atrix256 for explaining how to accelerate the bilateral step

a correct bilateral is super expensive at wide radii, but it turns out if you use a good enough stochastic sampling pattern you can get very nice results while heavily undersampling
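I won't reproduce the exact sampling scheme, but the general undersampling idea could be sketched like this: each pixel takes a handful of gaussian-distributed taps instead of the full kernel. Drawing the offsets from the spatial gaussian importance-samples it, so only the range weight needs to be applied explicitly (plain white noise here, where a well-chosen pattern would look much cleaner):

```python
import numpy as np

def stochastic_bilateral(img, radius=16, n_taps=16, sigma_r=0.5, seed=0):
    h, w = img.shape
    p = np.pad(img, radius, mode="edge")
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:h, 0:w]
    num, den = img.copy(), np.ones_like(img)  # center tap
    for _ in range(n_taps):
        # offsets drawn FROM the spatial gaussian = importance sampling it,
        # so each tap only needs the range weight
        dy = np.clip(np.round(rng.normal(0, radius / 2, (h, w))), -radius, radius).astype(int)
        dx = np.clip(np.round(rng.normal(0, radius / 2, (h, w))), -radius, radius).astype(int)
        s = p[yy + dy + radius, xx + dx + radius]
        w_range = np.exp(-(s - img) ** 2 / (2 * sigma_r ** 2))
        num += w_range * s
        den += w_range
    return num / den
```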

Thew
@AmazingThew

@Atrix256 so it turns out the bilateral filter is really the key component of Shitty HDR

high-intensity boundaries like the sun disc don't get blurred, so they don't form halos. Low-intensity boundaries like the edges of the clouds WILL still blur though, and thus get boosted/sharpened

Thew
@AmazingThew

(ignore the black pixels around the sun; that's due to my filter being undersampled to keep the framerate manageable)

Thew
@AmazingThew

From here I'm pretty sure I know what PS's sliders actually do

"Edge Glow" is the bilateral stuff. Radius is self-explanatory, and Strength changes the filter's sensitivity to brightness changes

Detail is the multiplier applied to the high freqs

Everything else controls tonemapping

Thew
@AmazingThew

also the Smooth Edges checkbox appears to switch to a different filtering algorithm entirely, changing the function of the Strength slider and producing dramatically different results lol

No idea what's going on there tbh lol

Thew
@AmazingThew

This all reveals why there's so much variation in the Shitty HDR aesthetic. The implementation details matter A LOT, even when the approach is the same. Every program that implements Shitty HDR behaves slightly differently

Thew
@AmazingThew

Deciding which steps should use RGB and which should use luminance, which parts should use linear vs log values, etc. is mostly up to preference (I'm using RGB bc it looks Shittier)

the tonemapping implementation is whatever you want, and it has a HUGE effect on the final image

Thew
@AmazingThew

For example: Shitty HDR is commonly associated with luridly over-saturated colors, likely due to Adobe's wildly irresponsible decision to include a Vibrance slider

But if you underexpose and OVER-gamma, you can achieve the Shitty HDR subgenre of Irradiated Greyscape Zebra Hell

Thew
@AmazingThew

I don't have a conclusion for this thread but uhhhhh that's the end

now you know all of the things that I have learned about Shitty HDR

now you too have the power to create images so Shitty that they are physically upsetting to behold