All the ice is gone now, but the water is still around 0C. So I put a wetsuit on and… it’s too buoyant for swimming. I’m gonna have to get me a weight belt.
So, not great, but at least it’s nice to be in the water.
I’ve been rabidly protecting my privacy ever since 1999, when I understood what Big Tech had in store for us thanks to this idiot who spilled the beans - though somehow nobody grasped the importance of what he said back then - and I’d rather be safe than sorry. A tinge of paranoia has served me well over the decades.
Definitely. If I had any discipline, I’d be right there with you.
In that case, use black bars or a solid cover, like an emoji, instead of pixelation.
It’s possible to reconstruct the image behind the pixels (or blur). Interesting article, YT video and demonstration here: https://www.jeffgeerling.com/blog/2025/its-easier-ever-de-censor-videos/
I’m aware of this, but it only works if the image under the pixelation is static and the pixelation is moving over it somehow. When I pixelate my face in a video, my face isn’t static. Also, the clearer my face is in the shot, the coarser the pixelation level I choose.
(I don’t seriously think this is anything that you have to worry about, at all. It’s just a neat topic!)
I went down the rabbit hole after watching that video.
It is much easier with text, because the search space is constrained (there are only so many letter/font combinations to look through): you can pixelate each possible character and compare the results to the original until you find the closest match.
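Something like this minimal Python sketch of the brute-force loop, assuming you already know the font, the glyph cell size and the pixelation block size (in a real attack those would all have to be estimated from the image first):

```python
# Toy de-pixelation of a single character: render every candidate glyph,
# apply the same pixelation, and keep the closest match.
from PIL import Image, ImageDraw, ImageFont
import numpy as np
import string

try:
    FONT = ImageFont.truetype("DejaVuSans.ttf", 48)  # assumed font; swap for the real one
except OSError:
    FONT = ImageFont.load_default()

CELL = (64, 64)   # size of one character cell (assumed known)
BLOCK = 8         # pixelation block size (assumed known)

def render(ch):
    """Draw one candidate character on a blank cell."""
    img = Image.new("L", CELL, 255)
    ImageDraw.Draw(img).text((8, 8), ch, font=FONT, fill=0)
    return np.asarray(img, dtype=float)

def pixelate(a, block=BLOCK):
    """Average over block x block tiles, like a censoring filter would."""
    h, w = a.shape
    return a.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def best_match(censored_cell):
    """Compare the censored cell against every pixelated candidate glyph."""
    target = np.asarray(censored_cell, dtype=float)
    return min(string.printable.strip(),
               key=lambda ch: np.sum((pixelate(render(ch)) - target) ** 2))

# Usage: recover 'A' from its own pixelation.
secret = pixelate(render("A"))
print(best_match(secret))  # -> 'A', if the font/size/block assumptions hold
```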
With enough frames, memory and compute you can extract detailed images through temporal accumulation. Instead of using a lookup table of candidate characters, you can use one of two classic motion-estimation methods (Horn-Schunck or Lucas-Kanade [Wikipedia warning: here thar be mathematics]) and then, once you know the motion, apply a reconstruction algorithm such as MAP estimation or iterative back-projection (IBP) to approximate the high-res image that produced the low-res pixelation. Each additional frame lets you refine your ‘guess’ until you converge on a unique solution.
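A toy numpy sketch of the back-projection half, with known integer shifts standing in for the output of the optical-flow step (real footage would need the flow estimate plus sub-pixel warping, which is where most of the complexity lives):

```python
# Multi-frame reconstruction via iterative back-projection (IBP), toy version.
import numpy as np

def pixelate(img, block):
    """Simulate censoring: average over block x block tiles."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def upsample(small, block):
    """Nearest-neighbour upsampling back to the original grid."""
    return np.kron(small, np.ones((block, block)))

def ibp(observations, shifts, block, iters=50):
    """Refine a high-res guess so that, shifted and pixelated, it matches each frame."""
    h = upsample(observations[0], block)                  # initial guess
    for _ in range(iters):
        for obs, (dy, dx) in zip(observations, shifts):
            moved = np.roll(h, (dy, dx), axis=(0, 1))
            err = obs - pixelate(moved, block)            # residual in low-res space
            h += np.roll(upsample(err, block), (-dy, -dx), axis=(0, 1)) / len(observations)
    return h

# Demo: each frame sees the same scene at a slightly different offset.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
block = 8
shifts = [(0, 0), (1, 0), (0, 1), (2, 3), (3, 1), (5, 2), (1, 4), (4, 4)]
frames = [pixelate(np.roll(truth, s, axis=(0, 1)), block) for s in shifts]

recon = ibp(frames, shifts, block)
print("RMSE, single pixelated frame:", np.sqrt(np.mean((upsample(frames[0], block) - truth) ** 2)))
print("RMSE, after IBP:            ", np.sqrt(np.mean((recon - truth) ** 2)))
```

More frames with more distinct sub-block offsets pin down more of the detail, which is the ‘each additional frame refines your guess’ effect in miniature.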
At full scale this would be computationally expensive: on the order of several petaFLOPs of total compute and 50-100GB of RAM for even a short video.
A fork of something like this could be a starting point: https://github.com/rafaelmaeuer/MultiFrameSuperResolution - taking into account that you don’t need to reconstruct the entire image, just a small area.
There are some AI tools that claim to do this as well, but they’re just doing image generation and hallucinating the details.