Comparing H.264 and H.265 (HEVC) video file size (janstechtalk.blogspot.com)
115 points by janandonly on March 25, 2021 | hide | past | favorite | 119 comments


The comparison of H.264 and H.265 in this article doesn't make any sense. Lossy codecs can make files almost arbitrarily small at the expense of quality. You should look at both quality and size; size alone doesn't mean anything.

Also, H.264 and H.265 are just formats; there are many encoders of varying quality which can generate such files, and each encoder has tons of settings. I don't know which encoder macOS uses, but ffmpeg includes the state-of-the-art H.264 encoder (x264). Simply reencoding video from H.264 to H.264 using ffmpeg can give you a better quality/size ratio than the original file.
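A minimal sketch of that H.264-to-H.264 re-encode (the filenames are hypothetical; assumes an ffmpeg build with libx264):

```shell
# Re-encode an existing H.264 file with x264 at a fixed quality level.
# CRF 23 is x264's default quality target; lower CRF = better quality,
# larger file. -preset controls how much CPU the search for
# redundancy gets to use.
ffmpeg -i screen_recording.mov -c:v libx264 -crf 23 -preset medium reencoded.mp4
```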

> H.264 codec (or MPEG-4 Part 10), once upon a time know in the scene as DIVX.

No, DivX refers to MPEG-4 Part 2 which is a completely different codec.


>there are many encoders of varying quality which can generate such files, and each encoder has tons of settings. I don't know which encoder macOS uses, but ffmpeg includes the state-of-the-art H.264 encoder (x264). Simply reencoding video from H.264 to H.264 using ffmpeg can give you a better quality/size ratio than the original file.

It's actually even more complicated than that. Each encoder has multiple settings for how much CPU to spend on the compression. Live screen recording (which this was) usually spends little CPU and gets little compression. So, as you said, you don't need to change which format you use to get better compression, but you often don't even need to change which encoder you use either: you can just change the compression setting to use extra CPU (which might not be possible on a live recording).
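With x264/x265 that CPU-vs-compression knob is the -preset option; a sketch (input filename hypothetical):

```shell
# Same codec, same CRF; only the CPU/compression trade-off changes.
# ultrafast is what realtime capture tends toward; veryslow spends far
# more CPU searching for redundancy and typically yields a smaller
# file at the same quality target.
ffmpeg -i capture.mov -c:v libx264 -crf 23 -preset ultrafast fast.mp4
ffmpeg -i capture.mov -c:v libx264 -crf 23 -preset veryslow slow.mp4
```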


Another factor is keyframe rate, which makes the bitrate requirement much higher at the same quality but is needed to let people join the stream more than every 30 seconds.
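In ffmpeg the keyframe interval is controlled by -g (in frames); a sketch for a 30 fps capture (filename hypothetical):

```shell
# Force a keyframe every 2 seconds (60 frames at 30 fps): good for
# letting viewers join a stream quickly, but each extra I-frame costs
# bitrate. For an archival file you could raise -g much higher.
ffmpeg -i capture.mov -c:v libx264 -crf 23 -g 60 -keyint_min 60 stream.mp4
```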


ffmpeg only uses x264 if you tell it to. If you don't set -c:v libx264 when creating a .mp4 output, I'm pretty sure it uses the libav h264 encoder. If you use a binary of ffmpeg built without --non-free/--non-gpl (forget which) enabled, do you even get x264?


Probably not --non-gpl, x264 is under the GNU GPL.


It doesn't have a built-in h264 encoder unless I'm very out of date (looks like I'm not). It has support for some hardware encoders, which can be good enough for livestreaming.


> No, DivX refers to MPEG-4 Part 2 which is a completely different codec.

Poster is referring to the DivX Plus HD codec [1].

[1] https://en.m.wikipedia.org/wiki/DivX_Plus_HD


I wouldn't call that "what was once known in the scene as DivX"...


Neither would I. I'm just clarifying, for the benefit of other readers.


Show me any popular torrent using the word "DIVX" to refer to that.


show me any popular torrent using the word DIVX that isn't actually using Xvid


Well at least DivX and Xvid use the same video format: MPEG-4 Part 2.

MPEG-4 Part 10/H.264/AVC is a different format.


> video: the H.264 codec (or MPEG-4 Part 10), once upon a time know in the scene as DIVX.

That is incorrect and bothered me enough that I scoured the web trying to find old scene release standards. Fascinatingly, there's a whole Wikipedia article on those[1] and it's still possible to find the original .nfos in strange corners of the internet[2].

But long story short, a DivX release was standardized to DivX 3.11, which was a reverse-engineered Microsoft codec and not actually MPEG-4 compatible[3]. But later DivX versions implemented MPEG-4 Part 2[4] -- that is still a completely different thing than MPEG-4 Part 10[5] though, which describes H.264 (but calls it AVC).

[1]: https://en.wikipedia.org/wiki/Standard_(warez) [2]: http://lunamoth.biz/pe.kr/cgi-bin/gm/archives/tdx2k2.nfo [3]: https://en.wikipedia.org/wiki/DivX [4]: https://en.wikipedia.org/wiki/MPEG-4_Part_2 [5]: https://en.wikipedia.org/wiki/Advanced_Video_Coding


This is compressing video recordings of a computer desktop. The frames don't change much so this isn't surprising.


H.264 would be able to take advantage of the same non-changing/static content in the hands of a compressionist. Realtime encoding is always subpar compared to that, due to the trade-offs required. What this tells me is that the default settings of the OS screen capture are not optimized. To be expected, though, as they have no idea what you may be attempting to encode, so a safe setting that is less efficient on static images but performs reasonably well for higher action is acceptable.


Also, screen capture software usually needs to minimize CPU usage so you can actually, yknow, do the stuff you are trying to record. The blog post mentions that the transcode caused his laptop to heat way up. It was almost certainly a way different encoder profile.


Can you clarify this, please? Does h.265 have some better deduplication-style compression, or does reencoding apply something that the original recording could not, meaning h.264 would see the same benefit?


The original h264 file was likely optimized for write speed and not storage (which takes cpu time and can introduce lag in real time recording).

Reencoding the same input file as h264 again but using FFmpeg defaults would probably yield similar file size reductions.

This is just a bogus post without knowing what parameters were used to encode the original h264.


For science, I just took a 10 second screen recording of my Mac with nothing but my tiny mouse cursor moving around for a few seconds, which means a very static shot. The recording came out at 7.5MB. There was no audio in the original recording.

I then used that as a source to transcode with x264 crf 23. New filesize 2.0M. ~26%

ffmpeg -i input.mov -c:v libx264 -crf 23 x264.mp4

Next, I used the same 7.5MB source to transcode with x265 crf 28. New filesize 968KB. ~13%

ffmpeg -i input.mov -c:v libx265 -crf 28 x265.mp4

CRF values were taken from [0], which states x265 crf 28 should produce the same visual quality as x264 crf 23, but at half the size. That holds true.

[0] https://trac.ffmpeg.org/wiki/Encode/H.265

Edit: forgot link

So, I don't think the author's post is bogus. They just lose points for not showing their work (not that they did it, but you know).

Edit 2: I have no idea what the author of the post used for encode settings. I picked two values based on the vendor's claim that the two settings should look the same at half the bitrate. I could easily have changed the values to lower the bitrate and get from 50% to 94%. I just assumed the reader would be able to make the mental leap.


I'd still call the original bogus. It's not unexpected that H.265 is smaller than H.264, just that the article made an unfair comparison.

As your comparison found, the majority of the size reduction relative to the original was related to encoding parameters; only about 13 percentage points were related to the codec choice. Of course if you start from the better-encoded H.264 file things look better for H.265, but not 94% better.


Without at least some semi-objective comparisons of the "visual quality", your data is meaningless. Like, how can you tell crf 23 x264 is closer to your x265 version than, say, a crf 24 x264 one?

As a rule of thumb, x265 works better (compared to x264) at lower bitrate / worse quality. You won't be able to achieve "same quality, half size" at higher bitrate easily.


Sure, we could run some PSNR or SSIM comparisons. However, I've been doing this for a hot minute, and h.265 is pretty damn good. Getting 4K video file sizes for VR to run on a phone-based headset was never going to happen without it. h.264 encodes were just never going to make it in the real world for the frame size and TRTs. Once we knew the target devices could handle h.265, it was a game changer. That was way back in 2015, when live action VR was almost a thing.

From years of real world experience, h.265 produces much smaller files by being able to use a lower bitrate to achieve the same visual quality as h.264 before it. Guess what: H.264 did the same thing to MPEG-2. Not really sure why this is so unbelievable.
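Those PSNR/SSIM comparisons mentioned above are one-liners in ffmpeg (filenames hypothetical; both inputs must have the same resolution and frame count, with the distorted file given first and the reference second):

```shell
# Compare an encode against the original; prints average SSIM / PSNR
# to stderr without writing any output file.
ffmpeg -i x264.mp4 -i input.mov -lavfi ssim -f null -
ffmpeg -i x265.mp4 -i input.mov -lavfi psnr -f null -
```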


> Not really sure why this is so unbelievable

No one is saying that. Everyone knows H265 is better than H264. I'm just saying your number is meaningless without fixing either the bitrate or the quality.


Sure. Don't believe some random dude from the internet. But if you're not going to go through the hassle of running the tests on your own, your "meaningless" claims are meaningless. Have you spent time in your career as a compressionist? Have you encoded hundreds of hours in various codecs, looking for the best tweaks to get the highest visual quality at the smallest file sizes?

The test files that I produced resulted in the h.264/crf23 having a bitrate of 970kbps, while the h.265/crf28 came in at a bitrate of 800kbps. The h.265 used less bitrate to achieve half the filesize. I have stacked the two videos and toggled between them. There is no visible difference. I have done this experiment. It is up to you to do the same thing and see if you can replicate the results or not. That's science. But it is much easier to just post "no it doesn't" comments on the internet.


Yes, I have spent hundreds of hours comparing video encoder results, but that's beside the point. One doesn't need to have done that to be able to tell your experiment is flawed.

All you have is literally one datapoint, 970kbps h264 has "no visible difference" compared with 800kbps h265, and I believe that. But maybe 800kbps h264 also doesn't have visible difference from it. Maybe there is just not much difference to notice from 500 to 1200kbps, for your specific video.

Again, my point is that your experiment has too few samples (literally 1 pair). Using that to conclude anything is the exact opposite of the "scientific" method. I never said "h265 isn't better than h264" or "your result is wrong"; just that your result doesn't prove the former.

If you insist on using just one sample, you could at least use the same bitrate (2-pass if you want better rate control) and say "hey, the h265 one is visibly better!", and that would be more convincing.


>Again, my point is your experiment has too few samples (literally 1 pair).

This experiment was to show that the gains the original post made weren't solely due to the video being an unchanging desktop. There were enough samples to show that claim seems to be false, though more data could change that.

Basically, you're misunderstanding the theory being tested, and while your complaints would make sense for testing a different theory, they don't discount the experimental evidence for the tested theory.


> was to show that the gains the original post made wasn't solely due to the video being an unchanging desktop

I assume your point is "while both reduced the size (from an unchanging desktop, which is easily compressible), H.265 reduces the size even further. So it (the choice of codec) is also a contributing factor."

And I get it. My point is that even this wasn't proved by his experiment, because he failed to prove the two outputs have the same quality. And without having this, comparing file size is meaningless (because as stated by others, with lossy codec you can generate arbitrary file size by tweaking the parameters).

Yes, he did say "there is no visible difference". However, most people can't tell any difference between 600kbps and 1000kbps using the same codec, either. In other words, there is a huge range of bitrates where average eyes can't tell the difference. So the "same quality" premise has to be very strict and objective. It's not that I'm trying to be picky.

A better approach, as I said above, is to fix the bitrate instead of the quality. This way, you can find the point where people DO see a difference in quality, and then you can conclude more confidently that one is better than the other due to a higher quality/filesize ratio.
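A sketch of that fixed-bitrate comparison with two-pass rate control (filenames and the 800k target are hypothetical; assumes ffmpeg with libx264 and libx265):

```shell
# Both encodes target the same 800 kbps, so only quality can differ.
# x264 two-pass:
ffmpeg -y -i input.mov -c:v libx264 -b:v 800k -pass 1 -an -f null /dev/null
ffmpeg -i input.mov -c:v libx264 -b:v 800k -pass 2 x264_800k.mp4

# x265 two-pass (x265 takes the pass number via -x265-params):
ffmpeg -y -i input.mov -c:v libx265 -b:v 800k -x265-params pass=1 -an -f null /dev/null
ffmpeg -i input.mov -c:v libx265 -b:v 800k -x265-params pass=2 x265_800k.mp4
```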


>My point is that even this wasn't proved by his experiment, because he failed to prove the two outputs have the same quality.

If the gains were purely due to an unchanging image, both conversions should have chopped the file size to basically nothing. Any file size difference due to quality would be dwarfed by reduction due to compression of like 600 images down to a handful.


My original comment is that his experiment can't prove H.265 is better than H.264 (and I think that's what he wanted to show).

Of course modern video codecs can compress video very well regardless of whether the input is an "unchanging image" or not. I wasn't arguing about that.


>and I think that's what he wanted to show).

It wasn't. The test was whether reencoding was what mattered, or the codec used. I'm the one who asked the question Dylan was kind enough to try to answer; your further testing would not answer my question any better.


He was replying to stevenhuang, not you directly.

stevenhuang said "Reencoding the same input file as h264 again but using FFmpeg defaults would probably yield similar file size reductions."

So, it looks to me he was trying to refute that "using H264 again would yield similar file size reduction".

If you think it's not, that's fine. Not something worth debating.


If you re-encoded to h.264 with ffmpeg it would also probably get smaller.

Video codecs have lots of options and trade-offs. It's not just quality and file size, but also CPU usage. I suspect the original was encoded to minimize CPU usage while keeping quality constant, and the new one to minimize file size while keeping quality constant, using as much CPU as it wants.


challenge accepted. see reply to sibling comment


A compression algorithm allows multiple different encodings of the same underlying data: spend more CPU time finding redundancies in the data, and you can make the file smaller.

I strongly suspect that the difference here is between real-time compression that runs while capturing the screen recording, and less time-constrained offline compression during transcoding.


I'm willing to bet it's not about CPU time constraints. My guess is that it's bitrate settings and/or keyframe interval settings. And that you could get similar results by tweaking those, even if you're encoding at veryfast.


h.265 makes files smaller than h.264 for a given bit rate. So an h.265 I-frame will be smaller than an h.264 I-frame. If it's a mostly static computer screen, then you can get even better encoding efficiency by using a longer GOP structure so the I-frames are spread out even further. Of course, the trade-off is that seeking gets worse, but it works out great for press-play-and-watch type deliverables. h.264 can do this as well, but it still goes back to the fact that the purpose of h.265 is to be better than h.264.


> h.265 makes files smaller than h.264 for a given bit rate.

You mean for a given CRF ("crf" isn't an official term, it's what x264/x265 call their fixed quality mode). "Bitrate" is literally how big the file is.


Huh? No, I mean for a given bitrate. If I encode a video at 500kbps that is the bitrate. A video at 500kbps in h.265 should be the same visual quality as an h.264 at 1000kbps.

Filesize is literally how big the file is. Bitrate is how much data can be used over a window of time.


> A video at 500kbps in h.265 should be the same visual quality as an h.264 at 1000kbps.

Sure, that's true. Then the only problem is you said "smaller" and not "better looking" above.


Is there any reason to suspect that they used a longer GOP structure? Is the default one for ffmpeg longer than the recording software?


I didn't see ffmpeg command to compare. I was talking in general. Usually, it is just a value you modify. A typical GOP is 1 second. That allows decent seeking within the file. On something static, you could push it to 10 seconds. If you use ffmpeg, you can fiddle with the switches until the cows come home. For homework, you can use the same methods and use MediaInfo or ffprobe to see the GOP sizes. My money would be on 1 second GOPs from both.
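One way to do that homework with ffprobe is to list the keyframe timestamps and eyeball the spacing (filename hypothetical):

```shell
# Packets flagged K are keyframes; the gap between their pts_time
# values is the GOP length in seconds.
ffprobe -loglevel error -select_streams v:0 \
  -show_entries packet=pts_time,flags -of csv=p=0 recording.mp4 | grep K
```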


From my specific tests from elsewhere in this thread, I did further analysis of the files that were run based on the examples listed in the comment.

The original screen capture H.264 files did have a 1 second GOP size. However, the very simplistic ffmpeg commands to make the h.265 & h.264 files that I made used the default GOP settings from ffmpeg. Turns out, the default is 4 seconds. That would easily help get the final file size down.


You have gone far beyond what I expected, thank you. And yet the original post I replied to is still a bit of a mystery. A recording of a desktop played a role, but h.265 still seems like a large part.


Although I have not worked with '265 or any of the other newer codecs past '263, I suspect it's the latter --- encoders running in realtime tend to aim for a constant bitrate rather than maximum compression.


zipping the raw avi would probably get most of the way to 95%


but can you stream a zip file compressed this way to play the video on a streaming protocol?


Yes. No commonly-used compression formats lack streaming capabilities.


Kodi even supports playback from multi part rars.


Don't zip files store the directory structure/index info at the end? I would assume that's pretty important for decompressing.


If you assume there's only one file you could probably skip it.

But the question was about using the zip file as a source for streaming, and that will work fine even if you have to spend an extra read to load the directory first.


I don't think you can safely assume random reads if you are streaming. Usually the point of a stream is that you have to process the file in order.

Regardless, if for some weird reason you really liked the zip compression algorithm (DEFLATE), you would probably just use gzip instead (same compression algorithm, no weird file format with critical metadata at the end). It's also the compression algo used by PNGs.


> I don't think you can safely assume random reads if you are streaming.

I could interpret this a couple ways, so let me try to clarify.

If you're talking about the ability to seek around inside the video, it is correct that this will not work well.

If you're talking about seeking to the end of the zip file to read the directory, I don't think that was part of the scenario. My interpretation of the question is that the server has a zip file and has to output a video stream directly from the zip.

If the zip itself is being delivered as a single stream, start to end... I have no idea what kind of scenario this is. There's no zip-streaming protocol out there. Filesystems can give you the end of a zip. HTTP can give you the end of a zip.

> Regardless, if for some weird reason you really liked the zip compression algorithm (DEFLATE), you would probably just use gzip instead (same compression algorithm, no weird file format with critical metadata at the end). Its also the compression algo used by PNGs.

Maybe. You might want windows users to be able to easily extract the video.


the question in the OP was whether zipping the avi file would offer similar levels of compression.

But if you're streaming a zip file down the line, can you view the stream as it partially loads, which is possible with certain video formats (esp. if the player can handle missing key frames etc.)?

I was under the impression that a zip file cannot be partially read to partially reconstruct the contents.


We have to make two distinctions here.

The first distinction is between streaming from the start of a file, without needing any storage space, versus streaming from a random point in the middle of the file. With a zip you can do the former, but you cannot do the latter.

The second distinction is between streaming the zip itself, versus streaming from the zip without needing any storage space. If you stream the zip itself, and don't allow even a single extra read to get the directory, it will be trickier to extract the video on the fly but still probably possible. If you are streaming from the zip then you can read the end then jump to the start and begin streaming, and it will work fine.

If that's a confusing mess, then I guess be more specific. What data does the video player see, and from what protocol?


Let's imagine the scenario : you have a video file on a http server, and you want to serve the video to the end user.

The end user can buffer the entire video, but that takes too long. So ideally, the player can download the video partially, and start watching while buffering the remaining. (assume the video is in a format that's streamable).

So is it possible to zip the video on the server, and rewrite the http server to serve the zip file directly to the end user, and yet still have the end user be able to partially view the video as a stream, while the zip is still downloading?

> it will be trickier to extract the video on the fly but still probably possible

i think this is what i'm looking for - extract a partial video file from a partially downloaded zip file.


Okay, so I have a couple answers to that.

If the user is viewing the video from the start, then there's no problem. There are multiple ways to make this work.

If the user wants to jump into the middle of the video, it won't work.

That's the short answer.

The long answer goes into more detail on how to make it work:

It would be possible to have the http server unzip a megabyte at a time and send it to the client, but you said you want to send the zip to the user so we'll cross out this possibility.

That leaves two ways of doing things.

The easy way is to have the client make an HTTP Range request to get the last 100KB of the zip file, then make a second request to start at the beginning of the zip file. This will let the client act like a completely normal unzipper, and it can start playing immediately.

The hard way is to go into more detail about how a zip file is structured. We don't actually need the directory to start decompressing. A zip file is structured as [file header][file data][file header][file data][file header][file data][directory]. You can just start decompressing the first file in the zip and feeding it into your media decoder. It'll work.

For watching videos from the start, zipping is only a problem if you put multiple videos into a single zip and use a web server that doesn't support Range requests.
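To make the two approaches concrete (URL and filenames hypothetical): the easy way uses an HTTP suffix Range request to fetch the central directory first, and the hard way relies on a zip starting with a local file header followed immediately by that entry's compressed data.

```shell
# Easy way: grab the last 100 KB of the zip (which contains the
# central directory), then start fetching from the beginning.
curl -r -102400 https://example.com/video.zip -o zip_tail.bin
curl https://example.com/video.zip -o video.zip

# Hard way: the first 4 bytes of a zip are the local file header
# signature 50 4b 03 04 ("PK\x03\x04"); the first entry's compressed
# data follows right after that header, so sequential decoding can
# begin before the central directory at the end ever arrives.
head -c 4 video.zip | od -An -tx1
```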


gzip and zlib (and tar) are "streaming" formats


i seriously don't see zipping a RAW video file achieving any compression gains let alone 95%


If you mean raw, in the sense of a stream of RGB (or YUV) images one after another, I would expect zip to give significant compression, especially for something with solid colours like a desktop screen.

H.265 is obviously going to be a lot better, but compared to doing nothing, even something stupid like RLE compression is going to be helpful.
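For intuition, here is a synthetic stand-in (not a real capture): DEFLATE, the algorithm inside zip and gzip, collapses the long runs of identical bytes that flat-colour frames are full of.

```shell
# One megabyte of identical bytes, roughly a flat-colour grayscale
# frame, deflated the way zip/gzip would deflate it.
raw_bytes=1000000
gz_bytes=$(head -c "$raw_bytes" /dev/zero | gzip -c | wc -c | tr -d ' ')
echo "raw: $raw_bytes bytes, gzipped: $gz_bytes bytes"
```

The compressed size comes out orders of magnitude smaller than the input; real desktop frames are less uniform, so the ratio would be worse, but still substantial.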


that's not how Zip works though

Edit: You can disagree, but video files do not compress with zip. Feel free to do the experiments on your own time. Zip looks for repeated combinations of letters/sequences. That's not how a video file is structured. To make video smaller, smart people created dedicated compressors.


Which part is not how zip works?

> That's not how a video file structured

You're under the impression that uncompressed raw video does not contain repeated bit patterns??? How do you think it is structured?


please, show me an example with downloadable samples to recreate the experiment where zipping a video file decreases the file size.


Using https://archive.org/download/ElephantsDream/ed_hd.mp4

Convert to uncompressed YUV420P:

  $ ffmpeg -i ed_hd.mp4 elephant_dream_uncompressed.yuv
Compress with zip with default compression (I also tried -9 but it didn't make much difference):

  $ zip compressed_elephants_dream.zip elephant_dream_uncompressed.yuv
     adding: elephant_dream_uncompressed.yuv (deflated 64%)

  $ ls -la elephant_dream_uncompressed.yuv compressed_elephants_dream.zip 
  -rw-r--r-- 1 bawolff bawolff 1961679613 Mar 25 08:54 compressed_elephants_dream.zip
  -rw-r--r-- 1 bawolff bawolff 5422809600 Mar 25 08:46 elephant_dream_uncompressed.yuv
1-1961679613/5422809600 = .63826

-----

So, 63.8% compression ratio. This is terrible compared to any lossy video codec obviously, but it's hardly nothing.

I would expect an even better ratio if the source contained mostly solid colours, like a screencast.


sorry, i forgot about this thread, and absolutely did not think you'd respond. kudos.

yay, you proved a RAW video can be zipped. You're right, I'm wrong. I have to ask though, why? What in the world is this trying to achieve? This has to be one of the worst ideas for making video smaller and usable.


I was just responding to the hypothetical, i agree you would never want to do this.

The only case i could see is in high-end video editing or archiving where you need lossless compression, but even then there are lossless formats that provide better compression ratio (i assume), and more importantly better random access which would probably be important to that use case.


It does for extreme static videos. E.g. test patterns for monitor/TV calibration are commonly distributed inside zip files with substantial compression ratios, even though they are already in compressed (h264/h265) video formats


The title is a bit of an exaggeration... Maybe I'm missing something, but there's actually nothing in the article about the ffmpeg settings used, most importantly the encoding bitrate or quality. There's also no quantitative comparison between the old video and the new. It's relatively straightforward to measure the noise added during the encoding process.

That being said, H265 definitely does improve filesizes.


The main hurdle for adoption, aside from onerous patents, is waiting for hardware acceleration to become commonplace. What is the current state of h.265 hardware accel? Does it come with everything yet, like h.264 does?


Most Turing and above Nvidia cards have support for hardware accelerated decode of h.265 [0]. Modern Intel chips also support hardware accelerated decode (support started with Skylake [1]). Not sure about AMD.

[0] https://developer.nvidia.com/video-encode-and-decode-gpu-sup... [1] https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video#Hardwar...


Right, so that's 2 of the top 10 and 4 of the top 20 gpus per the steam hardware survey, representing 10% of steam users: https://store.steampowered.com/hwsurvey

But I don't think desktops are the problem anyway as opposed to laptops/tablets. Desktop cpus are perfectly capable of decoding h265 in real time without acceleration


Looks like most of the cards in the top 20 support H.265 4:2:0 (NVidia 10XX and up), just not 4:4:4 (NVidia 16XX and up).

I don't think the steam hardware survey is a good sample of the wider computer market though - the majority of people using youtube/netflix/etc on x86 devices are probably using Intel laptop chips with integrated graphics, not $500+ dedicated GPUs.

(Although presumably this is probably becoming less true over time, those low end users are moving more and more to mobile devices and smart TVs/chromecast/console, leaving workstation/gaming systems to be a larger percentage of the market)


Actually, Maxwell (9xx) already supports hardware-accelerated H.265 decoding, but on first-gen Maxwell (the higher-end GTX 970/980 at least) it's not pure hardware decoding and didn't work on Linux last I tried.


> But I don't think desktops are the problem anyway as opposed to laptops/tablets

On the Apple end of things, h265 hardware decode has been supported since the A9 (iPhone 6S, introduced 2015)

They also list hardware decode support on Macs as of Intel 6th gen CPUs (Skylake, introduced 2015).


I think basically all of Intel and NVidias current offerings have hardware 265 decode. But AMD GPUs still only do h.264 (which might be a patent thing, or might just be that their target market doesn't really care. I know I don't, my gaming PC is plenty powerful to just software decode H.265 anyway, although the power efficiency gains would be nice)

The mobile market I have no idea about.

EDIT: I was wrong about AMD - see replies.


> But AMD GPUs still only do h.264

Ryzen 4750U on a T14 reporting in:

    vainfo: VA-API version: 1.10 (libva 2.10.0)
    vainfo: Driver version: Mesa Gallium driver 20.3.4 for AMD RENOIR (DRM 3.40.0, 5.11.4-arch1-1, LLVM 11.1.0)
    vainfo: Supported profile and entrypoints
          VAProfileMPEG2Simple            : VAEntrypointVLD
          VAProfileMPEG2Main              : VAEntrypointVLD
          VAProfileVC1Simple              : VAEntrypointVLD
          VAProfileVC1Main                : VAEntrypointVLD
          VAProfileVC1Advanced            : VAEntrypointVLD
          VAProfileH264ConstrainedBaseline: VAEntrypointVLD
          VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
          VAProfileH264Main               : VAEntrypointVLD
          VAProfileH264Main               : VAEntrypointEncSlice
          VAProfileH264High               : VAEntrypointVLD
          VAProfileH264High               : VAEntrypointEncSlice
          VAProfileHEVCMain               : VAEntrypointVLD
          VAProfileHEVCMain               : VAEntrypointEncSlice
          VAProfileHEVCMain10             : VAEntrypointVLD
          VAProfileHEVCMain10             : VAEntrypointEncSlice
          VAProfileJPEGBaseline           : VAEntrypointVLD
          VAProfileVP9Profile0            : VAEntrypointVLD
          VAProfileVP9Profile2            : VAEntrypointVLD
          VAProfileNone                   : VAEntrypointVideoProc

mpv also reports "Using hardware decoding (vaapi)." when I try to play an HEVC file.


Ah it looks like I was mistaken, they've had h265 encoding and decoding support for a while. Not sure where my brain got this idea from, I even have an AMD gpu atm.

https://en.wikipedia.org/wiki/Unified_Video_Decoder

https://en.wikipedia.org/wiki/Video_Core_Next


How would you know if your hardware can or cannot decode h.265? Would it simply not play, or might it play but stutter, or play in some compromised way?


I have an iPad Mini 2 from 2013. It can play H265 with software decoding, and it doesn't stutter. The only drawback is that the machine gets hot.


It would be slow and your laptop would sound like it’s taking off


Almost every Intel CPU sold in the past 5 years has had an HEVC hardware decoder, and in the past 4 years if you look at HEVC Main 10.

Most smartphones have also had HEVC decode for the past 4 years.

Basically, the adoption hurdle has nothing to do with hardware. Only the two As in FAANG are supportive of the JVET or MPEG standards. The others prefer their "patent free" ( cough ) AV1 codec.


Netflix (the N in FAANG) uses device-appropriate codecs and formats, and defaults to AVC (H.264). Googling around shows they provide HEVC (H.265) streams to at least some devices.


Netflix does use HEVC, mostly because there isn't any other solution for high-quality 4K streaming in the Apple ecosystem. They use it out of absolute necessity. They have been a vocal supporter of AV1. And their ex-Head of Video Encoding, who now works at Facebook, has a distaste for MPEG video, although that is admittedly HEVC and VVC. Everyone seems to be (mildly) happy with AVC.


Netflix on a recent Android phone showed that the formats supported for hardware decoding are h264 and VP9, so I figured the phone doesn't have h265 hardware. But then I installed mpv from F-Droid and played an h265 video, and mpv said it's using hardware decoding. So for some reason, Netflix chose not to use h265.


Mostly static screen recordings are pretty much the ideal content to show off H.265. The screen is broken down into blocks, and ones that don't contain much motion or complexity use nearly no bandwidth. This image should demonstrate that:

https://postimg.cc/D8MZyzHd

Also, I’m not criticizing the author’s English (which is much better than my Dutch), but some of those mistakes were almost poetic:

> Some more Ducking made me aware that using '-tag:v hvc1' in FFmpeg creates files that QuickTime can eat.


> pretty much the ideal content to show off H.265

Same applies to H.264.

I think the main reason for OP’s observation is that codecs optimized for real-time streaming (including the hardware codecs inside GPUs) don’t have enough time (both CPU time and maximum allowed encoder latency) to achieve what ffmpeg does when asked to re-encode a video clip from the hard drive.
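As an illustration, with x264 this gap is roughly the difference between a latency-constrained encode and an offline one (a sketch; the file names and the exact preset/CRF choices are assumptions):

    # Low-latency, low-CPU settings, similar in spirit to a live screen recorder
    ffmpeg -i capture.mov -c:v libx264 -preset ultrafast -tune zerolatency rt.mp4

    # Offline re-encode with the same codec: far more CPU time per frame,
    # no latency constraint, much better size at the same quality
    ffmpeg -i capture.mov -c:v libx264 -preset veryslow -crf 20 offline.mp4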


Ingest would be a better choice


The submission title claims they are the same quality, but the article has a different title and doesn't make any claims about video quality.

Converting video from one lossy format to another with these settings will significantly degrade the quality. For a desktop recording that could be problematic: still frames could become unreadable.

Going from a lossless video file to both H.264 and H.265 will typically give H.265 files about 30% smaller at the same quality, but this comes with significant trade-offs in transcoding time. And there is some nuance to getting the best quality; generally there are more resources on how to do this for H.264. And H.265 hardware support isn't quite universal; many older mobile devices don't support it, for instance.
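A rough way to try this comparison yourself with ffmpeg (a sketch; the CRF offset between x264 and x265 is an assumption, a common rule of thumb rather than an exact quality equivalence):

    # The same CRF number does not mean the same quality across encoders;
    # x265's scale is roughly offset from x264's, so these aim for similar quality
    ffmpeg -i master.mkv -c:v libx264 -preset slow -crf 20 out_h264.mp4
    ffmpeg -i master.mkv -c:v libx265 -preset slow -crf 23 out_h265.mp4

Comparing the two output sizes (and eyeballing the frames) gives a fairer picture than comparing against the original recording.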


As of macOS Big Sur, you can Export videos from QuickTime (i.e. the screen recording) in H.265, and it's faster than converting it via ffmpeg as it can leverage the GPU via VideoToolbox.

...but you still have to record-and-save it in H.264 first, then reload it into QuickTime, and for some reason the H.265 quality isn't great. This is still the case even when using GPU VideoToolbox via ffmpeg/Handbrake, and I've never figured out why.

As an aside, for more serious recordings (and if you don't want to use paid software), I recommend using OBS instead, as it gives you a lot more recording flexibility that the native macOS screen recorder lacks.


This would be true even re-encoding to AVC. Recording, even hardware accelerated, is designed to use little compression and have good quality, so it can keep up with the stream without slowing down your PC. The resulting bitrates are necessarily very high. Re-encoding more efficiently, without the realtime requirements, will usually yield good results.


It sounds likely that the initial recording was a live screen recording, so macOS probably uses the lowest compression settings to be able to encode in real time while not monopolizing the CPU.

Maybe I'm wrong, but the comparison is meaningless without having ffmpeg re-encode the video with the same codec. Otherwise it's just apples to oranges.


Aside from all the comments mentioning that re-encoding in H.264 would probably yield similar results (albeit not as good as H.265), another issue is the statement that DivX is the same thing as H.264. They are related, but not the same: https://videoconverter.wondershare.com/convert-divx/divx-vs-...


> Some more Ducking made me aware that using '-tag:v hvc1' in FFmpeg creates files that QuickTime can eat.

Can you please share the full ffmpeg syntax used for converting the H.264 files to H.265?


I recently did this, and used this to guide me: https://unix.stackexchange.com/a/38380/86174


If you can spare the time, I'd also add a slower preset in there for better compression, e.g.

    -preset slower

    # You control the tradeoff between video encoding speed and compression 
    # efficiency with the -preset options. 
    # ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow. 
    # Default is medium.
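Combined with the command from the linked answer, that would look something like this (a sketch; -crf 28 is libx265's default and is only spelled out here for clarity):

    ffmpeg -i input.mov -c:v libx265 -preset slower -crf 28 -tag:v hvc1 output.mp4

Lowering the CRF value raises quality at the cost of size; raising it does the opposite.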


If your videos are already H.264, you probably should not convert them to H.265. You'll lose some quality to generation loss, and the difference in size is not huge.


You're responding in context of an article which says:

> "what if I convert my screen recordings from H.264 to H.265?" Well, two things happend: My files shrink to 6% (not a type)

Sounds like the difference can be huge.


Yeah so if he re-encoded his video in H.264 I bet he would have seen a similar level of compression.


The article doesn't make any claims about quality; you could re-encode the same video to QuickTime or MP4 with a similar final size, depending on what quality settings you use.


The author here. :-)

Sorry for my lame English. I really should not write blogs after midnight... Anyways, a lot of you are asking for what command-line options I used for FFMPEG.

So here it is:

    ffmpeg -i input1.mov -c:v libx265 -tag:v hvc1 output1.mp4
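To check that the result actually carries the hvc1 tag QuickTime expects, ffprobe can report the codec and its tag (a sketch; the output file name matches the command above):

    ffprobe -v error -select_streams v:0 \
      -show_entries stream=codec_name,codec_tag_string \
      -of default=noprint_wrappers=1 output1.mp4
    # For an HEVC file tagged this way it should report
    # codec_name=hevc and codec_tag_string=hvc1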


The writing here is just bizarre.

- The use of the term "Ducked" instead of "searched" or "looked up", which is clearly forced and I guess trying to promote DuckDuckGo.

- "Brew.SH": the program is called Homebrew, and it is odd to capitalize the website like that. I have never seen someone say Duck.COM.

- FFMPEG should be FFmpeg.

- photos.app should be Photos.app, like the Terminal.app he mentioned earlier.

- macOS is capitalized a different way literally every time it's used.

For any one of these I wouldn't have thought twice, people make up new capitalization for names all the time. It seems to give the writing a condescending tone (helped by comments like "command line for non-Appleonians").


Content aside, the title alone makes no sense.


Yes. That should have been H.264 to H.265 comparison. Sorry.


Seems like a typo. It should be, "Comparing H.265 (HEVC) and H26*4* video file size".


There's a missing dot, too. "H.264" not "H264".

With the current title, I thought maybe there were 2 formats that differed in name only by a dot, and that this post was about clearing up that confusion between the nearly identical names.


Without reading the details, to see a compression improvement of that magnitude I can only suspect that it is a "middle out" compression algorithm...


>My files shrink to 6% (not a type) of their original size

typoception


I don’t really understand the technical details of compression schemes, but my piece of anecdata is this: in my experience installing/maintaining CCTV systems, using H.265 over H.264 approximately halves the used bandwidth and storage for the same video quality. This is much more pronounced in static/less active scenes, but is still a measurable benefit in highly active scenes.


Assuming you meant H.265 over H.264 but could someone confirm?


Yup sorry, typo


Or just use VP9 (or the upcoming AV1) which has fairly wide support and doesn't have any patent issues.


Btw, the way to get it onto the clipboard is not exactly cmd-shift-ctrl-3. It's cmd-shift-3 that takes the screenshot, and if ctrl is held at that point, it goes to your clipboard instead of a file.

Knowing this makes it less annoying, as you don't have to press a 4-key combination.


Compression depends on a variety of factors: the content, the type of frames being used (I-P, I-3/7B-P, etc.), and so on. The rate-distortion curve (bitrate vs. distortion) also has to be considered. PSNR of the encoded content is one metric. Clickbait without much info.
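For reference, ffmpeg can compute PSNR against the source directly via its psnr filter (a sketch; the file names are assumptions):

    # Compare the re-encoded file against the original;
    # the average PSNR is printed to stderr when encoding finishes
    ffmpeg -i output1.mp4 -i input1.mov -lavfi psnr -f null -

The first input is the distorted file and the second the reference, so a size comparison like the article's could at least be paired with a quality number.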


Wat. This is totally offtopic, but in the linked screenshot, https://1.bp.blogspot.com/-4qInHSs_urA/YFvYMbVGmgI/AAAAAAAA4..., the border between the titlebar and the window content on the left Show Info window is, um, so misaligned it's eating into the text.

199x called, it wants its human interface guidelines back


I hoped it would be a comparison to H266 https://en.wikipedia.org/wiki/Versatile_Video_Coding

I feel like it's the year of 4K. Greater-than-24-inch 4K monitors and 4K TVs are getting somewhat standard/affordable.

But looking at EZTV, only around 18% of torrents are released as H.265, which is disappointing.


He doesn't appear to show his ffmpeg invocation, let alone its report, which is a shame.

I wonder which hardware accelerates H.265 encoding on Linux.
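A couple of ways to check (a sketch; assumes an ffmpeg build with hardware support compiled in and, for VA-API, that the GPU drivers and vainfo are installed):

    # List the HEVC encoders ffmpeg knows about; hardware ones such as
    # hevc_vaapi, hevc_nvenc, or hevc_qsv appear if support was compiled in
    ffmpeg -hide_banner -encoders | grep -i hevc

    # On Intel/AMD GPUs, vainfo lists the VA-API entry points;
    # an HEVC profile with an EncSlice entry point means hardware encoding
    vainfo | grep -i hevc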


Apparently there is hardware support for H.265 in the latest version of macOS. I'm very eager to have it working with the classic QuickLook shortcut from Finder, simply hitting the space bar with a video file highlighted.


Does anyone know what is the SOTA approach to lossless video encoding?


It's such a shame that browsers don't support H.265 (maybe Edge does, though?). As any rational user, I have to use the UWP Netflix app to get the best bitrate because of this artificial limitation.

I paid for H.265 on the Windows Store for 1 symbolic dollar, and I'm sure you can find a license for cheaper; people should respect the media they consume. Not offering H.265 to regular video consumers is nonsensical. And no, VP9 is inferior, and AV1 too, in addition to the absence of hardware acceleration for AV1. H.266 was released last summer; I can't wait for it to become widespread and push the boundaries of the user experience.



