Ouch. That's my only 2TB drive, in a laptop that's been running a rolling-release GNU/Linux distro with heavy swap usage, encryption, and plenty of overnight compilations for a bit more than a year:
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 36 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 1%
Data Units Read: 56 263 143 [28,8 TB]
Data Units Written: 36 077 380 [18,4 TB]
Host Read Commands: 1 252 403 456
Host Write Commands: 1 018 672 820
Controller Busy Time: 15 360
Power Cycles: 234
Power On Hours: 10 255
Unsafe Shutdowns: 47
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
If we're more reasonable and say it's running for 12 hours out of the day, then that works out to a continuous 50 MB/s of writes.
For comparison, my daily Linux laptop (XPS 13, 512 GB Samsung NVMe SSD) has a total of 3.7 TB of writes over 3 years. It's my work dev and home laptop and it's in constant use (although no video editing):
3.7 TB * 2**20 MB/TB / (1095 days * 24 * 60 * 60 s/day) ≈ 0.041 MB/s
There are three orders of magnitude of difference there. I can only think of three explanations: 1. the SMART reporting is wrong, 2. macOS or the M1 SSD controller has serious write amplification issues, or 3. you are actually doing something that does need serious write throughput, like lots of video editing (your stats aren't impossible, after all).
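The back-of-the-envelope comparison can be checked in a few lines of Python. The 12 h/day duty cycle and a ~75-day age for the 155 TB machine are assumptions here; the original post doesn't state the exact period:

```python
# Assumed: the 155 TB M1 drive is ~75 days old and active 12 h/day.
m1_rate_mb_s = 155 * 2**20 / (75 * 12 * 3600)
# Stated: 3.7 TB written over 3 years (1095 days) of constant use.
xps_rate_mb_s = 3.7 * 2**20 / (1095 * 24 * 3600)

print(round(m1_rate_mb_s, 1))               # ~50 MB/s
print(round(xps_rate_mb_s, 3))              # ~0.041 MB/s
print(round(m1_rate_mb_s / xps_rate_mb_s))  # ~1200x, i.e. three orders of magnitude
```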
There's an option 4:
you are running out of RAM and macOS is doing a lot of swapping.
These SSDs can probably write on the order of 3000 MB/s = 0.003 TB/s, so you could end up with 155 TB of total writes after just 155 / 0.003 / 3600 ≈ 14 hours of a RAM-heavy workload.
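A quick sanity check of that figure (the 3000 MB/s sustained write speed is an assumption about the drive, not a measurement):

```python
total_tb = 155           # reported lifetime writes
write_tb_per_s = 0.003   # assumed ~3000 MB/s sustained sequential write speed
hours_to_write = total_tb / write_tb_per_s / 3600
print(round(hours_to_write, 1))  # ~14.4 hours of writing at full speed
```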
It would have to be a legitimately RAM-heavy workload, though. Running out of RAM by leaving applications (or browser tabs) open might result in swapping them out to disk until they're used again, but the level of swapping required to generate this much write volume would essentially be using the disk as RAM: the running program doesn't have enough RAM, either because it needs more than is physically available or because of buggy memory management.
Right, I was thinking of things like large simulations where you need to keep a large dataset in RAM and update every datapoint at every iteration.
Or maybe multiple VMs running simultaneously doing CI jobs could also do it.
I don't think simple memory errors like leaks would cause it, since a leak would just end up filling the disk once; it wouldn't churn through writes quite this fast.
That reminds me: when SSDs first came out we'd have to set noatime on *nix systems, including Macs, to prevent the OS from writing a file's access time every time the file was read, since otherwise it caused significant write amplification. Modern SSD controllers are clever enough to make this unnecessary these days, but it could be something similar, either in the SSD controller itself or in a file system behavior spaced just right in time to turn 1 byte of requested writes into 4096 bytes of effective NAND writes. In the worst case, that's only 24 GB of requested writes turning into ~100 TB.
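The worst-case amplification arithmetic, assuming a 4 KiB NAND page (the actual page size of Apple's NAND is an assumption here):

```python
page_bytes = 4096        # assumed NAND page size: each 1-byte write burns a full page
requested_gb = 24        # GB of pathological 1-byte host writes
nand_tb = requested_gb * 1e9 * page_bytes / 1e12
print(round(nand_tb, 1))  # ~98 TB of effective NAND writes, i.e. roughly 100 TB
```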
Yeah, that's a lot. I've got a 5-year-old 512 GB Samsung PM951 that's been a system drive in 3 Windows systems with 16-64 GB of RAM, and over the course of 6200 power cycles and 16500 power-on hours it has only 35 TB written and 24 TB read.
How do you even get an unsafe shutdown on a notebook with a battery, unless you yank the battery out, or a buggy OS won't shut down properly/preemptively on low power?
I've got a number of unsafe shutdowns listed on my M1 MBP, too. Most likely due to the kernel panics I get when rebooting while connected to my dock in clamshell mode.
2.5 month old, 1 TB, M1 MBA: