I have had small issues with btrfs over the years, but nothing like the data-loss issues people reported a few years back that the devs supposedly fixed. Its scrubbing mechanism doesn't work great, and the failure modes on RAID are fucking goofy. I wouldn't trust it for RAID at all, and they've never really fixed the bugs that have been exposed over the years.
Frankly, it does everything worse than ZFS except being in the kernel. DKMS isn't that hard and I've never had a ZFS build hook fail. The only thing I use btrfs for is cattle computers that I can nuke and pave at will, and most of those could use ext4 just fine, but btrfs is what Fedora uses by default and I can't be arsed to partition manually.
I'd suspect the controller or cable first.
You say that as if it's a good thing. If your HDD is "literally dying", you want the filesystem to fail safe to make you (and applications) aware and not continue as if nothing happened. extfs doesn't fail here because it cannot even detect that something is wrong.
btrfs has its own share of bugs but, in theory, this is actually a feature.
Not any issue that you know of. For all extfs (and, by extension, you) knows, the disk/cable/controller/whatever could have mangled your most precious files and it would be none the wiser; happily passing mangled data to applications.
You have backups of course (right?), so you might say that's not an issue, but if the filesystem's integrity is compromised, that can permeate to your backups, because the backup tool reading those files is none the wiser too; it relies on the filesystem to return the correct data. If you don't manually verify each and every file on a higher level (e.g. manual inspection or hashing) and you prune old backups, this has potential for actual data loss.
If your hardware isn't handling the storage of data as it should, you want to know.
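What's being described here can be mimicked in userspace: store a checksum at write time, verify before trusting a read. A toy stand-in (sha256sum plays the role of btrfs's per-block checksums; the filenames are made up):

```shell
# Store a checksum at write time; btrfs does this per block in its metadata.
printf 'precious data' > blob
sha256sum blob > blob.sum

# Simulate silent bit rot: flip one byte in place.
printf 'X' | dd of=blob bs=1 seek=3 count=1 conv=notrunc 2>/dev/null

# Verification catches it; without the stored checksum (the extfs
# situation), the mangled bytes would be served back without complaint.
sha256sum -c blob.sum || echo "corruption detected"
```

Same principle, just at the filesystem layer and on every read instead of on demand.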
While the behaviour upon encountering an issue is in theory correct, btrfs is quite fragile. Hardware issues shouldn't happen but when they happen, you're quite doomed because btrfs doesn't have the option to continue despite the integrity of a part of it being compromised.
btrfs-restore disables btrfs' integrity, emulating extfs's failure mode, but it's only for extracting files from the raw disks, not for continuing to use it as a filesystem.

I don't know enough about btrfs to know whether this is feasible, but perhaps it could be made a bit more log-structured such that old data is overwritten first, which would allow you to simply roll back the filesystem state to a wide range of previous generations, of which some are hopefully not corrupted. You'd then discard the newer generations, which would allow you to keep using the filesystem.
You'd risk losing data that was written since that generation, of course, but that's often a much lesser evil. This isn't applicable to all kinds of corruption, because older generations can become corrupted retroactively, but I suspect it would cover a good amount of cases.
as i said, maybe that's the ideal for industrial/business applications (e.g. servers, remote storage) where the cost of replacing disks due to failure is already accounted for and the company has a process ready and pristine data integrity is of utmost importance, but for home use, reliability of the hardware you do have right now is more important than perfect data integrity, because i want to be as confident as possible that my system is going to boot up next time i turn it on. in my experience, i've never had any major data loss in ext4 due to hardware malfunction. also, most files on a filesystem are replaceable anyway (especially the system files), so it makes even less sense to install your system on a btrfs drive from that perspective.
what you're telling me is basically "btrfs should never be advised for home use"
I mean, as someone who hasn't encountered these same issues as you, I found btrfs really useful for home use. The snapshotting functionality is what gives me a safe feeling that I'll be able to boot my system. On ext4, any OS update could break your system and you'd have to resort to backups or a reinstall to fix it.
But yeah, it's quite possible that my hard drives were never old/bad enough that I ran into major issues...
honestly, i do get the appeal of btrfs, which is why i wanted to try it out one more time. but i feel i can't trust it if it is really that fault intolerant. ext4 might not have as many features as btrfs, but it is more lenient and more predictable
(also, recovering from update failures should be the job of the package system imo)
I think it cannot be expected from the package manager, because it cannot revert database and config structure updates that were automatically done by the programs themselves. If you just restore the old versions of packages, some of them will refuse to start up, crash, or lose data.
i’m not sure i understand quite what you’re suggesting, but BTRFS is a copy-on-write filesystem
so when you write a block, you’re not writing over the old data: you’re writing to empty space, and then BTRFS is marking the old space as unused - or in the case of snapshots, marking it to be kept as old data
I am well aware of how CoW works. What I wrote does not stand in conflict with it.
Perhaps I wasn't clear enough in what I said though:
Each metadata operation ("commit" I think it's called) has a generation number; it first builds this generation (efficiently in a non-damaging way via CoW) and then atomically switches to it. The next generation is built with an incremented generation number and atomically switched again.
That's my understanding of how btrfs generally operates.
When things go awry, some sector that holds some of the newest generation may be corrupt but it might be that a relatively recent generation does not contain this data and is therefore unaffected.
What I'm suggesting is that you should be able to roll back to such a generation at the cost of the changes which happened in between in order to restore a usable filesystem. For this to be feasible, btrfs would need to take greater care not to overwrite recent generation data though which is what I meant by making it "more log-structured".
I don't know whether any of this is realistically doable though; my knowledge of btrfs isn't enough to ascertain this.
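A toy model of that rollback idea, with directories standing in for metadata generations and a symlink for the superblock pointer (all names invented for illustration):

```shell
# Two committed generations; the "superblock" symlink references the newest.
mkdir -p gen41 gen42
echo "stable tree" > gen41/root
echo "newest tree" > gen42/root
ln -sfn gen42 super

# A sector holding generation 42 gets corrupted on disk.
: > gen42/root

# Roll back: repoint the superblock at the older, intact generation,
# discarding whatever changed in between.
ln -sfn gen41 super
cat super/root   # prints "stable tree"
```

The hard part, as noted, is that btrfs would have to guarantee gen41's blocks weren't already reused for newer writes; that's the "more log-structured" requirement.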
right! okay, i believe that’s theoretically possible, but the tools don’t exist - which is the constant problem with btrfs
… and i could be completely wrong too - this is getting to the limits of my knowledge
I realize this is a rant but you coulda included hardware details.
I'm gonna contrast your experience with about 300 or so installs I did in the last couple of years, all on btrfs, 90% fedora, 9% ubuntu and the rest debian and mint and other stragglers, nothing but the cheapest and trashiest SSDs money can buy, the users are predominantly linux illiterate. I also run all my stuff (5 workstations and laptops) exclusively on btrfs and have so for 5+ years. not one of those manifested anything close to what you're describing.
so I hope the people that get your recommendations also take into consideration your sample size.
I run btrfs on every hard drive that my Linux boxes use and there's the occasional hiccup but I've never run into anything "unrecoverable."
I will say that, compared to extfs, where the files will just eat shit if there's a write corruption, btrfs tries to baby the data, so I think there appear to be more "filesystem" issues.
Sad to hear. I don’t know if it’s luck or something else.
I’ve been running Debian on btrfs on my laptop for 3 months without issue; I still use ext4 on my desktop, as I just went with defaults when I installed the operating system.
Typically when there are "can't mount" issues with btrfs it's cause the write log got corrupted, and memory errors are usually the cause.
BTRFS needs a clean write log to guarantee the state of the blocks to put the filesystem overlay on top of, so if it's corrupted btrfs usually chooses to not mount until you do some manual remediations.
If the data verification stuff seems more of a pain in the ass than it's worth you can turn most of those features off with mount options.
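For reference, the knobs being alluded to are mount options documented in btrfs(5); an illustrative fstab entry (the UUID is a placeholder):

```shell
# /etc/fstab sketch: relax btrfs's data-integrity features.
# nodatacow implies nodatasum, and both apply only to newly created files.
UUID=xxxxxxxx-xxxx  /data  btrfs  defaults,nodatacow  0 0
```

Note this only silences data checksumming; metadata is still checksummed and can still refuse to mount when damaged.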
Not really. Even TrueNAS Core (ZFS) highly recommends ECC memory to mitigate this possibility. After reading more about filesystems in general, and when money allowed, I took this advice as gospel when upgrading my server from junk I found lying around to a proper Supermicro ATX server mobo.
The difference I think is that BTRFS is more vulnerable to becoming unmountable, whereas other filesystems have a better chance of still being mountable but containing missing or corrupted data. The latter is usually preferable.
For desktop use some people don't recommend ZFS as if the right memory corruption conditions are met, it can eat your data as well. It's why Linus Torvalds goes on a rant every now and then about how bullshit it is that Intel normalized paywalling ECC memory to servers only.
I disagree and think the benefits of ZFS on a desktop without ECC outweigh a rare possibility that can be mitigated with backups.
It's the other way around: The memory failure causes the corruption.
Btrfs is merely able to detect it while i.e. extfs is not.
Oh, I mirror my root drives. So:
https://unix.stackexchange.com/questions/340947/does-btrfs-guarantee-data-consistency-on-power-outages#520063
Does BTRFS guarantee data consistency on power outages?
It only works if the hardware doesn't lie about write barriers. If it says it's written some sectors, btrfs assumes that reading any of those sectors will return the written data rather than the data that was there before. What's important here isn't that the data will forever stay intact but ordering. Once a metadata generation has been written to disk, btrfs waits on the write barrier and only updates the superblock (the final metadata "root") afterwards.
If the system loses power while the metadata generation is being written, all is well because the superblock still points at the old generation as the write barrier hasn't passed yet. On the next boot, btrfs will simply continue with the previous generation referenced in the superblock which is fully committed.
If the hardware lied about the write barrier before the superblock update though (i.e. for performance reasons) and has only written e.g. half of the sectors containing the metadata generation but did write the superblock, that would be an inconsistent state which btrfs cannot trivially recover from.
If that promise is broken, there's nothing btrfs (or ZFS for that matter) can do. Software cannot reliably protect against this failure mode.
You could mitigate it by waiting some amount of time which would reduce (but not eliminate) the risk of the data before the barrier not being written yet but that would also make every commit take that much longer which would kill performance.
It can reliably protect against power loss (bugs not withstanding) but only if the hardware doesn't lie about some basic guarantees.
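That commit ordering can be mimicked in userspace (file names invented; coreutils' `sync FILE` plays the part of the write barrier):

```shell
# Commit sequence: write the new generation first...
echo "generation 43 tree" > gen43

# ...barrier: do not proceed until it is durable...
sync gen43

# ...and only then point the "superblock" at it. If power dies before
# this line, the old superblock still references a fully written tree.
echo "root=gen43" > superblock
```

A drive that acknowledges the barrier while gen43 is still only in its cache breaks exactly this guarantee, and no software layering on top can fix that.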
[…] hdparm […] until I was able to replace that drive.

Who doesn't? Even if rarely, it just happens.
being able to revert a failed upgrade by restoring a snapshot is not a power user need but a very basic feature for everyday users who do not want to debug every little problem that can go wrong, but just want to use their computer.
ext4 does not allow that.
by consuming much more space. but you're right, I did not think about it
I agree, but these are not really backups, but snapshots, which are stored more efficiently, without duplicating data. of course it does not replace an off site backup, but I think it has its use cases.
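For reference, the snapshot workflow being discussed looks like this (paths illustrative; assumes the root is a btrfs subvolume):

```shell
# Instant, space-sharing, read-only snapshot before an update.
btrfs subvolume snapshot -r / /.snapshots/pre-update

# If the update goes sideways, make a writable copy to boot from...
btrfs subvolume snapshot /.snapshots/pre-update /rollback-root

# ...and delete snapshots you no longer need to reclaim space.
btrfs subvolume delete /.snapshots/pre-update
```

Because snapshots share unchanged blocks, the extra space consumed is only the data that diverged since the snapshot was taken.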
I have been running BTRFS on multiple PCs and laptops for about 8-10 years, and I've had 2 incidents:
1. Cheap SSD: BTRFS reported errors; half a year later the SSD failed and never worked again.
2. Unstable RAM: BTRFS reported errors; I did a memtest and found the RAM was unstable.
I have been using BTRFS RAID0 for about 6 years. Even there, I've had 0 issues.
In all those years BTRFS snapshotting has saved me countless hours when I accidentally misconfigured a program or did an accidental rm -r ~/xyz.
For me the real risk in BTRFS comes from snapper, which takes snapshots even when the disk is almost full. This has resulted in multiple systems not booting because there was no space left. That's why I prefer Timeshift for anything but my main PC.
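That runaway-snapshot behaviour can be bounded in snapper's config; illustrative values for keys documented in snapper-configs(5):

```shell
# /etc/snapper/configs/root (excerpt, values illustrative)
SPACE_LIMIT="0.3"   # cleanup keeps snapshot usage under 30% of the filesystem
FREE_LIMIT="0.2"    # cleanup tries to keep 20% of the filesystem free
NUMBER_LIMIT="10"   # keep at most 10 "number" cleanup-algorithm snapshots
```

The limits are only enforced by the cleanup runs, though, so a burst of snapshots between runs can still fill a nearly full disk.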
My two cents: the only time I had an issue with Btrfs, it refused to mount without using a FS repair tool (and was fine afterwards, and I knew which files needed to be checked for possible corruption). When I had an issue with ext4, I didn't know about it until I tried to access an old file and it was 0 bytes - a completely silent corruption I found out probably months after it actually happened.
Both filesystems failed, but one at least notified me about it, while the second just "pretended" everything was fine while it ate my data.